
Thread: Sandboxing Desktop Apps With LXD

  1. #11

    Running a Full, Separate Distro

    Let's say we want to run another full, separate 'buntu distro within a container. Assume that our host is running vanilla Ubuntu 22.04 and we want to create a Xubuntu container based on 23.04:
    Code:
    lxc launch ubuntu:23.04 --profile default --profile x11 xubuntu
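    To get a shell inside the new container, use the same lxc exec pattern we'll use later for launching apps (the ubuntu user is the stock account in Canonical's cloud images):
    Code:
    lxc exec xubuntu -- sudo --login --user ubuntu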
    Do our update and housekeeping:
    Code:
    ubuntu@xubuntu:~$ sudo apt update && sudo apt full-upgrade && sudo apt autoremove && sudo apt clean
    Now prepare for a long download of the full Xubuntu desktop:
    Code:
    ubuntu@xubuntu:~$ sudo apt install xubuntu-desktop
    Reboot:
    Code:
    ubuntu@xubuntu:~$ sudo reboot
    It's almost too easy. After these simple steps, we have a working Xubuntu image. To bring up the Xubuntu desktop, press <Alt> + <F2> and enter:
    Code:
    lxc exec xubuntu -- sudo --login --user ubuntu xfce4-panel
    But running two different desktop environments concurrently is a disorienting experience: panels overlay each other and become unreadable and unreachable. Let's clean things up.

    The lower Xubuntu XFCE panel should be at the bottom of your screen. Let's customize things so that only the bottom panel is used (it's really the only one we need): Right click on it → Panel → Panel Preferences. If all panels are hidden such that the mouse can't reach them, then from a shell do:
    Code:
    ubuntu@xubuntu:~$ xfce4-settings-manager
    …and select Panels. On my installation, the bottom panel is "Panel 2".

    1. First, under the "Display" tab, let's permit this panel to float by turning off Lock panel.
    2. Set Automatically hide the panel to "Never"
    3. Set Mode to suit your taste.
    4. Set Row size, Number of rows and Length to suit your taste.
    5. In the "Items" tab, click the "+" button and add either Applications Menu or Whisker Menu to the panel.

    Panel 2 is now configured with everything we really need and can be moved anywhere on our desktop. Clicking on its Applications or Whisker menu allows us to choose any of the apps that come preinstalled with the Xubuntu desktop.

    You may notice that Xubuntu's top panel is a faint ghostly presence hidden behind the Gnome panel and none of its features are reachable with the mouse. To deal with this:

    • In Panel Preferences, switch to "Panel 1".
    • We can either play around with the Display properties to put it where we want, or delete it altogether with the "-" button. I decided to delete mine because it was redundant.

    When we launch the XFCE panel, all that we are really doing is invoking a means to get at Xubuntu's apps. We're keeping our host Gnome environment for our desktop because we only want to deal with one DE.

    It's pretty cool to effortlessly bring up newer versions of apps. Try launching the 23.04 version of GIMP and compare it to Jammy's. You can have both versions running at the same time because one belongs to the container and the other to the host.
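    As a quick sketch (GIMP comes preinstalled with the Xubuntu desktop we just set up), the container's copy can also be launched straight from the host:
    Code:
    lxc exec xubuntu -- sudo --login --user ubuntu gimp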

    Note
    Some apps won't work properly when launched from Xubuntu's panel menu (e.g. gnome-software). This may be due to permissions, ownerships, paths or some other obscure parameters. In such cases, the app may behave well if launched from a proper shell. Also remember that the container is not a real computer: system apps like power settings and display utilities will crash because the container is tightly jailed and cannot touch the host's resources, which is in fact what we desire. We should not remove these apps because they may be integrated into Xubuntu, but if the presence of their useless menu launchers is distracting, eliminating them is left as an exercise for the reader.
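    As a starting point for that exercise, one approach (a sketch only; the .desktop filename below is a placeholder, so check /usr/share/applications inside the container for the real name) is to override the offending launcher with a copy that sets NoDisplay=true:
    Code:
    ubuntu@xubuntu:~$ mkdir -p ~/.local/share/applications
    ubuntu@xubuntu:~$ cp /usr/share/applications/some-system-app.desktop ~/.local/share/applications/
    # appending works when [Desktop Entry] is the only section; otherwise edit the file and add the key under [Desktop Entry]
    ubuntu@xubuntu:~$ echo "NoDisplay=true" >> ~/.local/share/applications/some-system-app.desktop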
    Note
    We cannot log out of our Xubuntu instance the normal way, using the Logout menu entry in the Panel. Shutdown, Reboot and Logout are system calls and this is one of the ways in which a container is not like a VM. We have already established that a containerized instance is prohibited from accessing system resources. If it were allowed to, these system calls would shut down our host, not our container.
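    If we do need to stop or restart the container itself, we do it from the host with the standard LXD commands, which act only on the container:
    Code:
    lxc stop xubuntu       # clean shutdown of the container
    lxc restart xubuntu    # the container equivalent of a reboot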

    To close the XFCE panel, right click on it, then → Panel → Logout. We may get a message that closing the panel will also kill X, but it is safe to proceed. The message is referring to the container's X server, not the host's.
    I hope this tutorial is sufficient to get us on the way to compartmentalizing many of our desktop apps. Used in conjunction with VMs, it will hopefully add one more tool to our collection of security measures in our continuing battle with the bad guys.

    Good Luck and Happy Ubuntu-ing!

  2. #12

    Problems and Troubleshooting

    Pulseaudio Alternative:

    I've recently started playing with a beta Focal install.

    Simos's proxy stack doesn't appear to work in Focal. At least, I can't get pulseaudio to map. Therefore, it was necessary to go back to Simos's earlier stack, which maps X and audio as disk devices. For reference, his stack is reproduced below:
    Code:
    config:
      environment.DISPLAY: :0
      raw.idmap: "both 1000 1000"
      user.user-data: |
        #cloud-config
        runcmd:
          - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
          - 'echo export PULSE_SERVER=unix:/tmp/.pulse-native | tee --append /home/ubuntu/.profile'
        packages:
          - x11-apps
          - mesa-utils
          - pulseaudio
    description: GUI LXD profile
    devices:
      PASocket:
        path: /tmp/.pulse-native
        source: /run/user/1000/pulse/native
        type: disk
      X0:
        path: /tmp/.X11-unix/X0
        source: /tmp/.X11-unix/X0
        type: disk
      mygpu:
        type: gpu
    name: gui
    used_by:
    Note the main difference—type: disk instead of type: proxy.
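    To double-check which devices and types a profile actually defines, it can be dumped from the host (gui is the profile name used in the stack above):
    Code:
    duckhook@Zeus:~$ lxc profile show gui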

    Therefore, a better strategy may be as follows:

    1. Since X as a proxy appears to work without issues, create a pure x.profile:
      Code:
      config:
        environment.DISPLAY: :0
        user.user-data: |
          #cloud-config
          packages:
            - x11-apps
            - mesa-utils
      description: GUI LXD profile
      devices:
        X0:
          bind: container
          connect: unix:@/tmp/.X11-unix/X0
          listen: unix:@/tmp/.X11-unix/X0
          security.gid: "1000"
          security.uid: "1000"
          type: proxy
        mygpu:
          type: gpu
      name: x
      used_by: []
    2. Create the x profile:
      Code:
      duckhook@Zeus:~$ lxc profile create x
      duckhook@Zeus:~$ cat /bin/x.profile | lxc profile edit x
    3. Then create two different pulseaudio profiles, one of the proxy type (pa_proxy.profile) and the other as the disk type (pa_disk.profile):
      Code:
      config:
        environment.PULSE_SERVER: unix:/home/ubuntu/.pulse-native
        user.user-data: |
          #cloud-config
          runcmd:
            - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
          packages:
            - pulseaudio
      description: pulseaudio proxy LXD profile
      devices:
        PASocket1:
          bind: container
          connect: unix:/run/user/1000/pulse/native
          listen: unix:/home/ubuntu/.pulse-native
          security.gid: "1000"
          security.uid: "1000"
          uid: "1000"
          gid: "1000"
          mode: "0777"
          type: proxy
      name: pa_proxy
      used_by: []
      Code:
      config:
        raw.idmap: "both 1000 1000"
        user.user-data: |
          #cloud-config
          runcmd:
            - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
            - 'echo export PULSE_SERVER=unix:/tmp/.pulse-native | tee --append /home/ubuntu/.profile'
          packages:
            - pulseaudio
      description: pulseaudio disk LXD profile
      devices:
        PASocket:
          path: /tmp/.pulse-native
          source: /run/user/1000/pulse/native
          type: disk
      name: pa_disk
      used_by:
    4. Create both pulseaudio profiles and load their definitions:
      Code:
      duckhook@Zeus:~$ lxc profile create pa_proxy
      duckhook@Zeus:~$ cat /bin/pa_proxy.profile | lxc profile edit pa_proxy
      duckhook@Zeus:~$ lxc profile create pa_disk
      duckhook@Zeus:~$ cat /bin/pa_disk.profile | lxc profile edit pa_disk
    5. Now we can just mix and match:
      Code:
      duckhook@Zeus:~$ lxc launch ubuntu:19.10 --profile default --profile x --profile pa_proxy test
      …or…
      Code:
      duckhook@Zeus:~$ lxc launch ubuntu:19.10 --profile default --profile x --profile pa_disk test
      You may have to experiment to find the pulseaudio stack that works for you.
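      Once a test container is up, a quick sanity check (a sketch; pactl comes from pulseaudio-utils, so install that package in the container if it is missing) confirms that the profile was applied and that audio reaches the host:
      Code:
      duckhook@Zeus:~$ lxc profile show pa_proxy
      duckhook@Zeus:~$ lxc exec test -- sudo --login --user ubuntu pactl info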



    White Screen of Death

    On some video cards (example: old nVidia GPUs), Chromium and browsers based on its Blink rendering engine will display only a white screen of death. The browser launches and the screen is indeed populated with fields and text, but nothing can be seen. Apparently, Blink makes use of X system calls on these cards that LXD does not pass through to the host. The only workaround that I've found is to force Chromium into a specific software rendering mode. It is not enough to disable the GPU; one must specifically invoke the swiftshader rendering engine:
    Code:
    chromium-browser --use-gl=swiftshader
    Firefox and derivatives don't appear to suffer from this problem.
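    To avoid typing the flag every time, one option (a sketch for the container's ubuntu user; a panel launcher would instead need its .desktop Exec line edited) is a shell alias:
    Code:
    ubuntu@test:~$ echo "alias chromium-browser='chromium-browser --use-gl=swiftshader'" >> ~/.bash_aliases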

  3. #13

    Tips and Tricks

    LXD on a ZFS-based Install

    Starting with Eoan and then with Focal, it is possible to install the host OS to a ZFS file system. But the ZFS file system takes up the whole disk. Therefore, running LXD on such a host involves a slightly different strategy.

    In a default Focal install, a small pool called bpool is created for the boot directory and all files needed to boot. The rest of the disk is dedicated to OS and user data. Let's see what a typical ZFS install looks like:
    Code:
    duckhook@Zeus:~$ sudo zfs list
    NAME                                               USED  AVAIL     REFER  MOUNTPOINT
    bpool                                              170M  1.58G       96K  /boot
    bpool/BOOT                                         169M  1.58G       96K  none
    bpool/BOOT/ubuntu_mdulm6                           169M  1.58G     91.7M  /boot
    rpool                                             3.70G   207G       96K  /
    rpool/ROOT                                        3.69G   207G       96K  none
    rpool/ROOT/ubuntu_mdulm6                          3.69G   207G     2.03G  /
    rpool/ROOT/ubuntu_mdulm6/srv                        96K   207G       96K  /srv
    rpool/ROOT/ubuntu_mdulm6/usr                       304K   207G       96K  /usr
    rpool/ROOT/ubuntu_mdulm6/usr/local                 208K   207G      144K  /usr/local
    rpool/ROOT/ubuntu_mdulm6/var                       610M   207G       96K  /var
    rpool/ROOT/ubuntu_mdulm6/var/games                  96K   207G       96K  /var/games
    rpool/ROOT/ubuntu_mdulm6/var/lib                   604M   207G      471M  /var/lib
    rpool/ROOT/ubuntu_mdulm6/var/lib/AccountServices    96K   207G       96K  /var/lib/AccountServices
    rpool/ROOT/ubuntu_mdulm6/var/lib/NetworkManager    220K   207G      132K  /var/lib/NetworkManager
    rpool/ROOT/ubuntu_mdulm6/var/lib/apt              89.9M   207G     34.2M  /var/lib/apt
    rpool/ROOT/ubuntu_mdulm6/var/lib/dpkg             39.5M   207G     31.3M  /var/lib/dpkg
    rpool/ROOT/ubuntu_mdulm6/var/log                  4.71M   207G     3.46M  /var/log
    rpool/ROOT/ubuntu_mdulm6/var/mail                   96K   207G       96K  /var/mail
    rpool/ROOT/ubuntu_mdulm6/var/snap                  112K   207G      112K  /var/snap
    rpool/ROOT/ubuntu_mdulm6/var/spool                 200K   207G      120K  /var/spool
    rpool/ROOT/ubuntu_mdulm6/var/www                    96K   207G       96K  /var/www
    rpool/USERDATA                                    5.81M   207G       96K  /
    rpool/USERDATA/duckhook_ief1eb                    5.61M   207G     3.50M  /home/duckhook
    rpool/USERDATA/root_ief1eb                         104K   207G      104K  /root
    Note that the bulk of the disk is assigned to rpool. What we want to do is define a dataset within rpool for LXD containers:
    Code:
    duckhook@Zeus:~$ sudo zfs create rpool/containers
    Now we can proceed to initialize LXD in the usual way (Setting up a ZFS pool) but with the following change:
    Code:
    duckhook@Zeus:~$ lxd init
    …
    Name of the existing ZFS pool or dataset: rpool/containers
    …
    The dataset assigned to LXD is unconstrained and will grow as needed. Once initialized, it will be owned by LXD and will no longer be visible to general users.
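    To confirm afterwards that LXD has taken over the dataset (default is the storage pool name that lxd init creates unless you chose another):
    Code:
    duckhook@Zeus:~$ lxc storage show default
    duckhook@Zeus:~$ sudo zfs list -r rpool/containers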



    Accessing the CDROM Drive

    Installing old games from the CDROM drive into a WINE container can be done as follows:

    1. Physically insert the game CD. Your system will likely automount it.
    2. We must unmount it from the host before we can pass the device into the container:
      Code:
      duckhook@Zeus:~$ sudo umount /media/path/to/CD
    3. Now we can mount it within the container:
      Code:
      duckhook@Zeus:~$ lxc config device add <container_name> cdrom disk readonly=true source=/dev/sr0 path=/media/cdrom
      The syntax breaks down as follows:
      • lxc config device add is obvious.
      • <container_name> is replaced with the name of your container.
      • cdrom is the name we assign to the device. You can call it anything you want.
      • disk is the device type.
      • readonly=true permits only read access.
      • source=/dev/sr0 is self-explanatory.
      • path=/media/cdrom defines the mountpoint inside the container.

    We can now access the CDROM within the container. This can be done with DVDs and Blu-Rays too. Note that the container has exclusive unshared access to the drive and it becomes inaccessible to the host. Therefore, when finished with its use, we must release it from the container (make sure there are no container processes still dependent on it). Also, the container may refuse to start after a reboot if the CD that it is expecting has been ejected, which is another good reason to clean up properly. To unmount it:
    Code:
    duckhook@Zeus:~$ lxc config device remove <container_name> cdrom
    Some might decide that it's better to just copy the CD's contents into some directory in the container, chmod that directory and all its contents to 555, then map that directory to some disk in winecfg and designate it as a CDROM. This is a viable strategy especially if the game requires a CD to run, but it can be space-intensive. In our world of cheap storage, this is less of a consideration.
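    A sketch of that copy-based approach (the paths and container name are placeholders; the CD must still be mounted on the host when pushing):
    Code:
    duckhook@Zeus:~$ lxc file push --recursive /media/path/to/CD <container_name>/home/ubuntu/
    # the pushed directory is named after the last component of the source path
    duckhook@Zeus:~$ lxc exec <container_name> -- chmod -R 555 /home/ubuntu/CD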
