Here's where things get interesting (and a little complicated).
Initializing LXD automatically creates a default profile that sets up essential plumbing: a network bridge, a root disk mapped to the storage pool, and the other primary services needed to operate an LXD instance. If we were to create an instance at this point, it would already be quite usable. We could invoke a shell to log in, and we could install command-line apps like mutt, lynx and irssi. However, we would not be able to launch graphical apps, because the instance does not yet have the hooks it needs to display graphics through X11 and output sound through PulseAudio. To create those hooks, we must build a second profile that links the container's internal audio-visual outputs to the host's.
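If you are curious what the default profile actually provides, you can inspect it with lxc profile show default. With the stock answers to lxd init, the output typically looks something like the following (the bridge name lxdbr0 and pool name default depend on what you chose during initialization):

```yaml
# Typical output of `lxc profile show default` after a stock `lxd init`.
# Your bridge and pool names may differ.
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0   # the bridge created during lxd init
    type: nic
  root:
    path: /
    pool: default     # the storage pool created during lxd init
    type: disk
name: default
used_by: []
```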
It's best to do this by creating a plain text file first so that we can edit it to our heart's content. We will then export this text file into LXD once it's been vetted and finalized. I store all configuration files of this sort in my personal bin directory.
Code:
duckhook@Zeus:~$ nano /home/duckhook/bin/x11.profile # use your GUI editor if you prefer
Paste the following into x11.profile:
Code:
config:
  environment.DISPLAY: :0
  environment.PULSE_SERVER: unix:/var/pulse-native
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
    write_files:
      - owner: root:root
        permissions: '0644'
        append: true
        content: |
          PULSE_SERVER=unix:/var/pulse-native
        path: /etc/environment
description: GUI LXD profile
devices:
  PASocket1:
    bind: container
    connect: unix:/run/user/1000/pulse/native
    gid: "1000"
    listen: unix:/var/pulse-native
    mode: "0777"
    security.gid: "1000"
    security.uid: "1000"
    type: proxy
    uid: "1000"
  X1:
    bind: container
    connect: unix:@/tmp/.X11-unix/X0
    listen: unix:@/tmp/.X11-unix/X0
    security.gid: "1000"
    security.uid: "1000"
    type: proxy
  mygpu:
    type: gpu
name: x11
used_by: []
Some very important things to note:
- This file makes no allowance for proprietary graphics drivers. If you use closed-source drivers, as many people with nVidia GPUs do, then additional settings are needed. I do not use nVidia, so the following are purely settings gleaned from on-line sources. I cannot troubleshoot them and cannot help you with them. For nVidia GPUs using the closed-source drivers, add the following lines under config:
Code:
nvidia.driver.capabilities: all
nvidia.runtime: "true"
- If you only have nouveau installed, DO NOT use these lines. If you have no nVidia GPU, DO NOT add these lines. These options will break the profile for any GPU that is not nVidia running on closed-source drivers.
- This profile assumes a standard x11 display environment on the host of "0". If you have more than one monitor or you access through RDP, your display environment might not be "0", in which case, you must change this line:
Code:
connect: unix:@/tmp/.X11-unix/X0
…and replace X0 with whatever your display environment is: say, X1. To determine what your display environment is, do:
Code:
duckhook@Zeus:~$ echo $DISPLAY
If the result is :0, then keep the setting at X0. If the result is :1, then change the setting to X1.
- Check your work carefully, save x11.profile and exit.
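One line in the file worth unpacking is the runcmd sed edit. As I understand it, PulseAudio's shared-memory transport cannot cross the container boundary, so the profile disables it in the container's /etc/pulse/client.conf on first boot. You can see exactly what that substitution does by running it against a throwaway file (the path here is illustrative):

```shell
# Reproduce the profile's runcmd edit against a sample file.
# The real target inside the container is /etc/pulse/client.conf.
printf '; enable-shm = yes\n' > /tmp/client.conf.sample

# Same substitution as in the profile: uncomment the key and set it to "no".
sed -i 's/; enable-shm = yes/enable-shm = no/g' /tmp/client.conf.sample

cat /tmp/client.conf.sample
```

After the edit, the line reads enable-shm = no, which forces PulseAudio to fall back to the socket transport that the PASocket1 proxy device provides.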
Next, we create a profile called x11
Code:
duckhook@Zeus:~$ lxc profile create x11
Then, we pipe the contents of x11.profile into the profile just created:
Code:
duckhook@Zeus:~$ cat ~/bin/x11.profile | lxc profile edit x11
Alternatively, you can also just open up the x11 profile with:
Code:
duckhook@Zeus:~$ lxc profile edit x11
…then copy and paste the contents of x11.profile into the empty profile. However, lxc uses VIM as its default editor and most general users find VIM to be a difficult editor to learn.
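If VIM is not your thing, lxc respects the standard EDITOR environment variable, so you can point it at nano (or any editor you like) before editing:

```shell
# Make interactive edits (lxc profile edit, crontab -e, etc.) open in nano.
# Add this to your ~/.bashrc to make it permanent.
export EDITOR=nano
```

With that set, lxc profile edit x11 will open in nano instead of VIM.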
There's a lot going on here. If you wish to unpack the meaning of these keys, the primary resource that I used was: https://blog.simos.info/running-x11-...xd-containers/
If everything went smoothly, we are now ready to launch our first container. Before doing so, I should touch upon a few further points:
- To launch our first container instance, we need to base it on a specific image. LXD has a number of images prebuilt online, one of which must be downloaded before the container instance can be launched.
- This download can take some time and will use considerable bandwidth. Mine was over 1.3 GB of data. Make sure that you have a fast, decent connection when downloading an image for the first time.
- LXD has an impressive collection of images. One doesn't have to use Ubuntu. Containers can be based on Arch, Debian, Gentoo, Fedora, you name it. To see all the images available, do:
Code:
duckhook@Zeus:~$ lxc image list images: | less
To see just the list of Ubuntu images:
Code:
duckhook@Zeus:~$ lxc image list ubuntu: | less
Piping through less is a good idea because the selection is pretty extensive.
- For our purposes, we will use Ubuntu 22.04. We want to base our container on a familiar and well-tested image with known stability.
- Once an image is downloaded, it resides within the image directory of LXD. Thereafter, creating new containers based on that image is very fast because it is not necessary to download it again. Of course, basing containers on a different image will require that image to be downloaded in its entirety the first time.
Let's create our first container and call it firefox:
Code:
duckhook@Zeus:~$ lxc launch ubuntu:22.04 --profile default --profile x11 firefox
We are using two profiles for this container: the default profile created in the LXD initialization process and the x11 profile that we just created. The first is necessary for basic functionality; the second enables a working GUI.
Note that lxc launch both creates and starts the container. If the container is ever stopped, start it again with:
Code:
duckhook@Zeus:~$ lxc start firefox
To check its status:
Code:
duckhook@Zeus:~$ lxc list
Let's start a shell and log in:
Code:
duckhook@Zeus:~$ lxc exec firefox -- sudo --user ubuntu --login
You can poke around to your heart's content. If you have a favourite bash profile, feel free to edit .bashrc.
The first item of importance is to check that all the parts we wanted are present and working:
Code:
ubuntu@firefox:~$ glxinfo -B
Mine returns the following. Yours will differ depending on your video subsystem:
Code:
ubuntu@firefox:~$ glxinfo -B
name of display: :0
display: :0 screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
Vendor: X.Org (0x1002)
Device: AMD TAHITI (DRM 2.50.0, 4.15.0-88-generic, LLVM 9.0.0) (0x679a)
Version: 19.2.8
Accelerated: yes
Video memory: 3072MB
Unified memory: no
Preferred profile: core (0x1)
Max core profile version: 4.5
Max compat profile version: 4.5
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 3.2
Memory info (GL_ATI_meminfo):
VBO free memory - total: 2194 MB, largest block: 2194 MB
VBO free aux. memory - total: 2019 MB, largest block: 2019 MB
Texture free memory - total: 2194 MB, largest block: 2194 MB
Texture free aux. memory - total: 2019 MB, largest block: 2019 MB
Renderbuffer free memory - total: 2194 MB, largest block: 2194 MB
Renderbuffer free aux. memory - total: 2019 MB, largest block: 2019 MB
Memory info (GL_NVX_gpu_memory_info):
Dedicated video memory: 3072 MB
Total available memory: 5115 MB
Currently available dedicated video memory: 2194 MB
OpenGL vendor string: X.Org
OpenGL renderer string: AMD TAHITI (DRM 2.50.0, 4.15.0-88-generic, LLVM 9.0.0)
OpenGL core profile version string: 4.5 (Core Profile) Mesa 19.2.8
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL version string: 4.5 (Compatibility Profile) Mesa 19.2.8
OpenGL shading language version string: 4.50
OpenGL context flags: (none)
OpenGL profile mask: compatibility profile
OpenGL ES profile version string: OpenGL ES 3.2 Mesa 19.2.8
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
For fun, and also to test X11, let's try xclock, glxgears and xman:
Code:
ubuntu@firefox:~$ xclock
ubuntu@firefox:~$ glxgears
ubuntu@firefox:~$ xman
This should bring up each of these three X apps on your desktop.
Now, to test if the pulseaudio hooks have been properly mapped:
Code:
ubuntu@firefox:~$ pactl info
Mine looks like this. Yours should look similar:
Code:
ubuntu@firefox:~$ pactl info
Server String: unix:/home/ubuntu/.pulse-native
Library Protocol Version: 32
Server Protocol Version: 32
Is Local: yes
Client Index: 46
Tile Size: 65472
User Name: duckhook
Host Name: Zeus
Server Name: pulseaudio
Server Version: 11.1
Default Sample Specification: s16le 2ch 44100Hz
Default Channel Map: front-left,front-right
Default Sink: alsa_output.usb-Logitech_Logitech_Wireless_Headset_000D44B25A6A-00.analog-stereo
Default Source: alsa_input.usb-Logitech_Logitech_Wireless_Headset_000D44B25A6A-00.analog-mono
Cookie: 0793:b74f
NOTE
If you don't see something like the above, you will have to troubleshoot. Sound is often problematic and may not work at the outset. If you do not get sound, try rebooting the container and waiting a few minutes for the needed modules to load before logging in with a shell. If PulseAudio continues to be problematic, we may have to use a different approach. See Problems and Troubleshooting.
Before installing anything, your first real action should be to update the container image:
Code:
ubuntu@firefox:~$ sudo apt update && sudo apt full-upgrade && sudo apt autoremove && sudo apt clean
If the update installs a new kernel or other system-level component, you should reboot the container before proceeding further:
Code:
ubuntu@firefox:~$ sudo reboot
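If you are unsure whether a reboot is actually needed, Ubuntu's update machinery touches the file /var/run/reboot-required when a pending change calls for one. A small sketch of the check (the check_reboot helper is my own illustration, not part of LXD or Ubuntu):

```shell
# Report whether the system has flagged a pending reboot.
# Ubuntu's update tooling creates /var/run/reboot-required when
# a kernel or other core component was upgraded.
check_reboot() {
  flag="${1:-/var/run/reboot-required}"
  if [ -f "$flag" ]; then
    echo "reboot needed"
  else
    echo "no reboot needed"
  fi
}

check_reboot
```

Run it inside the container after the upgrade; if it reports a pending reboot, do sudo reboot before proceeding.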
If all of the above turn out well, pat yourself on the back. The hardest part is over and we should be good to go for the prize.