1 LXD lab

You will run the following lab on your Ganeti server (hostX).

Log in as the usual user, then become root using the sudo command:

$ sudo -s
#

We'll be configuring LXD on the server.

First, identify the current version of LXD.

# lxc version
...
Client version: 3.0.3
Server version: 3.0.3

1.1 Upgrade LXD

OK, let's install a more recent version using the snap package management system.

To read more about snap, look here: https://snapcraft.io/about

# snap install lxd

You might get an error message like this:

error: cannot install "lxd": Post https://api.snapcraft.io/v2/snaps/refresh: CONNECT denied (ask
       the admin to allow HTTPS tunnels)

In this case, we'll need to tell snap not to use the in-classroom proxy:

Remove the file /etc/systemd/system/snapd.service.d/snap_proxy.conf:

# rm /etc/systemd/system/snapd.service.d/snap_proxy.conf

Now reconfigure the snap service so it will not use the proxy:

# systemctl daemon-reload
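
Note: systemctl daemon-reload only re-reads the unit files; an already-running snapd keeps its old environment. If the install below still fails with the same proxy error, restarting the daemon should make it pick up the change:

# systemctl restart snapd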

You should now be able to install the LXD snap:

# snap install lxd

You will see a progress bar, and you will be informed that the latest stable version of LXD is being downloaded and installed.

At the end, you should see:

lxd 4.19 from Canonical✓ installed

But wait, we still need to remove the built-in 3.0.3 and convert any existing container configurations, as by default 3.0.3 is still active:

# lxc version
...
Client version: 3.0.3
Server version: 3.0.3

Luckily we don't have any containers yet, so the migration will be quick:

# lxd.migrate
=> Connecting to source server
=> Connecting to destination server
=> Running sanity checks
The source server is empty, no migration needed.

The migration is now complete and your containers should be back online.

You will be asked whether you want to remove the old LXD install; answer yes:

Do you want to uninstall the old LXD (yes/no) [default=yes]? yes

All done. You may need to close your current shell and open a new one to have the "lxc" command work.

To pick up the new path for the lxc command, run:

# hash -r

Note: This re-scans all the commands that can be found in all the directories listed in the $PATH environment variable. If you don't run this, you'll see an error message like: "bash: /usr/bin/lxc: No such file or directory"
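
You can also check which binary the shell now finds; the snap-packaged LXD installs its commands under /snap/bin, so the output should look like this:

# which lxc
/snap/bin/lxc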

Confirm that you're running 4.19:

# lxc version
...
Client version: 4.19
Server version: 4.19

1.2 Setup LXD

You're also told that you need to run lxd init to set up the LXD environment, as we haven't done this yet.

Let's proceed. In the output below, we've prefixed with -> the questions that require a NON-default answer.

When prompted for the Trust password, use the class password.

# lxd init

   Would you like to use LXD clustering? (yes/no) [default=no]:
   Do you want to configure a new storage pool? (yes/no) [default=yes]:  
   Name of the new storage pool [default=default]:
   Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]:
   Create a new ZFS pool? (yes/no) [default=yes]:
   Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:
-> Size in GB of the new loop device (1GB minimum) [default=7GB]: 20GB
   Would you like to connect to a MAAS server? (yes/no) [default=no]: no
-> Would you like to create a new local network bridge? (yes/no) [default=yes]: no
-> Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
-> Name of the existing bridge or host interface: br-lan
-> Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
   Address to bind LXD to (not including port) [default=all]:
   Port to bind LXD to [default=8443]:
-> Trust password for new clients: *******
   Again: *******
-> Would you like stale cached images to be updated automatically? (yes/no) [default=yes] no
   Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no

This should finish without errors.
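
Side note: the YAML "preseed" mentioned in the last question is useful for automation -- lxd init can read the same answers back non-interactively, so you could reproduce an identical setup on another host. A minimal sketch, assuming you had saved the preseed to a file named lxd-preseed.yaml (hypothetical name):

# lxd init --preseed < lxd-preseed.yaml

We won't use this in the lab; the interactive answers above are all we need.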

We've now set aside 20GB for running containers - it may not seem like much but it's fine for this lab. Also, remember that containers only use as much disk space as the files they contain, so we don't need to create an entire disk that will sit mostly empty.
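
If you're curious, you can ask LXD about the storage pool it just created, including how much of the 20GB is actually in use (optional):

# lxc storage list
# lxc storage info default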

We're now ready to deploy our first LXD container.

1.3 LXD image repositories

Hundreds of images are available by default, from different online repositories.

To see all the repositories configured:

# lxc remote list

You should see:

+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                   URL                    |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org       | simplestreams | none        | YES    | NO     | NO     |
| local (current) | unix://                                  | lxd           | file access | NO     | YES    | NO     |
| ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | none        | YES    | YES    | NO     |
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | none        | YES    | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+

To see all images under a given repo, run:

# lxc image list NAME:

Note the : after the NAME, where NAME is one of the repositories listed in the leftmost column.

For instance:

# lxc image list ubuntu:

Wow! Several hundred images will scroll past - append | less to the command to page through the list. You will notice many different releases, architectures, and variants.
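
Tip: you can narrow the list by adding a filter term after the remote name. For example, to show only the 20.04 images:

# lxc image list ubuntu: 20.04 | less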

Notice also the CONTAINER and VIRTUAL-MACHINE keywords in the TYPE column:

| ALIAS | FINGERPRINT  | PUBLIC |                   DESCRIPTION               | ARCH.  |      TYPE       |   SIZE   |          UPLOAD DATE          |
|       | 3f7089b26821 | yes    | ubuntu 20.04 LTS amd64 (release) (20210720) | x86_64 | VIRTUAL-MACHINE | 535.19MB | Jul 20, 2021 at 12:00am (UTC) |
|       | cea91a28441a | yes    | ubuntu 20.04 LTS amd64 (release) (20210720) | x86_64 | CONTAINER       | 364.31MB | Jul 20, 2021 at 12:00am (UTC) |

LXD can manage both virtual machines and containers. We're interested in containers right now. If you list images in the 'images:' repository, you'll see plenty of other Linux distributions like Debian, Fedora, Arch, ...
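
For illustration only (don't run it now -- we'll stick to containers in this lab), launching a virtual machine instead of a container is simply a matter of adding the --vm flag:

# lxc launch ubuntu:20.04 test-vm --vm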

At this point we could simply run lxc launch ubuntu:20.04 and it would download, unpack, configure and run an Ubuntu 20.04 LTS container image.

BUT, we'd rather you download the images we've prepared from the local server to speed things up.

1.4 Add the classroom repository

We need to tell our local LXD instance about the classroom LXD repository.

To do so, run the following command:

# lxc remote add nsrc-images s1:4443

You will be presented with the following output, and a prompt to accept the certificate on s1. Enter y to accept:

   Generating a client certificate. This may take a minute...
   Certificate fingerprint: 7f3ea2544b44e3e64efacc7bba147337c271a33739f0bc98809497e88c3d6291
-> ok (y/n/[fingerprint])? y
   Admin password for s1:
   Client certificate now trusted by server: s1

Run lxc remote list once again -- the new repository nsrc-images should be there.

To view the images available on nsrc-images, run:

# lxc image list nsrc-images:

You should see:

+--------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
|    ALIAS     | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 | ARCHITECTURE |   TYPE    |   SIZE   |         UPLOAD DATE          |
+--------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
| centos-8     | efb603d87fd4 | no     | Centos 8 amd64 (20211027_07:08)             | x86_64       | CONTAINER | 129.20MB | Oct 27, 2021 at 9:51pm (UTC) |
+--------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
| debian-11    | c8fbb6e215f5 | no     | Debian bullseye amd64 (20211017_05:24)      | x86_64       | CONTAINER | 80.55MB  | Oct 27, 2021 at 9:56pm (UTC) |
+--------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
| ubuntu-20.04 | 5fc94479f588 | no     | ubuntu 20.04 LTS amd64 (release) (20211021) | x86_64       | CONTAINER | 370.44MB | Oct 27, 2021 at 7:22pm (UTC) |
+--------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+

1.5 Create a container (finally!)

Let's create a CentOS container!

# lxc launch nsrc-images:centos-8 my-centos

This should take only a few seconds:

Creating my-centos
Starting my-centos

Now, list the containers on your system:

# lxc list

+-----------+---------+--------------------+------+-----------+-----------+
|   NAME    |  STATE  |        IPV4        | IPV6 |   TYPE    | SNAPSHOTS |
+-----------+---------+--------------------+------+-----------+-----------+
| my-centos | RUNNING | 100.64.0.XX (eth0) |      | CONTAINER | 0         |
+-----------+---------+--------------------+------+-----------+-----------+

If you don't see an address under IPV4, wait a few seconds and run lxc list again.

Once you see an IP address, try pinging it:

# ping 100.64.0.XX

You should be getting a reply.

Now it's time to enter the container. You can launch a shell directly inside the running container, without having to use SSH or a console:

# lxc shell my-centos

[root@my-centos ~]#

Note the change in the command prompt.

How do we know we're really running a CentOS distribution?

# cat /etc/redhat-release
CentOS Linux release 8.4.2105
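
Note: lxc shell is a convenience alias; you can get the same result (or run any single command inside the container) with lxc exec, for example:

# lxc exec my-centos -- bash
# lxc exec my-centos -- cat /etc/redhat-release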

1.6 Install some software inside the container

Let's install a web server!

# yum update
# yum install nginx
...
# systemctl start nginx

Verify that it's working -- try opening the web page in your browser: http://ip.of.your.container/
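
If you prefer the command line, you can also test from the host with curl, using the address shown by lxc list (replace XX as before):

# curl -I http://100.64.0.XX/

You should see an HTTP 200 OK response header from nginx.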

Now, run ps ax to view the running processes:

# ps ax | grep nginx

Note the PIDs (leftmost column) of the running nginx processes.

Exit the container, returning to the main host:

[root@my-centos ~]# exit

Now, re-run the above command.

# ps ax | grep nginx

Note the PIDs once more -- what do you notice?

Try entering the container once more (lxc shell ...), and run ps ax -- how many processes do you see? You can count them with: ps ax | wc -l

Run the same command on the main host -- you will see many more processes. From "inside" the container, you cannot see processes running "outside" it -- or in other containers.
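
You can make the same comparison without entering the container -- the first command below counts the processes visible inside it, the second counts those visible on the host:

# lxc exec my-centos -- ps ax | wc -l
# ps ax | wc -l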

1.7 Where are the containers stored?

If you paid attention during the lxd init step, you may have noticed that we were asked to create a new storage pool, backed by ZFS.

ZFS is a very scalable, error-correcting filesystem and storage manager. When we answered the questions above, LXD created a new storage pool inside a disk image file.
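
Even before installing any ZFS tools, you can ask LXD itself where that pool lives (optional):

# lxc storage show default

The output should include a source: line pointing at a disk image file under /var/snap/lxd/common/lxd/disks/.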

Although LXD uses ZFS, the zfs command line tools aren't installed by default, so let's install them so we can interact with ZFS from the shell:

# apt-get install zfsutils-linux

You should now be able to see all ZFS filesystems:

# zfs list

The output should be something similar to this:

NAME                                                                              USED  AVAIL  REFER  MOUNTPOINT
default                                                                           414M  17.5G    24K  none
default/containers                                                               66.9M  17.5G    24K  none
default/containers/my-centos                                                     66.9M  17.5G   408M  none
default/custom                                                                     24K  17.5G    24K  none
...
default/images                                                                    347M  17.5G    24K  none
default/images/efb603d87fd4d0db2917046dc50ecdf56701412b86a7d07b588d9332330bd7a7   347M  17.5G   347M  none

You can also get details on the ZFS pool where all the ZFS filesystems are stored:

# zpool status

Output should be similar to this:

 state: ONLINE
  scan: none requested
config:

    NAME                                          STATE     READ WRITE CKSUM
    default                                       ONLINE       0     0     0
      /var/snap/lxd/common/lxd/disks/default.img  ONLINE       0     0     0

This tells us that there is a ZFS pool called default, stored on a single disk image: the file /var/snap/lxd/common/lxd/disks/default.img -- the 20GB loop device we created during lxd init.

This file can be grown as needed -- or better, on a production system, you'd use a dedicated disk partition (or several disks) for the ZFS pool.
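
As a quick illustration, compare the file's apparent size with the space it actually occupies on disk -- the loop file is created sparse, so the two numbers will usually differ:

# ls -lh /var/snap/lxd/common/lxd/disks/default.img
# du -h /var/snap/lxd/common/lxd/disks/default.img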

1.8 More things to try