1 LXD clustering

1.1 Enable clustering

Assuming each member of your table group has set up an LXD VM host, pick one of you to be the cluster master.

On each LXD VM host, run the following commands:

$ lxc config set core.https_address IP.OF.YOUR.VM:8443
$ lxc config set core.trust_password 'class-password'

... remember to replace IP.OF.YOUR.VM with the IP of your VM on br-lan.
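
You can read the address back to confirm it was set; for example (output shown with the same placeholder):

$ lxc config get core.https_address
IP.OF.YOUR.VM:8443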

Then, on the LXD cluster master, say:

$ lxc cluster enable NAME_OF_MASTER_LXD_NODE

... where "NAME_OF_MASTER_LXD_NODE" is the hostname of the node that you picked to be the master of the cluster.

After a few seconds, you should see:

Clustering enabled

Look at your cluster:

$ lxc cluster list
+------+--------------------------+----------+--------+-------------------+
| NAME |          URL             | DATABASE | STATE  |      MESSAGE      |
+------+--------------------------+----------+--------+-------------------+
| lxd1 | https://100.64.0.XX:8443 | YES      | ONLINE | fully operational |
+------+--------------------------+----------+--------+-------------------+

A one-node cluster isn't very useful... Let's add more LXD hosts!

Make note of your IP address on br-lan, as we'll be using that for others to join our cluster.
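
If you're not sure what that address is, you can check with iproute2 (this assumes the bridge interface is literally named br-lan, as above):

$ ip -4 addr show dev br-lan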

2 On the other LXD VM hosts

Now, we'll get the other LXD VM hosts to join.

Unfortunately, there isn't a way to preserve containers on nodes which join an existing cluster. So first, we need to clean up this node.

List your existing containers:

$ lxc list

Stop and delete all containers:

$ lxc stop --all
$ lxc delete <..1..>
$ lxc delete <..2..>
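
If you have more than a couple of containers, you can script the deletions; a sketch, assuming every container listed on this node is disposable:

$ for c in $(lxc list -c n --format csv); do lxc delete "$c"; done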

Then we need to remove the current storage pool - otherwise it will conflict when you join the cluster.

To do this:

$ lxc profile edit default

In the editor, find the line:

    pool: default

... and remove it.

Save the file, and exit.
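
If you'd rather skip the interactive editor, one non-interactive alternative is to remove the root disk device from the profile altogether, which also drops the pool reference (note this removes the whole device entry, not just the pool: line):

$ lxc profile device remove default root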

Now, delete the 'default' storage pool:

$ lxc storage delete default

You should see:

Storage pool default deleted
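
You can confirm the pool is gone:

$ lxc storage list

The list should now be empty.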

The node is ready to be reinitialized and joined to the cluster.

To do this, re-run LXD init (only ON THE OTHER NODES!):

$ sudo lxd init
> Would you like to use LXD clustering? (yes/no) [default=no]: yes
> What name should be used to identify this node in the cluster? [default=lxd2]:
> What IP address or DNS name should be used to reach this node? [default=X.X.X.X]: 100.64.0.XX
> Are you joining an existing cluster? (yes/no) [default=no]: yes
> IP address or FQDN of an existing cluster node: 100.64.0.YY

Once you enter the IP of the cluster master, you will be asked to accept the fingerprint, and enter the trust password (class password).

  Cluster fingerprint: 8382977ec86db0779f61bdadc18b3026baa3a6d108a8a4bd0de7fd82349eba71
  You can validate this fingerprint by running "lxc info" locally on an existing node.
> Is this the correct fingerprint? (yes/no) [default=no]: yes
> Cluster trust password: <class_password>

Once you enter the password, you will be warned that all existing data on the joining node will be lost. Say yes:

> All existing data is lost when joining a cluster, continue? (yes/no) [default=no]: yes

Then, you are asked:

Choose the local disk or dataset for storage pool "default" (empty for loop disk):

Just press RETURN (leave it empty) to use a loop disk.

And finally, answer no at the last question:

Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no
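
As an aside: answering yes there prints a YAML preseed which can be fed back into "lxd init" to repeat the same setup non-interactively on another machine. A sketch, assuming you saved the output as preseed.yaml:

$ cat preseed.yaml | sudo lxd init --preseed

We won't use that here, so answer no.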

This node is now part of the cluster!

Try:

$ lxc cluster list

You should see:

+------+--------------------------+----------+--------+-------------------+
| NAME |          URL             | DATABASE | STATE  |      MESSAGE      |
+------+--------------------------+----------+--------+-------------------+
| lxd1 | https://100.64.0.YY:8443 | YES      | ONLINE | fully operational |
+------+--------------------------+----------+--------+-------------------+
| lxd2 | https://100.64.0.XX:8443 | YES      | ONLINE | fully operational |
+------+--------------------------+----------+--------+-------------------+
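
You can also inspect an individual member, or the loop-backed storage pool the join created; for example (member name taken from the listing above):

$ lxc cluster show lxd2
$ lxc storage show default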

3 Move a container to another VM

A stopped container can be moved to another cluster member with the --target flag:

$ lxc stop srv1
$ lxc move srv1 --target lxd2
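
You can also place new containers on a specific member at launch time. A sketch, assuming the stock ubuntu: image remote and the member names above:

$ lxc launch ubuntu:22.04 srv2 --target lxd2

In a cluster, lxc list gains a LOCATION column, so you can see which node each container landed on.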