1 Objectives

Each of you will install the Ganeti virtualization cluster management software on your Linux server.

You will build clusters of three or four hosts, depending on how the instructor wishes to organise the classroom. For example, a lab with 15 hosts might be organised like this:

cluster IP          master node           additional nodes
gnt1.ws.nsrc.org    host1.ws.nsrc.org     host2.ws.nsrc.org, host3.ws.nsrc.org
gnt2.ws.nsrc.org    host4.ws.nsrc.org     host5.ws.nsrc.org, host6.ws.nsrc.org
gnt3.ws.nsrc.org    host7.ws.nsrc.org     host8.ws.nsrc.org, host9.ws.nsrc.org
gnt4.ws.nsrc.org    host10.ws.nsrc.org    host11.ws.nsrc.org, host12.ws.nsrc.org
gnt5.ws.nsrc.org    host13.ws.nsrc.org    host14.ws.nsrc.org, host15.ws.nsrc.org

Note that Ganeti requires you to use fully-qualified domain names, and these must resolve to the correct IP addresses (either in the DNS or in the /etc/hosts file on every node).
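
For example, a quick way to check that a name resolves (via either the DNS or /etc/hosts) is getent, which consults both sources:

# getent hosts hostX.ws.nsrc.org     # should print 10.10.0.X
# getent hosts gntN.ws.nsrc.org      # should print your cluster's IP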

2 Become root

All of the actions in this exercise are done as "root", so if you are not already root, type:

$ sudo -s
#

3 Configure the Hostname

Look at the contents of the file /etc/hostname and check that it contains the fully-qualified domain name, i.e.

hostX.ws.nsrc.org

(where X is your machine number). If not, edit it so that it does, then get the system to re-read this file:

# hostname -F /etc/hostname

Also check /etc/hosts to ensure that you have both the fully-qualified name and the short name there, pointing to the correct IP address:

127.0.0.1   localhost
10.10.0.X   hostX.ws.nsrc.org hostX
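
If you want to double-check both settings, hostname can report the two forms:

# hostname -f      # should print hostX.ws.nsrc.org
# hostname -s      # should print hostX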

4 Logical Volume Manager

Type the following command:

# vgs

If it shows you have a volume group called 'ganeti', then skip to the next section, "Configure the Network".

If the command is not found, then install the lvm2 package:

# apt-get install lvm2

Now, your host machine should have either a spare partition or a spare hard drive which you will use for LVM. If it's a second hard drive it will be /dev/vdb or /dev/sdb. Check which you have:

# ls /dev/vd*
# ls /dev/sd*

The following instructions assume the spare drive is /dev/vdb but please adjust them as necessary.

Turning this drive into a physical volume for LVM will destroy any data which is on it, so double-check that the drive is not in use, by looking at what filesystems are currently mounted:

# mount

For example, you may see /dev/vda1 mounted (which means the first partition on device /dev/vda is in use).
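
If it is available on your system, lsblk also gives a convenient overview of all block devices, their partitions and their mount points, which makes it easy to spot the unused drive:

# lsblk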

Assuming /dev/vdb is spare, let's mark it as a physical volume for LVM:

# pvcreate /dev/vdb
# pvs   # should show the physical volume

Now we need to create a volume group called ganeti containing just this one physical volume. (Volume groups can be extended later by adding more physical volumes)

# vgcreate ganeti /dev/vdb
# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  ganeti   1   0   0 wz--n- 24.00g 24.00g

You should see that the volume group has been created: it consists of one Physical Volume (PV), and no Logical Volumes (LVs) have been created within it yet, so all of the space is free.

More detailed information can be seen by typing vgdisplay.

If you want to create, extend and delete a logical volume called "foo", these are the commands you would use. If you have not used them before, this is a good time to try them out.

# lvcreate --size 1G --name foo ganeti      # create volume called "foo" of 1GB
# lvs
# ls -l /dev/ganeti
# blockdev --getsize64 /dev/ganeti/foo      # shows device size in bytes
# lvextend --size +1G /dev/ganeti/foo       # grow by 1GB
# blockdev --getsize64 /dev/ganeti/foo      # shows device size in bytes
# vgs                                       # check free space in VG
# lvremove /dev/ganeti/foo
# vgs

Note: on a production Ganeti server it is recommended to configure LVM not to scan DRBD devices for physical volumes. The documentation suggests editing /etc/lvm/lvm.conf and adding a reject expression to the filter variable, like this:

filter = [ "r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]

You can tighten this further by allowing only devices which match the expected pattern. If you know that all your attached physical disks start with /dev/sd then you can accept only those and reject everything else:

filter = [ "a|^/dev/sd|", "r|.*|" ]

5 Configure the Network

We're now going to reconfigure the network on our machine so that we will be using VLANs. While it would be perfectly fine to use a single network for running virtual machines, there are a number of limitations: in particular, management traffic, disk replication traffic and the virtual machines' own traffic would all share a single network, with no separation between them.

Instead of using separate ethernet cards for each of these roles, we'll use VLANs.

We need to implement three networks: management, replication, and service.

Ideally, we would create a separate VLAN for each of these networks; in this lab the management network will stay on the untagged (default) VLAN [1], and VLANs 100 and 255 will carry the replication and service networks.

5.1 VLAN configuration

To be on the safe side, let's install the vlan and bridge management tools (you should already have installed these earlier).

# apt-get install vlan bridge-utils

Let's make changes to the network configuration file for your system. If you remember, this is /etc/network/interfaces.

Edit this file, and look for the br-lan definition. This is the bridge interface you created earlier, and eth0 is attached to it.

It should look something like this:

# Management interface
auto eth0
iface eth0 inet manual

auto br-lan
iface br-lan inet static
        address         10.10.0.X
        netmask         255.255.255.0
        gateway         10.10.0.254
        dns-nameservers 10.10.0.241
        bridge_ports    eth0
        bridge_stp      off
        bridge_fd       0
        bridge_maxwait  0

We're going to leave this alone, and we will not use VLAN tagging (802.1q) for our management network. This means that we will have both untagged and tagged (VLAN) frames going through eth0 [2].

We will proceed to create VLANs 100 and 255, and the associated bridge interfaces for them.

5.1.1 Replication network

Let's start with the Replication network. To do this, add the following lines below the br-lan section:

# Replication network
auto eth0.100
iface eth0.100 inet manual

auto br-rep
iface br-rep inet static
        address 10.10.100.X
        netmask 255.255.255.0
        bridge_ports    eth0.100
        bridge_stp      off
        bridge_fd       0
        bridge_maxwait  0

Remember to replace X with the number of your class PC.

This does two things: it creates the VLAN interface eth0.100, which carries frames tagged with VLAN 100 over eth0, and it creates the bridge br-rep on top of it, carrying the host's replication IP address 10.10.100.X [3].

5.1.2 Service network

Now we add the Service network. Same as before, go to the end of the file, and add the following lines:

# Service network
auto eth0.255
iface eth0.255 inet manual

auto br-svc
iface br-svc inet manual
        bridge_ports    eth0.255
        bridge_stp      off
        bridge_fd       0
        bridge_maxwait  0

This is very similar to VLAN 100, but notice that we have NOT configured an IP address for br-svc. This is because we do not want the physical host OS to have an IP address on this network: the host OS shouldn't be reachable via SSH on this network, for security reasons.

Review the work you have just done. The resulting file should look something like this (IPs should be the ones for your PC, of course):

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual

auto br-lan
iface br-lan inet static
        address 10.10.0.X
        netmask 255.255.255.0
        gateway 10.10.0.254
        dns-nameservers 10.10.0.241
        bridge_ports    eth0
        bridge_stp      off
        bridge_fd       0
        bridge_maxwait  0

# Replication network
auto eth0.100
iface eth0.100 inet manual

auto br-rep
iface br-rep inet static
        address 10.10.100.X
        netmask 255.255.255.0
        bridge_ports    eth0.100
        bridge_stp      off
        bridge_fd       0
        bridge_maxwait  0

# Service network
auto eth0.255
iface eth0.255 inet manual

auto br-svc
iface br-svc inet manual
        bridge_ports    eth0.255
        bridge_stp      off
        bridge_fd       0
        bridge_maxwait  0

5.2 Summary of the topology

We now have the following configuration. Think of eth0, eth0.100 and eth0.255 as 3 different interfaces, connected to 3 different virtual switches (br-lan, br-rep and br-svc, respectively).

                 -----------+--------------
                            |
                          br-lan 
                            |         host X
                  +---------+---------+
                  |        eth0       |
                  |                   |
                  |eth0.255   eth0.100|
                  +--+-----------+----+
                     |           |
                   br-svc      br-rep
                     |           |
         VMs --------+           +------> to other hosts

5.3 Activate network configuration

At this point you can activate your new network interfaces:

# ifup br-rep
# ifup br-svc

Check that the bridge interfaces have been created:

# brctl show
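
You should see output along these lines (the bridge ids will differ on your machine):

bridge name     bridge id               STP enabled     interfaces
br-lan          8000.xxxxxxxxxxxx       no              eth0
br-rep          8000.xxxxxxxxxxxx       no              eth0.100
br-svc          8000.xxxxxxxxxxxx       no              eth0.255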

(Note: on a production machine it's a good idea to reboot after making major changes to /etc/network/interfaces to ensure they are picked up correctly at system startup)

Verify that your colleagues have finished their configuration, and test that you can ping each other:
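
For example, where Y is the number of another machine in your cluster:

# ping -c 3 10.10.0.Y        # your colleague's address on the management network
# ping -c 3 10.10.100.Y      # the same host on the replication network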

If you have problems, you may want to test that you can resolve the following hostnames using the dig command:

dig +short host1.ws.nsrc.org
dig +short host2.ws.nsrc.org
..
dig +short gnt1.ws.nsrc.org
dig +short gnt2.ws.nsrc.org
..

5.4 Synchronize the clock

It's important that the nodes have synchronized time, so install the NTP daemon on every node:

# apt-get install ntp
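
Once the daemon is running you can check that it has found some time servers (it may take a few minutes before a peer is selected, marked with a '*'):

# ntpq -p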

6 Install the Ganeti software

Now install the software from the right package repository. How to do this depends on whether your machine is running Debian or Ubuntu.

6.1 Debian

On Debian, the available version of Ganeti is too old, but fortunately the current version is available in a backports repository [4].

As root, create a file /etc/apt/sources.list.d/wheezy-backports.list containing this one line:

deb http://cdn.debian.net/debian/ wheezy-backports main

Then refresh the index of available packages:

# apt-get update

Now, install the Ganeti software package. Note that the backports packages are not used unless you ask for them explicitly.

# apt-get install ganeti/wheezy-backports

This will install the current released version of Ganeti on your system, but any dependencies it pulls in will be the stable versions.

The ganeti-htools package is installed as a dependency. This provides the instance allocator ("hail") which can automatically place VMs for you.
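
If you want to confirm which version was installed, you could check with, for example:

# gnt-cluster --version
# dpkg -l ganeti ganeti-htools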

6.2 Ubuntu

For server applications you are recommended to use a Long Term Support (LTS) version of Ubuntu. The current LTS versions are 12.04 and 14.04.

The version of Ganeti provided in Ubuntu 12.04 is very old; the version in Ubuntu 14.04 is newer (Ganeti 2.9.x), but it's still better to work with up-to-date code. Also, Ganeti 2.10 introduced a mechanism to make upgrades to later versions much easier.

Luckily, a newer version of Ganeti is available for Ubuntu 12.04 and 14.04, via a "Personal Package Archive" (PPA).

https://launchpad.net/~pkg-ganeti-devel/+archive/lts

To add the necessary information to our list of package sources (/etc/apt/sources.list), run the following commands:

# apt-get install python-software-properties
# add-apt-repository ppa:pkg-ganeti-devel/lts

The second command will prompt you:

You are about to add the following PPA to your system:
 This PPA contains stable versions of Ganeti backported to Ubuntu LTS. Currently
 it covers 12.04 LTS (Precise) and 14.04 LTS (Trusty).
 More info: https://launchpad.net/~pkg-ganeti-devel/+archive/lts
Press [ENTER] to continue or ctrl-c to cancel adding it

Just press [ENTER]

The package archive will now be available. We still need to update the list of available packages:

# apt-get update

Now, install the Ganeti software package:

# apt-get install ganeti

This will install the current released version of Ganeti on your system.
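
If you want to confirm that the package really came from the PPA rather than the standard Ubuntu archive, apt-cache policy shows the installed and candidate versions together with their sources:

# apt-cache policy ganeti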

7 Setup DRBD

We'll now set up DRBD (Distributed Replicated Block Device), which will make it possible for VMs to have redundant storage across two physical machines.

DRBD was already installed when we installed Ganeti, but we still need to change the configuration:

# echo "options drbd minor_count=128 usermode_helper=/bin/true" >/etc/modprobe.d/drbd.conf
# echo "drbd" >>/etc/modules
# rmmod drbd      # ignore error if the module isn't already loaded
# modprobe drbd

The entry in /etc/modules ensures that drbd is loaded at boot time.
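
You can confirm that the module is loaded, and see which DRBD version you have, with:

# lsmod | grep drbd
# cat /proc/drbd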

8 Create a root password

Ganeti will need to log in as root to the other nodes in the cluster so it can set up the configuration files there. After the first login, SSH keys are used (so no password is needed), but for the first connection we need to set a root password.

For Ubuntu servers only: you need to set a root password. (For Debian servers, this will have already been done at installation time.)

Note: You only need to do this on the slave nodes in each cluster of servers.

# passwd root
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

Use the in-class password!

Finally, create a directory for SSH keys to be stored for the root user:

# mkdir /root/.ssh
# chmod 700 /root/.ssh

9 Initialize the cluster - MASTER NODE ONLY

We are now ready to run the commands that will create the Ganeti cluster. Do this only on the MASTER node of the cluster.

# gnt-cluster init --master-netdev=br-lan --enabled-hypervisors=kvm \
  -N link=br-svc -s 10.10.100.X --vg-name ganeti gntN.ws.nsrc.org

# gnt-cluster modify -H kvm:kernel_path=,initrd_path=,vnc_bind_address=0.0.0.0

where X is the number of your host (like host1, host2 etc), and N is the number of your cluster (gnt1, gnt2 etc)

Explanation of the above parameters:

  --master-netdev=br-lan     the interface on which the cluster (master) IP address will be configured
  --enabled-hypervisors=kvm  use KVM as the hypervisor for instances
  -N link=br-svc             the default bridge ("link") to which instance network interfaces will be attached
  -s 10.10.100.X             the secondary IP address of this node, used for replication (DRBD) traffic
  --vg-name ganeti           the LVM volume group in which instance disks will be created
  gntN.ws.nsrc.org           the name of the cluster, which must resolve to the cluster IP address

If everything goes well, the command gnt-cluster init will take 5-6 seconds to complete. It will not output anything unless a problem occurred.

The second command sets some hypervisor default parameters (-H): the empty kernel_path and initrd_path mean that instances boot from their own disk (using their own bootloader and kernel) rather than a kernel supplied by the host, and vnc_bind_address=0.0.0.0 makes each instance's VNC console listen on all of the host's addresses.

These will be used by all instances that don't explicitly override them.

Observe that there is an interface br-lan:0 now configured:

# ifconfig br-lan:0

The IP address should be that which the hostname gntN.ws.nsrc.org resolves to.

During the cluster creation, the node you ran the command on (the master node) was automatically added to the cluster. So we don't need to do that and can proceed directly to adding the other nodes in the cluster.
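
As a quick check, you can ask Ganeti which node it currently considers to be the master; it should print this node's name:

# gnt-cluster getmaster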

9.1 Adding nodes to the cluster - MASTER NODE ONLY

So let's run the command to add the other nodes. Note the use of the -s option to indicate which IP address will be used for disk replication on the node you are adding.

Run this command only on the MASTER node of the cluster.

# gnt-node add -s 10.10.100.Y hostY.ws.nsrc.org

You will be warned that the command will replace the SSH keys on the destination machine (the node you are adding) with new ones. This is normal.

-- WARNING --
Performing this operation is going to replace the ssh daemon keypair
on the target machine (hostY) with the ones of the current one
and grant full intra-cluster ssh root access to/from it

When asked if you want to continue connecting, say yes:

The authenticity of host 'hostY (10.10.0.Y)' can't be established.
ECDSA key fingerprint is a1:af:e8:20:ad:77:6f:96:4a:19:56:41:68:40:2f:06.
Are you sure you want to continue connecting (yes/no)? yes

When prompted for the root password for hostY, enter it:

Warning: Permanently added 'hostY' (ECDSA) to the list of known hosts.
root@hostY's password:

You may see the following informational message; you can ignore it:

Restarting OpenBSD Secure Shell server: sshd.
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service ssh restart

Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the stop(8) and then start(8) utilities,
e.g. stop ssh ; start ssh. The restart(8) utility is also available.
ssh stop/waiting
ssh start/running, process 2921

The last message you should see is this:

Tue Jan 14 01:07:40 2014  - INFO: Node will be a master candidate

This means that the machine you have just added to the cluster (hostY) can take over the role of configuration master for the cluster, should the master (hostX) crash or be unavailable.

Repeat to add the third and/or fourth nodes of your cluster, again always running the commands on the master node.
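
After each node has been added, you can list the cluster membership to confirm that it appears (we will look at this command again in the verification step below):

# gnt-node list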

9.2 Verify the configuration of your cluster

Again only on the MASTER node of the cluster:

# gnt-cluster verify

This will tell you if there are any errors in your configuration. It is possible you will see errors about "orphan volumes":

Thu Feb  6 05:02:47 2014 * Verifying orphan volumes
Thu Feb  6 05:02:47 2014   - ERROR: node hostX.ws.nsrc.org: volume ganeti/swap is unknown
Thu Feb  6 05:02:47 2014   - ERROR: node hostX.ws.nsrc.org: volume ganeti/var is unknown
Thu Feb  6 05:02:47 2014   - ERROR: node hostX.ws.nsrc.org: volume ganeti/root is unknown

These are logical volumes which were already created in the volume group but which Ganeti does not know about or manage. You can avoid this error by telling Ganeti to ignore those logical volumes:

# gnt-cluster modify --reserved-lvs=ganeti/root,ganeti/swap,ganeti/var
# gnt-cluster verify

If you still have any errors, please talk to the instructors.

To see detailed information on how your cluster is configured, try these commands:

# gnt-cluster info | more

Look at the output.

# gnt-node list
# gnt-node list-storage

You are done with the basic installation!

10 Securing the VNC consoles

It would be a good idea to make sure that the VNC consoles for the VMs are protected by a password.

To do this, we can create a cluster-wide password for every VM console.

This can later be overridden (changed) for each instance (VM).

To create the cluster-wide password, run this command on the master:

# echo 'xyzzy' >/etc/ganeti/vnc-cluster-password
# chmod 600 /etc/ganeti/vnc-cluster-password
# gnt-cluster modify -H kvm:vnc_password_file=/etc/ganeti/vnc-cluster-password

You will probably see an error message:

Failure: command execution error:
Hypervisor parameter validation failed on node hostY.ws.nsrc.org: Parameter 'vnc_password_file' fails validation: not found or not a file (current value: '/etc/ganeti/vnc-cluster-password')

Hmm, we just added the file - but wait! It's telling us that the file is missing from the slave nodes.

That's because we only created /etc/ganeti/vnc-cluster-password on the master node. It needs to be on every node (host) since any one of them could become a cluster master in the future.

There's a great command for this in ganeti: gnt-cluster copyfile

gnt-cluster copyfile will take a file as a parameter, and will take care of copying it to every node in the cluster.

In this case, we want our file /etc/ganeti/vnc-cluster-password to be copied.

To do this (on the master host - you will get a complaint if you try to run this on the other nodes):

# gnt-cluster copyfile /etc/ganeti/vnc-cluster-password

You can now re-run the command from earlier:

# gnt-cluster modify -H kvm:vnc_password_file=/etc/ganeti/vnc-cluster-password
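
If you want to confirm that the parameter is now set cluster-wide, you can look for it in the cluster information (the exact formatting of the output may vary between Ganeti versions):

# gnt-cluster info | grep vnc_password_file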

That's it! Next up, we'll create some instances (VMs) and test migration.

11 Optional: Burn-in

If you have spare time, you can run a "burn-in". This is a comprehensive self-test which will check your cluster's ability to create, migrate and destroy virtual machines. It takes about half an hour, and reports its progress as it runs.

The name of the VM to create (here "testvm") should be unique. If you have any existing VM with this name, it will be destroyed. It also needs to resolve, so on each cluster node create an /etc/hosts entry like this:

192.0.2.1       testvm
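
One way to create the entry is to append it from the command line, on each node in turn (adjust the IP address if your instructor gives you a different one):

# echo "192.0.2.1       testvm" >> /etc/hosts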

Then run this on the cluster master node:

# /usr/lib/ganeti/tools/burnin -o debootstrap+default \
   -H kvm:kernel_path=/vmlinuz,initrd_path=/initrd.img \
   --disk-size 1024m --no-name-check --no-ip-check testvm

  1. Note that VLAN 1 can have a special meaning. On many switches, VLAN 1 is the "default" VLAN, and cannot be removed. Some switches only allow management using VLAN 1. For security reasons, it's good practice to disable VLAN 1 and use other VLAN numbers. In our workshop, we'll keep it to make things simpler in our labs.

  2. This isn't a typical network setup, but it keeps things simpler here so we don't have to change the network configuration for our management network.

  3. We won't be attaching (connecting) any virtual machines to br-rep, so the bridge interface is not strictly necessary (we could have allocated the IP directly to eth0.100)

  4. Backports are newer versions of third-party software than those originally packaged for your release of the operating system. These newer versions, packaged for a newer release of Debian (or Ubuntu), have been made available (backported) for the release we are using.