1 Objectives

Each of you will install the Ganeti virtualization cluster management software on your Linux server.

You will build clusters of three or four hosts, depending on how the instructor wishes to organise the classroom. For example, a lab with 15 hosts might be organised like this:

cluster IP          master node          additional nodes
gnt1.ws.nsrc.org    host1.ws.nsrc.org    host2.ws.nsrc.org, host3.ws.nsrc.org
gnt2.ws.nsrc.org    host4.ws.nsrc.org    host5.ws.nsrc.org, host6.ws.nsrc.org
gnt3.ws.nsrc.org    host7.ws.nsrc.org    host8.ws.nsrc.org, host9.ws.nsrc.org
gnt4.ws.nsrc.org    host10.ws.nsrc.org   host11.ws.nsrc.org, host12.ws.nsrc.org
gnt5.ws.nsrc.org    host13.ws.nsrc.org   host14.ws.nsrc.org, host15.ws.nsrc.org

Note that Ganeti requires you to use fully-qualified domain names, and these must resolve to the correct IP addresses (either in the DNS or in the /etc/hosts file on every node).

Your host's static IP address is 10.10.0.X, where X is your host number (e.g. host 4 is 10.10.0.4)
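
If the classroom DNS is not available, each node in a cluster needs /etc/hosts entries for every other node and for the cluster name itself. For example, the nodes of gnt1 might carry entries like these (the cluster address 10.10.0.240 is only an illustration; use whatever address your instructor assigns):

10.10.0.240   gnt1.ws.nsrc.org   gnt1
10.10.0.1     host1.ws.nsrc.org  host1
10.10.0.2     host2.ws.nsrc.org  host2
10.10.0.3     host3.ws.nsrc.org  host3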

2 Become root

All of the actions in this exercise are done as "root", so if you are not root already, type:

$ sudo -s
#

3 Configure the Hostname

Look at the contents of the file /etc/hostname and check that it contains the fully-qualified domain name, i.e.

hostX.ws.nsrc.org

(where X is your machine number). If not, then edit it so that it looks like that, then get the system to re-read this file:

# hostname -F /etc/hostname

Also check /etc/hosts to ensure that you have both the fully-qualified name and the short name there, pointing to your static IP address:

127.0.0.1   localhost
10.10.0.X   hostX.ws.nsrc.org hostX
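
You can double-check both names from the command line; the output should look like this, with your own number in place of X:

# hostname -f
hostX.ws.nsrc.org
# getent hosts hostX
10.10.0.X       hostX.ws.nsrc.org hostX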

4 Logical Volume Manager

You don't need to do this for the lab, but on a production Ganeti server it is recommended to configure LVM not to scan DRBD devices for physical volumes. The documentation suggests editing /etc/lvm/lvm.conf and adding a reject expression to the filter variable, like this:

filter = [ "r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]

You can tighten this further by allowing only devices which match the expected pattern. If you know that all your attached physical disks start with /dev/sd then you can accept only those and reject everything else:

filter = [ "a|^/dev/sd|", "r|.*|" ]

5 Configure the Network

We need a software bridge on our machine, so that VMs can connect to it and have shared access to the physical ethernet port on the host.

You may already have built your machine with a bridge interface and static IP address; if so, you can skip this section.

If not, then check that the bridge management utilities are installed:

# apt-get install bridge-utils

Then edit the file /etc/network/interfaces so that it looks like this:

# The loopback network interface
auto lo
iface lo inet loopback

# Management interface
auto eth0
iface eth0 inet manual

auto br-lan
iface br-lan inet static
        address         10.10.0.X
        netmask         255.255.255.0
        gateway         10.10.0.254
        dns-nameservers 10.10.0.241
        bridge_ports    eth0
        bridge_stp      off
        bridge_fd       0
        bridge_maxwait  0

If you changed this file, the safest thing to do now is to reboot so that the network interfaces are brought up according to these settings.

Then check that the bridge interface has been created:

# brctl show
# ifconfig br-lan

Verify that your colleagues have finished their configuration, and test that you can ping their 10.10.0.X addresses.
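
For example, to check that host 5 answers (substitute your colleagues' actual host numbers):

# ping -c 3 10.10.0.5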

If you have problems, ask the instructors for help.

As a final check of networking and DNS, test that you can resolve the following hostnames using the dig command:

dig +short host1.ws.nsrc.org
dig +short host2.ws.nsrc.org
..
dig +short gnt1.ws.nsrc.org
dig +short gnt2.ws.nsrc.org
..
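
If you want to check all the names in one go, a small shell loop will do it (this assumes the 15-host layout shown in section 1; adjust the ranges to your classroom):

# for i in $(seq 1 15); do dig +short host$i.ws.nsrc.org; done
# for i in $(seq 1 5); do dig +short gnt$i.ws.nsrc.org; done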

5.1 Synchronize the clock

It's important that the nodes have synchronized time, so install the NTP daemon on every node:

# apt-get install ntp ntpdate
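
Once ntpd has been running for a few minutes, you can check that it has found servers to synchronize with:

# ntpq -p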

6 Install the Ganeti software

Now install the software from the right package repository. How to do this depends on whether your machine is running Debian or Ubuntu.

6.1 Debian

On Debian, the available version of Ganeti is too old, but fortunately the current version is available in a backports repository [1].

As root, create a file /etc/apt/sources.list.d/wheezy-backports.list containing this one line:

deb http://cdn.debian.net/debian/ wheezy-backports main

Then refresh the index of available packages:

# apt-get update

Now, install the Ganeti software package. Note that the backports packages are not used unless you ask for them explicitly.

# apt-get install ganeti/wheezy-backports

This will install the current released version of Ganeti on your system; but any dependencies it pulls in will be the stable versions.

The ganeti-htools package is installed as a dependency. This provides the instance allocator ("hail") which can automatically place VMs for you.

6.2 Ubuntu

For server applications you are recommended to use a Long Term Support (LTS) version of Ubuntu. The current LTS versions are 12.04 and 14.04.

The version of Ganeti provided in Ubuntu 12.04 is very old; the version in Ubuntu 14.04 is newer (Ganeti 2.9.x), but it's still better to work with up-to-date code. Also, Ganeti 2.10 introduced a mechanism to make upgrades to later versions much easier.

Luckily, a newer version of Ganeti is available for Ubuntu 12.04 and 14.04, via a "Personal Package Archive" (PPA):

https://launchpad.net/~pkg-ganeti-devel/+archive/lts

To add the necessary information to our list of package sources (/etc/apt/sources.list), run the following commands:

# apt-get install python-software-properties
# add-apt-repository ppa:pkg-ganeti-devel/lts

The second command will prompt you:

You are about to add the following PPA to your system:
 This PPA contains stable versions of Ganeti backported to Ubuntu LTS. Currently
 it covers 12.04 LTS (Precise) and 14.04 LTS (Trusty).
 More info: https://launchpad.net/~pkg-ganeti-devel/+archive/lts
Press [ENTER] to continue or ctrl-c to cancel adding it

Just press [ENTER].

The package archive will now be available. We still need to update the list of available packages:

# apt-get update

Now, install the Ganeti software package:

# apt-get install ganeti

This will install the current released version of Ganeti on your system.

7 Setup DRBD

We'll now set up DRBD (Distributed Replicated Block Device), which will make it possible for VMs to have redundant storage across two physical machines.

DRBD was already installed when we installed Ganeti, but we still need to change the configuration:

# echo "options drbd minor_count=128 usermode_helper=/bin/true" >/etc/modprobe.d/drbd.conf
# echo "drbd" >>/etc/modules
# rmmod drbd      # ignore error if the module isn't already loaded
# modprobe drbd

The entry in /etc/modules ensures that drbd is loaded at boot time.
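
You can verify that the module is now loaded:

# lsmod | grep drbd
# cat /proc/drbd

The second command shows the DRBD version and, at this point, no configured devices.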

8 Create a root password [Ubuntu servers only]

Ganeti needs to log in as root on the other nodes in the cluster so that it can set up the configuration files there. After the first login SSH keys are used, so no password is needed; but for the very first connection a root password must exist.

For Ubuntu servers only: you need to set a root password. (On Debian servers this will have already been done at installation time.)

Note: You only need to do this on the slave nodes in each cluster, since it is the master that initiates the connections.

# passwd root
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

Use the in-class password!

Finally, create a directory for SSH keys to be stored for the root user:

# mkdir /root/.ssh
# chmod 700 /root/.ssh

9 Create script to create VM users

Ganeti's KVM "pool" security model runs each VM as a dedicated unprivileged user, so we need users matching the uid pool (120-130) that we will configure when initializing the cluster. Create a script file:

# vi ~/create_users.sh

10 Write script to create VM users

Put the following into the file. It creates users kvm-120 through kvm-130, each with no login shell and no home directory:

#!/bin/bash
# Create the unprivileged users for the Ganeti KVM uid pool (120-130)
for i in $(seq 120 130); do
    useradd -u $i -s /bin/false -M kvm-$i
done

11 Execute user creation script on local node

Make the script executable, then run it:

# chmod 700 create_users.sh && ./create_users.sh
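
You can confirm that the users now exist:

# getent passwd | grep '^kvm-'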

12 Execute user creation script on remote nodes (optional)

You can also feed the script to the other nodes in your cluster over SSH:

# ssh hostY "bash -s" < ./create_users.sh

(where Y is the number of another node in your cluster)
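
If your cluster has more than one other node, a small loop saves some typing (host2 and host3 here are only examples; substitute the members of your own cluster):

# for h in host2 host3; do ssh $h "bash -s" < ./create_users.sh; done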

13 Initialize the cluster - MASTER NODE ONLY

We are now ready to run the commands that will create the Ganeti cluster. Do this only on the MASTER node of the cluster.

# gnt-cluster init --master-netdev=br-lan --enabled-hypervisors=kvm \
  --uid-pool=120-130 --prealloc-wipe-disks=yes -H kvm:security_model=pool,use_chroot=true \
  -N link=br-lan --vg-name ganeti gnt1.ws.nsrc.org

# gnt-cluster modify -H kvm:kernel_path=,initrd_path=,vnc_bind_address=0.0.0.0

where Z is the number of your cluster (gnt1, gnt2, etc.). Replace gnt1.ws.nsrc.org in the command above with your own cluster name.

Explanation of the above parameters:

--master-netdev=br-lan       the network device on which the cluster (master) IP address
                             will be configured
--enabled-hypervisors=kvm    use KVM as the (only) hypervisor
--uid-pool=120-130           the pool of user IDs under which VMs will run (the kvm-NNN
                             users we created earlier)
--prealloc-wipe-disks=yes    wipe disks before they are allocated to a new instance
-H kvm:security_model=pool,use_chroot=true
                             hypervisor parameters: run each VM as a user taken from the
                             uid pool, inside a chroot
-N link=br-lan               the default network link that instance NICs attach to
--vg-name ganeti             the LVM volume group in which instance disks are created

If everything goes well, the command gnt-cluster init will take 5-6 seconds to complete. It will not output anything unless a problem occurred.

The second command sets some hypervisor default parameters (-H):

kernel_path=, initrd_path=   left empty, so each instance boots its own kernel with its
                             own bootloader, rather than a kernel supplied by the host
vnc_bind_address=0.0.0.0     make each instance's VNC console listen on all of the
                             host's addresses

These will be used by all instances that don't explicitly override them.

Observe that there is an interface br-lan:0 now configured:

# ifconfig br-lan:0

The IP address should be the one that the hostname gntZ.ws.nsrc.org resolves to.

During the cluster creation, the node you ran the command on (the master node) was automatically added to the cluster, so we don't need to add it again and can proceed directly to adding the other nodes.

13.1 Adding nodes to the cluster - MASTER NODE ONLY

So let's run the command to add the other nodes.

Run this command only on the MASTER node of the cluster.

# gnt-node add hostY.ws.nsrc.org

(where Y is the number of one of the other nodes in your cluster)

You will be warned that the command will replace the SSH keys on the destination machine (the node you are adding) with new ones. This is normal.

-- WARNING --
Performing this operation is going to replace the ssh daemon keypair
on the target machine (hostY) with the ones of the current one
and grant full intra-cluster ssh root access to/from it

When asked if you want to continue connection, say yes:

The authenticity of host 'hostY (10.10.0.Y)' can't be established.
ECDSA key fingerprint is a1:af:e8:20:ad:77:6f:96:4a:19:56:41:68:40:2f:06.
Are you sure you want to continue connecting (yes/no)? yes

When prompted for the root password for hostY, enter it:

Warning: Permanently added 'hostY' (ECDSA) to the list of known hosts.
root@hostY's password:

You may see the following informational message; you can ignore it:

Restarting OpenBSD Secure Shell server: sshd.
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service ssh restart

Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the stop(8) and then start(8) utilities,
e.g. stop ssh ; start ssh. The restart(8) utility is also available.
ssh stop/waiting
ssh start/running, process 2921

The last message you should see is this:

Tue Jan 14 01:07:40 2014  - INFO: Node will be a master candidate

This means that the machine you have just added to the cluster (hostY) can take over the role of configuration master for the cluster, should the master (hostX) crash or be unavailable.

Repeat to add the third and/or fourth nodes of your cluster, again always running the commands on the master node.

13.2 Verify the configuration of your cluster

Again only on the MASTER node of the cluster:

# gnt-cluster verify

This will tell you if there are any errors in your configuration. It is possible you will see errors about "orphan volumes":

Thu Feb  6 05:02:47 2014 * Verifying orphan volumes
Thu Feb  6 05:02:47 2014   - ERROR: node hostX.ws.nsrc.org: volume ganeti/swap is unknown
Thu Feb  6 05:02:47 2014   - ERROR: node hostX.ws.nsrc.org: volume ganeti/var is unknown
Thu Feb  6 05:02:47 2014   - ERROR: node hostX.ws.nsrc.org: volume ganeti/root is unknown

These are logical volumes that already existed in the volume group but which Ganeti does not know about or manage. You can avoid this error by telling Ganeti to ignore those logical volumes:

# gnt-cluster modify --reserved-lvs=ganeti/root,ganeti/swap,ganeti/var
# gnt-cluster verify

If you still have any errors, please talk to the instructors.

To see detailed information on how your cluster is configured, try these commands:

# gnt-cluster info | more

Look at the output.

# gnt-node list
# gnt-node list-storage

You are done with the basic installation!

14 Securing the VNC consoles

It would be a good idea to make sure that the VNC consoles for the VMs are protected by a password.

To do this, we can create a cluster-wide password for every VM console.

This can later be overridden (changed) for each instance (VM).

To create the cluster-wide password, run this command on the master:

# echo 'xyzzy' >/etc/ganeti/vnc-cluster-password
# chmod 600 /etc/ganeti/vnc-cluster-password
# gnt-cluster modify -H kvm:vnc_password_file=/etc/ganeti/vnc-cluster-password

You will probably see an error message:

Failure: command execution error:
Hypervisor parameter validation failed on node hostY.ws.nsrc.org: Parameter 'vnc_password_file' fails validation: not found or not a file (current value: '/etc/ganeti/vnc-cluster-password')

Hmm, we just added the file - but wait! It's telling us that the file is missing from the slave nodes.

That's because we only created /etc/ganeti/vnc-cluster-password on the master node. It needs to be on every node (host) since any one of them could become a cluster master in the future.

There's a handy command for this in Ganeti: gnt-cluster copyfile. It takes a file as a parameter and copies it to every node in the cluster.

In this case, we want our file /etc/ganeti/vnc-cluster-password to be copied.

To do this (on the master host - you will get a complaint if you try to run this on the other nodes):

# gnt-cluster copyfile /etc/ganeti/vnc-cluster-password

You can now re-run the command from earlier:

# gnt-cluster modify -H kvm:vnc_password_file=/etc/ganeti/vnc-cluster-password
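
You can confirm that the parameter is now set cluster-wide (gnt-cluster info lists all the hypervisor parameters):

# gnt-cluster info | grep vnc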

That's it! Next up, we'll create some instances (VMs) and test migration.

15 Optional: Burn-in

If you have spare time, you can run a "burn-in". This is a comprehensive self-test which will check your cluster's ability to create, migrate and destroy virtual machines. It takes about half an hour, and reports its progress as it runs.

The name of the VM to create (here "testvm") should be unique; if you have any existing VM with this name, it will be destroyed. The name also needs to resolve, so on each cluster node create an /etc/hosts entry like this:

192.0.2.1       testvm

Then run this on the cluster master node:

# /usr/lib/ganeti/tools/burnin -o debootstrap+default \
   -H kvm:kernel_path=/vmlinuz,initrd_path=/initrd.img \
   --disk-size 1024m --no-name-check --no-ip-check testvm

  [1] Backports are newer versions of software than those originally packaged for your version of the operating system. These newer versions, packaged for a newer release of Debian (or Ubuntu), have been made available ("backported") for the release we are using.