# 1 Objectives

In this exercise, you will:

• Configure a separate replication network for DRBD traffic
• Configure a separate service network for VMs to communicate to the outside world

whilst keeping the existing 10.10.0.X network for management traffic (i.e. administering the host machines)

By the end, your host will have three interfaces:

```
                management 10.10.0.X
   ------------+------------------------
               |
            br-lan
               |
     +---------+---------+
     |        eth0       |  host X
     |                   |
     | eth0.255  eth0.100|
     +--+-----------+----+
        |           |
      br-svc        |
        |           |
       VMs          +------> to other hosts
   10.10.255.X               10.10.100.X
```

These could be three physical interfaces (e.g. eth0, eth1 and eth2), but in this exercise we will use VLANs. You can think of eth0, eth0.100 and eth0.255 as three different interfaces.

# 2 Become root

All of the actions in this exercise are done as "root", so if you are not root already, type:

```
$ sudo -s
#
```

# 3 VLAN tools

Ensure you have the vlan management tools installed:

```
# apt-get install vlan
```

# 4 Replication network

Let's start with the Replication network. To do this, add the following to the end of /etc/network/interfaces, below the br-lan section.

```
# Replication network
auto eth0.100
iface eth0.100 inet static
address 10.10.100.X
netmask 255.255.255.0
```

Remember to replace X with the number of your host.

This creates a new "sub-interface" called eth0.100. This dotted naming convention is quite common: it means that packets on this sub-interface enter and leave physical port eth0 with VLAN tag 100 added.

Now activate the interface:

```
# ifup eth0.100
```
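Once the interface is up, you can confirm the VLAN tag from the host with a couple of read-only checks (the exact output format varies between kernel and iproute2 versions):

```shell
# Show the sub-interface with protocol details;
# look for "vlan protocol 802.1Q id 100" in the output
ip -d link show eth0.100

# The 8021q kernel module also exposes per-interface details here
cat /proc/net/vlan/eth0.100
```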

Check that you can ping all the other hosts in your cluster on their 10.10.100.X addresses.
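For example, assuming another host in your cluster has the replication address 10.10.100.2 (substitute your neighbours' real addresses):

```shell
# Send three echo requests across the replication VLAN
ping -c 3 10.10.100.2
```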

Note that we have not created a bridge. This network will just be used for the hosts to send DRBD traffic to each other. The individual VMs do not need to have access to this network.

## 4.1 Configuring Ganeti: MASTER NODE ONLY

Now we have to change the cluster configuration so that Ganeti uses the secondary network for DRBD traffic.

Like all configuration changes, this is done on the MASTER node. Only one person should do this section.

Don't do this until all the hosts in your cluster can ping each other on their 10.10.100.X addresses.

First, shut down all running DRBD instances. List your instances and their disk templates:

```
# gnt-instance list -o name,pnode,snodes,status,disk_template
```

For any instances which have a secondary node and have status 'running', issue the command to shut them down:

```
# gnt-instance shutdown <NAME>
```

Now add a secondary IP address (-s) to the master node itself:

```
# gnt-node modify -s 10.10.100.X --force hostX.ws.nsrc.org
```

and repeat for the other nodes in the cluster:

```
# gnt-node modify -s 10.10.100.Y --force hostY.ws.nsrc.org
```

Check that all the nodes in the cluster have a secondary IP:

```
# gnt-node list -o name,sip
```

Once this has been done, you can restart your DRBD instances.
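gnt-instance startup is the counterpart of the shutdown command used above:

```shell
# Restart an instance that was shut down earlier
gnt-instance startup <NAME>
```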

# 5 Service network

Now add the Service network to every host in your cluster. Same as before, go to the end of /etc/network/interfaces, and add the following lines:

```
# Service network
auto eth0.255
iface eth0.255 inet manual

auto br-svc
iface br-svc inet manual
bridge_ports    eth0.255
bridge_stp      off
bridge_fd       0
bridge_maxwait  0
```

This time we have made a bridge so that the VMs can attach to it, but notice that we have NOT configured an IP address for the host on br-svc. The VMs will send their ethernet frames through the bridge, but there is no need for the physical host itself to have an address on this network. Ensuring the host OS is not visible improves security.

Activate the interface:

```
# ifup br-svc
```

Since the host doesn't have an address on this network, we can't test using ping. However you can check that the bridge exists and has the right interface in it:

```
# brctl show
bridge name     bridge id           STP enabled     interfaces
br-svc          8000.d4ae52c12e7e   no              eth0.255
```

## 5.1 Attaching VMs to the service network

Each of you can take one of your VMs and move its network interface to the service network, by logging into the MASTER node and using the gnt-instance modify command:

```
# gnt-instance modify --net 0:modify,link=br-svc wordpressX
```

You'll need to reboot the instance after making the change:

```
# gnt-instance reboot wordpressX
```

When it restarts, the virtual eth0 interface on the instance should be connected to br-svc instead of br-lan.

Go into the VM's VNC console to find out what IP address it has picked up on the new network. You should find that it now has a 10.10.255.X address.

Repeat for other VMs.
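If you are curious to see the VLAN tagging itself, and tcpdump is installed on the host, you can watch the tagged frames on the physical port (press Ctrl-C to stop):

```shell
# Print link-level headers for frames carrying VLAN tag 255 on eth0
tcpdump -e -n -i eth0 vlan 255
```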

You can check the status using gnt-instance list:

```
# gnt-instance list -o name,nic.bridge/0
```

## 5.2 Changing the default interface: MASTER NODE ONLY

You can now configure the cluster so that all subsequent VMs you create will be connected to br-svc instead of br-lan.

```
# gnt-cluster modify -N link=br-svc
```
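You can check that the new default has been recorded; the exact layout of the gnt-cluster info output varies between Ganeti versions, but the default NIC parameters should now show link: br-svc:

```shell
# Show the cluster-wide default NIC parameters
gnt-cluster info | grep -A 3 "Default nic parameters"
```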

# 6 Review

After these changes, your /etc/network/interfaces should look something like this (IPs should be the ones for your PC, of course):

```
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual

auto br-lan
iface br-lan inet static
address 10.10.0.X
netmask 255.255.255.0
gateway 10.10.0.254
dns-nameservers 10.10.0.241
bridge_ports    eth0
bridge_stp      off
bridge_fd       0
bridge_maxwait  0

# Replication network
auto eth0.100
iface eth0.100 inet static
address 10.10.100.X
netmask 255.255.255.0

# Service network
auto eth0.255
iface eth0.255 inet manual

auto br-svc
iface br-svc inet manual
bridge_ports    eth0.255
bridge_stp      off
bridge_fd       0
bridge_maxwait  0
```

(Note: on a production machine it's a good idea to reboot after making major changes to /etc/network/interfaces to check they are picked up correctly at system startup)

# 7 Optional extra exercise

Create an instance which has two virtual NICs: eth0 connected to br-lan and eth1 connected to br-svc. Use the information in the Ganeti cheat sheet, the presentations or man gnt-instance to work out how.
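If you get stuck, one possible shape of the command is sketched below. The disk template, OS variant, disk size, node names and instance name are placeholders; adapt them to your cluster:

```shell
# Create an instance with two NICs:
# NIC 0 (eth0 in the guest) on br-lan, NIC 1 (eth1) on br-svc
gnt-instance add -t drbd -o debootstrap+default -s 10G \
    --net 0:link=br-lan --net 1:link=br-svc \
    -n hostX.ws.nsrc.org:hostY.ws.nsrc.org twonicX
```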