# 1 Objectives

In this exercise, you will start up a VM whose disk image is on shared storage, and then migrate it to another group's host while it is still running.

VM disk images have already been prepared, one for each of you, and are on a central class storage server.

Assuming you are working in pairs on the same host:

• group 1 (host1) has disk image 1 and 2
• group 2 (host2) has disk image 3 and 4
• ...
• group N (hostN) has disk image 2N-1 and 2N

The nbd port number is then 20000 + the disk image number.
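To make the numbering concrete, here is a small shell sketch of the arithmetic above (the group number used is illustrative):

```shell
#!/bin/sh
# Arithmetic from the text: group N has disk images 2N-1 and 2N,
# and each image's nbd port is 20000 + its image number.
N=3                                   # illustrative group number
IMG1=$((2 * N - 1))                   # first disk image for the group
IMG2=$((2 * N))                       # second disk image for the group
echo "group $N: images $IMG1 and $IMG2"
echo "nbd ports: $((20000 + IMG1)) and $((20000 + IMG2))"
```

So group 3 has images 5 and 6, reachable on nbd ports 20005 and 20006.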

Please do try to use the correct image and not connect to someone else's: if two VMs use the same image at once, the image will be corrupted. However, in case of an accident it is very quick for the instructors to copy a clean disk image for you.

# 2 Create VM

Both people in a pair have their own disk image, so they can work on this part independently.

ssh into your group's host server - you should be logged in as the "nsrc" user, not as root.

The disk image already exists, but you will still have to create a VM to attach the disk to.

This time, you're going to do it using an XML file. Check that you are in the nsrc user's home directory, which is /home/nsrc:

```
$ pwd
/home/nsrc
```

Copy and paste the following XML into a file called "serverX.xml", where X is your disk image number:

```xml
<domain type='kvm'>
  <name>serverX</name>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc-1.1'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='nbd'>
        <host name='s1.ws.nsrc.org' port='200XX'/>
      </source>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:FF:00:XX'/>
      <source bridge='br-lan'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </memballoon>
  </devices>
</domain>
```

Change three things:

• In <name>serverX</name>, change X to your disk image number
• In port='200XX', change XX to your disk image number (01 upwards)
• In <mac address='52:54:00:FF:00:XX'/>, change XX to your disk image number as two decimal digits. (So disk image 40 becomes 52:54:00:FF:00:40)
• Save the file

Now define your virtual machine using the file you have created:

```
$ virsh define serverX.xml
```
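The three substitutions can also be scripted with sed. This is just a sketch, not part of the exercise: it assumes the unedited XML above was saved as template.xml, and the image number used is illustrative.

```shell
#!/bin/sh
# Hedged sketch: substitute the three per-student fields in the XML template.
# Assumes the unedited XML was saved as template.xml (an assumption, not
# something the exercise requires).
X=7                                  # your disk image number (illustrative)
XX=$(printf '%02d' "$X")             # two decimal digits: 7 -> 07
sed -e "s/serverX/server$X/" \
    -e "s/port='200XX'/port='200$XX'/" \
    -e "s/52:54:00:FF:00:XX/52:54:00:FF:00:$XX/" \
    template.xml > "server$X.xml"
```

For image 7 this produces server7.xml containing `<name>server7</name>`, `port='20007'`, and `52:54:00:FF:00:07`.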

If this is successful, you should see that it is present and be able to start it.

```
$ virsh list --all
$ virsh start --console serverX    # replace X with your disk image number
```

After around 10-15 seconds you should get a login prompt at the console. The username and password will be given to you in class.

In this console session, type ifconfig to find out what IP address has been allocated to it:

```
$ ifconfig eth0
```

Write it down, then disconnect from the serial console (press ctrl and right-hand square bracket).

Now you are back on your host server. Use these commands to show that your KVM process is running:

```
$ virsh list
$ ps auxwww | egrep '(kvm|qemu-system)'
```

Look for -name serverX in the kvm command line, so you know it's yours.

# 3 ssh into VM

Now using another ssh connection, e.g. another putty window, ssh directly to this running VM, using the IP address you just noted. This uses the same username/password that you were given before, as it's just a different way of connecting to the same VM.

All the VMs start off identical, so make some changes to yours so that it is distinctive. For example:

• Type sudo -s to get a root shell
• Edit /etc/hostname to change the hostname to anything of your choice
• Edit /etc/hosts with the new hostname
• Create the file /etc/motd.tail containing a welcome message
• Type service hostname start to refresh the hostname

If you then disconnect and reconnect your ssh client, you should see your welcome message and the new hostname in the prompt.

Now you want to get the VM to do some visible work while you migrate it. Type the following command line:

```
$ while true; do date; sleep 0.5; done
```

This will make it display the current date and time twice per second, so you should see this scrolling up the screen. Leave it running.

# 4 Suspend and resume

Just to prove that this is a running VM, now go back to your ssh session to the host server, while keeping the VM session window open so you can see the scrolling text.

On the host server, type the following commands:

```
$ virsh suspend serverX
$ virsh resume serverX
```

The 'suspend' should stop the messages from the VM scrolling up the screen, and 'resume' should restart them. Leave the VM running and the messages scrolling up.

# 5 Migrate

Now to migrate the virtual machine to another host. You will migrate to the next server numerically: those using host1 will migrate their servers to host2; those on host2 will migrate their servers to host3; and so on. The group on the last host will migrate their servers to host1.
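The wrap-around rule above is just modular arithmetic. This sketch assumes hosts are numbered 1 through TOTAL; both numbers used are illustrative:

```shell
#!/bin/sh
# Migration target with wrap-around: group N migrates to host N+1,
# and the last group migrates back to host1.
TOTAL=6                        # assumed number of hosts in the class
N=6                            # illustrative group number
TARGET=$(( N % TOTAL + 1 ))    # 1->2, 2->3, ..., 6->1
echo "host$N migrates its VMs to host$TARGET"
```

With six hosts, group 6 wraps around and migrates to host1.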

This can be done either using the virsh command line, or the virt-manager GUI. Try the virsh command line first.

## 5.1 In the virsh command line

In the host server ssh window:

```
$ virsh migrate --live --verbose serverX qemu+ssh://hostN.ws.nsrc.org/system
```

where X is your disk image number and N is your group number plus 1. You'll be prompted for the ssh password to the next machine.

That's it. Your VM should still be running, but will be on a different host! To check, use:

```
$ virsh list
$ ps auxwww | egrep '(kvm|qemu-system)'
```

on your host, and on the next group's host. (Ask them to do it for you, or carefully ssh into their host and do it.) From there, you can get them to migrate it back again:

```
$ virsh migrate --live --verbose serverX qemu+ssh://hostM.ws.nsrc.org/system
```

where M is your group number. If you try the migration without --live, you may see a short pause in the output of the running VM.

## 5.2 Using virt-manager GUI

Start the virt-manager GUI. You should see a connection to your own hypervisor: localhost (QEMU)

Now you are going to add a connection to another hypervisor.

• File > Add Connection
• Hypervisor: QEMU/KVM
• [X] Connect to remote host
• Method: SSH
• Hostname: hostN.ws.nsrc.org (where N is the next group)
• Connect

The first time you connect, you may be questioned about the authenticity of the other host's SSH key. Type "yes" in full, and press OK.

You will then be prompted for the ssh password to the other server. Type the same password as you would normally use to login to that machine. (In a real environment you would streamline this using ssh keys).

You should now see the other hypervisor listed in the window, together with any VMs running there. To perform the migration:

• Double-click on the VM you want to migrate to get a console.
• Select Virtual Machine > Migrate...
• The drop down "New host" chooses the host you want to migrate to
• ("Migrate offline" means non-live, you can ignore this)
• Click Migrate

The console will disconnect because the machine is no longer running where it was before.

Close the console window, look at the list of VMs and you should see that your VM is now running on the other host, so you can double-click to see it there. If this is a remote host you will need to enter the ssh password again.

You should be able to migrate the VM back again in exactly the same way, from the same GUI instance. (In this way, a single machine running virt-manager could control a whole network of VM servers)

# 6 Optional exercises

Find two other groups who are happy to host your VM. Migrate your VM from yours to the first, then from the first to the second, then back to your host.

You can do this easily in the virt-manager GUI.

Alternatively, it's also straightforward in the virsh command line. You can issue commands to other hosts using the -c flag. For example, this is how you run virsh list on a remote hypervisor:

```
virsh -c qemu+ssh://<hostname>/system list
```

A migration can also be driven from a third host, one that is neither the source nor the destination. To migrate a VM from <oldhost> to <newhost> this way:

```
virsh -c qemu+ssh://<oldhost>/system migrate --live --verbose \
    <vmname> qemu+ssh://<newhost>/system
```

# 7 Notes on persistence

libvirt's behaviour can appear a little strange.

When a virtual machine is defined in an XML file on disk, it is called "persistent". However when it is migrated to another machine, for historical reasons the default is for it to be "transient" there (i.e. running but not defined on disk).

This means that:

• When you migrate from machine A to machine B, it remains defined on machine A but in a shutdown state. There is a risk that you might start it again on A while it is also running on B.
• When you then migrate from machine B to machine C, you will see it vanish from machine B (but it remains defined on machine A)

When using virsh there are two flags which can change this behaviour:

```
virsh migrate --undefine-source --persist ...
```

You can also patch /usr/share/virt-manager/virtManager/domain.py to get this behaviour in virt-manager.

For full details, see the libvirt migration documentation.