We are going to simulate a number of failure situations, and recover from them.

Try and replicate the scenarios on your hosts.

# 1 Initial setup

• Cluster with 3 or more Nodes
• Master is up (hostA)
• Slaves are up (hostB, hostC, etc.)
• DRBD instance "debianX" is running on one of the Slaves (let's say hostB) and is replicated to either the Master (hostA) or another Slave (hostC or other)

# 2 Scenario: Planned Node Maintenance

Let's imagine that we want to take down hostB for maintenance: more RAM, a disk replacement, etc.

You probably have many instances running on your cluster by now.

We can't simply do a migrate: it only switches primary and secondary around. This means that hostB will still be secondary for a number of instances.

Another issue is that you may have plain instances. If you shut down hostB, those instances will be shut down as well!

What do we do? Since we have a third node (hostC), we can use that to move/copy instances away from hostB:

• DRBD instances for which hostB is primary will need to migrate to their secondary

• Once this is done, and hostB is only secondary for the instances, we'll need to move the disks of all DRBD instances to another node. (example: if A is primary for vmX, we move secondary disks from B to C)

• Plain instances running on hostB will need to be moved to another node (A or C)

Luckily, we have a couple of commands to help us do our work!

## 2.1 gnt-node evacuate

Read the man page for gnt-node and look for the section about the evacuate subcommand.

gnt-node evacuate will move DRBD instances away from a node. You will need to run this command for primary instances (-p) and for secondary instances (-s).

• We have debianX running as a DRBD instance on hostA (primary) and hostB (secondary)

• We have debianY running as a plain instance on host B

What happens if we do:

# gnt-node evacuate -p hostB

Relocate instance(s) debianY from node(s) hostB?
y/[n]/?:

gnt-node evacuate has figured out that the plain debianY instance needs to be moved away. Answer y

Fri Sep 19 14:29:45 2014  - INFO: Evacuating instances from node 'hostB': debianY
Fri Sep 19 14:29:46 2014  - WARNING: Unable to evacuate instances debianY (Instances of type plain cannot be relocated)
Failure: command execution error:
Unable to evacuate instances debianY (Instances of type plain cannot be relocated)

Uh oh :(

Ok, we will need to move the instance manually :(

# gnt-instance move -n hostB debianY

Instance debianY will be moved. This requires a shutdown of the instance.
Continue?
y/[n]/?: y
Fri Sep 19 14:31:44 2014  - INFO: Shutting down instance debianY on source node hostB
Fri Sep 19 14:32:01 2014 disk/0 sent 450M, 77.2 MiB/s, 21%, ETA 21s
Fri Sep 19 14:32:37 2014 disk/0 finished receiving data
Fri Sep 19 14:32:37 2014 disk/0 finished sending data
Fri Sep 19 14:32:37 2014  - INFO: Removing the disks on the original node
Fri Sep 19 14:32:38 2014  - INFO: Starting instance debianY on node hostC

• Primary instances' disks will be moved to other nodes in the cluster.

In our case, we want to evacuate all of them.

Note: for the time being, one needs to explicitly tell the evacuate command to move away either primary (-p) or secondary (-s) instances - it won't work for both at the same time.
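
Putting the two halves together, a complete drain of hostB would therefore look like this (a sketch, run on the master, answering y at each prompt):

# gnt-node evacuate -p hostB
# gnt-node evacuate -s hostB

After both commands complete, hostB should no longer hold any primary or secondary DRBD disks.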

Since hostB was primary for our debianX instance, we tell evacuate to only evacuate primary instances (for the time being):

# gnt-node evacuate -p hostB

Relocate instance(s) debianX from node(s) hostB?
y/[n]/?:

# 3 Scenario: Loss of a Slave Node

## 3.1 Loss of network connectivity

### 3.1.1 Initial state

• Confirm that debianX (or whatever the name of the DRBD VM you are using is) is running on hostB (gnt-instance list)
# gnt-instance list -o name,pnode,snodes,status
Instance Primary_node         Secondary_Nodes      Status
debianX  hostB.ws.nsrc.org    hostA.ws.nsrc.org    running
• Shut down (halt) hostB (make sure you run this on hostB, the primary node for this instance!)
# halt -p
• The VM goes down as a result (confirm this using ping / console)
# gnt-instance list -o name,pnode,snodes,status

Instance Primary_node         Secondary_Nodes      Status
debianX  hostB.ws.nsrc.org    hostA.ws.nsrc.org    ERROR_nodedown
• Run gnt-cluster verify (will take a while), and look at the output.

• Run gnt-node list, and look at the output, too.

As you notice, things are quite slow. This is because Ganeti is trying to contact the ganeti-noded daemon on hostB, and it's timing out.

If this were a production environment, we'd have to examine hostB, and determine whether hostB was likely to come back online soon. If not, say, because of some hardware failure, we would decide to take the node "offline", so Ganeti would stop trying to talk to it.

Let's start by marking hostB as offline:

# gnt-node modify --offline=yes hostB.ws.nsrc.org

Modified node hostB.ws.nsrc.org
- master_candidate -> False
- offline -> True

It will take a little while, but from now on most commands will run faster, as Ganeti no longer tries to contact the offline node.

Try running gnt-instance list and gnt-node list again.

Also re-run gnt-cluster verify

### 3.1.2 Instance recovery

• We cannot live-migrate the instance (hostB is down), so we need to fail over

If you attempt to migrate, you will be told:

# gnt-instance migrate debianX

Failure: prerequisites not met for this operation:
error type: wrong_state, error details:
Can't migrate, please use failover: Node is marked offline
• Attempt failover
# gnt-instance failover debianX

Hopefully you will see messages ending with:

...
Sat Jan 18 15:58:11 2014 * activating the instance's disks on target node hostA.ws.nsrc.org
Sat Jan 18 15:58:11 2014  - WARNING: Could not prepare block device disk/0 on node hostB.ws.nsrc.org (is_primary=False, pass=1): Node is marked offline
Sat Jan 18 15:58:11 2014 * starting the instance on the target node hostA.ws.nsrc.org

If so, skip to the section "Confirm that the VM is now up on hostA"

If you see this message:

Sat Jan 18 20:57:55 2014 Failover instance debianX
Sat Jan 18 20:57:55 2014 * checking disk consistency between source and target
Failure: command execution error:
Disk 0 is degraded on target node, aborting failover

... you will need to force the operation. This should normally not happen when the node is marked offline. However, if you do get the message:

• Read man page on gnt-instance, find the section about failover:

If you are trying to migrate instances off a dead node, this will fail. Use the --ignore-consistency option for this purpose. Note that this option can be dangerous as errors in shutting down the instance will be ignored, resulting in possibly having the instance running on two machines in parallel (on disconnected DRBD drives).

• This is why we shut down hostB, and didn't simply disconnect it. You MUST verify that hostB really is down, and not simply disconnected from the management / replication network; otherwise you risk ending up with two running instances of the VM (if someone force-starts it) and you will need to force a resolution.

• Re-run gnt-instance failover with the '--ignore-consistency' flag. We are in a situation that requires this (hostB down)

# gnt-instance failover --ignore-consistency debianX

There will be much more output this time. Pay particular attention to any warnings - these are normal, since the hostB node is down and we did mark it as offline.

Sat Jan 18 21:03:15 2014 Failover instance debianX
Sat Jan 18 21:03:15 2014 * checking disk consistency between source and target

[ ... messages ... ]

Sat Jan 18 21:03:27 2014 * activating the instance's disks on target node hostA.ws.nsrc.org

[ ... messages ... ]

Sat Jan 18 21:03:33 2014 * starting the instance on the target node hostA.ws.nsrc.org
• Confirm that the VM is now up on hostA:
# gnt-instance list -o name,pnode,snodes,status

Instance Primary_node         Secondary_Nodes      Status
debianX  hostA.ws.nsrc.org    hostB.ws.nsrc.org    running

### 3.1.3 Re-adding the failed node

Ok, let's say hostB has been fixed.

• Restart hostB. (Depending on the class setup, you may need to ask the instructor to do this for you).

• Make sure you can ping it and can log in to it
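
A quick check from the master node might look like this (the fully-qualified hostname is assumed from the earlier examples):

# ping -c 3 hostB.ws.nsrc.org
# ssh hostB.ws.nsrc.org uptime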

We need to re-add it to the cluster. We do this using the gnt-node add --readd command on the cluster master node.

From the gnt-node man page:

In case you're readding a node after hardware failure, you can use the --readd parameter. In this case, you don't need to pass the secondary IP again, it will be reused from the cluster. Also, the drained and offline flags of the node will be cleared before re-adding it.

# gnt-node add --readd hostB.ws.nsrc.org

[ ... question about SSH ...]

Sat Jan 18 22:09:43 2014  - INFO: Readding a node, the offline/drained flags were reset
Sat Jan 18 22:09:43 2014  - INFO: Node will be a master candidate

We're good! It could take a while to re-sync the DRBD data if a lot of disk activity (writing) has taken place on debianX, but this will happen in the background.

Inspect the node list:

# gnt-node list

Check the cluster configuration.

# gnt-cluster verify

The DRBD disks on hostB have probably not yet been reactivated. As a result you may see some errors about your instance's disks being degraded, similar to this:

Thu Sep 18 18:52:41 2014 * Verifying node status
Thu Sep 18 18:52:41 2014   - ERROR: node hostB: drbd minor 0 of instance debianX is not active
Thu Sep 18 18:52:41 2014 * Verifying instance status
Thu Sep 18 18:52:41 2014   - ERROR: instance debianX: disk/0 on hostA is degraded
Thu Sep 18 18:52:41 2014   - ERROR: instance debianX: couldn't retrieve status for disk/0 on hostB: Can't find device <DRBD8(hosts=03add4b7-d6d9-40d0-bf6e-74d1683aad49/0-93eef5d9-6b33-4c

Don't panic! This is normal, as it's possible the disks haven't been re-synchronized yet.

If so, you can use the command gnt-cluster verify-disks to fix this:

# gnt-cluster verify-disks

Submitted jobs 78
Waiting for job 78 ...
Activating disks for instance 'debianX'

Wait a few seconds, then run:

# gnt-cluster verify

When all is OK, let's try and migrate debianX back to hostB:

# gnt-instance migrate debianX

Test that the migration has worked.
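
One way to check is to list the instance again (the node names below assume the setup from this scenario):

# gnt-instance list -o name,pnode,snodes,status

debianX should now show hostB as its Primary_node and be in the running state.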

Note: if you are certain that the node hostB is healthy (let's say it was just a power failure, and no corruption has happened on its filesystem or disks), you could simply do the following (DON'T DO THIS NOW!):

# gnt-node modify -O no hostB.ws.nsrc.org

Sat Jan 18 22:08:45 2014  - INFO: Auto-promoting node to master candidate
Sat Jan 18 22:08:45 2014  - WARNING: Transitioning node from offline to online state without using re-add. Please make sure the node is healthy!

## 3.2 Alternate decisions

### 3.2.1 Completely removing hostB from the cluster

Let's now imagine that the failure of hostB wasn't temporary: we imagine that it cannot be fixed, and won't be back online for a while (it needs to be completely replaced). We could decide to remove hostB from the cluster.

To do this:

• If hostB has been restarted, let's shut it down (to simulate a failure)

Note: RUN THIS ON hostB !!!

# halt -p
• On the master:

Mark hostB as offline:

# gnt-node modify --offline=yes hostB.ws.nsrc.org

Run gnt-cluster verify, and look at the output.

Sat Jan 18 21:31:56 2014   - NOTICE: 1 offline node(s) found.
• We marked hostB as offline - let's assume hostB will be down for a long time.

• We decide to remove hostB from the cluster:

# gnt-node remove hostB.ws.nsrc.org

Failure: prerequisites not met for this operation:
error type: wrong_input, error details:
Instance debianX is still running on the node, please remove first

Ok, we are not allowed to remove hostB, because Ganeti can see that we still have an instance (debianX) associated with it.

This is different from simply marking the node offline, as it means we are permanently getting rid of hostB, and we need to take a decision about what to do for DRBD instances that were associated with hostB.

First, fail over the instance to its secondary node:

# gnt-instance failover debianX

Failover will happen to image debianX. This requires a shutdown of
the instance. Continue?
y/[n]/?: y
Thu Sep 18 20:29:32 2014 Failover instance debianX
Thu Sep 18 20:29:32 2014 * checking disk consistency between source and target
Thu Sep 18 20:29:32 2014 Node hostB.ws.nsrc.org is offline, ignoring degraded disk 0 on target node hostA.ws.nsrc.org
Thu Sep 18 20:29:32 2014 * shutting down instance on source node
Thu Sep 18 20:29:32 2014  - WARNING: Could not shutdown instance debianX on node hostB.ws.nsrc.org, proceeding anyway; please make sure node hostB.ws.nsrc.org is down; error details: Node is marked offline
Thu Sep 18 20:29:32 2014 * deactivating the instance's disks on source node
Thu Sep 18 20:29:33 2014  - WARNING: Could not shutdown block device disk/0 on node hostB.ws.nsrc.org: Node is marked offline
Thu Sep 18 20:29:33 2014 * activating the instance's disks on target node hostA.ws.nsrc.org
Thu Sep 18 20:29:33 2014  - WARNING: Could not prepare block device disk/0 on node hostB.ws.nsrc.org (is_primary=False, pass=1): Node is marked offline
Thu Sep 18 20:29:33 2014 * starting the instance on the target node hostA.ws.nsrc.org

Followed by:

# gnt-node evacuate -s hostB

Relocate instance(s) debianX from node(s) hostB?
y/[n]/?: y
Thu Sep 18 20:32:37 2014  - INFO: Evacuating instances from node 'hostB.ws.nsrc.org': debianX
Thu Sep 18 20:32:37 2014  - INFO: Instances to be moved: debianX (to hostA.ws.nsrc.org, hostC.ws.nsrc.org)
...
Thu Sep 18 20:32:38 2014 STEP 3/6 Allocate new storage
Thu Sep 18 20:32:38 2014  - INFO: Adding new local storage on hostC.ws.nsrc.org for disk/0
...
Thu Sep 18 20:32:41 2014 STEP 6/6 Sync devices
Thu Sep 18 20:32:41 2014  - INFO: Waiting for instance debianX to sync disks
Thu Sep 18 20:32:41 2014  - INFO: - device disk/0:  1.20% done, 1m 55s remaining (estimated)
Thu Sep 18 20:33:41 2014  - INFO: Instance debianX's disks are in sync
All instances evacuated successfully.

Ok, check out the instance list:

# gnt-instance list -o name,pnode,snodes,status

Instance  Primary_node      Secondary_Nodes Status
debianX   hostA.ws.nsrc.org hostC.ws.nsrc.org  running

Perfect, hostB is no longer used by any instance. We can now re-attempt to remove node hostB from the cluster:

# gnt-node remove hostB.ws.nsrc.org

More WARNINGs! But did it work?

# gnt-node list

Node              DTotal DFree MTotal MNode MFree Pinst Sinst
hostA.ws.nsrc.org  29.1G 12.6G   995M  145M  672M     2     0
hostC.ws.nsrc.org  29.0G 12.7G   995M  137M  680M     0     1

Yes, hostB is gone.

Note: Ganeti will modify /etc/hosts on your remaining nodes, and remove the line for hostB!

We can restart our debianX instance, by the way! (This may have already happened if you called gnt-instance failover)

# gnt-instance start debianX

Test that it comes up normally.
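
For example, ping the instance, or attach to its console from the master node:

# gnt-instance console debianX

If the login prompt appears, the instance booted correctly.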

# 4 Scenario: Planned master failover (node maintenance)

Let's imagine that we need to temporarily service the cluster master (in this case, hostA). It's rather easy. Decide first which of the other nodes will become master.

Read about master-failover: man gnt-cluster, find the MASTER-FAILOVER section.

Then, ON THE NODE YOU PICKED, run this command:

# gnt-cluster master-failover

If everything goes well, after 5-10 seconds, the node you ran this command on is now the new master.

Test this! For example, if hostB is your new master, run these commands on it:

Verify that the cluster IP is now on this host:

# ifconfig br-lan:0

Notice that the IP address in br-lan:0 is that of the cluster master.

This means that next time you log on using SSH using the cluster IP, you will be logged on to hostB.

Check which node is the master (remember, you need to run this on the master).

# gnt-cluster getmaster
hostB.ws.nsrc.org

All good!

# 5 Scenario: Loss of Master Node

Let's imagine a slightly more critical scenario: the crash of the master node.

Let's shut down the master node!

On hostB (it's now our master node, remember?):

# halt -p

The node is now down. VMs still running on other nodes are unaffected, but you are not able to make any changes (stop, start, modify, or add VMs, change the cluster configuration, etc.).

## 5.1 Promoting slave

Let's assume that hostB is not coming back right now, and we need to promote a master.

You will first need to decide which of the remaining nodes will become the master. Let's pick hostA.

To promote the slave:

• Log on to the node that will become master (hostA):

• Run the following command:

# gnt-cluster master-failover

Note here that you will NOT be asked to confirm the operation!

If you have 3 or more nodes in the cluster, the operation should be as smooth as in the previous section.

On the other hand, if you only had 2 nodes in your cluster, you would have to specify --no-voting as an option. This is because, if one node is down, there is only one node left in the cluster, and no election can take place.

At this point, the chosen node (hostA) is now master. You can verify this using the gnt-cluster getmaster command.

From this point, recovering downed machines is similar to what we did in the first scenario. But to be on the safe side:

• Restart hostB, then log in to it and try to run gnt-instance list

Normally, even though hostB was down while the promotion of hostA happened, the ganeti-masterd daemon running on hostB is informed, on startup, that hostB is no longer master. The above command should therefore fail with:

This is not the master node, please connect to node 'hostA.ws.nsrc.org' and
rerun the command

Which means that hostB is well aware that hostA is the master now.

Once you have done this, you may find that hostA and hostB have different versions of the cluster database. Type the following on hostA:

# gnt-cluster verify
...
Sat Jan 18 16:11:12 2014   - ERROR: cluster: File /var/lib/ganeti/config.data found with 2 different checksums (variant 1 on hostA.ws.nsrc.org, hostC.ws.nsrc.org; variant 2 on hostB.ws.nsrc.org)
Sat Jan 18 16:11:12 2014   - ERROR: cluster: File /var/lib/ganeti/ssconf_master_node found with 2 different checksums (variant 1 on hostA.ws.nsrc.org, hostC.ws.nsrc.org; variant 2 on hostB.ws.nsrc.org)

You can fix this by:

# gnt-cluster redist-conf

which pushes out the config from the current master to all the other nodes.

Re-run gnt-cluster verify to check everything is OK again.

Then, if you want to move the master role back to hostB, log in to hostB and run:

# gnt-cluster master-failover