We are going to simulate a number of failure situations, and recover from them.

Try to replicate these scenarios on your own hosts.

1 Initial setup

        host A                      host B                    host C           
       (master)
  +-----------------+        +-----------------+        +-----------------+     
  |                 |        |                 |        |                 |     
  | +====+          |  drbd  | ......   +====+ |  drbd  | ......          |     
  | | dX |.....................: dX :   | dY |............: dY :          |     
  | +====+          |        | :....:   +====+ |        | :....:          |
  |                 |        |                 |        |                 |     
  |                 |        |        +----+   |        |                 |
  |                 |        |  plain | dZ |   |        |                 |     
  |                 |        |        +----+   |        |                 |     
  +--------+--------+        +-------+---------+        +-------+---------+     
           |                         |                          |              
-----------+-------------------------+--------------------------+-----------

2 Scenario: Planned Node Maintenance

Let's imagine that we want to take down hostB for maintenance: more RAM, a disk replacement, etc.

You probably have several instances running on your cluster by now.

Here's the process:

  1. DRBD instances for which hostB is primary will need to be migrated to their secondary node, leaving hostB as a secondary only

  2. The disks of DRBD instances for which hostB is secondary will need to be moved to another node (if A is primary for debianX, we move its secondary disks from B to C)

  3. Plain instances running on hostB will need to be moved to another node (A or C)

Below are the commands we'll be using for each of the steps above.

2.1 Step 1: Migrate primary instances away from hostB

command: gnt-instance migrate

We've used this command before - we have to make sure that if hostB is primary for any instances, we migrate them to the secondary node.

In the example above, hostB is primary for dY. Let's migrate it over to hostC.

# gnt-instance migrate dY

After this is done, we are now in the following situation: hostB is only running the plain instance dZ.

        host A                      host B                    host C           
       (master)
  +-----------------+        +-----------------+        +-----------------+     
  |                 |        |                 |        |                 |     
  | +====+          |  drbd  | ......   ...... |  drbd  | +====+          |     
  | | dX |.....................: dX :   : dY :............| dY |          |     
  | +====+          |        | :....:   :....: |        | +====+          |
  |                 |        |                 |        |                 |     
  |                 |        |        +----+   |        |                 |
  |                 |        |  plain | dZ |   |        |                 |     
  |                 |        |        +----+   |        |                 |     
  +--------+--------+        +-------+---------+        +-------+---------+     
           |                         |                          |              
-----------+-------------------------+--------------------------+-----------
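Before continuing, it's worth confirming that the cluster agrees with the diagram. A quick check (the output fields are the same ones used later in this exercise):

```shell
# dY's primary node should now be hostC, with hostB as its secondary
gnt-instance list -o name,pnode,snodes,status
```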

2.2 Step 2: Move secondary instances' disks on hostB to another host

command: gnt-instance replace-disks

# gnt-instance replace-disks -n <new_node> <instance>

For each DRBD instance that has hostB as its secondary node, we ask Ganeti to rebuild its secondary disks on another node, passing the new secondary node with -n.
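Here is a hedged sketch for the example layout above (the target nodes are arbitrary choices; any online node other than the instance's primary will do):

```shell
# Rebuild dX's secondary disks (currently on hostB) on hostC instead
gnt-instance replace-disks -n hostC.ws.nsrc.org dX

# After step 1, dY's secondary disks are still on hostB;
# rebuild them on hostA
gnt-instance replace-disks -n hostA.ws.nsrc.org dY
```

The instances stay running during replace-disks; the new secondary disks are brought into sync over DRBD in the background.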

2.3 Step 3: Move plain instances away from hostB to another host.

command: gnt-instance move

Note that this will require shutting down the instance, as its disk(s) will first have to be copied to hostC before it can be restarted there.

# gnt-instance move -n hostC debianZ

Instance debianZ will be moved. This requires a shutdown of the instance.
Continue?
y/[n]/?: y
Fri Sep 19 14:31:44 2014  - INFO: Shutting down instance debianZ on source node hostB
Fri Sep 19 14:32:01 2014 disk/0 sent 450M, 77.2 MiB/s, 21%, ETA 21s
Fri Sep 19 14:32:37 2014 disk/0 finished receiving data
Fri Sep 19 14:32:37 2014 disk/0 finished sending data
Fri Sep 19 14:32:37 2014  - INFO: Removing the disks on the original node
Fri Sep 19 14:32:38 2014  - INFO: Starting instance debianZ on node hostC

hostB is now ready to be shut down. Don't do this yet!

3 Scenario: Loss of a Slave Node

3.1 Initial state

# gnt-instance list -o name,pnode,snodes,status
Instance Primary_node         Secondary_Nodes      Status
debianX  hostB.ws.nsrc.org    hostA.ws.nsrc.org    running
debianY  hostC.ws.nsrc.org    hostB.ws.nsrc.org    running
debianZ  hostC.ws.nsrc.org                         running
# halt -p
# gnt-instance list -o name,pnode,snodes,status

Instance Primary_node         Secondary_Nodes      Status
debianX  hostB.ws.nsrc.org    hostA.ws.nsrc.org    ERROR_nodedown
debianY  hostC.ws.nsrc.org    hostB.ws.nsrc.org    running
debianZ  hostC.ws.nsrc.org                         running

As you notice, things are quite slow. This is because Ganeti is trying to contact the ganeti-noded daemon on hostB, and it's timing out.

If this were a production environment, we'd have to examine hostB, and determine whether hostB was likely to come back online soon. If not, say, because of some hardware failure, we would decide to take the node "offline", so Ganeti would stop trying to talk to it.

Let's start by marking hostB as offline:

# gnt-node modify --offline=yes hostB.ws.nsrc.org

Modified node hostB.ws.nsrc.org
 - master_candidate -> False
 - offline -> True

It will take a little while, but from now on most commands will run faster, as Ganeti no longer tries to contact the offline node.

Try running gnt-instance list and gnt-node list again.

Also re-run gnt-cluster verify.

3.1.1 Instance recovery

If you attempt to migrate, you will be told:

# gnt-instance migrate debianX

Failure: prerequisites not met for this operation:
error type: wrong_state, error details:
Can't migrate, please use failover: Node is marked offline
# gnt-instance failover debianX

Hopefully you will see messages ending with:

...
Sat Jan 18 15:58:11 2014 * activating the instance's disks on target node hostA.ws.nsrc.org
Sat Jan 18 15:58:11 2014  - WARNING: Could not prepare block device disk/0 on node hostB.ws.nsrc.org (is_primary=False, pass=1): Node is marked offline
Sat Jan 18 15:58:11 2014 * starting the instance on the target node hostA.ws.nsrc.org

If so, skip to the section "Confirm that the VM is now up on hostA"

If you see this message:

Sat Jan 18 20:57:55 2014 Failover instance debianX
Sat Jan 18 20:57:55 2014 * checking disk consistency between source and target
Failure: command execution error:
Disk 0 is degraded on target node, aborting failover

... you will need to force the operation. This should normally not happen when the node is marked offline. However, if you do get the message:

If you are trying to migrate instances off a dead node, this will fail. Use the --ignore-consistency option for this purpose. Note that this option can be dangerous as errors in shutting down the instance will be ignored, resulting in possibly having the instance running on two machines in parallel (on disconnected DRBD drives).

# gnt-instance failover --ignore-consistency debianX

There will be much more output this time. Pay particular attention to any warnings - these are normal, since hostB is down, but remember that we did mark it as offline.

Sat Jan 18 21:03:15 2014 Failover instance debianX
Sat Jan 18 21:03:15 2014 * checking disk consistency between source and target

[ ... messages ... ]

Sat Jan 18 21:03:27 2014 * activating the instance's disks on target node hostA.ws.nsrc.org

[ ... messages ... ]

Sat Jan 18 21:03:33 2014 * starting the instance on the target node hostA.ws.nsrc.org
# gnt-instance list -o name,pnode,snodes,status

Instance Primary_node         Secondary_Nodes      Status
debianX  hostA.ws.nsrc.org    hostB.ws.nsrc.org    running
debianY  hostC.ws.nsrc.org    hostB.ws.nsrc.org    running
debianZ  hostC.ws.nsrc.org                         running

3.1.2 Re-adding the failed node

Ok, let's say hostB has been fixed.

We need to re-add it to the cluster. We do this using the gnt-node add --readd command on the cluster master node.

From the gnt-node man page:

In case you're readding a node after hardware failure, you can use the --readd parameter. In this case, you don't need to pass the secondary IP again, it will be reused from the cluster. Also, the drained and offline flags of the node will be cleared before re-adding it.

# gnt-node add --readd hostB.ws.nsrc.org

[ ... question about SSH ...]

Sat Jan 18 22:09:43 2014  - INFO: Readding a node, the offline/drained flags were reset
Sat Jan 18 22:09:43 2014  - INFO: Node will be a master candidate

We're good! It could take a while to re-sync the DRBD data if a lot of disk activity (writing) has taken place on debianX, but this will happen in the background.
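If you are curious, you can watch the resynchronization as it happens (an optional aside; this assumes the usual /proc/drbd status interface is available on the node):

```shell
# On hostB or hostA, refresh the DRBD status every 2 seconds;
# an ongoing resync shows a progress percentage and sync speed
watch -n2 cat /proc/drbd
```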

Inspect the node list:

# gnt-node list

Check the cluster configuration.

# gnt-cluster verify

Probably the DRBD disks on hostB have not yet been re-activated. As a result you may see some errors about your instance's disks being degraded, similar to this:

Thu Sep 18 18:52:41 2014 * Verifying node status
Thu Sep 18 18:52:41 2014   - ERROR: node hostB: drbd minor 0 of instance debianX is not active
Thu Sep 18 18:52:41 2014 * Verifying instance status
Thu Sep 18 18:52:41 2014   - ERROR: instance debianX: disk/0 on hostA is degraded
Thu Sep 18 18:52:41 2014   - ERROR: instance debianX: couldn't retrieve status for disk/0 on hostB: Can't find device <DRBD8(hosts=03add4b7-d6d9-40d0-bf6e-74d1683aad49/0-93eef5d9-6b33-4c

Don't panic! This is normal, as it's possible the disks haven't been re-synchronized yet.

If so, you can use the command gnt-cluster verify-disks to fix this:

# gnt-cluster verify-disks

Submitted jobs 78
Waiting for job 78 ...
Activating disks for instance 'debianX'

Wait a few seconds, then run:

# gnt-cluster verify

When all is OK, let's try to migrate debianX back to hostB:

# gnt-instance migrate debianX

Test that the migration has worked.

3.2 Completely removing hostB from the cluster

Let's now imagine that the failure of hostB wasn't temporary: it cannot be fixed, and won't be back online for a while (it needs to be completely replaced). We decide to remove hostB from the cluster.

To do this:

Note: RUN THIS ON hostB!!!

# halt -p

Mark hostB as offline:

# gnt-node modify --offline=yes hostB.ws.nsrc.org

Run gnt-cluster verify, and look at the output.

Sat Jan 18 21:31:56 2014   - NOTICE: 1 offline node(s) found.
# gnt-node remove hostB.ws.nsrc.org

Failure: prerequisites not met for this operation:
error type: wrong_input, error details:
Instance debianX is still running on the node, please remove first

Ok, we are not allowed to remove hostB, because Ganeti can see that we still have an instance (debianX) associated with it.

This is different from simply marking the node offline: since we are permanently getting rid of hostB, we need to decide what to do with the DRBD instances that were associated with it.

# gnt-instance failover debianX

Failover will happen to image debianX. This requires a shutdown of
the instance. Continue?
y/[n]/?: y
Thu Sep 18 20:29:32 2014 Failover instance debianX
Thu Sep 18 20:29:32 2014 * checking disk consistency between source and target
Thu Sep 18 20:29:32 2014 Node hostB.ws.nsrc.org is offline, ignoring degraded disk 0 on target node hostA.ws.nsrc.org
Thu Sep 18 20:29:32 2014 * shutting down instance on source node
Thu Sep 18 20:29:32 2014  - WARNING: Could not shutdown instance debianX on node hostB.ws.nsrc.org, proceeding anyway; please make sure node hostB.ws.nsrc.org is down; error details: Node is marked offline
Thu Sep 18 20:29:32 2014 * deactivating the instance's disks on source node
Thu Sep 18 20:29:33 2014  - WARNING: Could not shutdown block device disk/0 on node hostB.ws.nsrc.org: Node is marked offline
Thu Sep 18 20:29:33 2014 * activating the instance's disks on target node hostA.ws.nsrc.org
Thu Sep 18 20:29:33 2014  - WARNING: Could not prepare block device disk/0 on node hostB.ws.nsrc.org (is_primary=False, pass=1): Node is marked offline
Thu Sep 18 20:29:33 2014 * starting the instance on the target node hostA.ws.nsrc.org

Followed by:

# gnt-node evacuate -s hostB

Relocate instance(s) debianX from node(s) hostB?
y/[n]/?: y
Thu Sep 18 20:32:37 2014  - INFO: Evacuating instances from node 'hostB.ws.nsrc.org': debianX
Thu Sep 18 20:32:37 2014  - INFO: Instances to be moved: debianX (to hostA.ws.nsrc.org, hostC.ws.nsrc.org)
...
Thu Sep 18 20:32:38 2014 STEP 3/6 Allocate new storage
Thu Sep 18 20:32:38 2014  - INFO: Adding new local storage on hostC.ws.nsrc.org for disk/0
...
Thu Sep 18 20:32:41 2014 STEP 6/6 Sync devices
Thu Sep 18 20:32:41 2014  - INFO: Waiting for instance debianX to sync disks
Thu Sep 18 20:32:41 2014  - INFO: - device disk/0:  1.20% done, 1m 55s remaining (estimated)
Thu Sep 18 20:33:41 2014  - INFO: Instance debianX's disks are in sync
All instances evacuated successfully.

Ok, check out the instance list:

# gnt-instance list -o name,pnode,snodes,status

Instance Primary_node      Secondary_Nodes   Status
debianX  hostA.ws.nsrc.org hostC.ws.nsrc.org running
XXX

Perfect, hostB is not used by any instance. We can now re-attempt to remove node hostB from the cluster:

# gnt-node remove hostB.ws.nsrc.org

More WARNINGs! But did it work?

# gnt-node list

Node              DTotal DFree MTotal MNode MFree Pinst Sinst
hostA.ws.nsrc.org  29.1G 12.6G   995M  145M  672M     2     0
hostC.ws.nsrc.org  29.0G 12.7G   995M  137M  680M     0     1

Yes, hostB is gone.

Note: Ganeti will modify /etc/hosts on your remaining nodes, and remove the line for hostB!

We can restart our debianX instance, by the way! (This may have already happened if you called gnt-instance failover)

# gnt-instance start debianX

Test that it comes up normally.

4 Scenario: Planned master failover (node maintenance)

Let's imagine that we need to temporarily service the cluster master (in this case, hostA). It's rather easy. Decide first which of the other nodes will become master.

Read about master-failover: man gnt-cluster, find the MASTER-FAILOVER section.

Then, ON THE NODE YOU PICKED, run this command:

# gnt-cluster master-failover

If everything goes well, after 5-10 seconds, the node you ran this command on is now the new master.

Test this! For example, if hostB is your new master, run these commands on it:

Verify that the cluster IP is now on this host:

# ifconfig br-lan:0

Notice that the IP address in br-lan:0 is that of the cluster master.

This means that next time you log on using SSH using the cluster IP, you will be logged on to hostB.

Check which node is the master (remember, you need to run this on the master).

# gnt-cluster getmaster
hostB.ws.nsrc.org

All good!

5 Scenario: Loss of Master Node

Let's imagine a slightly more critical scenario: the crash of the master node.

Let's shut down the master node!

On hostB (it's now our master node, remember?):

# halt -p

The node is now down. VMs still running on other nodes are unaffected, but you are not able to make any changes to the cluster (stop, start, modify, or add VMs, change the cluster configuration, etc.).

5.1 Promoting slave

Let's assume that hostB is not coming back right now, and we need to promote a master.

You will first need to decide which of the remaining nodes will become the master. Let's pick hostA.

To promote the slave:

# gnt-cluster master-failover

Note here that you will NOT be asked to confirm the operation!

If you have 3 or more nodes in the cluster, the operation should be as smooth as in the previous section.

On the other hand, if you only had 2 nodes in your cluster, you would have to specify --no-voting as an option. This is because, if one node is down, there is only one node left in the cluster, and no election can take place.
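For that two-node case, the command would look like this (a sketch; only use it when you are certain the old master is really dead, otherwise you risk a split brain with two masters running at once):

```shell
# Force the promotion without holding an election
gnt-cluster master-failover --no-voting
```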

At this point, the chosen node (hostA) is now master. You can verify this using the gnt-cluster getmaster command.

From this point, recovering the downed node is similar to what we did in the first scenario: power hostB back on, and bring it back into the cluster.

Normally, even though hostB was down while the promotion of hostA happened, the ganeti-masterd daemon running on hostB is informed, on startup, that hostB is no longer master. If you then run a cluster command such as gnt-cluster verify on hostB, it should fail with:

This is not the master node, please connect to node 'hostA.ws.nsrc.org' and
rerun the command

Which means that hostB is well aware that hostA is the master now. (If both nodes believed they were the master at the same time, we would have a split brain: this is exactly why --no-voting is dangerous.)

Once you have done this, you may find that hostA and hostB have different versions of the cluster database. Type the following on the new master, hostA:

# gnt-cluster verify
...
Sat Jan 18 16:11:12 2014   - ERROR: cluster: File /var/lib/ganeti/config.data found with 2 different checksums (variant 1 on hostB.ws.nsrc.org, hostC.ws.nsrc.org; variant 2 on hostA.ws.nsrc.org)
Sat Jan 18 16:11:12 2014   - ERROR: cluster: File /var/lib/ganeti/ssconf_master_node found with 2 different checksums (variant 1 on hostB.ws.nsrc.org, hostC.ws.nsrc.org; variant 2 on hostA.ws.nsrc.org)
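If you want to see the disagreement for yourself, you can compare the checksums directly (a hedged sketch; it relies on the root SSH access between nodes that Ganeti already sets up):

```shell
# Print the checksum of the cluster configuration on each node
for n in hostA hostB hostC; do
    ssh "$n" md5sum /var/lib/ganeti/config.data
done
```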

You can fix this by:

# gnt-cluster redist-conf

which pushes out the config from the current master to all the other nodes.

Re-run gnt-cluster verify to check everything is OK again.

Finally, make sure the master role is on the node where you want it. To hand it to another node, log in to that node and run:

# gnt-cluster master-failover

6 Some key commands

These are some of the commands we'll be making use of.

6.1 Moving all instances away from a node

command: gnt-node evacuate

Read the man page for gnt-node and look for the section about the evacuate subcommand.

Note: for the time being, one needs to explicitly tell the evacuate command to move away either primary (-p) or secondary (-s) instances - it won't work for both at the same time.

Assuming the plain instance debianY has nodeB as its primary node, what happens if we do:

# gnt-node evacuate -p nodeB

Relocate instance(s) debianY from node(s) nodeB?
y/[n]/?:

gnt-node evacuate has figured out that the plain instance debianY needs to be moved away. Answer y:

Fri Sep 19 14:29:45 2014  - INFO: Evacuating instances from node 'hostB': debianY
Fri Sep 19 14:29:46 2014  - WARNING: Unable to evacuate instances debianY (Instances of type plain cannot be relocated)
Failure: command execution error:
Unable to evacuate instances debianY (Instances of type plain cannot be relocated)

Uh oh :(

What about gnt-node evacuate -s nodeB?

6.2 Making a node online after it has been marked as offline

Note: if you are certain that the node hostB is healthy (let's say it was just a power failure, and no corruption has happened on its filesystem or disks), you could simply do the following (DON'T DO THIS NOW!):

# gnt-node modify -O no hostB.ws.nsrc.org

Sat Jan 18 22:08:45 2014  - INFO: Auto-promoting node to master candidate
Sat Jan 18 22:08:45 2014  - WARNING: Transitioning node from offline to online state without using re-add. Please make sure the node is healthy!

But you would be warned about this.