Linstor lab

If the instructors have already configured Linstor, please skip the first section and jump straight to the section headed “Using Linstor storage”.


Linstor

In this lab, you’re going to set up networked, replicated storage using DRBD managed by Linstor.

In each cluster, the Linstor controller has already been installed on nodeX1 and the Linstor satellite on nodeX2 to nodeX5 - but they have not been configured.

Apart from the controller setup, you’ll work in your existing groups, on the node you used before.

Controller setup

Only one person in your cluster should do this; others can watch

Get a shell on nodeX1 in your cluster.

Start the linstor controller (don’t type the ‘#’ - it’s just there to show the shell prompt). It should come back without an error.

# systemctl enable --now linstor-controller
#

Check that the service is running:

# systemctl status linstor-controller
● linstor-controller.service - LINSTOR Controller Service
     Loaded: loaded (/lib/systemd/system/linstor-controller.service; enabled; preset: enabled)
    Drop-In: /run/systemd/system/service.d
             └─zzz-lxc-service.conf
     Active: active (running) since Fri 2025-07-25 07:42:44 UTC; 3h 0min ago
             ^^^^^^^^^^^^^^^^
...

If you are in a pager, e.g. you see lines 1-23/23 (END), then press “q” by itself to get back to the command line.
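
Aside: if you’d rather avoid the pager altogether, systemctl accepts a standard --no-pager option (this is plain systemd behaviour, nothing Linstor-specific):

# systemctl status linstor-controller --no-pager
...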

Test some linstor commands which communicate with the controller’s API, to check that the controller is working.

# linstor controller version
linstor controller 1.32.3; GIT-hash: 6dac06aed233f2c89ac7cc6b1185d6dce9ec74c4
# linstor node list
╭─────────────────────────────────────╮
┊ Node ┊ NodeType ┊ Addresses ┊ State ┊
╞═════════════════════════════════════╡
╰─────────────────────────────────────╯

Right now, the linstor controller doesn’t know about any nodes.

Finally, add the controller to its own list of nodes (change both instances of “X” to your cluster number):

# linstor node create --node-type controller nodeX1 100.64.0.1X1
SUCCESS:
Description:
    New node 'node01' registered.
Details:
    Node 'node01' UUID is: 18bd2f65-f740-4165-879b-8eb59837ddc0
# linstor node list
╭──────────────────────────────────────────────────────────╮
┊ Node   ┊ NodeType   ┊ Addresses                 ┊ State  ┊
╞══════════════════════════════════════════════════════════╡
┊ node01 ┊ CONTROLLER ┊ 100.64.0.101:3370 (PLAIN) ┊ Online ┊
╰──────────────────────────────────────────────────────────╯

Aside: in this lab we have separate nodes for the controller and the satellites, but it’s possible to have one node doing both functions; in that case it would be --node-type combined
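
For illustration only (don’t run it in this lab), registering such a dual-role node would look like this:

# linstor node create --node-type combined nodeX1 100.64.0.1X1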

If the node is not “Online” then something probably went wrong when adding it. You can delete it with linstor node delete <nodename> and then re-add it.

Node setup

For this section, break up into your individual groups, each working on your assigned node (nodeX2 to nodeX5).

Get a shell on your node (go to the GUI, select your node in the first column, and click Shell >_ in the second column).

LVM configuration

We don’t want LVM on the node to find LVM volumes inside replicated DRBD volumes - it could get very confused seeing the same volume in multiple places. The Linstor instructions tell you to configure LVM’s “global_filter” to prevent this.

This should already have been done for you in the lab, but check that this is the case:

# grep global_filter /etc/lvm/lvm.conf
    # Configuration option devices/global_filter.
    # Use global_filter to hide devices from these LVM system components.
    # global_filter are not opened by LVM.
    # global_filter = [ "a|.*|" ]
     global_filter=["r|/dev/zd.*|","r|/dev/rbd.*|","r|^/dev/drbd|"]

That global_filter line means that LVM will (r)eject devices whose names start with /dev/zd (ZFS), /dev/rbd (Ceph), and /dev/drbd - in other words, not look for LVM metadata inside them.
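
If you want to confirm which filter LVM has actually loaded (rather than just grepping the config file), the standard lvmconfig tool should print the setting currently in effect; expect it to match the line shown above:

# lvmconfig devices/global_filter
...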

Satellite setup

Start the linstor satellite on your node:

# systemctl enable --now linstor-satellite
Created symlink /etc/systemd/system/multi-user.target.wants/linstor-satellite.service → /lib/systemd/system/linstor-satellite.service.
#

And check it is running:

# systemctl status linstor-satellite
● linstor-satellite.service - LINSTOR Satellite Service
     Loaded: loaded (/lib/systemd/system/linstor-satellite.service; enabled; preset: enabled)
     Active: active (running) since Fri 2025-07-25 10:47:54 UTC; 3min 23s ago
             ^^^^^^^^^^^^^^^^
...

If you are in a pager, e.g. you see lines 1-22/22 (END), then press “q” by itself to get back to the command line.

The linstor satellite doesn’t actually need a configuration file to run, although by default it operates insecurely (no authentication or encryption).

An example configuration file is supplied, so you can see what sort of settings are available. Take a look at it.

# cat /etc/linstor/linstor_satellite-example.toml
...

Point CLI to controller

It’s helpful to have access to the linstor command line tools from every node.

Try it now on your node, but you’ll find it doesn’t work:

# linstor node list
Error: Unable to connect to linstor://localhost:3370: [Errno 111] Connection refused

That’s because it’s trying to access the controller on the same node. But it’s not there - it’s on nodeX1.

To fix this, you’ll need to create a file called /etc/linstor/linstor-client.conf. Use an editor of your choice. If you’re not familiar with editing files on Linux, we suggest you use nano:

# nano /etc/linstor/linstor-client.conf

Paste in the following, changing X to your cluster number:

[global]
controller=nodeX1.ws.nsrc.org

Hit ctrl-X to exit. When it says “Save modified buffer?” hit “y”. When it says “File Name to Write”, just hit Enter to confirm the file name shown.
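
As an alternative to using an editor, the same file can be written from the shell with a here-document (again, change X to your cluster number):

# cat > /etc/linstor/linstor-client.conf <<'EOF'
[global]
controller=nodeX1.ws.nsrc.org
EOF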

Now try linstor node list again, and you should be able to communicate.

# linstor node list
╭──────────────────────────────────────────────────────────╮
┊ Node   ┊ NodeType   ┊ Addresses                 ┊ State  ┊
╞══════════════════════════════════════════════════════════╡
┊ node01 ┊ CONTROLLER ┊ 100.64.0.101:3370 (PLAIN) ┊ Online ┊
╰──────────────────────────────────────────────────────────╯

If it doesn’t work, read the error message, check the contents of this file, and check that someone has started the controller successfully on nodeX1 in your cluster. Don’t go any further until this is working.

Add your satellite

The next thing is to add your node to the controller’s database. You have to add it by IP address. Change XY to your node number:

# linstor node create --node-type satellite nodeXY 100.64.0.1XY

for example, node35 would be:

# linstor node create --node-type satellite node35 100.64.0.135

The response should look like this:

SUCCESS:
Description:
    New node 'node02' registered.
Details:
    Node 'node02' UUID is: e087127c-c62f-4b3e-a37f-9e4e8cc1294f
SUCCESS:
Description:
    Node 'node02' authenticated
Details:
    Supported storage providers: [diskless, lvm, lvm_thin, zfs, zfs_thin, file, file_thin, remote_spdk, ebs_init, ebs_target]
    Supported resource layers  : [drbd, luks, nvme, writecache, cache, bcache, storage]
    Unsupported storage providers:
        SPDK: IO exception occured when running 'rpc.py spdk_get_version': Cannot run program "rpc.py": error=2, No such file or directory
        STORAGE_SPACES: This tool does not exist on the Linux platform.
        STORAGE_SPACES_THIN: This tool does not exist on the Linux platform.

Don’t worry about “Unsupported storage providers”. The supported storage providers include lvm and lvm_thin, and those are the only ones we care about.

Check the list of nodes: you should see your node now, and possibly other nodes added by other groups.

# linstor node list
╭──────────────────────────────────────────────────────────╮
┊ Node   ┊ NodeType   ┊ Addresses                 ┊ State  ┊
╞══════════════════════════════════════════════════════════╡
┊ node01 ┊ CONTROLLER ┊ 100.64.0.101:3370 (PLAIN) ┊ Online ┊
┊ node02 ┊ SATELLITE  ┊ 100.64.0.102:3366 (PLAIN) ┊ Online ┊
╰──────────────────────────────────────────────────────────╯

Make sure the state is “Online”. If it’s not, then there’s something wrong - perhaps your satellite is not running, or perhaps you gave the wrong IP address. If so, you can use linstor node delete nodeXY to remove it, then add it again with the correct address.

Configure separate replication network

Your nodes have been set up with an additional network interface on the 100.64.5 network, and we’re going to use this to separate the DRBD replication traffic (recommended for production clusters).

Find your node’s address on this network by listing the interfaces and looking for the address which begins with 100.64.5:

# ip address list
...
4: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 10:66:6a:fa:ab:e0 brd ff:ff:ff:ff:ff:ff
    inet 100.64.5.102/24 scope global enp7s0
         ^^^^^^^^^^^^
...
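
A quick way to pull out just that address is to filter the output (standard ip and grep, nothing Linstor-specific):

# ip -4 address list | grep 'inet 100.64.5'
    inet 100.64.5.102/24 scope global enp7s0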

To make Linstor use this interface, you need to tell Linstor about it and set it as the PrefNic (preferred network interface card). Change XY as before:

# linstor node interface create nodeXY drbd 100.64.5.1XY
SUCCESS:
...

# linstor node set-property nodeXY PrefNic drbd
SUCCESS:
...

Check with:

# linstor node interface list nodeXY
╭─────────────────────────────────────────────────────────────────╮
┊ node02    ┊ NetInterface ┊ IP           ┊ Port ┊ EncryptionType ┊
╞═════════════════════════════════════════════════════════════════╡
┊ + StltCon ┊ default      ┊ 100.64.0.102 ┊ 3366 ┊ PLAIN          ┊
┊ +         ┊ drbd         ┊ 100.64.5.102 ┊      ┊                ┊ <<<
╰─────────────────────────────────────────────────────────────────╯

# linstor node list-properties nodeXY
╭───────────────────────────╮
┊ Key             ┊ Value   ┊
╞═══════════════════════════╡
┊ CurStltConnName ┊ default ┊
┊ NodeUname       ┊ node02  ┊
┊ PrefNic         ┊ drbd    ┊  <<<
╰───────────────────────────╯

The Linstor controller and satellites will still use the primary network address on 100.64.0 for API communication. Only the DRBD resources will send their replication traffic over 100.64.5.
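
Aside: later, once DRBD resources exist and are in use, you can sanity-check this on a diskful node by listing TCP connections and looking for 100.64.5 peer addresses (ss is a standard Linux tool; the DRBD ports are allocated from 7000 upwards, as you’ll see later in the resource-definition listing):

# ss -tn | grep 100.64.5
...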

Create storage pool

Linstor allocates resources from “storage pools”. We can make our LVM volume group available to Linstor by adding it to the database:

# linstor storage-pool create lvm nodeXY lvm-ssd ssd
                               ^     ^      ^     ^
      storage type (driver) ---'     |      |     |
      node where storage exists -----'      |     |
      name of Linstor storage pool ---------'     |
      LVM volume group ---------------------------'

You should get the usual SUCCESS response. Now you can list the storage pools and check yours exists:

# linstor storage-pool list
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node   ┊ Driver   ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName                  ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ node02 ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊ node02;DfltDisklessStorPool ┊
┊ lvm-ssd              ┊ node02 ┊ LVM      ┊ ssd      ┊    35.99 GiB ┊     35.99 GiB ┊ False        ┊ Ok    ┊ node02;lvm-ssd              ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

(You may see storage pools added by other groups too)

If by this point you’re getting tired of typing long linstor command lines, you’ll be happy to discover that there are also shortened forms, e.g.

# linstor sp l

Thin pool

You may have noticed just now that your storage pool has “CanSnapshots: False”. Snapshots of regular LVM volumes are very slow and inefficient, so Linstor disables them. If we want snapshots with LVM then we have to create VM storage volumes inside a thin pool.

This raises some design questions: how much of the volume group should we dedicate to the thin pool, and should the thin pool itself be mirrored?

For this lab, we are choosing to give 75% of our LVM space to the thin pool (leaving 25% for standard LVM volumes and/or future expansion of the thin pool), and not to use mirroring, since DRBD replication between nodes will give us the data protection we want. 75% of the 35.99 GiB volume group is roughly 27 GiB, which is the thin pool size used below.

Check you have plenty of space in your volume group (if not, delete any test LVs left over from the LVM lab)

# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  ssd   2   0   0 wz--n- 35.99g 35.99g

Now create the thin pool:

# lvcreate --name thin0 --type thin-pool --size 27g ssd
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  Logical volume "thin0" created.
# lvs
  LV    VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  thin0 ssd twi-a-tz-- 27.00g             0.00   10.49

Now add your node’s thin pool to the Linstor database:

# linstor storage-pool create lvmthin nodeXY lvmthin-ssd ssd/thin0
SUCCESS:
...
# linstor sp l
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node   ┊ Driver   ┊ PoolName  ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName                  ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ node02 ┊ DISKLESS ┊           ┊              ┊               ┊ False        ┊ Ok    ┊ node02;DfltDisklessStorPool ┊
┊ lvm-ssd              ┊ node02 ┊ LVM      ┊ ssd       ┊    35.99 GiB ┊     35.99 GiB ┊ False        ┊ Ok    ┊ node02;lvm-ssd              ┊
┊ lvmthin-ssd          ┊ node02 ┊ LVM_THIN ┊ ssd/thin0 ┊       27 GiB ┊        27 GiB ┊ True         ┊ Ok    ┊ node02;lvmthin-ssd          ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

“lvmthin” is the driver name, “lvmthin-ssd” is the name we’re giving to the storage pool, and “ssd/thin0” is the LVM volumegroup/logicalvolume that identifies the thin pool.

You should see “CanSnapshots: True” for this one.

Note that “FreeCapacity” on lvm-ssd has not updated. This is because you created the thin pool directly via LVM, and Linstor doesn’t realise that the volume free space has changed.

To make it pick up the new FreeCapacity, it’s simplest just to restart the satellite:

# systemctl restart linstor-satellite

Within a few seconds, you should see the FreeCapacity figure update on the lvm storage pool:

# linstor sp l
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node   ┊ Driver   ┊ PoolName  ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName                  ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ node02 ┊ DISKLESS ┊           ┊              ┊               ┊ False        ┊ Ok    ┊ node02;DfltDisklessStorPool ┊
┊ lvm-ssd              ┊ node02 ┊ LVM      ┊ ssd       ┊     8.94 GiB ┊     35.99 GiB ┊ False        ┊ Ok    ┊ node02;lvm-ssd              ┊
┊ lvmthin-ssd          ┊ node02 ┊ LVM_THIN ┊ ssd/thin0 ┊       27 GiB ┊        27 GiB ┊ True         ┊ Ok    ┊ node02;lvmthin-ssd          ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Linstor resource groups

This section should only be done by ONE person on your cluster (but does not have to be the same person who configured the controller on nodeX1)

We now need to create one or more “resource groups” in Linstor. These are a sort of template from which the resources inherit; they define shared settings, in particular the number of replicas you want and which storage pool to allocate from.

# linstor resource-group create --storage-pool lvm-ssd --place-count 2 --description "LVM thick provisioning, no snapshots" ssd-thick
SUCCESS:
Description:
    New resource group 'ssd-thick' created.
Details:
    Resource group 'ssd-thick' UUID is: 0004ac57-fcaa-48da-b7e4-68716644a228

# linstor resource-group create --storage-pool lvmthin-ssd --place-count 2 --description "LVM thin provisioning with snapshots" ssd-thin
SUCCESS:
Description:
    New resource group 'ssd-thin' created.
Details:
    Resource group 'ssd-thin' UUID is: 15d89fba-7323-400b-9df8-ea173c3a26ea

(In a production environment you might want to choose place-count 3, for additional data security)
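
To confirm that both resource groups exist, you can list them (the abbreviated form linstor rg l should also work):

# linstor resource-group list
...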

At this point, wait until at least 3 groups have finished creating storage pools (i.e. linstor sp l shows at least 3 instances of lvmthin-ssd); otherwise you’ll get an error “Not enough available nodes”.

Now it is possible to create a fully-replicated DRBD volume manually via Linstor. Here is a one-line command to create a 200MiB volume:

# linstor resource-group spawn-resources ssd-thin testvol 200M
...
Description:
    Resource 'testvol' successfully autoplaced on 2 nodes
Details:
    Used nodes (storage pool name): 'node02 (lvmthin-ssd)', 'node03 (lvmthin-ssd)'
...

There will be a lot of responses because this creates a number of linked items, including:

# linstor resource-definition list     # or: linstor rd l
╭─────────────────────────────────────────────────────╮
┊ ResourceName ┊ ResourceGroup ┊ Layers       ┊ State ┊
╞═════════════════════════════════════════════════════╡
┊ testvol      ┊ ssd-thin      ┊ DRBD,STORAGE ┊ ok    ┊
╰─────────────────────────────────────────────────────╯
# linstor resource list                # or: linstor r l
╭──────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node   ┊ Layers       ┊ Usage  ┊ Conns ┊      State ┊ CreatedOn           ┊
╞══════════════════════════════════════════════════════════════════════════════════════════╡
┊ testvol      ┊ node02 ┊ DRBD,STORAGE ┊ Unused ┊ Ok    ┊   UpToDate ┊ 2025-10-30 15:49:26 ┊
┊ testvol      ┊ node03 ┊ DRBD,STORAGE ┊ Unused ┊ Ok    ┊   UpToDate ┊ 2025-10-30 15:49:26 ┊
┊ testvol      ┊ node04 ┊ DRBD,STORAGE ┊ Unused ┊ Ok    ┊ TieBreaker ┊ 2025-10-30 15:49:25 ┊
╰──────────────────────────────────────────────────────────────────────────────────────────╯
# linstor volume list                  # or: linstor v l
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Resource ┊ Node   ┊ StoragePool          ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊      State ┊ Repl           ┊
╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ testvol  ┊ node02 ┊ lvmthin-ssd          ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊   125 KiB ┊ Unused ┊   UpToDate ┊ Established(2) ┊
┊ testvol  ┊ node03 ┊ lvmthin-ssd          ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊   125 KiB ┊ Unused ┊   UpToDate ┊ Established(2) ┊
┊ testvol  ┊ node04 ┊ DfltDisklessStorPool ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊           ┊ Unused ┊ TieBreaker ┊ Established(2) ┊
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

“UpToDate” means the replicas are in sync with each other.

Our new volume can be accessed as /dev/drbd1000 on any of those three nodes. (The nodes were chosen automatically, but we could also add a further replica or diskless access on any other node)
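
If you’re curious, you can also look at this volume from DRBD’s point of view on one of the diskful nodes; drbdadm is part of the standard DRBD userland, and its status output should show the peers and their disk states:

# drbdadm status testvol
...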

The underlying storage for this DRBD device is an LVM logical volume which Linstor has created, and can be seen if you get a shell on one of those nodes:

root@node03:~# lvs
  LV            VG  Attr       LSize   Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
  testvol_00000 ssd Vwi-aotz-- 204.00m thin0        0.06
  thin0         ssd twi-aotz--  27.00g              0.01   10.51

(Why 204MiB? Because we asked for 200MiB, but DRBD needs some space for its own metadata, so Linstor adds one extra extent to the logical volume)

But really we want Proxmox to create these volumes by itself, so let’s delete all this now before we move on.

# linstor resource-definition delete testvol
SUCCESS:
...

Configure Proxmox integration

Again, this section should only be done by ONE person on your cluster

On nodeX2 to nodeX5 the “linstor-proxmox” plugin has already been installed. You just need to configure linstor storage. This requires editing a configuration file (it can’t be added through the GUI).

Get a shell on any node in the cluster, and then edit the file /etc/pve/storage.cfg

# nano /etc/pve/storage.cfg

This will show your existing storage configuration.

Add the following to the end (changing X to your cluster number):

drbd: linstor-thick
    content images, rootdir
    controller nodeX1.ws.nsrc.org
    resourcegroup ssd-thick

drbd: linstor-thin
    content images, rootdir
    controller nodeX1.ws.nsrc.org
    resourcegroup ssd-thin

This change will automatically replicate to all nodes.
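
After saving the file, you can check from the shell that Proxmox has picked up the new storage entries; pvesm is the standard Proxmox storage CLI, and linstor-thick and linstor-thin should appear in its listing on nodeX2 to nodeX5:

# pvesm status
...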

Using Linstor storage

Split up back into your individual node groups for this section.

Finally, you are ready to make use of the Linstor storage!

Migrate VM storage to Linstor

Go to the Proxmox GUI, and you should see the new storage available under Datacenter, Storage:

(If not, work out what went wrong with “Configure Proxmox integration” above, or ask for help)

You should have an existing virtual machine called “groupXY-web” which you created before. Find it in the GUI and click on it.

If it’s running, shut it down.

If it’s currently on nodeX1, then migrate it to your group’s assigned node.

Proxmox normally supports “live” storage migration on a running VM. It is possible with Linstor but a little more complicated because the new linstor volume is slightly larger than the original volume.

A limitation in the way the lab was built means that nodeX1 can’t access Linstor storage.

Now, move the disk to Linstor:

In the second column click “Hardware”, in the third column click “Hard Disk (scsi0)” and then click “Disk Action > Move Storage”

From the “Move disk” page, select “linstor-thin” as the Target Storage. (For safety, leave Delete Source unchecked)

Watch the task results:

create full clone of drive scsi0 (local:102/vm-102-disk-0.raw)

NOTICE
  Trying to create diskful resource (pm-aacf5d16) on (node03).
transferred 0.0 B of 5.0 GiB (0.00%)
transferred 53.2 MiB of 5.0 GiB (1.04%)
...
TASK OK

Start the VM, check it works and that you can login at the VM console. Ask for help if there’s a problem.

Since we didn’t select “Delete Source”, after a successful move you should remove the old local disk volume by hand. Click on “Hardware” and you should see “Unused Disk 0” on your virtual machine. Click on this and click “Remove” to delete it.

Live migrate the VM

Now that your VM is running on Linstor storage, you can live-migrate it to another node. We suggest you use the next one along (e.g. nodeX2->nodeX3, nodeX3->nodeX4, … nodeX5->nodeX2; avoid nodeX1).

Click “migrate” and it should now migrate much more quickly than before, because it no longer has to copy all the disk data - just the state in RAM and a small cloud-init disk.

Find the replicas

One problem with Proxmox Linstor integration is that it doesn’t show or let you manage where the DRBD replicas are. It’s possible you’ve migrated your VM to a node which doesn’t have a DRBD replica of your volume: it works fine, but all disk I/O is going over the network, so it’s less than optimal.

You also have to work out which Linstor volume corresponds to your VM. Proxmox-Linstor names the volumes with random IDs, but adds a property called Aux/pm/vmid giving your VM ID (number).

To see what’s going on, open a shell on your node.

# linstor rd l
╭────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Port ┊ ResourceGroup ┊ Layers       ┊ State ┊
╞════════════════════════════════════════════════════════════╡
┊ pm-aacf5d16  ┊ 7000 ┊ ssd-thin      ┊ DRBD,STORAGE ┊ ok    ┊
╰────────────────────────────────────────────────────────────╯
# linstor rd l -s Aux/pm/vmid
╭──────────────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Port ┊ ResourceGroup ┊ Layers       ┊ State ┊ Aux/pm/vmid ┊
╞══════════════════════════════════════════════════════════════════════════╡
┊ pm-aacf5d16  ┊ 7000 ┊ ssd-thin      ┊ DRBD,STORAGE ┊ ok    ┊ 102         ┊
╰──────────────────────────────────────────────────────────────────────────╯

Using the second version of the command, you can see which VM uses this volume (102 in the above example)

Note that the actual ResourceName and vmid will be different for your virtual machine

If you know the vmid you’re looking for, you can filter the table: replace “XXX” with your own VM ID.

# linstor rd list --props Aux/pm/vmid=XXX
╭────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Port ┊ ResourceGroup ┊ Layers       ┊ State ┊
╞════════════════════════════════════════════════════════════╡
┊ pm-aacf5d16  ┊ 7000 ┊ ssd-thin      ┊ DRBD,STORAGE ┊ ok    ┊
╰────────────────────────────────────────────────────────────╯

Now that you know the resource name, you can ask where the volumes are located (substitute your own resource name in the command below):

# linstor v list -r pm-aacf5d16
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Resource    ┊ Node   ┊ StoragePool          ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊      State ┊ Repl           ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pm-aacf5d16 ┊ node02 ┊ lvmthin-ssd          ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.58 GiB ┊ Unused ┊   UpToDate ┊ Established(2) ┊
┊ pm-aacf5d16 ┊ node03 ┊ lvmthin-ssd          ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.58 GiB ┊ InUse  ┊   UpToDate ┊ Established(2) ┊
┊ pm-aacf5d16 ┊ node04 ┊ DfltDisklessStorPool ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊           ┊ Unused ┊ TieBreaker ┊ Established(2) ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

In the above example: the VM is running on node03 (it says “InUse”). There’s a replica on node02, so you can migrate there and there will still be local storage. But if you migrate to any other node, it will be “Diskless”.

Try migrating to the next node along that isn’t listed in that table. For this example, I migrated to node05.

Check that the VM is still running, but then look at the resources again:

# linstor v list -r pm-aacf5d16
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Resource    ┊ Node   ┊ StoragePool          ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊    State ┊ Repl           ┊
╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pm-aacf5d16 ┊ node02 ┊ lvmthin-ssd          ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.58 GiB ┊ Unused ┊ UpToDate ┊ Established(2) ┊
┊ pm-aacf5d16 ┊ node03 ┊ lvmthin-ssd          ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.58 GiB ┊ Unused ┊ UpToDate ┊ Established(2) ┊
┊ pm-aacf5d16 ┊ node05 ┊ DfltDisklessStorPool ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊           ┊ InUse  ┊ Diskless ┊ Established(2) ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

The VM is running on node05 (InUse), but is diskless. All reads go over the network to node02 or node03; all writes go over the network to both.

To fix this, you can create a new replica on node05 by toggling the resource away from “diskless”:

# linstor r toggle-disk node05 pm-aacf5d16
...
SUCCESS:
    Added disk on 'node05'

This has added another DRBD replica on node05:

# linstor v list -r pm-aacf5d16
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Resource    ┊ Node   ┊ StoragePool ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊    State ┊ Repl           ┊
╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pm-aacf5d16 ┊ node02 ┊ lvmthin-ssd ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.58 GiB ┊ Unused ┊ UpToDate ┊ Established(2) ┊
┊ pm-aacf5d16 ┊ node03 ┊ lvmthin-ssd ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.58 GiB ┊ Unused ┊ UpToDate ┊ Established(2) ┊
┊ pm-aacf5d16 ┊ node05 ┊ lvmthin-ssd ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.58 GiB ┊ InUse  ┊ UpToDate ┊ Established(2) ┊
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

This actually copies all the data to node05 - if you are quick you might see a status other than “UpToDate” while the replication takes place.

We now have 3 replicas of the data - more redundancy than we originally asked for. If we only want 2 replicas, we can toggle one of the unused replicas back to diskless - say, the one on node02.

# linstor r toggle-disk --diskless node02 pm-aacf5d16
...
# linstor v list -r pm-aacf5d16
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Resource    ┊ Node   ┊ StoragePool          ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊    State ┊ Repl           ┊
╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pm-aacf5d16 ┊ node02 ┊ DfltDisklessStorPool ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊           ┊ Unused ┊ Diskless ┊ Established(2) ┊
┊ pm-aacf5d16 ┊ node03 ┊ lvmthin-ssd          ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.58 GiB ┊ Unused ┊ UpToDate ┊ Established(2) ┊
┊ pm-aacf5d16 ┊ node05 ┊ lvmthin-ssd          ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.58 GiB ┊ InUse  ┊ UpToDate ┊ Established(2) ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Note: “toggle-disk” only works for nodes where the resource already exists. The general commands to add and remove replicas on any node are:

# linstor r create nodeXY pm-XXXXXXX
# linstor r delete nodeXY pm-XXXXXXX

Congratulations: you have configured a redundant, replicated storage system using Linstor to manage DRBD over LVM!

Optional extra exercises

Try these out if there’s time available.

Snapshots

Since your VM’s disk has been moved to Linstor storage backed by an LVM thin pool, it should support snapshots.

Try taking a snapshot while the VM is running: in the Proxmox GUI, select the VM, click “Snapshots”, then “Take Snapshot”.

Note that when you take a snapshot, there’s an option “Include RAM”. If you select this, your snapshot will be larger, as it includes the whole RAM state; but when you restore, the VM will still be running at exactly the same point it was when the snapshot was taken.

If you don’t include RAM, then when you restore from a snapshot, the VM will have to boot up from scratch.
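
If you prefer the command line, snapshots can also be managed with the standard Proxmox qm tool on the node where the VM is running (replace 102 with your own VM ID; the snapshot name here is just an example):

# qm snapshot 102 before-changes --vmstate 1
# qm listsnapshot 102
# qm rollback 102 before-changes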

Linstor GUI

As well as the command line, there is an optional Linstor GUI for administration. If installed, this can be accessed at http://nodeX1.ws.nsrc.org:3370/ui/ or via a drop-down link on the workshop lab page.

Have a look at nodes, resource groups, and resources. See if they match what you expect from what you’ve seen at the command line.

Auto-diskful

If you migrate a VM to a node which doesn’t have a replica, it will be diskless. However, there is a Linstor feature called auto-diskful which will automatically promote it to a diskful copy after it has been running diskless for more than a configurable number of minutes.

The following configuration will automatically convert a diskless resource to diskful after it has been “InUse” for 3 minutes:

# linstor resource-group set-property ssd-thick DrbdOptions/auto-diskful 3
...
# linstor resource-group set-property ssd-thin DrbdOptions/auto-diskful 3
...
# linstor resource-group list-properties ssd-thin
╭──────────────────────────────────╮
┊ Key                      ┊ Value ┊
╞══════════════════════════════════╡
┊ DrbdOptions/auto-diskful ┊ 3     ┊
╰──────────────────────────────────╯

With those settings applied, you could repeat the live migration exercise from before. Find out where the resources are placed for your VM, migrate it to another node where it is diskless, wait 3 minutes, and then see if it changes.

Example: this VM has just been migrated to nodeX5.

# linstor v list -r pm-627b119e
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Resource    ┊ Node   ┊ StoragePool          ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊    State ┊ Repl           ┊
╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pm-627b119e ┊ node02 ┊ lvmthin-ssd          ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.71 GiB ┊ Unused ┊ UpToDate ┊ Established(2) ┊
┊ pm-627b119e ┊ node03 ┊ lvmthin-ssd          ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.71 GiB ┊ Unused ┊ UpToDate ┊ Established(2) ┊
┊ pm-627b119e ┊ node05 ┊ DfltDisklessStorPool ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊           ┊ InUse  ┊ Diskless ┊ Established(2) ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

After 3 minutes, it starts to sync:

# linstor v list -r pm-627b119e
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Resource    ┊ Node   ┊ StoragePool ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊              State ┊ Repl                        ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pm-627b119e ┊ node02 ┊ lvmthin-ssd ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.71 GiB ┊ Unused ┊           UpToDate ┊ node05: PausedSyncS(21.08%) ┊
┊             ┊        ┊             ┊       ┊         ┊               ┊           ┊        ┊                    ┊ node03: Established         ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ pm-627b119e ┊ node03 ┊ lvmthin-ssd ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.71 GiB ┊ Unused ┊           UpToDate ┊ node05: SyncSource          ┊
┊             ┊        ┊             ┊       ┊         ┊               ┊           ┊        ┊                    ┊ node02: Established         ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ pm-627b119e ┊ node05 ┊ lvmthin-ssd ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊ 10.76 MiB ┊ InUse  ┊ SyncTarget(21.05%) ┊ node02: PausedSyncT(21.08%) ┊
┊             ┊        ┊             ┊       ┊         ┊               ┊           ┊        ┊                    ┊ node03: SyncTarget(21.05%)  ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

When the sync has completed, one of the other copies is automatically removed to get back to the desired place-count of 2 (this can be disabled via another option, DrbdOptions/auto-diskful-allow-cleanup)

# linstor v list -r pm-627b119e
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Resource    ┊ Node   ┊ StoragePool          ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊      State ┊ Repl           ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pm-627b119e ┊ node02 ┊ DfltDisklessStorPool ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊           ┊ Unused ┊ TieBreaker ┊ Established(2) ┊
┊ pm-627b119e ┊ node03 ┊ lvmthin-ssd          ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.71 GiB ┊ Unused ┊   UpToDate ┊ Established(2) ┊
┊ pm-627b119e ┊ node05 ┊ lvmthin-ssd          ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊  2.71 GiB ┊ InUse  ┊   UpToDate ┊ Established(2) ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
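
If you would rather keep the extra replica, the cleanup behaviour mentioned above can be turned off with that property; the exact value syntax is worth checking against linstor resource-group set-property --help, but it should look something like:

# linstor resource-group set-property ssd-thin DrbdOptions/auto-diskful-allow-cleanup false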

References