This information may be useful in understanding how the platform works.

Addressing plan

Since all the labs use the 100.64.0.0/22 network for their external connectivity, there is a common addressing plan on the backbone.

IP address                DNS name          Description
100.64.0.1                gw.ws.nsrc.org    The server itself (gateway to the external Internet)
100.64.0.2-4                                Transit routers
100.64.0.5-7                                IXP route servers
100.64.0.8-9                                Reserved for second/third server
100.64.0.10-19                              Group 1 out-of-band management
100.64.0.20-29                              Group 2 out-of-band management
100.64.0.30-39                              Group 3 out-of-band management
100.64.0.40-49                              Group 4 out-of-band management
100.64.0.50-59                              Group 5 out-of-band management
100.64.0.60-69                              Group 6 out-of-band management
100.64.0.70-79                              Group 7 out-of-band management
100.64.0.80-89                              Group 8 out-of-band management
100.64.0.250              noc.ws.nsrc.org   NOC VM
100.64.0.251              ap1.ws.nsrc.org   Wireless access point
100.64.0.252              ap2.ws.nsrc.org   Wireless access point
100.64.0.253              sw.ws.nsrc.org    Switch
100.64.0.254                                Target for inbound static route
100.64.1.0-100.64.3.254                     DHCP (student laptops)
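
For example, from a Linux laptop on the classroom network you can confirm that you received a lease from the DHCP pool and that the default gateway is the server:

ip addr show              # expect an address between 100.64.1.0 and 100.64.3.254
ip route show default     # expect "default via 100.64.0.1"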

Some topologies use the same address space - in particular, CNDO and NMM use the same backbone addresses for transit routers and out-of-band management. This means you cannot run both of these topologies at the same time: their backbone addresses would conflict.

IPv6 on the backbone uses 2001:db8::/64 (e.g. for the transit routers to talk to each other, and for IPv6 to/from the NOC). It also uses link-local addresses for router next hops, specifically fe80::1 for the server and fe80::254 for the inbound static route.

Inside the labs, address space is taken from 100.64.0.0/10. This "looks like" public IP space, but is actually reserved space from RFC 6598.
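
Putting the above together, the server's routes toward the lab side look roughly like this sketch (the interface name lab-br and the IPv4 lab prefix are hypothetical; the real configuration depends on the topology in use):

ip route add 100.68.0.0/14 via 100.64.0.254              # hypothetical IPv4 route into the labs
ip -6 route add 2001:db8::/32 via fe80::254 dev lab-br   # link-local next hops always need an interface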

Out-of-band management

The student VMs (srv1/hostN) are connected both to the IOSv/IOSvL2 campus network and to the 100.64.0.0/22 network. Their default route points via the virtual campus network, but the 100.64.0.0/22 connection functions as an "out-of-band management" network.

When students connect to their VM on its 100.64.0.x address, the traffic bypasses the IOSv network. This is important because IOSv has a throughput limit of only 2Mbps (250KB/sec); it also minimises the load on the emulation.
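
For example, connecting to host1 on its out-of-band address (visible in the lxc list output later on this page) goes straight over the backbone rather than through IOSv; the sysadm account is the one used elsewhere in these notes:

ssh sysadm@100.64.0.11    # host1's out-of-band management address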

Out-of-band management also means their VMs are accessible even when the virtual campus network is broken. This can be useful - for example they can break the campus network and still get into Nagios to see everything turn red.

The student machines are configured to fetch packages via 100.64.0.1 as a proxy (see /etc/apt/apt.conf.d/99proxy). This means that installing packages is also not throttled by IOSv, and external bandwidth usage is reduced because apt-cacher-ng caches the downloads.
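
The proxy setting itself is a single apt configuration line. The exact contents of 99proxy may differ, but assuming apt-cacher-ng is listening on its default port (3142) on the server, it will be along these lines:

Acquire::http::Proxy "http://100.64.0.1:3142/";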

IPv6

IPv6 uses 2001:db8::/32, the documentation prefix, and works internally between the nodes in the emulation. Some topologies also use 2001:10::/28 which comes out of the reserved prefix 2001::/23.

You do not require any external IPv6 connectivity to your server to be able to use IPv6 in the exercises.

If your server does have an IPv6 address on its WAN interface, then outbound IPv6 traffic from the emulation will be NATed to this address. This means that ping6 and traceroute6 to the Internet will work as expected.

The srv1 VM has a customised /etc/gai.conf which prefers IPv4 over IPv6, except when talking to another 2001:db8:: address. This reduces the risk of timeouts when talking to a machine on the public Internet which advertises an AAAA record but no IPv6 connectivity is available.
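
As a sketch of how this policy can be expressed: gai.conf takes "precedence" rules, and giving IPv4-mapped destinations a high precedence while giving 2001:db8::/32 an even higher one achieves exactly this. (A real gai.conf must restate the rest of the default precedence table once any precedence line is present, and the values in the lab image may differ.)

precedence ::ffff:0:0/96  100   # prefer IPv4 for general destinations
precedence 2001:db8::/32  110   # ...but prefer IPv6 towards other 2001:db8:: addresses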

The classroom wifi network intentionally does not have any IPv6, apart from link-local addresses, so as not to interfere with student Internet access. If you do want to deploy IPv6 on the classroom wifi, see unusual configurations.

Cloud-init

The Ubuntu VMs (such as srv1 in the CNDO and NMM topologies) have two virtual disks attached.

The first is the VM image itself, which can be quite large, but is shared by all instances of the VM.

The second is a small MS-DOS (FAT) filesystem image containing "cloud-init" files. This is read when the VM first boots, and is responsible for configuring the VM's static IP address and creating the default username and password (which are not hard-coded in the image itself).

Because the same VM can appear multiple times in a topology, a separate cloud-init image is needed for each instance so that it comes up on the correct IP address.

When logged into the srv1 VM, you can examine its cloud-init configuration:

sudo mount -r /dev/vdb /mnt
ls /mnt
cat /mnt/network-config
cat /mnt/user-data
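
The network-config file uses cloud-init's network configuration syntax. As an illustrative sketch only (the interface name and address below are made up, not the lab's real values), a static address assignment looks roughly like:

version: 2
ethernets:
  ens3:                     # interface name here is illustrative
    addresses:
      - 100.68.1.130/24     # illustrative address, not the real lab value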

lxd containers

lxd is a lightweight virtualization technology, which allows host1-6 to exist in the NMM topology with minimal extra resource requirements. All the containers share the same underlying kernel and filesystem.

The fact that host1-6 are lxd containers is an implementation detail. However, if you log in to srv1, you can see and manage the containers using the lxc command-line tool:

sysadm@srv1:~$ lxc list
+-------------+---------+-----------------------+----------------------------------------+------------+-----------+
|    NAME     |  STATE  |         IPV4          |                  IPV6                  |    TYPE    | SNAPSHOTS |
+-------------+---------+-----------------------+----------------------------------------+------------+-----------+
| gold-master | STOPPED |                       |                                        | PERSISTENT | 0         |
+-------------+---------+-----------------------+----------------------------------------+------------+-----------+
| host-master | STOPPED |                       |                                        | PERSISTENT | 0         |
+-------------+---------+-----------------------+----------------------------------------+------------+-----------+
| host1       | RUNNING | 100.64.0.11 (eth1)    | 2001:db8:1:1::131 (eth0)               | PERSISTENT | 0         |
|             |         | 100.68.1.131 (eth0)   | 2001:db8:1:1:216:3eff:fed8:988e (eth0) |            |           |
+-------------+---------+-----------------------+----------------------------------------+------------+-----------+
| host2       | RUNNING | 100.64.0.12 (eth1)    | 2001:db8:1:1::132 (eth0)               | PERSISTENT | 0         |
|             |         | 100.68.1.132 (eth0)   | 2001:db8:1:1:216:3eff:fef0:c02a (eth0) |            |           |
+-------------+---------+-----------------------+----------------------------------------+------------+-----------+
| host3       | RUNNING | 100.64.0.13 (eth1)    | 2001:db8:1:1::133 (eth0)               | PERSISTENT | 0         |
|             |         | 100.68.1.133 (eth0)   | 2001:db8:1:1:216:3eff:feec:66e (eth0)  |            |           |
+-------------+---------+-----------------------+----------------------------------------+------------+-----------+
| host4       | RUNNING | 100.64.0.14 (eth1)    | 2001:db8:1:1::134 (eth0)               | PERSISTENT | 0         |
|             |         | 100.68.1.134 (eth0)   | 2001:db8:1:1:216:3eff:fe7c:8e93 (eth0) |            |           |
+-------------+---------+-----------------------+----------------------------------------+------------+-----------+
| host5       | RUNNING | 100.64.0.15 (eth1)    | 2001:db8:1:1::135 (eth0)               | PERSISTENT | 0         |
|             |         | 100.68.1.135 (eth0)   | 2001:db8:1:1:216:3eff:fe33:e459 (eth0) |            |           |
+-------------+---------+-----------------------+----------------------------------------+------------+-----------+
| host6       | RUNNING | 100.64.0.16 (eth1)    | 2001:db8:1:1::136 (eth0)               | PERSISTENT | 0         |
|             |         | 100.68.1.136 (eth0)   | 2001:db8:1:1:216:3eff:fe37:687 (eth0)  |            |           |
+-------------+---------+-----------------------+----------------------------------------+------------+-----------+

This can be useful. For example, if a student has changed or forgotten the password in one of the hostX containers, you can log in to srv1, get a root shell inside the container, and reset the password.

$ lxc exec host1 bash
# passwd sysadm
# exit

The "gold-master" and "host-master" are pre-built lxd images which are cloned to create host1-6 when the VM first starts up (controlled by cloud-init). You should not start these.

The filesystem in the VM is btrfs. This allows the host containers to be launched as zero-copy clones, and also allows de-duplication of blocks between the VM Ubuntu image and the container Ubuntu image.
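
You can confirm this from inside srv1 with standard tools (the second command requires the btrfs-progs package):

df -T /                        # shows the filesystem type
sudo btrfs filesystem usage /  # shows how the btrfs space is allocated and shared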

The containers are also configured using cloud-init. You can see the cloud-init data passed in from the outer VM:

lxc config get host1 user.network-config
lxc config get host1 user.user-data

Kernel Samepage Merging

When you have many similar VMs running, Kernel Samepage Merging (KSM) can save RAM by identifying identical pages and keeping only one copy. This means more RAM is then free for other purposes, e.g. disk cache.
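
Before looking at the statistics, you can confirm that KSM is actually enabled; this is a standard sysfs flag:

cat /sys/kernel/mm/ksm/run    # 1 = ksmd is running, 0 = stopped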

Once you have GNS3 up and running, you can check whether KSM is working by seeing how many pages are shared:

$ cat /sys/kernel/mm/ksm/pages_sharing
53997

Multiply by 4 (the page size in KiB) to get an estimate of the amount of RAM being saved by KSM; in the example above, 53997 x 4 KiB is about 211 MiB. You can also look in the Netdata graphs under Memory > deduper (ksm) to monitor deduping as it occurs over time.

ksmd runs in the background and its rate of operation is limited to avoid consuming all CPU. Experimentation shows it can take several hours to complete deduping the memory of a running lab. You can make this more aggressive by:

echo 10 > /sys/kernel/mm/ksm/sleep_millisecs    # default is 200

You will see ksmd using more CPU, but the memory is deduped more quickly. If you want to make this change permanent, edit the SLEEP_MILLISECS setting in /etc/default/qemu-kvm.
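
For reference, the KSM-related lines in /etc/default/qemu-kvm are along these lines (KSM_ENABLED is the standard Ubuntu setting; the exact contents of the file may vary):

KSM_ENABLED=1
SLEEP_MILLISECS=10   # lowered from the default of 200 to dedupe more aggressively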

CSR1000v configuration management

IOSv stores its configuration in an NVRAM file, which gns3man is able to access and manipulate using utilities from GNS3.

Unfortunately, CSR1000v stores its NVRAM in an encrypted partition which is not accessible. However, CSR1000v can load a configuration from an ISO CD-ROM file on initial boot.

We attach an ISO image containing a configuration with the following command:

boot config bootflash:config.txt nvbypass

This tells the CSR to store its configuration in a plain text file on "bootflash:" (the root partition), where it can be easily manipulated.
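
Such an ISO can be built with a standard tool like genisoimage; the filename iosxe_config.txt is what the CSR expects to find on a configuration CD-ROM. Treat this as a sketch of the idea rather than the exact build procedure used by the platform:

echo 'boot config bootflash:config.txt nvbypass' > iosxe_config.txt
genisoimage -o config.iso -l iosxe_config.txt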

Issues outstanding with GNS3

Some of these issues are being tracked on GitHub.