Network Monitoring and Management (NMM)

This course teaches the principles of Network Monitoring and Management, illustrated by a variety of open-source tools which students themselves configure and use.

The NMM lab is a trimmed version of the CNDO topology.

NMM topology

The building edge switches are removed; instead, each srv1 host has 2.5 GB of RAM, sufficient to run multiple NMM tools. The total memory usage is again 27 GB.

NMM campus

The core and distribution switches are configured with 4 interfaces, as opposed to 16 in CNDO. This reduces the amount of work for LibreNMS to do.


You will need the following files:

File                                                     Description
hosts-cndo-nmm                                           /etc/hosts file to go on the server
index-nmm.html                                           student navigation page to go in e.g. /var/www/html/index.html
nmm-<version>.gns3project                                the GNS3 project
vios-adventerprisek9-m.vmdk.SPA.157-3.M3                 IOSv image - same as CNDO
vios_l2-adventerprisek9-m.SSA.high_iron_20180619.qcow2   IOSvL2 image - same as CNDO
nsrc-nmm-<version>.qcow2                                 the VM image with NMM tools pre-installed - same as NOC
nmm-srv1-campus<N>-hdb-<version>.img (x 6)               cloud-init configs for srv1 in each campus

LXD containers

Each srv1 virtual machine starts six LXD containers inside it, named host1-host6. From the students' point of view, there are 7 virtual machines in their campus: srv1 and host1-6.

NMM campus student view

Inside GNS3, however, only srv1 exists; stopping it will also stop host1-6.
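If you need to check on the containers, the standard LXD client can be used from inside srv1. A minimal sketch (this assumes the lxc command is present on srv1, as is usual on an LXD host; the fallback branch simply prints the expected container names so the snippet runs anywhere):

```shell
# List the host containers from inside srv1.
# `lxc list` is the standard LXD client command; if it is not available
# (e.g. when trying this snippet outside the lab), print the expected
# names host1..host6 instead.
if command -v lxc >/dev/null 2>&1; then
    lxc list -c ns           # container names and states
else
    for i in 1 2 3 4 5 6; do
        echo "host$i"        # expected container names per the lab docs
    done
fi
```

`lxc exec host1 -- bash` drops you straight into a container without going through ssh, which can be handy if a student has broken a container's network configuration.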

Each of the "host" containers has a set of the smaller NMM tools preinstalled:

  • nagios
  • snmp / snmpd
  • smokeping
  • rsyslog
  • swatch
  • (node_exporter and promtail)

This means that for exercises using these tools, you have 36 instances to play with, and each student can work on their own instance.

The top-level VM (srv1) contains the larger and more resource-intensive tools:

  • LibreNMS
  • cacti
  • nfsen
  • RT
  • rancid
  • mysql (used by LibreNMS, cacti and RT)
  • grafana
  • prometheus and related tools
  • loki

This means that for exercises using these tools, students will have to work in their campus groups.

Backbone addressing plan

All the containers have out-of-band interfaces, so that students' ssh and web traffic does not need to traverse the emulated network.

[Out-of-band addressing table: IP address and DNS name for each device in campus1, with equivalent entries for campus2 through campus6 and for transit1-nren.]

See the training materials for the addressing plan used inside the network.


These passwords are shared with the students:

Device             Username   Password   Enable
Student routers    nmmlab     lab-PW     lab-EN
srv1 and host1-6   sysadm     nsrc+ws

Monitoring tool credentials are as per the NOC topology - it's the same VM image.

URLs are:

The instructor logins are not shared with the students:

Device       Username   Password   Enable
transit1/2   nsrc       nsrc-PW    nsrc-EN

The nmmlab / lab-PW login also works on these devices, so that students can inspect the state of the infrastructure (e.g. with show ip int brief), although they will not know the enable password.
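For instance, once logged in to transit1 as nmmlab, the usual IOS user-exec show commands are available. A sketch (show ip int brief is from the text above; the rest are standard IOS commands, and the output will of course vary):

```
transit1> show ip interface brief    ! interface state and addresses
transit1> show ip route              ! routing table
transit1> show cdp neighbors         ! directly attached devices
! show running-config is not available without the enable password
```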


There is a smaller set of snapshots provided.

  • default is the initial state. All routing is configured and the devices have usernames/passwords set, but ssh is not enabled.
  • ssh is a snapshot where ssh has been enabled and telnet disabled. Note, however, that you will need to log in to each device and run crypto key generate rsa modulus 2048, as this key is not stored within the config.
  • ssh-snmp is similar, but snmp has also been configured.
  • There is no snapshot with netflow configured (yet).
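The key generation required after restoring the ssh snapshot looks like this on each IOSv device (standard IOS commands; the modulus size is taken from the note above):

```
! log in to the device, then:
enable
configure terminal
crypto key generate rsa modulus 2048
end
! ssh to the device should now succeed; telnet remains disabled
```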

Beware that resetting to any of these snapshots will also reset every srv1 and host1-6 instance to its default state - any work that students have done will be erased! Therefore you almost certainly want to do this only once, before the course starts.

Use the gns3man tool if you want to restore the configuration of an individual device.