Vrnetlab, or VR Network Lab, is an open-source network emulator that runs virtual routers using KVM and Docker. Software developers and network engineers use vrnetlab, along with continuous-integration processes, for testing network provisioning changes in a virtual network. Researchers and engineers may also use the vrnetlab command line interface to create and modify network emulation labs in an interactive way. In this post, I review vrnetlab’s main features and show how to use it to create a simple network emulation scenario using open-source routers.
Vrnetlab implementation
Vrnetlab creates Docker images for each type of router that will run in the virtual network. It packages the router’s disk image together with KVM software, Python scripts, and any other resources required by the router into the Docker image. Vrnetlab uses KVM to create and run VMs based on router software images, and uses Docker to manage the networking between the network nodes.
Virtual nodes
Vrnetlab network nodes are Docker containers started from Docker images that represent “virtual routers” and bundle all the software and scripts needed to start the router and connect it to the virtual network. For example, a container created from an OpenWRT Docker image could logically be represented as shown below:
The router VM receives some “bootstrap” configurations from a launch script bundled with the image. The launch script is unique to each router type. For example, the launch script bundled with the OpenWRT Docker image will poll the router VM until it completes its startup, then it will log in to the router, change the password, and configure the LAN port. Users may need to modify the launch script if they have special requirements.
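If you want to see what the bootstrap procedure does, or modify it, you can read the launch script directly in the cloned repository (cloning is covered later in this post). A quick look, assuming the usual vrnetlab layout where each router type keeps its launch script under its own docker directory:
T420:~$ less ~/vrnetlab/openwrt/docker/launch.py   # path assumed from the repository layout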
The router VM connects to the virtual network topology via the Docker container’s open TCP ports. We’ll discuss in the next section how the interfaces use TCP ports to implement network links.
Vrnetlab simplifies network emulation for complex commercial routers, especially in the cases where commercial routers require multiple VMs that implement different parts of the virtual router’s functionality, such as control or forwarding functions. Each virtual router container appears to the rest of the network as a single node, regardless of how many VMs are needed internally to implement it and regardless of how complex the networking requirements are between the router’s internal VMs.
You can see in the figure, above, how using Docker as a package format and as the interconnection layer greatly simplifies the user’s view of the network emulation scenario when using a complex commercial virtual router. The developer will spend effort creating a Docker image and writing the launch script, but the user only needs to know which ports map to which interfaces on the virtual router.
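For example, once a router container is running (the openwrt1 container created later in this post is assumed here), you can check which TCP ports the image exposes with a standard docker inspect command:
T420:~$ sudo docker inspect --format '{{.Config.ExposedPorts}}' openwrt1   # openwrt1 is the container name used later in this post
The ports listed should match the ones shown in the docker container ls output later in this post.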
NOTE: The vrnetlab GitHub repository does not include any commercial router images so vrnetlab users must provide qemu disk images that they have obtained themselves.
Virtual network connections
Vrnetlab uses a cross-connect program named vr-xcon to define connections between node interfaces and to collect and transport data packets between those interfaces. All traffic between containers passes through the standard docker0 management bridge, and the vr-xcon cross-connect program creates an overlay network of point-to-point TCP sessions on top of the management bridge. If the user stops the cross-connect script, the network connections between virtual nodes stop transporting packets.
The vr-xcon script runs in a Docker container and can take in a list of all point-to-point connections in the network and handle forwarding for all of them. If you set up your virtual network this way, then all connections will stop if you stop the script. You may also run many different instances of the script — each in its own Docker container — to create links one-by-one or in smaller groups. This way, you can “disconnect” and “reconnect” individual links by stopping and starting the container that runs the script for each link.
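For example, assuming a per-link container named bridge-openwrt1-1-openwrt2-1, like the one created later in this post, you could break and restore that single link with standard Docker commands:
T420:~$ sudo docker stop bridge-openwrt1-1-openwrt2-1    # "disconnect" the link; container name assumed from the example later in this post
T420:~$ sudo docker start bridge-openwrt1-1-openwrt2-1   # "reconnect" the link
While the link container is stopped, the two interfaces it connects cannot exchange packets; starting it again restores the link.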
Helper scripts
Users run all vrnetlab operations using Docker commands. Some operations require the user to create complex commands combining Docker and Linux commands. Fortunately, the vrnetlab author created a set of shell functions that perform the most common vrnetlab operations. The functions are contained in a single shell script named vrnetlab.sh and are loaded into your Bash shell using the source command.
Open-source routers
Vrnetlab supports many commercial routers but currently supports only one open-source router, OpenWRT. OpenWRT supports a limited number of use cases, mostly acting as a gateway between a LAN and a WAN, so the scenarios you can create with OpenWRT alone are very limited. You need more node types that can represent core routers and end users, all running open-source software.
It is possible to extend vrnetlab and add more open-source router types. Maybe I’ll cover that in a future post. For now, this post will cover using OpenWRT to create a two-node network.
Prepare the system
Vrnetlab is designed to run on an Ubuntu or Debian Linux system. I tested vrnetlab on a system running Ubuntu 18.04 and it worked well. ^[See the documentation about vrnetlab system requirements and how vrnetlab works on other operating systems that support Docker.]
Before you install vrnetlab on an Ubuntu 18.04 LTS system, you must install some prerequisite software packages, such as Docker, git, Beautiful Soup, and sshpass. You may install them using the commands shown below:
T420:~$ sudo apt update
T420:~$ sudo apt -y install python3-bs4 sshpass make
T420:~$ sudo apt -y install git
T420:~$ sudo apt install -y \
apt-transport-https ca-certificates \
curl gnupg-agent software-properties-common
T420:~$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
T420:~$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
T420:~$ sudo apt update
T420:~$ sudo apt install -y docker-ce docker-ce-cli containerd.io
Install vrnetlab
To install vrnetlab, clone the vrnetlab repository from GitHub to your system. In this example, I cloned the repository to my home directory, as follows:
T420:~$ cd ~
T420:~$ git clone https://github.com/plajjan/vrnetlab.git
Go to the vrnetlab directory:
T420:~$ cd ~/vrnetlab
Now, you see the vrnetlab scripts and directories. Notice that there is a separate directory for each router type vrnetlab supports:
T420:~$ ls
CODE_OF_CONDUCT.md config-engine-lite openwrt vr-bgp
CONTRIBUTING.md csr routeros vr-xcon
LICENSE git-lfs-repo.sh sros vrnetlab.sh
Makefile makefile-install.include topology-machine vrp
README.md makefile-sanity.include veos vsr1000
ci-builder-image makefile.include vmx xrv
common nxos vqfx xrv9k
Create a router image
Every router supported by vrnetlab has unique configuration and setup procedures. For the OpenWRT router, the vrnetlab author created a makefile that will download the latest version of OpenWRT to the vrnetlab/openwrt directory and then build an OpenWRT Docker image.
However, the download script fails. It seems that the directory structure on the OpenWRT downloads website has changed since the script was written. The workaround is easy: use the wget command to download the latest version of OpenWRT to the ~/vrnetlab/openwrt directory, then run the sudo make build command again, as follows:
T420:~$ cd ~/vrnetlab/openwrt
T420:~$ wget https://downloads.openwrt.org/releases/18.06.2/targets/x86/64/openwrt-18.06.2-x86-64-combined-ext4.img.gz
T420:~$ sudo make build
This provides a lot of output. At the end of the output, you see text similar to the following lines:
Successfully built 0c0eef5fb556
Successfully tagged vrnetlab/vr-openwrt:18.06.2
make[2]: Leaving directory '/home/brian/vrnetlab/openwrt'
make[1]: Leaving directory '/home/brian/vrnetlab/openwrt'
The Docker image is named vrnetlab/vr-openwrt:18.06.2. See all Docker images with the docker images command:
T420:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
vrnetlab/vr-openwrt 18.06.2 0c0eef5fb556 20 seconds ago 545MB
debian stable c04b519eaefa 8 days ago 101MB
Install the cross-connect program
Vrnetlab has two programs for building connections between virtual routers: vr-xcon and topo-machine:
- vr-xcon is a cross-connect program that adds point-to-point links between nodes. It is suitable for adding links one-by-one, or for building small topologies. I recommend using vr-xcon if you want to be able to “disconnect” and “reconnect” individual links in the network. We will use the vrbridge shell function in this post, which uses vr-xcon to build links between nodes.
- topo-machine creates virtual network nodes and links between nodes, where the nodes and links are described in a JSON file. I may write about topo-machine in the future but I do not discuss it in this post. Topo-machine is suitable for building complex topologies and would be especially useful to developers and testers who want to manage the network topology in a CI pipeline and/or source control repository.
The vr-xcon point-to-point cross-connect program
The vr-xcon Python script runs in a Docker container so you need to download the vr-xcon Docker image from the vrnetlab repository on Docker Hub, or build it locally. When you create a container from the image and run it, the Python script starts collecting and forwarding TCP packets between TCP ports on each node’s Docker container. Vrnetlab uses TCP sessions to create the point-to-point connections between interfaces.
First, you need to login to Docker Hub as follows:
T420:~$ sudo docker login
Enter your Docker Hub userid and password. If you do not have one yet, go to https://hub.docker.com/ and sign up. It’s free.
Pull the vr-xcon image using the following commands:
T420:~$ cd ~/vrnetlab
T420:~$ sudo docker pull vrnetlab/vr-xcon
This will download and install the vr-xcon image in your local Docker system.
Tag images
Tag your Docker images to simplify using them in Docker container commands. You must tag the image vrnetlab/vr-xcon:latest as, simply, vr-xcon so the helper shell functions in vrnetlab.sh will work; they expect an image named vr-xcon to exist in your repository.
T420:~$ sudo docker tag vrnetlab/vr-xcon:latest vr-xcon
You may also choose to tag the image vrnetlab/vr-openwrt:18.06.2 with a shorter name like openwrt:
T420:~$ sudo docker tag vrnetlab/vr-openwrt:18.06.2 openwrt
Check the Docker images and verify that the shorter tags have been added to the Docker repository:
T420:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openwrt latest 0c0eef5fb556 18 minutes ago 545MB
vrnetlab/vr-openwrt 18.06.2 0c0eef5fb556 18 minutes ago 545MB
debian stable c04b519eaefa 8 days ago 101MB
vr-xcon latest 0843f237b02a 2 months ago 153MB
vrnetlab/vr-xcon latest 0843f237b02a 2 months ago 153MB
Install the vrnetlab.sh shell functions
The vrnetlab.sh script loads a set of Bash shell functions into your current shell that you can use to manage your virtual routers. To load the functions, go to the vrnetlab directory and source the script.
Change to the root user first, so you can source the shell script as root. You want to source it as root because all the Docker commands launched by the script’s functions must run as root.
T420:~$ sudo su
T420:~#
Then source the vrnetlab.sh script:
T420:~# cd /home/brian/vrnetlab
T420:~# source vrnetlab.sh
You need to stay as the root user to use the helper commands; you cannot go back to your normal user and run them with sudo. Also, you need to source the script again every time you start a new root shell or log in again.
Plan the network topology
Vrnetlab only supports one open-source router, OpenWRT, so a vrnetlab network consisting of only open-source routers will necessarily be very small.
Connect two OpenWRT routers together via their WAN ports and then ping from one WAN interface to the other. The figure below shows the network topology.
Start the openwrt containers
Start two new containers from the openwrt Docker image. You must use the --privileged option because we are starting a KVM VM inside each container and KVM requires elevated privileges. Each container is a separate router. Name the routers openwrt1 and openwrt2:
T420:~# docker run -d --privileged --name openwrt1 openwrt
T420:~# docker run -d --privileged --name openwrt2 openwrt
Get information about the running containers:
T420:~# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6695d10206a2 openwrt "/launch.py" About a minute ago Up About a minute (healthy) 22/tcp, 80/tcp, 830/tcp, 5000/tcp, 10000-10099/tcp, 161/udp openwrt2
2edcf17b07dd openwrt "/launch.py" About a minute ago Up About a minute (healthy) 22/tcp, 80/tcp, 830/tcp, 5000/tcp, 10000-10099/tcp, 161/udp openwrt1
To check the logs output by the container’s bootstrap script, use the docker logs command as shown below. I removed a lot of the output to make the listing shorter, but you can see the logs show the commands that were run as the router was started and configured by the bootstrap script.
T420:~# docker logs openwrt2
2019-03-13 22:45:07,899: vrnetlab DEBUG Creating overlay disk image
2019-03-13 22:45:07,917: vrnetlab DEBUG Starting vrnetlab OpenWRT
...cut text...
2019-03-13 22:45:23,522: vrnetlab DEBUG writing to serial console: mkdir -p /home/vrnetlab
2019-03-13 22:45:23,566: vrnetlab DEBUG writing to serial console: chown vrnetlab /home/vrnetlab
2019-03-13 22:45:23,566: launch INFO completed bootstrap configuration
2019-03-13 22:45:23,566: launch INFO Startup complete in: 0:00:15.642478
Configure the routers
The bootstrap script configured each OpenWRT router so users can log in to it via its LAN/management interface using SSH. To create a network we can test, we need to add more configuration to each node in the network.
Configure the router openwrt1
Run the vrcons command (from the vrnetlab.sh script) to use Telnet to log in to the console port of the router represented by container openwrt1. Run the command as follows:
T420:~# vrcons openwrt1
Trying 172.17.0.2...
Connected to 172.17.0.2.
Escape character is '^]'.
root@OpenWrt:/#
Check the active configuration of the LAN/management interface. We know from the OpenWRT documentation that the LAN interface is implemented on a bridge named br-lan.
root@OpenWrt:/# ifconfig br-lan
br-lan Link encap:Ethernet HWaddr 52:54:00:9C:BF:00
inet addr:10.0.0.15 Bcast:10.0.0.255 Mask:255.255.255.0
inet6 addr: fd1a:531:2061::1/60 Scope:Global
inet6 addr: fe80::5054:ff:fe9c:bf00/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:47 errors:0 dropped:0 overruns:0 frame:0
TX packets:57 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5472 (5.3 KiB) TX bytes:9886 (9.6 KiB)
The LAN interface’s IP address is 10.0.0.15, which is the address the router needs so it can reach the management port on its Docker container. However, the persistent configuration is found using the uci command (or by listing the file /etc/config/network), as follows:
root@OpenWrt:/# uci show network.lan
The command lists the following persistent configuration for the LAN interface:
network.lan=interface
network.lan.type='bridge'
network.lan.ifname='eth0'
network.lan.proto='static'
network.lan.ipaddr='192.168.1.1'
network.lan.netmask='255.255.255.0'
network.lan.ip6assign='60'
The LAN interface’s persistent configuration does not match the active configuration. If we restart the router VM or restart the networking service, the LAN interface’s IP address will revert to the persistent configuration of 192.168.1.1, which will break the router VM’s connection to the Docker container’s management port.
Fix the problem by setting the IP address using the uci utility:
root@OpenWrt:/# uci set network.lan.ipaddr='10.0.0.15'
Also, configure the WAN interface with a static IP address. Use the IP address 10.10.10.1. First, check the existing WAN interface configuration:
root@OpenWrt:/# uci show network.wan
This lists the configuration below:
network.wan=interface
network.wan.ifname='eth1'
network.wan.proto='dhcp'
Change the WAN interface configuration with the following uci set commands:
root@OpenWrt:/# uci set network.wan.proto='static'
root@OpenWrt:/# uci set network.wan.ipaddr='10.10.10.1'
root@OpenWrt:/# uci set network.wan.netmask='255.255.255.0'
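Before committing, you can optionally review the staged, uncommitted changes with the uci changes command. This is just a sanity check and is not required by the procedure:
root@OpenWrt:/# uci changes network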
Finally, commit the configuration changes so they are saved on the router’s filesystem:
root@OpenWrt:/# uci commit network
Activate the changes by restarting the network service:
root@OpenWrt:/# service network restart
Verify that the WAN interface eth1 has an IP address:
root@OpenWrt:/# ifconfig eth1
eth1 Link encap:Ethernet HWaddr 52:54:00:41:C3:01
inet addr:10.10.10.1 Bcast:10.10.10.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe41:c301/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:5968 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:2001262 (1.9 MiB)
Exit the router’s VM using the Ctrl-] key combination. Then, quit the telnet connection to the container:
CTRL-]
telnet> quit
Connection closed.
T420:~#
Configure the router openwrt2
Configure the virtual router openwrt2 the same way as shown above so its WAN interface has IP address 10.10.10.2/24. Log in to the router’s serial port as follows:
T420:~# vrcons openwrt2
root@OpenWrt:/#
Configure the router’s LAN and WAN interfaces:
root@OpenWrt:/# uci set network.lan.ipaddr='10.0.0.15'
root@OpenWrt:/# uci set network.wan.proto='static'
root@OpenWrt:/# uci set network.wan.ipaddr='10.10.10.2'
root@OpenWrt:/# uci set network.wan.netmask='255.255.255.0'
root@OpenWrt:/# uci commit network
root@OpenWrt:/# service network restart
Exit the router’s VM using the Ctrl-] key combination. Then, quit the container’s telnet connection:
CTRL-]
telnet> quit
Connection closed.
T420:~#
Connect routers together
Run the vrbridge command, which is a shell function from the vrnetlab.sh script. Connect interface 1 on openwrt1 to interface 1 on openwrt2:
T420:~# vrbridge openwrt1 1 openwrt2 1
Remember, the vrbridge command is a shell function that takes the parameters you give it and builds a command that runs a vr-xcon container. The command is then executed on the host system. For example, the vr-xcon command created by the vrbridge function we ran above is shown below.
T420:~# docker run -d --privileged --name bridge-openwrt1-1-openwrt2-1 --link openwrt1 --link openwrt2 vr-xcon --p2p openwrt1/1--openwrt2/1
You can see the container running using the docker ps command. The vrbridge function uses the router names and port numbers to create a name for the new container, bridge-openwrt1-1-openwrt2-1.
T420:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e1e0298ff66 vr-xcon "/xcon.py --p2p open…" 1 min ago Up About a minute bridge-openwrt1-1-openwrt2-1
6695d10206a2 openwrt "/launch.py" 2 hours ago Up 2 hours (healthy) 22/tcp, 80/tcp, 830/tcp, 5000/tcp, 10000-10099/tcp, 161/udp openwrt2
2edcf17b07dd openwrt "/launch.py" 2 hours ago Up 2 hours (healthy) 22/tcp, 80/tcp, 830/tcp, 5000/tcp, 10000-10099/tcp, 161/udp openwrt1
Test the connection by logging into openwrt1 and pinging openwrt2:
# vrcons openwrt1
root@OpenWrt:/# ping 10.10.10.2
PING 10.10.10.2 (10.10.10.2): 56 data bytes
64 bytes from 10.10.10.2: seq=0 ttl=64 time=1.255 ms
64 bytes from 10.10.10.2: seq=1 ttl=64 time=0.860 ms
64 bytes from 10.10.10.2: seq=2 ttl=64 time=1.234 ms
^C
--- 10.10.10.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.860/1.116/1.255 ms
root@OpenWrt:/#
Exit the router:
CTRL-]
telnet> quit
Connection closed.
T420:~#
You may add the --debug option to the vr-xcon command when you run it. The vrbridge shell function you previously ran does not include the debug option so, to demonstrate it, start another container running vr-xcon.
First, stop and delete the existing container bridge-openwrt1-1-openwrt2-1 as follows:
T420:~# docker rm -f bridge-openwrt1-1-openwrt2-1
Then use the following docker run command to start the new container with the --debug option enabled:
T420:~# docker run -d --privileged --name vr-xcon-1 --link openwrt1 --link openwrt2 vr-xcon --p2p openwrt1/1--openwrt2/1 --debug
This time, we named the container vr-xcon-1, just to make the command shorter. If you are building links one-by-one, you will create many containers running vr-xcon: one per link. In that case, I suggest you use more meaningful names like bridge-openwrt1-1-openwrt2-1 for each container running vr-xcon.
Run the ping command again from openwrt1 to openwrt2 and check the logs on the vr-xcon-1 container:
T420:~# docker logs vr-xcon-1
2019-03-14 05:19:23,446: xcon DEBUG 00172 bytes openwrt2/1 -> openwrt1/1
2019-03-14 05:19:23,884: xcon DEBUG 00102 bytes openwrt1/1 -> openwrt2/1
2019-03-14 05:19:23,884: xcon DEBUG 00102 bytes openwrt2/1 -> openwrt1/1
2019-03-14 05:19:24,884: xcon DEBUG 00102 bytes openwrt1/1 -> openwrt2/1
2019-03-14 05:19:24,885: xcon DEBUG 00102 bytes openwrt2/1 -> openwrt1/1
2019-03-14 05:19:25,884: xcon DEBUG 00102 bytes openwrt1/1 -> openwrt2/1
2019-03-14 05:19:25,885: xcon DEBUG 00102 bytes openwrt2/1 -> openwrt1/1
You can see that the instance of vr-xcon running in container vr-xcon-1 posts a log entry for each packet it handles. The --debug option and the docker logs command are useful for basic debugging, such as when you want to verify that the vr-xcon process is working properly.
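If you prefer to watch the traffic in real time instead of dumping the log afterwards, you can follow the container’s log output. This is standard Docker behaviour, not something specific to vrnetlab:
T420:~# docker logs -f vr-xcon-1
Press Ctrl-C to stop following the log.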
Stop the network emulation
When you are done, you may stop all running containers:
T420:~# docker stop $(docker ps -a -q)
If you wish to delete the network emulation scenario, including all changes to configuration files on the router VMs, use the prune command to delete all stopped containers and unused networks:
T420:~# docker system prune
Data persistence
Vrnetlab VMs save changes made in the router configuration files or to data files on their disks. These changes will persist in the qemu disk images after the container is stopped. For example, when you want to work on something else, you may stop the containers in your network emulation scenario and turn off your server. Then, when you are ready to start work again, you can start your server and start all the containers associated with your network emulation scenario, including all vr-xcon containers. Your configuration changes will still exist on the network nodes.
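For example, assuming the container names used in this post, a stop/start cycle that preserves the routers’ configurations could look like this:
T420:~# docker stop openwrt1 openwrt2 vr-xcon-1
Later, after restarting the server, start the same containers again:
T420:~# docker start openwrt1 openwrt2 vr-xcon-1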
However, the state saved in a node’s disk is lost when you delete the container. If you want to re-run the network emulation scenario, new containers start from the original Docker images.
To create a network emulation scenario that starts up in a fully configured state every time, you would need to write a complex launch script that pulls in configuration files and applies them to each node in the network when that node’s container is started.
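As a rough sketch of the idea, and assuming you have a saved OpenWRT network configuration file and know the router password set by the launch script (the file name and password below are placeholders, not part of vrnetlab), you could push a configuration to a node over its management address using ssh and sshpass, which was installed as a prerequisite earlier:
T420:~# OPENWRT1_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' openwrt1)
T420:~# cat saved-network-config | sshpass -p 'ROUTER_PASSWORD' ssh -o StrictHostKeyChecking=no root@$OPENWRT1_IP 'cat > /etc/config/network && service network restart'   # placeholder file name and password
A full solution would repeat this for every node, or bake the configuration files into a custom launch script, but the principle is the same.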
Conclusion
While vrnetlab is positioned mainly as a tool to support developers working with commercial routers, I think it is also usable by researchers who will create labs interactively, using vrnetlab’s command-line interface, provided they do not need data persistence on the node VMs.
I want to create more complex network emulation scenarios using open-source routers in vrnetlab. It seems possible to extend vrnetlab and add in support for a generic Linux VM running FRR, or some other routing software. I plan to try that in the future, when I have the time.