Juniper vMX – Installation on a KVM host

NOTE: I originally published this page in 2018; instructions may now be out of date.

These steps will help you install a Juniper vMX device on KVM. The steps assume you have already deployed a KVM host with all of the requirements that the vMX needs. If you have not already done this, or you are deploying a new KVM host, you can follow the instructions I have written here:

  • CentOS 7.4 – I recommend using this, as the Ubuntu support is patchy at best and the requirements for Ubuntu are also very old.
  • Ubuntu 14.04

I highly recommend that you deploy the host using the above instructions before continuing – deploying the vMX on a host that does not have the exact configuration required can be a frustrating experience.

For my vMX installations I am using SR-IOV for best performance, so the guide assumes that the interfaces assigned to the vFP are all SR-IOV. You will need to adjust the instructions depending on the vMX release that you are installing; I have extracted the vMX files to /home/vMX-[RELEASE] (in this case /home/vMX-18.1R1) so that when I upgrade the vMX I can keep the old copies in an obvious place.
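Before starting, you can confirm that the host has SR-IOV virtual functions available (em1 here is an example – substitute your own NIC names; the VF devices will only show in lspci once virtual functions have actually been created):

cat /sys/class/net/em1/device/sriov_totalvfs
lspci | grep -i "Virtual Function"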

Extract Files

  1. Download a copy of the vMX .tgz file for KVM onto the host. Since I am deploying vMX release 18.1R1, the file name is vmx-bundle-18.1R1.9.tgz. The file can be downloaded from the Juniper vMX release page. This guide assumes that you have the .tgz file in /home/.
  2. Extract the .tgz file. The files will be extracted to vmx/ – I recommend renaming this directory to match the release as well.
cd /home
tar zxvf vmx-bundle-18.1R1.9.tgz
mv vmx vMX-18.1R1
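The extracted directory should contain, among other things, the config/, drivers/ and images/ directories and the vmx.sh script, all of which are used later in this guide:

ls /home/vMX-18.1R1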

Intel X710/XL710 Drivers – Ubuntu Only

Note: These steps only apply to Ubuntu 14.04 hosts. Do not do this for a CentOS host. This step also assumes that you are using an Intel X710 or XL710 NIC.

There are two drivers that need to be installed – the Intel i40evf driver and the Intel i40e driver. The i40evf driver needs to be downloaded from the Intel website; the i40e driver is included with the vMX. The i40e driver included with the vMX is patched to make certain features work when using SR-IOV – without it, things like 802.3ad frames will not be passed through to the vMX.

  1. Install the i40e driver included with the vMX first:
cd /home/vMX-18.1R1/drivers/i40e-2.1.26/src
make install

If you are installing a different release of the vMX, the drivers folder will most likely be different (e.g. for 17.4R1 the i40e driver is located in drivers/i40e-1.3.46/src instead).
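If the build succeeds, you can confirm that the patched module is the one installed by checking the module information (the version should match the bundled driver – 2.1.26 in this case):

modinfo i40e | grep -E "^(filename|version)"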

Make sure that this step is successful. You may get errors when compiling the driver; as an example, I got this error when deploying vMX release 17.4R1:

/home/vMX-17.4R1/drivers/i40e-1.3.46/src/i40e/i40e_configfs.c:205:2: error: unknown field ‘show_attribute’ specified in initializer
  .show_attribute  = i40e_cfgfs_vsi_attr_show,
  ^
/home/vMX-17.4R1/drivers/i40e-1.3.46/src/i40e/i40e_configfs.c:205:2: warning: initialization from incompatible pointer type [enabled by default]
/home/vMX-17.4R1/drivers/i40e-1.3.46/src/i40e/i40e_configfs.c:205:2: warning: (near initialization for ‘i40e_cfgfs_vsi_item_ops.allow_link’) [enabled by default]
/home/vMX-17.4R1/drivers/i40e-1.3.46/src/i40e/i40e_configfs.c:206:2: error: unknown field ‘store_attribute’ specified in initializer
  .store_attribute = i40e_cfgfs_vsi_attr_store,
  ^
/home/vMX-17.4R1/drivers/i40e-1.3.46/src/i40e/i40e_configfs.c:206:2: warning: initialization from incompatible pointer type [enabled by default]
/home/vMX-17.4R1/drivers/i40e-1.3.46/src/i40e/i40e_configfs.c:206:2: warning: (near initialization for ‘i40e_cfgfs_vsi_item_ops.drop_link’) [enabled by default]
/home/vMX-17.4R1/drivers/i40e-1.3.46/src/i40e/i40e_configfs.c:300:2: error: unknown field ‘show_attribute’ specified in initializer
  .show_attribute = i40e_cfgfs_group_attr_show,
  ^
/home/vMX-17.4R1/drivers/i40e-1.3.46/src/i40e/i40e_configfs.c:300:2: warning: initialization from incompatible pointer type [enabled by default]
/home/vMX-17.4R1/drivers/i40e-1.3.46/src/i40e/i40e_configfs.c:300:2: warning: (near initialization for ‘i40e_cfgfs_group_item_ops.allow_link’) [enabled by default]
make[2]: *** [/home/vMX-17.4R1/drivers/i40e-1.3.46/src/i40e/i40e_configfs.o] Error 1
make[1]: *** [_module_/home/vMX-17.4R1/drivers/i40e-1.3.46/src/i40e] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-4.4.0-31-generic'
make: *** [i40e/i40e.ko] Error 2

If you do get errors like this, make sure that the kernel version currently running is supported by Juniper – check the “Minimum Hardware and Software Requirements” page for the vMX release you are deploying to see which kernel is required. If you have the required kernel installed but not running, you will need to set GRUB to boot the older kernel and reboot the host before continuing. Do not continue until this step works.
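To see which kernel is currently running and which kernels are installed on an Ubuntu host (the GRUB menu entry you need will vary, so check it against your own grub.cfg before changing GRUB_DEFAULT in /etc/default/grub and running update-grub):

uname -r
dpkg --list | grep linux-image
grep menuentry /boot/grub/grub.cfg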

  2. Download a copy of the Intel i40evf driver from Intel and extract it:
cd /usr/local/src
wget https://downloadmirror.intel.com/26003/eng/i40evf-1.4.15.tar.gz
tar zxvf i40evf-1.4.15.tar.gz
  3. Install the i40evf driver:
cd i40evf-1.4.15/src
make install
  4. Update the init image so that the new driver is included:
update-initramfs -u -k `uname -r`
  5. Activate the new driver (alternatively, reboot the host):
rmmod i40e
modprobe i40e
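You can verify that the interfaces are now using the freshly built drivers with ethtool (em1 here is an example – substitute one of your own interface names):

ethtool -i em1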

Remove default bridge network – Ubuntu Only

Note: These steps only apply to Ubuntu 14.04 hosts. Do not do this for a CentOS host.

Remove the default bridge network that libvirt creates, as it can cause issues starting the VMs for the vMX:

ifconfig virbr0 down
brctl delbr virbr0
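Note that libvirt will recreate virbr0 when the default network is started again (e.g. after a reboot); optionally, you can disable the default network so the removal persists:

virsh net-destroy default
virsh net-autostart default --disable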

vMX Configuration File

The Juniper vMX configuration file needs to be created. The configuration file is in YAML format and defines which interfaces on the host to bind to the vMX and what resources the vMX will have. The vMX documentation from Juniper (available here) contains some sample configurations.

  1. Move the default configuration file out of the way; it's easier to copy the entire config from here and adjust as needed:
cd /home/vMX-18.1R1/config
mv vmx.conf vmx.conf.dist
  2. Edit vmx.conf in your favourite editor. You will need to change a few values from the sample configuration file I have provided:
  • Update the routing-engine-image, routing-engine-hdd and forwarding-engine-image paths under HOST: to the correct file locations – note that the sample below shows the 17.4R1 paths, and the RE image and vFPC image names will be different for other releases (including 18.1R1).
  • Set the appropriate host-management-interface. This is the interface that the control plane and forwarding plane management interfaces will be bound to, which makes the initial setup and troubleshooting more convenient.
  • Set the appropriate number of vCPUs for the control plane. In my case, 3 vCPUs is more than enough.
  • Set the appropriate amount of RAM for the control plane. In my case 12GB is more than enough.
  • Set the appropriate number of vCPUs for the forwarding plane. There are specific requirements from Juniper for this (available here); make sure that you follow the requirements for your use case. Since I am using performance mode with SR-IOV I have assigned 17 vCPUs.
  • The number of vCPUs cannot be overcommitted. If you have a server with 24 logical CPUs (dual CPU, each with 6 cores/12 threads, for example), the total number of vCPUs you can allocate to the control plane and forwarding plane combined cannot exceed 20 (4 CPUs must be left for the host). You can check the logical CPU count as shown after this list.
  • Set the correct IP details for the control plane and forwarding plane management interfaces.
  • The MAC addresses in the configuration file can be any MAC addresses that you like – they must be unique though.
  • Set the correct interfaces under JUNOS_DEVICES:. These are the interfaces being assigned to the forwarding plane (revenue ports). All of the interfaces I am adding are 10G SR-IOV interfaces with the MTU set to 9400 (the maximum I can use). The virtual-function value should not be changed unless you have a specific requirement to do so.
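Before settling on the vCPU counts, you can check how many logical CPUs the host has (the control plane, forwarding plane and host must all fit within this number):

lscpu | grep "^CPU(s):"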

Here is the sample configuration you can use:

##############################################################
#
#  vmx.conf
#  Config file for vmx on the hypervisor.
#  Uses YAML syntax.
#  Leave a space after ":" to specify the parameter value.
#
##############################################################

---
#Configuration on the host side - management interface, VM images etc.
HOST:
    identifier                : vmx1   # Maximum 6 characters
    host-management-interface : bond0.51
    routing-engine-image      : "/home/vMX-17.4R1/images/junos-vmx-x86-64-17.4R1.16.qcow2"
    routing-engine-hdd        : "/home/vMX-17.4R1/images/vmxhdd.img"
    forwarding-engine-image   : "/home/vMX-17.4R1/images/vFPC-20171213.img"

---
#External bridge configuration

BRIDGES:
    - type  : external
      name  : br-ext                  # Max 10 characters

---
#vRE VM parameters
CONTROL_PLANE:
    vcpus       : 4
    memory-mb   : 12288
    console_port: 8601

    interfaces  :
      - type      : static
        ipaddr    : 172.25.0.2
        macaddr   : "0A:00:DD:C3:FD:0E"

---
#vPFE VM parameters
FORWARDING_PLANE:
    memory-mb   : 65536
    vcpus       : 32
    console_port: 8602
    device-type : sriov

    interfaces  :
      - type      : static
        ipaddr    : 172.25.0.3
        macaddr   : "0A:00:DD:C3:FD:10"

---
#Interfaces
JUNOS_DEVICES:
   - interface            : ge-0/0/0
     port-speed-mbps      : 10000
     nic                  : em1
     mtu                  : 9400
     virtual-function     : 0
     mac-address          : "02:06:0A:AE:EA:A1"
     description          : "ge-0/0/0 connects to em1"

   - interface            : ge-0/0/1
     port-speed-mbps      : 10000
     nic                  : em2
     mtu                  : 9400
     virtual-function     : 0
     mac-address          : "02:06:0A:AE:EA:A2"
     description          : "ge-0/0/1 connects to em2"

   - interface            : ge-0/0/2
     port-speed-mbps      : 10000
     nic                  : em3
     mtu                  : 9400
     virtual-function     : 0
     mac-address          : "02:06:0A:AE:EA:A3"
     description          : "ge-0/0/2 connects to em3"


   - interface            : ge-0/0/3
     port-speed-mbps      : 10000
     nic                  : em4
     mtu                  : 9400
     virtual-function     : 0
     mac-address          : "02:06:0A:AE:EA:A4"
     description          : "ge-0/0/3 connects to em4"

   - interface            : ge-0/0/4
     port-speed-mbps      : 10000
     nic                  : p7p3
     mtu                  : 9400
     virtual-function     : 0
     mac-address          : "02:06:0A:AE:EA:A5"
     description          : "ge-0/0/4 connects to p7p3"

   - interface            : ge-0/0/5
     port-speed-mbps      : 10000
     nic                  : p7p4
     mtu                  : 9400
     virtual-function     : 0
     mac-address          : "02:06:0A:AE:EA:A6"
     description          : "ge-0/0/5 connects to p7p4"

vMX Installation

The vMX can now be installed. The install process will also start the vCP and vFP VMs.

Run the installer with verbose logging just in case you run into issues:

cd /home/vMX-18.1R1
./vmx.sh -lv --install

The vMX should start up; once the vmx.sh script has finished running you can access the consoles of the vCP and vFP. The default username for the routing engine is root with no password. After the vCP finishes booting (it can take a couple of minutes) you should also be able to reach the management IP that is configured. The management IP uses the fxp0 interface.
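If you need console access, you can telnet to the console ports defined in vmx.conf – 8601 for the vCP and 8602 for the vFP in the sample above:

telnet localhost 8601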

After the VM has booted, log in to the console and apply the minimal configuration needed to get SSH access (the full sequence is shown after this list):

  1. Set the root password: set system root-authentication plain-text-password
  2. Enable SSH with root login: set system services ssh root-login allow
  3. Commit the configuration: commit

You can then SSH to the vCP management IP and continue the configuration. Configuring via the console will also work (in which case the above 3 steps are not required), but there are issues when pasting in large amounts of configuration that do not occur when using SSH. With release 18.1R1 I experienced an issue after doing the steps above where I lost reachability to the management IP (I could no longer ping or SSH to it) – I had to stop and start the VM for it to work again.
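For reference, the three steps above look like this from the Junos CLI (you will be prompted for the new root password):

configure
set system root-authentication plain-text-password
set system services ssh root-login allow
commit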

I also recommend applying these settings (a consolidated example follows the list):

  • Apply the license to the VM. The default trial license will only allow 10Mbit/s of throughput.
  • Enable performance mode: set chassis fpc 0 performance-mode
  • Set the number of ports for the FPC to the number of actual interfaces assigned to the vFP in the vmx.conf file. Eg. if you have 6 interfaces assigned to the vFP: set chassis fpc 0 pic 0 number-of-ports 6
  • Set the loopback device count to 1: set chassis fpc 0 loopback-device-count 1
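As a worked example, the recommended settings above look like this from configuration mode, using the 6 revenue ports defined in the sample vmx.conf:

set chassis fpc 0 performance-mode
set chassis fpc 0 pic 0 number-of-ports 6
set chassis fpc 0 loopback-device-count 1
commit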

Starting on boot

To start the vMX on boot you will need to add either an init script (Ubuntu 14.04) or a systemd service (CentOS 7.4). I have included instructions and examples for both here.
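As a minimal sketch for CentOS 7.4 (the unit file name and path here are hypothetical, and this assumes the vmx.sh --start and --stop options behave as in this release), a systemd service could look something like this:

[Unit]
Description=Juniper vMX
After=network.target libvirtd.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/vMX-18.1R1
ExecStart=/home/vMX-18.1R1/vmx.sh --start
ExecStop=/home/vMX-18.1R1/vmx.sh --stop

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/vmx.service and enable it with systemctl enable vmx.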
