NOTE: This guide is being left in place for legacy reasons. I do not recommend deploying Ubuntu 14.04 for the vMX; instead I would suggest Ubuntu 16.04 with release 18.2 or 18.3 of the vMX.
This setup guide will help you set up and prepare a host running Ubuntu 14.04 to run the Juniper vMX product. This guide makes some assumptions about your environment:
- You want to use SR-IOV for better performance. This guide uses the exact steps I used to get SR-IOV working on Intel XL710 NICs.
- The Ubuntu release you are installing is 14.04. Other versions may have problems that I have not covered.
- The JunOS release for the vMX you are installing is 18.1R1.
- The host you are setting up has no data on it. This guide will take you through the Ubuntu install process, which means any existing data will be wiped. I do recommend starting from scratch; previous configurations may cause issues for you. Once you have completed this guide you can then proceed to the Juniper vMX KVM installation guide.
Warning: This guide must be followed exactly – some instructions sound like they can be skipped, but they really can't. I have had many issues trying to get the vMX to work properly on Ubuntu 14.04 and I had to use these exact steps to install and set up the OS. If, for example, you upgrade packages or use a newer kernel, you will experience issues.
Pre-Install Checks
Before you proceed with the install process, go into the BIOS for the server. I had to enable SR-IOV at the BIOS level as well as in the device configuration settings for the NICs. Also make sure that the CPU virtualization extensions are enabled.
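Once Ubuntu is installed you can sanity-check that those BIOS changes took effect; these are standard Linux checks, nothing vMX-specific:

egrep -c '(vmx|svm)' /proc/cpuinfo    # non-zero means the CPU virtualization extensions are visible
dmesg | grep -e DMAR -e IOMMU         # should show the IOMMU being initialised (needed for SR-IOV)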
Ubuntu Installation
I am using Ubuntu 14.04 LTS as my host. The CD image I am using to install it is named ubuntu-14.04.5-server-amd64.iso. The host OS install is as basic as it gets; pretty much the default options should be used for everything, with only a couple of things different from a standard install.
- Boot the CD image and select the “Install Ubuntu Server” option.
- Select the language/keyboard settings. It is currently a requirement for the vmx.sh script that the locale is en_US.UTF-8. See this page for details about that requirement.
- Configure the network settings. This will be used for SSH access; you can change this later if required.
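If you need to verify or correct the locale after installation, the standard Ubuntu locale tools can be used (a minimal sketch; locale-gen may not be needed if the locale already exists):

locale                           # show the current locale settings
locale-gen en_US.UTF-8           # generate the locale if it is missing
update-locale LANG=en_US.UTF-8   # make it the system default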
- Set up the user to access the server after installation. I selected no when asked if I wanted to encrypt my home directory.
- You may get a message about unmounting partitions in use. I got this as there was a previous Linux install. Select yes if you see the message.
- From the partition disks screen I selected “Guided – use entire disk”. I only had one disk (a RAID volume) so I selected that, then selected “Finish partitioning and write changes to disk”. WARNING: If you select LVM you may experience issues booting the server after the kernel is downgraded (a requirement for the vMX to run).
- Wait for the system to write the partition changes and install the OS.
- I just pressed Enter to continue past the HTTP proxy screen for the package manager; I do not need a proxy in my network.
- For the tasksel screen, ensure that “No automatic updates” is selected. If automatic updates are enabled, your package versions will not stay correct for the vMX and you will have a lot of issues.
- For the software selection screen, do not select any of the options, just continue.
- Select “Yes” to install the Grub bootloader.
The host installation is now complete. Once you select the final continue option in the installer, the server will reboot from the freshly installed OS and you can proceed to the next step.
Host Configuration
The host can now be configured to work with the vMX. Most of these steps were adapted from the Juniper vMX KVM Installation Guide.
These steps must be followed in order; if you do them out of order things may not work as expected.
Packages
Install the following packages:
apt-get update
apt-get install openssh-server ifenslave lldpd openntpd curl wget less screen dnsutils tcpdump \
  tcptraceroute ethtool libvirt-bin ubuntu-vm-builder sysstat virtinst bridge-utils qemu-kvm \
  python python-netifaces vnc4server libyaml-dev python-yaml numactl libparted0-dev libpciaccess-dev \
  libnuma-dev libyajl-dev libxml2-dev libglib2.0-dev libnl-3-dev python-pip python-dev libxslt-dev \
  libnl-route-3-dev vlan
Some of those packages are not required (lldpd, openntpd, curl, wget, less, screen, dnsutils, tcpdump, ifenslave, tcptraceroute) but I recommend installing them for easier management/troubleshooting.
Qemu
Edit the default settings for qemu-kvm in /etc/default/qemu-kvm. You will need to change these two settings:
- KSM_ENABLED: set this to 0 (the default is 1)
- KVM_HUGEPAGES: set this to 1 (the default is 0)
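After the edit, the two relevant lines of /etc/default/qemu-kvm should read:

KSM_ENABLED=0
KVM_HUGEPAGES=1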
Edit the modprobe.d file for qemu to disable APICv and PML. Edit /etc/modprobe.d/qemu-system-x86.conf and add enable_apicv=n pml=n to the end of the existing options line; the content of this file should then be only:
options kvm-intel nested=1 enable_apicv=n pml=n
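To confirm the options took effect after a reload of the kvm-intel module (or a reboot), the values can be read back from sysfs; note that the exact parameters exposed depend on your kernel version, so treat this as a rough check:

cat /sys/module/kvm_intel/parameters/nested         # should print Y
cat /sys/module/kvm_intel/parameters/enable_apicv   # should print N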
Grub
These Grub settings are very important for getting SR-IOV working and enabling huge pages. In my case, since I have 64GB of RAM allocated to the vFP, I have allocated 64 x 1G pages. You can adjust this to suit your environment.
Edit the grub configuration file, /etc/default/grub. Find these two lines:
GRUB_CMDLINE_LINUX_DEFAULT="" GRUB_CMDLINE_LINUX=""
Replace those two lines with:
GRUB_CMDLINE_LINUX_DEFAULT="processor.max_cstates=1 idle=poll pcie_aspm=off intel_iommu=on" GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=64"
After updating those settings, run update-grub so that they are saved. Reboot the server now to apply the settings that have been changed so far.
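After the reboot you can confirm the kernel picked up the new parameters using the standard /proc interfaces:

cat /proc/cmdline                      # should include intel_iommu=on and the hugepages options
grep HugePages_Total /proc/meminfo     # should report 64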
Kernel
The kernel needs to be downgraded to 3.13.0-32 for the vMX management script to work. The default kernel version for my install of Ubuntu was 4.4.0-31-generic #50~14.04.1-Ubuntu.
Install the old kernel and header files using apt-get:
apt-get install linux-firmware linux-image-3.13.0-32-generic linux-image-extra-3.13.0-32-generic linux-headers-3.13.0-32-generic
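If apt reports that it cannot find these versioned packages, you can check what your configured archive actually provides before going further (plain apt-cache, no assumptions beyond a stock 14.04 sources list):

apt-cache policy linux-image-3.13.0-32-generic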
After installation, the grub configuration should be changed to set this older kernel as the default boot version. Edit /etc/default/grub and look for the line that has GRUB_DEFAULT (set to 0 by default). Change this line so it looks like this:
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 3.13.0-32-generic"
You will need to run update-grub again to use this setting on future boots. Do not reboot the server at this stage; there is a high chance that your network will not come up until the drivers have been installed.
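When you do reboot later in this guide, you can confirm the downgrade took effect with:

uname -r    # should print 3.13.0-32-generic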
libvirt
To use the vFP in performance mode (which you will need, since you are setting up SR-IOV), the libvirt version must be 1.2.19.
- In the /tmp directory, download a copy of the libvirt source and extract it:
cd /tmp && wget http://libvirt.org/sources/libvirt-1.2.19.tar.gz && tar xzvf libvirt-1.2.19.tar.gz && cd libvirt-1.2.19
- Stop the existing libvirtd service:
service libvirt-bin stop
- Compile libvirt 1.2.19 and install it:
./configure --prefix=/usr --localstatedir=/ --with-numactl
make
make install
- Start the libvirtd service:
service libvirt-bin start
- Verify that the version is correct; both of these commands should return version 1.2.19:
/usr/sbin/libvirtd --version
/usr/bin/virsh --version
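For reference, the output should look roughly like this (output format assumed from stock libvirt builds):

# /usr/sbin/libvirtd --version
libvirtd (libvirt) 1.2.19
# /usr/bin/virsh --version
1.2.19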
Management Network
If you need to change the management network settings for the host, edit them now. You can update /etc/network/interfaces and set the appropriate settings. For my host, the ports are set up like this:
- The server has these interfaces:
Interface Name | Interface Type | Description |
---|---|---|
p4p1 | 1G Copper (Intel I350) | Not used |
p4p2 | 1G Copper (Intel I350) | Not used |
em1 | 10G SFP+ (Intel X710 – Onboard) | vMX SR-IOV NIC ge-0/0/0 |
em2 | 10G SFP+ (Intel X710 – Onboard) | vMX SR-IOV NIC ge-0/0/1 |
em3 | 10G SFP+ (Intel X710 – Onboard) | vMX SR-IOV NIC ge-0/0/2 |
em4 | 10G SFP+ (Intel X710 – Onboard) | vMX SR-IOV NIC ge-0/0/3 |
p5p1 | 10G SFP+ (Intel X710 – PCI Card) | Not used |
p5p2 | 10G SFP+ (Intel X710 – PCI Card) | Not used |
p5p3 | 10G SFP+ (Intel X710 – PCI Card) | Not used |
p5p4 | 10G SFP+ (Intel X710 – PCI Card) | Not used |
p7p1 | 10G SFP+ (Intel X710 – PCI Card) | Host Management (bond0 slave) |
p7p2 | 10G SFP+ (Intel X710 – PCI Card) | Host Management (bond0 slave) |
p7p3 | 10G SFP+ (Intel X710 – PCI Card) | vMX SR-IOV NIC ge-0/0/4 |
p7p4 | 10G SFP+ (Intel X710 – PCI Card) | vMX SR-IOV NIC ge-0/0/5 |
- The two host management ports are set up with LACP and are slaves to bond0.
- The management bond0 interface is a trunk port which carries VLAN 50 for management of the server itself as well as VLAN 51 for management of the vMX router.
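Depending on your install, the bonding and 8021q kernel modules may need to be loaded at boot for the bond and VLAN interfaces to come up. The ifenslave and vlan packages handle this in many cases, but adding the modules to /etc/modules is a safe belt-and-braces step (an assumption based on stock 14.04 behaviour):

echo bonding >> /etc/modules
echo 8021q >> /etc/modules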
With this network setup, my /etc/network/interfaces file looks like this:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# 1GB Copper Ports
## PCI-E Card 4 - 2 x 10/100/1000M Ports - Intel I350
### Port 1 - Not assigned
auto p4p1
iface p4p1 inet manual
### Port 2 - Not assigned
auto p4p2
iface p4p2 inet manual

# 10GB Ports
## Built in 4 x 10GB SFP+ Ports - Intel X710
### Port 1 - vMX SR-IOV ge-0/0/0
auto em1
iface em1 inet manual
### Port 2 - vMX SR-IOV ge-0/0/1
auto em2
iface em2 inet manual
### Port 3 - vMX SR-IOV ge-0/0/2
auto em3
iface em3 inet manual
### Port 4 - vMX SR-IOV ge-0/0/3
auto em4
iface em4 inet manual

## PCI-E Card Slot 5 - 4 x 10GB SFP+ - Intel X710
### Port 1 - Not assigned
auto p5p1
iface p5p1 inet manual
### Port 2 - Not assigned
auto p5p2
iface p5p2 inet manual
### Port 3 - Not assigned
auto p5p3
iface p5p3 inet manual
### Port 4 - Not assigned
auto p5p4
iface p5p4 inet manual

## PCI-E Card Slot 7 - 4 x 10GB SFP+ - Intel X710
### Port 1 - Host Management - LACP Slave
auto p7p1
iface p7p1 inet manual
    bond-master bond0
### Port 2 - Host Management - LACP Slave
auto p7p2
iface p7p2 inet manual
    bond-master bond0
### Port 3 - vMX SR-IOV ge-0/0/4
auto p7p3
iface p7p3 inet manual
### Port 4 - vMX SR-IOV ge-0/0/5
auto p7p4
iface p7p4 inet manual

# Bonded Ports
auto bond0
iface bond0 inet manual
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate fast
    bond-slaves none

# VLANs
## VLAN 50 - Host Management
### IPv4
auto bond0.50
iface bond0.50 inet static
    address 192.168.1.1
    netmask 255.255.255.0
    gateway 192.168.1.254
    dns-nameservers 192.168.1.254
    dns-search mydomain.com
### IPv6
iface bond0.50 inet6 static
    address ffff::1
    netmask 64
    gateway ffff::ffff

## VLAN 51 - vMX Management
auto bond0.51
iface bond0.51 inet static
    address 192.168.2.1
    netmask 255.255.255.0
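Once networking is up, the bond state can be checked through the standard bonding proc interface; both slaves should show up with an aggregator ID:

cat /proc/net/bonding/bond0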
Host Reboot
At this stage you will need to reboot the host to apply the various settings that have been changed. Once the host has come back up, the host OS is prepared and you are ready to start the vMX installation.