NOTE: This guide is being left in place for legacy reasons. I do not recommend deploying Ubuntu 14.04 for the vMX; instead I would suggest Ubuntu 16.04 with release 18.2 or 18.3 of the vMX.
This setup guide will help you set up and prepare a host running Ubuntu 14.04 to run the Juniper vMX product. This guide makes some assumptions about your environment:
Warning: This guide must be followed exactly; some instructions sound like they can be skipped, but they really can't. I have had many issues trying to get the vMX to work properly on Ubuntu 14.04, and I had to use these exact steps to install and set up the OS. If you do package upgrades, for example, or use a newer kernel, you will experience issues.
Before you proceed with the install process, go into the BIOS for the server. I had to enable SR-IOV at the BIOS level as well as in the device configuration settings for the NICs. Also make sure that the CPU has the virtualisation extensions enabled.
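As a quick sanity check from the host OS (before and after changing the BIOS settings), you can look for the CPU virtualisation flags. This is a generic Linux check, not anything vMX-specific:

```shell
# Look for Intel VT-x (vmx) or AMD-V (svm) CPU flags; if neither is
# present, the virtualisation extensions are disabled in the BIOS.
if egrep -q '(vmx|svm)' /proc/cpuinfo 2>/dev/null; then
  msg="virtualisation extensions present"
else
  msg="virtualisation extensions missing - check BIOS settings"
fi
echo "$msg"
```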
I am using Ubuntu 14.04 LTS as my host. The CD image I am using to install it is named ubuntu-14.04.5-server-amd64.iso. The host OS install is as basic as it gets; pretty much the default options should be used for everything, with only a couple of things different from a standard install.
The vmx.sh script requires that the locale is set to en_US.UTF-8, so make sure that locale is selected during the install. See this page for details about that requirement.
The host installation is now complete. Once you select the final continue option in the installer, the server will reboot from the freshly installed OS and you can proceed to the next step.
The host can now be configured to work with the vMX. Most of these steps were adapted from the Juniper vMX KVM Installation Guide.
These steps must be followed in order; if you do them out of order things may not work as expected.
Install the following packages:
```
apt-get update
apt-get install openssh-server ifenslave lldpd openntpd curl wget less screen dnsutils tcpdump \
  tcptraceroute ethtool libvirt-bin ubuntu-vm-builder sysstat virtinst bridge-utils qemu-kvm libvirt-bin \
  python python-netifaces vnc4server libyaml-dev python-yaml numactl libparted0-dev libpciaccess-dev \
  libnuma-dev libyajl-dev libxml2-dev libglib2.0-dev libnl-3-dev python-pip python-dev libxml2-dev libxslt-dev \
  libnl-route-3-dev vlan
```
Some of those packages are not required (lldpd, openntpd, curl, wget, less, screen, dnsutils, tcpdump, ifenslave, tcptraceroute) but I recommend installing them for easier management/troubleshooting.
Edit the default settings for qemu-kvm in /etc/default/qemu-kvm. You will need to change these two settings:

- KSM_ENABLED: Set this to 0
- KVM_HUGEPAGES: Set this to 1
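After the edit, the two relevant lines of /etc/default/qemu-kvm should read as follows (these are the values from the Juniper KVM installation guide: KSM disabled, hugepage support enabled):

```
KSM_ENABLED=0
KVM_HUGEPAGES=1
```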
Edit the modprobe.d file for qemu to disable APICv and PML. Edit /etc/modprobe.d/qemu-system-x86.conf and add enable_apicv=n pml=n to the end; the content of this file should only be:

```
options kvm-intel nested=1 enable_apicv=n pml=n
```
These grub settings are very important to get SR-IOV working and to enable huge pages. In my case, since I have 64GB of RAM allocated to the vFP, I have allocated 64 x 1G pages. You can adjust this to suit your environment.
Edit the grub configuration file, /etc/default/grub. Find the two lines that set GRUB_CMDLINE_LINUX_DEFAULT and GRUB_CMDLINE_LINUX and replace them with:

```
GRUB_CMDLINE_LINUX_DEFAULT="processor.max_cstates=1 idle=poll pcie_aspm=off intel_iommu=on"
GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=64"
```
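The hugepages count is simply the RAM you want available to the vFP divided by the hugepage size. A small sketch of that arithmetic (the 64 GB figure is from my environment; substitute your own):

```shell
# Reserve enough 1G hugepages to back the vFP's memory allocation.
vfp_mem_gb=64      # RAM allocated to the vFP in my setup; adjust for yours
hugepage_sz_gb=1   # using 1G hugepages, as set in the grub config above
hugepages=$((vfp_mem_gb / hugepage_sz_gb))
echo "GRUB_CMDLINE_LINUX=\"default_hugepagesz=1G hugepagesz=1G hugepages=${hugepages}\""
```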
After updating those settings, run update-grub so that they are saved. Reboot the server now to apply the settings that have been changed so far.
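After the reboot you can confirm that the kernel actually reserved the pages; the HugePages_Total counter in /proc/meminfo should match the hugepages= value from the grub configuration:

```shell
# Show the hugepage counters; HugePages_Total should equal the number
# requested on the kernel command line (64 in my case).
grep -i '^hugepages' /proc/meminfo
total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
echo "HugePages_Total is ${total}"
```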
The kernel needs to be downgraded to 3.13.0-32 for the vMX management script to work; the default kernel version for my install of Ubuntu was newer than this. Install the old kernel and headers files using:

```
apt-get install linux-firmware linux-image-3.13.0-32-generic linux-image-extra-3.13.0-32-generic linux-headers-3.13.0-32-generic
```
After installation, the grub configuration should be changed to set this older kernel as the default boot version. Edit /etc/default/grub and look for the line that has GRUB_DEFAULT (set to 0 by default). Change this line so it looks like this:

```
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 3.13.0-32-generic"
```
You will need to run update-grub again to use this setting on future boots. Do not reboot the server at this stage; there is a high chance that your network will not come up until the drivers have been installed.
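When you do eventually reboot, it is worth confirming that grub actually booted the pinned kernel rather than the newer default. A minimal sketch of that check, assuming the expected version is 3.13.0-32-generic:

```shell
# Compare the running kernel against the version pinned in grub.
expected="3.13.0-32-generic"
running="$(uname -r)"
if [ "$running" = "$expected" ]; then
  echo "kernel OK: $running"
else
  echo "kernel mismatch: running $running, expected $expected"
fi
```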
To use the vFP with performance mode (since you are setting up SR-IOV, you will need this), the libvirt version must be 1.2.19.
Change to the /tmp directory, then download a copy of the libvirt source and extract it:

```
cd /tmp && wget http://libvirt.org/sources/libvirt-1.2.19.tar.gz && tar xzvf libvirt-1.2.19.tar.gz && cd libvirt-1.2.19
```
Stop the libvirt service, build and install libvirt from the extracted source, then start the service again:

```
service libvirt-bin stop
./configure --prefix=/usr --localstatedir=/ --with-numactl
make
make install
service libvirt-bin start
```
Verify that both the daemon and the client now report version 1.2.19:

```
/usr/sbin/libvirtd --version
/usr/bin/virsh --version
```
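If you want to script this verification, the check is a simple string comparison. A small sketch (the hard-coded `reported` value stands in for the real command output):

```shell
# Hypothetical check: compare the reported libvirt version against the
# version that was just built from source.
wanted="1.2.19"
reported="1.2.19"   # in practice: $(/usr/sbin/libvirtd --version | awk '{print $NF}')
if [ "$reported" = "$wanted" ]; then
  echo "libvirt version OK"
else
  echo "unexpected libvirt version: $reported"
fi
```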
If you need to change the management network settings for the host, edit them now. You can update /etc/network/interfaces and set the appropriate settings. For my host, the ports are set up like this:
| Interface Name | Interface Type | Description |
| --- | --- | --- |
| p4p1 | 1G Copper (Intel I350) | Not used |
| p4p2 | 1G Copper (Intel I350) | Not used |
| em1 | 10G SFP+ (Intel X710 - Onboard) | vMX SR-IOV NIC ge-0/0/0 |
| em2 | 10G SFP+ (Intel X710 - Onboard) | vMX SR-IOV NIC ge-0/0/1 |
| em3 | 10G SFP+ (Intel X710 - Onboard) | vMX SR-IOV NIC ge-0/0/2 |
| em4 | 10G SFP+ (Intel X710 - Onboard) | vMX SR-IOV NIC ge-0/0/3 |
| p5p1 | 10G SFP+ (Intel X710 - PCI Card) | Not used |
| p5p2 | 10G SFP+ (Intel X710 - PCI Card) | Not used |
| p5p3 | 10G SFP+ (Intel X710 - PCI Card) | Not used |
| p5p4 | 10G SFP+ (Intel X710 - PCI Card) | Not used |
| p7p1 | 10G SFP+ (Intel X710 - PCI Card) | Host Management (bond0 slave) |
| p7p2 | 10G SFP+ (Intel X710 - PCI Card) | Host Management (bond0 slave) |
| p7p3 | 10G SFP+ (Intel X710 - PCI Card) | vMX SR-IOV NIC ge-0/0/4 |
| p7p4 | 10G SFP+ (Intel X710 - PCI Card) | vMX SR-IOV NIC ge-0/0/5 |
The bond0 interface is a trunk port which carries VLAN 50 for management of the server itself as well as VLAN 51 for management of the vMX router. With this network setup, my /etc/network/interfaces file looks like this:
```
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# 1GB Copper Ports
## PCI-E Card 4 - 2 x 10/100/1000M Ports - Intel I350
### Port 1 - Not assigned
auto p4p1
iface p4p1 inet manual
### Port 2 - Not assigned
auto p4p2
iface p4p2 inet manual

# 10GB Ports
## Built in 4 x 10GB SFP+ Ports - Intel X710
### Port 1 - vMX SR-IOV ge-0/0/0
auto em1
iface em1 inet manual
### Port 2 - vMX SR-IOV ge-0/0/1
auto em2
iface em2 inet manual
### Port 3 - vMX SR-IOV ge-0/0/2
auto em3
iface em3 inet manual
### Port 4 - vMX SR-IOV ge-0/0/3
auto em4
iface em4 inet manual

## PCI-E Card Slot 5 - 4 x 10GB SFP+ - Intel X710
### Port 1 - Not assigned
auto p5p1
iface p5p1 inet manual
### Port 2 - Not assigned
auto p5p2
iface p5p2 inet manual
### Port 3 - Not assigned
auto p5p3
iface p5p3 inet manual
### Port 4 - Not assigned
auto p5p4
iface p5p4 inet manual

## PCI-E Card Slot 7 - 4 x 10GB SFP+ - Intel X710
### Port 1 - Host Management - LACP Slave
auto p7p1
iface p7p1 inet manual
    bond-master bond0
### Port 2 - Host Management - LACP Slave
auto p7p2
iface p7p2 inet manual
    bond-master bond0
### Port 3 - vMX SR-IOV ge-0/0/4
auto p7p3
iface p7p3 inet manual
### Port 4 - vMX SR-IOV ge-0/0/5
auto p7p4
iface p7p4 inet manual

# Bonded Ports
auto bond0
iface bond0 inet manual
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate fast
    bond-slaves none

# VLANs
## VLAN 50 - Host Management
### IPv4
auto bond0.50
iface bond0.50 inet static
    address 192.168.1.1
    netmask 255.255.255.0
    gateway 192.168.1.254
    dns-nameservers 192.168.1.254
    dns-search mydomain.com
### IPv6
iface bond0.50 inet6 static
    address ffff::1
    netmask 64
    gateway ffff::ffff

## VLAN 51 - vMX Management
auto bond0.51
iface bond0.51 inet static
    address 192.168.2.1
    netmask 255.255.255.0
```
At this stage you will need to reboot the host to apply various settings that have been changed. Once the host has been rebooted, the host OS is now prepared and you are ready to start the vMX installation.