NOTE: I originally published this page in 2018; instructions may now be out of date.
This setup guide will help you set up and prepare a host running CentOS to run the Juniper vMX product. This guide makes some assumptions about your environment:
- You want to use SR-IOV for better performance. This guide uses the exact steps I used to get SR-IOV working on Intel XL710 NICs.
- The CentOS release to install is version 7.4. Other versions may have problems that I have not covered.
- The JunOS release for the vMX you are installing is 18.1R1.
- The host you are setting up has no data on it. This guide will take you through the CentOS install process, which means any existing data will be wiped. I recommend starting from scratch, as previous configurations may cause issues for you. Once you have completed this guide you can then proceed to the Juniper vMX KVM installation guide.
Pre-Install Checks
Before you proceed with the install process, go into the BIOS for the server. I had to enable SR-IOV at the BIOS level as well as in the device configuration settings for the NICs. Also make sure that the CPU virtualisation extensions are enabled.
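Once you have any Linux environment booted on the box (a live CD works), you can sanity-check that the virtualisation extensions are actually visible to the kernel. This check is my own addition, not part of the Juniper documentation:

```shell
# Count CPU threads advertising hardware virtualisation support
# (vmx = Intel VT-x, svm = AMD-V). A result of 0 means the
# extensions are disabled in the BIOS or not present.
grep -E -c 'vmx|svm' /proc/cpuinfo || true
```

The `|| true` is there because `grep -c` exits non-zero when the count is 0.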
CentOS Installation
I am using CentOS 7.4 as my host. The CD image I am using to install it is named CentOS-7-x86_64-DVD-1708.iso, which can be downloaded from the CentOS website.
- Boot the CD image.
- Select the language/keyboard settings. The vmx.sh script currently requires the locale to be en_US.UTF-8; see this page for details about that requirement.
- Go to “Network & Host Name” and set the appropriate hostname/IP address for management of the host OS. If you are going to configure bonding or VLANs for the host OS, I recommend doing it now.
- Go to “Date & Time” and configure the server's time zone. NTP should also be enabled at the top right.
- Go to “Security Policy” and set “Apply security policy” to off.
- Go to “Software Selection” and select the “virtualization host” option in the left menu. On the right menu select the “virtualization platform” option.
- Go to “Installation Destination”.
- Select the disk to install CentOS on. A disk should be selected by default; since my server only has one disk presented by the RAID controller, the correct one was already selected. If the disk has no partitions on it, select “Automatically configure partitioning”. If the disk already has partitions on it (e.g. from a previous OS install), follow these sub-steps instead:
  - Select “I will configure partitioning” and click done.
  - Delete the existing partitions (assuming there is no data you need to keep).
  - Click the “Click here to automatically create them” link at the top. This will create the required partitions.
- Click done and accept the partition changes if prompted.
- Go to “Begin Installation”. The installation will now start.
- While you wait for the install to complete, set the root password. You do not need to add any user accounts.
After a while the installation will complete; you can then reboot to start configuring the host OS.
Host Configuration
Many of these steps were adapted from the Juniper vMX KVM Installation Guide. These steps should be done in order as certain steps rely on previous steps.
Default Services
By default, there will be a couple of services running that you can stop and disable so they do not start on future boots. First, stop and disable wpa_supplicant:

```shell
systemctl stop wpa_supplicant
systemctl disable wpa_supplicant
```
Since this host isn’t accessible externally in my case, I also stop and disable firewalld:

```shell
systemctl stop firewalld
systemctl disable firewalld
```
Postfix isn’t required either:
```shell
systemctl stop postfix
systemctl disable postfix
```
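A quick way to confirm that all three services ended up disabled is to loop over them with `systemctl is-enabled`. This check is my own addition, not from the Juniper guide:

```shell
# Print the enablement state of each service; a service that was
# never installed (e.g. postfix on a minimal image) shows "not present".
for svc in wpa_supplicant firewalld postfix; do
  state=$(systemctl is-enabled "$svc" 2>/dev/null || true)
  echo "$svc: ${state:-not present}"
done
```

You want each line to read `disabled` (or `not present`) before continuing.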
Packages
Install the required repositories for packages:
```shell
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum install centos-release-scl
```
Upgrade all existing packages:
```shell
yum upgrade
```
Install the following packages:
```shell
yum install vim vim-common wget tcpdump screen bind-utils ethtool python27-python-pip python27-python-devel \
  numactl-libs libpciaccess-devel parted-devel yajl-devel libxml2-devel glib2-devel libnl-devel libxslt-devel libyaml-devel \
  numactl-devel redhat-lsb libvirt-daemon-kvm numactl telnet net-tools
```
The first few packages (vim vim-common wget tcpdump screen bind-utils ethtool) are not required, but I recommend installing them anyway to make management and troubleshooting a bit easier should you require it.
QEMU Module Parameters
Edit the modprobe.d file for QEMU to disable APICv and PML and enable nested virtualisation. Create /etc/modprobe.d/qemu-system-x86.conf and add the following:
```
options kvm-intel nested=1 enable_apicv=n pml=n
```
This change requires a reboot of the host to apply; you can reboot later to save time. You can verify that the settings are correct after the reboot using these commands:
```
[root@server ~]# cat /sys/module/kvm_intel/parameters/enable_apicv
N
[root@server ~]# cat /sys/module/kvm_intel/parameters/pml
N
[root@server ~]# cat /sys/module/kvm_intel/parameters/nested
Y
```
Dell Utils
Since the host is installed on a Dell server with an iDRAC, I will install the Dell iDRAC Service Module (the current version at the time of writing is 3.6.0) as well as Dell OpenManage. Please skip these steps if you are not using a Dell server.
Dell ISM
Download the module to /usr/local/src and extract it:

```shell
cd /usr/local/src
wget https://dl.dell.com/FOLDER06819999M/1/OM-iSM-Dell-Web-LX-360-2249_A00.tar.gz
tar zxf OM-iSM-Dell-Web-LX-360-2249_A00.tar.gz
```
Install the required packages for the OS:
```shell
yum install usbutils
```
Install the RPM file for RHEL7:
```shell
rpm -ivh RHEL7/x86_64/dcism-3.6.0-2249.el7.x86_64.rpm
```
The service will start automatically. You can log into the iDRAC web UI and verify, under the “Service Module” heading at the top, that the connection status is running. You can then enable the automated system recovery option if required (essentially a hardware watchdog).
Dell OMSA
Run the bootstrap script to install the yum repo:
```shell
curl -s http://linux.dell.com/repo/hardware/dsu/bootstrap.cgi | bash
```
Install the OMSA packages:
```shell
yum install srvadmin-all
```
Add the systemd service file for OMSA so it will start on boot automatically:
```shell
cat << EOF > /etc/systemd/system/dell-omsa.service
[Unit]
Description=Dell OpenManage Server Administrator
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
User=root
Group=root
RemainAfterExit=true
ExecStart=/opt/dell/srvadmin/sbin/srvadmin-services.sh start
ExecStop=/opt/dell/srvadmin/sbin/srvadmin-services.sh stop
ExecReload=/opt/dell/srvadmin/sbin/srvadmin-services.sh restart

[Install]
WantedBy=multi-user.target
EOF
```
Reload the systemd service files, enable OMSA on boot and start it immediately:
```shell
systemctl daemon-reload
systemctl enable dell-omsa.service
systemctl start dell-omsa.service
```
SR-IOV
To get SR-IOV working, install the kernel development package and GCC package:
```shell
yum install kernel-devel gcc
```
Enable Intel IOMMU functionality with grubby:

```shell
grubby --args="intel_iommu=on" --update-kernel=ALL
```
This change requires a reboot of the host to apply; you can reboot later to save time.
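After the reboot, you can confirm the flag actually made it onto the running kernel's command line. This is a generic check I use, not part of the Juniper documentation:

```shell
# Show the running kernel's boot parameters; after the reboot,
# intel_iommu=on should appear among them.
cat /proc/cmdline
grep -q 'intel_iommu=on' /proc/cmdline && echo "IOMMU flag present" || echo "IOMMU flag missing"
```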
Huge Pages
Enable huge pages on the host. In my case, since I have 64GB of RAM allocated to the vFP, I have allocated 64 x 1G pages. You can adjust this to suit your environment.
Edit the file /etc/default/grub and look for the GRUB_CMDLINE_LINUX line. Add the following to the end of the variable (after rhgb quiet):
```
default_hugepagesz=1G hugepagesz=1G hugepages=64 processor.max_cstates=1 idle=poll pcie_aspm=off intel_iommu=on
```
In my case, the line ends up looking like this:
```
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos_netvirt1/root rd.lvm.lv=centos_netvirt1/swap rhgb quiet default_hugepagesz=1G hugepagesz=1G hugepages=64 processor.max_cstates=1 idle=poll pcie_aspm=off intel_iommu=on"
```
The grub configuration needs to be regenerated after changing this setting:
```shell
grub2-mkconfig -o /boot/grub2/grub.cfg
```
This change requires a reboot of the host to apply; you can reboot later to save time.
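After the reboot you can verify the allocation from /proc/meminfo; HugePages_Total should match the hugepages= value from the grub line (64 in my case). This verification step is my own addition:

```shell
# Show the configured huge page pool. With the settings above,
# Hugepagesize should read 1048576 kB and HugePages_Total should be 64.
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
```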
General Setup
- Link the qemu-kvm binary to qemu-system-x86_64:
```shell
ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
```
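The link simply exposes RHEL's qemu-kvm under the standard QEMU binary name that the vMX tooling expects. If you want to see how such a link resolves, the demo below does the same thing in a /tmp scratch directory (the scratch path is only for illustration; on the real host the link lives at /usr/bin/qemu-system-x86_64):

```shell
# Create the same style of symlink in a scratch directory and resolve it.
mkdir -p /tmp/qemu-link-demo
ln -sf /usr/libexec/qemu-kvm /tmp/qemu-link-demo/qemu-system-x86_64
readlink /tmp/qemu-link-demo/qemu-system-x86_64   # prints /usr/libexec/qemu-kvm
```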
- Change the path so that the Python 2.7 release is used, and install pyyaml and netifaces:
```shell
echo 'export PATH=/opt/rh/python27/root/usr/bin:$PATH' >> /etc/profile
PATH=/opt/rh/python27/root/usr/bin:$PATH
export PATH
cd /opt/rh/python27/ && . enable && pip install netifaces pyyaml
```
If you get an error like the one below from pip at this step, make sure you didn’t skip enabling the software collection environment (cd /opt/rh/python27/ && . enable):
```
Traceback (most recent call last):
  File "/opt/rh/python27/root/usr/bin/pip", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/pkg_resources.py", line 16, in <module>
    import sys, os, time, re, imp, types, zipfile, zipimport
  File "/opt/rh/python27/root/usr/lib64/python2.7/zipfile.py", line 6, in <module>
    import io
  File "/opt/rh/python27/root/usr/lib64/python2.7/io.py", line 51, in <module>
    import _io
ImportError: /opt/rh/python27/root/usr/lib64/python2.7/lib-dynload/_io.so: undefined symbol: _PyErr_ReplaceException
```
- Disable the KSM services for best performance:
```shell
service ksmtuned stop
service ksm stop
systemctl disable ksmtuned
systemctl disable ksm
```
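You can confirm KSM is no longer merging pages by reading its run flag in sysfs; 0 means it is stopped. This check is my own addition, not from the Juniper guide:

```shell
# 0 = KSM stopped, 1 = running. The file may be absent on kernels
# built without KSM support, hence the fallback message.
cat /sys/kernel/mm/ksm/run 2>/dev/null || echo "KSM not available on this kernel"
```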
- Disable SELINUX:
```shell
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
```
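If you want to see what that sed expression does before pointing it at the real config, you can run it against a scratch copy (the /tmp path below is only for the demo):

```shell
# Write a sample config, apply the same substitution, and show the result.
# The anchored ^SELINUX= pattern deliberately leaves SELINUXTYPE= untouched.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-demo
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /tmp/selinux-demo
grep '^SELINUX=' /tmp/selinux-demo   # prints SELINUX=disabled
```

Note that the running system stays in enforcing mode until the reboot; running setenforce 0 will drop it to permissive immediately if you need that sooner.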
Host Reboot
At this stage you will need to reboot the host to apply the various settings that have been changed. Once the host has rebooted, the host OS is prepared and you are ready to start the vMX installation.