NOTE: I originally published this page in 2018; instructions may now be out of date.
When deploying a Juniper vMX router on CentOS and Ubuntu hosts I ran into an issue when starting it. The symptoms were slightly different on each platform:
- On CentOS, the vMX would get to the stage `Start vfp-vmx1` but never continue; the `vmx.sh` script would just stop there. If I connected to the vFP or vCP VMs with telnet (they were both running), nothing was happening. The `libvirtd` process on the host was also using 100% CPU (of one core) and I could not kill that process; the host had to be rebooted.
- On Ubuntu, the vMX would fail at the stage `Start vfp-vmx1`. The debug log file that the vMX generated didn't show any issue.
In my case the startup log file from libvirt indicated the cause of the issue. By default the log files are stored in /var/log/libvirt/qemu. Since the issue was with the vFP and my vMX name is vmx1, the log file was named /var/log/libvirt/qemu/vfp-vmx1.log. The last line of the log was:
file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
This error can happen if you do not have huge pages enabled on the host. If you assign a large amount of RAM (64GB for the vFP in my case), huge pages are required, and they are not enabled by default on CentOS or Ubuntu. The fix differs slightly depending on the host OS; I have included instructions for both CentOS and Ubuntu below.
You can verify the hugepages status by running cat /proc/meminfo.
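For example, filtering /proc/meminfo down to the relevant counters shows at a glance whether any pages are reserved:

```shell
# Show only the huge page fields from /proc/meminfo.
# HugePages_Total is the number of pages reserved (0 until configured),
# and Hugepagesize is the page size (2048 kB by default on x86_64).
grep -i huge /proc/meminfo
```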
Enable Huge Pages – CentOS
- Edit the grub configuration file `/etc/default/grub`.
- Look for the `GRUB_CMDLINE_LINUX` line.
- Add the following settings to the end of the `GRUB_CMDLINE_LINUX` variable: `default_hugepagesz=1G hugepagesz=1G hugepages=64 processor.max_cstates=1 idle=poll pcie_aspm=off intel_iommu=on`. The number of huge pages required may differ for your use case. In my situation the vFP has 64GB of RAM allocated, which means I need 64 x 1G pages. If you have more or less RAM allocated, adjust the `hugepages` variable to suit your needs. The modified line on my host looks like this:
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos_netvirt1/root rd.lvm.lv=centos_netvirt1/swap rhgb quiet default_hugepagesz=1G hugepagesz=1G hugepages=64 processor.max_cstates=1 idle=poll pcie_aspm=off intel_iommu=on"
- Save the file.
- Generate the new grub configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg
- Reboot the host to apply the changes.
Note: The `pcie_aspm` and `processor.max_cstates` settings are not required, but they are recommended by Juniper for best performance. The `intel_iommu` setting is required for SR-IOV support.
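After the reboot you can confirm the new options took effect, since the settings from grub should appear on the running kernel's command line:

```shell
# The hugepages and iommu options added in /etc/default/grub should
# show up here after a reboot; if they do not, grub2-mkconfig was not
# run or the wrong grub.cfg file was generated.
cat /proc/cmdline
```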
Enable Huge Pages – Ubuntu
- Edit the file `/etc/default/grub`.
- Look for the two `GRUB_CMDLINE_LINUX` lines, which look like this by default:

GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX=""

- Add the `intel_iommu=on` option to the end of the `GRUB_CMDLINE_LINUX_DEFAULT` variable. In my case I also have huge pages enabled due to the amount of RAM for the vFP, so the modified lines look like this:

GRUB_CMDLINE_LINUX_DEFAULT="processor.max_cstates=1 idle=poll pcie_aspm=off intel_iommu=on"
GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=64"
- Save the updated configuration file.
- Generate the new grub configuration file:
update-grub
- Edit `/etc/default/qemu-kvm`.
- Look for `KSM_ENABLED` and set it to `0` (the default is `1`).
- Look for `KVM_HUGEPAGES` and set it to `1` (the default is `0`).
- Save the file.
- Reboot the host to apply the changes.
Note: The `pcie_aspm` and `processor.max_cstates` grub cmdline settings are not required, but they are recommended by Juniper for best performance. The `intel_iommu` setting is required for SR-IOV support.
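After rebooting, the page-count arithmetic from the steps above can be sanity-checked with a short script. This is a sketch of my own, not part of Juniper's tooling; the 64GB figure matches my vFP allocation, so adjust it to yours:

```shell
# Check whether the reserved huge page pool is large enough to back
# a 64GB vFP. Page size and count are read from /proc/meminfo.
need_kb=$((64 * 1024 * 1024))
page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
avail_kb=$(( ${page_kb:-0} * ${total:-0} ))
if [ "$avail_kb" -ge "$need_kb" ]; then
    echo "huge page pool OK: ${avail_kb} kB reserved"
else
    echo "insufficient huge pages: ${avail_kb} kB reserved, need ${need_kb} kB"
fi
```

If the script reports an insufficient pool even after the grub changes, the host may not have had 64GB of contiguous memory free at boot to reserve the 1G pages.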