I need to determine the maximum number of KVM virtual machines that can run on an average laptop computer. Unfortunately, I could not find authoritative information about that limit. Most of the information I found about KVM limits does not state absolute maximums; instead, it recommends best practices.
In this post, I will synthesize the information available from multiple sources into a single recommendation for the maximum number of KVM-based nodes that can run in an open-source network simulator running on a single host computer.
Number of virtual cores per VM
KVM allows the user to set the number of virtual cores used by each virtual machine. The best practices for assigning virtual cores are:
- Use only one virtual core per VM. Do not configure more than one virtual core on a virtual machine unless the application you will run on it absolutely requires more than one core. In network simulators, running one core per VM results in the best performance.
- Never configure more virtual cores on a VM than the number of real cores available on the host computer. For example, most laptop computers have dual-core processors, so if you need more than one core on a VM, configure it with no more than two virtual cores. A sketch showing how to apply this cap appears after this list.
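To apply these two rules mechanically, here is a minimal sketch (my own illustration; the function name and the use of Python are arbitrary) that caps a requested vCPU count at the host's CPU count. Note that os.cpu_count() reports logical CPUs, which may include hyperthreads.

```python
# Minimal sketch: never give a VM more virtual cores than the host has CPUs.
import os

def vcpus_for_vm(requested_vcpus: int) -> int:
    """Return a vCPU count capped at the host's CPU count (minimum of 1)."""
    host_cpus = os.cpu_count() or 1   # logical CPUs; may include hyperthreads
    return max(1, min(requested_vcpus, host_cpus))

print(vcpus_for_vm(4))  # on a dual-core host without hyperthreading, prints 2
```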
Number of KVM virtual machines per host
Most laptop and desktop PC processors offer hardware support for virtualization. But there is a limit to how many virtual machines can run using hardware support. After that limit is reached, KVM uses the software virtualization provided by QEMU, which is much slower (UPDATE 3/22/16: This may not be an accurate way to describe how QEMU works with KVM. See the comments posted below by a Red Hat developer who disagrees with this description.). Hardware support varies depending on the processor; some processors provide better performance than others.
As a best-practice guideline, when using normal consumer-grade computers or laptops, you should assume hardware support for KVM virtualization is limited to 8 virtual cores for each real processor core on the host computer. For example, a dual-core laptop computer can support 16 virtual machines (or 8 virtual machines with two virtual cores each).
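Treating that 8-to-1 ratio as a planning rule of thumb rather than a hard limit, the VM budget can be estimated with a short calculation. The function below is my own illustration:

```python
# Minimal sketch using the 8-vCPUs-per-real-core rule of thumb from this post.
def max_kvm_vms(real_cores: int, vcpus_per_vm: int = 1, ratio: int = 8) -> int:
    """Estimate how many VMs fit within ratio * real_cores virtual cores."""
    return (real_cores * ratio) // vcpus_per_vm

print(max_kvm_vms(2))                   # dual-core host, 1 vCPU per VM -> 16
print(max_kvm_vms(2, vcpus_per_vm=2))   # dual-core host, 2 vCPUs per VM -> 8
```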
Memory limits
When a user starts a KVM virtual machine, she defines the amount of memory that VM will use. The total memory used by all virtual machines running together on a computer may be “overbooked”.
Most available information about KVM memory limits tells us we may overbook memory to a maximum of 150%, including the memory used by the host computer. For example, if you have 8 GB of RAM on your laptop, you may run ten virtual machines that are configured to use 1 GB of memory each, for a total of 10 GB plus the 2 GB required by the host computer, which adds up to 150% of the physical RAM on the computer. However, this assumes that not all guests use all their allocated memory at the same time. If the applications on every virtual machine use all of their allocated memory at once while memory is overbooked, expect poor performance.
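A quick way to check a planned configuration against that 150% guideline is sketched below; the 2 GB host reservation and the function itself are my own assumptions, not a KVM feature:

```python
# Minimal sketch of the 150% memory-overbooking guideline described above.
def within_overbooking_limit(physical_gb: float, host_gb: float,
                             vm_count: int, gb_per_vm: float,
                             limit: float = 1.5) -> bool:
    """True if host memory plus all VM allocations fit within limit * physical RAM."""
    total_gb = host_gb + vm_count * gb_per_vm
    return total_gb <= limit * physical_gb

# The example above: an 8 GB laptop, 2 GB for the host, ten 1 GB VMs.
print(within_overbooking_limit(8, 2, 10, 1))   # True  (12 GB is 150% of 8 GB)
print(within_overbooking_limit(8, 2, 12, 1))   # False (14 GB exceeds 150%)
```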
Good information about other KVM limits for memory, network interfaces, and other resources is available on the Fedora project and OpenSUSE project web sites.
Conclusion
I recommend that users of open-source network simulators that use KVM, such as Cloonix and GNS3, consistently configure the KVM-based network nodes in their simulations to use only one virtual CPU core each, and that they run no more than eight virtual CPUs per real core available on the host computer.
For example, when running Cloonix or GNS3 on a typical laptop computer that has a dual-core CPU, run no more than sixteen KVM nodes in your network simulation.
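Putting both guidelines together, a rough planning helper might look like the sketch below. The 8:1 vCPU ratio and the 150% memory figure are the rules of thumb from this post, and the function itself is only an illustration:

```python
# Minimal sketch combining the vCPU and memory guidelines from this post.
def node_budget(real_cores: int, physical_gb: float, host_gb: float,
                gb_per_node: float, vcpus_per_node: int = 1) -> int:
    """Return the smaller of the 8-vCPUs-per-core budget and the 150% memory budget."""
    cpu_budget = (real_cores * 8) // vcpus_per_node
    mem_budget = int((1.5 * physical_gb - host_gb) // gb_per_node)
    return min(cpu_budget, mem_budget)

# Dual-core laptop, 8 GB RAM, 2 GB reserved for the host, 0.5 GB per node.
print(node_budget(2, 8, 2, 0.5))   # 16 -- the CPU budget is the limiting factor
```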
“After that limit is reached, KVM uses the software virtualization provided by QEMU, which is much slower.”
How exactly did you reach that conclusion? If your VM is configured to use KVM for CPU virtualization, it will never use QEMU CPU emulation mode (TCG). You may have other problems, depending on how much CPU time is required by the software running inside each VM, but having the VMs switch to QEMU TCG mode isn’t one of them.
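One way to confirm which mode a guest actually uses is to inspect its libvirt domain XML; a minimal sketch with the libvirt Python bindings follows (the connection URI and the domain name "node1" are only examples):

```python
# Minimal sketch: report whether a libvirt guest is defined with KVM
# acceleration or plain QEMU/TCG emulation. "node1" is a hypothetical name.
import xml.etree.ElementTree as ET
import libvirt  # provided by the libvirt-python package

conn = libvirt.open("qemu:///system")   # local system hypervisor
dom = conn.lookupByName("node1")        # hypothetical domain name
domain_type = ET.fromstring(dom.XMLDesc(0)).get("type")
print(domain_type)                      # "kvm" when accelerated, "qemu" for TCG
conn.close()
```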
Hi Eduardo,
Thanks for your comment. When experimenting with the Cloonix network emulator, which uses KVM, I noticed a significant increase in the time it took for nodes to initialize once I had more than 16 VMs starting. This was on a dual-core Intel Core Duo processor (with no hyperthreading) and 8 GB of RAM. When looking for some sort of authoritative information about how many KVM VMs I could run at the same time, I found the links to which I refer in this post. So, based on my observations and the information available to me, I drew my conclusion. I would be happy to look at any documentation that would shed a brighter light on this issue. Can you point me to some documentation I can read about this topic? The main issue I want to understand is how many VMs I can run with hardware virtualization support, assuming the available RAM is always enough.
Thanks,
Brian
This is very interesting… I’m using libvirt/KVM to stage network simulations as well, via https://github.com/CumulusNetworks/topology_converter, and I’ve been trying to find a theoretical limit for the number of interfaces that can be simulated with KVM/QEMU. So far I’ve been able to boot a machine with 130 interfaces, each connected to another device running Cumulus Quagga and BGP unnumbered across all of them. It is finicky for sure with this many interfaces, but it absolutely boots and functions, much to my surprise. Would love to figure out when this thing falls over.
Hi,
I have also discussed this with some SMEs on the freenode IRC channel for KVM. The theoretical limit on the number of vCPUs for a KVM hypervisor seems to be the number of virtual CPU threads supported by the host, which in turn depends on the ulimit value. But there also seems to be a limit on the number of vCPUs that can be allocated to a single guest. I am still looking for an official statement on these limits from Red Hat.