This post shows you the vNUMA enhancement that VMware brings to the table with the release of its newest vSphere 6.0 hypervisor family. But before I move ahead, I want to set the stage with some vNUMA background, and then I will discuss the vNUMA improvement in vSphere 6.0.
Setting the stage
Non-Uniform Memory Access, also known as NUMA, is designed with memory locality in mind: pools of adjacent memory are placed in islands called NUMA nodes. Each of today's CPUs has multiple cores, but that does not always result in a NUMA node with a given number of cores and RAM; the integrated memory controller decides that. There are multi-core CPUs that are not NUMA aware (the original Xeon 7300/7400 CPUs, for example). On a NUMA-aware Nehalem-EX system, however, with four sockets of 8 cores each, for a total of 32 cores and 256 GB of RAM, each socket would have its own NUMA node with 64 GB of RAM.
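The per-node arithmetic above can be sketched with a few lines of Python. This is purely illustrative (the function name and the assumption of one NUMA node per socket are mine, not VMware's):

```python
def numa_node_layout(sockets: int, cores_per_socket: int, total_ram_gb: int):
    """Return (total_cores, ram_per_node_gb), assuming one NUMA node per socket."""
    total_cores = sockets * cores_per_socket
    ram_per_node_gb = total_ram_gb // sockets  # RAM is split evenly across nodes
    return total_cores, ram_per_node_gb

# The Nehalem-EX example from the text: 4 sockets x 8 cores, 256 GB total.
print(numa_node_layout(4, 8, 256))  # -> (32, 64)
```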
A legacy OS can be accommodated in a similar manner by enabling "Node Interleaving" in a ProLiant BIOS, so that the entire memory pool is seen as contiguous, with no differentiation between nodes. Without vNUMA, the guest OS and applications are not aware of the NUMA architecture and will simply treat the vCPUs and vRAM as one big pool when assigning memory and processes.
A characteristic aspect of vNUMA is that it incorporates distributed shared memory (DSM) inside the hypervisor, in contrast to the more traditional approach of providing it in the middleware.
When creating a virtual machine, you have the option to specify the number of virtual sockets and the number of cores per virtual socket. If the number of cores per virtual socket on a vNUMA-enabled virtual machine is set to any value other than the default of one, and that value doesn't align with the underlying physical host topology, performance might be slightly reduced.
Therefore, for best performance, if a virtual machine is to be configured with a non-default number of cores per virtual socket, that number should be an integer multiple or integer divisor of the physical NUMA node size.
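The multiple-or-divisor guideline above can be expressed as a small helper. This is a sketch of the sizing rule, not anything from the vSphere tooling; the function name and parameters are my own:

```python
def aligned_cores_per_socket(numa_node_cores: int, max_vcpus: int):
    """List cores-per-socket values from 1..max_vcpus that are an integer
    multiple or integer divisor of the physical NUMA node size."""
    return [n for n in range(1, max_vcpus + 1)
            if numa_node_cores % n == 0 or n % numa_node_cores == 0]

# Example: a host with 8 cores per NUMA node, sizing a VM of up to 16 vCPUs.
print(aligned_cores_per_socket(8, 16))  # -> [1, 2, 4, 8, 16]
```

Any value in that list keeps virtual sockets evenly mappable onto physical NUMA nodes; a value like 3 or 5 would straddle node boundaries.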
Enhancement in vSphere 6.0
When memory is hot-added to a vNUMA-enabled VM that has the memory hot add option enabled, the newly added memory is now allocated equally across all NUMA regions. In earlier releases of vSphere, all new memory was allocated only to region 0. This enhancement ensures that all regions benefit from the increase in RAM, enabling the VM to scale without requiring any downtime.
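The difference between the two allocation behaviors can be sketched as follows. This is an illustrative model in Python, not VMware code; the function names and megabyte figures are invented for the example:

```python
def hot_add_equal(regions_mb, added_mb):
    """vSphere 6.0 behavior: spread hot-added memory evenly over all regions."""
    share, rem = divmod(added_mb, len(regions_mb))
    # Any remainder that doesn't divide evenly goes to the lowest regions first.
    return [mb + share + (1 if i < rem else 0) for i, mb in enumerate(regions_mb)]

def hot_add_legacy(regions_mb, added_mb):
    """Pre-6.0 behavior: all newly added memory lands in region 0."""
    return [regions_mb[0] + added_mb] + list(regions_mb[1:])

regions = [4096, 4096, 4096, 4096]       # four NUMA regions, 4 GB each
print(hot_add_equal(regions, 4096))      # -> [5120, 5120, 5120, 5120]
print(hot_add_legacy(regions, 4096))     # -> [8192, 4096, 4096, 4096]
```

The equal spread keeps the regions balanced, which is what lets the guest keep its memory locality as it grows.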