VMware – vSphere Virtual Cores, Virtual Sockets, and Virtual CPU (vCPU)


VMware introduced multi-core virtual CPUs in vSphere 4.1 to work around the socket restrictions imposed by operating systems. Traditionally, vSphere presented each vCPU to the operating system as a single-core CPU in its own socket, which limits the number of vCPUs the operating system can use.

Typically, the OS vendor restricts only the number of physical CPUs (sockets), not the number of logical CPUs (better known as cores).

The First Example

For example, Windows Server 2008 Standard is limited to 4 physical CPUs, and it will not utilize any additional vCPUs if you configure the virtual machine with more than 4 vCPUs in the traditional one-socket-per-vCPU layout. To overcome this physical-CPU limitation, VMware introduced the vCPU configuration options “virtual sockets” and “cores per socket”. With this change you can, for example, configure the virtual machine with 1 virtual socket and 8 cores per socket, allowing the operating system to use all 8 vCPUs.
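To make this concrete, here is a minimal sketch using the pyVmomi Python bindings for the vSphere API; the vCenter address, credentials, and VM name are placeholders, not values from this article. The numCPUs property sets the total vCPU count, while numCoresPerSocket controls how those vCPUs are grouped into virtual sockets.

```python
# Minimal pyVmomi sketch: reshape a VM's vCPU layout (hypothetical names).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; skips cert checks
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=context)

# Find the VM by name (placeholder name).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "win2008-std")

# 8 vCPUs total, grouped as 1 virtual socket x 8 cores per socket.
spec = vim.vm.ConfigSpec(numCPUs=8, numCoresPerSocket=8)
task = vm.ReconfigVM_Task(spec)  # the VM must be powered off for this change

Disconnect(si)
```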

When reviewing the CPU configuration inside the guest OS with the default one-core-per-socket layout, a check in Windows Task Manager verified that the guest used only 4 vCPUs.

We reconfigured the virtual machine to present the 8 vCPUs through a single virtual socket, i.e. 8 cores per socket.

We then powered on the virtual machine, and the guest OS was able to use all 8 vCPUs.

The Second Example

You can configure up to 64 vCPUs on a virtual machine if you have vSphere Enterprise Plus (the maximum decreases with the lower vSphere editions). But you are also limited to assigning at most as many vCPUs as your physical server has available as logical CPUs.

If we take a look at one server in our lab, a Dell T610 with a single physical CPU socket that has 4 cores (quad-core) and hyperthreading enabled, which doubles the number of logical processors presented, we get a total of 8 logical CPUs.

What this means is that the maximum number of vCPUs that I could configure for a VM on this host would be 8. Let’s verify.

If we edit the settings of a VM on that host, we see that we can configure it with 8 virtual sockets and 1 core per socket, 4 sockets and 2 cores per socket, 2 sockets and 4 cores per socket, or 1 socket and 8 cores per socket (all of which, if you multiply, total 8).
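For illustration, a few lines of Python (plain arithmetic, no vSphere API involved) enumerate the legal layouts: any sockets-times-cores-per-socket combination whose product does not exceed the host's logical CPU count is valid.

```python
# Enumerate valid (virtual sockets, cores per socket) layouts for a host.
def vcpu_layouts(logical_cpus: int):
    for sockets in range(1, logical_cpus + 1):
        for cores in range(1, logical_cpus // sockets + 1):
            yield sockets, cores, sockets * cores

# The Dell T610 above: 1 socket x 4 cores x 2 (hyperthreading) = 8 logical CPUs
for sockets, cores, total in vcpu_layouts(8):
    if total == 8:  # show only the layouts that use all 8 vCPUs
        print(f"{sockets} socket(s) x {cores} core(s)/socket = {total} vCPUs")
```

Running it prints exactly the four combinations listed above: 1 x 8, 2 x 4, 4 x 2, and 8 x 1.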

On another host, a Dell M610, we have 2 physical sockets with 4 cores per socket and hyperthreading enabled, which gives us a total of 16 logical processors.

If we look at a VM on that host (note that these VMs need to be hardware version 8 or above), we can configure any combination of virtual sockets and cores that totals no more than 16 (16 x 1, 1 x 16, 2 x 8, 8 x 2, 4 x 4, etc.).

Now that you know the limitations of the physical hosts and the hypervisor, let's look at why this differentiation of virtual sockets vs. virtual cores is available and what you should choose.

The Guest OS Knows the Sockets and Cores

A very important part of understanding this is that when you configure a vCPU on a VM, that vCPU is actually a virtual core, not a virtual socket. Also, a vCPU has traditionally been presented to the guest OS in a VM as a single-core, single-socket processor.

What you might not have thought about is that the guest operating system knows not only the number of “CPUs” but also the number of sockets and cores that are available. You can use the CPU-Z utility to find out how many sockets and cores your virtual machine has.
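If the guest is Linux rather than Windows, you can get the same answer without CPU-Z. As a rough sketch, counting the distinct "physical id" (socket) and "core id" entries in /proc/cpuinfo reveals the layout the VM presents:

```python
# Count sockets and cores as seen by a Linux guest via /proc/cpuinfo.
# Each processor block lists its "physical id" (socket) and "core id".
def guest_topology(path: str = "/proc/cpuinfo"):
    sockets, cores = set(), set()
    physical_id = None
    for line in open(path):
        if line.startswith("physical id"):
            physical_id = line.split(":")[1].strip()
            sockets.add(physical_id)
        elif line.startswith("core id"):
            cores.add((physical_id, line.split(":")[1].strip()))
    return len(sockets), len(cores)

sockets, cores = guest_topology()
print(f"guest sees {sockets} socket(s) and {cores} core(s)")
```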

The guest OS schedules the threads of each process onto its CPU cores, and underneath, the hypervisor's VMkernel scheduler places those virtualized threads onto the logical CPU cores of the physical host.

If it doesn't have any effect on performance (virtual cores vs. virtual sockets), why would VMware even offer this option to specify the number of sockets and cores per socket for each VM? The answer is that it's all related to software licensing for the OS and applications.

Performance Impact

OK, so it worked. Now the big question: will it make a difference to use multiple sockets or one socket? How will the VMkernel utilize the physical cores? Might it impact any NUMA configuration?

The answer can be very short: no! There is no performance impact between using virtual cores or virtual sockets (other than the number of usable vCPUs, of course).

Abstraction layer

And that is because of the power of the abstraction layer. Virtual sockets and virtual cores are “constructs” presented upstream to the tightly isolated software we call a virtual machine. When you run an operating system, it detects the hardware (layout) within the virtual machine. The VMkernel schedules a Virtual Machine Monitor (VMM) for every vCPU. The virtual machine's vCPU count is the product of the number of cores and the number of sockets. Let's use the example of a 2-virtual-socket, 2-virtual-core configuration.

The virtual machine presents this 2-socket x 2-core configuration to the guest OS. For each vCPU, the VMkernel schedules a VMM world; when a CPU instruction leaves the virtual machine, it gets picked up by that vCPU's VMM world. Socket configurations are transparent to the VMkernel.

NUMA

When a virtual machine powers on in a NUMA system, it is assigned a home node where memory is preferentially allocated. The vCPUs of the virtual machine are grouped into a NUMA client, and this NUMA client is scheduled on a physical NUMA node.

To verify that the sockets have no impact on the NUMA scheduler, we powered up a new virtual machine configured with two sockets of 2 cores each. The host running the virtual machine is a dual-socket quad-core machine with HT enabled. Providing 4 vCPUs to the virtual machine ensures that it fits inside a single NUMA node.
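As a toy illustration of that sizing rule (with the simplifying assumption that the NUMA scheduler sizes a client against the physical cores of a node by default, not the hyperthreads):

```python
# Toy model: a VM stays on one NUMA home node if its vCPU count does not
# exceed the physical cores of a node (simplified; ignores preferHT tuning).
def fits_single_numa_node(vcpus: int, cores_per_node: int) -> bool:
    return vcpus <= cores_per_node

print(fits_single_numa_node(4, cores_per_node=4))  # True: one home node
print(fits_single_numa_node(8, cores_per_node=4))  # False: spans nodes
```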

When reviewing the memory configuration of the virtual machine in ESXTOP, we can deduce that it's running on a single physical CPU, using 4 cores on that die. Open the console, run esxtop, and press m for the memory view. Use V (capital V) to display VM worlds only. Press f and select G for the NUMA stats. You might want to disable other fields to reduce the amount of information on your screen.

The NHN column identifies the current NUMA home node, which in Machine2's case is NUMA node 0. N%L indicates the percentage of memory accessed locally by the NUMA client; it shows 100%, indicating that all vCPUs access local memory. The GST_ND0 column indicates how much memory is provided by node 0 to the guest. This number is equal to the NLMEM counter, which indicates the current amount of local memory being accessed by the VM on that home node.

vNUMA

What if you have a virtual machine with more than 8 vCPUs? (For clarity: life as a wide VM, with vNUMA, starts at a vCPU count of 9.) In that case, the VMkernel presents the NUMA client home nodes to the guest OS. As with normal scheduling, the socket configuration is transparent here as well.

Why differentiate between sockets and cores?

Well, there is a difference, and it has to do with the CPU Hot Add feature. When the CPU Hot Plug option is enabled, you can only increase the virtual socket count.
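Continuing the hedged pyVmomi sketch from the first example (the vm object is the same placeholder), enabling hot add is a one-property reconfiguration; the setting itself can only be changed while the VM is powered off:

```python
from pyVmomi import vim

# Enable CPU hot add; afterwards a running VM can only grow by whole
# virtual sockets, so choose the cores-per-socket value up front.
spec = vim.vm.ConfigSpec(cpuHotAddEnabled=True)
task = vm.ReconfigVM_Task(spec)  # apply while the VM is powered off
```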

In short, using virtual sockets or virtual cores does not impact the performance of the virtual machine. It only affects the initial configuration and the ability to assign more vCPUs when your operating system restricts the maximum number of physical CPUs. Always check whether your VM configuration complies with the vendor's licensing rules before increasing the vCPU count!