Last year I wrote about a scalable approach to multicast traffic, and now, with the introduction of VXLAN and similar models, we are seeing a lot more multicast traffic on our networks.
Multicast is an efficient way of disseminating information and communicating over the network. A single sender can connect to multiple receivers and exchange information while conserving network bandwidth. Financial stock exchanges, multimedia content delivery networks, and commercial enterprises often use multicast as a communication mechanism. Multiple receivers can be enabled on a single ESXi host. Because the receivers are on the same host, the physical network does not have to transfer multiple copies of the same packet. Packet replication is carried out in the hypervisor instead.
But what about the cost of processing all this multicast traffic within an environment? What about the CPU cycles? Are we going to saturate our CPUs in this scenario?
Well, we have an answer to this and that is SplitRx mode.
SplitRx mode is an ESXi feature that uses multiple physical CPUs to process network packets received in a single network queue. This feature provides a scalable and efficient platform for multicast receivers. SplitRx mode typically improves throughput and CPU efficiency for multicast traffic workloads.
SplitRx mode is supported only on vmxnet3 network adapters. This feature is disabled by default. We recommend enabling SplitRx mode in situations where multiple virtual machines share a single physical NIC and receive a lot of multicast or broadcast packets.
SplitRx mode is individually configured for each virtual NIC.
SplitRx mode uses multiple physical CPUs to process network packets received in a single network queue. This feature can significantly improve network performance for certain workloads. These workloads include:
- Multiple virtual machines on one ESXi host, all receiving multicast traffic from the same source.
- Traffic via the vNetwork Appliance (DVFilter) API between two virtual machines on the same ESXi host.

SplitRx mode will typically improve throughput and maximum packet rates for these workloads.
vSphere 5.1 automatically enables this feature for a VMXNET3 virtual network adapter (the only adapter type on which it is supported) when it detects that a single network queue on a physical NIC is both (a) heavily utilized and (b) servicing more than eight clients (that is, virtual machines or the vmknic) that have evenly distributed loads.
Now the question is how you enable or disable it (should you need to). Here is how:
- Open the vSphere Client
- Login to the vCenter Server
- In the home screen select Hosts and Clusters
- Select the ESXi host you wish to change
- Under the Configuration tab, in the Software pane, click Advanced Settings
- Click the Net section in the left-hand tree
- Find NetSplitRxMode
- Click on the value to be changed and configure it as you wish
NetSplitRxMode = “0”
This value disables SplitRx mode for the ESXi host.
NetSplitRxMode = “1”
This value (the default) enables SplitRx mode for the ESXi host.
The change will take effect immediately and does not require the ESXi host to be restarted.
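If you prefer the command line, the same host-wide value can be inspected and changed with esxcli from an SSH session on the host. This is a sketch; it assumes the advanced option path is /Net/NetSplitRxMode, so verify the path on your build before relying on it.

```shell
# Show the current value of the SplitRx mode advanced option
esxcli system settings advanced list -o /Net/NetSplitRxMode

# Disable SplitRx mode host-wide (takes effect immediately, no reboot needed)
esxcli system settings advanced set -o /Net/NetSplitRxMode -i 0

# Re-enable it (the default)
esxcli system settings advanced set -o /Net/NetSplitRxMode -i 1
```

These commands must be run on the ESXi host itself (or via the vCLI), so they are shown here as an administrative sketch rather than something to copy blindly.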
The SplitRx mode feature can also be configured individually for each virtual NIC using the ethernetX.emuRxMode variable in each virtual machine’s .vmx file (where X is replaced with the network adapter’s ID).
The possible values for this variable are:
ethernetX.emuRxMode = “0”
This value disables SplitRx mode for ethernetX.
ethernetX.emuRxMode = “1”
This value enables SplitRx mode for ethernetX.
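As a concrete illustration, to enable SplitRx mode on a virtual machine's first network adapter while leaving a second adapter untouched, the relevant .vmx entries might look like the fragment below (the adapter IDs are purely illustrative for your VM):

```
ethernet0.virtualDev = "vmxnet3"
ethernet0.emuRxMode = "1"
ethernet1.virtualDev = "vmxnet3"
```

Remember that SplitRx mode only applies to vmxnet3 adapters, so the emuRxMode variable has no effect on other adapter types.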
If you want to change this value on individual VMs through the vSphere Client, follow these steps:
- Select the virtual machine you wish to change, and then click Edit virtual machine settings.
- Under the Options tab, select General, and then click Configuration Parameters.
- Look for ethernetX.emuRxMode (where X is the number of the desired NIC). If the variable isn’t present, click Add Row and enter it as a new variable.
- Click on the value to be changed and configure it as you wish.
Note: The change will not take effect until the virtual machine has been restarted.
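To confirm the variable actually made it into the VM's configuration, you can grep the .vmx file from an SSH session on the host. The datastore path below is just a placeholder; substitute your VM's actual .vmx location.

```shell
# Show any emuRxMode entries in the VM's configuration file
# (replace the path with your VM's actual .vmx file)
grep emuRxMode /vmfs/volumes/datastore1/MyVM/MyVM.vmx
```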