Let me ask you this question.
How many of you have seen LBT and IP Hash working together in a single VMware vDS?
Interestingly enough, one of my friends was working on a project that decided to use HP BladeSystem with the B22HP FEX adapters connected to Nexus 5Ks and then to Nexus 7Ks back in the core. vPC Plus is a possibility down to the host, which offers the ability to leverage LACP with Route Based on IP Hash on the two 10GbE LOMs in the host configuration. This particular configuration would be highly beneficial for the NFS storage (NetApp) that they use.
Now in this design one interesting question arises: can we mix LACP + IP Hash with LBT or virtual port ID on different port groups of the same vDS, using the same two LACP-enabled uplinks?
Now even if you do not understand the network-level LACP behavior, the rationale is: use IP Hash for the NFS port group, virtual port ID for multi-NIC vMotion, and LBT for the VMs. Look at the pictorial representation below.
Well, to me it is either LACP + IP Hash across the board, which benefits the NFS storage, or LBT/virtual port ID for the VMs and multi-NIC vMotion.
Now the question is whether it will work or not.
Well, I have tested it in my lab environment. I have LACP running on my Arista switches and MLAG configured for my SuperMicro blades. All of my dvPortGroups were running Route Based on IP Hash.
Then I created a new port group using LBT, put a couple of VMs on it, and it worked perfectly without any issues. I added a VMkernel port as well, and there too I could not find any issues at all.
Now you should note that the following teaming algorithms have no dependency on the physical switch configuration:
- Port ID hash
- MAC hash
- Load based teaming
- Explicit failover
This is because these policies always send traffic from a particular source MAC address through the same physical uplink, which is an important requirement from the physical network switch's perspective.
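To see why the switch-independent policies differ from IP Hash, here is a minimal sketch of how the two pick an uplink. The function names and the exact hash are illustrative assumptions (a simplified model, not ESXi's actual code); the point is that port-ID selection depends only on the vNIC, so one MAC always stays on one uplink, while IP hash spreads one vNIC's flows across uplinks by destination:

```python
def ip_hash_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    """Simplified model of Route Based on IP Hash: XOR the source and
    destination IPs and take the result modulo the uplink count."""
    src = int.from_bytes(bytes(map(int, src_ip.split("."))), "big")
    dst = int.from_bytes(bytes(map(int, dst_ip.split("."))), "big")
    return (src ^ dst) % n_uplinks

def port_id_uplink(virtual_port_id: int, n_uplinks: int) -> int:
    """Simplified model of Route Based on Originating Virtual Port:
    the uplink depends only on the vNIC's port ID, so every flow from
    that vNIC (one source MAC) uses the same physical uplink."""
    return virtual_port_id % n_uplinks

# One VMkernel port talking to two NFS targets: IP hash may use both
# uplinks, while port-ID selection pins all traffic to one uplink.
print(ip_hash_uplink("10.0.0.5", "10.0.0.10", 2))  # destination-dependent
print(ip_hash_uplink("10.0.0.5", "10.0.0.11", 2))
print(port_id_uplink(7, 2))                        # destination-independent
```

This is why IP Hash needs a matching port channel on the physical switch (the switch must expect one MAC on multiple ports), while the policies listed above do not.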
Now, in the case of LBT, before moving any traffic we indicate to the physical switch through a reverse ARP (RARP) frame that traffic from a particular MAC will be moving to another physical uplink.
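For the curious, that RARP notification is just a broadcast frame sourced from the VM's MAC, sent out the new uplink so the physical switch relearns the MAC on the new port. A rough sketch of such a frame, assuming a plain Ethernet II RARP request (this is a hand-built illustration, not ESXi's implementation):

```python
import struct

def build_rarp_notification(vm_mac: bytes) -> bytes:
    """Build a broadcast RARP frame announcing vm_mac. Sending it out
    the new uplink makes the upstream switch move its MAC-table entry
    to that port before real traffic follows."""
    broadcast = b"\xff" * 6
    ethertype = struct.pack("!H", 0x8035)  # EtherType for RARP
    # hrd=1 (Ethernet), pro=0x0800 (IPv4), hln=6, pln=4, op=3 (request reverse)
    header = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 3)
    zero_ip = b"\x00" * 4  # no IP is claimed; only the MAC matters
    payload = header + vm_mac + zero_ip + vm_mac + zero_ip
    return broadcast + vm_mac + ethertype + payload

frame = build_rarp_notification(b"\x00\x50\x56\x00\x00\x01")
```

The switch does not need to answer the RARP; merely seeing the source MAC arrive on the new port is enough to update its forwarding table.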
I ran this idea by our network experts as well (Vyenkatesh Deshpande). As per him:
What you are seeing is not surprising. The question is do we want customers to do such type of mixed configuration. We don’t like it because it creates more confusion. Already, network people are struggling to understand that teaming configuration is not tied to physical NICs in virtual switch (VSS/VDS). It is good to be consistent across the physical switch and virtual switch configuration.
Also note that with LACP on the upstream switch, the only supported teaming policy is Route Based on IP Hash.
If you have missed the two posts that talk a lot about these topics (IP Hash and LBT), then read the two articles below:
Note: One of my friends tested this again today and uncovered some interesting results.
So YES, you can configure a vDS as proposed, but what we found is that with LACP enabled and the VM port groups NOT using IP Hash, the VMs could not communicate until IP Hash was toggled on and then off for that VM port group. After a VM is powered off and on again, it loses network connectivity until you toggle IP Hash again.
So it seems the IP Hash policy must apply to every newly connected dvPort; once a port is connected, you can change the load-balancing policy and the VMs continue to work.
Well, unless we find a way to change this behavior, the idea is debunked.