We have seen two kinds of FlexFabric design. In the first design, both 10Gb uplinks from one bay go to a single switch, and both 10Gb uplinks from the other bay go to the second switch. The scenario is as below: I have connected Enclosure 0: Bay 1: X5 to Switch 1 Port 1 and Enclosure 0: Bay 1: X6 to Switch 1 Port 2. Likewise, I have connected Enclosure 0: Bay 2: X5 to Switch 2 Port 1 and Enclosure 0: Bay 2: X6 to Switch 2 Port 2. I have also created an LACP bundle for these two ports on both of the switches.
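On the switch side, the LACP bundle for the two uplink ports might look like the following sketch in Cisco IOS syntax (the interface names and the channel-group number are assumptions, not from the original design):

```
interface range TenGigabitEthernet1/0/1 - 2
 description Uplinks to Enclosure 0 Bay 1 (X5/X6)
 switchport mode trunk
 channel-group 1 mode active   ! LACP active mode
!
interface Port-channel1
 switchport mode trunk
```

The equivalent configuration would be repeated on Switch 2 for the Bay 2 uplinks.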
Now for the second design we split the uplinks across separate switches. Below is the pictorial representation. Here we have connected Enclosure 0: Bay 1: X5 to Switch 1 Port 1 and Enclosure 0: Bay 1: X6 to Switch 2 Port 1. Similarly, we have connected Enclosure 0: Bay 2: X5 to Switch 1 Port 2 and Enclosure 0: Bay 2: X6 to Switch 2 Port 2. I have stacking links set up between these two physical switches.
Primarily we use both of these designs for VMware deployments. Now, a few questions came to mind:
- Which design will be appropriate in terms of Availability, Performance or both?
- In the second design, will LACP work without something like Cisco’s VSS/vPC/StackWise or HP Networking’s IRF?
- What would be the impact of STP in both of these cases (as standard practice I have already enabled STP PortFast on the switch side)?
- What throughput am I going to get in each scenario (keeping in mind that I will create Shared Uplink Sets (SUS) and both of my SUS will be Active/Active)?
- From a VMware perspective is there any consideration I need to make in terms of Load Balancing Policy?
Now I will try to answer them below:
- Based on the two diagrams above, we will take the second design for the sake of high availability. An article to support this statement is here: N+2 Failover for a Blade Design with upstream switches. In terms of performance both designs behave exactly the same way, so performance is not a differentiator.
- In the second design we split the uplinks across switches. LACP does not work across two independent switches unless you use either Cisco vPC/VSS or HP IRF technology, which make the switch pair appear as a single logical switch. So if you have Cisco switches at the distribution or access layer, you need to configure vPC (Nexus) or VSS (Catalyst); if you are in an HP shop, you need to use HP IRF.
- HP Virtual Connect does not participate in STP. The upstream switches simply see VC as a forwarding device; it does not send any STP BPDUs. The only thing we need to do is configure the switch uplink ports as spanning-tree edge ports. Putting a port in PortFast mode sends it straight to forwarding, skipping the listening/learning states, so the block-versus-forward decision never happens and we will not run into the 30-50 second convergence delay.
- In both cases we are going to get 40Gb of aggregate throughput (four 10Gb uplinks, all active).
- Now, coming to the VMware side, there are several considerations when choosing the load balancing policy. The first thing to note is that VMware standard vSwitches do not support LACP, so you need to create a static EtherChannel on the upstream switches. The second thing is that even if you create a static EtherChannel upstream, you still can’t use “Route Based on IP Hash“: HP does not support route based on IP hash with VC as the interconnect module. So you need to fall back to the other available policies. In my view, if you want to scale and at the same time get the full performance benefit, use “Route Based on Physical NIC Load“ (Load Based Teaming). To support this statement I would point you to the article IP Hash versus LBT written by Frank Denneman.
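For the second answer, here is what presenting the cross-switch bundle as a single logical link might look like on a pair of Cisco Nexus switches using vPC (a minimal sketch; the domain number, port-channel numbers, and keepalive address are assumptions, and the same configuration must be mirrored on the peer switch):

```
feature lacp
feature vpc
!
vpc domain 10
  peer-keepalive destination 192.168.1.2
!
interface port-channel 100
  switchport mode trunk
  vpc peer-link                 ! inter-switch peer link
!
interface port-channel 1
  switchport mode trunk
  vpc 1                         ! same vPC number on both peers
!
interface Ethernet1/1
  switchport mode trunk
  channel-group 1 mode active   ! LACP toward the VC uplink
```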
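For the STP answer, marking the uplink-facing switch ports as edge ports might look like this on Cisco IOS (interface name assumed; the `trunk` keyword is needed because the uplinks are trunk ports, and BPDU Guard is an optional safety net):

```
interface TenGigabitEthernet1/0/1
 switchport mode trunk
 spanning-tree portfast trunk    ! edge port: go straight to forwarding
 spanning-tree bpduguard enable  ! optional: err-disable if a BPDU ever arrives
```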
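The arithmetic behind the 40Gb throughput answer, using the numbers from the designs above:

```python
# Aggregate uplink bandwidth with two Active/Active Shared Uplink Sets,
# each carrying two 10Gb uplinks (values from the designs above).
uplinks_per_sus = 2
sus_count = 2
gbps_per_uplink = 10

aggregate_gbps = uplinks_per_sus * sus_count * gbps_per_uplink
print(aggregate_gbps)  # → 40
```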
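To see why IP hash needs a matching static EtherChannel on the switch: the policy deterministically maps each source/destination IP pair to one uplink, so the switch must treat all uplinks as a single logical link or return traffic can arrive on a different physical port than expected. A simplified sketch of the idea (not VMware’s exact algorithm):

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    """Pick an uplink from a source/destination IP pair.

    Simplified illustration of the 'Route Based on IP Hash' idea:
    XOR the two 32-bit addresses and take the result modulo the
    number of active uplinks. Not VMware's exact algorithm.
    """
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % n_uplinks

# The same conversation always lands on the same uplink...
print(ip_hash_uplink("10.0.0.5", "10.0.0.9", 2))   # → 0
# ...while a different destination can hash to a different uplink.
print(ip_hash_uplink("10.0.0.5", "10.0.0.10", 2))  # → 1
```

Because the mapping depends only on the IP pair, a single VM talking to a single peer never exceeds one uplink’s bandwidth, which is part of why Load Based Teaming is often the better fit.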