VXLAN Teaming Policy Trade-Off – How are you Interacting with Upstream Network

We are already moving toward the scalable, elastic cloud datacenter, where scalability is our main concern, and that alone was enough to move us from VLAN-based datacenter networking to VXLAN-based datacenter networking.

I will not talk about how to configure VXLAN for your vCloud datacenter; rather, I will talk about how you want to interact with the upstream network. If you want to learn how to configure or prepare your vCloud environment for VXLAN, I suggest you look at the articles below.

So, before we go deep into the subject, let us look at what actually pushed us towards VXLAN, and then I will talk about the ways of interacting with the upstream network.

Today we face many challenges with VLAN-based datacenter networking. A few of them are listed below:

  • It cannot scale beyond roughly 4,096 segments (the 12-bit VLAN ID space) and has issues around STP and broadcast containment in very large environments
  • It does not provide the flexibility and agility needed for VM provisioning and ongoing VM movement within the datacenter
  • It does not allow overlapping IPs and MACs within and across tenants on the same Layer 2 segment, which is typically needed for test/dev workloads

So, to overcome these obstacles we had to opt for something that would give us:

  • Enablement of vCloud so a customer can consume the full capacity wherever it is available
  • Greater scalability and flexibility in cloud networking
  • Ease of workload provisioning, allowing creation of on-demand logical networks with the same properties as a VLAN
  • NAT for workloads in the public cloud and firewall rules in the cloud
  • Stretched/elastic clusters, meaning long-distance vMotion, typically in a metro cluster scenario

Thankfully, all of these can be tackled with VXLAN. But when you prepare your cloud to use the VXLAN capability, the question is: how do you want to interact with the upstream network? If you read the articles above, you know that we need to set the VXLAN teaming policy according to our upstream network design. So what options do you have today? Let me show you.

Here I am talking about Fail Over, Static EtherChannel, LACP – Active Mode and LACP – Passive Mode.

But hey, wait a minute! Do we know the consequences of each one of these?

Now let us look at the first one, which is Fail Over.

Well, the Fail Over teaming policy for the VXLAN VMkernel NIC is not recommended. You may want to ask me why. With Fail Over teaming, only one uplink is used for all VXLAN traffic. Although redundancy is available via the standby link, not all of the available bandwidth is used. If the physical hardware does not support LACP or EtherChannel, it is recommended to have at least one 10G NIC to handle the traffic.
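For reference, with Fail Over teaming there is no port channel on the upstream switch at all; the uplinks are plain, independent switch ports. A minimal sketch (the interface names and VLAN 32 are illustrative, mirroring the sample configurations later in this post):

interface GigabitEthernet1/1
switchport
switchport access vlan 32
switchport mode access
no ip address
!
interface GigabitEthernet1/2
switchport
switchport access vlan 32
switchport mode access
no ip address
!

The key point is simply the absence of any channel-group statement; the active/standby decision is made entirely by the host's teaming policy.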

Now moving on to Static EtherChannel. So, what is EtherChannel?

This is a link aggregation (port trunking) method used to provide fault-tolerance and high-speed links between switches, routers, and servers by grouping two to eight physical Ethernet links to create a logical Ethernet link with additional failover links.

EtherChannel provides incremental trunk speeds for Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet. With up to eight links in a bundle, EtherChannel combines multiple Fast Ethernet links for up to 800 Mbps, Gigabit Ethernet for up to 8 Gbps, and 10 Gigabit Ethernet for up to 80 Gbps.

Static EtherChannel requires IP Hash load balancing to be configured on the virtual switch, with a matching static port channel on the physical switching infrastructure. IP hash uses a hashing algorithm based on source and destination IP addresses to determine which host uplink egress traffic should be routed through.

But did you know that EtherChannel with IP Hash load balancing is technically quite complex to implement and has a number of prerequisites and limitations? For example, you cannot use beacon probing, and you cannot configure standby or unused uplinks.
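On the physical switch side you also want the port-channel hashing method to match the host's IP hash behaviour. On a Catalyst switch this is set globally; a sketch, assuming an IOS platform where src-dst-ip is not already the default:

port-channel load-balance src-dst-ip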

This is a Cisco EtherChannel sample configuration:

interface Port-channel10
switchport
switchport access vlan 32
switchport mode access
no ip address
!
interface GigabitEthernet1/1
switchport
switchport access vlan 32
switchport mode access
no ip address
channel-group 10 mode on
!

Now moving on to the next one, which is LACP. So, what is LACP?

LACP, otherwise known as the IEEE 802.3ad Link Aggregation Control Protocol, is simply a way to dynamically build an EtherChannel. Essentially, the "active" end of the LACP group sends out special frames advertising the ability and desire to form an EtherChannel. It is possible, and quite common, that both ends are set to the "active" state (versus the passive state). Additionally, LACP only supports full-duplex links (which isn't a concern for gigabit or faster links). Once these frames are exchanged, and if the ports on both sides agree that they support the requirements, LACP will form an EtherChannel.

To use LACP with vSphere, you need either vDS 5.1 or the Cisco Nexus 1000V virtual switch. The advantage of using the Cisco Nexus 1000V is that you can use whichever load distribution policy you want; LACP on the Nexus 1000V offers 19 different hashing algorithms.
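On the Nexus 1000V, the LACP channel is defined in the Ethernet (uplink) port profile. A minimal sketch of what that could look like (the port-profile name and VLAN are illustrative, not taken from a real deployment):

feature lacp
port-profile type ethernet system-uplink
switchport mode trunk
switchport trunk allowed vlan 32
channel-group auto mode active
no shutdown
state enabled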

This is a Cisco LACP sample configuration for the upstream physical switch:

switch# configure terminal
switch(config)# feature lacp
switch(config)# interface ethernet 1/4
switch(config-if)# channel-group 5 mode active

There are two modes of LACP you can use, and whichever one you choose, make sure you also configure the teaming policy for VXLAN accordingly in vShield.

Active LACP mode places a port into an active negotiating state, in which the port initiates negotiations with other ports by sending LACP packets.

Passive LACP mode places a port into a passive negotiating state, in which the port responds to the LACP packets it receives but does not initiate LACP negotiation.
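On the switch side, the only difference for passive mode is the channel-group mode keyword; a sketch against the same interface as the earlier sample:

switch(config)# interface ethernet 1/4
switch(config-if)# channel-group 5 mode passive

Remember that at least one end of the link must be in active mode; if both ends are passive, neither side initiates negotiation and the channel never forms.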

Update: After a discussion with Duncan, Venky, and Manish, I have updated the LACP section. It was a slip on my part to say that LACP is not yet supported on the vDS. It is indeed supported from vSphere 5.1 onwards, but you cannot use the vSphere Client to configure it; you have to use the Web Client.

 

About Prasenjit Sarkar

Prasenjit Sarkar is a Product Manager at Oracle for their Public Cloud with primary focus on Cloud Strategy, Cloud Native Applications and API Platform. His primary focus is driving Oracle’s Cloud Computing business with commercial and public sector customers; helping to shape and deliver on a strategy to build broad use of Oracle’s Infrastructure as a Service (IaaS) offerings such as Compute, Storage, Network & Database as a Service. He is also responsible for developing public/private cloud integration strategies, customer’s Cloud Computing architecture vision, future state architectures, and implementable architecture roadmaps in the context of the public, private, and hybrid cloud computing solutions Oracle can offer.

4 Replies to “VXLAN Teaming Policy Trade-Off – How are you Interacting with Upstream Network”

  1. Can you validate this statement "LACP is not currently supported on the native VMware vDS"? As Duncan mentioned, it is supported on the regular vDS, so I'm wondering what makes you think it is unsupported. Check this link when you get a chance "http://pubs.vmware.com/vsphere-51/topic/com.vmware.vsphere.networking.doc/GUID-118C7F3F-B5C9-4870-83E4-2A309027E63E.html", which discusses how to enable/disable LACP on the vDS using the vSphere Web Client. If the physical hardware does not support the feature, then how is it an issue with supporting it on vSphere? Even selecting the proper load balancing or failover features has its own upsides and downsides depending on the situation, so there is no one perfect solution for every problem. To be frank, I still did not get the intention of this post, my friend. :-)

  2. Certainly Manish. After the quick spot from Duncan (which indeed was a miss on my end), I am now updating the post.

    Thanks to Duncan, you, and Venky for clarifying the doubt. My intention for this post was to showcase the choices you have today around teaming for VXLAN and what you should also do at the vSphere layer to make it work, so that people don't just go ahead blindly and select anything and everything.

