Adaptive Load Balancing or LACP – Finding the sweet spot

Adaptive Load Balancing combines the increased bandwidth of 802.3ad with the network redundancy of Active-Passive. Both NICs are made active, and they can be connected to different switches; no additional configuration is needed at the physical switch level.

When an ALB bond is configured, it creates a single logical interface that balances traffic across both NICs. But how does this work with the iSCSI protocol?
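The P4000 creates the bond through its own management console, but the same balance-alb bond can be sketched on a generic Linux host with iproute2. This is illustrative only; the interface names (eth0, eth1, bond0) and the IP address are placeholders:

```shell
# Sketch: create a balance-alb (ALB) bond on a generic Linux host.
# The P4000 firmware does the equivalent internally.
ip link add bond0 type bond mode balance-alb miimon 100
ip link set eth0 down
ip link set eth0 master bond0   # eth0 becomes a slave of the bond
ip link set eth1 down
ip link set eth1 master bond0   # eth1 becomes a slave of the bond
ip link set bond0 up
ip addr add 10.0.0.10/24 dev bond0   # the bond carries the single IP
```

Note that the IP address lives on bond0, not on the individual NICs; the slaves carry no addresses of their own.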

The iSCSI specification (RFC 3720) is explicit about this: "For any iSCSI request issued over a TCP connection, the corresponding response and/or other related PDU(s) MUST be sent over the same connection. We call this 'connection allegiance'."
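Connection allegiance can be illustrated with a toy request/response exchange over a single TCP connection: the reply comes back on the very socket the command went out on, which is why the response is pinned to whatever NIC/MAC path that connection uses. This is not real iSCSI PDU handling, just the socket-level idea:

```python
# Toy illustration of "connection allegiance": the response to a request
# is sent back over the same TCP connection it arrived on.
import socket
import threading

def target_node(server_sock):
    """Accept one connection and answer the request on that same connection."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)              # incoming "command PDU"
        conn.sendall(b"RESPONSE:" + data)   # response MUST use same connection

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=target_node, args=(server,), daemon=True).start()

initiator = socket.create_connection(("127.0.0.1", port))
initiator.sendall(b"READ LBA 0")
reply = initiator.recv(1024)    # arrives on the same socket, same NIC path
initiator.close()
server.close()
print(reply)  # b'RESPONSE:READ LBA 0'
```

Because the response is bound to the connection, per-connection traffic cannot be split across two NICs; only multiple connections can be balanced.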

This means the P4000 node must send the I/O back over the same connection, and therefore from the same MAC address. How does this affect bandwidth? As stated in the iSCSI SAN Configuration Guide: "ESX Server-based iSCSI initiators establish only one connection to each target."

It looks like ESX will communicate with the gateway connection of the LUN over only one NIC.

When you create a bond on a P4000 node, the bond becomes the 'interface', and the MAC address of one of the NICs becomes the MAC address of the bond. The individual NICs become 'slaves' at that point. I/O is sent to and from the 'interface' (the bond), and the bonding logic figures out how to manage the two slaves behind the scenes. So with ALB, transmitted packets can go out either NIC (slave), but they are associated with the MAC of the bond interface, not of the slave device.
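The transmit side of this bonding logic can be sketched as a toy model: outgoing flows are spread across the slaves (here by a simple least-loaded rule, in the spirit of balance-tlb's load estimate), while the bond presents a single MAC to the world. The class, MACs, and selection rule are illustrative, not the actual driver logic:

```python
# Toy model of ALB's transmit side: traffic is spread across the slave
# NICs while the bond presents one MAC address. Illustrative only.
class AlbBond:
    def __init__(self, bond_mac, slaves):
        self.mac = bond_mac                      # the MAC the world sees
        self.tx_bytes = {s: 0 for s in slaves}   # per-slave load counters

    def transmit(self, nbytes):
        # pick the least-loaded slave, roughly like balance-tlb does
        slave = min(self.tx_bytes, key=self.tx_bytes.get)
        self.tx_bytes[slave] += nbytes
        return slave

bond = AlbBond("00:11:22:33:44:55", ["eth0", "eth1"])
used = {bond.transmit(1500) for _ in range(10)}
print(sorted(used))   # both slaves end up carrying traffic
print(bond.mac)       # but the advertised MAC is always the bond's
```

With equal-sized sends the toy model simply alternates slaves; the real driver estimates slave load from actual interface statistics.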

The bond uses the IP and MAC address of the first onboard NIC. This means the node uses both interfaces to transmit data, but only one to receive.

Adaptive Load Balancing (ALB) is the most flexible NIC bonding technique that can be enabled on the storage nodes; it provides increased bandwidth and fault tolerance. Typically no special switch configuration is required to implement ALB. Both NICs in the storage nodes are made active, and they can be connected to different switches for active-active port failover. With two GigE NICs, ALB operates at 2 gigabits of aggregated bandwidth. ALB is only supported between NICs of the same speed – e.g., two GigE NICs; it is not supported between a 10Gb and a GigE NIC.

About Prasenjit Sarkar

Prasenjit Sarkar is a Product Manager at Oracle for their Public Cloud with primary focus on Cloud Strategy, Oracle Openstack, PaaS, Cloud Native Applications and API Platform. His primary focus is driving Oracle’s Cloud Computing business with commercial and public sector customers; helping to shape and deliver on a strategy to build broad use of Oracle’s Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) offerings such as Compute, Storage, Java as a Service, and Database as a Service. He is also responsible for developing public/private cloud integration strategies, customer’s Cloud Computing architecture vision, future state architectures, and implementable architecture roadmaps in the context of the public, private, and hybrid cloud computing solutions Oracle can offer.

One thought on “Adaptive Load Balancing or LACP – Finding the sweet spot”

  1. Have a look at http://en.wikipedia.org/wiki/Link_aggregation
    There you will find: “Adaptive load balancing (balance-alb) includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface such that different network-peers use different MAC addresses for their network packet traffic.”
    So “The bond uses the same IP and MAC address of the first onboard NIC” is not completely right.
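The ARP trick the quote describes can be sketched as a toy model: the driver rewrites the source MAC in outgoing ARP replies so that different peers learn different slave MACs, and their return traffic therefore lands on different NICs. The MAC values and the round-robin assignment are placeholders; the real driver balances peers by load inside the kernel bonding module:

```python
# Toy model of balance-alb's receive load balancing (rlb): per-peer
# rewriting of the source MAC in ARP replies. Illustrative only.
from itertools import cycle

slave_macs = ["aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"]  # placeholder MACs
assign = cycle(slave_macs)
peer_mac = {}  # peer IP -> slave MAC handed out in that peer's ARP reply

def arp_reply_mac(peer_ip):
    """Return the (rewritten) source MAC this peer learns via ARP."""
    if peer_ip not in peer_mac:
        peer_mac[peer_ip] = next(assign)   # spread peers across slaves
    return peer_mac[peer_ip]

print(arp_reply_mac("10.0.0.1"))  # first peer learns one slave's MAC
print(arp_reply_mac("10.0.0.2"))  # second peer learns the other's
print(arp_reply_mac("10.0.0.1"))  # a given peer keeps its assigned MAC
```

This is exactly why the commenter's correction holds: with rlb active, receive traffic is not tied to a single NIC's MAC, but is distributed per peer.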