As we move more and more to the Cloud, we keep talking about Storage High Availability and about designs with no Single Point of Failure.
However, I have failed to find a guide or blog post that covers Storage High Availability from every perspective: the host, the FC SAN, the iSCSI SAN, the overall storage side, LUNs and RAID, and the network side for iSCSI. So, I thought to create a series of separate blog posts covering each of these areas.
Today I am going to write about the best practices for maintaining a Highly Available host in a Storage Design. First I will talk about the FC SAN side, and then the iSCSI side.
Fibre Channel SAN Perspective
- High availability requires at least two HBA connections to provide redundant paths to the SAN or storage system, so redundant HBAs are a best practice. Using more than one single-port HBA isolates port and path failures and may provide performance benefits. A multiport HBA saves component cost and simplifies port management, and is useful on hosts with few available I/O bus slots, but it is a single point of failure for all of its ports; with single-port HBAs, a failure affects only one port.
- HBAs should also be placed on separate host buses for performance and availability. This may not be possible on hosts that have a single bus or a limited number of bus slots. In this case, multiport HBAs are the only option.
- Always use an HBA that equals or exceeds the bandwidth of the storage network; for example, do not use 4 Gb/s or slower HBAs on an 8 Gb/s SAN. The fabric negotiates the path down to the HBA's speed, either as far as the first connected switch or, in a direct-attach configuration, at the storage system's front-end port. This creates a bottleneck exactly where the intention was to maximize bandwidth.
- Finally, using the most current HBA firmware and driver from the manufacturer is always recommended.
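To make the speed-matching point concrete, here is a minimal sketch of a check you could run on a Linux host. The fabric speed and the sample value are assumptions; on a real host each HBA reports its negotiated speed in `/sys/class/fc_host/host*/speed` as a string such as "8 Gbit".

```shell
#!/bin/sh
# Hypothetical sketch: flag any HBA that would slow down the fabric.
# FABRIC_GBPS and the sample speed string below are assumptions.
FABRIC_GBPS=8

check_hba_speed() {
    # $1 is the reported HBA speed string, e.g. "4 Gbit"
    hba_gbps=$(printf '%s' "$1" | awk '{print $1}')
    if [ "$hba_gbps" -lt "$FABRIC_GBPS" ]; then
        echo "WARN: ${hba_gbps} Gb/s HBA on an ${FABRIC_GBPS} Gb/s fabric"
    else
        echo "OK: ${hba_gbps} Gb/s"
    fi
}

# On a live host you would iterate over the real sysfs entries, e.g.:
#   for h in /sys/class/fc_host/host*; do check_hba_speed "$(cat "$h/speed")"; done
check_hba_speed "4 Gbit"
```

Running this against every `fc_host` entry after a fabric upgrade is a quick way to spot the slow HBA that is quietly capping your throughput.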
iSCSI SAN Perspective
iSCSI environments may make use of NICs, TOE cards, or iSCSI HBAs. The differences include cost, host CPU utilization, and features such as security. Note that the same server cannot use both NICs and iSCSI HBAs to connect to the same storage system.
NICs are the typical way of connecting a host to an Ethernet network, and are supported by software iSCSI initiators.
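For the software-initiator case, the session setup with open-iscsi looks roughly like the sketch below. The portal address and target IQN are placeholders (assumptions), so substitute your own values.

```shell
# Minimal open-iscsi (software initiator) walkthrough with placeholder values.
PORTAL="192.168.10.20"                     # hypothetical storage portal IP
TARGET="iqn.2001-05.com.example:array1"    # hypothetical target IQN

# Discover the targets advertised by the storage system's portal:
#   iscsiadm -m discovery -t sendtargets -p "$PORTAL"
# Log in to a discovered target:
#   iscsiadm -m node -T "$TARGET" -p "$PORTAL" --login
# List the active sessions (repeat the login per NIC/subnet for redundancy):
#   iscsiadm -m session

# Shown here as a dry run that only builds the discovery command:
CMD="iscsiadm -m discovery -t sendtargets -p $PORTAL"
echo "$CMD"
```

The key availability point is the last comment: one session per NIC (and per subnet) is what gives the multipathing layer redundant paths to fail over between.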
- Ethernet links auto-negotiate down to the lowest common device speed, so a slow NIC can bottleneck the storage network's bandwidth. Do not use legacy 10/100 Mb/s NICs for iSCSI connections to 1 Gb/s or faster Ethernet networks.
- Use a TOE (TCP Offload Engine) NIC where possible. A TOE has onboard processors that offload TCP packet segmentation, checksum calculations, and optionally IPsec from the host CPU to the card itself, leaving the host CPU(s) free for application processing.
- Redundant NICs, iSCSI HBAs, and TOEs should be used for availability. NICs may be either single- or multiported. A host with a multiported NIC, or with more than one NIC, is called a multihomed host. Typically, each NIC or NIC port is configured to be on a separate subnet. Ideally, when more than one NIC is provisioned, they should also be placed on separate host buses. Note that this may not be possible on smaller hosts with a single bus or a limited number of bus slots, or when the onboard host NIC is used.
- Not all NICs deliver the same level of performance. This is particularly true of onboard motherboard NICs, 10 Gb/s NICs, and 10 Gb/s iSCSI HBAs.
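The "separate subnet per NIC" rule above is easy to verify mechanically. Here is a minimal sketch that assumes simple /24 networks (a hypothetical simplification; real deployments may use other prefix lengths), with placeholder addresses.

```shell
#!/bin/sh
# Sketch: warn when two iSCSI NIC addresses land on the same /24 subnet.
# Assumes /24 masks; the sample addresses are placeholders.
same_24_subnet() {
    # Compare the first three octets of two dotted-quad addresses.
    a=$(printf '%s' "$1" | cut -d. -f1-3)
    b=$(printf '%s' "$2" | cut -d. -f1-3)
    [ "$a" = "$b" ]
}

if same_24_subnet "192.168.10.11" "192.168.10.12"; then
    echo "WARN: both NICs share one /24 subnet"
else
    echo "OK: NICs are on separate subnets"
fi
```

Putting each NIC on its own subnet forces traffic out of the intended port rather than letting the host route both sessions through one interface, which would silently defeat the redundancy.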