In vCC 2.0, copying a VM from the On-Prem Datacenter to the Cloud Datacenter could consume a significant amount of time. Now, with the release of VMware vCloud Connector 2.5, VMware brings new features to address this: Path Optimization and UDP-based Data Transfer, aka UDT.
In vCC 2.0, the full OVF file needed to be copied in its entirety to each server in the path before it progressed to the next. This added to the overall copy time and also required that the staging area on each node be large enough to accommodate the full OVF.
In vCC 2.5 an architectural change has been made with the goal of decreasing the time it takes to copy VMs, vApps and Templates between Clouds. This new copy mechanism is called Path Optimization and offers better copy speeds and less reliance on the staging space of the vCC Nodes. These gains are the result of streaming data as opposed to moving full files.
With Path Optimization, data is streamed in small chunks straight from source to destination. As data is being exported from the source cloud, it is simultaneously transferred and imported into the destination cloud. Unlike previous versions of vCloud Connector, files are only written to the vCC Node staging area in the event of a transfer bottleneck.
You could look at this like the difference between downloading a movie and watching it on Netflix:
- If you are downloading, you have to wait for the entire file before watching
- With Netflix, the file is streamed and you can start watching almost instantly
Let me show you the old transfer method, where the OVF file had to be copied in full to each server in the path. Here the file has to traverse the entire path before the copy can complete.
Now, let me show you how Path Optimization changes this. Here a chunk does not need to wait for the previous one to traverse the entire path; instead, the chunks are streamed through the nodes in parallel.
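To make the contrast concrete, here is a minimal Python sketch of the two approaches, using generators to model the relay nodes. The chunk size, function names, and node chain are all illustrative, not vCC internals:

```python
# Illustrative sketch: store-and-forward (vCC 2.0 style) vs. streamed,
# pipelined copy (vCC 2.5 Path Optimization style) through relay nodes.

CHUNK_SIZE = 4  # bytes per chunk; tiny on purpose for the example

def store_and_forward(data, hops):
    """Each node receives the WHOLE file before relaying it onward."""
    staged = data
    for _ in range(hops):
        staged = bytes(staged)  # the full file lands in each node's staging area
    return staged

def streamed(data, hops):
    """Chunks flow through the chain; no node waits for the full file."""
    def relay(chunks):
        for chunk in chunks:    # forward each chunk as soon as it arrives
            yield chunk
    stream = (data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE))
    for _ in range(hops):
        stream = relay(stream)
    return b"".join(stream)

payload = b"example OVF payload"
assert store_and_forward(payload, 3) == streamed(payload, 3)
```

Both functions deliver identical bytes; the difference is that the streamed version never holds more than one chunk at any intermediate node, which is what removes the big staging-space requirement.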
In addition to HTTP transfer, a new protocol called UDP-based Data Transfer, or UDT, has been added. UDT is a reliable, high-speed data transfer protocol based on UDP (User Datagram Protocol). UDT offers significantly higher speeds for transfers over high-latency, high-bandwidth networks.
UDT itself is an application layer protocol like FTP and SMTP. It is able to leverage the speed of UDP but also adds its own controls to make sure all data makes it to its destination. The data is encoded and then encapsulated into UDP packets at the Transport Layer, and passed down the stack for transmission.
As mentioned, UDT uses UDP but also implements features that allow it to be:
- Fast – using the inherent speed of UDP
- Fair – estimates the network capacity and avoids flooding
- Friendly – establishes a connection with its target
Let's talk about some of these features:
Connection Setup – In client/server mode, one UDT entity starts first as the server, and the peer side (the client) that wants to connect to it sends a handshake packet. The client keeps sending the handshake packet at a constant interval until it receives a response handshake from the server or a time-out timer expires.
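As a rough illustration of that handshake loop, here is a Python sketch over a localhost UDP socket. The message contents, resend interval, and retry count are invented for the example and are not the actual UDT wire format:

```python
import socket
import threading

# Illustrative sketch of UDT-style connection setup over plain UDP:
# the client resends a handshake at a fixed interval until the server
# answers or the overall retry budget (our stand-in for the time-out
# timer) is exhausted.

def server(sock, ready):
    ready.set()
    data, addr = sock.recvfrom(1024)
    if data == b"HANDSHAKE":                 # invented message format
        sock.sendto(b"HANDSHAKE-ACK", addr)

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))                   # OS-assigned free port
port = srv.getsockname()[1]
ready = threading.Event()
threading.Thread(target=server, args=(srv, ready), daemon=True).start()
ready.wait()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(0.25)                         # resend interval
connected = False
for attempt in range(10):                    # give up after 10 tries
    cli.sendto(b"HANDSHAKE", ("127.0.0.1", port))
    try:
        reply, _ = cli.recvfrom(1024)
        connected = reply == b"HANDSHAKE-ACK"
        break
    except socket.timeout:
        continue                             # no response yet: resend
print("connected:", connected)
```

The key behavior is in the client loop: each timed-out receive triggers another handshake send, which is exactly the "keep sending at a constant interval" rule described above.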
Reliability – UDT uses periodic acknowledgments (ACK) to confirm packet delivery, while negative ACKs (loss reports) are used to report packet loss.
Periodic ACKs help to reduce control traffic when the data transfer speed is high, because in these situations, the number of ACKs is proportional to time, rather than the number of data packets.
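A quick back-of-the-envelope sketch (with made-up numbers) shows why this matters:

```python
# Illustrative comparison: per-packet ACKs grow with the packet count,
# while periodic ACKs depend only on elapsed time. The interval and
# packet counts are invented example values, not UDT's actual parameters.

def acks_per_packet(packets_sent):
    return packets_sent                      # one ACK for every data packet

def acks_periodic(duration_s, ack_interval_s=0.01):
    return int(duration_s / ack_interval_s)  # one ACK per fixed interval

# A 10-second transfer at two very different speeds:
slow_packets = 10_000
fast_packets = 10_000_000

print(acks_per_packet(slow_packets))   # 10,000 ACKs
print(acks_per_packet(fast_packets))   # 10,000,000 ACKs
print(acks_periodic(10))               # 1,000 ACKs either way
```

At high transfer speeds the periodic scheme sends the same modest amount of control traffic regardless of how many data packets flew by, which is the point made above.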
Congestion Control is the mechanism that allows UDT to utilize the network bandwidth effectively. It uses packet loss information to identify problems; an increase in round-trip time can also indicate congestion somewhere along the path.
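In the spirit of that loss-driven approach, here is a minimal sketch of a rate-control step. The constants are invented for illustration and are not UDT's actual control parameters:

```python
# Illustrative loss-driven rate control: probe for more bandwidth while
# the path is clean, cut the rate back when a loss report arrives.
# The increase step and decrease factor below are made-up example values.

def adjust_rate(rate_pps, loss_reported, increase=100, decrease_factor=0.9):
    if loss_reported:
        return rate_pps * decrease_factor    # back off on packet loss
    return rate_pps + increase               # no loss: send a bit faster

rate = 1000.0
rate = adjust_rate(rate, loss_reported=False)   # 1100.0
rate = adjust_rate(rate, loss_reported=True)    # 990.0
print(rate)
```

The multiplicative decrease on loss is what keeps a sender from flooding a congested path, while the steady additive increase keeps probing for spare capacity.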
Bandwidth Estimation uses pairs of packets to periodically probe the network bandwidth. The packets are sent back to back, and the receiver checks the difference in their arrival times and uses an algorithm to calculate the bandwidth.
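The packet-pair idea can be sketched in a few lines; the packet size and arrival gap below are just example numbers:

```python
# Illustrative packet-pair estimate: two packets sent back to back arrive
# separated by roughly the time the bottleneck link needs to forward one
# packet, so bandwidth ~= packet_size / inter-arrival gap.

def estimate_bandwidth(packet_size_bytes, arrival_gap_s):
    return packet_size_bytes / arrival_gap_s   # bytes per second

# e.g. 1500-byte packets arriving 120 microseconds apart:
bw = estimate_bandwidth(1500, 120e-6)
print(f"{bw / 1e6:.1f} MB/s")                  # 12.5 MB/s
```

In practice an implementation would smooth many such samples rather than trust a single pair, since queuing noise perturbs any one measurement.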
Full details regarding the UDT protocol can be found at http://udt.sourceforge.net/
In my next post, I will show you how to enable UDT in vCC and take a deeper dive into Path Optimization and UDT.