FAST Cache technology extends the storage system's DRAM cache by allocating a set of Flash drives to serve as FAST Cache. The benefit is that hotter data from applications running inside virtual machines is copied to FAST Cache. These applications therefore see improved response time and throughput, since the I/O is now serviced from Flash drives.
In VDI environments, we see tremendous improvements during boot storms, desktop refreshes, and anti-virus scans, when there is heavy I/O activity on the VMs.
FAST VP and FAST Cache can be used together to improve storage system performance. Customers with a limited number of Flash drives can create FAST Cache and storage pools consisting of performance and capacity drives. For performance, FAST Cache will provide immediate benefits for any burst-prone data, while FAST VP will move warmer data to performance drives and colder data to capacity drives.
In general, FAST Cache should be used in cases where storage performance needs to improve immediately for I/O that is burst-prone in nature.
FAST Cache is storage system ‘aware’: storage system resources are not wasted by unnecessarily copying data to FAST Cache if it already resides on Flash drives. If FAST VP moves a slice of data to the extreme performance tier, FAST Cache will not promote that slice into FAST Cache – even if the slice meets the FAST Cache promotion criteria.
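This tier-awareness can be sketched as a simple promotion check. This is an illustrative sketch only, not EMC's actual implementation: the tier names, the `should_promote` helper, and its parameters are assumptions; the only behavior taken from the text is that slices already on the extreme performance tier are never promoted.

```python
# Hypothetical sketch of FAST Cache's "storage system aware" promotion check.
# Tier names and function signature are illustrative assumptions.

EXTREME_PERFORMANCE = "extreme_performance"  # Flash tier managed by FAST VP

def should_promote(slice_tier: str, meets_promotion_criteria: bool) -> bool:
    """Skip FAST Cache promotion for slices that already live on Flash drives."""
    if slice_tier == EXTREME_PERFORMANCE:
        # Data is already on Flash; copying it to FAST Cache would waste resources.
        return False
    return meets_promotion_criteria

# A hot slice on the extreme performance tier is never promoted:
assert should_promote(EXTREME_PERFORMANCE, True) is False
# A hot slice on a capacity tier is a promotion candidate:
assert should_promote("capacity", True) is True
```

The design point is simply that FAST VP placement takes precedence: FAST Cache checks where a slice lives before spending resources copying it.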
When initially deploying Flash drives in a storage system, use them for FAST Cache first. FAST Cache tracks I/Os smaller than 128 KB and requires multiple cache hits to 64 KB chunks before initiating promotions from performance or capacity drives to FAST Cache; as a result, I/O profiles that do not meet these criteria are better served by Flash drives in a pool or RAID group. So, in a nutshell, below are the benefits of using FAST Cache:
• Hot data copied to Flash drives
• Performance improvements for applications running inside virtual machines
• Substantial improvements in VDI environments during boot storms, refreshes, and anti-virus scans of virtual machines