For the last couple of weeks, my personal study and office work have taken a toll on my blog.
But anyway, after some time I have quite a few interesting topics to write about. One of them is FAST Cache and FAST VP design. A couple of weeks back, a friend and I were invited to a storage design workshop (for EMC storage) along with other technologists.
In these workshops we design for real-life customer scenarios, and I am going to talk about one such scenario we worked on. As you know, designing storage, especially EMC storage, comes with a lot of considerations. One such consideration is how to design FAST Cache and FAST VP.
Four months back I wrote some articles on EMC FAST Cache and FAST VP, and how they can help us in the virtualization area:
- EMC FAST and VMware SDRS I/O Metric – Redundant Employee
- EMC VNX FAST Cache – Game Changer in VMware Virtualization
- EMC FAST VP and VMware SIOC – Goes Hand in Hand
Well, now let me show you an actual customer requirement and let us do the math to design the FAST Cache and FAST VP. The requirement goes like this:
A customer will implement a new VNX system to host data for 2 business-critical applications.
The VNX system proposed has 2 back-end ports per SP, and allows a maximum of 400 GB of FAST Cache to be configured. 200 GB Flash drives will be used for FAST Cache.
Application 1 uses 4500 GB of storage, and application 2 uses 5500 GB of storage. Data gathered from the customer site indicates that the combined skew for the 2 applications is 90%.
The workload for Application 1 has a read/write ratio of 3:1, with a total of 8000 IOPs generated, while Application 2 has a read/write ratio of 4:1, with 7000 IOPs generated. All I/O is small and random.
The customer is interested in implementing FAST VP and FAST Cache, and would like to see NL-SAS drives used to improve storage efficiency.
Now let me show you the math.
Total application space used is 4500 GB + 5500 GB = 10000 GB.
Skew is 90%, meaning 90% of the I/O targets just 10% of the capacity. The working data set is therefore 10000 GB x 0.1 = 1000 GB, and this data generates 0.9 x 15000 IOPs = 13500 IOPs.
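The skew math above can be sketched as follows (a minimal illustration using the numbers from the requirement; integer percentages keep the arithmetic exact):

```python
# Working-set math from the stated 90% skew (values are from the post).
app1_gb, app2_gb = 4500, 5500
app1_iops, app2_iops = 8000, 7000
skew_pct = 90          # 90% of the I/O hits 10% of the capacity

total_gb = app1_gb + app2_gb            # 10000 GB
total_iops = app1_iops + app2_iops      # 15000 IOPs

working_set_gb = total_gb * (100 - skew_pct) // 100   # 1000 GB of hot data
hot_iops = total_iops * skew_pct // 100               # 13500 IOPs against it
```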
We need to accommodate 1000 GB and 13500 IOPs on Flash drives. At a rule of thumb (ROT) of 3500 IOPs per drive, the IOPs call for 4 drives, but the data capacity requires 5 drives or more.
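In code, the drive count is the larger of the IOPs-driven and capacity-driven counts (a sketch, using the rule-of-thumb figures stated above):

```python
import math

# Flash drive count: size for whichever constraint (IOPs or capacity) is larger.
hot_iops = 13500
working_set_gb = 1000
rot_iops_per_flash = 3500   # rule-of-thumb IOPs per Flash drive
flash_drive_gb = 200

drives_for_iops = math.ceil(hot_iops / rot_iops_per_flash)        # 4 drives
drives_for_capacity = math.ceil(working_set_gb / flash_drive_gb)  # 5 drives
flash_drives_needed = max(drives_for_iops, drives_for_capacity)   # capacity wins: 5
```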
We can start with FAST Cache: 4 drives of 200 GB each, configured as mirrored pairs, allow 14000 IOPs and 400 GB of data, which is the platform maximum. An additional 600 GB of data space is required on Flash drives, so the Pool could have 5 x 200 GB drives as 4+1 RAID5, for a total of 800 GB and 17500 IOPs. The IOPs capacity of the Flash drives exceeds the requirement, even if write penalties due to RAID type are factored in.
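The split between FAST Cache and the pool's Flash tier works out like this (a sketch with the post's figures; the RAID overheads are as stated above):

```python
drive_gb = 200
rot_iops = 3500

# FAST Cache: 4 drives in mirrored (RAID1) pairs, so half the raw space is usable.
cache_drives = 4
cache_gb = cache_drives * drive_gb // 2     # 400 GB, the platform maximum
cache_iops = cache_drives * rot_iops        # 14000 IOPs

# The remaining hot data lands on a 4+1 RAID5 Flash tier in the pool.
remaining_hot_gb = 1000 - cache_gb          # 600 GB still needed on Flash
tier_drives = 5
tier_gb = (tier_drives - 1) * drive_gb      # 800 GB usable from 4+1 RAID5
tier_iops = tier_drives * rot_iops          # 17500 IOPs
```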
The remainder is 9000 GB of data and 1500 IOPs. This 1500 IOPs consists of 200 + 140 = 340 writes/s and 600 + 560 = 1160 reads/s. With a RAID5 write penalty of 4 applied to the writes, that is 1160 + (4 x 340) = 2520 disk IOPs. At 90 IOPs per NL-SAS drive, this would mean 28 NL-SAS drives. Capacity needs would be met if the drives were 1 TB drives.
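The NL-SAS sizing can be sketched the same way (note the write penalty of 4 is an assumption consistent with the 2520 figure; the post does not name the RAID type for the NL-SAS tier):

```python
import math

# NL-SAS tier: back-end disk IOPs after applying a RAID5 write penalty of 4.
cold_reads = 600 + 560     # leftover reads/s from App1 and App2
cold_writes = 200 + 140    # leftover writes/s
write_penalty = 4          # RAID5: each host write costs 4 disk I/Os

disk_iops = cold_reads + cold_writes * write_penalty   # 1160 + 1360 = 2520
nlsas_drives = math.ceil(disk_iops / 90)               # at 90 IOPs per drive: 28
```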