After speaking to our local rep and attending many different classes at the most recent EMC World in Vegas, I came away with some good information and a very logical best practice for implementing multi-tiered FAST VP storage pools.
First and foremost, you have to use Flash. High RPM Fiber Channel drives are neither capacity efficient nor performance efficient, so the highest-IO data needs to be hosted on Flash drives. The most effective split of drives in a storage pool is 5% Flash, 20% Fiber Channel, and 75% SATA.
Using this example, if you have an existing SAN with 167 15,000 RPM 600GB Fiber Channel Drives, you would replace them with 97 drives in the 5/20/75 blend to get the same capacity with much improved performance:
- 25 200GB Flash Drives
- 34 15K 600GB Fiber Channel Drives
- 38 2TB SATA Drives
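As a quick sanity check of that blend, here's a small Python sketch that totals the raw capacity per tier for the drive counts above (raw capacities only; real sizing would also account for RAID overhead, hot spares, and vendor guidelines):

```python
# Raw capacity per tier for the 25/34/38 drive counts in the example.
tiers = {
    "Flash 200GB":   (25, 0.2),  # (drive count, TB per drive)
    "FC 15K 600GB":  (34, 0.6),
    "SATA 2TB":      (38, 2.0),
}

total = sum(count * tb for count, tb in tiers.values())
for name, (count, tb) in tiers.items():
    cap = count * tb
    print(f"{name}: {cap:.1f} TB ({cap / total:.0%})")
print(f"Total: {total:.1f} TB vs {167 * 0.6:.1f} TB for 167 all-FC drives")
```

The 97-drive blend works out to roughly 101 TB raw, matching (slightly exceeding) the 100 TB of the original 167-drive all-FC configuration while landing close to the 5/20/75 split.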
The ideal scenario is to implement FAST Cache along with FAST VP. FAST Cache continuously ensures that the hottest data is served from Flash drives. With FAST Cache, up to 80% of your data IO will come from cache (legacy DRAM cache served up only about 20%).
It can be a hard pill to swallow when you see how much the Flash drives cost, but their cost is offset by increased disk utilization and a reduction in the total number of drives and DAEs that you need to buy. With all FC drives, disk utilization is sacrificed to get the needed performance: very little of the capacity is actually used, and you end up buying tons of disks just to get more spindles into the RAID groups for better performance. Flash drives can achieve much higher utilization, reducing the effective cost.
After implementing this at my company I’ve seen dramatic performance improvements. It’s an effective strategy that really works in the real world.
In addition to this, I've also been implementing storage pools in identical pairs. The first pool is designated only for SP A, the second only for SP B. When I get a request for data storage, say for 1TB, I will create a 500GB LUN in the first pool on SP A and a 500GB LUN in the second pool on SP B. When the disks are presented to the host server, the server administrator then stripes the data across the two LUNs. Using this method, I can better balance the load across the storage processors on the back end.
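The bookkeeping for that pairing scheme can be sketched in a few lines of Python (pool names and the dictionary layout here are hypothetical, just to illustrate the split):

```python
# Sketch: split a capacity request into two equal LUNs, one per
# storage pool, so each storage processor carries half the load.
def split_request(total_gb):
    """Return two LUN specs covering total_gb, one for SP A and one for SP B."""
    half = total_gb // 2
    return [
        {"pool": "Pool-1", "sp": "SP A", "size_gb": half},
        {"pool": "Pool-2", "sp": "SP B", "size_gb": total_gb - half},
    ]

# A 1TB (1000GB) request becomes two 500GB LUNs, one on each SP;
# the host administrator then stripes across the pair.
for lun in split_request(1000):
    print(lun)
```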