Strategies for implementing Multi-tiered FAST VP Storage Pools

After speaking to our local rep and attending many different classes at the most recent EMC World in Vegas, I came away with some good information and a very logical best practice for implementing multi-tiered FAST VP storage pools.

First and foremost, you have to use Flash.  High-RPM Fibre Channel drives are neither capacity efficient nor performance efficient, so the highest-IO data needs to be hosted on Flash drives.  The most effective split of drives in a storage pool, by capacity, is 5% Flash, 20% Fibre Channel, and 75% SATA.

Using this example, if you have an existing SAN with 167 15,000 RPM 600GB Fibre Channel drives, you would replace them with 97 drives in the 5/20/75 blend to get the same capacity with much improved performance:

  • 25 200GB Flash Drives
  • 34 15K 600GB Fibre Channel Drives
  • 38 2TB SATA Drives
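As a sanity check on the blend above, here is a quick sketch of the raw capacity math (raw drive capacities only; RAID and hot-spare overhead are ignored for simplicity):

```python
# Rough capacity check for the 5/20/75 blend above.
# Raw capacities only -- RAID and hot-spare overhead ignored.

existing = 167 * 600          # GB across the 15K 600GB FC drives being replaced
blend = {
    "Flash (200GB)": 25 * 200,
    "FC 15K (600GB)": 34 * 600,
    "SATA (2TB)": 38 * 2000,
}
total = sum(blend.values())

for tier, gb in blend.items():
    print(f"{tier:15s} {gb:7d} GB  ({gb / total:5.1%})")
print(f"Blend total: {total} GB vs existing {existing} GB")
```

The three tiers work out to roughly 5%, 20%, and 75% of the pool's raw capacity, and the 97-drive blend slightly exceeds the raw capacity of the original 167 drives.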

The ideal scenario is to implement FAST Cache along with FAST VP.  FAST Cache continuously ensures that the hottest data is served from Flash drives.  With FAST Cache, up to 80% of your data IO can come from cache (legacy DRAM cache served up only about 20%).
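FAST Cache's promotion logic is internal to the array, but the basic idea can be sketched in a few lines: track IO hits per chunk and copy the hottest chunks up to Flash. The 64KB chunk size and three-hit threshold here reflect how FAST Cache is commonly described, but this is an illustration of the concept, not the array's actual algorithm:

```python
from collections import Counter

# Toy sketch of FAST Cache-style promotion: count accesses per chunk
# and serve repeatedly hit chunks from Flash. Chunk size and the 3-hit
# threshold are assumptions for illustration.

CHUNK = 64 * 1024          # tracking granularity in bytes (assumed)
PROMOTE_AFTER = 3          # promote a chunk after this many hits (assumed)

hits = Counter()
flash_resident = set()

def on_io(byte_offset):
    chunk = byte_offset // CHUNK
    if chunk in flash_resident:
        return "flash"                 # hot chunk, served from Flash
    hits[chunk] += 1
    if hits[chunk] >= PROMOTE_AFTER:
        flash_resident.add(chunk)      # copy the chunk up to Flash
    return "spinning"                  # still served from FC/SATA
```

After three IOs land in the same chunk, subsequent IOs to that chunk come back from Flash, which is the behavior that lets a small amount of Flash absorb the bulk of a skewed IO workload.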

It can be a hard pill to swallow when you see how much the Flash drives cost, but their cost is offset by increased disk utilization and a reduction in the total number of drives and DAEs you need to buy.   With all-FC drives, disk utilization is sacrificed to get the needed performance: very little of the capacity is actually used, and you buy tons of disks just to get more spindles in the RAID groups for better performance.  Flash drives can achieve much higher utilization, reducing the effective cost.

After implementing this at my company I’ve seen dramatic performance improvements.  It’s an effective strategy that really works in the real world.

In addition to this, I’ve also been implementing storage pools in identically sized pairs.  The first pool is designated only for SP A, the second for SP B.  When I get a request for data storage, say 1 TB, I will create a 500GB LUN in the first pool on SP A and a 500GB LUN in the second pool on SP B.  When the disks are presented to the host server, the server administrator then stripes the data across the two LUNs.  Using this method, I can better balance the load across the storage processors on the back end.
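The paired-pool scheme above can be sketched as a simple allocation rule. Pool and LUN names here are made up for illustration; real provisioning would go through Unisphere or naviseccli:

```python
# Sketch of the paired-pool provisioning scheme: every request is split
# into two equal LUNs, one per pool, so the host can stripe across both
# storage processors. Pool and LUN naming is hypothetical.

def provision(request_gb, host):
    half = request_gb // 2
    return [
        {"pool": "Pool_A", "owner": "SP A", "size_gb": half,
         "lun": f"{host}_A"},
        {"pool": "Pool_B", "owner": "SP B", "size_gb": request_gb - half,
         "lun": f"{host}_B"},
    ]

# A 1 TB request becomes two 500GB LUNs, one owned by each SP:
for lun in provision(1000, "dbserver01"):
    print(lun)
```

Since the host-side stripe spreads IO evenly across both LUNs, each storage processor ends up carrying roughly half of that host's workload.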

7 thoughts on “Strategies for implementing Multi-tiered FAST VP Storage Pools”

  1. Did you place your existing storage groups under FAST VP control, or did you create a new storage group under FAST VP control?

    1. On our existing NS-960, we created the storage pools with new drives and then gradually migrated data from the traditionally provisioned RAID groups to the new tiered storage pools. On our newly deployed VNX arrays we set them up with tiered storage pools from the very beginning.

      If you already have storage pools with multiple drive types created, you can enable FAST VP on them at a later time if you didn’t purchase the license right away.

  2. Is there more documentation available about this? We have an integrated NS-960 and got some SSDs for testing, but our local EMC rep was only able to configure the SSDs as FAST Cache for some LUNs, and told us that the NS-960 is not FAST VP capable. Here I’m reading that you implemented it; are there differences between NS-960s?

    1. Andi, I don’t work for EMC so I can’t definitively say that all NS-960s are the same worldwide, but I can tell you that we’ve had FAST VP implemented on our NS-960 since mid-2010. As long as you are running FLARE 30 (or higher) and you are licensed for it, FAST VP should be available to you. I haven’t looked for any specific documentation, but if you look around you should be able to find some. It seems to me your rep doesn’t know what he’s talking about.
