Strategies for implementing Multi-tiered FAST VP Storage Pools

After speaking to our local rep and attending many different classes at the most recent EMC World in Vegas, I came away with some good information and a very logical best practice for implementing multi-tiered FAST VP storage pools.

First and foremost, you have to use Flash.  High RPM Fiber Channel drives are neither capacity efficient nor performance efficient; the highest-IO data needs to be hosted on Flash drives.  The most effective split of drives in a storage pool is 5% Flash, 20% Fiber Channel, and 75% SATA.

Using this example, if you have an existing SAN with 167 15,000 RPM 600GB Fiber Channel Drives, you would replace them with 97 drives in the 5/20/75 blend to get the same capacity with much improved performance:

  • 25 200GB Flash Drives
  • 34 15K 600GB Fiber Channel Drives
  • 38 2TB SATA Drives
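As a quick sanity check on the numbers above, here's a small Python sketch that totals raw capacity for both configurations (function and variable names are just illustrative; this is back-of-the-envelope raw GB, not usable capacity after RAID overhead):

```python
# Hypothetical sizing check for the 5/20/75 blend described above.
# Drive counts and per-drive capacities come straight from the article.

def blend_capacity(drives):
    """Sum raw capacity (GB) for a list of (count, gb_per_drive) tuples."""
    return sum(count * gb for count, gb in drives)

# Existing all-FC configuration: 167 x 600GB 15K drives
fc_only = blend_capacity([(167, 600)])

# Proposed blend: 25 x 200GB Flash, 34 x 600GB FC, 38 x 2TB SATA
blend = blend_capacity([(25, 200), (34, 600), (38, 2000)])

print(f"All-FC raw capacity:  {fc_only:,} GB over 167 drives")
print(f"Blended raw capacity: {blend:,} GB over {25 + 34 + 38} drives")
```

Both come out to roughly 100 TB raw, but the blended pool gets there with 97 drives instead of 167.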

The ideal scenario is to implement FAST Cache along with FAST VP.  FAST Cache continuously ensures that the hottest data is served from Flash Drives.  With FAST Cache, up to 80% of your data IO will come from Cache (Legacy DRAM Cache served up only about 20%).

It can be a hard pill to swallow when you see how much the Flash drives cost, but their cost is offset by increased disk utilization and a reduction in the total number of drives and DAEs you need to buy.  With all-FC drives, disk utilization is sacrificed to get the needed performance – very little of the capacity is actually used; you buy tons of disks just to get more spindles in the RAID groups for better performance.  Flash drives can achieve much higher utilization, reducing the effective cost.

After implementing this at my company I’ve seen dramatic performance improvements.  It’s an effective strategy that really works in the real world.

In addition to this, I’ve also been implementing storage pools in identically sized pairs.  The first pool is designated only for SP A, the second for SP B.  When I get a request for data storage, let’s say for 1 TB, I will create a 500GB LUN in the first pool on SP A and a 500GB LUN in the second pool on SP B.  When the disks are presented to the host server, the server administrator will then stripe the data across the two LUNs.  Using this method, I can better balance the load across the storage processors on the back end.
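The pool-pairing scheme above can be sketched in a few lines of Python. This is just an illustration of the allocation logic (the pool names and dict layout are my own invention, not CLARiiON CLI syntax): every incoming request is split into two equal LUNs, one per storage processor, ready for the host admin to stripe across:

```python
# Minimal sketch of the paired-pool allocation described above.
# Pool names and the dict structure are illustrative assumptions.

def split_request(total_gb):
    """Split a capacity request into two equal LUNs, one per SP."""
    half = total_gb // 2
    return [
        {"pool": "Pool_1", "sp": "A", "size_gb": half},            # SP A pool
        {"pool": "Pool_2", "sp": "B", "size_gb": total_gb - half}, # SP B pool
    ]

# A 1 TB (1000 GB) request becomes two 500GB LUNs, one on each SP
for lun in split_request(1000):
    print(lun)
```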

Critical Celerra FCO for Mac OS X 10.7

Here’s the description of this issue from EMC:

EMC has become aware of a compatibility issue with the new version of MacOS 10.7 (Lion). If you install this new MacOS 10.7 (Lion) release, it will cause the data movers in your Celerra to reboot which may result in data unavailability. Please note that your Celerra may encounter this issue when internal or external users operating the MacOS 10.7 (Lion) release access your Celerra system.  In order to protect against this occurrence, EMC has issued the earlier ETAs and we are proactively upgrading the NAS code operating on your affected Celerra system. EMC strongly recommends that you allow for this important upgrade as soon as possible.

Wow.  A single Mac user running 10.7 can cause a kernel panic and a reboot of your data mover.  According to our rep, Apple changed the implementation of SMB in 10.7 and added a few commands that the Celerra doesn’t recognize.  This is a big deal, folks – call your account rep for a DART upgrade if you haven’t already.