Problem with soft media errors on SSD drives and FastCache

4/25/2012 Update:  EMC has released a fix for this issue.  Call your account service representative and say you need to upgrade your NS-960 DART to 6.0.55.300 and FLARE to 4.30.000.5.524, plus a drive firmware upgrade on all SSD drives to TC3Q.

Do you have FastCache enabled on your array?  Keep a close eye on your SP event logs for soft media errors on your SSD drives.  I just noticed over 2,000 soft media errors on one of my FastCache-enabled arrays and found a technical advisory from EMC (emc282741) that describes this as a potentially critical problem.  I opened a case with EMC to have my array reviewed for a possible disk replacement.  If a second disk in the same FastCache RAID group encounters soft media errors before the system automatically retires the first drive, the RAID group can become dual-faulted.  This can take storage pools offline and make them completely inaccessible to the attached hosts.  That's basically a total SAN outage, not good.

Look for errors like the following in your SP event logs:

"Date Stamp"  "Time Stamp" Bus1 Enc1 Dsk0  820 Soft Media Error [Bad block]

EMC states in emc282741 that enhancements targeted for Q1 2012 will address SSD media errors and dual hardware faults.  In the meantime, make sure you review the SP logs on any CLARiiON or VNX arrays that are configured with SSD disk drives or are using FAST Cache.  If the "Soft Media Error" shown above is associated with any one of the solid state disk drives in your arrays, the array should be upgraded to at least FLARE Release 04.30.000.5.522 (for CX4 series arrays) or Release 05.31.000.5.509 (for VNX series arrays), and you should then start a Proactive Copy (PACO) to a hot spare and replace the drive as soon as possible.
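
If a suspect drive is identified, the proactive copy can also be kicked off from the CLI.  A minimal sketch, assuming drive 1_1_0 (Bus 1, Enclosure 1, Disk 0 from the log example above); check the naviseccli reference for your FLARE release, as the exact syntax may vary:

naviseccli -h clariion1a copytohotspare 1_1_0 -initiate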

To quickly review this on each of my arrays, I wrote the following script, which updates my intranet site with a report every morning:

naviseccli -h clariion1a getlog > clariion1a.txt
naviseccli -h clariion1b getlog > clariion1b.txt
cat clariion1a.txt | grep -i 'soft media' > clariion1_softmedia_errors.csv
cat clariion1b.txt | grep -i 'soft media' >> clariion1_softmedia_errors.csv
./csv2htm.pl -e -T -i /home/scripts/clariion1_softmedia_errors.csv -o /<intranet_web_server>/clariion1_softmedia_errors.html

The script dumps the entire SP log from each SP into a text file, greps for only soft media errors in each file, then converts the output to HTML and writes it to my intranet web server.
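
To generate the report every morning, the script can be scheduled with cron.  A minimal sketch, assuming the commands above are saved as a hypothetical /home/scripts/softmedia_report.sh:

0 6 * * * /home/scripts/softmedia_report.sh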


Reporting on the state of VNX auto-tiering


To go along with my previous post (reporting on LUN tier distribution), I also include information on the same intranet page about the current state of the auto-tiering job.  We run auto-tiering from 10 PM to 6 AM to avoid moving data during business hours or our normal evening backup window.

Sometimes the auto-tiering job gets very backed up and would theoretically never finish in the time slot we allow for data movement.  I like to keep tabs on the amount of data that needs to move up or down, and the array's estimated time to completion.  If needed, I will sometimes modify the schedule to run 24 hours a day over the weekend and change it back early Monday morning.  Unfortunately, EMC did not design the auto-tiering scheduler to allow different time windows on different days, so it's a manual process.
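
To spot a backlog at a glance, the daily output files can be scanned for pools whose estimate won't fit in the window.  A minimal sketch, assuming the autotier text files are collected in a hypothetical /reports/autotier directory on a Unix host (or under Cygwin):

for f in /reports/autotier/*.autotier.txt
do
   awk -v limit=480 '
      /Storage Pool Name:/ { pool = $NF }
      /Estimated Time to Complete:/ {
         mins = 0
         for (i = 1; i <= NF; i++) {
            if ($(i+1) ~ /^hour/)   mins += $i * 60
            if ($(i+1) ~ /^minute/) mins += $i
         }
         # flag any pool that cannot finish in the 8-hour (480-minute) window
         if (mins > limit) printf "%s: %s needs %d minutes\n", FILENAME, pool, mins
      }' $f
done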

This is a relatively simple, one-line CLI command, but it provides very useful info, and it's convenient to add to a daily report so you can see it at a glance.

I run this script at 6 AM every day, immediately after the data movement window closes:

naviseccli -h clariion1_hostname autotiering -info -state -rate -schedule -opStatus > c:\inetpub\wwwroot\clariion1_hostname.autotier.txt

naviseccli -h clariion2_hostname autotiering -info -state -rate -schedule -opStatus > c:\inetpub\wwwroot\clariion2_hostname.autotier.txt

naviseccli -h clariion3_hostname autotiering -info -state -rate -schedule -opStatus > c:\inetpub\wwwroot\clariion3_hostname.autotier.txt

naviseccli -h clariion4_hostname autotiering -info -state -rate -schedule -opStatus > c:\inetpub\wwwroot\clariion4_hostname.autotier.txt

....

The output for each individual CLARiiON looks like this:
Auto-Tiering State: Enabled
Relocation Rate: Medium

Schedule Name: Default Schedule
Schedule State: Enabled
Default Schedule: Yes
Schedule Days: Sun Mon Tue Wed Thu Fri Sat
Schedule Start Time: 22:00
Schedule Stop Time: 6:00
Schedule Duration: 8 hours
Storage Pools: Clariion1_SPB, Clariion2_SPA

Storage Pool Name: Clariion2_SPA
Storage Pool ID: 0
Relocation Start Time: 12/05/11 22:00
Relocation Stop Time: 12/06/11 6:00
Relocation Status: Inactive
Relocation Type: Scheduled
Relocation Rate: Medium
Data to Move Up (GBs): 1854.11
Data to Move Down (GBs): 909.06
Data Movement Completed (GBs): 2316.00
Estimated Time to Complete: 9 hours, 12 minutes
Schedule Duration Remaining: None

Storage Pool Name: Clariion1_SPB
Storage Pool ID: 1
Relocation Start Time: 12/05/11 22:00
Relocation Stop Time: 12/06/11 6:00
Relocation Status: Inactive
Relocation Type: Scheduled
Relocation Rate: Medium
Data to Move Up (GBs): 1757.11
Data to Move Down (GBs): 878.05
Data Movement Completed (GBs): 1726.00
Estimated Time to Complete: 11 hours, 42 minutes
Schedule Duration Remaining: None

Reporting on LUN auto-tier distribution

We have auto-tiering turned on in all of our storage pools, which all use EFD, FC, and SATA disks.  I created a script that generates a list of all of our LUNs and the current tier distribution for each LUN.  Note that this script is designed to run on Unix; it can also be run under Cygwin on a Windows server if you don't have access to a Unix-based server.

You will first need to create a text file with a list of the hostnames of your arrays (or the IP of one of the storage processors on each array).  Separate lists must be made for VNX and older CLARiiON arrays, as the naviseccli output changed for VNX.  For example, the tier labeled "Flash" in the output from a CX is labeled "Extreme Performance" in the output from a VNX when you run the same command.  I have one file named san.list for the older arrays, and another named san2.list for the VNX arrays.
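
The list files are just hostnames, one per line.  A sketch of san.list (the names are placeholders):

clariion1_hostname
clariion2_hostname
clariion3_hostname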

As I mentioned in my previous post, our naming convention for LUNs includes the pool ID, LUN number, default SP, last four digits of the array's serial number, server name, filesystem/drive letter, and size (in GB).  Having all of this information in the LUN name makes for very easy reporting; the LUN list alone carries everything I need.  If I need to look at tier distribution for a certain server, I simply filter the spreadsheet for the server name (which is included in the LUN name).

Here's what our LUN names look like: P1_LUN100_SPA_0000_servername_filesystem_150G
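
Because the server name is embedded in the LUN name, filtering the finished report for a single server is a one-liner (using the report file generated by the script below):

grep -i servername /reports/clariion1_hostname.report.csv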

As I said earlier, because the naviseccli output differs between VNX arrays and older CXs, I have two separate scripts.  I'll include the complete scripts first, then explain each section in more detail.

Here is the script for CX series arrays:

for san in `/bin/cat /reports/tiers/san.list`
do
naviseccli -h $san lun -list -tiers |grep LUN |awk '{print $2}' > $san.out
     for lun in `cat $san.out`
        do
        sleep 2
        echo $san
        naviseccli -h $san -np lun -list -name $lun -tiers > $lun.$san.dat &
     done
     # wait for the backgrounded naviseccli queries to finish before building the report
     wait

mv $san.report.csv $san.report.`date +%j`.csv
echo "LUN Name","FLASH","FC","SATA" > $san.report.csv
     for lun in `cat $san.out`
        do
        echo $lun
        echo `grep Name $lun.$san.dat |awk '{print $2}'`","`grep -i flash $lun.$san.dat |awk '{print $2}'`","`grep -i fc $lun.$san.dat |awk '{print $2}'`","`grep -i sata $lun.$san.dat |awk '{print $2}'` >> $san.report.csv
     done
done

./csv2htm.pl -e -T -i /reports/clariion1_hostname.report.csv -o /reports/clariion1_hostname.report.html

./csv2htm.pl -e -T -i /reports/clariion2_hostname.report.csv -o /reports/clariion2_hostname.report.html

./csv2htm.pl -e -T -i /reports/clariion3_hostname.report.csv -o /reports/clariion3_hostname.report.html

Here is the script for VNX series arrays:

for san in `/bin/cat /reports/tiers2/san2.list`
do
naviseccli -h $san lun -list -tiers |grep LUN |awk '{print $2}' > $san.out
   for lun in `cat $san.out`
     do
     sleep 2
     echo $san.Generating-LUN-List
     naviseccli -NoPoll -h $san lun -list -name $lun -tiers > $lun.$san.dat &
  done
  # wait for the backgrounded naviseccli queries to finish before building the report
  wait

mv $san.report.csv $san.report.`date +%j`.csv
echo "LUN Name","FLASH","FC","SATA" > $san.report.csv
   for lun in `cat $san.out`
      do
      echo $lun
      echo `grep Name $lun.$san.dat |awk '{print $2}'`","`grep -i extreme $lun.$san.dat |awk '{print $3}'`","`grep -i Performance $lun.$san.dat |grep -v Extreme |awk '{print $2}'`","`grep -i Capacity $lun.$san.dat |awk '{print $2}'` >> $san.report.csv
   done
done

./csv2htm.pl -e -T -i /reports/VNX1_hostname.report.csv -o /reports/VNX1_hostname.report.html

./csv2htm.pl -e -T -i /reports/VNX2_hostname.report.csv -o /reports/VNX2_hostname.report.html

./csv2htm.pl -e -T -i /reports/VNX3_hostname.report.csv -o /reports/VNX3_hostname.report.html

Here is a more detailed explanation of the script.

Section 1:

The entire script runs in a loop based on the SAN hostname entries in the list file.  This list determines which SANs the script queries for LUN information in the next section.

for san in `/bin/cat /reports/tiers/san.list`

do

naviseccli -h $san lun -list -tiers |grep LUN |awk '{print $2}' > $san.out

Section 2:

This section runs the naviseccli command for every LUN in each of the <san_hostname>.out files and outputs a separate text file with the tier distribution for each LUN.  If you have 500 LUNs, then 500 text files will be created in the directory you run the script from.

     for lun in `cat $san.out`
        do
        sleep 2
        echo $san
        naviseccli -h $san -np lun -list -name $lun -tiers > $lun.$san.dat &
     done
     # wait for the backgrounded naviseccli queries to finish before building the report
     wait

Each file will be named <lun_name>.<san_hostname>.dat, and the contents of the file look like this:
LOGICAL UNIT NUMBER 962
Name:  P1_LUN962_0000_SPB_servername_filesystem_350G
Tier Distribution: 
Flash:  4.74%
FC:  95.26%
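
On a VNX, the same file uses the newer tier labels.  This is a reconstruction based on the grep and awk fields in the VNX script above, so treat the exact spacing and labels as an assumption:

LOGICAL UNIT NUMBER 962
Name:  P1_LUN962_0000_SPB_servername_filesystem_350G
Tier Distribution:
Extreme Performance:  4.74%
Performance:  95.26%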

Section 3:

This line simply makes a copy of the previous day's output file for archiving purposes.  The %j appends the Julian date (the day of the year) to the file name, so files are automatically overwritten after one year.  It's a self-cleaning archive directory.  🙂

mv $san.report.csv $san.report.`date +%j`.csv
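
For example, on December 5th (day 339 of the year), yesterday's report is renamed like this:

mv clariion1_hostname.report.csv clariion1_hostname.report.339.csv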

Section 4:

This section then processes each individual LUN file pulling out only the tier information that we need, and then combines the list into one large output file in csv format.

The first line creates a blank CSV file with the appropriate column headers.

echo "LUN Name","FLASH","FC","SATA" > $san.report.csv

This block of code parses each individual LUN file, using grep to pull out each column item we need in the report and awk to grab only the specific text we want from that line.  For example, if the LUN output file has "Flash:  4.74%" on one line and we only want "4.74%" with the word "Flash:" stripped off, awk '{print $2}' grabs only the second field.

     for lun in `cat  $san.out`
        do
        echo $lun
        echo `grep Name $lun.$san.dat |awk '{print $2}'`","`grep -i flash $lun.$san.dat |awk '{print $2}'`","`grep -i fc $lun.$san.dat |awk '{print $2}'`","`grep -i sata $lun.$san.dat |awk '{print $2}'` >> $san.report.csv
     done
done
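
As a quick sanity check, the awk field extraction can be tested on its own:

echo 'Flash:  4.74%' | awk '{print $2}'     # prints 4.74%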

Once every LUN file has been processed and added to the report, I run the csv2htm.pl Perl script (from http://www.jpsdomain.org/source/perl.html) to add it to our intranet website.  The csv files are also added as download links on the site.
./csv2htm.pl -e -T -i /reports/clariion1_hostname.report.csv -o /reports/clariion1_hostname.report.html

./csv2htm.pl -e -T -i /reports/clariion2_hostname.report.csv -o /reports/clariion2_hostname.report.html

./csv2htm.pl -e -T -i /reports/clariion3_hostname.report.csv -o /reports/clariion3_hostname.report.html

And finally, the output looks like this:

LUN Name                                        FLASH    FC       SATA
P0_LUN101_0000_SPA_servername_filesystem_100G   24.32%   67.57%   8.11%
P0_LUN102_0000_SPA_servername_filesystem_100G   5.92%    58.77%   35.31%
P1_LUN103_0000_SPA_servername_filesystem_100G   7.00%    81.79%   11.20%
P1_LUN104_0000_SPA_servername_filesystem_100G   1.40%    77.20%   21.40%
P0_LUN200_0000_SPA_servername_filesystem_100G   5.77%    75.06%   19.17%
P0_LUN201_0000_SPA_servername_filesystem_100G   6.44%    71.21%   22.35%
P0_LUN202_0000_SPA_servername_filesystem_100G   4.55%    90.91%   4.55%
P0_LUN203_0000_SPA_servername_filesystem_100G   10.73%   80.76%   8.52%
P0_LUN204_0000_SPA_servername_filesystem_100G   8.62%    88.31%   3.08%
P0_LUN205_0000_SPA_servername_filesystem_100G   10.88%   82.65%   6.46%
P0_LUN206_0000_SPA_servername_filesystem_100G   7.00%    81.79%   11.20%
P0_LUN207_0000_SPA_servername_filesystem_100G   1.40%    77.20%   21.40%
P0_LUN208_0000_SPA_servername_filesystem_100G   5.77%    75.06%   19.17%

Reporting on Trespassed LUNs


All of our production CLARiiONs are configured with two large tiered storage pools: one for LUNs on SPA and one for LUNs on SPB.  When storage is created for a server, two identical LUNs are created (one in each pool) and striped at the host level.  I do it that way to more evenly balance the load on the storage processors.

I've noticed that LUNs will occasionally trespass to the other SP.  To keep the SPs balanced the way I want them, I routinely check for trespassed LUNs and move them back to their default owner.  Our naming convention for LUNs includes the SP the LUN was initially configured to use, as well as the pool ID, server name, filesystem/drive letter, last four digits of the serial number, and size.  Having all of this information in the LUN name makes for very easy reporting, and having the default SP in the LUN name is required for this script to work as written.

Here's what our LUN names look like:     P1_LUN100_SPA_0000_servername_filesystem_150G

To quickly check the status of any mismatched LUNs every morning, I created a script that generates a daily report.  The script first creates output files that list all of the LUNs on each SP, then uses simple grep commands to output only the LUNs whose SP designation in the name does not match the current owner.  The csv output files are then parsed by the csv2htm.pl Perl script, which converts the csv into easy-to-read HTML files that are automatically posted on our intranet site.  The csv2htm.pl script is from http://www.jpsdomain.org/source/perl.html and is under a GNU General Public License.  Note that this script is designed to run on Unix; it can also be run under Cygwin on a Windows server if you don't have access to a Unix-based server.

Here’s the shell script (I have one for each clariion/VNX):

naviseccli -h clariion_hostname getlun -name -owner |grep -i name > /reports/sp/lunname.out

sleep 5

naviseccli -h clariion_hostname getlun -name -owner |grep -i current >  /reports/sp/currentsp.out

sleep 5

paste -d , /reports/sp/lunname.out /reports/sp/currentsp.out >  /reports/sp/clariion_hostname.spowner.csv

./csv2htm.pl -e -T -i /reports/sp/clariion_hostname.spowner.csv -o /reports/sp/clariion_hostname.spowner.html

#Determine SP mismatches between LUNs and SPs, output to separate files

cat /reports/sp/clariion_hostname.spowner.csv | grep 'SP B' > /reports/sp/clariion_hostname_spb.csv

grep SPA /reports/sp/clariion_hostname_spb.csv > /reports/sp/clariion_hostname_spb_mismatch.csv

cat /reports/sp/clariion_hostname.spowner.csv | grep 'SP A' > /reports/sp/clariion_hostname_spa.csv

grep SPB /reports/sp/clariion_hostname_spa.csv > /reports/sp/clariion_hostname_spa_mismatch.csv

#Convert csv output files to HTML for intranet site

./csv2htm.pl -e -d -T -i /reports/sp/clariion_hostname_spa_mismatch.csv -o /reports/sp/clariion_hostname_spa_mismatch.html

./csv2htm.pl -e -d -T -i /reports/sp/clariion_hostname_spb_mismatch.csv -o /reports/sp/clariion_hostname_spb_mismatch.html

The output files look like this (clariion_hostname_spa_mismatch.html from the script):
Name: P1_LUN100_SPA_0000_servername_filesystem1_150G      Current Owner: SPB

Name: P1_LUN101_SPA_0000_servername_filesystem2_250G      Current Owner: SPB

Name: P1_LUN102_SPA_0000_servername_filesystem3_350G      Current Owner: SPB

Name: P1_LUN103_SPA_0000_servername_filesystem4_450G      Current Owner: SPB

Name: P1_LUN104_SPA_0000_servername_filesystem5_550G      Current Owner: SPB

The 0000 represents the last four digits of the serial number of the CLARiiON.

That’s it, a quick and easy way to report on trespassed LUNs in our environment.
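
When mismatches do show up, the LUNs can be trespassed back in bulk with the trespass command rather than one at a time in Unisphere.  A minimal sketch, run against the SP that should own the LUNs; verify the syntax for your FLARE release:

naviseccli -h clariion_hostname trespass mine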