Tag Archives: reporting

Reporting on Celerra / VNX NAS Pool capacity with a bash script

I recently created a script that I run on all of our Celerras and VNXs that reports on NAS pool size.   The output from each array is then converted to HTML and combined on a single intranet page to provide a quick at-a-glance view of our global NAS capacity and disk space consumption.  I made another post that shows how to create a block storage pool report as well:  http://emcsan.wordpress.com/2013/08/09/reporting-on-celerravnx-block-storage-pool-capacity-with-a-bash-script/

The default command unfortunately outputs only in megabytes, with no option to change to GB or TB.  This script performs the MB to GB conversion and adds a comma as the thousands separator (the convention we use in the USA) to make the output much more readable.
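For reference, here is the core of that conversion applied by hand to the used_mb value from the sample output further down; the intermediate number is simply what bc produces before the comma formatting, and the comma only appears if the en_US locale is set, which the script does with LC_NUMERIC:

[nasadmin@celerra]$ export LC_NUMERIC="en_US.UTF-8"
[nasadmin@celerra]$ echo "3437536/1024" | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'
3356.96875
[nasadmin@celerra]$ /usr/bin/printf "%'.2f\n" 3356.96875
3,356.97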

First, identify the ID number for each of your NAS pools.  You’ll need to insert the ID numbers into the script itself.

[nasadmin@celerra]$ nas_pool -list
 id      inuse   acl     name                      storage system
 10      y       0       NAS_Pool0_SPA             AKM00111000000
 18      y       0       NAS_Pool1_SPB             AKM00111000000
Note that the default output of the command that provides the size of each pool is in a format that's hard to read.  I wanted to clean it up to make it easier to read on our reporting page.  Here's the default output:
[nasadmin@celerra]$ nas_pool -size -all
id           = 10
name         = NAS_Pool0_SPA
used_mb      = 3437536
avail_mb     = 658459
total_mb     = 4095995
potential_mb = 0
id           = 18
name         = NAS_Pool1_SPB
used_mb      = 2697600
avail_mb     = 374396
total_mb     = 3071996
potential_mb = 1023998
 My script changes the output to look like the example below.
Name (Site)   ; Total GB ; Used GB  ; Avail GB
 NAS_Pool0_SPA ; 4,000    ; 3,356.97 ; 643.03
 NAS_Pool1_SPB ; 3,000    ; 2,634.38 ; 365.62
 In this example there are two NAS pools and this script is set up to report on both.  It could easily be expanded or reduced depending on the number of pools on your array. The variable names I used include the pool ID number from the output above; change them to match your IDs.  You'll also need to update the 'id=' portion of each command to match your pool IDs.

Here’s the script:

#!/bin/bash

NAS_DB="/nas"
export NAS_DB

# Set the Locale to English/US, used for adding the comma as a separator in a cron job
export LC_NUMERIC="en_US.UTF-8"
TODAY=$(date)

 

# Gather Pool Name, Used MB, Available MB, and Total MB for First Pool

# Set variable to pull the Name of the pool from the output of 'nas_pool -size'.
name18=`/nas/bin/nas_pool -size id=18 | /bin/grep name | /bin/awk '{print $3}'`

# Set variable to pull the Used MB of the pool from the output of 'nas_pool -size'.
usedmb18=`/nas/bin/nas_pool -size id=18 | /bin/grep used_mb | /bin/awk '{print $3}'`

# Set variable to pull the Available MB of the pool from the output of 'nas_pool -size'.
availmb18=`/nas/bin/nas_pool -size id=18 | /bin/grep avail_mb | /bin/awk '{print $3}'`
# Set variable to pull the Total MB of the pool from the output of 'nas_pool -size'.
totalmb18=`/nas/bin/nas_pool -size id=18 | /bin/grep total_mb | /bin/awk '{print $3}'`

# Convert MB to GB, Add Comma as separator in output

# Remove '...b' variables if you don't want commas as a separator

# Convert Used MB to Used GB
usedgb18=`/bin/echo $usedmb18/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
usedgb18b=`/usr/bin/printf "%'.2f\n" "$usedgb18" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Convert Available MB to Available GB
availgb18=`/bin/echo $availmb18/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
availgb18b=`/usr/bin/printf "%'.2f\n" "$availgb18" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Convert Total MB to Total GB
totalgb18=`/bin/echo $totalmb18/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
totalgb18b=`/usr/bin/printf "%'.2f\n" "$totalgb18" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Gather Pool Name, Used MB, Available MB, and Total MB for Second Pool

# Set variable to pull the Name of the pool from the output of 'nas_pool -size'.
name10=`/nas/bin/nas_pool -size id=10 | /bin/grep name | /bin/awk '{print $3}'`

# Set variable to pull the Used MB of the pool from the output of 'nas_pool -size'.
usedmb10=`/nas/bin/nas_pool -size id=10 | /bin/grep used_mb | /bin/awk '{print $3}'`

# Set variable to pull the Available MB of the pool from the output of 'nas_pool -size'.
availmb10=`/nas/bin/nas_pool -size id=10 | /bin/grep avail_mb | /bin/awk '{print $3}'`

# Set variable to pull the Total MB of the pool from the output of 'nas_pool -size'.
totalmb10=`/nas/bin/nas_pool -size id=10 | /bin/grep total_mb | /bin/awk '{print $3}'`
 
# Convert MB to GB, Add Comma as separator in output

# Remove '...b' variables if you don't want commas as a separator
 
# Convert Used MB to Used GB
usedgb10=`/bin/echo $usedmb10/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
usedgb10b=`/usr/bin/printf "%'.2f\n" "$usedgb10" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Convert Available MB to Available GB
availgb10=`/bin/echo $availmb10/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
availgb10b=`/usr/bin/printf "%'.2f\n" "$availgb10" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Convert Total MB to Total GB
totalgb10=`/bin/echo $totalmb10/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
totalgb10b=`/usr/bin/printf "%'.2f\n" "$totalgb10" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Create Output File

# If you don't want the comma separator in the output file, substitute the variable without the 'b' at the end.

# I use the semicolon rather than the comma as a field separator because the comma is already used as the numerical separator.

# The comma could be substituted here if desired.

/bin/echo $TODAY > /scripts/NasPool.txt
/bin/echo "Name" ";" "Total GB" ";" "Used GB" ";" "Avail GB" >> /scripts/NasPool.txt
/bin/echo $name18 ";" $totalgb18b ";" $usedgb18b ";" $availgb18b >> /scripts/NasPool.txt
/bin/echo $name10 ";" $totalgb10b ";" $usedgb10b ";" $availgb10b >> /scripts/NasPool.txt
 Here's what the output looks like:
Wed Jul 17 23:56:29 JST 2013
 Name (Site) ; Total GB ; Used GB ; Avail GB
 NAS_Pool0_SPA ; 4,000 ; 3,356.97 ; 643.03
 NAS_Pool1_SPB ; 3,000 ; 2,634.38 ; 365.62
 I use a cron job to schedule the report daily and copy it to our internal web server.  I then run the csv2html.pl perl script (from http://www.jpsdomain.org/source/perl.html) to convert it to an HTML output file to add to our intranet report page.

Note that I had to modify the csv2html.pl command to accommodate the use of a semicolon instead of the default comma in a csv file.  Here is the command I use to do the conversion:

./csv2htm.pl -e -T -D ";" -i /reports/NasPool.txt -o /reports/NasPool.html
 Below is what the output looks like after running the HTML conversion tool.

[NASPool report screenshot]


Disk space reporting on sub folders on a VNX File CIFS shared file system

I recently had a comment on a different post asking how to report on the size of multiple folders on a single file system, so I thought I'd share the method I use to create reports and alerts on the space that sub folders consume. You can navigate to all of the file systems from the control station: simply go to /nas/quota/slot_<x>, where slot_<x> refers to the data mover (the file systems on server_2 would be in the slot_2 folder).  Because we have access to those folders, we can simply run the standard unix 'du' command to get the amount of used disk space for each sub folder on any file system.

Running this command:

sudo du -h /nas/quota/slot_2/File_System/Sub_Folder

Will give an output that looks like this:

24K     /nas/quota/slot_2/File_System/Sub_Folder/1
0       /nas/quota/slot_2/File_System/Sub_Folder/2
16K     /nas/quota/slot_2/File_System/Sub_Folder

Each sub folder of the file system named "File_System" is listed with the space it uses. You'll notice that I used the sudo command to run du. Unfortunately you need root access in order to run du on the /nas/quota directory, so you'll have to add whatever account you log in to the Celerra with to the /etc/sudoers file. If you don't, you'll get the error "Sorry, user nasadmin is not allowed to execute '/usr/bin/du -h -s /nas/quota/slot_2/File_System' as root on ". I generally log in to the Celerra as nasadmin, so I added that account to sudoers. To modify the file, su to root first and then (using vi) add the following to the very bottom of /etc/sudoers (substitute nasadmin for the username you will be using):

nasadmin ALL=/usr/bin/du, /usr/bin/, /sbin/, /usr/sbin, /nas/bin, /nas/bin/, /nas/sbin, /nas/sbin/, /bin, /bin/
nasadmin ALL=/nas/bin/server_df
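Once the entry is saved, you can optionally confirm that it took effect by listing the commands the account is allowed to run via sudo (this check is my own suggestion, not part of the original workflow):

sudo -l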

Once that is complete, you’ll have access to run the command.  You can now write a script that will report on specific folders for you. I have two scripts created that I’m going to share, one I use to send out a daily email report and the other will send an email only if the folder sizes have crossed a certain threshold of space utilization. I have them both scheduled as a cron job on the Celerra itself.  It should be easy to modify these scripts for anyone’s environment.

The following script will generate a report of folder and sub folder sizes for any given file system. In this example, I am reporting on three specific subfolders on a file system (Production, Development, and Test). I also use the grep and awk options at the beginning to pull out the percent full from the df command for the entire file system and include that in the subject line of the email.

#!/bin/bash
NAS_DB="/nas"
export NAS_DB
TODAY=$(date)
HOST=$(hostname)

PERCENT=`sudo df -h /nas/quota/slot_2/File_System | grep server_2 | awk '{print $5}'`

echo "File_System Folder Size Report Date: $TODAY Host:$HOST" > /home/nasadmin/report/fs_report.txt
echo " " >> /home/nasadmin/report/fs_report.txt
echo "Production" >> /home/nasadmin/report/fs_report.txt
echo " " >> /home/nasadmin/report/fs_report.txt

sudo du -h -S /nas/quota/slot_2/File_System/Prod >> /home/nasadmin/report/fs_report.txt

echo " " >> /home/nasadmin/report/fs_report.txt
echo "Development" >> /home/nasadmin/report/fs_report.txt
echo " " >> /home/nasadmin/report/fs_report.txt

sudo du -h -S /nas/quota/slot_2/File_System/Dev >> /home/nasadmin/report/fs_report.txt

echo " " >> /home/nasadmin/report/fs_report.txt
echo "Test" >> /home/nasadmin/report/fs_report.txt
echo " " >> /home/nasadmin/report/fs_report.txt

sudo du -h -S /nas/quota/slot_2/File_System/Test >> /home/nasadmin/report/fs_report.txt

echo " " >> /home/nasadmin/report/fs_report.txt
echo "% Remaining on File_System Filesystem" >> /home/nasadmin/report/fs_report.txt
echo " " >> /home/nasadmin/report/fs_report.txt

sudo df -h /nas/quota/slot_2/File_System/ >> /home/nasadmin/report/fs_report.txt

echo $PERCENT

mail -s "Folder Size Report ($PERCENT In Use)" youremail@domain.com < /home/nasadmin/report/fs_report.txt

Below is what the output of that script looks like. It is included in the body of the email as plain text.

File_System Folder Size Report
Date: Fri Jun 28 08:00:02 CDT 2013
Host:<celerra_name>

Production

 24K     /nas/quota/slot_2/File_System/Production/1
 0       /nas/quota/slot_2/File_System/Production/2
 16K     /nas/quota/slot_2/File_System/Production

Development

 8.0K    /nas/quota/slot_2/File_System/Development/1
 108G    /nas/quota/slot_2/File_System/Development/2
 0          /nas/quota/slot_2/File_System/Development/3
 16K     /nas/quota/slot_2/File_System/Development

Test

 0       /nas/quota/slot_2/File_System/Test/1
 422G    /nas/quota/slot_2/File_System/Test/2
 0       /nas/quota/slot_2/File_System/Test/3
 16K     /nas/quota/slot_2/File_System/Test

% Remaining on File_System Filesystem

Filesystem Size Used Avail Use% Mounted on
 server_2:/ 2.5T 529G 1.9T 22% /nasmcd/quota/slot_2

The following script will only send an email if the folder size has crossed a defined threshold. Simply change the path to the folder you want to report on and set the 'threshold' variable in the script to whatever you want; I have mine set at 95%.

#!/bin/bash

#Get the % disk utilization for the file system from the df command
percent=`sudo df -h /nas/quota/slot_2/File_System | grep server_2 | awk '{print $5}'`

#Strip the % sign from the output value
percentvalue=`echo $percent | awk -F"%" '{print $1}'`

#Set the critical threshold that will trigger an email
threshold=95

#Compare the threshold value to the reported value, send an email if needed
if [ $percentvalue -eq 0 ] && [ $threshold -eq 0 ]
then
  echo "Both are zero"
elif [ $percentvalue -eq $threshold ]
then
  echo "Both values are equal"
elif [ $percentvalue -gt $threshold ]
then
  mail -s "File_System Critical Disk Space Alert: $percentvalue% utilization is above the threshold of $threshold%" youremail@domain.com < /home/nasadmin/report/fs_report.txt
else
  echo "$percentvalue is less than $threshold"
fi

Automating VNX Storage Processor Percent Utilization Alerts

Note:  The original post describes a method that requires EMC Control Center and Performance Manager.  That tool has been deprecated by EMC in favor of ViPR SRM.  There is still a method you can use to gather CPU information for use in bash scripts. I don't have script examples that use this command, but if anyone needs help send me a comment and I'll help. The Navisphere CLI command to get busy/idle ticks for the storage processors is naviseccli -h <SP hostname> getcontrol -cbt.

The output looks like this:

Controller busy ticks: 1639432
Controller idle ticks: 1773844

The SP utilization statistics in this output are an average of the utilization across all the cores of the SP's processors since the last reset. Getting the actual point-in-time SP CPU utilization from this output requires a calculation. You need to poll twice, create a delta for each counter by subtracting the earlier value from the later one, and apply this formula:

Utilization = Busy Ticks / (Busy Ticks + Idle Ticks)
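Here's a minimal bash sketch of that calculation, based on the getcontrol -cbt output shown above. The SP hostname and polling interval are placeholders you'd substitute for your own environment, and the awk parsing assumes the two "ticks" lines appear exactly as shown:

#!/bin/bash
# Poll the SP twice, compute the delta of busy/idle ticks, and apply
# Utilization = Busy Ticks / (Busy Ticks + Idle Ticks)
SP="<sp_hostname_or_ip>"
INTERVAL=300

read busy1 idle1 < <(naviseccli -h $SP getcontrol -cbt | awk -F: '/busy ticks/ {b=$2} /idle ticks/ {i=$2} END {print b, i}')
sleep $INTERVAL
read busy2 idle2 < <(naviseccli -h $SP getcontrol -cbt | awk -F: '/busy ticks/ {b=$2} /idle ticks/ {i=$2} END {print b, i}')

busydelta=$((busy2 - busy1))
idledelta=$((idle2 - idle1))

# Print the utilization as a percentage
echo "scale=2; 100 * $busydelta / ($busydelta + $idledelta)" | bc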

What follows is the original method I posted, which requires EMC Control Center.

I was tasked with coming up with a way to get email alerts whenever our SP utilization breaks a certain threshold.  Since none of the monitoring tools that we own will do that right now, I had to come up with a way using custom scripts.  This is my second post on the subject; I removed my post from yesterday as it didn't work as I intended.  This time I used EMC's Performance Manager rather than pulling data from the SP with the Navisphere CLI.

First, I'm running all of my bash scripts on a Windows server using cygwin.  They should run fine on any linux box as well, however.  Because I don't have a native sendmail configuration set up on the Windows server, I'm using the control station on the Celerra to actually do the comparison of the utilization numbers in the text files and then email out an alert.  The Celerra control station automatically pulls the files via FTP from the Windows server every 30 minutes and sends out an email alert if the numbers cross the threshold.  A description of each script and the schedule is below.

Windows Server:

Export.cmd:

This first windows batch script runs an export (with pmcli) from EMC Performance Manager that does a dump of all the performance stats for the current day.

For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set date=%%c%%a%%b)

C:\ECC\Client.610\PerformanceManager\pmcli.exe -export -out c:\cygwin\home\scripts\sputil999_interval.csv -type interval -class clariion -date %date% -id APM00400500999

Data.cmd:

This cygwin/bash script manipulates the file exported above and ultimately creates two text files (one for SPA and one for SPB), each containing a single numerical value: the most recent SP utilization.  There are a few extra steps at the beginning of the script that are irrelevant to the SP utilization; they're there for other purposes.

#This will pull only the timestamp line from the top

grep -m 1 "/" /home/scripts/sputil/0999_interval.csv > /home/scripts/sputil/timestamp.csv

# This will pull out only the "disk utilization" line.

grep -i "^% Utilization" /home/scripts/sputil/0999_interval.csv >> /home/scripts/sputil/stats.csv

# This will pull out the disk/LUN title info for the first column

grep -i "Data Collected for DiskStats -" /home/scripts/sputil/0999_interval.csv > /home/scripts/sputil/diskstats.csv

grep -i "Data Collected for LUNStats -" /home/scripts/sputil/0999_interval.csv > /home/scripts/sputil/lunstats.csv

# This will create a column with the disk/LUN number

cat /home/scripts/sputil/diskstats.csv /home/scripts/sputil/lunstats.csv > /home/scripts/sputil/data.csv

# This combines the disk/LUN column with the data column

paste /home/scripts/sputil/data.csv /home/scripts/sputil/stats.csv > /home/scripts/sputil/combined.csv

cp /home/scripts/sputil/combined.csv /home/scripts/sputil/utilstats.csv
 

#  This removes all the temporary files
rm /home/scripts/sputil/timestamp.csv
rm /home/scripts/sputil/stats.csv
rm /home/scripts/sputil/diskstats.csv
rm /home/scripts/sputil/lunstats.csv
rm /home/scripts/sputil/data.csv
rm /home/scripts/sputil/combined.csv

# This next line strips the file of all but the last two rows, which are SP Utilization.

# The 1 looks at the first character in the row, the D specifies "starts with D", then deletes rows meeting those conditions.

awk -v FS="" -v OFS="" '$1 != "D"' < /home/scripts/sputil/utilstats.csv > /home/scripts/sputil/sputil.csv

#This pulls the values from the last column, which would be the most recent.

awk -F, '{print $(NF-1)}' < /home/scripts/sputil/sputil.csv > /home/scripts/sputil/sp_util.csv

#pull 1st line (SPA) into separate file

sed -n 1,1p < /home/scripts/sputil/sp_util.csv > /home/scripts/sputil/spAutil.txt

#pull 2nd line (SPB) into separate file

sed -n 2,2p < /home/scripts/sputil/sp_util.csv > /home/scripts/sputil/spButil.txt

#The spAutil.txt/spButil.txt files now contain only a single numerical value, which would be the most recent %utilization from the Control Center/Performance Manager dump file.

#Copy files to web server root directory

cp /home/scripts/sputil/*.txt /cygdrive/c/inetpub/wwwroot

Celerra Control Station:

CelerraArray:/home/nasadmin/sputil/ftpsp.sh

The script below connects to the windows server and grabs the current SP utilization text files via FTP every 30 minutes (via a cron job).

#!/bin/bash
cd /home/nasadmin/sputil
ftp windows_server.domain.net <<SCRIPT
get spAutil.txt
get spButil.txt
quit
SCRIPT
 CelerraArray:/home/nasadmin/sputil/spcheck.sh:

This script does the comparison check to see if the SP utilization is over our threshold. If it is, it sends an email alert that includes the %Utilization number in the subject line of the email. To change the threshold setting, you'd need to change the THRESHOLD=<XX> line in the script.  The line containing printf "%2.0f" converts the floating point value to an integer, as bash can't compare floating point values directly.
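As an aside, if you'd rather compare the floating point value directly instead of rounding it first, a small sketch using bc would look like this (the values below are made up for illustration):

SPB=52.7
THRESHOLD=50
# bc prints 1 if the comparison is true, 0 if it is false
if [ "$(echo "$SPB > $THRESHOLD" | bc -l)" -eq 1 ]; then
    echo "SPB is above the threshold"
fi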

#!/bin/bash

SPB=`cat /home/nasadmin/sputil/spButil.txt`
printf "%2.0f" $SPB > /home/nasadmin/sputil/spButil2.txt
SPB=`cat /home/nasadmin/sputil/spButil2.txt`

echo $SPB
THRESHOLD=50
if [ $SPB -eq 0 ] && [ $THRESHOLD -eq 0 ]
then
        echo "Both are zero"
elif [ $SPB -eq $THRESHOLD ]
then
        echo "Both values are equal"
elif [ $SPB -gt $THRESHOLD ]
then
        echo "SPB is greater than the threshold.  Sending alert"
        uuencode spButil.txt spButil.txt | mail -s "<array_name> SPB Utilization Alert: $SPB % above threshold of $THRESHOLD %" notify@domain.com
else
        echo "$SPB is less than $THRESHOLD"
fi

CelerraArray Crontab schedule:

The FTP script is currently set to pull the SP utilization files.  Run "crontab -e" to edit the scheduler.  I've got the alert script set to run at the top of the hour and at half past the hour, and the updated SP files are FTP'd from the web server a few minutes prior.

[nasadmin@CelerraArray sputil]$ crontab -l
58,28 * * * * /home/nasadmin/sputil/ftpsp.sh
0,30 * * * * /home/nasadmin/sputil/spcheck.sh
 Overall Scheduling:

Windows Server:

Performance Manager Dump runs 15 minutes past the hour (exports data)
Data script runs at 20 minutes past the hour (processes data to get SP Utilization)

Celerra Server:

FTP script pulls new SP utilization text files at 28 minutes past the hour
Alert script runs at 30 minutes past the hour

The cycle then repeats at minute 45, minute 50, minute 58, and minute 0.

 

Making a case for file archiving

We’ve been investigating options for archiving unstructured (file based) data that resides on our Celerra for a while now. There are many options available, but before looking into a specific solution I was asked to generate a report that showed exactly how much of the data has been accessed by users for the last 60 days and for the last 12 months.  As I don’t have permissions to the shared folders from my workstation I started looking into ways to run the report directly from the Celerra control station.  The method I used will also work on VNX File.

After a little bit of digging I discovered that you can access all of the file systems from the control station by navigating to /nas/quota/slot_<x>.  The slot_2 folder would be for the server_2 data mover, slot_3 would be for server_3, etc.  With full access to all the file systems, I simply needed to write a script that scanned each folder and counted the number of files that had been modified within a certain time window.

I always use Excel for scripts I know are going to be long.  I copy the file system list from Unisphere, put the necessary commands in different columns, and end with a concatenate formula that pulls it all together.  If you put echo -n in A1, "Users_A," in B1, and >/home/nasadmin/scripts/Users_A.dat in C1, you'd just need to type the formula "=CONCATENATE(A1,B1,C1)" into cell D1.  D1 would then contain echo -n "Users_A," > /home/nasadmin/scripts/Users_A.dat. It's a simple and efficient way to make long scripts very quickly; a small shell loop can do the same thing, as shown below.
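If you'd rather stay on the command line, a little loop like this (my own variation, assuming a hypothetical file called fs.list that holds one file system name per line) generates the same echo lines that the Excel method produces:

# Print one "echo -n ..." line per file system, ready to paste into the script
while read -r fs; do
    echo "echo -n \"${fs},\" > /home/nasadmin/scripts/${fs}.dat"
done < fs.list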

In this case, the script needed four different sections.  All four of the sections I'm about to go over were copied into a single shell script and saved in my /home/nasadmin/scripts directory.  After creating the .sh file, I always do a chmod +x and chmod 777 on the file.  Be prepared for this to take a very long time to run.  It of course depends on the number of file systems on your array, but for me this script took about 23 hours to complete.

First, I create a text file for each file system that contains the name of the filesystem (and a comma) which is used later to populate the first column of the final csv output.  It’s of course repeated for each file system.

echo -n "Users_A," > home/nasadmin/scripts/Users_A.dat
echo -n "Users_B," > home/nasadmin/scripts/Users_B.dat

... <continued for each filesystem>
 Second, I use the 'find' command to walk each directory tree and count the number of files that fall outside the reporting window.  The example below uses -mtime +365 to count files not modified in over a year; for the 60-day numbers, swap in -mtime +60.  The output is written to another text file that will be used in the csv output file later.
find /nas/quota/slot_2/Users_A -mtime +365 | wc -l > /home/nasadmin/scripts/Users_A_wc.dat

find /nas/quota/slot_2/Users_B -mtime +365 | wc -l > /home/nasadmin/scripts/Users_B_wc.dat

... <continued for each filesystem>
 Third, I want to count the total number of files in each file system.  A third text file is written with that number, again for the final combined report that’s generated at the end.
find /nas/quota/slot_2/Users_A | wc -l > /home/nasadmin/scripts/Users_A_total.dat

find /nas/quota/slot_2/Users_B | wc -l > /home/nasadmin/scripts/Users_B_total.dat

... <continued for each filesystem>
 Finally, each file is combined into the final report.  The output will show each filesystem with two columns, Total Files & Files Accessed 60 days ago.  You can then easily update the report in Excel and add columns that show files accessed in the last 60 days, the percentage of files accessed in the last 60 days, etc., with some simple math.
cat /home/nasadmin/scripts/Users_A.dat /home/nasadmin/scripts/Users_A_wc.dat /home/nasadmin/scripts/comma.dat /home/nasadmin/scripts/Users_A_total.dat | tr -d "\n" > /home/nasadmin/scripts/fsoutput.csv; echo " " >> /home/nasadmin/scripts/fsoutput.csv

cat /home/nasadmin/scripts/Users_B.dat /home/nasadmin/scripts/Users_B_wc.dat /home/nasadmin/scripts/comma.dat /home/nasadmin/scripts/Users_B_total.dat | tr -d "\n" >> /home/nasadmin/scripts/fsoutput.csv; echo " " >> /home/nasadmin/scripts/fsoutput.csv

... <continued for each filesystem>

My final output looks like this:

Name      Total Files   Accessed 60+ days ago   Accessed in last 60 days   % Accessed in last 60 days
Users_A   827,057       734,848                 92,209                     11.15
Users_B   61,975        54,727                  7,248                      11.70
Users_C   150,166       132,457                 17,709                     11.79

The three example filesystems above show that only about 11% of the files have been accessed in the last 60 days.   Most user data has a very short lifecycle: it's 'hot' for a month or less, then dramatically tapers off as the business value of the data drops.  These file systems would be prime candidates for archiving.

My final report definitely supported the need for archiving, but we've yet to start a project to complete it.  I like the possibility of using EMC's Cloud Tiering Appliance, which can archive data directly to the cloud service of your choice.  I'll make another post in the future about archiving solutions once I've had more time to research them.

Reporting on LUN auto-tier distribution

We have auto-tiering turned on in all of our storage pools, which all use EFD, FC, and SATA disks.  I created a script that will generate a list of all of our LUNs and the current tier distribution for each LUN.  Note that this script is designed to run in unix.  It can be run using cygwin installed on a Windows server if you don’t have access to a unix based server.

You will first need to create a text file with a list of the hostnames for your arrays (or the IP of one of the storage processors for each array).  Separate lists must be made for VNX vs. older Clariion arrays, as the naviseccli output was changed for VNX.  For example, "Flash" in the text output on a CX became "Extreme Performance" in the output from a VNX when you run the same command.  I have one file named san.list for the older arrays, and another named san2.list for the VNX arrays.

As I mentioned in my previous post, our naming convention for LUNs includes the pool ID, LUN number, server name, filesystem/drive letter, last four digits of the array’s serial number, and size (in GB). Having all of this information in the LUN name makes for very easy reporting.  This information is what truly makes this report useful, as simply having a list of LUNs gives me all the information I need for reporting.  If I need to look at tier distribution for a certain server from this report, I simply filter the list in the spreadsheet for the server name (which is included in the LUN name).

Here’s what our LUN names looks like: P1_LUN100_SPA_0000_servername_filesystem_150G

As I said earlier, because of output differences from the naviseccli command on VNX arrays vs. older CX’s, I have two separate scripts.  I’ll include the complete scripts first, then explain in more detail what each section does.

Here is the script for CX series arrays:

for san in `/bin/cat /reports/tiers/san.list`
do
naviseccli -h $san lun -list -tiers |grep LUN |awk '{print $2}' > $san.out 
     for lun in `cat $san.out`
        do
        sleep 2
        echo $san
        naviseccli -h $san -np lun -list -name $lun -tiers > $lun.$san.dat &
     done 

mv $san.report.csv $san.report.`date +%j`.csv 
echo "LUN Name","FLASH","FC","SATA" > $san.report.csv 
     for lun in `cat  $san.out`
        do
        echo $lun
        echo `grep Name $lun.$san.dat |awk '{print $2}'`","`grep -i flash $lun.$san.dat |awk '{print $2}'`","`grep -i fc $lun.$san.dat |awk '{print $2}'`","`grep -i sata $lun.$san.dat |awk '{print $2}'` >> $san.report.csv
     done
 done

./csv2htm.pl -e -T -i /reports/clariion1_hostname.report.csv -o /reports/clariion1_hostname.report.html

./csv2htm.pl -e -T -i /reports/clariion2_hostname.report.csv -o /reports/clariion2_hostname.report.html

./csv2htm.pl -e -T -i /reports/clariion3_hostname.report.csv -o /reports/clariion3_hostname.report.html

Here is the script for VNX series arrays:

for san in `/bin/cat /reports/tiers2/san2.list`
do
naviseccli -h $san lun -list -tiers |grep LUN |awk '{print $2}' > $san.out
   for lun in `cat $san.out`
     do
     sleep 2
     echo $san.Generating-LUN-List
     naviseccli -NoPoll -h $san lun -list -name $lun -tiers > $lun.$san.dat &
  done

mv $san.report.csv $san.report.`date +%j`.csv
echo "LUN Name","FLASH","FC","SATA" > $san.report.csv
   for lun in `cat  $san.out`
      do
      echo $lun
      echo `grep Name $lun.$san.dat |awk '{print $2}'`","`grep -i extreme $lun.$san.dat |awk '{print $3}'`","`grep -i Performance $lun.$san.dat |grep -v Extreme|awk '{print $2}'`","`grep -i Capacity $lun.$san.dat |awk '{print $2}'` >> $san.report.csv
   done
 done

./csv2htm.pl -e -T -i /reports/VNX1_hostname.report.csv -o /reports/VNX1_hostname.report.html

./csv2htm.pl -e -T -i /reports/VNX2_hostname.report.csv -o /reports/VNX2_hostname.report.html

./csv2htm.pl -e -T -i /reports/VNX3_hostname.report.csv -o /reports/VNX3_hostname.report.html
 Here is a more detailed explanation of the script.

Section 1:

The entire script runs in a loop based on the SAN hostname entries.   We’ll use this list in the next section to get the LUN information from each SAN that needs to be monitored.

for san in `/bin/cat /reports/tiers/san.list`

do

naviseccli -h $san lun -list -tiers |grep LUN |awk '{print $2}' > $san.out
 Section 2:

This section will run the naviseccli command for every LUN in each of the <san_hostname>.out files, and output a separate text file with the tier distribution for each LUN.  If you have 500 LUNs, then 500 text files will be created in the same directory that you run the script in.

     for lun in `cat $san.out`
        do
        sleep 2
        echo $san
        naviseccli -h $san -np lun -list -name $lun -tiers > $lun.$san.dat &
     done
 Each file will be named <lun_name>.<san_hostname>.dat, and the contents of the file look like this:
LOGICAL UNIT NUMBER 962
Name:  P1_LUN962_0000_SPB_servername_filesystem_350G
Tier Distribution: 
Flash:  4.74%
FC:  95.26%
 Section 3:

This line simply makes a copy of the previous day’s output file for archiving purposes.  The %j adds the Julian date to the file (which is 1-365, the day of the year), so the files will automatically be overwritten after one year.  It’s a self cleaning archive directory.  🙂

mv $san.report.csv $san.report.`date +%j`.csv

Section 4:

This section then processes each individual LUN file pulling out only the tier information that we need, and then combines the list into one large output file in csv format.

The first line creates a blank CSV file with the appropriate column headers.

echo "LUN Name","FLASH","FC","SATA" > $san.report.csv

This block of code parses each individual LUN file, doing a grep for each item that we need added to the report, and using awk to grab only the specific text that we want from that line.  For example, if the LUN output file has "Flash:  4.74%" in one line and we only want the "4.74%" with the word "Flash:" stripped off, we would do an awk '{print $2}' to grab only the second field on the line.
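You can see the effect with a quick one-off test of that awk expression (illustrative only, using the sample LUN file contents shown above):

echo "Flash:  4.74%" | awk '{print $2}'

The output is just 4.74%.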

     for lun in `cat  $san.out`
        do
        echo $lun
        echo `grep Name $lun.$san.dat |awk '{print $2}'`","`grep -i flash $lun.$san.dat |awk '{print $2}'`","`grep -i fc $lun.$san.dat |awk '{print $2}'`","`grep -i sata $lun.$san.dat |awk '{print $2}'` >> $san.report.csv
     done
done
 Once every LUN file has been processed and added to the report, I run the csv2html.pl perl script (from http://www.jpsdomain.org/source/perl.html) to convert the report to HTML for our intranet website.  The csv files are also added as download links on the site.
./csv2htm.pl -e -T -i /reports/clariion1_hostname.report.csv -o /reports/clariion1_hostname.report.html

./csv2htm.pl -e -T -i /reports/clariion2_hostname.report.csv -o /reports/clariion2_hostname.report.html

./csv2htm.pl -e -T -i /reports/clariion3_hostname.report.csv -o /reports/clariion3_hostname.report.html
 And finally, the output looks like this:
LUN Name                                        FLASH    FC       SATA
P0_LUN101_0000_SPA_servername_filesystem_100G   24.32%   67.57%   8.11%
P0_LUN102_0000_SPA_servername_filesystem_100G   5.92%    58.77%   35.31%
P1_LUN103_0000_SPA_servername_filesystem_100G   7.00%    81.79%   11.20%
P1_LUN104_0000_SPA_servername_filesystem_100G   1.40%    77.20%   21.40%
P0_LUN200_0000_SPA_servername_filesystem_100G   5.77%    75.06%   19.17%
P0_LUN201_0000_SPA_servername_filesystem_100G   6.44%    71.21%   22.35%
P0_LUN202_0000_SPA_servername_filesystem_100G   4.55%    90.91%   4.55%
P0_LUN203_0000_SPA_servername_filesystem_100G   10.73%   80.76%   8.52%
P0_LUN204_0000_SPA_servername_filesystem_100G   8.62%    88.31%   3.08%
P0_LUN205_0000_SPA_servername_filesystem_100G   10.88%   82.65%   6.46%
P0_LUN206_0000_SPA_servername_filesystem_100G   7.00%    81.79%   11.20%
P0_LUN207_0000_SPA_servername_filesystem_100G   1.40%    77.20%   21.40%
P0_LUN208_0000_SPA_servername_filesystem_100G   5.77%    75.06%   19.17%

Reporting on Trespassed LUNs

 

All of our production clariions are configured with two large tiered storage pools, one for LUNs on SPA and one for LUNs on SPB.  When storage is created on a server, two identical LUNs are created (one in each pool) and are striped at the host level.  I do it that way to more evenly balance the load on the storage processors.

I've noticed that LUNs will occasionally trespass to the other SP.  In order to keep the SPs balanced the way I want them, I routinely check and trespass them back to their default owner.  Our naming convention for LUNs includes the SP that the LUN was initially configured to use, as well as the pool ID, server name, filesystem/drive letter, last four digits of the serial number, and size.  Having all of this information in the LUN name makes for very easy reporting.  Having the default SP in the LUN name is required for this script to work as written.

Here’s what our LUN names looks like:     P1_LUN100_SPA_0000_servername_filesystem_150G

To quickly check on the status of any mismatched LUNs every morning, I created a script that generates a daily report.  The script first creates output files that list all of the LUNs on each SP, then uses simple grep commands to output only the LUNs whose SP designation in the name does not match the current owner.   The csv output files are then parsed by the csv2html perl script, which converts the csv into easy to read HTML files that are automatically posted on our intranet web site.  The csv2html perl script is from http://www.jpsdomain.org/source/perl.html and is under a GNU General Public License.  Note that this script is designed to run in unix.  It can be run using cygwin installed on a Windows server if you don’t have access to a unix based server.

Here’s the shell script (I have one for each clariion/VNX):

naviseccli -h clariion_hostname getlun -name -owner |grep -i name > /reports/sp/lunname.out

sleep 5

naviseccli -h clariion_hostname getlun -name -owner |grep -i current >  /reports/sp/currentsp.out

sleep 5

paste -d , /reports/sp/lunname.out /reports/sp/currentsp.out >  /reports/sp/clariion_hostname.spowner.csv

./csv2htm.pl -e -T -i /reports/sp/clariion_hostname.spowner.csv -o /reports/sp/clariion_hostname.spowner.html

#Determine SP mismatches between LUNs and SPs, output to separate files

cat /reports/sp/clariion_hostname.spowner.csv | grep 'SP B' > /reports/sp/clariion_hostname_spb.csv

grep SPA /reports/sp/clariion_hostname_spb.csv > /reports/sp/clariion_hostname_spb_mismatch.csv

cat /reports/sp/clariion_hostname.spowner.csv | grep 'SP A' > /reports/sp/clariion_hostname_spa.csv

grep SPB /reports/sp/clariion_hostname_spa.csv > /reports/sp/clariion_hostname_spa_mismatch.csv

#Convert csv output files to HTML for intranet site

./csv2htm.pl -e -d -T -i /reports/sp/clariion_hostname_spa_mismatch.csv -o /reports/sp/clariion_hostname_spa_mismatch.html

./csv2htm.pl -e -d -T -i /reports/sp/clariion_hostname_spb_mismatch.csv -o /reports/sp/clariion_hostname_spb_mismatch.html
 The output files look like this (clariion_hostname_spa_mismatch.html from the script):

Name: P1_LUN100_SPA_0000_servername_filesystem1_150G      Current Owner: SPB

Name: P1_LUN101_SPA_0000_servername_filesystem2_250G      Current Owner: SPB

Name: P1_LUN102_SPA_0000_servername_filesystem3_350G      Current Owner: SPB

Name: P1_LUN103_SPA_0000_servername_filesystem4_450G      Current Owner: SPB

Name: P1_LUN104_SPA_0000_servername_filesystem5_550G      Current Owner: SPB
 The 0000 represents the last four digits of the serial number of the Clariion.

That’s it, a quick and easy way to report on trespassed LUNs in our environment.

Celerra replication monitoring script

This script allows me to quickly monitor and verify the status of my replication jobs every morning.  It generates a csv file with six columns: file system name, interconnect, estimated completion time, current transfer size, current transfer size remaining, and current write speed.

I recently added two more remote offices to our replication topology and I like to keep a daily tab on how much longer they have to complete the initial seeding, and it will also alert me to any other jobs that are running too long and might need my attention.

Step 1:

Log in to your Celerra and create a directory for the script.  I created a subdirectory called “scripts” under /home/nasadmin.

Create a text file named ‘replfs.list’ that contains a list of your replicated file systems.  You can cut and paste the list out of Unisphere.

The contents of the file should look something like this:

Filesystem01
Filesystem02
Filesystem03
Filesystem04
Filesystem05
 Step 2:

Copy and paste all of the code into a text editor and modify it for your needs (the complete code is at the bottom of this post).  I’ll go through each section here with an explanation.

1: The first section will create a text file ($fs.dat) for each filesystem in the replfs.list file you made earlier.

for fs in `cat replfs.list`
         do
         nas_replicate -info $fs | egrep 'Celerra|Name|Current|Estimated' > $fs.dat
         done
 The output will look like this:
Name                                        = Filesystem_01
Source Current Data Port            = 57471
Current Transfer Size (KB)          = 232173216
Current Transfer Remain (KB)     = 230877216
Estimated Completion Time        = Thu Nov 24 06:06:07 EST 2011
Current Transfer is Full Copy      = Yes
Current Transfer Rate (KB/s)       = 160
Current Read Rate (KB/s)           = 774
Current Write Rate (KB/s)           = 3120
 2: The second section will create a blank csv file with the appropriate column headers:
echo 'Name,System,Estimated Completion Time,Current Transfer Size (KB),Current Transfer Remain (KB),Write Speed (KB)' > replreport.csv

3: The third section will parse all of the output files created by the first section, pulling out only the data that we’re interested in.  It places it in columns in the csv file.

         for fs in `cat replfs.list`

         do

         echo $fs","`grep Celerra $fs.dat | awk '{print $5}'`","`grep -i Estimated $fs.dat |awk '{print $5,$6,$7,$8,$9,$10}'`","`grep -i Size $fs.dat |awk '{print $6}'`","`grep -i Remain $fs.dat |awk '{print $6}'`","`grep -i Write $fs.dat |awk '{print $6}'` >> replreport.csv

        done
 If you're not familiar with awk, I'll give a brief explanation here.  When you grep for a certain line in the output, awk lets you print only one field from that line.

For example, if you want "Yes" placed into a column in the csv file, but the output line looks like "Current Transfer is Full Copy      = Yes", then you could pull out only the "Yes" by typing in the following:

 nas_replicate -info Filesystem01 | grep  Full | awk '{print $7}'

Because the word ‘Yes’ is the 7th item in the line, the output would only contain the word Yes.

4: The final section will send an email with the csv output file attached.

uuencode replreport.csv replreport.csv | mail -s "Replication Status Report" user@domain.com

Step 3:

Copy and paste the modified code into a script file and save it.  I have mine saved in the /home/nasadmin/scripts folder. Once the file is created, make it executable by typing chmod +x scriptfile.sh, and change the permissions with chmod 755 scriptfile.sh.

Step 4:

You can now add the file to crontab to run automatically.  Add it to cron by typing crontab -e; to view your crontab entries, type crontab -l.  For details on how to add cron entries, do a Google search, as there is a wealth of info available on your options.  A sample entry is shown below.
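As an example, an entry like this (the time and the script name are just placeholders for whatever you chose in Step 3) would run the report every morning at 6:00:

0 6 * * * /home/nasadmin/scripts/scriptfile.sh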

Script Code:

for fs in `cat replfs.list`

         do

         nas_replicate -info $fs | egrep 'Celerra|Name|Current|Estimated' > $fs.dat

        done

 echo 'Name,System,Estimated Completion Time,Current Transfer Size (KB),Current Transfer Remain (KB),Write Speed (KB)' > replreport.csv

         for fs in `cat replfs.list`

         do

         echo $fs","`grep Celerra $fs.dat | awk '{print $5}'`","`grep -i Estimated $fs.dat |awk '{print $5,$6,$7,$8,$9,$10}'`","`grep -i Size $fs.dat |awk '{print $6}'`","`grep -i Remain $fs.dat |awk '{print $6}'`","`grep -i Write $fs.dat |awk '{print $6}'` >> replreport.csv

         done

 uuencode replreport.csv replreport.csv | mail -s "Replication Status Report" user@domain.com
 The final output of the script generates a report that looks like the sample below.  Filesystems that have all zeros and no estimated completion time are caught up and not currently performing a data synchronization.
Name                 System       Estimated Completion Time      Current Transfer Size (KB)   Current Transfer Remain (KB)   Write Speed (KB)
SA2Users_03          SA2VNX5500                                  0                            0                              0
SA2Users_02          SA2VNX5500   Wed Dec 16 01:16:04 EST 2011   211708152                    41788152                       2982
SA2Users_01          SA2VNX5500   Wed Dec 16 18:53:32 EST 2011   229431488                    59655488                       3425
SA2CommonFiles_04    SA2VNX5500                                  0                            0                              0
SA2CommonFiles_03    SA2VNX5500   Wed Dec 16 10:35:06 EST 2011   232173216                    53853216                       3105
SA2CommonFiles_02    SA2VNX5500   Mon Dec 14 15:46:33 EST 2011   56343592                     12807592                       2365
SA2commonFiles_01    SA2VNX5500                                  0                            0                              0

Use the CLI to determine replication job throughput

This handy command will allow you to determine exactly how much bandwidth you are using for your Celerra replication jobs.

Run this command first, it will generate a file with the stats for all of your replication jobs:

nas_replicate -info -all > /tmp/rep.out

Run this command next:

grep "Current Transfer Rate" /tmp/rep.out |grep -v "= 0"

The output looks like this:

Current Transfer Rate (KB/s)   = 196
 Current Transfer Rate (KB/s)   = 104
 Current Transfer Rate (KB/s)   = 91
 Current Transfer Rate (KB/s)   = 90
 Current Transfer Rate (KB/s)   = 91
 Current Transfer Rate (KB/s)   = 88
 Current Transfer Rate (KB/s)   = 94
 Current Transfer Rate (KB/s)   = 89
 Current Transfer Rate (KB/s)   = 112
 Current Transfer Rate (KB/s)   = 108
 Current Transfer Rate (KB/s)   = 91
 Current Transfer Rate (KB/s)   = 117
 Current Transfer Rate (KB/s)   = 118
 Current Transfer Rate (KB/s)   = 119
 Current Transfer Rate (KB/s)   = 112
 Current Transfer Rate (KB/s)   = 27
 Current Transfer Rate (KB/s)   = 136
 Current Transfer Rate (KB/s)   = 117
 Current Transfer Rate (KB/s)   = 242
 Current Transfer Rate (KB/s)   = 77
 Current Transfer Rate (KB/s)   = 218
 Current Transfer Rate (KB/s)   = 285
 Current Transfer Rate (KB/s)   = 287
 Current Transfer Rate (KB/s)   = 184
 Current Transfer Rate (KB/s)   = 224
 Current Transfer Rate (KB/s)   = 82
 Current Transfer Rate (KB/s)   = 324
 Current Transfer Rate (KB/s)   = 210
 Current Transfer Rate (KB/s)   = 328
 Current Transfer Rate (KB/s)   = 156
 Current Transfer Rate (KB/s)   = 156

Each line represents the throughput for one of your replication jobs.  Adding all of those numbers up will give you the amount of bandwidth you are consuming.  In this case, I’m using about 4.56MB/s on my 100MB link.
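If you don't want to add the numbers up by hand, a quick awk one-liner (my own addition, run against the same rep.out file) will total them and convert the result to MB/s:

grep "Current Transfer Rate" /tmp/rep.out | grep -v "= 0" | awk '{sum += $NF} END {printf "%.2f MB/s\n", sum/1024}'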

This same technique can of course be applied to any part of the output file.  If you want to know the estimated completion date of each of your replication jobs, you’d run this command against the rep.out file:

grep "Estimated Completion Time" /tmp/rep.out

That will give you a list of dates, like this:

Estimated Completion Time      = Fri Jul 15 02:12:53 EDT 2011
 Estimated Completion Time      = Fri Jul 15 08:06:33 EDT 2011
 Estimated Completion Time      = Mon Jul 18 18:35:37 EDT 2011
 Estimated Completion Time      = Wed Jul 13 15:24:03 EDT 2011
 Estimated Completion Time      = Sun Jul 24 05:35:35 EDT 2011
 Estimated Completion Time      = Tue Jul 19 16:35:25 EDT 2011
 Estimated Completion Time      = Fri Jul 15 12:10:25 EDT 2011
 Estimated Completion Time      = Sun Jul 17 16:47:31 EDT 2011
 Estimated Completion Time      = Tue Aug 30 00:30:54 EDT 2011
 Estimated Completion Time      = Sun Jul 31 03:23:08 EDT 2011
 Estimated Completion Time      = Thu Jul 14 08:12:25 EDT 2011
 Estimated Completion Time      = Thu Jul 14 20:01:55 EDT 2011
 Estimated Completion Time      = Sun Jul 31 05:19:26 EDT 2011
 Estimated Completion Time      = Thu Jul 14 17:12:41 EDT 2011

Very useful stuff. 🙂

 

Use the CLI to quickly determine the size of your Celerra checkpoint filesystems

Need to quickly figure out which checkpoint filesystems are taking up all of your precious savvol space?  Run the CLI command below.  Filling up the savvol storage pool can cause all kinds of problems besides failing checkpoints.  It can also cause filesystem replication jobs to fail.

To view it on the screen:

nas_fs -query:IsRoot==False:TypeNumeric==1 -format:'%s\n%q' -fields:Name,Checkpoints -query:TypeNumeric==7 -format:'   %40s : %5d : %s\n' -fields:Name,ID,Size

To save it in a file:

nas_fs -query:IsRoot==False:TypeNumeric==1 -format:'%s\n%q' -fields:Name,Checkpoints -query:TypeNumeric==7 -format:'   %40s : %5d : %s\n' -fields:Name,ID,Size > checkpoints.txt

vi checkpoints.txt   (to view the file)

Here’s a sample of the output:

UserFilesystem_01
ckpt_ckpt_UserFilesystem_01_monthly_001 :   836 : 220000
ckpt_ckpt_UserFilesystem_01_monthly_002 :   649 : 220000

UserFilesystem_02
ckpt_ckpt_UserFilesystem_02_monthly_001 :   836 : 80000
ckpt_ckpt_UserFilesystem_02_monthly_002 :   649 : 80000

The last number on each line is the checkpoint size in MB (the middle number is the checkpoint ID).
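If you saved the output to checkpoints.txt, a simple sort (my own follow-up, relying on the " : " separators in the format string above) will list the largest checkpoints first:

grep " : " checkpoints.txt | sort -t: -k3 -nr | head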

 

Auto generating daily performance graphs with EMC Control Center / Performance Manager

This document describes the process I used to pull performance data using the ECC pmcli command line tool, parse the data to make it more usable with a graphing tool, and then use perl scripts to automatically generate graphs.

You must install Perl.  I use ActiveState Perl (Free Community Edition) (http://www.activestate.com/activeperl/downloads).

You must install Cygwin.  Link: http://www.cygwin.com/install.html. I generally choose all packages.

I use the following CPAN Perl modules:

Step 1:

Once you have the software set up, the first step is to use the ECC command line utility to extract the interval performance data that you’re interested in graphing.  Below is a sample PMCLI command line script that could be used for this purpose.

:Get the current date

For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set date=%%c%%a%%b)

:Export the interval file for today’s date.

D:\ECC\Client.610\PerformanceManager\pmcli.exe -export -out D:\archive\interval.csv -type interval -class clariion -date %date% -id APM00324532111

:Copy all the export data to my cygwin home directory for processing later.

copy /y e:\san712_interval.csv C:\cygwin\home\<userid>

You can schedule the command script above to run using windows task scheduler.  I run it at 11:46PM every night, as data is collected on our SAN in 15 minute intervals, and that gives me a file that reports all the way up to the end of one calendar day.

Note that there are 95 data collection points from 00:15 to 23:45 every day if you collect data at 15 minute intervals.  The storage processor data resides in the last two lines of the output file.

Here is what the output file looks like:

EMC ControlCenter Performance manager generated file from: <path>

Data Collected for DiskStats

Data Collected for DiskStats - 0_0_0

                                          3/28/11 00:15   3/28/11 00:30   3/28/11 00:45   3/28/11 01:00

Number of Arrivals with Non Zero Queue    12              20              23              23
% Utilization                             30.2            33.3            40.4            60.3
Response Time                             1.8             3.3             5.4             7.8
Read Throughput IO per sec                80.6            13.33           90.4            10.3

Great information in there, but the format of the data makes it very hard to do anything meaningful with it in an excel chart.  If I want to chart only % utilization, that data is difficult to isolate because there are so many other counters around it that also have data collected on them.   My next goal was to write a script to reformat the data into a much more usable format and automatically create a graph for one specific counter that I'm interested in (like daily utilization numbers), which could then be emailed daily or auto-uploaded to an internal website.

Step 2:

Once the PMCLI data is exported, the next step is to use cygwin bash scripts to parse the csv file and pull out only the performance data that is needed.  Each SAN will need a separate script for each type of performance data.  I have four scripts configured to run based on the data that I want to monitor.  The scripts are located in my cygwin home directory.

The scripts I use:

  • Iostats.sh (for total IO throughput)
  • Queuestats.sh (for disk queue length)
  • Resptime.sh (for disk response time in ms)
  • Utilstats.sh (for % utilization)

Here is a sample shell script for parsing the CSV export file (iostats.sh):

#!/usr/bin/bash

#This will pull only the timestamp line from the top of the CSV output file. I’ll paste it back in later.

grep -m 1 "/" interval.csv > timestamp.csv

#This will pull out only lines that begin with “total througput io per sec”.

grep -i "^Total Throughput IO per sec" interval.csv >> stats.csv

#This will pull out the disk/LUN title info for the first column.  I’ll add this back in later.

grep -i "Data Collected for DiskStats -" interval.csv > diskstats.csv

grep -i "Data Collected for LUNStats -" interval.csv > lunstats.csv

#This will create a column with the disk/LUN number .  I’ll paste it into the first column later.

cat diskstats.csv lunstats.csv > data.csv

#This adds the first column (disk/LUN) and combines it with the actual performance data columns.

paste data.csv stats.csv > combined.csv

#This combines the timestamp header at the top with the combined file from the previous step to create the final file we’ll use for the graph.  There is also a step to append the current date and copy the csv file to an archive directory.

cat timestamp.csv combined.csv > iostats.csv

cp iostats.csv /cygdrive/e/SAN/csv_archive/iostats_archive_$(date +%y%m%d).csv

#  This removes all the temporary files created earlier in the script.  They’re no longer needed.

rm timestamp.csv

rm stats.csv

rm diskstats.csv

rm lunstats.csv

rm data.csv

rm combined.csv

#This strips the last two lines of the CSV (Storage Processor data).  The resulting file is used for the "all disks" spreadsheet.  We don't want the SP data to skew the graph.  This CSV file is also copied to the archive directory.

sed '$d' < iostats.csv > iostats2.csv

sed '$d' < iostats2.csv > iostats_disk.csv

rm iostats2.csv

cp iostats_disk.csv /cygdrive/e/SAN/csv_archive/iostats_disk_archive_$(date +%y%m%d).csv

Note: The shell script above can be run in the windows task scheduler as long as you have cygwin installed.  Here’s the syntax:

c:\cygwin\bin\bash.exe -l -c "/home/<username>/iostats.sh"

After running the shell script above, the resulting CSV file contains only Total Throughput (IO per sec) data for each disk and lun.  It will contain data from 00:15 to 23:45 in 15 minute increments.  After the cygwin scripts have run we will have csv datasets that are ready to be exported to a graph.

The Disk and LUN stats are combined into the same CSV file.  It is entirely possible to rewrite the script to only have one or the other.  I put them both in there to make it easier to manually create a graph in excel for either disk or lun stats at a later time (if necessary).  The “all disks graph” does not look any different with both disk and lun stats in there, I tried it both ways and they overlap in a way that makes the extra data indistinguishable in the image.

The resulting data output after running the iostats.sh script is shown below.  I now have a nice, neat excel spreadsheet that lists the total throughput for each disk in the array for the entire day in 15 minute increments.   Having the data formatted in this way makes it super easy to create charts.  But I don’t want to have to do that manually every day, I want the charts to be created automatically.

                                        3/28/11 00:15   3/28/11 00:30   3/28/11 00:45   3/28/11 01:00

Total Throughput IO per sec - 0_0_0     12              20              23              23
Total Throughput IO per sec - 0_0_1     30.12           33.23           40.4            60.23
Total Throughput IO per sec - 0_0_2     1.82            3.3             5.4             7.8
Total Throughput IO per sec - 0_0_3     80.62           13.33           90.4            10.3

Step 3:

Now I want to automatically create the graphs every day using a Perl script.  After the CSV files are exported to a more usable format from the previous step, I Use the GD::Graph library from CPAN (http://search.cpan.org/~mverb/GDGraph-1.43/Graph.pm) to auto-generate the graphs.

Below is a sample Perl script that will autogenerate a great looking graph based on the CSV output file from the previous step.

#!/usr/bin/perl

#Declare the libraries that will be used.

use strict;

use Text::ParseWords;

use GD::Graph::lines;

use Data::Dumper;

#Specify the csv file that will be used to create the graph

my $file = 'C:\cygwin\home\<username>\iostats_disk.csv';

#my $file  = $ARGV[0];

my ($output_file) = ($file =~/(.*)\./);

#Create the arrays for the data and the legends

my @data;

my @legends;

#parse csv, generate an error if it fails

open(my $fh, '<', $file) or die "Can't read csv file '$file' [$!]\n";

my $countlines = 0;

while (my $line = <$fh>) {

chomp $line;

my @fields = Text::ParseWords::parse_line(',', 0, $line);

#There are 95 fields generated to correspond to the 95 data collection points in each of the output files.
#Fields 1 through 95 hold the data points; field 0 is the disk/LUN name, which is used for the legend below.

my @field = @fields[1..95];
push @data, \@field;

if($countlines >= 1){

push @legends, @fields[0];

}

$countlines++;

}

#The data and legend arrays will read 820 lines of the CSV file.  This number will change based on the number of disks in the SAN, and will be different depending on the SAN being reported on.  The legend info will read the first column of the spreadsheet and create a color box that corresponds to the graph line.  For the purpose of this graph, I won’t be using it because 820+ legend entries look like a mess on the screen.

splice @data, 1, -820;

splice @legends, 0, -820;

#Set Graphing Options

my $mygraph = GD::Graph::lines->new(1024, 768);

# There are many graph options that can be changed using the GD::Graph library.  Check the website (and google) for lots of examples.

$mygraph->set(

title => ‘SP IO Utilization (00:15 – 23:45)’,

y_label => ‘IOs Per Second’,

y_tick_number => 4,

values_vertical => 6,

show_values => 0,

x_label_skip => 3,

) or warn $mygraph->error;

#As I said earlier, because of the large number of legend entries for this type of graph, I change the legend to simply read “All disks”.  If you want the legend to actually put the correct entries and colors, use this line instead:  $mygraph->set_legend(@legends);

$mygraph->set_legend(‘All Disks’);

#Plot the data

my $myimage = $mygraph->plot(\@data) or die $mygraph->error;

# Export the graph as a gif image.  The images are currently moved to the IIS folder (c:\inetpub\wwwroot) with one of the scripts.  The could also be emailed using a sendmail utility.

my $format = $mygraph->export_format;

open(IMG,”>$output_file.$format”) or die $!;

binmode IMG;

print IMG $myimage->gif;

close IMG;

After this script runs, the resulting image file will be saved in the cygwin home directory (it's saved in the same directory the CSV file is located in).  One of the nightly scripts I run copies the image to our internal IIS server's image directory, and sendmail emails the graph to the SAN admin team.
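
If you're curious what that nightly step looks like, here's a minimal sketch of the copy-and-email portion.  It assumes a cygwin bash environment with uuencode (from sharutils) and mailx installed; the file name, IIS path, and email address are hypothetical placeholders you'd need to adjust for your own environment.

#!/bin/bash
# Minimal sketch (hypothetical paths/addresses): copy the generated graph
# to the IIS image folder and email it to the SAN admin team.
GRAPH="$HOME/iostats_disk.gif"                  # image created by the Perl script
IIS_DIR="/cygdrive/c/inetpub/wwwroot/images"    # intranet web server image folder

# Copy the image so the intranet page can display it.
cp "$GRAPH" "$IIS_DIR/"

# Email the graph as a uuencoded attachment.
uuencode "$GRAPH" "$(basename "$GRAPH")" | mailx -s "Daily SAN IO graph" san-admins@example.com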

That’s it!  You now have lots of pretty graphs with which you can impress your management team. 🙂

Here is a sample graph that was generated with the Perl script:

Reporting on Soft media errors

Ah, soft media errors.  The silent killer.  We had an issue with one of our Clariion LUNs that had many uncorrectable sector errors.  Prior to the LUN failure, there were hundreds of soft media errors reported in the Navisphere logs.  Why weren't we alerted about them?  Beats me.  I created my own script to pull and parse the alert logs so I can manually check for this type of error.

What exactly is a soft media error?  Soft media errors indicate that the SAN has identified a bad sector on the disk and is reconstructing the data from RAID parity in order to fulfill the read request.  They can indicate a failing disk.

To run a report that pulls only soft media errors from the SP log, put the following in a Windows batch file:

naviseccli -h <SP IP Address> getlog >textfile.txt

for /f "tokens=1,2,3,4,5,6,7,8,9,10,11,12,13,14" %%i in ('findstr Soft textfile.txt') do (echo %%i %%j %%k %%l %%m %%n %%o %%p %%q %%r %%s %%t %%u %%v)  >>textfile_mediaerrors.txt

The text file output looks like this:

10/25/2010 19:40:17 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
10/25/2010 19:40:22 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
10/25/2010 19:40:22 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
10/25/2010 19:40:27 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
10/25/2010 19:40:27 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
10/25/2010 19:40:33 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
10/25/2010 19:40:33 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
10/25/2010 19:40:38 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
10/25/2010 19:40:38 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
10/25/2010 19:40:44 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
10/25/2010 19:40:44 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
10/25/2010 19:40:49 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
10/25/2010 19:40:49 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5

If you see lots of soft media errors, do yourself a favor and open a case with EMC.  Too many can lead to the failure of one of your LUNs.

The script can be automated to run and send an email with daily alerts, if you so choose.  I just run it manually about once a week for review.
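
If you do want to automate it, here's a rough sketch of the idea from a bash/cygwin host.  It assumes naviseccli, grep, and mailx are in the path; the SP address and recipient are hypothetical placeholders.

#!/bin/bash
# Rough sketch (hypothetical address/recipient): pull the SP log, keep only
# the soft media errors, and email them if any were found.
SP_IP="<SP IP Address>"
LOG=/tmp/sp_getlog.txt
ERRORS=/tmp/sp_softmedia.txt

naviseccli -h "$SP_IP" getlog > "$LOG"
grep "Soft Media" "$LOG" > "$ERRORS"

# Only send mail if at least one soft media error was logged.
if [ -s "$ERRORS" ]; then
    mailx -s "Soft media errors reported on $SP_IP" san-admins@example.com < "$ERRORS"
fi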

Tiering reports for EMC’s FAST VP

Note: On a separate blog post, I shared a script to generate a report of the tiering status of all LUNs.

One of the items that EMC did not implement along with FAST VP is the ability to run a canned report on how your LUNs are allocated among the different tiers of storage.  While there is no canned report, it is possible to get this information from the CLI.

The naviseccli -h {SP IP or hostname} lun -list -tiers command fits the bill.  It shows how a specific LUN is distributed across the different drive types.  I still need to come up with a script to pull out only the information that I want (a rough sketch follows the sample output below), but the info is definitely in the command's output.

Here’s the sample output:

LOGICAL UNIT NUMBER 6
 Name:  LUN 6
 Tier Distribution:
 Flash:  13.83%
 FC:  86.17%
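
Until I write a proper report, a quick awk filter can strip that output down to just the LUN number and its tier percentages.  This is only a sketch based on the field positions in the sample above; the SP address is a placeholder and it assumes the tier names are Flash, FC, and SATA.

naviseccli -h "<SP IP or hostname>" lun -list -tiers | awk '
    /LOGICAL UNIT NUMBER/ { lun = $NF }                 # remember the current LUN number
    /Flash:|FC:|SATA:/    { print lun "," $1 " " $2 }   # e.g. 6,Flash: 13.83%
'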

The storagepool report gives some good info as well.  Here's an excerpt of what you see with the naviseccli -h {SP IP or hostname} storagepool -list -tiers command:

SPA

Tier Name:  Flash
 Raid Type:  r_5
 User Capacity (GBs):  1096.07
 Consumed Capacity (GBs):  987.06
 Available Capacity (GBs):  109.01
 Percent Subscribed:  90.05%
 Data Targeted for Higher Tier (GBs):  0.00
 Data Targeted for Lower Tier (GBs):  11.00

Tier Name:  FC
 Raid Type:  r_5
 User Capacity (GBs):  28981.77
 Consumed Capacity (GBs):  10592.65
 Available Capacity (GBs):  18389.12
 Percent Subscribed:  36.55%

Tier Name:  SATA
 Raid Type:  r_5
 User Capacity (GBs):  11004.67
 Consumed Capacity (GBs):  260.02
 Available Capacity (GBs):  10744.66
 Percent Subscribed:  2.36%
 Data Targeted for Higher Tier (GBs):  3.00
 Data Targeted for Lower Tier (GBs):  0.00
 Disks (Type):

SPB

Tier Name:  Flash
 Raid Type:  r_5
 User Capacity (GBs):  1096.07
 Consumed Capacity (GBs):  987.06
 Available Capacity (GBs):  109.01
 Percent Subscribed:  90.05%
 Data Targeted for Higher Tier (GBs):  0.00
 Data Targeted for Lower Tier (GBs):  25.00

Tier Name:  FC
 Raid Type:  r_5
 User Capacity (GBs):  28981.77
 Consumed Capacity (GBs):  10013.61
 Available Capacity (GBs):  18968.16
 Percent Subscribed:  34.55%
 Data Targeted for Higher Tier (GBs):  25.00
 Data Targeted for Lower Tier (GBs):  0.00

Tier Name:  SATA
 Raid Type:  r_5
 User Capacity (GBs):  11004.67
 Consumed Capacity (GBs):  341.02
 Available Capacity (GBs):  10663.65
 Percent Subscribed:  3.10%
 Data Targeted for Higher Tier (GBs):  20.00
 Data Targeted for Lower Tier (GBs):  0.00

Good stuff in there.   It’s on my to-do list to run these commands periodically, and then parse the output to filter out only what I want to see.  Once I get that done I’ll post the script here too.
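
In the meantime, here's a rough sketch of the kind of parsing I have in mind: it reduces the per-tier capacity figures to simple CSV rows that could be graphed or added to the intranet page.  The SP address is a placeholder, and the field splitting is based on the sample output above (the SPA/SPB headings are ignored in this sketch).

#!/bin/bash
# Rough sketch (hypothetical address): reduce 'storagepool -list -tiers'
# output to tier,user_gb,consumed_gb,available_gb CSV rows.
SP_IP="<SP IP or hostname>"

naviseccli -h "$SP_IP" storagepool -list -tiers | awk -F':[ ]+' '
    /Tier Name/          { tier = $2 }
    /User Capacity/      { user = $2 }
    /Consumed Capacity/  { cons = $2 }
    /Available Capacity/ { printf "%s,%s,%s,%s\n", tier, user, cons, $2 }
'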

Note: I did create and post a script to generate a report of the tiering status of all LUNs.