Reporting on the Estimated Job Completion Times for FAST VP Data Relocation

One of the many daily reports I run shows the current estimated time remaining for the FAST VP data relocation jobs on all of our arrays.  I also keep a single backup copy of the report showing the previous day's times, so I can get a quick view of the progress made over the last 24 hours.  Both reports are presented side by side on my intranet report page for easy comparison.

I made a post last year about how to deal with long-running FAST VP data relocation jobs (http://emcsan.wordpress.com/2012/01/18/long-running-fast-vp-relocation-job/), and this report helps identify any arrays that could be falling behind.  If the estimated completion time is longer than the window you have defined for the relocation job, you may need to make some changes; see my previous post for more information.

You can get the current status of the data relocation job at any time by running the following command:

naviseccli -h [array_hostname] autotiering -info -state -rate -schedule -opStatus -poolID [Pool_ID_Number]
 

The output looks like this:

Auto-Tiering State:  Enabled
Relocation Rate:  Medium
 
Schedule Name:  Default Schedule
Schedule State:  Enabled
Default Schedule:  Yes
Schedule Days:  Sun Mon Tue Wed Thu Fri Sat
Schedule Start Time:  20:00
Schedule Stop Time:  6:00
Schedule Duration:  10 hours
Storage Pools:  Array1_Pool1_SPB, Array1_Pool0_SPA
 
Storage Pool Name:  Array1_Pool0_SPA
Storage Pool ID:  0
Relocation Start Time:  08/15/13 20:00
Relocation Stop Time:  08/16/13 6:00
Relocation Status:  Inactive
Relocation Type:  Scheduled
Relocation Rate:  Medium
Data to Move Up (GBs):  8.00
Data to Move Down (GBs):  8.00
Data Movement Completed (GBs):  2171.00
Estimated Time to Complete:  4 minutes
Schedule Duration Remaining:  None
 
Storage Pool Name:  Array1_Pool1_SPB
Storage Pool ID:  1
Relocation Start Time:  08/15/13 20:00
Relocation Stop Time:  08/16/13 6:00
Relocation Status:  Inactive
Relocation Type:  Scheduled
Relocation Rate:  Medium
Data to Move Up (GBs):  14.00
Data to Move Down (GBs):  14.00
Data Movement Completed (GBs):  1797.00
Estimated Time to Complete:  5 minutes
Schedule Duration Remaining:  None
 

The output of the command is very verbose; I only want the pool name and the estimated time for the relocation job to complete.  The bash script below trims the output down to just those two items.

The final output of the script-generated report looks like this:

Runtime: Thu Aug 11 07:00:01 CDT 2013
Array1_Pool0:  9 minutes
Array1_Pool1:  6 minutes
Array2_Pool0:  1 hour, 47 minutes
Array2_Pool1:  3 minutes
Array2_Pool2:  2 days, 7 hours, 25 minutes
Array2_Pool3:  1 day, 9 hours, 58 minutes
Array3_Pool0:  1 minute
Array4_Pool0:  N/A
Array4_Pool1:  2 minutes
Array5_Pool1:  5 minutes
Array5_Pool0:  5 minutes
Array6_Pool0:  N/A
Array6_Pool1:  N/A

 

Below is the bash script that generates the report. It's set up to report on six different arrays, but it can easily be modified to suit your environment.

#!/bin/bash
TODAY=$(date)
echo "Runtime: $TODAY" > /reports/tierstatus.txt
echo $TODAY
#
naviseccli -h [array_hostname1] autotiering -info -state -rate -schedule -opStatus -poolID 0 > /reports/[array_hostname1]_tierstatus0.out
naviseccli -h [array_hostname1] autotiering -info -state -rate -schedule -opStatus -poolID 1 > /reports/[array_hostname1]_tierstatus1.out
#
echo `grep "Pool Name:" /reports/[array_hostname1]_tierstatus0.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname1]_tierstatus0.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname1]_tierstatus1.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname1]_tierstatus1.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
#
naviseccli -h [array_hostname2] autotiering -info -state -rate -schedule -opStatus -poolID 0 > /reports/[array_hostname2]_tierstatus0.out
naviseccli -h [array_hostname2] autotiering -info -state -rate -schedule -opStatus -poolID 1 > /reports/[array_hostname2]_tierstatus1.out
naviseccli -h [array_hostname2] autotiering -info -state -rate -schedule -opStatus -poolID 2 > /reports/[array_hostname2]_tierstatus2.out
naviseccli -h [array_hostname2] autotiering -info -state -rate -schedule -opStatus -poolID 3 > /reports/[array_hostname2]_tierstatus3.out
#
echo `grep "Pool Name:" /reports/[array_hostname2]_tierstatus0.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname2]_tierstatus0.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname2]_tierstatus1.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname2]_tierstatus1.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname2]_tierstatus2.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname2]_tierstatus2.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname2]_tierstatus3.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname2]_tierstatus3.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
#
naviseccli -h [array_hostname3] autotiering -info -state -rate -schedule -opStatus -poolID 0 > /reports/[array_hostname3]_tierstatus0.out
#
echo `grep "Pool Name:" /reports/[array_hostname3]_tierstatus0.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname3]_tierstatus0.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
#
naviseccli -h [array_hostname4] autotiering -info -state -rate -schedule -opStatus -poolID 0 > /reports/[array_hostname4]_tierstatus0.out
naviseccli -h [array_hostname4] autotiering -info -state -rate -schedule -opStatus -poolID 1 > /reports/[array_hostname4]_tierstatus1.out
#
echo `grep "Pool Name:" /reports/[array_hostname4]_tierstatus0.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname4]_tierstatus0.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname4]_tierstatus1.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname4]_tierstatus1.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
#
naviseccli -h [array_hostname5] autotiering -info -state -rate -schedule -opStatus -poolID 0 > /reports/[array_hostname5]_tierstatus0.out
naviseccli -h [array_hostname5] autotiering -info -state -rate -schedule -opStatus -poolID 1 > /reports/[array_hostname5]_tierstatus1.out
#
echo `grep "Pool Name:" /reports/[array_hostname5]_tierstatus0.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname5]_tierstatus0.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname5]_tierstatus1.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname5]_tierstatus1.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
#
naviseccli -h [array_hostname6] autotiering -info -state -rate -schedule -opStatus -poolID 0 > /reports/[array_hostname6]_tierstatus0.out
naviseccli -h [array_hostname6] autotiering -info -state -rate -schedule -opStatus -poolID 1 > /reports/[array_hostname6]_tierstatus1.out
#
echo `grep "Pool Name:" /reports/[array_hostname6]_tierstatus0.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname6]_tierstatus0.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname6]_tierstatus1.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname6]_tierstatus1.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
#
#Copy the current report to a new file and rename it; one prior day is saved.
cp /cygdrive/c/inetpub/wwwroot/tierstatus.txt /cygdrive/c/inetpub/wwwroot/tierstatus_yesterday.txt
#Remove the current report on the web page.
rm /cygdrive/c/inetpub/wwwroot/tierstatus.txt
#Copy the new report to the web page.
cp /reports/tierstatus.txt /cygdrive/c/inetpub/wwwroot
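The per-array blocks above can also be collapsed into a loop. This is a minimal sketch rather than the report script itself: the "hostname:poolcount" pairs in ARRAYS are placeholders, and it assumes naviseccli is in the PATH and /reports exists.

```shell
#!/bin/bash
# Sketch of a loop-based version of the report; array names and pool counts
# in ARRAYS are placeholders for your environment.
pool_eta() {    # pool_eta <array_hostname> <pool_id>
  local out
  out=$(naviseccli -h "$1" autotiering -info -state -rate -schedule -opStatus -poolID "$2")
  # Pull the pool name and the estimated completion time out of the output.
  local name eta
  name=$(printf '%s\n' "$out" | awk -F': *' '/Storage Pool Name/ {print $2; exit}')
  eta=$(printf '%s\n' "$out" | awk -F': *' '/Estimated Time to Complete/ {print $2; exit}')
  printf '%s:  %s\n' "$name" "$eta"
}

ARRAYS="array_hostname1:2 array_hostname2:4 array_hostname3:1"
# Only run the full report when naviseccli is actually available.
if command -v naviseccli >/dev/null 2>&1; then
  { echo "Runtime: $(date)"
    for entry in $ARRAYS; do
      for ((id=0; id<${entry##*:}; id++)); do
        pool_eta "${entry%%:*}" "$id"
      done
    done
  } > /reports/tierstatus.txt
fi
```

Adding a new array is then a matter of adding one "hostname:poolcount" entry rather than several copied lines.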

 

 


Gartner’s Market Share Analysis for Storage Vendors

Here’s an interesting market share analysis by Gartner that was published a few months ago:  http://www.gartner.com/technology/reprints.do?id=1-1GUZA31&ct=130703&st=sb.  It looks like EMC and NetApp rule the market, with EMC on top.  Below are the key findings copied from the linked article.

  • EMC and NetApp retained almost 80% of the market. They were separated by more than $2 billion in revenue from IBM and HP, their next two largest competitors.
  • No. 1 EMC grew its overall network-attached storage (NAS)/unified storage share to 47.9% (up from 41.7% in 2011), while No. 2 NetApp’s overall NAS/unified storage share dropped to 30.3% (down from 36% in 2011).
  • In the overall NAS/unified storage share ranking, the positions of the nine named vendors remained unchanged in 2012 (in order of share rank: EMC, NetApp, IBM, HP, Oracle, Netgear, Dell, Hitachi/Hitachi Data Systems and Fujitsu).
  • For the fifth consecutive year, iSCSI storage area network (SAN) revenue and Fibre Channel SAN revenue continued to gain proportionate share in the overall NAS/unified market.
  • The “pure NAS” market continues to grow at a much faster rate (15.9%) than the overall external controller-based (ECB) block-access market (2.3%), in large part due to the expanding NAS support of growing vertical applications and virtualization.

Reporting on Clariion / VNX Block Storage Pool capacity with a bash script

I recently added a post about how to report on Celerra & VNX File pool sizes with a bash script. I’ve also been doing that for a long time with our Clariion and VNX block pools so I thought I’d share that information as well.

I use a cron job to schedule the report daily and copy it to our internal web server. I then run the csv2htm.pl perl script (from http://www.jpsdomain.org/source/perl.html) to convert it to an HTML output file for our intranet report page. This is likely the most viewed report I create, as we always seem to be running low on available capacity.

The output of the script looks similar to this:

(PoolReport screenshot: sample output of the pool capacity report)

Here is the bash script that creates the report:

TODAY=$(date)
#Add the current time/date stamp to the top of the report
echo $TODAY > /scripts/PoolReport.csv
#
#Create a file with the standard Navisphere CLI output for each storage pool (to be processed later into the format I want)
naviseccli -h VNX_1 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_0.csv
naviseccli -h VNX_1 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_1.csv
naviseccli -h VNX_1 storagepool -list -id 2 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_2.csv
naviseccli -h VNX_1 storagepool -list -id 3 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_3.csv
naviseccli -h VNX_1 storagepool -list -id 4 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_9.csv
#
naviseccli -h VNX_2 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_4.csv
naviseccli -h VNX_2 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_5.csv
naviseccli -h VNX_2 storagepool -list -id 2 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_7.csv
naviseccli -h VNX_2 storagepool -list -id 3 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_8.csv
#
naviseccli -h VNX_3 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_6.csv
#
naviseccli -h VNX_4 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site2_0.csv
naviseccli -h VNX_4 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site2_1.csv
naviseccli -h VNX_4 storagepool -list -id 2 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site2_2.csv
naviseccli -h VNX_4 storagepool -list -id 3 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site2_3.csv
#
naviseccli -h VNX_5 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site2_4.csv
#
naviseccli -h VNX_6 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site3_0.csv
naviseccli -h VNX_6 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site3_1.csv
#
naviseccli -h VNX_7 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site4_0.csv
naviseccli -h VNX_7 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site4_1.csv
#
naviseccli -h VNX_8 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site5_0.csv
naviseccli -h VNX_8 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site5_1.csv
#
naviseccli -h VNX_9 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site6_0.csv
naviseccli -h VNX_9 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site6_1.csv
#
#Create a new file for each site's storage pool (from the files generated in the previous step) that contains only the info that I want.

#

cat /scripts/report_site1_0.csv | grep Name > /scripts/Site1Pool0.csv
cat /scripts/report_site1_0.csv | grep GBs >>/scripts/Site1Pool0.csv
cat /scripts/report_site1_0.csv | grep Full >>/scripts/Site1Pool0.csv
#
cat /scripts/report_site1_1.csv | grep Name > /scripts/Site1Pool1.csv
cat /scripts/report_site1_1.csv | grep GBs >>/scripts/Site1Pool1.csv
cat /scripts/report_site1_1.csv | grep Full >>/scripts/Site1Pool1.csv
#
cat /scripts/report_site1_2.csv | grep Name > /scripts/Site1Pool2.csv
cat /scripts/report_site1_2.csv | grep GBs >>/scripts/Site1Pool2.csv
cat /scripts/report_site1_2.csv | grep Full >>/scripts/Site1Pool2.csv
#
cat /scripts/report_site1_3.csv | grep Name > /scripts/Site1Pool3.csv
cat /scripts/report_site1_3.csv | grep GBs >>/scripts/Site1Pool3.csv
cat /scripts/report_site1_3.csv | grep Full >>/scripts/Site1Pool3.csv
#
cat /scripts/report_site1_9.csv | grep Name > /scripts/Site1Pool9.csv
cat /scripts/report_site1_9.csv | grep GBs >>/scripts/Site1Pool9.csv
cat /scripts/report_site1_9.csv | grep Full >>/scripts/Site1Pool9.csv
#
cat /scripts/report_site1_4.csv | grep Name > /scripts/Site1Pool4.csv
cat /scripts/report_site1_4.csv | grep GBs >>/scripts/Site1Pool4.csv
cat /scripts/report_site1_4.csv | grep Full >>/scripts/Site1Pool4.csv
#
cat /scripts/report_site1_5.csv | grep Name > /scripts/Site1Pool5.csv
cat /scripts/report_site1_5.csv | grep GBs >>/scripts/Site1Pool5.csv
cat /scripts/report_site1_5.csv | grep Full >>/scripts/Site1Pool5.csv
#
cat /scripts/report_site1_7.csv | grep Name > /scripts/Site1Pool7.csv
cat /scripts/report_site1_7.csv | grep GBs >>/scripts/Site1Pool7.csv
cat /scripts/report_site1_7.csv | grep Full >>/scripts/Site1Pool7.csv
#
cat /scripts/report_site1_8.csv | grep Name > /scripts/Site1Pool8.csv
cat /scripts/report_site1_8.csv | grep GBs >>/scripts/Site1Pool8.csv
cat /scripts/report_site1_8.csv | grep Full >>/scripts/Site1Pool8.csv
#
cat /scripts/report_site1_6.csv | grep Name > /scripts/Site1Pool6.csv
cat /scripts/report_site1_6.csv | grep GBs >>/scripts/Site1Pool6.csv
cat /scripts/report_site1_6.csv | grep Full >>/scripts/Site1Pool6.csv

#

cat /scripts/report_site2_0.csv | grep Name > /scripts/Site2Pool0.csv
cat /scripts/report_site2_0.csv | grep GBs >>/scripts/Site2Pool0.csv
cat /scripts/report_site2_0.csv | grep Full >>/scripts/Site2Pool0.csv
#
cat /scripts/report_site2_1.csv | grep Name > /scripts/Site2Pool1.csv
cat /scripts/report_site2_1.csv | grep GBs >>/scripts/Site2Pool1.csv
cat /scripts/report_site2_1.csv | grep Full >>/scripts/Site2Pool1.csv
#
cat /scripts/report_site2_2.csv | grep Name > /scripts/Site2Pool2.csv
cat /scripts/report_site2_2.csv | grep GBs >>/scripts/Site2Pool2.csv
cat /scripts/report_site2_2.csv | grep Full >>/scripts/Site2Pool2.csv
#
cat /scripts/report_site2_3.csv | grep Name > /scripts/Site2Pool3.csv
cat /scripts/report_site2_3.csv | grep GBs >>/scripts/Site2Pool3.csv
cat /scripts/report_site2_3.csv | grep Full >>/scripts/Site2Pool3.csv
#
cat /scripts/report_site2_4.csv | grep Name > /scripts/Site2Pool4.csv
cat /scripts/report_site2_4.csv | grep GBs >>/scripts/Site2Pool4.csv
cat /scripts/report_site2_4.csv | grep Full >>/scripts/Site2Pool4.csv

#

cat /scripts/report_site3_0.csv | grep Name > /scripts/Site3Pool0.csv
cat /scripts/report_site3_0.csv | grep GBs >>/scripts/Site3Pool0.csv
cat /scripts/report_site3_0.csv | grep Full >>/scripts/Site3Pool0.csv
#
cat /scripts/report_site3_1.csv | grep Name > /scripts/Site3Pool1.csv
cat /scripts/report_site3_1.csv | grep GBs >>/scripts/Site3Pool1.csv
cat /scripts/report_site3_1.csv | grep Full >>/scripts/Site3Pool1.csv
#
cat /scripts/report_site4_0.csv | grep Name > /scripts/Site4Pool0.csv
cat /scripts/report_site4_0.csv | grep GBs >>/scripts/Site4Pool0.csv
cat /scripts/report_site4_0.csv | grep Full >>/scripts/Site4Pool0.csv
#
cat /scripts/report_site4_1.csv | grep Name > /scripts/Site4Pool1.csv
cat /scripts/report_site4_1.csv | grep GBs >>/scripts/Site4Pool1.csv
cat /scripts/report_site4_1.csv | grep Full >>/scripts/Site4Pool1.csv
#
cat /scripts/report_site5_0.csv | grep Name > /scripts/Site5Pool0.csv
cat /scripts/report_site5_0.csv | grep GBs >>/scripts/Site5Pool0.csv
cat /scripts/report_site5_0.csv | grep Full >>/scripts/Site5Pool0.csv
#
cat /scripts/report_site5_1.csv | grep Name > /scripts/Site5Pool1.csv
cat /scripts/report_site5_1.csv | grep GBs >>/scripts/Site5Pool1.csv
cat /scripts/report_site5_1.csv | grep Full >>/scripts/Site5Pool1.csv
#
cat /scripts/report_site6_0.csv | grep Name > /scripts/Site6Pool0.csv
cat /scripts/report_site6_0.csv | grep GBs >>/scripts/Site6Pool0.csv
cat /scripts/report_site6_0.csv | grep Full >>/scripts/Site6Pool0.csv
#
cat /scripts/report_site6_1.csv | grep Name > /scripts/Site6Pool1.csv
cat /scripts/report_site6_1.csv | grep GBs >>/scripts/Site6Pool1.csv
cat /scripts/report_site6_1.csv | grep Full >>/scripts/Site6Pool1.csv

#

#The last section creates the final output for the report before it is processed into an HTML table. It creates a single line for each storage pool with the pool name, total GB, used GB, available GB, and the percent utilization of the pool.
#
echo 'Pool Name','Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site1Pool0.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool0.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool0.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool0.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool0.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site1Pool1.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool1.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool1.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool1.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool1.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site1Pool2.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool2.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool2.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool2.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool2.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site1Pool3.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool3.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool3.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool3.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool3.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site1Pool9.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool9.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool9.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool9.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool9.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
#
echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site1Pool4.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool4.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool4.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool4.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool4.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site1Pool5.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool5.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool5.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool5.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool5.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site1Pool7.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool7.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool7.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool7.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool7.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site1Pool8.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool8.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool8.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool8.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool8.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
#
echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site1Pool6.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool6.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool6.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool6.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool6.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
#
echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site2Pool0.csv |awk '{print $3}'`","`grep -i User /scripts/Site2Pool0.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site2Pool0.csv |awk '{print $4}'`","`grep -i Available /scripts/Site2Pool0.csv |awk '{print $4}'`","`grep -i Full /scripts/Site2Pool0.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site2Pool1.csv |awk '{print $3}'`","`grep -i User /scripts/Site2Pool1.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site2Pool1.csv |awk '{print $4}'`","`grep -i Available /scripts/Site2Pool1.csv |awk '{print $4}'`","`grep -i Full /scripts/Site2Pool1.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site2Pool2.csv |awk '{print $3}'`","`grep -i User /scripts/Site2Pool2.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site2Pool2.csv |awk '{print $4}'`","`grep -i Available /scripts/Site2Pool2.csv |awk '{print $4}'`","`grep -i Full /scripts/Site2Pool2.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site2Pool3.csv |awk '{print $3}'`","`grep -i User /scripts/Site2Pool3.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site2Pool3.csv |awk '{print $4}'`","`grep -i Available /scripts/Site2Pool3.csv |awk '{print $4}'`","`grep -i Full /scripts/Site2Pool3.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
#
echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site2Pool4.csv |awk '{print $3}'`","`grep -i User /scripts/Site2Pool4.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site2Pool4.csv |awk '{print $4}'`","`grep -i Available /scripts/Site2Pool4.csv |awk '{print $4}'`","`grep -i Full /scripts/Site2Pool4.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
#
echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site3Pool0.csv |awk '{print $3}'`","`grep -i User /scripts/Site3Pool0.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site3Pool0.csv |awk '{print $4}'`","`grep -i Available /scripts/Site3Pool0.csv |awk '{print $4}'`","`grep -i Full /scripts/Site3Pool0.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site3Pool1.csv |awk '{print $3}'`","`grep -i User /scripts/Site3Pool1.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site3Pool1.csv |awk '{print $4}'`","`grep -i Available /scripts/Site3Pool1.csv |awk '{print $4}'`","`grep -i Full /scripts/Site3Pool1.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
#
echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site4Pool0.csv |awk '{print $3}'`","`grep -i User /scripts/Site4Pool0.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site4Pool0.csv |awk '{print $4}'`","`grep -i Available /scripts/Site4Pool0.csv |awk '{print $4}'`","`grep -i Full /scripts/Site4Pool0.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site4Pool1.csv |awk '{print $3}'`","`grep -i User /scripts/Site4Pool1.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site4Pool1.csv |awk '{print $4}'`","`grep -i Available /scripts/Site4Pool1.csv |awk '{print $4}'`","`grep -i Full /scripts/Site4Pool1.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
#
echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site5Pool0.csv |awk '{print $3}'`","`grep -i User /scripts/Site5Pool0.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site5Pool0.csv |awk '{print $4}'`","`grep -i Available /scripts/Site5Pool0.csv |awk '{print $4}'`","`grep -i Full /scripts/Site5Pool0.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site5Pool1.csv |awk '{print $3}'`","`grep -i User /scripts/Site5Pool1.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site5Pool1.csv |awk '{print $4}'`","`grep -i Available /scripts/Site5Pool1.csv |awk '{print $4}'`","`grep -i Full /scripts/Site5Pool1.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
#
echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site6Pool0.csv |awk '{print $3}'`","`grep -i User /scripts/Site6Pool0.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site6Pool0.csv |awk '{print $4}'`","`grep -i Available /scripts/Site6Pool0.csv |awk '{print $4}'`","`grep -i Full /scripts/Site6Pool0.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
echo `grep Name /scripts/Site6Pool1.csv |awk '{print $3}'`","`grep -i User /scripts/Site6Pool1.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site6Pool1.csv |awk '{print $4}'`","`grep -i Available /scripts/Site6Pool1.csv |awk '{print $4}'`","`grep -i Full /scripts/Site6Pool1.csv |awk '{print $3}'` >> /scripts/PoolReport.csv
#
#Convert the file to HTML for use on the internal web server
./csv2htm.pl -e -T -i /scripts/PoolReport.csv -o /webfolder/PoolReport.html
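As an aside, the repeated grep/awk chains above could be folded into a single awk pass per pool. This is a hedged sketch rather than part of the report script: the field labels ("User Capacity (GBs)" and so on) are assumed from the output that the naviseccli flags above request.

```shell
# Turn one saved naviseccli storagepool output file into a single CSV row:
# name,total,used,available,percent-full
pool_csv_row() {    # pool_csv_row <naviseccli_output_file>
  awk -F': *' '
    /Pool Name/                  { name  = $2 }
    /User Capacity \(GBs\)/      { total = $2 }
    /Consumed Capacity \(GBs\)/  { used  = $2 }
    /Available Capacity \(GBs\)/ { avail = $2 }
    /Percent Full/               { pct   = $2 }
    END { printf "%s,%s,%s,%s,%s\n", name, total, used, avail, pct }
  ' "$1"
}
# Usage: pool_csv_row /scripts/report_site1_0.csv >> /scripts/PoolReport.csv
```

One call per saved output file would then replace both the cat/grep staging files and the long echo lines.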

Scripting Alias and Zone creation for Brocade Switches

Whenever a new array is added to our environment, we usually have to add hundreds of new zones to our core Brocade fabric.  It's a very tedious job, so I started investigating ways to script the process.

Here is the syntax for running an ssh command remotely (telnet is disabled on all of our switches):

ssh userid@switchname [command]
 

So, if I wanted to show all of the zones on a switch, the command would look like this:

ssh admin@10.0.0.1 zoneshow
 

While those commands work perfectly and could be added to a bash script as-is, the caveat is that the password must be typed in for every command.  That's not practical when there are hundreds of lines in the script.  You could set up SSH keys on every switch, but I was looking for a quicker solution, and I found it in an open-source package called sshpass (http://sourceforge.net/projects/sshpass), which allows you to supply the password on the command line.  I use Cygwin on a Windows server for all my bash scripts, and installation was very simple in that environment; I'm sure it's just as easy on Linux.  Download it to a folder, uncompress it, run "./configure", "make", and "make install", and you're all done.

Once sshpass is installed, the syntax for showing the zones on a brocade switch would look like this:

sshpass -p "[password]" ssh admin@10.0.0.1 zoneshow
 

Now that the commands can be run easily from a remote shell, I needed a faster way to build the script itself.  That's where a spreadsheet application helps tremendously.  Divide the command line into separate cells, making sure any part of the command that changes gets its own cell.  In the last cell on the row, concatenate all of the previous cells to create the complete command.  Below is how I did it.

Here’s the syntax output I want for creating a zone:

sshpass -p 'password' ssh [User_ID]@[Switch_Name] zonecreate "[Zone_Name]","[Host_Alias]","[SP_Alias]"
 

Here's how I divide the command up in the spreadsheet (note the trailing space at the end of A1 and the leading space in E1, which keep the concatenated command intact):

A1          sshpass -p 'password' ssh 
B1          [User_ID]
C1          @
D1          [switch_name]
E1           zonecreate "
F1          [Zone_Name]
G1          ","
H1          [Host_Alias]
I1          ","
J1          [Clariion/VNX_SP_Alias]
K1          "
L1          =CONCATENATE(A1,B1,C1,D1,E1,F1,G1,H1,I1,J1,K1)

Now you can copy row 1 of the spreadsheet down to as many rows as you need, change the User ID, Switch Name, Zone Name, Host Alias, and Clariion/VNX SP Alias as needed, and the L column will contain the completed commands, ready to cut and paste into a bash script.  Create a blank script file with 'touch file.sh', make it executable with 'chmod +x file.sh', use vi to paste in the commands, then run it with './file.sh' from the prompt.

The same thing can be done for creating the initial aliases.  Here's the syntax of the command for that:

sshpass -p 'password' ssh [User_ID]@[Switch_Name] alicreate "[Alias_Name]","[WWN]"
 

And finally, here’s what it looks like entered into a spreadsheet:

(BrocadeScript screenshot: the alias commands entered into a spreadsheet)
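If you'd rather avoid the spreadsheet step entirely, the same concatenation can be done with awk. This is a hedged sketch under a few assumptions that aren't from the post itself: a three-column zones.csv (zone name, host alias, SP alias, shown here with made-up sample rows), a hard-coded admin user and switch IP, and the password supplied at run time through a SWITCHPASS environment variable instead of being typed into the script.

```shell
# Sample input file; replace these made-up rows with your real zone list.
printf '%s\n' \
  'Z_host1_SPA,host1_hba0,array1_spa' \
  'Z_host1_SPB,host1_hba1,array1_spb' > zones.csv

# Emit one sshpass/zonecreate line per CSV row, mirroring the syntax above.
awk -F, '{
  printf "sshpass -p \"$SWITCHPASS\" ssh admin@10.0.0.1 zonecreate \"%s\",\"%s\",\"%s\"\n", $1, $2, $3
}' zones.csv > create_zones.sh
```

Review create_zones.sh, then export SWITCHPASS and run it with './create_zones.sh' just like the hand-built script.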

Reporting on Celerra / VNX NAS Pool capacity with a bash script

I recently created a script that I run on all of our Celerras and VNXs to report on NAS pool size.   The output from each array is converted to HTML and combined on a single intranet page, providing a quick at-a-glance view of our global NAS capacity and disk space consumption.  I made another post that shows how to create a block storage pool report as well:  http://emcsan.wordpress.com/2013/08/09/reporting-on-celerravnx-block-storage-pool-capacity-with-a-bash-script/

The default command unfortunately outputs in megabytes, with no option to change to GB or TB.  This script performs the MB-to-GB conversion and adds a comma as the thousands separator (the convention we use in the USA) to make the output much more readable.

First, identify the ID number for each of your NAS pools.  You’ll need to insert the ID numbers into the script itself.

[nasadmin@celerra]$ nas_pool -list
 id      inuse   acl     name                      storage system
 10      y       0       NAS_Pool0_SPA             AKM00111000000
 18      y       0       NAS_Pool1_SPB             AKM00111000000
Note that the default output of the command that provides the size of each pool is in a very hard to read format.  I wanted to clean it up to make it easier to read on our reporting page.  Here’s the default output:
[nasadmin@celerra]$ nas_pool -size -all
id           = 10
name         = NAS_Pool0_SPA
used_mb      = 3437536
avail_mb     = 658459
total_mb     = 4095995
potential_mb = 0
id           = 18
name         = NAS_Pool1_SPB
used_mb      = 2697600
avail_mb     = 374396
total_mb     = 3071996
potential_mb = 1023998
My script changes the output to look like the example below:

Name (Site)   ; Total GB ; Used GB  ; Avail GB
NAS_Pool0_SPA ; 4,000    ; 3,356.97 ; 643.03
NAS_Pool1_SPB ; 3,000    ; 2,634.38 ; 365.62
In this example there are two NAS pools, and the script is set up to report on both.  It could easily be expanded or reduced depending on the number of pools on your array.  The variable names I used include the pool ID numbers from the output above; change them to match your IDs, and update the 'id=' portion of each command to match as well.

Here’s the script:

#!/bin/bash

NAS_DB="/nas"
export NAS_DB

# Set the Locale to English/US, used for adding the comma as a separator in a cron job
export LC_NUMERIC="en_US.UTF-8"
TODAY=$(date)


# Gather Pool Name, Used MB, Available MB, and Total MB for First Pool

# Set variable to pull the Name of the pool from the output of 'nas_pool -size'.
name18=`/nas/bin/nas_pool -size id=18 | /bin/grep name | /bin/awk '{print $3}'`

# Set variable to pull the Used MB of the pool from the output of 'nas_pool -size'.
usedmb18=`/nas/bin/nas_pool -size id=18 | /bin/grep used_mb | /bin/awk '{print $3}'`

# Set variable to pull the Available MB of the pool from the output of 'nas_pool -size'.
availmb18=`/nas/bin/nas_pool -size id=18 | /bin/grep avail_mb | /bin/awk '{print $3}'`
# Set variable to pull the Total MB of the pool from the output of 'nas_pool -size'.

totalmb18=`/nas/bin/nas_pool -size id=18 | /bin/grep total_mb | /bin/awk '{print $3}'`

# Convert MB to GB, Add Comma as separator in output

# Remove '...b' variables if you don't want commas as a separator

# Convert Used MB to Used GB
usedgb18=`/bin/echo $usedmb18/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
usedgb18b=`/usr/bin/printf "%'.2f\n" "$usedgb18" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Convert Available MB to Available GB
availgb18=`/bin/echo $availmb18/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
availgb18b=`/usr/bin/printf "%'.2f\n" "$availgb18" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Convert Total MB to Total GB
totalgb18=`/bin/echo $totalmb18/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
totalgb18b=`/usr/bin/printf "%'.2f\n" "$totalgb18" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Gather Pool Name, Used MB, Available MB, and Total MB for Second Pool

# Set variable to pull the Name of the pool from the output of 'nas_pool -size'.
name10=`/nas/bin/nas_pool -size id=10 | /bin/grep name | /bin/awk '{print $3}'`

# Set variable to pull the Used MB of the pool from the output of 'nas_pool -size'.
usedmb10=`/nas/bin/nas_pool -size id=10 | /bin/grep used_mb | /bin/awk '{print $3}'`

# Set variable to pull the Available MB of the pool from the output of 'nas_pool -size'.
availmb10=`/nas/bin/nas_pool -size id=10 | /bin/grep avail_mb | /bin/awk '{print $3}'`

# Set variable to pull the Total MB of the pool from the output of 'nas_pool -size'.
totalmb10=`/nas/bin/nas_pool -size id=10 | /bin/grep total_mb | /bin/awk '{print $3}'`
 
# Convert MB to GB, Add Comma as separator in output

# Remove '...b' variables if you don't want commas as a separator
 
# Convert Used MB to Used GB
usedgb10=`/bin/echo $usedmb10/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
usedgb10b=`/usr/bin/printf "%'.2f\n" "$usedgb10" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Convert Available MB to Available GB
availgb10=`/bin/echo $availmb10/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
availgb10b=`/usr/bin/printf "%'.2f\n" "$availgb10" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Convert Total MB to Total GB
totalgb10=`/bin/echo $totalmb10/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
totalgb10b=`/usr/bin/printf "%'.2f\n" "$totalgb10" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Create Output File

# If you don't want the comma separator in the output file, substitute the variable without the 'b' at the end.

# I use the semicolon rather than the comma as a separator due to the fact that I'm using the comma as a numerical separator.

# The comma could be substituted here if desired.

/bin/echo $TODAY > /scripts/NasPool.txt
/bin/echo "Name" ";" "Total GB" ";" "Used GB" ";" "Avail GB" >> /scripts/NasPool.txt
/bin/echo $name18 ";" $totalgb18b ";" $usedgb18b ";" $availgb18b >> /scripts/NasPool.txt
/bin/echo $name10 ";" $totalgb10b ";" $usedgb10b ";" $availgb10b >> /scripts/NasPool.txt
Here's what the output looks like:

Wed Jul 17 23:56:29 JST 2013
Name (Site) ; Total GB ; Used GB ; Avail GB
NAS_Pool0_SPA ; 4,000 ; 3,356.97 ; 643.03
NAS_Pool1_SPB ; 3,000 ; 2,634.38 ; 365.62
I use a cron job to run the report daily and copy it to our internal web server.  I then run the csv2html.pl perl script (from http://www.jpsdomain.org/source/perl.html) to convert it to an HTML output file for our intranet report page.
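A crontab entry for that schedule might look like the following; the wrapper script name, run time, and web server path are all assumptions for illustration, not details from my actual setup:

```shell
# Hypothetical cron schedule: run the NAS pool report daily at 07:00,
# then copy the text file to the internal web server.
0 7 * * * /scripts/naspool.sh && scp /scripts/NasPool.txt webserver:/var/www/reports/
```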

Note that I had to modify the csv2html.pl command to accommodate the use of a semicolon instead of the default comma in a csv file.  Here is the command I use to do the conversion:

./csv2html.pl -e -T -D ";" -i /reports/NasPool.txt -o /reports/NasPool.html
Below is what the output looks like after running the HTML conversion tool.

[Image: NASPool HTML report screenshot]