
Automating VNX Storage Processor Percent Utilization Alerts

Note: The original post describes a method that requires EMC Control Center and Performance Manager. That tool has since been deprecated by EMC in favor of ViPR SRM. There is still a method you can use to gather CPU information for use in bash scripts. I don't have script examples that use this command, but if anyone needs help send me a comment and I'll help. The Navisphere CLI command to get busy/idle ticks for the storage processors is naviseccli -h <SP IP or hostname> getcontrol -cbt.

The output looks like this:

Controller busy ticks: 1639432
Controller idle ticks: 1773844

The SP utilization statistics in the output are an average of the utilization across all cores of the SP's processors since the last reset. Getting an actual point-in-time SP CPU utilization from this output requires a calculation: poll twice, compute a delta for each counter by subtracting the earlier value from the later one, and apply this formula:

Utilization = Busy Ticks / (Busy Ticks + Idle Ticks)
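
Since a few people have asked for a starting point, here is a rough bash sketch of that two-poll calculation. This isn't something I run in production: the <SP_IP> placeholder and the 60-second polling interval are just examples, and it assumes your Navisphere CLI credentials are already set up (for example with a security file).

#!/bin/bash
# Rough sketch only: poll the SP busy/idle tick counters twice and compute
# utilization from the deltas. <SP_IP> and the 60-second interval are placeholders.

SP="<SP_IP>"
INTERVAL=60

get_ticks() {
    naviseccli -h "$SP" getcontrol -cbt | awk '
        /Controller busy ticks/ {busy=$NF}
        /Controller idle ticks/ {idle=$NF}
        END {print busy, idle}'
}

read BUSY1 IDLE1 <<< "$(get_ticks)"
sleep $INTERVAL
read BUSY2 IDLE2 <<< "$(get_ticks)"

BUSY=$((BUSY2 - BUSY1))
IDLE=$((IDLE2 - IDLE1))

# Utilization = Busy Ticks / (Busy Ticks + Idle Ticks), expressed as a percentage
UTIL=$(awk -v b="$BUSY" -v i="$IDLE" 'BEGIN {printf "%.1f", 100 * b / (b + i)}')
echo "SP utilization over the last $INTERVAL seconds: $UTIL %"

The output could be dropped into a text file and fed into the same threshold/email logic shown later in this post.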

What follows is the original method I posted that requires EMC Control Center.

I was tasked with coming up with a way to get email alerts whenever our SP utilization breaks a certain threshold.  Since none of the monitoring tools we own will do that right now, I had to come up with a way using custom scripts.  This is my second post on the same subject; I removed yesterday's post as it didn't work as I intended.  This time I used EMC's Performance Manager rather than pulling data from the SP with the Navisphere CLI.

First, I'm running all of my bash scripts on a Windows server using cygwin.  They should run fine on any Linux box as well.  Because I don't have a native sendmail configuration set up on the Windows server, I'm using the control station on the Celerra to do the comparison of the utilization numbers in the text files and to email out an alert.  The Celerra control station automatically pulls the files via FTP from the Windows server every 30 minutes and sends out an email alert if the numbers cross the threshold.  A description of each script and the schedule is below.

Windows Server:

Export.cmd:

This first Windows batch script runs an export (with pmcli) from EMC Performance Manager that dumps all of the performance stats for the current day.

For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set date=%%c%%a%%b)

C:\ECC\Client.610\PerformanceManager\pmcli.exe -export -out c:\cygwin\home\scripts\sputil\0999_interval.csv -type interval -class clariion -date %date% -id APM00400500999

Data.cmd:

This cygwin/bash script manipulates the export file from above and ultimately creates two text files (one for SPA and one for SPB), each containing a single numerical value for the most recent SP utilization.  There are a few extra steps at the beginning of the script that are irrelevant to the SP utilization; they're there for other purposes.

#This will pull only the timestamp line from the top

grep -m 1 "/" /home/scripts/sputil/0999_interval.csv > /home/scripts/sputil/timestamp.csv

# This will pull out all of the "% Utilization" lines.

grep -i "^% Utilization" /home/scripts/sputil/0999_interval.csv >> /home/scripts/sputil/stats.csv

# This will pull out the disk/LUN title info for the first column

grep -i "Data Collected for DiskStats -" /home/scripts/sputil/0999_interval.csv > /home/scripts/sputil/diskstats.csv

grep -i "Data Collected for LUNStats -" /home/scripts/sputil/0999_interval.csv > /home/scripts/sputil/lunstats.csv

# This will create a column with the disk/LUN number

cat /home/scripts/sputil/diskstats.csv /home/scripts/sputil/lunstats.csv > /home/scripts/sputil/data.csv

# This combines the disk/LUN column with the data column

paste /home/scripts/sputil/data.csv /home/scripts/sputil/stats.csv > /home/scripts/sputil/combined.csv

cp /home/scripts/sputil/combined.csv /home/scripts/sputil/utilstats.csv
 

#  This removes all the temporary files
rm /home/scripts/sputil/timestamp.csv
rm /home/scripts/sputil/stats.csv
rm /home/scripts/sputil/diskstats.csv
rm /home/scripts/sputil/lunstats.csv
rm /home/scripts/sputil/data.csv
rm /home/scripts/sputil/combined.csv

# This next line strips the file of all but the last two rows, which are SP Utilization.

# With the field separator set to empty, $1 is the first character of each row; rows that start with "D" (the disk/LUN data rows) are deleted.

awk -v FS="" -v OFS="" '$1 != "D"' < /home/scripts/sputil/utilstats.csv > /home/scripts/sputil/sputil.csv

#This pulls the values from the last column, which would be the most recent.

awk -F, '{print $(NF-1)}' < /home/scripts/sputil/sputil.csv > /home/scripts/sputil/sp_util.csv

#pull 1st line (SPA) into separate file

sed -n 1,1p < /home/scripts/sputil/sp_util.csv > /home/scripts/sputil/spAutil.txt

#pull 2nd line (SPB) into separate file

sed -n 2,2p < /home/scripts/sputil/sp_util.csv > /home/scripts/sputil/spButil.txt

#The spAutil.txt/spButil.txt files now contain only a single numerical value, which would be the most recent %utilization from the Control Center/Performance Manager dump file.

#Copy files to web server root directory

cp /home/scripts/sputil/*.txt /cygdrive/c/inetpub/wwwroot

Celerra Control Station:

CelerraArray:/home/nasadmin/sputil/ftpsp.sh

The script below connects to the Windows server and grabs the current SP utilization text files over FTP every 30 minutes (via a cron job).

#!/bin/bash
cd /home/nasadmin/sputil
ftp windows_server.domain.net <<SCRIPT
get spAutil.txt
get spButil.txt
quit
SCRIPT
CelerraArray:/home/nasadmin/sputil/spcheck.sh:

This script does the comparison check to see if the SP utilization is over our threshold. If it is, it sends an email alert that includes the %Utilization number in the subject line.  To change the threshold setting, you'd need to change the THRESHOLD=<XX> line in the script.  The line containing printf "%2.0f" converts the floating-point value to an integer, as bash's integer comparisons don't handle floating-point values.

#!/bin/bash

SPB=`cat /home/nasadmin/sputil/spButil.txt`
printf "%2.0f" $SPB > /home/nasadmin/sputil/spButil2.txt
SPB=`cat /home/nasadmin/sputil/spButil2.txt`

echo $SPB
THRESHOLD=50
if [ $SPB -eq 0 ] && [ $THRESHOLD -eq 0 ] 
then 
        echo "Both are zero"
 elif [ $SPB -eq $THRESHOLD ]
 then         
        echo "Both Values are equal"
 elif [ $SPB -gt $THRESHOLD ]
 then          
        echo "SPB is greater than the threshold.  Sending alert" 

        uuencode spButil.txt spButil.txt | mail -s "<array_name> SPB Utilization Alert: $SPB % above threshold of $THRESHOLD %" notify@domain.com
else         
echo "$SPB is less than $THRESHOLD"
fi

CelerraArray Crontab schedule:

The FTP script is currently set to pull the SP utilization files.  Run "crontab -e" to edit the scheduler.  I've got the alert script set to run at the top of the hour and at half past, and the updated SP files are FTP'd from the web server a few minutes prior.

[nasadmin@CelerraArray sputil]$ crontab -l
58,28 * * * * /home/nasadmin/sputil/ftpsp.sh
0,30 * * * * /home/nasadmin/sputil/spcheck.sh
Overall Scheduling:

Windows Server:

Performance Manager Dump runs 15 minutes past the hour (exports data)
Data script runs at 20 minutes past the hour (processes data to get SP Utilization)

Celerra Server:

FTP script pulls new SP utilization text files at 28 minutes past the hour
Alert script runs at 30 minutes past the hour

The cycle then repeats at minute 45, minute 50, minute 58, and minute 0.

 


Auto generating daily performance graphs with EMC Control Center / Performance Manager

This document describes the process I used to pull performance data using the ECC pmcli command line tool, parse the data to make it more usable with a graphing tool, and then use perl scripts to automatically generate graphs.

You must install Perl.  I use ActiveState Perl (Free Community Edition) (http://www.activestate.com/activeperl/downloads).

You must install Cygwin.  Link: http://www.cygwin.com/install.html. I generally choose all packages.

I use the following CPAN Perl modules:

  • GD::Graph (GDGraph)
  • Text::ParseWords
  • Data::Dumper
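
If any of them are missing, they can be installed from a command prompt.  A quick example (Text::ParseWords and Data::Dumper normally ship with Perl, so in practice only the graphing module needs installing):

# With the cpan client:

cpan GD::Graph

# Or with ActiveState Perl's package manager:

ppm install GDGraph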

Step 1:

Once you have the software set up, the first step is to use the ECC command line utility to extract the interval performance data that you’re interested in graphing.  Below is a sample PMCLI command line script that could be used for this purpose.

:: Get the current date

For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set date=%%c%%a%%b)

:: Export the interval file for today's date.

D:\ECC\Client.610\PerformanceManager\pmcli.exe -export -out D:\archive\interval.csv -type interval -class clariion -date %date% -id APM00324532111

:: Copy the export data to my cygwin home directory for processing later.

copy /y D:\archive\interval.csv C:\cygwin\home\<userid>

You can schedule the command script above to run with the Windows Task Scheduler.  I run it at 11:46 PM every night, as data is collected on our SAN in 15-minute intervals, and that gives me a file that reports all the way up to the end of one calendar day.

Note that there are 95 data collection points from 00:15 to 23:45 every day if you collect data at 15 minute intervals.  The storage processor data resides in the last two lines of the output file.
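
If you just want to eyeball those two SP lines before doing anything else with the file, a quick check like this works from the cygwin prompt:

tail -n 2 interval.csv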

Here is what the output file looks like:

EMC ControlCenter Performance manager generated file from: <path>

Data Collected for DiskStats

Data Collected for DiskStats - 0_0_0

                                            3/28/11 00:15    3/28/11 00:30    3/28/11 00:45    3/28/11 01:00

Number of Arrivals with Non Zero Queue      12               20               23               23

% Utilization                               30.2             33.3             40.4             60.3

Response Time                               1.8              3.3              5.4              7.8

Read Throughput IO per sec                  80.6             13.33            90.4             10.3

Great information in there, but the format makes it very hard to do anything meaningful with the data in an Excel chart.  If I want to chart only % Utilization, that data is difficult to isolate because there are so many other counters around it that also have data collected on them.  My next goal was to write a script that reformats the data into a much more usable layout and automatically creates a graph for one specific counter I'm interested in (like daily utilization numbers), which could then be emailed daily or auto-uploaded to an internal website.

Step 2:

Once the PMCLI data is exported, the next step is to use cygwin bash scripts to parse the csv file and pull out only the performance data that is needed.  Each SAN will need a separate script for each type of performance data.  I have four scripts configured to run based on the data that I want to monitor.  The scripts are located in my cygwin home directory.

The scripts I use:

  • Iostats.sh (for total IO throughput)
  • Queuestats.sh (for disk queue length)
  • Resptime.sh (for disk response time in ms)
  • Utilstats.sh (for % utilization)

Here is a sample shell script for parsing the CSV export file (iostats.sh):

#!/usr/bin/bash

#This will pull only the timestamp line from the top of the CSV output file. I’ll paste it back in later.

grep -m 1 "/" interval.csv > timestamp.csv

#This will pull out only the lines that begin with "Total Throughput IO per sec".

grep -i "^Total Throughput IO per sec" interval.csv >> stats.csv

#This will pull out the disk/LUN title info for the first column.  I’ll add this back in later.

grep -i "Data Collected for DiskStats -" interval.csv > diskstats.csv

grep -i "Data Collected for LUNStats -" interval.csv > lunstats.csv

#This will create a column with the disk/LUN number.  I'll paste it into the first column later.

cat diskstats.csv lunstats.csv > data.csv

#This adds the first column (disk/LUN) and combines it with the actual performance data columns.

paste data.csv stats.csv > combined.csv

#This combines the timestamp header at the top with the combined file from the previous step to create the final file we’ll use for the graph.  There is also a step to append the current date and copy the csv file to an archive directory.

cat timestamp.csv combined.csv > iostats.csv

cp iostats.csv /cygdrive/e/SAN/csv_archive/iostats_archive_$(date +%y%m%d).csv

#  This removes all the temporary files created earlier in the script.  They’re no longer needed.

rm timestamp.csv

rm stats.csv

rm diskstats.csv

rm lunstats.csv

rm data.csv

rm combined.csv

#This strips the last two lines of the CSV (Storage Processor data).  The resulting file is used for the "all disks" spreadsheet.  We don't want the SP data to skew the graph.  This CSV file is also copied to the archive directory.

sed '$d' < iostats.csv > iostats2.csv

sed '$d' < iostats2.csv > iostats_disk.csv

rm iostats2.csv

cp iostats_disk.csv /cygdrive/e/SAN/csv_archive/iostats_disk_archive_$(date +%y%m%d).csv

Note: The shell script above can be run from the Windows Task Scheduler as long as you have cygwin installed.  Here's the syntax:

c:\cygwin\bin\bash.exe -l -c "/home/<username>/iostats.sh"

After running the shell script above, the resulting CSV file contains only Total Throughput (IO per sec) data for each disk and LUN.  It contains data from 00:15 to 23:45 in 15-minute increments.  After the cygwin scripts have run, we have CSV datasets that are ready to be turned into graphs.

The disk and LUN stats are combined into the same CSV file.  It is entirely possible to rewrite the script to keep only one or the other (a quick sketch of one way to do that follows); I put them both in there to make it easier to manually create a graph in Excel for either disk or LUN stats at a later time, if necessary.  The "all disks" graph does not look any different with both disk and LUN stats in there; I tried it both ways and they overlap in a way that makes the extra data indistinguishable in the image.
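
For example, if you only wanted the LUN stats, one simple approach (just a sketch, not what I actually run) is to filter the combined file on the LUNStats label before the timestamp header is pasted back on.  These lines would have to go before the cleanup/rm steps in the script above:

#Keep only the LUN rows from the combined disk/LUN file, then re-add the timestamp header

grep -i "LUNStats" combined.csv > lun_combined.csv

cat timestamp.csv lun_combined.csv > iostats_lun.csv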

The resulting data output after running the iostats.sh script is shown below.  I now have a nice, neat Excel spreadsheet that lists the total throughput for each disk in the array for the entire day in 15-minute increments.  Having the data formatted this way makes it very easy to create charts, but I don't want to do that manually every day; I want the charts to be created automatically.

                                            3/28/11 00:15    3/28/11 00:30    3/28/11 00:45    3/28/11 01:00

Total Throughput IO per sec - 0_0_0         12               20               23               23

Total Throughput IO per sec - 0_0_1         30.12            33.23            40.4             60.23

Total Throughput IO per sec - 0_0_2         1.82             3.3              5.4              7.8

Total Throughput IO per sec - 0_0_3         80.62            13.33            90.4             10.3

Step 3:

Now I want to automatically create the graphs every day using a Perl script.  After the CSV files have been exported to a more usable format in the previous step, I use the GD::Graph library from CPAN (http://search.cpan.org/~mverb/GDGraph-1.43/Graph.pm) to auto-generate the graphs.

Below is a sample Perl script that will auto-generate a nice-looking graph based on the CSV output file from the previous step.

#!/usr/bin/perl

#Declare the libraries that will be used.

use strict;

use Text::ParseWords;

use GD::Graph::lines;

use Data::Dumper;

#Specify the csv file that will be used to create the graph

my $file = 'C:\cygwin\home\<username>\iostats_disk.csv';

#my $file  = $ARGV[0];

my ($output_file) = ($file =~/(.*)\./);

#Create the arrays for the data and the legends

my @data;

my @legends;

#parse csv, generate an error if it fails

open(my $fh, '<', $file) or die "Can't read csv file '$file' [$!]\n";

my $countlines = 0;

while (my $line = <$fh>) {

chomp $line;

my @fields = Text::ParseWords::parse_line(',', 0, $line);

#There are 95 fields generated to correspond to the 95 data collection points in each of the output files.

my @field = @fields[1..95];
push @data, \@field;

if($countlines >= 1){

push @legends, $fields[0];

}

$countlines++;

}

#The data and legend arrays will read 820 lines of the CSV file.  This number will change based on the number of disks in the SAN, and will be different depending on the SAN being reported on.  The legend info will read the first column of the spreadsheet and create a color box that corresponds to the graph line.  For the purpose of this graph, I won’t be using it because 820+ legend entries look like a mess on the screen.

splice @data, 1, -820;

splice @legends, 0, -820;

#Set Graphing Options

my $mygraph = GD::Graph::lines->new(1024, 768);

# There are many graph options that can be changed using the GD::Graph library.  Check the website (and google) for lots of examples.

$mygraph->set(

title => 'SP IO Utilization (00:15 - 23:45)',

y_label => 'IOs Per Second',

y_tick_number => 4,

values_vertical => 6,

show_values => 0,

x_label_skip => 3,

) or warn $mygraph->error;

#As I said earlier, because of the large number of legend entries for this type of graph, I change the legend to simply read “All disks”.  If you want the legend to actually put the correct entries and colors, use this line instead:  $mygraph->set_legend(@legends);

$mygraph->set_legend('All Disks');

#Plot the data

my $myimage = $mygraph->plot(\@data) or die $mygraph->error;

# Export the graph as a gif image.  The images are currently moved to the IIS folder (c:\inetpub\wwwroot) with one of the scripts.  They could also be emailed using a sendmail utility.

my $format = $mygraph->export_format;

open(IMG,">$output_file.$format") or die $!;

binmode IMG;

print IMG $myimage->gif;

close IMG;

After this script runs, the resulting image file will be saved in the cygwin home directory (it saves to the same directory the CSV file is in).  One of the nightly scripts I run copies the image to our internal IIS server's image directory, and sendmail emails the graph to the SAN Admin team.
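
The copy and email steps themselves are simple.  Here's a rough sketch of what that nightly script could look like; the file name, web root path, and email address are just examples, and the mail step assumes you have a working mail/sendmail setup available from cygwin:

#!/usr/bin/bash

#Copy the generated graph to the IIS web root so it shows up on the internal site

cp /home/<username>/iostats_disk.gif /cygdrive/c/inetpub/wwwroot

#Email the graph to the SAN admin team

uuencode /home/<username>/iostats_disk.gif iostats_disk.gif | mail -s "Daily SAN throughput graph" san-admins@domain.com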

That’s it!  You now have lots of pretty graphs with which you can impress your management team. 🙂

Here is a sample graph that was generated with the Perl script: