Tag Archives: performance

Storage Performance Metrics

I often get requests from application owners to review performance stats.  I thought I'd give a quick overview of some of the things I look at, what the myriad of performance metrics in Navisphere Analyzer and ECC Performance Manager mean, and how you might use some of them to investigate a performance problem.  Performance analysis is very much an art (not a science), and it's sometimes difficult to pinpoint exact causes based on the mix of applications and workload on the array. To be successful, you need to take all of the metrics into account with a holistic view.  I also recommend collecting data on application workloads over time, because workload characteristics will likely vary. If you have a major problem, I would always recommend opening an SR with EMC.

This post is just an overview of SAN performance metrics and isn't meant to dive into every possible scenario from every angle. EMC already has excellent guides for performance best practices that you can read here:

Because we have EMC’s Performance Manager tool installed in our environment, I always go to that tool first rather than Navisphere Analyzer.  Both use the same metrics, so the following information will be useful regardless of which method you use.

The first thing I do is look at the Storage Processors.  This will give you a good indication of the overall health of the array before you dive into the specific LUN (or LUNs) used by the application.

  • SP Cache Dirty Pages (%). These are pages in write cache that have received new data from hosts but have not yet been flushed to disk.  You want a high percentage of dirty pages, as it increases the chance of a read coming from cache or of additional writes to the same block of data being absorbed by the cache. If an IO is served from cache, the performance is better than if the data had to be retrieved from disk.  That's why the default watermarks are usually around 60/80% or 70/90%.  You don't want dirty pages to reach 100%; they should fluctuate between the high and low watermarks (which means the cache is healthy).  Periodic spikes or drops outside the watermarks are ok, but consistently hitting 100% indicates that the write cache is overstressed.
  • SP Utilization (%). Check and see if either SP is running higher than about 75%.  If either is running that high, application response time will increase.  Both will also need to be under 50% for non-disruptive upgrades; we had to do a large scale migration of data from one SAN to another at one point in order to get an NDU accomplished.  You'll also want to check for proper balance.  If one SP is running much higher than the other, you should consider migrating LUNs from the busier SP to the other.  I check SP balance on all of our arrays on a daily basis.
  • SP Response time (ms). Make sure again that both SPs are even and that response time is acceptable; I like to see response times under 10ms.  If you see that one SP has high utilization and response time but the other SP doesn't, look for LUNs owned by the busier SP that are using more array resources.  If both SPs have relatively similar throughput but one SP has much higher bandwidth, that could mean some large block IO is occurring; looking at total IO on a per-LUN basis can help confirm it.
  • SP Port Queue Full Count. This represents the number of times that a front end port issued a QFULL response back to the hosts. If you are seeing QFULLs, it could mean that the queue depth on the HBA is too large for the LUNs being accessed.  A CLARiiON/VNX front end port has a queue depth of 1600, which is the maximum number of simultaneous IOs that port can process.  Each LUN on the array has a maximum queue depth that is calculated using a formula based on the number of data disks in the RAID group. For example, a port with 512 queues and a typical LUN queue depth of 32 can support up to 512 / 32 = 16 LUNs on 1 initiator (HBA), or 16 initiators (HBAs) with 1 LUN each, or any combination not to exceed this number. Configurations that exceed this number are in danger of returning QFULL conditions. A QFULL condition signals that the target/storage port is unable to process more IO requests, so the initiator will need to throttle IO to the storage port. As a result, application response times will increase and IO activity will decrease.
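To make that arithmetic concrete, here's a minimal bash sketch of the calculation. The 512 and 32 values are just the example numbers from above; substitute the port and LUN queue depths from your own environment:

port_queue_depth=512   # front end port queue depth from the example above
lun_queue_depth=32     # typical LUN queue depth
echo "Max LUN/initiator combinations before QFULL risk: $(( port_queue_depth / lun_queue_depth ))"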

The next thing I do is look at the specific LUNs that the application owner is asking about. The list below includes the basic performance metrics that I most often look at when investigating a performance problem.

  • Utilization (%) represents the fraction of an observation period during which a LUN has any outstanding requests. When the LUN becomes the bottleneck, the utilization will be at or close to 100%. However, since I/Os can get serviced by multiple disks, an increase in workload might still result in a higher throughput.  Utilization by itself is not a very good indicator of the overall performance of the LUN; it needs to be factored in with several other things. For example, if you are writing to a LUN (100% writes) and the data lives in a small physical space on the LUN, it may be possible to get to 100% with write cache re-hits. This means that all writes are being serviced by the write cache, and since you are writing data to the same locations over and over, none of the data is flushed to the disks. This can cause your LUN utilization to be 100% even though there is actually no IO to the disks. Utilization is heavily affected by caching, both read and write; the LUN can be very busy but may not have a problem. Use utilization to help identify busy LUNs, then look at queuing and response times to see if there really is an issue.
  • Queue Length is the average number of requests within a polling interval that are outstanding to this LUN. A queue length of zero indicates an idle LUN. If three requests arrive at an idle LUN at the same time, only one of them can be served immediately; the other two must wait in the queue. That scenario would result in a queue length of 3.  My general guideline for “bad performance” on a LUN is a queue length greater than 2 for a single disk drive.
  • Average Busy Queue Length is the average number of outstanding requests when the LUN was busy. This does not include any idle time. This value should not exceed 2 times the number of spindles on a LUN. For example, if a LUN has 25 spindles, a value of 50 is acceptable. Since this queue length is counted only when the LUN is not idle, the value indicates the frequency variation (burstiness) of incoming requests. The higher the value, the bigger the burst and the longer the average response time at this component. In contrast, the average queue length also includes idle periods when no requests are pending. If the LUN has just one outstanding request 50% of the time and is idle the other 50% of the time, the average busy queue length will be 1; the average queue length, however, will be ½.
  • Response Time (ms) is the average time, in milliseconds, that a request to this LUN is outstanding, including its waiting time. The higher the queue length for a LUN, the more requests are waiting in its queue, thus increasing the average response time of a single request. For a given workload, queue length and response time are directly proportional.  Keep in mind that cache re-hits bring down the average response time (and service times), whether they are reads or writes. LUN response time is a good starting point for troubleshooting. It gives a good indicator of what the host system is experiencing. Usually if your LUN response time (response time = queue length * service time; see the quick sketch after this list) is good, then the host performance is good. High response times don't always mean that the CLARiiON is busy; they can also indicate that you're having issues with your host or fabric.  We use the Brocade health report on a regular basis to identify hosts that have an excessive amount of traffic, as well as running the EMC HEAT report on hosts that have reported issues (which can identify incorrect HBA drivers, a bad HBA, etc.). These are my general guidelines for response time:
    Less than 10 ms: very good
    Between 10 – 20 ms: okay
    Between 20 – 50 ms: slow, needs attention
    Greater than 50 ms:  I/O bottleneck
  • Service Time (ms) represents the time, in milliseconds, a request spent being serviced by a component. It does not include time waiting in a queue. Service time is mainly a characteristic of the system component, although larger I/Os take longer and therefore usually result in lower throughput (IO/s) but better bandwidth (Mbytes/s). Put simply, service time is the time it takes to actually send the I/O request to the storage and get an answer back. In general, I like to see service times below 20ms.
  • Total Throughput (IO/sec) is the average number of host requests that is passed through the LUN per second. This includes both read and write requests. Smaller requests usually result in a higher total throughput than larger requests.  Examining total throughput (along with %Utilization) is a good way to identify the busiest LUNs on the array. In general, here are the IOPs limits by drive type:
RPM        Drive Type      IOPs
7,200      SATA,NL-SAS     ~80
10,000     SATA,NL-SAS     ~130
10,000     FC,SAS          ~140
15,000     FC,SAS          ~180
N/A        EFD             ~1500 (Read/Write, 60/40)
N/A        EFD             ~6000 (Read)
N/A        EFD             ~3000 (Write)
  • Write Throughput (I/O/sec) The average number of host write requests that is passed through the LUN per second. Smaller requests usually result in a higher write throughput than larger requests.  When troubleshooting specific LUNs, check the write IO size and see if the size is what you would expect for the application you are investigating. Extremely large IO sizes coupled with high IOPS may cause write cache contention.
  • Read Throughput (I/O/sec) The average number of host read requests that is passed through the LUN per second. Smaller requests usually result in a higher read throughput than larger requests.
  • Total Bandwidth (MB/s) The average amount of host data in Mbytes that is passed through the LUN per second. This includes both read and write requests. Larger requests usually result in a higher total bandwidth than smaller requests.
  • Read Bandwidth (MB/s) The average amount of host read data in Mbytes that is passed through the LUN per second. Larger requests usually result in a higher bandwidth than smaller requests.
  • Write Bandwidth (MB/s) The average amount of host write data in Mbytes that is passed through the LUN per second. Larger requests usually result in a higher bandwidth than smaller requests. Keep in mind that writes consume many more array resources than reads.
  • Read Size (KB) The average read request size in Kbytes seen by the LUN. This number indicates whether the overall read workload is oriented more toward throughput (I/Os per second) or bandwidth (Mbytes/second). For a finer distinction of I/O sizes, use an IO Size Distribution chart for this LUN.
  • Write Size (KB) The average write request size in Kbytes seen by the LUN. This number indicates whether the overall write workload is oriented more toward throughput (I/Os per second) or bandwidth (Mbytes/second). For a finer distinction of I/O sizes, use an IO Size Distribution chart for the LUNs.
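As a quick illustration of the response time relationship and the guidelines mentioned above, here's a minimal bash sketch with made-up queue length and service time values; it just multiplies the two and rates the result against my thresholds:

queue_length=4         # hypothetical average queue length
service_time_ms=5      # hypothetical service time in ms
response_time_ms=$(( queue_length * service_time_ms ))
if   [ "$response_time_ms" -lt 10 ]; then rating="very good"
elif [ "$response_time_ms" -le 20 ]; then rating="okay"
elif [ "$response_time_ms" -le 50 ]; then rating="slow, needs attention"
else rating="I/O bottleneck"
fi
echo "Estimated LUN response time: ${response_time_ms} ms ($rating)"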

Below is an explanation of additional performance metrics that I don’t use as frequently, but I’m including them for completeness.

  • Forced Flushes/s Number of times per second the cache had to flush pages to disk to free up space for incoming write requests. Forced flushes are a measure of how often write requests will have to wait for disk I/O rather than be satisfied by an empty slot in the write cache. In most well performing systems this should be zero most of the time. 
  • Full Stripe Writes/s Average number of write requests per second that spanned a whole stripe (all disks in a LUN). This metric is applicable only to LUNs that are part of a RAID5 or RAID3 group.
  • Used Prefetches (%) The percentage of prefetched data in the read cache that was read during the last polling interval.
  • Disk Crossing (%) Percentage of host requests that require I/O to at least two disks compared to the total number of host requests. A single disk crossing can involve more than two disk drives.
  • Disk Crossings/s Number of times per second that a request requires access to at least two disk drives. A single disk crossing can involve more than two disks.
  • Read Cache Hits/s Average number of read requests per second that were satisfied by either read or write cache without requiring any disk access. A read cache hit occurs when recently accessed data is re-referenced while it is still in the cache.
  • Read Cache Misses/s Average number of read requests per second that did require one or more disk accesses.
  • Reads From Write Cache/s Average number of read requests per second that were satisfied by write cache only. Reads from write cache occur when recently written data is read again while it is still in the write cache. This is a subset of read cache hits which includes requests satisfied by either the write or the read cache.
  • Reads From Read Cache/s Average number of read requests per second that were satisfied by the read cache only. Reads from read cache occur when data that has been recently read or prefetched is re-read while it is still in the read cache. This is a subset of read cache hits which includes requests satisfied by either the write or the read cache.
  • Read Cache Hit Ratio The fraction of read requests served from both read and write caches vs. the total number of read requests. A higher ratio indicates better read performance.
  • Write Cache Hits/s Average number of write requests per second that were satisfied by the write cache without requiring any disk access. Write requests that are not write cache hits are referred to as write cache misses.
  • Write Cache Misses/s Average number of write requests per second that did require one or multiple disk accesses. Write requests that cause forced flushes or that bypass the write cache due to their size are examples of write cache misses.
  • Write Cache Rehits/s Average number of write requests per second that were satisfied by the write cache since they had been referenced before and not yet flushed to the disks. Write cache rehits occur when recently accessed data is referenced again while it is still in the write cache. This is a subset of Write Cache Hits.
  • Write Cache Hit Ratio The ratio of write requests that the write cache satisfied without requiring any disk access vs. the total number of write requests to this LUN. A higher ratio indicates better write performance (see the quick hit ratio example after this list).
  • Write Cache Rehit Ratio The ratio of write requests that the write cache satisfied since they have been referenced before and not yet flushed to the disks vs. the total number of write requests to this LUN. This is a measure of how often the write cache succeeded in eliminating a write operation to disk. While improving the rehit ratio is useful it is more beneficial to reduce the number of forced flushes.
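For reference, here's a minimal bash/awk sketch of how the read and write cache hit ratios are derived. The hits and misses counter values are made up for the example:

read_hits=450;  read_misses=50     # hypothetical Read Cache Hits/s and Read Cache Misses/s
write_hits=900; write_misses=100   # hypothetical Write Cache Hits/s and Write Cache Misses/s
awk -v rh=$read_hits -v rm=$read_misses -v wh=$write_hits -v wm=$write_misses 'BEGIN {
  printf "Read cache hit ratio:  %.2f\n", rh / (rh + rm)
  printf "Write cache hit ratio: %.2f\n", wh / (wh + wm)
}'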

Celerra data mover performance and port configuration

I had a request to review my experience with data mover performance and port configuration on our production Celerras.  When I started supporting our Celerras I had no experience at all, so my current configuration is the result of trial and error troubleshooting and tackling performance problems as they appeared.

To keep this simple, I’ll review my configuration for a Celerra with only one primary data mover and one standby.  There really is no specific configuration needed on your standby data mover, just remember to perfectly match all active network ports on both primary and standby, so in the event of a failover the port configuration matches between the two.

Our primary data mover has two Ethernet modules with four ports each (for a total of eight ports).  I’ll map out how each port is configured and then explain why I did it that way.

  • cge1-0: Failsafe config for primary CIFS (combined with cge1-1), assigned to 'CIFS1' prod file server.
  • cge1-1: Failsafe config for primary CIFS (combined with cge1-0), assigned to 'CIFS1' prod file server.
  • cge1-2: Interface configured for backup traffic, assigned to 'CIFSBACKUP1' server, VLAN 1.
  • cge1-3: Interface configured for backup traffic, assigned to 'CIFSBACKUP2' server, VLAN 1.
  • cge2-0: Interface configured for backup traffic, assigned to 'CIFSBACKUP3' server, VLAN 2.
  • cge2-1: Interface configured for backup traffic, assigned to 'CIFSBACKUP4' server, VLAN 2.
  • cge2-2: Interface configured for replication traffic, assigned to the replication interconnect.
  • cge2-3: Interface configured for replication traffic, assigned to the replication interconnect.

Primary CIFS Server – You do have a choice in this case to use either link aggregation or a fail safe network configuration.  Fail safe is an active/passive configuration.  If one port fails the other will take over.  I chose a fail safe configuration for several reasons, but there are good reasons to choose aggregation as well.  I chose fail safe primarily due to the ease of configuration, as there was no need for me to get the network team involved to make changes to our production switch (fail safe is configured only on the Celerra side), and our CIFS server performance requirements don’t necessitate two active links.  If you need the extra bandwidth, definitely go for aggregation.

I originally set up the fail safe network in an emergency situation, as the single interface to our prod CIFS server went down and could not be brought back online.  EMC’s answer was to reboot the data mover.  That fixed it, but it’s not such a good solution during the middle of a business day.

Backup Interfaces – We were having issues with our backups exceeding the time we had for our backup window.  In order to increase backup performance, I created four additional CIFS servers, all sharing the same file systems as production.  Our backup administrator splits the load on the four backup interfaces between multiple media servers and tape libraries (on different VLANs), so backup traffic does not consume any bandwidth on the production interface that users need to access the CIFS shares.  This configuration definitely improved our backup performance.

Replication – All of our production file systems are replicated to another Celerra in a different country for disaster recovery purposes.   Because of the huge amount of data that needs to be replicated, I created two interfaces specifically for replication traffic.  Just like the backup interfaces, this separates replication traffic from the production CIFS server interface.  Even with the separate interfaces, I still impose a bandwidth limit (no more than 50MB/s) in the interconnect configuration, as I need to share the same 100MB WAN link with our Data Domain for replication.

This configuration has proven to be very effective for me.  Our links never hit 100% utilization and I rarely get complaints about CIFS server performance.  The only real performance related troubleshooting I've had to do on our production CIFS servers has been related to file system deduplication; I've disabled it on certain file systems that see a high amount of activity.

Other thoughts about Celerra configuration:

  1. We recently added a third data mover to the Celerra in our HQ data center because of the file system limitation on one data mover.  You can only have 2048 total filesystems on one data mover.  We hit that limitation due to the number of checkpoints that we keep for operational file restores.  If you make a checkpoint of one filesystem twice a day for a month, that would be 61 filesystems used against the 2048 total, which adds up quickly if you have a CIFS server filled with dozens of small shares.  I simply added another CIFS server and all new shares are now created on the new CIFS server.  The names and locations of the shares are transparent to all of our users as all file shares are presented to users with DFS links, so there were no major changes required for our Active Directory/Windows administrators.
  2. Use the Celerra monitor to keep an eye on CPU and Memory usage throughout the day.  Once you launch it from Unisphere, it runs independently of your Unisphere session (unisphere can be closed) and has a very small memory footprint on your laptop/PC.
  3. Always create your CIFS servers on VDMs, especially if you are replicating data for disaster recovery.   VDMs are designed specifically for Windows environments; they allow for easy migration between data movers and for easy recreation of a CIFS server and its shares in a replication/DR scenario.  They store all the information for local groups, shares, security credentials, audit logs, and home directory info.  Without a VDM, if you ever need to recreate a CIFS server you'll have to redo all of those things from scratch.  Always use VDMs!
  4. Write scripts for monitoring purposes.  I have only one running on my Celerras now, which emails me a report of the status of all replication jobs in the morning.  Of course, you can put any valid command into a bash script (adding a mailx command to email you the results), stick it in crontab, and away you go.  A minimal example of that kind of script follows.
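This is just a sketch; the nas_replicate invocation, paths, and email address are assumptions you'd adjust for your own environment:

#!/usr/bin/bash
# Gather replication status and mail it out each morning.
# Assumes nas_replicate -list is available on your DART version and mailx is configured on the control station.
REPORT=/tmp/replication_report_$(date +%Y%m%d).txt
/nas/bin/nas_replicate -list > $REPORT 2>&1
mailx -s "Celerra replication status $(date +%Y-%m-%d)" storageteam@yourcompany.com < $REPORT
# Example crontab entry to run it at 6:00 AM every day:
# 0 6 * * * /home/nasadmin/replication_report.sh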

Optimizing Java Memory for Navisphere / Unisphere

If you have a CLARiiON system with a large configuration in terms of disks, LUNs, initiator records, etc, you may experience a slowdown when managing the system with Navisphere or Unisphere.  If you increase the amount of memory that Java can use, you can significantly improve the response time when using the management console.

Here are the steps:

  1. Log in to the CLARiiON setup page (http://<clariion IP>/setup).  Go to Set Update Parameters > Update Interval.  Change it to 300.
  2. On the Management Server (or your local PC/laptop) go to Control Panel and launch the Java icon.
  3. Go to the Java tab and click view.
  4. Enter -Xmx128m under Java Runtime Parameter, which allocates 128MB for Java.  This number can be increased as you see fit; you may see better results with 512 or 1024 (see the examples below).
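For reference, the parameter values mentioned in step 4 look like this (use just one of them):

-Xmx128m     (allocates 128MB to Java)
-Xmx512m     (allocates 512MB to Java)
-Xmx1024m    (allocates 1GB to Java)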

Auto generating daily performance graphs with EMC Control Center / Performance Manager

This document describes the process I used to pull performance data using the ECC pmcli command line tool, parse the data to make it more usable with a graphing tool, and then use perl scripts to automatically generate graphs.

You must install Perl.  I use ActiveState Perl (Free Community Edition) (http://www.activestate.com/activeperl/downloads).

You must install Cygwin.  Link: http://www.cygwin.com/install.html. I generally choose all packages.

I use the following CPAN Perl modules (they're the ones used by the graphing script in Step 3): Text::ParseWords, GD::Graph (specifically GD::Graph::lines), and Data::Dumper.

Step 1:

Once you have the software set up, the first step is to use the ECC command line utility to extract the interval performance data that you’re interested in graphing.  Below is a sample PMCLI command line script that could be used for this purpose.

:: Get the current date (this assumes date /t output like Mon 03/28/2011)

For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set date=%%c%%a%%b)

:: Export the interval file for today's date.

D:\ECC\Client.610\PerformanceManager\pmcli.exe -export -out D:\archive\interval.csv -type interval -class clariion -date %date% -id APM00324532111

:: Copy the exported data to my cygwin home directory for processing later.

copy /y D:\archive\interval.csv C:\cygwin\home\<userid>

You can schedule the command script above to run using windows task scheduler.  I run it at 11:46PM every night, as data is collected on our SAN in 15 minute intervals, and that gives me a file that reports all the way up to the end of one calendar day.

Note that there are 95 data collection points from 00:15 to 23:45 every day if you collect data at 15 minute intervals.  The storage processor data resides in the last two lines of the output file.

Here is what the output file looks like:

EMC ControlCenter Performance manager generated file from: <path>

Data Collected for DiskStats

Data Collected for DiskStats – 0_0_0

                                        3/28/11 00:15   3/28/11 00:30   3/28/11 00:45   3/28/11 01:00
Number of Arrivals with Non Zero Queue  12              20              23              23
% Utilization                           30.2            33.3            40.4            60.3
Response Time                           1.8             3.3             5.4             7.8
Read Throughput IO per sec              80.6            13.33           90.4            10.3

Great information in there, but the format of the data makes it very hard to do anything meaningful with it in an Excel chart.  If I want to chart only % utilization, that data is difficult to isolate because there are so many other counters around it that also have data collected on them.   My next goal was to write a script to reformat the data into a much more usable format and then automatically create a graph for one specific counter that I'm interested in (like daily utilization numbers), which could then be emailed daily or auto-uploaded to an internal website.

Step 2:

Once the PMCLI data is exported, the next step is to use cygwin bash scripts to parse the csv file and pull out only the performance data that is needed.  Each SAN will need a separate script for each type of performance data.  I have four scripts configured to run based on the data that I want to monitor.  The scripts are located in my cygwin home directory.

The scripts I use:

  • iostats.sh (for total IO throughput)
  • queuestats.sh (for disk queue length)
  • resptime.sh (for disk response time in ms)
  • utilstats.sh (for % utilization)

Here is a sample shell script for parsing the CSV export file (iostats.sh):

#!/usr/bin/bash

#This will pull only the timestamp line from the top of the CSV output file. I’ll paste it back in later.

grep -m 1 "/" interval.csv > timestamp.csv

#This will pull out only lines that begin with "Total Throughput IO per sec".

grep -i "^Total Throughput IO per sec" interval.csv >> stats.csv

#This will pull out the disk/LUN title info for the first column.  I’ll add this back in later.

grep -i "Data Collected for DiskStats -" interval.csv > diskstats.csv

grep -i "Data Collected for LUNStats -" interval.csv > lunstats.csv

#This will create a column with the disk/LUN number .  I’ll paste it into the first column later.

cat diskstats.csv lunstats.csv > data.csv

#This adds the first column (disk/LUN) and combines it with the actual performance data columns.

paste data.csv stats.csv > combined.csv

#This combines the timestamp header at the top with the combined file from the previous step to create the final file we’ll use for the graph.  There is also a step to append the current date and copy the csv file to an archive directory.

cat timestamp.csv combined.csv > iostats.csv

cp iostats.csv /cygdrive/e/SAN/csv_archive/iostats_archive_$(date +%y%m%d).csv

#  This removes all the temporary files created earlier in the script.  They’re no longer needed.

rm timestamp.csv

rm stats.csv

rm diskstats.csv

rm lunstats.csv

rm data.csv

rm combined.csv

#This strips the last two lines of the CSV (Storage Processor data).  The resulting file is used for the "all disks" spreadsheet.  We don't want the SP data to skew the graph.  This CSV file is also copied to the archive directory.

sed '$d' < iostats.csv > iostats2.csv

sed '$d' < iostats2.csv > iostats_disk.csv

rm iostats2.csv

cp iostats_disk.csv /cygdrive/e/SAN/csv_archive/iostats_disk_archive_$(date +%y%m%d).csv

Note: The shell script above can be run in the windows task scheduler as long as you have cygwin installed.  Here’s the syntax:

c:\cygwin\bin\bash.exe -l -c "/home/<username>/iostats.sh"

After running the shell script above, the resulting CSV file contains only Total Throughput (IO per sec) data for each disk and lun.  It will contain data from 00:15 to 23:45 in 15 minute increments.  After the cygwin scripts have run we will have csv datasets that are ready to be exported to a graph.

The disk and LUN stats are combined into the same CSV file.  It is entirely possible to rewrite the script to only have one or the other; I put them both in there to make it easier to manually create a graph in Excel for either disk or LUN stats at a later time (if necessary).  The "all disks" graph does not look any different with both disk and LUN stats in there; I tried it both ways and they overlap in a way that makes the extra data indistinguishable in the image.

The resulting data output after running the iostats.sh script is shown below.  I now have a nice, neat Excel spreadsheet that lists the total throughput for each disk in the array for the entire day in 15 minute increments.   Having the data formatted in this way makes it super easy to create charts.  But I don't want to have to do that manually every day; I want the charts to be created automatically.

                                       3/28/11 00:15   3/28/11 00:30   3/28/11 00:45   3/28/11 01:00
Total Throughput IO per sec - 0_0_0    12              20              23              23
Total Throughput IO per sec - 0_0_1    30.12           33.23           40.4            60.23
Total Throughput IO per sec - 0_0_2    1.82            3.3             5.4             7.8
Total Throughput IO per sec - 0_0_3    80.62           13.33           90.4            10.3

Step 3:

Now I want to automatically create the graphs every day using a Perl script.  After the CSV files are exported to a more usable format in the previous step, I use the GD::Graph library from CPAN (http://search.cpan.org/~mverb/GDGraph-1.43/Graph.pm) to auto-generate the graphs.

Below is a sample Perl script that will auto-generate a great looking graph based on the CSV output file from the previous step.

#!/usr/bin/perl

#Declare the libraries that will be used.

use strict;

use Text::ParseWords;

use GD::Graph::lines;

use Data::Dumper;

#Specify the csv file that will be used to create the graph

my $file = 'C:\cygwin\home\<username>\iostats_disk.csv';

#my $file  = $ARGV[0];

my ($output_file) = ($file =~/(.*)\./);

#Create the arrays for the data and the legends

my @data;

my @legends;

#parse csv, generate an error if it fails

open(my $fh, '<', $file) or die "Can't read csv file '$file' [$!]\n";

my $countlines = 0;

while (my $line = <$fh>) {

chomp $line;

my @fields = Text::ParseWords::parse_line(',', 0, $line);

#There are 95 fields generated to correspond to the 95 data collection points in each of the output files.

my @field = @fields[1..95];
push @data, \@field;

if($countlines >= 1){

push @legends, @fields[0];

}

$countlines++;

}

#The data and legend arrays will read 820 lines of the CSV file.  This number will change based on the number of disks in the SAN, and will be different depending on the SAN being reported on.  The legend info will read the first column of the spreadsheet and create a color box that corresponds to the graph line.  For the purpose of this graph, I won’t be using it because 820+ legend entries look like a mess on the screen.

splice @data, 1, -820;

splice @legends, 0, -820;

#Set Graphing Options

my $mygraph = GD::Graph::lines->new(1024, 768);

# There are many graph options that can be changed using the GD::Graph library.  Check the website (and google) for lots of examples.

$mygraph->set(

title => 'SP IO Utilization (00:15 - 23:45)',

y_label => 'IOs Per Second',

y_tick_number => 4,

values_vertical => 6,

show_values => 0,

x_label_skip => 3,

) or warn $mygraph->error;

#As I said earlier, because of the large number of legend entries for this type of graph, I change the legend to simply read “All disks”.  If you want the legend to actually put the correct entries and colors, use this line instead:  $mygraph->set_legend(@legends);

$mygraph->set_legend('All Disks');

#Plot the data

my $myimage = $mygraph->plot(\@data) or die $mygraph->error;

# Export the graph as a gif image.  The images are currently moved to the IIS folder (c:\inetpub\wwwroot) with one of the scripts.  They could also be emailed using a sendmail utility.

my $format = $mygraph->export_format;

open(IMG,">$output_file.$format") or die $!;

binmode IMG;

print IMG $myimage->gif;

close IMG;

After this script runs, the resulting image file will be saved in the cygwin home directory (it saves to the same directory the CSV file is located in).  One of the nightly scripts I run will copy the image to our internal IIS server's image directory, and sendmail will email the graph to the SAN Admin team.
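Here's a rough sketch of what that nightly copy-and-email step could look like; the IIS path, email address, and the use of uuencode/mailx (rather than raw sendmail) are assumptions to adapt to your own environment:

#!/usr/bin/bash
# Copy the generated graph to the IIS image directory and email it to the SAN admin team.
GRAPH="$HOME/iostats_disk.gif"
cp "$GRAPH" /cygdrive/c/inetpub/wwwroot/images/
# uuencode attaches the binary gif to the message (uuencode comes with the cygwin sharutils package)
uuencode "$GRAPH" iostats_disk.gif | mailx -s "Daily SAN IO graph $(date +%Y-%m-%d)" santeam@yourcompany.com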

That’s it!  You now have lots of pretty graphs with which you can impress your management team. 🙂

Here is a sample graph that was generated with the Perl script: