Category Archives: Scripting

A collection of my Storage related scripts for VNX/Clariion, Data Domain, Isilon, Brocade FOS, and Control Center.

Configuring a Powershell to Isilon Connection with SSL

PowerShell provides an easy way to access the Isilon REST API, but in my environment I need to use true SSL validation. If you are using the default self-signed certificate on the Isilon, your connection will likely fail with an error similar to the one below:

The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

Isilon generates a self-signed certificate by default.  Certificate validation for the current PowerShell session can be disabled with the script below, however in my environment I'm not allowed to do that.  I'm including it for completeness in case it is useful for someone else; it was not written by me and is distributed under a BSD 3-Clause license.

function Disable-SSLValidation{
<#
.SYNOPSIS
    Disables SSL certificate validation
.DESCRIPTION
    Disable-SSLValidation disables SSL certificate validation by using reflection to implement the System.Net.ICertificatePolicy class.
    Author: Matthew Graeber (@mattifestation)
    License: BSD 3-Clause
.NOTES
    Reflection is ideal in situations when a script executes in an environment in which you cannot call csc.exe to compile source code. If compiling code is an option, then implementing System.Net.ICertificatePolicy in C# with Add-Type is trivial.
.LINK
    http://www.exploit-monday.com
#>
    Set-StrictMode -Version 2
    # You have already run this function
    if ([System.Net.ServicePointManager]::CertificatePolicy.ToString() -eq 'IgnoreCerts') { Return }
    $Domain = [AppDomain]::CurrentDomain
    $DynAssembly = New-Object System.Reflection.AssemblyName('IgnoreCerts')
    $AssemblyBuilder = $Domain.DefineDynamicAssembly($DynAssembly, [System.Reflection.Emit.AssemblyBuilderAccess]::Run)
    $ModuleBuilder = $AssemblyBuilder.DefineDynamicModule('IgnoreCerts', $false)
    $TypeBuilder = $ModuleBuilder.DefineType('IgnoreCerts', 'AutoLayout, AnsiClass, Class, Public, BeforeFieldInit', [System.Object], [System.Net.ICertificatePolicy])
    $TypeBuilder.DefineDefaultConstructor('PrivateScope, Public, HideBySig, SpecialName, RTSpecialName') | Out-Null
    $MethodInfo = [System.Net.ICertificatePolicy].GetMethod('CheckValidationResult')
    $MethodBuilder = $TypeBuilder.DefineMethod($MethodInfo.Name, 'PrivateScope, Public, Virtual, HideBySig, VtableLayoutMask', $MethodInfo.CallingConvention, $MethodInfo.ReturnType, ([Type[]] ($MethodInfo.GetParameters() | % {$_.ParameterType})))
    $ILGen = $MethodBuilder.GetILGenerator()
    $ILGen.Emit([Reflection.Emit.Opcodes]::Ldc_I4_1)
    $ILGen.Emit([Reflection.Emit.Opcodes]::Ret)
    $TypeBuilder.CreateType() | Out-Null

    # Disable SSL certificate validation
   [System.Net.ServicePointManager]::CertificatePolicy = New-Object IgnoreCerts
}

While that code may work fine for some, for security reasons you may not want to or be able to disable certificate validation.  Fortunately, you can work around the issue by using key-based SSH authentication instead, with a key pair you generate yourself in PuTTYgen.  This solution was tested to work with OneFS 7.2.x and PowerShell v3.

Here are the steps for creating your own key pair for PowerShell SSL authentication to Isilon:

Generate the Key

  1. Download PuTTYgen, which will be used to generate the key pair for authentication.
  2. Open PuTTYgen and click Generate.
  3. It's important to note that PowerShell requires the key in OpenSSH format, which is exported under the Conversions menu with the 'Export OpenSSH key' option.  Save the key without a passphrase.  It can be named something like "SSH.key".
  4. Next we need to save the public key.  Copy the information in the upper text box labeled "Public key for pasting into OpenSSH authorized_keys file", and paste it into a new text file.  You can then save the file as "authorized_keys" for later use.

Copy the Key

  1. Copy the authorized_keys file to a location of your choosing on the Isilon cluster (such as /ifs/local, used in the example commands below).
  2. Open an SSH connection to the Isilon cluster and create a folder for the authorized_keys file.
    Example command:  isi_for_array mkdir /root/.ssh
  3. Copy the file to all nodes. Example command: isi_for_array cp /ifs/local/authorized_keys /root/.ssh/
  4. Verify that the file is available on all of the nodes, and it’s also a good idea to verify that the checksum is correct. Example command: isi_for_array md5 /root/.ssh/authorized_keys
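For reference, here is the same sequence consolidated into one sketch (this assumes the authorized_keys file was copied to /ifs/local, as in the example commands above), run from the OneFS CLI:

# Run on the cluster after copying authorized_keys to /ifs/local:
isi_for_array mkdir /root/.ssh
isi_for_array cp /ifs/local/authorized_keys /root/.ssh/
isi_for_array md5 /root/.ssh/authorized_keys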

Install PowerShell SSH Module

In order to execute commands via SSH using PowerShell you will need an SSH module.  Various options exist, however the module from PowerShellAdmin works fine.  It can run commands over SSH on remote hosts such as Linux or Unix computers, VMware ESX(i) hosts, and network equipment such as routers and switches, and it works well with OpenSSH-type servers.

You can visit the PowerShellAdmin page for more information and to download the SessionsPSv3.zip file.

Once you've downloaded it, unzip the file to an SSH-Sessions folder located in C:\Windows\System32\WindowsPowerShell\v1.0\Modules.  With that module in place, we are now ready to connect with PowerShell to the Isilon cluster.

Test it

Below is a PowerShell script you can use to test your connection; it simply runs a df command on the cluster.

#PowerShell Test Script
Import-Module "SSH-Sessions"
$Isilon = "<hostname>"
$KeyFile = "C:\scripts\<filename>.key"
New-SshSession -ComputerName $Isilon -Username root -KeyFile $KeyFile
Invoke-SshCommand -verbose -ComputerName $Isilon -Command df  
Remove-SshSession -ComputerName $Isilon

Scripting a VNX/Celerra to Isilon Data Migration with EMCOPY and Perl

Below is a collection of Perl scripts that make data migration from VNX/Celerra file systems to an Isilon system much easier.  I've already outlined the process of using isi_vol_copy_vnx in a prior post; however, EMCOPY may be more appropriate in a specific use case, or simply more familiar to administrators for support and management of the tool.  Note that while I have tested these scripts in my environment, they may need some modification for your use.  I recommend running them in a test environment prior to using them in production.

EMCOPY can be downloaded directly from DellEMC with the link below.  You will need to be a registered user in order to download it.

https://download.emc.com/downloads/DL14101_EMCOPY_File_migration_tool_4.17.exe

What is EMCOPY?

For those that haven't used it before, EMCOPY is an application that allows you to copy files, directories, and subdirectories between NTFS partitions while maintaining security information, an improvement over the similar Robocopy tool that many veteran system administrators are familiar with. It allows you to back up the file and directory security ACLs, owner information, and audit information from a source directory to a destination directory.

Notes about using EMCOPY:

1) In my testing, EMCopy has shown up to a 25% performance improvement when copying CIFS data compared to Robocopy while using the same number of threads. I recommend using EMCopy over Robocopy as it has other feature improvements as well, for instance sidmapfile, which allows migrating local user data to Active Directory users. It’s available in version 4.17 or later.  Robocopy is also not an EMC supported tool, while EMCOPY is.

2) Unlike isi_vol_copy_vnx, EMCOPY is a windows application and must be run from a windows host.  I highly recommend a dedicated server for any migration tasks.  The isi_vol_copy_vnx utility runs directly on the Isilon OneFS CLI which eliminates any intermediary copy hosts, theoretically providing a much faster solution.

3) There are multiple methods to compare data sizes between the source and destination. I would recommend maintaining a log of each EMCopy session as that log indicates how much data was copied and if there were any errors.

4) If you are migrating over a WAN connection, I recommend first restoring from tape and then using an incremental data sync with EMCOPY.

Getting Started

I’ve divided this post up into a four step process.  Each step includes the relevant script and a description of the process.

  • Export File System information (export_fs.pl  Script)

Export file system information from the Celerra & generate the Isilon commands to re-create them.

  • Export SMB information (export_smb.pl Script)

Export SMB share information from the Celerra & generate the Isilon commands to re-create them.

  • Export NFS information (export_nfs.pl Script)

Export NFS information from the Celerra & generate the Isilon commands to re-create them.

  • Create the EMCOPY migration script (EMCOPY_create.pl Script)

Perform the data migration with EMCOPY using the output from this script.

Exporting information from the Celerra to run on the Isilon

These Perl scripts are designed to be run directly on the Control Station and will subsequently create shell scripts that will run on the Isilon to assist with the migration.  You will need to manually copy the output files from the VNX/Celerra to the Isilon. The first three steps I’ve outlined do not move the data or permissions, they simply run a nas_fs query on the Celerra to generate the Isilon script files that actually make the directories, create quotas, and create the NFS and SMB shares. They are “scripts that generate scripts”. 🙂

Before you run the scripts, make sure you edit them to specify the appropriate Data Mover.  Once complete, you'll end up with three .sh files created for you to move to your Isilon cluster.  They should be run in the same order as they were created.

Note that EMC occasionally changes the syntax of certain commands when they update OneFS.  Below is a sample of the Isilon-specific commands that are generated by the first three scripts.  I'd recommend verifying that the syntax is still correct for your version of OneFS, and then modifying the scripts if necessary with the new syntax.  I just ran a quick test with OneFS 8.0.0.2, and the base commands and switches appear to be compatible.

isi quota create --directory --path="/ifs/data1" --enforcement --hard-threshold="1032575M" --container=1
isi smb share create --name="Data01" --path="/ifs/Data01/data"
isi nfs exports create --path="/Data01/data" --roclient="Data" --rwclient="Data" --rootclient="Data"

Step 1 – Export File system information

This script will generate a list of the file system names from the Celerra and place the appropriate Isilon commands that create the directories and quotas into a file named "create_filesystems_xx.sh".

#!/usr/bin/perl

# Export_fs.pl – Export File system information
# Export file system information from the Celerra & generate the Isilon commands to re-create them.

use strict;
my $nas_fs="nas_fs -query:inuse=y:type=uxfs:isroot=false -fields:ServersNumeric,Id,Name,SizeValues -format:'%s,%s,%s,%sQQQQQQ'";
my @data;

open (OUTPUT, ">> create_filesystems_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nas_fs |") || die "cannot open $nas_fs: $!\n\n";

while (<CMD>)

{
   chomp;
   @data = split("QQQQQQ", $_);
}

close(CMD);
foreach (@data)

{
   my ($dm, $id, $dir,$size,$free,$used_per, $inodes) = split(",", $_);
   print OUTPUT "mkdir /ifs/$dir\n";
   print OUTPUT "chmod 755 /ifs/$dir\n";
   print OUTPUT "isi quota create --directory --path=\"/ifs/$dir\" --enforcement --hard-threshold=\"${size}M\" --container=1\n";
}

The Output of the script looks like this (this is an excerpt from the create_filesystems_xx.sh file):

mkdir /ifs/data1
chmod 755 /ifs/data1
isi quota create --directory --path="/ifs/data1" --enforcement --hard-threshold="1032575M" --container=1
mkdir /ifs/data2
chmod 755 /ifs/data2
isi quota create --directory --path="/ifs/data2" --enforcement --hard-threshold="20104M" --container=1
mkdir /ifs/data3
chmod 755 /ifs/data3
isi quota create --directory --path="/ifs/data3" --enforcement --hard-threshold="100774M" --container=1

The output script can now be copied to and run from the Isilon.
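As a quick usage sketch (the /ifs/local destination and the use of scp are my assumptions; the generated file name will have the Perl process ID in place of "xx"):

# On the Celerra/VNX Control Station:
perl export_fs.pl
# Copy the generated script to the Isilon cluster:
scp create_filesystems_xx.sh root@<isilon>:/ifs/local/
# On the Isilon:
sh /ifs/local/create_filesystems_xx.sh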

Step 2 – Export SMB Information

This script will generate a list of the SMB share names from the Celerra and place the appropriate Isilon commands into a file named "create_smb_exports_xx.sh".

#!/usr/bin/perl

# Export_smb.pl – Export SMB/CIFS information
# Export SMB share information from the Celerra & generate the Isilon commands to re-create them.

use strict;

my $datamover = "server_8";
my $prot = "cifs";
my $nfs_cli = "server_export $datamover -list -P $prot -v |grep share";

open (OUTPUT, ">> create_smb_exports_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nfs_cli |") || die "cannot open $nfs_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $path = $vars[2];
   my $name = $vars[1];

   $path =~ s/^"/\"\/ifs/;
   print  OUTPUT "isi smb share create --name=$name --path=$path\n";
}

close(CMD);

The Output of the script looks like this (this is an excerpt from the create_smb_exports_xx.sh file):

isi smb share create --name="Data01" --path="/ifs/Data01/data"
isi smb share create --name="Data02" --path="/ifs/Data02/data"
isi smb share create --name="Data03" --path="/ifs/Data03/data"
isi smb share create --name="Data04" --path="/ifs/Data04/data"
isi smb share create --name="Data05" --path="/ifs/Data05/data"

 The output script can now be copied to and run from the Isilon.

Step 3 – Export NFS Information

This script will generate a list of the NFS export names from the Celerra and place the appropriate Isilon commands into a file named “create_nfs_exports_xx.sh”.

#!/usr/bin/perl

# Export_nfs.pl – Export NFS information
# Export NFS information from the Celerra & generate the Isilon commands to re-create them.

use strict;

my $datamover = "server_8";
my $prot = "nfs";
my $nfs_cli = "server_export $datamover -list -P $prot -v |grep export";

open (OUTPUT, ">> create_nfs_exports_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nfs_cli |") || die "cannot open $nfs_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $test = @vars;
   my $i=2;
   my ($ro, $rw, $root, $access, $name);
   my $path=$vars[1];

   for ($i; $i < $test; $i++)
   {
      my ($type, $value) = split("=", $vars[$i]);

      if ($type eq "ro") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $ro .= " --roclient=\"$_\""; }
      }
      if ($type eq "rw") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $rw .= " --rwclient=\"$_\""; }
      }

      if ($type eq "root") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $root .= " --rootclient=\"$_\""; }
      }

      if ($type eq "access") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $ro .= " --roclient=\"$_\""; }
      }

      if ($type eq "name") { $name=$value; }
   }
   print OUTPUT "isi nfs exports create --path=$path $ro $rw $root\n";
}

close(CMD);

The Output of the script looks like this (this is an excerpt from the create_nfs_exports_xx.sh file):

isi nfs exports create --path="/Data01/data" --roclient="Data" --roclient="BACKUP" --rwclient="Data" --rwclient="BACKUP" --rootclient="Data" --rootclient="BACKUP"
isi nfs exports create --path="/Data02/data" --roclient="Data" --roclient="BACKUP" --rwclient="Data" --rwclient="BACKUP" --rootclient="Data" --rootclient="BACKUP"
isi nfs exports create --path="/Data03/data" --roclient="Backup" --roclient="Data" --rwclient="Backup" --rwclient="Data" --rootclient="Backup" --rootclient="Data"
isi nfs exports create --path="/Data04/data" --roclient="Backup" --roclient="ProdGroup" --rwclient="Backup" --rwclient="ProdGroup" --rootclient="Backup" --rootclient="ProdGroup"
isi nfs exports create --path="/" --roclient="127.0.0.1" --roclient="127.0.0.1" --roclient="127.0.0.1" --rootclient="127.0.0.1"

The output script can now be copied to and run from the Isilon.

Step 4 – Generate the EMCOPY commands

Now that the scripts have been generated and run on the Isilon, the next step is the actual data migration using EMCOPY.  This script generates the commands for the migration, which should be run from a Windows server that has access to both the source and destination locations. It should be run after the previous three scripts have successfully completed.

This script outputs the commands directly to the screen; they can then be cut and pasted into a Windows batch script on your migration server.
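Alternatively, since the Perl script simply writes to standard output, you could redirect it straight into a batch file and copy that file to the migration server. A quick sketch (the batch file name is just an example):

perl EMCOPY_create.pl > emcopy_migration.bat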

#!/usr/bin/perl

# EMCOPY_create.pl – Create the EMCOPY migration script
# Perform the data migration with EMCOPY using the output from this script.

use strict;

my $datamover = "server_4";
my $source = "\\\\celerra_path\\";
my $dest = "\\\\isilon_path\\";
my $prot = "cifs";
my $nfs_cli = "server_export $datamover -list -P $prot -v |grep share";

open (CMD, "$nfs_cli |") || die "cannot open $nfs_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $path = $vars[2];
   my $name = $vars[1];

   $name =~ s/\"//g;
   $path =~ s/^/\/ifs/;

   my $log = "c:\\" . $name . "";
   $log =~ s/ //;
   my $src = $source . $name;
   my $dst = $dest . $name;

   print "emcopy \"$src\" \"$dst\" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:$log\n";
}

close(CMD);

The Output of the script looks like this (this is an excerpt from the screen output):

emcopy "\\celerra_path\Data01" "\\isilon_path\billing_tmip_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_tmip_01
emcopy "\\celerra_path\Data02" "\\isilon_path\billing_trxs_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_trxs_01
emcopy "\\celerra_path\Data03" "\\isilon_path\billing_vru_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_vru_01
emcopy "\\celerra_path\Data04" "\\isilon_path\billing_rpps_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_rpps_01

That's it.  Good luck with your data migration, and I hope this has been of some assistance.  Special thanks to Mark May and his virtualstoragezone blog; he published the original versions of these scripts there.

Custom Reporting with Isilon OneFS API Calls

Have you ever wondered where the metrics you see in InsightIQ come from?  InsightIQ uses OneFS API calls to gather information, and you can use the same API calls for custom reporting and scripting.  Whether you’re interested in performance metrics relating to cluster capacity, CPU utilization, network latency & throughput, or disk activities, you have access to all of that information.

I've already spent a good deal of time figuring out how to make this work and investigating options for presenting the gathered data in a useful manner.  This is really just the beginning; I'm hoping to take more time later to work on additional custom script examples that gather specific info and have useful output options.  For now, this should get anyone started who's interested in trying this out.  This post also includes a list of available API calls you can make.  I cover these basic steps in this post to get you started:

  1. How to authenticate to the Isilon Cluster using cookies.
  2. How to make the API call to the Isilon to generate the JSON output.
  3. How to install the jq utility to parse JSON output files.
  4. Some examples of using the jq utility to parse the JSON output.

Authentication

First I'll go over how to authenticate to the Isilon cluster using cookies. Start by creating a credentials file: name the file auth.json and enter the following info into it:

{
 "username":"root",
 "password":"<password>",
 "services":["platform","namespace"]
 }

Note that I am using root for this example, but it would certainly be possible to create a separate account on the Isilon to use for this purpose.  Just give the account the Platform API and Statistics roles.

Once the file is created, you can make a session call to get a cookie:

curl -v -k --insecure -H "Content-Type: application/json" -c cookiefile -X POST -d @auth.json https://10.10.10.10:8080/session/1/session

The output will be over one page long, but you’re looking to verify that the cookie was generated.  You should see two lines similar to this:

* Added cookie isisessid="123456-xxxx-xxxx-xxxx-a193333a99bc" for domain 10.10.10.10, path /, expire 0
< Set-Cookie: isisessid=123456-xxxx-xxxx-xxxx-a193333a99bc; path=/; HttpOnly; Secure

Run a test command to gather data

Below is a sample string to gather some sample statistics.  Later in this document I’ll review all of the possible strings you can use to gather info on CPU, disk, performance, etc.

curl -k --insecure -b cookiefile 'https://10.10.10.10:8080/platform/1/statistics/current?key=ifs.bytes.total'

The command above generates the following json output:

{
"stats" :
[

{
"devid" : 0,
"error" : null,
"error_code" : null,
"key" : "ifs.bytes.total",
"time" : 1398840008,
"value" : 433974304096256
}
]
}

Install JQ

Now that we have data in json format, we need to be able to parse it and convert it into a more readable format.  I'm looking to convert it to csv.  There are many different scripts, tools, and languages available for that purpose online.  I personally looked for a method that could be used in a simple bash script, and jq is a good solution for that.  I use Cygwin on a Windows box for my scripts, but you can download whichever version suits your flavor of OS.  You can download the jq parser here: https://github.com/stedolan/jq/releases.

Instructions for the installation of jq for Cygwin:

  1. Download the latest source tarball for jq from https://stedolan.github.io/jq/download/
  2. Open Cygwin and create the folder you'd like to extract it into
  3. Copy the ‘jq-1.5.tar.gz’ file into your folder to make it available within Cygwin
  4. From a Cygwin command shell, enter the following to uncompress the tarball file : ‘tar -xvzf jq-1.5.tar.gz’
  5. Change folder location to the uncompressed folder e.g. ‘cd /jq-1.5’
  6. Next enter ‘./configure’ and wait for the command to complete (about 5 minutes)
  7. Then enter the commands ‘make’, followed by ‘make install’
  8. You’re done.
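To confirm the build worked, you can check the version from the Cygwin shell; it should report the version you just built (jq-1.5 in this case):

jq --version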

Once jq is installed, we can play around with using it to make our json output more readable.  One simple way to turn it into comma-separated output is with this command:

cat sample.json | jq ".stats | .[]" --compact-output

It will turn json output like this:

{
"stats" :
[

{
"devid" : 8,
"error" : null,
"error_code" : null,
"key" : "node.ifs.bytes.used",
"values" :
[

{
"time" : 1498745964,
"value" : 51694140276736
},
{
"time" : 1498746264,
"value" : 51705407610880
}
]
},

Into single line, comma separated output like this:

{"devid":8,"error":null,"error_code":null,"key":"node.ifs.bytes.used","values":[{"time":1498745964,"value":51694140276736},{"time":1498746264,"value":51705407610880}]}

You can further improve the output by removing the quote marks with sed:

cat sample.json | jq ".stats | .[]" --compact-output | sed 's/\"//g'

At this point the data is formatted well enough to easily modify it to suit my needs in Excel.

{devid:8,error:null,error_code:null,key:node.ifs.bytes.used,values:[{time:1498745964,value:51694140276736},{time:1498746264,value:51705407610880}]}

JQ CSV

Using the --compact-output switch isn't the only way to manipulate the data, and it's probably not the best way.  I haven't had much time to work with the @csv option in jq, but it looks very promising for this.  Below are a few notes on using it; I will include more samples in an edit to this post or a new post in the future that relates this more directly to the Isilon-generated output.  I prefer to use csv files for report output due to the ease of working with them and manipulating them with scripts.

Order is significant for csv, but not for JSON fields.  Specify the mapping from JSON named fields to csv positional fields by constructing an array of those fields, using [.date,.count,.title]:

input: { "date": "2011-01-12 13:14", "count": 17, "title":"He's dead, Jim!" }
jq -r '[.date,.count,.title] | @csv'
"2011-01-12 13:14",17,"He's dead, Jim!"

You also may want to apply this to an array of objects, in which case you’ll need to use the .[] operator, which streams each item of an array in turn:

jq -r '.[] | [.date, .count, .title] | @csv'
"2017-06-12 08:19",17,"You're going too fast!"
"2017-06-15 11:50",4711,"That's impossible""?"
"2017-06-19 00:01",,"I can't drive 55!"

You'll likely also want the csv field names at the top of the file. The easiest way to do this is to add them in manually:

jq -r '["date", "count", "title"], (.[] | [.date, .count, .title]) | @csv'
"date","count","title"
"2017-06-12 08:19",17,"You're going too fast!"
"2017-06-15 11:50",4711,"That's impossible""?"
"2017-06-19 00:01",,"I can't drive 55!"

We can avoid repeating the same list of field names by reusing the header array to lookup the fields in each object.

jq -r '["date", "count", "title"] as $fields| $fields, (.[] | [.[$fields[]]]) | @csv'

Here it is as a function, with a slightly nicer field syntax, using path():

def csv(fs): [path(null|fs)[]] as $fields| $fields, (.[] | [.[$fields[]]]) | @csv;
USAGE: csv(.date, .count, .title)

If the input is not an array of objects but just a sequence of objects, then we can omit the .[], but then we can't get the header at the top.  It's best to convert it to an array using the --slurp/-s option (or put [] around it if it's generated within jq).
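As a preview of tying this back to the Isilon output, here's a rough sketch (only tested against the sample shown earlier) that flattens the node.ifs.bytes.used history output into devid, key, time, value rows:

cat sample.json | jq -r '.stats[] | .devid as $d | .key as $k | .values[] | [$d, $k, .time, .value] | @csv'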

More to come on formatting JSON for Isilon in the future…

Isilon API calls

All of these specific API calls were pulled from the EMC community forum; I didn't compose this list myself.  It's a list of the calls that InsightIQ makes to the OneFS API.  They can be queried in exactly the same way that I demonstrated in the examples earlier in this post.

Please note the following about the API calls regarding time ranges:

  1. Calls to the "/platform/1/statistics/current" API do not contain &begin and &end time-range query parameters.
  2. Calls to the "/platform/1/statistics/history" API always contain &begin and &end POSIX time-range query parameters.
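For example, a history query for a specific window would look something like this (the POSIX timestamps below are placeholders; you can generate real ones with date +%s):

curl -k -b cookiefile 'https://10.10.10.10:8080/platform/1/statistics/history?key=node.cpu.idle.avg&devid=all&begin=1498744800&end=1498831200'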

Capacity

https://10.10.10.10:8080/platform/1/statistics/current?key=ifs.bytes.total&key=ifs.ssd.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.bytes.avail&key=ifs.ssd.bytes.avail&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&key=node.ifs.bytes.used&key=node.disk.count&key=node.cpu.count&key=node.uptime&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=ifs.bytes.avail&key=ifs.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.ssd.bytes.avail&key=ifs.ssd.bytes.total&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.ifs.bytes.used.all&key=node.disk.ifs.bytes.total.all&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.out.rate&key=node.ifs.bytes.in.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.total&key=node.ifs.ssd.bytes.used&key=node.ifs.ssd.bytes.total&key=node.ifs.bytes.used&devid=all&degraded=true&interval=300&memory_only=true

CPU

https://10.10.10.10:8080/platform/1/statistics/history?key=node.cpu.idle.avg&devid=all&degraded=true&interval=30&memory_only=true

Network

https://10.10.10.10:8080/platform/1/statistics/current?key=node.net.iface.name.0&key=node.net.iface.name.1&key=node.net.iface.name.2&key=node.net.iface.name.3&key=node.net.iface.name.4&key=node.net.iface.name.5&key=node.net.iface.name.6&key=node.net.iface.name.7&key=node.net.iface.name.8&key=node.net.iface.name.9&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.ext.packets.in.rate&key=node.net.ext.errors.in.rate&key=node.net.ext.bytes.out.rate&key=node.net.ext.errors.out.rate&key=node.net.ext.bytes.in.rate&key=node.net.ext.packets.out.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.iface.bytes.out.rate.0&key=node.net.iface.bytes.out.rate.1&key=node.net.iface.bytes.out.rate.2&key=node.net.iface.bytes.out.rate.3&key=node.net.iface.bytes.out.rate.4&key=node.net.iface.bytes.out.rate.5&key=node.net.iface.bytes.out.rate.6&key=node.net.iface.bytes.out.rate.7&key=node.net.iface.bytes.out.rate.8&key=node.net.iface.bytes.out.rate.9&key=node.net.iface.errors.in.rate.0&key=node.net.iface.errors.in.rate.1&key=node.net.iface.errors.in.rate.2&key=node.net.iface.errors.in.rate.3&key=node.net.iface.errors.in.rate.4&key=node.net.iface.errors.in.rate.5&key=node.net.iface.errors.in.rate.6&key=node.net.iface.errors.in.rate.7&key=node.net.iface.errors.in.rate.8&key=node.net.iface.errors.in.rate.9&key=node.net.iface.errors.out.rate.0&key=node.net.iface.errors.out.rate.1&key=node.net.iface.errors.out.rate.2&key=node.net.iface.errors.out.rate.3&key=node.net.iface.errors.out.rate.4&key=node.net.iface.errors.out.rate.5&key=node.net.iface.errors.out.rate.6&key=node.net.iface.errors.out.rate.7&key=node.net.iface.errors.out.rate.8&key=node.net.iface.errors.out.rate.9&key=node.net.iface.packets.in.rate.0&key=node.net.iface.packets.in.rate.1&key=node.net.iface.packets.in.rate.2&key=node.net.iface.packets.in.rate.3&key=node.net.iface.packets.in.rate.4&key=node.net.iface.packets.in.rate.5&key=node.net.iface.packets.in.rate.6&key=node.net.iface.packets.in.rate.7&key=node.net.iface.packets.in.rate.8&key=node.net.iface.packets.in.rate.9&key=node.net.iface.bytes.in.rate.0&key=node.net.iface.bytes.in.rate.1&key=node.net.iface.bytes.in.rate.2&key=node.net.iface.bytes.in.rate.3&key=node.net.iface.bytes.in.rate.4&key=node.net.iface.bytes.in.rate.5&key=node.net.iface.bytes.in.rate.6&key=node.net.iface.bytes.in.rate.7&key=node.net.iface.bytes.in.rate.8&key=node.net.iface.bytes.in.rate.9&key=node.net.iface.packets.out.rate.0&key=node.net.iface.packets.out.rate.1&key=node.net.iface.packets.out.rate.2&key=node.net.iface.packets.out.rate.3&key=node.net.iface.packets.out.rate.4&key=node.net.iface.packets.out.rate.5&key=node.net.iface.packets.out.rate.6&key=node.net.iface.packets.out.rate.7&key=node.net.iface.packets.out.rate.8&key=node.net.iface.packets.out.rate.9&devid=all&degraded=true&interval=30&memory_only=true

Disk

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.count&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.name.0&key=node.disk.name.1&key=node.disk.name.2&key=node.disk.name.3&key=node.disk.name.4&key=node.disk.name.5&key=node.disk.name.6&key=node.disk.name.7&key=node.disk.name.8&key=node.disk.name.9&key=node.disk.name.10&key=node.disk.name.11&key=node.disk.name.12&key=node.disk.name.13&key=node.disk.name.14&key=node.disk.name.15&key=node.disk.name.16&key=node.disk.name.17&key=node.disk.name.18&key=node.disk.name.19&key=node.disk.name.20&key=node.disk.name.21&key=node.disk.name.22&key=node.disk.name.23&key=node.disk.name.24&key=node.disk.name.25&key=node.disk.name.26&key=node.disk.name.27&key=node.disk.name.28&key=node.disk.name.29&key=node.disk.name.30&key=node.disk.name.31&key=node.disk.name.32&key=node.disk.name.33&key=node.disk.name.34&key=node.disk.name.35&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&key=node.ifs.bytes.used&key=node.disk.count&key=node.cpu.count&key=node.uptime&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.slow.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.busy.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.ifs.bytes.used.all&key=node.disk.ifs.bytes.total.all&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.queue.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.in.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.out.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

Complete List of API calls made by InsightIQ

Here is a complete list of all of the API calls that InsightIQ makes to the Isilon cluster using the OneFS API. For a complete reference on what these APIs actually do, you can refer to the OneFS API Info Hub and the OneFS API Reference documentation.

https://10.10.10.10:8080/platform/1/cluster/config

https://10.10.10.10:8080/platform/1/cluster/identity

https://10.10.10.10:8080/platform/1/cluster/time

https://10.10.10.10:8080/platform/1/dedupe/dedupe-summary

https://10.10.10.10:8080/platform/1/dedupe/reports

https://10.10.10.10:8080/platform/1/fsa/path

https://10.10.10.10:8080/platform/1/fsa/results

https://10.10.10.10:8080/platform/1/job/types

https://10.10.10.10:8080/platform/1/license/licenses

https://10.10.10.10:8080/platform/1/license/licenses/InsightIQ

https://10.10.10.10:8080/platform/1/quota/reports

https://10.10.10.10:8080/platform/1/snapshot/snapshots-summary

https://10.10.10.10:8080/platform/1/statistics/current?key=cluster.health&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=ifs.bytes.total&key=ifs.ssd.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.bytes.avail&key=ifs.ssd.bytes.avail&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.count&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.name.0&key=node.disk.name.1&key=node.disk.name.2&key=node.disk.name.3&key=node.disk.name.4&key=node.disk.name.5&key=node.disk.name.6&key=node.disk.name.7&key=node.disk.name.8&key=node.disk.name.9&key=node.disk.name.10&key=node.disk.name.11&key=node.disk.name.12&key=node.disk.name.13&key=node.disk.name.14&key=node.disk.name.15&key=node.disk.name.16&key=node.disk.name.17&key=node.disk.name.18&key=node.disk.name.19&key=node.disk.name.20&key=node.disk.name.21&key=node.disk.name.22&key=node.disk.name.23&key=node.disk.name.24&key=node.disk.name.25&key=node.disk.name.26&key=node.disk.name.27&key=node.disk.name.28&key=node.disk.name.29&key=node.disk.name.30&key=node.disk.name.31&key=node.disk.name.32&key=node.disk.name.33&key=node.disk.name.34&key=node.disk.name.35&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&key=node.ifs.bytes.used&key=node.disk.count&key=node.cpu.count&key=node.uptime&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.net.iface.name.0&key=node.net.iface.name.1&key=node.net.iface.name.2&key=node.net.iface.name.3&key=node.net.iface.name.4&key=node.net.iface.name.5&key=node.net.iface.name.6&key=node.net.iface.name.7&key=node.net.iface.name.8&key=node.net.iface.name.9&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=cluster.dedupe.estimated.saved.bytes&key=cluster.dedupe.logical.deduplicated.bytes&key=cluster.dedupe.logical.saved.bytes&key=cluster.dedupe.estimated.deduplicated.bytes&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=ifs.bytes.avail&key=ifs.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.ssd.bytes.avail&key=ifs.ssd.bytes.total&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.ftp&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.hdfs&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.http&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.nfs3&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.nfs4&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.nlm&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.papi&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.siq&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.smb1&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.smb2&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.ftp&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.hdfs&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.http&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.nfs&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.nlm&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.papi&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.siq&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.smb&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.ftp&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.hdfs&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.http&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.nfs3&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.nfs4&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.nlm&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.papi&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.siq&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.smb1&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.smb2&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.cpu.idle.avg&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.slow.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.busy.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.ifs.bytes.used.all&key=node.disk.ifs.bytes.total.all&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.queue.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.in.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.out.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.out.rate&key=node.ifs.bytes.in.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.total&key=node.ifs.ssd.bytes.used&key=node.ifs.ssd.bytes.total&key=node.ifs.bytes.used&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.cache&key=node.ifs.cache.l3.data.read.miss&key=node.ifs.cache.l3.meta.read.hit&key=node.ifs.cache.l3.data.read.hit&key=node.ifs.cache.l3.data.read.start&key=node.ifs.cache.l3.meta.read.start&key=node.ifs.cache.l3.meta.read.miss&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.blocked&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.blocked.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.contended&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.contended.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.deadlocked&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.deadlocked.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.getattr&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.getattr.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.link&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.link.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lock&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lock.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lookup&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lookup.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.read&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.read.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.rename&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.rename.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.setattr&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.setattr.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.unlink&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.unlink.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.write&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.write.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.je.num_workers&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.ext.packets.in.rate&key=node.net.ext.errors.in.rate&key=node.net.ext.bytes.out.rate&key=node.net.ext.errors.out.rate&key=node.net.ext.bytes.in.rate&key=node.net.ext.packets.out.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.iface.bytes.out.rate.0&key=node.net.iface.bytes.out.rate.1&key=node.net.iface.bytes.out.rate.2&key=node.net.iface.bytes.out.rate.3&key=node.net.iface.bytes.out.rate.4&key=node.net.iface.bytes.out.rate.5&key=node.net.iface.bytes.out.rate.6&key=node.net.iface.bytes.out.rate.7&key=node.net.iface.bytes.out.rate.8&key=node.net.iface.bytes.out.rate.9&key=node.net.iface.errors.in.rate.0&key=node.net.iface.errors.in.rate.1&key=node.net.iface.errors.in.rate.2&key=node.net.iface.errors.in.rate.3&key=node.net.iface.errors.in.rate.4&key=node.net.iface.errors.in.rate.5&key=node.net.iface.errors.in.rate.6&key=node.net.iface.errors.in.rate.7&key=node.net.iface.errors.in.rate.8&key=node.net.iface.errors.in.rate.9&key=node.net.iface.errors.out.rate.0&key=node.net.iface.errors.out.rate.1&key=node.net.iface.errors.out.rate.2&key=node.net.iface.errors.out.rate.3&key=node.net.iface.errors.out.rate.4&key=node.net.iface.errors.out.rate.5&key=node.net.iface.errors.out.rate.6&key=node.net.iface.errors.out.rate.7&key=node.net.iface.errors.out.rate.8&key=node.net.iface.errors.out.rate.9&key=node.net.iface.packets.in.rate.0&key=node.net.iface.packets.in.rate.1&key=node.net.iface.packets.in.rate.2&key=node.net.iface.packets.in.rate.3&key=node.net.iface.packets.in.rate.4&key=node.net.iface.packets.in.rate.5&key=node.net.iface.packets.in.rate.6&key=node.net.iface.packets.in.rate.7&key=node.net.iface.packets.in.rate.8&key=node.net.iface.packets.in.rate.9&key=node.net.iface.bytes.in.rate.0&key=node.net.iface.bytes.in.rate.1&key=node.net.iface.bytes.in.rate.2&key=node.net.iface.bytes.in.rate.3&key=node.net.iface.bytes.in.rate.4&key=node.net.iface.bytes.in.rate.5&key=node.net.iface.bytes.in.rate.6&key=node.net.iface.bytes.in.rate.7&key=node.net.iface.bytes.in.rate.8&key=node.net.iface.bytes.in.rate.9&key=node.net.iface.packets.out.rate.0&key=node.net.iface.packets.out.rate.1&key=node.net.iface.packets.out.rate.2&key=node.net.iface.packets.out.rate.3&key=node.net.iface.packets.out.rate.4&key=node.net.iface.packets.out.rate.5&key=node.net.iface.packets.out.rate.6&key=node.net.iface.packets.out.rate.7&key=node.net.iface.packets.out.rate.8&key=node.net.iface.packets.out.rate.9&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.ftp&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.hdfs&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.http&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.nfs3&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.nfs4&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.nlm&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.papi&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.siq&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.smb1&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.smb2&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/keys

https://10.10.10.10:8080/platform/1/statistics/protocols

https://10.10.10.10:8080/platform/1/storagepool/nodepools

https://10.10.10.10:8080/platform/1/storagepool/tiers

https://10.10.10.10:8080/platform/1/storagepool/unprovisioned

https://10.10.10.10:8080/session/1/session

Pure Storage data reduction report

I had a request to publish a daily report that outlines our data reduction numbers for each LUN on our production Pure Storage arrays.  I wrote a script that logs in to the Pure CLI and issues the appropriate command, using 'expect' to do a screen grab and output the data to a csv file.  The csv file is then converted to an HTML table and published on our internal web site.

The ‘expect’ commands (saved as purevol.exp in the same directory as the bash script)

#!/usr/bin/expect -f
spawn ssh pureuser@10.10.10.10
expect "logon as: "
send "pureuser\r"
expect "pureuser@10.10.10.10's password: "
send "password\r"
expect "pureuser@pure01> "
send "purevol list --space\r"
expect "pureuser@pure01> "
send "exit\r"


The bash script (saved as purevol.sh):

#!/bin/bash

# Pure Data Reduction Report Script
# 11/28/16

#Define a timestamp function
#The output looks like this: 6-29-2016/8:45:12
timestamp() {
date +"%m-%d-%Y/%H:%M:%S"
}

#Remove existing output file
rm /home/data/pure/purevol_532D.txt

#Run the expect script to create the output file
/usr/bin/expect -f /home/data/pure/purevol.exp > /home/data/pure/purevol_532D.txt

#Remove the first twelve lines of the output file
#They contain login and command execution info not needed in the report
sed -i '1,12d' /home/data/pure/purevol_532D.txt

#Remove the last line of the output file
#This is because the expect script leaves a CLI prompt as the last line of the output
sed -i '$ d' /home/data/pure/purevol_532D.txt

#Add date to output file, remove previous temp files
rm /home/data/pure/purevol_532D-1.csv
rm /home/data/pure/purevol_532D-2.csv
echo -n "Run time: " > /home/data/pure/purevol_532D-1.csv
echo $(timestamp) >> /home/data/pure/purevol_532D-1.csv

#Add titles to new csv file
echo "Volume","Size","Thin Provisioning","Data Reduction"," "," ","Total Reduction"," "," ","Volume","Snapshots","Shared Space","System","Total" >> /home/data/pure/purevol_532D-1.csv

#Convert the space delimited file into a comma delimited file
sed -r 's/^\s+//;s/\s+/,/g' /home/data/pure/purevol_532D.txt > /home/data/pure/purevol_532D-2.csv

#Combine the csv files into one
cat /home/data/pure/purevol_532D-1.csv /home/data/pure/purevol_532D-2.csv > /home/data/pure/purevol_532D.csv

#Use the csv2htm perl script to convert the csv to an html table
#csv2html script available here:  http://web.simmons.edu/~boyd3/imap/csv2html/
./csv2htm.pl -e -T -i /home/data/pure/purevol_532D.csv -o /home/data/pure/purevol_532D.html

#Copy the html file to the www folder to publish it
cp /home/data/pure/purevol_532D.html /cygdrive/C/inetpub/wwwroot
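Since this is meant to be a daily report, you'll also want to schedule the script. A hypothetical cron entry (Cygwin's cron package or any Linux cron) to run it every morning at 6:00 is shown below; a Windows Task Scheduler job would work just as well.

0 6 * * * /home/data/pure/purevol.sh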

Below is an example of what the output looks like after the script is run and converted to an HTML table.  Note that columns on the right were removed to fit the formatting of this post; the full report also includes the Total Reduction and snapshot numbers.

Name Size Thin Provisioning Data Reduction
LUN001_PURE_0025_ESX_5T 5T 78% 16.4 to 1
LUN002_PURE_0025_ESX_5T 5T 75% 7.8 to 1
LUN003_PURE_0025_ESX_5T 5T 71% 9.3 to 1
LUN004_PURE_0025_ESX_5T 5T 87% 10.5 to 1

Scripting an alert to verify the availability of individual VNX CIFS server shares

I was recently asked to come up with a method to alert on the availability of specific CIFS file shares in our environment.  This was due to a recent issue we had on our VNX where the data mover crashed and corrupted a single file system when it came back up.  We were unaware for several hours that the one file system was unavailable on our CIFS server.

This particular script requires maintenance whenever a new file system share is added to a CIFS server.  A unique line must be added for every file system share that you have configured.  If a file system is not mounted and the share is inaccessible, an email alert will be sent.  If the share is accessible the script does nothing when run from the scheduler.  If it's run manually from the CLI, it will echo back to the screen that the path is active.

This is a bash shell script; I run it on a Windows server with Cygwin installed, using the 'email' package for SMTP.  It should also run fine from a Linux server, and you could substitute the 'email' syntax with sendmail or whatever other mail application you use.  I have it scheduled to check the availability of the CIFS shares every hour.

DIR1=file_system_1; SRV1=cifs_servername;  echo -ne $DIR1 && echo -ne ": " && [ -d //$SRV1/$DIR1 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR1 is offline" emailaddress@email.com

DIR2=file_system_2; SRV1=cifs_servername;  echo -ne $DIR2 && echo -ne ": " && [ -d //$SRV1/$DIR2 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR2 is offline" emailaddress@email.com

DIR3=file_system_3; SRV1=cifs_servername;  echo -ne $DIR3 && echo -ne ": " && [ -d //$SRV1/$DIR3 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR3 is offline" emailaddress@email.com

DIR4=file_system_4; SRV1=cifs_servername;  echo -ne $DIR4 && echo -ne ": " && [ -d //$SRV1/$DIR4 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR4 is offline" emailaddress@email.com

DIR5=file_system_5; SRV1=cifs_servername;  echo -ne $DIR5 && echo -ne ": " && [ -d //$SRV1/$DIR5 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR5 is offline" emailaddress@email.com
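To cut down on the copy-and-paste maintenance, the same check can also be wrapped in a loop. Below is a minimal sketch under the same assumptions (Cygwin with the 'email' package, and a hypothetical list of share names that you would still need to keep current):

#!/bin/bash
# Hypothetical list of file system shares to check; add new shares here
SRV1=cifs_servername
SHARES="file_system_1 file_system_2 file_system_3 file_system_4 file_system_5"

for DIR in $SHARES; do
   echo -ne "$DIR: "
   if [ -d //$SRV1/$DIR ]; then
      echo "Network Path is Active"
   else
      email -b -s "Network Path \\\\$SRV1\\$DIR is offline" emailaddress@email.com
   fi
done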

Gathering performance data on a virtual windows server

When troubleshooting a potential storage related performance problem on a virtual Windows server, it's a bit more difficult to analyze because many virtual hosts share the same LUN for a datastore in ESX.  Using EMC's analyzer or Control Center Performance Manager only gives me statistics on specific disks or LUNs; I have no visibility into a specific virtual server with those tools.  When this situation arises, I use a Windows batch script to gather data with the typeperf command line utility for a specific time period and run it directly on the server.  Typically I'll let it run for 24 hours and then analyze the data in Excel, where it's easy to make charts and graphs to get a visual view of what's going on.

Sometimes the most difficult thing to figure out is the correct syntax for the command and which parameters to use.  For reference, here is the command and its parameters:

Syntax:

Typeperf [Path [path ...]] [-cf FileName] [-f {csv|tsv|bin}] [-si interval] [-o FileName] [-q [object]] [-qx [object]] [-sc samples] [-config FileName] [-s computer_name]

Parameters:

-c { Path [ path ... ] | -cf FileName } : Specifies the performance counter path to log. To list multiple counter paths, separate each command path by a space.
-cf FileName : Specifies the file name of the file that contains the counter paths that you want to monitor, one per line.
-f { csv | tsv | bin } : Specifies the output file format. File formats are csv (comma-delimited), tsv (tab-delimited), and bin (binary). Default format is csv.
-si [ mm: ] ss : Specifies the time between samples, in the [mm:]ss format. Default is one second.
-o FileName : Specifies the pathname of the output file. Defaults to stdout.
-q [ object ] : Displays and queries available counters without instances. To display counters for one object, include the object name.
-qx [ object ] : Displays and queries all available counters with instances. To display counters for one object, include the object name.
-sc samples : Specifies the number of samples to collect. Default is to sample until you press CTRL+C.
-config FileName : Specifies the pathname of the settings file that contains command line parameters.
-s computer_name : Specifies the system to monitor if no server is specified in the counter path.
/? : Displays help at the command prompt.
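For example, to see which counters are available for an object before building a collection you can query it with -q, and a single counter can be collected to a csv file with a one-liner.  The counter, interval, and sample count below are just an illustration:

typeperf -q LogicalDisk
typeperf "\LogicalDisk(_Total)\Avg. Disk sec/Transfer" -si 60 -sc 30 -o sample.csv

One note on the counter names: inside a batch file a literal % has to be doubled, which is why the script further down uses %% in the counter paths; at an interactive prompt you would type a single %.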

EMC’s Analyzer vs. Windows Perfmon Metrics

I tend to look at Response time, disk queue length, Total/Read/Write IO, and Service time first.   I dive into how to interpret many of the SAN performance metrics in my older post here. 

The counters you'll choose in Windows Performance Monitor don't line up precisely in name with what we commonly look at using EMC's tools, and in addition you can choose between the 'LogicalDisk' and 'PhysicalDisk' objects when selecting the counters.

What is the difference between the Physical Disk vs. Logical Disk performance objects in Perfmon, and why monitor both? Their counters are calculated the same way but their scope is different. I generally use both “\LogicalDisk(*)\” and “\PhysicalDisk(*)\” when I run my perfmon script.

The Physical Disk performance object monitors disk drives on the computer. It identifies the instances representing the physical hardware, and the counters are the sum of the access to all partitions on the physical instance.

The Logical Disk Performance object monitors logical partitions. Performance monitor identifies logical disks by their drive letter or mount point. If a physical disk contains multiple partitions, this counter will report the values just for the partition selected and not for the entire disk. On the other hand, when using Dynamic Disks the logical volumes may span more than one physical disk, in this scenario the counter values will include the access to the logical disk in all the physical disks it spans.

Here are the performance monitor counters that I frequently use, and how they compare to EMC’s navisphere analyzer (or ECC):

"\LogicalDisk(*)\Avg. Disk Queue Length" – (Named the same as EMC) The average number of outstanding requests when the disk was busy
"\LogicalDisk(*)\%% Disk Time" – (No direct EMC equivalent) The "% Disk Time" counter is the "Avg. Disk Queue Length" counter multiplied by 100. It is the same value displayed in a different scale.
"\LogicalDisk(*)\Disk Transfers/sec" – Total Throughput (IO/sec) – the total number of individual disk IO requests completed over a period of one second.  We'll use this value to help determine Disk Service Time.
"\LogicalDisk(*)\Disk Reads/sec" – Read Throughput (IO/sec)
"\LogicalDisk(*)\Disk Writes/sec" – Write Throughput (IO/sec)
"\LogicalDisk(*)\%% Idle Time" – (No direct EMC equivalent) This counter provides a very precise measurement of how much time the disk remained in the idle state, meaning all the requests from the operating system to the disk have been completed and there are zero pending requests. We'll also use this to calculate disk service time.
"\LogicalDisk(*)\Avg. Disk sec/Transfer" – Response time (sec) – EMC uses milliseconds, Windows uses seconds, so you'll see 8ms represented as .008 in the results.
"\LogicalDisk(*)\Avg. Disk sec/Read" – Response times for read IO
"\LogicalDisk(*)\Avg. Disk sec/Write" – Response times for write IO

Disk Service Time is calculated with this formula:  Disk Utilization = 100 – %Idle Time; then Disk Service Time = Disk Utilization / Disk Transfers/sec.
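As a quick worked example with made-up numbers: if %% Idle Time averaged 92 and Disk Transfers/sec averaged 400 over a sample, disk utilization is 100 - 92 = 8%.  Treating that utilization as a fraction of a second of busy time (0.08) and dividing by the 400 IOs completed in that second gives 0.0002 seconds, or 0.2 ms of service time per IO.  The awk one-liner below (run from Cygwin or any Unix shell) just checks that arithmetic:

awk 'BEGIN {
   idle = 92; tps = 400                # hypothetical sample values
   util = 100 - idle                   # disk utilization in percent
   svc  = (util / 100) / tps           # utilization as a fraction, divided by IO/sec
   printf "Utilization: %.1f%%  Service time: %.4f sec (%.1f ms)\n", util, svc, svc * 1000
}'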

Configuring the Script

This batch script collects all of the relevant data for disk activity.  After 24 hours, it will dump the data into a csv file.  The length of time is controlled by the combination of the "-sc" and "-si" parameters.  To collect data in one minute intervals for 24 hours, you'd set si to 60 (collect data every 60 seconds) and sc to 1440 (1440 minutes = 24 hours).  To collect data every minute for 30 minutes, you'd enter "-si 60 -sc 30".  This script assumes you have a local directory on the C: drive named 'Collection'.

@echo off
cd c:\collection

@for /f "tokens=1,2,3,4 delims=/ " %%A in ('date /t') do @(set all=%%A%%B%%C%%D)
@for /f "tokens=1,2,3 delims=: " %%A in ('time /t') do @(set allm=%%A%%B%%C)

typeperf "\LogicalDisk(*)\Avg. Disk Queue Length" "\LogicalDisk(*)\%% Disk Time" "\LogicalDisk(*)\Disk Transfers/sec" "\LogicalDisk(*)\Disk Reads/sec" "\LogicalDisk(*)\Disk Writes/sec" "\LogicalDisk(*)\%% Idle Time" "\LogicalDisk(*)\Avg. Disk sec/Transfer" "\LogicalDisk(*)\Avg. Disk sec/Read" "\LogicalDisk(*)\Avg. Disk sec/Write" "\PhysicalDisk(*)\Avg. Disk Queue Length" "\PhysicalDisk(*)\%% Disk Time" "\PhysicalDisk(*)\Disk Transfers/sec" "\PhysicalDisk(*)\Disk Reads/sec" "\PhysicalDisk(*)\Disk Writes/sec" "\PhysicalDisk(*)\%% Idle Time" "\PhysicalDisk(*)\Avg. Disk sec/Transfer" "\PhysicalDisk(*)\Avg. Disk sec/Read" "\PhysicalDisk(*)\Avg. Disk sec/Write" -si 60 -sc 1440 -o PerfCounters-%All%-%Allm%.csv
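If you want the collection to start automatically instead of kicking it off by hand, the batch file can also be scheduled with the Windows Task Scheduler from the command line.  This is just a sketch; the task name, script path, and start time are placeholders:

schtasks /create /tn "DiskPerfCollection" /tr "c:\collection\perfcollect.bat" /sc daily /st 00:00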
 

Scripting automatic reports for Isilon from the CLI

We recently set up a virtual demo of an Isilon system on our network as we are evaluating Isilon for a possible purchase.  You can obtain a virtual node that runs on ESX from your local EMC Isilon representative, along with temporary licenses to test everything out.  As part of the test I wanted to see if it was possible to create custom CLI scripts in the same way that I create them on the Celerra or VNX File.  On the Celerra, I run daily scripts that output file pool sizes, file systems & disk space, failover status, logs, health check info, checkpoint info, etc. to my web report page.  Can you do the same thing on an Isilon?

Well, to start with, Isilon’s commands are completely different.  The first step of course was to look at what was available to me.  All of the Isilon administration commands appear to begin with ‘isi’.  If you type isi by itself it will show you the basic list of commands and what they do:

isilon01-1% isi
Description:
OneFS cluster administration.
Usage:
isi  <subcommand>
[--timeout <integer>]
[{--help | -h}]
Subcommands:
Cluster Monitoring:
alert*           An alias for "isi events".
audit            Manage audit configuration.
events*          Manage cluster events.
perfstat*        View cluster performance statistics.
stat*            An alias for "isi status".
statistics*      View general cluster statistics.
status*          View cluster status.
Cluster Configuration:
config*          Manage general cluster settings.
email*           Configure email settings.
job              Isilon job management commands.
license*         Manage software licenses.
networks*        Manage network settings.
services*        Manage cluster services.
update*          Update OneFS system software.
pkg*             Manage OneFS system software patches.
version*         View system version information.
remotesupport    Manage remote support settings.
Hardware & Devices:
batterystatus*   View battery status.
devices*         Manage cluster hardware devices.
fc*              Manage Fibre Channel settings.
firmware*        Manage system firmware.
lun*             Manage iSCSI logical units (LUNs).
target*          Manage iSCSI targets.
readonly*        Toggle node read-write state.
servicelight*    Toggle node service light.
tape*            Manage tape and media changer devices.
File System Configuration:
get*             View file system object properties.
set*             Manage file system object settings.
quota            Manage SmartQuotas, notifications and reports.
smartlock*       Manage SmartLock settings.
domain*          Manage file system domains.
worm*            Manage SmartLock WORM settings.
dedupe           Manage Dedupe settings.
Access Management:
auth             Manage authentication, identities and role-based access.
zone             Manage access zones.
Data Protection:
avscan*          Manage antivirus service settings.
ndmp*            Manage backup (NDMP) settings.
snapshot         Manage SnapshotIQ file system snapshots and settings.
sync             SyncIQ management interface.
Protocols:
ftp*             Manage FTP settings.
hdfs*            Manage HDFS settings.
iscsi*           Manage iSCSI settings.
nfs              Manage NFS exports and protocol settings.
smb              Manage SMB shares and protocol settings.
snmp*            Manage SNMP settings.
Utilities:
exttools*        External tools.
Other Subcommands:
filepool         Manage filepools on the cluster.
storagepool      Configure and monitor storage pools
Options:
Display Options:
--timeout <integer>
Number of seconds for a command timeout.
--help | -h
Display help for this command.

Note that subcommands or actions marked with an asterisk (*) require root login.  If you log in with a normal admin account, you'll get the following error message when you run the command:

isilon01-1% isi status
Commands not enabled for role-based administration require root user access.

Isilon’s OneFS operating system is based on FreeBSD as opposed to Linux for the Celerra/VNX DART OS.   Since it’s a unix based OS with console access, you can create shell scripts and cron job schedules just like the Celerra/VNX File.

Because all of the commands I want to run require root access, I had to create the scripts logged in as root.  Be careful doing this!  This is only a test for me; I would look for a workaround on a production system.  Knowing that I'm going to be FTPing the output files to my web report server, I started by creating the .netrc file in the /root folder.  This is where I store the default login and password for the FTP server.  Permissions must be changed for it to work; use chmod 600 on the file after you create it.  It didn't work for me at first, as the .netrc syntax is different on FreeBSD than on Linux, so looking at my Celerra notes didn't help (for Celerra/VNX File I used "machine <ftp_server_name> login <ftp_login_id> password <ftp_password>").

For Isilon, the correct syntax is this:

default login <ftp_username> password <ftp_password>
 

The script that FTP’s the files would then look like this for Isilon:

HOST="10.1.1.1"
ftp $HOST <<SCRIPT
put /root/isilon_stats.txt
put /root/isilon_repl.txt
put /root/isilon_df.txt
put /root/isilon_dedupe.txt
put /root/isilon_perf.txt
SCRIPT
 

For this demo, I created a script that generates reports for File system utilization, Deduplication Status, Performance Stats, Replication stats, and Array Status.  For the Filesystem utilization report, I used two different methods as I wasn't sure which I'd like better.  Using 'df -h -a /ifs' will get you similar information to 'isi storagepool list', but the output format is different.  I used cron to schedule the job directly on the Isilon.
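For reference, the cron entries would look something like the sketch below (edited with 'crontab -e' as root).  The script names and times here are placeholders rather than the actual names I used:

# Run the reporting script at 6:00 AM, then FTP the output files 15 minutes later
0 6 * * * /root/isilon_report.sh
15 6 * * * /root/isilon_ftp.sh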

Here is the reporting script:

TODAY=$(date)
HOST=$(hostname)
sleep 15
echo "---------------------------------------------------------------------------------" > /root/isilon_stats.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_stats.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_stats.txt
/usr/bin/isi status >> /root/isilon_stats.txt
echo "---------------------------------------------------------------------------------" > /root/isilon_repl.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_repl.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_repl.txt
/usr/bin/isi sync reports list >> /root/isilon_repl.txt
echo "---------------------------------------------------------------------------------" > /root/isilon_df.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_df.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_df.txt
df -h -a /ifs >> /root/isilon_df.txt
echo " " >> /root/isilon_df.txt
/usr/bin/isi storagepool list >> /root/isilon_df.txt
echo "---------------------------------------------------------------------------------" > /root/isilon_dedupe.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_dedupe.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_dedupe.txt
/usr/bin/isi dedupe stats  >> /root/isilon_dedupe.txt
echo "---------------------------------------------------------------------------------" > /root/isilon_perf.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_perf.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--System Stats--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics system  >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--Client Stats--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics client  >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--Protocol Stats--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics protocol  >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--Protocol Data--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics pstat  >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--Drive Stats--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics drive  >> /root/isilon_perf.txt
 

Once the output files are FTP'd to the web server, I have a basic HTML page that uses iframes to show the text files.  The web page is then automatically updated as soon as the new text files are FTP'd.  Below is a screenshot of my demo report page.  It doesn't show the entire page, but you'll get the idea.

IsilonReports
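The page itself is nothing fancy.  If you want to build something similar, the sketch below shows the general idea; it writes a simple page that embeds a few of the uploaded text files in iframes.  The page name and iframe sizes are arbitrary:

#!/bin/bash
# Generate a basic HTML report page with iframes for the uploaded text files
cat > /cygdrive/c/inetpub/wwwroot/isilon_report.html <<'EOF'
<html>
<body>
<h2>Isilon Status</h2>
<iframe src="isilon_stats.txt" width="900" height="300"></iframe>
<h2>Replication</h2>
<iframe src="isilon_repl.txt" width="900" height="300"></iframe>
<h2>Disk Space</h2>
<iframe src="isilon_df.txt" width="900" height="300"></iframe>
</body>
</html>
EOF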

Easy reporting on data domain using the autosupport log


I was looking for an easy (and free) way to do some daily reporting on our data domain hardware.  I was most interested in reporting on disk space, but decided to gather some other data as well.  The easiest method I found is to use the info collected in the autosupport log.  First, I’ll explain how to automatically gather the autosupport log from your data domain, and then move on to how to pull only the information you want from it for reporting purposes.

First, you need to enable ftp on the data domain and add the IP address of the FTP server you’re going to use to pull the file.  Here are the commands you need to run on your data domain:

adminaccess enable ftp
adminaccess add ftp 10.1.1.1 
 

Next, you'll want to pull the files from the data domain.  I use a Windows server to pull the autosupport files via FTP.  The Windows batch script I use is scheduled to run once a day and is listed below; it calls the ftp command with a text file containing the specific IP and login info for each data domain box.

ftp -s:10.1.1.2.txt
ftp -s:10.1.1.3.txt
ftp -s:10.2.1.2.txt
ftp -s:10.2.1.3.txt
 

The text files that the ftp script calls (named <ipaddress>.txt) look like this:

open 10.1.1.2
sysadmin
<password>
cd support
lcd e:\reports\dd\SITE1_DD_01
get autosupport
quit
 

Once you’ve downloaded the autosupport file, you can run scripts against it to pull out only the info you’re interested in.  I prefer to use unix shell scripts for parsing files because of the extra functionality over standard windows batch scripts.  I have Cygwin installed on my web server to run these shell scripts.

If you take a look at the autosupport log, you’ll notice that it’s very long and contains some duplicate characters in multiple areas, which makes using grep, awk, and sed a bit more challenging.  It took a bit of trial and error, but the scripts below will pull only the specific areas I wanted:  System Alerts, File system compression, Locked files, Replication info, File distribution, and Disk space information.

The first script below will gather disk space information using grep.  Spacing inside the quotes is very important in this case, as I was looking for specific unique strings within the autosupport log, and using grep for these unique strings will produce the correct output.  In this example, I am gathering info for four data domain boxes.  For those unfamiliar with Cygwin, you can access Windows drive letters by navigating to /cygdrive/<drive letter>.  To view the root of the C drive, you would type 'cd /cygdrive/c' from the CLI.  In this case, my Windows batch script that pulls the autosupport files places them in the e:\reports\dd folder, which is the /cygdrive/e/reports/dd folder when using the Cygwin shell.

Here is the script:

# Site 1 Data Domain 01
cat /cygdrive/e/reports/dd/SITE1_DD_01/autosupport | grep "=  SERVE" > /cygdrive/e/reports/dd/SITE1_DD_01/SITE1_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_01/autosupport | grep "Active Tier:" >> /cygdrive/e/reports/dd/SITE1_DD_01/SITE1_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_01/autosupport | grep "Resource           Size" >> /cygdrive/e/reports/dd/SITE1_DD_01/SITE1_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_01/autosupport | grep "/data:" >> /cygdrive/e/reports/dd/SITE1_DD_01/SITE1_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_01/autosupport | grep "/ddvar     " >> /cygdrive/e/reports/dd/SITE1_DD_01/SITE1_DD_01_diskspace.txt

# Site 1 Data Domain 02
cat /cygdrive/e/reports/dd/SITE1_DD_02/autosupport | grep "=  SERVE" > /cygdrive/e/reports/dd/SITE1_DD_02/SITE1_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_02/autosupport | grep "Active Tier:" >> /cygdrive/e/reports/dd/SITE1_DD_02/SITE1_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_02/autosupport | grep "Resource           Size" >> /cygdrive/e/reports/dd/SITE1_DD_02/SITE1_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_02/autosupport | grep "/data:" >> /cygdrive/e/reports/dd/SITE1_DD_02/SITE1_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_02/autosupport | grep "/ddvar     " >> /cygdrive/e/reports/dd/SITE1_DD_02/SITE1_DD_02_diskspace.txt

# Site 2 Data Domain 01
cat /cygdrive/e/reports/dd/Site2_DD_01/autosupport | grep "=  SERVE" > /cygdrive/e/reports/dd/Site2_DD_01/Site2_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_01/autosupport | grep "Active Tier:" >> /cygdrive/e/reports/dd/Site2_DD_01/Site2_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_01/autosupport | grep "Resource           Size" >> /cygdrive/e/reports/dd/Site2_DD_01/Site2_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_01/autosupport | grep "/data:" >> /cygdrive/e/reports/dd/Site2_DD_01/Site2_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_01/autosupport | grep "/ddvar     " >> /cygdrive/e/reports/dd/Site2_DD_01/Site2_DD_01_diskspace.txt

# Site 2 Data Domain 02
cat /cygdrive/e/reports/dd/Site2_DD_02/autosupport | grep "=  SERVE" > /cygdrive/e/reports/dd/Site2_DD_02/Site2_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_02/autosupport | grep "Active Tier:" >> /cygdrive/e/reports/dd/Site2_DD_02/Site2_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_02/autosupport | grep "Resource           Size" >> /cygdrive/e/reports/dd/Site2_DD_02/Site2_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_02/autosupport | grep "/data:" >> /cygdrive/e/reports/dd/Site2_DD_02/Site2_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_02/autosupport | grep "/ddvar     " >> /cygdrive/e/reports/dd/Site2_DD_02/Site2_DD_02_diskspace.txt
 
# Copy the reports to the web server directory.
cp /cygdrive/e/reports/dd/SITE1_DD_01/SITE1_DD_01_diskspace.txt /cygdrive/c/inetpub/wwwroot/
cp /cygdrive/e/reports/dd/SITE1_DD_02/SITE1_DD_02_diskspace.txt /cygdrive/c/inetpub/wwwroot/
cp /cygdrive/e/reports/dd/Site2_DD_01/Site2_DD_01_diskspace.txt /cygdrive/c/inetpub/wwwroot/
cp /cygdrive/e/reports/dd/Site2_DD_02/Site2_DD_02_diskspace.txt /cygdrive/c/inetpub/wwwroot/
 

For the remaining reports, I used the 'sed' command rather than 'grep'.  As an example, I'll explain how I stripped out the information for the System Alerts section.  In order to strip it out, I use sed to start cutting text when it reaches 'Current Alerts' and stop cutting text when it reaches 'There are'.  The same process is repeated to pull the information for the other reports as well (compression, locked files, replication, and file distribution).

This is what the Current Alerts section looks like in the autosupport log:

Current Alerts
--------------
Alert Id   Time                       Severity   Class               Object          Message                                                             
--------   ------------------------   --------   -----------------   -------------   ---------------------------------------------------------------------
774        Wed Dec 19 11:32:19 2012   CRITICAL   SystemMaintenance                   Core dump capability is now disabled due to lack of space in /ddvar.
807        Sun Jan 20 09:07:18 2013   WARNING    Replication         context=3       repl ctx 3: Sync-as-of time is more than 461 hours ago.             
808        Sun Jan 20 09:07:18 2013   WARNING    Replication         context=4       repl ctx 4: Sync-as-of time is more than 461 hours ago.             
809        Sun Jan 20 09:07:19 2013   WARNING    Replication         context=5       repl ctx 5: Sync-as-of time is more than 461 hours ago.              
814        Mon Jan 28 23:20:45 2013   CRITICAL   Filesystem          FilesysType=2   Space usage in Data Collection has exceeded 95% threshold.          
820        Wed Jan 30 15:52:40 2013   WARNING    Replication         context=2       repl ctx 2: Sync-as-of time is more than 148 hours ago.             
824        Mon Feb  4 12:36:39 2013   WARNING    Replication         context=10      repl ctx 10: Sync-as-of time is more than 24 hours ago.             
825        Wed Feb  6 02:13:51 2013   WARNING    Replication         context=19      repl ctx 19: Sync-as-of time is more than 24 hours ago.             
--------   ------------------------   --------   -----------------   -------------   ---------------------------------------------------------------------
There are 8 active alerts.
 

In this case, sed searches for ‘Current Alerts’ and copies all of the text until it reaches the string ‘There are’, at which point it stops and outputs the file.  The output files are written directly to the wwwroot folder on my internal reporting web page.  Each page encapsulates the output files in an iframe, so the web page is automatically updated every morning when the script runs.

Here is the second script that captures the remaining data:

#For Alerts
sed -n '/Current Alerts/,/There are/p' /cygdrive/e/reports/dd/SITE1_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_01_alerts.txt
sed -n '/Current Alerts/,/There are/p' /cygdrive/e/reports/dd/SITE1_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_02_alerts.txt
sed -n '/Current Alerts/,/There are/p' /cygdrive/e/reports/dd/Site2_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_01_alerts.txt
sed -n '/Current Alerts/,/There are/p' /cygdrive/e/reports/dd/Site2_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_02_alerts.txt

#For Filesystem Compression
sed -n '/Filesys Compression/,/((Pre-/p' /cygdrive/e/reports/dd/SITE1_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_01_filecomp.txt
sed -n '/Filesys Compression/,/((Pre-/p' /cygdrive/e/reports/dd/SITE1_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_02_filecomp.txt
sed -n '/Filesys Compression/,/((Pre-/p' /cygdrive/e/reports/dd/Site2_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_01_filecomp.txt
sed -n '/Filesys Compression/,/((Pre-/p' /cygdrive/e/reports/dd/Site2_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_02_filecomp.txt

#For Locked Files:
sed -n '/Locked files/,/Active/p' /cygdrive/e/reports/dd/SITE1_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_01_lockedfiles.txt
sed -n '/Locked files/,/Active/p' /cygdrive/e/reports/dd/SITE1_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_02_lockedfiles.txt
sed -n '/Locked files/,/Active/p' /cygdrive/e/reports/dd/Site2_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_01_lockedfiles.txt
sed -n '/Locked files/,/Active/p' /cygdrive/e/reports/dd/Site2_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_02_lockedfiles.txt

#Repl Transferred over 24 hrs
sed -n '/Replication Data Transferred/,/(sum)/p' /cygdrive/e/reports/dd/SITE1_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_01_repl.txt
sed -n '/Replication Data Transferred/,/(sum)/p' /cygdrive/e/reports/dd/SITE1_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_02_repl.txt
sed -n '/Replication Data Transferred/,/(sum)/p' /cygdrive/e/reports/dd/Site2_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_01_repl.txt
sed -n '/Replication Data Transferred/,/(sum)/p' /cygdrive/e/reports/dd/Site2_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_02_repl.txt

#File Distribution
sed -n '/File Distribution/,/> 500 GiB/p' /cygdrive/e/reports/dd/SITE1_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_01_filedist.txt
sed -n '/File Distribution/,/> 500 GiB/p' /cygdrive/e/reports/dd/SITE1_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_02_filedist.txt
sed -n '/File Distribution/,/> 500 GiB/p' /cygdrive/e/reports/dd/Site2_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_01_filedist.txt
sed -n '/File Distribution/,/> 500 GiB/p' /cygdrive/e/reports/dd/Site2_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_02_filedist.txt
 

When all is said and done, the report output looks like this:

Alerts:

Current Alerts
--------------
Alert Id   Time                       Severity   Class         Object          Message                                                   
--------   ------------------------   --------   -----------   -------------   ----------------------------------------------------------
1157       Wed Feb  5 12:23:19 2014   WARNING    Replication   context=3       Sync-as-of time is more than 24 hours ago.                
1161       Sun Feb  9 08:33:48 2014   CRITICAL   Filesystem    FilesysType=2   Space usage in Data Collection has exceeded 95% threshold.
--------   ------------------------   --------   -----------   -------------   ----------------------------------------------------------
There are 2 active alerts.

Filesystem Compression:

Filesys Compression
--------------                  
From: 2014-03-06 05:00 To: 2014-03-13 06:00

                  Pre-Comp   Post-Comp   Global-Comp   Local-Comp      Total-Comp
                     (GiB)       (GiB)        Factor       Factor          Factor
                                                                    (Reduction %)
---------------   --------   ---------   -----------   ----------   -------------
Currently Used:   495122.5    122948.5             -            -     4.0x (75.2)
Written:*                                                                        
  Last 7 days       9204.2      5656.6          1.1x         1.5x     1.6x (38.5)
  Last 24 hrs         74.1        36.4          1.0x         2.0x     2.0x (50.9)
---------------   --------   ---------   -----------   ----------   -------------
 * Does not include the effects of pre-comp file deletes/truncates
   since the last cleaning on 2014/03/11 02:07:50.

Replications:

Replication Data Transferred over 24hr
--------------------------------------
Directory/MTree Replication:
Date         Time         CTX   Pre-Comp (KB)   Pre-Comp (KB)           Replicated (KB)   Low-bw-   Sync-as-of      
                                      Written       Remaining    Pre-Comp       Network     optim   Time            
----------   --------   -----   -------------   -------------   -----------------------   -------   ----------------
2014/03/13   06:36:18       2     104,186,675      86,628,349   55,522,200   37,756,047      1.00   Thu Mar 13 05:53
                            3               0               0            0            0      0.00   Tue Feb  4 12:00
                            4               0               0            0        4,882      0.00   Thu Mar 13 06:00
                            8               0               0            0        5,120      0.00   Thu Mar 13 06:00
                           29               0               0            0        5,098      0.00   Thu Mar 13 06:00
                        (sum)     104,186,675                   55,522,200   37,771,148      1.00  

File Distribution:

File Distribution
-----------------
169,432 files in 8,691 directories

                          Count                         Space
               -----------------------------   --------------------------
         Age         Files       %    cumul%        GiB       %    cumul%
   ---------   -----------   -----   -------   --------   -----   -------
       1 day            13     0.0       0.0       74.4     0.0       0.0
      1 week           223     0.1       0.1     2133.7     0.4       0.4
     2 weeks         5,103     3.0       3.2    39121.2     7.8       8.3
     1 month        12,819     7.6      10.7    79336.5    15.9      24.1
    2 months        21,853    12.9      23.6   153873.8    30.8      54.9
    3 months         5,743     3.4      27.0    45154.6     9.0      64.0
    6 months         3,402     2.0      29.0    13674.3     2.7      66.7
      1 year        17,035    10.1      39.1    72937.7    14.6      81.3
    > 1 year       103,241    60.9     100.0    93461.7    18.7     100.0
   ---------   -----------   -----   -------   --------   -----   -------

                          Count                         Space
               -----------------------------   --------------------------
        Size         Files       %    cumul%        GiB       %    cumul%
   ---------   -----------   -----   -------   --------   -----   -------
       1 KiB        11,257     6.6       6.6        0.0     0.0       0.0
      10 KiB        44,396    26.2      32.8        0.3     0.0       0.0
     100 KiB        38,488    22.7      55.6        1.3     0.0       0.0
     500 KiB         9,652     5.7      61.3        2.1     0.0       0.0
       1 MiB        27,460    16.2      77.5       70.4     0.0       0.0
       5 MiB        12,136     7.2      84.6       27.3     0.0       0.0
      10 MiB         3,861     2.3      86.9       26.9     0.0       0.0
      50 MiB         4,367     2.6      89.5       96.6     0.0       0.0
     100 MiB           853     0.5      90.0       58.2     0.0       0.1
     500 MiB           861     0.5      90.5      201.0     0.0       0.1
       1 GiB           495     0.3      90.8      309.7     0.1       0.2
       5 GiB           567     0.3      91.1     1460.2     0.3       0.5
      10 GiB           336     0.2      91.3     2574.1     0.5       1.0
      50 GiB        14,691     8.7     100.0   494083.5    98.9      99.8
     100 GiB            11     0.0     100.0      683.3     0.1     100.0
     500 GiB             1     0.0     100.0      173.0     0.0     100.0
   > 500 GiB             0     0.0     100.0        0.0     0.0     100.0

Disk Space:

==========  SERVER USAGE   ==========
Active Tier:
Resource           Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB
/data: pre-comp           -   245211.8           -      -               -
/data: post-comp    51484.9    31811.0     19673.9    62%            35.4
/ddvar                 78.7        7.9        66.8    11%               -

Alerting on VNX File SP Failover

You may see a Celerra alert description that states "Disk dx has been trespassed" or "Storage Processor is ready to restore".  This means that one or more Celerra LUNs aren't being accessed through their default SP.  It can happen during a DART upgrade, some other maintenance activity, or simply a temporary loss of connectivity to the default SP for the LUN.  I wanted to let the storage team know whenever an SP failover occurs, so I wrote a script that sends an alert email when a failover is detected.  It can be run directly on the Control Station and scheduled as frequently as you like via cron.

Here’s the script:

#!/bin/bash

TODAY=$(date) 
HOST=$(hostname) 


# Checks and displays status of NAS storage, will show SP/disk Failover info. 
# We will use this info to include in the alert email if needed. 

/nas/bin/nas_storage -check -all > /scripts/health/backendcheck.txt 

#  [nasadmin@celerra]$ nas_storage -check -all     
#  Discovering storage (may take several minutes) 
#  Error 5017: storage health check failed
#  CK900052700319 SPA is failed over
#  CK900052700319 d6 is failed over

# Shows detailed info, I'm only pulling out failover info. 

/nas/bin/nas_storage -info -all | grep failed_over > /scripts/health/failovercheck.txt 

# The command above results in this output: 
#   failed_over = <True/False> 
#   failed_over = <True/False> 
#   failed_over = <True/False> 

# The first entry is the value for the array, second is SPA, third is SPB. 

# The next line pulls the True/False value for the entire array (the third value on the first line of output) 

echo `cat /scripts/health/failovercheck.txt | awk '{if (NR<2) print $3}'` > /scripts/health/arrayFOstatus.txt

# Read back the True/False value that was just written so the checks below use the current run's status.
STATUS=`cat /scripts/health/arrayFOstatus.txt`

# Now we check the value in the 'arrayFOstatus.txt' file, if it's 'True', we send an email notification that there is an SP failed over. 

# In addition to sending an email, you could also run the 'nas_storage -failback id=1' command to automatically fail it back.

if [ "$STATUS" == "False" ]; then  
   echo "Value is False" 
fi

if [ "$STATUS" == "True" ]; then  
   mail -s "SP Failover on $HOST" username@domain.com < /scripts/health/backendcheck.txt  

   #nas_storage -failback id=1 #Optionally fail it back, our team decided to alert only and fail back manually.

   echo "Value is True" 
fi
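As mentioned above, the script can be scheduled with cron.  A minimal crontab entry to run the check every 15 minutes might look like this; the script path is a placeholder:

*/15 * * * * /scripts/health/sp_failover_check.sh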

If a failover is detected, you can manually fail it back with the following commands:

Determine/Confirm the ID number:

[nasadmin@celerra]$ nas_storage -list
 id   acl    name                     serial_number
 1    0      CK900052700319 CK900052700319

Fail it back (will fail back Celerra/VNX File LUNs only):

[nasadmin@celerra]$ nas_storage -failback id=1
id  = 1  
serial_number   = CK900052700319  
name  = CK900052700319  
acl  = 0  
done

Automating Config / Zone backups for Brocade Switches

I was looking for an easy way to make backups of our fabric switch configurations and report the status of the backups on our internal web page. I wrote a script that remotely runs configupload to pull the config file from all the switches via FTP, moves the previous config files to an archive location, creates a report of the output, and copies it to a web page folder. I run the bash script using Cygwin on our internal IIS server, and it's scheduled to run daily.

The script uses ssh, so you could either set up ssh keys or use an opensource package called sshpass (http://sourceforge.net/projects/sshpass). In this example script I’m using sshpass to avoid having to type in a password for each command when the script runs.

I figured out that you must connect to a new switch at least once manually without using sshpass, as sshpass suppresses the prompt that asks you to confirm adding the switch as a known host. Below is that output; I simply ran a 'zoneshow' as the initial command to set it up.

$ ssh USERID@brocade_switch_1 zoneshow
 The authenticity of host 'brocade_switch_1 (10.0.0.9)' can't be established.
 DSA key fingerprint is 3b:cd:98:62:4a:67:99:28:c4:41:f3:19:8d:f1:7d:a0.
 Are you sure you want to continue connecting (yes/no)? yes
 Warning: Permanently added 'brocade_switch_1,10.0.0.9' (DSA) to the list of known hosts.
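Alternatively, you could pre-populate the known_hosts file for a batch of switches with ssh-keyscan instead of connecting to each one manually. A quick sketch, using the same placeholder switch names as the script below (older FOS versions may require the -t dsa option):

# Collect each switch's host key and append it to the known_hosts file
for SW in switch1siteA switch2siteA switch1siteB; do
   ssh-keyscan $SW >> ~/.ssh/known_hosts
done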

My original script runs against multiple geographical locations, which is why there are separate switch lists. This sample script covers two theoretical locations; it could easily be expanded or reduced based on your environment.

#Set Environment 
TODAY=`date`
TIMESTAMP=`date +"%Y%m%d%H%M"`
LOCALPATH="/cygdrive/c/scripts/brocade"
WEBPATH="/cygdrive/c/inetpub/wwwroot"
FTPHOST="10.0.0.1"
FTPUSER="ftpuser"
FTPPATH="/brocade"
FTPPASSWORD="password"

#Add timestamp to top of report
echo $TODAY > $WEBPATH/brocade_backup_report.txt
echo " " >> $WEBPATH/brocade_backup_report.txt

#Clear data from last run
>$LOCALPATH/brocade_backup_report_1.txt
>$LOCALPATH/brocade_backup_report_2.txt

#Move yesterday's backups to an archive location
mv $WEBPATH/brocade/*.txt /cygdrive/e/archive/brocade

#List of Switches to be backed up
SWITCHLIST1="switch1siteA switch2siteA switch3siteA switch4siteA"
SWITCHLIST2="switch1siteB switch2siteB switch3siteB switch4siteB switch5siteB switch6siteB"

for x in $SWITCHLIST1
do
echo "$x": "$FTPPATH/$x.$TIMESTAMP" >> $LOCALPATH/brocade_backup_report_1.txt
sshpass -p 'password' ssh admin@$x configupload -ftp $FTPHOST,$FTPUSER,$FTPPATH/$x.$TIMESTAMP.txt,$FTPPASSWORD >> $LOCALPATH/brocade_backup_report_1.txt
echo " " >> $LOCALPATH/brocade_backup_report_1.txt
done

for x in $SWITCHLIST2
do
echo "$x": "$FTPPATH/$x.$TIMESTAMP" >> $LOCALPATH/brocade_backup_report_2.txt
sshpass -p 'password' ssh USERID@$x configupload -ftp $FTPHOST,$FTPUSER,$FTPPATH/$x.$TIMESTAMP.txt,$FTPPASSWORD >> $LOCALPATH/brocade_backup_report_2.txt
echo " " >> $LOCALPATH/brocade_backup_report_2.txt
done

# This last section creates the report for the web page.
cat $LOCALPATH/brocade_backup_report_1.txt $LOCALPATH/brocade_backup_report_2.txt >> $WEBPATH/brocade_backup_report.txt

The report output looks like this:

Thu Sep 12 06:00:01 CDT 2013
 
switch1siteA: /brocade/switch1siteA.201309120600
configUpload complete: All selected config parameters are uploaded
 
switch2siteA: /brocade/switch2siteA.201309120600
configUpload complete: All selected config parameters are uploaded
 
switch3siteA: /brocade/switch3siteA.201309120600
configUpload complete: All selected config parameters are uploaded
 
switch4siteA: /brocade/switch4siteA.201309120600
configUpload complete: All selected config parameters are uploaded
 
switch1siteB: /brocade/switch1siteB.201309120600
configUpload complete: All selected config parameters are uploaded
 
switch2siteB: /brocade/switch2siteB.201309120600
configUpload complete: All selected config parameters are uploaded
 
switch3siteB: /brocade/switch3siteB.201309120600
configUpload complete: All selected config parameters are uploaded
 
switch4siteB: /brocade/switch4siteB.201309120600
configUpload complete: All selected config parameters are uploaded
 
switch5siteB: /brocade/switch5siteB.201309120600
configUpload complete: All selected config parameters are uploaded
 
switch6siteB: /brocade/switch6siteB.201309120600
configUpload complete: All selected config parameters are uploaded

Reporting on the Estimated Job Completion Times for FAST VP Data Relocation

Another one of my many daily reports shows the current time remaining on the FAST VP data relocation jobs for all of our arrays.  I also make a single backup copy of the report to show the times for the previous day, so I can get a quick view of the progress made over the previous 24 hours.  Both reports are presented side by side on my intranet report page for easy comparison.

I made a post last year about how to deal with long running FAST VP data relocation jobs (http://emcsan.wordpress.com/2012/01/18/long-running-fast-vp-relocation-job/), and this report helps identify any arrays that could be falling behind.  If your estimated completion time is longer than the time window you have defined for your data relocation job, you may need to make some changes; see my previous post for more information about that.

You can get the current status of the data relocation job at any time by running the following command:

naviseccli -h [array_hostname] autotiering -info -state -rate -schedule -opStatus -poolID [Pool_ID_Number]
 

The output looks like this:

Auto-Tiering State:  Enabled
Relocation Rate:  Medium
 
Schedule Name:  Default Schedule
Schedule State:  Enabled
Default Schedule:  Yes
Schedule Days:  Sun Mon Tue Wed Thu Fri Sat
Schedule Start Time:  20:00
Schedule Stop Time:  6:00
Schedule Duration:  10 hours
Storage Pools:  Array1_Pool1_SPB, Array1_Pool0_SPA
 
Storage Pool Name:  Array1_Pool0_SPA
Storage Pool ID:  0
Relocation Start Time:  08/15/13 20:00
Relocation Stop Time:  08/16/13 6:00
Relocation Status:  Inactive
Relocation Type:  Scheduled
Relocation Rate:  Medium
Data to Move Up (GBs):  8.00
Data to Move Down (GBs):  8.00
Data Movement Completed (GBs):  2171.00
Estimated Time to Complete:  4 minutes
Schedule Duration Remaining:  None
 
Storage Pool Name:  Array1_Pool1_SPB
Storage Pool ID:  1
Relocation Start Time:  08/15/13 20:00
Relocation Stop Time:  08/16/13 6:00
Relocation Status:  Inactive
Relocation Type:  Scheduled
Relocation Rate:  Medium
Data to Move Up (GBs):  14.00
Data to Move Down (GBs):  14.00
Data Movement Completed (GBs):  1797.00
Estimated Time to Complete:  5 minutes
Schedule Duration Remaining:  None
 

The output of the command is very verbose; I want to trim it down to show only the pool name and the estimated time remaining for the relocation job to complete.  The bash script shown further down does exactly that.

The final output of the script generated report looks like this: 

Runtime: Thu Aug 11 07:00:01 CDT 2013
Array1_Pool0:  9 minutes
Array1_Pool1:  6 minutes
Array2_Pool0:  1 hour, 47 minutes
Array2_Pool1:  3 minutes
Array2_Pool2:  2 days, 7 hours, 25 minutes
Array2_Pool3:  1 day, 9 hours, 58 minutes
Array3_Pool0:  1 minute
Array4_Pool0:  N/A
Array4_Pool1:  2 minutes
Array5_Pool1:  5 minutes
Array5_Pool0:  5 minutes
Array6_Pool0:  N/A
Array6_Pool1:  N/A

 

Below is the bash script that generates the report. The script is set up to report on six different arrays; it can easily be modified to suit your environment. 

TODAY=$(date)
echo "Runtime: $TODAY" > /reports/tierstatus.txt
echo $TODAY
#
naviseccli -h [array_hostname1] autotiering -info -state -rate -schedule -opStatus -poolID 0 > /reports/[array_hostname1]_tierstatus0.out
naviseccli -h [array_hostname1] autotiering -info -state -rate -schedule -opStatus -poolID 1 > /reports/[array_hostname1]_tierstatus1.out
#
echo `grep "Pool Name:" /reports/[array_hostname1]_tierstatus0.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname1]_tierstatus0.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname1]_tierstatus1.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname1]_tierstatus1.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
#
naviseccli -h [array_hostname2] autotiering -info -state -rate -schedule -opStatus -poolID 0 > /reports/[array_hostname2]_tierstatus0.out
naviseccli -h [array_hostname2] autotiering -info -state -rate -schedule -opStatus -poolID 1 > /reports/[array_hostname2]_tierstatus1.out
naviseccli -h [array_hostname2] autotiering -info -state -rate -schedule -opStatus -poolID 2 > /reports/[array_hostname2]_tierstatus2.out
naviseccli -h [array_hostname2] autotiering -info -state -rate -schedule -opStatus -poolID 3 > /reports/[array_hostname2]_tierstatus3.out
#
echo `grep "Pool Name:" /reports/[array_hostname2]_tierstatus0.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname2]_tierstatus0.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname2]_tierstatus1.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname2]_tierstatus1.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname2]_tierstatus2.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname2]_tierstatus2.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname2]_tierstatus3.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname2]_tierstatus3.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
#
naviseccli -h [array_hostname3] autotiering -info -state -rate -schedule -opStatus -poolID 0 > /reports/[array_hostname3]_tierstatus0.out
#
echo `grep "Pool Name:" /reports/[array_hostname3]_tierstatus0.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname3]_tierstatus0.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
#
naviseccli -h [array_hostname4] autotiering -info -state -rate -schedule -opStatus -poolID 0 > /reports/[array_hostname4]_tierstatus0.out
naviseccli -h [array_hostname4] autotiering -info -state -rate -schedule -opStatus -poolID 1 > /reports/[array_hostname4]_tierstatus1.out
#
echo `grep "Pool Name:" /reports/[array_hostname4]_tierstatus0.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname4]_tierstatus0.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname4]_tierstatus1.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname4]_tierstatus1.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
#
naviseccli -h [array_hostname5] autotiering -info -state -rate -schedule -opStatus -poolID 0 > /reports/[array_hostname5]_tierstatus0.out
naviseccli -h [array_hostname5] autotiering -info -state -rate -schedule -opStatus -poolID 1 > /reports/[array_hostname5]_tierstatus1.out
#
echo `grep "Pool Name:" /reports/[array_hostname5]_tierstatus0.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname5]_tierstatus0.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname5]_tierstatus1.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname5]_tierstatus1.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
#
naviseccli -h [array_hostname6] autotiering -info -state -rate -schedule -opStatus -poolID 0 > /reports/[array_hostname6]_tierstatus0.out
naviseccli -h [array_hostname6] autotiering -info -state -rate -schedule -opStatus -poolID 1 > /reports/[array_hostname6]_tierstatus1.out
#
echo `grep "Pool Name:" /reports/[array_hostname6]_tierstatus0.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname6]_tierstatus0.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
echo `grep "Pool Name:" /reports/[array_hostname6]_tierstatus1.out |awk '{print $4}'`":  "`grep Complete: /reports/[array_hostname6]_tierstatus1.out |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
#
#Copy the current report to a new file and rename it, one prior day is saved.
cp /cygdrive/c/inetpub/wwwroot/tierstatus.txt /cygdrive/c/inetpub/wwwroot/tierstatus_yesterday.txt
#Remove the current report on the web page.
rm /cygdrive/c/inetpub/wwwroot/tierstatus.txt
#Copy the new report to the web page.
cp /reports/tierstatus.txt /cygdrive/c/inetpub/wwwroot
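If you have a lot of arrays with the same pool layout, the repeated pairs of lines above could also be collapsed into a nested loop.  This is only a sketch, and it assumes every pool ID in the list exists on every array, which isn't true in my environment (hence the explicit per-array lines above):

# Sketch: loop over arrays and pool IDs instead of repeating the lines per pool
for ARRAY in array_hostname1 array_hostname2; do
   for POOL in 0 1; do
      OUT=/reports/${ARRAY}_tierstatus${POOL}.out
      naviseccli -h $ARRAY autotiering -info -state -rate -schedule -opStatus -poolID $POOL > $OUT
      echo `grep "Pool Name:" $OUT |awk '{print $4}'`":  "`grep Complete: $OUT |awk '{print $5,$6,$7,$8,$9,$10}'` >> /reports/tierstatus.txt
   done
done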

Reporting on Clariion / VNX Block Storage Pool capacity with a bash script

I recently added a post about how to report on Celerra & VNX File pool sizes with a bash script. I’ve also been doing that for a long time with our Clariion and VNX block pools so I thought I’d share that information as well.

I use a cron job to schedule the report daily and copy it to our internal web server. I then run the csv2html.pl perl script (from http://www.jpsdomain.org/source/perl.html) to convert it to an HTML output file to add to our intranet report page. This is likely the most viewed report I create on a daily basis as we always seem to be running low on available capacity.

The output of the script looks similar to this:

PoolReport

Here is the bash script that creates the report:

TODAY=$(date)
#Add the current time/date stamp to the top of the report 
echo $TODAY > /scripts/PoolReport.csv

#Create a file with the standard navisphere CLI output for each storage pool (to be processed later into the format I want)

naviseccli -h VNX_1 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_0.csv

naviseccli -h VNX_1 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_1.csv

naviseccli -h VNX_1 storagepool -list -id 2 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_2.csv

naviseccli -h VNX_1 storagepool -list -id 3 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_3.csv

naviseccli -h VNX_1 storagepool -list -id 4 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_6.csv

#

naviseccli -h VNX_2 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_4.csv

naviseccli -h VNX_2 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_5.csv

naviseccli -h VNX_2 storagepool -list -id 2 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_7.csv

naviseccli -h VNX_2 storagepool -list -id 3 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_8.csv

#

naviseccli -h VNX_3 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site1_6.csv

#

naviseccli -h VNX_4 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site2_0.csv

naviseccli -h VNX_4 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site2_1.csv

naviseccli -h VNX_4 storagepool -list -id 2 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site2_2.csv

naviseccli -h VNX_4 storagepool -list -id 3 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site2_3.csv

#

naviseccli -h VNX_5 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site2_4.csv

#

naviseccli -h VNX_6 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site3_0.csv

naviseccli -h VNX_6 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site3_1.csv

#

naviseccli -h VNX_7 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site3_0.csv

naviseccli -h VNX_7 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site3_1.csv

#

naviseccli -h VNX_8 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site4_0.csv

naviseccli -h VNX_8 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site4_1.csv

#

naviseccli -h VNX_9 storagepool -list -id 0 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site5_0.csv

naviseccli -h VNX_9 storagepool -list -id 1 -availableCap -consumedCap -UserCap -prcntFull >/scripts/report_site5_1.csv

#

#Create a new file for each site's storage pool (from the files generated in the previous step) that contains only the info that I want.

#

cat /scripts/report_site1_4.csv | grep Name > /scripts/Site1Pool4.csv

cat /scripts/report_site1_4.csv | grep GBs >>/scripts/Site1Pool4.csv

cat /scripts/report_site1_4.csv | grep Full >>/scripts/Site1Pool4.csv

#

cat /scripts/report_site1_5.csv | grep Name > /scripts/Site1Pool5.csv

cat /scripts/report_site1_5.csv | grep GBs >>/scripts/Site1Pool5.csv

cat /scripts/report_site1_5.csv | grep Full >>/scripts/Site1Pool5.csv

#

cat /scripts/report_site1_0.csv | grep Name > /scripts/Site1Pool0.csv

cat /scripts/report_site1_0.csv | grep GBs >>/scripts/Site1Pool0.csv

cat /scripts/report_site1_0.csv | grep Full >>/scripts/Site1Pool0.csv

#

cat /scripts/report_site1_1.csv | grep Name > /scripts/Site1Pool1.csv

cat /scripts/report_site1_1.csv | grep GBs >>/scripts/Site1Pool1.csv

cat /scripts/report_site1_1.csv | grep Full >>/scripts/Site1Pool1.csv

#

cat /scripts/report_site1_2.csv | grep Name > /scripts/Site1Pool2.csv

cat /scripts/report_site1_2.csv | grep GBs >>/scripts/Site1Pool2.csv

cat /scripts/report_site1_2.csv | grep Full >>/scripts/Site1Pool2.csv

#

cat /scripts/report_site1_7.csv | grep Name > /scripts/Site1Pool7.csv

cat /scripts/report_site1_7.csv | grep GBs >>/scripts/Site1Pool7.csv

cat /scripts/report_site1_7.csv | grep Full >>/scripts/Site1Pool7.csv

#

cat /scripts/report_site1_8.csv | grep Name > /scripts/Site1Pool8.csv

cat /scripts/report_site1_8.csv | grep GBs >>/scripts/Site1Pool8.csv

cat /scripts/report_site1_8.csv | grep Full >>/scripts/Site1Pool8.csv

#

cat /scripts/report_site1_3.csv | grep Name > /scripts/Site1Pool3.csv

cat /scripts/report_site1_3.csv | grep GBs >>/scripts/Site1Pool3.csv

cat /scripts/report_site1_3.csv | grep Full >>/scripts/Site1Pool3.csv

#

cat /scripts/report_site1_6.csv | grep Name > /scripts/Site1Pool6.csv

cat /scripts/report_site1_6.csv | grep GBs >>/scripts/Site1Pool6.csv

cat /scripts/report_site1_6.csv | grep Full >>/scripts/Site1Pool6.csv

#

cat /scripts/report_site2_0.csv | grep Name > /scripts/Site2Pool0.csv

cat /scripts/report_site2_0.csv | grep GBs >>/scripts/Site2Pool0.csv

cat /scripts/report_site2_0.csv | grep Full >>/scripts/Site2Pool0.csv

#

cat /scripts/report_site2_1.csv | grep Name > /scripts/Site2Pool1.csv

cat /scripts/report_site2_1.csv | grep GBs >>/scripts/Site2Pool1.csv

cat /scripts/report_site2_1.csv | grep Full >>/scripts/Site2Pool1.csv

#

cat /scripts/report_site2_2.csv | grep Name > /scripts/Site2Pool2.csv

cat /scripts/report_site2_2.csv | grep GBs >>/scripts/Site2Pool2.csv

cat /scripts/report_site2_2.csv | grep Full >>/scripts/Site2Pool2.csv

#

cat /scripts/report_site2_3.csv | grep Name > /scripts/Site2Pool3.csv

cat /scripts/report_site2_3.csv | grep GBs >>/scripts/Site2Pool3.csv

cat /scripts/report_site2_3.csv | grep Full >>/scripts/Site2Pool3.csv

#

cat /scripts/report_site2_4.csv | grep Name > /scripts/Site2Pool4.csv

cat /scripts/report_site2_4.csv | grep GBs >>/scripts/Site2Pool4.csv

cat /scripts/report_site2_4.csv | grep Full >>/scripts/Site2Pool4.csv

#

cat /scripts/report_site3_0.csv | grep Name > /scripts/Site3Pool0.csv

cat /scripts/report_site3_0.csv | grep GBs >>/scripts/Site3Pool0.csv

cat /scripts/report_site3_0.csv | grep Full >>/scripts/Site3Pool0.csv

#

cat /scripts/report_site3_1.csv | grep Name > /scripts/Site3Pool1.csv

cat /scripts/report_site3_1.csv | grep GBs >>/scripts/Site3Pool1.csv

cat /scripts/report_site3_1.csv | grep Full >>/scripts/Site3Pool1.csv

#

cat /scripts/report_site3_0.csv | grep Name > /scripts/Site4Pool0.csv

cat /scripts/report_site3_0.csv | grep GBs >>/scripts/Site4Pool0.csv

cat /scripts/report_site3_0.csv | grep Full >>/scripts/Site4Pool0.csv

#

cat /scripts/report_site3_1.csv | grep Name > /scripts/Site4Pool1.csv

cat /scripts/report_site3_1.csv | grep GBs >>/scripts/Site4Pool1.csv

cat /scripts/report_site3_1.csv | grep Full >>/scripts/Site4Pool1.csv

#

cat /scripts/report_site4_0.csv | grep Name > /scripts/Site5Pool0.csv

cat /scripts/report_site4_0.csv | grep GBs >>/scripts/Site5Pool0.csv

cat /scripts/report_site4_0.csv | grep Full >>/scripts/Site5Pool0.csv

#

cat /scripts/report_site4_1.csv | grep Name > /scripts/Site5Pool1.csv

cat /scripts/report_site4_1.csv | grep GBs >>/scripts/Site5Pool1.csv

cat /scripts/report_site4_1.csv | grep Full >>/scripts/Site5Pool1.csv

#

cat /scripts/report_site5_0.csv | grep Name > /scripts/Site6Pool0.csv

cat /scripts/report_site5_0.csv | grep GBs >>/scripts/Site6Pool0.csv

cat /scripts/report_site5_0.csv | grep Full >>/scripts/Site6Pool0.csv

#

cat /scripts/report_site5_1.csv | grep Name > /scripts/Site6Pool1.csv

cat /scripts/report_site5_1.csv | grep GBs >>/scripts/Site6Pool1.csv

cat /scripts/report_site5_1.csv | grep Full >>/scripts/Site6Pool1.csv

#

#The last section creates the final output for the report before it is processed into an html table. It writes a single line for each storage pool with the pool name, total GB, used GB, available GB, and the percent utilization of the pool.

#

echo 'Pool Name','Total GB ','Used GB ','Available GB ','Percent Full ' >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site1Pool0.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool0.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool0.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool0.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool0.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site1Pool1.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool1.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool1.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool1.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool1.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site1Pool2.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool2.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool2.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool2.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool2.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site1Pool3.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool3.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool3.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool3.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool3.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site1Pool6.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool6.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool6.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool6.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool6.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

#

echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site1Pool4.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool4.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool4.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool4.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool4.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site1Pool5.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool5.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool5.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool5.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool5.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site1Pool7.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool7.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool7.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool7.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool7.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site1Pool8.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool8.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool8.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool8.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool8.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

#

echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv

#

#

echo `grep Name /scripts/Site1Pool6.csv |awk '{print $3}'`","`grep -i User /scripts/Site1Pool6.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site1Pool6.csv |awk '{print $4}'`","`grep -i Available /scripts/Site1Pool6.csv |awk '{print $4}'`","`grep -i Full /scripts/Site1Pool6.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

#

echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv

#

#

echo `grep Name /scripts/Site2Pool0.csv |awk '{print $3}'`","`grep -i User /scripts/Site2Pool0.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site2Pool0.csv |awk '{print $4}'`","`grep -i Available /scripts/Site2Pool0.csv |awk '{print $4}'`","`grep -i Full /scripts/Site2Pool0.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site2Pool1.csv |awk '{print $3}'`","`grep -i User /scripts/Site2Pool1.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site2Pool1.csv |awk '{print $4}'`","`grep -i Available /scripts/Site2Pool1.csv |awk '{print $4}'`","`grep -i Full /scripts/Site2Pool1.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site2Pool2.csv |awk '{print $3}'`","`grep -i User /scripts/Site2Pool2.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site2Pool2.csv |awk '{print $4}'`","`grep -i Available /scripts/Site2Pool2.csv |awk '{print $4}'`","`grep -i Full /scripts/Site2Pool2.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site2Pool3.csv |awk '{print $3}'`","`grep -i User /scripts/Site2Pool3.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site2Pool3.csv |awk '{print $4}'`","`grep -i Available /scripts/Site2Pool3.csv |awk '{print $4}'`","`grep -i Full /scripts/Site2Pool3.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

#

echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv

#

#

echo `grep Name /scripts/Site2Pool4.csv |awk '{print $3}'`","`grep -i User /scripts/Site2Pool4.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site2Pool4.csv |awk '{print $4}'`","`grep -i Available /scripts/Site2Pool4.csv |awk '{print $4}'`","`grep -i Full /scripts/Site2Pool4.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

#

echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv

#

#

echo `grep Name /scripts/Site3Pool0.csv |awk '{print $3}'`","`grep -i User /scripts/Site3Pool0.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site3Pool0.csv |awk '{print $4}'`","`grep -i Available /scripts/Site3Pool0.csv |awk '{print $4}'`","`grep -i Full /scripts/Site3Pool0.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site3Pool1.csv |awk '{print $3}'`","`grep -i User /scripts/Site3Pool1.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site3Pool1.csv |awk '{print $4}'`","`grep -i Available /scripts/Site3Pool1.csv |awk '{print $4}'`","`grep -i Full /scripts/Site3Pool1.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

#

echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv

#

#

echo `grep Name /scripts/Site4Pool0.csv |awk '{print $3}'`","`grep -i User /scripts/Site4Pool0.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site4Pool0.csv |awk '{print $4}'`","`grep -i Available /scripts/Site4Pool0.csv |awk '{print $4}'`","`grep -i Full /scripts/Site4Pool0.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site4Pool1.csv |awk '{print $3}'`","`grep -i User /scripts/Site4Pool1.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site4Pool1.csv |awk '{print $4}'`","`grep -i Available /scripts/Site4Pool1.csv |awk '{print $4}'`","`grep -i Full /scripts/Site4Pool1.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

#

echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv

#

#

echo `grep Name /scripts/Site5Pool0.csv |awk '{print $3}'`","`grep -i User /scripts/Site5Pool0.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site5Pool0.csv |awk '{print $4}'`","`grep -i Available /scripts/Site5Pool0.csv |awk '{print $4}'`","`grep -i Full /scripts/Site5Pool0.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site5Pool1.csv |awk '{print $3}'`","`grep -i User /scripts/Site5Pool1.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site5Pool1.csv |awk '{print $4}'`","`grep -i Available /scripts/Site5Pool1.csv |awk '{print $4}'`","`grep -i Full /scripts/Site5Pool1.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

#

echo " ",'Total GB','Used GB','Available GB','Percent Full' >> /scripts/PoolReport.csv

#

#

echo `grep Name /scripts/Site6Pool0.csv |awk '{print $3}'`","`grep -i User /scripts/Site6Pool0.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site6Pool0.csv |awk '{print $4}'`","`grep -i Available /scripts/Site6Pool0.csv |awk '{print $4}'`","`grep -i Full /scripts/Site6Pool0.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

echo `grep Name /scripts/Site6Pool1.csv |awk '{print $3}'`","`grep -i User /scripts/Site6Pool1.csv |awk '{print $4}'`","`grep -i Consumed /scripts/Site6Pool1.csv |awk '{print $4}'`","`grep -i Available /scripts/Site6Pool1.csv |awk '{print $4}'`","`grep -i Full /scripts/Site6Pool1.csv |awk '{print $3}'` >> /scripts/PoolReport.csv

#

#Convert the file to HTML for use on the internal web server 

#

./csv2htm.pl -e -T -i /scripts/PoolReport.csv -o /webfolder/PoolReport.html
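
As an aside, the long run of per-pool blocks above could be collapsed into a loop. The sketch below is not the script I actually run; it assumes the same report_site<x>_<y>.csv file naming and column positions used above, maps the site number in each file name straight to the site label in the output, and leaves out the per-site header rows.

#!/bin/bash
# Sketch only: build the pool summaries in a loop instead of one block per pool
echo 'Pool Name','Total GB ','Used GB ','Available GB ','Percent Full ' > /scripts/PoolReport.csv
for f in /scripts/report_site*_*.csv; do
    # report_site1_2.csv -> site=1, pool=2
    base=$(basename "$f" .csv)
    nums=${base#report_site}
    site=${nums%%_*}
    pool=${nums#*_}
    out="/scripts/Site${site}Pool${pool}.csv"
    grep Name "$f"  > "$out"
    grep GBs  "$f" >> "$out"
    grep Full "$f" >> "$out"
    # Same field positions as the echo lines above
    echo `grep Name "$out" | awk '{print $3}'`","`grep -i User "$out" | awk '{print $4}'`","`grep -i Consumed "$out" | awk '{print $4}'`","`grep -i Available "$out" | awk '{print $4}'`","`grep -i Full "$out" | awk '{print $3}'` >> /scripts/PoolReport.csv
done
./csv2htm.pl -e -T -i /scripts/PoolReport.csv -o /webfolder/PoolReport.html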

Scripting Alias and Zone creation for Brocade Switches

Whenever a new array is added in our environment, we usually have to add hundreds of new zones to our core Brocade fabric.  It’s a very tedious job, so I started investigating ways to script the process.

Here is the syntax for running an ssh command remotely (telnet is disabled on all of our switches):

ssh userid@switchname [command]
 

So, if I wanted to show all of the zones on a switch, the command would look like this:

ssh admin@10.0.0.1 zoneshow
 

While those commands work perfectly and could be added to a bash script as-is, the caveat is that the password must be typed in for every command.  That’s not very practical if there are hundreds of lines in the script.  You could set up SSH keys on every switch, but I was looking for a quicker, easier solution, and found it in an open-source package called sshpass (http://sourceforge.net/projects/sshpass), which lets you supply the password on the command line.  I use Cygwin on a Windows server for all of my bash scripts and it was a very simple installation in that environment; I’m sure it would be just as easy on Linux.  Download it to a folder, uncompress it, run "./configure", "make", and "make install", and you’re done.
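
The build steps look roughly like this (the tarball name will vary with the version you download, and on Linux the install step may need root):

# Build and install sshpass from source (same procedure under Cygwin or Linux)
tar -xzf sshpass-*.tar.gz
cd sshpass-*
./configure
make
make install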

Once sshpass is installed, the syntax for showing the zones on a Brocade switch looks like this:

sshpass -p "[password]" ssh admin@10.0.0.1 zoneshow
 

Now that the commands can be run very easily from a remote shell, I needed to make creating the script much easier.  That’s where a spreadsheet application helps tremendously.  Divide the script line into different cells on the spreadsheet, making sure to separate any part of the command that’s going to change.  In the last cell on the line, you concatenate all of the previous cells together to create the complete command.  Below is how I did it.

Here’s the syntax output I want for creating a zone:

sshpass -p 'password' ssh [User_ID]@[Switch_Name] zonecreate "[Zone_Name]","[Host_Alias]","[SP_Alias]"
 

Here’s how I divide the command up in the spreadsheet:

A1          sshpass -p 'password' ssh 

B1          [User_ID]

C1          @

D1          [Switch_Name]

E1           zonecreate "

F1          [Zone_Name]

G1          ","

H1          [Host_Alias]

I1          ","

J1          [Clariion/VNX_SP_Alias]

K1          "

L1          =concatenate(A1,B1,C1,D1,E1,F1,G1,H1,I1,J1,K1)

Now you can copy line 1 of the spreadsheet to as many lines as you need, change the User ID, Switch Name, Zone Name, Host Alias, and Clariion/VNX SP Alias as needed, and the L column will contain the completed commands, ready to cut and paste into a bash script.  Create a blank script file with ‘touch file.sh’, make it executable with ‘chmod +x file.sh’, use vi to paste in the script data, then run it with ‘./file.sh’ from the prompt.
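
If you’d rather skip the spreadsheet step, the same commands can be generated with a short loop.  This is only a sketch: zones.txt is a hypothetical input file with one ‘switch zone_name host_alias sp_alias’ entry per line, and the user ID and password handling are the same sshpass approach described above.

# zones.txt (hypothetical) holds one line per zone: switch zone_name host_alias sp_alias
while read switch zone host sp; do
    # -n keeps ssh from swallowing the rest of zones.txt on stdin
    sshpass -p 'password' ssh -n admin@"$switch" zonecreate "\"$zone\",\"$host\",\"$sp\""
done < zones.txt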

The same thing can be done for creating the initial aliases; here’s the syntax of the command for that:

sshpass -p 'password' ssh [User_ID]@[Switch_Name] alicreate "[Alias_Name]","[WWN]"
 

And finally, here’s what it looks like entered into a spreadsheet:

[Image: BrocadeScript, the spreadsheet with the completed zoning commands]

Reporting on Celerra / VNX NAS Pool capacity with a bash script

I recently created a script that I run on all of our Celerras and VNXs to report on NAS pool size.   The output from each array is converted to HTML and combined on a single intranet page to provide a quick, at-a-glance view of our global NAS capacity and disk space consumption.  I made another post that shows how to create a block storage pool report as well:  http://emcsan.wordpress.com/2013/08/09/reporting-on-celerravnx-block-storage-pool-capacity-with-a-bash-script/

The nas_pool command unfortunately outputs only in megabytes, with no option to change to GB or TB.  This script performs the MB to GB conversion and adds a comma as the thousands separator (the convention we use in the USA) to make the output much more readable.
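
As a quick illustration of what the script does under the hood, converting the used_mb value of 3437536 from the sample output below works out like this:

export LC_NUMERIC="en_US.UTF-8"
echo "3437536/1024" | bc -l           # 3356.96875000000000000000
/usr/bin/printf "%'.2f\n" 3356.96875  # 3,356.97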

First, identify the ID number for each of your NAS pools.  You’ll need to insert the ID numbers into the script itself.

[nasadmin@celerra]$ nas_pool -list
 id      inuse   acl     name                      storage system
 10      y       0       NAS_Pool0_SPA             AKM00111000000
 18      y       0       NAS_Pool1_SPB             AKM00111000000

Note that the default output of the command that provides the size of each pool is in a very hard-to-read format.  I wanted to clean it up to make it easier to read on our reporting page.  Here’s the default output:

[nasadmin@celerra]$ nas_pool -size -all
id           = 10
name         = NAS_Pool0_SPA
used_mb      = 3437536
avail_mb     = 658459
total_mb     = 4095995
potential_mb = 0
id           = 18
name         = NAS_Pool1_SPB
used_mb      = 2697600
avail_mb     = 374396
total_mb     = 3071996
potential_mb = 1023998

My script changes the output to look like the example below:
Name (Site)   ; Total GB ; Used GB  ; Avail GB
 NAS_Pool0_SPA ; 4,000    ; 3,356.97 ; 643.03
 NAS_Pool1_SPB ; 3,000    ; 2,634.38 ; 365.62

In this example there are two NAS pools and the script is set up to report on both.  It could easily be expanded or reduced depending on the number of pools on your array (there is also a loop-based sketch further down).  The variable names I used include the pool ID number from the output above; they should be changed to match your IDs.  You’ll also need to update the ‘id=’ portion of each command to match your pool IDs.

Here’s the script:

#!/bin/bash

NAS_DB="/nas"
export NAS_DB

# Set the Locale to English/US, used for adding the comma as a separator in a cron job
export LC_NUMERIC="en_US.UTF-8"
TODAY=$(date)

 

# Gather Pool Name, Used MB, Available MB, and Total MB for First Pool

# Set variable to pull the Name of the pool from the output of 'nas_pool -size'.
name18=`/nas/bin/nas_pool -size id=18 | /bin/grep name | /bin/awk '{print $3}'`

# Set variable to pull the Used MB of the pool from the output of 'nas_pool -size'.
usedmb18=`/nas/bin/nas_pool -size id=18 | /bin/grep used_mb | /bin/awk '{print $3}'`

# Set variable to pull the Available MB of the pool from the output of 'nas_pool -size'.
availmb18=`/nas/bin/nas_pool -size id=18 | /bin/grep avail_mb | /bin/awk '{print $3}'`
# Set variable to pull the Total MB of the pool from the output of 'nas_pool -size'.

totalmb18=`/nas/bin/nas_pool -size id=18 | /bin/grep total_mb | /bin/awk '{print $3}'`

# Convert MB to GB, Add Comma as separator in output

# Remove '...b' variables if you don't want commas as a separator

# Convert Used MB to Used GB
usedgb18=`/bin/echo $usedmb18/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
usedgb18b=`/usr/bin/printf "%'.2f\n" "$usedgb18" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Convert Available MB to Available GB
availgb18=`/bin/echo $availmb18/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
availgb18b=`/usr/bin/printf "%'.2f\n" "$availgb18" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Convert Total MB to Total GB
totalgb18=`/bin/echo $totalmb18/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
totalgb18b=`/usr/bin/printf "%'.2f\n" "$totalgb18" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Gather Pool Name, Used MB, Available MB, and Total MB for Second Pool

# Set variable to pull the Name of the pool from the output of 'nas_pool -size'.
name10=`/nas/bin/nas_pool -size id=10 | /bin/grep name | /bin/awk '{print $3}'`

# Set variable to pull the Used MB of the pool from the output of 'nas_pool -size'.
usedmb10=`/nas/bin/nas_pool -size id=10 | /bin/grep used_mb | /bin/awk '{print $3}'`

# Set variable to pull the Available MB of the pool from the output of 'nas_pool -size'.
availmb10=`/nas/bin/nas_pool -size id=10 | /bin/grep avail_mb | /bin/awk '{print $3}'`

# Set variable to pull the Total MB of the pool from the output of 'nas_pool -size'.
totalmb10=`/nas/bin/nas_pool -size id=10 | /bin/grep total_mb | /bin/awk '{print $3}'`
 
# Convert MB to GB, Add Comma as separator in output

# Remove '...b' variables if you don't want commas as a separator
 
# Convert Used MB to Used GB
usedgb10=`/bin/echo $usedmb10/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
usedgb10b=`/usr/bin/printf "%'.2f\n" "$usedgb10" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Convert Available MB to Available GB
availgb10=`/bin/echo $availmb10/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
availgb10b=`/usr/bin/printf "%'.2f\n" "$availgb10" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Convert Total MB to Total GB
totalgb10=`/bin/echo $totalmb10/1024 | /usr/bin/bc -l | /bin/sed 's/^\./0./;s/0*$//;s/0*$//;s/\.$//'`

# Add comma separator
totalgb10b=`/usr/bin/printf "%'.2f\n" "$totalgb10" | /bin/sed 's/\.00$// ; s/\(\.[1-9]\)0$/\1/'`

# Create Output File

# If you don't want the comma separator in the output file, substitute the variable without the 'b' at the end.

# I use the semicolon rather than the comma as a separator due to the fact that I'm using the comma as a numerical separator.

# The comma could be substituted here if desired.

/bin/echo $TODAY > /scripts/NasPool.txt
/bin/echo "Name" ";" "Total GB" ";" "Used GB" ";" "Avail GB" >> /scripts/NasPool.txt
/bin/echo $name18 ";" $totalgb18b ";" $usedgb18b ";" $availgb18b >> /scripts/NasPool.txt
/bin/echo $name10 ";" $totalgb10b ";" $usedgb10b ";" $availgb10b >> /scripts/NasPool.txt

Here’s what the output looks like:

Wed Jul 17 23:56:29 JST 2013
Name (Site) ; Total GB ; Used GB ; Avail GB
NAS_Pool0_SPA ; 4,000 ; 3,356.97 ; 643.03
NAS_Pool1_SPB ; 3,000 ; 2,634.38 ; 365.62
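
If you have more than a couple of pools, the same report can also be generated by looping over the pool IDs instead of copying the per-pool variable blocks.  This is a sketch only, built on the same nas_pool output format and the same bc/printf conversion as the full script (the sed that strips trailing .00 is omitted here):

#!/bin/bash
export NAS_DB="/nas"
export LC_NUMERIC="en_US.UTF-8"

/bin/echo $(date) > /scripts/NasPool.txt
/bin/echo "Name" ";" "Total GB" ";" "Used GB" ";" "Avail GB" >> /scripts/NasPool.txt

# Add or remove pool IDs here to match the output of 'nas_pool -list'
for id in 10 18; do
    name=`/nas/bin/nas_pool -size id=$id | /bin/grep name | /bin/awk '{print $3}'`
    usedmb=`/nas/bin/nas_pool -size id=$id | /bin/grep used_mb | /bin/awk '{print $3}'`
    availmb=`/nas/bin/nas_pool -size id=$id | /bin/grep avail_mb | /bin/awk '{print $3}'`
    totalmb=`/nas/bin/nas_pool -size id=$id | /bin/grep total_mb | /bin/awk '{print $3}'`
    # MB to GB with the comma separator
    usedgb=`/usr/bin/printf "%'.2f" $(/bin/echo "$usedmb/1024" | /usr/bin/bc -l)`
    availgb=`/usr/bin/printf "%'.2f" $(/bin/echo "$availmb/1024" | /usr/bin/bc -l)`
    totalgb=`/usr/bin/printf "%'.2f" $(/bin/echo "$totalmb/1024" | /usr/bin/bc -l)`
    /bin/echo $name ";" $totalgb ";" $usedgb ";" $availgb >> /scripts/NasPool.txt
done
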
I use a cron job to schedule the report daily and copy it to our internal web server.  I then run the csv2htm.pl perl script (from http://www.jpsdomain.org/source/perl.html) to convert it to an HTML output file for our intranet report page.
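
The crontab entry looks something like this; the wrapper script name and schedule are placeholders, not my actual file names:

# Example only: run the NAS pool report at 06:00 every day
# (the HTML conversion and copy to the web server can be chained here or handled inside the script)
0 6 * * * /scripts/naspool_report.sh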

Note that I had to modify the csv2htm.pl command to accommodate the use of a semicolon instead of the default comma in a csv file.  Here is the command I use to do the conversion:

./csv2htm.pl -e -T -D ";" -i /reports/NasPool.txt -o /reports/NasPool.html

Below is what the output looks like after running the HTML conversion tool.

[Image: NASPool report after HTML conversion]

Disk space reporting on sub folders on a VNX File CIFS shared file system

I recently had a comment on a different post asking how to report on the size of multiple folders on a single file system, so I thought I’d share the method I use to create reports and alerts on the space that sub folders on file systems consume. There is a way to get to all of the file systems from the Control Station: simply navigate to /nas/quota/slot_<x>, where slot_<x> refers to the data mover (the file systems on server_2 are in the slot_2 folder).  Because we have access to those folders, we can simply run the standard unix ‘du’ command to get the amount of used disk space for each sub folder on any file system.

Running this command:

sudo du -h /nas/quota/slot_2/File_System/Sub_Folder

Will give an output that looks like this:

24K     /nas/quota/slot_2/File_System/Sub_Folder/1
0       /nas/quota/slot_2/File_System/Sub_Folder/2
16K     /nas/quota/slot_2/File_System/Sub_Folder

Each sub folder of the file system named "File_System" is listed with the space it uses. You’ll notice that I used sudo to run the du command. Unfortunately you need root access to run du on the /nas/quota directory, so you’ll have to add whatever account you log in to the Celerra with to the /etc/sudoers file. If you don’t, you’ll get the error "Sorry, user nasadmin is not allowed to execute ‘/usr/bin/du -h -s /nas/quota/slot_2/File_System’ as root on ". I generally log in to the Celerra as nasadmin, so I added that account to sudoers. To modify the file, su to root first and then (using vi) add the following to the very bottom of /etc/sudoers (substitute nasadmin with the username you will be using):

nasadmin ALL=/usr/bin/du, /usr/bin/, /sbin/, /usr/sbin, /nas/bin, /nas/bin/, /nas/sbin, /nas/sbin/, /bin, /bin/
nasadmin ALL=/nas/bin/server_df

Once that is complete, you’ll have access to run the command and can write a script that reports on the specific folders you care about. I have two scripts to share: one sends out a daily email report, and the other sends an email only if the folder sizes have crossed a certain threshold of space utilization. I have them both scheduled as cron jobs on the Celerra itself (example crontab entries are shown below), and it should be easy to modify these scripts for anyone’s environment.
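
For reference, the cron entries look something like this; the script names and times are placeholders, not my actual file names:

# Example crontab entries only -- script names and schedules are placeholders
# Daily folder size report at 07:00
0 7 * * * /home/nasadmin/scripts/fs_report.sh
# Hourly threshold alert check
0 * * * * /home/nasadmin/scripts/fs_alert.sh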

The following script will generate a report of folder and sub folder sizes for any given file system. In this example, I am reporting on three specific subfolders on a file system (Production, Development, and Test). I also use the grep and awk options at the beginning to pull out the percent full from the df command for the entire file system and include that in the subject line of the email.

#!/bin/bash
NAS_DB="/nas"
export NAS_DB
TODAY=$(date)
HOST=$(hostname)

# Percent used for the entire file system, included in the email subject line
PERCENT=`sudo df -h /nas/quota/slot_2/File_System | grep server_2 | awk '{print $5}'`

echo "File_System Folder Size Report Date: $TODAY Host:$HOST" > /home/nasadmin/report/fs_report.txt
echo " " >> /home/nasadmin/report/fs_report.txt
echo "Production" >> /home/nasadmin/report/fs_report.txt
echo " " >> /home/nasadmin/report/fs_report.txt

sudo du -h -S /nas/quota/slot_2/File_System/Production >> /home/nasadmin/report/fs_report.txt

echo " " >> /home/nasadmin/report/fs_report.txt
echo "Development" >> /home/nasadmin/report/fs_report.txt
echo " " >> /home/nasadmin/report/fs_report.txt

sudo du -h -S /nas/quota/slot_2/File_System/Development >> /home/nasadmin/report/fs_report.txt

echo " " >> /home/nasadmin/report/fs_report.txt
echo "Test" >> /home/nasadmin/report/fs_report.txt
echo " " >> /home/nasadmin/report/fs_report.txt

sudo du -h -S /nas/quota/slot_2/File_System/Test >> /home/nasadmin/report/fs_report.txt

echo " " >> /home/nasadmin/report/fs_report.txt
echo "% Remaining on File_System Filesystem" >> /home/nasadmin/report/fs_report.txt
echo " " >> /home/nasadmin/report/fs_report.txt

sudo df -h /nas/quota/slot_2/File_System/ >> /home/nasadmin/report/fs_report.txt

echo $PERCENT

mail -s "Folder Size Report ($PERCENT In Use)" youremail@domain.com < /home/nasadmin/report/fs_report.txt

Below is what the output of that script looks like. It is included in the body of the email as plain text.

File_System Folder Size Report
Date: Fri Jun 28 08:00:02 CDT 2013
Host:<celerra_name>

Production

 24K     /nas/quota/slot_2/File_System/Production/1
 0       /nas/quota/slot_2/File_System/Production/2
 16K     /nas/quota/slot_2/File_System/Production

Development

 8.0K    /nas/quota/slot_2/File_System/Development/1
 108G    /nas/quota/slot_2/File_System/Development/2
 0          /nas/quota/slot_2/File_System/Development/3
 16K     /nas/quota/slot_2/File_System/Development

Test

 0       /nas/quota/slot_2/File_System/Test/1
 422G    /nas/quota/slot_2/File_System/Test/2
 0       /nas/quota/slot_2/File_System/Test/3
 16K     /nas/quota/slot_2/File_System/Test

% Remaining on File_System Filesystem

Filesystem Size Used Avail Use% Mounted on
 server_2:/ 2.5T 529G 1.9T 22% /nasmcd/quota/slot_2

The following script will only send an email if the folder size has crossed a defined threshold. Simply change the path to the folder you want to report on and set the ‘threshold’ variable in the script to whatever you want it to be; I have mine set at 95%.

#!/bin/bash

# Get the % disk utilization from the df command
percent=`sudo df -h /nas/quota/slot_2/File_System | grep server_2 | awk '{print $5}'`

# Strip the % sign from the output value
percentvalue=`echo $percent | awk -F"%" '{print $1}'`

# Set the critical threshold that will trigger an email
threshold=95

# Compare the reported value to the threshold and send an email if needed
# (the email body reuses the report file generated by the reporting script above)
if [ $percentvalue -eq 0 ] && [ $threshold -eq 0 ]
then
  echo "Both are zero"
elif [ $percentvalue -eq $threshold ]
then
  echo "Both values are equal"
elif [ $percentvalue -gt $threshold ]
then
  mail -s "File_System Critical Disk Space Alert: $percentvalue% utilization is above the threshold of $threshold%" youremail@domain.com < /home/nasadmin/report/fs_report.txt
else
  echo "$percentvalue is less than $threshold"
fi