Category Archives: Isilon

Configuring a PowerShell to Isilon Connection with SSL

PowerShell provides an easy way to access the Isilon REST API, but in my environment I need to use true SSL validation. If you are using the Isilon's default self-signed certificate, your connection will likely fail with an error similar to the one below:

The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

Isilon generates a self-signed certificate by default.  Certificate validation for the current PowerShell session can be disabled with the script below; however, in my environment I'm not allowed to do that.  I'm including it for completeness in case it is useful for someone else.  It was not written by me and is distributed under a BSD 3-Clause license.

function Disable-SSLValidation{
<#
.SYNOPSIS
    Disables SSL certificate validation
.DESCRIPTION
    Disable-SSLValidation disables SSL certificate validation by using reflection to implement the System.Net.ICertificatePolicy class.
    Author: Matthew Graeber (@mattifestation)
    License: BSD 3-Clause
.NOTES
    Reflection is ideal in situations when a script executes in an environment in which you cannot call csc.exe to compile source code. If compiling code is an option, then implementing System.Net.ICertificatePolicy in C# and Add-Type is trivial.
.LINK
    http://www.exploit-monday.com
#>
    Set-StrictMode -Version 2
    # You have already run this function
    if ([System.Net.ServicePointManager]::CertificatePolicy.ToString() -eq 'IgnoreCerts') { Return }
    $Domain = [AppDomain]::CurrentDomain
    $DynAssembly = New-Object System.Reflection.AssemblyName('IgnoreCerts')
    $AssemblyBuilder = $Domain.DefineDynamicAssembly($DynAssembly, [System.Reflection.Emit.AssemblyBuilderAccess]::Run)
    $ModuleBuilder = $AssemblyBuilder.DefineDynamicModule('IgnoreCerts', $false)
    $TypeBuilder = $ModuleBuilder.DefineType('IgnoreCerts', 'AutoLayout, AnsiClass, Class, Public, BeforeFieldInit', [System.Object], [System.Net.ICertificatePolicy])
    $TypeBuilder.DefineDefaultConstructor('PrivateScope, Public, HideBySig, SpecialName, RTSpecialName') | Out-Null
    $MethodInfo = [System.Net.ICertificatePolicy].GetMethod('CheckValidationResult')
    $MethodBuilder = $TypeBuilder.DefineMethod($MethodInfo.Name, 'PrivateScope, Public, Virtual, HideBySig, VtableLayoutMask', $MethodInfo.CallingConvention, $MethodInfo.ReturnType, ([Type[]] ($MethodInfo.GetParameters() | % {$_.ParameterType})))
    $ILGen = $MethodBuilder.GetILGenerator()
    $ILGen.Emit([Reflection.Emit.Opcodes]::Ldc_I4_1)
    $ILGen.Emit([Reflection.Emit.Opcodes]::Ret)
    $TypeBuilder.CreateType() | Out-Null

    # Disable SSL certificate validation
    [System.Net.ServicePointManager]::CertificatePolicy = New-Object IgnoreCerts
}

While that code may work fine for some, for security reasons you may not want to or be able to disable certificate validation.  Fortunately, you can instead authenticate over SSH using your own key pair, generated with PuTTYgen.  This solution was tested with OneFS 7.2.x and PowerShell v3.

Here are the steps for creating your own key pair for PowerShell SSL authentication to Isilon:

Generate the Key

  1. Download PuTTYgen to generate the key pair for authentication.
    Open PuTTYgen and click Generate.
  2. PowerShell requires the private key in OpenSSH format, which is exported under the Conversions menu with the 'Export OpenSSH key' option.  Save the key without a passphrase.  It can be named something like "SSH.key".
  3. Next we need to save the public key.  Copy the text in the upper box labeled "Public key for pasting into OpenSSH authorized_keys file" and paste it into a new text file.  Save the file as "authorized_keys" for later use.

Copy the Key

  1. Copy the authorized_keys file to the Isilon cluster to the location of your choosing.
  2. Open an SSH connection to the Isilon cluster and create a folder for the authorized_keys file.
    Example command:  isi_for_array mkdir /root/.ssh
  3. Copy the file to all nodes. Example command: isi_for_array cp /ifs/local/authorized_keys /root/.ssh/
  4. Verify that the file is available on all of the nodes, and it's also a good idea to verify that the checksum is correct. Example command: isi_for_array md5 /root/.ssh/authorized_keys.  A consolidated sketch of these commands follows below.
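Putting the steps above together, here is a minimal consolidated sketch, assuming the public key was staged at /ifs/local/authorized_keys as in the example above (the chmod step reflects the usual OpenSSH permission expectation and may not be strictly required on your cluster):

isi_for_array mkdir -p /root/.ssh
isi_for_array cp /ifs/local/authorized_keys /root/.ssh/authorized_keys
isi_for_array chmod 600 /root/.ssh/authorized_keys
isi_for_array md5 /root/.ssh/authorized_keys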

Install PowerShell SSH Module

  1. In order to execute commands via SSH from PowerShell you will need an SSH module.  Various options exist; the module from PowerShellAdmin works fine.  It can run commands over SSH on remote hosts such as Linux or Unix computers, VMware ESX(i) hosts, or network equipment such as routers and switches that support SSH, and it works well with OpenSSH-type servers.

You can download the module (the SessionsPSv3.zip file) from the PowerShellAdmin site.

  2. Once you've downloaded it, unzip the file into an SSH-Sessions folder under C:\Windows\System32\WindowsPowerShell\v1.0\Modules.  With that module in place, we are ready to connect to the Isilon cluster with PowerShell.

Test it

Below is a PowerShell script you can use to test your connection; it simply runs a df command on the cluster.

#PowerShell Test Script
Import-Module "SSH-Sessions"
$Isilon = "<hostname>"
$KeyFile = "C:\scripts\<filename>.key"
New-SshSession -ComputerName $Isilon -Username root -KeyFile $KeyFile
Invoke-SshCommand -verbose -ComputerName $Isilon -Command df  
Remove-SshSession -ComputerName $Isilon


Isilon Mitrend Data Gathering Procedure

Mitrend is an extremely useful IT infrastructure analysis service. They provide excellent health, growth, and workload profiling assessments.  The service can process input source data from EMC and many non-EMC arrays, from host operating systems, and also from some applications.  In order to use the service, certain support files must be gathered before submitting your analysis request.  I had previously run the reports myself as an EMC customer, but sometime in the recent past that ability was removed for customers and it is now restricted to EMC employees and partners. You can of course simply send the files to your local EMC support team and they will be able to submit them for a report on your behalf.  The reports are very detailed and extremely helpful for a general health check of your array; the data is well organized into a PowerPoint slide presentation, and the raw data is also made available in Excel format.

My most recent analysis request was for Isilon, and below are the steps you'll need to take to gather the appropriate information to receive your Isilon Mitrend report.  The performance impact of running the data gather is expected to be minimal, but if the performance impact is a concern you should consider the timing of the run. I have never personally had an issue with performance when running the data gather, and the performance data is much more useful if it's collected during peak periods. The script is compatible with the virtual OneFS Simulator, so it can be tested before running it on a production cluster. If you notice performance problems while the script is running, pressing Control + C in the console window will terminate it.

Obtain & Verify isi_gather_perf file

You will need to obtain a copy of the isi_gather_perf.tgz file from your local EMC team if you don't already have one.  Verify that the file you receive is 166 KB in size. To verify that the isi_gather_perf.tgz is not corrupted or truncated, you can run the following command once the file is on the Isilon cluster:

Isilon-01# file /ifs/isi_gather_perf.tgz

Example of a good file:

Isilon-01# file /ifs/isi_gather_perf.tgz
/ifs/isi_gather_perf.tgz: gzip compressed data, from Unix, last modified: Tue Nov 18 08:33:49 2014

(the data file is ready to be executed)

Example of a corrupt file:

Isilon-01# file /ifs/isi_gather_perf.tgz
/ifs/isi_gather_perf.tgz: data

(the file is corrupt)

Once you've verified that the file is valid, you must manually run a cluster diagnostics gather. In the OneFS web interface, navigate to Cluster Management > Diagnostics > Gather Info and click the "Start Gather" button. Depending on the size of the cluster, it will take about 15 minutes. This process will automatically create a folder on the cluster called "Isilon_Support" under /ifs/data/.
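If you prefer the CLI, the same diagnostics gather can typically be started from an SSH session with the isi_gather_info command; the exact options vary between OneFS versions, so check its help output on your cluster first:

Isilon-01# isi_gather_info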

Gather Performance Info

Below is the process that I used.  Different methods of transferring files can of course be used, but I use WinSCP to copy files directly to the cluster from my Windows laptop, and I use PuTTY for CLI management of the cluster via SSH.

1. Copy the isi_gather_perf.tgz to the Isilon cluster via SCP.

2.  Log into the cluster via ssh.

3. Copy the isi_gather_perf.tgz to /ifs/data/Isilon_Support, if it’s not there already.

4. Change to the Isilon Support Directory

 Isilon-01# cd /ifs/data/Isilon_Support

5. Extract the compressed file

 Isilon-01# tar zxvf /ifs/data/Isilon_Support/isi_gather_perf.tgz

After extraction, a new directory will be automatically created within the “Isilon_Support” directory named “isi_gather_perf”.

6. Start ‘Screen’

 Isilon-01# screen

7.  Execute the performance gather.  All output data is written to /ifs/data/Isilon_Support/isi_gather_perf/.  The directory created in the previous step contains the "isi_gather_perf" script itself.  The default options gather 24 hours of performance data and then create a bundle with the gathered data.

Isilon-01# nohup python /ifs/data/Isilon_Support/isi_gather_perf/isi_gather_perf

8. At the end of the run, the script will create a .tar.gz archive of the captured data in /ifs/data/Isilon_Support/isi_gather_perf/. Gather the output files and send them to EMC.  Once EMC submits the files to Mitrend, it can take up to 24 hours for them to be processed.
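Because the default capture runs a full 24 hours, you will probably want to detach from the screen session started in step 6 and come back to it later. This is standard screen behavior rather than anything Isilon-specific: press Ctrl+A followed by d to detach, then reattach from a later SSH session:

Isilon-01# screen -ls
Isilon-01# screen -r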

Notes:

Below is a list of the available command options.  You may want to change the sampling frequency and the length of the capture with the -i and -r options; an example follows the option list.

 Usage: isi_gather_perf [options]

 Options:
 -h, --help Show this help message and exit
 -v, --version Print Version
 -d, --debug Enable debug log output Logs: /tmp/isi_gather_perf.log
 -i INTERVAL, --interval=INTERVAL
 Interval in seconds to sample performance data
 -r REPEAT, --repeat=REPEAT
 Number of times to repeat specified interval.
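
As a hypothetical example, to capture roughly four hours of data sampled every 30 seconds instead of the 24 hour default, the run might look like the line below (480 repeats of a 30 second interval; verify the option behavior against the help output for your version before relying on it):

Isilon-01# nohup python /ifs/data/Isilon_Support/isi_gather_perf/isi_gather_perf -i 30 -r 480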

Logs:

The logs are located in /ifs/data/Isilon_Support/isi_gather_perf/gathers/ and by default are set to debug level, so they are extremely verbose.

Output:

The output from isi_gather_info will go to /ifs/data/Isilon_Support/pkg/
The output from isi_gather_perf will be in /ifs/data/Isilon_Support/isi_gather_perf/gathers/
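
A quick way to confirm the gathers completed and to see how large the output bundles are before sending them to EMC (using the output paths above):

Isilon-01# ls -lh /ifs/data/Isilon_Support/pkg/
Isilon-01# du -sh /ifs/data/Isilon_Support/isi_gather_perf/gathers/*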


Scripting a VNX/Celerra to Isilon Data Migration with EMCOPY and Perl


Below is a collection of Perl scripts that make data migration from VNX/Celerra file systems to an Isilon system much easier.  I've already outlined the process of using isi_vol_copy_vnx in a prior post; however, EMCOPY may be more appropriate in a specific use case, or simply more familiar to administrators for support and management of the tool.  Note that while I have tested these scripts in my environment, they may need some modification for your use.  I recommend running them in a test environment prior to using them in production.

EMCOPY can be downloaded directly from DellEMC with the link below.  You will need to be a registered user in order to download it.

https://download.emc.com/downloads/DL14101_EMCOPY_File_migration_tool_4.17.exe

What is EMCOPY?

For those who haven't used it before, EMCOPY is an application that allows you to copy files, directories, and subdirectories between NTFS partitions while maintaining security information, an improvement over the similar Robocopy tool that many veteran system administrators are familiar with. It allows you to back up the file and directory security ACLs, owner information, and audit information from a source directory to a destination directory.

Notes about using EMCOPY:

1) In my testing, EMCopy has shown up to a 25% performance improvement when copying CIFS data compared to Robocopy while using the same number of threads. I recommend using EMCopy over Robocopy as it has other feature improvements as well, for instance sidmapfile, which allows migrating local user data to Active Directory users. It’s available in version 4.17 or later.  Robocopy is also not an EMC supported tool, while EMCOPY is.

2) Unlike isi_vol_copy_vnx, EMCOPY is a windows application and must be run from a windows host.  I highly recommend a dedicated server for any migration tasks.  The isi_vol_copy_vnx utility runs directly on the Isilon OneFS CLI which eliminates any intermediary copy hosts, theoretically providing a much faster solution.

3) There are multiple methods to compare data sizes between the source and destination. I would recommend maintaining a log of each EMCopy session as that log indicates how much data was copied and if there were any errors.

4) If you are migrating over a WAN connection, I recommend first restoring from tape and then using an incremental data sync with EMCOPY.

Getting Started

I’ve divided this post up into a four step process.  Each step includes the relevant script and a description of the process.

  • Export File System information (export_fs.pl  Script)

Export file system information from the Celerra & generate the Isilon commands to re-create them.

  • Export SMB information (export_smb.pl Script)

Export SMB share information from the Celerra & generate the Isilon commands to re-create them.

  • Export NFS information (export_nfs.pl Script)

Export NFS information from the Celerra & generate the Isilon commands to re-create them.

  • Create the EMCOPY migration script (EMCOPY_create.pl Script)

Perform the data migration with EMCOPY using the output from this script.

Exporting information from the Celerra to run on the Isilon

These Perl scripts are designed to be run directly on the Control Station and will subsequently create shell scripts that will run on the Isilon to assist with the migration.  You will need to manually copy the output files from the VNX/Celerra to the Isilon. The first three steps I’ve outlined do not move the data or permissions, they simply run a nas_fs query on the Celerra to generate the Isilon script files that actually make the directories, create quotas, and create the NFS and SMB shares. They are “scripts that generate scripts”. 🙂

Before you run the scripts, make sure you edit them to correctly specify the appropriate Data Mover.  Once complete, you'll end up with three .sh files to move to your Isilon cluster.  They should be run in the same order they were created; a sketch of the overall workflow follows below.
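Here is a rough sketch of that workflow; the Isilon hostname and the numeric suffixes on the generated file names are placeholders (the suffix comes from the Perl process ID):

# On the VNX/Celerra Control Station, generate the three Isilon scripts
perl export_fs.pl
perl export_smb.pl
perl export_nfs.pl

# Copy the generated scripts to the Isilon cluster (actual file names will vary)
scp create_filesystems_12345.sh create_smb_exports_12345.sh create_nfs_exports_12345.sh root@isilon01:/ifs/data/

# On the Isilon, run them in the order they were created
sh /ifs/data/create_filesystems_12345.sh
sh /ifs/data/create_smb_exports_12345.sh
sh /ifs/data/create_nfs_exports_12345.sh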

Note that EMC occasionally changes the syntax of certain commands when they update OneFS.  Below is a sample of the Isilon-specific commands that are generated by the first three scripts.  I'd recommend verifying that the syntax is still correct for your version of OneFS, and then modifying the scripts if necessary with the new syntax.  I ran a quick test with OneFS 8.0.0.2, and the base commands and switches appear to be compatible.

isi quota create --directory --path="/ifs/data1" --enforcement --hard-threshold="1032575M" --container=1
isi smb share create --name="Data01" --path="/ifs/Data01/data"
isi nfs exports create --path="/Data01/data" --roclient="Data" --rwclient="Data" --rootclient="Data"

 

Step 1 – Export File system information

This script will generate a list of the file system names from the Celerra and place the appropriate Isilon commands that create the directories and quotas into a file named "create_filesystems_xx.sh".

#!/usr/bin/perl

# Export_fs.pl – Export File system information
# Export file system information from the Celerra & generate the Isilon commands to re-create them.

use strict;
my $nas_fs="nas_fs -query:inuse=y:type=uxfs:isroot=false -fields:ServersNumeric,Id,Name,SizeValues -format:'%s,%s,%s,%sQQQQQQ'";
my @data;

open (OUTPUT, ">> create_filesystems_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nas_fs |") || die "cannot open $nas_fs: $!\n\n";

while (<CMD>)

{
   chomp;
   @data = split("QQQQQQ", $_);
}

close(CMD);
foreach (@data)

{
   my ($dm, $id, $dir,$size,$free,$used_per, $inodes) = split(",", $_);
   print OUTPUT "mkdir /ifs/$dir\n";
   print OUTPUT "chmod 755 /ifs/$dir\n";
   print OUTPUT "isi quota create --directory --path=\"/ifs/$dir\" --enforcement --hard-threshold=\"${size}M\" --container=1\n";
}

The Output of the script looks like this (this is an excerpt from the create_filesystems_xx.sh file):

isi quota create --directory --path="/ifs/data1" --enforcement --hard-threshold="1032575M" --container=1
mkdir /ifs/data1
chmod 755 /ifs/data1
isi quota create --directory --path="/ifs/data2" --enforcement --hard-threshold="20104M" --container=1
mkdir /ifs/data2
chmod 755 /ifs/data2
isi quota create --directory --path="/ifs/data3" --enforcement --hard-threshold="100774M" --container=1
mkdir /ifs/data3
chmod 755 /ifs/data3

The output script can now be copied to and run from the Isilon.

Step 2 – Export SMB Information

This script will generate a list of the smb share names from the Celerra and place the appropriate Isilon commands into a file named “create_smb_exports_xx.sh”.

#!/usr/bin/perl

# Export_smb.pl – Export SMB/CIFS information
# Export SMB share information from the Celerra & generate the Isilon commands to re-create them.

use strict;

my $datamover = "server_8";
my $prot = "cifs";
my $nfs_cli = "server_export $datamover -list -P $prot -v |grep share";

open (OUTPUT, ">> create_smb_exports_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nfs_cli |") || die "cant open $nfs_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $path = $vars[2];
   my $name = $vars[1];

   $path =~ s/^"/\"\/ifs/;
   print  OUTPUT "isi smb share create --name=$name --path=$path\n";
}

close(CMD);

The Output of the script looks like this (this is an excerpt from the create_smb_exports_xx.sh file):

isi smb share create --name="Data01" --path="/ifs/Data01/data"
isi smb share create --name="Data02" --path="/ifs/Data02/data"
isi smb share create --name="Data03" --path="/ifs/Data03/data"
isi smb share create --name="Data04" --path="/ifs/Data04/data"
isi smb share create --name="Data05" --path="/ifs/Data05/data"

 The output script can now be copied to and run from the Isilon.

Step 3 – Export NFS Information

This script will generate a list of the NFS export names from the Celerra and place the appropriate Isilon commands into a file named “create_nfs_exports_xx.sh”.

#!/usr/bin/perl

# Export_nfs.pl – Export NFS information
# Export NFS information from the Celerra & generate the Isilon commands to re-create them.

use strict;

my $datamover = "server_8";
my $prot = "nfs";
my $nfs_cli = "server_export $datamover -list -P $prot -v |grep export";

open (OUTPUT, ">> create_nfs_exports_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nfs_cli |") || die "cant open $nfs_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $test = @vars;
   my $i=2;
   my ($ro, $rw, $root, $access, $name);
   my $path=$vars[1];

   for ($i; $i < $test; $i++)
   {
      my ($type, $value) = split("=", $vars[$i]);

      if ($type eq "ro") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $ro .= " --roclient=\"$_\""; }
      }
      if ($type eq "rw") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $rw .= " --rwclient=\"$_\""; }
      }

      if ($type eq "root") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $root .= " --rootclient=\"$_\""; }
      }

      if ($type eq "access") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $ro .= " --roclient=\"$_\""; }
      }

      if ($type eq "name") { $name=$value; }
   }
   print OUTPUT "isi nfs exports create --path=$path $ro $rw $root\n";
}

close(CMD);

The Output of the script looks like this (this is an excerpt from the create_nfs_exports_xx.sh file):

isi nfs exports create --path="/Data01/data" --roclient="Data" --roclient="BACKUP" --rwclient="Data" --rwclient="BACKUP" --rootclient="Data" --rootclient="BACKUP"
isi nfs exports create --path="/Data02/data" --roclient="Data" --roclient="BACKUP" --rwclient="Data" --rwclient="BACKUP" --rootclient="Data" --rootclient="BACKUP"
isi nfs exports create --path="/Data03/data" --roclient="Backup" --roclient="Data" --rwclient="Backup" --rwclient="Data" --rootclient="Backup" --rootclient="Data"
isi nfs exports create --path="/Data04/data" --roclient="Backup" --roclient="ProdGroup" --rwclient="Backup" --rwclient="ProdGroup" --rootclient="Backup" --rootclient="ProdGroup"
isi nfs exports create --path="/" --roclient="127.0.0.1" --roclient="127.0.0.1" --roclient="127.0.0.1" --rootclient="127.0.0.1"

The output script can now be copied to and run from the Isilon.

Step 4 – Generate the EMCOPY commands

Now that the scripts have been generated and run on the Isilon, the next step is the actual data migration using EMCOPY.  This script will generate the commands for a migration script, which should be run from a Windows server that has access to both the source and destination locations. It should be run after the previous three scripts have successfully completed.

This script outputs the commands directly to the screen; they can then be cut and pasted into a Windows batch script on your migration server.
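Alternatively, instead of cutting and pasting, you can redirect the script's output straight into a batch file and copy that file to your migration server (the output file name here is arbitrary):

perl EMCOPY_create.pl > emcopy_migration.bat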

#!/usr/bin/perl

# EMCOPY_create.pl – Create the EMCOPY migration script
# Perform the data migration with EMCOPY using the output from this script.

use strict;

my $datamover = "server_4";
my $source = "\\\\celerra_path\\";
my $dest = "\\\\isilon_path\\";
my $prot = "cifs";
my $nfs_cli = "server_export $datamover -list -P $prot -v |grep share";

open (OUTPUT, ">> create_smb_exports_$$.sh") || die "cant open output: $!\n\n";
open (CMD, "$nfs_cli |") || die "cant open $nfs_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $path = $vars[2];
   my $name = $vars[1];

   $name =~ s/\"//g;
   $path =~ s/^/\/ifs/;

   my $log = "c:\\" . $name . "";
   $log =~ s/ //;
   my $src = $source . $name;
   my $dst = $dest . $name;

   print "emcopy \"$src\" \"$dst\" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:$log\n";
}

close(CMD);

The Output of the script looks like this (this is an excerpt from the screen output):

emcopy "\\celerra_path\Data01" "\\isilon_path\billing_tmip_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_tmip_01
emcopy "\\celerra_path\Data02" "\\isilon_path\billing_trxs_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_trxs_01
emcopy "\\celerra_path\Data03" "\\isilon_path\billing_vru_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_vru_01
emcopy "\\celerra_path\Data04" "\\isilon_path\billing_rpps_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_rpps_01

That's it.  Good luck with your data migration, and I hope this has been of some assistance.  Special thanks to Mark May and his virtualstoragezone blog; he published the original versions of these scripts there.

Custom Reporting with Isilon OneFS API Calls

Have you ever wondered where the metrics you see in InsightIQ come from?  InsightIQ uses OneFS API calls to gather information, and you can use the same API calls for custom reporting and scripting.  Whether you’re interested in performance metrics relating to cluster capacity, CPU utilization, network latency & throughput, or disk activities, you have access to all of that information.

I've already spent a good deal of time on making this work and investigating options for presenting the gathered data in a useful manner.  This is really just the beginning; I'm hoping to take more time later to work on additional custom script examples that gather specific info and have useful output options.  For now, this should get anyone started who's interested in trying this out.  This post also includes a list of the API calls you can make.  I cover these basic steps to get you started:

  1. How to authenticate to the Isilon Cluster using cookies.
  2. How to make the API call to the Isilon to generate the JSON output.
  3. How to install the jq utility to parse JSON output files.
  4. Some examples of using the jq utility to parse the JSON output.

Authentication

First I’ll go over how to authenticate to the Isilon cluster using cookies. You’ll have to create a credentials file first.  Name the file auth.json and enter the following info into it:

{
  "username":"root",
  "password":"<password>",
  "services":["platform","namespace"]
}

Note that I am using root for this example, but it would certainly be possible to create a separate account on the Isilon to use for this purpose.  Just give the account the Platform API and Statistics roles.

Once the file is created, you can make a session call to get a cookie:

curl -v -k --insecure -H "Content-Type: application/json" -c cookiefile -X POST -d @auth.json https://10.10.10.10:8080/session/1/session

The output will be over one page long, but you’re looking to verify that the cookie was generated.  You should see two lines similar to this:

* Added cookie isisessid="123456-xxxx-xxxx-xxxx-a193333a99bc" for domain 10.10.10.10, path /, expire 0
< Set-Cookie: isisessid=123456-xxxx-xxxx-xxxx-a193333a99bc; path=/; HttpOnly; Secure

Run a test command to gather data

Below is a sample string to gather some sample statistics.  Later in this document I’ll review all of the possible strings you can use to gather info on CPU, disk, performance, etc.

curl -k --insecure -b cookiefile 'https://10.10.10.10:8080/platform/1/statistics/current?key=ifs.bytes.total'

The command above generates the following json output:

{
"stats" :
[

{
"devid" : 0,
"error" : null,
"error_code" : null,
"key" : "ifs.bytes.total",
"time" : 1398840008,
"value" : 433974304096256
}
]
}

Install JQ

Now that we have data in JSON format, we need to be able to parse it and change it into a more readable format.  I'm looking to convert it to CSV.  There are many different scripts, tools, and languages available for that purpose online.  I personally looked for a method that can be used in a simple bash script, and jq is a good solution for that.  I use Cygwin on a Windows box for my scripts, but you can download any version you like for your flavor of OS.  You can download the jq parser here: https://github.com/stedolan/jq/releases.

Instructions for the installation of jq for Cygwin:

  1. Download the latest source tarball for jq from https://stedolan.github.io/jq/download/
  2. Open Cygwin to create the folder you’d like to extract it in
  3. Copy the ‘jq-1.5.tar.gz’ file into your folder to make it available within Cygwin
  4. From a Cygwin command shell, enter the following to uncompress the tarball file : ‘tar -xvzf jq-1.5.tar.gz’
  5. Change folder location to the uncompressed folder e.g. ‘cd /jq-1.5’
  6. Next enter ‘./configure’ and wait for the command to complete (about 5 minutes)
  7. Then enter the commands ‘make’, followed by ‘make install’
  8. You’re done.
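
To confirm the build and installation worked, check that jq is on your path and responds to a simple test:

jq --version
echo '{"ok":true}' | jq '.'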

Once jq is installed, we can play around with using it to make our JSON output more readable.  One simple way to turn it into comma-separated output is with this command:

cat sample.json | jq ".stats | .[]" --compact-output

It will turn json output like this:

{
"stats" :
[

{
"devid" : 8,
"error" : null,
"error_code" : null,
"key" : "node.ifs.bytes.used",
"values" :
[

{
"time" : 1498745964,
"value" : 51694140276736
},
{
"time" : 1498746264,
"value" : 51705407610880
}
]
},

Into single line, comma separated output like this:

{"devid":8,"error":null,"error_code":null,"key":"node.ifs.bytes.used","values":[{"time":1498745964,"value":51694140276736},{"time":1498746264,"value":51705407610880}]}

You can further improve the output by removing the quote marks with sed:

cat sample.json | jq ".stats | .[]" --compact-output | sed 's/\"//g'

At this point the data is formatted well enough to easily modify it to suit my needs in Excel.

{devid:8,error:null,error_code:null,key:node.ifs.bytes.used,values:[{time:1498745964,value:51694140276736},{time:1498746264,value:51705407610880}]}

JQ CSV

Using the --compact-output switch isn't the only way to manipulate the data, and probably not the best way.  I haven't had much time to work with the @csv option in jq, but it looks very promising for this.  Below are a few notes on using it; I will include more samples in an edit to this post or a new post in the future that relate more directly to using it with the Isilon-generated output.  I prefer to use CSV files for report output due to the ease of working with them and manipulating them with scripts.

Order is significant for csv, but not for JSON fields.  Specify the mapping from JSON named fields to csv positional fields by constructing an array of those fields, using [.date,.count,.title]:

input: { "date": "2011-01-12 13:14", "count": 17, "title": "He's dead, Jim!" }
jq -r '[.date,.count,.title] | @csv'
"2011-01-12 13:14",17,"He's dead, Jim!"

You also may want to apply this to an array of objects, in which case you’ll need to use the .[] operator, which streams each item of an array in turn:

jq -r '.[] | [.date, .count, .title] | @csv'
"2017-06-12 08:19",17,"You're going too fast!"
"2017-06-15 11:50",4711,"That's impossible"?"
"2017-06-19 00:01",,"I can't drive 55!"

You'll likely also want the CSV output to include the field names at the top. The easiest way to do this is to add them manually:

jq -r '["date", "count", "title"], (.[] | [.date, .count, .title]) | @csv'
"date","count","title"
"2017-06-12 08:19",17,"You're going too fast!"
"2017-06-15 11:50",4711,"That's impossible"?"
"2017-06-19 00:01",,"I can't drive 55!"

We can avoid repeating the same list of field names by reusing the header array to lookup the fields in each object.

jq -r '["date", "count", "title"] as $fields| $fields, (.[] | [.[$fields[]]]) | @csv'

Here it is as a function, with a slightly nicer field syntax, using path():

def csv(fs): [path(null|fs)[]] as $fields| $fields, (.[] | [.[$fields[]]]) | @csv;
USAGE: csv(.date, .count, .title)

If the input is not an array of objects but just a sequence of objects, then we can omit the .[], but then we can't get the header at the top.  It's best to convert it to an array using the --slurp/-s option (or put [] around it if it's generated within jq).
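Here is a rough sketch of how @csv can be applied to the Isilon output shown earlier. The first command assumes output from a /statistics/current call (flat devid/key/time/value fields, as in the sample above); the second assumes a /statistics/history call, where the samples are nested under a values array (history.json is just a placeholder file name):

# Current stats: one CSV row per stat entry
cat sample.json | jq -r '.stats[] | [.devid, .key, .time, .value] | @csv'

# History stats: one CSV row per sample, repeating the devid and key on each row
cat history.json | jq -r '.stats[] | . as $s | $s.values[] | [$s.devid, $s.key, .time, .value] | @csv'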

More to come on formatting JSON for Isilon in the future…

Isilon API calls

All of these specific API calls were pulled from the EMC community forum; I didn't compose this list myself.  It's a list of the calls that InsightIQ makes to the OneFS API.  They can be queried in exactly the same way that I demonstrated in the examples earlier in this post.

Please note the following about the API calls regarding time ranges:

  1. Calls to the "/platform/1/statistics/current" API do not contain query parameters for a &begin and &end time range.
  2. Calls to the "/platform/1/statistics/history" API always contain query parameters for a &begin and &end POSIX time range (see the example following this list).
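
For example, a history query for roughly the last hour of CPU idle data could look like the following; begin and end are POSIX timestamps (the values shown are placeholders), which you can generate with the date command under Cygwin:

date -d '1 hour ago' +%s
date +%s

curl -k -b cookiefile 'https://10.10.10.10:8080/platform/1/statistics/history?key=node.cpu.idle.avg&devid=all&interval=30&begin=1498742400&end=1498746000'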

Capacity

https://10.10.10.10:8080/platform/1/statistics/current?key=ifs.bytes.total&key=ifs.ssd.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.bytes.avail&key=ifs.ssd.bytes.avail&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&key=node.ifs.bytes.used&key=node.disk.count&key=node.cpu.count&key=node.uptime&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=ifs.bytes.avail&key=ifs.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.ssd.bytes.avail&key=ifs.ssd.bytes.total&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.ifs.bytes.used.all&key=node.disk.ifs.bytes.total.all&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.out.rate&key=node.ifs.bytes.in.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.total&key=node.ifs.ssd.bytes.used&key=node.ifs.ssd.bytes.total&key=node.ifs.bytes.used&devid=all&degraded=true&interval=300&memory_only=true

CPU

https://10.10.10.10:8080/platform/1/statistics/history?key=node.cpu.idle.avg&devid=all&degraded=true&interval=30&memory_only=true

Network

https://10.10.10.10:8080/platform/1/statistics/current?key=node.net.iface.name.0&key=node.net.iface.name.1&key=node.net.iface.name.2&key=node.net.iface.name.3&key=node.net.iface.name.4&key=node.net.iface.name.5&key=node.net.iface.name.6&key=node.net.iface.name.7&key=node.net.iface.name.8&key=node.net.iface.name.9&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.ext.packets.in.rate&key=node.net.ext.errors.in.rate&key=node.net.ext.bytes.out.rate&key=node.net.ext.errors.out.rate&key=node.net.ext.bytes.in.rate&key=node.net.ext.packets.out.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.iface.bytes.out.rate.0&key=node.net.iface.bytes.out.rate.1&key=node.net.iface.bytes.out.rate.2&key=node.net.iface.bytes.out.rate.3&key=node.net.iface.bytes.out.rate.4&key=node.net.iface.bytes.out.rate.5&key=node.net.iface.bytes.out.rate.6&key=node.net.iface.bytes.out.rate.7&key=node.net.iface.bytes.out.rate.8&key=node.net.iface.bytes.out.rate.9&key=node.net.iface.errors.in.rate.0&key=node.net.iface.errors.in.rate.1&key=node.net.iface.errors.in.rate.2&key=node.net.iface.errors.in.rate.3&key=node.net.iface.errors.in.rate.4&key=node.net.iface.errors.in.rate.5&key=node.net.iface.errors.in.rate.6&key=node.net.iface.errors.in.rate.7&key=node.net.iface.errors.in.rate.8&key=node.net.iface.errors.in.rate.9&key=node.net.iface.errors.out.rate.0&key=node.net.iface.errors.out.rate.1&key=node.net.iface.errors.out.rate.2&key=node.net.iface.errors.out.rate.3&key=node.net.iface.errors.out.rate.4&key=node.net.iface.errors.out.rate.5&key=node.net.iface.errors.out.rate.6&key=node.net.iface.errors.out.rate.7&key=node.net.iface.errors.out.rate.8&key=node.net.iface.errors.out.rate.9&key=node.net.iface.packets.in.rate.0&key=node.net.iface.packets.in.rate.1&key=node.net.iface.packets.in.rate.2&key=node.net.iface.packets.in.rate.3&key=node.net.iface.packets.in.rate.4&key=node.net.iface.packets.in.rate.5&key=node.net.iface.packets.in.rate.6&key=node.net.iface.packets.in.rate.7&key=node.net.iface.packets.in.rate.8&key=node.net.iface.packets.in.rate.9&key=node.net.iface.bytes.in.rate.0&key=node.net.iface.bytes.in.rate.1&key=node.net.iface.bytes.in.rate.2&key=node.net.iface.bytes.in.rate.3&key=node.net.iface.bytes.in.rate.4&key=node.net.iface.bytes.in.rate.5&key=node.net.iface.bytes.in.rate.6&key=node.net.iface.bytes.in.rate.7&key=node.net.iface.bytes.in.rate.8&key=node.net.iface.bytes.in.rate.9&key=node.net.iface.packets.out.rate.0&key=node.net.iface.packets.out.rate.1&key=node.net.iface.packets.out.rate.2&key=node.net.iface.packets.out.rate.3&key=node.net.iface.packets.out.rate.4&key=node.net.iface.packets.out.rate.5&key=node.net.iface.packets.out.rate.6&key=node.net.iface.packets.out.rate.7&key=node.net.iface.packets.out.rate.8&key=node.net.iface.packets.out.rate.9&devid=all&degraded=true&interval=30&memory_only=true

Disk

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.count&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.name.0&key=node.disk.name.1&key=node.disk.name.2&key=node.disk.name.3&key=node.disk.name.4&key=node.disk.name.5&key=node.disk.name.6&key=node.disk.name.7&key=node.disk.name.8&key=node.disk.name.9&key=node.disk.name.10&key=node.disk.name.11&key=node.disk.name.12&key=node.disk.name.13&key=node.disk.name.14&key=node.disk.name.15&key=node.disk.name.16&key=node.disk.name.17&key=node.disk.name.18&key=node.disk.name.19&key=node.disk.name.20&key=node.disk.name.21&key=node.disk.name.22&key=node.disk.name.23&key=node.disk.name.24&key=node.disk.name.25&key=node.disk.name.26&key=node.disk.name.27&key=node.disk.name.28&key=node.disk.name.29&key=node.disk.name.30&key=node.disk.name.31&key=node.disk.name.32&key=node.disk.name.33&key=node.disk.name.34&key=node.disk.name.35&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&key=node.ifs.bytes.used&key=node.disk.count&key=node.cpu.count&key=node.uptime&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.slow.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.busy.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.ifs.bytes.used.all&key=node.disk.ifs.bytes.total.all&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.queue.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.in.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.out.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

Complete List of API calls made by InsightIQ

Here is a complete list of all of the API calls that InsightIQ makes to the Isilon cluster using the OneFS API. For a complete reference of what these APIs actually do, you can refer to the OneFS API Info Hub and the OneFS API Reference documentation.

https://10.10.10.10:8080/platform/1/cluster/config

https://10.10.10.10:8080/platform/1/cluster/identity

https://10.10.10.10:8080/platform/1/cluster/time

https://10.10.10.10:8080/platform/1/dedupe/dedupe-summary

https://10.10.10.10:8080/platform/1/dedupe/reports

https://10.10.10.10:8080/platform/1/fsa/path

https://10.10.10.10:8080/platform/1/fsa/results

https://10.10.10.10:8080/platform/1/job/types

https://10.10.10.10:8080/platform/1/license/licenses

https://10.10.10.10:8080/platform/1/license/licenses/InsightIQ

https://10.10.10.10:8080/platform/1/quota/reports

https://10.10.10.10:8080/platform/1/snapshot/snapshots-summary

https://10.10.10.10:8080/platform/1/statistics/current?key=cluster.health&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=ifs.bytes.total&key=ifs.ssd.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.bytes.avail&key=ifs.ssd.bytes.avail&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.count&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.name.0&key=node.disk.name.1&key=node.disk.name.2&key=node.disk.name.3&key=node.disk.name.4&key=node.disk.name.5&key=node.disk.name.6&key=node.disk.name.7&key=node.disk.name.8&key=node.disk.name.9&key=node.disk.name.10&key=node.disk.name.11&key=node.disk.name.12&key=node.disk.name.13&key=node.disk.name.14&key=node.disk.name.15&key=node.disk.name.16&key=node.disk.name.17&key=node.disk.name.18&key=node.disk.name.19&key=node.disk.name.20&key=node.disk.name.21&key=node.disk.name.22&key=node.disk.name.23&key=node.disk.name.24&key=node.disk.name.25&key=node.disk.name.26&key=node.disk.name.27&key=node.disk.name.28&key=node.disk.name.29&key=node.disk.name.30&key=node.disk.name.31&key=node.disk.name.32&key=node.disk.name.33&key=node.disk.name.34&key=node.disk.name.35&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&key=node.ifs.bytes.used&key=node.disk.count&key=node.cpu.count&key=node.uptime&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.net.iface.name.0&key=node.net.iface.name.1&key=node.net.iface.name.2&key=node.net.iface.name.3&key=node.net.iface.name.4&key=node.net.iface.name.5&key=node.net.iface.name.6&key=node.net.iface.name.7&key=node.net.iface.name.8&key=node.net.iface.name.9&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=cluster.dedupe.estimated.saved.bytes&key=cluster.dedupe.logical.deduplicated.bytes&key=cluster.dedupe.logical.saved.bytes&key=cluster.dedupe.estimated.deduplicated.bytes&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=ifs.bytes.avail&key=ifs.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.ssd.bytes.avail&key=ifs.ssd.bytes.total&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.ftp&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.hdfs&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.http&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.nfs3&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.nfs4&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.nlm&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.papi&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.siq&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.smb1&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.smb2&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.ftp&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.hdfs&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.http&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.nfs&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.nlm&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.papi&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.siq&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.smb&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.ftp&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.hdfs&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.http&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.nfs3&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.nfs4&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.nlm&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.papi&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.siq&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.smb1&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.smb2&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.cpu.idle.avg&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.slow.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.busy.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.ifs.bytes.used.all&key=node.disk.ifs.bytes.total.all&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.queue.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.in.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.out.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.out.rate&key=node.ifs.bytes.in.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.total&key=node.ifs.ssd.bytes.used&key=node.ifs.ssd.bytes.total&key=node.ifs.bytes.used&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.cache&key=node.ifs.cache.l3.data.read.miss&key=node.ifs.cache.l3.meta.read.hit&key=node.ifs.cache.l3.data.read.hit&key=node.ifs.cache.l3.data.read.start&key=node.ifs.cache.l3.meta.read.start&key=node.ifs.cache.l3.meta.read.miss&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.blocked&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.blocked.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.contended&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.contended.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.deadlocked&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.deadlocked.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.getattr&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.getattr.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.link&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.link.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lock&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lock.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lookup&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lookup.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.read&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.read.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.rename&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.rename.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.setattr&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.setattr.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.unlink&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.unlink.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.write&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.write.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.je.num_workers&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.ext.packets.in.rate&key=node.net.ext.errors.in.rate&key=node.net.ext.bytes.out.rate&key=node.net.ext.errors.out.rate&key=node.net.ext.bytes.in.rate&key=node.net.ext.packets.out.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.iface.bytes.out.rate.0&key=node.net.iface.bytes.out.rate.1&key=node.net.iface.bytes.out.rate.2&key=node.net.iface.bytes.out.rate.3&key=node.net.iface.bytes.out.rate.4&key=node.net.iface.bytes.out.rate.5&key=node.net.iface.bytes.out.rate.6&key=node.net.iface.bytes.out.rate.7&key=node.net.iface.bytes.out.rate.8&key=node.net.iface.bytes.out.rate.9&key=node.net.iface.errors.in.rate.0&key=node.net.iface.errors.in.rate.1&key=node.net.iface.errors.in.rate.2&key=node.net.iface.errors.in.rate.3&key=node.net.iface.errors.in.rate.4&key=node.net.iface.errors.in.rate.5&key=node.net.iface.errors.in.rate.6&key=node.net.iface.errors.in.rate.7&key=node.net.iface.errors.in.rate.8&key=node.net.iface.errors.in.rate.9&key=node.net.iface.errors.out.rate.0&key=node.net.iface.errors.out.rate.1&key=node.net.iface.errors.out.rate.2&key=node.net.iface.errors.out.rate.3&key=node.net.iface.errors.out.rate.4&key=node.net.iface.errors.out.rate.5&key=node.net.iface.errors.out.rate.6&key=node.net.iface.errors.out.rate.7&key=node.net.iface.errors.out.rate.8&key=node.net.iface.errors.out.rate.9&key=node.net.iface.packets.in.rate.0&key=node.net.iface.packets.in.rate.1&key=node.net.iface.packets.in.rate.2&key=node.net.iface.packets.in.rate.3&key=node.net.iface.packets.in.rate.4&key=node.net.iface.packets.in.rate.5&key=node.net.iface.packets.in.rate.6&key=node.net.iface.packets.in.rate.7&key=node.net.iface.packets.in.rate.8&key=node.net.iface.packets.in.rate.9&key=node.net.iface.bytes.in.rate.0&key=node.net.iface.bytes.in.rate.1&key=node.net.iface.bytes.in.rate.2&key=node.net.iface.bytes.in.rate.3&key=node.net.iface.bytes.in.rate.4&key=node.net.iface.bytes.in.rate.5&key=node.net.iface.bytes.in.rate.6&key=node.net.iface.bytes.in.rate.7&key=node.net.iface.bytes.in.rate.8&key=node.net.iface.bytes.in.rate.9&key=node.net.iface.packets.out.rate.0&key=node.net.iface.packets.out.rate.1&key=node.net.iface.packets.out.rate.2&key=node.net.iface.packets.out.rate.3&key=node.net.iface.packets.out.rate.4&key=node.net.iface.packets.out.rate.5&key=node.net.iface.packets.out.rate.6&key=node.net.iface.packets.out.rate.7&key=node.net.iface.packets.out.rate.8&key=node.net.iface.packets.out.rate.9&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.ftp&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.hdfs&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.http&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.nfs3&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.nfs4&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.nlm&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.papi&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.siq&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.smb1&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.smb2&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/keys

https://10.10.10.10:8080/platform/1/statistics/protocols

https://10.10.10.10:8080/platform/1/storagepool/nodepools

https://10.10.10.10:8080/platform/1/storagepool/tiers

https://10.10.10.10:8080/platform/1/storagepool/unprovisioned

https://10.10.10.10:8080/session/1/session

Using isi_vol_copy_vnx for VNX to Isilon data migration

For most data migrations from VNX to Isilon, EMC recommends the OneFS migration tool isi_vol_copy_vnx. It can often be more efficient than host-based tools (such as EMCopy and Robocopy) because the performance of host-based tools is dependent on the network connectivity of the host, while isi_vol_copy_vnx depends only on the network connection between the source device and the Isilon cluster. Below is a basic outline of the syntax, the steps required, and a few troubleshooting tips.

You might consider migrating data with a host based tool if one or more of the following conditions apply to your migration:

  • The source device and Isilon cluster are on separate networks.
  • Security restrictions prevent the source device and Isilon cluster from communicating directly.

Command Syntax:

isi_vol_copy_vnx --help

The source must contain a source host, a colon, and then the absolute source path name.

 isi_vol_copy_vnx <src_filer>:<src_dir> <dest_dir>
                 [-sa user: | user:password]
                 [-sport ndmp_src_port]
                 [-dport ndmp_data_port]
                 [-full | -incr] [-level_based]
                 [-dhost dest_ip_addr]
                 [-no_acl]

 isi_vol_copy_vnx -list [migration-id] | [[-detail] [-state=<state>] [-destination=<pathname>]]
 isi_vol_copy_vnx -cleanup <migration-id> [-everything [-noprompt]]
 isi_vol_copy_vnx -get_config
 isi_vol_copy_vnx -set_config <name>=<value>
 isi_vol_copy_vnx -h | -help
 Defaults:
   src_auth_user       = root
   src_auth_password   =
   ndmp_src_port       = 0  (0 means NDMP default, usually 10000)
   ndmp_data_port      = any
   dest_ip_addr        = none

Note: This tool uses NDMP to transfer the data from the source VNX to the Isilon.

Migration Steps:

  1. Configure NDMP User

Create a new NDMP user on the source VNX. Log in to the control station and run the following command:

/nas/sbin/server_user -add <new_username> -ndmp_md5 -passwd

Select the defaults when prompted and be sure to make note of the password.

  2. Determine the absolute path of your filesystems and shares

If you’re using virtual data movers (VDMs), the root path of your file system changes.  Issue the following command to review your file systems and mount paths:

server_mount ALL

Note the specific path for the file system that is targeted for migration. The path when using a VDM will be similar to this:

FILESYSTEM1 on /root_vdm_1/FILESYSTEM1 uxfs,perm,rw

In this case the path will be /root_vdm_1/FILESYSTEM1, which will be used for the source path in the isi_vol_copy_vnx command.

  3. Determine the target Isilon Data Location

Determine the destination location in the /ifs/data folder on the Isilon where the data will be migrated.  If the destination folder doesn’t exist on the Isilon, isi_vol_copy_vnx will create it with the same NTFS permissions as the source.  Build the command with the following syntax:

isi_vol_copy_vnx <datamoverIP>:<source_path> <target Isilon path> -sa <user>: <-full | -incr>

isi_vol_copy_vnx 10.10.10.10:/root_vdm_1/FILESYSTEM1 /ifs/data/FILESYSTEM1 -sa ndmpuser1: -full

  4. Migrate the Data

The command outlined above will run a full copy using the ndmpuser1 account and will prompt for a password, so the password does not have to appear in plain text. The password can also be specified directly in the command by using the appropriate syntax (<ndmpuser:password>); either way, the username must be followed by a colon.
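For example, a hypothetical run that embeds the password inline might look like the line below (the account name and password here are placeholders):

isi_vol_copy_vnx 10.10.10.10:/root_vdm_1/FILESYSTEM1 /ifs/data/FILESYSTEM1 -sa ndmpuser1:MyNdmpPassw0rd -full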

If successful, the message “msg SnapSure file system creation succeeds” will appear, which means the NDMP session created a checkpoint successfully and is starting to copy data from that checkpoint.

Note that this does not migrate shares, just data.  The EMC SHAREDUP utility can be used for that, or the CIFS shares/NFS exports can be manually re-created (see the sketch below).  It is recommended that any other data migrations on the source VNX be disabled prior to the copy so that you don’t run into performance issues.
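If you re-create a CIFS share manually, a minimal sketch with the OneFS CLI might look like the following; the share name, path, and access zone are placeholders, and the flag names should be verified against the CLI guide for your OneFS version:

isi smb shares create FILESYSTEM1 --path=/ifs/data/FILESYSTEM1 --zone=System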

Caveats:

  • There is no bandwidth throttling option with this command; it will consume all available bandwidth.
  • OneFS versions 8.0.0 and earlier (prior to 8.0.0.3) do not support historical SIDs, which may result in permission issues after migration if historical SIDs from other platforms cannot be resolved (see KB468562).  If SID history is in use on the source and you are running an affected version, this is not the proper tool.  As noted in the comments section, OneFS does support Security Identifier (SID) history in releases 8.0.0.3 and later and 8.0.1 and later (see the latest docu44518 Isilon Supportability and Compatibility Guide).
  • If fs_dedupe is enabled on the Celerra or VNX, you will need to change the backup threshold to zero for each filesystem so that the full file, rather than the compressed or deduplicated version, is sent over NDMP.  Note that there is a risk of inflating existing backup sets if they are also being done over NDMP.
  • On the source, the account performing the copy needs local administrator or Backup Operators permissions on the source CIFS server, and full control over the source share.
  • Standard NDMP backups and isi_vol_copy_vnx can affect each other and the data backed up by the two NDMP clients.  See KB187008 for a workaround.

Best Practices:

  • isi_vol_copy target data use

Do not touch the data on the target Isilon until after the isi_vol_copy has completed.

Why:  Modifying the target data before the copy completes will create problems, and you may have to re-do a full copy.

  • Simultaneous isi_vol_copy use

Do not execute multiple isi_vol_copy commands going to the same target; i.e., don’t have all of your isi_vol_copy migrations going to the same target directory.

For example:
filer1:/vol/sourcedir -> isilon:/ifs/data
filer2:/vol/sourcedir2 -> isilon:/ifs/data

Why:  Creates problems for the copy process and may require remediation after migration.

Instead: Use an additional directory level:
filer1:/vol/sourcedir -> isilon:/ifs/data/filer1/sourcedir
filer2:/vol/sourcedir2-> isilon:/ifs/data/filer2/sourcedir

If consolidation is required, this can happen after the data is migrated and any potential merging of identically named subdirectories can be addressed.

  • isi_vol_copy use

isi_vol_copy is optimized to stream as much data as possible across the network; always monitor load on the source and target systems for potential impact.

Why:  Since isi_vol_copy is optimized to stream as much data as possible, take care not to overwhelm older source systems and create link saturation or disk problems, especially if there are users connected and attempting to access files.

  • isi_vol_copy limits

It is recommended to transfer fewer than 40 million files per volume when using isi_vol_copy.

Why:  All programs have limits, and this is the recommended maximum for each individual isi_vol_copy transfer.  Larger source volumes should be broken up into smaller chunks (i.e., use a separate isi_vol_copy stream for multiple subdirectories instead of one large transfer of an entire volume), as sketched below.
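As an illustration only (the paths here are hypothetical), splitting one large file system into per-subdirectory streams might look like:

isi_vol_copy_vnx 10.10.10.10:/root_vdm_1/FILESYSTEM1/dept_a /ifs/data/FILESYSTEM1/dept_a -sa ndmpuser1: -full
isi_vol_copy_vnx 10.10.10.10:/root_vdm_1/FILESYSTEM1/dept_b /ifs/data/FILESYSTEM1/dept_b -sa ndmpuser1: -full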

  • Optimize the network for the migration traffic

Optimize the migration network path: limit other production traffic on this network and limit the number of network devices the traffic traverses (firewalls, IDS, etc.). Ideally, create a dedicated private migration network that can be optimized for only the migration traffic.

Why:  Separating the migration traffic from other network traffic will allow for maximum throughput and reduce potential impact to existing production traffic by limiting network saturation.

  • Use a specific migration account, or an account with group membership, that has the required access to all source and target data (i.e. root).

Why:  Using a dedicated account will allow for oversight and management of the migration data access.  It will also allow for the separation of migration tasks and users from other production accounts.

  • Watch out for root_squash

On the source system, NFS exports sometimes restrict access by using root_squash to prevent remote root users from having root privileges, but root access is exactly what we need for migrating data.  Use the no_root_squash option to turn off root squashing (see the example below).

Why:  Must have root access (or equivalent) to migrate all files and directories.
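As a generic illustration only, using standard Linux /etc/exports syntax rather than the exact export syntax of your source platform (the path and network below are placeholders), an export granting remote root access might look like:

/vol/sourcedir   10.10.10.0/24(rw,no_root_squash,sync)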

  • NFS Exports

Create the new Isilon NFS exports and permissions prior to the data migration.

Why:  This allows the exports and export permissions to be created, tested, and validated prior to data migration and cutover; a minimal sketch follows.
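A hypothetical export creation on the Isilon might look like the line below (the path and access zone are placeholders; verify the flags against your OneFS version, and add clients and permissions as shown in the CLI reference later in this post):

isi nfs exports create /ifs/data/FILESYSTEM1 --zone=System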

Troubleshooting Tips:

  1. Checkpoint Creation on the VNX

The most common issue when running the isi_vol_copy_vnx command is with checkpoint creation on the source VNX.

If you are receiving a message similar to “msg SnapSure file system creation failed” during a copy session, the command is failing to create a snapshot of the source file system. This can happen for many reasons, including a lack of available disk space. Try manually creating a snapshot of the source VNX file system to see if it fails; the syntax to do so is below:

#fs_ckpt Test_FS -name Test_FS-ckpt01 –Create

  2. Permission or Connection Issues

In general the error message itself will be self-explanatory. Make sure you are using the correct credentials for the NDMP user in the migration command. The user should have sufficient rights on the source and target systems, and should be able to create and modify directories and the files contained within them.  As an example, in the case below NDMP port 53169 is blocked between the VNX and the Isilon; opening the port on the firewall resolved the issue.

ISILON568-1# isi_vol_copy_vnx 10.10.10.10:/Volcopytest /ifs/data/Volcopytest -sa Volcopytest:Volcopytest -sport 53169 -full -dhost 10.10.10.11
system call (connect): Connection refused
Could not open NDMP connection to host 10.10.10.10
isi_vol_copy_vnx did not run properly

  3. 32-bit Unix Application Issues

If your application is 32-bit, the 32-bit file ID setting must be enabled on the new NFS export.

ISILON# isi nfs exports modify EXPID --zone=NFSZone --return-32bit-file-ids=yes

Replace EXPID with the ID of the target export, then verify the setting by viewing the export:

ISILON# isi nfs exports view EXPID --zone=NFSZone | grep -i return

ISILON# _

  4. Snapshot creation on the target Isilon array

Snapshots can fail for many reasons, but most often it’s due to a lack of available space. In the example below the snapshot creation failed because a snapshot with the same name already existed.

ISILON568# isi_vol_copy_vnx VNX-SERVER3:/Test_FS/NFS01 /ifs/data/Test_NFS01 -sa ndmp:NDMPpassword -incr
Snapshot with conflicting name ‘isi_vol_copy.011.1.snap’ found. Remove/Rename the conflicting snapshot to continue with further migration runs.
snapshot already exists
ISILON568-1#

Either delete or rename the existing snap to resolve the issue.  In the example below the snapshot was deleted.

ISILON568-1# isi snapshot snapshots list| grep isi_vol_copy.011
134 isi_vol_copy.011.0.snap /ifs/.ifsvar/modules/isi_vol_copy/011/persistent_store
136 isi_vol_copy.011.1.snap /ifs/.ifsvar/modules/isi_vol_copy/011/persistent_store
ISILON568-1#

ISILON568-1# isi snapshot snapshots delete --snapshot=134
Are you sure? (yes/[no]): y
ISILON568-1#

Isilon CLI Command Reference

Below is a command reference for almost all of the CLI commands available in the Isilon OneFS CLI.  The basic commands are outlined, and in many cases multiple examples of using them are provided.  This information was gathered as an easier-to-use, more condensed reference than the 1000+ page CLI Administration Guide provided by EMC; however, I recommend you refer to that guide for more specific information on the commands.  The information gathered here applies to OneFS 8.0.0.0.

Device Commands
isi devices drive firmware list Displays a list of firmware details for the data drives in a node.
isi devices drive firmware list –node-lnn all View the drive firmware status of all the nodes.
isi devices drive firmware list –node-lnn View the drive firmware status of drives on a specific node.
isi devices drive firmware update start all –node-lnn all Update the drive firmware for your entire cluster.
isi devices drive firmware update start all –node-lnn Update the drive firmware for a specific node only.
isi devices -d Confirm that a node has finished updating.
isi devices firmware list Displays a list of firmware details for the data drives in a node.
isi devices drive firmware update list Ensure that no drive firmware updates are currently in progress.
isi devices firmware update start Updates firmware on one or more drives in a node.
isi devices firmware update view Displays info about a drive firmware update for a node.
isi devices firmware view { | –lnum } Displays information about the firmware on a single drive
isi devices node list Identify the serial number of the node.
isi devices node add Join a node to the cluster.
isi devices node smartfail Smartfails a node and removes it from the cluster.
isi devices node stopfail Discontinues the smartfail process on a node.
isi devices –action smartfail –device 3 This command removes a node with a logical node number (LNN) of 3.
lnnset 22 83 Switches the LNN of a node from 22 to 83.
isi devices add Scans for available drives and adds the drives
isi devices drive add Scans for available drives and adds the drives to the node.
isi devices drive view Displays information about a single drive.
isi devices drive format Formats a drive so you can add it to a node.
isi devices drive list Displays a list of data drives in a node.
isi devices drive suspend Temporarily suspends all activities for a drive.
isi devices drive purposelist Displays a list of possible use cases for drives.
isi devices drive smartfail Smartfails a drive so you can remove it from a node.
isi devices drive stopfail Discontinues the smartfail process on a drive.
isi readonly list Displays a list of read-only status by node.
isi readonly modify Modifies a node’s read-only status.
isi readonly view Displays the read-only status of a node.
isi servicelight list Displays a list of service LEDs in the cluster by node and the status of each service LED.
isi servicelight modify Turns a node’s service LED on or off.
isi servicelight view Displays the status of a node’s service LED.
isi devices drive purpose Assigns a use case to a drive. For example, you can designate a drive for normal data
  storage operations, or you can designate the drive for L3 caching instead of storage.
System Commands
reboot Reboots the cluster.
reboot 8 Reboots a single node, in this case node 8.
shutdown all Shuts down all nodes on the cluster.
isi email settings modify Modify email settings for the cluster.
isi email settings view View cluster email settings.
isi services [-l | -a]  [ [{enable | disable}]] Displays a list of available services.
isi set Works similar to chmod, providing a mechanism to adjust OneFS-specific file attributes
isi version [–format {list | json}]  –verbose Displays cluster version information.
isi_for_array Runs commands on multiple nodes in an array, either in parallel or in serial.
isi_gather_info Collects and uploads the most recent cluster log information to ESRS
isi_phone_home Modify the settings for the isi_phone_home feature.
isi license licenses view View information about the current status of any optional Isilon software modules.
isi status Cluster node and drive health, storage data sizes, ip addresses, throughput, critical events and job status
isi status -n 1 Displays status info about a specific logical node.
isi batterystatus list View the status of all NVRAM batteries and charging systems on the node.
isi snmp settings Modify SNMP settings for a cluster.
isi snmp settings view View snmp settings.
isi get Displays information about a set of files, including the requested protection, current
  actual protection, and whether write-coalescing is enabled.
isi batterystatus list Displays a list of batteries in the cluster by node & the status of each battery.
isi batterystatus view Displays the status of a node’s batteries.
Config Commands
isi config Opens a new prompt where node and cluster settings can be altered, only isi config commands are valid
changes Displays a list of changes to the configuration that have not been committed.
commit Commits configuration settings and then exits isi config.
date Displays or sets the current date and time on the cluster.
encoding [list] Sets the default encoding character set for the cluster.
exit Exits the isi config subsystem.
help Displays a list of all isi config commands.
interface {enable | disable} Displays the IP ranges, netmask, and MTU and enables or disables internal interfaces.
iprange Displays a list of internal IP addresses that can be assigned to nodes, or adds addresses to the list.
joinmode Displays the setting for how nodes are added to the current cluster.
migrate Displays a list of IP address ranges that can be assigned to nodes or both adds and removes IP ranges from that list.
mtu Displays the size of the maximum transmission unit (MTU) that the cluster uses
name Displays the names currently assigned to clusters when run with no arguments, or assigns new name.
netmask []] Displays or sets the subnetmask on the cluster.
quit Exits the isi config subsystem.
reboot [{ | all}] Reboots one or more nodes, specified by LNN.
shutdown [{ | all}] Shuts down one or more nodes, specified by LNN.
status [advanced] Displays current information on the status of the cluster.
timezone [] Displays the current time zone or specifies new time zones.
version Displays information about the current OneFS version.
wizard Activates a wizard on unconfigured nodes.
deliprange Displays a list of internal network IP addresses that can be assigned to nodes or
  removes specified addresses from the list.
lnnset Displays a table of logical node number (LNN), device ID, and internal IP address for
  each node in the cluster when run without arguments. Changes the LNN when specified.
Statistics Commands
isi statistics client Displays the most active, by throughput, clients accessing the cluster for each supported protocol.
isi statistics drive Displays performance information by drive.
isi statistics heat Displays the most active /ifs paths for various metrics.
isi statistics query current Displays current statistics.
isi statistics query history Displays available historical statistics.
isi statistics list keys Displays a list of all available keys.
isi statistics list operations Displays a list of valid arguments for the –operations option.
isi statistics protocol Displays statistics by protocol, such as NFSv3 and HTTP.
isi statistics pstat Displays a selection of cluster-wide and protocol data.
isi statistics system Displays general cluster statistics & op rates for supported protocols, network & disk traffic.
Access Zones
isi zone zones create Isolates data and restricts which users can access the data.
isi zone zones create DevZone /ifs/hr/data Creates an access zone named DevZone and sets the base directory to /ifs/hr/data.
isi zone zones modify –add-auth-providers : Add an authentication provider.
isi zone zones modify DevZone –clear-auth-providers Remove all authentication providers.
isi zone zones delete DevZone Delete any access zone except the built-in System zone.
isi zone zones list View a list of all access zones on the cluster.
isi zone zones view TestZone Display the setting details of TestZone.
isi zone restrictions create Prohibits user or group access to the /ifs directory.
isi zone restrictions delete Removes a restriction that prohibits user or group access to the /ifs directory.
isi zone restrictions list Displays a list of users or groups that are prohibited from accessing the /ifs directory.
Authentication/Active Directory
isi auth ads create Create an Active Directory provider.
isi auth ads list Displays a list of Active Directory providers.
isi auth ads view Displays the properties of an Active Directory provider.
isi auth ads create –name=adserver.corp.com \ –user=admin –groupnet=group5 Specific example of adding a domain.
isi auth ads modify Modify the advanced settings for an Active Directory provider.
isi auth ads delete Delete an Active Directory provider.
isi auth ldap create Create an LDAP provider.
isi auth ldap modify Modify any setting for an LDAP provider (except its name).
isi auth ldap delete Delete an LDAP provider.
isi auth ldap list Displays a list of LDAP providers.
isi auth ldap view Displays the properties of an LDAP provider.
isi auth nis create Configure a NIS provider.
isi auth nis modify Modify any setting for an NIS provider (except its name).
isi auth nis delete Delete a NIS provider.
isi auth nis list Displays a list of NIS providers and indicates whether a provider is functioning correctly.
isi auth nis view Displays the properties of an NIS provider.
isi auth krb5 create { | –keytab-file } Creates an MIT Kerberos provider and joins a user to an MIT Kerberos realm.
isi auth krb5 delete [–force] Deletes an MIT Kerberos authentication provider and removes the user from an MIT Kerberos realm.
isi auth krb5 list Displays a list of MIT Kerberos authentication providers.
isi auth krb5 view Displays the properties of an MIT Kerberos authentication provider.
isi auth krb5 domain create [–realm ] Creates an MIT Kerberos domain mapping.
isi auth krb5 domain delete [–force] Deletes an MIT Kerberos domain mapping.
isi auth krb5 domain list Displays a list of MIT Kerberos domain mappings.
isi auth krb5 domain modify [–realm ] Modifies an MIT Kerberos domain mapping.
isi auth krb5 domain view Displays the properties of an MIT Kerberos domain mapping.
isi auth krb5 realm create Creates an MIT Kerberos realm.
isi auth krb5 realm modify Modify an MIT Kerberos realm.
isi auth krb5 realm list View a list of all Kerberos realms configured on the cluster.
isi auth krb5 realm view View a list of all Kerberos realms configured on the cluster.
isi auth krb5 realm view TEST.corp.COM View the realm details for a specific domain.
isi auth krb5 realm delete Delete an MIT Kerberos realm.
isi auth krb5 domain create Add an MIT Kerberos domain to an MIT Kerberos realm.
isi auth krb5 domain modify Modify a Kerberos domain.
isi auth krb5 domain modify –realm Example of modifying a Kerberos domain by specifying an alternate realm.
isi auth krb5 domain view View the properties of an MIT Kerberos domain mapping.
isi auth krb5 domain list List one or more MIT Kerberos domains.
isi auth krb5 domain delete Delete one or more MIT Kerberos domain mappings.
isi auth krb5 spn list View the service principal names (SPNs) and their associated keys that are registered for an MIT Kerberos provider.
isi auth krb5 spn delete –all Delete all keys for a specified SPN or a specific version of a key.
isi auth krb5 spn create Add or update keys for an SPN.
isi auth krb5 spn check Compare the list of registered SPNs against the list of discovered SPNs.
isi auth krb5 spn fix Fix the missing SPNs.
isi auth krb5 spn fix Add missing SPNs for an MIT Kerberos service provider
isi auth krb5 spn import Import the keys of a keytab file.
isi auth ads spn check Checks valid service principal names (SPNs).
isi auth ads spn create Adds one or more service principal names (SPNs) for a machine account.
isi auth ads spn delete Deletes one or more SPNs that are registered against a machine account.
isi auth ads spn fix Adds missing service principal names (SPNs) for an Active Directory provider.
isi auth ads spn list Displays a list of service principal names (SPNs) that are registered against a machine account.
isi auth krb5 spn create Creates or updates keys for an MIT Kerberos provider.
isi auth krb5 spn delete { | –all} Deletes keys from an MIT Kerberos provider.
isi auth krb5 spn check Checks for missing service principal names (SPNs) for an MIT Kerberos provider.
isi auth krb5 spn fix Adds the missing service principal names (SPNs) for an MIT Kerberos provider.
isi auth krb5 spn import Imports keys from a keytab file for an MIT Kerberos provider.
isi auth krb5 spn list Lists the service principal names (SPNs) and keys registered for an MIT Kerberos provider.
isi auth ads trusts controllers list Displays a list of domain controllers for a trusted domain.
isi auth ads trusts list Displays a list of trusted domains.
isi auth ads trusts view Displays the properties of a trusted domain.
isi auth error Displays error code definitions from the authentication log files.
isi auth file create Creates a file provider.
isi auth file delete Deletes a file provider.
isi auth file list Displays a list of file providers.
isi auth file modify Modifies a file provider.
isi auth file view Displays the properties of a file provider.
isi auth mapping create {| –source-uid Creates a manual mapping between a source identity and target identity
isi auth mapping delete {| –source-uid Deletes one or more identity mappings.
isi auth mapping dump Displays or prints the kernel mapping database.
isi auth mapping flush Flushes the cache for one or all identity mappings.
isi auth mapping import Imports mappings from a source file to the ID mapping database.
isi auth mapping list Displays the ID mapping database for an access zone.
isi auth mapping modify Sets or modifies a mapping between two identities.
isi auth mapping token Displays the access token that is calculated for a user during authentication.
isi auth mapping view Displays mappings for an identity.
isi auth netgroups view Displays information about a netgroup.
isi auth privileges Displays a list of system privileges.
isi auth refresh Refreshes authentication system configuration settings.
isi auth roles create Creates an empty role.  Run the isi auth roles modify command to add items.
isi auth roles delete Deletes a role.
isi auth roles list [–verbose] Displays a list of roles.
isi auth roles members list Displays a list of the members of a role.
isi auth roles modify Modifies a role.
isi auth roles privileges list Displays a list of privileges that are associated with a role.
isi auth roles view Displays the properties of a role.
isi auth settings acls modify Modifies access control list (ACL) settings for OneFS.
isi auth settings acls view Displays access control list (ACL) settings for OneFS.
isi auth settings global modify Modifies the global authentication settings.
isi auth settings global view Displays global authentication settings.
isi auth settings krb5 modify Modifies the global settings of an MIT Kerberos authentication provider.
isi auth settings krb5 view Displays MIT Kerberos provider authentication settings.
isi auth settings mapping modify Modifies identity mapping settings.
isi auth settings mapping view [–zone ] Displays identity mapping settings in an access zone.
isi auth status Displays provider status,available authentication providers, and which are functioning.
isi auth privileges –verbose To view a list of all privileges.
isi auth id To view a list of your privileges.
isi auth mapping token To view a list of privileges for another user.
Managing file providers
isi auth file create Specify replacement files for any combination of users, groups, and netgroups.
pwd_mkdb /ifs/test.passwd Generates an spwd.db file in the /etc directory.
isi auth file modify Modify any setting for a file provider, including its name.
isi auth file delete Delete a file provider.
Managing local users and groups
isi auth users create Creates a user account.
isi auth users delete { | –uid | –sid } Deletes a local user from the system.
isi auth users flush Flushes cached user information.
isi auth users list Displays a list of users.
isi auth users modify { | –uid | –sid } Modifies a local user.
isi auth users view { | –uid | –sid } Displays the properties of a user.
isi auth users list –provider=”:” View a list of users and groups for a specified provider.
isi auth users list –provider=”lsa-ldap-provider:Unix LDAP” List users and groups for an LDAP provider type that is named Unix LDAP.
isi auth users create –provider=”local:” \ –password=”” Create a local user.
isi auth groups create –provider “local:” Create a local group.
isi auth local view system View the current password settings.
isi auth local list Displays a list of local providers.
isi auth local modify Modifies a local provider.
isi auth local view Displays the properties of a local provider.
isi auth log-level modify [–verbose] Specifies the logging level for the authentication service on the node.
isi auth log-level view Displays the logging level for the authentication service on the node.
isi auth users modify Modify any setting for a local user account except the user name.
isi auth groups modify Add or remove members from a local group.
isi auth users delete Delete a local user.
isi auth groups delete Delete a local group.
isi auth groups flush Flushes cached group information.
isi auth groups list Displays a list of groups.
isi auth groups members list { | –gid | –sid } Displays a list of members that are associated with a group.
isi auth groups modify { | –gid | –sid } Modifies a local group.
isi auth groups view { | –gid | –sid } Displays the properties of a group.
isi auth id Displays your access token.
isi auth access /ifs/ Lists the permissions that a user has to access a given file or directory.
SMB
isi smb settings global view View the global SMB settings.
isi smb settings global modify Modify SMB Global Settings.
isi smb shares create Create SMB Shares.
isi smb shares modify Modify SMB Shares.
isi smb shares modify Share2 –file-filtering-enabled=yes \ file-filter-extensions=.wav,.mpg Enables file filtering on a share, denies .wav and .mpg.
isi smb shares list Displays a list of SMB shares.
isi smb shares permission create Creates permissions for an SMB share.
isi smb shares permission delete Deletes user or group permissions for an SMB share.
isi smb shares permission list Displays a list of permissions for an SMB share.
isi smb shares permission modify Modifies permissions for an SMB share.
isi smb shares permission view Displays a single permission for an SMB share.
isi smb shares view [–zone ] Displays information about an SMB share.
isi smb settings shares view View the default SMB share settings specific to an access zone.
isi smb settings shares modify Configure SMB share settings specific to each access zone.
isi smb settings global modify –zone=TestZone –impersonate-guest=never Specifies that guests are never allowed access to shares in the TestZone access zone.
isi smb shares delete Share1 –zone=zone-5 Deletes a share named Share1 from the access zone named zone-5.
isi smb shares permission modify Modify SMB Share Permissions.
isi smb shares permission list ifs List permissions on a share.
isi smb log-level filters create Creates a new SMB log filter.
isi smb log-level filters delete Deletes SMB log filters.
isi smb log-level filters list Lists SMB log filters.
isi smb log-level filters view View an individual SMB log-level filter.
isi smb log-level modify [–verbose] Sets the log level for the SMB service.
isi smb log-level view Shows the current log level for the SMB service.
isi smb openfiles list View a list of open files.
isi smb openfiles close [–force] Closes an open file.
isi smb openfiles list Displays a list of files that are open in SMB shares.
isi smb sessions delete [{–user | –uid | –sid }] [–force] Deletes SMB sessions, filtered first by computer and then optionally by user.
isi smb sessions delete computer1 Deletes all SMB sessions associated with a computer named computer1.
isi smb sessions delete computer1 –user=user1 Deletes all SMB sessions associated with a computer named computer1 and a user named user1.
isi smb sessions delete-user { | –uid | –sid } [–computer-name ] Deletes SMB sessions, filtered first by user then optionally by computer.
isi smb sessions list Displays a list of open SMB sessions.
isi smb settings global modify Modifies global SMB settings.
isi smb settings global view Displays the default SMB configuration settings.
isi smb settings shares modify Modifies default settings for SMB shares.
isi smb settings shares view [–zone ]
NFS
isi nfs settings global view View the global NFS settings that are applied to all nodes in the cluster.
isi nfs settings global modify Configure NFS file sharing.
isi nfs settings global modify –nfsv4-enabled=yes Enables NFSv4 support.
isi nfs settings export view [–zone ] View the current default export settings.
isi nfs settings export modify Configure default NFS export settings.
isi nfs settings export modify –max-file-size 1099511627776 Specifies a maximum export file size of one terabyte.
isi nfs settings export modify –revert-max-file-size Restores the maximum export file size to the system default.
isi nfs exports view View NFS Exports.
isi nfs exports view 1 Displays the settings of the default export.
isi nfs exports modify 1 –map-root-enabled true –map-root nobody Enable root-squash for the default NFS export.
isi nfs exports list List NFS Exports.
isi nfs exports create Create NFS exports to share files in OneFS.
isi nfs exports create /ifs/data/projects,/ifs/home –all-dirs=yes Creates an export supporting client access to multiple paths and their subdirectories.
isi nfs exports check Check for errors in NFS exports, conflicting export rules, invalid paths, etc.
isi nfs exports modify Modify the settings for an existing NFS export.
isi nfs exports modify 2 –add-read-write-clients 10.1.1.100 For example, the following adds a client with read-write access to NFS export 2
isi nfs exports delete Delete unneeded NFS exports.
isi nfs exports delete 2 Deletes an export whose ID is 2.
isi nfs exports delete 3 –force Deletes an export whose ID is 3 without displaying a confirmation prompt
isi nfs exports reload Reloads the NFS exports configuration.
isi nfs aliases create Create an NFS alias to map a long directory path to a simple pathname.
isi nfs aliases create /home /ifs/data/offices/hq/home –zone hq-home Creates an alias to a full pathname in OneFS in an access zone named hq-home
isi nfs aliases modify Modify an NFS alias.
isi nfs aliases modify /home –zone hq-home –name /home1 Changes the name of an alias in the access zone hq-home.
isi nfs aliases delete Delete an NFS alias.
isi nfs aliases delete /home –zone hq-home Deletes the alias /home in an access zone named hq-home.
isi nfs aliases list [–zone] View a list of NFS aliases that have already been defined for a particular zone.
isi nfs aliases view List NFS Aliases.
isi nfs aliases view /projects –zone hq-home –check Provides information on an alias in the access zone, hqhome, including the health of the alias.
isi nfs log-level modify Sets the logging level for the NFS service.
isi nfs log-level view Shows the logging level for the NFS service.
isi nfs netgroup check Updates the NFS netgroup cache.
isi nfs netgroup flush Flushes the NFS netgroup cache.
isi nfs netgroup modify Modifies the NFS netgroup cache settings.
isi nfs nlm locks list Applies to NFSv3 only. Displays a list of NFS Network Lock Manager (NLM) advisory locks.
isi nfs nlm locks waiters List of clients waiting to place a Network Lock Manager (NLM) lock on a currently locked file.
isi nfs nlm sessions check Searches for lost locks.
isi nfs nlm sessions delete Delete NFS NLM Sessions.
isi nfs nlm sessions list Displays a list of clients holding NFS Network Lock Manager (NLM) locks.
isi nfs nlm sessions refresh Refreshes an NFS Network Lock Manager (NLM) client.
isi nfs nlm sessions view Displays information about NFS Network Lock Manager (NLM) client connections.
isi nfs settings zone modify Modifies the default NFS zone settings for the NFSv4 ID mapper.
isi nfs settings zone view Displays the default NFSv4-related access zone settings.
FTP
isi ftp settings view View a list of current FTP configuration settings.
isi services vsftpd enable Enable FTP.  The FTP service, vsftpd, is disabled by default.
isi ftp settings modify Modify FTP Settings.
isi ftp settings modify –server-to-server=yes Enables server-to-server transfers.
isi ftp settings modify –allow-anon-upload=no Disables anonymous uploads.
HTTP and HTTPS
isi http settings modify Modifies HTTP global settings.
isi http settings modify –service=enabled –dav=yes \ basic-authentication=yes Enables the HTTP service, WebDAV, and basic authentication.
isi_gconfig -t http-config https_enabled=true Enable HTTPS.
isi_gconfig -t http-config https_enabled=false Disable HTTPS.
isi http settings view Displays HTTP global settings.
File Filtering
isi file-filter settings modify
isi file-filter settings view View file filtering settings in an access zone.
isi file-filter settings view –zone=DevZone Displays file filtering settings in the DevZone access zone.
isi file-filter settings modify –zone=DevZone \ file-filtering-enabled=yes file-filter-type=allow \ Enables file filtering in the DevZone access zone and allows users to write only to specific file types.
  file-filter-extensions=.xml,.html,.txt
isi file-filter settings modify –zone=DevZone \ file-filtering-enabled=yes file-filter-type=deny \ Enables file filtering in DevZone and denies users write access only to specific file types.
  file-filter-extensions=.xml,.html,.txt
Auditing
isi_audit_viewer View both configuration audit and protocol audit logs.
isi_audit_viewer -t protocol View protocol access audit logs.
isi_audit_viewer -t config View system configuration logs.
isi audit settings global modify [–protocol-auditing-enabled {yes | no}] Modify the types of protocol access events to be audited.
isi audit settings modify –syslog-forwarding-enabled Enable forwarding of protocol access events to syslog.
isi audit settings modify –syslog-forwarding-enabled=no –zone=DevZone Disables forwarding of audited protocol access events from the DevZone access zone.
isi audit settings global modify –config-auditing-enabled=yes Enables system configuration auditing on the cluster.
isi audit settings global modify –config-syslog-enabled=yes Enables forwarding of system configuration changes.
isi audit settings global modify –config-syslog-enabled=no Disables forwarding of system configuration changes.
isi audit settings modify –audit-failure=create,close,delete –zone=DevZone Creates a filter that audits the failure of create, close, and delete events in the DevZone access zone.
isi audit settings global view Displays global audit settings configured on the EMC Isilon cluster.
isi audit settings view [–zone] [–verbose] Displays audit filter settings in an access zone and whether syslog forwarding is enabled.
isi audit topics list Displays a list of configured audit topics, which are internal collections of audit data.
isi audit topics modify Modifies the properties of an audit topic.
isi audit topics view Displays the properties of an audit topic.
Snapshots
It is recommended that you do not create more than 1,000 snapshots of a single directory to avoid performance degradation.
You can create up to 20,000 snapshots on a cluster at a time.
isi snapshot snapshots modify Modify the name and expiration date of a snapshot.
isi snapshot snapshots modify HourlyBackup_07-15-2014_22:00 \ –expires 2014-07-25T01:30 Causes HourlyBackup_07-15-2014_22:00 to expire at 1:30 AM on July 25th, 2014.
isi snapshot snapshots modify Modify the alias of a snapshot to assign an alternative name for the snapshot.
isi snapshot snapshots modify HourlyBackup_03-15-2017_22:00 \ –alias LastKnownGood Assigns an alias of LastKnownGood to HourlyBackup_03-15-2017_22.00.
isi snapshot snapshots list View a list of snapshots or detailed information about a specific snapshot.
isi snapshot snapshots view Displays the properties of an individual snapshot.
isi snapshot snapshots delete –snapshot newSnap1 Deletes newSnap1.
isi job jobs start snaprevert –snapid 46 Reverts HourlyBackup_07-15-2014_23.00
isi snapshot schedules modify Modify a snapshot schedule.
isi snapshot schedules modify hourly_media_snap –duration 14D Snapshots created with the schedule hourly_media_snap are deleted 14 days after creation.
isi snapshot schedules delete [–force] [–verbose] Deletes a snapshot schedule.
isi snapshot schedules delete hourly_media_snap Deletes a snapshot schedule named hourly_media_snap.
isi snapshot schedules view Displays information about a snapshot schedule.
isi snapshot schedules view every-other-hour Displays detailed information about the snapshot schedule every-other-hour
isi snapshot schedules modify WeeklySnapshot –alias LatestWeekly Configures the alias LatestWeekly for the snapshot schedule WeeklySnapshot.
isi snapshot schedules create Creates a snapshot schedule.
isi snapshot schedules pending list Displays a list of snapshots that are scheduled to be generated by snapshot schedules.
isi snapshot aliases create [–verbose] Assigns a snapshot alias to a snapshot or to the live version of the file system.
isi snapshot aliases create latestWeekly Weekly-01-30-2017 Creates a snapshot alias for Weekly-01-30-2017.
isi snapshot aliases modify latestWeekly –target LIVE Reassigns the latestWeekly alias to the live file system.
isi snapshot aliases delete { | –all} [–force] [–verbose] Deletes a snapshot alias.
isi snapshot aliases list View a list of all snapshot aliases by running the following command.
isi snapshot aliases view latestWeekly Displays information about latestWeekly.
isi snapshot locks create Creates a snapshot lock.
isi snapshot locks create SnapshotApril2016 –expires 1M \ –comment “Maintenance Lock” Applies a snapshot lock to SnapshotApril2016, sets expiration in one month, and adds a description.
isi snapshot locks modify Modify the expiration date of a snapshot lock.
isi snapshot locks modify SnapshotApril2014 1 –expires 3D Sets an expiration date three days from the present date for a snapshot lock with an ID of 1.
isi snapshot locks delete Delete a snapshot lock.
isi snapshot locks delete Snapshot2014Apr16 1 Deletes a snapshot lock that is applied to SnapshotApril2014 with a lock id of 1
isi snapshot locks view Displays information about a snapshot lock.
isi snapshot locks list View snapshot lock information.
isi snapshot settings view View current SnapshotIQ settings.
isi snapshot settings modify –reserve 30 Sets the snapshot reserve to 30%.
isi job jobs start ChangelistCreate –older-snapid 3 –newersnapid 7 Create a changelist that shows what data was changed between snapshots.
isi_changelist_mod -k 21_23 Deletes changelist 21_23.
isi_changelist_mod -l View the IDs of changelists.
isi_changelist_mod -a 2_6 Displays the contents of a changelist named 2_6.
isi job jobs start domainmark –root /ifs/data/media \ –dm-type SnapRevert Creates a SnapRevert domain for /ifs/data/media.
isi snapshot snapshots create /ifs/data/media –name media-snap Creates a snapshot for /ifs/data/media.
isi snapshot snapshots delete OldSnapshot Deletes a snapshot named OldSnapshot.
isi snapshot schedules create hourly /ifs/data/media \ Creates a snapshot schedule for /ifs/data/media.
 HourlyBackup_%m-%d-%Y_%H:%M “Every day every hour” \ –duration 1M
isi job jobs start snapshotdelete Increase the speed at which deleted snapshot data is freed on the cluster, use this command
isi job jobs start shadowstoredelete Increase the speed at which deleted data shared between deduplicated and cloned
  files are freed on the cluster.
Deduplication
isi dedupe settings modify –assess-paths /ifs/data/archive Assess the amount of disk space you will save by deduplicating a directory.
isi job jobs start dedupeassessment Start the assessment job by running the following command.
isi dedupe reports list Identify the ID of the assessment report by running the following command.
isi dedupe reports view View prospective space savings by running the isi dedupe reports view.
isi dedupe settings modify –paths /ifs/data/media,/ifs/data/archive This command targets /ifs/data/archive and /ifs/data/media for deduplication.
isi job types Dedupe –schedule “Every Friday at 11:00 PM” Configures the deduplication job to be run every Friday at 11PM.
isi dedupe stats View the amount of disk space that you are currently saving with deduplication.
isi dedupe settings view Displays current deduplication settings.
Data Replication
isi sync settings modify Configure default settings for replication policies.
isi sync settings modify –report-max-age 3Y Configures SyncIQ to delete replication reports that are older than three years.
isi sync settings view Displays global replication settings.
isi sync policies resolve [–force] Resolving a replication policy enables you to run the policy again.
isi sync policies reset { | –all} [–verbose] If you cannot resolve the issue that caused the error, you can reset the replication policy.
isi sync policies delete { | –all} Deletes a replication policy.
isi sync policies modify Modify the settings of a replication policy.
isi sync policies modify newPolicy \ –target-compare-initial-sync on Enables differential synchronization; run the policy with isi sync jobs start.
isi sync policies modify dailySync –schedule “” Ensures that the policy dailySync runs only manually.
isi sync policies delete dailySync Deletes dailySync from the source cluster.
isi sync policies enable dailySync Enables dailySync.
isi sync policies disable dailySync Disables dailySync.
isi sync policies list View information about replication policies.
isi sync policies view dailySync Displays detailed information about dailySync.
isi job jobs start domainmark –root /ifs/data/media \ –dm-type SyncIQ Start Domainmark.
isi sync jobs start dailySync –test Creates a report about how much data will be transferred when dailySync is run.
isi sync reports view dailySync 1 Displays the assessment report for dailySync.
isi sync jobs start dailySync Starts ‘dailySync’ replication job.
isi sync jobs start dailySync \ –source-snapshot HourlyBackup_07-15-2013_23:00 Replicates the source directory of dailySync according to snapshot HourlyBackup_07-15-2013_23.00.
isi sync jobs pause dailySync Pauses ‘dailySync’ replication job.
isi sync jobs resume dailySync Resumes ‘dailySync’ replication job.
isi sync jobs cancel dailySync Cancels ‘dailySync’ replication job.
isi sync jobs list View all active replication jobs.
isi sync jobs reports list Displays information about running replication jobs targeting the local cluster.
isi sync jobs reports view Displays information about a running replication job targeting the local cluster.
isi sync jobs view dailySync Displays detailed information about a replication job.
isi sync recovery allow-write dailySync Enables replicated directories and files specified in the dailySync policy to be writable.
isi sync recovery allow-write newPolicy –revert Reverts a failover operation for newPolicy.
isi sync recovery resync-prep dailySync Creates a mirror policy for dailySync.
isi sync jobs start dailySync_mirror Runs a mirror policy named dailySync_mirror immediately.
isi sync modify dailySync_mirror –enabled yes –schedule “every day at 12:01 AM” Schedules a mirror policy named dailySync_mirror to run daily at 12.01 AM.
isi sync recovery allow-write dailySync_mirror Allows writes to the directories specified in the dailySync_mirror policy.
isi sync recovery resync-prep dailySync_mirror Completes failback for dailySync_mirror, places the secondary cluster into read-only mode, and checks consistency.
isi sync recovery allow-write SmartLockSync Enables writes to the target directory of SmartLockSync.
isi worm domains modify –domain /ifs/data/smartlock \ –autocommit-offset 1m Automatically commits all files in /ifs/data/ smartlock to a WORM state after one minute.
isi sync target cancel Cancel a replication job that is targeting the local cluster.
isi sync target cancel dailySync Cancels a replication job created according to dailySync
isi sync target cancel –all Cancel all jobs targeting the local cluster.
isi sync target break Break local target association.
isi sync target break dailySync Breaks the association between dailySync and the local cluster.
isi sync target list View information about replication policies that are currently replicating data.
isi sync target view dailySync Displays detailed information about dailySync.
isi sync target reports list Displays information about completed replication jobs targeting the local cluster.
isi sync target reports view Displays information about a completed replication job that targeted the local cluster.
isi sync target reports subreports list Displays subreports about completed replication jobs targeting the local cluster.
isi sync target reports subreports view Displays a subreport about a completed replication job targeting the local cluster.
isi sync rules create Create a network traffic rule
isi sync rules create bandwidth 9:00-17:00 M-F 100 Creates a network traffic rule that limits bandwidth (100 KB per second from 9AM to 5PM weekdays).
isi sync rules create file_count 9:00-17:00 M-F 3 limits the file-send rate to 3 files per second from 9.00 AM to 5.00 PM every weekday
isi sync rules list Identify the ID of the performance rule.
isi sync rules view Displays information about a replication performance rule.
isi sync rules modify bw-0 –days X,S Performance rule with an ID of bw-0 to be enforced only on Saturday and Sunday.
isi sync rules delete { | –all | –type } [–force] [–verbose] Deletes a replication performance rule.
isi sync rules delete bw-0 Deletes a performance rule with an ID of bw-0.
isi sync rules modify bw-0 –enabled true Enables a performance rule with an ID of bw-0..
isi sync rules modify bw-0 –enabled false Disables a performance rule with an ID of bw-0.
isi sync reports list View a list of all replication reports.
isi sync reports view dailySync 2 Displays a replication report for dailySync.
isi sync reports subreports list dailySync 1 Displays subreports for dailySync.
isi sync reports subreports view dailySync 1 2 Displays a subreport for dailySync.
isi sync reports rotate [–verbose] Causes excess reports to be deleted immediately.
isi sync policies create Create a replication policy with SyncIQ.
isi sync policies create mypolicy sync /ifs/data/source 10.1.99.36 /ifs/data/target \ –schedule "Every Sunday at 12:00 AM" \
  –target-snapshot-archive on –target-snapshot-expiration 1Y \ –target-snapshot-pattern "%{PolicyName}-%{SrcCluster}-%Y-%m-%d"
  Creates a policy that replicates the directory /ifs/data/source on the source cluster to /ifs/data/target on target cluster 10.1.99.36
  every week. The command also creates archival snapshots on the target cluster and creates a SyncIQ domain for /ifs/data/source.
NDMP
isi ndmp settings global modify –service=yes Enable NDMP backup.
isi ndmp settings global modify –dma=emc Configures OneFS to interact with EMC NetWorker.
isi ndmp settings global modify –service=no Disable NDMP backup.
isi ndmp settings global view View NDMP backup settings.
isi ndmp settings diagnostics modify Modifies NDMP diagnostics settings.
isi ndmp settings diagnostics view [–format {list | json}] Displays NDMP diagnostic settings.
isi ndmp users create NDMPuser –password=1234 Creates an NDMP user account called NDMPuser.
isi ndmp users modify NDMPuser –password=5678 Modifies the password of a user named NDMPuser.
isi ndmp users delete NDMPuser Deletes a user named NDMPuser.
isi ndmp users view View NDMP user accounts.
isi ndmp users view Displays information about the account for a specific user.
isi tape rescan –node=18 Detects devices on node 18.
isi tape rescan –reconcile Remove entries for devices and paths that have become inaccessible.
isi tape modify tape003 –new-name=tape005 Modify the name of an NDMP device entry.
isi tape delete –name=tape001 Disconnects tape001 from the cluster.
isi tape list –node=18 –tape List tape devices on node 18.
isi tape view Displays information about a tape or media changer device.
isi fc settings modify 5.1 –topology=ptp Configures port 1 on node 5 to support a point-to-point Fibre Channel topology.
isi fc settings modify 5.1 –state=enable | disable Enable or disable an NDMP backup port.
isi fc settings view 5.1 View Fibre Channel port settings for port 1 on node 5.
isi fc settings list Lists Fibre Channel port settings.
isi fc settings view Displays settings for a specific Fibre Channel port.
isi ndmp sessions list Retrieve the ID of the NDMP session that you want to end.
isi ndmp sessions delete View the status of NDMP sessions or terminate a session that is in progress.
isi ndmp sessions delete 4.36339 –force Ends an NDMP session with an ID of 4.36339.
isi ndmp sessions view View NDMP Sessions.
isi ndmp list View information about active NDMP sessions.
isi ndmp contexts list –type bre View NDMP restartable backup contexts that have been configured.
isi ndmp contexts view View detailed information about a specific restartable backup context.
isi ndmp contexts delete Delete a restartable backup context.
isi ndmp settings global modify Specify the number of restartable backup contexts that OneFS can retain, up to 1024.
isi ndmp settings global modify –bre_max_num_contexts=128 Modify max number of contexts.
isi ndmp settings variables modify Modify default NDMP variable settings.
isi ndmp settings variables list View the default NDMP settings for a path.
isi ndmp settings variables create Sets the default value for an NDMP environment variable for a given path.
isi ndmp dumpdates delete [–name ] Delete snapshots created for snapshot-based incremental backups.
isi ndmp dumpdates list View snapshots generated for snapshot-based incremental backups.
File Retention
isi worm cdate set Set the compliance clock.
isi worm cdate view View the current time of the compliance clock.
isi job jobs start DomainMark –root /ifs/data/smartlock –dm-type Worm Creates a SmartLock enterprise domain
isi worm domains create Creates a SmartLock directory.
isi worm domains modify /ifs/data/SmartLock/prod_dir \  –default-retention 1Y Sets the default retention period to one year.
isi worm domains view View detailed information about a specific SmartLock directory.
isi worm domains modify /ifs/data/SmartLock/prod_dir \ –override-date 2014-06-01 Overrides the retention period expiration date of /ifs/data/SmartLock/prod_dir to June 1, 2014.
isi worm domains modify –privileged-delete Modify smartlock directory to allow deletion.
isi worm files view /ifs/data/SmartLock/prod_dir/file Displays the WORM status of a file.
isi worm files delete Deletes a file committed to a WORM state.
isi worm domains list Displays a list of WORM directories.
Protection Domains
isi job jobs start domainmark –root /ifs/data/media \ –dm-type SyncIQ Creates a SyncIQ domain for /ifs/data/source.
isi job jobs start domainmark –root /ifs/data/media \ –dm-type SyncIQ –delete Deletes a SyncIQ domain for /ifs/data/source.
Data-at-rest-encryption
isi_reformat_node Securely deletes the authentication keys on an entire cluster, smartfails each node,
  and runs the isi_reformat_node command on the last node.
SmartQuotas
isi quota quotas create –help Information about the parameters and options that can be used.
isi quota quotas create Create an accounting quota.
isi quota quotas create /ifs/data/test_1 directory \ –advisory-threshold=10M –enforced=false Creates an informative quota for the /test_1 directory.
isi job events list –job-type quotascan Verify that no QuotaScan jobs are in progress.
isi quota quotas list –help For information about the parameters and options that you can use.
isi quota quotas list –path=/ifs/data/quota_test_1 Finds all quotas that monitor the /ifs/data/quota_test_1 directory.
isi quota quotas list -v –path=/ifs/data/quota_test_2 \ –include-snapshots=”yes” Provides current usage information for the root user.
isi quota reports list -v Lists all info in the quota report.
isi quota reports delete Deletes a specified report.
isi quota quotas delete /ifs/data/quota_test_2 directory Deletes the specified directory-type quota.
isi quota quotas modify /ifs/dir-1 user –linked=false –user=admin Unlinks a user quota.
isi_classic quota list –export The quota configuration file displays as raw XML.
isi_classic quota import –from-file= Import quota settings in the form of a configuration file.
isi quota settings notifications modify –help Information about the parameters and options.
isi quota settings notifications modify advisory exceeded \ –action-alert=true Generate an alert when the advisory threshold is exceeded.
isi quota settings reports modify –schedule=”Every 2 days” Creates a quota report schedule that runs every two days.
isi quota settings mappings create -v Creates a SmartQuotas email mapping rule.
isi quota settings mappings delete Deletes SmartQuotas email mapping rules.
isi quota settings mappings list Lists SmartQuotas email mapping rules.
isi quota settings mappings modify Modifies an existing SmartQuotas email mapping rule.
isi quota settings mappings view View a SmartQuotas email mapping rule.
isi quota reports create -v Creates an ad-hoc quota report.
isi quota quotas view Displays detailed properties of a single file system quota.
isi quota quotas notifications create /ifs/data/test_2 \ directory advisory exceeded –holdoff=10W Advisory quota notification rule, specifies length of time to wait before creating a notification.
isi quota quotas notifications delete –path Deletes a quota notification rule.
isi quota quotas notifications disable Disables all quota notifications.
isi quota quotas notifications list Displays a list of quota notification rules.
isi quota quotas notifications modify Modifies a notification rule for a quota.
isi quota quotas notifications view Displays the properties of a quota notification rule.
isi quota quotas notifications clear Clears rules for a quota and uses system notification settings.
Storage Pools
isi storagepool compatibilities class active create Create a node class compatibility.
isi storagepool compatibilities class active create N400 N410 Creates a compatibility between Isilon NL400 and NL410 nodes.
isi storagepool compatibilities class active list Lists active compatibilities and their ID numbers.
isi storagepool compatibilities class available list Lists node class compatibilities that are available, but not yet created.
isi storagepool compatibilities class active delete 9 Deletes a node class compatibility with an ID number of 9.
isi storagepool compatibilities class active view Displays the details of an active node class compatibility.
isi storagepool compatibilities ssd active create S200 Creates an SSD class compatibility for Isilon S200 nodes that have different capacity SSDs.
isi storagepool compatibilities ssd active delete 1 Deletes an SSD compatibility with an ID number of 1.
isi storagepool compatibilities ssd active list Lists SSD compatibilities that have been created.
isi storagepool compatibilities ssd active view Displays the details of an active SSD compatibility.
isi storagepool compatibilities ssd available list Lists SSD compatibilities that are available, but not yet created.
isi storagepool nodepools create Create a node pool manually
isi storagepool nodepools create PROJECT-TEST –lnns 1,2,3 Creates a node pool by specifying the LNNs of three nodes to be included.
isi storagepool nodepools delete Deletes a node pool and autoprovisions the affected nodes into the appropriate node pool.
isi storagepool nodepools modify PROJECT-TEST –lnns 3-4, 11 Adds nodes with the LNNs (logical node numbers) of 3, 4, and 11 to an existing node pool.
isi storagepool nodepools modify PROJECT-TEST –set-name PROD-PROJECT \ –protection-policy +2:1 Changes the name and protection policy of a node pool.
isi storagepool nodepools modify PROD_ARCHIVE –remove-lnns 7,9 Removes two nodes, identified by their LNNs.
isi storagepool nodepools modify PROD-PROJECT –tier PROD_ARCHIVE Adds a node pool named PROD-PROJECT to a tier.
isi storagepool nodepools list Displays a list of node pools.
isi storagepool nodepools view Displays details for a node pool.
isi storagepool settings modify Modify default storage pool settings
isi storagepool settings view Displays global SmartPools settings.
isi storagepool settings modify –ssd-l3-cache-default-enabled yes Sets L3 cache enabled as the default for new node pools that are added.
isi storagepool nodepools modify hq_datastore –l3 true Enables L3 cache on a node pool named hq_datastore.
isi storagepool nodepools create Creates a manually managed node pool (use with assistance of technical support personnel)
isi storagepool tiers create PROD_ARCHIVE –children hq_datastore1 –children hq_datastore2 Creates a tier named PROD_ARCHIVE, and adds node pools to the tier.
isi storagepool tiers modify PROD_ARCHIVE –set-name ARCHIVE_TEST Renames a tier from PROD_ARCHIVE to ARCHIVE_TEST.
isi storagepool tiers delete ARCHIVE_TEST Deletes a tier named ARCHIVE_TEST.
isi storagepool tiers list Displays a list of tiers.
isi storagepool tiers view Displays details for a tier.
isi filepool policies create Create a file pool policy.
isi filepool policies delete Deletes a file pool policy.
isi filepool policies list View a list of available file pool policies.
isi filepool policies view View the current settings of a file pool policy.
isi filepool policies view OLD_ARCHIVE Displays the settings of a file pool policy named OLD_ARCHIVE.
isi filepool default-policy view Display the current default file pool policy settings.
isi filepool default-policy modify Change default settings
isi filepool policies modify PERFORMANCE –apply-order 1 Changes the priority of a file pool policy named PERFORMANCE.
isi filepool policies delete PROD_ARCHIVE Deletes a file pool policy named PROD_ARCHIVE.
isi storagepool health –verbose Displays a tabular description of storage pool health.
isi storagepool list Displays node pools and tiers in the cluster.
isi filepool apply Applies all file pool policies to the specified file or directory path.
isi filepool policies delete Delete a custom file pool policy. The default file pool policy cannot be deleted.
isi filepool templates list Lists available file pool policy templates.
isi filepool templates view View the detailed settings in a file pool policy template.
CloudPools
CloudPools can seamlessly connect to EMC-based cloud storage systems and to popular third-party providers such as Amazon S3 and Microsoft Azure.
isi cloud pools create cp_az azure csa_azure1 –vendor Microsoft Creates an Azure-based CloudPool.
isi cloud pools view cp_az View the result of this operation of the CloudPool that you created.
isi cloud pools list View a list of CloudPools that have been created on your cluster.
isi cloud pools view cah_s3_cp Information on the CloudPool named cah_s3_cp.
isi cloud pools modify c_pool_azure –remove-accounts c_acct2 –description “Secondary archive” Modifies a CloudPool named c_pool_azure, removing its cloud storage account.
isi cloud pools delete c_pool_azure Deletes the CloudPool named c_pool_azure.
isi cloud archive Archive specific files directly to the cloud.
isi cloud archive /ifs/data/shared/images/*.* –recursive yes Specifies a directory and all of its subdirectories and files to be archived.
isi cloud access add Adds cloud write access to the cluster.
isi cloud access list List the GUIDs of clusters that are accessible for SyncIQ failover or restore operations.
isi cloud access add ac7dd991261e33e382240801204c9a66 Enables a secondary cluster, identified by GUID, to have write access to cloud data.
 isi cloud access remove Remove previously granted access to SmartLink files.
isi cloud access view View the details of a cluster with, or eligible for, write access to cloud data.
isi cloud jobs list List all CloudPools jobs.
isi cloud jobs view View information about a CloudPools job.
isi cloud jobs pause Pause a running CloudPools job.
isi cloud jobs resume Resume a cloud job that has been paused.
isi cloud jobs cancel Cancel a running CloudPools job.
isi cloud jobs files list Displays the list of files matched by the specified CloudPools job.
isi cloud settings view View the top-level settings for CloudPools.
isi cloud settings modify Modify default CloudPools settings.
isi cloud settings modify –default-archive-snapshot-files=no Disables archival of files that have snapshot versions.
isi cloud settings modify –default-encryption-enabled=yes Enables encryption of cloud data.
isi cloud settings regenerate-encryption-key –verbose Generate a new master encryption key.
isi cloud recall [–recursive {yes | no}] [–verbose] Specify one or more files to be recalled from the cloud.
isi cloud restore_coi Restores the cloud object index (COI) for a cloud storage account on the cluster.
isi cloud settings regenerate-encryption-key –verbose Generates a new master encryption key for data to be archived to the cloud.
isi cloud accounts create Create a cloud storage account.
isi cloud accounts delete Delete a cloud storage account.
isi cloud accounts list List all cloud storage accounts created on your cluster.
isi cloud accounts view CloudAcct3 Displays account information for the CloudAcct3 account.
isi cloud accounts modify CloudAcct3 –name=CloudAcct5 Changes the name of the cloud storage account CloudAcct3 to CloudAcct5
isi cloud accounts delete OldRecords –acknowledge yes Deletes the cloud storage account OldRecords.
isi cloud accounts create –name=c-acct1 –type=azure Creates a Microsoft Azure cloud storage account.
  –uri=https://admin2.blob.core.windows.net –account-username=adm1
System Jobs
isi job jobs start Starts a system job manually.
isi job jobs start Collect –policy MEDIUM –priority 2 Runs the Collect job with a stronger impact policy and a higher priority.
isi job jobs start multiscan –priority 8 –policy high Starts a MultiScan job with a priority of 8 and a high impact policy.
isi job jobs pause 7 Pauses a job with an ID of 7.
isi job jobs pause Collect Pauses an active job.
isi job jobs list –state paused_user Lists jobs that have been manually paused.
isi job jobs list –format csv > /ifs/data/joblist.csv Outputs a CSV-formatted list of jobs to a file in the /ifs/data path.
isi job jobs list Displays information about active jobs.
isi job jobs view Displays information about a running or queued job, including the state, impact policy, priority, and schedule.
isi job jobs modify Changes the priority level or impact policy of a queued, running, or paused job.
isi job pause Collect Pauses the Collect job.
isi job jobs modify 7 –priority 3 –policy medium Updates the priority and impact policy of an active job.
isi job jobs modify Collect –priority 3 –policy medium Job type can be specified instead of the job id.
isi job jobs resume 7 Resumes a job with the ID number 7.
isi job jobs resume Collect Job type can be specified instead of the job id.
isi job jobs cancel 7 Cancels a job with the ID number 7.
isi job jobs cancel Collect Job type can be specified instead of the job id.
isi job types modify mediascan –priority 2 –policy medium Modifies the default priority level and impact level for the MediaScan job type.
isi job types modify mediascan –schedule ‘every Saturday at 09:00’ –force Schedules the MediaScan job to run every Saturday morning.
isi job types modify mediascan –clear-schedule –force Removes the schedule for a job type that is scheduled.
isi job types list Displays a list of job types and default settings.
isi job types view Displays the parameters of a specific job type
isi job jobs list View active jobs.
isi job events list –job-type multiscan Displays the activity of the MultiScan job type.
isi job events list –begin 2013-09-16 View all jobs within a specific time frame.
isi job events list –begin 2013-09-15 –end 2013-09-16 > /ifs/data/report1.txt Outputs job history for a specific period to a file.
isi job policies create MY_POLICY –impact medium –begin ‘Saturday 00:00’ –end ‘Sunday 23:59’ Creates a custom policy defining a specific time frame and impact level.
isi job policies view MY_POLICY Displays the impact policy settings of the custom impact policy MY_POLICY.
isi job policies modify MY_POLICY –reset-intervals Resets the policy interval settings to the base defaults: low impact and anytime operation.
isi job policies delete MY_POLICY Deletes a custom impact policy named MY_POLICY.
isi job policies list –verbose Displays the names and descriptions of job impact policies.
isi job statistics view –job-id 857 View statistics for a job in progress.
isi job statistics list Displays a statistical summary of active jobs in the Job Engine queue.
isi job reports view 857 Displays the report of a Collect job with an ID of 857.
isi job reports list Displays information about successful job operations.
isi job status [–verbose] Displays a summary of active, completed, and failed jobs.
Networking
Run the isi config command; the command-line prompt changes to indicate that you are in the isi config subsystem. Run the commit command to apply your changes (see the example session after this list).
iprange int-a 192.168.101.10-192.168.101.20 Adds an IP range to the int-a internal network.
deliprange int-a 192.168.101.15-192.168.101.20 Deletes an existing IP address range from the int-a internal network.
netmask int-a 255.255.255.0 Changes the int-a internal network netmask.
netmask int-b 255.255.255.0 Changes the int-b internal network netmask.
iprange int-b 192.168.101.21-192.168.101.30 Adds an IP range to the int-b internal network.
iprange failover 192.168.101.31-192.168.101.40 Adds an IP range to the internal failover network.
interface int-b enable Specifies the interface name as int-b and enables it.
interface int-b disable Specifies the int-b interface and disables it.
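As a rough sketch, a session that adds the int-a IP range and netmask from the examples above and then commits the change might look like this (prompts omitted; exiting the subsystem with quit afterward is my assumption):

isi config
iprange int-a 192.168.101.10-192.168.101.20
netmask int-a 255.255.255.0
commit
quit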
isi network groupnet create Create a groupnet and configure DNS client settings.
isi network groupnet create ProdGroupNet \ –dns-servers=192.0.2.0 –dns-cache-enabled=true Creates a groupnet named ProdGroupNet that supports one DNS server and enables DNS caching.
isi network groupnet modify Modify groupnet attributes.
isi network groupnet modify ProdGroupNet \ –dns-search=dat.corp.com,stor.corp.com Modifies ProdGroupNet to enable DNS search on the two specified suffixes.
isi network groupnet modify ProdGroupNet \ –add-dns-servers=192.0.2.1 –dns-options=rotate Modifies ProdGroupNet to support a second DNS server and to enable rotation through the configured DNS resolvers.
isi network groupnet delete Delete a groupnet.
isi network groupnet delete ProdGroupNet Deletes a groupnet named ProdGroupNet.
isi network groupnets list Retrieve and sort a list of all groupnets on the system.
isi network groupnets list –sort=id –descending Sorts the list of groupnets by ID in descending order.
isi network groupnets view ProdGroupNet Displays the details of a groupnet named ProdGroupNet.
isi network groupnets modify Modifies a groupnet which defines the DNS settings applied to services that connect through the groupnet.
isi network subnets create Add a subnet to the external network of an EMC Isilon cluster.
isi network subnets create \ ProdGroupNet.subnetX ipv4 255.255.255.0 Creates a subnet associated with ProdGroupNet.
isi network subnets list Identify the ID of the external subnet.
isi network subnets modify ProdGroupNet.subnetX \ –name=subnet5 Changes the name of subnetX under ProdGroupNet to subnet5.
isi network subnets modify g1.subnet3 –mtu=1500 –gateway=198.162.205.10 –gateway-priority=1 Sets the MTU to 1500, the gateway to 198.162.205.10, and the gateway priority to 1.
isi network subnets delete ProdGroupNet.subnetX Deletes subnetX under ProdGroupNet.
isi network subnets view View the details of a specific subnet.
isi network subnets view ProdGroupNet.subnetX Displays details for subnetX under ProdGroupNet.
isi network subnets modify ProdGroupNet.subnetX \ –sc-service-addr=198.11.100.15 Specifies the SmartConnect service IP address on subnetX under ProdGroupNet.
isi networks modify subnet Enable or disable VLAN tagging on the external subnet.
isi network subnets modify ProdGroupNet.subnetX \ –vlan-enabled=true –vlan-id=256 Enables VLAN tagging on subnetX under ProdGroupNet, sets VLAN ID to 256.
isi network subnets modify ProdGroupNet.subnetX \ –vlan-enabled=false Disables VLAN tagging on subnetX under ProdGroupNet.
isi network subnets modify ProdGroupNet.subnetX \ –add-dsr-addrs=198.11.100.20 Adds a DSR address to subnetX under ProdGroupNet.
isi network subnets modify ProdGroupNet.subnetX \ –remove-dsr-addrs=198.11.100.20 removes a DSR address from subnetX under ProdGroupNet.
isi network pools create Create an IP address pool.
isi network pools create ProdGroupNet.subnetX.ProdPool1 Creates a pool named ProdPool1 and assigns it to subnetX under ProdGroupNet.
isi network pools create ProdGroupNet.subnetX.ProdPool1 \ –access-zone=zoneB Creates a pool named ProdPool1, assigns it to ProdGroupNet.subnetX, specifies zoneB as the access zone.
isi networks modify pool Modify IP address pools to update pool settings.
isi network pools modify ProdGroupNet.subnetX.pool3 –name=ProdPool1 Changes the name of the pool from pool3 to ProdPool1.
isi networks delete pool Delete an IP address pool that you no longer need.
isi network pools delete ProdGroupNet.subnetX.ProdPool1 Deletes the pool name ProdPool1 from ProdGroupNet.subnetX.
isi network pools list View all IP address pools within a groupnet or subnet
isi network pools list ProdGroupNet.subnetX Displays all IP address pools under ProdGroupNet.subnetX.
isi network pools view ProdGroupNet.subnetX.ProdPool1 Displays the setting details of ProdPool1 under ProdGroupNet.subnetX.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 \ –add-ranges=192.0.102.12-192.0.102.22 Adds an address range to ProdPool1 under ProdGroupNet.subnetX
isi network pools modify ProdGroupNet.subnetX.ProdPool1 \ –remove-ranges=192.0.102.12-192.0.102.14 Deletes an address range from ProdPool1
isi network pools modify ProdGroupNet.subnetX.ProdPool1 \ –alloc-method=dynamic Specifies dynamic distribution of IP addresses in ProdPool1 under ProdGroupNet.subnetX.
isi networks modify pool .. Configures a SmartConnect DNS zone.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 \ –sc-dns-zone=www.corp.com Specifies a SmartConnect DNS zone in ProdPool1 under subnetX and ProdGroupNet.
isi networks modify pool Configures a SmartConnect DNS zone alias.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 \ –add-sc-dns-zone-aliases=data.corp.com Specifies SmartConnect DNS aliases in ProdPool1 under subnetX and ProdGroupNet.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 \ –remove-sc-dns-zone-aliases=data.corp.com Removes a SmartConnect DNS zone alias from ProdPool1 under subnetX and ProdGroupNet.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 \ –sc-subnet=subnet0 Specifies subnet0 as the SmartConnect service subnet of ProdPool1 under subnetX and ProdGroupNet.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 –add-ifaces=1-3:ext-1 Modifies ProdPool1 under ProdGroupNet.subnetX to add the first external network interface on nodes 1 through 3.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 –remove-ifaces=3:ext-1 Removes the first network interface on node 3 from ProdPool1.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 –aggregation-mode=fec Modifies ProdPool1 under ProdGroupNet.subnetX to specify FEC as the aggregation mode for all aggregated interfaces in the pool.
isi network pools modify Pnet1.subnetX.ProdPool1 –add-ifaces=1:ext-agg –aggregation-mode=lacp Modifies ProdPool1 under Pnet1.subnetX to add ext-agg on node 1 and specify LACP as the aggregation mode.
isi network pools modify Pnet1.snet3.ProdPool1 –add-static-routes=192.168.100.0/24-192.168.205.2 Adds an IPv4 static route to ProdPool1 and assigns the route to all network interfaces that are members of the pool.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 \ –sc-connect-policy=conn_count Specifies a connection balancing policy based on connection count in ProdPool1 under subnetX and ProdGroupNet.
isi network pools modify groupnet0.subnetX.ProdPool1 \ –sc-failover-policy=cpu_usage Specifies an IP failover policy based on CPU usage in ProdPool1 under subnetX and groupnet0.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 \ –rebalance-policy=manual Specifies manual rebalancing of IP addresses in ProdPool1 under ProdGroupNet.subnetX.
isi network pools sc-suspend-nodes .. Suspends DNS query responses for a node.
isi network pools rebalance-ips .. Manually rebalances a specific IP address pool.
isi network pools rebalance-ips ProdGroupNet.subnetX.ProdPool1 Rebalances the IP addresses in ProdPool1.
isi network pools sc-suspend-nodes ProdGroupNet.subnetX.ProdPool1 3 Suspends DNS query responses on node 3.
isi network pools sc-resume-nodes Resumes DNS query responses for an IP address pool.
isi network pools sc-resume-nodes ProdGroupNet.subnetX.ProdPool1 3 Resumes DNS query responses on node 3.
isi network external view Displays configuration settings for the external network.
isi network external modify Modifies global external network settings on the EMC Isilon cluster.
isi network external modify –sc-balance-delay Specifies a rebalance delay (in seconds) that passes after a qualifying event prior to an automatic rebalance.
isi network sc-rebalance-all Rebalances all IP address pools.
isi networks modify pool .. Configure which network interfaces are assigned to an IP address pool.
isi network interfaces list Retrieve and sort a list of all external network interfaces on the EMC Isilon cluster.
isi network interfaces list –nodes=1,3 Displays interfaces only on nodes 1 and 3.
isi network rules create … Creates a node provisioning rule.
isi network rules create ProdGroupNet.subnetX.ProdPool1.rule7 \ –iface=ext-1 –node-type=accelerator Creates a rule (rule7) that assigns the first external network interface on each new accelerator node to ProdGroupNet.subnetX.ProdPool1.
isi network rules modify … Modifies node provisioning rules settings.
isi network rules modify ProdGroupNet.subnetX.ProdPool1.rule7 \ –name=rule7accelerator Changes the name of rule7 to rule7accelerator.
isi network rules modify ProdGroupNet.subnetX.ProdPool1.rule7 \ –node-type=backup-accelerator Changes rule7 so that it applies only to backup accelerator nodes.
isi networks delete rule … Deletes a node provisioning rule that is no longer needed.
isi network rules delete ProdGroupNet.subnetX.ProdPool1.rule7 Deletes rule7 from ProdPool1.
isi network rules list Lists all of the provisioning rules in the system.
isi network rules list –groupnet=ProdGroupNet Lists rules in ProdGroupNet.
isi network rules view ProdGroupNet.subnetX.ProdPool1.rule7 Displays the setting details of rule7 under ProdGroupNet.subnetX.ProdPool1.
isi network external modify –sbr=true Enables source-based routing on the cluster.
isi network external modify –sbr=false Disables source-based routing on the cluster.
isi network dnscache flush [–verbose] Simultaneously flushes the DNS cache of each groupnet that has enabled DNS caching.
isi network dnscache modify Modifies global DNS cache settings for each DNS cache that is enabled per groupnet.
isi network dnscache view Displays DNS cache settings.
Hadoop
isi hdfs settings modify The HDFS service (enabled by default after you activate an HDFS license) can be enabled or disabled per access zone.
isi hdfs settings modify –service=yes –zone=DevZone Enables the HDFS service in DevZone.
isi hdfs settings modify –service=no –zone=DevZone Disables the HDFS service in DevZone.
isi hdfs settings modify Configure HDFS service settings in each zone to improve performance for HDFS workflows.
isi hdfs settings modify –default-block-size=256K –zone=DevZone Sets the block size to 256 KB in the DevZone access zone  (Suffixes K, M, and G are allowed).
isi hdfs settings modify –default-checksum-type=crc32 –zone=DevZone Sets the checksum type to crc32 in the DevZone access zone.
isi hdfs settings view View HDFS settings in an access zone.
isi hdfs settings view –zone=ProdZone Displays the HDFS settings in the ProdZone access zone.
isi hdfs log-level modify Sets the default logging level of HDFS services events.
isi hdfs log-level modify –set=trace Sets the HDFS log level to trace on the node.
isi hdfs log-level view View the default logging level of HDFS services events.
isi hdfs settings modify –root-directory=/ifs/DevZone/hadoop –zone=DevZone Grants HDFS clients in DevZone access to the /ifs/DevZone/hadoop directory.
isi hdfs settings modify –authentication-mode=simple_only –zone=DevZone Clients connecting to DevZone must be identified through the simple authentication method.
isi zone zones modify DevZone –authentication-mode=kerberos_only Clients connecting to DevZone must be identified through the Kerberos authentication method.
isi hdfs settings modify –webhdfs-enabled=yes –zone=DevZone Enables WebHDFS in DevZone.
isi hdfs settings modify –webhdfs-enabled=no –zone=DevZone Disables WebHDFS in DevZone.
isi hdfs proxyusers create Creates a proxy user.
isi hdfs proxyusers create hadoop-HDPUser –zone=ProdZone Designates hadoop-HDPUser in ProdZone as a new proxy user.
isi hdfs proxyusers modify Modifies the list of members that a proxy user securely impersonates.
isi hdfs proxyusers delete Deletes a proxy user from an access zone.
isi hdfs proxyusers delete hadoop-HDPUser –zone=ProdZone Deletes the proxy user hadoop-HDPUser from the ProdZone access zone.
isi hdfs proxyusers members list Lists the members of a proxy user.
isi hdfs proxyusers list –zone=ProdZone Displays a list of all proxy users configured in ProdZone.
isi hdfs proxyusers view View the configuration details for a specific proxy user.
isi hdfs proxyusers view hadoop-HDPUser –zone=ProdZone displays the configuration details for the hadoop-HDPUser.
isi hdfs racks create Create a virtual HDFS rack of nodes.
isi hdfs racks create /hdfs-rack2 –zone=TestZone Creates a rack named /hdfs-rack2 in the TestZone access zone.
isi hdfs racks modify Modify the settings of a virtual HDFS rack.
isi hdfs racks modify /hdfs-rack2 –new-name=/hdfs-rack5 –zone=DevZone Renames a rack named /hdfs-rack2 in the DevZone access zone to /hdfs-rack5.
isi hdfs racks delete Delete a virtual HDFS rack from an access zone.
isi hdfs racks delete /hdfs-rack2 –zone=ProdZone Deletes the virtual HDFS rack named /hdfs-rack2 from the ProdZone access zone.
isi hdfs racks list View a list of all virtual HDFS racks in an access zone.
isi hdfs racks list –zone=ProdZone Lists all HDFS racks configured in the ProdZone access zone.
isi hdfs racks view /hdfs-rack2 –zone=ProdZone View the setting details for a specific virtual HDFS rack.
ESRS Commands
isi remotesupport connectemc modify Enable and configure ESRS.
isi remotesupport connectemc modify –enabled=no Disables ESRS.
isi remotesupport connectemc view View ESRS Config.
Antivirus
isi antivirus settings modify Target specific files for scans by antivirus policies.
isi antivirus settings modify –glob-filters-enabled true \ –glob-filters .txt Configures OneFS to scan only files with the .txt extension.
isi antivirus settings modify –scan-on-close true \ –path-prefixes /ifs/data/media Configures OneFS to scan files and directories under /ifs/data/media when they are closed.
isi antivirus settings modify –repair true –quarantine true Configures OneFS and ICAP servers to attempt to repair infected files and quarantine files that cannot be repaired.
isi antivirus settings modify –report-expiry 12w Configures OneFS to delete antivirus reports older than 12 weeks.
isi antivirus settings modify –service enable Enables antivirus scanning.
isi antivirus settings modify –service disable Disables antivirus scanning.
isi antivirus servers create Add and connect to an ICAP server.
isi antivirus servers create icap://192.168.1.100 –enabled yes Adds and connects to an ICAP server at 192.168.1.100.
isi antivirus servers modify icap://192.168.1.100 –enabled yes Reconnects to an ICAP server.
isi antivirus servers modify icap://192.168.1.100 –enabled no Temporarily disconnects from the ICAP server.
isi antivirus servers delete icap://192.168.1.100 Removes an ICAP server with an ID of icap://192.168.1.100.
isi antivirus policies create Create a policy that causes specific files to be scanned for viruses each time the policy is run.
isi antivirus policies create HolidayVirusScan –paths /ifs/data \ –schedule “Every Friday at 12:00 PM” Creates an antivirus policy that scans /ifs/data every Friday at 12:00 PM.
isi antivirus policies modify HolidayVirusScan \ –schedule “Every Saturday at 12:00 PM” Modifies the HolidayVirusScan policy to run every Saturday at 12:00 PM.
isi antivirus policies delete HolidayVirusScan Deletes a policy called HolidayVirusScan.
isi antivirus policies modify HolidayVirusScan –enabled yes Enables a policy called HolidayVirusScan.
isi antivirus policies modify HolidayVirusScan –enabled no Disables a policy called HolidayVirusScan.
isi antivirus policies list View antivirus policies.
isi antivirus scan Manually scan an individual file for viruses.
isi antivirus scan /ifs/data/virus_file Scans the /ifs/data/virus_file file for viruses.
isi antivirus quarantine Quarantine a file to prevent the file from being accessed by users.
isi antivirus quarantine /ifs/data/badFile.txt Quarantines /ifs/data/badFile.txt.
isi antivirus release /ifs/data/badFile.txt Removes /ifs/data/badFile.txt from quarantine.
isi antivirus reports threats list View files that have been identified as threats by an ICAP server.
isi antivirus reports scans list View antivirus reports.
isi event events list View events that relate to antivirus activity.
Event Commands
isi event groups list Identify the group ID of the event group that you want to view.
isi event groups view View the details of a specific group.
isi event alerts list Identify the alert ID of the alert that you want to view.
isi event alerts delete Deletes an alert.
isi event alerts view NewExternal View the details of a specific alert, the name of the alert is case-sensitive.
isi event channels list Identify the name of the channel that you want to view
isi event channels view Support View the details of a channel
isi event settings view View your storage and maintenance settings.
isi event test create “Test message” Manually generate a test alert.
isi event settings modify Change the frequency that a heartbeat event is generated.
isi event alerts create Hardware NEW_EVENTS –channel RemoteSupport Creates an alert named Hardware, sets the alert condition to NEW_EVENTS, and sets the channel that will broadcast the event as RemoteSupport.
Isilon Technical Support Commands
isi_auth_expert
isi_bootdisk_finish
isi_bootdisk_provider_dev
isi_bootdisk_status
isi_bootdisk_unlock
isi_checkjournal
isi_clean_idmap
isi_client_stats
isi_cpr
isi_cto_update
isi_disk_firmware_reboot
isi_dmi_info
isi_dmilog
isi_dongle_sync
isi_drivenum
isi_dsp_install
isi_dumpjournal
isi_eth_mixer_d
isi_evaluate_provision_drive
isi_fcb_vpd_tool
isi_flexnet_info
isi_flush
isi_for_array
isi_fputil
isi_gather_info
isi_gather_auth_info
isi_gather_cluster_info
isi_gconfig
isi_get_itrace
isi_get_profile
isi_hangdump
isi_hw_check
isi_hw_status
isi_ib_bug_info
isi_ib_fw
isi_ib_info
isi_ilog
isi_imdd_status
isi_inventory_tool
isi_ipmicmc
isi_job_d
isi_kill_busy
isi_km_diag
isi_lid_d
isi_linmap_mod
isi_logstore
isi_lsiexputil
isi_make_abr
isi_mcp
isi_mps_fw_status
isi_netlogger
isi_nodes
isi_ntp_config
isi_ovt_check
isi_patch_d
isi_promptsupport
isi_radish
isi_rbm_ping
isi_repstate_mod
isi_restill
isi_rnvutil
isi_sasphymon
isi_save_itrace
isi_savecore
isi_sed
isi_send_abr
isi_smbios
isi_stats_tool
isi_transform_tool
isi_ufp
isi_umount_ifs
isi_update_cto
isi_update_serialno
isi_vitutil
isi_vol_copy
isi_vol_copy_vnx

Scripting automatic reports for Isilon from the CLI

We recently set up a virtual demo of an Isilon system on our network as we are evaluating Isilon for a possible purchase.  You can obtain a virtual node that runs on ESX from your local EMC Isilon representative, along with temporary licenses to test everything out.  As part of the test I wanted to see if it was possible to create custom CLI scripts in the same way that I create them on the Celerra or VNX File.  On the Celerra, I run daily scripts that output file pool sizes, file systems & disk space, failover status, logs, health check info, checkpoint info, etc. to my web report page.  Can you do the same thing on an Isilon?

Well, to start with, Isilon’s commands are completely different.  The first step of course was to look at what was available to me.  All of the Isilon administration commands appear to begin with ‘isi’.  If you type isi by itself it will show you the basic list of commands and what they do:

isilon01-1% isi
Description:
OneFS cluster administration.
Usage:
isi  <subcommand>
[--timeout <integer>]
[{--help | -h}]
Subcommands:
Cluster Monitoring:
alert*           An alias for "isi events".
audit            Manage audit configuration.
events*          Manage cluster events.
perfstat*        View cluster performance statistics.
stat*            An alias for "isi status".
statistics*      View general cluster statistics.
status*          View cluster status.
Cluster Configuration:
config*          Manage general cluster settings.
email*           Configure email settings.
job              Isilon job management commands.
license*         Manage software licenses.
networks*        Manage network settings.
services*        Manage cluster services.
update*          Update OneFS system software.
pkg*             Manage OneFS system software patches.
version*         View system version information.
remotesupport    Manage remote support settings.
Hardware & Devices:
batterystatus*   View battery status.
devices*         Manage cluster hardware devices.
fc*              Manage Fibre Channel settings.
firmware*        Manage system firmware.
lun*             Manage iSCSI logical units (LUNs).
target*          Manage iSCSI targets.
readonly*        Toggle node read-write state.
servicelight*    Toggle node service light.
tape*            Manage tape and media changer devices.
File System Configuration:
get*             View file system object properties.
set*             Manage file system object settings.
quota            Manage SmartQuotas, notifications and reports.
smartlock*       Manage SmartLock settings.
domain*          Manage file system domains.
worm*            Manage SmartLock WORM settings.
dedupe           Manage Dedupe settings.
Access Management:
auth             Manage authentication, identities and role-based access.
zone             Manage access zones.
Data Protection:
avscan*          Manage antivirus service settings.
ndmp*            Manage backup (NDMP) settings.
snapshot         Manage SnapshotIQ file system snapshots and settings.
sync             SyncIQ management interface.
Protocols:
ftp*             Manage FTP settings.
hdfs*            Manage HDFS settings.
iscsi*           Manage iSCSI settings.
nfs              Manage NFS exports and protocol settings.
smb              Manage SMB shares and protocol settings.
snmp*            Manage SNMP settings.
Utilities:
exttools*        External tools.
Other Subcommands:
filepool         Manage filepools on the cluster.
storagepool      Configure and monitor storage pools
Options:
Display Options:
--timeout <integer>
Number of seconds for a command timeout.
--help | -h
Display help for this command.

Note that subcommands or actions marked with an asterisk (*) require root login. If you log in with a normal admin account, you’ll get the following error message when you run the command:

isilon01-1% isi status
Commands not enabled for role-based administration require root user access.

Isilon’s OneFS operating system is based on FreeBSD, as opposed to Linux for the Celerra/VNX DART OS. Since it’s a Unix-based OS with console access, you can create shell scripts and cron job schedules just like on the Celerra/VNX File.

Because all of the commands I want to run require root access, I had to create the scripts logged in as root. Be careful doing this! This is only a test for me; for a production system I would look for a workaround. Knowing that I’m going to be FTPing the output files to my web report server, I started by creating the .netrc file in the /root folder. This is where I store the default login and password for the FTP server. Permissions must be changed for it to work; use chmod 600 on the file after you create it. It didn’t work for me at first, as the .netrc syntax is different on FreeBSD than on Linux, so looking at my Celerra notes didn’t help (for Celerra/VNX File I used “machine <ftp_server_name> login <ftp_login_id> password <ftp_password>”).

For Isilon, the correct syntax is this:

default login <ftp_username> password <ftp_password>
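For example, with a hypothetical FTP user named reportuser and password secret, the file could be created and locked down like this:

cat > /root/.netrc << 'EOF'
default login reportuser password secret
EOF
chmod 600 /root/.netrc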
 

The script that FTP’s the files would then look like this for Isilon:

HOST="10.1.1.1"
ftp $HOST <<SCRIPT
put /root/isilon_stats.txt
put /root/isilon_repl.txt
put /root/isilon_df.txt
put /root/isilon_dedupe.txt
put /root/isilon_perf.txt
SCRIPT
 

For this demo, I created a script that generates reports for file system utilization, deduplication status, performance stats, replication stats, and array status. For the filesystem utilization report, I used two different methods, as I wasn't sure which I'd like better. Using 'df -h -a /ifs' will get you similar information to 'isi storagepool list', but the output format is different. I used cron to schedule the job directly on the Isilon (a sample crontab entry is below).
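As an illustration, assuming the reporting script below is saved as /root/isilon_report.sh and the FTP script above as /root/isilon_ftp.sh (both file names are my own placeholders), a root crontab entry to run them every morning at 6:00 AM might look like this:

0 6 * * * /bin/sh /root/isilon_report.sh && /bin/sh /root/isilon_ftp.sh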

Here is the reporting script:

#!/bin/sh
# Gather daily Isilon status, replication, capacity, dedupe, and performance reports into text files under /root.
TODAY=$(date)
HOST=$(hostname)
sleep 15
echo "---------------------------------------------------------------------------------" > /root/isilon_stats.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_stats.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_stats.txt
/usr/bin/isi status >> /root/isilon_stats.txt
echo "---------------------------------------------------------------------------------" > /root/isilon_repl.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_repl.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_repl.txt
/usr/bin/isi sync reports list >> /root/isilon_repl.txt
echo "---------------------------------------------------------------------------------" > /root/isilon_df.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_df.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_df.txt
df -h -a /ifs >> /root/isilon_df.txt
echo " " >> /root/isilon_df.txt
/usr/bin/isi storagepool list >> /root/isilon_df.txt
echo "---------------------------------------------------------------------------------" > /root/isilon_dedupe.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_dedupe.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_dedupe.txt
/usr/bin/isi dedupe stats >> /root/isilon_dedupe.txt
echo "---------------------------------------------------------------------------------" > /root/isilon_perf.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_perf.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--System Stats--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics system >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--Client Stats--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics client >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--Protocol Stats--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics protocol >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--Protocol Data--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics pstat >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--Drive Stats--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics drive >> /root/isilon_perf.txt
 

Once the output files are FTP’d to the web server, I have a basic HTML page that uses iframes to show the text files. The web page is then automatically updated as soon as the new text files are FTP’d. A rough sketch of that page is below, followed by a screenshot of my demo report page. The screenshot doesn’t show the entire page, but you’ll get the idea.
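The sketch below shows how such a page could be generated on the web server with a shell heredoc; the /usr/local/www/reports path and the exact layout are assumptions for illustration, not my actual page:

cat > /usr/local/www/reports/isilon.html << 'EOF'
<html>
<body>
<h1>Isilon Daily Reports</h1>
<iframe src="isilon_stats.txt" width="100%" height="300"></iframe>
<iframe src="isilon_df.txt" width="100%" height="300"></iframe>
<iframe src="isilon_dedupe.txt" width="100%" height="300"></iframe>
<iframe src="isilon_repl.txt" width="100%" height="300"></iframe>
<iframe src="isilon_perf.txt" width="100%" height="300"></iframe>
</body>
</html>
EOF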

[Screenshot: Isilon daily report page]

EMC World 2012 – Thoughts on Isilon

I’m in Las Vegas and finished up my first full day at EMC World 2012 today. There are about 15,000 attendees this year, far more than last year, and it’s obvious: the crowds are huge and the Venetian is packed full. Joe Tucci’s keynote was amazing; the video screen behind Joe was longer than a football field, and he took the time to point that out. 🙂 He went into detail about the past, present, and future of IT and it was very interesting.

Many of the sessions I’m signed up for have non-disclosure agreements, so I can’t speak about some of the new things being announced or the sessions I’ve attended. I’m trying to focus on learning about (and attending breakout sessions on) EMC technologies that we don’t currently use in my organization, to broaden my scope of knowledge. There may be better solutions from EMC available than what the company I work for is currently using, and I want to learn about all the options available.

My first session today was about EMC’s Isilon product and I was excited to learn more about it. My only experience so far with EMC’s file-based solutions is with legacy Celerra arrays and VNX File. So, what’s the difference, and why would anyone choose to purchase an Isilon solution over simply adding a few data movers to their VNX? Why is Isilon better? Good question. I attended an introductory-level session, but it was very informative.

I’m not going to pretend to be an expert after listening to a one-hour session this morning, but here were my takeaways from it. Isilon, in a nutshell, is a much higher-performance solution than VNX File. There are several different iterations of the platform available (S, X, and NL Series), each focused on specific customer needs: one for highly transactional, IOPS-intensive workloads, another geared for capacity, and another geared for a balance of the two (and a smaller budget). It uses the OneFS single filesystem (impressive by itself), which eliminates the standard abstraction layers of filesystem, volume manager, and RAID type. All of the disks use a single file system regardless of the total size. The data is arranged in a fully symmetric cluster with data striped across all of the nodes. The single OneFS filesystem works regardless of the size of your filesystem, from an 18 TB minimum all the way up to 15 PB.

Adding a new node to Isilon is seamless; it’s instantly added to the cluster (hence the term “Scale-out NAS” EMC has been touting throughout the conference). You can add up to 144 nodes to a single Isilon cluster. It also features auto-balancing, in that it will automatically rebalance data onto the new node that was just added. It can also remove data from a node and move it to a new one if you decide to decommission a node and replace it with a newer model. Need to replace that four-year-old Isilon node with the old, low-capacity disks? No problem. Another interesting item to note is how data is stored across the nodes. Isilon does not use a standard RAID model at all; it distributes data across the disks based on how much protection you decide you need. As an administrator, you choose how the data is protected, keeping as many copies of data as you want (at the expense of total available storage). The more duplicate copies of data you keep, the less total storage you have available for production use. One great benefit of Isilon vs. VNX File is that rebuilds are much faster: traditional RAID groups are dependent on the total IO available to the single drive being rebuilt, while Isilon rebuilds are spread across the entire cluster. It could mean the difference between a 12-hour single-disk RAID 5 rebuild and less than one hour on Isilon. Pretty cool stuff.

I only have experience with Celerra Replicator, but it was also mentioned in the session that Isilon replication can go down to the specific folder level within a file system. Very cool. I can only do replication at the entire-file-system level on VNX File and Celerra right now. I don’t have any experience with that functionality yet, but it sounds very interesting.

There is a new upcoming version of OneFS (called “Mavericks”) that will introduce even more new features; I’m not going to go into those as they may be part of the non-disclosure agreement. Everything I’ve mentioned thus far is available currently. Overall, I was very impressed with the Isilon architecture as compared to VNX File. EMC claimed that they have the highest NAS file system throughput of any vendor with Isilon, at 106 GB/sec. Again, very impressive.

I’ll make another update this week after attending a few more breakout sessions. I’m also looking forward to learning more about Greenplum; the promise of improved performance through parallelism (using scale-out architecture on standard hardware) is also very interesting to me. If anyone else is at EMC World this week, please comment!

Cheers!