Adding, modifying and viewing an ACL in the Isilon OneFS CLI

This is an overview and reference for the commands and syntax needed for adding and modifying an ACL on Isilon OneFS files and directories from the CLI.

Access Control Entries for the ACL

Note that a complete ACE list can be viewed by running “man chmod” from the CLI.  This is a list of the entries I used.

dir_gen_all Read, write, and execute access (dir_gen_read, dir_gen_write, dir_gen_execute, delete_child, and std_write_owner)
object_inherit Only files in this directory and its descendants inherit the ACE
container_inherit Only directories in this directory and its descendants inherit the ACE
delete_child The right to delete children, including read-only files
file_gen_all file_gen_read, file_gen_write, file_gen_execute, delete, std_write_dac, and std_write_owner
add_file The right to create a file in the directory
add_subdir The right to create a subdirectory

Sample commands for adding ACLs to a folder

The chmod +a command is used to add an ACE for an AD group or user, setting ACLs explicitly on directories and files.  With the -R option, setting permissions on a directory automatically applies the same ACEs to all of the files and subdirectories within it.

chmod -R +a group "<Domain Name>\<Group Name>" allow dir_gen_all,delete_child,object_inherit,container_inherit,file_gen_all,add_file,add_subdir <absolute_path>
chmod -R +a user "<Domain Name>\<User Name>" allow dir_gen_all,delete_child,object_inherit,container_inherit,file_gen_all,add_file,add_subdir <absolute_path>

For the equivalent of Full Control I added the following permissions:

dir_gen_all,delete_child,object_inherit,container_inherit,file_gen_all,add_file,add_subdir
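
For example, granting that full set to a hypothetical AD group on a directory tree might look like this (the domain, group name, and path are placeholders):

chmod -R +a group "EXAMPLE\Share-Admins" allow dir_gen_all,delete_child,object_inherit,container_inherit,file_gen_all,add_file,add_subdir /ifs/data/share1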

For the equivalent of Read/Execute I added the following permissions:

dir_gen_read,dir_gen_execute,file_gen_read,file_gen_execute,object_inherit,container_inherit
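
Using the same pattern, a hypothetical read-only grant might look like this (again, the group and path are placeholders):

chmod -R +a group "EXAMPLE\Share-Readers" allow dir_gen_read,dir_gen_execute,file_gen_read,file_gen_execute,object_inherit,container_inherit /ifs/data/share1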

Making Changes

If you need to alter an ACL, the most commonly used chmod command-line switches are -a, -b, and -D, described below.

-a The -a mode is used to delete ACL entries.
-b Removes the ACL and replaces with the specified mode.
-D Removes all ACEs in the security descriptor's DACL for all named files. This results in implicitly denying everything.
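
As a rough sketch of how those switches might be used (the group name and path are placeholders, and the exact -a syntax for matching an ACE should be verified with “man chmod” on your cluster):

chmod -a group "EXAMPLE\Share-Admins" allow dir_gen_read /ifs/data/share1   # delete a matching ACE
chmod -b 770 /ifs/data/share1   # remove the ACL and replace it with mode 770
chmod -D /ifs/data/share1       # remove every ACE from the DACL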

If you attempt to add a well-known Windows group such as “Authenticated Users” or “Power Users” to a file or directory Access Control List (ACL) from the command line interface, you’ll get the error “illegal group name: Invalid argument”.  You’ll need to use the well-known SID rather than the name in order to add the ACL entry. You can view a complete list of the well-known SIDs on Microsoft’s website here: https://support.microsoft.com/en-us/kb/243330.

Syntax examples using well-known SIDs

(S-1-5-11 is “Authenticated Users”)

chmod +a sid S-1-5-11 allow traverse,list

(S-1-1-0 is “Everyone”)

chmod +a sid S-1-1-0 allow traverse,list

Viewing AD permissions on a OneFS folder

Once you’ve added your ACLs, you can list and confirm permissions on the folder with the “ls -led” command. Below is some sample output.

ISILON-NODE-1# ls -led
drwxrwx--- + 34 AD_DOMAIN_NAME\account_name AD_DOMAIN_NAME\domain users 1630 May 11 13:26 .
OWNER: user:AD_DOMAIN_NAME\account_name
GROUP: group:AD_DOMAIN_NAME\domain users
CONTROL:dacl_auto_inherited,sacl_auto_inherited
0: group:AD_DOMAIN_NAME\folder-admin allow dir_gen_read,dir_gen_write,dir_gen_execute,std_delete,object_inherit,container_inherit
1: user:AD_DOMAIN_NAME\service_file_transfer allow dir_gen_all,object_inherit,container_inherit
2: SID:S-1-5-21-2127695773-1422393826-955202855-634017 allow dir_gen_read,dir_gen_write,dir_gen_execute,std_delete,object_inherit,container_inherit
3: SID:S-1-5-21-2127695773-1422393826-955202855-634018 allow dir_gen_read,dir_gen_write,dir_gen_execute,std_delete,object_inherit,container_inherit
4: group:AD_DOMAIN_NAME\desktop_corpaccess allow dir_gen_read,dir_gen_execute,object_inherit,container_inherit
5: group:AD_DOMAIN_NAME\desktop_corpaccess_dev allow dir_gen_read,dir_gen_execute,object_inherit,container_inherit
6: group:AD_DOMAIN_NAME\desktop_svaccess allow dir_gen_read,dir_gen_execute,object_inherit,container_inherit
7: group:AD_DOMAIN_NAME\desktop_svaccess_dev allow dir_gen_read,dir_gen_execute,object_inherit,container_inherit
8: group:AD_DOMAIN_NAME\folder-admin-auth allow dir_gen_read,dir_gen_write,dir_gen_execute,std_delete,object_inherit,container_inherit
9: group:AD_DOMAIN_NAME\fileadmin allow inherited dir_gen_all,object_inherit,container_inherit,inherited_ace
10: group:AD_DOMAIN_NAME\ADadmins allow inherited dir_gen_all,object_inherit,container_inherit,inherited_ace
11: group:AD_DOMAIN_NAME\domain admins allow inherited dir_gen_all,object_inherit,container_inherit,inherited_ace
12: group:AD_DOMAIN_NAME\domain users allow inherited dir_gen_read,dir_gen_execute,container_inherit,inherited_ace

To verify the permissions on the files within that folder, run “ls -la”.

ISILON-NODE-1# ls -la
total 6611740
drwxrwx--- + 34 AD_DOMAIN_NAME\account1 AD_DOMAIN_NAME\domain users 1630 May 16 13:26 .
drwxrwx--- + 4 root wheel 51 Feb 19 20:31 ..
drwxrwx--- + 2 AD_DOMAIN_NAME\filetrans AD_DOMAIN_NAME\domain users 245 Apr 26 07:06 File1
drwxrwx--- + 2 AD_DOMAIN_NAME\filetrans AD_DOMAIN_NAME\domain users 253 Apr 25 17:15 File2
drwxrwx--- + 2 AD_DOMAIN_NAME\filetrans AD_DOMAIN_NAME\domain users 249 Apr 25 13:59 File3
drwxrwx--- + 2 AD_DOMAIN_NAME\filetransr AD_DOMAIN_NAME\domain users 253 Apr 25 17:16 File4
...

NetApp SolidFire CLI Command Line Reference

Note that this CLI guide is meant to be a short “cheat sheet” reference to the SolidFire CLI.  For complete details on the options for each command, you’ll want to log in to NetApp’s support site and review their latest CLI guide.

If you want to control your SolidFire array from the CLI you’ll need to install the CLI tools before you get started.  For a Linux client, you must have Python 2.7.9+ installed first, and you’ll need to set up a virtual environment.  More details are of course available in the NetApp documentation, but for a quick start you’ll want to follow the steps below.  I also found the code on GitHub here:  https://github.com/solidfire/solidfire-cli.

pip install virtualenv
virtualenv pythoncli
source pythoncli/bin/activate
pip install solidfire-cli
Go to the directory containing solidfirecli*.tar.gz, and run easy_install solidfirecli*.tar.gz.
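
Once the install completes, a quick sanity check (with the virtual environment from the steps above still active) is to ask the CLI for its inline help:

sfcli --help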

CLI Reference

CLI Option Parameters
Option Description
-c, --connectionindex Index of the connection you want to use
-j, --json Display output in JSON
-k, --pickle Display output in pickled JSON
-d, --depth Display output in tree format with the specified depth
-m, --mvip Management virtual IP address for the cluster
-l, --login Login ID for the cluster
-p, --password Password for the cluster
-n, --name Name of the connection you want to use
-f, --filter_tree Tree format filter; specify fields in a CSV list
--debug [ 0 | 1 | 2 | 3 ] Debug level
--help Display inline help
Managing Connections
Executing a command without storing it
sfcli --mvip 192.168.1.10 --login admin --password admin Account List
Storing a Connection
sfcli --mvip 192.168.1.10 --login admin --password admin --name "Sample" Connection Push
Using a Stored Connection
sfcli -n Sample Account List # by name
List Stored Connections
sfcli Connection List
API Commands
sfcli SFApi invoke <method> <parameters> Invoke any API method for version & port the connection is using.
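
For example, invoking a standard SolidFire API method such as GetClusterInfo over a stored connection might look like the following; how parameters are passed for methods that need them is something to verify against NetApp’s CLI guide:

sfcli -n Sample SFApi invoke GetClusterInfo
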
Account Commands
sfcli Account list <options> Returns the list of accounts.
sfcli Account getefficiency <options> Retrieve efficiency statistics about a volume account.
sfcli Account remove <accountid> Remove an existing account.
sfcli Account getbyname <username> Retrieve details about a specific account.
sfcli Account add <options> Add a new account to the system.
sfcli Account getbyid <accountid> Return details about a specific account, given its account ID.
Administrator Commands
sfcli Clusteradmin getloginbanner <options> Get the current terms of use banner shown on login.
sfcli Clusteradmin setloginbanner <options> Configure the terms of use banner.
Backup Target Commands
sfcli BackupTarget modify  <options> Change attributes of a backup target.
sfcli BackupTarget create <name> <attributes> Create and store backup target info.
sfcli BackupTarget list Retrieve info about all backup targets.
sfcli BackupTarget remove <backuptargetid> Delete backup targets.
sfcli BackupTarget get <backuptargetid> Retrieve info about a specific backup target.
Cluster Commands
sfcli Cluster getinfo Undocumented NetApp engineering command for troubleshooting.
sfcli Cluster getcompletestats Undocumented NetApp engineering command for troubleshooting.
sfcli Cluster getrawstats Undocumented NetApp engineering command for troubleshooting.
sfcli Cluster getapi Return a list of all API methods & supported API endpoints.
sfcli Cluster disablesnmp Disable SNMP on cluster nodes.
sfcli Cluster getsnmpstate Get the current SNMP State.
sfcli Cluster getsnmpinfo Retrieve the current SNMP config.
sfcli Cluster getconfig Return info about the cluster config this node uses.
sfcli Cluster createsupportbundle <options> Create a support bundle.
sfcli Cluster deleteallsupportbundles Delete all support bundles.
sfcli Cluster getsystemstatus Return whether or not a reboot is required.
sfcli Cluster setsnmptrapinfo <options> Enable/Disable SNMP Traps & specify host notifications.
sfcli Cluster listfaults  <bestpractices> <faulttypes> Retrieve info about cluster faults.
sfcli Cluster listadmins Lists cluster administrators.
sfcli Cluster create <options> Initialize the cluster node that owns the SVIP/MVIP.
sfcli Cluster enableencryptionatrest Enables AES 256bit encryption at rest on the cluster.
sfcli Cluster disableencryptionatrest Remove encryption that was previously enabled.
sfcli Cluster addadmin <options> Add a new cluster admin account.
sfcli Cluster setntpinfo <servers> <broadcastclient> Configure NTP on cluster nodes.
sfcli Cluster setconfig <cluster> Node cluster configuration.
sfcli Cluster modifyadmin <options> Change settings for the cluster or LDAP cluster admin.
sfcli Cluster getsnmptrapinfo Retrieve current SNMP Trap configuration.
sfcli Cluster listevents <options> Return a list of the current cluster events.
sfcli Cluster snmpsendtesttraps Test SNMP, send test traps to the current SNMP manager.
sfcli Cluster removeadmin <clusteradminid> Remove a cluster admin.
sfcli Cluster modifyfullthreshold <options> Change event thresholds.
sfcli Cluster getlimits Retrieve API limit values.
sfcli Cluster getcurrentadmin Lists info for the current primary cluster admin.
sfcli Cluster getcapacity Return a high level list of cluster capacity information.
sfcli Cluster getntpinfo Shows the current NTP config.
sfcli Cluster getversioninfo Shows the current software version on each cluster node.
sfcli Cluster setsnmpacl <networks> <usmusers> Configure SNMP on cluster nodes.
sfcli Cluster clearfaults <options> Remove fault info for resolved and unresolved faults.
sfcli Cluster getsnmpacl Return SNMP access permissions.
sfcli Cluster getstate <force> Returns info regarding node’s cluster participation.
sfcli Cluster enablesnmp <snmpv3enabled> Enable SNMP on cluster nodes.
sfcli Cluster getstats Retrieve cumulative high level activity for the cluster.
sfcli Cluster getmasternodeid Retrieve the cluster master node ID.
sfcli Cluster setsnmpinfo <options> Configure SNMP v2/3 on cluster nodes.
sfcli Cluster getfullthreshold View the stages set for cluster fullness levels.
sfcli Cluster listsyncjobs Return info about cluster synchronization jobs.
sfcli Cluster getsslcertificate Retrieve the active SSL certificate.
sfcli Cluster removesslcertificate Delete the user SSL certificate & private key for all cluster nodes.
sfcli Cluster setsslcertificate <options> Set a user SSL certificate & private key for the cluster nodes.
Drive Commands
sfcli Drive reset  <drives> <force> Initialize a drive & remove all data on it.
sfcli Drive secureerase  <drives> Remove residual data from drives in an available state.
sfcli Drive listdrivestats <drives> Retrieve activity info for multiple cluster drives.
sfcli Drive list List of drives in active cluster nodes.
sfcli Drive remove <drives> <forceduringupgrade> Remove cluster drives.
sfcli Drive gethardwareinfo <driveid> Returns all hardware info for a specific drive.
sfcli Drive add <options> Add one or more drives to the cluster.
sfcli Drive getstats <driveid> Returns activity info for a single cluster drive.
sfcli Drive getconfig Display drive info for number of slices & block drive counts.
sfcli Drive test <minutes> Runs a hardware validation on all drives in a node.
sfcli Drive listhardware <force> Returns all of the drives connected to a cluster node.
Hardware Information Commands
sfcli Hardware getnvraminfo <force> Retrieve NVRAM info from each cluster node.
sfcli Hardware gethardwareinfo <nodeid> Retrieve hardware info and status for a single cluster node.
sfcli Hardware getnodeinfo <nodeid> Returns hardware status info for a single cluster node.
sfcli Hardware getclusterinfo  <type> Get hardware status for all FC & iSCSI nodes and drives in the cluster.
sfcli Hardware getconfig Display hardware configuration for a node.
Hardware Sensor Commands
sfcli Sensors getipmiinfo Display a detailed reporting of sensors/objects.
sfcli Sensors getipmiconfig <chassistype> Retrieve hardware sensor info.
Initiator Commands
sfcli Initiators modify <options> Change the attributes of initiators.
sfcli Initiators create <options> Create new IQNs or WWPNs & assign them aliases.
sfcli Initiators list <options> List IQNs and WWPNs.
sfcli Initiators delete <initiators> Delete one or more initiators.
sfcli Initiators removefromvolumeaccessgroup <options> Remove initiators from a specific volume access group.
sfcli Initiators addtovolumeaccessgroup <options> Add initiators to a specific volume access group.
LDAP Commands
sfcli LDAP addclusteradmin <options> Add a new LDAP cluster admin user.
sfcli LDAP getconfiguration Get the currently active LDAP cluster config.
sfcli LDAP testauthentication <options> Validate the currently enabled LDAP authentication settings.
sfcli LDAP disableauthentication Disable LDAP authentication & remove LDAP configuration.
sfcli LDAP enableauthentication <options> Configure an LDAP connection to a cluster.
Logging Session Commands
sfcli LoggingSession getremotelogginghosts Retrieve the current list of log servers.
sfcli LoggingSession setremotelogginghosts <hosts> Configure remote logging to a central log server (or servers).
sfcli LoggingSession setloginsessioninfo <timeout> Set time period for valid login authentication.
sfcli LoggingSession getloginsessioninfo Return the time period that logins are valid.
Network Commands
sfcli Network listnodefibrechannelportinfo Get info on node Fibre Channel ports.
sfcli Network listfibrechannelsessions Get info about active Fibre Channel connections.
sfcli Network listfibrechannelportinfo Get info about Fibre Channel ports on a node.
sfcli Network listiscsisessions Get info about iSCSI for cluster volumes.
sfcli Network listinterfaces Get info about the network interface on a node.
Node Commands
sfcli Node add <pendingnodes> Add one or more new nodes to a cluster.
sfcli Node remove <pendingnodes> Remove nodes from a cluster.
sfcli Node setnetworkconfig <network> Set the network config for a node.
sfcli Node setconfig <config> Set all configuration info for a node.
sfcli Node listpending List all currently pending nodes in the system.
sfcli Node listpendingactive List nodes in the pendingactive state.
sfcli Node listall List all active and pending nodes.
sfcli Node liststats View high level activity info for all cluster nodes.
sfcli Node listactive Lists currently active nodes.
sfcli Node getorigin Retrieve the origination certificate for where the node was built.
sfcli Node getpendingoperation Detect node operations that are currently in progress.
sfcli Node getnetworkconfig Display network config info.
sfcli Node getstats <nodeid> Retrieve activity info for a single node.
sfcli Node getconfig Get all configuration info for a node.
sfcli Node getbootstrapconfig Get cluster and node info from the bootstrap config file.
sfcli Node getsslcertificate Get the SSL certificate that is currently active on the mgmt node.
sfcli Node removesslcertificate Remove mgmt node SSL cert and private key.
sfcli Node setsslcertificate <certificate> <privatekey> Set a user SSL cert & key for the management node.
Pairing Commands
sfcli Pairing startcluster Create encoded key from cluster to pair with another cluster.
sfcli Pairing completecluster <options> Use with above command to complete pairing process.
sfcli Pairing listclusterpairs List all cluster pairs.
sfcli Pairing removeclusterpair <pairingid> Close the open connections between two paired clusters.
sfcli Pairing startvolume <options> Create encoded key to pair two volumes.
sfcli Pairing completevolume <options> Use with above command to complete pairing process.
sfcli Pairing removevolumepair <volumeid> Remove remote pairing between two volumes.
sfcli Pairing listactivepairedvolumes Lists all active volume pairs.
sfcli Pairing modifyvolumepair <options> Pause or restart replication between a pair of volumes.
Restart Commands
sfcli Restart services <force> <service> <action> Restart a service on a node; this will cause a service interruption.
sfcli Restart networking <force> Restart networking services on a node.
sfcli Restart resetnode <build> <force> <options> Reset a node to factory settings, but keeps network settings.
sfcli Restart shutdown <nodes> <options> Restart or shutdown a node that is not part of the cluster.
Schedule Commands
sfcli Schedule list Get info about all scheduled snapshots.
sfcli Schedule create <options> Schedule snapshots of volumes.
sfcli Schedule get <scheduleid> Get info about a scheduled snapshot.
sfcli Schedule modify <options> Change the intervals at which scheduled snapshots happen.
Service Commands
sfcli Service list Return service info for nodes, drives, software, and other services.
Snapshot Commands
sfcli Snapshot listgroup <options> Get info about all group snapshots.
sfcli Snapshot modifygroup <options> Change the attributes of a group of snapshots.
sfcli Snapshot modify <options> Change the attributes currently assigned to a snapshot.
sfcli Snapshot create <options> Create a point in time copy of a volume.
sfcli Snapshot list <options> Return the attributes of each volume snapshot taken.
sfcli Snapshot deletegroup <groupsnapshotid> Delete a group snapshot.
sfcli Snapshot rollbacktogroup <options> Roll back all individual volumes in a snapshot group.
sfcli Snapshot rollbackto <options> Roll back an active volume to an existing snapshot, optionally saving the current image as a new snapshot.
sfcli Snapshot creategroup <options> Create a point in time copy of a group of volumes.
sfcli Snapshot delete <snapshotid> Delete a snapshot.
Storage Container Commands
sfcli StorageContainers modifystoragecontainer <options> Make changes to an existing virtual volume storage container.
sfcli StorageContainers list <storagecontainerids> Get info about all virtual volume storage containers.
sfcli StorageContainers getstoragecontainerefficiency Retrieve efficiency info about a virtual volume storage container.
sfcli StorageContainers createstoragecontainer Create a Vvol storage container.
sfcli StorageContainers delete <options> Remove up to 2000 Vvol storage containers from the system at once.
Test Commands
sfcli Test list Return the available tests you can run.
sfcli Test ping <options> Validate network connections using ICMP packets.
sfcli Test connectmvip <mvip> Test management connection to the cluster.
sfcli Test listutilities Return operations available on a node.
sfcli Test connectensemble <ensemble> Verify connectivity with a specified database ensemble.
sfcli Test connectsvip <svip> Test storage connection to the cluster.
Virtual Network Commands
sfcli VirtualNetwork add <options> Add a new virtual network to a cluster configuration.
sfcli VirtualNetwork list <options> List all configured virtual networks for the cluster.
sfcli VirtualNetwork remove <options> Remove a virtual network.
sfcli VirtualNetwork modify <options> Change attributes of a virtual network.
Virtual Volume Commands
sfcli VirtualVolume modifyhost <options> Change an existing ESX host.
sfcli VirtualVolume gettaskupdate <options> Checks the status of a VVol Async Task.
sfcli VirtualVolume unbindallfromhost Removes all VVol host binding.
sfcli VirtualVolume modifymetadata <virtualvolumeid> Modify Vvol metadata.
sfcli VirtualVolume modifyvasaproviderinfo <options> Update the VASA provider.
sfcli VirtualVolume querymetadata Return a list of Vvols matching a metadata query.
sfcli VirtualVolume listtasks <vvol_task_ids> Return a list of VVol async tasks in the system.
sfcli VirtualVolume listprotocolendpoints <ids> Get info about all protocol endpoints in the cluster.
sfcli VirtualVolume listvolumestatsby <ids> List volume stats for any volumes associated with Vvols.
sfcli VirtualVolume create <options> Create a new Vvol on the cluster.
sfcli VirtualVolume fastclone <options> Execute a VMware VVol fast clone.
sfcli VirtualVolume canceltask <ids> Cancel the Vvol Async task.
sfcli VirtualVolume getallocatedbitmap <options> Returns info regarding segment allocation of a volume.
sfcli VirtualVolume getunsharedbitmap <options> Returns info regarding segment allocation for volumes.
sfcli VirtualVolume listhosts <ids> List of all Vvol hosts known to the cluster.
sfcli VirtualVolume rollback <options> Restore a Vvol snapshot.
sfcli VirtualVolume getunsharedchunks <options> Scans Vvol segment & returns number of chunks not shared by 2 volumes.
sfcli VirtualVolume clone <options> Create a Vvol clone.
sfcli VirtualVolume modify <options> Modify settings on a Vvol.
sfcli VirtualVolume preparevirtualsnapshot <options> Set up Vvol Snapshot.
sfcli VirtualVolume getfeaturestatus <options> Retrieve the status of a cluster feature.
sfcli VirtualVolume unbind <context> Remove Vvol host binding.
sfcli VirtualVolume createhost <options> Create a new ESX host.
sfcli VirtualVolume bind <options> Bind a virtual volume with a host.
sfcli VirtualVolume list <options> List all virtual volumes in the system.
sfcli VirtualVolume getvasaproviderinfo Get VASA provider info.
sfcli VirtualVolume snapshot <options> Take a Vvol snapshot.
sfcli VirtualVolume listbindings <options> List all Vvols in the cluster that are bound to protocol endpoints.
sfcli VirtualVolume getcount Get the number of Vvols in the system.
sfcli VirtualVolume enablefeature <feature> Enable cluster features that are disabled by default.
sfcli VirtualVolume delete <options> Marks an active volume for deletion.
Volume Commands
sfcli Volume getefficiency <volume_id> Get capacity info about a volume.
sfcli Volume liststats <volume_id> Return info for a volume (or volumes), cumulative from volume creation.
sfcli Volume removefromaccessgroup <options> Remove volumes from a volume access group.
sfcli Volume addtoaccessgroup <options> Add volumes to a specific volume access group.
sfcli Volume liststatsbyaccount <options> Return high level measurements for all accounts.
sfcli Volume startbulkwrite <options> Initialize a bulk volume write session on a specific volume.
sfcli Volume updatebulkstatus <options> Update status of a bulk volume job.
sfcli Volume startbulkread <options> Initialize a bulk volume read session on a specific volume.
sfcli Volume listdeleted <options> Retrieve the list of volumes marked for deletion.
sfcli Volume purgedeleted <volume_id> Immediately purges a volume that has been deleted.
sfcli Volume liststatsby <options> Returns high level activity info for every volume, by volume.
sfcli Volume create <options> Creates a new empty volume on the cluster.
sfcli Volume cancelclone <clone_id> Stops an ongoing volume clone or copy.
sfcli Volume getdefaultqos Retrieve the default QoS values for a new volume.
sfcli Volume getasyncresult <options> Get the result of asynchronous method calls.
sfcli Volume listasyncresults <options> Lists the results of all currently running & completed async methods.
sfcli Volume liststatsbyaccessgroup <options> Get total activity measurements for all volumes of the volume access group.
sfcli Volume listbulkjobs Retrieve info about each bulk volume read or write operation.
sfcli Volume clone <options> Create a copy of a volume.
sfcli Volume modify <options> Modify settings on an existing volume.
sfcli Volume restoredeleted <volume_id> Marks a deleted volume as active again.
sfcli Volume copy <options> Overwrite contents of existing volume with contents of another volume.
sfcli Volume listactive <options> Return a list of active volumes in the system.
sfcli Volume list <options> Get a list of volumes in the cluster.
sfcli Volume clonemultiple <options> Create a clone of a group of specified volumes.
sfcli Volume setdefaultqos <options> Configure default QoS values for a volume.
sfcli Volume getstats <options> Retrieve high level activity information for a single volume.
sfcli Volume listforaccount <options> List active and pending deleted volumes for an account.
sfcli Volume getcount Get the number of volumes in the system.
sfcli Volume cancelgroupclone <group_clone_id> Stop an ongoing process for cloning multiple volumes.
sfcli Volume delete <volume_id> Marks an active volume for deletion.
sfcli Volume createqospolicy <name> <qos> Create a QoS policy object that you can later apply to a volume.
sfcli Volume deleteqospolicy <qospolicyid> Delete a QoS policy.
sfcli Volume getqospolicy <qospolicyid> Get details about a specific QoS policy.
sfcli Volume listqospolicies List the settings of all QoS policies.
sfcli Volume modifyqospolicy <options> Modify an existing QoS policy.
Volume Access Group Commands
sfcli VolumeAccessGroup create <options> Create a new volume access group.
sfcli VolumeAccessGroup modify <options> Update Initiators/Remove Volumes from a volume access group.
sfcli VolumeAccessGroup modifylunassignments <options> Define custom LUN assignments for specific volumes.
sfcli VolumeAccessGroup list <options> Return information about the volume access groups in the system.
sfcli VolumeAccessGroup getlunassignments <options> Retrieve details on LUN mappings of a specific volume access group.
sfcli VolumeAccessGroup getefficiency <options> Retrieve efficiency info about a volume access group.
sfcli VolumeAccessGroup delete <id> Delete a volume access group.
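
To tie a few of these together, a hypothetical first session might store a connection and then pull some basic cluster and volume information (the IP, credentials, and account ID are placeholders):

sfcli --mvip 192.168.1.10 --login admin --password admin --name "Sample" Connection Push
sfcli -n Sample Cluster getcapacity
sfcli -n Sample Volume list
sfcli -n Sample Account getbyid 1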

 

Isilon Port Usage

Below is a table of Isilon port usage and the OneFS services that use them.   Additional detail is available in the Isilon Security Configuration guide on Dell EMC’s support site.

Affected Services Port Service Protocol Connection Type
FTP 20 ftp-data TCP, IPv4, IPv6 External, Outbound
FTP 21 ftp TCP, IPv4, IPv6 External, Inbound
SSH 22 ssh TCP, IPv4, IPv6 External, Inbound
Telnet 23 telnet TCP External, Inbound
SMTP 25 smtp TCP, IPv4 External, Outbound
SmartConnect 53 domain TCP, UDP, IPv4 External, Outbound
SmartConnect 53 domain UDP, IPv4 External, Inbound
HTTP 80 http TCP, IPv4, IPv6 External, Inbound
Kerberos 88 kerberos TCP, UDP, IPv4, IPv6 External, Outbound
Portmapper 111 sunrpc TCP, UDP, IPv4, IPv6 External, Inbound
Time Service 123 ntp UDP, IPv4, IPv6 External, Inbound
Netbios 137 netbios-ns IPv4 External, Inbound
Netbios 138 netbios-dgm IPv4 External, Inbound
Netbios 139 netbios-ssn TCP, IPv4 External, Inbound
SNMP 161 snmp UDP, IPv4 External, Inbound
SNMP Traps 162 snmptrap UDP, IPv4 External, Inbound
NFS 2049 nfsd TCP, UDP, IPv4, IPv6 External, Inbound
NFSv3 Mount 300 nfsmountd TCP, UDP, IPv4, IPv6 External, Inbound
NFSv3 Notifications 302 nfsstatd TCP, UDP, IPv4, IPv6 External, Inbound
NFSv3 Locking 304 nfslockd TCP, UDP, IPv4, IPv6 External, Inbound
DNS Caching 307 isi-cbind_d UDP, IPv4 External, Inbound
LDAP 389 ldap TCP, IPv4, IPv6 External, Outbound
LDAP 636 ldap TCP, IPv4, IPv6 External, Outbound
HTTPS 443 https TCP, IPv4, IPv6 External, Inbound
SMB1/2 Services 445 microsoft-ds TCP, IPv4 External, Outbound
Syslog 514 syslog TCP, IPv4 Internal, Inbound
MSDP 639 msdp UDP, IPv4 Internal
Entrust SPS 640 entrust-sps UDP, IPv4 Internal
Secure FTPS 989 ftps-data TCP, IPv4, IPv6 External, Outbound
Secure FTPS 990 ftps TCP, IPv4, IPv6 External, Inbound
SyncIQ 2098 isi_repl_pworker TCP, IPv4, IPv6 External, Inbound
SyncIQ 3148 isi_repl_bandwidth TCP, IPv4, IPv6 External, Inbound
SyncIQ 3149 isi_repl_bandwidth TCP, IPv4, IPv6 External, Inbound
SyncIQ 5667 isi_migr_sworker TCP, IPv4, IPv6 External, Inbound
iSCSI 3260 iscsi-target TCP, IPv4, IPv6 External, Inbound
MS AD Global Catalog 3268 n/a TCP, IPv4 External, Outbound
ISI Stats 6116 isi_stats_d External, Inbound
ISI Stats 7117 isi_stats_d External, Inbound
HDFS (Hadoop) 8020 hdfs TCP External, Inbound
HDFS (Hadoop) 8021 hdfs TCP, IPv4, IPv6 External, Inbound
Isilon WebGUI (https) 8080 n/a TCP, IPv4, IPv6 External, Inbound
REST API (https) 8080 n/a TCP, IPv4, IPv6 External, Inbound
VASA (vCenter) 8081 vasa TCP External, Inbound
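
When troubleshooting access to one of these services, a simple reachability check from a client pairs well with the table above; for example, testing the WebGUI/REST API port (the cluster hostname is a placeholder):

nc -zv isilon-cluster.example.com 8080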

HP 3PAR CLI Reference Guide

This is a CLI command line reference guide for HP’s 3PAR OS, based on version 3.3.1. It is a complete list of commands and a description of their function, designed to be a quick reference of the commands available for use. I’d recommend reviewing HP’s official support documentation for specific information on command line arguments and advanced functions of each command.

Adaptive Flash Cache
createflashcache Creates flash cache for the cluster.
removeflashcache Remove flash cache from the cluster.
setflashcache Sets the flash cache policy for virtual volumes.
showflashcache Shows either the flash cache status per node or the flash cache policy for virtual volumes.
statcache Show the flash cache and data cache statistics in a timed loop.
Adaptive Optimization (AO)
createaocfg Creates an AO configuration.
removeaocfg Removes specified AO configurations from the system.
setaocfg Updates an AO configuration.
showaocfg Shows AO configurations.
Certificate
createcert Create a self-signed SSL certificate or a certificate signing request (CSR) for the 3PAR Storage System SSL services.
showcert Show information about SSL certificates of the 3PAR Storage System.
importcert Imports certificates for a given service.
removecert Removes certificates that are no longer trusted.
CIM Server
setcim Sets the properties of the CIM server, including options to enable or disable the SLP, HTTP, and HTTPS ports for the CIM server.
showcim Displays the CIM server setting information and status.
startcim Starts the CIM server to service CIM requests.
stopcim Stops the CIM server from servicing CIM requests.
Disk Enclosure Management Commands
Drive Cage Management
locatecage Locates a particular drive cage.
setcage Sets parameters for a drive cage.
showcage Displays drive cage information.
Encryption
controlencryption Controls data encryption.
showencryption Shows data encryption.
Physical Disk Management
admitpd Admits one or all physical disks to enable their use.
checkpd Executes surface scans on physical disks.
controlpd Spins physical disks up or down.
dismisspd Dismisses one or more physical disks from use.
movepd Moves data from specified physical disks to a temporary location selected by the system.
setpd Marks physical disks as allocatable for logical disks.
showpd Displays physical disks in the system.
Domain Management
changedomain Changes the currentdomain CLI environment parameter.
createdomain Creates a domain on the system.
createdomainset Defines a new set of domains and provides the option of assigning one or more domains to that set.
movetodomain Moves objects from one domain to another.
removedomain Removes an existing domain from the system.
removedomainset Removes a domain set or removes domains from an existing set.
setdomain Sets the parameters and modifies the properties of a domain.
setdomainset Sets the parameters and modifies the properties of a domain set.
showdomain Displays the list of domains on a system.
showdomainset Displays the domain sets defined on the 3PAR Storage System and their members.
Dual Sign-On Request
removecorequest Removes rejected, cancelled, or executed dual sign-on requests.
setcorequest Approve or deny a pending dual sign-on command approval request, modify the queue size, execute an approved request, or cancel a pending request.
showcorequest Displays status of CO approval requests or queue size.
File Access Audit Settings
setfsaudit Set File Access Audit settings.
showfsaudit Display File Access Audit settings.
File Persona
setfs Update global File Persona settings.
showfs Show information about a File Persona cluster.
startfs Initialize and start File Persona on the system.
statfs Show statistics for File Persona.
stopfs Stop or remove File Persona.
File Persona Antivirus
setfsav Set antivirus properties for File Persona.
showfsav Show antivirus properties for File Persona.
startfsav Start antivirus service or scan for File Persona.
stopfsav Stop the antivirus service or stop/pause a scan.
File Persona Archiving
removefsarchive Deletes the WORM/WORM-retained file(s), removes the retention period, or deletes the retention validation scan on the fstore.
setfsarchive Set or modify archiving properties for File Persona.
showfsarchive Displays policy setting, retention setting, and status of validation scans.
startfsarchive Starts or resumes validation jobs.
stopfsarchive Stops or pauses data validation jobs.
File Persona Group Accounts
createfsgroup Create a local group account associated with File Persona.
showfsgroup Show local group information associated with File Persona.
removefsgroup Remove a local group account associated with File Persona.
setfsgroup Modify a local group account associated with File Persona.
File Persona NDMP
setfsndmp Set NDMP properties for File Persona.
showfsndmp Show NDMP properties for File Persona.
startfsndmp Start NDMP and iSCSI service.
stopfsndmp Stop NDMP and iSCSI service.
File Persona Network Commands
createfsnetwork Configures multiple networks by creating a new network.
removefsnetwork Removes the given network.
setfsnetwork Modifies parameters for multiple networks.
showfsnetwork Displays the configuration of multiple networks.
File Persona Routes
createfsroute Create a route for a target address with a gateway.
removefsroute Removes an existing route for a target address.
setfsroute Modifies an existing route for a target address.
showfsroute Displays routes for target addresses.
File Persona Snapshots
createfsnap Create a snapshot for File Persona.
removefsnap Remove file store snapshots from File Persona.
showfsnap Show snapshot information for File Persona.
showfsnapclean Show details of an on-demand snapshot reclamation task.
startfsnapclean Start or resume an on-demand snapshot reclamation task.
stopfsnapclean Stop or pause an on-demand snapshot reclamation task.
File Persona User Accounts
createfsuser Create a local user account associated with File Persona.
removefsuser Remove a local user account associated with File Persona.
setfsuser Modify a local user account associated with File Persona.
showfsuser Show local user information associated with File Persona.
File Provisioning Group
createfpg Create a file provisioning group.
growfpg Grow a file provisioning group.
removefpg Remove a file provisioning group.
setfpg Modify the properties of a file provisioning group.
showfpg Show file provisioning group information.
File Share
createfshare Create a file share.
removefshare Remove a file share from a File Persona cluster.
setfshare Modify the properties of a file share.
showfshare Show file share information.
File Store
createfstore Create a file store.
removefstore Remove a file store.
setfstore Modify the properties of a file store.
showfstore Show file store information.
Health and Alert Management
removealert Removes one or more alerts.
setalert Sets the status of system alerts.
showalert Displays system alerts.
showeventlog Displays event logs.
checkhealth Displays the status of the system hardware and software components.
Help and Utility Commands
cli Provides a means to set up your CLI session or to enter directly into a CLI shell.
clihelp Lists all commands or details for a specified command.
cmore Pages the output of commands.
help Lists all commands or details for a specified command.
setclienv Sets the CLI environment parameters.
showclienv Displays the CLI environment parameters.
LDAP Management
setauthparam Sets the authentication parameters.
showauthparam Shows authentication parameters and integrates the authentication and authorization features using LDAP.
checkpassword Supports authentication and authorization using LDAP.
Licensing Management
setlicense Sets the license key.
showlicense Displays the installed license info or key.
Node Subsystem Management
Firmware Versions
showfirmwaredb Displays a current database of firmware levels.
Node Date Info
setdate Sets the system time and date on all nodes.
showdate Displays the date and time on all system nodes.
Controller Node Properties
setnode Sets the properties of the node components, such as the serial number of the power supply.
shownode Displays an overview of the node-specific properties.
shownodeenv Displays the node’s environmental status.
Controller Node EEPROM Log
showeeprom Displays node EEPROM information.
Array and Controller Node Info
locatenode Locates a particular node component by blinking LEDs on the node.
locatesys Locates a system by blinking its LEDs.
setsys Enables you to set system-wide parameters such as the raw space alert.
showsys Displays the 3PAR Storage System properties, including system name, model, serial number, and system capacity.
Network Interface Config
setnet Sets the administration network interface configuration.
shownet Displays the network configuration and status.
Port Information
checkport Performs a loopback test on Fibre Channel ports.
controlport Controls Fibre Channel or Remote Copy ports.
controliscsiport Used to set up the parameters and characteristics of an iSCSI port.
showiscsisession Shows the iSCSI active sessions per port.
showport Displays system port information.
showportarp Shows the ARP table for iSCSI ports in the system.
showportdev Displays detailed information about devices on a Fibre Channel port.
showportisns Show iSNS host information for iSCSI ports in the system.
showportlesb Displays Link Error Status Block information about devices on a Fibre Channel port.
showtarget Displays unrecognized targets.
statiscsi Displays the iSCSI statistics.
statiscsisession Displays the iSCSI session statistics.
Battery Management
setbattery Sets battery properties.
showbattery Displays battery status information.
System Manager
setsysmgr Sets the system manager startup state.
showsysmgr Displays the system manager startup state.
showtoc Displays the system table of contents summary.
showtocgen Displays the system table of contents generation number.
Node Rescue
startnoderescue Initiates a node rescue, which initializes the internal node disk of the specified node to match the contents of the other node disks.
Performance Management Commands
Chunklet Stats
histch Displays histogram data for individual chunklets.
setstatch Sets statistics collection mode on chunklets.
setstatpdch Sets statistics collection mode on physical disk chunklets.
statch Displays statistics for individual chunklets.
Data Cache
statcmp Displays statistics for cache memory pages.
Node CPU Stats
statcpu Displays statistics for CPU use.
Logical Disk Stats
histld Displays histogram data for logical disks.
statld Displays statistics for logical disks.
Link Stats
statlink Displays statistics for links.
Physical Disk Stats
histpd Displays histogram data for physical disks.
statpd Displays statistics for physical disks.
Port Stats
histport Displays histogram data for Fibre Channel ports.
statport Displays statistics for Fibre Channel ports.
System Tuner
tunepd Displays physical disks with high service times and optionally performs load balancing.
tunesys Analyzes disk usage and adjusts resources.
tunevv Changes the layout of a virtual volume.
Virtual LUN Stats
histvlun Displays histogram data for VLUNs.
statvlun Displays statistics for VLUNs.
Virtual Volume Stats
histvv Displays histogram data for virtual volumes.
statvv Displays statistics for virtual volumes.
Remote Copy Volume Stats
histrcvv Displays histogram data for Remote Copy volumes.
statrcvv Displays statistics for Remote Copy volumes.
Preserved Data
showpdata Displays preserved data status.
Replication Commands
Physical Copy
creategroupvvcopy Creates consistent group physical copies of a list of virtual volumes.
createvvcopy Copies a virtual volume.
promotevvcopy Promotes a physical copy back to a base volume.
Remote Copy
admitrcopylink Admits a network link for Remote Copy use.
admitrcopytarget Adds a target to a Remote Copy volume group.
admitrcopyvv Admits a virtual volume to a Remote Copy volume group.
checkrclink Performs a latency and throughput test on a remote copy link.
creatercopygroup Creates a group for Remote Copy.
creatercopytarget Creates a target for Remote Copy.
dismissrcopylink Dismisses a network link from Remote Copy use.
dismissrcopytarget Dismisses a Remote Copy target from a Remote Copy volume group.
dismissrcopyvv Dismisses a virtual volume from a Remote Copy volume group.
removercopygroup Removes a group used for Remote Copy.
removercopytarget Removes a target used for Remote Copy.
setrcopygroup Sets a volume group’s policy for dealing with I/O failure and error handling, or switches the direction of a volume group.
setrcopytarget Sets the Remote Copy target state.
showrcopy Displays the details of a Remote Copy configuration.
showrctransport Shows status and info about end-to-end transport for Remote Copy in the system.
startrcopy Starts a Remote Copy subsystem.
startrcopygroup Starts a Remote Copy volume group.
statrcopy Displays Remote Copy statistics.
stoprcopy Stops a Remote Copy subsystem.
stoprcopygroup Stops a Remote Copy volume group.
syncrcopy Synchronizes Remote Copy volume groups.
Virtual Copy
createsv Creates snapshot volumes.
creategroupsv Creates consistent group snapshots of a list of virtual volumes.
promotesv Copies the differences of a virtual copy back to its base volume.
promotegroupsv Copies the differences of snapshots back to their base volumes, allowing the base volumes to be reverted to an earlier point in time.
updatevv Updates a snapshot virtual volume with a new snapshot.
Security Hardening Commands
SP Credential
removespcredential Removes all Service Processor credentials on the array.
Support Recovery Account Password
controlrecoveryauth Controls the method used to authenticate recovery accounts.
Service Commands
Disk Enclosure
admithw Admits new hardware into the system.
controlmag Takes drives or magazines on or off loop.
servicecage Prepares a drive cage for service.
servicehost Prepares a port for host attachment.
servicemag Prepares a drive magazine for service.
upgradecage Upgrades drive cage firmware.
upgradepd Upgrades disk firmware.
General/Node
servicenode Prepares a node for service.
shutdownnode Shuts down an individual system node.
shutdownsys Shuts down the entire system.
QoS
setqos Creates and updates QoS rules in a system.
showqos Lists the QoS rules configured in a system.
statqos Displays historical performance data reports for QoS rules.
Software Version
showpatch Displays patches applied to a system.
showversion Displays software versions.
SNMP Agent Commands
addsnmpmgr Adds an SNMP manager to receive trap notifications.
checksnmp Allows a user to send an SNMPv2 test trap to the list of managers.
removesnmpmgr Removes an SNMP trap manager.
removesnmppw Removes an SNMP password.
removesnmpuser Removes an SNMP user.
setsnmppw Allows users to update SNMP passwords.
setsnmpmgr Changes an SNMP manager’s properties.
showsnmpmgr Displays SNMP trap managers.
showsnmppw Displays SNMP access passwords.
showsnmpuser Displays information about SNMP users.
Sparing Commands
createspare Creates spare chunklets.
movech Moves specified chunklets.
movechtospare Moves specified chunklets to spare.
movepd Moves data from specified physical disks to a temporary location selected by the system.
movepdtospare Moves specified physical disks to spare.
moverelocpd Moves chunklets relocated from a physical disk to another physical disk.
removespare Removes spare chunklets.
showspare Displays information about spare and relocated chunklets.
SSH Access
setsshkey Sets the SSH public key for users, enabling login without a password.
showsshkey Displays all SSH public keys that have been set with setsshkey.
removesshkey Removes a user’s SSH public key.
SSH Banner
removesshbanner Removes the SSH banner.
setsshbanner Sets the SSH banner that is displayed before the user logs in.
showsshbanner Displays the SSH banner that has been set with setsshbanner.
System Reporter
controlsr Makes changes to the System Reporter.
createsralertcrit Creates the criteria that System Reporter evaluates to determine if a performance alert should be generated.
removesralertcrit Removes the criteria that System Reporter evaluates to determine if a performance alert should be generated.
setsralertcrit Sets the criteria that System Reporter evaluates to determine if a performance alert should be generated.
showsr Displays System Reporter status.
showsralertcrit Displays the criteria that System Reporter evaluates to determine if a performance alert should be generated.
sraomoves Shows the space that Adaptive Optimization (AO) has moved between tiers.
srcpgspace Displays historical space data reports for common provisioning groups (CPGs).
srhistld Displays historical histogram performance data reports for logical disks.
srhistpd Displays historical histogram data reports for physical disks.
srhistport Displays historical histogram performance data reports for ports.
srhistvlun Displays historical histogram performance data reports for VLUNs.
srldspace Displays historical space data reports for logical disks (LDs).
srpdspace Displays historical space data reports for physical disks (PDs).
srrgiodensity Shows the distribution of IOP/s intensity for Logical Disk (LD) regions for a common provisioning group (CPG) or Adaptive Optimization (AO) configuration.
srstatcmp Displays historical performance data reports for cache memory.
srstatcpu Displays historical performance data reports for CPUs.
srstatfsav Displays System Reporter performance reports for file service antivirus.
srstatfsblock Displays System Reporter performance reports for file service block devices.
srstatfscpu Displays System Reporter performance reports for file service CPU usage.
srstatfsfpg Displays System Reporter performance reports for file service file provisioning groups.
srstatfsmem Displays System Reporter performance reports for file service memory usage.
srstatfsnet Displays System Reporter performance reports for file service Ethernet interfaces.
srstatfsnfs Displays System Reporter performance reports for file service NFS.
srstatfssmb Displays System Reporter performance reports for file service SMB.
srstatfssnapshot Displays System Reporter performance reports for file service snapshots.
srstatiscsi Displays System Reporter performance reports for iSCSI ports.
srstatiscsisession Displays System Reporter performance reports for iSCSI sessions.
srstatld Displays historical performance data reports for logical disks.
srstatlink Displays historical performance data reports for links (internode, PCI, and cache memory).
srstatpd Displays historical performance data reports for physical disks.
srstatport Displays historical performance data reports for ports.
srstatqos Displays historical performance data reports for QoS rules.
srstatvlun Displays historical performance data reports for VLUNs.
srstatvv Displays System Reporter performance reports for virtual volumes.
srsysspace Displays System Reporter space reports for the system.
srvvspace Displays historical space data reports for virtual volumes (VVs).
Task Management
canceltask Cancels one or more tasks.
removetask Removes information about one or more tasks and their details.
settask Sets the priority on a specified task.
showtask Displays information about tasks.
starttask Executes commands with long running times.
waittask Asks the CLI to wait for a task to complete before proceeding.
User Management
createsched Allows users to schedule tasks that are periodically run by the scheduler.
removesched Removes a scheduled task from the system.
setsched Allows users to suspend, pause, change the schedule, change the parameters, and change the name of currently scheduled tasks.
showsched Displays the state of tasks currently scheduled on the system.
createuser Creates user accounts.
removeuser Removes user accounts.
removeuserconn Removes user connections.
setpassword Changes your password.
setuser Sets your user properties.
setuseracl Sets your Access Control List (ACL).
showuser Displays user accounts.
showuseracl Displays your access control list (ACL).
showuserconn Displays user connections.
showrole Displays information about rights assigned to roles in the system.
VASA Provider
setvasa Sets the VASA Provider server properties.
showvasa Shows properties of the VASA web service provider.
startvasa Starts the VASA Provider server to service HTTPS requests.
stopvasa Stops the VASA Provider server from servicing HTTPS requests.
Virtual File Server
createvfs Create a virtual file server.
removevfs Remove a virtual file server.
setvfs Modify the properties of a virtual file server.
showvfs Show virtual file server information.
Virtual File Server Network
createfsip Assigns an IP address to a virtual file server.
removefsip Removes the network configuration of a virtual file server.
setfsip Modifies the network configuration of a virtual file server.
showfsip Shows the network configuration of a virtual file server.
Virtual File Server Quota
setfsquota Set the quotas for a specific virtual file server.
showfsquota Show the quotas for a specific virtual file server.
Virtual File Server Backup Configuration
backupfsconf Create a configuration backup for a virtual file server.
restorefsconf Restore a configuration backup for a virtual file server on the same or a different system with the same fpg and vfs structure.
Volume Management
Common Provisioning Group Management
compactcpg Consolidates logical disk space in a CPG into as few logical disks as possible, allowing unused logical disks to be removed.
createcpg Creates a Common Provisioning Group (CPG).
removecpg Removes CPGs.
setcpg Changes the properties of CPGs.
showcpg Displays CPGs.
Host Management
createhost Creates host and host path definitions.
createhostset Creates a new set of hosts and provides the option of assigning one or more existing hosts to that set.
removehost Removes host definitions from the system.
removehostset Removes a host set or removes hosts from an existing set.
showhost Displays defined hosts in the system.
showhostset Displays the host sets defined on the 3PAR Storage System and their members.
sethost Sets properties on existing system hosts, including options to annotate a host with descriptor information such as physical location, IP address, operating system, model, and so on.
sethostset Sets the parameters and modifies the properties of a host set.
Logical Disk Management
checkld Performs validity checks of data on logical disks.
compactld Consolidates space on the logical disks.
removeld Removes logical disks.
showld Displays logical disks.
startld Starts logical disks.
Space and Storage Management
showblock Displays block mapping information for virtual volumes, logical disks, and physical disks.
showldch Displays logical disk to physical disk chunklet mapping.
showldmap Displays logical disk to virtual volume mapping.
showpdch Displays the status of selected chunklets of physical disks.
showpdvv Displays physical disk to virtual volume mapping.
showspace Displays estimated free space.
showvvmap Displays virtual volume to logical disk mapping.
showvvolvm Displays information about all virtual machines (VVol-based) or a specific virtual machine in a system.
showvvpd Displays virtual volume distribution across physical disks.
Template Management
createtemplate Creates templates for the creation of logical disks, virtual volumes, thinly provisioned virtual volumes, and common provisioning groups.
removetemplate Removes one or more templates.
settemplate Modifies template properties.
showtemplate Displays existing templates.
Virtual Volume Management
admitvv Creates and admits remotely exported virtual volume definitions to enable the migration of these volumes.
checkvv Performs validity checks of virtual volume administrative information.
createvv Creates a virtual volume from logical disks.
createvvset Defines a new set of virtual volumes and provides the option of assigning one or more existing virtual volumes to that set.
freespace Frees SA and SD spaces from a virtual volume if they are not in use.
growvv Increases the size of a virtual volume by adding logical disks.
importvv Migrates data from a remote LUN to the local 3PAR Storage System.
removevv Removes virtual volumes or logical disks from common provisioning groups.
removevvset Removes a virtual volume set or virtual volumes from an existing set.
setvv Modifies properties associated with a virtual volume.
setvvolsc Creates, removes, and sets properties of Storage Containers for virtual volumes.
setvvset Sets the parameters and modifies the properties of a virtual volume set.
showrsv Displays information about reservation and registration of VLUNs connected on a Fibre Channel port.
showvv Displays virtual volumes in the system.
showvvolsc Displays information about VVol storage containers in the system.
showvvcpg Displays the CPGs associated with virtual volumes.
showvvset Displays the virtual volume sets defined on the 3PAR Storage System and their members.
startvv Starts virtual volumes.
updatesnapspace Starts a task to update the actual snapshot space used by a virtual volume.
Virtual LUN Management
createvlun Creates a virtual volume as a SCSI LUN.
removevlun Removes VLUNs.
showvlun Displays VLUNs in the system.
Web Services API (WSAPI)
removewsapisession Removes the WSAPI user connections.
setwsapi Sets properties of the Web Services API server.
showwsapi Displays the WSAPI server service configuration state.
showwsapisession Displays the WSAPI server sessions connection information.
startwsapi Starts the WSAPI server.
stopwsapi Stops the WSAPI server.
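
As a rough illustration of how several of these commands chain together, a hypothetical provisioning sequence might create a CPG, carve a thin volume from it, and export it to a host. The names, RAID level, and size here are placeholders, so verify the exact option syntax in HP’s CLI guide:

createcpg -t r6 FC_r6_CPG
createvv -tpvv FC_r6_CPG vol01 100g
createvlun vol01 1 host01
showvlun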

Configuring a Brocade Switch for Access Gateway (AG) Mode

What is Access Gateway Mode?

Access Gateway mode is useful when you need to add more ports to your fabric without the additional complexity of more zoning configurations or additional domains.  It allows ports that would normally operate as F_Ports to be configured as N_Ports, so the switch connects to the fabric like a host rather than as another domain.

Other Useful Brocade related posts

FOS CLI Reference Guide
Automating Config Zone Backups
Scripting Alias and Zone Creation
Switch Type Matrix
Disabling Telnet

Verify NPIV is enabled on the Upstream Switch

We first need to ensure that the upstream switch that the access gateway (AG) switch will connect to has NPIV enabled.  Log in to the upstream switch to verify.

  1. Verify NPIV is enabled by running ‘portcfgshow’.
  2. If it is not enabled, enable it by running ‘portcfgnpivport’.
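
As a sketch, with the upstream port number as a placeholder, that check and fix might look like the following (mode 1 enables NPIV on the port):

portcfgshow
portcfgnpivport 10, 1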

Steps to Configure and Enable AG Mode

Below are the steps for placing a switch in Access Gateway mode.  Note that all zoning information on the switch you’re enabling it on will be lost.  In addition, it’s important to note that Access Gateway mode changes other standard behaviors of the switch as well, so I encourage you to review the Brocade Access Gateway Administrator’s Guide if you have any doubts. In addition to zoning, the following services are also not available in AG mode:  FCAL, Fabric Manager, FICON, IP over FC, ISL Trunking, Extended Fabrics, Management Platform services, Name services (SNS), Port Mirroring, and SMI-S.

  1. Backing up your current configuration is important, and should be done first. I’ve automated this in my environment, you can view my post on automating configuration and zone backups here.  The basic command for backing up your configuration manually is below.
configupload -ftp $FTPHOST, $FTPUSER, $FTPPATH, $FTPPASSWORD
  1. Next you should verify that the switch is in native mode. This can be verified by running ‘switchshow’ and checking the mode, it should be set to 0 (zero).  To change it to zero, use the ‘interopmode’ command.
interopmode 0
  1. Next we disable the switch, run the ‘switchdisable’ command for this step.
switchdisable
  4. Next we enable Access Gateway mode on the switch with the ‘ag --modeenable’ command. Enabling AG mode will remove all of the configuration data on the switch, including your zoning configuration and security database, so make sure you back up your configuration using configupload before performing this step.  After running the command, you will be prompted to reboot the switch.
ag --modeenable

Verify AG Mode is enabled

  1. After the switch has rebooted, log in and verify that access gateway mode is enabled. This is done with the “modeshow” switch on the ag command.
ag --modeshow

Access Gateway mode is enabled
  2. In order to view how the automatic port mapping has been configured on the switch, use the “ag --mapshow” command.
ag --mapshow

N_Port|Config_F_Ports|Static_F_Prt|Current_F_Prt|Failovr|Failbck|PGID|PG_Name
------------------------------------------------------------------------------
0   13;14    None           None             1       1         0   pg0
1   1;2      None           None             1       1         0   pg0
2   9;10     None           None             1       1         0   pg0
3   7;8      None           None             1       1         0   pg0
4   11;12    None           None             1       1         0   pg0
5   5;6      None           None             1       1         0   pg0
6   15;16    None           None             1       1         0   pg0
7   3;4      None           None             1       1         0   pg0

Modifying AG Port Mappings

It is possible to change the port mappings after the initial configuration if modifications are necessary.  Below are the steps to do so.

  1. A port’s existing mapping must be removed before it can be modified. Delete the configuration with the “ag --mapdel” command, as shown below.
ag --mapdel n_portnumber "fport1;fport2"

ag --mapdel 0 "13;14"

F_Port to N_Port mapping has been updated successfully
  2. Now that the original mapping has been removed, the new port mapping can be created with the “ag --mapadd” command.
ag --mapadd n_portnumber fport1;fport2

ag --mapadd 13 "1;2;5;6"

Sample Output:

WARNING: Mapping F_Port(s) to this N_Port may cause the F_Port(s) to be disabled

F_Port to N_Port mapping has been updated successfully


EMC ViPR Controller CLI Command Reference

Other CLI Reference Guides:
Isilon CLI  |  EMC ECS CLI  |  VNX NAS CLI  | NetApp Clustered ONTAP CLI  |  Data Domain CLI  |  Brocade FOS CLI | EMC XTremIO CLI

This is a comprehensive CLI command line reference guide for EMC’s ViPR Controller.  The commands in this list are based on Version 3.6.x.  This reference is designed to make it easy to quickly look up a ViPR Controller CLI command and review a description of its use.  For additional detail on the syntax options for each command, I recommend you download the latest CLI guide from EMC’s support site.
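
Before running any of the commands below you must authenticate to the controller.  A minimal session might look like the sketch below (the user name, cookie directory, and project name are placeholders, and exact option spellings can vary by release, so verify with viprcli -h):

viprcli authenticate -u root -d /tmp
viprcli volume list -pr ProjectName
viprcli logout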

viprcli approval commands
viprcli approval approve Approve a request for a service catalog order.
viprcli approval list List pending approval requests.
viprcli approval reject Reject an approval request.
viprcli approval show Show approval request details for the specified Service Catalog order.
viprcli assetoptions commands
viprcli assetoptions list List the URN information for all ViPR Controllers.
viprcli authenticate commands
viprcli authenticate Authenticate with the ViPR CLI.
viprcli authentication commands
viprcli authentication add-provider Adds an authentication provider to ViPR Controller.
viprcli authentication list-providers Show a list of authentication providers.
viprcli authentication show-provider Show the extended information for an authentication provider.
viprcli authentication update Update an authentication provider.
viprcli authentication delete-provider Delete the authentication provider.
viprcli authentication add-vdc-role Add a user role to the vdc group.
viprcli authentication list-vdc-role Show the list of roles authenticated to the VDC.
viprcli authentication delete-role Delete a user role from the VDC group.
viprcli authentication add-user-group Add a user group.
viprcli authentication user-group-add-attribute Add an attribute to a user group.
viprcli authentication user-group-add-values Add values to an attribute of a user group.
viprcli authentication user-group-remove-attribute Remove an attribute from a user group.
viprcli authentication user-group-remove-values Remove values from an attribute of a user group.
viprcli authentication show-user-group Show the details of a user group.
viprcli authentication delete-user-group Deletes a user group.
viprcli authentication list-user-groups List all user groups.
viprcli bucket commands
viprcli bucket acl Adds, updates, or deletes the ACL rules for a bucket.
viprcli bucket create Creates a bucket.
viprcli bucket delete Deletes a bucket.
viprcli bucket delete-acl Deletes an ACL from a bucket.
viprcli bucket list-acl Lists the ACL for a bucket.
viprcli bucket show Shows a bucket.
viprcli bucket update Updates a bucket.
viprcli catalog commands
viprcli catalog get-category Show service catalog category details.
viprcli catalog get-descriptor Show service catalog category options.
viprcli catalog get-service Show service catalog service details.
viprcli catalog execute Execute the catalog service for the specified request to place the order.
viprcli cluster commands
viprcli cluster create Create a cluster.
viprcli cluster delete Delete the cluster and provide the datacenter and vCenter options if the cluster is a part of Datacenter-vCenter.
viprcli cluster detach Detach a cluster and provide the datacenter and vCenter options if the cluster is a part of Datacenter-vCenter.
viprcli cluster get-hosts Show the list of hosts in the cluster and provide the datacenter and vCenter options if the cluster is a part of Datacenter-vCenter.
viprcli cluster list Show the list of clusters.
viprcli cluster list-umexportmasks Show the list of unmanaged export masks on the cluster and provides the datacenter and vCenter options if the cluster is a part of Datacenter-vCenter.
viprcli cluster list-umvolumes Show the list of unmanaged volumes on the cluster and provides the datacenter and vCenter options if the cluster is a part of Datacenter-vCenter.
viprcli cluster show Show the cluster details and provides the datacenter and vCenter options if the cluster is a part of Datacenter-vCenter.
viprcli cluster tasks Check the tasks of a cluster and provide the datacenter and vCenter options if the cluster is a part of Datacenter-vCenter.
viprcli cluster update Update the cluster with specified details and provide the datacenter and vCenter options if the cluster is a part of Datacenter-vCenter.
viprcli computeimage commands
viprcli computeimage create Adds a compute image to the ViPR Controller physical assets.
viprcli computeimage delete Deletes a compute image.
viprcli computeimage list Lists all the compute images that have been added to ViPR Controller.
viprcli computeimage show Shows the details of a compute image.
viprcli computeimage update Updates the properties of a compute image.
viprcli computeimageserver commands
viprcli computeimageserver create Adds the compute image server to the ViPR Controller physical assets
viprcli computeimageserver delete Deletes a compute image server from ViPR Controller.
viprcli computeimageserver list Provides a list of the compute image servers that have been added to the ViPR controller.
viprcli computeimageserver show Provides the details of a compute image server.
viprcli computeimageserver update Edits the compute image server properties.
viprcli computelement commands
viprcli computelement deregister De-register a compute element.
viprcli computelement list Lists a compute element.
viprcli computelement register Register a compute element.
viprcli computelement show Show a compute element.
viprcli computesystem commands
viprcli computesystem create Adds a compute system (Cisco UCS) to the ViPR Controller physical assets.
viprcli computesystem delete Deletes a compute system.
viprcli computesystem deregister De-registers a compute system.
viprcli computesystem discover Discovers a compute system.
viprcli computesystem list Lists the compute systems.
viprcli computesystem list-compute-elements Lists the compute elements for a compute system.
viprcli computesystem register Registers a compute system.
viprcli computesystem show Shows a compute system and the compute system attributes.
viprcli computesystem update Edit the properties of a Vblock compute system (UCS).
viprcli computevpool commands
viprcli computevpool assign_computele Assigns a compute element to a virtual pool.
viprcli computevpool create Creates a compute virtual pool.
viprcli computevpool delete Deletes a compute virtual pool.
viprcli computevpool list Lists a compute virtual pool.
viprcli computevpool show Shows a compute virtual pool.
viprcli computevpool update Updates a compute virtual pool.
viprcli consistencygroup commands
viprcli consistencygroup accessmode Updates the access mode on a target copy for a RecoverPoint consistency group.
viprcli consistencygroup create Create a consistency group.
viprcli consistencygroup delete Delete the consistency group.
viprcli consistencygroup failover Provide access to the latest image at the remote site.
viprcli consistencygroup failover_cancel Cancel the operation started by viprcli consistencygroup failover.
viprcli consistencygroup list Show the list of consistency groups within a project.
viprcli consistencygroup show Show the details of a consistency group.
viprcli consistencygroup swap Swap the personalities of the source and target so that the source becomes the target and the target becomes the source.
viprcli consistencygroup update Update the consistency group.
viprcli executionwindow commands
viprcli executionwindow create Create a schedule for the orders to be executed at a scheduled time or date or recurring intervals.
viprcli executionwindow delete Delete the execution window.
viprcli executionwindow list List the execution window.
viprcli executionwindow show Show the execution window details.
viprcli executionwindow update Update the execution window.
viprcli event commands
viprcli event approve Approve the event.
viprcli event delete Delete the event.
viprcli event list List the events.
viprcli event details Provides details about the event.
viprcli event decline Declines action on the event.
viprcli event show Shows the event.
viprcli exportgroup commands
viprcli exportgroup add_cluster Add a cluster to the export group.
viprcli exportgroup add_host Add a host to the export group.
viprcli exportgroup add_initiator Add an initiator to the export group. An initiator is a host port.
viprcli exportgroup add_vol Add a volume or a snapshot to the export group.
viprcli exportgroup create Create a new export group.
viprcli exportgroup delete Delete an export group.
viprcli exportgroup list Show the list of all export groups for a specified project.
viprcli exportgroup path_adjustment_preview Preview export path adjustments.
viprcli exportgroup path_adjustment Adjust the export paths for a storage system within an export group.
viprcli exportgroup remove_cluster Delete a cluster from the export group.
viprcli exportgroup remove_host Delete a host from the export group.
viprcli exportgroup remove_initiator Remove an initiator from an export group.
viprcli exportgroup remove_vol Remove a volume or volume snapshot from an export group.
viprcli exportgroup show Show the details for a specified export group in a project.
viprcli exportgroup tag Add or delete a tag name to the export group.
viprcli exportgroup tasks List all tasks within a given project of the export group.
viprcli filepolicy commands
viprcli filepolicy assign Assigns a file policy to a vPool, project, or file system.
viprcli filepolicy create Creates a new file policy.
viprcli filepolicy delete Deletes a file policy.
viprcli filepolicy list Outputs a list of file policies.
viprcli filepolicy show Shows a file policy.
viprcli filepolicy unassign Unassigns a file policy.
viprcli filepolicy update Updates a file policy.
viprcli filesystem commands
viprcli filesystem assign-policy Assigns a snapshot schedule policy to a file system.
viprcli filesystem change-vpool Moves a file system from one virtual pool to another.
viprcli filesystem create Create a file system with the given parameters.
viprcli filesystem create-replication-copy Creates a replication copy of a file system.
viprcli filesystem delete Delete a file system.
viprcli filesystem delete-acl Delete an ACL of a SMB share.
viprcli filesystem expand Expand the size of a file system.
viprcli filesystem export Export the file system to specified host protocol endpoints.
viprcli filesystem export-rule Add, update, and delete export rules for a file system.
viprcli filesystem failback-replication After a failover, use to fail back from the target file system to the original source file system.
viprcli filesystem failover-replication Use to fail over from the source file system to the target file system.
viprcli filesystem list Show a list of all file systems in the given project.
viprcli filesystem list-acl List the ACLs of a SMB share.
viprcli filesystem list-policy Lists the schedule policies that are assigned to a file system.
viprcli filesystem mount Mount a previously created NFS Export of a file system to a Linux Host.
viprcli filesystem mountlist Show a list of all NFS exports of a file system in the specified project.
viprcli filesystem nfs-acl To add, update, or delete an NFS ACL from a filesystem or subdirectory.
viprcli filesystem nfs-delete-acl Delete the NFS access control list from a filesystem or subdirectory.
viprcli filesystem nfs-list-acl Lists the NFS ACLs on a filesystem or a subdirectory.
viprcli filesystem pause-replication Pauses the replication process between the source and target file system.
viprcli filesystem refresh-replication-copy Refreshes a replication copy of a file system.
viprcli filesystem remove-replication-copy Removes a replication copy of the file system.
viprcli filesystem resume-replication Resumes the replication process between the source and target file system.
viprcli filesystem schedule-snapshots-list Lists schedule snapshots.
viprcli filesystem share-acl Add, update, and delete ACL rules for a file share.
viprcli filesystem show Shows details of a file system.
viprcli filesystem show-exports Show the export details of a filesystem.
viprcli filesystem show-shares Show the shares of a filesystem.
viprcli filesystem start-replication Manually starts, or restarts the replication process for continuous copies of file systems.
viprcli filesystem stop-replication Stops the replication process between the source and target file system.
viprcli filesystem tag Add or remove tags to the file system.
viprcli filesystem tasks Check the status of an asynchronous task.
viprcli filesystem unexport Unexport a file system.
viprcli filesystem unmanaged ingest Ingest unmanaged filesystems into ViPR Controller.
viprcli filesystem unmanaged show Show the details of unmanaged file systems.
viprcli filesystem unmount Unmount a previously mounted NFS Export from a Linux Host.
viprcli filesystem update Updates the file system.
viprcli filesystem unassign-policy Unassigns a schedule policy that is assigned to a file system.
viprcli host commands
viprcli host compute-host-os-install Installs an operating system on a host.
viprcli host create Create a host with the specified name.
viprcli host delete Delete a host.
viprcli host detach Detach a host.
viprcli host discover Discover a host.
viprcli host discover-array-affinity Discovers host/array affinity for a given host, or hosts, when the storage is provisioned to the host from VMAX, VNX for Block, Unity, and XtremIO storage systems.
viprcli host list Show the list of hosts.
viprcli host list-initiators Show the list of initiators for the specified host label.
viprcli host list-ipinterfaces Show the list of IP interfaces for a specified host label.
viprcli host list-umexportmasks Show the list of un-managed export masks on a host.
viprcli host list-umvolumes Show the list of un-managed volumes on a host.
viprcli host provision-bare-metal-host Creates hosts using compute elements from the virtual compute pool.
viprcli host show Show the details of a host.
viprcli host tasks Check the tasks of a host.
viprcli host update Update a host with specified details.
viprcli initiator commands
viprcli initiator aliasget Lists the aliases used for VMAX storage system initiator world wide port names.
viprcli initiator aliasset Adds an alias to a VMAX storage system initiator world wide port name.
viprcli initiator create Create an initiator.
viprcli initiator delete Delete the specified initiator.
viprcli initiator list Show the list of initiator port details.
viprcli initiator show Shows initiator details.
viprcli initiator tasks Check the tasks of an initiator.
viprcli initiator update Update an initiator with specified details.
viprcli ipinterface commands
viprcli ipinterface create Create an IP interface.
viprcli ipinterface delete Delete an IP interface.
viprcli ipinterface list Show the list of IP interfaces.
viprcli ipinterface show Show the IP interface details.
viprcli ipinterface tasks Check the tasks of an IP interface.
viprcli ipinterface update Update an IP interface.
viprcli ipsec commands
viprcli ipsec change-status Changes the IPsec status.
viprcli ipsec rotate-key Use to start key rotation.
viprcli ipsec status Shows the IPsec status.
viprcli logout command
viprcli logout Log out of ViPR Controller CLI.
viprcli meter commands
viprcli meter Show performance statistics for ViPR Controller in a specified time range.
viprcli monitor commands
viprcli monitor Get system events for a given time period.
viprcli network commands
viprcli network create Create a network with the specified parameters.
viprcli network list List networks in a virtual array.
viprcli network show Show the details of a network associated with the virtual array.
viprcli network update Update a network.
viprcli network assign Assign a virtual array to a network.
viprcli network delete Delete a network associated with the specified virtual array.
viprcli network endpoint add Add an endpoint to a network.
viprcli network endpoint remove Remove an endpoint from a network.
viprcli network register Register a network.
viprcli network deregister Unregister a network.
viprcli networksystem commands
viprcli networksystem aliases add Add aliases to a network system.
viprcli networksystem aliases remove Remove aliases from a network system.
viprcli networksystem aliases show Show the aliases for a network system.
viprcli networksystem aliases update Update aliases for a network system.
viprcli networksystem create Create a network system in ViPR Controller.
viprcli networksystem discover Discover a network system (switch) from ViPR Controller.
viprcli networksystem list List all network system objects in ViPR Controller.
viprcli networksystem list-connections Show the worldwide names of all the ports managed by a ViPR Controller network system.
viprcli networksystem show Show a verbose listing of all parameters of a specific network system.
viprcli networksystem delete Delete a network system object from ViPR Controller.
viprcli networksystem register Register a network system.
viprcli networksystem deregister Unregister a network system.
viprcli objectuser commands
viprcli objectuser create_secretkey Creates a secret key.
viprcli order commands
viprcli order list List all orders.
viprcli order show Show details about a specific order.
viprcli order show-execution Show execution details for an order.
viprcli project commands
viprcli project create Create a project.
viprcli project list Get a list of all projects belonging to a specified tenant.
viprcli project show Show details of project.
viprcli project update Update a project.
viprcli project delete Delete a project.
viprcli project get-acl Show the project ACL details.
viprcli project update-acl Update the project ACL details.
viprcli project tag Add or delete a tag name for the specified project.
viprcli protectionsystem commands
viprcli protectionsystem create Add a protection system to ViPR Controller.
viprcli protectionsystem discover Re-discover a protection system.
viprcli protectionsystem list Show a list of protection systems established in ViPR Controller.
viprcli protectionsystem show Show the detailed information about a protection system.
viprcli protectionsystem update Modify the parameters of a ViPR Controller protection system.
viprcli protectionsystem connectivity Show information about a protection system and its associated storage.
viprcli protectionsystem delete Delete a protection system from ViPR Controller
viprcli quotadirectory commands
viprcli quotadirectory create Create a quota directory.
viprcli quotadirectory delete Delete a quota directory.
viprcli quotadirectory list List a quota directory.
viprcli quotadirectory show Show a quota directory.
viprcli quotadirectory update Update a quota directory.
viprcli sanfabrics commands
viprcli sanfabrics activate Activates a VSAN or fabric.
viprcli sanfabrics get-sanzone Get the SAN zone for a VSAN or fabric.
viprcli sanfabrics list List a VSAN or fabric.
viprcli sanfabrics list-sanzones List the SAN zones for a VSAN or fabric.
viprcli sanfabrics show Show a VSAN or fabric.
viprcli sanfabrics update Updates a VSAN or fabric.
viprcli schedule-policy commands
viprcli schedule-policy create Creates a policy, which defines regularly scheduled intervals for when the ViPR Controller creates snapshots of file systems.
viprcli schedule-policy delete Deletes a schedule policy.
viprcli schedule-policy list Lists the schedule policies.
viprcli schedule-policy show Presents the schedule policy settings.
viprcli schedule-policy update Edits a schedule policy.
viprcli scheduled_event commands
viprcli scheduled_event create Used to create a scheduled event for block snapshots, block full copies, filesystem snapshots, etc.
viprcli scheduled_event get View the parameters of a scheduled event.
viprcli scheduled_event cancel Cancel the scheduled event for an order.
viprcli scheduled_event delete Delete a scheduled event.
viprcli scheduled_event update Make changes to a scheduled event.
viprcli snapshot commands
viprcli snapshot activate Activate a snapshot and establish the synchronization between the source volume or consistency group and the target snapshot.
viprcli snapshot create Create a snapshot of a specified file system, volume, or consistency group.
viprcli snapshot delete Delete the specified snapshot.
viprcli snapshot delete-acl Delete all the access control lists from a snapshot share.
viprcli snapshot export-file Export a file system snapshot.
viprcli snapshot export-rule Add, update, and delete export rules for file system snapshots.
viprcli snapshot import-to-vplex Import a VPLEX snapshot into VPLEX as a VPLEX volume.
viprcli snapshot list Show the list of snapshots under a given file system, volume, or consistency group name.
viprcli snapshot list-acl Provides the list of access control entries on a CIFS snapshot share.
viprcli snapshot restore Restore a volume or file system snapshot, overwriting the target file system, volume, or consistency group.
viprcli snapshot resync Resynchronize a snapshot and its volume or consistency group for XtremIO arrays.
viprcli snapshot share-acl Used to set an access control list (ACL) on a snapshot share.
viprcli snapshot show Show the snapshot details for the given snapshot name. Specify a file system, volume, or consistency group name (but only one at a time).
viprcli snapshot show-exports Shows the details of the exports associated with a specific snapshot.
viprcli snapshot show-shares Shows the details of the file share associated with a specific snapshot.
viprcli snapshot tasks Check the status of asynchronous tasks.
viprcli snapshot tag Add or delete a tag name for the specified snapshot.
viprcli snapshot unexport-file Unexport a file system snapshot.
viprcli snapshotsession commands
viprcli snapshotsession create Creates a snapshot session.
viprcli snapshotsession deactivate Deactivates a snapshot session.
viprcli snapshotsession linktarget Links a target to a snapshot session.
viprcli snapshotsession list Lists snapshot sessions.
viprcli snapshotsession relinktargets Relinks a target to a snapshot session
viprcli snapshotsession restore Restores a snapshot session.
viprcli snapshotsession show Shows a snapshot session.
viprcli snapshotsession unlinktargets Unlinks a target from a snapshot session.
viprcli storagepool commands
viprcli storagepool delete Deletes a storage pool.
viprcli storagepool deregister Remove the registered storage pool.
viprcli storagepool list Show a list of all storage pools for a specified storage system.
viprcli storagepool register Register a storage pool with ViPR Controller.
viprcli storagepool show Show a detailed listing for a particular storage pool.
viprcli storagepool update Modify the parameters of a storage pool.
viprcli storageport commands
viprcli storageport create Creates a storage port.
viprcli storageport delete Deletes a storage port.
viprcli storageport deregister Unregister a storage port.
viprcli storageport list Show the list of all storage ports for a specified storage system.
viprcli storageport register Register a storage port with ViPR Controller.
viprcli storageport show Show detailed listing for a particular storage port.
viprcli storageport update Update the storage system for a registered storage port.
viprcli storageprovider commands
viprcli storageprovider list List a storage provider.
viprcli storageprovider create Create a storage provider.
viprcli storageprovider delete Delete a storage provider.
viprcli storageprovider scan Scan a storage provider.
viprcli storageprovider show Show a storage provider.
viprcli storageprovider update Update a storage provider.
viprcli storagesystem commands
viprcli storagesystem connectivity Shows connectivity information for a storage system.
viprcli storagesystem create Create a storage system object.
viprcli storagesystem delete Delete a storage system.
viprcli storagesystem deregister Unregister the specified storage system.
viprcli storagesystem discover Discover the storage systems, associated storage pools, and storage ports.
viprcli storagesystem discover_arrayaffinity Discover the host mapping to VMAX, VNX for Block, Unity, and XtremIO storage systems for the given storage system.
viprcli storagesystem discover_unmanagedfilesystems Discover the unmanaged filesystem by storage system.
viprcli storagesystem discover_unmanagedvolumes Discover the unmanaged volumes by storage system.
viprcli storagesystem get_unmanagedfilesystems Show the unmanaged file systems of specified storage system.
viprcli storagesystem get_unmanagedvolumes Show the unmanaged volumes of specified storage system.
viprcli storagesystem list Show the list of all storage systems.
viprcli storagesystem register Register the specified storage system.
viprcli storagesystem show Show the details for a specified storage system type.
viprcli storagesystem show-unmanagedexportmask Shows the details of a specified unmanaged export mask.
viprcli storagesystem update Update the storage system.
viprcli system commands
viprcli system add-license Add a license file.
viprcli system add-site Adds a ViPR Controller standby site when configuring ViPR Controller for DR.
viprcli system cluster-ipinfo Get the IPs of the ViPR Controller nodes for Hyper-V and non-vApp platforms in a single Virtual Data Center (VDC) configuration.
viprcli system cluster-ipreconfig Reconfigures the IPs of the ViPR Controller nodes for Hyper-V and non-vApp platforms in a single Virtual Data Center (VDC) configuration.
viprcli system cluster-poweroff Power off the cluster.
viprcli system cluster-recovery Triggers ViPR Controller node recovery for ViPR Controller installations on Hyper-V and on VMware without vApp.
viprcli system cluster-recovery-status Gets the status of ViPR Controller node recovery for ViPR Controller installations on Hyper-V and on VMware without vApp.
viprcli system connectemc-ftps Configure ConnectEMC service with FTPS details.
viprcli system connectemc-smtp Configure ConnectEMC service with SMTP details.
viprcli system create-backup Create a system backup.
viprcli system db-consistency-check Use to trigger a database consistency check.
viprcli system db-consistency-check-cancel Use to cancel a database consistency check.
viprcli system db-consistency-check-status Use to view the status of a database consistency check.
viprcli system dbrepair-status Gets the status of ViPR Controller database repair.
viprcli system delete-backup Deletes a system backup.
viprcli system delete-site Deletes a standby site.
viprcli system delete-sites Deletes multiple standby sites.
viprcli system delete-task Delete a task.
viprcli system disable-update-check Disable the update check.
viprcli system download-backup Downloads a system backup.
viprcli system failover-site Use to perform a failover from a standby site.
viprcli system get-alerts Retrieve alerts and save it in a file name of your choice for troubleshooting analysis.
viprcli system get-cluster-state Shows status information for the ViPR Controller cluster.
viprcli system get-diagnostics Shows the diagnostics for the specified nodes.
viprcli system get-esrsconfig Shows the EMC Secure Remote Support (ESRS) configuration details.
viprcli system get-health Show the health of nodes and services.
viprcli system get-license Show the license details.
viprcli system get-log-level Show the logging levels.
viprcli system get-logs Get Logs in either JSON or XML Format.
viprcli system get-properties Show the system component properties as a key-value pair.
viprcli system get-properties-metadata Show the metadata information for system properties.
viprcli system get-stats Show the system statistics.
viprcli system get-storage Show the statistical information of the underlying storage type in kilobytes.
viprcli system get-target-version Shows the current version of ViPR Controller.
viprcli system install-image Install a new ViPR Controller image from the remote ViPR Controller repository.
viprcli system ipreconfig-status Get the status of the reconfiguration of the IPs of the ViPR Controller nodes.
viprcli system list-backup Lists a system backup.
viprcli system list-external-backup View a list of external backups.
viprcli system list-sites Lists the ViPR Controller standby sites configured for ViPR Controller DR.
viprcli system pause-site Use to pause a standby site.
viprcli system pause-sites Use to pause multiple standby sites.
viprcli system pull-backup Use to pull a specific backup.
viprcli system pull-backup-cancel Use to cancel the pull of a specific backup.
viprcli system query-backup Use to view information about a ViPR backup set.
viprcli system query-backup-info Use to view information about a ViPR backup set.
viprcli system reboot-node Reboot a specific node.
viprcli system remove-image Removes a ViPR Controller image from your local repository.
viprcli system reset-properties Reset the specified key-value pair to the default value.
viprcli system restart-service Restart a specified service.
viprcli system restore-backup Use to restore a backup set.
viprcli system restore-backup-status Use to view the status of a backup set restore operation.
viprcli system resume-site Use to resume a standby site.
viprcli system retry-site Use to perform a retry operation on a standby site.
viprcli system send-alert Send an alert to SYR database.
viprcli system send-heartbeat Send the system heartbeat to indicate that the system is in working condition.
viprcli system send-registration Send system registration details.
viprcli system set-log-level Configure the logging level.
viprcli system set-properties Update the key-value pair with the specified values in the properties file.
viprcli system show-site Shows site details.
viprcli system site-error Use to query the latest error message for a specific standby site.
viprcli system site-time Use to query the transition times for a specific standby site.
viprcli system skip-setup Enables execution of ViPR Controller UI catalog services without initial setup.
viprcli system switchover-site Use to switch over to a target new active site.
viprcli system update-cluster Upgrades ViPR Controller to a newer version.
viprcli system update-site Updates a site on the system.
viprcli system upload Install a ViPR Controller image file from a local repository.
viprcli system upload-backup Use to upload a backup.
viprcli system upload-backup-status Use to view the status of an upload backup operation.
viprcli task commands
viprcli task Check the status of an asynchronous task.
viprcli tenant commands
viprcli tenant add-attribute Map an Active Directory attribute to a ViPR Controller tenant.
viprcli tenant add-group Map an Active Directory user or group to a ViPR Controller tenant.
viprcli tenant add-namespace Add an Elastic Cloud Storage namespace to a tenant.
viprcli tenant add-role Assigns a new role to a user in a tenant.
viprcli tenant create Create a subtenant with the given parameters.
viprcli tenant delete-role Remove a role from a user or group of users.
viprcli tenant delete Delete a tenant with the given name.
viprcli tenant get-clusters Show the list of clusters in a tenant.
viprcli tenant get-hosts Show the list of hosts in a tenant.
viprcli tenant get-role Retrieve the users with roles assigned to a tenant.
viprcli tenant get-vcenters Show the list of vCenters in a tenant.
viprcli tenant list Show a list of tenants and subtenants.
viprcli tenant list-object-namespaces Shows a list of object namespaces.
viprcli tenant remove-attribute Remove an Active Directory attribute from a ViPR Controller tenant.
viprcli tenant show Show the detailed information of a tenant.
viprcli tenant show-object-namespaces Shows object namespaces.
viprcli tenant update-quota Update the tenant with quota information.
viprcli varray commands
viprcli varray create Create a virtual array.
viprcli varray get-acl List tenants that have ACL privileges assigned for this virtual array.
viprcli varray list Lists the virtual arrays.
viprcli varray show Show the details about a virtual array.
viprcli varray allow Allow a specific tenant to use a virtual array.
viprcli varray disallow Remove the USE privilege from a tenant for the specified virtual array.
viprcli varray update Update the virtual array.
viprcli varray list-storage-ports List implicitly associated storage ports of a virtual array.
viprcli varray delete Deletes a virtual array.
viprcli vcenter commands
viprcli vcenter create Adds a vCenter to the ViPR Controller physical assets.
viprcli vcenter delete Delete a vCenter.
viprcli vcenter discover Discover a vCenter.
viprcli vcenter get-clusters Show the clusters of a vCenter.
viprcli vcenter get-datacenters Show the datacenters of a vCenter.
viprcli vcenter get-hosts Show the hosts of a vCenter.
viprcli vcenter list Show the list of vCenters.
viprcli vcenter show Show the vCenter details.
viprcli vcenter tasks Check the tasks of a vCenter.
viprcli vcenter update Update the properties of a vCenter that has been added to the ViPR Controller.
viprcli vcenterdatacenter commands
viprcli vcenterdatacenter create Create a vCenter datacenter.
viprcli vcenterdatacenter create-cluster Create a vCenter datacenter cluster.
viprcli vcenterdatacenter delete Delete a vCenter datacenter.
viprcli vcenterdatacenter get-clusters Show the clusters of a vCenter datacenter.
viprcli vcenterdatacenter get-hosts Show the hosts of a vCenter datacenter.
viprcli vcenterdatacenter list Show the list of datacenters in a vCenter.
viprcli vcenterdatacenter show Show the datacenter details of a vCenter.
viprcli vcenterdatacenter update-cluster Updates a vCenter datacenter cluster.
viprcli vdc commands
viprcli vdc add Add a vdc.
viprcli vdc delete Delete a vdc.
viprcli vdc disconnect Disconnect a vdc.
viprcli vdc list List a vdc.
viprcli vdc reconnect Reconnect a vdc.
viprcli vdc show Show a vdc.
viprcli vdc update Update a vdc.
viprcli vnasserver commands
viprcli vnasserver assign Assigns a vNAS server to a ViPR Controller project.
viprcli vnasserver list Show a list of vNAS servers.
viprcli vnasserver show Show the attributes of a vNAS server.
viprcli vnasserver unassign Unassigns a vNAS server from a project.
viprcli volume commands
viprcli volume clone Clone a volume with the given parameters.
viprcli volume clone-activate Activates a full copy (clone) of a volume that was created inactive, or all clones in a consistency group clone set.
viprcli volume clone-deactivate Deactivates a full copy (clone) of a volume or all clones in a consistency group clone set.
viprcli volume clone-checkprogress Check progress of an operation on a single clone or a consistency group clone set.
viprcli volume clone-detach Detach a clone from its source volume or detach all clones in a consistency group from their source volumes.
viprcli volume clone-list List the clones in a consistency group.
viprcli volume clone-restore Restore a single volume from its clone or all volumes in a consistency group from their clones.
viprcli volume clone-resync Resynchronizes a full copy (clone) of a single volume, or all clones in a consistency group with their source volumes.
viprcli volume clone-show Gets the details of a single clone or of all the clones in a consistency group.
viprcli volume continuous_copies copy Make a continuous-copy volume of the specified volume.
viprcli volume continuous_copies delete Delete the continuous-copy of a volume.
viprcli volume continuous_copies establish Establishes the continuous-copying for a volume.
viprcli volume continuous_copies failover Fail over continuous protection for a volume.
viprcli volume continuous_copies failover-test Fail over continuous protection test for a RecoverPoint protected volume.
viprcli volume continuous_copies failover-test-cancel Cancel a fail over continuous protection test for a RecoverPoint protected volume.
viprcli volume continuous_copies list List the continuous-copies for a native or RecoverPoint protected volume.
viprcli volume continuous_copies pause Pause the continuous-copying for a volume.
viprcli volume continuous_copies restore Restores the continuous-copying for a volume.
viprcli volume continuous_copies resume Resume the continuous-copying for a volume.
viprcli volume continuous_copies show Show the details of continuous-copy for a volume.
viprcli volume continuous_copies start Start a continuous-copy for a specified volume.
viprcli volume continuous_copies stop Stop the continuous-copy for a specified volume.
viprcli volume continuous_copies swap Swap a continuous-copy for a specified volume.
viprcli volume continuous_copies update-access-mode Enable Direct Access for a specified volume.
viprcli volume create Create a volume with the given parameters.
viprcli volume delete Delete a volume.
viprcli volume expand Expand the size of a volume.
viprcli volume list Show a list of all volumes in the given project.
viprcli volume migration-cancel Cancel the specified data migration, currently in progress.
viprcli volume migration-deactivate Remove the completed data migration from ViPR Controller and VPLEX.
viprcli volume migration-list Show a list of all data migrations.
viprcli volume migration-pause Pause the specified data migration, currently in progress.
viprcli volume migration-resume Resume the specified data migration, that was previously paused.
viprcli volume migration-show List the details of the specified data migration.
viprcli volume protectionset show Show the protection set details for a volume.
viprcli volume show Show a detailed listing of the volume.
viprcli volume tag Add or remove tags to a volume.
viprcli volume tasks Check the status of an asynchronous task.
viprcli volume unmanaged ingest Ingest unmanaged volumes into ViPR Controller, and provides the datacenter and vCenter options if the cluster is a part of Datacenter-vCenter.
viprcli volume unmanaged show Show the details of unmanaged volumes.
viprcli volume update Update a volume.
viprcli vpool commands
viprcli vpool create Creates a virtual storage pool in ViPR Controller.
viprcli vpool list Show a list of virtual storage pools accessible to the user.
viprcli vpool show Show a verbose listing of all parameters of a virtual pool that has a specified name.
viprcli vpool update Update the virtual pool.
viprcli vpool delete Delete a virtual pool.
viprcli vpool allow Allow a specified tenant to use a virtual pool.
viprcli vpool disallow Disallow the tenant from using a virtual pool.
viprcli vpool add_pools Add the pools to a virtual pool.
viprcli vpool refresh_pools Update the matched pools list of a virtual pool.
viprcli vpool get_pools Show the storage pools contained in a virtual pool.
viprcli vpool remove_pools Remove the pool from a virtual pool.
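
To tie several of these commands together, a hypothetical provisioning sequence might create a block volume and export it to a host.  All of the object names below are made up, and the option spellings are assumptions that should be verified against each command’s -h output:

viprcli volume create -name vol01 -size 2G -project proj01 -vpool block_vpool -varray varray01
viprcli exportgroup create -name eg01 -project proj01 -varray varray01
viprcli exportgroup add_vol -name eg01 -project proj01 -volume vol01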

A Roundup of Storage Startups

The enterprise storage market has been a hotbed of innovation and entrepreneurship in the last several years.  While the storage industry has consolidated through acquisitions (such as HPE’s purchase of Nimble Storage) and other companies simply shutting down, there always seem to be new companies waiting in the wings to take their place.

These latest new companies are hoping to prove that they have a new and better way to address the increasing challenges of managing the huge amount of data growth by implementing their own take on enterprise storage technology.  They all come to the market with both the hope and the potential to change how the enterprise business market stores their data.

An up to date list of storage startups is hard to maintain, as the ranks are growing fast and companies can appear seemingly out of nowhere.  This latest crop has some game changing ideas and I look forward to seeing how their technology will shape the future of enterprise storage.  Some have been around for several years and are starting to mature, others started less than a year ago.

Many new companies are hoping (betting?) that the market will see a need for a new data management layer of software that provides improved management capabilities across multiple silos of data, both on-premises and in the cloud.  Some of the emerging suppliers of data management software are Actifio, Avere (now part of Microsoft), Catalogic, Cohesity, Delphix, Druva, Rubrik, Scality and Strongbox. I’m going to be focusing more on the hardware suppliers in this post, so let’s take a closer look at some of the rising stars in the enterprise storage market.  I’m going to look at 23 companies in total, in no particular order:  E8 Storage, Igneous Systems, Komprise, Portworx, Primary Data, Reduxio, Talena, Alluxio, Aparavi, Attala Systems, Datera, Datrium, Elastifile, Morro Data, Excelero, Minio, Nyriad, ScaleFlux, StorageOS, Storj Labs, Vexata, Wasabi and WekaIO.

I have no affiliation with any of these hardware vendors; this is simply a compiled list I generated with some basic online research.  The data presented is based primarily on marketing information that I gathered.  For more detailed information I recommend reviewing the individual company websites; links are provided for all of them.

E8 Storage | CEO: Zivan Ori

E8 Storage focuses on shared accelerated storage for data-intensive, top-tier applications that require a large amount of I/O. Their scalable solution is well suited for intense low-latency workloads, real-time analytics, financial and trading applications, transactional processing and large-scale file systems.  Their patented high-performance shared NVMe storage solution delivers much higher performance, improved storage performance density, and lower costs when compared to legacy systems.  They promise NVMe performance without giving up reliability and availability.  The company is privately held and based in Santa Clara, CA with R&D in Tel Aviv, and they have channel partners in the US and in Europe.

Their hardware is built on industry standards, including converged ethernet with RDMA and standard 2.5″ NVMe SSDs.  Up to 96 host servers can connect to each storage controller, and each controller is concurrently linked to shared storage to deliver scalability into the petabytes.

Potential customers can purchase their software separately, or as an integrated system if an appliance-based solution is a better fit. Bought independently, their software allows the use of hardware from any vendor, as long as the vendor is on their pre-qualified list.  It also allows businesses to take advantage of economies of scale within their own supply chains and purchase new units at a pace that suits their needs.

Igneous Systems | CEO: Kiran Bhageshpur

Igneous Systems is a Seattle-based, venture-backed company that built a secondary storage system designed to support massive file systems. Their Hybrid Storage Cloud solution provides enterprises with a consolidated secondary storage tier with cloud support and scalability.  Igneous remotely manages all on-premises cloud infrastructure, which includes monitoring, troubleshooting, and non-disruptive software upgrades.

Their infrastructure scales from 100TB to 100PB and uses their RatioPerfect architecture, which consists of distributed nano-servers that make the infrastructure resistant to hardware failures. This cloud-like architecture enables Igneous to offer cloud economics in the enterprise data center.

Unlike traditional storage equipment, Igneous Hybrid Storage Cloud uses an integrated serverless environment designed for data centric applications.  It features integrated backup and archive applications that are designed to seamlessly integrate with enterprise NAS as well as tiering data to the cloud.  Integrated search capabilities are built directly into the infrastructure and therefore require no separate backup catalogs to manage. Igneous Hybrid Storage Cloud is specifically designed for massive file systems managing billions of files,  unlike legacy backup systems.

They provide easy-to-deploy storage that is a cost-effective alternative to cloud data storage.  They provide a managed hardware solution on-premises and look after everything from maintenance and provisioning to performance tuning, and their pricing model is based on consumption. With a background at EMC Isilon, the Igneous team has a great deal of experience in building infrastructure for unstructured data.

They were recognized by Gartner as a 2017 “Cool Vendor in Storage Technologies”.

Komprise | CEO: Kumar K. Goswami

Komprise aims to address the issues of storage sprawl and rising costs with storage analytics.  They contend that storage management requires getting as close as possible to real-time insight into what is happening, and their software addresses this by providing metrics together with analytic tools to build a variety of data policies.  They then manage data placement across storage tiers and multiple clouds. Their software allows for interactively modeling multiple scenarios before moving the configuration into production.

Their intelligent data management provides an alternative to more expensive solutions from larger, more established vendors. The company’s IDM platform enables customers to lower NAS and ongoing cloud operation costs by using analytics to intelligently automate archiving and disaster recovery. The service also allows for transparent access to data across on-premises NAS storage and the cloud.

Their analytics processing identifies data that is most suited for the cloud and then transparently archives and replicates the data. User defined policies are automated to move and manage data across on-prem NAS storage tiers.

Portworx | CEO: Murli Thirumale

Portworx provides storage for containers and brings persistent storage to all of the common container schedulers.  All of the most popular databases are supported in the container environment.  They are an early player in the persistent container storage field, but have signed up some big names like GE Digital and Lufthansa Systems.  They are betting on the recent trends to replace hypervisors with containers and see persistent storage as the wave of the future.

They provide scheduler-integrated data services for production enterprise containers and allow users to deploy stateful containers on-prem, in the cloud, or in hybrid clouds.  In contrast to legacy storage that has container connectors built on key-value stores, they are designed and built for cloud-native applications, making container data more portable, persistent and protected.

Primary Data | CEO: Lance Smith

Primary Data’s storage is based on the idea of extensible metadata, using open ended tagging of data objects to control them (i.e. life-cycle management and priority of service), but they also add telemetry to the equation to allow real time automated data placement.

Parallel access to metadata and metrics processing has the effect of speeding up I/O performance, and they keep it cheap by implementing a “pay as you go” pricing model.  Their leadership team happens to include Steve Wozniak (how cool is that?), who is listed as chief scientist. In 2017 they announced $40 million in new funding and a new version of their storage platform.

Reduxio | CEO: Mark Weiner

Reduxio’s TimeOS software delivers high-performance enterprise storage solutions with unique data management capabilities.  They put data at the center of their architecture and allow complete virtualization of all types of storage.

Their HX550 multi-tier storage solution with built-in BackDating allows customers to modernize and simplify their storage infrastructure and IT operations by deploying flash storage that is cost-effective and that can be used across all their applications.

Reduxio’s unified storage platform is designed to deliver near-zero RPO and RTO, while greatly simplifying the data protection process and providing built-in data replication for disaster recovery. The features in the TimeOS v3 released in June 2017 enabled a single platform for the end-to-end management of the life cycle of an application’s data.

They already have a global install base of more than 150 enterprise customers, many with multiple installed systems across a wide range of industrial sectors, including Managed Service Providers, Manufacturing, BioTech, Education, State and Local Government and Professional Services.  Their product seems to be catching on.

Talena | CEO: Srinivas Vadlamani 

Talena claims to have developed the industry’s fastest data backup and recovery solution, with built-in machine intelligence to handle huge data sets for mission-critical applications sitting on top of modern data platforms such as DataStax/Cassandra, Couchbase, Hadoop HBase/Hive, MongoDB and Vertica.  Talena takes advantage of machine learning to ensure data resiliency in the event of disasters. They have the ability to back up and recover petabyte-sized and larger data sets much faster than other solutions on the market, minimizing the impact of data loss and greatly reducing downtime. Their growing customer base includes leading Fortune 500 businesses in the retail, financial services and travel industries, among others.

Targeting the big-data market, they provide backup, recovery, archiving and test data management for major unstructured databases.  Their key features include deduplication and replication control via user-defined policies. The technology supports data-masking algorithms to prevent data exposure as data is moved around or used in testing.

Alluxio | CEO: Haoyuan Li

Alluxio (formerly known as Tachyon) provides virtual distributed storage for Big Data.  They aim to become the storage abstraction layer for Big Data in the same manner that Apache Spark became the computation layer. Their memory centric architecture allows developers to interact with a single storage layer API without worrying about the configurations and complexities of the underlying storage and file systems.

Alluxio is a virtual distributed storage layer between big data computation frameworks and underlying storage systems that delivers data at memory speed to any target framework from any storage system, regardless of its location.  They aim to address the challenge of data locality.  While in-memory storage is usually viewed as cache, their technology allows for separation of the computation layer from the persistent storage layer.  Organizations can run any big data framework (like Apache Spark) with any storage system or filesystem underneath (like S3, EMC, NetApp, OpenStack Swift, Red Hat GlusterFS, etc.), run it on any storage media (DRAM, SSD, HDD, etc.), and with that they support a unified global namespace by virtualizing disparate storage systems.

The company was founded by the creators of the Alluxio open source project from UC Berkeley AMPLab.

Aparavi | Chairman: Adrian Knapp

Aparavi offers cloud data protection and remote disaster recovery as a service. Their cloud-forward solution offers a RESTful API, a policy engine, an open data format, and a multi-tenant architecture.  Their technology can reduce a customer’s storage footprint compared to more traditional methods while making sure compliance policies are adhered to.  They aim to address the issues of evolving global regulations and the huge amounts of data now being generated with long-term data retention solutions across modern, multi-cloud architectures.

At their core, they aim to better prepare their customers to meet the challenges of long-term data retention across multi-cloud architectures. They designed and built a new software-as-a-service platform from scratch to allow companies to protect data on-prem and in the cloud.  They also aim to break the typical barriers of cost, vendor lock-in, complexity and regulatory compliance requirements that cause businesses problems when utilizing more conventional solutions.  The company is run by management and engineers with a ton of experience in data retention, and the issue they are attempting to solve is something I’ve seen directly in the companies I’ve worked for.  They may have something here.

Attala Systems | CEO: Taufik Ma

Attala offers high-performance computing and primary cloud storage.  Their product utilizes a scale-out fabric running on standard Ethernet to interconnect servers and data nodes in a data center.  Because they focus on scale-out cloud storage and use an FPGA-based fabric, they are able to effectively eliminate legacy storage management layers.  They tout that their product provides over ten million IOPS per scale-out node, with latencies as low as 16 microseconds.

The Attala fabric includes the Model HNA host PCIe adapters, providing full hardware emulation of NVMe SSDs, thus allowing their solution to expose pooled resources as virtual SSDs. The host OS, hypervisor or driver sees the virtual SSDs as real SSDs using standard NVMe drivers, so they can be used with any OS, hypervisor, or bare-metal provisioning software.

The software also offers a fully automated orchestration layer, where the fabric dynamically and securely attaches volumes from storage resources from across the network directly to bare-metal servers, virtual machines or containers. No host agents or other software is required, so deployment and maintenance of the system across heterogeneous environments is fairly simple.

Datera | CEO: Marc Fleischmann 

Datera aims to solve what they see as some of the biggest challenges in storage. Their key-value store approach uses NVDIMM to speed up write operations, coalesce writes, and provide a cache for reads. Their access protocol can aggregate massively parallel reads, and their software tools provide many of the same compression, snapshot and replication features offered by the bigger and more established storage vendors. The software also works with orchestration tools from VMware, OpenStack, and Docker.

The product is focused on DevOps and cloud native apps use cases.  It runs on x86 servers with flash, and there is iSCSI-based native integration with OpenStack, CloudStack, VMware vSphere and container orchestration platforms such as Docker, Kubernetes and Mesos.

Some of the key features of their Datera Elastic Data Fabric (DEDF) include a RESTful interface; API-first operations that provide web-scale automation with full infrastructure programmability; policy-based configuration; self-service provisioning; a scale-out model; a flash-first design that delivers high efficiency and low latency; multi-tenancy and QoS for cloud-native and traditional workloads; and heterogeneous component support for easily scaling across commodity x86 servers.

Datrium | CEO: Brian Biles

Datrium offers stackable appliances that act as servers. Each appliance has a flash cache, and they are linked to a back-end storage unit with larger hard drives that then serves as the primary storage. Enterprise features such as compression, deduplication and end-to-end encryption are included. They also offer an advanced snapshot tool that includes a catalog of snapshots. Their product takes a slightly different approach than a hyperconvergence vendor like Nutanix, which only pools drives that are built into its servers.

They see compute, primary storage, secondary storage and cloud storage all coming together in a configuration that is scalable and easy to manage, without the need for a silo for each storage class. Because most I/O requests are served by the on-board flash cache on the compute nodes, the system can deliver excellent performance without ever having to go to the data nodes.

Compute nodes can be supplied by Datrium, or clients can use their existing infrastructure. As persistent data resides on the data nodes, compute nodes are stateless and can go offline without risking data loss or corruption. They support a wide variety of environments including vSphere 5.5-6.5, Red Hat 7.3, CentOS 7 1611, and Docker 1.2.

They have a unique spin on convergence, and their DVX system really enhances storage efficiency, which is critical to getting the most out of the flash in the data nodes.

Elastifile | CEO: Amir Aharoni

Elastifile offers scale-out file storage. Their product employs the Bizur consensus algorithm and a distributed metadata model with adaptive data placement to provide cloud-enabled storage services capable of handling transactional workloads at very low latency.

Their software is designed to help large and midsize enterprises scale up through the cloud to thousands of nodes and millions of IOPS for their most mission-critical workloads. It will run on any server and can use any type of flash (3D NAND and TLC included). They claim to bring flash performance to all enterprise applications while reducing the capex and opex of virtualized data centers, and to simplify the adoption of hybrid cloud by extending file systems across on-prem and cloud deployments.

Morro Data | CEO: Paul Tien

Morro Data offers file storage and hybrid cloud solutions.  Their CloudNAS service combines an on-prem cache with S3 or Backblaze cloud storage, designed to give small/midsize businesses an alternative to using local file servers.

What separates them from others is a global distributed file system that synchronizes customer data between one or more on-prem CacheDrive hardware appliances and public cloud storage. The CacheDrives store frequently accessed data on site for better performance.

Their CacheDrive also serves as a cloud storage gateway to improve the performance of file transfers to object storage in the cloud, and it is designed to optimize bandwidth in order to accommodate less-than-ideal connections. Like the more established vendors, Morro also supports key enterprise storage features such as data encryption, compression, retention policies and data recovery.

CloudNAS is designed to be used as primary storage, with the master copy of the data stored in the cloud and synchronized to the CacheDrives at local sites.

Excelero | CEO: Lior Gal

Excelero is a software-defined storage technology startup, and their primary offering is their NVMesh Server SAN software. They designed the software to pool NVMe storage from multiple servers.  The pooled storage then offers very high performance and is intended to be used as primary storage.

Using an SDS approach with an NVMe over Fabrics mesh storage stack, they aim to address issues with hyperconverged infrastructure. Accessing a drive utilizes the RDMA feature of NVMe over Fabrics, which results in very low latency and shifts the CPU load to the initiating system rather than the one holding the drive.

Leonovus | Chair: Michael Gaffney

Leonovus’ product is an advanced blockchain storage and compute solution with a marketplace for cloud applications. Leonovus has invested over twenty million dollars in the development of distributed compute and distributed storage technology. They have been granted several patents, with numerous patent claims and patents pending. Their unique software-defined storage solution has strong intellectual property protection.

Their software defined object storage technology is designed for enterprise on-prem, hybrid or public cloud users that have governance, risk management and compliance requirements. The software is designed to run on existing storage hardware.

Minio | CEO: Anand Babu Periasamy

Minio is an easy-to-deploy open-source object storage server that uses an Amazon S3 compatible API. They develop software for cloud-native and containerized applications to help businesses with the management of the exponential growth of unstructured data. They also support AWS-compatible lambda functions to perform useful actions like thumbnail generation, metadata extraction and virus scanning.

Their open-source object storage server is primarily designed to be used for cloud applications and DevOps teams. Application developers can containerize storage, apps and security simultaneously and use the same resources. The server enables applications to manage large amounts of unstructured data, and it lets cloud and SaaS application developers more quickly and easily adopt emerging cloud hosting providers like Digital Ocean, Packet and Hyper.sh. It has proven to be popular in the Docker, Mesos and Kubernetes communities because of its cloud-native architecture.

Nyriad | CEO: Matthew Simmons

The core of Nyriad’s platform is their “NSULATE” technology, which uses a GPU to perform storage processing. GPUs are specialized for massively parallel floating point calculations, and Nyriad uses that capability to generate parity calculations that would otherwise be impractical for a CPU or a RAID controller.

They claim that NSULATE can handle dozens of simultaneous device failures in real time while maintaining consistent I/O performance. Using Netlist’s NVvault non-volatile DIMMs to create a Linux storage platform, it can scale up to millions of IOPS and allow data to be sent directly to the GPU for storage processing. Because it bypasses the Linux kernel, it can offer improved performance.

NSULATE technology also allows for compute and storage to coexist on the same node. The idea is to enable storage nodes to be configured for computation in order to speed up I/O-related code, which can accelerate applications that typically hit a brick wall with storage IO bottlenecks.

ScaleFlux | CEO: Hao Zhong

ScaleFlux’s Converged Cloud Subsystem (CCS) is a tightly unified software and flash-based hardware subsystem that integrates easily and cost-effectively into Big Data scale-out servers. CCS collapses the traditional scale-up storage hierarchy that usually bottlenecks data movement and processing performance by enabling high-density commodity flash to be used as an extension to memory. Deploying CCS throughout the entire data center infrastructure is designed to provide a significant boost to application performance while reducing data center TCO.

StorageOS | CEO: Chris Brandon

StorageOS is a software-based distributed storage platform designed to provide persistent container storage. It’s available on commodity hardware, virtual machines or in the cloud. With the addition of a 40MB container, developers can build scalable, stateful containerized apps with fast, highly available persistent storage.

StorageOS offers simple and automated block storage to stateless containers, allowing databases and other applications that need enterprise class storage functionality to run without the normal complexity and high cost.

They aim to provide an enterprise class storage offering that is simpler, faster, easier, and cheaper than legacy IT storage. They also aim to provide automated storage provisioning to containers which can be instantiated and torn down many thousands of times a day.

So, how does the product work? It’s a very fast and easy process. It installs as a container under Linux and locates accessible storage, be it direct-attached, network-attached, cloud-attached, or on connected nodes. That storage is then aggregated into a virtual multi-node pool of block storage. Volumes are then carved out for accessing containers: thin provisioned, mounted, loaded up with a database and started.

Storj Labs | CSO: Shawn Wilkinson

Storj uses blockchain technology and peer-to-peer protocols to provide secure, private, and encrypted cloud storage. Basically, it’s an open-source decentralized cloud storage platform built on blockchain, and I looked at them in my previous article Blockchain and Enterprise Storage, where I dove into what exactly blockchain is, how it works, how it may be applied in the enterprise storage space, and how it’s already starting to be used in various global industries. Storj uses the spare storage capacity of its community members to store data that has been shredded and encrypted. From a blockchain perspective, Storj uses their own storage coin token to buy and sell space on the network.

For potential data farmers looking to share storage capacity, Storj verifies the integrity of their storage with a challenge that performs a remote audit. As a distributed storage system, it is highly available, with data sliced into multiple segments that are stored redundantly across at least five different systems. The distributed nature also helps accelerate data access, because data is retrieved from multiple sources simultaneously rather than just one.

Vexata | CEO: Zahid Hussain

Vexata’s active data infrastructure solution aims to improve performance at scale for I/O-intensive applications. The system presents a block or unstructured data I/O interface to enable applications to access and update large volumes of data at high throughput and low latency, and it can be deployed as a fully self-contained appliance or as a cloud-deployed solution. Based on their VX-OS software, their SSD systems can be deployed in both enterprise and cloud data center environments.

Their file storage system OS is well suited for business-critical enterprise data architectures, media and entertainment workflows, and high performance data analytics. VX-OS is a scalable, resilient file storage system that supports industry standard protocols (like NFSv3 and GPFS), while providing over 1M random file IOPS, 50GB/s read and 20GB/s write bandwidth, and up to 180TB of protected capacity. It also supports enterprise-class features such as file-based snapshots/clones and replication, as well as data-at-rest encryption without a huge performance penalty.

Wasabi | CEO: David Friend

Wasabi offers cloud-based object storage as a service. You can read more about object storage in my Primer on Object Storage article, but in a nutshell it refers to a data storage approach that stores information as individual objects in digital buckets, as opposed to storing files in a hierarchical or block fashion. They claim that their storage service is significantly faster and cheaper than competing products while offering the same levels of reliability: specifically, that their service can read and write data more than six times as fast as Amazon’s S3 while maintaining 100% compatibility with the Amazon S3 API, at roughly 1/5th the cost of S3, Microsoft Azure, and Google Cloud.

WekaIO | CEO: Liran Zvibel

WekaIO’s core product is WekaIO Matrix, a cloud-native scalable file system that provides all-flash storage performance with the simplicity of NAS. WekaIO Matrix offers dynamic scaling of resources based on application requirements. It is a distributed global-namespace file system that can scale to thousands of compute nodes and petabytes of storage, and it also provides integrated tiering to the cloud.

Their software deploys on industry-standard commodity servers. They have reference architectures for HPE Apollo and Dell EMC servers, with Supermicro and Lenovo in the pipeline. It runs on bare-metal servers, virtual machines or in containers. The software scales from 6 to 240 nodes with little to no latency impact. As a frame of reference, they note that a 30-node cluster can address up to 2PB of storage with up to 1.8M IOPS and 60GB/s of bandwidth. They also position themselves as a file-access cloud storage option that sidesteps the limitations of existing Amazon storage services.

NetApp Clustered Data ONTAP Command Line Reference Guide (CDOT CLI)

Other CLI Reference Guides:
Isilon CLI  |  EMC ECS CLI  |  VNX NAS CLI  |  ViPR Controller CLI  |  Data Domain CLI  |  Brocade FOS CLI  |  EMC XTremIO CLI

This is a comprehensive CLI command line reference guide for NetApp Clustered Data ONTAP. The commands are based on version 9.x. It includes a list of commands for Autobalance, Cluster, Event, Job, LUN, MetroCluster, Network, QoS, Security, SnapLock, SnapMirror, Statistics, Storage, System, Volume, and Vserver. For complete documentation and additional options for each of these commands, I recommend you visit the official NetApp ONTAP 9 documentation page.

Autobalance Commands
autobalance aggregate show-aggregate-state -instance Display the Auto Balance Aggregate state for an aggregate
autobalance aggregate show-unbalanced-volume-state Display the Auto Balance Aggregate state for a volume
autobalance aggregate config modify Modify the Auto Balance Aggregate feature configuration
autobalance aggregate config show Display the Auto Balance Aggregate feature configuration
autobalance volume rebalance show -vserver vs1 -volume repo_vol -storage-service gold Display Auto Balance Volume progress for an Infinite Volume
autobalance volume rebalance start Start Auto Balance Volume for an Infinite Volume
autobalance volume rebalance stop Stop Auto Balance Volume for an Infinite Volume
Cluster Commands
cluster contact-info modify Manage contact information for the cluster.
cluster contact-info show Display contact information for the cluster
cluster ha modify -configured true Manage high-availability configuration
cluster ha show Show high-availability configuration status for the cluster
cluster date modify -date “01/01/2011 01:00:00” Modify the current date and time for the nodes in the cluster
cluster date show Display the current date and time for the nodes in the cluster
cluster image cancel-update Cancel an update
cluster image pause-update Pause an update
cluster image resume-update Resume an update
cluster image show-update-history Display the update history
cluster image show-update-log Display the update transaction log
cluster image show-update-log-detail Display detailed information about nondisruptive update events
cluster image show-update-progress Display the update progress
cluster time-service ntp server create Add an NTP Server
cluster time-service ntp server delete Delete a NTP Server
cluster time-service ntp server modify Modify NTP Server Options
cluster time-service ntp server reset Reset NTP server list to a default selection
cluster time-service ntp server show Display a list of NTP Servers
cluster image update Manage an update
cluster image validate Validates the cluster’s update eligibility
cluster image package delete Remove a package from the cluster image package repository
cluster image package get Fetch a package file from a URL into the cluster image package repository
cluster image package show-repository Display information about packages available in the cluster image package repository
cluster kernel-service show Display cluster service state in the kernel
cluster identity modify Modify the cluster’s attributes
cluster identity show Display the cluster’s attributes including Name, Serial Number, Cluster UUID, Location and Contact
cluster log-forwarding create Create a log forwarding destination
cluster log-forwarding delete Delete a log forwarding destination
cluster log-forwarding modify Modify log forwarding destination settings
cluster log-forwarding show Display log forwarding destinations
cluster peer create -peer-addrs cluster2-d2,10.10.1.1 -username admin Create a new cluster peer relationship
cluster peer delete -cluster cluster1 Delete a cluster peer relationship
cluster peer modify Modify cluster peer relationships
cluster peer show -instance Show cluster peer relationships
cluster peer modify-local-name -name cluster1 -new-name cluster1A Modify the local name for a cluster peer
cluster peer ping Initiate intercluster connectivity test
cluster peer show {-instance} Display peer cluster information
cluster peer connection show Show current peering connections for a cluster
cluster peer health show Check peer cluster health
cluster peer offer cancel -cluster cluster2 Cancel the outstanding offer to authenticate with a peer cluster
cluster peer offer show Display outstanding offers to authenticate with a peer cluster
cluster quorum-service options modify Modify the settings for cluster quorum-service
cluster quorum-service options show Display the settings for cluster quorum-service
cluster ring show Display cluster node member’s replication rings
cluster statistics show Display cluster-wide statistics
cluster add-node {-node-ip x.x.x.x | -node-count x} Expand the cluster by discovering and adding new nodes
cluster create -clustername cluster1 -node-count 4 Create a cluster
cluster join -clusteripaddr 192.0.1.1 Join an existing cluster using the specified member’s IP address or by cluster name
cluster modify -node node8 -eligibility true Modify cluster node membership attributes
cluster ping-cluster -node node1 Ping remote cluster interfaces and perform RPC server check
cluster setup Setup wizard
cluster show Display cluster node members
cluster unjoin -node node4  {-force} Unjoin or remove a node from the cluster
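
As an example of how several of these commands fit together, here is a minimal intercluster peering sketch. It assumes two clusters named cluster1 and cluster2 with intercluster LIFs already configured, and the IP addresses are illustrative only; run the create on both clusters, then verify.

cluster1::> cluster peer create -peer-addrs 10.10.2.10 -username admin
cluster2::> cluster peer create -peer-addrs 10.10.1.10 -username admin
cluster1::> cluster peer show -instance
cluster1::> cluster peer health show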
Event Commands
event config modify Modify log configuration parameters
event config set-proxy-password Modify password for proxy server
event config show Display log configuration parameters
event catalog show Display event definitions
event filter copy Copy an event filter
event filter show -filter-name filter1 Display a specific event filter and its rules
event filter create -filter-name filter1 Create a new event filter.
event filter delete Delete existing event filters
event filter rename Rename an event filter
event filter show Display the list of existing event filters.
event filter test Test an event filter
event filter rule add Add a rule for an event filter
event filter rule delete Delete a rule for an event filter
event filter rule reorder Modify the index of a rule for an event filter
event log show Display latest log events
event notification create Create an event notification
event notification delete Delete event notifications
event notification modify Modify event notifications
event notification show Display event notifications
event notification destination create Create an event notification destination
event notification destination delete Delete existing event destinations
event notification destination modify Modify an event notification destination
event notification destination show Display event notification destinations
event notification history show Display latest events sent to destination
event status show -node node1 Display event status
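
As an example, routing critical events to an email address ties a destination, a filter, and a notification together. This is a minimal sketch with hypothetical names and an illustrative address; check your release's documentation for the supported severity values.

event notification destination create -name ops_email -email ops@example.com
event filter create -filter-name crit_filter
event filter rule add -filter-name crit_filter -type include -severity EMERGENCY,ALERT
event notification create -filter-name crit_filter -destinations ops_email
event notification show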
Job Commands
job schedule delete -name backup1 Delete a schedule
job schedule show {-type cron} Display a list of available schedules
job schedule show-jobs Display the list of jobs by schedule
job schedule cron create -name weekendcron -dayofweek “Saturday, Sunday” -hour 3 Create a cron schedule
job schedule cron delete -name dailycron Delete a cron schedule
job schedule cron modify -name dailycron -hour 7 -minute 20 Modify a cron schedule
job schedule cron show Show cron schedules
job schedule interval create Create a schedule that runs on an interval
job schedule interval delete -name daily Delete an interval schedule
job schedule interval show Show interval schedules
job delete -id 99 Delete a job
job pause -id 99 Pause a job
job resume -id 99 Resume a job
job show {-node node1} Display a list of jobs
job show-bynode Display a list of jobs by node
job show-cluster Display a list of cluster jobs
job show-completed Display a list of completed jobs
job stop -id 99 Stop a job
job unclaim Unclaim a cluster job
job watch-progress -vserver vs0 -id 99 -interval 3 Watch the progress of a job
job history show Display a history of jobs
job initstate show Display init state for job managers
job private delete -node node2 -id 99 Delete a job
job private pause -node node2 -id 99 Pause a job
job private resume -node node2 -id 99 Resume a job
job private show -node local Display a list of jobs
job private show-completed Display a list of completed jobs
job private stop -node node2 -id 99 Stop a job
job private watch-progress -node node1 -id 99 -interval 2 Watch the progress of a job
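
For example, to create a weekend maintenance schedule and confirm which jobs reference it (the schedule name here is hypothetical):

job schedule cron create -name weekend_3am -dayofweek "Saturday, Sunday" -hour 3 -minute 0
job schedule cron show
job schedule show-jobs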
LUN Commands
lun copy cancel -vserver vs1 -destination-path /vol/vol2/lun3 Cancel a LUN copy operation before the new LUN has been created
lun copy modify -vserver vs1 -destination-path /vol/vol2/lun3 Modify an ongoing LUN copy operation
lun copy pause -vserver vs1 -destination-path /vol/vol2/lun3 Pause an ongoing LUN copy operation
lun copy resume -vserver vs1 -destination-path /vol/vol2/lun3 Resume a paused LUN copy operation
lun copy show Display a list of LUNs currently being copied
lun copy start -vserver vs1 -destination-path /vol/vol2/lun2 -source-path /vol/vol1/vol3 Start copying a LUN from one volume to another within a cluster
lun create -vserver vs1 -path /vol/vol1/lun1 -size 100M -ostype linux Create a new LUN
lun delete -vserver vs1 -path /vol/vol1/lun1 Delete the LUN
lun maxsize -volume vol0 -ostype netware Display the maximum possible size of a LUN on a given volume or qtree.
lun modify -path /vol/vol1/lun1 -comment “new comment” Modify a LUN
lun move-in-volume -vserver vs1 -volume vol1 -lun lun1 -new-lun newlun1 Move a LUN within a volume
lun resize -vserver vs1 -path /vol/vol2/lun3 -size 100M -force Changes the size of the LUN to the input value size.
lun show -vserver vs0 -path /vol/vol1/lun1 -instance Display a list of LUNs
lun bind create -vserver vs1 -protocol-endpoint-path /vol/VV1/PE1 -vvol-path /vol/VV1/vol3 Bind a VVol LUN to a protocol endpoint
lun bind destroy -protocol-endpoint-path /vol/VV/pe9 -vvol-path /vol/VV4/dir -vserver vs9 Unbind a VVol LUN from a protocol endpoint
lun bind show -vserver vs1 Show the list of VVol bindings
lun igroup add -vserver vs1 -igroup ig1 -initiator iqn.2018-01.com.thesanguy.mvinitiator Add initiators to an initiator group
lun igroup bind -vserver vs1 -igroup ig1 -portset-name ps1 Bind an existing initiator group to a given portset
lun igroup create Create a new initiator group
lun igroup delete -vserver vs1 -igroup ig1 Delete an initiator group
lun igroup disable-aix-support Disables SAN AIX support on the cluster
lun igroup modify -vserver vs1 -igroup ig1 -ostype windows Modify an existing initiator group
lun igroup remove -vserver vs1 -igroup ig1 -initiator iqn.2018-01.com.thesanguy.mvinitiator Remove initiators from an initiator group
lun igroup rename -vserver vs1 -igroup ig1 -new-name ignew1 Rename an existing initiator group
lun igroup show Display a list of initiator groups
lun igroup unbind Unbind an existing initiator group from a portset
lun import create Create an import relationship
lun import delete -vserver vs1 -path /vol/vol3/lun3 Deletes the import relationship of the specified LUN or the specified foreign disk
lun import pause Pause the import for the specified LUN
lun import prepare-to-downgrade Prepares LUN import to be downgraded
lun import resume Resume the import for the specified LUN
lun import show Display a list of import relationships
lun import start -vserver vs1 -path /vol/vol3/lun3 Start the import for the specified LUN
lun import stop -vserver vs1 -path /vol/vol3/lun3 Stop the import for the specified LUN
lun import throttle -vserver vs1 -path /vol/vol3/lun3 -max-throughput-limit 3M Modify the max throughput limit for the specified import relationship
lun import verify start -vserver vs1 -path /vol/vol3/lun3 Start the verification of the foreign disk and LUN data
lun import verify stop -vserver vs1 -path /vol/vol3/lun3 Stop the verify for the specified LUN
lun mapping add-reporting-nodes -vserver vs1 -path /vol/vol3/lun4 -igroup ig1 Add Reporting Nodes
lun mapping create -vserver vs1 -path /vol/vol8/lun9 -igroup ig9 -lun-id 11 Map a LUN to an initiator group
lun mapping delete -vserver vs1 -path /vol/vol5/lun8 -igroup ig3 Unmap a LUN from an initiator group
lun mapping remove-reporting-nodes -vserver vs1 -path /vol/vol4/lun7 -igroup ig5 Remove Reporting Nodes
lun mapping show Lists the mappings between LUNs and initiator groups.
lun move cancel -vserver vs1 -destination-path /vol/vol3/lun9 Cancel a LUN move operation before the new LUN has been created
lun move modify -vserver vs1 -destination-path /vol/vol6/lun8 Modify an ongoing LUN move operation
lun move pause -vserver vs1 -destination-path /vol/vol6/lun8 Pause an ongoing LUN move operation
lun move resume -vserver vs1 -destination-path /vol/vol6/lun8 Resume a paused LUN move operation
lun move show Display a list of LUNs currently being moved
lun move start -vserver vs1 -destination-path /vol/vol3/lun4 -source-path /vol/vol5/lun6 Start moving a LUN from one volume to another within a Vserver
lun persistent-reservation clear -vserver vs1 -path /vol/vol3/lun5 Clear the SCSI-3 persistent reservation information for a given LUN
lun persistent-reservation show -vserver vs1 /vol/vol8/lun9 Display the current reservation information for a given LUN
lun portset add -vserver vs1 -portset ps1 -port-name lif1 Add iSCSI/FCP LIFs to a portset
lun portset create -vserver vs1 -portset ps1 -protocol mixed Creates a new portset
lun portset delete Delete the portset
lun portset remove Remove iSCSI/FCP LIFs from a portset
lun portset show -protocol iscsi Displays a list of portsets
lun transition show -vserver vs1 Display the status of LUN transition processing
lun transition start -vserver vs1 -volume thesanguyvol_1 Start LUN Transition Processing
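
Putting several of these together, a basic iSCSI provisioning workflow creates the LUN, creates an initiator group, adds the host's initiator, and maps the LUN. A minimal sketch, assuming vserver vs1, an existing volume vol1, and a hypothetical initiator IQN:

lun create -vserver vs1 -path /vol/vol1/lun1 -size 100G -ostype linux
lun igroup create -vserver vs1 -igroup linux_hosts -protocol iscsi -ostype linux
lun igroup add -vserver vs1 -igroup linux_hosts -initiator iqn.1994-05.com.redhat:host1
lun mapping create -vserver vs1 -path /vol/vol1/lun1 -igroup linux_hosts
lun mapping show -vserver vs1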
Metrocluster Commands
metrocluster check disable-periodic-check Disable Periodic Check
metrocluster check enable-periodic-check Enable Periodic Check
metrocluster check run Check the MetroCluster setup
metrocluster check show Show the results of the last instance of MetroCluster check
metrocluster check cluster show Show results of MetroCluster check for the cluster components
metrocluster check aggregate show Show results of MetroCluster check for aggregates
metrocluster check config-replication show Display MetroCluster config-replication status information
metrocluster check config-replication show-aggregate-eligibility Displays the MetroCluster configuration replication aggregate eligibility.
metrocluster check config-replication show-capture-status Display MetroCluster capture status information
metrocluster check lif repair-placement Repair LIF placement for the sync-source Vserver LIFs in the destination cluster
metrocluster check lif show Show results of MetroCluster check results for the data LIFs
metrocluster check node show Show results of MetroCluster check for nodes
metrocluster config-replication cluster-storage-configuration modify Modify MetroCluster storage configuration information
metrocluster config-replication cluster-storage-configuration show Display MetroCluster storage configuration information
metrocluster config-replication resync-status show Display MetroCluster Configuration Resynchronization Status
metrocluster configure Configure MetroCluster and start DR mirroring for the node and its DR group
metrocluster heal -phase aggregates Heal DR data aggregates and DR root aggregates
metrocluster modify -node-name clusA-01 -node-object-limit on Modify MetroCluster configuration options
metrocluster show Display MetroCluster configuration information
metrocluster switchback Switch back storage and client access
metrocluster switchover -forced-on-disaster true Switch over storage and client access
metrocluster interconnect adapter modify Modify MetroCluster interconnect adapter settings
metrocluster interconnect adapter show Display MetroCluster interconnect adapter information
metrocluster interconnect mirror show Display MetroCluster interconnect mirror information
metrocluster interconnect mirror multipath show Display multipath information
metrocluster node show Display MetroCluster node configuration information
metrocluster operation show Display details of the last MetroCluster operation
metrocluster operation history show Display details of all MetroCluster operations
metrocluster vserver recover-from-partial-switchback Recover vservers from partial switchback
metrocluster vserver recover-from-partial-switchover Recover vservers from partial switchover
metrocluster vserver resync Resynchronize Vserver with its partner Vserver
metrocluster vserver show Display MetroCluster Vserver relationships
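
Before and after any planned switchover or maintenance, it's worth validating the configuration. A minimal health-check sequence using only the commands above:

metrocluster check run
metrocluster check show
metrocluster check aggregate show
metrocluster operation show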
Network Commands
network ping -node thesanguy -destination 10.1.1.22 Ping
network ping6 -node node1 -destination ipv6.thesanguy.com Ping an IPv6 address
network test-path -source-node node1 -destination-cluster cluster1 -destination-node node2 Test path performance between two nodes
network traceroute -node node1 -destination 10.1.1.1 -maxttl 8 Traceroute
network traceroute6 -node node1 -vserver vs1 Traceroute an IPv6 address
network cloud routing-table create -route-table-id eni-999 Create a new external routing table
network cloud routing-table delete -route-table-id eni-999 Delete an existing external routing table
network arp create -vserver vs1 -remotehost 10.1.1.1 -mac 80:45:32:14:39:f1 Create static ARP entry
network arp delete -vserver vs1 -remotehost 10.1.1.1 Delete static ARP entry
network arp show -vserver vs1 Display static ARP entries
network arp active-entry delete Delete active ARP entry
network arp active-entry show -vserver vs1 Display active ARP entries
network connections active show Show the active connections in this cluster
network connections active show-clients Show a count of the active connections by client
network connections active show-lifs Show a count of the active connections by logical interface
network connections active show-protocols Show a count of the active connections by protocol
network connections active show-services Show a count of the active connections by service
network connections listening show Show the listening connections in this cluster
network fcp adapter modify -node node1 -adapter 0d -speed 2 Modify the fcp adapter settings
network fcp adapter show -instance -node thesanguy1 -adapter 0b Display FCP adapters
network device-discovery show Display device discovery information
network interface create Create a logical interface
network interface delete Delete a logical interface
network interface migrate Migrate a logical interface to a different port
network interface migrate-all -node local Migrate all data logical interfaces away from the specified node
network interface modify -vserver vs0 -lif lif1 -netmask 255.255.255.0 Modify a logical interface
network interface rename -vserver vs0 -lif clusterlif0 -newname clusterlif1 Rename a logical interface
network interface revert -vserver * -lif * Revert a logical interface to its home port
network interface show Display logical interfaces
network interface start-cluster-check Start the cluster check function
network interface capacity show Display the number of IP data LIFs capable of being configured on the cluster.
network interface capacity details show Display details about the IP data LIFs capable of being configured on each node.
network interface check failover show Discover if any LIFs might become inaccessible during a node outage, due to over-provisioning
network interface dns-lb-stats show Show the DNS load-balancer stats for this node
network interface lif-weights show Show the load-balancer LIF weights
network interface failover-groups add-targets Add failover targets to a failover group
network interface failover-groups create Create a new failover group
network interface failover-groups delete Delete a failover group
network interface failover-groups modify Modify a failover group
network interface failover-groups remove-targets Remove failover targets from a failover group
network interface failover-groups rename Rename a logical interface failover Group
network interface failover-groups show Display logical interface failover groups
network ipspace create -name ipspace1 Create a new IPspace
network ipspace delete -ipspace ipspace1 Delete an IPspace
network ipspace rename -ipspace ipsA -new-name ipsB Rename an IPspace
network ipspace show Display IPspace information
network ndp neighbor create Create a static NDP neighbor entry
network ndp neighbor delete -vserver vs_1 -neighbor 10:10::10 Delete a static NDP neighbor entry
network ndp neighbor show Display static NDP neighbor entries
network ndp neighbor active-entry delete Delete active neighbor entry
network ndp neighbor active-entry show Display active neighbor entries
network ndp default-router delete-all -ipspace ipspace1 Delete default routers on a given IPspace
network ndp default-router show -port a0d -node local Display default routers
network ndp prefix delete-all -ipspace ips1 Delete IPv6 prefixes on a given IPspace
network ndp prefix show Display IPv6 prefixes
network options cluster-health-notifications modify Modify cluster health notification options
network options cluster-health-notifications show {-node node1} Display cluster health notification options
network options ipv6 modify Modify IPv6 options
network options ipv6 show Display IPv6 options
network options load-balancing modify Modify load balancing algorithm
network options port-health-monitor disable-monitors Disable one or more port health monitors
network options port-health-monitor enable-monitors Enable one or more port health monitors
network options port-health-monitor modify Modify port health monitors configuration
network options port-health-monitor show Display port health monitors configuration
network options send-soa modify Modify Send SOA settings
network options send-soa show Display Send SOA settings
network options switchless-cluster modify Modify switchless cluster network options
network port delete Delete a network port
network port modify Modify network port attributes
network port show Display network port attributes
network port show-address-filter-info Print the port’s address filter information
network port broadcast-domain add-ports Add ports to a layer 2 broadcast domain
network port broadcast-domain create Create a new layer 2 broadcast domain
network port broadcast-domain delete Delete a layer 2 broadcast domain
network port broadcast-domain merge Merges two layer 2 broadcast domains
network port broadcast-domain modify Modify a layer 2 broadcast domain
network port broadcast-domain remove-ports Remove ports from a layer 2 broadcast domain
network port broadcast-domain rename Rename a layer 2 broadcast domain
network port broadcast-domain show Display layer 2 broadcast domain information
network port broadcast-domain split Splits a layer 2 broadcast domain into two layer 2 broadcast domains.
network port ifgrp add-port Add a port to an interface group
network port ifgrp create Create a port interface group
network port ifgrp delete Destroy a port interface group
network port ifgrp remove-port Remove a port from an interface group
network port ifgrp show Display port interface groups
network port vlan create -node node1 -vlan-name f1d-90 Create a virtual LAN
network port vlan delete -node node1 -vlan-name f1d-90 Delete a virtual LAN
network port vlan show Display virtual LANs
network qos-marking modify Modify the QoS marking values
network qos-marking show -ipspace thesanguy Display the QoS marking values
network route create -vserver vserver0 -destination 10.0.0.0/0 -gateway 10.1.9.100 Create a static route
network route delete -vserver vserver0 -destination 0.0.0.0/0 Delete a static route
network route show Display static routes
network route show-lifs Show the Logical Interfaces for each route entry
network route active-entry show Display active routes
network subnet add-ranges -ipspace thesanguy -subnet-name s1 -ip-ranges “10.1.1.10-10.1.1.50” Add new address ranges to a subnet
network subnet create Create a new layer 3 subnet
network subnet delete -ipspace thesanguy -subnet-name sub3 Delete an existing subnet object
network subnet modify Modify a layer 3 subnet
network subnet remove-ranges Remove address ranges from a subnet
network subnet rename -ipspace thesanguy -subnet sub9 -new-name sub5 Rename a layer 3 subnet
network subnet show Display subnet information
network test-link run-test Test link bandwidth
network test-link show Display test results
network test-link start-server -node node1 Start server for bandwidth test
network test-link stop-server -node node1 Stop server for bandwidth test
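
As an example, creating a new NFS data LIF and a default route for it combines several of these commands. This is a sketch with illustrative names and addressing, assuming vserver vs1, node node1 and port e0c; note that newer ONTAP releases use service policies in place of the -role and -data-protocol parameters.

network interface create -vserver vs1 -lif nfs_lif1 -role data -data-protocol nfs -home-node node1 -home-port e0c -address 192.0.2.50 -netmask 255.255.255.0
network route create -vserver vs1 -destination 0.0.0.0/0 -gateway 192.0.2.1
network interface show -vserver vs1 -lif nfs_lif1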
QoS Commands
qos settings cache modify -default true -cache-setting random_read_write-random_write Cache QoS settings
qos settings cache show Display list of cache policies
qos statistics latency show -iterations 50 -rows 3 Display latency breakdown data per QoS policy group
qos statistics characteristics show -iterations 50 -rows 4 Display QoS policy group characterization
qos statistics resource cpu show -node sanguyA -iterations 50 -rows 3 Display CPU resource utilization data per QoS policy group
qos statistics resource disk show Display disk resource utilization data per QoS policy group
qos statistics performance show -iterations 50 -rows 3 Display system performance data per QoS policy group
qos statistics volume latency show -iterations 50 -rows 3 Display latency breakdown data per volume
qos statistics volume resource cpu show -node thesanguy -iterations 50 -rows 3 Display CPU resource utilization data per volume
qos statistics volume resource disk show Display disk resource utilization data per volume
qos statistics volume performance show -iterations 50 -rows 3 Display system performance data per volume
qos statistics volume characteristics show Display volume characteristics
qos statistics workload latency show -iterations 50 -rows 5 Display latency breakdown data per QoS workload
qos statistics workload characteristics show -iterations 50 -rows 4 Display QoS workload characterization
qos statistics workload performance show -iterations 50 -rows 3 Display system performance data per QoS workload
qos statistics workload resource cpu show -node thesanguy -iterations 80 -rows 4 Display CPU resource utilization data per QoS workload
qos statistics workload resource disk show Display disk resource utilization data per QoS workload
qos workload show -class user-defined Display a list of workloads
qos policy-group create p1 -vserver vserverA Create a policy group
qos policy-group delete p1 {-force} Delete a policy group
qos policy-group modify p1 -max-throughput 99IOPS Modify a policy group
qos policy-group rename -policy-group psanguy -new-name psanguy_new Rename a policy group
qos policy-group show Display a list of policy groups
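
For example, to cap a volume's throughput, create a policy group with a limit and attach it to the volume (the volume commands are covered later in this guide). A sketch with hypothetical names, assuming vserver vs1 and volume vol1:

qos policy-group create pg_capped -vserver vs1 -max-throughput 5000iops
volume modify -vserver vs1 -volume vol1 -qos-policy-group pg_capped
qos statistics performance show -iterations 10 -rows 3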
Security Commands
security certificate create -vserver vs1 -common-name www.thesanguy.com -type server Create and Install a Self-Signed Digital Certificate
security certificate delete Delete an Installed Digital Certificate
security certificate generate-csr Generate a Digital Certificate Signing Request
security certificate install Install a Digital Certificate
security certificate prepare-to-downgrade Restore Certificate Management to releases earlier than Data ONTAP 8.3.1.
security certificate show Display Installed Digital Certificates
security certificate sign Sign a Digital Certificate using Self-Signed Root CA
security certificate ca-issued revoke Revoke a Digital Certificate
security certificate ca-issued show Display CA-Issued Digital Certificates
security audit modify Set administrative audit logging settings
security audit show Show administrative audit logging settings
security audit log show Display audit entries merged from multiple nodes in the cluster
security config modify Modify Security Configuration Options
security config show Display Security Configuration Options
security config status show Display Security Configuration Status
security key-manager add -address 10.10.1.250 Add a key management server
security key-manager create-key Create a new authentication key
security key-manager delete -address 10.10.1.250 Delete a key management server
security key-manager query Displays the key IDs stored in a key management server and whether restored
security key-manager restore Restore the authentication key and key ID pairs from the key management servers.
security key-manager setup Configure key manager connectivity
security key-manager show Display key management servers
security key-manager certificate update -type server -address 10.10.10.23 Update key manager SSL certificates
security key-manager backup show Show salt and wrapped keys as a hex dump
security key-manager key show Display Encryption Key IDs
security login create Add a login method
security login delete Delete a login method
security login expire-password Expire user’s password
security login lock Lock a user account with password authentication method
security login modify Modify a login method
security login password -username admin -vserver sanguy1 Modify a password for a user
security login password-prepare-to-downgrade Reset password features introduced in the Data ONTAP version
security login show Show user login methods
security login unlock Unlock a user account with password authentication method
security login whoami Show the current user and role of this session
security login domain-tunnel create Add authentication tunnel Vserver for administrative Vserver
security login domain-tunnel delete Delete authentication tunnel Vserver for administrative Vserver
security login domain-tunnel modify -vserver vs322 Modify authentication tunnel Vserver for administrative Vserver
security login domain-tunnel show Show authentication tunnel Vserver for administrative Vserver
security login banner modify Modify the login banner message
security login banner show Display the login banner message
security login ns-switch group-authentication commands The group-authentication directory
security login ns-switch group-authentication prepare-to-downgrade Remove Ns-switch Groups for Downgrade
security login motd modify Modify the message of the day
security login motd show Display the message of the day
security login publickey create Add a new public key
security login publickey delete Delete a public key
security login publickey load-from-uri Load one or more public keys from a URI
security login publickey modify Modify a public key
security login publickey prepare-to-downgrade Restore publickey features compatible with releases earlier than ONTAP 9.0.
security login publickey show Display public keys
security login role create Add an access control role
security login role delete Delete an access control role
security login role modify Modify an access control role
security login role prepare-to-downgrade Update role configurations so that they are compatible with earlier releases of Data ONTAP
security login role show Show access control roles
security login role show-ontapi Display the mapping between Data ONTAP APIs and CLI commands
security login role config modify Modify local user account restrictions
security login role config reset Reset RBAC characteristics supported on releases later than Data ONTAP 8.1.2
security login role config show Show local user account restrictions
security protocol modify -application rsh -enabled true Modify application configuration options
security protocol show Show application configuration options
security snmpusers Show SNMP users
security session kill-cli -node node1 -session-id 99 Kill an active CLI session
security session show -node node1 Show active CLI & ONTAPI sessions
security session limit create -interface ontapi -category application -max-active-limit 5 Create default session limit
security session limit delete -interface cli -category * Delete default session limit
security session limit modify -interface cli -category location -max-active-limit 4 Modify default session limit
security session limit show Show default session limits
security session limit application create Create per-application session limit
security session limit application delete Delete per-application session limit
security session limit application modify Modify per-application session limit
security session limit application show Show per-application session limits
security session limit location create -interface cli -location 10.1.1.1 -max-active-limit 1 Create per-location session limit
security session limit location delete Delete per-location session limit
security session limit location modify Modify per-location session limit
security session limit location show Show per-location session limits
security session limit request create Create per-request session limit
security session limit request delete Delete per-request session limit
security session limit request modify Modify per-request session limit
security session limit request show Show per-request session limits
security session limit user create Create per-user session limit
security session limit user delete Delete per-user session limit
security session limit user modify Modify per-user session limit
security session limit user show Show per-user session limits
security session limit vserver create Create per-vserver session limit
security session limit vserver delete Delete per-vserver session limit
security session limit vserver modify Modify per-vserver session limit
security session limit vserver show Show per-vserver session limits
security session request-statistics show-by-application Show session request statistics by application
security session request-statistics show-by-location Show session request statistics by location
security session request-statistics show-by-request Show session request statistics by request name
security session request-statistics show-by-user Show session request statistics by username
security session request-statistics show-by-vserver Show session request statistics by Vserver
security ssh add Add SSH configuration options
security ssh modify Modify SSH configuration options
security ssh prepare-to-downgrade Restore SSH configuration to releases earlier than Data ONTAP 9.0.0.
security ssh remove Remove SSH configuration options
security ssh show Display SSH configuration options
security ssl modify Modify the SSL configuration for HTTP servers
security ssl show Display the SSL configuration for HTTP servers
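
As an example, creating a new SSH administrative account combines a role with a login method. A minimal sketch with a hypothetical user name; note that on releases prior to ONTAP 9 the parameter is -username rather than -user-or-group-name.

security login create -user-or-group-name admin2 -application ssh -authentication-method password -role admin
security login show -user-or-group-name admin2
security login password -username admin2 -vserver cluster1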
SnapLock Commands
snaplock log create Create audit log configuration for a Vserver.
snaplock log delete Delete audit log configuration for a Vserver.
snaplock log modify Modify audit log configuration for a Vserver.
snaplock log show Display audit log configuration.
snaplock log file archive Archive Active Log Files in Log Volume
snaplock log file show Display audit log file information.
snaplock compliance-clock initialize Initializes the node ComplianceClock
snaplock compliance-clock show Displays the node ComplianceClock
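
Note that the ComplianceClock must be initialized on a node before SnapLock volumes can be used there, and initialization is a one-time, irreversible action. For example:

snaplock compliance-clock initialize -node node1
snaplock compliance-clock show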
SnapMirror Commands
snapmirror config-replication commands The config-replication directory
snapmirror config-replication cluster-storage-configuration show Display SnapMirror storage configuration information
snapmirror config-replication status show SnapMirror configuration replication status information
snapmirror config-replication status show-aggregate-eligibility Display the SnapMirror configuration replication aggregate eligibility
snapmirror config-replication status show-communication Display SnapMirror configuration replication communication status information
snapmirror abort Abort an active transfer
snapmirror break Make SnapMirror destination writable
snapmirror create Create a new SnapMirror relationship
snapmirror delete Delete a SnapMirror relationship
snapmirror initialize Start a baseline transfer
snapmirror initialize-ls-set Start a baseline load-sharing set transfer
snapmirror list-destinations Display a list of destinations for SnapMirror sources
snapmirror modify Modify a SnapMirror relationship
snapmirror promote Promote the destination to read-write
snapmirror quiesce Disable future transfers
snapmirror release Remove source information for a SnapMirror relationship
snapmirror restore Restore a Snapshot copy from a source volume to a destination volume
snapmirror resume Enable future transfers
snapmirror resync Start a resynchronize operation
snapmirror set-options Display/Set SnapMirror options
snapmirror show Display a list of SnapMirror relationships
snapmirror show-history Displays history of SnapMirror operations.
snapmirror update Start an incremental transfer
snapmirror update-ls-set Start an incremental load-sharing set transfer
snapmirror policy add-rule Add a new rule to SnapMirror policy
snapmirror policy create Create a new SnapMirror policy
snapmirror policy delete Delete a SnapMirror policy
snapmirror policy modify Modify a SnapMirror policy
snapmirror policy modify-rule Modify an existing rule in SnapMirror policy
snapmirror policy remove-rule Remove a rule from SnapMirror policy
snapmirror policy show Show SnapMirror policies
snapmirror snapshot-owner create Add an owner to preserve a Snapshot copy for a SnapMirror mirror-to-vault cascade configuration
snapmirror snapshot-owner delete Delete an owner used to preserve a Snapshot copy for a SnapMirror mirror-to-vault cascade configuration
snapmirror snapshot-owner show Display Snapshot Copies with Owners
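
Putting the core commands together, a basic volume DR relationship is created and baselined from the destination cluster. A minimal sketch, assuming the clusters and vservers are already peered, a source volume vs1:vol1, and a pre-created data-protection destination volume vs2:vol1_dst; the policy and schedule names here are illustrative.

snapmirror create -source-path vs1:vol1 -destination-path vs2:vol1_dst -type XDP -policy MirrorAllSnapshots -schedule daily
snapmirror initialize -destination-path vs2:vol1_dst
snapmirror show -destination-path vs2:vol1_dst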
Statistics-v1 Commands
statistics-v1 nfs show-mount -node node1 Display mount statistics
statistics-v1 nfs show-statusmon Display status monitor statistics
statistics-v1 nfs show-v3 Display NFSv3 statistics
statistics-v1 nfs show-v4 -node node1 Display NFSv4 statistics
statistics-v1 protocol-request-size show -stat-type nfs3_* -node sanguy_n2 Display size statistics for CIFS and NFS protocol read and write requests
Statistics Commands
statistics disk show Disk throughput and latency metrics
statistics nfs show-mount Display mount statistics
statistics nfs show-statusmon Display status monitor statistics
statistics nfs show-v3 Display NFSv3 statistics
statistics nfs show-v4 Display NFSv4 statistics
statistics start -object system -counter avg_processor_busy|cpu_busy -sample-id smpl_1 Start data collection for specific counters in a sample
statistics show-periodic Continuously display current performance data at regular interval
statistics start -object system -sample-id smpl_1 Start data collection for a sample
statistics stop -sample-id smpl_1 Stop data collection for a sample
statistics aggregate show Aggregate throughput and latency metrics
statistics cache flash-pool show Flash pool throughput metrics
statistics catalog counter show -object processor Display the list of counters in an object
statistics catalog instance show Display the list of instances associated with an object
statistics lif show Logical network interface throughput and latency metrics
statistics lun show LUN throughput and latency metrics
statistics node show System utilization metrics for each node in the cluster
statistics oncrpc show-rpc-calls Display ONC RPC Call Statistics
statistics port fcp show FCP port interface throughput and latency metrics
statistics preset delete Delete an existing Performance Preset
statistics preset import Import Performance Preset configuration from source URI
statistics preset modify Modify an existing Performance Preset
statistics preset show Display information about Performance Presets
statistics preset detail show Display information about Performance Preset Details
statistics samples delete Delete statistics samples
statistics samples show Display statistics samples
statistics settings modify Modify settings for the statistics commands
statistics settings show Display settings for the statistics commands
statistics system show System utilization metrics for the cluster
statistics top client show Most active clients
statistics top file show Most actively accessed files
statistics volume show Volume throughput and latency metrics
statistics vserver show Vserver throughput and latency metrics
statistics workload show QoS workload throughput and latency metrics
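
The sample-based workflow is the most common pattern here: start a collection, let it run for a while, display it, then stop and clean up. For example:

statistics start -object system -sample-id smpl_1
statistics show -sample-id smpl_1
statistics stop -sample-id smpl_1
statistics samples delete -sample-id smpl_1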
Storage Commands
storage automated-working-set-analyzer show Display running instances
storage automated-working-set-analyzer start Command to start Automated Working Set Analyzer on node or aggregate
storage automated-working-set-analyzer stop -node vsim21 -aggregate aggr233 Command to stop Automated Working Set Analyzer on node or aggregate
storage automated-working-set-analyzer volume show Displays the Automated Working Set Analyzer volume table
storage aggregate add-disks -aggregate aggr0 -diskcount 6 -raidgroup rg1 Add disks to an aggregate
storage aggregate create -aggregate aggr0 -mirror Create an aggregate
storage aggregate delete -aggregate aggr1 Delete an aggregate
storage aggregate mirror Mirror an existing aggregate
storage aggregate modify -aggregate aggr0 -raidtype raid_dp Modify aggregate attributes
storage aggregate offline Offline an aggregate
storage aggregate online -aggregate aggr1 Online an aggregate
storage aggregate remove-stale-record Remove a stale aggregate record
storage aggregate rename -aggregate aggr0 -newname thesanguy-aggr Rename an aggregate
storage aggregate restrict -aggregate aggr2 -unmount-volumes true Restrict an aggregate
storage aggregate scrub Aggregate parity scrubbing
storage aggregate show Display a list of aggregates
storage aggregate show-efficiency Display aggregate storage efficiency details
storage aggregate show-resync-status Display aggregate resynchronization status
storage aggregate show-scrub-status Display aggregate scrubbing status
storage aggregate show-space Display details of space utilization within an aggregate.
storage aggregate show-spare-disks -owner-name thesanguy_node1 Display spare disks
storage aggregate show-status -aggregate node1_flashpool_0 Display aggregate configuration
storage aggregate verify Verify an aggregate
storage aggregate inode-upgrade resume -aggregate aggr1 Resume suspended inode upgrade
storage aggregate inode-upgrade show Display inode upgrade progress
storage aggregate resynchronization modify -aggregate aggr1 -resync-priority high Modify aggregate resynchronization priorities
storage aggregate resynchronization options show Display node specific aggregate resynchronization options
storage aggregate plex delete -aggregate aggr1 -plex plex1 Delete a plex
storage aggregate plex offline Offline a plex
storage aggregate plex online -aggregate aggr1 -plex plex1 Online a plex
storage aggregate plex show Show plex details
storage aggregate reallocation quiesce Quiesce reallocate job on aggregate
storage aggregate reallocation restart Restart reallocate job on aggregate
storage aggregate reallocation schedule Modify schedule of reallocate job on aggregate
storage aggregate reallocation show Show reallocate job status for improving free space layout
storage aggregate reallocation start Start reallocate job on aggregate
storage aggregate reallocation stop Stop reallocate job on aggregate
storage aggregate relocation show Display relocation status of an aggregate
storage aggregate relocation start Relocate aggregates to the specified destination
storage array modify -name hardware1 -model FastTier Make changes to an array’s profile.
storage array remove Remove a storage array record from the array profile database.
storage array rename -name NetAppClus1 -new-name THESANGUY_Array1 Change the name of a storage array in the array profile database.
storage array show Display information about SAN-attached storage arrays.
storage array config show Display connectivity to back-end storage arrays.
storage array disk paths show Display a list of LUNs on the given array
storage array port modify Make changes to a target port record.
storage array port remove Remove a port record from an array profile.
storage array port show Display information about a storage array’s target ports.
storage bridge add -address 10.1.1.33 Add a bridge for monitoring
storage bridge modify Modify a bridge’s configuration information
storage bridge refresh Refresh storage bridge info
storage bridge remove Remove a bridge from monitoring
storage bridge show Display bridge information
storage encryption disk destroy 1.8.4 Cryptographically destroy a self-encrypting disk
storage encryption disk modify Modify self-encrypting disk parameters
storage encryption disk revert-to-original-state Revert a self-encrypting disk to its original, as-manufactured state
storage encryption disk sanitize 1.8.4 Cryptographically sanitize a self-encrypting disk
storage encryption disk show Display self-encrypting disk attributes
storage encryption disk show-status Display status of disk encryption operation
storage errors show Display storage configuration errors.
storage disk assign -all -node node1 Assign ownership of a disk to a system
storage disk fail -disk 2.3.36 -i true Fail the file system disk
storage disk refresh-ownership Refresh the disk ownership information on a node
storage disk remove -disk 2.3.36 Remove a spare disk
storage disk remove-reservation Removes reservation from an array LUN marked as foreign.
storage disk removeowner -disk 3.8.15 Remove disk ownership
storage disk replace -disk 2.3.36 -replacement 3.2.24 -action start Initiate or stop replacing a file-system disk
storage disk set-foreign-lun -disk EMC-2.3 -is-foreign-lun true Sets or Unsets an array LUN as foreign
storage disk set-led Identify disks by turning on their LEDs
storage disk show Display a list of disk drives and array LUNs
storage disk unfail -disk 1.5.11 -spare Unfail a broken disk
storage disk zerospares Zero non-zeroed spare disks
storage disk error show Display disk component and array LUN configuration errors.
storage disk firmware revert Revert disk firmware
storage disk firmware show-update-status Display disk firmware update status.
storage disk firmware update Update disk firmware
storage disk show -fields firmware-revision Show disk firmware
storage disk option modify Modify disk options
storage disk option show Display a list of disk options
storage failover giveback -fromnode node1 Return failed-over storage to its home node
storage failover modify -node node0 -enabled true Modify storage failover attributes
storage failover show Display storage failover status
storage failover show-giveback Display giveback status
storage failover show-takeover Display takeover status
storage failover takeover -bynode node0 -option immediate Take over the storage of a node’s partner
storage failover hwassist show Display hwassist status
storage failover hwassist test Test the hwassist functionality
storage failover hwassist stats clear -node cluster1-02 Clear the hwassist statistics
storage failover hwassist stats show -node ha1 Display hwassist statistics
storage failover internal-options show Display the internal options for storage failover
storage failover mailbox-disk show Display information about storage failover mailbox disks
storage failover progress-table show Display status information about storage failover operations
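For planned maintenance, takeover and giveback are typically run as a pair with a status check between each step. A minimal sketch, assuming node0 is taking over its HA partner's storage:

storage failover show
storage failover takeover -bynode node0
storage failover show-takeover
storage failover giveback -fromnode node0
storage failover show-giveback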
storage firmware download Download disk, ACP processor and shelf firmware
storage iscsi-initiator add-target Add an iSCSI target
storage iscsi-initiator connect Connect to an iSCSI target
storage iscsi-initiator disconnect Disconnect from an iSCSI target
storage iscsi-initiator remove-target Remove the iSCSI targets
storage iscsi-initiator show Display the iSCSI targets
storage load balance Balance storage I/O across controller’s initiator ports
storage load show Display I/O statistics to array LUNs, grouped by initiator port.
storage path quiesce Quiesce I/O on a path to array
storage path resume Resume I/O on a path to array
storage path show Display a list of paths to attached arrays.
storage path show-by-initiator Display a list of paths to attached arrays from the initiator’s perspective
storage pool add -storage-pool SP1 -disk-list 2.1.13 -simulate Add disks to a storage pool
storage pool create -storage-pool SP1 -disk-count 6 Create a new storage pool
storage pool delete Delete an existing storage pool
storage pool reassign Reassign capacity from one node to another node in storage pool
storage pool show Display details of storage pools
storage pool show-aggregate -storage-pool SP1 -instance Display aggregates provisioned from storage pools
storage pool show-available-capacity Display available capacity of storage pools
storage pool show-disks Display disks in storage pools
storage port disable -node thesanguy_node1a -port 0b Disable a storage port
storage port enable Enable a storage port
storage port rescan -node nodeB -port 0b Rescan a storage port
storage port reset Reset a storage port
storage port reset-device -node node1 -port 1a -loop-id 10 Reset a device behind a storage port
storage port show Show storage port information
storage raid-options modify Modify a RAID option
storage raid-options show Display RAID options
storage shelf show Display a list of storage shelves
storage shelf acp configure Configure alternate control path (ACP)
storage shelf acp show Show connectivity information
storage shelf acp module show Show modules connected to the cluster
storage shelf firmware show-update-status Display the Shelf Firmware Update (SFU) Status.
storage shelf firmware update Update Shelf Firmware
storage shelf location-led modify Modify the state of the shelf Location LED
storage shelf location-led show Display the Location LED status
storage switch add -address 10.1.1.4 -snmp-community public Add a switch for monitoring
storage switch modify Modify information about a switch’s configuration
storage switch refresh Refresh storage switch info
storage switch remove Remove a switch from monitoring
storage switch show Display switch information
storage tape offline -node cluster1-01 -name thesanguy99 Take a tape drive offline
storage tape online Bring a tape drive online
storage tape position -node thesanguy1-01 -name tape1 -operation rewind Modify a tape drive cartridge position
storage tape reset Reset a tape drive
storage tape show Display information about tape drives and media changers
storage tape show-errors Display tape drive errors
storage tape show-media-changer Display information about media changers
storage tape show-supported-status Displays the qualification and supported status of tape drives
storage tape show-tape-drive Display information about tape drives
storage tape trace -node thesanguy1-01 -is-trace-enabled true Enable/disable tape trace operations
storage tape alias clear -node thesanguy1-01 -name st0 Clear alias names
storage tape alias set Set an alias name for tape drive or media changer
storage tape alias show Displays aliases of all tape drives and media changers
storage tape library config show Display connectivity to back-end storage tape libraries.
storage tape library path show Display a list of Tape Libraries on the given path
storage tape library path show-by-initiator Display a list of LUNs on the given tape library
storage tape config-file delete -filename thesanguy_LTO-6.TCF Delete a tape config file
storage tape config-file get Get a tape drive configuration file
storage tape config-file show Display the list of tape drive configuration files on the given node
storage tape load-balance modify Modify the tape load balance configuration
storage tape load-balance show Displays the tape load balance configuration
System Commands
system controller show Display the controller information
system controller bootmedia show Display the Boot Media Device Health Status
system controller bootmedia serial show Display the Boot Media Device serial number
system controller clus-flap-threshold show Display the controller cluster port flap threshold
system controller config show-errors Display configuration errors
system controller config pci show-add-on-devices Display PCI devices in expansion slots
system controller config pci show-hierarchy Display PCI hierarchy
system controller environment show Display the FRUs in the controller
system controller fru show Display Information About the FRUs in the Controller
system controller fru led disable-all Turn off all the LEDs Data ONTAP has lit
system controller fru led enable-all Light all the LEDs
system controller fru led modify Modify the status of FRU LEDs
system controller fru led show Display the status of FRU LEDs
system controller ioxm show Displays IOXM Device Health Status
system controller memory dimm show Display the Memory DIMM Table
system controller pci show Display the PCI Device Table
system controller pcicerr threshold modify Modify the Node PCIe error alert threshold
system controller pcicerr threshold show Display the Node PCIe error alert threshold
system controller sp config show Display the Service Processor Config Table
system controller sp upgrade show Display the Service Processor Upgrade Table
system chassis show Display all the chassis in the cluster
system chassis fru show Display the FRUs in the cluster
system cluster-switch create Add information about a cluster switch or management switch
system cluster-switch delete -device SwitchA Delete information about a cluster switch or management switch
system cluster-switch modify Modify information about a switch’s configuration
system cluster-switch prepare-to-downgrade Changes the model number of manually created switches based on the switch support provided in the Data ONTAP releases prior to 8.2.1
system cluster-switch show Display the configuration for cluster and management switches
system cluster-switch show-all Displays the list of switches that were added and deleted
system cluster-switch polling-interval modify -polling-interval 99 Modify the polling interval for monitoring cluster and management switch health
system cluster-switch polling-interval show Display the polling interval for monitoring cluster and management switch health
system cluster-switch threshold show Display the cluster switch health monitor alert thresholds
system configuration recovery node restore Restore node configuration from a backup
system configuration recovery cluster modify -recovery-status complete Modify cluster recovery status
system configuration recovery cluster recreate Recreate cluster
system configuration recovery cluster rejoin Rejoin a cluster
system configuration recovery cluster show Show cluster recovery status
system configuration recovery cluster sync Sync a node with cluster configuration
system configuration backup copy Copy a configuration backup
system configuration backup create Create a configuration backup
system configuration backup delete Delete a configuration backup
system configuration backup download Download a configuration backup
system configuration backup rename Rename a configuration backup
system configuration backup show Show configuration backup information
system configuration backup upload Upload a configuration backup
system configuration backup settings modify Modify configuration backup settings
system configuration backup settings set-password Modify password for destination URL
system configuration backup settings show Show configuration backup settings
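Before a disruptive change it's common practice to take a manual configuration backup and confirm it exists. A minimal sketch; node1 and the backup name are placeholders, and the exact parameter names may vary slightly by release:

system configuration backup create -node node1 -backup-name node1.pre_change.7z -backup-type node
system configuration backup show -node node1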
system feature-usage show-history Display Feature Usage History
system feature-usage show-summary Display Feature Usage Summary
system ha interconnect config show Display the high-availability interconnect configuration information
system ha interconnect link off Turn off the interconnect link
system ha interconnect link on Turn on the interconnect link
system ha interconnect port show Display the high-availability interconnect device port information
system ha interconnect ood clear-error-statistics -node ic-f33270-05 Clear error statistics
system ha interconnect ood clear-performance-statistics Clear performance statistics
system ha interconnect ood disable-optimization Disable coalescing work requests
system ha interconnect ood disable-statistics Disable detailed statistics collection
system ha interconnect ood enable-optimization Enable coalescing work requests
system ha interconnect ood enable-statistics Enable detailed statistics collection
system ha interconnect ood send-diagnostic-buffer Send diagnostic buffer to partner
system ha interconnect ood status show Display the high-availability interconnect device out-of-order delivery (OOD) information
system ha interconnect statistics clear-port Clear the high-availability interconnect port counters
system ha interconnect statistics clear-port-symbol-error Clear the high-availability interconnect port symbol errors
system ha interconnect statistics show-scatter-gather-list Display the high-availability interconnect scatter-gather list entry statistics
system ha interconnect statistics performance show Display the high-availability interconnect device performance statistics
system ha interconnect status show Display the high-availability interconnect connection status
system health alert delete Delete system health alert
system health alert modify Modify system health alert
system health alert show View system health alerts
system health alert definition show Display system health alert definition
system health config show Display system health configuration
system health autosupport trigger history show View system health alert history
system health policy definition modify Modify system health policy definition
system health policy definition show Display system health policy definitions
system health status show Display system health monitoring status
system health subsystem show Display the health of subsystems
system license add Add one or more licenses
system license clean-up Remove unnecessary licenses
system license delete -serial-number 1-61-0000000000000000000999999 -package NFS Delete a license
system license show Display licenses
system license capacity show Display information about capacity-related licenses in the system
system license entitlement-risk show Display Cluster License Entitlement Risk
system license status show Display license status
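Adding a license and confirming it took effect is a two-step check. A minimal sketch; the license code is a placeholder:

system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA
system license show
system license status show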
system halt -node cluster1 -reason 'hardware maintenance' Shut down a node
system node migrate-root Start the root aggregate migration on a node
system node modify Modify node attributes
system node reboot Reboot a node
system node rename Rename a node
system node restore-backup Restore the original backup configuration to the HA target node
system node revert-to Revert a node to a previous release of Data ONTAP
system node run Run interactive or non-interactive commands in the nodeshell
system node run-console Access the console of a node
system node show Display the list of nodes in the cluster
system node show-discovered Display all nodes discovered on the local network
system node show-memory-errors Display Memory Errors on DIMMs
system node autosupport invoke Generate and send an AutoSupport message
system node autosupport invoke-core-upload Generate and send an AutoSupport message with an existing core file.
system node autosupport invoke-performance-archive Generate and send an AutoSupport message with performance archives.
system node autosupport invoke-splog Generate and send an AutoSupport message with collected service-processor log files
system node autosupport modify Modify AutoSupport configuration
system node autosupport show Display AutoSupport configuration
system node autosupport check show Display overall status of AutoSupport subsystem
system node autosupport check show-details Display detailed status of AutoSupport subsystem
system node autosupport destinations show Display a summary of the current AutoSupport destinations
system node autosupport history cancel Cancel an AutoSupport Transmission.
system node autosupport history retransmit Selectively retransmit a previously collected AutoSupport.
system node autosupport history show Display recent AutoSupport messages
system node autosupport history show-upload-details Display upload details of recent AutoSupport messages
system node autosupport manifest show Display AutoSupport content manifest
system node autosupport trigger modify Modify AutoSupport trigger configuration
system node autosupport trigger show Display AutoSupport trigger configuration
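A common way to validate AutoSupport after changing its configuration is to send a test message and review the history. A minimal sketch, assuming node1 and the standard -state/-transport/-type parameters:

system node autosupport modify -node node1 -state enable -transport https
system node autosupport invoke -node node1 -type test
system node autosupport history show -node node1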
system node coredump delete -node node0 -corename core.101689.2018-01-12.12_34_32.nz Delete a coredump
system node coredump delete-all Delete all coredumps owned by a node
system node coredump save Save an unsaved kernel coredump
system node coredump save-all Save all unsaved kernel coredumps owned by a node
system node coredump show Display a list of coredumps
system node coredump status Display kernel coredump status
system node coredump trigger Make the node dump system core and reset
system node coredump config modify Modify coredump configuration
system node coredump config show Display coredump configuration
system node coredump reports delete Delete an application core report
system node coredump reports show Display a list of application core reports
system node coredump segment delete Delete a core segment
system node coredump segment delete-all -node node1 Delete all core segments on a node
system node coredump segment show Display a list of core segments
system node environment sensors show Display the sensor table
system node hardware tape library show Display information about tape libraries
system node hardware tape drive show Displays information about tape drives
system node hardware unified-connect modify Modify the Fibre Channel and converged networking adapter configuration
system node hardware unified-connect show Displays information about Fibre Channel and converged networking adapters
system node external-cache modify -node node1 -is-enabled true Modify external cache settings.
system node external-cache show -node node1 Display external cache settings.
system node image abort-operation Abort software image ‘update’ or ‘get’ operation
system node image get Fetch a file from a URL
system node image modify Modify software image configuration
system node image show Display software image information
system node image show-update-progress Show progress information for a currently running update
system node image update Perform software image upgrade/downgrade
system node image package delete Delete a software package
system node image package show Display software package information
system node firmware download Download motherboard firmware and system diagnostics
system node internal-switch show Display onboard switch attributes
system node internal-switch dump eeprom Display onboard switch eeprom config
system node internal-switch dump port-mapping -node Node1 -switch-id 0 Display onboard switch port mapping
system node internal-switch dump stat Display onboard switch port statistics counter
system node power on Power nodes on
system node power show Display the current power status of the nodes
system node root-mount create Create a mount from one node to another node’s root volume.
system node root-mount delete Delete a mount from one node to another node’s root volume.
system node root-mount show Show the existing mounts from any node to another node’s root volume.
system node upgrade-revert show Display upgrade/revert node status.
system node upgrade-revert upgrade Run the upgrade at a specific phase.
system node virtual-machine instance show Display virtual machine instance information per node
system node virtual-machine instance show-system-disks Display information about Data ONTAP-v system disks
system node virtual-machine hypervisor modify-credentials Modify hypervisor IP address and its credentials
system node virtual-machine hypervisor show Display hypervisor information about Data ONTAP-v nodes
system node virtual-machine hypervisor show-credentials Display hypervisor IP address and its credentials
system node virtual-machine provider credential create Add provider credentials
system node virtual-machine provider credential delete Remove provider credentials
system node virtual-machine provider credential modify Modify provider credentials
system node virtual-machine provider credential show Display the provider credentials
system node virtual-machine provider proxy create Add a proxy server
system node virtual-machine provider proxy delete Remove the proxy server
system node virtual-machine provider proxy modify Modify the proxy server
system node virtual-machine provider proxy show Display the proxy server
system script delete Delete saved CLI session logs
system script show Display saved CLI session logs
system script start Start logging all CLI I/O to session log
system script stop Stops logging CLI I/O
system script upload Upload the selected CLI session log
system service-processor reboot-sp -node node1 -image primary Reboot the Service Processor on a node
system service-processor show Display the Service Processor information
system service-processor api-service modify -port 82001 Modify service processor API service configuration
system service-processor api-service renew-certificates Renew SSL and SSH certificates used for secure communication with Service Processor API service
system service-processor api-service show Display service processor API service configuration
system service-processor image modify -node local -autoupdate true Enable/Disable automatic firmware update
system service-processor image show Display the details of currently installed Service Processor firmware image
system service-processor image update Update Service Processor firmware
system service-processor image update-progress show Display status for the latest Service Processor firmware update
system service-processor log show-allocations Display the Service Processor log allocation map
system service-processor network modify Modify the network configuration
system service-processor network show Display the network configuration
system service-processor network auto-configuration disable Disable Service Processor Auto-Configuration
system service-processor network auto-configuration enable Enable Service Processor Auto-Configuration
system service-processor network auto-configuration show Display Service Processor Auto-Configuration Setup
system service-processor ssh add-allowed-addresses Add IP addresses to the list that is allowed to access the Service Processor
system service-processor ssh remove-allowed-addresses Remove IP addresses from the list that is allowed to access the Service Processor
system service-processor ssh show Display SSH security information about the Service Processor
system services firewall modify Modify firewall status
system services firewall show Show firewall status
system services firewall policy clone Clone an existing firewall policy
system services firewall policy create Create a firewall policy entry for a network service
system services firewall policy delete Remove a service from a firewall policy
system services firewall policy modify Modify a firewall policy entry for a network service
system services firewall policy show Show firewall policies
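Firewall policies are usually created per service and then referenced by a LIF. A minimal sketch; vs1, data_policy, and the subnet are placeholders, and the parameter names may differ slightly by release:

system services firewall policy create -vserver vs1 -policy data_policy -service ssh -allow-list 10.10.10.0/24
system services firewall policy show -vserver vs1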
system services manager install show Display a list of installed services
system services manager policy add -service diagnosis -version 1.0 Add a new service policy
system services manager policy remove Remove a service policy
system services manager policy setstate -service diagnosis -version 1.0 -state off Enable/disable a service policy
system services manager policy show Display service policies
system services manager status show Display the status of a service
system services ndmp kill Kill the specified NDMP session
system services ndmp kill-all -node node1 Kill all NDMP sessions
system services ndmp probe Display list of NDMP sessions
system services ndmp status Display list of NDMP sessions
system services ndmp modify -node node1 Modify NDMP service configuration
system services ndmp service show Display NDMP service configuration
system services ndmp service start Start the NDMP service
system services ndmp service stop -node node0 Stop the NDMP service
system services ndmp service terminate Terminate all NDMP sessions
system services web modify -wait-queue-capacity 256 Modify the cluster-level configuration of web protocols
system services web show Display the cluster-level configuration of web protocols
system services web node show Display the status of the web servers at the node level
system smtape abort -session 99 Abort an active SMTape session
system smtape backup Backup a volume to tape devices
system smtape break Make a restored volume read-write
system smtape continue Continue SMTape session waiting at the end of tape
system smtape restore Restore a volume from tape devices
system smtape showheader Display SMTape header
system smtape status clear Clear SMTape sessions
system smtape status show Show status of SMTape sessions
system timeout modify -timeout 10 Set the CLI inactivity timeout value
system timeout show Display the CLI inactivity timeout value
system snmp authtrap Enables or disables SNMP authentication traps
system snmp contact Displays or modifies contact details
system snmp init Enables or disables SNMP traps
system snmp location Displays or modifies location information
system snmp show Displays SNMP settings
system snmp community add Adds a new community with the specified access control type
system snmp community delete Deletes community with the specified access control type
system snmp community show Displays communities
system snmp traphost add Add a new traphost
system snmp traphost delete Delete a traphost
system snmp traphost show Displays traphosts
Volume Commands
vol autosize vol1 -maximum-size 1t -mode grow Set/Display the autosize settings of the flexible volume.
volume create -vserver vs0 -volume vol_cached -aggregate aggr1 -state online -caching-policy auto Create a new volume
volume delete -vserver vs0 -volume vol1_old Delete an existing volume
volume make-vsroot -vserver vs0 -volume root_vs0_backup Designate a non-root volume as a root volume of the Vserver
volume modify -volume vol2 -autosize-mode grow -max-autosize 500g Modify volume attributes
volume mount -vserver vs0 -volume sanguy1 -junction-path /user/sanguy -active true -policy-override false Mount a volume on another volume with a junction-path
volume offline vol23 Take an existing volume offline
volume online vol1 Bring an existing volume online
volume rehost -vserver vs1 -volume vol0 -destination-vserver thesanguy2 Rehost a volume from one Vserver into another Vserver
volume rename -vserver vs0 -volume vol3_bckp -newname vol3_new Rename an existing volume
volume restrict vol1 Restrict an existing volume
volume show -vserver vs1 Display a list of volumes
volume show-footprint Display a list of volumes and their data and metadata footprints in their associated aggregate.
volume show-space Display space usage for volume(s)
volume size Set/Display the size of the volume.
volume transition-prepare-to-downgrade Verifies that there are no volumes actively transitioning from 7-mode to clustered Data ONTAP
volume unmount -vserver vs0 -volume vol2 Unmount a volume
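Taken together, the basic volume commands form a create, mount, and retire lifecycle. A minimal sketch; vs0, vol1, aggr1, and the size are placeholders (-size and -junction-path are standard volume create parameters, though not shown in the list above):

volume create -vserver vs0 -volume vol1 -aggregate aggr1 -size 100g -junction-path /vol1
volume show -vserver vs0 -volume vol1
volume unmount -vserver vs0 -volume vol1
volume offline -vserver vs0 -volume vol1
volume delete -vserver vs0 -volume vol1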
volume efficiency check -vserver vs1 -volume vol1 -delete-checkpoint true Scrub efficiency metadata of a volume
volume efficiency modify -vserver vs1 -volume vol1 -policy policy1 Modify the efficiency configuration of a volume
volume efficiency on Enable efficiency on a volume
volume efficiency prepare-to-downgrade Identify any incompatible volumes or Snapshot copies before downgrade
volume efficiency revert-to Reverts volume efficiency metadata
volume efficiency show -vserver vs1 Display a list of volumes with efficiency
volume efficiency start -volume vol1 -vserver vs1 Starts efficiency operation on a volume
volume efficiency stat -l -vserver vs1 -volume vol1 Show volume efficiency statistics
volume efficiency stop Stop efficiency operation on a volume
volume efficiency undo Undo efficiency on a volume
volume efficiency policy create Create an efficiency policy
volume efficiency policy delete Delete an efficiency policy
volume efficiency policy modify -policy policy1 -schedule hourly Modify an efficiency policy
volume efficiency policy show Show efficiency policies
volume clone create Create a FlexClone volume
volume clone show Display a list of FlexClones
volume clone split show Show the status of FlexClone split operations in-progress
volume clone split start Split a FlexClone from the parent volume
volume clone split stop Stop an ongoing FlexClone split job
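The usual FlexClone workflow is to clone a volume and later split it from its parent when an independent copy is needed. A minimal sketch; vol1_clone and vol1 are placeholders, assuming the standard -flexclone and -parent-volume parameters:

volume clone create -vserver vs0 -flexclone vol1_clone -parent-volume vol1
volume clone show -vserver vs0
volume clone split start -vserver vs0 -flexclone vol1_clone
volume clone split show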
volume aggregate vacate Move all Infinite Volume constituents from one aggregate to another.
volume file compact-data Apply Adaptive Data Compaction to a Snapshot copy of a file
volume file modify Manage the association of a QoS policy group with a file
volume file privileged-delete Perform a privileged-delete operation on unexpired WORM files on a SnapLock enterprise volume
volume file reservation Get/Set the space reservation info for the named file.
volume file show-disk-usage Show disk usage of file
volume file show-filehandle Show the file handle of a file
volume file show-inode Display file paths for a given inode
volume file clone autodelete Enable/Disable autodelete
volume file clone create Create file or LUN full or sub file clone
volume file clone show-autodelete Show the autodelete status for a file or LUN clone
volume file clone deletion add-extension Add new supported file extensions to be deleted with clone delete
volume file clone deletion modify -volume vol1 -vserver vs1 -minimum-size 100M Used to change the required minimum clone file size of a volume for clone delete
volume file clone deletion remove-extension Remove unsupported file extensions for clone delete
volume file clone deletion show Show the supported file extensions for clone delete
volume file clone split load modify -node clone-01 -max-split-load 100KB Modify maximum split load on a node
volume file clone split load show Show split load on a node
volume file fingerprint abort Abort a file fingerprint operation
volume file fingerprint dump Display fingerprint of a file
volume file fingerprint show -vserver vs0 -volume nfs_slc Display fingerprint operation status
volume file fingerprint start Start a file fingerprint computation on a file
volume file retention show Display retention time of a WORM file.
volume inode-upgrade prepare-to-downgrade Prepare volume to downgrade to a release earlier than Data ONTAP 9.0.0
volume inode-upgrade resume -vserver vs0 -volume vol1 Resume suspended inode upgrade
volume inode-upgrade show Display Inode Upgrade Progress
volume move abort -vserver vs0 -volume vol1 Stop a running volume move operation
volume move modify Modify parameters for a running volume move operation
volume move show -vserver vs0 -volume vol2 Show status of a volume moving from one aggregate to another aggregate
volume move start Start moving a volume from one aggregate to another aggregate
volume move trigger-cutover Trigger cutover of a move job
volume move recommend show Display Move Recommendations
volume move target-aggr show List target aggregates compatible for volume move
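A typical volume move runs in the background and can be cut over manually once the transfer is caught up. A minimal sketch, assuming vol1 is moving to aggr2:

volume move start -vserver vs0 -volume vol1 -destination-aggregate aggr2
volume move show -vserver vs0 -volume vol1
volume move trigger-cutover -vserver vs0 -volume vol1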
volume qtree create -vserver vs0 -volume vol1 -qtree qtree1 -security-style mixed Create a new qtree
volume qtree delete -vserver vs0 -volume vol1 -qtree qtree4 Delete a qtree
volume qtree modify Modify qtree attributes
volume qtree oplocks Modify qtree oplock mode
volume qtree rename Rename an existing qtree
volume qtree security Modify qtree security style
volume qtree show Display a list of qtrees
volume qtree statistics -vserver vs0 Display qtree statistics
volume qtree statistics-reset -vserver vs0 -volume vol1 Reset qtree statistics in a volume
volume quota modify Modify quota state for volumes
volume quota off -vserver vs0 -volume vol1 Turn off quotas for volumes
volume quota on -vserver vs0 -volume vol1 Turn on quotas for volumes
volume quota report Display the quota report for volumes
volume quota resize -vserver vs0 -volume vol1 Resize quotas for volumes
volume quota show -vserver vs0 Display quota state for volumes
volume quota policy copy Copy a quota policy
volume quota policy create -vserver vs0 -policy-name quota_policy_0 Create a quota policy
volume quota policy delete -vserver vs0 -policy-name quota_sanguy_2 Delete a quota policy
volume quota policy rename Rename a quota policy
volume quota policy show Display the quota policies
volume quota policy rule create Create a new quota rule
volume quota policy rule delete Delete an existing quota rule
volume quota policy rule modify Modify an existing quota rule
volume quota policy rule show Display the quota rules
volume quota policy rule count show -vserver vs0 -policy-name default Display count of quota rules
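Quota rules live in a policy and only take effect once quotas are turned on for the volume. A minimal sketch; the policy name, tree target, and 10GB limit are placeholders, and the exact rule parameters may vary by release:

volume quota policy rule create -vserver vs0 -policy-name quota_policy_0 -volume vol1 -type tree -target "" -disk-limit 10GB
volume quota on -vserver vs0 -volume vol1
volume quota report -vserver vs0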
volume reallocation measure Start reallocate measure job
volume reallocation off Disable reallocate jobs
volume reallocation on Enable reallocate jobs
volume reallocation quiesce Quiesce reallocate job
volume reallocation restart Restart reallocate job
volume reallocation schedule Modify schedule of reallocate job
volume reallocation show Show reallocate job status
volume reallocation start Start reallocate job
volume reallocation stop Stop reallocate job
volume schedule-style prepare-to-downgrade Disables volume schedule style feature and sets schedule style to default
volume snaplock modify -volume vol_slc -maximum-retention-period infinite Modify SnapLock attributes of a SnapLock volume
volume snaplock show Display SnapLock attributes of a SnapLock volume
volume snapshot compute-reclaimable Calculate the reclaimable space if specified snapshots are deleted
volume snapshot create -vserver vs0 -volume vol3 -snapshot vol3_snap -comment "snapme" -foreground false Create a snapshot
volume snapshot delete -vserver vs0 -volume vol3 -snapshot vol3_daily Delete a snapshot
volume snapshot modify Modify snapshot attributes
volume snapshot modify-snaplock-expiry-time Modify expiry time of a SnapLock Snapshot copy
volume snapshot partial-restore-file Restore part of a file from a snapshot
volume snapshot prepare-for-revert -node node1 Deletes multiple Snapshot copies of the current File System version.
volume snapshot rename Rename a snapshot
volume snapshot restore -vserver vs0 -volume vol3 -snapshot vol3_snap_archive Restore the volume to a snapshot.
volume snapshot restore-file Restore a file from a snapshot
volume snapshot show Display a list of snapshots
volume snapshot show-delta Computes delta between two Snapshot copies
volume snapshot autodelete modify Modify autodelete settings
volume snapshot autodelete show Display autodelete settings
volume snapshot policy add-schedule Add a schedule to snapshot policy
volume snapshot policy create Create a new snapshot policy
volume snapshot policy delete Delete a snapshot policy
volume snapshot policy modify Modify a snapshot policy
volume snapshot policy modify-schedule Modify a schedule within snapshot policy
volume snapshot policy remove-schedule Remove a schedule from snapshot policy
volume snapshot policy show Show snapshot policies
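The snapshot commands above combine into a simple protect-and-rollback pattern. A minimal sketch using only the flags listed above, with pre_change as a placeholder snapshot name:

volume snapshot create -vserver vs0 -volume vol3 -snapshot pre_change
volume snapshot show -vserver vs0 -volume vol3
volume snapshot restore -vserver vs0 -volume vol3 -snapshot pre_change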
volume transition-convert-dir show Display 7-Mode directories being converted
volume transition-convert-dir start Start converting a 7-Mode directory to Cluster-mode
Vserver Commands
vserver add-aggregates -vserver vs.thesanguy.com -aggregates aggr1,aggr2 Add aggregates to the Vserver
vserver add-protocols -vserver vs0.thesanguy.com -protocols cifs Add protocols to the Vserver
vserver context -vserver vs0.thesanguy.com Set Vserver context
vserver create -vserver vs0.thesanguy.com -ipspace ipspace1 -rootvolume root_vs0 -aggregate aggr0 -language en_US.UTF-8 -rootvolume-security-style mixed Create a Vserver
vserver delete -vserver vs2.thesanguy.com Delete an existing Vserver
vserver modify Modify a Vserver
vserver prepare-for-revert Prepares Vservers to be reverted
vserver remove-aggregates -vserver vs.example.com -aggregates aggr1,aggr2 Remove aggregates from the Vserver
vserver remove-protocols -vserver vs0.example.com -protocols cifs Remove protocols from the Vserver
vserver rename Rename a Vserver
vserver show Display Vservers
vserver show-aggregates -vserver vs Show details of aggregates in a Vserver
vserver show-protocols -vserver vs1 Show protocols for Vserver
vserver start -vserver vs0.thesanguy.com -foreground false Start a Vserver
vserver stop -vserver vs0.thesanguy.com -foreground false Stop a Vserver
vserver unlock -vserver vs1.thesanguy.com -force true Unlock Vserver configuration
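A new Vserver is typically created, given its protocols, and then verified. A minimal sketch using placeholder names and only flags shown in the list above:

vserver create -vserver vs1.thesanguy.com -rootvolume root_vs1 -aggregate aggr1 -rootvolume-security-style unix -language en_US.UTF-8
vserver add-protocols -vserver vs1.thesanguy.com -protocols nfs,cifs
vserver show -vserver vs1.thesanguy.com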
vserver active-directory create -vserver vs1 -account-name ADSERVER1 -domain www.thesanguy.com Create an Active Directory account.
vserver active-directory delete Delete an Active Directory account
vserver active-directory modify Modify the domain of an Active Directory account.
vserver active-directory password-change -vserver vs1 Change the domain account password for an Active Directory account
vserver active-directory password-reset Reset the domain account password for an Active Directory account
vserver active-directory show Display Active Directory accounts
vserver check lif-multitenancy run -vserver vs0 Run check for LIF multitenancy
vserver check lif-multitenancy show Show the summary of the latest multitenancy network run
vserver check lif-multitenancy show-results -vserver vs0 Show the results of the latest multitenancy network run
vserver audit create Create an audit configuration
vserver audit delete Delete audit configuration
vserver audit disable Disable auditing
vserver audit enable Enable auditing
vserver audit modify -vserver vs1 -rotate-size 10MB -rotate-limit 3 Modify the audit configuration
vserver audit rotate-log -vserver vs1 Rotate audit log
vserver audit show Display the audit configuration
vserver cifs add-netbios-aliases Add NetBIOS aliases for the CIFS server name
vserver cifs create -vserver vs1 -cifs-server CIFSSERVER1 -domain sample.com Create a CIFS server
vserver cifs delete -vserver vs1 Delete a CIFS server
vserver cifs modify -vserver vs1 -default-site default -status-admin up Modify a CIFS server
vserver cifs nbtstat Display NetBIOS information over TCP connection
vserver cifs prepare-to-downgrade Restore the CIFS configuration to an earlier release of Data ONTAP
vserver cifs remove-netbios-aliases Remove NetBIOS aliases
vserver cifs repair-modify Repair a partially-failed Vserver CIFS server modify operation
vserver cifs show Display CIFS servers
vserver cifs start Start a CIFS server
vserver cifs stop -vserver vs1 Stop a CIFS server
vserver cifs branchcache create -vserver vs1 -hash-store-path /vs1_hash_store Create the CIFS BranchCache service
vserver cifs branchcache delete -flush-hashes true -vserver vs1 Stop and remove the CIFS BranchCache service
vserver cifs branchcache hash-create Force CIFS BranchCache hash generation for the specified path or file
vserver cifs branchcache hash-flush -vserver vs1 Flush all generated BranchCache hashes
vserver cifs branchcache modify Modify the CIFS BranchCache service settings
vserver cifs branchcache show Display the CIFS BranchCache service status and settings
vserver cifs character-mapping create Create character mapping on a volume
vserver cifs character-mapping delete Delete character mapping on a volume
vserver cifs character-mapping modify Modify character mapping on a volume
vserver cifs character-mapping show Display character mapping on volumes
vserver cifs connection show Displays established CIFS connections
vserver cifs domain discovered-servers reset-servers Reset and rediscover servers for a Vserver
vserver cifs domain discovered-servers show Display discovered server information
vserver cifs domain name-mapping-search add Add to the list of trusted domains for name-mapping
vserver cifs domain name-mapping-search modify Modify the list of trusted domains for name-mapping search
vserver cifs domain name-mapping-search remove Remove from the list of trusted domains for name-mapping search
vserver cifs domain name-mapping-search show Display the list of trusted domains for name-mapping searches
vserver cifs domain password change -vserver vs1 Generate a new password for the CIFS server’s machine account and change it in the Windows Active Directory domain.
vserver cifs domain password reset Reset the CIFS server’s machine account password in the Windows Active Directory domain.
vserver cifs domain password schedule modify Modify the domain account password change schedule
vserver cifs domain preferred-dc add -vserver vs1 -domain example.com -preferred-dc 10.1.1.1 Add to a list of preferred domain controllers
vserver cifs domain preferred-dc remove Remove from a list of preferred domain controllers
vserver cifs domain preferred-dc show Display a list of preferred domain controllers
vserver cifs domain trusts rediscover Reset and rediscover trusted domains for a Vserver
vserver cifs domain trusts show Display discovered trusted domain information
vserver cifs group-policy modify -vserver vs1 -status enabled Change group policy configuration
vserver cifs group-policy show Show group policy configuration
vserver cifs group-policy show-applied Show currently applied group policy setting
vserver cifs group-policy show-defined Show applicable group policy settings defined in Active Directory
vserver cifs group-policy update -vserver vs1 -force-reapply-all-settings true Apply group policy settings defined in Active Directory
vserver cifs group-policy central-access-policy show-applied Show currently applied central access policies
vserver cifs group-policy central-access-policy show-defined Show applicable central access policies defined in the Active Directory
vserver cifs group-policy central-access-rule show-applied Show currently applied central access rules
vserver cifs group-policy central-access-rule show-defined Show applicable central access rules defined in the Active Directory
vserver cifs group-policy restricted-group show-applied Show the applied restricted-group settings.
vserver cifs group-policy restricted-group show-defined Show the defined restricted-group settings.
vserver cifs home-directory modify Modify attributes of CIFS home directories
vserver cifs home-directory show Display home directory configurations
vserver cifs home-directory show-user Display the Home Directory Path for a User
vserver cifs home-directory search-path add -vserver vs1 -path /home1 Add a home directory search path
vserver cifs home-directory search-path remove Remove a home directory search path
vserver cifs home-directory search-path reorder -vserver vs1 -path /home1 -to-position 1 Change the search path order used to find a match
vserver cifs home-directory search-path show Display home directory search paths
vserver cifs options modify Modify CIFS options
vserver cifs options show Display CIFS options
vserver cifs security modify Modify CIFS security settings
vserver cifs security show Display CIFS security settings
vserver cifs session close -node * -protocol-version SMB2 Close an open CIFS session
vserver cifs session show Display established CIFS sessions
vserver cifs session file close Close an open CIFS file
vserver cifs session file show Display opened CIFS files
vserver cifs share create -vserver vs1 -share-name SALES_SHARE -path /sales -symlink-properties enable Create a CIFS share
vserver cifs share delete -vserver vs1 -share-name share1 Delete a CIFS share
vserver cifs share modify Modify a CIFS share
vserver cifs share show Display CIFS shares
vserver cifs share properties add Add to the list of share properties
vserver cifs share properties remove Remove from the list of share properties
vserver cifs share properties show Display share properties
vserver cifs share access-control create Create an access control list
vserver cifs share access-control delete Delete an access control list
vserver cifs share access-control modify Modify an access control list
vserver cifs share access-control show Display access control lists on CIFS shares
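Creating a share and granting access usually go together. A minimal sketch; the share and group names are placeholders, and the access-control parameter names (-share, -user-or-group, -permission) are from memory and may vary by release:

vserver cifs share create -vserver vs1 -share-name ENG_SHARE -path /eng
vserver cifs share access-control create -vserver vs1 -share ENG_SHARE -user-or-group "DOMAIN\Engineering" -permission Full_Control
vserver cifs share show -vserver vs1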
vserver cifs superuser create -domain thesanguy.com -accountname username -vserver vs1 Adds superuser permissions to a CIFS account
vserver cifs superuser delete Deletes superuser permissions from a CIFS account
vserver cifs superuser show Display superuser permissions for CIFS accounts
vserver cifs symlink create Create a symlink path mapping
vserver cifs symlink delete Delete a symlink path mapping
vserver cifs symlink modify Modify a symlink path mapping
vserver cifs symlink show Show symlink path mappings
vserver cifs users-and-groups update-names Update the names of Active Directory users and groups
vserver cifs users-and-groups local-group add-members Add members to a local group
vserver cifs users-and-groups local-group create Create a local group
vserver cifs users-and-groups local-group delete -vserver vs1 -group-name CIFS_SERVER Delete a local group
vserver cifs users-and-groups local-group modify Modify a local group
vserver cifs users-and-groups local-group remove-members Remove members from a local group
vserver cifs users-and-groups local-group rename Rename a local group
vserver cifs users-and-groups local-group show -vserver vs1 Display local groups
vserver cifs users-and-groups local-group show-members -vserver vs1 Display local groups’ members
vserver cifs users-and-groups privilege add-privilege Add local privileges to a user or group
vserver cifs users-and-groups privilege remove-privilege Remove privileges from a user or group
vserver cifs users-and-groups privilege reset-privilege Reset local privileges for a user or group
vserver cifs users-and-groups privilege show Display privileges
vserver cifs users-and-groups local-user create Create a local user
vserver cifs users-and-groups local-user delete Delete a local user
vserver cifs users-and-groups local-user modify Modify a local user
vserver cifs users-and-groups local-user rename Rename a local user
vserver cifs users-and-groups local-user set-password Set a password for a local user
vserver cifs users-and-groups local-user show Display local users
vserver cifs users-and-groups local-user show-membership Display local users’ membership information
vserver config-replication pause Temporarily pause Vserver configuration replication
vserver config-replication resume Resume Vserver configuration replication
vserver config-replication show Display Vserver configuration replication resume time
vserver data-policy export -vserver vs1 Display a data policy
vserver data-policy import Import a data policy
vserver data-policy validate Validate a data policy without import
vserver export-policy check-access Check whether a client is allowed access to a given volume and/or qtree
vserver export-policy copy Copy an export policy
vserver export-policy create Create a rule set
vserver export-policy delete Delete a rule set
vserver export-policy rename Rename an export policy
vserver export-policy show Display a list of rule sets
vserver export-policy access-cache config modify -ttl-positive 36000 -ttl-negative 3600 -harvest-timeout 43200 Modify exports access cache configuration
vserver export-policy access-cache config modify-all-vservers -ttl-positive 36000 -ttl-negative 3600 -harvest-timeout 43200 Modify exports access cache configuration for all Vservers
vserver export-policy access-cache config show Display exports access cache configuration
vserver export-policy access-cache config show-all-vservers Display exports access cache configuration for all Vservers
vserver export-policy netgroup check-membership Check to see if the client is a member of the netgroup
vserver export-policy netgroup cache show -vserver vs1 -netgroup netgroup1 Show the Netgroup Cache
vserver export-policy netgroup queue show Show the Netgroup Processing Queue
vserver export-policy cache flush -vserver vs0 -cache access Flush the Export Caches
vserver export-policy rule add-clientmatches Add list of clientmatch strings to an existing rule
vserver export-policy rule create Create a rule
vserver export-policy rule delete Delete a rule
vserver export-policy rule modify Modify a rule
vserver export-policy rule remove-clientmatches -vserver vs1 -policyname default_expolicy -ruleindex 1 -clientmatches "1.1.1.1" Remove list of clientmatch strings from an existing rule
vserver export-policy rule setindex Move a rule to a specified index
vserver export-policy rule show Display a list of rules
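An export policy is empty until it has rules, and clients are only matched once a rule exists. A minimal sketch; eng_nfs and the subnet are placeholders:

vserver export-policy create -vserver vs1 -policyname eng_nfs
vserver export-policy rule create -vserver vs1 -policyname eng_nfs -clientmatch 10.10.0.0/16 -rorule sys -rwrule sys -protocol nfs
vserver export-policy rule show -vserver vs1 -policyname eng_nfs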
vserver fcp create -vserver vsanguy_1 Create FCP service configuration for a Vserver
vserver fcp delete -vserver vsanguy_1 Delete FCP service configuration
vserver fcp modify -vserver vsanguy_1 -status-admin down Modify FCP service configuration
vserver fcp show Display FCP service configuration
vserver fcp start Starts the FCP service
vserver fcp stop Stops the FCP service
vserver fcp interface show Display configuration information for an FCP interface
vserver fcp ping-igroup show Ping FCP by Igroup
vserver fcp initiator show Display FCP initiators currently connected
vserver fcp ping-initiator show Ping FCP initiator
vserver fcp portname set -vserver vs_1 -lif vs_1.fcp -wwpn SA:NG:UY:B0:58:9b:F6:33 Assigns a new WWPN to a FCP adapter
vserver fcp portname show Display WWPN for FCP logical interfaces
vserver fcp wwn blacklist show Displays the blacklisted WWNs
vserver fcp wwpn-alias remove -vserver vs_1 -wwpn ff:b1:TH:ES:AN:GU:Ya:23 Removes an alias for a World Wide Port Name of an initiator.
vserver fcp wwpn-alias set Set an alias for a World Wide Port Name of an initiator that might login to the target.
vserver fcp wwpn-alias show Displays a list of the WWPN aliases configured for initiators
vserver fpolicy disable Disable a policy
vserver fpolicy enable Enable a policy
vserver fpolicy engine-connect Establish a connection to FPolicy server
vserver fpolicy engine-disconnect Terminate connection to FPolicy server
vserver fpolicy prepare-to-downgrade Restore the FPolicy configuration to an earlier release of Data ONTAP
vserver fpolicy show Display all policies with status
vserver fpolicy show-enabled Display all enabled policies
vserver fpolicy show-engine Display FPolicy server status
vserver fpolicy show-passthrough-read-connection Display connection status for FPolicy passthrough-read
vserver fpolicy policy create Create a policy
vserver fpolicy policy delete Delete a policy
vserver fpolicy policy modify Modify a policy
vserver fpolicy policy show Display policy configuration
vserver fpolicy policy external-engine create Create an external engine
vserver fpolicy policy external-engine delete Delete an external engine
vserver fpolicy policy external-engine modify Modify an external engine
vserver fpolicy policy external-engine show Display external engines
vserver fpolicy policy scope create Create scope
vserver fpolicy policy scope delete Delete scope
vserver fpolicy policy scope modify Modify scope
vserver fpolicy policy scope show Display scope
vserver fpolicy policy event create Create an event
vserver fpolicy policy event delete -vserver vs1.example.com -event-name cifs_event Delete an event
vserver fpolicy policy event modify Modify an event
vserver fpolicy policy event show Display events
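FPolicy configuration is order-sensitive: external engine first, then an event, then a policy tying them together, then a scope, and finally enable. A minimal sketch; all names are placeholders, and some releases require additional engine parameters (such as an engine type and SSL option):

vserver fpolicy policy external-engine create -vserver vs1 -engine-name engine1 -primary-servers 10.1.1.5 -port 9876
vserver fpolicy policy event create -vserver vs1 -event-name cifs_mod -protocol cifs -file-operations create,write,rename
vserver fpolicy policy create -vserver vs1 -policy-name p1 -events cifs_mod -engine engine1
vserver fpolicy policy scope create -vserver vs1 -policy-name p1 -volumes-to-include vol1
vserver fpolicy enable -vserver vs1 -policy-name p1 -sequence-number 1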
vserver group-mapping create Create a group mapping
vserver group-mapping delete Delete a group mapping
vserver group-mapping insert Create a group mapping at a specified position
vserver group-mapping modify Modify a group mapping’s pattern, replacement pattern, or both
vserver group-mapping show Display group mappings
vserver group-mapping swap Exchange the positions of two group mappings
vserver iscsi create -vserver vs_1 Create a Vserver’s iSCSI service
vserver iscsi delete Delete a Vserver’s iSCSI service
vserver iscsi modify Modify a Vserver’s iSCSI service
vserver iscsi show Display a Vserver’s iSCSI configuration
vserver iscsi start Starts the iSCSI service
vserver iscsi stop Stops the iSCSI service
vserver iscsi connection show Display active iSCSI connections
vserver iscsi connection shutdown Shut down a connection on a node
vserver iscsi command show Display active iSCSI commands
vserver iscsi initiator show Display iSCSI initiators currently connected
vserver iscsi interface disable Disable the specified interfaces for iSCSI service
vserver iscsi interface enable Enable the specified interfaces for iSCSI service
vserver iscsi interface modify Modify network interfaces used for iSCSI connectivity
vserver iscsi interface show -vserver vs_1 Show network interfaces used for iSCSI connectivity
vserver iscsi interface accesslist add Add the iSCSI LIFs to the accesslist of the specified initiator
vserver iscsi interface accesslist remove Remove the iSCSI LIFs from the accesslist of the specified initiator
vserver iscsi interface accesslist show Show accesslist of the initiators for iSCSI connectivity
vserver iscsi isns create Configure the iSNS service for the Vserver
vserver iscsi isns delete Remove the iSNS service for the Vserver
vserver iscsi isns modify Modify the iSNS service for the Vserver
vserver iscsi isns show Show iSNS service configuration
vserver iscsi isns start -vserver vs_1 Starts the iSNS service
vserver iscsi isns stop Stops the iSNS service
vserver iscsi isns update Force update of registered iSNS information
vserver iscsi session show Display iSCSI sessions
vserver iscsi session shutdown Shut down a session on a node
vserver iscsi session parameter show Display the parameters used to establish an iSCSI session
vserver iscsi security create Create an iSCSI authentication configuration for an initiator
vserver iscsi security delete -vserver vs_1 -initiator iqn.1992-08.com.thesanguy:abcdefg Delete the iSCSI authentication configuration for an initiator
vserver iscsi security modify Modify the iSCSI authentication configuration for an initiator
vserver iscsi security show Show the current iSCSI authentication configuration
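iSCSI is usually brought up by creating the service and, where CHAP is required, adding an authentication entry per initiator. A minimal sketch; the initiator IQN and CHAP user name are placeholders (the CHAP password is prompted for):

vserver iscsi create -vserver vs_1
vserver iscsi security create -vserver vs_1 -initiator iqn.1992-08.com.example:host1 -auth-type CHAP -user-name chapuser
vserver iscsi show -vserver vs_1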
vserver nfs create Create an NFS configuration for a Vserver
vserver nfs delete -vserver vs2 Delete the NFS configuration of a Vserver
vserver nfs modify -vserver vs0 -access true -v3 enabled -udp disabled -tcp enabled Modify the NFS configuration of a Vserver
vserver nfs off -vserver vs0 Disable the NFS service of a Vserver
vserver nfs on -vserver vs0 Enable the NFS service of a Vserver
vserver nfs prepare-for-v3-ms-dos-client-downgrade Disable NFSv3 MS-DOS Client Support
vserver nfs show Display the NFS configurations of Vservers
vserver nfs start -vserver vs0 Start the NFS service of a Vserver
vserver nfs status -vserver vs0 Display the status of the NFS service of a Vserver
vserver nfs stop Stop the NFS service of a Vserver
vserver nfs kerberos realm create Create a Kerberos realm configuration
vserver nfs kerberos realm delete -vserver AUTH -realm security.thesanguy.com Delete a Kerberos realm configuration
vserver nfs kerberos realm modify Modify a Kerberos realm configuration
vserver nfs kerberos realm show Display Kerberos realm configurations
vserver nfs kerberos interface disable -vserver vs0 -lif datalif1 Disable NFS Kerberos on a LIF
vserver nfs kerberos interface enable Enable NFS Kerberos on a LIF
vserver nfs kerberos interface modify Modify the Kerberos configuration of an NFS server
vserver nfs kerberos interface show Display the Kerberos configurations of NFS servers
vserver nfs pnfs devices create Create a new pNFS device and its mapping
vserver nfs pnfs devices delete -mid 2 Delete a pNFS device
vserver nfs pnfs devices show Display pNFS device information
vserver nfs pnfs devices cache show Display the device cache
vserver nfs pnfs devices mappings show Display the list of pNFS device mappings
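Enabling NFS for a Vserver follows the same create-then-verify pattern, reusing the flags shown above:

vserver nfs create -vserver vs0 -access true -v3 enabled
vserver nfs status -vserver vs0
vserver nfs show -vserver vs0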
vserver locks break Break file locks based on a set of criteria
vserver locks show Display current list of locks
vserver name-mapping create Create a name mapping
vserver name-mapping delete Delete a name mapping
vserver name-mapping insert Create a name mapping at a specified position
vserver name-mapping modify Modify a name mapping’s pattern, replacement pattern, or both
vserver name-mapping refresh-hostname-ip Refresh the IP addresses for configured hostnames
vserver name-mapping show Display name mappings
vserver name-mapping swap Exchange the positions of two name mappings
vserver peer accept -vserver pvs1.example.com -peer-vserver lvs1.thesanguy.com Accept a pending Vserver peer relationship
vserver peer create Create a new Vserver peer relationship
vserver peer delete Delete a Vserver peer relationship
vserver peer modify Modify a Vserver peer relationship
vserver peer modify-local-name Modify the local name for a peer Vserver
vserver peer reject Reject a Vserver peer relationship
vserver peer repair-peer-name -vserver vs1.thesanguy.com Repair the peer vserver name that was not updated during the last rename operation
vserver peer resume Resume a Vserver peer relationship
vserver peer show Display Vserver peer relationships
vserver peer suspend -vserver lvs1.thesanguy.com -peer-vserver pvs1.thesanguy.com Suspend a Vserver peer relationship
vserver peer transition create Create a new transition peer relationship between a 7-Mode system and a Vserver
vserver peer transition delete Delete a transition peer relationship
vserver peer transition modify Modify a transition peer relationship
vserver peer transition show Display transition peer relationships
vserver san prepare-to-downgrade Restore the SAN configuration to an earlier release of Data ONTAP
vserver security file-directory apply -vserver vs0 -policy-name p1 Apply security descriptors on files and directories defined in a policy to a Vserver
vserver security file-directory remove-slag Removes Storage-Level Access Guard
vserver security file-directory show Display file/folder security information
vserver security file-directory ntfs create Create an NTFS security descriptor
vserver security file-directory ntfs delete -ntfs-sd sd1 -vserver vs1 Delete an NTFS security descriptor
vserver security file-directory ntfs modify Modify an NTFS security descriptor
vserver security file-directory ntfs show Display NTFS security descriptors
vserver security file-directory ntfs dacl add Add a DACL entry to an NTFS security descriptor
vserver security file-directory ntfs dacl modify Modify an NTFS security descriptor DACL entry
vserver security file-directory ntfs dacl remove Remove a DACL entry from an NTFS security descriptor
vserver security file-directory ntfs dacl show Display NTFS security descriptor DACL entries
vserver security file-directory ntfs sacl add Add a SACL entry to an NTFS security descriptor
vserver security file-directory ntfs sacl modify Modify an NTFS security descriptor SACL entry
vserver security file-directory ntfs sacl remove Remove a SACL entry from an NTFS security descriptor
vserver security file-directory ntfs sacl show Display NTFS security descriptor SACL entries
vserver security file-directory policy create -policy-name policy1 -vserver server1 Create a file security policy
vserver security file-directory policy delete Delete a file security policy
vserver security file-directory policy show Display file security policies
vserver security file-directory policy task add Add a policy task
vserver security file-directory policy task modify Modify policy tasks
vserver security file-directory policy task remove Remove a policy task
vserver security file-directory policy task show Display policy tasks
vserver security file-directory job show Display a list of file security jobs
vserver security trace trace-result delete -vserver vserver_1 -node Node_1 -seqnum 999 Delete trace results
vserver security trace trace-result show Show trace results
vserver security trace filter create Create a security trace entry
vserver security trace filter delete -vserver vs0 -index 1 Delete a security trace entry
vserver security trace filter modify Modify a security trace entry
vserver security trace filter show Display a security trace entry
vserver services name-service dns create Create a new DNS table entry
vserver services name-service dns delete -vserver vs0 Remove a DNS table entry
vserver services name-service dns modify Change a DNS table entry
vserver services name-service dns show Display DNS configuration
vserver services name-service dns hosts create Create a new host table entry
vserver services name-service dns hosts delete Remove a host table entry
vserver services name-service dns hosts modify -vserver vs1 -address 10.0.0.57 -hostname www.thesanguy.com Modify hostname or aliases
vserver services name-service dns hosts show Display IP address to hostname mappings
vserver services name-service dns dynamic-update modify Modify a Dynamic DNS Update Configuration
vserver services name-service dns dynamic-update prepare-to-downgrade Disable the Dynamic DNS update feature to be compatible with releases earlier than Data ONTAP 8.3.1
vserver services name-service dns dynamic-update show Display Dynamic DNS Update Configuration
vserver services name-service getxxbyyy getaddrinfo Gets the IP address information by using the host name.
vserver services name-service getxxbyyy getgrbygid Gets the group members by using the group identifier or GID.
vserver services name-service getxxbyyy getgrbyname Gets the group members by using the group name.
vserver services name-service getxxbyyy getgrlist Gets the group list by using the user name.
vserver services name-service getxxbyyy gethostbyaddr Gets the host information from the IP address.
vserver services name-service getxxbyyy gethostbyname Gets the IP address information from host name.
vserver services name-service getxxbyyy getnameinfo Gets the name information by using the IP address.
vserver services name-service getxxbyyy getpwbyname Gets the password entry by using the user name.
vserver services name-service getxxbyyy getpwbyuid Gets the password entry by using the user identifier or UID.
vserver services name-service getxxbyyy netgrp Checks if a client is part of a netgroup.
vserver services name-service getxxbyyy netgrpbyhost Checks if a client is part of a netgroup by using a netgroup-by-host query.
vserver services name-service ldap create -vserver vs1 -client-config corp Create an LDAP configuration
vserver services name-service ldap delete -vserver vs1 Delete an LDAP configuration
vserver services name-service ldap modify -vserver vs1 -client-config corpnew Modify an LDAP configuration
vserver services name-service ldap show Display LDAP configurations
vserver services name-service ldap client create -vserver vs1 -client-config corp -servers 192.16.0.100,192.16.0.101 Create an LDAP client configuration
vserver services name-service ldap client delete -vserver vs1 -client-config corp Delete an LDAP client configuration
vserver services name-service ldap client modify Modify an LDAP client configuration
vserver services name-service ldap client modify-bind-password Modify Bind Password of an LDAP client configuration
vserver services name-service ldap client show Display LDAP client configurations
vserver services name-service ldap client schema show Display LDAP schema templates
vserver services name-service netgroup load Load netgroup definitions from a URI
vserver services name-service netgroup status Display local netgroup definitions status
vserver services name-service netgroup file delete -vserver vs1 Remove a local netgroup file
vserver services name-service netgroup file show Display a local netgroup file
vserver services name-service nis-domain create Create a NIS domain configuration
vserver services name-service nis-domain delete -vserver vs2 -domain testnisdomain Delete a NIS domain configuration
vserver services name-service nis-domain modify Modify a NIS domain configuration
vserver services name-service nis-domain show Display NIS domain configurations
vserver services name-service nis-domain show-bound Display binding status of a NIS domain configuration
vserver services name-service ns-switch commands Manage Name Services Switch ordering
vserver services name-service ns-switch create Create a new Name Service Switch table entry
vserver services name-service ns-switch delete -vserver vs0 -database hosts Remove a Name Service Switch table entry
vserver services name-service ns-switch modify Change a Name Service Switch table entry
vserver services name-service ns-switch show Display Name Service Switch configuration
vserver services name-service remote-admin-auth prepare-to-downgrade Disable remote admin authentication feature to be compatible with releases earlier than Data ONTAP 8.3.1
vserver services name-service unix-group adduser Add a user to a local UNIX group
vserver services name-service unix-group create -vserver vs0 -name sanguy -id 99 Create a local UNIX group
vserver services name-service unix-group delete Delete a local UNIX group
vserver services name-service unix-group deluser Delete a user from a local UNIX group
vserver services name-service unix-group load-from-uri Load one or more local UNIX groups from a URI
vserver services name-service unix-group modify -vserver vs0 -group hr -id 65 Modify a local UNIX group
vserver services name-service unix-group show Display local UNIX groups
vserver services name-service unix-group max-limit modify -limit 33792 Change Configuration Limits for UNIX-Group
vserver services name-service unix-group max-limit show Display Configuration Limits for UNIX-Group
vserver services name-service unix-user create Create a local UNIX user
vserver services name-service unix-user delete -vserver vs0 -user testuser Delete a local UNIX user
vserver services name-service unix-user load-from-uri Load one or more local UNIX users from a URI
vserver services name-service unix-user modify Modify a local UNIX user
vserver services name-service unix-user show Display local UNIX users
vserver services name-service unix-user max-limit modify Change Configuration Limits for UNIX-User
vserver services name-service ypbind start Start ypbind
vserver services name-service ypbind status Current ypbind status
vserver services name-service ypbind stop Stop ypbind
vserver services ndmp generate-password Generates NDMP password for a user
vserver services ndmp kill 1001:9022 -vserver vserverA Kill the specified NDMP session
vserver services ndmp kill-all Terminate all NDMP sessions on a particular Vserver in the cluster
vserver services ndmp modify Modify NDMP Properties
vserver services ndmp off -vserver vs1 Disable NDMP service
vserver services ndmp on -vserver vs1 Enable NDMP service
vserver services ndmp probe Display list of NDMP sessions
vserver services ndmp show Display NDMP Properties
vserver services ndmp status Display list of NDMP sessions
vserver services ndmp extension modify -is-extension-0x2050-enabled true Modify NDMP extension status
vserver services ndmp extensions show Display NDMP extension status
vserver services ndmp log start Start logging for the specified NDMP session
vserver services ndmp log stop Stop logging for the specified NDMP session
vserver services ndmp restartable-backup delete Delete an NDMP restartable backup context
vserver services ndmp restartable-backup show Display NDMP restartable backup contexts
vserver services web modify Modify the configuration of web services
vserver services web show Display the current configuration of web services
vserver services web access create -name ontapi -role auditor Authorize a new role for web service access
vserver services web access delete -name ontapi -role auditor Remove role authorization for web service access
vserver services web access show Display web service authorization for user roles
vserver smtape break -vserver vserver0 -volume datavol Make a restored volume read-write
vserver snapdiff-rpc-server off -vserver vs0 Stop the SnapDiff RPC server
vserver snapdiff-rpc-server on Start the SnapDiff RPC Server
vserver snapdiff-rpc-server show Display the SnapDiff RPC server configurations of Vservers
vserver vscan disable Disable Vscan on a Vserver
vserver vscan enable Enable Vscan on a Vserver
vserver vscan reset Discard cached scan information
vserver vscan show Display Vscan status
vserver vscan show-events Display Vscan events
vserver vscan connection-status show Display Vscan servers connection status summary
vserver vscan connection-status show-all Display Vscan servers connection status
vserver vscan connection-status show-connected Display connection status of connected Vscan servers
vserver vscan connection-status show-not-connected Display connection status of Vscan servers which are allowed to connect but not yet connected
vserver vscan on-access-policy create Create an On-Access policy
vserver vscan on-access-policy delete Delete an On-Access policy
vserver vscan on-access-policy disable Disable an On-Access policy
vserver vscan on-access-policy enable Enable an On-Access policy
vserver vscan on-access-policy modify Modify an On-Access policy
vserver vscan on-access-policy show Display On-Access policies
vserver vscan on-access-policy file-ext-to-include add Add to the list of file extensions to include
vserver vscan on-access-policy file-ext-to-include remove Remove from the list of file extensions to include
vserver vscan on-access-policy file-ext-to-include show Display list of file extensions to include
vserver vscan on-access-policy file-ext-to-exclude add Add to the list of file extensions to exclude
vserver vscan on-access-policy file-ext-to-exclude remove Remove from the list of file extensions to exclude
vserver vscan on-access-policy file-ext-to-exclude show Display list of file extensions to exclude
vserver vscan on-access-policy paths-to-exclude add Add to the list of paths to exclude
vserver vscan on-access-policy paths-to-exclude remove Remove from the list of paths to exclude
vserver vscan on-access-policy paths-to-exclude show Display list of paths to exclude
vserver vscan on-demand-task create Create an On-Demand task
vserver vscan on-demand-task delete -vserver vs1 -task-name t1 Delete an On-Demand task
vserver vscan on-demand-task modify Modify an On-Demand task
vserver vscan on-demand-task run -vserver vs1 -task-name t1 Run an On-Demand task
vserver vscan on-demand-task schedule Schedule an On-Demand task
vserver vscan on-demand-task show Display On-Demand tasks
vserver vscan on-demand-task unschedule -vserver vs1 -task-name t1 Unschedule an On-Demand task
vserver vscan on-demand-task report delete Delete an On-Demand report
vserver vscan on-demand-task report show Display On-Demand reports
vserver vscan scanner-pool apply-policy -vserver vs1 -scanner-pool p1 -scanner-policy primary -cluster cluster2 Apply scanner-policy to a scanner pool
vserver vscan scanner-pool create Create a scanner pool
vserver vscan scanner-pool delete Delete a scanner pool
vserver vscan scanner-pool modify Modify a scanner pool
vserver vscan scanner-pool show Display scanner pools
vserver vscan scanner-pool show-active Display active scanner pools
vserver vscan scanner-pool privileged-users add Add to the list of privileged users
vserver vscan scanner-pool privileged-users remove Remove from the list of privileged users
vserver vscan scanner-pool privileged-users show Display list of privileged users
vserver vscan scanner-pool servers add Add to the list of hostnames
vserver vscan scanner-pool servers remove Remove from the list of hostnames
vserver vscan scanner-pool servers show Display list of servers
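As a quick end-to-end illustration of how these fit together (vs0 is the placeholder vserver name used throughout this list, and the create flags mirror the modify example shown above), enabling NFSv3 on a vserver and confirming the service is running might look like:

vserver nfs create -vserver vs0 -access true -v3 enabled
vserver nfs on -vserver vs0
vserver nfs status -vserver vs0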

EMC XtremIO CLI Reference Guide (XMSCLI)

Other CLI Reference Guides:
Isilon CLI  |  EMC ECS CLI  |  VNX NAS CLI  |  ViPR Controller CLI  |  NetApp Clustered ONTAP CLI  |  Data Domain CLI  |  Brocade FOS CLI

This is a quick reference guide for the EMC XtremIO CLI, including all of the commands for cluster monitoring, cluster operations, hardware management, volume operations, administration & configuration, and alerting & events.

Monitoring Clusters
show-clusters Displays the connected clusters’ information.
show-clusters-info Displays the connected clusters’ information.
show-clusters-upgrade Displays the clusters’ upgrade status.
show-clusters-upgrade-progress Displays indicators of the clusters’ software upgrade progress.
show-clusters-performance Displays clusters’ performance data.
show-clusters-performance-small Displays clusters’ performance data for small (under 4KB) blocks.
show-clusters-performance-unaligned Displays clusters’ performance data for unaligned blocks.
show-clusters-performance-latency Displays clusters’ performance latency data.
modify-clusters-parameters Modifies the connected clusters’ parameters, such as the iSCSI TCP port numbers.
show-clusters-savings Displays savings parameters of the selected cluster.
modify-cluster-thresholds Modifies the thin provisioning soft limits for the connected clusters.
show-clusters-data-protection-properties Displays clusters’ data protection properties.
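As a minimal monitoring pass, the show commands can simply be chained. In a multi-cluster XMS you would first run set-context (see Managing Multiple Clusters below) or append cluster-id=<cluster name> as in the Cluster Operations examples at the end of this guide (my assumption is that most show commands accept it; verify on your XMS version):

show-clusters
show-clusters-performance
show-clusters-performance-latency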
Managing Multiple Clusters
add-cluster Adds a Cluster to the list of Clusters managed by the XMS.
remove-cluster Removes a Cluster from the list of Clusters managed by the XMS.
set-context Sets a cluster context in a multiple-cluster environment so that subsequent commands run against the selected cluster.
Managing Tags
show-tags Displays the details of all defined Tags.
show-tag Displays the details of a specified Tag.
create-tag Creates a Tag for an entity.
tag-object Assigns a Tag to the specified object.
untag-object Removes a Tag from the specified object.
modify-tag Modifies the specified Tag caption.
remove-tag Deletes a Tag from the Tags list.
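A tagging workflow sketch follows. Note that the parameter names shown here (entity= and caption=) are illustrative assumptions based on the descriptions above, not confirmed XMCLI syntax, so check the CLI help for the exact spelling:

create-tag entity=Volume caption="Prod"
tag-object caption="Prod" entity=Volume
show-tags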
Monitoring Cluster Performance
show-most-active Displays the most active Volumes and Initiator Groups.
show-most-active-initiator-groups Displays performance data of the most active Initiator Groups.
show-most-active-volumes Displays performance data of the most active Volumes.
Monitoring X-Bricks
show-bricks Displays a list of X-Bricks and their associated cluster.
show-clusters Displays the connected clusters’ information.
show-storage-controllers Displays the cluster’s Storage Controllers information and status.
show-ssds Displays a list of SSDs in the cluster and their properties.
show-bbus Displays the Battery Backup Units information.
Monitoring Storage Controllers
show-storage-controllers Displays the cluster’s Storage Controllers information and status.
show-storage-controllers-info Displays the cluster’s Storage Controllers information.
show-storage-controllers-fw-versions Displays the Storage Controllers firmware version information.
show-storage-controllers-psus Displays information on Storage Controllers power supply units.
show-storage-controllers-sensors Displays a list of sensors and their related information.
test-xms-storage-controller-connectivity Performs a connectivity check for the specified Storage Controller and its Managing XMS.
Monitoring SSDs
show-ssds Displays a list of SSDs in the cluster and their properties.
show-ssds-performance Displays the SSDs performance data.
show-slots Displays a list of SSD slots and their properties.
Monitoring InfiniBand Switches
show-infiniband-switches Displays InfiniBand Switches’ information.
show-infiniband-switches-ports Displays InfiniBand Switches’ port information.
show-infiniband-switches-psus Displays InfiniBand Switches’ PSU information.
Monitoring Data Protection Groups
show-data-protection-groups Displays XDP groups status and information.
show-clusters-data-protection-properties Displays the clusters’ data protection properties.
show-data-protection-groups-performance Displays XDP groups performance information.
Monitoring Local Disks
show-local-disks Displays the Storage Controller’s Local Disks information.
Monitoring BBUs
show-bbus Displays Battery Backup Units information.
Monitoring DAEs
show-daes Displays the cluster’s DAE information.
show-daes-psus Displays a list of DAE power supply units (PSUs) and their properties.
show-daes-controllers Displays a list of DAE LCCs (controllers) and their properties.
Monitoring Targets
show-targets Displays the cluster Targets information.
show-target-groups Displays a list of Target groups.
show-targets-fc-error-counters Displays Fibre Channel error counters per Target.
show-target-groups-fc-error-counters Displays Fibre Channel error counters per Target group.
show-targets-performance Displays Targets’ performance data.
show-targets-performance-small Displays Targets’ performance data for small (under 4KB) blocks.
show-targets-performance-unaligned Displays Targets’ performance data for unaligned blocks.
show-target-groups-performance Displays Target groups’ performance data.
show-target-groups-performance-small Displays Target groups’ performance data for small (under 4KB) blocks.
show-target-groups-performance-unaligned Displays Target groups’ performance data for unaligned blocks.
Monitoring Volumes
show-volume Displays the specified Volume’s information.
show-volumes Displays a list of Volumes and their information.
show-volume-snapshot-groups Displays the defined Snapshot groups and their parameters.
show-volumes-performance Displays Volumes’ performance data.
show-volumes-performance-small Displays Volumes’ performance data for small (under 4KB) blocks.
show-volumes-performance-unaligned Displays Volumes’ performance data for unaligned blocks.
Monitoring Consistency Groups
show-consistency-group Displays the parameters of the specified Consistency Group.
show-consistency-groups Displays the parameters of all defined Consistency Groups.
Monitoring Initiators
show-initiators Displays Initiators’ data.
show-initiators-performance Displays Initiators’ performance data.
show-initiators-performance-small Displays Initiators’ performance data for small (under 4KB) blocks.
show-initiators-performance-unaligned Displays Initiators’ performance data for unaligned blocks.
show-initiators-connectivity Displays Initiators-Port connectivity and the number of the connected Targets. Specifying the Target-details input parameter provides the Initiators-Targets connectivity map.
show-discovered-initiators-connectivity Displays the Initiators that are logged in to the cluster but not assigned to any Initiator Group.
Monitoring initiator groups
show-initiator-group Displays information for a specific Initiator Group.
show-initiator-groups Displays information for all Initiator Groups.
show-initiator-groups-performance Displays Initiator Groups’ performance data.
show-initiator-groups-performance-small Displays Initiator Groups’ performance data for small (under 4KB) blocks.
show-initiator-groups-performance-unaligned Displays Initiator Groups’ performance data for unaligned blocks.
Monitoring Snapshot Sets
show-snapshot-set Displays the parameters of a specified Snapshot Set.
show-snapshot-sets Displays a list of Snapshot Sets and related information.
Monitoring Cluster Alerts
show-alerts Displays a list of active alerts and their details.
show-alert-definitions Displays a list of pre-defined alerts and their definitions.
Managing Reports
show-report Displays the details of a specified report.
show-reports Displays a list of defined reports.
show-reports-data Displays a report’s data for a specified entity and category.
Managing Volumes and Snapshots
add-volume Creates and adds a new Volume.
remove-volume Removes a Volume.
modify-volume Modifies a Volume’s parameters.
show-volume Displays the specified Volume’s information.
show-volumes Displays a list of Volumes/Snapshots (including properties of each), and the Volume Snapshot Group Index each Volume/Snapshot belongs to.
create-snapshot Creates a Snapshot from a specified Volume.
create-snapshot-and-reassign Creates a Snapshot from a specified Volume/Snapshot, Consistency Group, or Snapshot Set and reassigns the Volume identity characteristic to the created Snapshot.
show-volume-snapshot-groups Displays the Volume Snapshot Group and its members.
add-volume-to-consistency-group Adds a Volume to a Consistency Group.
create-scheduler Creates a new Snapshot Scheduler.
show-snapshots Displays a list of Snapshots and related information.
map-lun Maps a Volume to an Initiator Group and assigns a Logical Unit Number (LUN) to it.
Managing Consistency Groups
add-volume-to-consistency-group Adds a Volume to a Consistency Group.
create-consistency-group Creates a new Consistency Group.
create-snapshot-and-reassign Creates a Snapshot from a specified Volume/Snapshot, Consistency Group, or Snapshot set and reassigns the Volume identity characteristic to the created Snapshot.
remove-consistency-group Deletes a Consistency Group.
remove-volume-from-consistency-group Removes a Volume from a Consistency Group.
show-consistency-group Displays a specified Consistency Group’s parameters.
show-consistency-groups Displays all the defined Consistency Groups’ parameters.
create-scheduler Creates a new Snapshot scheduler.
Managing Snapshot Sets
create-snapshot Creates a Snapshot from a specified Volume.
create-snapshot-and-reassign Creates a Snapshot from a specified Volume/Snapshot, Consistency Group, or Snapshot Set and reassigns the Volume identity characteristic to the created Snapshot.
show-snapshot-sets Displays a list of Snapshot Sets and their data.
show-snapshot-set Displays the parameters of a specified Snapshot Set.
remove-snapshot-set Removes a Snapshot Set.
create-scheduler Creates a new Snapshot scheduler.
Managing Initiator Groups
add-initiator Adds an Initiator and associates it with an existing Initiator Group.
add-initiator-group Adds an Initiator Group and its associated Initiators to the XtremIO cluster.
modify-initiator Modifies the properties of an existing Initiator.
remove-initiator Deletes an Initiator.
remove-initiator-group Deletes an Initiator Group.
show-initiators Displays Initiators’ data.
show-initiator-group Displays information for a specific Initiator Group.
show-initiator-groups Displays information for all Initiator Groups.
show-targets Displays the cluster Targets’ interfaces (iSCSI or FC ports).
show-target-groups Displays a list of Target Groups.
show-discovered-initiators-connectivity Displays the Initiators-Targets connectivity map.
show-initiators-connectivity Displays Initiators-Port connectivity and the number of connected Targets.
map-lun Maps a Volume to an Initiator Group and assigns a Logical Unit Number (LUN) to it.
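Putting these together, presenting storage to a new host might follow the order sketched below. The parameter names shown (ig-name=, ig-id=, initiator-name=, port-address=, vol-id=, lun=) are illustrative assumptions rather than verified XMCLI syntax, so confirm them against the CLI help before use:

add-initiator-group ig-name="esx-host1"
add-initiator ig-id="esx-host1" initiator-name="esx-host1-hba0" port-address=21:00:00:24:ff:4a:12:34
map-lun vol-id="vol_prod1" ig-id="esx-host1" lun=1
show-initiators-connectivity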
Managing Initiators
add-initiator Adds an Initiator and associates it with an existing Initiator Group.
modify-initiator Modifies the properties of an existing Initiator.
remove-initiator Deletes an Initiator.
show-initiators Displays Initiators’ data.
rename Renames a component of the XtremIO Storage Array.
show-chap Displays the cluster’s configured CHAP authentication and discovery modes.
modify-chap Modifies CHAP configuration parameters.
Managing Schedulers
create-scheduler Creates a new Snapshot Scheduler.
modify-scheduler Modifies a Snapshot Scheduler’s parameters.
remove-scheduler Removes a Snapshot Scheduler.
show-scheduler Displays the parameters of the specified Scheduler.
show-schedulers Displays the defined Schedulers’ parameters.
suspend-scheduler Suspends the activity of an active Scheduler.
resume-scheduler Resumes the activity of a suspended Scheduler.
Managing alerts
acknowledge-alert Acknowledges an alert and removes it from the dashboard Active Alerts list. The alert remains in the Alert List window. Alerts with Clear Mode set to Acknowledge Required remain on the Alert List until they are acknowledged.
modify-alert-definition Modifies the alert definition properties for a specified alert type.
show-alert-definitions Displays a list of pre-defined alerts and their definitions.
Managing events
add-event-handler-definition Adds a definition to an event handling rule.
remove-event-handler-definition Deletes the event handling rule definitions.
modify-event-handler-definition Modifies the definition of event handling rules.
show-event-handler-definitions Displays the event handling rule definitions.
Managing iSCSI portals and routes
add-iscsi-portal Maps a portal to a Target.
add-iscsi-route Adds and configures iSCSI route parameters.
remove-iscsi-portal Deletes a portal mapping from a Target.
remove-iscsi-route Deletes an iSCSI routing configuration.
show-iscsi-portals Displays a list of iSCSI portals and their properties.
show-iscsi-routes Displays a list of iSCSI routes and their properties.
Managing Cluster Limits
modify-cluster-thresholds Modifies the properties for thin provisioning soft limits for connected clusters.
modify-alert-definition Modifies the alert definition properties for a specified alert type.
Managing Cluster ODX mode
modify-clusters-parameters Modifies various cluster parameters.
show-clusters-parameters Displays various cluster parameters.
Configuring CHAP Parameters
modify-chap Modifies CHAP configuration parameters.
show-chap Displays the cluster’s configured CHAP authentication and discovery modes.
Managing User Accounts
add-user-account Adds a new user account.
remove-user-account Removes a user account.
modify-user-account Modifies the user account parameters.
modify-password Used to modify one’s own password, or for entitled users (Configuration and Admin) to modify others’ passwords.
show-user-accounts Displays the user accounts information.
LDAP server configuration
add-ldap-config Adds a new LDAP configuration profile to the LDAP configuration table.
modify-ldap-config Modifies an LDAP configuration profile.
remove-ldap-config Removes an LDAP configuration profile from the LDAP configuration table.
show-ldap-configs Displays the LDAP users’ authentication configuration data.
Configuring the Inactivity Timeout
show-xms Displays the XtremIO Management Server (XMS) information.
modify-xms-parameters Modifies the XMS’s user inactivity timeout.
Managing Email settings
modify-email-notifier Modifies the email notification settings.
show-email-notifier Displays the Email notification settings.
Managing SNMP configuration
modify-snmp-notifier Modifies the SNMP notification settings.
show-snmp-notifier Shows the SNMP notification settings.
Managing Syslog notification configuration
show-syslog-notifier Displays the Syslog server notification status and data.
modify-syslog-notifier Enables Syslog configuration.
Cluster Operations
show-clusters Obtain the name and index of the cluster
stop-cluster cluster-id=<cluster name> Stop the cluster via the CLI
start-cluster cluster-id=<cluster name> Start the cluster via the CLI
power-off cluster-id=<cluster name> shut-down-reason="reason" Power off the cluster via the CLI
show-storage-controllers cluster-id=<cluster name> View all of the Storage Controllers in the cluster.
show-clusters-performance Verify that no I/O requests are being sent from hosts prior to a planned shutdown
show-targets-performance cluster-id=<cluster name> Verify that all I/O counters for the relevant cluster have dropped to zero before shutting down
show-email-notifier Verify the email notification settings (consider disabling notifications via modify-email-notifier during planned maintenance)
modify-ip-addresses Modify Storage Controller IP addresses (run show-storage-controllers first to view the Storage Controllers and their respective index numbers).
modify-datetime Set the time zone.
refresh-xms-ssh-key Generate a new, unique SSH key for the cluster you are working with
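Sequenced together, the documented pre-shutdown checks and the shutdown itself look like this (the cluster name xio-cluster1 is a placeholder):

show-clusters
show-clusters-performance
show-targets-performance cluster-id=xio-cluster1
stop-cluster cluster-id=xio-cluster1
power-off cluster-id=xio-cluster1 shut-down-reason="planned maintenance"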

Dell EMC Unity Unisphere CLI Command Reference Guide (UEMCLI)

This is a list of syntax examples for using uemcli on a Unity array.  It covers system management, networking, host management, hardware, storage management, data protection and mobility, events and alerts, and system maintenance.  Install the UEMCLI on your client machine or open an SSH session to the Unity array, as uemcli is already accessible there.

This post is designed to help you quickly find the general syntax of uemcli commands and be short enough to print out a copy.  For details on all of the specific options for each of these commands, I recommend downloading the (612-page) Dell EMC CLI Reference Guide.
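One usage note before the tables: every example below passes credentials inline with -d/-u/-p. If you will be running many commands, uemcli can also cache credentials in a local security file so that subsequent commands can omit -u and -p; a hedged example (confirm the switch with uemcli -help on your client version):

uemcli -d 10.1.55.1 -u Local/username -p passwd123 -saveUser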

System Management
Displays the general settings for a physical system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/general show -detail
Displays the general settings for a virtual system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/general show
Disable automatic failback uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/general set -autoFailback off
Fail back all NAS servers that have failed over uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/general failback
Perform a health check of the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/general healthcheck
Displays the general setting information for the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/info show
Change System Information uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/info set -contactFirstName Zach -contactLastName Arnold -contactEmail x@mail.com -contactPhone 5559999 -location here -contactMobilePhone 987654321
Create a session to upgrade the system software uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/upgrade create -type software
Create a session to upgrade the Storage Processor uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/upgrade create -type sp -newSPModel SP500
Display details about the hardware upgrade session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/upgrade show
Displays details about the software upgrade session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/upgrade show
Resume upgrade uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/upgrade resume
Cancel upgrade session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/upgrade cancel
Display security settings for the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/security show
Changes system security settings uemcli /sys/security set -fips140Enabled yes
Display system time uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/time show
Set the system time (allowing a reboot if required) uemcli /sys/time set -utc "2018-01-17 12:16:30" -force allowreboot
Display support configuration uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/support/config show
Display detailed support configuration uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/support/config show -detail
Specifies the support services proxy server parameters uemcli /sys/support/config set -supportProxyAddr 10.1.55.1 -supportProxyPort 8080 -supportProxyUser user1 -supportProxyPasswd password123 -supportProxyProtocol http
Displays support contracts uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/support/contract show
Refresh support contracts uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/support/contract refresh
Display Centralized ESRS configuration uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/support/esrsc show -detail
Check network connectivity from Centralized ESRS to EMC uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/support/esrsc checkNetwork -address 10.10.96.97
Displays the Integrated ESRS configuration uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/support/esrsi show -detail
Specifies the Integrated ESRS parameters uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/support/esrsi set -acceptEula yes
Displays network connectivity for Integrated ESRS uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/support/esrsi checkNetwork
Display Integrated ESRS policy Manager server config uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/support/esrsi/policymgr show -detail
Change the Policy Manager and proxy server attributes uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/support/esrsi/policymgr set -enable no
Show configuration details for Connect Home uemcli -d 10.1.55.1 -u admin -p Password /sys/support/connecthome show -detail
Enable Connect Home and specify SMTP server uemcli -d 10.1.55.1 -u local/username -p Password /sys/support/connecthome set -enable yes -smtpServer 10.10.99.99
Test email alert uemcli -d 10.1.55.1 -u local/username -p Password /sys/support/connecthome test
Displays a list of user roles on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /user/role show -detail
Create user account with operator role uemcli -d 10.1.55.1 -u Local/username -p passwd123 /user/account create -name user1 -type local -passwd Password -role operator
Displays a list of all user accounts on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /user/account show
Change password for user account user_user1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /user/account -id user_user1 set -passwd NewPassword -oldpasswd OldPassword
Delete user account user_user1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /user/account -id user_user1 delete
Display support credentials uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/support/account show
Display list of all feature licenses on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/limit show -detail
Display list of all feature licenses on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/lic show
This command accepts the EULA uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/eula set -agree yes
Create remote manager configuration uemcli /sys/ur create -addr 10.10.0.1 -certificate 2fd4e1c67a2d28fced849ee1bb76e7391b93eb12 -passphrase password
Display Unisphere Central manager configuration uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/ur show
Display settings for remote system logging uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/rlog show
Configure remote system logging with these settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/rlog set -enabled yes -host 10.64.74.12 -port 500 -protocol UDP -facility chicago
Delete X509 certificate uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/cert -id vasa_http-vc1-servercert-1 delete
Display details about all schedules uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/task/sched show
Delete schedule MySchedID uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/task/sched -id MySchedID delete
List details for all task rules assigned to protection schedule uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/task/rule -sched SCHD_3 show
Delete a task rule uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/task/rule -id RULE_1 delete
Display list of all jobs uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/task/job show {-detail}
Resume an existing job uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/task/job -id N-23564 resume
Display list of steps of the specified job uemcli /sys/task/job/step -jobId N-23654 show {-detail}
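As one worked sequence from the table above, creating and tracking a software upgrade session (assuming an upgrade candidate has already been uploaded to the system) might run:

uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/upgrade create -type software
uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/upgrade show
uemcli -d 10.1.55.1 -u Local/username -p passwd123 /sys/upgrade resume

The resume step is only needed if the session pauses on a warning or error.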
Network
Create a NAS server uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/server create -name NasSanguy_1 -sp spa -pool pool_0 -enablePacketReflect yes
Display details for a list of all configured NAS servers uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/server show -detail
Uses LDAP as the Unix Directory Service uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/server -id nas_1 set -mpSharingEnabled yes -unixDirectoryService ldap
Change replication settings for NAS server nas_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/server -id nas_1 set -replDest yes
Change storage processor to SPB for NAS server nas_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/server -id nas_1 set -sp spb
Delete NAS server nas_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/server -id nas_1 delete
Create user mapping report for NAS server nas_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/server -id nas_1 update -async -userMapping
View FTP or SFTP server settings for a NAS server uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/ftp show
View LDAP settings of a NAS server uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/ldap -server nas_1 show -detail
Create a new NAS interface uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/if create -server nas_1 -port eth0_SPA -addr 10.1.55.1 -netmask 255.255.255.0
Display all NAS interfaces on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/if show
Change the gateway address for interface IF_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/if -id IF_1 set -gateway 2001:db8:0:170:a:0:2:70
Delete interface IF_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/if -id IF_1 delete
Create a network route for interface if_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/route create -if if_1 -type net -target 10.64.200.10 -netmask 255.255.255.0 -gateway 10.64.74.1
Display all NAS routes on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/route show -detail
Delete route ‘route_1’ uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/route -id route_1 delete
Configure a custom Kerberos realm for NAS server nas_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/kerberos -server nas_1 set -addr master.thesanguy.com,slave.thesanguy.com -realm domain.thesanguy.com
Show Kerberos settings for all NAS Servers uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/kerberos show
Display information for VLANs that are in use uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/vlan show -from 100 -inUse yes
Create a CIFS server uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/cifs create -server nas_0 -name CIFSserver1 -description "CIFS description" -domain domain.thesanguy.com -username user1 -passwd password1
Displays CIFS server settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/cifs show
Delete a CIFS server uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/cifs -id CIFS_0 delete
Create an NFS server on NAS server nas_1 with ID nfs_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/nfs create -server nas_1 -v4 yes -secure yes
Display NFS server settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/nfs show -detail
Change credit cache retention period for an NFS server uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/nfs -id nfs_1 set -credCacheRetention 20
Delete an existing NFS server uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/nfs -id nfs_1 delete
View details about CAVA settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/cava show
Modify CAVA settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/cava -server nas_1 set -enabled yes
View details about CEPA configuration settings uemcli /net/nas/event/config -server nas_1 show -detail
Enable Event Publishing and set the post-event policy uemcli /net/nas/event/config -server nas_1 set -enabled yes -postEventPolicy accumulate
Create a CEPA pool and a list of post events to be notified on uemcli /net/nas/event/pool create -server nas_1 -name mypool1 -addr 10.1.2.100 -postEvents CreateFile,DeleteFile
View details about a CEPA pool uemcli /net/nas/event/pool -server nas_1 show
Change the name for a CEPA pool uemcli /net/nas/event/pool -id cepa_pool_1 set -name TestCepaPool
Delete a CEPA pool uemcli /net/nas/event/pool -id cepa_pool_1 delete
Create VMware protocol endpoint servers for File Vvols uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/vmwarepe create -server nas_1 -if if_1
View VMware protocol endpoint servers for File Vvols uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/vmwarepe show -detail
Delete a VMware protocol endpoint server uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/vmwarepe -id PES_0 delete
View details about the iSCSI configuration uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/iscsi/config show
List all iSCSI nodes on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/iscsi/node show
Change the network interface alias assigned to the node uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/iscsi/node -id ISCSIN_1 set -alias “Sample iSCSI node”
View details about the network ports uemcli /net/port/eth show
Sets the MTU size for Ethernet port 0 (eth0) on SP A to 9000 bytes uemcli /net/port/eth -id spa_eth0 set -mtuSize 9000
View details about the FC ports uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/port/fc show -detail
Change the speed for FC port fc1 on SP A to 1 Gbps uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/port/fc -id spa_fc1 set -speed 1Gbps
View details about uncommitted ports uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/port/unc show -detail
View a list of interfaces on the system uemcli /net/if/mgmt show
Change the settings for an interface uemcli /net/if/mgmt set -ipv4 static -addr 192.168.1.199 -netmask 255.255.255.0 -gateway 192.168.1.254
Create a replication interface uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/if create -type replication -port eth1_spb -addr 10.1.55.1 -netmask 255.255.255.0 -gateway 10.1.55.1
View a list of interfaces on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/if show -detail
Change the gateway address for interface IF_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/if -id IF_1 set -gateway 2001:ac8:0:253:c:0:2:50
Delete an interface uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/if -id IF_1 delete
Create an IP route uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/route create -if IF_1 -type net -target 10.55.99.10 -netmask 255.255.255.0 -gateway 10.55.99.254
View details about IP routes uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/route show -detail
Modify an existing IP route uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/route -id RT_1 set -target 10.55.99.99 -netmask 255.255.255.0 -gateway 10.55.99.254
Delete an IP route uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/route -id RT_1 delete
Create a link aggregation uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/la create -ports “eth0_SPA,eth1_SPA”
Show the link aggregations on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/la show
Change the settings of a link aggregation uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/la -id la0_SPA set -mtuSize 9000
Delete a link aggregation uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/la -id la0_SPA delete
Configure the DNS settings for the storage system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/dns/config set -nameServer “10.55.13.9,10.55.13.10”
View the DNS server addresses designated as a default uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/dns/config show
View details about configured DNS server domains uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/dns -server nas_1 show -detail
Configure a DNS server domain uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/dns -server nas_1 set -name "storage.thesanguy.com"
Create an NTP server record to specify the address of each NTP server uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/ntp/server create -server ntp.thesanguy.com
View details about the NTP server uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/ntp/server show
Configure the NTP server setting uemcli -d 10.1.55.1 -u Local/username -p 12345 /net/ntp/server set -addr "10.55.9.1,10.55.9.2"
Delete an NTP server record to remove the NTP settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/ntp/server -id NTP_10.5.1.207 delete
View details about NIS server domains uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/nis show -detail
Add NIS server addresses to an NIS server domain uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/nis -id nis.thesanguy.com set -ip "10.55.1.38"
View the IP addresses of the SMTP servers uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/smtp show
Specify the IP addresses for the SMTP server setting uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/smtp -id default set -addr 10.55.1.36
View whether NDMP is enabled or disabled uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/ndmp show
Enable NDMP uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/ndmp -server nas_0 set -enabled yes -passwd "passwd123"
View details for configured LDAP settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/ldap show
Update a configured LDAP setting uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/ldap -id LDAP_1 set -server webhost.thesanguy.com -port 389
Verify the connection to the LDAP server uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/ldap -id LDAP_1 verify
Delete an LDAP setting uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/ldap -id LDAP_1 delete
Ping a remote host from the specified NAS server interface uemcli /net/util ping -srcIf if_0 -addr 10.1.55.1
Display the route from the specified interface to a remote host uemcli /net/util/traceroute -srcIf if_0 -addr 10.1.55.1
Displays DHSM settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/dhsm show
Modify Distributed Hierarchical Storage Management settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/dhsm -server nas_0 set -state Enabled -username newname -passwd newpassword
Create an HTTP connection uemcli /net/nas/dhsmconn create -filesystem fs_1 -secondaryUrl http://10.1.0.115/export/dhsm1
View details for DHSM connections uemcli /net/nas/dhsmconn -fs fs_1 show
Modify settings for an existing DHSM connection uemcli /net/nas/dhsmconn -id dhsmconn_1 modify -mode recallOnly
Delete an existing HTTP connection uemcli /net/nas/dhsmconn -id dhsmconn_1 delete -recallPolicy no
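Chaining several of the commands above, a minimal new NAS server build (create the server, give it an interface, then enable NFS) might look like the following sketch; the server name, interface address, and IDs are placeholders consistent with the examples in the table:

uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/server create -name NasSanguy_2 -sp spa -pool pool_0
uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/if create -server nas_1 -port eth0_SPA -addr 10.1.55.20 -netmask 255.255.255.0
uemcli -d 10.1.55.1 -u Local/username -p passwd123 /net/nas/nfs create -server nas_1 -v4 yes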
Host Management
Create a host configuration uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/host create -name MyHost -descr "accounting" -type host -addr 10.64.74.10 -osType winxp
View details about a host configuration uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/host show -brief
Change the settings for a host configuration uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/host -id 1014 set -descr "Storage Team" -osType winxp
Delete a host configuration uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/host -id 999 delete
Lists all host LUNs on host Host_3 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/host/hlu -host Host_3 show -detail
Change the host LUN ID uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/host/hlu -id Host_3_sv_2_prod set -lunid 0
Create an FC or iSCSI initiator and assign it to a host configuration uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/initiator create -host 1014 -uid “20:00:00:00:A9:19:0A:CD:10:00:00:00:A9:19:CD:FD” -type fc
View a list of all initiators uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/initiator show
Modify an already created initiator uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/initiator -id 1058 set -host 1099
List all initiator paths on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/initiator/path show
Configure a remote system connection for local access uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/sys create -addr 10.55.1.98 -type VNX -dstUsername admin1 -dstPassword password12134
Verify the configuration settings for a remote system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/sys -id RS_1 verify
View the configuration for a remote system on the local system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/sys show -detail
Change the configuration settings for a remote system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/sys -id RS_1 set -addr "10.55.2.98" -dstUsername Local/username -dstPassword password1234
Delete the configuration for a remote system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/sys -id RS_1 delete
Add virtual center credentials uemcli -d 10.1.55.1 -u Local/username -p passwd123 /virt/vmw/vc create -addr 10.55.11.109 -username administrator@vsphere.local -passwd xxx -descr “Add vCenter”
Specify a new description for the VMware vCenter server uemcli /virt/vmw/vc -id VC_1 set -descr "This vCenter manages 5 Executive hosts"
Remove an existing VMware vCenter server and its ESXi hosts uemcli -d 10.1.55.1 -u Local/username -p passwd123 /virt/vmw/vc -id VC_1 delete
Displays a list of configured VMware vCenter servers uemcli -d 10.1.55.1 -u Local/username -p passwd123 /virt/vmw/vc show
Rescan details of all configured VMware vCenter servers. uemcli -d 10.1.55.1 -u Local/username -p passwd123 /virt/vmw/vc refresh -scanHardware
Adds a VMware ESXi host uemcli -d 10.1.55.1 -u Local/username -p passwd123 /virt/vmw/esx create -addr 10.1.1.1 -username root -passwd xxx -descr "Prod ESX"
Change ESXi host credentials and/or description uemcli -d 10.1.55.1 -u Local/username -p passwd123 /virt/vmw/esx -id ESX_1 set -descr "New Description"
Delete ESXi host credentials uemcli -d 10.1.55.1 -u Local/username -p passwd123 /virt/vmw/esx -id ESX_1 delete
Display a list of all configured VMware ESXi hosts uemcli -d 10.1.55.1 -u Local/username -p passwd123 /virt/vmw/esx -vc VC_1 show
List all VMware ESXi hosts on the specified VMware vCenter server uemcli -d 10.1.55.1 -u Local/username -p passwd123 /virt/vmw/esx discover -vc VC_1
Rescan details of a VMware ESXi host uemcli -d 10.1.55.1 -u Local/username -p passwd123 /virt/vmw/esx refresh -scanHardware
Display a list of all existing virtual machines on existing ESXi hosts uemcli -d 10.1.55.1 -u Local/username -p passwd123 /virt/vmw/vm -esx ESX_1 show
Manage hard disk properties for VMware virtual machines uemcli -d 10.1.55.1 -u Local/username -p passwd123 /virt/vmw/vmdevice -vm VM_1 show
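A typical host-registration flow chains three of the commands above; the host ID (1014) and initiator WWN are the same placeholders used in the table:

uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/host create -name MyHost -type host -addr 10.64.74.10 -osType winxp
uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/initiator create -host 1014 -uid "20:00:00:00:A9:19:0A:CD:10:00:00:00:A9:19:CD:FD" -type fc
uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/initiator/path show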
Hardware
View existing Storage Processors (SPs) uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/sp show
View existing drives uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/disk show
Display the details of all drives on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/disk show -detail
Rescan the system for available virtual disks uemcli -d 10.0.0.2 -u Local/username -p passwd123 /env/disk rescan
Change settings of an existing disk uemcli -d 10.0.0.2 -u Local/username -p passwd123 /env/disk -id vdisk_1 set -name "extreme perf storage"
Display a list of system batteries uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/bat show
View a list of system power supplies uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/ps show
View a list of LCCs uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/lcc show
View a list of system SSDs uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/ssd show
View a list of system DAEs uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/dae show
View details of the system DPE uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/dpe show
View a list of system memory modules uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/mm show
View a list of System Status Cards (SSC) uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/ssc show -detail
View a list of system fan modules uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/fan show
View details about I/O modules in the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/iomodule show
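For a quick hardware health sweep, the show commands above can simply be run back to back, for example:

uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/sp show
uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/disk show -detail
uemcli -d 10.1.55.1 -u Local/username -p passwd123 /env/ps show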
Storage Management
Create a dynamic pool uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool create -name MyPool -descr "dynamic pool" -diskGroup dg_2,dg_28 -drivesNumber 6,10 -storProfile profile_1,profile_2
Create a traditional pool on a model that supports dynamic pools uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool create -name MyPool -descr "traditional pool" -diskGroup dg_3,dg_28 -drivesNumber 5,9 -storProfile tprofile_1,tprofile_2 -type traditional
Create a traditional pool on a model that does not support dynamic pools uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool create -name MyPool -descr "my large pool" -storProfile profile_19,profile_20 -diskGroup dg_15,dg_16 -drivesNumber 5,9 -FASTCacheEnabled yes
Create a traditional pool with two virtual disks uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool create -name vPool -descr "my virtual pool" -disk vdisk_0,vdisk_2
Set the subscription alert threshold for pool_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool -id pool_1 set -alertThreshold 70 -FASTCacheEnabled no
Add new drives to a pool to increase its storage capacity uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool -id pool_1 extend -diskGroup dg_1 -drivesNumber 7 -storProfile profile_99
View a list of pools uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool show -detail
Show all pools for a model that supports dynamic pools uemcli -d 10.0.0.2 -u Local/username -p passwd123 /stor/config/pool show -detail
Shows details for all pools on a virtual system uemcli -d 10.0.0.2 -u Local/username -p passwd123 /stor/config/pool show -detail
Delete a pool uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool -id pool_1 delete
Modify FAST VP settings on an existing pool uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool/fastvp -pool pool_1 set -schedEnabled yes
View FAST VP settings on a pool uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool/fastvp show -detail
Start data relocation on a pool uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool/fastvp -pool pool_1 start -endTime 09:00
Stop data relocation on a pool uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool/fastvp -pool pool_1 stop
Show tier details about the specified pool uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool/tier -pool pool_1 show -detail
Shows details for all storage resources associated with the pool uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/pool/sr -pool pool_1 show -detail
Change FAST VP general settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/fastvp set -schedEnabled yes -days “Mon,Fri” -at 23:00 -until 09:00
View the FAST VP general settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/fastvp show -detail
Configure FAST Cache uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/fastcache create -diskGroup dg9 -drivesNumber 6 -enableOnExistingPools
View the FAST Cache parameters uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/fastcache show -detail
Extend the FAST Cache by adding more drives uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/fastcache extend -diskGroup dg9 -drivesNumber 6
Shrink the FAST Cache by removing storage objects uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/fastcache shrink -so rg_1
Delete the FAST Cache configuration uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/fastcache delete
View a list of all storage objects, including RAID groups and drives uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/fastcache/so show
Show details for storage profiles uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/profile -configurable show
View details about drive groups on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/dg show -detail
View the current storage system capacity settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/general/system show
View the current system tier capacity settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/general/tier show
View details about a file system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/fs -id res_99 show
Specify Event Publishing protocols uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/fs -id res_99 set -eventProtocols nfs,cifs
Delete a file system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/fs -id res_99 delete
Creates a user quota (for user 201 on file system res_1) uemcli -d 10.1.55.1 -u Local/username -p passwd123 /quota/user create -fs res_1 -path /qtree_1 -userId 201 -softLimit 20G -hardLimit 50G
View the user quota for a specific UNIX user uemcli -d 10.1.55.1 -u Local/username -p passwd123 /quota/user -fs res_1 -path /qtree_1 -unixName nasadmin show -detail
Create quota tree /qtree_1 on file system res_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /quota/tree create -fs res_1 -path /qtree_1 -softLimit 500G -hardLimit 999G
Display space usage information for all quota trees (on res_1) uemcli -d 10.1.55.1 -u Local/username -p passwd123 /quota/tree -fs res_1 show -detail
Refresh quota information for all quota trees on res_1 fs uemcli -d 10.1.55.1 -u Local/username -p passwd123 /quota/tree -fs res_1 refresh /
Delete quota tree /qtree_1 on file system res_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /quota/tree -fs res_1 -path /qtree_1 delete
List config info for quota tree /quota/config on res_1 fs uemcli -d 10.1.55.1 -u Local/username -p passwd123 /quota/config -fs res_1 show -detail
Create an NFS share to export a file system through NFS uemcli -u admin -p Password123! /stor/prov/fs/nfs create -name testnfs112 -fs res_26 -path "mypath"
View details of an NFS share uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/fs/nfs show -detail
Delete an NFS share uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/fs/nfs -id NFSShare_1 delete
Creates a CIFS share uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/fs/cifs create -name CIFSshare -descr "a share" -fs fs1 -path "/cifsshare" -enableContinuousAvailability yes -enableCIFSEncryption yes
List details for all CIFS network shares uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/fs/cifs show
Set the description of a CIFS share uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/fs/cifs -id SMBShare_1 set -descr "a share"
Delete a CIFS (SMB) share uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/fs/cifs -id CIFSShare_1 delete
Create a LUN uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/luns/lun create -name "TheLUN" -descr "The LUN" -type primary -group group1 -pool pool_1 -size 999M
Display the list of existing LUNs uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/luns/lun show
Change the settings of a LUN uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/luns/lun -id lun_1 set -name NewName -descr "My new description" -size 150M
Delete a LUN uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/luns/lun -id lun_1 delete
Refresh a LUN’s thin clone uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/luns/lun -id lun_5_tc refresh -source SNAP_2 -copyName Backup1
Create a consistency group uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/luns/group create -name GenericStorage01 -descr "MyStorage" -sched SCHD_1
Display the list of existing consistency groups uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/luns/group show -detail
Change the settings for a consistency group uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/luns/group -id group_1 set -name NewName -descr "New Descr" -sched SCHD_2 -schedPaused yes -fastvpPolicy startHighThenAuto
Delete a consistency group uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/luns/group -id group_1 delete
Create an NFS datastore uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/nfs create -name Executive -descr "Executive VMs" -server nas_1 -pool capacity -size 100G -rwHosts host1 -esxMountProtocol NFSv4 -minSecurity krb5 -nfsOwner john -defAccess na
View details about an NFS datastore uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/nfs show
Change the settings for an NFS datastore uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/nfs -id NFSDS_1 set -roHosts "HOST_1,HOST_2" -naHosts "HOST_3"
Delete an NFS datastore uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/nfs -id NFSDS_1 delete
Create a VMFS datastore uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/vmfs create -name "Banking 3" -descr "Banking Grp 3" -pool capacity -size 100G -thin yes -vdiskHosts "1166,1167"
Display the list of existing VMFS datastores uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/vmfs show
Change the settings for a VMFS datastore uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/vmfs -id VMFS1 set -name engineering2 -descr "Eng Grp 2"
Delete a VMFS datastore uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/vmfs -id VMFS_1 delete
Refresh the thin clone of a VMFS datastore uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/vmfs -id vmware_2_tc refresh -source snapshot2 -copyName Backup1
Display a list of existing protocol endpoints and their characteristics uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/pe show -detail
Changes the settings for a VMware protocol endpoint (iSCSI) uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/pe -id rfc9999.e53a24f1-3324-9999-80a3-c2cabb322a1c set -lunid 5
Create a datastore for VMware VVols uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/vvolds create -name "HR" -cp cp_1,cp_2 -size 10G,12G -type file -hosts "HostA,HostB"
Display a list of existing VVol datastores and their characteristics uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/vvolds show -detail
Modify an existing VVol datastore uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/vvolds -id res_1 set -name MyNewName -descr "Descr" -addCp cp_1 -size 10G
Delete VVol datastores and their associated virtual volumes uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/vvolds -id res_1 delete -force yes
Display existing VVol datastore allocations uemcli /stor/prov/vmware/vvolds/alloc -vvolds vvolds_1 -pool pool_1 show -detail
Display a list of existing VVol objects for a virtual machine uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/vvol -vm VM_1 show -detail
Deletes the specified existing VVol objects uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/prov/vmware/vvol -id naa.6006016005603c009370093e194fca3f delete
Create a capability profile for VVol datastores uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/cp create -name "CapabilityProfile1" -pool pool_1 -usageTag "Prod"
Display a list of existing capability profiles and their characteristics uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/cp show -detail
Modify an existing capability profile uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/cp -id cp_1 set -name "CapabilityProfile2"
Deletes specified capability profiles uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/cp -id cp_1 delete
Create an I/O limit policy uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/iolimit create -name "SALES" -descr "for Sales Dept" -shared yes -type absolute -maxIOPS 500 -maxKBPS 1000
Delete an I/O limit policy uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/iolimit -id IOL_1 delete
Change the settings of an existing I/O limit policy uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/iolimit -id IOL_1 set -name "HR" -maxIOPS 1000 -noKBPS
Display the settings for the specified I/O limit policy uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/iolimit show -detail
Display the settings for the existing I/O limit configuration setting uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/iolimit/config show
Enforces the use of I/O limits on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /stor/config/iolimit/config set -paused no
Data Protection
Create a snapshot of a storage resource uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap create -name accounting -source FS_1 -keepFor 1d
View details about snapshots on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap show -detail
Attach snapshot SNAP_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap -id SNAP_1 attach -type dynamic -roHosts HostA,HostB -rwHosts HostC,HostD
Refresh a block snapshot uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap -id 38654705680 refresh -copyName copy1
Replicate snapshots after they have been created uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap -id 38654705680 replicate -keepRemotelyFor 1d
Detaches snapshot SNAP_1 uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap -id SNAP_1 detach
Restore Snapshot uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap -id SNAP_1 restore
Delete Snapshot uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap -id SNAP_1 delete
Copy a snapshot uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap -id SNAP_1 copy -copyName SNAP_Copy
Change the settings of a snapshot uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap -id SNAP_1 set -newName MySnap
Create a snapshot NFS share uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap/nfs create -name NFSshare -descr "SHARENAME" -snap SNAP1 -path / -roHosts "HostA,HostB" -rwHosts "HostC"
Lists the existing snapshot NFS shares uemcli /prot/snap/nfs show -detail
Modifies an existing snapshot NFS share uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap/nfs -id nfs_1 set -descr "SHARENAME"
Delete (destroy) a snapshot NFS share uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap/nfs -id nfs_1 delete
Create a snapshot CIFS (SMB) share uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap/cifs create -name CIFSshare -descr "SHARENAME" -path /
Lists the existing snapshot CIFS (SMB) shares uemcli /prot/snap/cifs show
Modifies an existing snapshot CIFS (SMB) share uemcli /prot/snap/cifs -id cifs_1 set -descr "My share"
Delete (destroy) a snapshot CIFS (SMB) share uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/snap/cifs -id smb_1 delete
Create a replication session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/rep/session create -name REP1 -srcRes RS_1 -dstType remote -dstSys RS_2 -dstRes LUN_2 -syncType auto -rpo 02h30m
View details about replication sessions uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/rep/session show {-detail}
Change the settings for a replication session uemcli /prot/rep/session -id 64518754321_AFCDEF34234A3B_0000_35674324567_ADCDF154321341_0000 set -srcSPAInterface if_1 -srcSPBInterface if_2 -dstSPAInterface if_3 -dstSPBInterface if_4
Manually synchronize a replication session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/rep/session -id REPS_1 sync
Delete a replication session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/rep/session -id 64518754321_AFCDEF34234A3B_0000_35674324567_ADCDF154321341_0000 delete
Fail over a replication session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/rep/session -id 64518754321_AFCDEF34234A3B_0000_35674324567_ADCDF154321341_0000 failover
Fail back a replication session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/rep/session -id 64518754321_AFCDEF34234A3B_0000_35674324567_ADCDF154321341_0000 failback
View the RPA CHAP account uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/rpa/chap show
Modify the RPA CHAP account uemcli -d 10.1.55.1 -u Local/username -p passwd123 /remote/rpa/chap set -outUsername admin -outSecret abcdef123456
View Data at Rest Encryption settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/encrypt show -detail
Enable encryption setting for KMIP support uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/encrypt set -kmipEnabled yes
View settings for KMIP support uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/encrypt/kmip show
Change key management server parameters for KMIP support uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/encrypt/kmip set -username skoobee -passwd doobee -port 5696 -timeout 20 -addr 10.1.1.1
Verify the current connection to the KMIP server uemcli -d 10.1.55.1 -u Local/username -p passwd123 /prot/encrypt/kmip verify
Data Mobility
Displays all existing import sessions on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /import/session show -detail
Create a block import session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /import/session/block create -name lun_17_import -srcSys RS_65596 -srcRes 17 -lunPoolPairs 17:pool_1 -importAsVMwareDatastore yes
Change the settings for a block import session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /import/session/block -id import_1 set -name newName -throttle no -cutoverThreshold 5
Cut over block import session to the target system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /import/session/block -id import_1 cutover
Cancel an existing block import session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /import/session/block -id import_1 cancel
View details about import sessions for block uemcli -d 10.1.55.1 -u Local/username -p passwd123 /import/session/block show -detail
Create a NAS import session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /import/session/nas create -name MyName1 -srcSys RS_1 -srcRes src_vdm_to_migrate -targetResPool pool_1 -targetImportIf if_3 -productionIfPortPairs source_interface_1:spa_iom_0_eth1,source_interface_2:spa_iom_0_eth0 -fsPoolPairs 100~200:pool_2,255:pool_3 -srcFsImportedAsVMWareDatastore 13,20~25,30 -defaultProductionPort spa_iom_0_eth0 -skipServerParamCheck -flrImport yes -unixDirectoryService directMatch
Change the settings for a NAS import session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /import/session/nas -id import_1 set -name newName -targetResPool pool_2 -targetImportIf if_3 -productionIfPortPairs source_interface_1:spa_iom_0_eth1,source_interface_2:spa_iom_0_eth0 -fsPoolPairs 100~200:pool_2,255:pool_3 -srcFsImportedAsVMWareDatastore 17~20 -srcFsImportedWithCompressionEnabled 31,40~45
Cut over a NAS import session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /import/session/nas -id import_1 cutover
Commit an existing NAS import session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /import/session/nas -id import_1 commit
Cancel an existing NAS import session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /import/session/nas -id import_1 cancel
View details about import sessions for NAS uemcli -d 10.1.55.1 -u Local/username -p passwd123 /import/session/nas show -detail
Display import status for the specified import session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /import/session/element -importId import_2 show -detail
Create a LUN move session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /move/session create -srcRes sv_1 -targetPool pool_1 -priority above -thin yes -compressed no
Display details for a LUN move session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /move/session show -detail
Modify the settings of a move session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /move/session -id MoveSession_1 set -priority below
Delete a LUN move session uemcli -d 10.1.55.1 -u Local/username -p passwd123 /move/session -id movesession_1 delete
Cancel a LUN move session that is in progress uemcli -d 10.1.55.1 -u Local/username -p passwd123 /move/session -id movesession_1 cancel
Events and Alerts
View a detailed log of system events uemcli -d 10.1.55.1 -u Local/username -p passwd123 /event/log show -fromTime "2017-01-05 00:00:00.000" -toTime "2017-01-05 23:59:59.999"
View a detailed list of all system alerts uemcli -d 10.1.55.1 -u Local/username -p passwd123 /event/alert/hist show
Acknowledge specific alerts uemcli -d 10.1.55.1 -u Local/username -p passwd123 /event/alert/hist -id alert_2 ack
Delete specific alerts uemcli -d 10.1.55.1 -u Local/username -p passwd123 /event/alert/hist -id alert_3 delete
View the settings for how the system handles alerts uemcli -d 10.1.55.1 -u Local/username -p passwd123 /event/alert/conf show
Configure the settings for how the system handles alerts uemcli -d 10.1.55.1 -u Local/username -p passwd123 /event/alert/conf set -emailFromAddr "from@mail.com" -emailToAddrs "x@mail.com,z@mail.com" -emailSeverity info -snmpSeverity error
Create an SNMP trap destination for system alerts uemcli -d 10.1.55.1 -u Local/username -p passwd123 /event/alert/snmp create -host 10.64.75.1 -port 333 -userName user1 -authProto md5 -authPassword authpassword1234 -privProto des -privPassword passwd123
View details about SNMP destinations uemcli -d 10.1.55.1 -u Local/username -p passwd123 /event/alert/snmp show
Change the settings for an SNMP destination uemcli -d 10.1.55.1 -u Local/username -p passwd123 /event/alert/snmp -id Host1_323 set -authProto md5 -authPassword newauthpassword -privProto des -privPassword newpasswd
Delete an SNMP destination uemcli -d 10.1.55.1 -u Local/username -p passwd123 /event/alert/snmp -id Host1_323 delete
System Maintenance
Changes the service password uemcli -d 10.1.55.1 -u Local/username -p passwd123 /service/user set -passwd NewPassword456! -oldpasswd OldPassword456!
Restarts management software on the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /service/system restart
Shuts down the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /service/system shutdown
Reinitialize the storage system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /service/system reinit
Collect service information uemcli -d 10.1.55.1 -u Local/username -p passwd123 /service/system collect -serviceInfo {-type perfAssessment}
Show details for a specific system core dump uemcli -d 10.1.55.1 -u local/serviceuser -p Password /service/system/dump -id "mspb:logDaemon_:2017-12-25_01_33_22_473_logDaemon.x" show
Delete a core dump uemcli -d 10.1.55.1 -u local/serviceuser -p Password /service/system/dump -id mspa:CP_:2018-01-22_15_11_39_13422_ECOM delete
Enable SSH access to the system uemcli -d 10.1.55.1 -u Local/username -p passwd123 /service/ssh set -enabled yes
Display SSH settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /service/ssh show
Switch the storage processor to the service mode uemcli -d 10.1.55.1 -u Local/username -p passwd123 /service/sp -id spa service
Reboot the storage processor uemcli -d 10.1.55.1 -u Local/username -p passwd123 /service/sp -id spa reboot
Reimage the storage processor uemcli -d 10.1.55.1 -u Local/username -p passwd123 /service/sp -id spa reimage
Manage Metrics
View the current metrics service settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /metrics/service show
Enable historical metrics collection uemcli -d 10.1.55.1 -u Local/username -p passwd123 /metrics/service set -historyEnabled yes
View information about supported metrics uemcli -d 10.1.55.1 -u Local/username -p passwd123 /metrics/metric show {-detail}
Displays all available real-time metric service settings uemcli -d 10.1.55.1 -u Local/username -p passwd123 /metrics/metric -availability real-time show
Displays metrics service settings for a specific path uemcli -d 10.1.55.1 -u Local/username -p passwd123 /metrics/metric -path sp.*.storage.lun.*.avgReadSize,sp.*.storage.filesystem.*.writesRate,sp.*.cifs.smb2.basic.readsRate show -detail
View historical metrics values uemcli -d 10.1.55.1 -u Local/username -p passwd123 /metrics/value/hist -path sp.spa.storage.lun.sv_1.readsRate show -interval 60 -from "2017-01-24 01:42:00" -to "2017-01-24 04:14:00"
View real-time metrics values uemcli -d 10.1.55.1 -u Local/username -p passwd123 /metrics/value/rt -path sp.*.storage.lun.*.readsRate show -interval 10

Data Domain CLI Command Reference Guide

Other CLI Reference Guides:
Isilon CLI  |  EMC ECS CLI  |  VNX NAS CLI  |  ViPR Controller CLI  |  NetApp Clustered ONTAP CLI  |  Brocade FOS CLI  |  EMC XTremIO CLI

This is a Data Domain CLI Command Reference Guide for the commands that are more commonly used.

If you’re looking to automate reports for your Data Domain, see my post Easy Reporting on Data Domain using the Autosupport Log.

Alerting
# alerts notify-list create <group-name> Creates a notification list and subscribes to events belonging to the specified list of classes and severity levels.
# alerts notify-list add <group-name> Adds to a notification list and subscribes to events belonging to the specified list of classes and severity levels.
# alerts notify-list del <group-name> Deletes members from a notification list, a list of classes, a list of email addresses.
# alerts notify-list destroy <group-name> Destroys a notification list
# alerts notify-list reset Resets all notification lists to factory default
# alerts notify-list show Shows notification lists’ configuration
# alerts notify-list test Sends a test notification to alerts notify-list
CIFS and NFS
# cifs share create <share> path <path> {max-connections <max-connections> | clients <clients> | users <users> | comment <comment>} Create a CIFS share
# cifs status Check CIFS Status
# cifs disable Disable CIFS Service
# cifs enable Enable CIFS Service
NFS
# nfs add path client-list [(option-list)] Add NFS clients to an Export
# nfs show active List clients active in the past 15 minutes and the mount path for each
# nfs show clients List NFS clients allowed to access the Data Domain system and the mount path and NFS options for each
# nfs show detailed-stats Display NFS cache entries and status to facilitate troubleshooting
# nfs status Display NFS status
# nfs enable Enable NFS service
# nfs disable Disable NFS service
DD Boost
# ddboost enable Enable DDBoost
# ddboost status show DDBoost status
# ddboost set user-name <user-name> Set DD Boost user
# ddboost access add clients <client-list> Add clients to DD Boost access list
# ddboost storage-unit create <storage-unit-name> Create storage-unit, setting quota limits
# ddboost storage-unit delete <storage-unit-name> Delete storage-unit
# ddboost storage-unit show [compression] [<storage-unit-name>] List the storage-units and images in a storage-unit
# ddboost storage-unit create <storage-unit> user <user-name> Create a storage unit, assign tenant, and set quota and stream limits
# ddboost storage-unit delete <storage-unit> Delete a specified storage unit, its contents, and any DD Boost associations
# ddboost storage-unit rename <storage-unit> <new-storage-unit> Rename a storage-unit
# ddboost storage-unit undelete <storage-unit> Recover a deleted storage unit
# ddboost option reset Reset DD Boost options
# ddboost option set distributed-segment-processing {enabled|disabled} Enable or disable distributed-segment-processing for DD Boost
# ddboost option set virtual-synthetics {enabled | disabled} Enable or disable virtual-synthetics for DD Boost
# ddboost option show Show DD Boost options
# ddboost option set fc {enabled | disabled} Enable or disable fibre-channel for DD Boost
# ddboost fc dfc-server-name set <server-name> Set the DD Boost Fibre Channel server name
# ddboost fc dfc-server-name show Show the DD Boost Fibre Channel server name
# ddboost fc status Show DD Boost Fibre Channel status
# ddboost fc group show list [<group-spec>] [initiator <initiator-spec>] List configured DD Boost FC groups
# ddboost fc group create <group-name> Create a DDBoost FC group
# ddboost fc group add <group-name> initiator <initiator-spec> Add initiators to a DDBoost FC group
# ddboost fc group add <group-name> device-set Add DDBoost devices to a DDBoost FC group
Encryption and File system Locking
# filesys enable Enables the file system
# filesys disable Disables the file system
# filesys encryption enable Enables encryption. Enter a passphrase when prompted
# filesys encryption disable Disables encryption.
# filesys encryption show Checks the status of the encryption feature
# filesys encryption lock Locks the system by creating a new passphrase and destroying the cached copy of existing passphrase
# filesys encryption passphrase change Changes the passphrase for system encryption keys
# filesys encryption unlock Prepares the encrypted file system for use after it has arrived at its destination (see the example sequence below)
Licensing
# license add <license-code> [<license-code> …] Adds one or more licenses for features and storage capacity.
# license show [local] Displays license codes currently installed.
# license del <license-code> Deletes one or more licenses.
# license reset Removes all licenses and requires confirmation before deletion.
Network
# net show settings Displays the interface’s network settings
# net show hardware Displays the interface’s hardware configuration
# net show config Displays the active network configuration
# net show domainname Displays the domain name associated with this device
# net show searchdomain Lists the domains that will be searched when only a host name is provided for a command
# net show dns Lists the domain name servers used by this device.
# net show stats Provides a number of different networking statistics
# net show all Combines the output of several other net show CLI commands
Replication, Throttling, LBO, Encryption
# replication enable {<destination> | all} Enables replication
# replication disable {<destination> | all} Disables replication
# replication add source <source> destination <destination> Creates a replication pair
# replication break {<destination> | all} Removes the source or destination DD system from a replication pair
# replication initialize <destination> Initialize replication on the source (configure both source and destination first; see the example sequence after this list)
# replication modify <destination> {source-host | destination-host} <new-host-name> Modifies connection host, hostname
# replication modify <destination> connection-host <new-host-name> [port <port>] Modifies port
# replication add … low-bw-optim enabled Adds LBO
# replication modify … low-bw-optim enabled Modify LBO
# replication modify … low-bw-optim disabled Disable
# replication add … encryption enabled Add encryption over wire
# replication modify … encryption enabled Enable encryption over wire
# replication modify … encryption disabled Disable encryption over wire
# replication option set listen-port <port> Modify listening port  [context must be disabled before the connection port can be modified]
# replication option reset listen-port Reset listening port  [context must be disabled before the connection port can be modified]
# replication throttle add <sched-spec> <rate> Add a throttle schedule
# replication throttle add destination <host> <sched-spec> <rate> Add a destination specific throttle
# replication throttle del <sched-spec> Delete a throttle schedule
# replication throttle reset {current | override | schedule | all} Reset throttle configuration
# replication throttle set current <rate> Set a current override
# replication throttle set override <rate> Set a permanent override
# replication throttle show [KiB] Show throttle configuration
Retention Lock
# mtree retention-lock enable mtree_name Enables the retention-lock feature for the specified MTree
# mtree retention-lock disable mtree_name Disables the retention-lock feature for the specified MTree
# mtree retention-lock reset Resets the value of the retention period for the specified MTree to its default
# mtree retention-lock revert Reverts the retention lock for all files on a specified path
# mtree retention-lock set Sets the minimum or maximum retention period for the specified MTree
# mtree retention-lock show Shows the minimum or maximum retention period for the specified MTree
# mtree retention-lock status mtree_name Shows the retention-lock status for the specified MTree
Sanitization
# system sanitize abort Aborts the sanitization process
# system sanitize start Starts the sanitization process immediately
# system sanitize status Shows the current sanitization status
# system sanitize watch Monitors sanitization progress
SMT MTree stats
# mtree list List the MTrees on a Data Domain system
# mtree show stats Collect MTree real-time performance statistics
# mtree show performance Collect performance statistics for MTrees associated with a tenant-unit
# mtree show compression Collect compression statistics for MTrees associated with a tenant-unit
# quota capacity show List capacity quotas for MTrees and storage-units
# ddboost storage-unit modify Adjust or modify the quotas after the initial configuration
System Performance
# system show stats interval [interval in seconds] Shows system stats (disk, IO, etc.)
# system show performance [ {hr | min | sec} [ {hr | min | sec} ]] Show System Performance
NDMP
# ndmpd enable Enable the NDMP daemon
# ndmpd show devicenames Verify that the NDMP daemon sees the devices created in the TapeServer access group
# ndmpd user add ndmp Add an NDMP user
# ndmpd option show all Check the options for the ndmpd daemon
# ndmpd option set authentication md5 Set the ndmpd service authentication to MD5
# ndmpd option show all Verify the service authentication

Storage Performance Benchmarking with FIO

Flexible IO Tester (FIO) is an open-source synthetic benchmark tool first developed by Jens Axboe.  FIO can generate various IO workloads: sequential reads or random writes, synchronous or asynchronous, all based on the options provided by the user.  FIO provides various global options through which different types of workloads can be generated.  It’s one of the easiest and most versatile tools for quickly performing IO performance tests on a storage system, and it allows you to simulate different types of IO loads and tweak several parameters, among them the write/read mix and the number of processes.  I’ll likely make a few additional posts with some of the other storage benchmarking tools I’ve used, but I’m focusing on FIO for this post.  Why FIO?  It’s a great tool, and its pros outweigh its cons for me.

Pros

  • It has a batch mode and a very extensive set of parameters.
  • Unlike IOMeter, it is still being actively developed.
  • It has multi-OS support.
  • It’s free.

Cons

  • It is CLI only; there are no GUI or graphics features.
  • It has a rather complex syntax and it takes some time to get the hang of it.

Download and Installation

FIO can be run from either Linux or Windows, although Windows will first require an installation of Cygwin.  FIO works on Linux, Solaris, AIX, HP-UX, OSX, NetBSD, OpenBSD, Windows, FreeBSD, and DragonFly.  Some features and options may only be available on some of the platforms, typically because those features only apply to that platform (like the solarisaio engine, or the splice engine on Linux).  Note that you can check github for the latest version before you get started.

You can run the following commands from a Linux server to download and install the FIO package:

cd /root

yum install -y make gcc libaio-devel || ( apt-get update && apt-get install -y make gcc libaio-dev  </dev/null )

wget https://github.com/Crowd9/Benchmark/raw/master/fio-2.0.9.tar.gz ; tar xf fio*

cd fio*

make

How to compile FIO on 64-bit Windows:

Install Cygwin (http://www.cygwin.com/). Install make and all packages starting with mingw64-i686 and mingw64-x86_64.

Open the Cygwin Terminal.

Go to the fio directory (source files).

Run “make clean && make -j”.

To build fio on 32-bit Windows, run “./configure --build-32bit-win” before “make”.

FIO Cheat sheet

With FIO compiled, we can now run some tests.  For reference, I’ll start off with some basic commands for simulating different types of workloads.

Sequential Reads – Async mode – 8K block size – Direct IO – 100% Reads

fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=8k --numjobs=8 --size=1G --runtime=600  --group_reporting

Sequential Writes – Async mode – 32K block size – Direct IO – 100% Writes

fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio --bs=32k --numjobs=4 --size=2G --runtime=600 --group_reporting

Random Reads – Async mode – 8K block size – Direct IO – 100% Reads

fio --name=randread --rw=randread --direct=1 --ioengine=libaio --bs=8k --numjobs=16 --size=1G --runtime=600 --group_reporting

Random Writes – Async mode – 64K block size – Direct IO – 100% Writes

fio --name=randwrite --rw=randwrite --direct=1 --ioengine=libaio --bs=64k --numjobs=8 --size=512m --runtime=600 --group_reporting

Random Read/Writes – Async mode – 16K block size – Direct IO – 90% Reads/10% Writes

fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=16k --numjobs=8 --rwmixread=90 --size=1G --runtime=600 --group_reporting

Host Considerations

To avoid IO being satisfied from the host system cache, use the --direct option, which reads and writes directly to disk.  Use Linux native asynchronous IO by setting --ioengine to libaio.  When FIO is launched, it creates a file with the name provided in --name, at the size provided in --size, doing IO at the block size set in --bs.  If --numjobs is provided, it creates one file per job, named name.n.0, where n ranges from 0 to numjobs-1.

--numjobs = The more jobs, the higher the performance can be, based on resource availability.  If your server is limited on resources (TCP or FC), I’d recommend running FIO across multiple servers to push a higher workload to the storage array.

Block Size Matters

Many storage vendors will advertise performance benchmarks based on 4K block sizes, which can artificially inflate the total IO number that the array is capable of handling.  In my professional experience with the workloads I’ve supported, the most popular read size is between 32KB and 64KB and the most popular write size is between 8KB and 32KB.  VMware-heavy environments may skew a bit lower in read block size.  Read IO is typically more common than write IO, at a ratio of around 3:1.  It’s important to know the characteristics of your workload before you begin testing, as we need to look at IO size as a weight attached to the IO: an IO of size 64KB has a weight 8 times higher than an IO of size 8KB, since it moves 8 times as many bytes, and a 256K block has 64 times the payload of a 4K block.  Both examples take substantially more effort for every component of the storage stack to satisfy the IO request. Applications and the operating systems they run on generate a wide, ever-changing mix of block sizes based on the characteristics of the application and the workloads being serviced. Reads and writes are often delivered using different block sizes as well. Block size has a significant impact on the latency your applications see.

Try to understand the IO size distributions of your workload and use those IO size modalities when you develop your FIO test commands. If a single IO size is a requirement for a quick rule-of-thumb comparison, then 32KB has been a pretty reasonable number for me to use, as it is a logical convergence of the weighted IO size distribution of most of the shared workload arrays I’ve supported. Your mileage may vary, of course.

Because block sizes have different effects on different storage systems, visibility into this metric is critical. The storage fabric, the protocol, the processing overhead on the HBAs, the switches, the storage controllers, and the storage media are all affected by it.

General Tips on Testing

Work on large datasets.  Your dataset should be at least double the amount of RAM in the OS.  For example, if the OS RAM is 16GB, test 32GB datasets, multiplied by the number of CPU cores if you’re running one job per core.

The Rule of Thumb:  75/25.  Although it really depends on your workloads, typically the rule of thumb is that there are 25% writes and 75% reads on the dataset.

Test from small to large blocks of I/O.  Consider testing small blocks of I/O up to large blocks of I/O in the following order: 512 bytes, 4K, 16K, 64K, 1MB, to get measurements that can be visualized as a histogram. This makes the results easier to interpret.

Test multiple workload patterns.  Not everything is sequential read/write. Test all scenarios: read / write, write only, read only, random read / random write, random read only, and random write only.

Sample Output

Here’s a sample command string for FIO that includes many of the command switches you’ll want to use.  Each parameter can be tweaked for your specific environment.  It creates 8 files (numjobs=8), each 512MB in size (size=512m), at a 64K block size (bs=64k), and performs random read/write (rw=randrw) with a mixed workload of 70% reads and 30% writes (rwmixread=70). The job runs for a full 5 minutes (runtime=300 and time_based), even if the files have already been fully written and read.

[root@server1 fio]# fio --name=randrw --ioengine=libaio --iodepth=1 --rw=randrw --bs=64k --direct=1 --size=512m --numjobs=8 --runtime=300 --group_reporting --time_based --rwmixread=70
randrw: (g=0): rw=randrw, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=1

Output:

 Starting 8 processes

 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 Jobs: 8 (f=8): [mmmmmmmm] [2.0% done] [252.0MB/121.3MB/0KB /s] [4032/1940/0 iops] [eta 04m:55s]
randrw: (groupid=0, jobs=8): err= 0: pid=31900: Mon Jun 13 01:01:08 2016
 read : io=78815MB, bw=269020KB/s, iops=4203, runt=300002msec
 slat (usec): min=6, max=173, avg= 9.99, stdev= 3.63
 clat (usec): min=430, max=23909, avg=1023.31, stdev=273.66
 lat (usec): min=447, max=23917, avg=1033.46, stdev=273.78
 clat percentiles (usec):
 | 1.00th=[ 684], 5.00th=[ 796], 10.00th=[ 836], 20.00th=[ 892],
 | 30.00th=[ 932], 40.00th=[ 964], 50.00th=[ 996], 60.00th=[ 1032],
 | 70.00th=[ 1080], 80.00th=[ 1128], 90.00th=[ 1208], 95.00th=[ 1288],
 | 99.00th=[ 1560], 99.50th=[ 2256], 99.90th=[ 3184], 99.95th=[ 3408],
 | 99.99th=[13888]
 bw (KB /s): min=28288, max=39217, per=12.49%, avg=33596.69, stdev=1709.09
 write: io=33899MB, bw=115709KB/s, iops=1807, runt=300002msec
 slat (usec): min=7, max=140, avg=11.42, stdev= 3.96
 clat (usec): min=1246, max=24744, avg=2004.11, stdev=333.23
 lat (usec): min=1256, max=24753, avg=2015.69, stdev=333.36
 clat percentiles (usec):
 | 1.00th=[ 1576], 5.00th=[ 1688], 10.00th=[ 1752], 20.00th=[ 1816],
 | 30.00th=[ 1880], 40.00th=[ 1928], 50.00th=[ 1976], 60.00th=[ 2040],
 | 70.00th=[ 2096], 80.00th=[ 2160], 90.00th=[ 2256], 95.00th=[ 2352],
 | 99.00th=[ 2576], 99.50th=[ 2736], 99.90th=[ 4256], 99.95th=[ 4832],
 | 99.99th=[16768]
 bw (KB /s): min=11776, max=16896, per=12.53%, avg=14499.30, stdev=907.78
 lat (usec) : 500=0.01%, 750=1.61%, 1000=33.71%
 lat (msec) : 2=50.35%, 4=14.27%, 10=0.04%, 20=0.02%, 50=0.01%
 cpu : usr=0.46%, sys=1.60%, ctx=1804510, majf=0, minf=196
 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 issued : total=r=1261042/w=542389/d=0, short=r=0/w=0/d=0
 latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
 READ: io=78815MB, aggrb=269020KB/s, minb=269020KB/s, maxb=269020KB/s, mint=300002msec, maxt=300002msec
 WRITE: io=33899MB, aggrb=115708KB/s, minb=115708KB/s, maxb=115708KB/s, mint=300002msec, maxt=300002msec

Additional Samples

I’ll run through an additional set of simple examples of using FIO as well using different workload patterns.

Random read/write performance

If you want to compare disk performance with a simple 3:1 4K read/write test, use the following command:

./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75

This command string creates a 4GB file and performs 4KB reads and writes using a 75%/25% split within the file, with 64 operations running at a time. The 3:1 ratio represents a typical database workload.

The output is below; the read and write IOPS lines are the key numbers to note.

Jobs: 1 (f=1): [m] [100.0% done] [43496K/14671K /s] [10.9K/3667 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=31214: Fri May 9 16:01:53 2014
read : io=3071.1MB, bw=39492KB/s, iops=8993 , runt= 79653msec
write: io=1024.7MB, bw=13165KB/s, iops=2394 , runt= 79653msec
cpu : usr=16.26%, sys=71.94%, ctx=25916, majf=0, minf=25
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=786416/w=262160/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=3071.1MB, aggrb=39492KB/s, minb=39492KB/s, maxb=39492KB/s, mint=79653msec, maxt=79653msec
WRITE: io=1024.7MB, aggrb=13165KB/s, minb=13165KB/s, maxb=13165KB/s, mint=79653msec, maxt=79653msec
Disk stats (read/write):
vda: ios=786003/262081, merge=0/22, ticks=3883392/667236, in_queue=4550412, util=99.97%

This test shows the array performed 8,993 read operations per second and 2,394 write operations per second.

Random read performance

To measure random reads, we’ll change the FIO command a bit:

./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randread

Output:

Jobs: 1 (f=1): [r] [100.0% done] [62135K/0K /s] [15.6K/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=31181: Fri May 9 15:38:57 2014
read : io=1024.0MB, bw=62748KB/s, iops=19932 , runt= 16711msec
cpu : usr=5.94%, sys=90.13%, ctx=1885, majf=0, minf=89
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=262144/w=0/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=1024.0MB, aggrb=62747KB/s, minb=62747KB/s, maxb=62747KB/s, mint=16711msec, maxt=16711msec
Disk stats (read/write):
vda: ios=259063/2, merge=0/1, ticks=951356/20, in_queue=951308, util=96.83%

This test shows the storage array performing 19,932 read operations per second.

Random write performance

Modify the FIO command slightly to use randwrite instead of randread for the random write test.

./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite

Output:

Jobs: 1 (f=1): [w] [100.0% done] [0K/26326K /s] [0 /6581 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=31235: Fri May 9 16:16:21 2014
write: io=1024.0MB, bw=29195KB/s, iops=5434, runt= 35916msec
cpu : usr=77.42%, sys=13.74%, ctx=2306, majf=0, minf=24
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued : total=r=0/w=262144/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
WRITE: io=1024.0MB, aggrb=29195KB/s, minb=29195KB/s, maxb=29195KB/s, mint=35916msec, maxt=35916msec
Disk stats (read/write):
vda: ios=0/260938, merge=0/11, ticks=0/2315104, in_queue=2316372, util=98.87%

This test shows the storage scoring 5,434 write operations per second.

Blockchain and Enterprise Storage

The biggest value of blockchain in enterprise storage will be what it enables, not what it is.  While it has yet to be fully embraced by the enterprise, blockchain is well poised to change enterprise IT much like open source software did 20+ years ago.  Interest is steadily rising, and there is evidence that businesses are starting to investigate how blockchain technology will integrate into their future business goals and objectives. In this post I’m going to dive into what exactly blockchain is, how it works, how it may be applied in the enterprise storage space, and how it’s already starting to be used in various global industries.

What is Blockchain technology?

Blockchain is a distributed ledger that maintains a continuously growing number of data records and transactions. It is a chain of transaction blocks built in adherence to a defined set of rules. It allows organizations that don’t trust each other to agree on database updates. Rather than using a central third party or an offline reconciliation process, blockchain uses peer-to-peer protocols. As a distributed database, blockchain provides a near real-time, permanent record that’s replicated among the participants. Bitcoin, probably the most well-known cryptocurrency right now, was made possible by blockchain; it’s the core of the Bitcoin payment system.

What are the main characteristics of Blockchain?

There are a defined set of characteristics that make blockchain what it is. It is both a network and a database. It has rules and built-in security and it maintains internal integrity and its own history. Let’s take a look at the main characteristics of blockchain.

1. Decentralized.  Blockchain is decentralized, there is no central authority required to approve transactions. It is a system of peer to peer validating nodes. Because there are no intermediaries, transactions are made directly and each node maintains the ledger of updates.

2. External clients manage changes.  Changes to the ledger are triggered by transactions proposed by external parties through clients. When triggered by transactions, blockchain participants execute business logic and follow consensus protocols to verify the results.

3. Shared and distributed publicity.  Participants in the ledger maintain the blocks. When consensus is reached under the network’s rules, transactions and their results are grouped into cryptographically secured, immutable data blocks that are appended to the ledger by each participant. All members of the blockchain network can see the same transaction history in the same order.

4. Trusted Transactions.  The nature of the network distribution requires nodes to come to a consensus that enables transactions to be carried out between unknown parties.

5. Secure Transactions.  Strong Cryptography is added to each block. In addition to all of its transactions and their results, each block includes a cryptographic hash of the previous block, which ensures that any tampering with a particular block is easily detected. Blockchain provides transaction and data security. The ledger is an unchangeable record. Posts to it cannot be revised or tampered with, even by database operators.

How Blockchain Works

Consensus in Blockchain

Consensus is at the heart of the blockchain. To keep the integrity of its database, a consensus protocol is used that considers the longest chain to be the most trustworthy, and nodes are only allowed to add blocks to the chain if they solve an arbitrary mathematical puzzle.  These rules define which changes are allowed to be made to the database, who may make them, and when they can be made. One of the most important aspects of the consensus protocol concerns the rules governing how and when blocks are added to the chain. This is vitally important: for blockchains to be useful, they must establish an unchangeable timeline of events that is agreed upon by all nodes, so that all nodes can agree on the current state of the database.  The timeline cannot be subject to censorship, thus no single node may be entrusted with control over what enters it when.

Proof of Work is the original consensus protocol and is used by Bitcoin and Ethereum. Proof of Work is based on puzzles that are difficult to solve but have an easily verifiable solution.  It can be thought of like a jigsaw puzzle.  While many hours of effort may be required to piece a puzzle together, it takes only a momentary glance to see that it has been correctly assembled. With proof of work consensus, the effort required to solve a puzzle is the “work” and the solution is the “proof of work.”  The fact that the solution to the puzzle is known proves that someone did the work to find that solution.

Blockchains that utilize proof of work consensus require proof for each new block to be added to the chain, thus requiring work to be done to create new blocks. This work is frequently referred to as mining. Proof of work consensus protocols state that the chain containing the most blocks is the correct chain because it contains the most work. Blockchains which use proof of work are regarded as secure timelines because if one node attempted to rewrite history by changing an old block, its change would invalidate the work on the block it changed and all blocks after it by making the proofs incorrect.  While experimentation with different consensus mechanisms continues, proof of work is by far the most widely adopted.  There are alternatives, however, so let’s take a brief look at some of them.

Proof-of-Stake.  In proof of stake, participants are required to maintain stocks of the currency (or tokens) to use the system. Creators of a new block are chosen deterministically depending on their stake.

Proof-of-Activity.  In proof of activity, proof of work and proof of stake are used at the same time to help alleviate the issue of hash rate escalation.  Hash rate is a measure of how much computing power the Bitcoin network is consuming to remain continuously functional.

Proof-of-Burn.   With proof of burn, instead of trying arbitrarily large numbers of hashes to answer a puzzle as done with the proof of work method, the system runs a lottery and tokens are burned so a node can try to win a block.

Proof-of-Capacity.  Proof of capacity is similar to proof of stake, but it is measured in hardware capacity that is dedicated to the network.

Federated Byzantine Agreement.  This is designed for private, permissioned blockchains (like Hyperledger) where good behavior is an expectation, and it uses less resource-intensive methods. This method offers more flexibility with trust because a fork can be agreed upon by its members.

How can Blockchain be used in Enterprise Storage?

Enterprises that need fast data access and physical security of their files, and businesses that must adhere to strict regulatory requirements about access policies and in-country data location, may have trouble applying the technology. Blockchain doesn’t meet those requirements in a traditional sense, most notably because of its distributed nature.  For enterprise environments with less stringent regulatory requirements, it could still be an attractive option. The main benefits relate to its redundancy and reduced cost, and the cost savings could be the major driver toward this technology in the enterprise.  Let’s take a look at some of the primary benefits of adopting the technology in the enterprise.

The primary benefits of blockchain in the enterprise

1. Decentralization and Redundancy.  Amazon S3 achieves redundancy by spreading files through all of its regional data centers, which makes each data center a point of failure. On a decentralized blockchain where data is stored on many individual nodes across the globe, it is much more difficult to create disruptions.

2. Privacy.  No third party controls user data or has access to user files. Each node only stores encrypted fragments of user data and users control their own keys.

3. Huge cost reductions.  Blockchain storage costs around $2 per terabyte per month. In comparison, S3 hosting from Amazon can cost over $20 per month per terabyte.

4. The Bottom Line.  Companies are always looking to increase revenues, cut costs, and reduce risks. Blockchain technology has the potential to address those core, bottom line issues.

The Elements of Blockchain in the Enterprise

How can blockchain be implemented in an existing enterprise storage environment?  Steve Todd from Dell EMC started by defining the basic elements of blockchain and the questions that need to be answered in order to implement blockchain solutions in the enterprise. I’ve copied his questions below. It’s very high level, but it’s a good start in establishing a baseline for an enterprise blockchain implementation.

1. New business logic.  What new business logic is being written, and what is its purpose? Will modern application development processes be used to develop the new logic? How will this code be deployed when compared against existing application deployment frameworks? Will your business logic be portable across blockchains?

2. Smart Contracts. How are smart contracts deployed compared to existing application deployment? Are these contracts secure (e.g. encrypted)? Are they well-written? How easy are they to consume? Do they lock-in application developers to a certain platform? Are metrics collected to measure usage? Are access attempts logged securely?

3. Cryptography. Given the liberal use of cryptography within blockchains, which libraries will be used within the underlying ledger? How are these libraries maintained and used across ledgers? What role does cryptography play in different consensus algorithms?

4. Identity / Key Management. The use of private and public keys in a blockchain is foundational. How are these keys managed in comparison to other corporate key management systems? How do corporate identities translate to shared identities with other nodes on a blockchain network?

5. Network Programmability.  How will the network between blockchain nodes be instantiated, tuned, and controlled? How will application SLAs for latency be translated into adequately-performing network operations? Will blockchain transactions be distributed as cleartext or encrypted?

6. Consensus Algorithms.  How will decisions be made to accept/reject transactions? What is the “speed to finality” of these decisions? What are the scalability limits of the consensus algorithm? How much fault tolerance is built into the consensus? How much does performance suffer when fault tolerance limits are reached?

7. Off-chain Storage.  What kind of data assets are recorded within the ledger? Are they consistently referenced? How are access permissions consistently enforced between the ledger and off-chain assets? Do all consensus nodes have the ability to verify all off-chain data assets?

8. Data Protection.  How is data consistency enforced within the ledger? Do corrupted transactions throw an exception? How are corrupted transactions repaired? Does every consensus node always store every single transaction locally? Can deduplication or compression occur? Can snapshot copies of the ledger be created for analysis purposes?

9. Integration with Legacy.  Does the ledger and consensus engine exist on the same converged platform as other business logic? Will there be integration connectors that copy and/or transform the ledger for other purposes? Is the ledger accessible to corporate analytic workspaces?

10. Multi-chain.  How will the ledger interact with the reality of a multi-chain world (e.g. Quorum, Hyperledger, Ethereum, etc.)? How will the ledger interact with non-chain ledgers (e.g. Corda)? Will there be a common API to access different blockchains?

11. Cloud automation.  Can routine blockchain tasks be automated? Will cloud providers offer non-repudiation and/or blockchain governance? Can blockchain app developers execute test/dev processes in one cloud provider environment and then push to a (different) cloud production environment?

Blockchain Cloud Storage in the Marketplace

There are multiple blockchain-powered distributed cloud storage offerings that I’m aware of, and there are likely more to come. These organizations use blockchain technology to take advantage of the spare hard drive space of their users, creating decentralized competitors to services like Amazon Web Services and Dropbox.

• Storj
• Filecoin
• Sia
• MaidSafe
• Cryptyk

All of these options provide decentralized cloud-based storage. Customers who use their services allocate a portion of their local storage for cloud-based storage; it’s akin to a decentralized, blockchain-powered version of Amazon Web Services. They all show that a public ledger can be used to facilitate a distributed public cloud, but I think it’s unlikely to be used for mission-critical enterprise storage in the near future, at least until some of the basic questions about the elements of blockchain in the enterprise, detailed in the previous section, are answered.

As cloud-based storage becomes more relevant over time, the number of blockchain solutions similar to these projects will surely increase. Blockchain’s decentralization, speed, and reliability give it an inherent advantage over centralized cloud services, which require the storage of data in data centers with high costs and maintenance requirements. Blockchain technology will likely play an increasingly important role in decreasing costs and increasing the security and efficiency of how data storage is implemented.

Blockchain Storage Provider Operations

I thought it would be interesting to take a look at how these existing competitors implement blockchain and how they market their services.  In addition to the security benefits, these decentralized cloud storage providers generally market themselves as inexpensive storage for general consumers. A terabyte of storage at Sia costs about $2 per month, while Storj charges by the gigabyte, starting at $0.015 per gigabyte per month.

Storj, Sia, MaidSafe, and Filecoin are each built around a proprietary storage marketplace where users can buy and sell storage space, and they all use mining to provide compute power for the network.

Filecoin miners earn token rewards for hosting files, but they must also prove that they are continuously replicating the files for more secure storage. Miners are likewise rewarded for distributing content quickly, as the miner that can do so the fastest ends up with the tokens. Filecoin and Sia both support smart contracts on the blockchain that set the consensus rules and requirements for storage; Storj users, by contrast, pay only for what they consume.  Filecoin also aims to allow the exchange of its tokens for fiat currencies and other tokens via wallets and exchanges.

In MaidSafe’s network, Safecoin is paid out as data is retrieved. It’s done in a lottery system where a miner is rewarded at random. The amount of Safecoin a miner earns is directly linked to the resources it provides and how often its shared storage is available and online.  MaidSafe miners rent their unused compute resources (capacity, CPU, and bandwidth) to the SAFE network and are paid in Safecoin. The SAFE network also supports a marketplace in which Safecoin is used to access applications, with part of the payment going to the application’s developer.  Miners can also sell the coins they earn for other digital currencies, and these transactions can happen either on the network or directly between individuals.

All of these service providers store data with erasure coding.  Files are split apart and distributed across many locations and servers, which eliminates the chance of a single point of failure causing catastrophic data loss. Filecoin uses the IPFS distributed web protocol, allowing nodes to continue to communicate even if the rest of the network goes down.

Business Benefits

Blockchain technology can provide a lot of benefits, most notably making interactions faster, safer, and less expensive while ensuring data security.  Although blockchain is primarily associated with the financial industry, blockchain solutions have the potential to be a disruptive force in other business sectors as well.

At a high level, what are the main benefits of blockchain in a business environment?

Fewer Intermediaries.  Blockchain avoids centralized intermediaries by using a peer-to-peer business network.

Faster, More Automated Processes.  Businesses can automate their data exchange and the processes that depend on it, eliminating offline or batch reconciliation. They can automatically trigger actions, events, and even payments based on preset conditions, with the potential for dramatic performance improvements.

Reduced Costs.  Businesses can lower costs by accelerating transactions and eliminating settlement processes, using a trusted, shared fabric of common information instead of relying on centralized intermediaries or complex reconciliation processes.

Increased Visibility.  Businesses can gain near real-time visibility into their distributed transactions across their networks, and maintain a shared system of records.

Enhanced Security.  Businesses can reduce fraud while at the same time increase regulatory compliance with tamper-proof business-critical records. They can secure their data by using cryptographically linked blocks so that records cannot be altered without detection.

With that in mind, let’s consider the most likely scenarios for Blockchain implementation in business. How exactly is blockchain technology being used in the industry today, and how may it be used in the future?

Blockchain in the Energy industry

The German company Share&Charge and California based eMotorWerks announced they are testing the first phase of a peer-to-peer electric vehicle charging network with blockchain payments. The technology has been called an “AirBnB for EVs,” and will allow EV owners to rent out their charging stations, set their own prices and receive payments via Bitcoin. The technology aims to prove that blockchain technology can make sharing and payment easier and more efficient and at the same time decrease the range anxiety that EV drivers experience.

The companies say the partnership is the first peer-to-peer charging network in North America to use blockchain technology. The new P2P network was made available in California starting in August 2017, and an expansion to other states is planned.

Blockchain technology in Banking and Finance

Blockchain solutions are looking to revolutionize how we transfer funds in a business environment. Because blockchain transactions occur without intermediaries or any kind of central authority, a direct payment flow between customers around the world is easily accomplished. Blockchain application development is booming as more and more startups attempt to innovate the payment chain. Abra, a good example of a recent blockchain startup, offers a digital wallet mobile app using Bitcoin.  There is intense interest in blockchain in the finance sector: R3 CEV, a New York-based company that runs a consortium of banks, recently released a new version of its Corda blockchain platform, which it hopes will make it easier for financial firms to use the technology.  Banks and other financial institutions have been investing in the technology for the past few years in the hope that it can automate some of their back office processes, such as securities settlement and regulatory reporting.

A report from Accenture claimed blockchain technology could potentially reduce infrastructure costs for eight of the world’s ten largest investment banks by an average of 30%, which would translate to $8 to $12 billion in annual cost savings. The savings, according to Accenture, would come from replacing the traditionally fragmented database systems that support transaction processing with blockchain’s distributed ledger, allowing banks to reduce or eliminate reconciliation costs while improving data quality.

In addition, Accenture, J.P. Morgan Chase and Microsoft were among 30 companies that announced the formation of the Enterprise Ethereum Alliance, aimed at creating a standard version of the platform for financial transaction processing and tracking.

Blockchain in the Insurance industry

Insurance interest in blockchain appears to be growing. Blockchain has the potential to vastly improve the nature of claims processing and fraud detection in the insurance industry.

Blockchain and smart contracts could reduce many of the typical issues involved with insurance contracts. Insured individuals usually find insurance contracts long and confusing, and insurance companies are constantly battling fraud. Using blockchain and smart contracts, both sides could benefit from managing claims in a more responsive and transparent way, and recording and verifying contracts on the blockchain would be a great start. When claims are submitted, blockchain could ensure that only valid claims are paid, as the network would know if multiple claims were submitted for the same accident. When specific criteria are met, a blockchain could trigger payment of the claim without any human intervention, improving the time it takes to resolve claims.

Blockchain also has great potential to detect and prevent fraudulent activity. Because validation is at the core of blockchain’s decentralized repository, its historical record can independently verify the validity of customers, policies, and transactions.

In the summer of 2017, blockchain firm Bitfury teamed with insurance broker Risk Cooperative. The partnership seeks to leverage Bitfury’s expertise in blockchain applications across a range of sectors, along with Risk Cooperative’s insurance placement platform and partnership model with leading insurers, to spur adoption of blockchain in the insurance space.

Blockchain perspectives in Supply Chain Management

Blockchain has the potential to transform the supply chain and disrupt the way we produce, market, purchase, and consume goods. The transparency and security it adds to the supply chain would be a huge improvement, making our economies safer and more reliable by promoting trust and discouraging questionable business practices.

Microsoft’s blockchain supply chain group, Project Manifest, is testing the ability to track inventory on cargo ships, trains, and trucks using RFID tags that link back to blockchain technologies. Though Microsoft hasn’t shared many details about the project yet, it appears to be working with partners to track things like auto parts across very complex, cross-industry supply chains.

IBM offers a service that allows customers to test blockchains in a secure cloud and track high-value items through complex supply chains. The service is being used by Everledger, a firm that is trying to use the blockchain to push transparency into the diamond supply chain. Finnish startup Kouvola Innovation is working on a blockchain solution that enables smart tendering across the supply chain.

Blockchain smart contracts are being used to address everything from shipment to receipt of inventory between all parties in various supply chains. Doing so could reduce complexity and the number of counterfeit items entering the supply chain.

Blockchain in the Healthcare Industry

There are plenty of opportunities to leverage blockchain technology in healthcare, from medical records to the pharmaceutical supply chain to smart contracts for payment distribution. While progress has been slow, there are innovations taking place in the healthcare industry.

MediLedger successfully brought pharmaceutical manufacturers and wholesalers who compete with each other to the same negotiating table. They designed and implemented a process for using blockchain technology to improve track-and-trace capabilities for prescriptions, and they developed a blockchain solution that allows full privacy, with no leaking of business intelligence, while still supporting drug verification and provenance reporting.

Built to support the requirements of the U.S. Drug Supply Chain Security Act (DSCSA), MediLedger also outlines steps to build an electronic, interoperable system to identify and trace certain prescription drugs, meaning it met not just the requirements of the law, but the operational needs of the industry.

Additional projects were kicked off by SimplyVitalHealth and Robomed, focused on developing an audit trail and smart contracts between healthcare providers and patients, respectively.

Blockchain solutions for Online Voting

Blockchain could be the missing link in the architecture of an effective and secure online voting system, and could resolve major issues related to the privacy, transparency, and security of online voting.

Using blockchain technology, we can make sure that those who are voting are who they say they are and are legally allowed to vote. We can also make voting online more accessible, since anyone who knows how to use a cell phone can understand the technology required for voting, all while making the election process more secure than it currently is and allowing greater participation for all legally registered voters.

Sovereign was unveiled in September 2017 by Democracy Earth, a not-for-profit organisation in Palo Alto, California. It combines liquid democracy – which gives individuals more flexibility in how they use their votes – with blockchains, digital ledgers of transactions that keep cryptocurrencies like bitcoin secure. Sovereign’s developers hope it could signal the beginning of a democratic system that transcends national borders.

The basic concept of liquid democracy is that voters can express their wishes on an issue directly or delegate their vote to someone else they think is better placed to decide on their behalf. In turn, those delegates can pass those votes further up the chain, and crucially, users can see how their delegate voted and reclaim their vote to use themselves.  Sovereign sits on existing blockchain software platforms, such as Ethereum, but instead of producing units of cryptocurrency, it creates a finite number of tokens called “votes”. These are assigned to registered users, who can vote as part of organisations that set themselves up on the network, whether that’s a political party, a municipality, a country, or even a co-operatively run company.

No knowledge of blockchains is required – voters simply use an app. Votes are then “dripped” into their accounts over time like a universal basic income of votes. Users can debate with each other before deciding which way to vote.

Blockchain usage in Stock Trading

Some of the most prominent stock exchanges are looking at ways to leverage blockchain to fundamentally overhaul traditional mechanisms. Blockchain could enable savings by reducing duplicated processes, settlement time, collateral requirements, and operational overhead. It would minimize the need to set aside financial resources to cover counterparty risk, while helping achieve higher anti-money-laundering standards and reduced risk exposure.

Nasdaq has been at the forefront of blockchain innovation. At the end of 2015, Nasdaq unveiled the use of its Nasdaq Linq blockchain ledger technology to complete and record private securities transactions for Chain.com, the inaugural Nasdaq Linq client. In May, Nasdaq and Citi announced an integrated payment solution using a distributed ledger to record and transmit payment instructions based on Chain’s blockchain technology. The technology overcomes challenges of liquidity in private securities by streamlining payment transactions between multiple parties.

The path to broader adoption will require resolving issues such as scalability, common standards, regulation, and legislation, but blockchain could revolutionize the core infrastructure systems of capital markets around the globe, bringing greater transparency and efficiency.

Diving into Isilon SyncIQ and SnapshotIQ Management

In this post I’m going to review the most useful commands for managing SyncIQ replication jobs and SnapshotIQ snapshots on the Isilon.  While this will primarily be a CLI administration reference, I’ll also look at some WebUI options when I get to snapshots, along with some additional notes and caveats regarding snapshot management.  I’d highly recommend reviewing EMC’s SnapshotIQ best practices page, as well as the SyncIQ best practices guide, if you’re just starting a new implementation.  For a complete Isilon command line reference, see this post.

Creating a Replication policy

# isi sync policies create <policy_name> sync <source_root_path> <target_host> <target_path> --schedule "<schedule>" --target-snapshot-archive on --target-snapshot-pattern “%{PolicyName}-%{SrcCluster}-%Y-%m-%d_%H-%M”
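As a concrete, hedged example, the command below would create a policy named Replica1. The policy name, paths, target host, and schedule string are all placeholders for your own environment, and the positional argument order should be verified against your OneFS release:

# isi sync policies create Replica1 sync /ifs/data/prod target-cluster.example.com /ifs/data/prod --schedule "Every day at 10:00 PM" --target-snapshot-archive on --target-snapshot-pattern "%{PolicyName}-%{SrcCluster}-%Y-%m-%d_%H-%M"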

Viewing active replication jobs

# isi sync jobs list

Policy Name  ID     State    Action  Duration
-----------------------------------------------
Replica1     32375  running  run     1M1W5D14H55m
-----------------------------------------------
Total: 1

# isi sync jobs view <policy_name>

Policy Name: Replica1
ID: 32375
State: running
Action: run
Duration: 1M1W5D14H55m9s
Start Time: 2017-10-27T17:00:25

# isi_classic sync job rep

Name     | Act  | St      | Duration         | Transfer | Throughput
---------+------+---------+------------------+----------+-----------
Replica1 | sync | Running | 42 days 14:59:23 | 3.0 TB   | 6.8 Mb/s

# isi_classic sync job rep -v [Provides a more verbose report]

Creating a SyncIQ domain [Required for failback operations]

# isi job jobs start DomainMark --root <path> --dm-type SyncIQ
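For example, to mark a (hypothetical) replication policy root of /ifs/data/prod as a SyncIQ domain:

# isi job jobs start DomainMark --root /ifs/data/prod --dm-type SyncIQ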

Reviewing a replication Job before starting it

Replication policy status can be reviewed with the ‘test’ option. It is useful for previewing the size of the data set that will be transferred if you run the policy.

# isi sync jobs start <policy_name> --test
# isi sync reports view <policy_name> 1

Replication policy Enable/Disable/Delete

# isi sync policies enable <policy_name>
# isi sync policies disable <policy_name>
# isi sync policies delete <policy_name>

Replication Job Management

# isi sync jobs start <policy_name>
# isi sync jobs pause <policy_name>
# isi sync jobs resume <policy_name>
# isi sync jobs cancel <policy_name>

Replication Policy Management

# isi sync policies list
# isi sync policies view <policy_name>

Viewing replication policies that target the local cluster

# isi sync target list
# isi sync target view <policy_name>

Creating replication performance rules

# isi sync rules create <rule_type> <interval> <days_of_week> <limit>

Creating network traffic rules that limit replication bandwidth or file rate

# isi sync rules create bandwidth 00:00-23:59 Sun-Sat 19200 [Limit bandwidth consumption to 19,200 kbps, 24×7]
# isi sync rules create file_count 08:00-18:00 M-F 7 [Limit the file-send rate to 7 files per second, 8 AM to 6 PM on weekdays]

Managing replication performance rules

# isi sync rules list
# isi sync rules view --id bw-0
# isi sync rules modify bw-0 --enabled true
# isi sync rules modify bw-0 --enabled false

Managing replication reports

# isi sync reports list
# isi snapshot snapshots list | head -200 [List the first 200 snapshots]
# isi sync reports view <policy_name> 2
# isi sync reports subreports list <policy_name> 1 [View sub-reports]

Managing failed replication jobs

# isi sync policies resolve <policy_name> [Resolve a policy error]
# isi sync policies reset <policy_name> [Reset a policy whose error can’t be resolved]

Resetting a policy results in a full or differential replication the next time the policy is run.

Creating Snapshots

# isi snapshot snapshots create <path>

# isi snapshot snapshots delete {<snapshot> | --schedule <schedule> | --type {alias | real} | --all} [{--force | -f}] [{--verbose | -v}]
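As a quick example, the command below would delete every snapshot generated by a (hypothetical) schedule named HourlyBackup without prompting for confirmation:

# isi snapshot snapshots delete --schedule HourlyBackup -f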

Modifying Snapshots

# isi snapshot snapshots modify <snapshot>

Listing Snapshots

# isi snapshot snapshots list --state {all | active | deleting}
# isi snapshot snapshots list --limit <integer> | -l <integer> [Number of snapshots to display]
# isi snapshot snapshots list --descending | -d [Sort data in descending order]

Viewing Snapshots

# isi snapshot snapshots view <snapshot>

Deleting Snapshots

Deleting a snapshot from OneFS is an all-or-nothing event; an existing snapshot cannot be partially deleted. Snapshots are created at the directory level rather than the volume level, which allows for a higher degree of granularity. Because they are a point-in-time copy of a specific subset of OneFS data, they can’t be changed, only deleted in full. When a snapshot is deleted, OneFS immediately modifies some of the tracking data and the snapshot disappears from view. Although the snapshot is no longer visible, the behind-the-scenes cleanup is still pending; it is performed by the ‘SnapshotDelete’ job.

OneFS frees the disk space occupied by deleted snapshots only when the SnapshotDelete job runs. If a deleted snapshot contained clones or cloned files, the data in a shadow store may no longer be referenced by files on the cluster; OneFS deletes unreferenced shadow store data when the ShadowStoreDelete job runs. OneFS runs both jobs automatically, but you can also run them manually at any time. Follow the procedure below to force the SnapshotDelete job to reclaim array capacity more quickly.

Deleting Snapshots from the WebUI

Go to Data Protection > SnapshotIQ > Snapshots and specify the snapshots that you want to delete.

• In the Saved File System Snapshots table, select the check box in the row of each snapshot you want to delete.
• From the Select an action list, select Delete.
• In the confirmation dialog box, click Delete.
• Note that you can select more than one snapshot at a time, and clicking the delete button on any of the snapshots will result in the entire checked list being deleted.
• If you have a large number of snapshots and want to delete them all, you can run a command from the CLI that deletes all of them at once: isi snapshot snapshots delete --all.

Increasing the Speed of Snapshot Deletion from the WebUI

It’s important to note that the SnapshotDelete job will only run if the cluster is in a fully available state; there can be no drives or nodes down, and the cluster cannot be in a degraded state. To increase the speed at which deleted snapshot data is freed on the cluster, run the SnapshotDelete job manually (a CLI equivalent follows these steps).

• Go to Cluster Management > Operations.
• In the Running Jobs area, click Start Job.
• From the Job list, select SnapshotDelete.
• Click Start.
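The same job can also be started from the CLI, using the same job engine syntax we used earlier for the DomainMark job:

# isi job jobs start SnapshotDelete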

Increasing the Speed of Cloned File deletion from the WebUI

Run the shadow store delete job only after you run the snapshot delete job.

• Go to Cluster Management > Operations.
• In the Running Jobs area, click Start Job.
• From the Job list, select ShadowStoreDelete.
• Click Start.
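Likewise, the ShadowStoreDelete job can be started from the CLI:

# isi job jobs start ShadowStoreDelete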

Reserved Space

There is no requirement to reserve space for snapshots in OneFS. Snapshots can use as much or as little of the available file system space as desired. An ordered deletion is the deletion of the oldest snapshot of a directory; it can be done very quickly and is a recommended best practice for snapshot management. An unordered deletion is the removal of a snapshot that is not the oldest in a directory; unordered deletions can take approximately twice as long to complete and consume more cluster resources than ordered deletions.

The Delete Sequence Matters

As I just mentioned, avoid deleting snapshots from the middle of a time range whenever possible. Newer snapshots are largely pointers to older snapshots, so they appear to be consuming more capacity than they actually are. Removing the newer snapshots will not free up much space, while deleting the oldest snapshot ensures you are actually freeing the space. You can determine snapshot order by using the isi snapshot snapshots list command.

Watch for SyncIQ Snaps

Avoid deleting SyncIQ snapshots if possible. They are easily identifiable, as they are all prefixed with SIQ. It’s OK to delete them only if they are the last remaining snapshots on the cluster and deleting them is the only way to free up space. Be aware that deleting SyncIQ snapshots resets the SyncIQ policy state, which requires a policy reset and results in either a full sync or an initial differential sync the next time the policy runs. A full or differential sync could take many times longer than a regular snapshot-based incremental sync.

Using the InsightIQ iiq_data_export Utility

InsightIQ includes a very useful data export tool: iiq_data_export. It can be used with any version of OneFS beginning with 7.x.  While the tool is compatible with those older releases, running OneFS v8.0 or higher offers a much-needed performance improvement. That improvement makes it a much more functional tool that can be used daily, and for quick reports it’s much faster than relying on the web interface.

Applications of this tool could include daily reports for application teams to monitor their data consumption, charge-back reporting processes,  or administrative trending reports. The output is in csv format, so there are plenty of options for data manipulation and reporting in your favorite spreadsheet application.

The utility is a command line tool, so you’ll need to log in to the CLI with an SSH session to the Linux InsightIQ server; I generally use PuTTY for that purpose.  The utility works with either root or non-root users, so you won’t need elevated privileges; I log in with the standard administrator user account. It can export both performance stats and file system analytics (FSA) data. I’ll review some uses of iiq_data_export for file system analytics first, more specifically the directory data-module export option.

The default command line options for file system analytics are list, describe, and export:

iiq_data_export fsa [-h] {list,describe,export} ...

Options:
 -h, --help Show this help message and exit.

Sub-Commands:
 {list,describe,export}
 FSA Sub-Commands
 list List valid arguments for the different options.
 describe Describes the specified option.
 export Export FSA data to a specified .csv file.

Listing FSA results for a specific Cluster

First we’ll need to review the reports that are available on the server. Below is the command to list the available FSA results for the cluster:

iiq_data_export fsa list --reports IsilonCluster1

Here are the results of running that command on my InsightIQ Server:

[administrator@corporate_iq1 ~]$ iiq_data_export fsa list --reports IsilonCluster1

Available Reports for: IsilonCluster1 Time Zone: PST
 ====================================================================
 | ID    | FSA Job Start         | FSA Job End           | Size     |
 ====================================================================
 | 57430 | Jan 01 2018, 10:01 PM | Jan 01 2018, 10:03 PM | 115.49M  |
 --------------------------------------------------------------------
 | 57435 | Jan 02 2018, 10:01 PM | Jan 02 2018, 10:03 PM | 115.53M  |
 --------------------------------------------------------------------
 | 57440 | Jan 03 2018, 10:01 PM | Jan 03 2018, 10:03 PM | 114.99M  |
 --------------------------------------------------------------------
 | 57445 | Jan 04 2018, 10:01 PM | Jan 04 2018, 10:03 PM | 116.38M  |
 --------------------------------------------------------------------
 | 57450 | Jan 05 2018, 10:00 PM | Jan 05 2018, 10:03 PM | 115.74M  |
 --------------------------------------------------------------------
 | 57456 | Jan 06 2018, 10:00 PM | Jan 06 2018, 10:03 PM | 114.98M  |
 --------------------------------------------------------------------
 | 57462 | Jan 07 2018, 10:01 PM | Jan 07 2018, 10:03 PM | 113.34M  |
 --------------------------------------------------------------------
 | 57467 | Jan 08 2018, 10:00 PM | Jan 08 2018, 10:03 PM | 114.81M  |
 ====================================================================

The ID column is the job number that is associated with that particular FS Analyze job engine job.  We’ll use that ID number when we run the iiq_data_export to extract the capacity information.

Using iiq_data_export

Below is the command to export the first-level directories under /ifs from a specified cluster for a specific FSA job:

iiq_data_export fsa export -c <cluster_name> --data-module directories -o <jobID>

If I want to view the /ifs subdirectories from job 57467, here’s the command syntax and its output:

[administrator@corporate_iq1 ~]$ iiq_data_export fsa export -c IsilonCluster1 --data-module directories -o 57467

Successfully exported data to: directories_IsilonCluster1_57467_1515522398.csv

Below is the resulting file. The output shows the directory count, file count, logical size, and physical capacity consumed for each directory.

[administrator@corporate_iq1 ~]$ cat directories_IsilonCluster1_57467_1515522398.csv

path[directory:/ifs/],dir_cnt (count),file_cnt (count),ads_cnt,other_cnt (count),log_size_sum (bytes),phys_size_sum (bytes),log_size_sum_overflow,report_date: 1515470445
 /ifs/NFS_exports,138420,16067265,0,1659,335841902399477,383999799732224,0
 /ifs/data,95,2189,0,0,13303199652,15264802304,0
 /ifs/.isilon,3,22,0,0,647236,2284544,0
 /ifs/netlog,2,5,0,0,37615,208384,0
 /ifs/home,9,31,0,0,30070,950784,0
 /ifs/SITE,10,0,0,0,244,53248,0
 /ifs/PRODUCTION-CIFS,2,0,0,0,23,4096,0
 /ifs/WAREHOUSE,1,0,0,0,0,2048,0
 /ifs/upgrade_error_logs,1,0,0,0,0,2048,0

While that is a useful top-level report, we may want to dive a bit deeper and report on second- or third-level directories as well. To gather that info, use the directory filter option, “-r”:

iiq_data_export fsa export -c <cluster_name> --data-module directories -o <jobID> -r directory:<directory_path_in_ifs>

As an example, if we wanted more detail on the subfolders under the /NFS_exports/warehouse/ directory, we’d run the following command:

[administrator@corporate_iq1 ~]$ iiq_data_export fsa export -c IsilonCluster1 --data-module directories -o 57467 -r directory:/NFS_exports/warehouse/warehouse_dec2017

Successfully exported data to: directories_IsilonCluster1_57467_1515524307.csv

Below is the output from the csv file that I generated:

[administrator@corporate_iq1 ~]$ cat directories_IsilonCluster1_57467_1515524307.csv

path[directory:/ifs/NFS_exports/warehouse/warehouse_dec2017/],dir_cnt (count),file_cnt (count),ads_cnt,other_cnt (count),log_size_sum (bytes),phys_size_sum (bytes),log_size_sum_overflow,report_date: 1515470445
 /ifs/NFS_exports/warehouse/warehouse_dec2017/dir_t01,44,458283,0,0,27298994838926,31275791237632,0
 /ifs/NFS_exports/warehouse/warehouse_dec2017/dir_cat,45,106854,0,0,14222018137340,16285929507840,0
 /ifs/NFS_exports/warehouse/warehouse_dec2017/dir_set,24,261564,0,0,11221057700000,12847989286912,0
 /ifs/NFS_exports/warehouse/warehouse_dec2017/dir_auth,17,96099,0,0,7402828037356,8471138941440,0
 /ifs/NFS_exports/warehouse/warehouse_dec2017/dir_mds,41,457984,0,0,5718188746729,6576121923584,0
 /ifs/NFS_exports/warehouse/warehouse_dec2017/dir_hsh,17,101969,0,0,4396244719797,5035400875520,0
 /ifs/NFS_exports/warehouse/warehouse_dec2017/dir_hop,17,115257,0,0,3148118026139,3608613813760,0
 /ifs/NFS_exports/warehouse/warehouse_dec2017/dir_brm,24,3434,0,0,2964319382819,3381774883840,0
 /ifs/NFS_exports/warehouse/warehouse_dec2017/dir_exe,9,22851,0,0,2917582971428,3317971597824,0
 /ifs/NFS_exports/warehouse/warehouse_dec2017/dir_com,21,33286,0,0,2548672643701,2907729505280,0
 /ifs/NFS_exports/warehouse/warehouse_dec2017/dir_mig,2,30,0,0,2255138307994,2586591986688,0
 /ifs/NFS_exports/warehouse/warehouse_dec2017/dir_cls,7,4994,0,0,1795466785597,2035911001088,0
 /ifs/NFS_exports/warehouse/warehouse_dec2017/dir_enc,45,106713,0,0,1768636398516,2032634691072,0
 <...truncated>

Diving Deeper into subdirectories

Note that how deep you can go down the /ifs subdirectory tree depends on the FSA configuration in InsightIQ. By default, InsightIQ sets the “directory filter maximum depth” option to 5, allowing directory information as deep as /ifs/dir1/dir2/dir3/dir4/dir5. If you need to dive deeper, the FSA configuration will need to be updated: go to the Configuration page, then FSA Configuration, then the “Directory Filter (path_squash) maximum depth” setting. Note that the larger the maximum depth, the more storage space an individual FSA result will use.

Scripting Reports

For specific subdirectory reports it’s fairly easy to script the output.

First, let’s create a text file with a list of the subdirectories under /ifs that we want to report on. I’ll create a file named “directories.txt” in the /home/administrator folder on the InsightIQ server. You can use vi to create and save the file.

[administrator@corporate_iq1 ~]$ vi directories.txt

[add the following in the vi editor...]

NFS_exports/warehouse/warehouse_dec2017/dir_t01
NFS_exports/warehouse/warehouse_dec2017/dir_cat
NFS_exports/warehouse/warehouse_dec2017/dir_set

I’ll then use vi again to create the script itself.   You will need to substitute the cluster name and the job ID to match your environment.

[administrator@corporate_iq1 ~]$ vi direxport.sh

[add the following in the vi editor...]

#!/bin/bash
# Export an FSA directory report for each path listed in directories.txt
for i in `cat directories.txt`
do
  echo "Processing Directory $i..."
  # Use the last path component in the output file name
  j=`basename $i`
  echo "Base Folder Name is $j"
  date_time=`date +%Y_%m_%d_%H%M%S_`
  # Cluster name and FSA job ID are specific to my environment
  iiq_data_export fsa export -c IsilonCluster1 --data-module directories -o 57467 -r directory:$i -n direxport_$date_time$j.csv
done
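One caveat with this script: the for loop over `cat` splits on whitespace, so it will misbehave if any path in directories.txt contains spaces. A slightly more robust sketch (same assumptions: cluster IsilonCluster1, FSA job 57467) reads the file line by line instead:

while read -r i
do
  j=`basename "$i"`
  date_time=`date +%Y_%m_%d_%H%M%S_`
  iiq_data_export fsa export -c IsilonCluster1 --data-module directories -o 57467 -r "directory:$i" -n "direxport_$date_time$j.csv"
done < directories.txt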

We can now make the script executable and run it.  An output example is below.

[administrator@corporate_iq1 ~]$ chmod +x direxport.sh
[administrator@corporate_iq1 ~]$ ./direxport.sh

Processing Directory NFS_exports/warehouse/warehouse_dec2017/dir_t01...
Base Folder Name is dir_t01

Successfully exported data to: direxport_2017_01_19_085528_dir_t01.csv

Processing Directory NFS_exports/warehouse/warehouse_dec2017/dir_cat...
Base Folder Name is dir_cat

Successfully exported data to: direxport_2017_01_19_085530_dir_cat.csv

Processing Directory NFS_exports/warehouse/warehouse_dec2017/dir_set...
Base Folder Name is dir_set

Successfully exported data to: direxport_2017_01_19_085532_dir_set.csv

Performance Reporting

As I mentioned at the beginning of this post, the utility can also export performance-related information. Below are the default command line options.

usage: iiq_data_export perf list [-h] [--breakouts] [--clusters] [--data-modules]

Options:
 -h, --help Show this help message and exit.

Mutually Exclusive Options:
 --breakouts Displays the names of all breakouts that InsightIQ supports for
 performance data modules. Each data module supports a subset of
 breakouts.
 --clusters Displays the names of all clusters that InsightIQ is monitoring.
 --data-modules Displays the names of all available performance data modules.
 iiq_data_export perf list: error: One of the mutually exclusive arguments are
 required.

Here are the data modules you can export:

 iiq_data_export perf list --data-modules
 ====================================================================
 | Data Module Label                       | Key 
 ====================================================================
 | Active Clients                          | client_active 
 --------------------------------------------------------------------
 | Average Cached Data Age                 | cache_oldest_page_age 
 --------------------------------------------------------------------
 | Average Disk Hardware Latency           | disk_adv_access_latency 
 --------------------------------------------------------------------
 | Average Disk Operation Size             | disk_adv_op_size 
 --------------------------------------------------------------------
 | Average Pending Disk Operations Count   | disk_adv_io_queue 
 --------------------------------------------------------------------
 | Blocking File System Events Rate        | ifs_blocked
 --------------------------------------------------------------------
 | CPU % Use                               | cpu_use 
 --------------------------------------------------------------------
 | CPU Usage Rate                          | cpu_usage_rate 
 --------------------------------------------------------------------
 | Cache Hits                              | cache_hits 
 --------------------------------------------------------------------
 | Cluster Capacity                        | ifs_cluster_capacity 
 --------------------------------------------------------------------
 | Connected Clients                       | client_connected 
 --------------------------------------------------------------------
 | Contended File System Events Rate       | ifs_contended 
 --------------------------------------------------------------------
 | Deadlocked File System Events Rate      | ifs_deadlocked 
 --------------------------------------------------------------------
 | Deduplication Summary (Logical)         | dedupe_logical 
 --------------------------------------------------------------------
 | Deduplication Summary (Physical)        | dedupe_physical 
 --------------------------------------------------------------------
 | Disk Activity                           | disk_adv_busy 
 --------------------------------------------------------------------
 | Disk IOPS                               | disk_iops 
 --------------------------------------------------------------------
 | Disk Operations Rate                    | disk_adv_op_rate 
 --------------------------------------------------------------------
 | Disk Throughput Rate                    | disk_adv_bytes 
 --------------------------------------------------------------------
 | External Network Errors                 | ext_error 
 --------------------------------------------------------------------
 | External Network Packets Rate           | ext_packet 
 --------------------------------------------------------------------
 | External Network Throughput Rate        | ext_net_bytes 
 --------------------------------------------------------------------
 | File System Events Rate                 | ifs_heat 
 --------------------------------------------------------------------
 | File System Throughput Rate             | ifs_total_rate 
 --------------------------------------------------------------------
 | Job Workers                             | worker 
 --------------------------------------------------------------------
 | Jobs                                    | job 
 --------------------------------------------------------------------
 | L1 Cache Throughput Rate                | cache_l1_read 
 --------------------------------------------------------------------
 | L1 and L2 Cache Prefetch Throughput Rate| cache_all_prefetch 
 --------------------------------------------------------------------
 | L2 Cache Throughput Rate                | cache_l2_read 
 --------------------------------------------------------------------
 | L3 Cache Throughput Rate                | cache_l3_read 
 --------------------------------------------------------------------
 | Locked File System Events Rate          | ifs_lock 
 --------------------------------------------------------------------
 | Overall Cache Hit Rate                  | cache_all_read_hitrate 
 --------------------------------------------------------------------
 | Overall Cache Throughput Rate           | cache_all_read 
 --------------------------------------------------------------------
 | Pending Disk Operations Latency         | disk_adv_io_latency
 --------------------------------------------------------------------
 | Protocol Operations Average Latency     | proto_latency 
 --------------------------------------------------------------------
 | Protocol Operations Rate                | proto_op_rate 
 --------------------------------------------------------------------
 | Slow Disk Access Rate                   | disk_adv_access_slow 
 ====================================================================

As an example, if I want to review the CPU utilization for the cluster, I’d type the command below.  It shows all of the CPU performance information for the specified cluster name.  Once I’ve had more time to dive into the performance reporting side of InsightIQ, I’ll revisit and add to this post.

[administrator@corporate_iq1~]$ iiq_data_export perf export -c IsilonCluster1 -d cpu_use

Successfully exported data to: cpu_IsilonCluster1_1515527709.csv

Below is what the output looks like:

[administrator@corporate_iq1 ~]$ cat cpu_IsilonCluster1_1515527709.csv
 Time (Unix) (America/Chicago),cpu (percent)
 1515524100.0,3.77435898780823
 1515524130.0,4.13846158981323
 1515524160.0,3.27435898780823
 1515524190.0,2.34871792793274
 1515524220.0,2.68974351882935
 1515524250.0,3.33333349227905
 1515524280.0,3.02051281929016
 1515524310.0,2.78974366188049
 1515524340.0,2.98717951774597
 <...truncated>

Best Practices for FAST Cache

I recently received a comment asking for more information on EMC’s FAST Cache, specifically about why increased CPU utilization was observed after a FAST Cache expansion. It’s likely due to the rebuilding of the cache after the expansion, and possibly to having FAST Cache enabled on LUNs that shouldn’t have it, like those with high sequential I/O. It’s hard to pinpoint the exact cause of an issue like that without a thorough analysis of the array itself, however.  I thought I’d do a quick write-up of EMC’s best practices for FAST Cache and the caveats to consider when implementing it.

What is FAST Cache?

First, a quick overview of what it is.  EMC’s FAST Cache uses a RAID set of EFD (flash) drives that sits between DRAM cache and the disks themselves, holding a large percentage of the most often used data on high-performance EFDs.  It hits a price/performance sweet spot between DRAM and traditional spinning disk, and it can greatly increase array performance.

The theory behind FAST Cache is simple: we divide the array’s storage into 64KB blocks, we count the number of hits on those blocks, and we create a cache page on the FAST Cache EFDs once there have been three read (or write) hits on a block.  If FAST Cache fills up, the array will seek out pages in the EFDs that can make a full stripe write to the spinning disks and force-flush them out to traditional spinning disk.

FAST Cache uses a “three strikes” algorithm.  If you are moving large amounts of data, the FAST Cache algorithm does not activate; this is by design, as cache does not help at all in large copy transactions.  Random hits on active blocks, however, will ultimately promote those blocks into FAST Cache.  This is where the 64KB granularity makes a difference.  Typical workload I/O is 64KB or less, and there is a significant chance that even if a workload is performing 4KB reads and writes to different blocks, they will still hit the same 64KB FAST Cache block, resulting in the promotion of that data into FAST Cache.  Cool, right?  It works very well in practice.  With all that said, there are still plenty of implementation considerations for an ideal FAST Cache configuration.  Below is an overview of EMC’s best practices.

Best Practices for LUNs and Pools

  • Only use it where you need it. The FAST Cache driver has to track every I/O to calculate whether a block needs promotion to FAST Cache, which adds to SP CPU utilization.  As a best practice, you should disable FAST Cache for LUNs that won’t benefit from it; doing so cuts that overhead and can improve overall performance.  Having a separate storage pool for LUNs that don’t need FAST Cache would be ideal.

Disable FAST Cache for the following LUN types (a sample naviseccli command for disabling FAST Cache at the pool level appears after this list of best practices):

– Secondary Mirror and Clone destination LUNs
– LUNs with small-block, highly sequential I/O, such as Oracle database logs and SnapSure dvols
– LUNs in the reserved LUN pool
– RecoverPoint journal LUNs
– SnapView Clones and MirrorView Secondary Mirrors

  • Analyze where you need it most.  Based on a workload analysis, I’d consider restricting the use of FAST Cache to the LUNs or pools that need it the most.  For every new block that is promoted into FAST Cache, the blocks with the oldest most-recent access are removed.  If your FAST Cache capacity is limited, even frequently accessed blocks may be evicted before they’re accessed again.
  • Upgrade to the latest OS release. On the VNX platform, upgrading to the latest FLARE or MCx release can greatly improve the performance of FAST Cache.  It’s been a few years now, but as an example, R32 recovers performance much faster after a FAST Cache drive failure than R31, and it automatically avoids promoting small sequential block I/O to FAST Cache.  It’s always a good idea to run a current version of the code.
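As an example, FAST Cache can be disabled at the storage pool level from the CLI with naviseccli. The SP address and pool ID below are placeholders, and the exact syntax should be verified against your FLARE/MCx release:

naviseccli -h <SP_IP_address> storagepool -modify -id <pool_id> -fastcache off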

Best Practices For VNX arrays with MCx:

  • Spread it out. Spread the drives as evenly as possible across the available back-end buses.  Be careful, though: you shouldn’t add more than 8 FAST Cache flash drives per bus, including any unused flash drives kept as hot spares.
  • Always use DAE 0. Try to use DAE 0 on each bus for flash drives, as it provides the lowest latency.

Best Practices for VNX and CX4 arrays with FLARE 30-32: 

  • CX4? No more than 4 per bus. If you’re still using an older CX4-series array, don’t use more than 4 FAST Cache drives per bus, and don’t put all of them on bus 0. If they are all on the same bus, they could completely saturate that bus with I/O.
  • Spread it out. Spread the FAST Cache drives over as many buses as possible. This is especially an issue if the drives are all on bus 0, because it is used to access the vault drives.  Note that the VNX has six times the back-end bandwidth per bus compared to a CX, so it’s less of a concern there.
  • Match the drive sizes. All the drives in FAST Cache must be of the same capacity; otherwise, the workload on each drive would rise proportionately with its capacity.  In other words, a 200GB drive would carry double the workload of a 100GB drive.
  • VNX? Use enclosure 0. Put the EFDs in the first DAE on any bus (i.e., Enclosure 0).  I/O has to pass through the LCC of each DAE between the drive and the SP, and each extra LCC it passes through adds a small amount of latency. That latency is normally negligible, but it is significant for flash drives.  Note that on the CX4, all I/O has to pass through every LCC anyway.
  • Mind the order the disks are added.  The order the drives are added dictates which drives are primary and secondary: the first drive added is the primary for the first mirror, the second drive is its secondary, the third drive is the primary for the second mirror, and so on.
  • Location, location, location. It’s a more advanced configuration and requires the use of the CLI, but for the highest availability, place the primary and secondary of each FAST Cache RAID 1 pair on different buses.

Using Cron with EMC VNX and Celerra

I’ve shared numerous shell scripts over the years on this blog, many of which benefit from being scheduled to run automatically on the Control Station.  I’ve received emails and comments asking, “How do I schedule Unix or Linux crontab jobs to run at intervals like every five minutes, every ten minutes, or every hour?”  I’ll run through some specific examples next. While it’s easy enough to simply type “man crontab” from the CLI to review the syntax, it can be helpful to see specific examples.

What is cron?

Cron is a time-based job scheduler used in most Unix operating systems, including the VNX File OS (DART). It’s used to schedule jobs (either commands or shell scripts) to run periodically at fixed times, dates, or intervals. It’s primarily used to automate system maintenance and administration, and it can also be used for troubleshooting.

What is crontab?

Cron is driven by a crontab (cron table) file, a configuration file that specifies shell commands to run periodically on a given schedule; the crontab files are where the lists of jobs and other instructions to the cron daemon are kept. Users can have their own individual crontab files, and often there is a system-wide crontab file (usually in /etc or a subdirectory of /etc) that only system administrators can edit.

On the VNX, the crontab files are located at /var/spool/cron but are not intended to be edited directly; you should use the “crontab -e” command.  Each NAS user has their own crontab, and commands in any given crontab are executed as the user who owns it.  For example, the crontab file for nasadmin is stored as /var/spool/cron/nasadmin.

Best Practices and Requirements

First, let’s review how to edit and list crontab and some of the requirements and best practices for using it.

1. Make sure you use “crontab -e” to edit your crontab file. As a best practice you shouldn’t edit the crontab files directly.  Use “crontab -l” to list your entries.

2. Blank lines and leading spaces and tabs are ignored. Lines whose first non-space character is a pound sign (#) are comments, and are ignored. Comments are not allowed on the same line as cron commands as they will be taken to be part of the command.

3. If the /etc/cron.allow file exists, then you must be listed therein in order to be allowed to use this command. If the /etc/cron.allow file does not exist but the /etc/cron.deny file does exist, then you must not be listed in the /etc/cron.deny file in order to use this command.

4. Don’t execute commands directly from within crontab; place your commands within a script file that’s called from cron. Cron cannot accept anything on stdout, which is one of several reasons you shouldn’t put commands directly in your crontab schedule.  Make sure to redirect stdout somewhere, either a log file or /dev/null, by adding “> /folder/log.file” or “> /dev/null” after the script path.

5. For scripts that will run under cron, make sure you either define actual paths or use fully qualified paths to all commands that you use in the script.

6. I generally add these two lines to the beginning of my scripts as a best practice for using cron on the VNX (a complete sample crontab entry follows these lines).

export NAS_DB=/nas
export PATH=$PATH:/nas/bin
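Putting those best practices together, a complete (hypothetical) crontab entry that calls a script by its fully qualified path and redirects both stdout and stderr to a log file would look like this:

# Run the report script daily at 6:00 AM, appending all output to a log
0 6 * * * /home/nasadmin/scripts/report.sh >> /home/nasadmin/logs/report.log 2>&1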

Descriptions of the crontab date/time fields

Commands are executed by cron when the minute, hour, and month of year fields match the current time, and when at least one of the two day fields (day of month, or day of week) match the current time.

# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of week (0 - 7) (0 or 7 is Sunday)
# │ │ │ │ │
# * * * * *  command to execute

field #   meaning          allowed values
-------   ------------     --------------
   1      minute           0-59
   2      hour             0-23
   3      day of month     1-31
   4      month            1-12
   5      day of week      0-7 (0 or 7 is Sun)

Run a command every minute

While it’s not as common to want to run a command every minute, there are specific use cases for it.  It would most likely be used when you’re in the middle of troubleshooting an issue and need data recorded more frequently.  For example, you may want to run a command every minute to check whether a specific process is running.  To run a crontab command every minute, use this syntax:

# Run “check.sh” every minute of every day
* * * * * /home/nasadmin/scripts/check.sh

Run a command every hour

The syntax is similar when running a cron job every hour of every day.  In my case, I’ve used hourly scripts for performance monitoring, for example with the server_stats VNX command. Here’s a sample crontab entry that runs at 15 minutes past the hour, 24 hours a day.

# Hourly stat collection
# This command will run at 12:15, 1:15, 2:15, etc., 24 hours a day.
15 * * * * /home/nasadmin/scripts/stat_collect.sh

Run a command once a day

Here’s an example that shows how to run a command from the cron daemon once a day. In my case, I usually run daily commands for report updates on our web page and for backups.  As an example, I run my Brocade zone backup script once daily.

# Run the Brocade backup script at 7:30am
30 7 * * * /home/nasadmin/scripts/brocade.sh

Run a command every 5 minutes

There are multiple ways to run a crontab entry every five minutes.  It is possible to enter a single, specific minute value multiple times, separated by commas.  While this method works, it makes the crontab a bit harder to read, and there is a shortcut you can use.

0,5,10,15,20,25,30,35,40,45,50,55  * * * * /home/nasadmin/scripts/script.sh

The crontab “step value” syntax (using a forward slash) lets you write the same schedule in the format sampled below.  It runs a command every five minutes and accomplishes the same thing as the entry above.

# Run this script every 5 minutes
*/5 * * * * /home/nasadmin/scripts/test.sh

Ranges, Lists, and Step Values

I just demonstrated the use of a step value to specify a schedule of every five minutes, but you can get even more granular than that using ranges and lists.

Ranges.  Ranges of numbers are allowed (two numbers separated by a hyphen). The specified range is inclusive. For example, using 7-10 for the “hours” entry specifies execution at hours 7, 8, 9, and 10.

Lists. A list is a set of numbers (or ranges) separated by commas. Examples: “1,2,5,9”, “0-4,8-12”.

Step Values. Step values can be used in conjunction with ranges. Following a range with “/<number>” specifies skips of the number’s value through the range. For example, “0-23/2” can be used in the hours field to specify command execution every other hour (the alternative being “0,2,4,6,8,10,12,14,16,18,20,22”). Steps are also permitted after an asterisk, so if you want to say “every two hours” you can use “*/2”.

Special Strings

While I haven’t personally used these, there is a set of built in special strings you can use, outlined below.

string         meaning
------         -------
@reboot        Run once, at startup.
@yearly        Run once a year, "0 0 1 1 *".
@annually      (same as @yearly)
@monthly       Run once a month, "0 0 1 * *".
@weekly        Run once a week, "0 0 * * 0".
@daily         Run once a day, "0 0 * * *".
@midnight      (same as @daily)
@hourly        Run once an hour, "0 * * * *".

Using a Template

Below is a template you can use in your crontab file to assist with the valid values that can be used in each column.

# Minute|Hour  |Day of Month|Month |WeekDay|Command
# (0-59)|(0-23)|(1-31)      |(1-12)|(0-7)  |
  0      2      12           *      *      test.sh

Gotchas

Here’s a list of the known limitations of cron and some of the issues you may encounter.

1. When a cron job runs, it is executed as the user that created it. Verify the security requirements for the job.

2. Cron jobs do not use any files in the user’s home directory (like .cshrc or .bashrc). If you need cron to read any file that your script needs, call it from the script cron is using. This includes setting paths, sourcing files, setting environment variables, etc.

3. If your cron jobs are not running, make sure the cron daemon is running. The cron daemon can be started or stopped with the following VNX Commands (run as root):

# /sbin/service crond stop
# /sbin/service crond start

4.  If your job isn’t running properly you should also check the /etc/cron.allow and /etc/cron.deny files.

5. Crontab is not parsed for environment variable substitutions. You cannot use things like $PATH, $HOME, or ~/sbin.

6. Cron does not deal with seconds; minutes are the most granular increment it allows.

7. You cannot use % in the command area; it must be escaped. If used with command substitution, like the date command, you can put it in backticks, e.g. `date +\%Y-\%m-\%d`, or use bash’s command substitution $() (see the example after this list).

8. Be cautious using day of month and day of week together.  Giving both fields a restriction (no *) makes this an “or” condition, not an “and” condition: the job runs when either field matches.
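As an example of gotcha #7, the entry below runs a (hypothetical) backup script nightly and writes to a date-stamped log file; the % signs in the date format are escaped so cron doesn’t treat them as line separators:

# Run nightly at 11:00 PM, logging to a date-stamped file
0 23 * * * /home/nasadmin/scripts/backup.sh > /home/nasadmin/logs/backup_`date +\%Y-\%m-\%d`.log 2>&1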


Storage Class Memory and Emerging Technologies

I mentioned in my earlier post, The Future of Storage Administration, that flash will continue to dominate the industry and be embraced by the enterprise, which I believe will drive newer technologies like NVMe and diminish older technologies like Fibre Channel.  While there is a lot of agreement about the latest storage technologies driving the adoption of flash in the enterprise, including the aforementioned NVMe, there doesn’t seem to be nearly as much agreement on what the “next big thing” will be in enterprise storage.  NVMe and NVMe-oF are definitely being driven by the trend toward the all-flash data center, and Storage Class Memory (SCM) is certainly a relevant trend that could be that “next big thing”.  Before I continue, what are NVMe, NVMe-oF, and SCM?

  • NVMe is a protocol that allows fast access to direct-attached flash storage. NVMe is considered an evolutionary step toward exploiting the inherent parallelism built into SSDs.
  • NVMe-oF allows the advantages of NVMe to be used on a fabric connecting hosts with networked storage. With the increased adoption of low-latency, high-bandwidth network fabrics like 10Gb+ Ethernet and InfiniBand, it becomes possible to build an infrastructure that extends the performance advantages of NVMe over standard fabrics to access low-latency nonvolatile persistent storage.
  • SCM (Storage Class Memory) is a technology that places memory and storage on what looks like a standard DIMM board, which can be connected over NVMe or the memory bus.  I’ll dive in a bit more later on.

In the coming years, you’ll likely see every major storage vendor rolling out their own solutions for NVMe, NVMe-oF, and SCM.  The technologies alone won’t mean anything without optimization of the OS/hypervisor, drivers, and protocols, however. The NVMe software will need to be designed to take advantage of the low latency transport and media.

Enter Storage Class Memory

SCM is a hybrid memory and storage paradigm, placing memory and storage on what looks like a standard DIMM board.  It’s been gaining a lot of attention at storage industry conferences for the past year or two.  Modern solid-state drives are a compromise: the flash media itself is fast, but it still sits behind the bottlenecks of legacy drive interfaces, even when bundled into modern enterprise arrays.  SCM is not exactly memory and it’s not exactly storage.  It physically connects to memory slots in a mainboard just like traditional DRAM.  It is a little slower than DRAM, but it is persistent, so just like traditional storage all content is saved across a power cycle.  Compared to flash, SCM is orders of magnitude faster and offers equal performance gains on read and write operations.  In addition, SCM tiers are much more resilient and do not have the same wear pattern problems as flash.

A large gap exists between DRAM as a main memory and traditional SSD and HDD storage in terms of performance vs. cost, and SCM looks to address that gap.

The next-generation technologies that will drive SCM aim to be denser than current DRAM along with being faster, more durable, and hopefully cheaper than NAND solutions.  SCM, when connected over NVMe technology or directly on the memory bus, will enable device latencies to be about 10x lower than those provided by NAND-based SSDs.  SCM can also be up to 10x faster than NAND flash although at a higher cost than NAND-based SSDs. Similarly, NAND flash started out at least 10x more expensive than the dominant 15K RPM HDD media when it was introduced. Prices will come down.

Because the expected media latencies for SCM (<2us) are lower than the network latencies (<5us), SCM will probably end up being more common on servers rather than on the network.  Either way, SCM on a storage system will help accelerate metadata access and improve overall system performance.  Using NVMe-oF to provide low-latency access to networked storage, SCM could potentially be used to create a different tier of network storage.

The SCM Vision

It sounds great, right?  The concept of Storage Class Memory has been around for a while, but it’s become a hard-to-reach, albeit very desirable, goal for storage professionals. The common vision seems to be a new paradigm where data can live in fast, DRAM-like storage areas, making the data in memory the center of the computer instead of the compute functions. The main problem with this vision is how we get the system and applications to recognize that something beyond just DRAM is available for use, and that it can be used as either data storage or as persistent memory.

We know that SCM will allow for huge volumes of I/O to be served from memory and potentially stored in memory.  There will be less need to create multiple copies to protect against controller or server failure.  Exactly how this will be done remains to be seen, but there are obvious benefits from not having to continuously commit to slow external disks.  Once all the hurdles are overcome, SCM should have broad applicability in SSDs, storage controllers, PCI or NVMe boards and DIMMs.

Software Support

With SCM, applications won’t need to execute write IOs to get data into persistent storage; a memory-level, zero-copy operation moving data into XPoint will take care of that. That is just one example of the changes that systems and software will have to take on board when a hardware option like XPoint is treated as persistent storage-class memory.  Most importantly, the following must also be developed:

  • File systems that are aware of persistent memory
  • Operating system support for storage-class memory
  • Processors designed to use hybrid DRAM and XPoint memory

With that said, the industry is well on its way. Microsoft has added XPoint storage-class memory support into Windows Server 2016.  It provides zero-copy access and Direct Access Storage volumes, known as DAX volumes.  Red Hat Linux operating system support is in place to use these devices as fast disks in sector mode with btt, and this use case is fully supported in RHEL 7.3.
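As a quick sketch of what that support looks like in practice on Linux (the device and mount point names are hypothetical, and this assumes a namespace configured in fsdax rather than sector mode), a pmem device can be formatted and mounted with the dax option so applications can map persistent memory directly, bypassing the page cache:

# format the persistent memory device and mount it DAX-enabled
mkfs.ext4 /dev/pmem0
mount -o dax /dev/pmem0 /mnt/pmem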

Hardware

SCM can be implemented with a variety of current technologies, notably Intel Optane, ReRAM, and NVDIMM-P.

Intel has introduced Optane-brand XPoint SSDs as well as XPoint DIMMs; the DIMMs attach directly to the memory bus instead of the relatively slower PCIe bus used by the NVMe XPoint drives.

Resistive Random-Access Memory (ReRAM) is still an up-and-coming technology comparable to Intel’s XPoint. It is under development by a number of companies as a potential replacement for flash memory, although its cost and performance are not yet at a level that makes it ready for the mass market. Developers of ReRAM technology all face similar challenges: overcoming temperature sensitivity, integrating with standard CMOS technology and manufacturing processes, and limiting the effects of sneak path currents, which would otherwise disrupt the stability of the data contained in each memory cell.

NVDIMM stands for “Nonvolatile Dual-Inline Memory Module.” The NVDIMM-P specification is being developed to support NAND flash directly on the host memory interface.  NVDIMMs use predictive software that allocates data in advance between DRAM and NAND.  NVDIMM-P is limited in that even though NAND flash is physically located at DIMM along with DRAM, the traditional memory hierarchy is still the same. The NAND implementation still works as a storage device and the DRAM implementation still works as main memory.

HP worked for years developing its Machine project.  Their effort revolved around memory-driven computing and an architecture aimed at big data workloads, and their goal was eliminating inefficiencies in how memory, storage, and processors interact.  While the project appears to now be dead, the technologies they developed will live on in current and future HP products. Here’s what we’ll likely see out of their research:

  • Now: ProLiant boxes with persistent memory for applications to use, using a mix of DRAM and flash.
  • Next year: Improved DRAM-based persistent memory.
  • Two to three years: True non-volatile memory (NVM) for software to use as slow but high-volume RAM.
  • Three to four years: NVM technology across many product categories.

SCM Use Cases

I think SCM’s most exciting use case in high performance computing may be as nonvolatile memory that is tightly coupled to an application. SCM has the potential to dramatically affect the storage landscape in high performance computing, and application and storage developers will have fantastic opportunities to take advantage of this unique new technology.

Intel touts fast storage, cache, and extended memory as the primary use cases for their Optane product line.  Fast storage or cache refers to the tiering and layering which enable a better memory-to-storage hierarchy. The Optane product provides a new storage tier that breaks through the bottlenecks of traditional NAND storage to accelerate applications, and enable more work to get done per server. Intel’s extended memory use case describes the use of an Optane SSD to participate in a shared memory pool with DRAM at either the OS or application level enabling bigger memory or more affordable memory.

What the next generation of SCM will require is for the industry to come together, agree on common terminology, and generate standards.  Those standards will be critical to supporting innovation. Industry experts seem to be saying that the adoption of SCM will evolve around use cases and workloads, and around task-specific, engineered machines that are built with real-time analytics in mind.  We’ll see what happens.

No matter what, new NVMe-based products coming out will definitely do a lot toward enabling fast data processing at a large scale, especially solutions that support the new NVMe-oF specification. SCM combined with software-defined storage controllers and NVMe-oF will enable users to pool flash storage drives and treat them as if they are one big local flash drive. Exciting indeed.

SCM may not turn out to be a panacea, and current NVMe flash storage systems will provide enough speed and bandwidth to handle even the most demanding compute requirements for the foreseeable future.  I’m looking forward to seeing where this technology takes us.

The Future of Storage Administration

What is the future of enterprise storage administration? Will the job of enterprise storage administrator still be necessary in 10 years? A friend in IT storage management recently asked me a question related to that very relevant topic. In a word, I believe the answer to that second question is a resounding yes, but alas, things are changing and we are going to have to embrace the inevitable changes in the industry to stay on top of our game and remain relevant.  In recent years I’ve seen the need to broaden my skills and demonstrate how my skills can actually drive business value. The modern data center is undergoing a tremendous transformation, with hyper-converged systems, open source solutions, software-defined storage, large cloud-scale storage systems that companies can throw together all by themselves, and many others. This transformation is being created by the need for business agility, and it’s being fueled by software innovation.

As the expansion of data into the cloud influences and changes our day-to-day management, we will begin to see the rise of the IT generalist in the storage world.  These inevitable changes and the new tools that manage them will mean that storage will likely move toward being procured and managed by IT generalists rather than specialists like myself. Hyper-converged infrastructures will allow these generalists to manage an entire infrastructure with a single, familiar set of tools.  As overall data center responsibilities start to shift to more generalist roles, traditional enterprise storage professionals like myself will need to expand our expertise beyond storage, or focus on more strategic projects where storage performance is critical.  I personally see us starting to move away from the day-to-day maintenance of infrastructure and more toward how IT can become a real driver of business value. The glory days of on-prem SAN and storage arrays are nearing an end, but we old-timers in enterprise storage can still be a key part of the success of the businesses we work for. If we didn’t embrace change, we wouldn’t be in IT, right?

Despite all of these new technologies and trends, keep in mind that there are still some good reasons to take the classic architecture into consideration for new deployments. It’s not going to disappear overnight. It’s the business that drives the need for storage, and it’s the business applications that dictate the ideal architecture for your environment. Aside from the application, businesses will also be dependent on their existing in-house skills which will of course affect the overall cost analysis of embracing the new technologies, possibly pushing them off.

So, what are we in for? The following list summarizes the key changes I think we’ll see in the foreseeable future.  I’m guessing you’d see these (along with many others) pop up in pretty much any Google search about the future of storage or storage trends, but these are the most relevant to what I’m personally witnessing.

  • The public cloud is an unstoppable force.  Embrace it as a storage admin or risk becoming irrelevant.
  • Hyper-converged systems will become more and more common, driven by market demand.
  • Hardware commoditization will continue to eat away at the proprietary hardware business.
  • Storage vendors will continue to consolidate.
  • We will see the rise of RDMA in enterprise storage.
  • Open Source storage software will mature and will see more widespread acceptance.
  • Flash continues to dominate and will be embraced by the enterprise, driving newer technologies like NVMe and diminishing technologies like Fibre Channel.
  • GDPR will drive increased spending and overall focus on data security.
  • Scale out and object solutions will increasingly be more important.
  • Data Management and automation will increase in importance.

Cloud
I believe that the future of cloud computing is undeniably hybrid. The future data center will likely represent a combination of cloud based software products and on-prem compute, creating a hybrid IT solution that balances the scalability and flexibility associated with cloud, and the security and control you have with a private data center. With that said, I don’t believe that Cloud is a panacea as there are always concerns about security, privacy, backups, and especially performance. In my experience, when the companies I’ve worked for have directed us to investigate cloud options for specific applications, on-premises infrastructure costs less than public cloud in the long run. Even so, there is no doubting the inexorable shift of projects, infrastructure, and spending to the cloud, and it will affect compute, networking, software, and storage. I expect I’ll see more and more push to find more efficient solutions that offer lower costs, likely resulting in hybrid solutions. When moving to the cloud, monitoring consumption is the key to cost savings. Cost management tools from the likes of Cloudability, Cloud Cruiser and Cloudyn are available and well worth looking at.

I’ve also heard, “the cloud is already in our data center, it’s just private”. Contrary to popular belief, private clouds are not simply existing data centers running virtualized, legacy workloads. They are highly modernized application and service environments running on true cloud platforms (like AWS or Azure) residing either on-prem or in a hybrid scenario with a hosting services partner. As we shift more of our data to the cloud, we’ll see industry demand for storage move from “just in case” storage (an upfront capex model) to “just in time” storage (an ongoing opex model). “Just in time” storage has been a running joke for years for me in the more traditional data center environments that I’ve been responsible for, alluding to the fact that we’d get storage budget approved, ordered and installed just days before reaching full capacity. That’s not what I’m referring to in this case… “Just in time” here means that online service providers run at much higher asset utilization than the typical customer and can add capacity in more granular increments. The migration to cloud allows for a much more efficient “just in time” model than I’m used to, and allows the switch to an ongoing opex model.

Hyper Converged
A hyper-converged infrastructure can greatly simplify the management of IT and yes, it could reduce the need for skilled storage administrators: the complexities of storage, servers and networking that require separate skills to manage are hidden ‘under the hood’ by that software layer, allowing it to be managed by staff with more general IT skills through a single administrative interface. Hyperconverged infrastructure is also much easier to scale and in smaller increments than traditional integrated systems. Instead of making major infrastructure investments every few years, businesses can simply add modules of hyperconverged infrastructure when they are needed.

It seems like an easy sell. It’s a data center in a box. Fewer components, a smaller data center footprint, reduced energy consumption, lower cooling requirements, reduced complexity, rapid deployment time, fewer high level skill requirements, and reduced cost. What else could you ask for?

As it turns out, there are issues. Hyper-converged systems require a massive amount of interoperability testing, which means hardware and software updates take a very long time to be tested, approved and released. A brand new Intel chipset can take half a year to be approved. There is a tradeoff between performance and interoperability. In addition, you won’t be saving any money over a traditional implementation, hyper-converged requires vendor lock-in, and performance and capacity must be scaled out at the same time. Even with those potential pitfalls, hyper-converged systems are here to stay and will continue to be adopted at a fast pace in the industry. The pros tend to outweigh the cons.

Hardware Commoditization
The commoditization of hardware will continue to eat away at proprietary hardware businesses. The cost savings from economies of scale always seem to overpower the benefits of specialized solutions. Looking at history, there has been a long pattern of mass-market produced products completely wiping out low-volume high-end products, even superior ones. Open source software using off-the-shelf hardware will become more common as we move toward the commoditization of storage.

I believe most enterprises in general lack the in-house talent required to combine third-party or open source storage software with commodity hardware in a way that can guarantee the scalability and resilience that would be required. I think we’re moving in that direction, but we’re not likely to see it become prevalent in enterprise storage soon.

The mix of storage vendors in typical enterprises is not likely to be radically changed anytime soon, but change is coming. Startups, even with their innovative storage software, have to deal with concerns about interoperability, supportability and resilience, and those concerns aren’t going anywhere. While the endorsement of a startup by one of the major vendors could change that, I think the current largest players like Dell/EMC and NetApp might be apprehensive about accelerating the move to storage hardware commoditization.

Open Source
I believe that software innovation has decisively shifted to open source, and we’re seeing that more and more in the enterprise storage space. You can take a look at many current open source solutions in my previous blog post here. Moreover, I can’t think of a single software market that has a proprietary packaged software vendor that defines and leads the field. Open source allows fast access to innovative software at little or no cost, allowing IT organizations to redirect their budget to other new initiatives.

Enterprise architecture groups, which have generally focused on choosing which proprietary vendor to lock themselves into, are now faced with the onerous task of selecting the appropriate open source software components, figuring out how they’ll be integrated, and doing interoperability testing, all while ensuring that they maintain a reliable infrastructure for the business. As you might expect, implementing open source requires a much higher level of technical ability than traditional proprietary solutions. Having the programming knowledge to build and support an open source solution is far different than operating someone else’s supported solution. I’m seeing some traditional vendors move to the “milk the installed base” strategy and stifle their own internal innovation. If we want to showcase ourselves as technology leaders, we’re going to have to embrace open source solutions, despite the drawbacks.

While open source software can increase complexity and include poorly tested features and bugs, the overall maturity and general usability of open source storage software has been improving in recent years. With the right staff, implementation risks can be managed. For some businesses, the cost benefits of moving to that model are very tangible. Open source software has become commonplace in the enterprise, especially in the Linux realm. Linux of course pretty much started the open source movement, followed by widely adopted enterprise applications like MySQL, Apache, and Hadoop. Open source software can allow businesses to develop customized, innovative IT solutions to address their challenges while at the same time bringing down acquisition costs by using commodity hardware.

NVMe
Storage industry analysts have predicted the slow death of Fibre Channel based storage for a long time. I expect that trend to speed up, with the steadily increasing speed of standard Ethernet all but eliminating the need for proprietary SAN connections and the expensive Fibre Channel infrastructure that comes along with them. NVMe over Ethernet will drive it. NVMe technology is a high performance interface for solid-state drives (SSDs) and, predictably, it will be embraced by all-flash vendors moving forward.

All the current storage trends you’ve read about around efficiency, flash, performance, big data, machine learning, object storage, hyper-converged infrastructure, etc. are moving against the current Fibre Channel standard. Production deployments are not yet widespread, but it’s coming. It allows vendors and customers to get the most out of flash (and other non-volatile memory) storage. The rapid growth of all-flash arrays has kept Fibre Channel alive because all-flash typically replaces legacy disk or hybrid Fibre Channel arrays.

Legacy Fibre Channel vendors like Emulex, QLogic, and Brocade have been acquired by larger companies so those companies can milk the cash flow from the expensive FC hardware before their customers convert to Ethernet. I don’t see any growth or innovation in the FC market moving forward.

Flash
In case you haven’t noticed, it’s near the end of 2017 and flash has taken over. It was widely predicted, and from what I’ve seen personally, those predictions absolutely came true. While it still may not rule the data center overall, new purchases have trended that way for quite some time now. Within the past year the organizations I’ve worked for have completely eliminated spinning disk from block storage purchases, instead relying on the value proposition of all-flash, with data reduction capabilities making up for the smaller footprint. SSDs are now growing in capacity faster than HDDs (15TB SSDs have been announced) and every storage vendor now has an all-flash offering.

Consolidate and Innovate
The environment for flash startups is getting harder because all the traditional vendors now offer their own all-flash options. There are still startups making exciting progress in NVMe over Fabrics, object storage, hyper-converged infrastructure, data classification, and persistent memory, but only a few can grow into profitability on their own. We will continue to see acquisitions of these smaller, innovative startups as the larger companies struggle to develop similar technologies internally.

RDMA
RDMA will continue to become more prevalent in enterprise storage, as it significantly boosts performance. RDMA, or Remote Direct Memory Access, has actually been around in the storage arena for quite a while as a cluster interconnect and for HPC storage. Most high-performance scale-out storage arrays use RDMA for their cluster communications; examples include Dell FluidCache for SAN, XtremIO, VMAX3, IBM XIV, InfiniDat, and Kaminario. A Microsoft blog post I was reading showed 28% more throughput, realized through reduced I/O latency. It also illustrated that RDMA is more CPU-efficient, which leaves the CPU available to run more virtual machines. TCP/IP is of course no slouch and is absolutely still a viable deployment option. While not quite as fast and efficient as RDMA, it will remain well suited for organizations that lack the expertise needed for RDMA.

The Importance of Scale-Out
Scale-up storage is showing its age. If you’re reading this, you probably know that scale-up is limited by the scalability of the storage controllers and has for years led to storage system sprawl. As we move into multi data center architectures, especially in the world of object storage, clusters will be extended by adding nodes in different geographical areas. Because object storage is geo-aware (I am in the middle of a global ECS installation), policies can be established to distribute data into these other locations. As a user accesses the storage, the object storage system will return data from the node that provides the best response time to that user. As data storage needs continue to grow rapidly, it’s critical to move toward scale-out architectures vs. scale-up. The scalability that scale-out storage offers will help reduce costs, complexity, and resource allocation.

GDPR
The General Data Protection Regulation takes effect in 2018 and applies to any entity doing business within any EU country. Under the GDPR, companies will need to build controls around security roles and levels in regard to data access and data transfer, and must provide tight data-breach mechanisms and notification protocols. As process controls, these will probably have little impact on your infrastructure; however, the two main points within the GDPR with the most potential for directly impacting storage are data protection by design and data privacy by default.

The GDPR is going to require you to think about the benefits of cloud vs. on-prem solutions. Data will have to meet the principle of privacy by default, be in an easily portable format, and meet the data minimization principle. Liability under the new regulation falls on all parties, however, so cloud providers will have to have robust compliance solutions in place as well, meaning a cloud or hybrid solution could be a simpler, less expensive route in the future.

XtremIO Manual Log File Collection Procedure

If you have a need to gather XtremIO logs for EMC to analyze and they are unable to connect via ESRS, there is a method to gather them manually.  Below are the steps on how to do it.

1. Log in to the XtremIO Management System (XMS) GUI interface with the ‘admin‘ user account.

2. Click on the ‘Administration‘ tab, which is on the top of the XtremIO Management System (XMS) GUI banner bar.

3. On the left side of the Administration window, choose the ‘CLI Terminal‘ option.

4. Once you have the CLI terminal up, enter the following CLI command at the ‘xmcli (admin)>‘ prompt: create-debug-info.  This command generates a set of XtremIO dossier log files and may take a little while to complete.  Once the command completes and returns you to the ‘xmcli (admin)>’ prompt, a complete package of XtremIO dossier log files will be available for you to download.

Example:

xmcli (admin)> create-debug-info
The process may take a while. Please do not interrupt.
Debug info collected and could be accessed via http://<XMS IP Address>/XtremApp/DebugInfo/104dd1a0b9f56adf7f0921d2f154329a.tar.xz

Important Note: If you have more than one cluster managed by the XMS server, you will need to select the specific cluster.

xmcli (e012345)> show-clusters

Cluster-Name Index State  Gates-Open Conn-State Num-of-Vols Num-of-Internal-Volumes Vol-Size UD-SSD-Space Logical-Space-In-Use UD-SSD-Space-In-Use Total-Writes Total-Reads Stop-Reason Size-and-Capacity
XIO-0881     1     active True       connected  253         0                       60.550T  90.959T      19.990T              9.944T              442.703T     150.288T    none        4X20TB
XIO-0782     2     active True       connected  225         0                       63.115T  90.959T      20.993T              9.944T              207.608T     763.359T    none        4X20TB
XIO-0355     3     active True       connected  6           0                       2.395T   41.111T      1.175T               253.995G            6.251T       1.744T      none        2X40TB

xmcli (e012345)> create-debug-info cluster-id=3

5. Once the ‘create-debug-info‘ command completes, you can use a web browser to navigate to the HTTP address link that’s provided in the terminal session window.  After navigating to the link, you’ll be presented with a pop-up window asking you to save the log file package to your local machine.  Save the log file package to your local machine for later upload.
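If you prefer the command line to a browser, the same link can be fetched with a standard HTTP client; the bundle name below is taken from the earlier example output:

curl -O http://<XMS IP Address>/XtremApp/DebugInfo/104dd1a0b9f56adf7f0921d2f154329a.tar.xz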

6. Attach the XtremIO dossier log file package you downloaded to the EMC Service Request (SR) you currently have open or are in the process of opening.  Use the ‘Attachments’ (the paperclip button) area located on the Service Request page for upload.

7. You also have the ability to view a historical listing of all XtremIO dossier log file packages that are currently available on your system. To view them, issue the following XtremIO CLI command: show-debug-info. A series of log file packages will be listed.  It’s possible EMC may request a historical log file package for baseline information when troubleshooting.  To download, simply reference the HTTP links listed under the ‘Output-Path‘ header and input the address into your web browser’s address bar to start the download.

Example:

xmcli (tech)> show-debug-info
Name  Index  System-Name   Index   Debug-Level   Start-Time                 Create-Time               Output-Path
      1      XtremIO-SVT   1       medium        Mon Aug 14 15:55:10 2017   Mon Aug 14 16:09:40 2017  http://<XMS IP Address>/XtremApp/DebugInfo/1aaf4b1acd88433e9aca5b022b5bc43f.tar.xz
      2      XtremIO-SVT   1       medium        Mon Aug 14 15:55:10 2017   Mon Aug 14 16:09:40 2017  http://<XMS IP Address>/XtremApp/DebugInfo/af5001f0f9e75fdd9c0784c3d742531f.tar.xz

That’s it! It’s a fairly straightforward process.


Configuring a Powershell to Isilon Connection with SSL

PowerShell provides an easy way to access the Isilon REST API, but in my environment I need to use true SSL validation. If you are using the default self-signed certificate on the Isilon, your connection will likely fail with an error similar to the one below:

The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

Isilon generates a self-signed certificate by default.  Certificate validation for the current PowerShell session can be disabled with the script below; however, in my environment I’m not allowed to do that.  I’m including it for completeness in case it is useful for someone else. It was not written by me and is distributed under a BSD 3-Clause license.

function Disable-SSLValidation{
<#
.SYNOPSIS
    Disables SSL certificate validation
.DESCRIPTION
    Disable-SSLValidation disables SSL certificate validation by using reflection to implement the System.Net.ICertificatePolicy class.
    Author: Matthew Graeber (@mattifestation)
    License: BSD 3-Clause
.NOTES
    Reflection is ideal in situations when a script executes in an environment in which you cannot call csc.exe to compile source code. If compiling code is an option, then implementing System.Net.ICertificatePolicy in C# with Add-Type is trivial.
.LINK
    http://www.exploit-monday.com
#>
    Set-StrictMode -Version 2
    # Return early if this function has already been run
    if ([System.Net.ServicePointManager]::CertificatePolicy.ToString() -eq 'IgnoreCerts') { Return }
    $Domain = [AppDomain]::CurrentDomain
    $DynAssembly = New-Object System.Reflection.AssemblyName('IgnoreCerts')
    $AssemblyBuilder = $Domain.DefineDynamicAssembly($DynAssembly, [System.Reflection.Emit.AssemblyBuilderAccess]::Run)
    $ModuleBuilder = $AssemblyBuilder.DefineDynamicModule('IgnoreCerts', $false)
    $TypeBuilder = $ModuleBuilder.DefineType('IgnoreCerts', 'AutoLayout, AnsiClass, Class, Public, BeforeFieldInit', [System.Object], [System.Net.ICertificatePolicy])
    $TypeBuilder.DefineDefaultConstructor('PrivateScope, Public, HideBySig, SpecialName, RTSpecialName') | Out-Null
    $MethodInfo = [System.Net.ICertificatePolicy].GetMethod('CheckValidationResult')
    $MethodBuilder = $TypeBuilder.DefineMethod($MethodInfo.Name, 'PrivateScope, Public, Virtual, HideBySig, VtableLayoutMask', $MethodInfo.CallingConvention, $MethodInfo.ReturnType, ([Type[]] ($MethodInfo.GetParameters() | % {$_.ParameterType})))
    $ILGen = $MethodBuilder.GetILGenerator()
    $ILGen.Emit([Reflection.Emit.Opcodes]::Ldc_I4_1)
    $ILGen.Emit([Reflection.Emit.Opcodes]::Ret)
    $TypeBuilder.CreateType() | Out-Null

    # Disable SSL certificate validation
    [System.Net.ServicePointManager]::CertificatePolicy = New-Object IgnoreCerts
}
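If you do choose to use it, below is a minimal usage sketch. The cluster hostname and API endpoint are my own placeholders (this assumes the OneFS platform API listening on its default port 8080), so adjust them for your environment:

# Hypothetical usage -- hostname and endpoint are placeholders
Disable-SSLValidation
$cred = Get-Credential
Invoke-RestMethod -Uri "https://isilon.example.com:8080/platform/1/cluster/config" -Credential $cred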

While that code may work fine for some, for security reasons you may not want to, or be able to, disable certificate validation.  Fortunately, you can create your own key pair with PuTTYgen.  This solution was tested with OneFS v7.2.x and PowerShell v3.

Here are the steps for creating your own key pair for PowerShell SSL authentication to Isilon:

Generate the Key

  1. Download PuTTYgen to generate the key pair for authentication. Open PuTTYgen and click Generate.
  2. Note that PowerShell requires the key to be exported in OpenSSH format, which is done under the Conversions menu with the ‘Export OpenSSH key’ option.  Save the key without a passphrase.  It can be named something like “SSH.key”.
  3. Next we need to save the public key.  Copy the text in the upper box labeled “Public key for pasting into OpenSSH authorized_keys file” and paste it into a new text file.  You can then save the file as “authorized_keys” for later use.

Copy the Key

  1. Copy the authorized_keys file to the Isilon cluster to the location of your choosing.
  2. Open an SSH connection to the Isilon cluster and create a folder for the authorized_keys file.
    Example command:  isi_for_array mkdir /root/.ssh
  3. Copy the file to all nodes. Example command: isi_for_array cp /ifs/local/authorized_keys /root/.ssh/
  4. Verify that the file is available on all of the nodes, and it’s also a good idea to verify that the checksum is correct. Example command: isi_for_array md5 /root/.ssh/authorized_keys

Install PowerShell SSH Module

  1. In order to execute commands via SSH using PowerShell you will need to use an SSH module.  Various options exist, however the module from PowerShellAdmin works fine. It works for running commands via SSH on remote hosts such as Linux or Unix computers, VMware ESX(i) hosts or network equipment such as routers and switches that support SSH. It works well with OpenSSH-type servers.

You can visit the PowerShellAdmin page here, and here is the direct download link for the SessionsPSv3.zip file.

  2. Once you’ve downloaded it, unzip the file to the SSH-Sessions folder, located in C:\Windows\System32\WindowsPowerShell\v1.0\Modules. With that module in place, we are now ready to connect with PowerShell to the Isilon cluster.

Test it

Below is a PowerShell script you can use to test your connection; it simply runs a df command on the cluster.

#PowerShell test script: connect to the cluster over SSH, run 'df', and disconnect
Import-Module "SSH-Sessions"
$Isilon = "<hostname>"
$KeyFile = "C:\scripts\<filename>.key"
# Open the SSH session using the OpenSSH-format private key generated earlier
New-SshSession -ComputerName $Isilon -Username root -KeyFile $KeyFile
# Run a simple command to verify the connection works
Invoke-SshCommand -Verbose -ComputerName $Isilon -Command df
# Tear down the session
Remove-SshSession -ComputerName $Isilon


Isilon Mitrend Data Gathering Procedure

Mitrend is an extremely useful IT infrastructure analysis service, providing excellent health, growth, and workload profiling assessments.  The service can process input source data from EMC and many non-EMC arrays, from host operating systems, and from some applications.  In order to use the service, certain support files must be gathered before submitting your analysis request.  I had previously run the reports myself as an EMC customer, but sometime in the recent past that ability was removed for customers and is now restricted to EMC employees and partners. You can of course simply send the files to your local EMC support team and they will be able to submit the files for a report on your behalf.  The reports are very detailed and extremely helpful for a general health check of your array; the data is well organized into a PowerPoint slide presentation, and the raw data is also made available in Excel format.

My most recent analysis request was for Isilon, and below are the steps you’ll need to take to gather the appropriate information to receive your Isilon Mitrend report.  The performance impact of running the data gather is expected to be minimal, but in situations where the performance impact may be a concern, you should consider the timing of the run. I have never personally had an issue with performance when running the data gather, and the performance data is much more useful if it’s collected during peak periods. The script is compatible with the virtual OneFS Simulator and can be executed and tested there prior to running on any production cluster. If you notice performance problems while the script is running, pressing Control + C in the console window will terminate it.

Obtain & Verify isi_gather_perf file

You will need to obtain a copy of the isi_gather_perf.tgz file from your local EMC team if you don’t already have a copy.  Verify that the file you receive is 166 KB in size. To verify that isi_gather_perf.tgz is not corrupted or truncated, you can run the following command once the file is on the Isilon cluster.

Isilon-01# file /ifs/isi_gather_perf.tgz

Example of a good file:

Isilon-01# file /ifs/isi_gather_perf.tgz
/ifs/isi_gather_perf.tgz: gzip compressed data, from Unix, last modified: Tue Nov 18 08:33:49 2014
(the file is intact and ready to be used)

Example of a corrupt file:

Isilon-01# file /ifs/isi_gather_perf.tgz
/ifs/isi_gather_perf.tgz: data
(the file is corrupt)

Once you’ve verified that the file is valid, you must manually run a Cluster Diagnostics gather. On the OneFS web interface, navigate to Cluster Management > Diagnostics > Gather Info and click the “Start Gather” button. Depending on the size of the cluster, it will take about 15 minutes. This process will automatically create a folder on the cluster called “Isilon_Support”, created under “ifs/data/”.
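If you’d rather start that diagnostics gather from the CLI during an SSH session instead of using the web interface, the same collection can be kicked off with the isi_gather_info command (shown here with its default options):

Isilon-01# isi_gather_info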

Gather Performance Info

Below is the process that I used.  Different methods of transferring files can of course be used, but I use WinSCP to copy files directly to the cluster from my Windows laptop, and I use PuTTY for CLI management of the cluster via ssh.

1. Copy the isi_gather_perf.tgz to the Isilon cluster via SCP.

2.  Log into the cluster via ssh.

3. Copy the isi_gather_perf.tgz to /ifs/data/Isilon_Support, if it’s not there already.

4. Change to the Isilon Support Directory

 Isilon-01# cd /ifs/data/Isilon_Support

5. Extract the compressed file

 Isilon-01# tar zxvf /ifs/data/Isilon_Support/isi_gather_perf.tgz

After extraction, a new directory will be automatically created within the “Isilon_Support” directory named “isi_gather_perf”.

6. Start ‘Screen’

 Isilon-01# screen

7.  Execute the performance gather.  All output data is written to /ifs/data/Isilon_Support/isi_gather_perf/.  The default options gather 24 hours of performance data and then create a bundle with the gathered data.

Isilon-01# nohup python /ifs/data/Isilon_Support/isi_gather_perf/isi_gather_perf

8. At the end of the run, the script will create a .tar.gz archive of the captured data in /ifs/data/Isilon_Support/isi_gather_perf/. Gather the output files and send them to EMC.  Once EMC submits the files to Mitrend, it can take up to 24 hours for them to be processed.

Notes:

Below is a list of the command options available.  You may want to change how frequently the command samples and how long it runs with the -i and -r options; see the example after the option list.

 Usage: isi_gather_perf [options]

 Options:
 -h, --help            Show this help message and exit
 -v, --version         Print version
 -d, --debug           Enable debug log output (log: /tmp/isi_gather_perf.log)
 -i INTERVAL, --interval=INTERVAL
                       Interval in seconds to sample performance data
 -r REPEAT, --repeat=REPEAT
                       Number of times to repeat the specified interval
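For example, this hypothetical run samples every 5 seconds and repeats 720 times, capturing roughly an hour of performance data:

Isilon-01# nohup python /ifs/data/Isilon_Support/isi_gather_perf/isi_gather_perf -i 5 -r 720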

Logs:

The logs are located in /ifs/data/Isilon_Support/isi_gather_perf/gathers/ and by default are set to debug level, so they are extremely verbose.

Output:

The output from isi_gather_info will go to /ifs/data/Isilon_Support/pkg/
The output from isi_gather_perf will go to /ifs/data/Isilon_Support/isi_gather_perf/gathers/


Machine Learning, Cognitive Computing, and the Storage Industry

In keeping with my recent posts about object storage and software defined storage, this is another topic that simply interested me enough to want to do a bit of research, both on the topic in general and on how it relates to the industry I work in.  I discovered that there is a wealth of information on Machine Learning, Cognitive Computing, Artificial Intelligence, and Neural Networks, so much that writing a summary is difficult.  Well, here’s my attempt.

There is pressure in the enterprise software space to incorporate new technologies in order to keep up with the needs of modern businesses. As we move farther into 2017, I believe we are approaching another turning point in technology where many concepts that were previously limited to academic research or industry niches are now being considered for actual mainstream enterprise software applications.  I believe you’ll see Machine learning and cognitive systems becoming more and more visible in the coming years in the enterprise storage space. For the storage industry, this is very good news. As this technology takes off, it will result in the need to retain massive amounts of unstructured data in order to train the cognitive systems. Once machines can learn for themselves, they will collect and generate a huge amount of data to be stored, intelligently categorized and subsequently analyzed.

The standard joke about artificial intelligence (or machine learning in general) is that, like nuclear fusion, it has been the future for more than half a century now.  My goal in this post is to define the concepts, look at ways this technology has already been implemented, look at how it affects the storage industry, and investigate use cases for this technology.  I’m writing this paragraph before I start, so we’ll see how that goes. 🙂

What is Cognitive Computing?

Cognitive computing is the simulation of human thought processes using computerized modeling (the most well-known example is probably IBM’s Watson). It incorporates self-learning systems that use data mining, pattern recognition and natural language processing to imitate the way our brains process thoughts. The goal of cognitive computing is to create automated IT systems that are capable of solving problems without requiring human assistance.

This sounds like the stuff of science fiction, right? HAL (from the movie “2001: A Space Odyssey”) came to the logical conclusion that his crew had to be eliminated. It’s my hope that intelligent storage arrays utilizing cognitive computing will come to the conclusion that 99.9 percent of stored data has no value and should therefore be deleted.  It would eliminate the need for me to build my case for archiving year after year. 🙂

Cognitive computing systems work by using machine learning algorithms; the two are inescapably linked. These systems continuously gather knowledge from the data fed into them by mining it for information, and they progressively refine the methods they use to look for and process data until they become capable of anticipating new problems and modeling possible solutions.

Cognitive computing is a new field that is just beginning to emerge. It’s about making computers more user friendly with an interface that understands more of what the user wants. It takes signals about what the user is trying to do and provides an appropriate response. Siri, for example, can answer questions but also understands context of the question. She can ascertain whether the user is in a car or at home, moving quickly and therefore driving, or moving more slowly while walking. This information contextualizes the potential range of responses, allowing for increased personalization.

What Is Machine Learning?

Machine Learning is a subset of the larger discipline of Artificial Intelligence, which involves the design and creation of systems that are able to learn based on the data they collect. A machine learning system learns by experience. Based on specific training, the system will be able to make generalizations based on its exposure to a number of cases and will then be able to perform actions after new or unforeseen events. Amazon already uses this technology; it’s part of their recommendation engine. It’s also commonly used by ad feed systems that serve ads based on web surfing history.

While machine learning is a tremendously powerful tool for extracting information from data, it’s not a silver bullet for every problem. The questions must be framed and presented in a way that allows the learning algorithms to answer them, and setting up the data in the appropriate way can add complexity. Sometimes the data needed to answer the questions may not be available. Once the results are available, they also need to be interpreted to be useful, and it’s essential to understand the context. A sales algorithm can tell a salesman what’s working the best, but he still needs to know how to best use that information to increase his profits.

What’s the difference?

Without cognition there cannot be good Artificial Intelligence, and without Artificial Intelligence cognition can never be expressed. Cognitive computing involves self-learning systems that use pattern recognition and natural language processing to mimic the way the human brain works, with the goal of creating automated systems that are capable of solving problems without requiring human assistance. Cognitive computing is used in A.I. applications, so Cognitive Computing is effectively also a subset of Artificial Intelligence.

If this seems like a slew of terms that all mean almost the same thing, you’d be right. Cognitive Computing and Machine Learning can both be considered subsets of Artificial Intelligence. What’s the difference between artificial intelligence and cognitive computing? Let’s use a medical example. In an artificial intelligence system, machine learning would tell the doctor which course of action to take based on its analysis. In cognitive computing, the system would provide information to help the doctor decide, quite possibly with a natural language response (like IBM’s Watson).

In general, cognitive computing systems include the following characteristics:

  • Machine Learning
  • Natural Language Processing
  • Adaptive algorithms
  • Highly developed pattern recognition
  • Neural Networking
  • Semantic understanding
  • Deep learning (Advanced Machine Learning)

How is Machine Learning currently visible in our everyday lives?

Machine Learning has fundamentally changed the methods in which businesses relate to their customers. When you click “like” on a Facebook post, your feed is dynamically adjusted to contain more content like that in the future. When you buy a Sony PlayStation on Amazon and it recommends that you also buy an extra controller and a top-selling game for the console, that’s their recommendation engine at work. Both of those examples use machine learning technology, and both affect most people’s everyday lives. Machine learning technology delivers educated recommendations to people to help them make decisions in a world of almost endless choices.

Practical business applications of Cognitive Computing and Machine Learning

Now that we have a pretty good idea of what this all means, how is this technology actually being used today in the business world? Artificial Intelligence has been around for decades, but has been slow to develop due to the storage and compute requirements being too expensive to allow for practical applications. In many fields, machine learning is finally moving from science labs to commercial and business applications. With cloud computing and robust virtualized storage solutions providing the infrastructure and necessary computational power, machine learning developments are offering new capabilities that can greatly enhance enterprise business processes.

The major approaches today include using neural networks, case-based learning, genetic algorithms, rule induction, and analytic learning. The current uses of the technology combine all of these analytic methods, or a hybrid of them, to help guarantee effective, repeatable, and reliable results. Machine learning is a reality today and is being used very effectively and efficiently. Despite what many business people might assume, it’s no longer in its infancy. It’s used quite effectively across a wide array of industry applications and is going to be part of the next evolution of enterprise intelligence business offerings.

There are many other areas where machine learning can have an important role, most notably in systems with so much complexity that algorithms are difficult to design, in applications that require the software to adapt to an operational environment, or in applications that need to work with large and complex data sets. In those scenarios, machine learning methods play an increasing role in enterprise software applications, especially for the types of applications that need in-depth data analysis and adaptability, like analytics, business intelligence, and big data.

Now that I’ve discussed some general business applications for the technology, I’ll dive in to how this technology is being used today, or is in development and will be in use in the very near future.

  1. Healthcare and Medicine. Computers will never completely replace doctors and nurses, but in many ways machine learning is transforming the healthcare industry. It’s improving patient outcomes and in general changing the way doctors think about how they provide quality care. Machine learning is being implemented in health care in many ways: Improving diagnostic capabilities, medicinal research (medicines are being developed that are genetically tailored to a person’s DNA), predictive analytics tools to provide accurate insights and predictions related to symptoms, diagnoses, procedures, and medications for individual patients or patient groups, and it’s just beginning to scratch the surface of personalized care. Healthcare and personal fitness devices connected via the Internet of Things (IoT) can also be used to collect data on human and machine behavior and interaction. Improving quality of life and people’s health is one of the most exciting use cases of Machine Learning technologies.
  2. Financial services. Machine Learning is being used for predicting credit card risk, managing an individual’s finances, flagging criminal activity like money laundering and fraud, as well as automating business processes like call centers and insurance claim processing with trained AI agents. Product recommendation systems for a financial advisor or broker must leverage current interests, trends, and market movements for long periods of time, and ML is well suited to that task.
  3. Automating business analysis, reporting, and work processes. Machine learning automation systems that use detailed statistical analysis to process, analyze, categorize, and report on their data exist today. Machine learning techniques can be used for data analysis and pattern discovery and can play an important role in the development of data mining applications. Machine learning is enabling companies to increase growth and optimize processes, increase customer satisfaction, and improve employee engagement. As one specific example, adaptive analytics can be used to help stop customers from abandoning a website by analyzing and predicting the first signs they might log off and causing live chat assistance windows to appear. They are also good at upselling by showing customers the most relevant products based on their shopping behavior at that moment. A large portion of Amazon’s sales are based on their adaptive analytics; you’ll notice that you always see “Customers who purchased this item also viewed” when you view an item on their web site. Businesses are presently using machine learning to improve their operations in many other ways. Machine learning technology allows businesses to personalize customer service, for example with chatbots for customer relations. Customer loyalty and retention can be improved by mining customer actions and targeting their behavior. HR departments can improve their hiring processes by using ML to shortlist candidates. Security departments can use ML to assist with detecting fraud by building models based on historical transactions and social media. Logistics departments can improve their processes by allowing contextual analysis of their supply chain. The possibilities for applying this technology across many typical business challenges are truly exciting.
  4. Playing Games. Machine learning systems have been taught to play games, and I’m not just talking about video games: board games like Go, IBM’s Watson in games of Chess and Jeopardy, as well as modern real-time strategy video games, all with great success. When Watson defeated Brad Rutter and Ken Jennings in the Jeopardy! challenge of February 2011, it showcased Watson’s ability to learn, reason, and understand natural language with machine learning technology. In game development, machine learning has been used for gesture recognition in Kinect and camera-based interfaces, and it has also been used in some fighting-style games to analyze the style of a human player’s moves and mimic them, such as the character ‘Mokujin’ in Tekken.
  5. Predicting the outcome of legal proceedings. A system developed by a team of British and American researchers was proven to be able to correctly predict a court’s decision with a high degree of accuracy. The study can be viewed here: https://peerj.com/articles/cs-93/. While computers are not likely to replace judges and lawyers, the technology could very effectively be used to assist the decision making process.
  6. Validating and Customizing News Content. Machine learning can be used to create individually personalized news, and screening and filtering out “fake news” has been a more recent investigative priority, especially given today’s political landscape. Facebook’s director of AI research Yann LeCun was quoted saying that machine learning technology that could squash fake news “either exists or can be developed.” A challenge aptly named the “Fake News Challenge” was developed for technology professionals; you can view their site http://www.fakenewschallenge.org/ for more information. Whether or not it actually works is dubious at the moment, but its application could have far-reaching positive effects for democracy.
  7. Navigation of self-driving cars. Using sensors and onboard analytics, cars are learning to recognize obstacles and react to them appropriately using machine learning. Google’s experimental self-driving cars currently rely on a wide range of radar, lidar, and other sensors to spot pedestrians and other objects. Eliminating some or all of that equipment would make the cars cheaper and easier to design and would speed up mass adoption of the technology. Google has been developing its own video-based pedestrian detection system for years using machine learning algorithms. Back in 2015, its system was capable of accurately identifying pedestrians within 0.25 seconds, with 0.07-second identification being the benchmark needed for such a system to work in real time. This is all good news for storage manufacturers. Typical luxury cars have up to around 200 GB of storage today, primarily for maps and other entertainment functionality. Self-driving cars will likely need terabytes of storage, and not just for the car to drive itself. Storage will be needed for intelligent assistants in the car, advanced voice and gesture recognition, caching software updates, and caching files to storage to reduce peak network bandwidth utilization.
  8. Retail Sales. Applications of ML are almost limitless when it comes to retail. Product pricing optimization, sales and customer service trending and forecasting, precise ad targeting with data mining, website content customization, and prospect segmentation are all great examples of how machine learning can boost sales and save money. The digital trail left by customers’ interactions with a business, both online and offline, can provide huge amounts of data to a retailer, and all of that data is where machine learning comes in. Machine learning can look at history to determine which factors are most important, and to find the best way to predict what will occur based on a much larger set of variables. To implement real-time personalization, systems must take into account not only market trends for the past year, but what happened as recently as an hour ago. Machine learning applications can discover which items are not selling and pull them from the shelves before a salesperson notices, and even keep overstock from showing up in the store at all with improved procurement processes. A good example of the machine learning personalized approach to customers can be found in the Jackets and Vests section of the North Face website. Click on “Shop with IBM Watson” and experience what is almost akin to a human sales associate helping you choose which jacket you need.
  9. Recoloring black and white images. Ted Turner’s dream come true. Using computers to recognize objects and learn what they should look like to humans, color can be returned to both black and white pictures and video footage. Google’s DeepDream (https://research.googleblog.com/2015/07/deepdream-code-example-for-visualizing.html) is probably the most well-known example of a neural network trained by examining millions of images of just about everything. Colorization systems built on the same idea analyze images in black and white and then color them the way they think they should be colored. The “colorize” project is taking up that challenge; you can view their progress at http://tinyclouds.org/colorize/ and download the code. A good online example is at Algorithmia, which allows you to upload and convert an image online: http://demos.algorithmia.com/colorize-photos/
  10. Enterprise Security. Security and loss of data are major concerns for the modern enterprise. Some storage vendors are beginning to use artificial intelligence and machine learning to prevent data loss, increase availability and reduce downtime via smart data recovery and systematic backup strategies. Machine learning allows for smart security features to detect data and packet loss during transit and within data centers. Years ago it was common practice to spend a great deal of time reviewing security logs on a daily basis. You were expected to go through everything and manually determine the severity of any alerts or warnings as you combed through mountains of information. As time progresses it becomes more and more unrealistic for this process to remain manual. Machine learning technology is currently implemented and is very effective at filtering out what deviates from normal behavior, be it with live network traffic or mountains of system log files. While humans are also very good at finding patterns and noticing odd things, computers are really good at doing that repetitive work at a much larger scale, complementing what an analyst can do. Interested in looking at some real-world examples of machine learning as it relates to security? There are many out there. Clearcut is one example of a tool that uses machine learning to help you focus on log entries that really need manual review. David Bianco created a relatively simple Python script that can learn to find malicious activity in HTTP proxy logs. You can download David’s script here: https://github.com/DavidJBianco/Clearcut. I also recommend taking a look at the Click Security project, which includes many code samples (http://clicksecurity.github.io/data_hacking/), as well as PatternEx, a SecOps tool that predicts cyber attacks (https://www.patternex.com/).
  11. Musical Instruments. Machine learning can also be used in more unexpected ways, even in creative outlets like making music. In the world of electronic music, new synthesizers and hardware are created and developed often, and the rise of machine learning is altering the landscape. Machine learning will allow instruments the potential to be more expressive, complex and intuitive in ways previously experienced only through traditional acoustic instruments. A good example of a new instrument using machine learning is the Mogees instrument, a device with a contact microphone that picks up sound from everyday objects and attaches to your iPhone. Machine learning could make it possible to use a drum machine that adapts to your playing style, learning as much about the player as the player learns about the instrument. Simply awe inspiring.

What does this mean for the storage industry?

As you might expect, this is all very good news for the storage industry and very well may lead to more and more disruptive changes. Machine learning has an almost insatiable appetite for data storage. It will consume huge quantities of capacity while at the same time require very high levels of throughput. As adoption of Cognitive Computing, Artificial Intelligence, and machine learning grows, it will attract a growing number of startups eager to solve the many issues that are bound to arise.

The rise of machine learning is set to alter the storage industry in very much the same way that PCs helped reshape the business world in the 1980s. Just as PCs have advanced from personal productivity applications like Lotus 1-2-3 to large-scale Oracle databases, machine learning is poised to evolve from consumer functions like Apple’s Siri to full-scale data-driven programs that will drive global enterprises. So, in what specific ways is this technology set to alter and disrupt the storage industry? I’ll review my thoughts on that below.

  1. Improvements in Software-Defined Storage. I recently dove into software-defined storage in a blog post (https://thesanguy.com/2017/06/15/defining-software-defined-storage-benefits-strategy-use-cases-and-products/). As I described in that post, there are many use cases and a wide variety of software-defined storage products in the market right now. Artificial intelligence and machine learning will spark faster adoption of software-defined storage, especially as products are developed that use the technology to allow storage to be self-configurable. Once storage is all software-defined, algorithms can be integrated that are far-reaching enough to process and solve complicated storage management problems, thanks to the huge amount of data they can now access. This is a necessary step toward building the monitoring, tuning and healing abilities needed for self-driving software-defined storage.
  2. Overall Costs will be reduced. Enterprises are moving towards cloud storage and fewer dedicated storage arrays. Dynamic software-defined storage that integrates machine learning could help organizations more efficiently utilize the capacity that they already own.
  3. Hybrid Storage Clouds. Public vs. private clouds has been a hot topic in the storage industry, and with the rise of machine learning and software-defined storage it’s becoming more and more of a moot point. Well-designed software-defined architectures should be able to transition data seamlessly from one type of cloud to another, and machine learning will be used to implement that concept without human intervention. Data will be analyzed and logic engines will automate data movement. The hybrid cloud is very likely to flourish as machine learning technologies are adopted into this space.
  4. Flash Everywhere. Yes, the concept of “flash first” has been promoted for years now, and machine learning simply furthers that simple truth. The vast amount of data that machine learning needs to process will further increase the demand for throughput and bandwidth, and flash storage vendors will be lining up to fill that need.
  5. Parallel File Systems. Storage systems will have to deliver performance and throughput at scale in order to support machine learning technologies. Parallel file systems can effectively reduce the problems of massive data storage and I/O bottlenecks. With their focus on high-performance access to large data sets, parallel file systems combined with flash could be considered an entry point to full-scale machine learning systems.
  6. Automation. Software-defined storage has had a large influence on the rise of machine learning in storage environments. Adding a heterogeneous software layer abstracted from the hardware allows the software to efficiently monitor many more tasks. The additional automation allows administrators like myself much more time for more strategic work.
  7. Neural Storage. Neural storage (“deep learning”) is designed to recognize and respond to problems and opportunities without any human intervention. It will drive the need for massive amounts of storage as it is utilized in modern businesses. It uses artificial neural networks, which are simplified computer simulations of how biological neurons behave, to extract rules and patterns from sets of data. Unsurprisingly (based on its name) the concept is inspired by the way biological nervous systems process information. In general, think of neural storage as many layers of processing on mountain-sized mounds of data. Data is fed through neural networks: logical constructions that ask a series of binary true/false questions of, or extract a numerical value from, every bit of data that passes through them, and classify it according to the answers that were tallied up. Deep learning work is focused on developing these networks, which is why they became what are known as Deep Neural Networks (logic networks of the complexity needed to classify enormous datasets, think Google-scale data). Using Google Images as an example, with datasets as massive and comprehensive as these and logical networks sophisticated enough to handle their classification, it becomes relatively trivial to take an image and state with a high probability of accuracy what it represents to humans.

How does Machine Learning work?

At its core, Machine learning works by recognizing patterns (such as facial expressions or spoken words), extracting insight from those patterns, discovering anomalies in those patterns, and then making evaluations and predictions based on those discoveries.

The principle can be summed up with the following formula:

Machine Learning = Model Representation + Parameter Evaluation + Learning & Optimization

Model Representation: The system that makes predictions or identifications. It includes the use of an object represented in a formal language that a computer can handle and interpret.

Parameter Evaluation: A function needed to distinguish or evaluate the good and bad objects; the factors used by the model to form its decisions.

Learning & Optimization: The method used to search among these classifiers within the language to find the highest scoring ones. This is the learning system that adjusts the parameters and compares predictions against actual outcomes.
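
To make those three components concrete, here is a deliberately tiny worked example of my own (an illustration, not a formal definition). Suppose we want to predict a value y from an input x using a handful of known (x, y) examples:

Model Representation: h(x) = w * x + b, a simple linear model whose parameters are w and b.

Parameter Evaluation: E(w, b) = the sum over all known examples of (h(x) - y)^2, the squared error. Lower is better.

Learning & Optimization: w := w - a * dE/dw and b := b - a * dE/db, a gradient descent step with learning rate a.

The learning loop repeatedly measures the error on the known examples and nudges w and b in the direction that reduces it, which is exactly the “predictions vs. actual outcome” adjustment described above.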

How do we apply machine learning to a problem? First and foremost, a pattern must exist in the input data that would allow a conclusion to be drawn; the algorithm needs a pattern to deduce information from. Next, there must be a sufficient amount of data. If there isn’t enough data to analyze, the validity of the end result will be compromised. Finally, machine learning is used to derive meaning from the data and perform structured learning to arrive at a mathematical approximation that describes the behavior of the problem. If these conditions aren’t met, applying machine learning will be a waste of time; all of them must hold for machine learning to be successful.

Summary

Machines may not have reached the point where they can make full decisions without humans, but they have certainly progressed to the point where they can make educated, accurate recommendations to us so that we have an easier time making decisions. Current machine learning systems have delivered tremendous benefits by automating tabulation and harnessing computational processing and programming to improve both enterprise productivity and personal productivity.

Cognitive systems will learn and interact to provide expert assistance to scientists, engineers, lawyers, and other professionals in a fraction of the time it now takes. While they will likely never replace human thinking, cognitive systems will extend our cognition and free us to think more creatively and effectively, and be better problem solvers.

Scripting a VNX/Celerra to Isilon Data Migration with EMCOPY and Perl

datamigration

Below is a collection of perl scripts that make data migration from VNX/Celerra file systems to an Isilon system much easier.  I’ve already outlined the process of using isi_vol_copy_vnx in a prior post; however, using EMCOPY may be more appropriate in a specific use case, or simply more familiar to administrators for support and management of the tool.  Note that while I have tested these scripts in my environment, they may need some modification for your use.  I recommend running them in a test environment prior to using them in production.

EMCOPY can be downloaded directly from DellEMC with the link below.  You will need to be a registered user in order to download it.

https://download.emc.com/downloads/DL14101_EMCOPY_File_migration_tool_4.17.exe

What is EMCOPY?

For those that haven’t used it before, EMCOPY is an application that allows you to copy files, directories, and subdirectories between NTFS partitions while maintaining security information, an improvement over the similar robocopy tool that many veteran system administrators are familiar with. It allows you to back up the file and directory security ACLs, owner information, and audit information from a source directory to a destination directory.

Notes about using EMCOPY:

1) In my testing, EMCopy has shown up to a 25% performance improvement when copying CIFS data compared to Robocopy while using the same number of threads. I recommend using EMCopy over Robocopy as it has other feature improvements as well, for instance sidmapfile, which allows migrating local user data to Active Directory users. It’s available in version 4.17 or later.  Robocopy is also not an EMC supported tool, while EMCOPY is.

2) Unlike isi_vol_copy_vnx, EMCOPY is a windows application and must be run from a windows host.  I highly recommend a dedicated server for any migration tasks.  The isi_vol_copy_vnx utility runs directly on the Isilon OneFS CLI which eliminates any intermediary copy hosts, theoretically providing a much faster solution.

3) There are multiple methods to compare data sizes between the source and destination. I would recommend maintaining a log of each EMCopy session as that log indicates how much data was copied and if there were any errors.

4) If you are migrating over a WAN connection, I recommend first restoring from tape and then using an incremental data sync with EMCOPY.

Getting Started

I’ve divided this post up into a four-step process.  Each step includes the relevant script and a description of the process.

  • Export File System information (export_fs.pl  Script)

Export file system information from the Celerra & generate the Isilon commands to re-create them.

  • Export SMB information (export_smb.pl Script)

Export SMB share information from the Celerra & generate the Isilon commands to re-create them.

  • Export NFS information (export_nfs.pl Script)

Export NFS information from the Celerra & generate the Isilon commands to re-create them.

  • Create the EMCOPY migration script (EMCOPY_create.pl Script)

Perform the data migration with EMCOPY using the output from this script.

Exporting information from the Celerra to run on the Isilon

These Perl scripts are designed to be run directly on the Control Station and will subsequently create shell scripts that will run on the Isilon to assist with the migration.  You will need to manually copy the output files from the VNX/Celerra to the Isilon. The first three steps I’ve outlined do not move the data or permissions, they simply run a nas_fs query on the Celerra to generate the Isilon script files that actually make the directories, create quotas, and create the NFS and SMB shares. They are “scripts that generate scripts”. 🙂

Before you run the scripts, make sure you edit them to correctly specify the appropriate Data Mover.  Once complete, you’ll end up with three .sh files created for you to move to your Isilon cluster.  They should be run in the same order as they were created.

Note that EMC occasionally changes the syntax of certain commands when they update OneFS.  Below is a sample of the Isilon-specific commands that are generated by the first three scripts.  I’d recommend verifying that the syntax is still correct for your version of OneFS, and then modifying the scripts if necessary with the new syntax.  I just ran a quick test with OneFS 8.0.0.2, and the base commands and switches appear to be compatible.

isi quota create --directory --path="/ifs/data1" --enforcement --hard-threshold="1032575M" --container=1
isi smb share create --name="Data01" --path="/ifs/Data01/data"
isi nfs exports create --path="/Data01/data" --roclient="Data" --rwclient="Data" --rootclient="Data"


Step 1 – Export File system information

This script will generate a list of the file system names from the Celerra and place the appropriate Isilon commands that create the directories and quotas into a file named “create_filesystems_xx.sh”.

#!/usr/bin/perl

# Export_fs.pl – Export File system information
# Export file system information from the Celerra & generate the Isilon commands to re-create them.

use strict;
my $nas_fs="nas_fs -query:inuse=y:type=uxfs:isroot=false -fields:ServersNumeric,Id,Name,SizeValues -format:'%s,%s,%s,%sQQQQQQ'";
my @data;

open (OUTPUT, ">> create_filesystems_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nas_fs |") || die "cannot open $nas_fs: $!\n\n";

while (<CMD>)
{
   chomp;
   @data = split("QQQQQQ", $_);
}

close(CMD);
foreach (@data)

{
   my ($dm, $id, $dir,$size,$free,$used_per, $inodes) = split(",", $_);
   print OUTPUT "mkdir /ifs/$dir\n";
   print OUTPUT "chmod 755 /ifs/$dir\n";
   print OUTPUT "isi quota create --directory --path=\"/ifs/$dir\" --enforcement --hard-threshold=\"${size}M\" --container=1\n";
}

The Output of the script looks like this (this is an excerpt from the create_filesystems_xx.sh file):

mkdir /ifs/data1
chmod 755 /ifs/data1
isi quota create --directory --path="/ifs/data1" --enforcement --hard-threshold="1032575M" --container=1
mkdir /ifs/data2
chmod 755 /ifs/data2
isi quota create --directory --path="/ifs/data2" --enforcement --hard-threshold="20104M" --container=1
mkdir /ifs/data3
chmod 755 /ifs/data3
isi quota create --directory --path="/ifs/data3" --enforcement --hard-threshold="100774M" --container=1

The output script can now be copied to and run from the Isilon.
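As a minimal sketch of that hand-off (the cluster name and paths here are made up), copying the generated script over and running it might look like this:

scp create_filesystems_12345.sh root@isilon-cluster:/root/
ssh root@isilon-cluster 'sh /root/create_filesystems_12345.sh'

The 12345 is just an example; the scripts embed their process ID ($$) in the file names they generate.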

Step 2 – Export SMB Information

This script will generate a list of the smb share names from the Celerra and place the appropriate Isilon commands into a file named “create_smb_exports_xx.sh”.

#!/usr/bin/perl

# Export_smb.pl – Export SMB/CIFS information
# Export SMB share information from the Celerra & generate the Isilon commands to re-create them.

use strict;

my $datamover = "server_8";
my $prot = "cifs";
my $nfs_cli = "server_export $datamover -list -P $prot -v |grep share";

open (OUTPUT, ">> create_smb_exports_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nfs_cli |") || die "cant open $nfs_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $path = $vars[2];
   my $name = $vars[1];

   $path =~ s/^"/\"\/ifs/;
   print  OUTPUT "isi smb share create --name=$name --path=$path\n";
}

close(CMD);

The Output of the script looks like this (this is an excerpt from the create_smb_exports_xx.sh file):

isi smb share create --name="Data01" --path="/ifs/Data01/data"
isi smb share create --name="Data02" --path="/ifs/Data02/data"
isi smb share create --name="Data03" --path="/ifs/Data03/data"
isi smb share create --name="Data04" --path="/ifs/Data04/data"
isi smb share create --name="Data05" --path="/ifs/Data05/data"

The output script can now be copied to and run from the Isilon.

Step 3 – Export NFS Information

This script will generate a list of the NFS export names from the Celerra and place the appropriate Isilon commands into a file named “create_nfs_exports_xx.sh”.

#!/usr/bin/perl

# Export_nfs.pl – Export NFS information
# Export NFS information from the Celerra & generate the Isilon commands to re-create them.

use strict;

my $datamover = "server_8";
my $prot = "nfs";
my $nfs_cli = "server_export $datamover -list -P $prot -v |grep export";

open (OUTPUT, ">> create_nfs_exports_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nfs_cli |") || die "cant open $nfs_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $test = @vars;
   my $i=2;
   my ($ro, $rw, $root, $access, $name);
   my $path=$vars[1];

   for ($i; $i < $test; $i++)
   {
      my ($type, $value) = split("=", $vars[$i]);

      if ($type eq "ro") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $ro .= " --roclient=\"$_\""; }
      }
      if ($type eq "rw") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $rw .= " --rwclient=\"$_\""; }
      }

      if ($type eq "root") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $root .= " --rootclient=\"$_\""; }
      }

      if ($type eq "access") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $ro .= " --roclient=\"$_\""; }
      }

      if ($type eq "name") { $name=$value; }
   }
   print OUTPUT "isi nfs exports create --path=$path $ro $rw $root\n";
}

close(CMD);

The Output of the script looks like this (this is an excerpt from the create_nfs_exports_xx.sh file):

isi nfs exports create --path="/Data01/data" --roclient="Data" --roclient="BACKUP" --rwclient="Data" --rwclient="BACKUP" --rootclient="Data" --rootclient="BACKUP"
isi nfs exports create --path="/Data02/data" --roclient="Data" --roclient="BACKUP" --rwclient="Data" --rwclient="BACKUP" --rootclient="Data" --rootclient="BACKUP"
isi nfs exports create --path="/Data03/data" --roclient="Backup" --roclient="Data" --rwclient="Backup" --rwclient="Data" --rootclient="Backup" --rootclient="Data"
isi nfs exports create --path="/Data04/data" --roclient="Backup" --roclient="ProdGroup" --rwclient="Backup" --rwclient="ProdGroup" --rootclient="Backup" --rootclient="ProdGroup"
isi nfs exports create --path="/" --roclient="127.0.0.1" --roclient="127.0.0.1" --roclient="127.0.0.1" --rootclient="127.0.0.1"

The output script can now be copied to and run from the Isilon.

Step 4 – Generate the EMCOPY commands

Now that the scripts have been generated and run on the Isilon, the next step is the actual data migration using EMCOPY.  This script will generate the commands for a migration script, which should be run from a windows server that has access to both the source and destination locations. It should be run after the previous three scripts have successfully completed.

This script will output the commands directly to the screen; they can then be cut and pasted from the screen directly into a Windows batch script on your migration server.
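
If you’d rather not cut and paste, you can also redirect the output straight into a file and copy that file to the migration server; a quick example (the file name is arbitrary):

perl EMCOPY_create.pl > emcopy_migration.bat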

#!/usr/bin/perl

# EMCOPY_create.pl – Create the EMCOPY migration script
# Perform the data migration with EMCOPY using the output from this script.

use strict;

my $datamover = "server_4";
my $source = "\\\\celerra_path\\";
my $dest = "\\\\isilon_path\\";
my $prot = "cifs";
my $nfs_cli = "server_export $datamover -list -P $prot -v |grep share";

open (CMD, "$nfs_cli |") || die "cannot open $nfs_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $path = $vars[2];
   my $name = $vars[1];

   $name =~ s/\"//g;
   $path =~ s/^/\/ifs/;

   my $log = "c:\\" . $name;
   $log =~ s/ //g;   # strip any spaces from the log file name
   my $src = $source . $name;
   my $dst = $dest . $name;

   print "emcopy \"$src\" \"$dst\" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:$log\n";
}

close(CMD);

The Output of the script looks like this (this is an excerpt from the screen output):

emcopy "\\celerra_path\Data01" "\\isilon_path\billing_tmip_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_tmip_01
emcopy "\\celerra_path\Data02" "\\isilon_path\billing_trxs_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_trxs_01
emcopy "\\celerra_path\Data03" "\\isilon_path\billing_vru_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_vru_01
emcopy "\\celerra_path\Data04" "\\isilon_path\billing_rpps_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_rpps_01

That’s it.  Good luck with your data migration, and I hope this has been of some assistance.  Special thanks to Mark May and his virtualstoragezone blog, where the original versions of these scripts were published.

Open Source Storage Solutions

Storage solutions can generally be grouped into four categories: SoHo NAS systems, cloud-based/object solutions, enterprise NAS and SAN solutions, and Microsoft Storage Server solutions. Enterprise NAS and SAN solutions are generally closed systems offered by traditional vendors like EMC and NetApp with a very large price tag, so many businesses are looking at Open Source solutions to meet their needs. This is a collection of links and brief descriptions of Open Source storage solutions currently available. Open Source of course means it’s free to use and modify, however some projects do have commercially supported versions as well for enterprise customers who require them.

Why would an enterprise business consider an Open Source storage solution? The most obvious reason is that it’s free, and any developer can customize it to suit the needs of the business. With the right people on board, innovation can be rapid. Unfortunately, as is the case with most open source software, it can be needlessly complex and difficult to use, require expert or highly trained staff, have compatibility issues, and most projects don’t offer the support and maintenance that enterprise customers require. There’s no such thing as a free lunch, as they say, and using Open Source generally requires compromising on support and maintenance. I’d see some of these solutions as perfect for an enterprise development or test environment, and as an easy way for a larger company to allow their staff to get their feet wet in a new technology to see how it may be applied as a potential future solution. As I mentioned, tested and supported versions of some open source storage software are available, which can ease the concerns regarding deployment, maintenance and support.

I have the solutions loosely organized into Open Source NAS and SAN Software, File Systems, RAID, Backup and Synchronization, Cloud Storage, Data Destruction, Distributed Storage/Big Data Tools, Document Management, and Encryption tools.

Open Source NAS and SAN Software Solutions

Backblaze

Backblaze is an object data storage provider. Backblaze stores data on its customized, open source hardware platform called Storage Pods, and its cloud-based Backblaze Vault file system. It is compatible with Windows and Apple OSes. While they are primarily an online backup service, they opened up their Storage Pod design starting in 2009, which uses commodity hardware that anyone can build. Storage Pods are self-contained 4U data storage servers. It’s interesting stuff and worth a look.

Enterprise Storage OS (ESOS)

Enterprise Storage OS is a Linux distribution based on the SCST project with the purpose of providing SCSI targets via a compatible SAN (Fibre Channel, InfiniBand, iSCSI, FCoE). ESOS can turn a server with the appropriate hardware into a disk array that sits on your enterprise Storage Area Network (SAN) and provides sharable block-level storage volumes.

OpenIO 

OpenIO is an open source object storage startup founded in 2015 by CEO Laurent Denel and six co-founders. The product is an object storage system for applications that scales from terabytes to exabytes. OpenIO specializes in software-defined storage and scalability challenges, with experience in designing and running cloud platforms. It offers a general-purpose object storage and data processing solution adopted by large companies for massive production workloads.

Open vStorage

Open vStorage is an open-source, scale-out, reliable, high-performance, software-based storage platform that offers a block and file interface on top of a pool of drives. It is a virtual appliance (called the “Virtual Storage Router”) that is installed on a host or cluster of hosts on which virtual machines are running. It adds value and flexibility in a hyperconverged / OpenStack provider deployment where you don’t necessarily want to be tied to a solution like VMware VSAN. Being hypervisor agnostic is a key advantage of Open vStorage.

OpenATTIC

OpenATTIC is an Open Source Ceph and storage management solution for Linux, with a strong focus on storage management in a datacenter environment. It allows for easy management of storage resources, features a modern web interface, and supports NFS, CIFS, iSCSI and FC. It supports a wide range of file systems including Btrfs and ZFS, as well as automatic data replication using DRBD (the distributed replicated block device) and automatic monitoring of shares and volumes using a built-in Nagios/Icinga instance. openATTIC 2 will support managing the Ceph distributed object store and file system.

OpenStack

OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.

The OpenStack Object Storage (swift) service provides software that stores and retrieves data over HTTP. Objects (blobs of data) are stored in an organizational hierarchy that offers anonymous read-only access, ACL defined access, or even temporary access. Object Storage supports multiple token-based authentication mechanisms implemented via middleware.
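
As a quick hedged illustration of the object API (it assumes the python-swiftclient command line tool is installed and your authentication environment variables are already set; the container and file names are made up), storing and retrieving an object looks like this:

swift upload backups /tmp/fs_export.tar
swift list backups
swift download backups tmp/fs_export.tar

Note that swift strips the leading slash from the path when naming the stored object, which is why the download reference differs slightly.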

CryptoNAS

CryptoNAS (formerly CryptoBox) is one NAS project that makes encrypting your storage quick and easy. It is a multilingual Debian-based Linux live CD with a web-based front end that can be installed onto a hard disk or USB stick. CryptoNAS offers various choices of encryption algorithms (the default is AES) and encrypts disk partitions using LUKS (Linux Unified Key Setup), which means that any Linux operating system can also access them without using CryptoNAS software.

Ceph

Ceph is a distributed object store and file system designed to provide high performance, reliability and scalability. It’s built on the Reliable Autonomic Distributed Object Store (RADOS) and allows enterprises to build their own economical storage devices using commodity hardware. It has been maintained by Red Hat since their acquisition of Inktank in April 2014. It’s capable of block, object, and file storage.  It is scale-out, meaning multiple Ceph storage nodes will present a single storage system that easily handles many petabytes, and performance and capacity increase simultaneously. Ceph has many basic enterprise storage features including replication (or erasure coding), snapshots, thin provisioning, auto-tiering and self-healing capabilities.
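
To give a feel for the block storage side, here is a minimal sketch (the pool and image names are made up, and it assumes a working cluster with an admin keyring in place):

ceph osd pool create rbd_pool 128
rbd create rbd_pool/vol01 --size 10240
rbd map rbd_pool/vol01

This creates a pool with 128 placement groups, carves out a 10 GB block image, and maps it to a local block device on the client.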

FreeNAS

The FreeNAS website touts itself as “the most potent and rock-solid open source NAS software,” and it counts the United Nations, The Salvation Army, The University of Florida, the Department of Homeland Security, Dr. Phil, Reuters, Michigan State University and Disney among its users. You can use it to turn standard hardware into a BSD-based NAS device, or you can purchase supported, pre-configured TrueNAS appliances based on the same software.

RockStor 

RockStor is a free and open source NAS (Network Attached Storage) solution. Its Personal Cloud Server is a powerful local alternative to public cloud storage that mitigates the cost and risks of public cloud storage. This NAS and cloud storage platform is suitable for small to medium businesses and home users who don’t have much IT experience, but who may need to scale to terabytes of data storage.  If you are more interested in Linux and Btrfs, it’s a great alternative to FreeNAS. The RockStor NAS and cloud storage platform can be managed within a LAN or over the web using a simple and intuitive UI, and with the inclusion of add-ons (fittingly named ‘Rockons’), you can extend the feature set of your RockStor to include new apps, servers, and services.

Gluster

Red Hat-owned Gluster is a distributed scale-out network-attached storage file system that can handle really big data: up to 72 brontobytes.  It has found applications including cloud computing, streaming media services and content delivery networks. It promises high availability and performance, an elastic hash algorithm, an elastic volume manager and more. GlusterFS aggregates various storage servers over Ethernet or InfiniBand RDMA interconnect into one large parallel network file system.

72 Brontobytes? I admit that I hadn’t seen that term used yet in any major storage vendor’s marketing materials. How big is that? Really, really big.

1 Bit = Binary Digit
8 Bits = 1 Byte
1,000 Bytes = 1 Kilobyte
1,000 Kilobytes = 1 Megabyte
1,000 Megabytes = 1 Gigabyte
1,000 Gigabytes = 1 Terabyte
1,000 Terabytes = 1 Petabyte
1,000 Petabytes = 1 Exabyte
1,000 Exabytes = 1 Zettabyte
1,000 Zettabytes = 1 Yottabyte
1,000 Yottabytes = 1 Brontobyte
1,000 Brontobytes = 1 Geopbyte

NAS4Free

Like FreeNAS, NAS4Free allows you to create your own BSD-based storage solution from commodity hardware. It promises a low-cost, powerful network storage appliance that users can customize to their own needs.

If FreeNAS and NAS4Free sound suspiciously similar, it’s because they share a common history. Both started from the same original FreeNAS code, which was created in 2005. In 2009, the FreeNAS team pursued a more extensible plugin architecture using OpenZFS, and a project lead who disagreed with that direction departed to continue his work using Linux, thus creating NAS4Free. NAS4Free dispenses with the fancy stuff and sticks with a more focused approach of “do one thing and do it well”. You don’t get bittorrent clients or cloud servers and you can’t make a virtual server with it, but many feel that NAS4Free has a much cleaner, more usable interface.

OpenFiler

Openfiler is a storage management operating system based on rPath Linux. It is a full-fledged NAS/SAN that can be implemented as a virtual appliance for VMware and Xen hypervisors. It offers storage administrators a set of powerful tools for managing complex storage environments. It supports software and hardware RAID, monitoring and alerting facilities, volume snapshot and recovery features. Configuring Openfiler can be complicated, but there are many online resources available that cover the most typical installations. I’ve seen mixed reviews of the product online; it’s worth a bit of research before you consider an implementation.

OpenSMT

OpenSMT is an open source storage management toolkit based on OpenSolaris. Like Openfiler, OpenSMT allows users to use commodity hardware for a dedicated storage device with NAS and SAN features. It uses the ZFS filesystem and includes a well-designed web GUI.

Open Media Vault

This NAS solution is based on Debian Linux and offers plug-ins to extend its capabilities. It boasts easy-to-use storage management with a web-based interface, fast setup, multi-language support, volume management, monitoring, UPS support, and statistics reporting. Plugins allow it to be extended with LDAP support, BitTorrent, and iSCSI. It is primarily designed for small offices or home offices, but is not limited to those scenarios.

Turnkey Linux

The Turnkey Linux Virtual Appliance Library is a free open source project which has developed a range of Debian based pre-packaged server software appliances (a.k.a. virtual appliances). Turnkey appliances can be deployed as a virtual machine (a range of hypervisors are supported), in cloud computing infrastructures (including AWS and others) or installed in physical computers.

Turnkey offers more than 100 different software appliances based on open source software. Among them is a file server that offers simple network attached storage, hence its inclusion in this list.

Turnkey file server is an easy to use file server that combines Windows-compatible network file sharing with a web based file manager. TurnKey File Server includes support for SMB, SFTP, NFS, WebDAV and rsync file transfer protocols. The server is configured to allow server users to manage files in private or public storage. It is based on Samba and SambaDAV.

oVirt

oVirt is a free, open-source virtualization management platform. It was founded by Red Hat as a community project on which Red Hat Enterprise Virtualization is based. It allows centralized management of virtual machines, compute, storage and networking resources, from an easy-to-use web-based front end with platform-independent access. With oVirt, IT can manage virtual machines, virtualized networks and virtualized storage via an intuitive web interface. It’s based on the KVM hypervisor.

Kinetic Open Storage

Backed by companies like EMC, Seagate, Toshiba, Cisco, NetApp, Red Hat, Western Digital, Dell and others, Kinetic is a Linux Foundation project dedicated to establishing standards for a new kind of object storage architecture. It’s designed to meet the need for scale-out storage for unstructured data. Kinetic is fundamentally a way for storage applications to communicate directly with storage devices over Ethernet. With Kinetic, storage use cases that are targeted consist largely of unstructured data like NoSQL, Hadoop and other distributed file systems, and object stores in the cloud like Amazon S3, OpenStack Swift and Basho’s Riak.

Storj DriveShare and MetaDisk

Storj (pronounced “Storage”) is a new type of cloud storage built on blockchain and peer-to-peer technology. Storj offers decentralized, end-to-end encrypted cloud storage. The DriveShare app allows users to rent out their unused hard drive space for use by the service, and the MetaDisk Web app allows users to save their files to the service securely.

The core protocol allows for peer to peer negotiation and verification of storage contracts. Providers of storage are called “farmers” and those using the storage, “renters”. Renters periodically audit whether the farmers are still keeping their files safe and, in a clever twist of similar architectures, immediately pay out a small amount of cryptocurrency for each successful audit. Conversely, farmers can decide to stop storing a file if its owner does not audit and pay their services on time. Files are cut up into pieces called “shards” and stored 3 times redundantly by default. The network will automatically determine a new farmer and move data if copies become unavailable. In the core protocol, contracts are negotiated through a completely decentralized key-value store (Kademlia). The system puts measures in place that prevent farmers and renters from cheating on each other, e.g. through manipulation of the auditing process. Other measures are taken to prevent attacks on the protocol itself.

Storj, like other similar services, offers several advantages over more traditional cloud storage solutions: since data is encrypted and cut into “shards” at source, there is almost no conceivable way for unauthorized third parties to access that data. Data storage is naturally distributed and this, in turn, increases availability and download speed thanks to the use of multiple parallel connections.

Open Source File Systems

Btrfs

Btrfs is a newer Linux filesystem being developed by Facebook, Fujitsu, Intel, the Linux Foundation, Novell, Oracle, Red Hat and some other organizations. It emphasizes fault tolerance and easy administration, and it supports files as large as 16 EiB.

It has been part of the mainline Linux kernel and treated as stable since the 3.10 kernel series. Because of the fast development speed, Btrfs noticeably improves with every new kernel version, so it’s always recommended to use the most recent stable kernel version you can. Rockstor always runs a very recent kernel for that reason.

One of the big draws of Btrfs is the Copy on Write (CoW) nature of the filesystem. When multiple users attempt to read or write a file, it does not make a separate copy until changes are made to the original file. This has the benefit of preserving changes, which allows file restorations with snapshots. Btrfs also has its own native RAID support built in, appropriately named Btrfs-RAID. A nice benefit of the Btrfs RAID implementation is that a RAID6 volume does not need additional re-syncing upon creation of the RAID set, greatly reducing the time requirement.
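
As a brief sketch of what that looks like in practice (the device names and paths are made up), creating a mirrored Btrfs filesystem and taking a snapshot takes only a few commands:

mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/data
btrfs subvolume snapshot /mnt/data /mnt/data/snap-2017-07-01

Because of CoW, the snapshot is nearly instantaneous and initially consumes almost no additional space.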

Ext4

This is the latest version of one of the most popular filesystems for Linux. One of its key benefits is the ability to handle very large amounts of data— 16 TB maximum per file and 1 EB (exabyte, or 1 million terabytes) maximum per filesystem. It is the evolution of the most used Linux filesystem, Ext3. In many ways, Ext4 is a deeper improvement over Ext3 than Ext3 was over Ext2. Ext3 was mostly about adding journaling to Ext2, but Ext4 modifies important data structures of the filesystem such as the ones destined to store the file data.

GlusterFS

Owned by RedHat, GlusterFS is a scale-out distributed file system designed to handle petabytes worth of data. Features include high availability, fast performance, global namespace, elastic hash algorithm and an elastic volume manager.

GlusterFS combines the unused storage space on multiple servers to create a single, large, virtual drive that you can mount like a legacy filesystem using NFS or FUSE on a client PC. It also provides the ability to add more servers or remove existing servers from the storage pool on the fly. GlusterFS functions like a “network RAID” device; many RAID concepts are apparent during setup. It really shines when you need to store huge quantities of data, have redundant file storage, or write data very quickly for later access. Geo-replication lets you mirror data on a volume across the wire; the target can be a single directory or another GlusterFS volume.  It can also handle multiple petabytes easily along with being very easy to install and manage.
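
As a hedged sketch (the server names and brick paths are made up), creating and mounting a two-way replicated volume looks like this:

gluster volume create gv0 replica 2 server1:/data/brick1 server2:/data/brick1
gluster volume start gv0
mount -t glusterfs server1:/gv0 /mnt/gv0

Every file written to /mnt/gv0 is then stored on both bricks, which is where the “network RAID” comparison comes from.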

Lustre

Designed for “the world’s largest and most complex computing environments,” Lustre is a high-performance scale-out file system. It boasts that it can handle tens of thousands of nodes and petabytes of data with very fast throughput.

Lustre file systems are highly scalable and can be part of multiple computer clusters with tens of thousands of client nodes, multiple petabytes of storage on hundreds of servers, and more than 1TB/s of aggregate I/O throughput. This makes Lustre file systems a popular choice for businesses with large data centers.

OpenZFS

OpenZFS is an outstanding storage platform that encompasses the functionality of traditional filesystems, volume managers, and more, with consistent reliability, functionality and performance. This popular file system is incorporated into many other open source storage projects. It offers excellent scalability and data integrity, and it’s available for most Linux distributions.
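
A minimal sketch of the basics (the pool, dataset and device names are made up): because OpenZFS combines the volume manager and the filesystem, a mirrored pool, a dataset and a snapshot are all created with a handful of commands:

zpool create tank mirror /dev/sdb /dev/sdc
zfs create tank/projects
zfs snapshot tank/projects@before-migration
zfs rollback tank/projects@before-migration

Snapshots cost essentially nothing until data diverges, and the rollback returns the dataset to exactly that point in time.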

IPFS

IPFS is short for “Interplanetary File System,” and is an unusual project that uses peer-to-peer technology to connect all computers with a single file system. It aims to supplement, or possibly even replace, the Hypertext Transfer Protocol that runs the web now. According to the project owner, “In some ways, IPFS is similar to the Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository.”

IPFS isn’t exactly a well-known technology yet, even among many in the Valley, but it’s quickly spreading by word of mouth among folks in the open-source community. Many are excited by its potential to greatly improve file transfer and streaming speeds across the Internet.

Open Source RAID Solutions

DRBD

DRBD is a distributed replicated storage system for the Linux platform. It is implemented as a kernel driver, several userspace management applications and some shell scripts. It is typically used in high availability (HA) computer clusters, but beginning with v9 it can also be used to create larger software defined storage pools with more of a focus on cloud integration. Support and training are available through the project owner, LinBit.

DRBD’s replication technology is very fast and efficient. If you can live with an active-passive setup, DRBD is an efficient storage replication solution. DRBD helps keep data synchronized between multiple nodes, including nodes in different datacenters, and if you need to fail over between two nodes DRBD is very fast and efficient.
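
For context, a DRBD resource is described in a small configuration file present on both nodes. This is a hedged sketch with made-up hostnames, addresses and devices, not a production configuration:

resource r0 {
  protocol C;          # synchronous replication
  device /dev/drbd0;
  disk /dev/sdb1;
  meta-disk internal;
  on node-a {
    address 10.0.0.1:7789;
  }
  on node-b {
    address 10.0.0.2:7789;
  }
}

Protocol C acknowledges a write only after it has landed on both nodes, which is what makes active-passive failover safe.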

Mdadm

This companion tool to the Linux kernel’s md driver makes it possible to set up and manage your own software RAID array using standard hardware. While it is terminal-based, it offers a wide variety of options for monitoring, reporting, and managing RAID arrays.
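
As a short example of how approachable it is (the device names are made up), creating and then checking on a two-disk RAID 1 array looks like this:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
cat /proc/mdstat
mdadm --detail /dev/md0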

Raider

Raider applies RAID 1, 4, 5, 6 or 10 to hard drives. It is able to convert a single Linux system disk into a software RAID 1, 4, 5, 6 or 10 system with a simple two-pass command. Raider is a bash shell script that deals with the specific oddities of several Linux distros (Ubuntu, Debian, Arch, Mandriva, Mageia, openSuSE, Fedora, CentOS, PCLinuxOS, Linux Mint, Scientific Linux, Gentoo, Slackware and more; see the README) and uses Linux software RAID (mdadm; see http://en.wikipedia.org/wiki/Mdadm and https://raid.wiki.kernel.org/) to execute the conversion.

Open Source Backup and Synchronization Solutions

Zmanda

From their marketing staff… “Zmanda is the world’s leading provider of open source backup and recovery software. Our open source development and distribution model enables us to deliver the highest quality backup software such as Amanda Enterprise and Zmanda Recovery Manager for MySQL at a fraction of the cost of software from proprietary vendors. Our simple-to-use yet feature-rich backup software is complemented by top-notch services and support expected by enterprise customers.”

Zmanda offers a community and enterprise edition of their software. The enterprise edition of course offers a much more complete feature set.

AMANDA

The core of Amanda is the Amanda server, which handles all the backup operations, compression, indexing and configuration tasks. You can run it on any Linux server as it doesn’t conflict with other processes, but it is recommended to run it on a dedicated machine, as that removes the associated processing load from the client machines and prevents the backup from negatively affecting client performance.

Overall it is an extremely capable file-level backup tool that can be customized to your exact requirements. While it lacks a GUI, the command line controls are simple and the level of control you have over your backups is exceptional. Because it can be called from within your own scripts, it can be incorporated into your own custom backup scheme no matter how complex your requirements are. Paid support and a cloud-based version are available through Zmanda, which is owned by Carbonite.
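
Day to day, Amanda is driven by a handful of server-side commands; as a hedged sketch, assuming a backup configuration named DailySet1:

amcheck DailySet1     # verify the configuration, holding disk and media are ready
amdump DailySet1      # run the backups defined in that configuration
amreport DailySet1    # generate the report for the most recent run

amrecover is the interactive client-side counterpart used to browse the index and restore files.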

Areca Backup

Areca Backup is a free backup utility for Windows and Linux.  It is written in Java and released under the GNU General Public License. It’s a good option for backing up a single system and it aims to be simple and versatile. Key features include compression, encryption, filters and support for delta backup.

Backup

Backup is a system utility for Linux and Mac OS X, distributed as a RubyGem, that allows you to easily perform backup operations. It provides an elegant DSL in Ruby for modeling your backups. Backup has built-in support for various databases, storage protocols/services, syncers, compressors, encryptors and notifiers which you can mix and match. It was built with modularity, extensibility and simplicity in mind.

BackupPC

Designed for enterprise users, BackupPC claims to be “highly configurable and easy to install and maintain.” It backs up to disk only (not tape) and offers features that reduce storage capacity and IO requirements.

Bacula

Another enterprise-grade open source backup solution, Bacula offers a number of advanced features for backup and recovery, as well as a fairly easy-to-use interface. Commercial support, training and services are available through Bacula Systems.

Back In Time

Similar to FlyBack (see below), Back in Time offers a very easy-to-configure snapshot backup solution. GUIs are available for both Gnome and KDE (4.1 or greater).

Backupninja

This tool makes it easier to coordinate and manage backups on your network. With the help of programs like rdiff-backup, duplicity, mysqlhotcopy and mysqldump, Backupninja offers common backup features such as remote, secure and incremental file system backups, encrypted backup, and MySQL/MariaDB database backup. You can selectively enable status email reports, and can back up general hardware and system information as well. One key strength of backupninja is a built-in console-based wizard (called ninjahelper) that allows you to easily create configuration files for various backup scenarios. The downside is that backupninja requires other “helper” programs to be installed in order to take full advantage of all its features. While backupninja’s RPM package is available for Red Hat-based distributions, backupninja’s dependencies are optimized for Debian and its derivatives. Thus it is not recommended to try backupninja for Red Hat based systems.

Bareos

Short for “Backup Archiving Recovery Open Sourced,” Bareos is a 100% open source fork of the backup project from bacula.org. The fork has been in development since late 2010 and has a lot of new features. The source has been published on GitHub, licensed under the AGPLv3. It offers features like LTO hardware encryption, efficient bandwidth usage and practical console commands. A commercially supported version of the same software is available through Bareos.com.

Box Backup

Box Backup describes itself as “an open source, completely automatic, online backup system.” It creates backups continuously and can support RAID. Box Backup is stable but not yet feature complete. All of the facilities to maintain reliable encrypted backups and to allow clients to recover data are, however, already implemented and stable.

BURP

BURP, which stands for “BackUp And Restore Program,” is a network backup tool based on librsync and VSS. It’s designed to be easy to configure and to work well with disk storage. It attempts to reduce network traffic and the amount of space that is used by each backup.

Clonezilla

Conceived as a replacement for True Image or Norton Ghost, Clonezilla is a disk imaging application that can do system deployments as well as bare metal backup and recovery. Two types of Clonezilla are available: Clonezilla live and Clonezilla SE (server edition). Clonezilla live is suitable for single machine backup and restore, while Clonezilla SE is for massive deployment and can clone many (40+) computers simultaneously. Clonezilla saves and restores only the used blocks on the hard disk, which increases clone efficiency. With some high-end hardware in a 42-node cluster, a multicast restore at a rate of 8 GB/min has been reported.

Create Synchronicity

Create Synchronicity’s claim to fame is its lightweight size, about 220KB, and it’s also very fast. It synchronizes files and folders, offers an intuitive interface for backing up standalone systems, and can schedule backups to keep your data safe. Plus, it’s open source, portable and multilingual. Windows 2000, Windows XP, Windows Vista, and Windows 7 are supported. To run Create Synchronicity, you must install the .NET Framework, version 2.0 or later.

DAR

DAR is a command-line backup and archiving tool that uses selective compression (not compressing already-compressed files) and strong encryption; it can split an archive into multiple files of a given size and provides on-the-fly hashing. DAR knows how to perform full, differential, incremental and decremental backups. It provides testing, diffing, merging, listing and, of course, data extraction from existing archives. The archive’s internal catalog allows very quick restoration of even a single file from a very large archive, even one that is sliced, compressed and encrypted. DAR saves *all* UNIX inode types, takes care of hard links, sparse files and Extended Attributes (macOS file forks, Linux ACLs, SELinux tags, user attributes), has support for ssh, and is suitable for tapes and disks (floppy, CD, DVD, hard disks and so on). An optional GUI is available from the DarGUI project.
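
A hedged usage sketch (the archive base names and paths are made up): a compressed full backup of /home followed by a differential backup referencing it might look like this:

dar -c /backups/home_full -R /home -z
dar -c /backups/home_diff1 -R /home -z -A /backups/home_full

The -A option points DAR at the reference archive, so only changes made since the full backup are saved.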

DirSync Pro

DirSync Pro is a small but powerful utility for file and folder synchronization. It can be used to synchronize the content of one or many folders recursively. Use DirSync Pro to easily synchronize files from your desktop PC to your USB stick (or external HD, PDA or notebook), then use that device to synchronize files to another desktop PC. It also features incremental backups, a user-friendly interface, a powerful schedule engine, and real-time synchronization. It is written in Java.

Duplicati

Duplicati is designed to backup your network to a cloud computing service like Amazon S3, Microsoft OneDrive, Google Cloud or Rackspace. It includes AES-256 encryption and a scheduler, as well as features like filters, deletion rules, transfer and bandwidth options. Save space with incremental backups and data deduplication. Run backups on any machine through the web-based interface or via command line interface. It has an auto-updater.

Duplicity

Based on the librsync library, Duplicity creates encrypted archives and uploads them to remote or local servers. It can use GnuPG to encrypt and sign archives if desired.

Duplicity backs directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server.

The duplicity package also includes the rdiffdir utility. Rdiffdir is an extension of librsync’s rdiff to directories—it can be used to produce signatures and deltas of directories as well as regular files. These signatures and deltas are in GNU tar format.
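
A hedged sketch of typical duplicity usage (the paths and SFTP target are made up): the first run produces a full backup, later runs are automatically incremental, and restore reverses the direction:

duplicity /home/user sftp://backup@backuphost//srv/backups/user
duplicity restore sftp://backup@backuphost//srv/backups/user /tmp/restored

Encryption via GnuPG is on by default, so duplicity will prompt for a passphrase unless you pass --no-encryption.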

FlyBack

Similar to Apple’s TimeMachine, FlyBack provides incremental backup capabilities and allows users to recover their systems from any previous time. The interface is very easy to use, but little customization is available. FlyBack creates incremental backups of files, which can be restored at a later date. FlyBack presents a chronological view of a file system, allowing individual files or directories to be previewed or retrieved one at a time. Flyback was originally based on rsync when the project began in 2007, but in October 2009 it was rewritten from scratch using Git.

FOG

An imaging and cloning solution, FOG makes it easy for administrators to back up networks of all sizes. FOG can be used to image Windows XP, Vista, Windows 7 and Windows 8 PCs using PXE, PartClone, and a web GUI to tie it all together. It includes features like memory and disk testing, disk wiping, AV scanning and task scheduling.

FreeFileSync

FreeFileSync is free Open Source software that helps you synchronize files and folders on Windows, Linux and Mac OS X. It is designed to save you time setting up and running data backups while providing nice visual feedback along the way. This file and folder synchronization tool can be very useful for backup purposes. It can save a lot of time and receives very good reviews from its users.

FullSync

FullSync is a powerful tool that helps you keep multiple copies of various data in sync. For example, it can update your website using (S)FTP, back up your data, or refresh a working copy from a remote server. It offers flexible rules, a scheduler and more. Built for developers, FullSync offers synchronization capabilities suitable for backup purposes or for publishing web pages. Features include multiple modes, flexible tools, support for multiple file transfer protocols and more.

Grsync

Grsync provides a graphical interface for rsync, a popular command line synchronization and backup tool. It's useful for backup, mirroring, replication of partitions, and similar tasks. The Windows (win32) version is a hack/port of Piero Orsoni's wonderful Grsync, an rsync frontend in GTK.

LuckyBackup

Award-winning LuckyBackup offers simple, fast backup. Note that while it is available in a Windows version, that version is still under development. Features include backup using snapshots, various checks to keep data safe, a simulation mode, remote connections, an easy restore procedure, the ability to add or remove any rsync option, folder synchronization, excluding data from tasks, executing other commands before or after a task, scheduling, tray notification support, and e-mail reports.

Mondo Rescue

Mondo Rescue is a GPL disaster recovery solution. It supports Linux (i386, x86_64, ia64) and FreeBSD (i386), and it's packaged for multiple distributions (Fedora, RHEL, openSUSE, SLES, Mandriva, Mageia, Debian, Ubuntu, Gentoo). It supports tapes, disks, network and CD/DVD as backup media, multiple filesystems, LVM, software and hardware RAID, and both BIOS and UEFI.
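
A minimal sketch of a full system backup to ISO images on local disk, using the commonly cited mondoarchive invocation (the destination path is hypothetical):

mondoarchive -Oi -d /var/cache/mondo -E /var/cache/mondo

Here -O starts a backup, -i writes ISO images as the backup media, -d sets the destination directory, and -E excludes that same directory so the backup does not recursively include its own images.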

Obnam

Winner of the most original name for backup software: "OBligatory NAMe". This app performs snapshot backups that can be stored on local disks or online storage services. Features include easy usage, snapshot backups, data de-duplication across files and backup generations, encrypted backups, and support for both push (run on the client) and pull (run on the server) methods.

Partimage

Partimage is open source disk backup software. It saves partitions having a supported filesystem, on a sector basis, to an image file. Although it runs under Linux, Windows and most Linux filesystems are supported. The image file can be compressed to save disk space and transfer time, and can be split into multiple files to be copied to CDs or DVDs. Partitions can be saved across the network using Partimage's network support, or using Samba / NFS (Network File Systems). This provides the ability to perform a hard disk partition recovery after a disk crash. Partimage can be run as part of your normal system or stand-alone from the live SystemRescueCd, which is helpful when the operating system cannot be started. SystemRescueCd comes with most of the Linux data recovery software you may need.

Partimage will only copy data from the used portions of the partition (this is why it only works for supported filesystems). For speed and efficiency, free blocks are not written to the image file, unlike other tools, which also copy unused blocks. Since the partition is processed on a sequential sector basis, disk transfer time is maximized and seek time is minimized, so Partimage also performs well on very full partitions. For example, a full 1 GB partition may be compressed down to 400 MB.
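
As a rough sketch (device and image paths hypothetical), saving a partition to a compressed image and later restoring it might look like:

partimage save /dev/sda1 /mnt/backup/sda1.partimg.gz
partimage restore /dev/sda1 /mnt/backup/sda1.partimg.gz.000

Partimage appends a numeric suffix (.000, .001, …) to the image volumes it writes, which is why the restore command references the first volume.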

Redo

Easy rescue system with GUI tools for full system backup, bare metal recovery, partition editing, recovering deleted files, data protection, web browsing, and more. Uses partclone (like Clonezilla) with a UI like Ghost or Acronis. Runs from CD/USB.

Rsnapshot

Rsnapshot is a filesystem snapshot utility for making backups of local and remote systems. Using rsync and hard links, it is possible to keep multiple, full backups instantly available. The disk space required is just a little more than the space of one full backup, plus incrementals. Depending on your configuration, it is quite possible to set up in just a few minutes. Files can be restored by the users who own them, without the root user getting involved. There are no tapes to change, so once it’s set up, you may never need to think about it again. rsnapshot is written entirely in Perl. It should work on any reasonably modern UNIX compatible OS, including: Debian, Redhat, Fedora, SuSE, Gentoo, Slackware, FreeBSD, OpenBSD, NetBSD, Solaris, Mac OS X, and even IRIX.
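
A minimal sketch of an rsnapshot configuration (paths hypothetical; note that rsnapshot.conf requires tabs, not spaces, between fields):

snapshot_root	/backups/snapshots/
retain	daily	7
retain	weekly	4
backup	/home/	localhost/

Each retention level is then triggered from cron, e.g. "rsnapshot daily", and rsnapshot rotates the hard-linked snapshot directories automatically. Older versions spell the retain directive "interval".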

Rsync

Rsync is a fast and extraordinarily versatile file copying tool for both remote and local files. Rsync uses a delta-transfer algorithm which provides a very fast method for bringing remote files into sync. It does this by sending just the differences in the files across the link, without requiring that both sets of files are present at one of the ends of the link beforehand. At first glance this may seem impossible because the calculation of diffs between two files normally requires local access to both files.
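
A typical invocation mirroring a local tree to a remote host over SSH (host and paths hypothetical):

rsync -avz --delete /data/projects/ user@backup.example.com:/backups/projects/

Here -a preserves permissions, ownership and timestamps, -v is verbose, -z compresses data in transit, and --delete removes files from the destination that no longer exist in the source, keeping the mirror exact.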

SafeKeep

SafeKeep is a centralized and easy to use backup application that combines the best features of a mirror and an incremental backup. It sets up the appropriate environment for compatible backup packages and simplifies the process of running them. For Linux users only, SafeKeep focuses on security and simplicity. It’s a command line tool that is a good option for a smaller environment.

Synkron

This application allows you to keep your files and folders updated and synchronized. Key features include an easy to use interface, blacklisting, analysis and restore. It is also cross-platform.

Synbak

Synbak is software designed to unify several backup methods. It provides a powerful reporting system and a very simple configuration file format. Synbak is a wrapper for several existing backup programs, supplying the end user with a common configuration method, managing the execution logic for every backup, and giving detailed reports of backup results. Synbak can make backups using rsync over ssh, rsync daemon, SMB and CIFS protocols (using internal automount functions); tar archives (tar, tar.gz and tar.bz2); tape devices (including multi-loader changer tapes); LDAP, MySQL and Oracle databases; CD-RW/DVD-RW; and wget to mirror HTTP/FTP servers. It offers official support only for Red Hat Enterprise Linux and Fedora Core distributions.

SnapBackup

Designed to be as easy to use as possible, SnapBackup backs up files with just one click. It can copy files to a flash drive, external hard drive or the cloud, and it includes compression capabilities. The first time you run Snap Backup, you configure where your data files reside and where to create backup files. Snap Backup will also copy your backup to an archive location, such as a USB flash drive (memory stick), external hard drive, or cloud backup. Snap Backup automatically puts the current date in the backup file name, sparing you the tedious task of renaming your backup file every time you back up. The backup file is a single compressed file that can be read by zip programs such as gzip, 7-Zip, The Unarchiver, and Mac's built-in Archive Utility.

Syncovery

File synchronization and backup software. Back up data and synchronize PCs, Macs, servers, notebooks, and online storage space. You can set up as many different jobs as you need and run them manually or using the scheduler. Syncovery works with local hard drives, network drives and any other mounted volumes. In addition, it comes with support for FTP, SSH, HTTP, WebDAV, Amazon S3, Google Drive, Microsoft Azure, SugarSync, box.net and many other cloud storage providers. You can use ZIP compression and data encryption. On Windows, the scheduler can run as a service, without users having to log on. There are powerful synchronization modes, including Standard Copying, Exact Mirror, and SmartTracking. Syncovery features a well-designed GUI, making it an extremely versatile synchronizing and backup tool.

XSIbackup

XSIBackup can back up VMware ESXi environments, version 5.1 or greater. It's a command line tool with a scheduler, and it runs directly on the hypervisor. XSIBackup is a free alternative to commercial software like Veeam Backup.

UrBackup

A client-server system, UrBackup does both file and image backups. UrBackup is an easy-to-set-up open source client/server backup system that, through a combination of image and file backups, accomplishes both data safety and fast restoration times. File and image backups are made while the system is running, without interrupting current processes. UrBackup also continuously watches the folders you want backed up in order to quickly find differences from previous backups; because of that, incremental file backups are really fast. Your files can be restored through the web interface, via the client, or through Windows Explorer, while backups of drive volumes can be restored with a bootable CD or USB stick (bare metal restore). A web interface makes setting up your own backup server easy.

Unison

This file synchronization tool goes beyond the capabilities of most backup systems, because it can reconcile several slightly different copies of the same file stored in different places. It can work between any two (or more) computers connected to the Internet, even if they don’t have the same operating system. It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other.

Unison shares a number of features with tools such as configuration management packages (CVS, PRCS, Subversion, BitKeeper, etc.), distributed filesystems (Coda, etc.), uni-directional mirroring utilities (rsync, etc.), and other synchronizers (Intellisync, Reconcile, etc). Unison runs on both Windows and many flavors of Unix (Solaris, Linux, OS X, etc.) systems. Moreover, Unison works across platforms, allowing you to synchronize a Windows laptop with a Unix server, for example. Unlike simple mirroring or backup utilities, Unison can deal with updates to both replicas of a distributed directory structure. Updates that do not conflict are propagated automatically. Conflicting updates are detected and displayed.
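
A minimal sketch of synchronizing a local directory with a replica on a remote host over SSH (host and paths hypothetical):

unison /home/user/work ssh://user@server.example.com//home/user/work

Unison compares the two replicas, propagates non-conflicting changes in both directions, and prompts for a decision on any conflicting updates.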

Win32DiskImager

This program is designed to write a raw disk image to a removable device, or to back up a removable device to a raw image file. It is very useful for embedded development, notably ARM projects (Android, Ubuntu on ARM, etc.). Averaging more than 50,000 downloads every week, this tool is a very popular way to copy a disk image to a new machine, and it's very useful for systems administrators and developers.

Open Source Cloud Data Storage Solutions

Camlistore

Camlistore is short for “Content-Addressable Multi-Layer Indexed Storage.” Camlistore is a set of open source formats, protocols, and software for modeling, storing, searching, sharing and synchronizing data in the post-PC era. Data may be files or objects, tweets or 5TB videos, and you can access it via a phone, browser or FUSE filesystem. It is still under active development. If you’re a programmer or fairly technical, you can probably get it up and running and get some utility out of it. Many bits and pieces are actively being developed, so be prepared for bugs and unfinished features.

CloudStack

Apache’s CloudStack project offers a complete cloud computing solution, including cloud storage. Key storage features include tiering, block storage volumes and support for most storage hardware.

CloudStack is open source software designed to deploy and manage large networks of virtual machines, as a highly available, highly scalable Infrastructure as a Service (IaaS) cloud computing platform. CloudStack is used by a number of service providers to offer public cloud services, and by many companies to provide an on-premises (private) cloud offering, or as part of a hybrid cloud solution.

CloudStack is a turnkey solution that includes the entire “stack” of features most organizations want with an IaaS cloud: compute orchestration, Network-as-a-Service, user and account management, a full and open native API, resource accounting, and a first-class User Interface (UI). It currently supports the most popular hypervisors: VMware, KVM, Citrix XenServer, Xen Cloud Platform (XCP), Oracle VM server and Microsoft Hyper-V.

CloudStore

CloudStore synchronizes files between multiple locations. It is similar to Dropbox, but it’s completely free and, as noted by the developer, does not require the user to trust a US company.

Cozy

Cozy is a personal cloud solution that allows users to "host, hack and delete" their own files. It stores calendar and contact information in addition to documents, and it also has an app store with compatible applications.

DREBS

Designed for Amazon Web Services users, DREBS stands for "Disaster Recovery for Elastic Block Store." It runs on Amazon's EC2 service and takes periodic snapshots of EBS volumes for disaster recovery purposes. It is designed to be run on the EC2 host to which the EBS volumes to be snapshotted are attached.
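
For reference, the underlying EC2 operation that DREBS automates can be performed manually with the AWS CLI (the volume ID below is hypothetical):

aws ec2 create-snapshot --volume-id vol-0abc12345def67890 --description "nightly snapshot of data volume"

A tool like DREBS adds scheduling and retention logic on top of this single API call.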

DuraCloud

DuraCloud is a hosted service and open technology developed by DuraSpace that makes it easy for organizations and end users to use cloud services. DuraCloud leverages existing cloud infrastructure to enable durability and access to digital content. It is particularly focused on providing preservation support services and access services for academic libraries, academic research centers, and other cultural heritage organizations. The service builds on the pure storage from expert storage providers by overlaying the access functionality and preservation support tools that are essential to ensuring long-term access and durability. DuraCloud offers cloud storage across multiple commercial and non-commercial providers, and offers compute services that are key to unlocking the value of digital content stored in the cloud. DuraCloud provides services that enable digital preservation, data access, transformation and data sharing.