Custom Reporting with Isilon OneFS API Calls

Have you ever wondered where the metrics you see in InsightIQ come from?  InsightIQ uses OneFS API calls to gather information, and you can use the same API calls for custom reporting and scripting.  Whether you’re interested in performance metrics relating to cluster capacity, CPU utilization, network latency & throughput, or disk activities, you have access to all of that information.

I've already spent a good deal of time figuring out how to make this work and investigating options for presenting the gathered data in a useful way.  This is really just the beginning; I'm hoping to spend more time later on additional custom script examples that gather specific info and have useful output options.  For now, this should get anyone started who's interested in trying this out.  This post also includes a list of available API calls you can make.  I cover these basic steps to get you started:

  1. How to authenticate to the Isilon Cluster using cookies.
  2. How to make the API call to the Isilon to generate the JSON output.
  3. How to install the jq utility to parse JSON output files.
  4. Some examples of using the jq utility to parse the JSON output.

Authentication

First I'll go over how to authenticate to the Isilon cluster using cookies.  You'll need to create a credentials file; name the file auth.json and enter the following info into it:

{
  "username": "root",
  "password": "<password>",
  "services": ["platform", "namespace"]
}

Note that I am using root for this example, but it would certainly be possible to create a separate account on the Isilon to use for this purpose.  Just give the account the Platform API and Statistics roles.

Once the file is created, you can make a session call to get a cookie:

curl -v -k -H "Content-Type: application/json" -c cookiefile -X POST -d @auth.json https://10.10.10.10:8080/session/1/session

The output will be over one page long, but you’re looking to verify that the cookie was generated.  You should see two lines similar to this:

* Added cookie isisessid="123456-xxxx-xxxx-xxxx-a193333a99bc" for domain 10.10.10.10, path /, expire 0
< Set-Cookie: isisessid=123456-xxxx-xxxx-xxxx-a193333a99bc; path=/; HttpOnly; Secure

Run a test command to gather data

Below is a sample call that gathers some basic statistics.  Later in this post I'll list the strings you can use to gather info on CPU, disk, performance, etc.

curl -k -b cookiefile 'https://10.10.10.10:8080/platform/1/statistics/current?key=ifs.bytes.total'

The command above generates the following json output:

{
  "stats" :
  [
    {
      "devid" : 0,
      "error" : null,
      "error_code" : null,
      "key" : "ifs.bytes.total",
      "time" : 1398840008,
      "value" : 433974304096256
    }
  ]
}
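
If you just want the raw number out of that response, jq (installed in the next section) can pull it out directly.  Here's a minimal sketch, assuming the output above was saved to a file named current.json (the file name is just an example):

jq '.stats[0].value' current.json

That prints 433974304096256, the value shown in the output above.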

Install JQ

Now that we have data in JSON format, we need to be able to parse it and convert it into a more readable format.  I'm looking to convert it to CSV.  There are many different scripts, tools, and languages available online for that purpose.  I personally looked for a method that could be used in a simple bash script, and jq is a good solution for that.  I use Cygwin on a Windows box for my scripts, but you can download whichever build fits your flavor of OS.  You can download the jq parser here: https://github.com/stedolan/jq/releases.

Instructions for the installation of jq for Cygwin (the full command sequence is also shown below the list):

  1. Download the latest source tarball for jq from https://stedolan.github.io/jq/download/
  2. Open Cygwin and create the folder you'd like to extract it into
  3. Copy the ‘jq-1.5.tar.gz’ file into your folder to make it available within Cygwin
  4. From a Cygwin command shell, enter the following to uncompress the tarball file : ‘tar -xvzf jq-1.5.tar.gz’
  5. Change to the uncompressed folder, e.g. 'cd jq-1.5'
  6. Next enter ‘./configure’ and wait for the command to complete (about 5 minutes)
  7. Then enter the commands ‘make’, followed by ‘make install’
  8. You’re done.
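
For reference, here's the same installation as a single command sequence (a sketch that assumes the jq-1.5.tar.gz tarball is already in your current Cygwin folder):

# uncompress the tarball, then build and install jq from source
tar -xvzf jq-1.5.tar.gz
cd jq-1.5
./configure
make
make install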

Once jq is installed, we can play around with using it to make our JSON output more readable.  One simple way to turn it into comma-separated output is with this command:

cat sample.json | jq '.stats | .[]' --compact-output

It will turn json output like this:

{
  "stats" :
  [
    {
      "devid" : 8,
      "error" : null,
      "error_code" : null,
      "key" : "node.ifs.bytes.used",
      "values" :
      [
        {
          "time" : 1498745964,
          "value" : 51694140276736
        },
        {
          "time" : 1498746264,
          "value" : 51705407610880
        }
      ]
    },

Into single-line, comma-separated output like this:

{"devid":8,"error":null,"error_code":null,"key":"node.ifs.bytes.used","values":[{"time":1498745964,"value":51694140276736},{"time":1498746264,"value":51705407610880}]}

You can further improve the output by removing the quote marks with sed:

cat sample.json | jq '.stats | .[]' --compact-output | sed 's/"//g'

At this point the data is formatted well enough to easily modify it to suit my needs in Excel.

{devid:8,error:null,error_code:null,key:node.ifs.bytes.used,values:[{time:1498745964,value:51694140276736},{time:1498746264,value:51705407610880}]}

JQ CSV

Using the --compact-output switch isn't the only way to manipulate the data, and it's probably not the best way.  I haven't had much time to work with the @csv option in jq, but it looks very promising for this.  Below are a few notes on using it; I'll include more samples in an edit to this post or a future post that relate it more directly to the Isilon-generated output.  I prefer to use CSV files for report output due to the ease of working with them and manipulating them with scripts.

Order is significant in CSV, but not for JSON fields.  Specify the mapping from named JSON fields to positional CSV fields by constructing an array of those fields, e.g. [.date,.count,.title]:

input: { "date": "2017-06-12 08:19", "count": 17, "title": "You're going too fast!" }
jq -r '[.date,.count,.title] | @csv'
"2017-06-12 08:19",17,"You're going too fast!"

You also may want to apply this to an array of objects, in which case you’ll need to use the .[] operator, which streams each item of an array in turn:

jq -r '.[] | [.date, .count, .title] | @csv'
"2017-06-12 08:19",17,"You're going too fast!"
"2017-06-15 11:50",4711,"That's impossible"?"
"2017-06-19 00:01",,"I can't drive 55!"

You'll likely also want the CSV file to include the field names at the top.  The easiest way to do this is to add them in manually:

jq -r '["date", "count", "title"], (.[] | [.date, .count, .title]) | @csv'
"date","count","title"
"2017-06-12 08:19",17,"You're going too fast!"
"2017-06-15 11:50",4711,"That's impossible"?"
"2017-06-19 00:01",,"I can't drive 55!"

We can avoid repeating the same list of field names by reusing the header array to look up the fields in each object.

jq -r '["date", "count", "title"] as $fields| $fields, (.[] | [.[$fields[]]]) | @csv'

Here it is as a function, with a slightly nicer field syntax, using path():

def csv(fs): [path(null|fs)[]] as $fields| $fields, (.[] | [.[$fields[]]]) | @csv;
USAGE: csv(.date, .count, .title)

If the input is not an array of objects but just a sequence of objects, then we can omit the .[], but then we can't get the header at the top.  It's best to convert the input to an array using the --slurp/-s option (or put [] around it if it's generated within jq).
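
To bring this back to the Isilon output, here's a hedged sketch that flattens the history-style sample shown earlier (a stats array where each entry carries a values array) into one CSV row per data point.  It assumes that output was saved as sample.json, and the field choices are just an example:

jq -r '["devid","key","time","value"], (.stats[] | .devid as $d | .key as $k | .values[] | [$d, $k, .time, .value]) | @csv' sample.json

Which produces output like this:

"devid","key","time","value"
8,"node.ifs.bytes.used",1498745964,51694140276736
8,"node.ifs.bytes.used",1498746264,51705407610880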

More to come on formatting JSON for Isilon in the future…

Isilon API calls

All of these specific API calls were pulled from the EMC community forum; I didn't compose this list myself.  It's a list of the calls that InsightIQ makes to the OneFS API, and they can be queried in exactly the same way I demonstrated in the examples earlier in this post.

Please note the following about the API calls regarding time ranges:

  1. Calls to the "/platform/1/statistics/current" APIs do not contain &begin and &end time-range query parameters.
  2. Calls to the "/platform/1/statistics/history" APIs always contain &begin and &end POSIX time-range query parameters (see the example after this list).
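
For example, a history query covering the last hour of CPU data might look like the sketch below.  It reuses the cookie file from the authentication step and assumes GNU date (which Cygwin provides) for computing the POSIX timestamps:

# compute a one-hour POSIX time range, then request history data for it
BEGIN=$(date -d '1 hour ago' +%s)
END=$(date +%s)
curl -k -b cookiefile "https://10.10.10.10:8080/platform/1/statistics/history?key=node.cpu.idle.avg&devid=all&interval=30&begin=${BEGIN}&end=${END}"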

Capacity

https://10.10.10.10:8080/platform/1/statistics/current?key=ifs.bytes.total&key=ifs.ssd.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.bytes.avail&key=ifs.ssd.bytes.avail&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&key=node.ifs.bytes.used&key=node.disk.count&key=node.cpu.count&key=node.uptime&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=ifs.bytes.avail&key=ifs.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.ssd.bytes.avail&key=ifs.ssd.bytes.total&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.ifs.bytes.used.all&key=node.disk.ifs.bytes.total.all&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.out.rate&key=node.ifs.bytes.in.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.total&key=node.ifs.ssd.bytes.used&key=node.ifs.ssd.bytes.total&key=node.ifs.bytes.used&devid=all&degraded=true&interval=300&memory_only=true
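
As a quick example of putting the capacity keys and jq together, here's a hedged sketch that queries two of the keys above and reports percent used.  It assumes the cookie file from the authentication step and that the multi-key response uses the same stats layout as the single-key example shown earlier:

#!/bin/bash
# Report cluster capacity percent used (sketch).
CLUSTER="10.10.10.10"
curl -s -k -b cookiefile "https://${CLUSTER}:8080/platform/1/statistics/current?key=ifs.bytes.total&key=ifs.bytes.free" \
  | jq -r '[.stats[] | {(.key): .value}] | add
    | "Total bytes: \(.["ifs.bytes.total"])",
      "Free bytes: \(.["ifs.bytes.free"])",
      "Percent used: \((1 - .["ifs.bytes.free"] / .["ifs.bytes.total"]) * 100 | floor)%"'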

CPU

https://10.10.10.10:8080/platform/1/statistics/history?key=node.cpu.idle.avg&devid=all&degraded=true&interval=30&memory_only=true

Network

https://10.10.10.10:8080/platform/1/statistics/current?key=node.net.iface.name.0&key=node.net.iface.name.1&key=node.net.iface.name.2&key=node.net.iface.name.3&key=node.net.iface.name.4&key=node.net.iface.name.5&key=node.net.iface.name.6&key=node.net.iface.name.7&key=node.net.iface.name.8&key=node.net.iface.name.9&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.ext.packets.in.rate&key=node.net.ext.errors.in.rate&key=node.net.ext.bytes.out.rate&key=node.net.ext.errors.out.rate&key=node.net.ext.bytes.in.rate&key=node.net.ext.packets.out.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.iface.bytes.out.rate.0&key=node.net.iface.bytes.out.rate.1&key=node.net.iface.bytes.out.rate.2&key=node.net.iface.bytes.out.rate.3&key=node.net.iface.bytes.out.rate.4&key=node.net.iface.bytes.out.rate.5&key=node.net.iface.bytes.out.rate.6&key=node.net.iface.bytes.out.rate.7&key=node.net.iface.bytes.out.rate.8&key=node.net.iface.bytes.out.rate.9&key=node.net.iface.errors.in.rate.0&key=node.net.iface.errors.in.rate.1&key=node.net.iface.errors.in.rate.2&key=node.net.iface.errors.in.rate.3&key=node.net.iface.errors.in.rate.4&key=node.net.iface.errors.in.rate.5&key=node.net.iface.errors.in.rate.6&key=node.net.iface.errors.in.rate.7&key=node.net.iface.errors.in.rate.8&key=node.net.iface.errors.in.rate.9&key=node.net.iface.errors.out.rate.0&key=node.net.iface.errors.out.rate.1&key=node.net.iface.errors.out.rate.2&key=node.net.iface.errors.out.rate.3&key=node.net.iface.errors.out.rate.4&key=node.net.iface.errors.out.rate.5&key=node.net.iface.errors.out.rate.6&key=node.net.iface.errors.out.rate.7&key=node.net.iface.errors.out.rate.8&key=node.net.iface.errors.out.rate.9&key=node.net.iface.packets.in.rate.0&key=node.net.iface.packets.in.rate.1&key=node.net.iface.packets.in.rate.2&key=node.net.iface.packets.in.rate.3&key=node.net.iface.packets.in.rate.4&key=node.net.iface.packets.in.rate.5&key=node.net.iface.packets.in.rate.6&key=node.net.iface.packets.in.rate.7&key=node.net.iface.packets.in.rate.8&key=node.net.iface.packets.in.rate.9&key=node.net.iface.bytes.in.rate.0&key=node.net.iface.bytes.in.rate.1&key=node.net.iface.bytes.in.rate.2&key=node.net.iface.bytes.in.rate.3&key=node.net.iface.bytes.in.rate.4&key=node.net.iface.bytes.in.rate.5&key=node.net.iface.bytes.in.rate.6&key=node.net.iface.bytes.in.rate.7&key=node.net.iface.bytes.in.rate.8&key=node.net.iface.bytes.in.rate.9&key=node.net.iface.packets.out.rate.0&key=node.net.iface.packets.out.rate.1&key=node.net.iface.packets.out.rate.2&key=node.net.iface.packets.out.rate.3&key=node.net.iface.packets.out.rate.4&key=node.net.iface.packets.out.rate.5&key=node.net.iface.packets.out.rate.6&key=node.net.iface.packets.out.rate.7&key=node.net.iface.packets.out.rate.8&key=node.net.iface.packets.out.rate.9&devid=all&degraded=true&interval=30&memory_only=true

Disk

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.count&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.name.0&key=node.disk.name.1&key=node.disk.name.2&key=node.disk.name.3&key=node.disk.name.4&key=node.disk.name.5&key=node.disk.name.6&key=node.disk.name.7&key=node.disk.name.8&key=node.disk.name.9&key=node.disk.name.10&key=node.disk.name.11&key=node.disk.name.12&key=node.disk.name.13&key=node.disk.name.14&key=node.disk.name.15&key=node.disk.name.16&key=node.disk.name.17&key=node.disk.name.18&key=node.disk.name.19&key=node.disk.name.20&key=node.disk.name.21&key=node.disk.name.22&key=node.disk.name.23&key=node.disk.name.24&key=node.disk.name.25&key=node.disk.name.26&key=node.disk.name.27&key=node.disk.name.28&key=node.disk.name.29&key=node.disk.name.30&key=node.disk.name.31&key=node.disk.name.32&key=node.disk.name.33&key=node.disk.name.34&key=node.disk.name.35&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&key=node.ifs.bytes.used&key=node.disk.count&key=node.cpu.count&key=node.uptime&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.slow.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.busy.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.ifs.bytes.used.all&key=node.disk.ifs.bytes.total.all&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.queue.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.in.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.out.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

Complete List of API calls made by InsightIQ

Here is a complete list of all of the API calls that InsightIQ makes to the Isilon cluster using the OneFS API.  For a complete reference on what these APIs actually do, you can refer to the OneFS API Info Hub and the OneFS API Reference documentation.

https://10.10.10.10:8080/platform/1/cluster/config

https://10.10.10.10:8080/platform/1/cluster/identity

https://10.10.10.10:8080/platform/1/cluster/time

https://10.10.10.10:8080/platform/1/dedupe/dedupe-summary

https://10.10.10.10:8080/platform/1/dedupe/reports

https://10.10.10.10:8080/platform/1/fsa/path

https://10.10.10.10:8080/platform/1/fsa/results

https://10.10.10.10:8080/platform/1/job/types

https://10.10.10.10:8080/platform/1/license/licenses

https://10.10.10.10:8080/platform/1/license/licenses/InsightIQ

https://10.10.10.10:8080/platform/1/quota/reports

https://10.10.10.10:8080/platform/1/snapshot/snapshots-summary

https://10.10.10.10:8080/platform/1/statistics/current?key=cluster.health&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=ifs.bytes.total&key=ifs.ssd.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.bytes.avail&key=ifs.ssd.bytes.avail&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.count&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.name.0&key=node.disk.name.1&key=node.disk.name.2&key=node.disk.name.3&key=node.disk.name.4&key=node.disk.name.5&key=node.disk.name.6&key=node.disk.name.7&key=node.disk.name.8&key=node.disk.name.9&key=node.disk.name.10&key=node.disk.name.11&key=node.disk.name.12&key=node.disk.name.13&key=node.disk.name.14&key=node.disk.name.15&key=node.disk.name.16&key=node.disk.name.17&key=node.disk.name.18&key=node.disk.name.19&key=node.disk.name.20&key=node.disk.name.21&key=node.disk.name.22&key=node.disk.name.23&key=node.disk.name.24&key=node.disk.name.25&key=node.disk.name.26&key=node.disk.name.27&key=node.disk.name.28&key=node.disk.name.29&key=node.disk.name.30&key=node.disk.name.31&key=node.disk.name.32&key=node.disk.name.33&key=node.disk.name.34&key=node.disk.name.35&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&key=node.ifs.bytes.used&key=node.disk.count&key=node.cpu.count&key=node.uptime&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.net.iface.name.0&key=node.net.iface.name.1&key=node.net.iface.name.2&key=node.net.iface.name.3&key=node.net.iface.name.4&key=node.net.iface.name.5&key=node.net.iface.name.6&key=node.net.iface.name.7&key=node.net.iface.name.8&key=node.net.iface.name.9&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=cluster.dedupe.estimated.saved.bytes&key=cluster.dedupe.logical.deduplicated.bytes&key=cluster.dedupe.logical.saved.bytes&key=cluster.dedupe.estimated.deduplicated.bytes&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=ifs.bytes.avail&key=ifs.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.ssd.bytes.avail&key=ifs.ssd.bytes.total&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.ftp&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.hdfs&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.http&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.nfs3&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.nfs4&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.nlm&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.papi&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.siq&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.smb1&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.smb2&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.ftp&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.hdfs&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.http&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.nfs&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.nlm&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.papi&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.siq&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.smb&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.ftp&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.hdfs&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.http&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.nfs3&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.nfs4&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.nlm&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.papi&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.siq&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.smb1&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.smb2&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.cpu.idle.avg&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.slow.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.busy.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.ifs.bytes.used.all&key=node.disk.ifs.bytes.total.all&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.queue.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.in.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.out.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.out.rate&key=node.ifs.bytes.in.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.total&key=node.ifs.ssd.bytes.used&key=node.ifs.ssd.bytes.total&key=node.ifs.bytes.used&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.cache&key=node.ifs.cache.l3.data.read.miss&key=node.ifs.cache.l3.meta.read.hit&key=node.ifs.cache.l3.data.read.hit&key=node.ifs.cache.l3.data.read.start&key=node.ifs.cache.l3.meta.read.start&key=node.ifs.cache.l3.meta.read.miss&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.blocked&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.blocked.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.contended&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.contended.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.deadlocked&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.deadlocked.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.getattr&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.getattr.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.link&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.link.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lock&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lock.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lookup&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lookup.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.read&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.read.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.rename&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.rename.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.setattr&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.setattr.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.unlink&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.unlink.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.write&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.write.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.je.num_workers&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.ext.packets.in.rate&key=node.net.ext.errors.in.rate&key=node.net.ext.bytes.out.rate&key=node.net.ext.errors.out.rate&key=node.net.ext.bytes.in.rate&key=node.net.ext.packets.out.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.iface.bytes.out.rate.0&key=node.net.iface.bytes.out.rate.1&key=node.net.iface.bytes.out.rate.2&key=node.net.iface.bytes.out.rate.3&key=node.net.iface.bytes.out.rate.4&key=node.net.iface.bytes.out.rate.5&key=node.net.iface.bytes.out.rate.6&key=node.net.iface.bytes.out.rate.7&key=node.net.iface.bytes.out.rate.8&key=node.net.iface.bytes.out.rate.9&key=node.net.iface.errors.in.rate.0&key=node.net.iface.errors.in.rate.1&key=node.net.iface.errors.in.rate.2&key=node.net.iface.errors.in.rate.3&key=node.net.iface.errors.in.rate.4&key=node.net.iface.errors.in.rate.5&key=node.net.iface.errors.in.rate.6&key=node.net.iface.errors.in.rate.7&key=node.net.iface.errors.in.rate.8&key=node.net.iface.errors.in.rate.9&key=node.net.iface.errors.out.rate.0&key=node.net.iface.errors.out.rate.1&key=node.net.iface.errors.out.rate.2&key=node.net.iface.errors.out.rate.3&key=node.net.iface.errors.out.rate.4&key=node.net.iface.errors.out.rate.5&key=node.net.iface.errors.out.rate.6&key=node.net.iface.errors.out.rate.7&key=node.net.iface.errors.out.rate.8&key=node.net.iface.errors.out.rate.9&key=node.net.iface.packets.in.rate.0&key=node.net.iface.packets.in.rate.1&key=node.net.iface.packets.in.rate.2&key=node.net.iface.packets.in.rate.3&key=node.net.iface.packets.in.rate.4&key=node.net.iface.packets.in.rate.5&key=node.net.iface.packets.in.rate.6&key=node.net.iface.packets.in.rate.7&key=node.net.iface.packets.in.rate.8&key=node.net.iface.packets.in.rate.9&key=node.net.iface.bytes.in.rate.0&key=node.net.iface.bytes.in.rate.1&key=node.net.iface.bytes.in.rate.2&key=node.net.iface.bytes.in.rate.3&key=node.net.iface.bytes.in.rate.4&key=node.net.iface.bytes.in.rate.5&key=node.net.iface.bytes.in.rate.6&key=node.net.iface.bytes.in.rate.7&key=node.net.iface.bytes.in.rate.8&key=node.net.iface.bytes.in.rate.9&key=node.net.iface.packets.out.rate.0&key=node.net.iface.packets.out.rate.1&key=node.net.iface.packets.out.rate.2&key=node.net.iface.packets.out.rate.3&key=node.net.iface.packets.out.rate.4&key=node.net.iface.packets.out.rate.5&key=node.net.iface.packets.out.rate.6&key=node.net.iface.packets.out.rate.7&key=node.net.iface.packets.out.rate.8&key=node.net.iface.packets.out.rate.9&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.ftp&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.hdfs&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.http&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.nfs3&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.nfs4&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.nlm&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.papi&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.siq&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.smb1&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.smb2&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/keys

https://10.10.10.10:8080/platform/1/statistics/protocols

https://10.10.10.10:8080/platform/1/storagepool/nodepools

https://10.10.10.10:8080/platform/1/storagepool/tiers

https://10.10.10.10:8080/platform/1/storagepool/unprovisioned

https://10.10.10.10:8080/session/1/session


Multiprotocol VNX File Systems: Listing and counting Shares & Exports by file system

I'm in the early stages of planning a NAS data migration from VNX to Isilon, and one of the first steps I wanted to accomplish was to identify which of our VNX file systems are multiprotocol (with both CIFS shares and NFS exports from the same file system). In the environment I support, which has over 10,000 CIFS shares, it's not a trivial task to identify which file systems are multiprotocol.  After some research, it doesn't appear that there is a built-in method from EMC for determining this information from within the Unisphere GUI. From the CLI, however, the server_export command can be used to view the shares and exports.

Here’s an example of listing shares and exports with the server_export command:

[nasadmin@VNX1 ~]$ server_export ALL -Protocol cifs -list | grep filesystem01

share "share01$" "/filesystem01/data" umask=022 maxusr=4294967294 netbios=NASSERVER comment="Contact: John Doe"
 
[nasadmin@VNX1 ~]$ server_export ALL -Protocol nfs -list | grep filesystem01

export "/root_vdm_01/filesystem01/data01" rw=admins:powerusers:produsers:qausers root=storageadmins access=admins:powerusers:produsers:qausers:storageadmins
export "/root_vdm_01/filesystem01/data02" rw=admins:powerusers:produsers:qausers root=storageadmins access=admins:powerusers:produsers:qausers:storageadmins

The output above shows me that the file system named "filesystem01" has one CIFS share and two NFS exports.  That's a good start, but I want to get a count of the number of shares and exports rather than a detailed list of them. I can accomplish that by adding 'wc' [word count] to the command:

[nasadmin@VNX1 ~]$ server_export ALL -Protocol cifs -list | grep filesystem01 | wc
 1 223 450

[nasadmin@VNX1 ~]$ server_export ALL -Protocol nfs -list | grep filesystem01 | wc
 2 15 135

That's closer to what I want.  The output includes three numbers, and the first number is the line count.  I really only want that number, so I'll just grab it with awk. Ultimately I want the output to go to a single file, with each line containing the name of the file system, the number of CIFS shares, and the number of NFS exports.  This line of code will give me what I want:

[nasadmin@VNX1 ~]$ echo -n "filesystem01",`server_export ALL -Protocol cifs -list | grep filesystem01 | wc | awk '{print $1}'`, `server_export ALL -Protocol nfs -list | grep filesystem01 | wc | awk '{print $1}'` >> multiprotocol.txt ; echo " " >> multiprotocol.txt

The output is below.  It's perfect, as it's in the format of a comma-delimited file and can be easily imported into Microsoft Excel for reporting purposes.

filesystem01, 1, 2

Here’s a more detailed explanation of the command:

echo -n "filesystem01", : Echo will write the name of the file system to the screen, or to a file if you've redirected it with ">" at the end of the command.  Adding the "-n" suppresses the newline that is automatically added after the text is output, as I want each file system and its share & export count to be on the same line in the report.

`server_export ALL -Protocol cifs -list | grep filesystem01 | wc | awk '{print $1}'`, : The server_export command lists all of the CIFS shares for the file system that you're grepping for.  The wc command is for the "word count"; we're using it to count the number of output lines to verify how many shares exist for the specified file system.  The awk '{print $1}' command outputs only the first whitespace-separated field.  If the output is "1 23 34 32 43 1", running '{print $1}' will only output the 1.

`server_export ALL -Protocol nfs -list | grep filesystem01 | wc | awk '{print $1}'` >> multiprotocol.txt : This is the same command as above, but we're now counting the number of NFS exports rather than CIFS shares.

; echo " " >> multiprotocol.txt :  After the count is complete and the data has been output, I want to run an echo command without the "-n" option to force a line break, in preparation for the next line of the script.  When redirecting output, ">" will write the results to a file and overwrite the file if it already exists, while ">>" will append the results to the file if it already exists.  In this case we want to append each line.  In an actual script you'd want to create a blank file first with "echo > filename.ext". Also, the ";" prior to the command instructs the interpreter to start a new command regardless of the success or failure of the prior command.

At this point, all that needs to be done is to create a script that includes the line above for every file system on the VNX. I copied the line of code above into Excel across multiple columns, allowing me to copy and paste the file system list from Unisphere and then concatenate the results into a single script file (an alternative loop-based approach is sketched after the screenshot below). I'm including a screenshot of one of my script lines from Excel as an example.  The final column (AG) has the following formula:

=CONCATENATE(A4,B4,C4,D4,E4,F4,G4,H4,I4,J4,K4,L4,M4,N4,O4,P4,Q4,R4,S4,T4,U4,V4,W4,X4,Y4,Z4,AA4,AB4,AC4,AD4,AE4)

Spreadsheet example:

[Screenshot: multiprotocol share & export count spreadsheet]
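
Alternatively, if you keep the file system names in a plain text file (one per line), a small loop can build the same report without the Excel step.  This is just a sketch, and it assumes the list lives in a file named filesystems.txt:

#!/bin/bash
# Count CIFS shares and NFS exports for each file system (sketch).
> multiprotocol.txt
while read -r FS; do
  CIFS=$(server_export ALL -Protocol cifs -list | grep "$FS" | wc -l)
  NFS=$(server_export ALL -Protocol nfs -list | grep "$FS" | wc -l)
  echo "$FS,$CIFS,$NFS" >> multiprotocol.txt
done < filesystems.txt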


Defining Software-Defined Storage: Benefits, Strategy, Use Cases, and Products

This blog entry was made out of personal interest.  I was curious about the current state of software defined storage in the industry and decided to get myself up to speed.  I’ve done some research and reading on SDS off and on over the course of the last week and this is a summary of what I’ve learned from various sources around the internet.

What is SDS?

First things first: what is "Software-Defined Storage?"  The term is used very broadly to describe many different products with various features and capabilities. It seems to me to be an overused and not very well defined term, but it is the preferred label for the trend toward data storage becoming independent of the underlying hardware. In general, SDS describes data storage software that provides policy-based provisioning and management of data storage independent of the underlying hardware. The term itself is open to interpretation among industry experts and vendors, but it usually encompasses software abstraction from hardware, policy-based provisioning and data management, and a hardware-agnostic implementation.

How the industry defines SDS

Because of the ambiguity surrounding the definition, I looked up how multiple respected sources in the industry define the term. I first looked at IDC and Gartner. IDC defines software-defined storage as solutions that deploy controller software (the storage software platform) that is decoupled from the underlying hardware, runs on industry-standard hardware, and delivers a complete set of enterprise storage services. Gartner defines SDS in two separate parts, Infrastructure and Management:

  • Infrastructure SDS (what most of us are familiar with) utilizes commodity hardware such as x86 servers, JBOD, JBOF or other and offers features through software orchestration. It creates and provides data center services to replace or augment traditional storage arrays.
  • Management SDS controls hardware but also controls legacy storage products to integrate them into a SDS environment. It interacts with existing storage systems to deliver greater agility of storage services.

The general characteristics of SDS

So, based on what I just discussed, what follows is my summary and explanation of the general defining characteristics of software defined storage. These key characteristics are common among all vendor offerings.

  • Hardware and Software Abstraction. SDS always includes abstraction of logical storage services and capabilities from the underlying physical storage systems.
  • Storage Virtualization. External controller-based arrays include storage virtualization to manage usage and access across the drives within their own pools; other products exist independently to manage across arrays and/or directly attached server storage.
  • Automation and Orchestration. SDS includes automation with policy-driven storage provisioning, and service-level agreements (SLAs) generally replace the precise details of the actual hardware. This requires management interfaces that span traditional storage-array products.
  • Centralized Management. SDS includes management capabilities with a centralized point of management.
  • Enterprise storage features. SDS includes support for all the features desired in an enterprise storage offering, such as compression and deduplication, replication, snapshots, data tiering, and thin provisioning.

Choosing a strategy for SDS

There are a host of considerations when developing a software defined storage strategy. Below is a list of some of the important items to consider during the process.

  • Cloud Integration. It’s important to ensure any SDS implementation will integrate with the cloud, even if you’re not currently using cloud services in your environment. The storage industry is moving heavily to cloud workloads and you need to be ready to accommodate business demands in that area. In addition, Amazon’s S3 interface has become the default protocol for cloud communication, so choose an SDS solution that supports S3 for seamless integration.
  • Storage Management Analysis. A deep understanding of how SDS is managed alongside all your legacy storage will be needed. You’ll need a clear understanding of the capacity and performance being used in your environment. Determine where you might need more performance and where you might need more capacity. It’s common in the industry now to not have a deep understanding of how your storage impacts the business, to lack a service catalog portfolio, and have limited resources managing your critical storage. If your organization is on top of those common issues, you’re well ahead of the game.
  • Research your options well. SDS really marks the end of large isolated storage environments. It allows organizations to move away from silos and customize solutions to their specific business needs. SDS allows organizations to build a hybrid of pretty much anything.   Taking advantage of high density NL-SAS disks right next to the latest high performance all-flash array is easily done, and the environment can be tuned to specific needs and use cases.
  • Pay attention to Vendor Support. There are also concerns about support. A software vendor will of course support its own software-defined storage product, but will they offer support when there is a conflict between a heterogeneous hardware environment and their software? Organizations should plan and architect the environment very carefully. All competent software vendors will offer a support matrix for hardware, but only so much can be done if there is a bug in the underlying hardware.
  • Performance Impact analysis. Just like any traditional storage implementation, predictability of performance is an important item to consider when implementing an SDS architecture. A workload analysis and a working knowledge of your precise performance requirements will go a long way toward a successful implementation. Many organizations run SDS on general-purpose, server-class servers and not the purpose-built systems designed solely for storage. Performance predictability can be especially concerning when SDS is implemented into a hyper-converged environment, as the hosts must run the SDS software while also running business applications.
  • Implementation Timeframe. SDS technology can make initial implementation more time consuming and difficult, especially if you choose a software only solution. The flexibility SDS offers provides a storage architect with many more design options, which of course translates into a much more extensive hardware selection process.   Organizations must carefully evaluate the various SDS components and the total amount of time it will consume to select the appropriate storage, networking, and server hardware for the project.
  • Overall Cost and ROI. I’m sure you’ll hear this from your vendor – they will promise that SDS will decrease both acquisition and operational costs while simultaneously increasing storage infrastructure flexibility. Your results may vary, and be aware that the software based products more closely resemble the original intention of this technology and are the best suited to provide those promised benefits. A software based SDS architecture will likely involve a more complex initial implementation with higher costs. While bundled products may offer a better implementation experience, they may limit flexibility.   Determining if software solutions and bundled hardware solutions are a better fit largely depends on whether your IT team has the time and skills required to research and identify the required hardware components on their own. If so, a software-only product can provide for significant savings and provide maximum flexibility.
  • Avoid Forklift Upgrades. One of the original purposes of SDS was to be hardware agnostic so there should be no reason to remove and replace all of your existing hardware. Organizations should research solutions that enable you to protect your existing investment in hardware as opposed to requiring a forklift upgrade of all of your hardware. A new SDS implementation should complement your environment and protect your investment in existing servers, storage, networking, management tools and employee skill sets.
  • Expansion and Upgrade capability. Before you buy new hardware to expand your environment, confirm that the additional hardware can seamlessly integrate with your existing cloud or datacenter environment. Organizations should look for products that allow easy and non-disruptive hardware and software expansions & upgrades, without the need for additional time consuming customization.
  • Storage architecture. The fundamental design of the hardware can expose both efficiencies and deficiencies in the solution stack. Everything should be scrutinized from the tiniest details. Pay particular attention to features that affect storage overhead (deduplication, compression, etc).
  • Test your application workloads. Often overlooked is the fact that a storage infrastructure exists entirely to facilitate data access by applications. It’s a common mistake to downplay the importance of an application workload analysis. Consider a proof of concept or extensive testing with a value added reseller with your own data if possible; it’s the only way to ensure it will meet your expectations when it’s placed into a production environment.  If it’s possible, test SDS software solutions with your existing storage infrastructure before a purchase is made as it will help reveal just how hardware independent the SDS software actually is.

Potential use cases and justification for SDS 

SDS solutions will continue to have a significant impact on the traditional storage market moving into the future. IDC research suggests that traditional stand-alone hybrid systems are expected to start declining while new all-flash, hyper-converged and software-defined storage adoption will grow at a much faster rate.  So, on to some potential use cases:

  • Non-disruptive data migrations.  This is where appliance and storage controller based virtual solutions have already been used quite successfully for many years.  I have experience installing and managing the VPLEX storage virtualization device in an existing storage infrastructure, and it was used extensively for non-disruptive data migrations in the environment that I supported. By inserting an appliance or storage controller based SDS solution into an existing storage network between the server and backend storage, it’s then easy to virtualize the storage volumes on both existing and new storage arrays and then migrate data seamlessly and non-disruptively from old arrays to new ones. Weekend outages were turned into much shorter non-disruptive upgrades that the application was completely unaware of.  Great stuff.
  • Better managing deployments of archival/utility storage.  Organizations in general seem to have a growing need for deployments of large amounts of archive storage in their environments (low cost, high density disk). It’s not uncommon to have vast amounts of data with an undefined business value, but it’s sufficiently valuable that it cannot easily or justifiably be deleted. In cases like that, storage arrays that are reliable, stable, and economical, perform moderately well, and remain easy to manage and scale are a good fit for an SDS solution. The storage that this data resides on would need very few extra enterprise features like auto-tiering, VM integration, deduplication, etc. Cheap and deep storage will work here, and SDS solutions work in these environments. Whether the SDS software resides on a storage controller or on an appliance, more storage capacity can be quickly and easily added to these environments and then easily managed and scaled. Many of the interoperability and performance issues that have hurt SDS deployments in the past don’t make much difference in a situation where it’s simply archive data.
  • Managing heterogeneous storage environments. One of the big issues with appliance and storage controller-based SDS solutions at the beginning was that they attempted to do it all by virtualizing storage arrays from every vendor under the sun and failed to create a single pane of glass to manage all of the storage capacity while providing a common, standardized set of storage management features.  That feature is now a game changer in complex environments and is offered by most vendors. Implementing SDS can dramatically reduce administrative time and allow your top staff to focus on more important business needs.

The benefits of SDS

What follows is a summary of some of the key benefits of implementing SDS. This list is what you’re most likely to hear from your friendly local salesperson and in the marketing materials from each vendor.

  • Non-disruptive hardware expansion. SDS solutions can enable storage capacity expansion without disruption.   New arrays can be added to the environment and data can be migrated completely non-disruptively.
  • Cloud Automation. SDS provides an optimal storage platform for next generation infrastructure of on-prem & private data centers that offers public cloud scale economics, universal access and self-service automation to private clouds.
  • Economics. SDS has potential to significantly reduce operational and management expenses using policy based automation, ease of deployment, programmable flexibility, and centralized management while providing hardware independence and using off the shelf industry-standard components to lower storage system costs. Some vendor offerings will allow the user to leverage existing hardware.
  • Increased ROI. SDS allows policy-driven data center automation that provides the ability to provision storage resources immediately based on VM workload demand. This capability of SDS will encourage organizations to deploy SDS offerings to improve their opex and capex, providing a quick return on investment (ROI).
  • Real-time scalability. SDS offers tiered capacity by service level and the ability to provision storage on demand, which enables optimal capacity based on current business requirements. It also provides detailed metrics for reporting of storage infrastructure usage.
  • High Availability. SDS architectures can provide for improved business continuity. In the event of a hardware failure, an SDS environment can shift load and data automatically to another available node.  Because the storage infrastructure sits above the physical hardware, any hardware can be used to replace a failed node. Older systems could even be recycled to improve disaster recovery provisions in SDS, further improving your ROI.


The trends for SDS in 2017 and beyond

The SAN guy is not a fortune teller, but these predictions are all creating a buzz in the industry and you’re likely to see them start to materialize in 2017.

  • SDS catches up to traditional storage. SDS is finally catching up with traditional storage. Now that enterprise-class storage features like inline deduplication, compression and QoS have been introduced across the market leaders in SDS solutions, it’s finally becoming a more mainstream solution. The rapidly declining cost of EFD along with the performance and reliability of SDS are really making it well suited for the virtual workloads of many organizations.
  • Multiple Cloud implementations. Analysts are predicting that SDS will introduce a new multi-cloud era in 2017 by leveraging the power of a software-defined infrastructure that is not tied to a specific hardware platform or configuration. SDS users will finally have a defined cloud strategy that is evolutionary to what they are doing today. As a result, IT has to be prepared to support new application models designed to bring the simplicity and agility of cloud to on-premises infrastructure. At the same time, new software-defined infrastructure enables a flexible multi-cloud architecture that extends a common and consistent operating environment from on-prem to off-prem, including public clouds.
  • Management integration improves. Integration will continue to improve. The continued integration of management into hypervisor tools, computational platforms, hyper-converged systems, and next-generation service based infrastructures will continue to enhance SDS capabilities.
  • Storage leaves the island.   Traditional storage implementations typically have many different islands of storage in independent silos. It’s been difficult to break that mold based on business requirements and the hardware and software available to provide the necessary multi-tenancy and still meet those requirements. SDS will begin to allow organizations to consolidate those islands of storage and break the artificial barriers.
  • Increased Hybrid SDS deployments. The use of SDS will continue to move toward hybrid implementations. Organizational requirements will drive the change. It’s no secret that more workloads are moving toward the cloud, and SDS will help break down that boundary. SDS will also start to blur the lines between data that is in the cloud and data that is locally stored and help make data mobility more seamless, improving the fluidity while taking into account regulatory requirements, cost, and performance.
  • The Software-Defined Data Center starts to materialize. The ultimate goal for SDS is the software defined data center. Implementing a Hyper-converged infrastructure (HCI) is important to reach that goal, but in order to achieve it HCI must deliver consistent and predictable performance to all elements of data center management, not just storage. SDS and HCI are the stepping stones for that goal.

Software Defined Storage Vendors

Now that we have an idea of what SDS is and what it can be used for, let’s take a look at the vendors that offer SDS solutions. I put together a vendor list below along with a brief description of the product that is based mostly on the company’s marketing materials.

SwiftStack

SwiftStack’s design goal is to make it easy to deploy, operate, and scale, as well as to provide the fastest experience when deploying and managing a private cloud storage system. Another key design element is to enable large-scale growth without any disruption to performance. It has no fixed hardware configurations and can be configured using any server hardware. It is also licensed for the amount of data capacity utilized, not the total amount of hardware capacity deployed, allowing organizations to pay-as-they-grow using annual licenses.

SwiftStack offers a reliable, massively scalable, software defined storage platform. It seamlessly integrates with existing IT infrastructures, running on standard hardware, and replicated across globally distributed data centers.

HPE StoreVirtual VSA

HPE StoreVirtual VSA is storage software that runs on commodity hardware in a virtual machine in any virtualized server environment, including VMware, Hyper-V, and KVM. It turns any media presented to it via the hypervisor into shared storage. It presents the storage to all physical and virtual hosts in the environment as an iSCSI array. Additionally, StoreVirtual VSA is part of an integrated family of solutions that all share the same storage operating system, including StoreVirtual arrays and HPE’s hyper-converged systems. It has a full enterprise storage feature set that provides the capabilities and performance you would expect from a traditional storage area network. It provides low cost data protection that delivers fast, efficient, and scalable backup and does not require dedicated hardware.

HPE StoreOnce VSA

StoreOnce VSA is a SDS solution that provides backup and recovery for virtualized environments. It enables organizations to reduce the cost of secondary storage by eliminating the need for a dedicated backup appliance. It shares the same deduplication algorithm and storage features as the StoreOnce Disk Backup family, including the ability to replicate bi-directionally from a physical backup appliance to SDS.

Metalogix StoragePoint

StoragePoint is a SharePoint storage optimization solution that offloads unstructured SharePoint content data, which is known as Binary Large Objects (BLOBs), from SharePoint’s underlying SQL database to alternate tiers of storage. BLOBs quickly overwhelm the SQL database that powers SharePoint, resulting in poor performance that is expensive to maintain and grow. Many rich media formats are too large to store in SQL Server due to technical limitations, resulting in a collaboration platform that cannot address all the content needs of an organization.

StoragePoint optimizes SharePoint Storage using Remote Blob Storage (RBS). It provides a method to address file content storage issues related to large file size, slow user query times and backup failures. It externalizes SharePoint content so it can be stored and managed anywhere. An automated rules engine places content in the most appropriate storage locations based on the type, criticality, age and frequency of use.

VMware vSAN

Previously known as VMware Virtual SAN, vSAN addresses hyper-converged infrastructure systems. It aggregates locally attached disks in a vSphere cluster to create storage that can be provisioned and managed from vCenter and vSphere Web Client tools. This enables organizations to evolve their existing virtualization environment with the only natively integrated vSphere solution and leverages multiple server hardware platforms. It reduces TCO due to the cost savings of utilizing server side storage, with more affordable flash storage, on demand scaling, and simplified storage management. It can also be expanded into a complete SDS solution that can provide the foundation for a cloud architecture.

Using the VMware SDS model, the data level that’s responsible for storing data and implementing data services such as replication and snapshots is virtualized by abstracting physical hardware resources, and aggregating them into logical pools of capacity (called virtual datastores) that can be used and managed with a high degree of flexibility. By making the virtual disk the basic unit of management for storage operations in the virtual datastores, precise combinations of hardware resources and storage services can be configured and controlled independently for each virtual machine.

Microsoft S2D

Microsoft Storage Spaces Direct (or S2D) is a part of Windows Server 2016. It can be combined with Storage Replica (SR) along with resilient file system cache tiering to create scale-out, converged and hyper-converged infrastructure SDS for Windows Servers and Hyper-V environments. It has the capability to use existing tools and has many flexible configuration and deployment options.

Infinidat InfiniBox

InfiniBox is based upon a fully abstracted set of software-driven storage functions layered on top of industry-standard hardware, and delivers a fast, highly available, and easy-to-deploy storage system. Extreme reliability and performance are delivered through their innovative self-healing architecture, high-performance double-parity RAID, and comprehensive end-to-end data verification capability. They also feature an efficient data distribution architecture that uses all of the installed drives all the time. It has a large flash cache that delivers ultra-high performance that can match or exceed 12GB/s throughput (yes, it’s a marketing number).

Pivot3

Pivot3’s virtual storage and compute operating environment, known as vSTAC, is designed to maximize overall resource utilization, providing efficient fault tolerance and giving IT the flexibility to deploy on a wide range of commodity x86 hardware. A distributed scale-out architecture pools compute and storage from each HCI node into high-availability clusters, accessible by every VM and application. Its Scalar Erasure Coding is said to be more efficient than network RAID or replication protection schemes, and it maintains performance during degraded mode conditions. Pivot3 owns multiple SDS patents, one covering their technology that creates a cross-node virtual SAN that can be accessed as a unified storage target by any application running on the cluster. By converging compute, storage and VM management, they automate system management with self-optimizing, self-healing and self-monitoring features. Their vCenter plugin provides a single pane of glass to simplify management of single and multi-site deployments.

EMC ViPR Controller

EMC ViPR Controller provides Software Defined Storage automation that centralizes and transforms multivendor storage into a simple and extensible platform. It also performs infrastructure provisioning on VCE Vblock Systems. It abstracts and pools resources to deliver automated, policy-driven storage-as-a-service on demand through a self-service catalog across a multi-vendor environment. It integrates with cloud stacks like VMware, OpenStack, and Microsoft, and offers RESTful APIs for integrating with other management systems.

EMC ECS (Elastic Cloud Storage)

ECS provides a complete software-defined cloud storage platform for commodity infrastructure. Deployed as a software-only solution or as a turnkey appliance, ECS offers all the cost savings of commodity infrastructure with enterprise reliability, availability, and serviceability. EMC launched it as its next-generation hyperscale object-based storage solution; it was originally designed to overcome the limitations of Centera. It is used to store, archive, and access unstructured content at scale. It’s designed to allow businesses to deploy massively scalable storage in a private or public cloud, and allows customizable metadata for data placement, protection, and lifecycle policies. Data protection is provided by a hybrid encoding approach that utilizes local and distributed erasure coding for site-level and geographic protection.

EMC ScaleIO

ScaleIO is a software-only, server-based storage area network that combines storage and compute resources to form a single layer. It uses existing local disks and LANs so that the host can realize a virtual SAN with all the benefits of external storage. It provides virtual and bare metal environments with scale, elasticity, multi-tenant capabilities, and service quality that enables Service Providers to build high performance, low cost cloud offerings. It enables full data protection and persistence. The software ensures enterprise-grade resilience through meshed mirroring of randomly sliced and distributed data chunks across multiple servers.

IBM Spectrum Storage

IBM Spectrum Storage software is part of a comprehensive family of software-defined storage solutions. It is specifically structured to meet changing storage needs, including hybrid cloud, and is designed for organizations just starting out with software-defined storage as well as those with established infrastructures that need to expand their capabilities.

NetApp StorageGrid

NetApp’s SDS offerings include the NetApp clustered Data ONTAP OS, NetApp OnCommand, the NetApp FAS series, and NetApp FlexArray virtualization software. Features of NetApp’s SDS include virtualized storage services that provide data storage and access based on service levels, multiple hardware options that support deployment on a variety of enterprise platforms, and application self-service that delivers APIs for workflow automation and custom applications.

DataCore

DataCore’s storage virtualization software allows organizations to seamlessly manage and scale their data storage architectures, delivering massive performance gains at a much lower cost than solutions offered by legacy storage hardware vendors. DataCore has a large customer base around the globe. Their adaptive, self-learning and self-healing technology eases management, and their solution is completely hardware agnostic.

Nexenta

Nexenta integrates software-only “Open Source” collaboration with commodity hardware. Their software is installed in thousands of companies around the world serving a wide variety of workloads and business-critical situations. It powers some of the world’s largest cloud deployments. With their complete Software-Defined Storage portfolio and recent updates to NexentaConnect for VMware VSAN and the launch of NexentaEdge, they offer a robust SDS solution.

Hitachi Data Systems G-Series

Hitachi Virtual Storage Platform G1000 provides the always-available, agile, and automated foundation needed for an on-prem or hybrid cloud infrastructure. Their software enables IT agility and a low TCO. They deliver a top-notch combination of enterprise-ready software-defined storage and global storage virtualization, along with efficient, scalable, and high-performance hardware. It also supports self-managing, policy-driven management. Their SDS implementation includes the Hitachi Virtual Storage Platform G1000 (VSP G1000) and the Hitachi Storage Virtualization Operating System (SVOS).

StoneFly SCVM

The StoneFly Storage Concentrator Virtual Machine (SCVM) Software-Defined Unified Storage (SDUS) is a Virtual IP Storage Software Appliance that creates a virtual network storage appliance using the existing resources of an organization’s virtual server infrastructure. It is a virtual SAN storage platform for VMware vSphere ESX/ESXi, VMware vCloud and Microsoft Hyper-V environments and provides an advanced, fully featured iSCSI, Fibre Channel SAN and NAS within a virtual machine to form a Virtual Storage Appliance.

Nutanix

Nutanix’s software-driven Xtreme Computing Platform natively converges compute, virtualization, and storage into a single solution. It offers predictable performance, linear scalability, and cloud-like infrastructure consumption.

PernixData

PernixData FVP software is a 100% software solution that clusters server flash and RAM to create a low-latency I/O acceleration tier for any shared storage environment.

StorPool

StorPool is a storage software solution that runs on standard commodity servers and builds a scalable, high-performance SDS system. It offers great flexibility and can be deployed either converged or on separate storage nodes. It has an advanced, fully distributed architecture and is one of the fastest and most efficient cloud-ready block-storage software solutions available.

Hedvig

Hedvig collapses traditional tiers of storage into a single, software platform designed for primary and secondary data. Their patented “Universal Data Plane” architecture stores, protects, and replicates data across multiple private and public clouds. The Hedvig Distributed Storage Platform is a single software-defined storage solution that is designed to meet the needs of primary, secondary, and cloud data requirements. It is a distributed system that provides cloud-like elasticity, simplicity, and flexibility.

Amax StorMax SDS

StorMax SDS is a highly available software-defined storage solution that delivers unified file and block storage services with enterprise-grade data management, data integrity, and performance that can scale from tens of terabytes to petabytes. It is seamlessly integrated with NexentaStor, and the plug-and-play appliances are designed to be a simple swap-in replacement for legacy block and file storage appliances, offering unlimited file system sizes, unlimited snapshots and clones, and inline data reduction for additional storage cost savings. It’s well suited for VMware, OpenStack, or CloudStack backend storage, generic NAS file services, home directory storage, near-line archive, and large backup & archive repositories.

Atlantis USX

USX is Atlantis’ SDS software solution. It includes policy-based management of storage resources, storage pooling and automation of storage functions. It also provides a REST API to allow organizations to automate storage functions. It promises to deliver the performance of an all-flash storage array at a lower cost than that of traditional SAN or NAS. The marketing materials state that you can pool any SAN, NAS or DAS storage and accelerate its performance by up to 10x, while at the same time consolidating storage to increase storage capacity by up to 10x.

LizardFS

The LizardFS SDS solution is a distributed, scalable, fault-tolerant, and highly available file system that runs on commodity hardware. It allows users to combine disk space located on many servers into a single namespace that is visible on Unix and Windows. LizardFS ensures file safety by keeping all data in multiple replicas spread over the available servers. Disk and server failures are handled transparently, without any downtime or loss of data. As your storage requirements grow it scales by adding new servers, again without any downtime. The system automatically distributes data to the newly added servers as it continuously balances disk usage across all connected nodes. Removing a server is as easy as adding one.

That’s a large portion of the SDS vendor playing field, but there are others. You can also check out the offerings from Maxta, Tarmin, Coraid, Cohesity, Scality, Starwind, and Red Hat Storage Server (Ceph).

I worked on this blog post on and off with some long pauses in between, so I may make some editorial changes and additions in the coming weeks.  Feedback is welcomed and appreciated.

What’s the difference between a Storage Engineer and a Storage Administrator?

During my recent job search a recruiter asked me if there was a difference between a Storage Administrator and a Storage Engineer. He had no idea. I was initially a bit surprised at the question, as I’ve always assumed that it was widely accepted that an engineer is more involved in the architecture of systems whereas an administrator is responsible for managing them.  While his question was about Storage, it applies to many different disciplines in the IT industry as both the Administrator and Engineer titles are routinely appended to “System”, “Network”, “Database”, etc.  Many companies use the terms completely interchangeably and many storage professionals perform both roles. In my experience HR Departments generally label all technical IT employees as “Analysts”, no matter which discipline you specialize in.

From my own personal perspective, I present the following definitions:

Storage Engineer: A person who uses a disciplined, methodical approach to the design, realization, technical management, operation, and life-cycle management of a storage environment.

Storage Administrator: A person who is responsible for the daily upkeep, technical configuration, support, and reliable operation of a storage environment.

To all of my recruiter friends and associates, please think of the System Engineer as the person who is responsible for laying the foundation and ensuring that it is implemented properly.  Afterwards, the Administrator is responsible for carrying out the daily routines and supporting the vision of the engineer.

Does one title outrank the other? No. In my opinion they’d be equal. As I mentioned before, HR departments generally don’t distinguish between the two, both are usually in the same pay grade, and the overlap of responsibilities is such that many people perform the duties of both regardless of which title they hold. In my experience performing both roles at multiple companies, a Storage Engineer at any given company is given a problem, and in a nutshell their job is to find the best solution for it. What is the normal process for finding the best solution?  The Engineer researches and develops the best possible combination of network, compute, and storage resources along with all the required software features and functionality, after investigating a multitude of different vendors and technologies.  Storage industry trends and new technologies are usually researched as well. Following that research, they determine the best fit based on cost, the specific business use case, expansion and scalability, and performance testing in a lab or onsite with a proof of concept, all while taking into account the ease of administration and supportability of the hardware and software from both a vendor and internal admin standpoint. A Storage Administrator is generally heavily involved in this decision-making process, as they will be responsible for tuning the environment to optimize performance and reliability for the customer, so their opinion during the research phase is crucial.  Administrator feedback based on job experience is critical in the research and testing phase across the board; it’s simply not something that’s taught in a book or degree program.
With the considerable overlap between the two jobs at most companies, it’s not surprising that the titles are used so interchangeably and that there is general market confusion about the difference. A company isn’t going to hire a group of storage administrators to simply sit at a desk and monitor a group of storage arrays; they will be required to understand the process of building a complex storage environment and how it fits into the specific business environment. Engineering and administering a petabyte-scale global storage environment is very complex no matter which title you’re given. A seasoned Storage Administrator or Storage Engineer should be up to the task, regardless of how you define their roles. At the end of the day, I’m proud to be a SeniorSANStorageAnalystAdminEngineerSpecialist professional. 🙂

Did I get any of this wrong?  Share your feedback with me in the comments section.

Generating and installing SSL requests, keys, and certificates on EMC ECS


In this post I’ve outlined the procedure for generating SSL requests, keys, and certificates for ECS, as well as the process for uploading them to ECS and verifying the installed certificates afterwards.   This was a new process for me, so I created very detailed documentation on the process I used; hopefully it will help someone else out.

I mention using the ECS CLI a few times in this document.  If you’d like to use the ECS CLI, I have another blog post here that reviews the details of its implementation.  It requires Python.

Part 1: Generating SSL requests, Keys, and Certificates.

The procedure for generating SSL requests, keys, and certificates is unnecessary if you will be given the certificate and key files from a trusted source within your organization.  If you’ve already been provided the certificate and key file, you can skip to Part 2, which details how to upload and import the keys and certificates to ECS.  This is of course a sample procedure showing how I did it in my organization; specific details may have to be altered depending on the use case.

a.       Prepare for Creating (or editing) the SSL Request file

  • The first step in this process is to generate an SSL request file.  As OpenSSL does not allow you to pass Subject Alternative Names (SANs) through the command line, they must be added to a configuration file first.
  • On ECS, the OpenSSL configuration file is located at /etc/ssl/openssl.cnf by default.  Copy that file to a temporary directory where you will be generating your certificates.
  • Run this command to copy the request file for editing:
admin@ecs-node1:~# cp /etc/ssl/openssl.cnf /tmp/request.conf

b.      Make changes to the request.conf file.  Edit it with vi and make the edits outlined below.  Each bullet reviews a specific section of the file where changes are required, and a consolidated excerpt of the edited sections follows this list.

  • [ alternate_names ] Edit the [ alternate_names ] section.  In a typical request file these are included at the very end of the configuration file.  Note that this request example includes the wildcard as the first entry (which is required by S3).

Sample:

DNS.1 = *.prod.os.example.com
DNS.2 = atmos.example.com
DNS.3 = swift.example.com
  • [ v3_ca ]  Edit the [ v3_ca ] section.

This line should be added directly below the [ v3_ca ] header:

subjectAltName = @alternate_names

Search for “basicConstraints” in the [ v3_ca ] section.  You may see “basicConstraints = CA:true”.  Make sure it is commented out – add the # to the beginning of the line.

# basicConstraints = CA:true

Search for “keyUsage = cRLSign, keyCertSign” in the [ v3_ca ] section.  You may see “# keyUsage = cRLSign, keyCertSign”.  Make sure it is commented out.

# keyUsage = cRLSign, keyCertSign
  • [ v3_req ] Verify the configuration in the [ v3_req ] section.  The line below must exist.
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
  • [ usr_cert ] Verify the configuration in the [ usr_cert ] section.

Search for the entry below and uncomment it (add it if it does not exist):

extendedKeyUsage = serverAuth

The following line is likely to already exist.  The authorityKeyIdentifier line exists in multiple locations in the config file; in the [ v3_ca ] section it must have “always,issuer” as its option.

# authorityKeyIdentifier=keyid:always,issuer
  • [ req ] Verify the configuration in the [ req ] section.

For our dev environment, in the testing phase with a self-signed certificate, the following entry was made six lines below the [ req ] header:

x509_extensions = v3_ca         # The extensions to add to the self-signed cert

The x509_extensions line also exists in the [ CA_default ] section.  This was left untouched in my configuration.

x509_extensions = usr_cert      # The extensions to add to the cert

Change based on certificate type.  Note that this will change if you’re not using a self-signed certificate, which I did not test.  The req_extensions line exists in the default configuration file and is commented out.

x509_extensions = v3_ca           #  for a self-signed cert
req_extensions = v3_ca              # for cert signing req

Change the default_bits entry.

Search for default_bits = 1024 and change it to default_bits = 2048
  • [ CA_default ]  In the CA_default section, uncomment or add the line below.  The line exists in the default configuration file and simply needs to be uncommented.
copy_extensions = copy

The following additional changes were made in my configuration:

Search for dir = ./demoCA, change to dir = /etc/pki/CA
Search for default_md = default, change to default_md = sha256
  • [ req_distinguished_name ] Verify the configuration in the [ req_distinguished_name ] section.

The following changes were made in my configuration:

countryName_default = AU, change to countryName_default = XX
stateOrProvinceName_default = SomeState, change to stateOrProvinceName_default = Default Province
localityName_default doesn’t exist in the default file, added as localityName_default = Default City
0.organizationName_default = Internet Widgits Pty Ltd, change to 0.organizationName_default = Default Company
commonName = Common Name (e.g. server FQDN or YOUR name), it was changed to commonName = Common Name (eg, your name or your server\'s hostname)
  • [ tsa_config1 ] Verify the configuration in the [ tsa_config1 ] section.

The following additional change was made in my configuration:

digests = md5, sha1, change to digests = sha1, sha256, sha384, sha512
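
To make the end result easier to see, here is a condensed view of the key sections and values after the edits above.  This assumes a self-signed certificate like the one I used; your copy of openssl.cnf will contain many more entries around these, and the exact defaults may differ by version.

[ req ]
default_bits = 2048
x509_extensions = v3_ca         # The extensions to add to the self-signed cert

[ CA_default ]
dir = /etc/pki/CA
default_md = sha256
copy_extensions = copy

[ req_distinguished_name ]
countryName_default = XX
stateOrProvinceName_default = Default Province
localityName_default = Default City
0.organizationName_default = Default Company

[ usr_cert ]
extendedKeyUsage = serverAuth

[ v3_req ]
keyUsage = nonRepudiation, digitalSignature, keyEncipherment

[ v3_ca ]
subjectAltName = @alternate_names
# basicConstraints = CA:true
# keyUsage = cRLSign, keyCertSign

[ tsa_config1 ]
digests = sha1, sha256, sha384, sha512

[ alternate_names ]
DNS.1 = *.prod.os.example.com
DNS.2 = atmos.example.com
DNS.3 = swift.example.com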

c.       Generate the Private Key.  Save the key file in a secure location; the security of your certificate depends on the private key.  (An optional key check is sketched at the end of this section.)

  • Run this command to generate the private key:
admin@ecs-node1:~# openssl genrsa -des3 -out server.key 2048
Generating RSA private key, 2048 bit long modulus
............................................................+++
Enter pass phrase for server.key: <enter a password>
Verifying - Enter pass phrase for server.key: <enter a password>
  • Modify the permissions of the server key:
admin@ecs-node1:~# chmod 0400 server.key
  • Now that the private key is generated, you can either create a certificate request (the .csr file) to request a certificate from a CA or generate a self-signed certificate.  In the samples below, I’m setting the Common Name (CN) on the certificate to *.os.example.com.
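
As an optional sanity check before moving on (this is a standard OpenSSL command, not something unique to the ECS procedure), you can confirm the key file is intact.  It will prompt for the passphrase and should report that the key is ok:

admin@ecs-node1:~# openssl rsa -in server.key -check -noout
Enter pass phrase for server.key: <your passphrase from above>
RSA key ok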

d.      Generate the Certificate Request.  Next we will look at the steps used to generate a certificate request.

  • Run the command below to generate the request.
admin@ecs-node1:~# openssl req -new -key server.key -config request.conf -out server.csr
Enter pass phrase for server.key: <your passphrase from above>
  • Running the command above will prompt for additional information that will be incorporated into the final certificate request (the Distinguished Name, or DN). Some fields may be left blank and some will have default values; if you enter ‘.’ the field will be left blank.
Country Name (2 letter code) [US]: <Enter value>
State or Province Name (full name) [Province]: <Enter value>
Locality Name (eg, city) []: <Enter value>
Organization Name (eg, company) [Default Company Ltd]: <Enter value>
Organizational Unit Name (eg, section) []: <Enter value>
Common Name (e.g. server FQDN or YOUR name) []: <*.os.example.com>
Email Address []: <admin email>
  • Enter the following extra attributes to be sent with the certificate request:
A challenge password []: <optional>
An optional company name []: <optional>
  • Check request contents.  Use OpenSSL to verify the contents of the request and verify that the SANs are set correctly.
admin@ecs-node1:~# openssl req -in server.csr -text -noout
Certificate Request:
Data:
Version: 0 (0x0)
Subject: C=US, ST=North Dakota, L=Fargo, O=EMC, OU=ASD,
CN=*.os.example.com/emailAddress=admin@example.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
a7:5a:dc:ca:ff:73:53:6b:ab:a7:ff:7a:20:c1:ff:
   … <removed a portion of the output for this example> ..
ff:9e:66:ff:43:0a:fd:31:3d:69:b1:03:20:51:ff:
Exponent: 65537 (0x10001)
Requested Extensions:
X509v3 Subject Alternative Name:
DNS:os.example.com, DNS:atmos.example.com, DNS:swift.example.com
X509v3 Basic Constraints:
CA:FALSE
X509v3 Key Usage:
Digital Signature, Non Repudiation, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
Signature Algorithm: sha256WithRSAEncryption
ff:7a:f3:7d:8e:8d:37:8f:66:c8:91:16:c0:00:39:df:03:c1:
… <removed a portion of the output for this example> ..
ff:d9:68:ff:be:e4:4e:e1:78:16:67:47:14:01:31:32:0e:a2:
  • Now that the certificate request is completed it may be submitted to the CA who will then return a signed certificate file.

e.      Generate a Self-Signed Certificate.  Generating a self-signed certificate is almost identical to generating the certificate request. The main difference is that instead of generating a request file, you add an -x509 argument to the openssl req command to generate a certificate file instead.

admin@ecs-node1:~#  openssl req -x509 -new -key server.key -config request.conf -out server.crt
Enter pass phrase for server.key: <your passphrase from above>
  • Running that command will prompt for additional information that will be incorporated into the certificate.  This is called a Distinguished Name (DN). Some fields may be left blank and some will have default values; if you enter ‘.’ the field will be left blank.
Country Name (2 letter code) [US]: <Enter value>
State or Province Name (full name) [Province]: <Enter value>
Locality Name (eg, city) []: <Enter value>
Organization Name (eg, company) [Default Company Ltd]: <Enter value>
Organizational Unit Name (eg, section) []: <Enter value>
Common Name (e.g. server FQDN or YOUR name) []: <*.os.example.com>
Email Address []: <admin email>
  • Enter the following extra attributes to be sent with the certificate request:
A challenge password []: <optional>
An optional company name []: <optional>
  • Check certificate contents.  Use OpenSSL to verify the contents of the certificate and verify that the SANs are set correctly.
admin@ecs-node1:~# openssl x509 -in server.crt -noout -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 9999999999990453326 (0x11fc66cf7c09d762)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=North Dakota, L=Fargo, O=EMC, OU=ASD, CN=*.os.example.com/
emailAddress=admin@example.com
Validity
Not Before: Oct 14 16:47:40 2014 GMT
Not After : Nov 13 16:47:40 2014 GMT
Subject: C=US, ST=Minnesota, L=Minneapolis, O=EMC, OU=ASD,
CN=*.os.example.com/emailAddress=admin@example.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
ff:bc:8f:83:7b:57:72:3d:70:ef:ff:d0:f9:97:ff:
   … <removed a portion of the output for this example> ..
ff:9e:66:86:43:0a:fd:ff:3d:69:b1:03:20:51:ff:
db:77
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Subject Alternative Name:
DNS:os.example.com, DNS:atmos.example.com, DNS:swift.example.com
X509v3 Basic Constraints:
CA:FALSE
X509v3 Key Usage:
Digital Signature, Non Repudiation, Key Encipherment
Signature Algorithm: sha256WithRSAEncryption
ff:bc:8f:83:7b:57:72:ff:70:ef:b9:d0:f9:97:ff:
   … <removed a portion of the output for this example> ..
ff:9e:66:ff:43:0a:fd:31:3d:69:ff:03:20:51:39:
db:77

f.        Chain File. In either a self-signed or a CA signed use case, you now have a certificate file.  In the case of a self-signed certificate, the certificate is the chain file.  If your certificate was signed by a CA, you’ll need to append the intermediate CA cert(s) to your certificate.  I used a self-signed certificate in my implementation and did not perform this step.

  • Append the CA cert if it was signed by a CA.  Do not append the root CA certificate:
admin@ecs-node1:~# cp server.crt serverCertChain.crt
admin@ecs-node1:~# cat intermediateCert.crt >> serverCertChain.crt
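
If your certificate was CA signed, you can optionally confirm that the chain hangs together before uploading it.  I didn’t perform this step myself since I used a self-signed certificate, so treat this as a generic OpenSSL sketch; rootCA.crt here is a placeholder for your root CA certificate:

admin@ecs-node1:~# openssl verify -CAfile rootCA.crt -untrusted intermediateCert.crt server.crt
server.crt: OK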

 

Part 2: Upload the Keys and Certificates.  This section outlines the process for installing the key and certificate pair on ECS.

a.       First log in to the management API to get a session token.  You will need the root password for the ECS node.

  • Run this command (change IP and password as needed): (ctrl+c to break)
admin@ecs-node1:/> curl -L --location-trusted -k https://10.10.10.10:4443/login -u "root:password" -v
  • Note that the prior command will leave the root password in your command history.  You can run it without the password and have it prompt you instead:
curl -L --location-trusted -k https://10.10.10.10:4443/login -v -u root
Enter host password for user 'root': <enter password>
  • From the output of the command above, set an environment variable to hold the token for later use.  Note that the later commands reference this variable as $TOKEN, so the name needs to match.
admin@ecs-node1:/> export TOKEN=<x-sds-auth-token value>
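
If you’d rather not copy the token out of the verbose output by hand, the export can be scripted.  This is just a convenience sketch and assumes the token is returned in the X-SDS-AUTH-TOKEN response header (the same header the commands below pass back to ECS); curl will prompt for the root password:

admin@ecs-node1:/> export TOKEN=$(curl -s -L --location-trusted -k -D - -o /dev/null -u root https://10.10.10.10:4443/login | grep -i 'x-sds-auth-token' | tail -1 | awk '{print $2}' | tr -d '\r')
admin@ecs-node1:/> echo $TOKEN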

b.      Commands used for installing a key & certificate pair for Management requests/users:

  • Use ECSCLI to run it from a client PC:
admin@ecs-node1:/> python ecscli.py vdc_keystore update --hostname <ecs host ip> -port 4443 --cf <cookiefile> --privateKey <privateKey> -certificateChain <certificateChainFile>
  • Use CURL to run it directly from the ECS management console.  Note that this command uses the TOKEN environment variable that was set earlier.

Sample Command:

admin@ecs-node1:/> curl -svk -H "X-SDS-AUTH-TOKEN: $TOKEN" -H "Content-type: application/xml" -H "X-EMC-REST-CLIENT: TRUE" -X PUT -d "<rotate_keycertchain><key_and_certificate><private_key>`cat privateKeyFile`</private_key><certificate_chain>`cat certChainFile`</certificate_chain></key_and_certificate></rotate_keycertchain>" https://localhost:4443/vdc/keystore

c.       Commands used for installing a key & certificate pair for Object requests/users.  Use the actual private key and certificate chain files here, and a successful response code should be an HTTP 200.

  • Use ECSCLI to run it from a client PC:
admin@ecs-node1:/> python ecscli.py keystore update --hostname <ecs host ip> -port 4443 --cf <cookiefile> -pkvf <privateKey> -cvf <certificateChainFile>
  • Use CURL to run it directly from the ECS management console.  If curl is used, the xml format is required so that carriage returns and the like will be handled via the `cat` command.

Sample Command:

admin@ecs-node1:/> curl -svk -H "X-SDS-AUTH-TOKEN: $TOKEN" -H "Content-type: application/xml" -H "X-EMC-REST-CLIENT: TRUE" -X PUT -d "<rotate_keycertchain><key_and_certificate><private_key>`cat privateFile.key`</private_key><certificate_chain>`cat certChainFile.pem`</certificate_chain></key_and_certificate></rotate_keycertchain>" https://localhost:4443/object-cert/keystore
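
If you want the HTTP response code to stand out instead of digging it out of the verbose output, curl’s -w option can print it once the request completes.  This is plain curl behavior, nothing ECS-specific; it’s the same PUT as above, so only run it when you actually intend to update the keystore:

admin@ecs-node1:/> curl -sk -o /dev/null -w "%{http_code}\n" -H "X-SDS-AUTH-TOKEN: $TOKEN" -H "Content-type: application/xml" -H "X-EMC-REST-CLIENT: TRUE" -X PUT -d "<rotate_keycertchain><key_and_certificate><private_key>`cat privateFile.key`</private_key><certificate_chain>`cat certChainFile.pem`</certificate_chain></key_and_certificate></rotate_keycertchain>" https://localhost:4443/object-cert/keystore

A 200 here lines up with the successful response code mentioned above.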

d.      Important Notes:

  • Though this is the object certificate to be used for object requests sent on port 9021, the upload command is a management command which is sent on port 4443.
  • Once this is done it can take up to 2 hours for the certificate to be distributed to all of the nodes.
  • The certificate is immediately distributed upon the service restart of the node where the certificate was uploaded.

e.      Restart management services to propagate the management certificate.  Using viprexec runs the command on all of the nodes in the cluster.

admin@ecs-node1:/> sudo -i viprexec -i -c '/etc/init.d/nginx restart;sleep 10;/etc/init.d/nginx status'

Output from host : 192.168.1.1
Stopping nginx service ..done
Starting nginx service
..done
nginx service is running (pid=75447)

Output from host : 192.168.1.2
Stopping nginx service ..done
Starting nginx service
..done
nginx service is running (pid=85534)

Output from host : 192.168.1.3

Stopping nginx service ..done
Starting nginx service
..done
nginx service is running (pid=87325)

Output from host : 192.168.1.4
Stopping nginx service ..done
Starting nginx service
..done
nginx service is running (pid=59112)

Output from host : 192.168.1.5
Stopping nginx service ..done
Starting nginx service
..done
nginx service is running (pid=77312)

f.        Verify that the certificate was propagated to each node.  The output will show the certificate; scroll up and verify that all of the information is correct.  At a minimum, the first and last node should be checked.  (A loop version of these checks is sketched below the commands.)

admin@ecs-node1:/> openssl s_client -connect 10.10.10.1:4443 | openssl x509 -noout -text 
admin@ecs-node1:/> openssl s_client -connect 10.10.10.2:4443 | openssl x509 -noout -text 
admin@ecs-node1:/> openssl s_client -connect 10.10.10.3:4443 | openssl x509 -noout -text 
admin@ecs-node1:/> openssl s_client -connect 10.10.10.4:4443 | openssl x509 -noout -text 
admin@ecs-node1:/> openssl s_client -connect 10.10.10.5:4443 | openssl x509 -noout -text
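
With more than a few nodes this gets tedious, so here is a small shell loop that performs the same check on each node.  It’s only a convenience sketch; adjust the IP list for your environment.  The "echo |" closes the connection so s_client exits on its own, and printing just the subject and validity dates is usually enough to confirm the right certificate is in place (use -text instead if you want the full dump, including the SAN entries):

for ip in 10.10.10.1 10.10.10.2 10.10.10.3 10.10.10.4 10.10.10.5; do
  echo "=== $ip ==="
  echo | openssl s_client -connect $ip:4443 2>/dev/null | openssl x509 -noout -subject -dates
done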

g.       Wait at least 2 minutes and then restart the object head services to propagate the object head certificate:

admin@ecs-node1:/> sudo -i viprexec -i -c 'kill \`pidof dataheadsvc\`'
  • Wait for the service to come back up, which you can verify with the next few commands.
  • Run netstat to verify the datahead service is listening.
admin@ecs-node1:/tmp> netstat -an | grep LIST | grep 9021
tcp        0      0 10.10.10.1:9021     :::*    LISTEN
admin@ecs-node1:/tmp> sudo netstat -anp | grep 9021
tcp  0  0 10.10.10.1:9021 :::* LISTEN 67064/dataheadsvc
  • You can run the ps command to verify the start time of the datahead service compared to the current time on the node.
admin@ecs-node1:/tmp> ps -ef | grep dataheadsvc
storage+  29052  11163  0 May19 ? 00:00:00 /opt/storageos/bin/monitor -u 444 -g 444 -c / -l /opt/storageos/logs/dataheadsvc.out -p /var/run/dataheadsvc.pid /opt/storageos/bin/dataheadsvc file:/opt/storageos/conf/datahead-conf.xml
storage+  57064  29052 88 20:27 ? 00:00:51 /opt/storageos/bin/dataheadsvc -ea -server -d64 -Xmx9216m -Dproduct.home=/opt/storageos -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/storageos/logs/dataheadsvc-78517.hprof -XX:+PrintGCDateStamps -XX:+PrintGCDetails -Dlog4j.configurationFile=file:/opt/storageos/conf/dataheadsvc-log4j2.xml -Xmn2560m -Dsun.net.inetaddr.ttl=0 -Demc.storageos.useFastMD5=1 -Dcom.twmacinta.util.MD5.NATIVE_LIB_FILE=/opt/storageos/lib/MD5.so -Dsun.security.jgss.native=true -Dsun.security.jgss.lib=libgssglue.so.1 -Djavax.security.auth.useSubjectCredsOnly=false -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC -XX:+ExplicitGCInvokesConcurrent -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime -XX:+PrintTenuringDistribution -XX:+PrintGCDateStamps -Xloggc:/opt/storageos/logs/dataheadsvc-gc-9.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=3 -XX:GCLogFileSize=50M com.emc.storageos.data.head.Main file:/opt/storageos/conf/datahead-conf.xml

admin@ecs-node1:/tmp> date
Wed Jun  7 20:28:41 UTC 2017

Part 3: Verify the Installed Certificates.  The object certificate and management certificate each have their own GET request to retrieve the installed certificate.  Note that these commands are management requests.

a.       Verify the installed/Active Management Certificate

An alternative method to this one, which I used personally, is the OpenSSL s_client command.  The details in step 3a below aren’t necessary if you are going to use s_client for verification; I’ve simply included them here for completeness.   You can skip to step 3b for the s_client method.

  • Use ECSCLI to run it from a client PC:
python ecscli.py vdc_keystore get --hostname <ecs host ip> -port 4443 --cf <cookiefile>
  • Use CURL to run it directly from the ECS management console:
curl -svk -H "X-SDS-AUTH-TOKEN: $TOKEN" https://10.10.10.1:4443/vdc/keystore

Verify the installed (active) Object Certificate.  This can be done using a variety of methods, outlined below.

  • Use ECSCLI to run it from a client PC:
python ecscli.py keystore show --hostname <ecs host ip> -port 4443 --cf <cookiefile>
  • Use CURL to run it directly from the ECS management console:
curl -svk -H "X-SDS-AUTH-TOKEN: $TOKEN" https://10.10.10.1:4443/object-cert/keystore

b.      The certificate presented by a port can also be verified using OpenSSL’s s_client tool.  If you used the method in step 3a, this is unnecessary as it will give you the same information.

  • Sample command syntax:
openssl s_client -connect host:port -showcerts
  • The command syntax I used and some sample output from my ECS environment are below.  Verify the certificate on the last node as well as the expected SAN entries.
openssl s_client -connect 10.10.10.1:9021 | openssl x509 -noout -text
openssl s_client -connect 10.10.10.1:9021 -showcerts
CONNECTED(00000003)
depth=0 C = US, ST = North Dakota, L = Fargo, OU = server, O = CompanyName Worldwide, CN = *.nd.dev.ecs.CompanyName.int
verify error:num=18:self-signed certificate
verify return:1
depth=0 C = US, ST = North Dakota, L = Fargo, OU = server, O = CompanyName Worldwide, CN = *.nd.dev.ecs.CompanyName.int
verify return:1
---
Certificate chain
0 s:/C=US/ST=North Dakota/L=Fargo/OU=server/O=CompanyName Worldwide/CN=*.nd.dev.ecs.CompanyName.int
   i:/C=US/ST=North Dakota/L=Fargo/OU=server/O=CompanyName Worldwide/CN=*.nd.dev.ecs.CompanyName.int
-----BEGIN CERTIFICATE-----
MIIFQDCCBCigAwIBAgIJANzBojR+ij2xMA0GCSqGSIb3DQEBBQUAMIGRMQswCQYD
VQQGEwJVUzERMA8GA1UECBMITWlzc291cmkxFDASBgNVBAcTC1NhaW50IExvdWlz
   … <output truncated for this example> …

c.       The process is now complete.  You can have your application team test SSL access to ensure everything is working properly.


Installing the EMC ECS CLI Package

Below is a brief outline on installing EMC’s ECS CLI package.  I have another blog post that outlines all of the ECSCLI commands here.

Getting Started

Prerequisites:

Install Python Requests Package:

  • Versions of ECSCLI prior to 3.x may require a manual install of the Python requests package.  When I installed v3.1.9, the pip install process appears to have taken care of installing the requests package for me, but I saw reports of this issue while reading other documentation.   Either way, you can manually install the requests package by using “pip install requests” or by downloading the code from GitHub and running “python setup.py install”.

Install ECSCLI using Python PIP:

  • There are frequent updates and fixes being made to the ECSCLI package. The latest version of ECSCLI can always be downloaded and installed via pip using “pip install ecscli” from a Windows command prompt.  pip will be in your system path once you’ve installed Python, so it can be run from any directory.  If you want to archive a copy, use “pip download ecscli” rather than “pip install ecscli”.  As an alternative, you can also find the ECSCLI install package available for download at EMC’s support site (v2 is available here).

ECS CLI PIP Installation and Configuration

You will need to set up a configuration profile once ECSCLI is installed.  Configuration profiles address issues with older versions of the ECSCLI regarding authentication and python dependencies.  A profile simply contains the hostname and port along with an existing management user who will be authenticating to that host.  Several profiles can be created but only one can be active.  Once the active profile is set, ECSCLI will then use that profile for authenticating and sending commands.

To install the ecscli via pip:

pip install ecscli

Collecting ecscli
Downloading ecscli-2.2.0a5.tar.gz (241kB)
100% |████████████████████████████████| 245kB 568kB/s
Requirement already satisfied (use --upgrade to upgrade): requests in ./anaconda/envs/ecscli_demoenv/lib/python2.7/site-packages (from ecscli)
Building wheels for collected packages: ecscli
Running setup.py bdist_wheel for ecscli ... done
Stored in directory: /Users/conerj/Library/Caches/pip/wheels/92/7f/c3/129ffe5cd1b3b20506264398078bdd886c27fefe89b062b711
Successfully built ecscli
Installing collected packages: ecscli
Successfully installed ecscli-2.2.0a5

To see a list of profiles:

ecscli config list

Running without an active config profile
list of existing configuration profiles:

Since the ecscli was just installed, no profiles exist yet.

Once you have an active profile, the output will look like this:

Running with config profile: C:\python\ecscli/ecscliconfig_demouser_.json
user: root host:port: 10.10.10.1:4443
list of existing configuration profiles:
ACTIVE  |PROFILE   |HOSTNAME   |PORT   |MGMT USER   |ECS VERSION
----------------------------------------------------------------
        |demouser  |10.10.10.1 |4443   |root        |3.0

To create a profile:

ecscli config -pf demoprofile

Running without an active config profile
Please enter the default ECS hostname or ip (127.0.0.1):10.10.10.11
Please enter the default command port (4443):
Please enter the default user for the profile (root):
Entered saveConfig profileName = demoprofile
will be saved to base path: /Users/demo_user/ecscliconfig_
Saving profile config to: /Users/demo_user/ecscliconfig_demoprofile_.json
list of existing configuration profiles:
     * demoprofile - hostname:10.10.10.11:4443       user:root

Normally one profile will always be active.  Because this is the first time a profile is being created, ECSCLI will run without an active profile. The CLI will prompt the user to enter the hostname, IP, port and management user for the profile. The “*” shows the active profile that will be used. Several profiles can be configured, however only one profile can be active at a time. The profiles are stored in .json files in the home directory with the name prefix “ecscliconfig_”.

To see a list of profiles and the active profile:

ecscli config list

Running with config profile: demoprofile
user: demo_user    host:port: 10.10.10.10:4443
list of existing configuration profiles:
    * demoprofile2 - hostname:10.10.10.11:4443 user:demouser
      demoprofile  - hostname:10.10.10.10:4443 user:root

The currently active profile is denoted by “*” before the profile name.

To change the active profile:

ecscli config set -pf demoprofile2

Running with config profile: demoprofile2
user: demo_user    host:port: 10.10.10.11:4443
list of existing configuration profiles:
    * demoprofile2 - hostname:10.10.10.11:4443 user:demouser
      demoprofile  - hostname:10.10.10.10:4443 user:root

To delete a profile:

ecscli config delete -pf demoprofile

Running with config profile: demoprofile
user: root  host:port: 10.10.10.10:4443
list of existing configuration profiles:
* demoprofile2 - hostname:10.10.10.11:4443 user:demouser

Since the currently active profile was deleted in this example, the ecscli chose another profile to set as the active profile.

ECSCLI configuration handles the “--hostname” and “--port” arguments and manages the tokens for subsequent management requests.  Authentication is still required, but this and all other requests are simplified since cookie-related arguments are no longer required.

To Authenticate:

ecscli authenticate

Running with config profile: demoprofile2
user: root  host:port: 10.10.10.10:4443
Password :
authentication result: root : Authenticated Successfully
/Users/demo_user/demo_profile/rootcookie : Cookie saved successfully

Another sample command:

This command example will list the storage pools:

ecscli objectvpool list

Running with config profile: demoprofile
user: root    host:port: 10.10.10.10:4443
{'global_data_vpool': [{'isAllowAllNamespaces': True, 'remote': None, 'name': 'lab_env', 'enable_rebalancing': True, 'global': None, 'creation_time': 1033186012844, 'isFullRep': False, 'vdc': None, 'inactive': False, 'varrayMappings': [{'name': 'urn:storageos:VirtualDataCenterData:823c6f4c-bda2-6ca2-69d7-110df3e9f022', 'value': 'urn:storageos:VirtualArray:19f03490-3f30-25dd-5f5c-8b208f64e3f0'}], 'id': 'urn:storageos:ReplicationGroupInfo:8066234b-bdc2-6234-f066-81f0aa61e7bf:global', 'description': ''}]}

EMC ECS CLI Command Reference

Below is a comprehensive list of the available ECS CLI commands. The ‘-h’ flag will list the various options available with each command.  A detailed description of each command is also available in EMC’s reference guide, which can be found on their support site.   The ECS CLI requires Python. I have another blog entry on installing the ECS CLI here.
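
For example, to see the options available for a specific command (the exact output varies by ECSCLI version):

ecscli.py bucket list -h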

ecscli.py authenticate Authenticate to ECS Array
ecscli.py authentication add-provider Add an Authentication Provider
ecscli.py authentication delete-provider Delete an Authentication Provider
ecscli.py authentication list-providers List your Authentication Provider
ecscli.py authentication show-provider Show Authentication Provider
ecscli.py authentication update Update your Authentication
ecscli.py bucket delete Delete an ECS Bucket
ecscli.py bucket delete-quota Delete an ECS Bucket Quota
ecscli.py bucket get-acl Get bucket ACL information
ecscli.py bucket get-groups Get bucket group information
ecscli.py bucket get-permissions Get Bucket Permissions
ecscli.py bucket get-quota Get bucket Quota information
ecscli.py bucket get-ret-period Get Bucket Retention Period
ecscli.py bucket info Bucket Info
ecscli.py bucket list Bucket List
ecscli.py bucket lock Bucket Lock
ecscli.py bucket lock-info Bucket Lock Info
ecscli.py bucket set-acl Set Bucket ACL
ecscli.py bucket update-owner Update the Bucket Owner
ecscli.py bucket update-ret Update the Bucket Retention Period
ecscli.py bucket update-stale Update the Bucket ‘isStaleAllowed’ parameter
ecscli.py cas create_update_secret Create or update cas secret for user
ecscli.py cas delete_secret Delete cas secret for user
ecscli.py cas get_bucket Get cas bucket for user
ecscli.py cas get_metadata Get cas metadata for user with namespace
ecscli.py cas get_pea Get cas pea for user with namespace
ecscli.py cas get_registered_apps Get cas registered applications for user
ecscli.py cas get_secret Get cas secret for user
ecscli.py cas set_bucket Set cas bucket for user
ecscli.py cas set_metadata Set cas metadata for user
ecscli.py datastore bulk-get Get Bulk Resources for the Datastore
ecscli.py datastore create Create a data store
ecscli.py datastore delete Delete a data store node
ecscli.py datastore list List Datastore
ecscli.py datastore show Show Datastore node
ecscli.py datastore tasks List Datastore tasks
ecscli.py dataservice list List data fabric services
ecscli.py dataservice provision Provision data fabric services
ecscli.py failedzones Get configured temp failed zone info
ecscli.py keystore show Show Keystore
ecscli.py keystore update Update Keystore
ecscli.py meter SOS metering
ecscli.py mgmtuserinfo add Create Mgmtuserinfo
ecscli.py mgmtuserinfo delete Delete Mgmtuserinfo
ecscli.py mgmtuserinfo list List Mgmtuserinfo
ecscli.py monitor SOS Monitoring
ecscli.py namespace create Create Namespace
ecscli.py namespace create-ret Create Namespace Retention Class
ecscli.py namespace delete Delete Namespace
ecscli.py namespace delete-quota Delete Namespace Quota
ecscli.py namespace get Get Tenant Namespace
ecscli.py namespace get-quota Get Namespace Quota
ecscli.py namespace get-ret-period Get Namespace Retention Period
ecscli.py namespace list List Namespaces
ecscli.py namespace list-ret Get Namespace Retention Classes
ecscli.py namespace show Show Namespace
ecscli.py namespace update Update Namespace
ecscli.py namespace update-ret Update Namespace Retention Class
ecscli.py namespace update-quota Update Namespace Quota
ecscli.py nodes list Get a list of ECS datanodes
ecscli.py objectuser create Create an Objectuser
ecscli.py objectuser delete Delete an Objectuser
ecscli.py objectuser get-lock Get lock info for an Objectuser
ecscli.py objectuser list List an Objectuser
ecscli.py objectuser lock Lock an Objectuser
ecscli.py objectuser unlock Unlock an Objectuser
ecscli.py objectvpool add Add an ObjectVPool
ecscli.py objectvpool create Create an ObjectVPool
ecscli.py objectvpool delete Delete an ObjectVPool
ecscli.py objectvpool list List ObjectVPools
ecscli.py objectvpool remove Remove an ObjectVPool
ecscli.py objectvpool show Show an ObjectVPool
ecscli.py objectvpool update Update an ObjectVPool
ecscli.py secretkeyuser add Add a Secretkeyuser
ecscli.py secretkeyuser delete Delete a Secretkeyuser
ecscli.py secretkeyuser show Show a Secretkeyuser
ecscli.py secretkeyuser user-delete Delete a Secretkeyuser user
ecscli.py secretkeyuser user-show Show a Secretkeyuser User
ecscli.py system add-license Add a System license
ecscli.py system connectemc-ftps Connect  EMC by ftps
ecscli.py system connectemc-smtp Connect  EMC by smtp
ecscli.py system deactivate-callhome Deactivate ESRS callhome configuration
ecscli.py system get-alerts Get System Alerts
ecscli.py system get-callhome-config Get the ESRS callhome configuration
ecscli.py system get-license Get the System license
ecscli.py system get-log-level Get the System logging level
ecscli.py system get-logs Get the System logs
ecscli.py system get-properties Get the System properties
ecscli.py system get-properties-metadata Get the system properties metadata
ecscli.py system send-alert Send a System Alert
ecscli.py system set-log-level Set the system logging level
ecscli.py system set-properties Set system properties
ecscli.py tenant add-attribute Add a Tenant attribute
ecscli.py tenant add-group Add a Tenant group
ecscli.py tenant add-role Add a Tenant Role
ecscli.py tenant create Create a Tenant
ecscli.py tenant delete Delete a Tenant
ecscli.py tenant delete-role Delete a tenant role
ecscli.py tenant get-clusters Get tenant clusters
ecscli.py tenant get-hosts Get tenant hosts
ecscli.py tenant get-role Display tenant roles
ecscli.py tenant get-vcenters Get tenant vcenters
ecscli.py tenant list List the tenants
ecscli.py tenant remove-attribute Remove a tenant attribute
ecscli.py tenant show Show tenants
ecscli.py tenant update-quota Update tenant quotas
ecscli.py varray create Create a varray
ecscli.py varray delete Delete a varray
ecscli.py varray list List a varray
ecscli.py varray update Update a varray
ecscli.py vdc delete VirtualDataCenter delete
ecscli.py vdc list VirtualDataCenter list
ecscli.py vdc_data insert Insert ECS Data VirtualDataCenter
ecscli.py vdc_data list List ECS Data VirtualDataCenter
ecscli.py vdc_data local Local ECS Data VirtualDataCenter
ecscli.py vdc_data show Show ECS Data VirtualDataCenter
ecscli.py vpool add_pools Add storage pools to ECS VPOOL
ecscli.py vpool allow Allow tenant access to ECS VPOOL
ecscli.py vpool create Create an ECS VPOOL
ecscli.py vpool delete Delete an ECS VPOOL
ecscli.py vpool disallow Disallow tenant access to ECS VPOOL
ecscli.py vpool get_pools Get storage pools in ECS VPOOL
ecscli.py vpool list List ECS VPOOLs
ecscli.py vpool refresh_pools Refresh storage pools in ECS VPOOL
ecscli.py vpool remove_pools Remove storage pools in ECS VPOOL
ecscli.py vpool show Show ECS VPOOL
ecscli.py vpool update Update ECS VPOOL