Using Cron with EMC VNX and Celerra

I’ve shared numerous shell scripts over the years on this blog, many of which benefit from being scheduled to run automatically on the Control Station.  I’ve received emails and comments asking, “How do I schedule Unix or Linux crontab jobs to run at intervals like every five minutes, every ten minutes, or every hour?”  I’ll run through some specific examples below. While it’s easy enough to simply type “man crontab” at the CLI to review the syntax, it can be helpful to see specific examples.

What is cron?

Cron is a time-based job scheduler used in most Unix operating systems, including the VNX File OS (or DART). It’s used to schedule jobs (either commands or shell scripts) to run periodically at fixed times, dates, or intervals. It’s primarily used to automate system maintenance and administration, and can also be used for troubleshooting.

What is crontab?

Cron is driven by a crontab (cron table) file, a configuration file that specifies shell commands to run periodically on a given schedule. Crontab files hold the lists of jobs and other instructions for the cron daemon. Users can have their own individual crontab files, and often there is a system-wide crontab file (usually in /etc or a subdirectory of /etc) that only system administrators can edit.

On the VNX, the crontab files are located in /var/spool/cron but are not intended to be edited directly; use the “crontab -e” command instead.  Each NAS user has their own crontab, and commands in any given crontab are executed as the user who owns the crontab.  For example, the crontab file for nasadmin is stored as /var/spool/cron/nasadmin.

Best Practices and Requirements

First, let’s review how to edit and list crontab and some of the requirements and best practices for using it.

1. Make sure you use “crontab -e” to edit your crontab file. As a best practice you shouldn’t edit the crontab files directly.  Use “crontab -l” to list your entries.

2. Blank lines and leading spaces and tabs are ignored. Lines whose first non-space character is a pound sign (#) are comments, and are ignored. Comments are not allowed on the same line as cron commands as they will be taken to be part of the command.

3. If the /etc/cron.allow file exists, then you must be listed therein in order to be allowed to use this command. If the /etc/cron.allow file does not exist but the /etc/cron.deny file does exist, then you must not be listed in the /etc/cron.deny file in order to use this command.

4. Don’t execute commands directly from within crontab; place your commands in a script file that’s called from cron. Cron jobs run without a terminal, so anything written to stdout is mailed to the crontab owner rather than displayed, which is one of several reasons you shouldn’t put commands directly in your crontab schedule.  Make sure to redirect stdout somewhere, either to a log file or to /dev/null.  This is accomplished by adding “> /folder/log.file” or “> /dev/null” after the script path.

5. For scripts that will run under cron, make sure you either define actual paths or use fully qualified paths to all commands that you use in the script.

6. I generally add these two lines to the beginning of my scripts as a best practice for using cron on the VNX.

export NAS_DB=/nas
export PATH=$PATH:/nas/bin
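Putting the last few points together, here’s a minimal sketch of what a cron-driven script might look like on the Control Station. This is a hypothetical skeleton; the echo line and any paths beyond /nas are purely illustrative.

```shell
#!/bin/sh
# Hypothetical skeleton for a script called from cron on the Control Station.
# The two exports make sure NAS commands resolve even without a login shell.
export NAS_DB=/nas
export PATH=$PATH:/nas/bin

# Use fully qualified paths for every command the script runs:
/bin/echo "run started: $(/bin/date)"
```

The matching crontab entry would then redirect stdout as described in point 4, for example: 30 7 * * * /home/nasadmin/scripts/myscript.sh > /home/nasadmin/logs/myscript.log 2>&1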

Descriptions of the crontab date/time fields

Commands are executed by cron when the minute, hour, and month of year fields match the current time, and when at least one of the two day fields (day of month, or day of week) match the current time.

# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of week (0 - 7) 
# │ │ │ │ │                          
# │ │ │ │ │
# │ │ │ │ │
# * * * * *  command to execute
# 1 2 3 4 5  6

field #   meaning          allowed values
-------   ------------     --------------
   1      minute           0-59
   2      hour             0-23
   3      day of month     1-31
   4      month            1-12
   5      day of week      0-7 (0 or 7 is Sun)

Run a command every minute

While it’s not as common to want to run a command every minute, there can be specific use cases for it.  It would most likely be used when you’re in the middle of troubleshooting an issue and need data to be recorded more frequently.  For example, you may want to run a command every minute to check and see if a specific process is running.  To run a Unix/Linux crontab command every minute, use this syntax:

# Run “check.sh” every minute of every day
* * * * * /home/nasadmin/scripts/check.sh
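As a sketch of what check.sh itself might contain, the snippet below reports whether a named process is running. The process name (crond) and the function wrapper are illustrative assumptions, not part of any original script.

```shell
# Hypothetical body for check.sh: report whether a process is running.
# The [c]rond bracket trick keeps grep from matching its own command line.
check_proc() {
    if ps -ef | grep -q "[c]rond"; then
        echo "crond is running"
    else
        echo "crond is NOT running"
    fi
}
check_proc
```

Because cron has nowhere to display this output, the crontab entry would normally redirect it to a log file, per the best practices above.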

Run a command every hour

The syntax is similar when running a cron job every hour of every day.  In my case I’ve used hourly scripts for performance monitoring, for example with the server_stats VNX script. Here’s a sample crontab entry that runs at 15 minutes past the hour, 24 hours a day.

# Brocade Backup
# This command will run at 12:15, 1:15, 2:15, etc., 24 hours a day.
15 * * * * /home/nasadmin/scripts/stat_collect.sh

Run a command once a day

Here’s an example that shows how to run a command from the cron daemon once a day. In my case, I’ll usually run daily commands for report updates on our web page and for backups.  As an example, I run my Brocade Zone Backup script once daily.

# Run the Brocade backup script at 7:30am
30 7 * * * /home/nasadmin/scripts/brocade.sh

Run a command every 5 minutes

There are multiple methods to run a crontab entry every five minutes.  It is possible to enter a single, specific minute value multiple times, separated by commas.  While this method does work, it makes the crontab list a bit harder to read and there is a shortcut that you can use.

0,5,10,15,20,25,30,35,40,45,50,55  * * * * /home/nasadmin/scripts/script.sh

The crontab “step value” syntax (using a forward slash) allows you use a crontab entry in the format sample below.  It will run a command every five minutes and accomplish the same thing as the command above.

# Run this script every 5 minutes
*/5 * * * * /home/nasadmin/scripts/test.sh
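You can convince yourself the two forms are equivalent by expanding the step expression in a shell; GNU seq with a comma separator produces exactly the minute list from the long-form entry above:

```shell
# Expand */5 over the minute field: every 5th value from 0 through 55
minutes=$(seq -s, 0 5 55)
echo "$minutes"   # prints 0,5,10,15,20,25,30,35,40,45,50,55
```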

Ranges, Lists, and Step Values

I just demonstrated the use of a step value to specify a schedule of every five minutes, but you can actually get even more granular than that using ranges and lists.

Ranges.  Ranges of numbers are allowed (two numbers separated with a hyphen). The specified range is inclusive. For example, using 7-10 for an “hours” entry specifies execution at hours 7, 8, 9, & 10.

Lists. A list is a set of numbers (or ranges) separated by commas. Examples: “1,2,5,9”, “0-4,8-12”.

Step Values. Step values can be used in conjunction with ranges. Following a range with “/” specifies skips of the number’s value through the range. For example, “0-23/2” can be used in the hours field to specify command execution every other hour (the alternative being “0,2,4,6,8,10,12,14,16,18,20,22”). Steps are also permitted after an asterisk, so if you want to say “every two hours” you can use “*/2”.
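Combined in a single crontab entry, ranges, lists, and steps can express fairly precise schedules. For example (the script path here is illustrative):

```shell
# Run at :00 and :30, every other hour from 8am through 4pm, Monday-Friday
0,30 8-16/2 * * 1-5 /home/nasadmin/scripts/poll.sh
```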

Special Strings

While I haven’t personally used these, there is a set of built-in special strings you can use, outlined below.

string         meaning
------         -------
@reboot        Run once, at startup.
@yearly        Run once a year, "0 0 1 1 *".
@annually      (same as @yearly)
@monthly       Run once a month, "0 0 1 * *".
@weekly        Run once a week, "0 0 * * 0".
@daily         Run once a day, "0 0 * * *".
@midnight      (same as @daily)
@hourly        Run once an hour, "0 * * * *".
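For example, these two entries are equivalent ways of scheduling the same nightly run (the script path is illustrative):

```shell
# Both entries run report.sh at 00:00 every day
@daily      /home/nasadmin/scripts/report.sh
0 0 * * *   /home/nasadmin/scripts/report.sh
```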

Using a Template

Below is a template you can use in your crontab file to assist with the valid values that can be used in each column.

# Minute|Hour  |Day of Month|Month |WeekDay |Command
# (0-59)|(0-23)|(1-31)      |(1-12)|(0-7)             
  0      2      12           *      *        test.sh

Gotchas

Here’s a list of the known limitations of cron and some of the issues you may encounter.

1. When a cron job is run, it is executed as the user who created it. Verify the security requirements for the job.

2. Cron jobs do not read any files in the user’s home directory (like .cshrc or .bashrc). If cron needs anything from a file your script depends on, the script itself must source it. This includes setting paths, sourcing files, setting environment variables, etc.

3. If your cron jobs are not running, make sure the cron daemon is running. The cron daemon can be started or stopped with the following VNX Commands (run as root):

# /sbin/service crond stop
# /sbin/service crond start

4. If your job isn’t running properly, you should also check the /etc/cron.allow and /etc/cron.deny files.

5. Crontab is not parsed for environment variable substitutions. You cannot use things like $PATH, $HOME, or ~/sbin.

6. Cron does not deal with seconds; one minute is the most granular interval it allows.

7. You cannot use an unescaped % in the command area; cron treats % as a newline and passes everything after the first % to the command as standard input. Escape each % with a backslash, and if it’s used with command substitution like the date command you can put it in backticks, e.g. `date +\%Y-\%m-\%d`, or use bash’s command substitution $(). For example, an entry that datestamps a log file (path illustrative) would look like: 0 0 * * * /bin/echo "ran on `date +\%Y-\%m-\%d`" >> /home/nasadmin/logs/run.log

8. Be cautious using the day of month and day of week fields together.  When both fields contain restrictions (no *), cron treats them as an “or” condition, not an “and” condition: the job runs when either field matches.

 


Storage Class Memory and Emerging Technologies

I mentioned in my earlier post, The Future of Storage Administration, that Flash will continue to dominate the industry and will be embraced by the enterprise, which I believe will drive newer technologies like NVMe and diminish older technologies like fiber channel.  While there is a lot of agreement over the latest storage technologies that are driving the adoption of flash in the enterprise, including the aforementioned NVMe technology, there doesn’t seem to be nearly as much agreement on what the “next big thing” will be in the enterprise storage space.  NVMe and NVMe-oF are definitely being driven by the trend towards the all-flash data center, and Storage Class Memory (SCM) is certainly a relevant trend that could be that “next big thing”.  Before I continue, what are NVMe, NVMe-oF and SCM?

  • NVMe is a protocol that allows for fast access to direct-attached flash storage. NVMe is considered an evolutionary step toward exploiting the inherent parallelism built into SSDs.
  • NVMe-oF allows the advantages of NVMe to be used on a fabric connecting hosts with networked storage. With the increased adoption of low latency, high bandwidth network fabrics like 10Gb+ Ethernet and InfiniBand, it becomes possible to build an infrastructure that extends the performance advantages of NVMe over standard fabrics to access low latency nonvolatile persistent storage.
  • SCM (Storage Class Memory) is a technology that places memory and storage on what looks like a standard DIMM board, which can be connected over NVMe or the memory bus.  I’ll dive in a bit more later on.

In the coming years, you’ll likely see every major storage vendor rolling out their own solutions for NVMe, NVMe-oF, and SCM.  The technologies alone won’t mean anything without optimization of the OS/hypervisor, drivers, and protocols, however. The NVMe software will need to be designed to take advantage of the low latency transport and media.

Enter Storage Class Memory

SCM is a hybrid memory and storage paradigm, placing memory and storage on what looks like a standard DIMM board.  It’s been gaining a lot of attention at storage industry conferences for the past year or two.  Modern solid-state drives are a compromise: the media is flash, but it’s still packaged behind all the bottlenecks of legacy disk drives, even when bundled into modern enterprise arrays.  SCM is not exactly memory and it’s not exactly storage.  It physically connects to memory slots in a mainboard just like traditional DRAM.  It is a little slower than DRAM, but it is persistent, so just like traditional storage all content is preserved across a power cycle.  Compared to flash, SCM is orders of magnitude faster and offers equal performance gains on read and write operations.  In addition, SCM tiers are much more resilient and do not have the same wear pattern problems as flash.

A large gap exists between DRAM as a main memory and traditional SSD and HDD storage in terms of performance vs. cost, and SCM looks to address that gap.

The next-generation technologies that will drive SCM aim to be denser than current DRAM along with being faster, more durable, and hopefully cheaper than NAND solutions.  SCM, when connected over NVMe technology or directly on the memory bus, will enable device latencies to be about 10x lower than those provided by NAND-based SSDs.  SCM can also be up to 10x faster than NAND flash although at a higher cost than NAND-based SSDs. Similarly, NAND flash started out at least 10x more expensive than the dominant 15K RPM HDD media when it was introduced. Prices will come down.

Because the expected media latencies for SCM (<2us) are lower than the network latencies (<5us), SCM will probably end up being more common on servers rather than on the network.  Either way, SCM on a storage system will help accelerate metadata access and improve overall system performance.  Using NVMe-oF to provide low-latency access to networked storage, SCM could potentially be used to create a different tier of network storage.

The SCM Vision

It sounds great, right?  The concept of Storage Class Memory has been around for a while, but it’s become a hard-to-reach albeit very desirable goal for storage professionals. The common vision seems to be a new paradigm where data can live in fast, DRAM-like storage areas, making data in memory the center of the computer instead of the compute functions. The main problem with this vision is how we get the system and applications to recognize that something beyond just DRAM is available for use, and that it can be used as either data storage or as persistent memory.

We know that SCM will allow for huge volumes of I/O to be served from memory and potentially stored in memory.  There will be fewer requirements needed to create multiple copies to protect against controller or server failure.  Exactly how this will be done remains to be seen, but there are obvious benefits from not having to continuously commit to slow external disks.  Once all the hurdles are overcome, SCM should have broad applicability in SSDs, storage controllers, PCI or NVMe boards and DIMMs.

Software Support

With SCM, applications won’t need to execute write IOs to get data into persistent storage. A memory level, zero copy operation moving data into XPoint will take care of that. That is just one example of the changes that systems and software will have to take on board when a hardware option like XPoint is treated as persistent storage-class memory, however.  Most importantly, the following must also be developed:

  • File systems that are aware of persistent memory must be developed
  • Operating system support for storage-class memory must be developed
  • Processors designed to use hybrid DRAM and XPoint memory must be developed

With that said, the industry is well on its way. Microsoft has added XPoint storage-class memory support to Windows Server 2016.  It provides zero-copy access and Direct Access Storage volumes, known as DAX volumes.  Red Hat Linux support is in place to use these devices as fast disks in sector mode with BTT, and this use case is fully supported in RHEL 7.3.

Hardware

SCM can be implemented with a variety of current technologies, notably Intel Optane, ReRAM, and NVDIMM-P.

Intel has introduced Optane-brand XPoint SSDs and XPoint DIMMs; the DIMM form factor attaches directly to the memory bus instead of the relatively slower PCIe bus used by the NVMe XPoint drives.

Resistive Random-Access Memory (ReRAM) is still an up-and-coming technology and comparable to Intel’s XPoint. It is currently under development by a number of companies and is a viable replacement for flash memory. The costs and performance of ReRAM are not currently at a level that makes the technology ready for the mass market. Developers of ReRAM technology all face similar challenges: overcoming temperature sensitivity, integrating with standard CMOS technology and manufacturing processes, and limiting the effects of sneak path currents, which would otherwise disrupt the stability of the data contained in each memory cell.

NVDIMM stands for “Nonvolatile Dual-Inline Memory Module.” The NVDIMM-P specification is being developed to support NAND flash directly on the host memory interface.  NVDIMMs use predictive software that allocates data in advance between DRAM and NAND.  NVDIMM-P is limited in that even though the NAND flash is physically located on the DIMM along with DRAM, the traditional memory hierarchy stays the same. The NAND implementation still works as a storage device and the DRAM implementation still works as main memory.

HP worked for years developing its Machine project.  Their effort revolved around memory-driven computing and an architecture aimed at big data workloads, and their goal was eliminating inefficiencies in how memory, storage, and processors interact.  While the project appears to now be dead, the technologies they developed will live on in current and future HP products. Here’s what we’ll likely see out of their research:

  • Now: ProLiant boxes with persistent memory for applications to use, using a mix of DRAM and flash.
  • Next year: Improved DRAM-based persistent memory.
  • Two-three years: True non-volatile memory (NVM) for software to use as slow but high volume RAM.
  • Three-Four years: NVM technology across many product categories.

SCM Use Cases

I think SCM’s most exciting use case for high performance computing may be its use as nonvolatile memory that is tightly coupled to an application. SCM has the potential to dramatically affect the storage landscape in high performance computing, and application and storage developers will have fantastic opportunities to take advantage of this unique new technology.

Intel touts fast storage, cache, and extended memory as the primary use cases for their Optane product line.  Fast storage or cache refers to the tiering and layering which enable a better memory-to-storage hierarchy. The Optane product provides a new storage tier that breaks through the bottlenecks of traditional NAND storage to accelerate applications, and enable more work to get done per server. Intel’s extended memory use case describes the use of an Optane SSD to participate in a shared memory pool with DRAM at either the OS or application level enabling bigger memory or more affordable memory.

What the next generation of SCM will require is the industry coming together to agree on what we are all talking about and generate some standards.  Those standards will be critical to support innovation. Industry experts seem to be saying that the adoption of SCM will evolve around use cases and workloads, and task-specific, engineered machines that are built with real-time analytics in mind.  We’ll see what happens.

No matter what, new NVMe-based products coming out will definitely do a lot toward enabling fast data processing at a large scale, especially solutions that support the new NVMe-oF specification. SCM combined with software-defined storage controllers and NVMe-oF will enable users to pool flash storage drives and treat them as if they are one big local flash drive. Exciting indeed.

SCM may not turn out to be a panacea, and current NVMe flash storage systems will provide enough speed and bandwidth to handle even the most demanding compute requirements for the foreseeable future.  I’m looking forward to seeing where this technology takes us.

The Future of Storage Administration

What is the future of enterprise storage administration? Will the job of enterprise storage administrator still be necessary in 10 years? A friend in IT storage management recently asked me a question related to that very relevant topic. In a word, I believe the answer to that second question is a resounding yes, but alas, things are changing and we are going to have to embrace the inevitable changes in the industry to stay on top of our game and remain relevant.  In recent years I’ve seen the need to broaden my skills and demonstrate how my skills can actually drive business value. The modern data center is undergoing a tremendous transformation, with hyper-converged systems, open source solutions, software-defined storage, large cloud-scale storage systems that companies can throw together all by themselves, and many others. This transformation is being created by the need for business agility, and it’s being fueled by software innovation.

As the expansion of data into the cloud influences and changes our day-to-day management, we will begin to see the rise of the IT generalist in the storage world.  These inevitable changes and the new tools that manage them will mean that storage will likely move toward being procured and managed by IT generalists rather than specialists like myself. Hyper converged infrastructures will allow these generalists to manage an entire infrastructure with a single, familiar set of tools.  As overall data center responsibilities start to shift to more generalist roles, traditional enterprise storage professionals like myself will need to expand our expertise beyond storage, or focus on more strategic projects where storage performance is critical.  I personally see us starting to move away from the day-to-day maintenance of infrastructure and more toward how IT can become a real driver of business value. The glory days of on-prem SAN and storage arrays are nearing an end, but we old timers in enterprise storage can still be a key part of the success of the businesses we work for. If we didn’t embrace change, we wouldn’t be in IT, right?

Despite all of these new technologies and trends, keep in mind that there are still some good reasons to take the classic architecture into consideration for new deployments. It’s not going to disappear overnight. It’s the business that drives the need for storage, and it’s the business applications that dictate the ideal architecture for your environment. Aside from the application, businesses will also be dependent on their existing in-house skills which will of course affect the overall cost analysis of embracing the new technologies, possibly pushing them off.

So, what are we in for? The following list summarizes my view on the key changes that I think we’ll see in the foreseeable future.  I’m guessing you’d see these (along with many others) pop up in pretty much any google search about the future of storage or storage trends, but these are the most relevant to what I’m personally witnessing.

  • The public cloud is an unstoppable force.  Embrace it as a storage admin or risk becoming irrelevant.
  • Hyper-converged systems will become more and more common, driven by market demand.
  • Hardware commoditization will continue to eat away at the proprietary hardware business.
  • Storage vendors will continue to consolidate.
  • We will see the rise of RDMA in enterprise storage.
  • Open Source storage software will mature and will see more widespread acceptance.
  • Flash continues to dominate and will be embraced by the enterprise, driving newer technologies like NVMe and diminishing technologies like fiber channel.
  • GDPR will drive increased spending and overall focus on data security.
  • Scale out and object solutions will increasingly be more important.
  • Data Management and automation will increase in importance.

Cloud
I believe that the future of cloud computing is undeniably hybrid. The future data center will likely represent a combination of cloud based software products and on-prem compute, creating a hybrid IT solution that balances the scalability and flexibility associated with cloud, and the security and control you have with a private data center. With that said, I don’t believe that Cloud is a panacea as there are always concerns about security, privacy, backups, and especially performance. In my experience, when the companies I’ve worked for have directed us to investigate cloud options for specific applications, on-premises infrastructure costs less than public cloud in the long run. Even so, there is no doubting the inexorable shift of projects, infrastructure, and spending to the cloud, and it will affect compute, networking, software, and storage. I expect I’ll see more and more push to find more efficient solutions that offer lower costs, likely resulting in hybrid solutions. When moving to the cloud, monitoring consumption is the key to cost savings. Cost management tools from the likes of Cloudability, Cloud Cruiser and Cloudyn are available and well worth looking at.

I’ve also heard, “the cloud is already in our data center, it’s just private”. Contrary to popular belief, private clouds are not simply existing data centers running virtualized, legacy workloads. They are highly-modernized application and service environments running on true cloud platforms (like AWS or Azure) residing either on-prem or in a hybrid scenario with a hosting services partner. As we shift more of our data to the cloud, we’ll see industry demand for storage move from “just in case” storage (an upfront capex model) to “just in time” storage (an ongoing opex model). “Just in time” storage has been a running joke for years for me in the more traditional data center environments that I’ve been responsible for, alluding to the fact that we’d get storage budget approved, ordered and installed just days before reaching full capacity. That’s not what I’m referring to in this case… “Just in time” means online service providers run at much higher asset utilization than the typical customer and can add capacity in more granular increments. The migration to cloud allows for a much more efficient “just in time” model than I’m used to, and allows the switch to an ongoing opex model.

Hyper Converged
A hyper-converged infrastructure can greatly simplify the management of IT and yes, it could reduce the need for skilled storage administrators: the complexities of storage, servers and networking that require separate skills to manage are hidden ‘under the hood’ by that software layer, allowing it to be managed by staff with more general IT skills through a single administrative interface. Hyperconverged infrastructure is also much easier to scale and in smaller increments than traditional integrated systems. Instead of making major infrastructure investments every few years, businesses can simply add modules of hyperconverged infrastructure when they are needed.

It seems like an easy sell. It’s a data center in a box. Fewer components, a smaller data center footprint, reduced energy consumption, lower cooling requirements, reduced complexity, rapid deployment time, fewer high level skill requirements, and reduced cost. What else could you ask for?

As it turns out, there are issues. Hyper converged systems require a massive amount of interoperability testing, which means hardware and software updates take a very long time to be tested, approved, and released. A brand new Intel chipset can take half a year to be approved. There is a tradeoff between performance and interoperability. In addition, you won’t be saving any money over a traditional implementation, hyper-converged requires vendor lock-in, and performance and capacity must be scaled out at the same time. Even with those potential pitfalls, hyper converged systems are here to stay and will continue to be adopted at a fast pace in the industry. The pros tend to outweigh the cons.

Hardware Commoditization
The commoditization of hardware will continue to eat away at proprietary hardware businesses. The cost savings from economies of scale always seem to overpower the benefits of specialized solutions. Looking at history, there has been a long pattern of mass-market produced products completely wiping out low-volume high-end products, even superior ones. Open source software using off-the-shelf hardware will become more common as we move toward the commoditization of storage.

I believe most enterprises in general lack the in-house talent required to combine third-party or open source storage software with commodity hardware in a way that can guarantee the scalability and resilience that would be required. I think we’re moving in that direction, but we’re not likely to see it become prevalent in enterprise storage soon.

The mix of storage vendors in typical enterprises is not likely to be radically changed anytime soon, but it’s coming. Startups, even with their innovative storage software, have to deal with concerns about interoperability, supportability and resilience, and those concerns aren’t going anywhere. While the endorsement of a startup by one of the major vendors could change that, I think the current largest players like Dell/EMC and NetApp might be apprehensive in accelerating the move to storage hardware commoditization.

Open Source
I believe that software innovation has decisively shifted to open source, and we’re seeing that more and more in the enterprise storage space. You can take a look at many current open source solutions in my previous blog post here. Moreover, I can’t think of a single software market that has a proprietary packaged software vendor that defines and leads the field. Open source allows fast access to innovative software at little or no cost, allowing IT organizations to redirect their budget to other new initiatives.

Enterprise architecture groups, which have generally focused on choosing which proprietary vendor to lock themselves into, are now faced with the onerous task of selecting the appropriate open source software components, figuring out how they’ll be integrated, and doing interoperability testing, all while ensuring that they maintain a reliable infrastructure for the business. As you might expect, implementing open source requires a much higher level of technical ability than traditional proprietary solutions. Having the programming knowledge to build and support an open source solution is far different than operating someone else’s supported solution. I’m seeing some traditional vendors move to a “milk the installed base” strategy and stifle their own internal innovation. If we want to showcase ourselves as technology leaders, we’re going to have to embrace open source solutions, despite the drawbacks.

While open source software can increase complexity and include poorly tested features and bugs, the overall maturity and general usability of open source storage software has been improving in recent years. With the right staff, implementation risks can be managed. For some businesses, the cost benefits of moving to that model are very tangible. Open source software has become commonplace in the enterprise, especially in the Linux realm. Linux of course pretty much started the open source movement, followed by widely adopted enterprise applications like MySQL, Apache, and Hadoop. Open source software can allow businesses to develop IT solutions to address challenges that are customized and innovative while at the same time bringing down acquisition costs by using commodity hardware.

NVMe
Storage industry analysts have predicted the slow death of Fibre Channel based storage for a long time. I expect that trend to speed up, with the steadily increasing speed of standard Ethernet all but eliminating the need for proprietary SAN connections and the expensive Fibre Channel infrastructure that comes along with them. NVMe over Ethernet will drive it. NVMe is a high performance interface for solid-state drives (SSDs) and, predictably, it will be embraced by all-flash vendors moving forward.

All the current storage trends you’ve read about around efficiency, flash, performance, big data, machine learning, object storage, hyper-converged infrastructure, etc. are all moving against the current Fibre Channel standard. Production deployments are not yet widespread, but it’s coming. It allows vendors and customers to get the most out of flash (and other non-volatile memory) storage. The rapid growth of all-flash arrays has kept Fibre Channel alive because an all-flash array typically replaces a legacy disk or hybrid Fibre Channel array.

Legacy Fibre Channel vendors like Emulex, QLogic, and Brocade have been acquired by larger companies so those companies can milk the cash flow from expensive FC hardware before their customers convert to Ethernet. I don't see any growth or innovation in the FC market moving forward.

Flash
In case you haven't noticed, it's near the end of 2017 and flash has taken over. It was widely predicted, and from what I've seen personally, those predictions absolutely came true. While flash still may not rule the data center overall, new purchases have trended that way for quite some time now. Within the past year the organizations I've worked for have completely eliminated spinning disk from block storage purchases, relying instead on the value proposition of all-flash, with data reduction capabilities making up for the smaller footprint. SSDs are now growing in capacity faster than HDDs (15TB SSDs have been announced), and every storage vendor now has an all-flash offering.

Consolidate and Innovate
The environment for flash startups is getting harder because all the traditional vendors now offer their own all-flash options. There are still startups making exciting progress in NVMe over Fabrics, object storage, hyper-converged infrastructure, data classification, and persistent memory, but only a few can grow into profitability on their own. We will continue to see acquisitions of these smaller, innovative startups as the larger companies struggle to develop similar technologies internally.

RDMA
RDMA will continue to become more prevalent in enterprise storage, as it significantly boosts performance. RDMA, or Remote Direct Memory Access, has actually been around in the storage arena for quite a while as a cluster interconnect and for HPC storage. Most high-performance scale-out storage arrays use RDMA for their cluster communications; examples include Dell FluidCache for SAN, XtremIO, VMAX3, IBM XIV, InfiniDat, and Kaminario. A Microsoft blog post I was reading showed 28% more throughput, realized through reduced I/O latency. It also illustrated that RDMA is more CPU efficient, which leaves the CPU available to run more virtual machines. TCP/IP is of course no slouch and is absolutely still a viable deployment option. While not quite as fast and efficient as RDMA, it will remain well suited for organizations that lack the expertise needed for RDMA.

The Importance of Scale-Out
Scale-up storage is showing its age. If you're reading this, you probably know that scale-up is bound by the scalability limits of its storage controllers and has for years led to storage system sprawl. As we move into multi data center architectures, especially in the world of object storage, clusters will be extended by adding nodes in different geographical areas. Because object storage is geo-aware (I am in the middle of a global ECS installation), policies can be established to distribute data into these other locations. As a user accesses the storage, the object storage system will return data from the node that provides the best response time to that user. As data storage needs continue to rapidly grow, it's critical to move toward scale-out architecture vs. scale-up. The scalability that scale-out storage offers will help reduce costs, complexity, and resource allocation.

GDPR
The General Data Protection Regulation takes effect in 2018 and applies to any entity doing business within any EU country. Under the GDPR, companies will need to build controls around security roles and levels in regard to data access and data transfer, and must provide tight data-breach detection and notification protocols. As process controls, these will probably have little impact on your infrastructure; however, the two main points within the GDPR with the most potential to directly impact storage are data protection by design and data privacy by default.

The GDPR is going to require you to think about the benefits of cloud vs. on-prem solutions. Data will have to meet the principle of privacy by default, be in an easily portable format, and meet the data minimization principle. Liability under the new regulation falls on all parties, however, so cloud providers will have to have robust compliance solutions in place as well, meaning a cloud or hybrid solution could be a simpler, less expensive route in the future.

XtremIO Manual Log File Collection Procedure

If you have a need to gather XtremIO logs for EMC to analyze and they are unable to connect via ESRS, there is a method to gather them manually.  Below are the steps on how to do it.

1. Log in to the XtremIO Management System (XMS) GUI interface with the ‘admin‘ user account.

2. Click on the ‘Administration‘ tab, which is on the top of the XtremIO Management System (XMS) GUI banner bar.

3. On the left side of the Administration window, choose the ‘CLI Terminal‘ option.

4. Once you have the CLI terminal up, enter the following CLI command at the ‘xmcli (admin)>‘ prompt: create-debug-info. This command generates a set of XtremIO dossier log files, and it may take a little while to complete. Once the command completes and returns you to the ‘xmcli (admin)>’ prompt, a complete package of XtremIO dossier log files will be available for you to download.

Example:

xmcli (admin)> create-debug-info
The process may take a while. Please do not interrupt.
Debug info collected and could be accessed via http:// <XMS IP Address> /XtremApp/DebugInfo/104dd1a0b9f56adf7f0921d2f154329a.tar.xz

Important Note: If you have more than one cluster managed by the XMS server, you will need to select the specific cluster.

xmcli (e012345)> show-clusters

Cluster-Name Index State  Gates-Open Conn-State Num-of-Vols Num-of-Internal-Volumes Vol-Size UD-SSD-Space Logical-Space-In-Use UD-SSD-Space-In-Use Total-Writes Total-Reads Stop-Reason Size-and-Capacity
XIO-0881     1     active True       connected  253         0                       60.550T  90.959T      19.990T              9.944T              442.703T     150.288T   none        4X20TB
XIO-0782     2     active True       connected  225         0                       63.115T  90.959T      20.993T              9.944T              207.608T     763.359T   none        4X20TB
XIO-0355     3     active True       connected  6           0                       2.395T   41.111T      1.175T               253.995G            6.251T       1.744T     none        2X40TB

xmcli (e012345)> create-debug-info cluster-id=3

5. Once the ‘create-debug-info‘ command completes, you can use a web browser to navigate to the HTTP address link that’s provided in the terminal session window.  After navigating to the link, you’ll be presented with a pop-up window asking you to save the log file package to your local machine.  Save the log file package to your local machine for later upload.
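If a browser isn’t convenient, the same download can be scripted from a shell. This is a sketch of my own rather than an EMC-documented step; the XMS address and file hash below are placeholder examples taken from the output format shown earlier, and the exact message text may vary between versions.

```shell
# Pull the bundle URL out of the create-debug-info output, then fetch it.
# The output line below is a placeholder example, not real data.
output='Debug info collected and could be accessed via http://10.1.1.10/XtremApp/DebugInfo/104dd1a0b9f56adf7f0921d2f154329a.tar.xz'

# Extract just the http://...tar.xz portion of the message
url=$(echo "$output" | grep -o 'http://[^ ]*\.tar\.xz')
echo "Download URL: $url"

# Uncomment to actually download the bundle:
# curl -O "$url"
```

In practice you would capture the real command output (for example from an ssh session to the XMS) into the output variable instead of hard-coding it.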

6. Attach the XtremIO dossier log file package you downloaded to the EMC Service Request (SR) you currently have open or are in the process of opening.  Use the ‘Attachments’ (the paperclip button) area located on the Service Request page for upload.

7. You also have the ability to view a historical listing of all XtremIO dossier log file packages that are currently available on your system. To view them, issue the following XtremIO CLI command: show-debug-info. A series of log file packages will be listed.  It’s possible EMC may request a historical log file package for baseline information when troubleshooting.  To download, simply reference the HTTP links listed under the ‘Output-Path‘ header and input the address into your web browser’s address bar to start the download.

Example:

xmcli (tech)> show-debug-info
Name  Index  System-Name   Index   Debug-Level   Start-Time                 Create-Time                Output-Path
      1      XtremIO-SVT   1       medium        Mon Aug 14 15:55:10 2017   Mon Aug 14 16:09:40 2017   http://<XMS IP Address>/XtremApp/DebugInfo/1aaf4b1acd88433e9aca5b022b5bc43f.tar.xz
      2      XtremIO-SVT   1       medium        Mon Aug 14 15:55:10 2017   Mon Aug 14 16:09:40 2017   http://<XMS IP Address>/XtremApp/DebugInfo/af5001f0f9e75fdd9c0784c3d742531f.tar.xz

That’s it! It’s a fairly straightforward process.

Configuring a Powershell to Isilon Connection with SSL

PowerShell provides an easy way to access the Isilon REST API, but in my environment I need to use true SSL validation. If you are using the default self-signed certificate on the Isilon, your connection will likely fail with an error similar to the one below:

The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

Isilon generates a self-signed certificate by default.  Certificate validation for the current PowerShell session can be disabled with the script below; however, in my environment I'm not allowed to do that.  I'm including it for completeness in case it is useful for someone else. It was not written by me and is distributed under a BSD 3-Clause license.

function Disable-SSLValidation{
<#
.SYNOPSIS
    Disables SSL certificate validation
.DESCRIPTION
    Disable-SSLValidation disables SSL certificate validation by using reflection to implement the System.Net.ICertificatePolicy class.
    Author: Matthew Graeber (@mattifestation)
    License: BSD 3-Clause
.NOTES
    Reflection is ideal in situations when a script executes in an environment in which you cannot call csc.exe to compile source code. If compiling code is an option, then implementing System.Net.ICertificatePolicy in C# and Add-Type is trivial.
.LINK
    http://www.exploit-monday.com
#>
    Set-StrictMode -Version 2
    # Short-circuit if this function has already been run
    if ([System.Net.ServicePointManager]::CertificatePolicy.ToString() -eq 'IgnoreCerts') { Return }
    $Domain = [AppDomain]::CurrentDomain
    $DynAssembly = New-Object System.Reflection.AssemblyName('IgnoreCerts')
    $AssemblyBuilder = $Domain.DefineDynamicAssembly($DynAssembly, [System.Reflection.Emit.AssemblyBuilderAccess]::Run)
    $ModuleBuilder = $AssemblyBuilder.DefineDynamicModule('IgnoreCerts', $false)
    $TypeBuilder = $ModuleBuilder.DefineType('IgnoreCerts', 'AutoLayout, AnsiClass, Class, Public, BeforeFieldInit', [System.Object], [System.Net.ICertificatePolicy])
    $TypeBuilder.DefineDefaultConstructor('PrivateScope, Public, HideBySig, SpecialName, RTSpecialName') | Out-Null
    $MethodInfo = [System.Net.ICertificatePolicy].GetMethod('CheckValidationResult')
    $MethodBuilder = $TypeBuilder.DefineMethod($MethodInfo.Name, 'PrivateScope, Public, Virtual, HideBySig, VtableLayoutMask', $MethodInfo.CallingConvention, $MethodInfo.ReturnType, ([Type[]] ($MethodInfo.GetParameters() | % {$_.ParameterType})))
    $ILGen = $MethodBuilder.GetILGenerator()
    $ILGen.Emit([Reflection.Emit.Opcodes]::Ldc_I4_1)
    $ILGen.Emit([Reflection.Emit.Opcodes]::Ret)
    $TypeBuilder.CreateType() | Out-Null

    # Disable SSL certificate validation
   [System.Net.ServicePointManager]::CertificatePolicy = New-Object IgnoreCerts
}

While that code may work fine for some, for security reasons you may not want to or be able to disable certificate validation.  Fortunately, you can create your own key pair with PuTTYgen and use key-based SSH authentication instead.  This solution was tested to work with OneFS v7.2.x and PowerShell v3.

Here are the steps for creating your own key pair for PowerShell SSL authentication to Isilon:

Generate the Key

  1. Download PuTTYgen to generate the keypair for authentication. Open PuTTYgen and click Generate.
  2. It’s important to note that PowerShell requires the key to be exported in OpenSSH format, which is done from the Conversions menu with the ‘Export OpenSSH key’ option.  Save the key without a passphrase.  It can be named something like “SSH.key”.
  3. Next we need to save the public key.  Copy the text in the upper box labeled “Public key for pasting into OpenSSH authorized_keys file” and paste it into a new text file.  You can then save the file as “authorized_keys” for later use.

Copy the Key

  1. Copy the authorized_keys file to a location of your choosing on the Isilon cluster.
  2. Open an SSH connection to the Isilon cluster and create a folder for the authorized_keys file.
    Example command:  isi_for_array mkdir /root/.ssh
  3. Copy the file to all nodes. Example command: isi_for_array cp /ifs/local/authorized_keys /root/.ssh/
  4. Verify that the file is available on all of the nodes, and it’s also a good idea to verify that the checksum is correct. Example command: isi_for_array md5 /root/.ssh/authorized_keys
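Taken together, the distribution and verification steps above amount to three commands on the cluster. The sketch below mimics the flow locally with example paths under /tmp/demo so it can be tried anywhere; on the cluster, each command would be wrapped in isi_for_array as shown in the steps (and OneFS uses md5 rather than md5sum).

```shell
# Local mimic of the key distribution flow; /tmp/demo stands in for the
# cluster paths used in the steps above, and the key content is a dummy.
mkdir -p /tmp/demo/.ssh
printf 'ssh-rsa AAAAB3NzaC1yc2E... example@host\n' > /tmp/demo/authorized_keys

# Copy the key into place, then verify the copy's checksum matches the source
cp /tmp/demo/authorized_keys /tmp/demo/.ssh/
src=$(md5sum /tmp/demo/authorized_keys | awk '{print $1}')
dst=$(md5sum /tmp/demo/.ssh/authorized_keys | awk '{print $1}')
[ "$src" = "$dst" ] && echo "checksums match"
```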

Install PowerShell SSH Module

  1. In order to execute commands via SSH using PowerShell you will need an SSH module.  Various options exist; however, the module from PowerShellAdmin works fine. It can run commands via SSH on remote hosts such as Linux or Unix computers, VMware ESX(i) hosts, or network equipment such as routers and switches that support SSH, and it works well with OpenSSH-type servers.

You can visit the PowerShellAdmin page here,  and here is the direct download link for the SessionsPSv3.zip file.

  1. Once you’ve downloaded it, unzip the file to the SSH-Sessions folder, located in C:\Windows\System32\WindowsPowerShell\v1.0\Modules. With that module in place, we are now ready to connect with PowerShell to the Isilon cluster.

Test it

Below is a PowerShell script you can use to test your connection; it simply runs a df command on the cluster.

#PowerShell Test Script
Import-Module "SSH-Sessions"
$Isilon = "<hostname>"
$KeyFile = "C:\scripts\<filename>.key"  # the private key exported from PuTTYgen earlier
New-SshSession -ComputerName $Isilon -Username root -KeyFile $KeyFile
Invoke-SshCommand -Verbose -ComputerName $Isilon -Command df
Remove-SshSession -ComputerName $Isilon

Isilon Mitrend Data Gathering Procedure

Mitrend is an extremely useful IT infrastructure analysis service that provides excellent health, growth, and workload profiling assessments.  The service can process input source data from EMC and many non-EMC arrays, from host operating systems, and also from some applications.  In order to use the service, certain support files must be gathered before submitting your analysis request.  I had previously run the reports myself as an EMC customer, but sometime in the recent past that ability was removed for customers and is now restricted to EMC employees and partners. You can of course simply send the files to your local EMC support team and they will be able to submit them for a report on your behalf.  The reports are very detailed and extremely helpful for a general health check of your array; data is well organized into a PowerPoint slide presentation, and the raw data is also made available in Excel format.

My most recent analysis request was for Isilon, and below are the steps you’ll need to take to gather the appropriate information to receive your Isilon Mitrend report.  The performance impact of running the data gather is expected to be minimal, but if the impact is a concern you should consider the timing of the run. I have never personally had an issue with performance when running the data gather, and the performance data is much more useful if it’s collected during peak periods. The script is compatible with the virtual OneFS Simulator and can be tested there before running it on any production cluster. If you notice performance problems while the script is running, pressing Control+C in the console window will terminate it.

Obtain & Verify isi_gather_perf file

You will need to obtain a copy of the isi_gather_perf.tgz file from your local EMC team if you don’t already have one.  Verify that the file you receive is 166 KB in size. To verify that isi_gather_perf.tgz is not corrupted or truncated, you can run the following command once the file is on the Isilon cluster.

Isilon-01# file /ifs/isi_gather_perf.tgz

Example of a good file:

Isilon-01# file /ifs/isi_gather_perf.tgz /ifs/isi_gather_perf.tgz:
gzip compressed data, from Unix, last modified: Tue Nov 18 08:33:49 2014
data file is ready to be executed

Example of a corrupt file:

Isilon-01# file /ifs/isi_gather_perf.tgz /ifs/isi_gather_perf.tgz:
data file is corrupt
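Since the bundle is gzip-compressed data, another way to catch a truncated or corrupted copy is gzip’s built-in integrity test. This is my own suggestion rather than an EMC-documented step; the sketch below demonstrates it on a freshly created example archive, and on the cluster you would substitute /ifs/isi_gather_perf.tgz.

```shell
# gzip -t test-decompresses an archive without writing any output;
# a corrupt or truncated file fails the test with a nonzero exit code.
printf 'example data' | gzip > /tmp/example.tgz

if gzip -t /tmp/example.tgz 2>/dev/null; then
    echo "archive OK"
else
    echo "archive corrupt"
fi
```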

Once you’ve verified that the file is valid, you must manually run a Cluster Diagnostics gather. On the OneFS web interface, navigate to Cluster Management > Diagnostics > Gather Info and click the “Start Gather” button. Depending on the size of the cluster, it will take about 15 minutes. This process will automatically create a folder on the cluster called “Isilon_Support”, created under “ifs/data/”.

Gather Performance Info

Below is the process that I used.  Different methods of transferring files can of course be used, but I use WinSCP to copy files directly to the cluster from my Windows laptop, and I use PuTTY for CLI management of the cluster via SSH.

1. Copy the isi_gather_perf.tgz to the Isilon cluster via SCP.

2.  Log into the cluster via ssh.

3. Copy the isi_gather_perf.tgz to /ifs/data/Isilon_Support, if it’s not there already.

4. Change to the Isilon Support Directory

 Isilon-01# cd /ifs/data/Isilon_Support

5. Extract the compressed file

 Isilon-01# tar zxvf /ifs/data/Isilon_Support/isi_gather_perf.tgz

After extraction, a new directory will be automatically created within the “Isilon_Support” directory named “isi_gather_perf”.

6. Start ‘Screen’

 Isilon-01# screen

7.  Execute the performance gather.  All output data is written to /ifs/data/Isilon_Support/isi_gather_perf/.  Extracting the file creates a new directory named “isi_gather_perf” which contains the script “isi_gather_perf”.  The default option gathers 24 hours of performance data and then creates a bundle with the gathered data.

Isilon-01# nohup python /ifs/data/Isilon_Support/isi_gather_perf/isi_gather_perf

8. At the end of the run, the script will create a .tar.gz archive of the captured data in /ifs/data/Isilon_Support/isi_gather_perf/. Gather the output files and send them to EMC.  Once EMC submits the files to Mitrend, it can take up to 24 hours for them to be processed.

Notes:

Below is a list of the command options available.  You may want to change the frequency at which the command samples data and the length of time it runs with the -i and -r options.

 Usage: isi_gather_perf [options]

 Options:
 -h, --help Show this help message and exit
 -v, --version Print Version
 -d, --debug Enable debug log output Logs: /tmp/isi_gather_perf.log
 -i INTERVAL, --interval=INTERVAL
 Interval in seconds to sample performance data
 -r REPEAT, --repeat=REPEAT
 Number of times to repeat specified interval.
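The total capture window is simply the interval multiplied by the repeat count. A quick sketch of the arithmetic; the interval and repeat values below are illustrative assumptions, not recommendations:

```shell
# Example: 30-second samples repeated 2880 times = 86400 seconds (24 hours).
INTERVAL=30
REPEAT=2880
echo "Total capture: $(( INTERVAL * REPEAT / 3600 )) hours"

# The equivalent invocation on the cluster would then be:
# nohup python /ifs/data/Isilon_Support/isi_gather_perf/isi_gather_perf -i 30 -r 2880
```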

Logs:

The logs are located in /ifs/data/Isilon_Support/isi_gather_perf/gathers/ and by default are set to debug level, so they are extremely verbose.

Output:

The output from isi_gather_info will go to /ifs/data/Isilon_Support/pkg/
The output from isi_gather_perf will be /ifs/data/Isilon_Support/isi_gather_perf/gathers/

Machine Learning, Cognitive Computing, and the Storage Industry

In context with my recent posts about object storage and software defined storage, this is another topic that simply interested me enough to want to do a bit of research, both about the topic in general and about how it relates to the industry I work in.  I discovered that there is a wealth of information on the topics of Machine Learning, Cognitive Computing, Artificial Intelligence, and Neural Networking, so much that writing a summary is difficult.  Well, here’s my attempt.

There is pressure in the enterprise software space to incorporate new technologies in order to keep up with the needs of modern businesses. As we move farther into 2017, I believe we are approaching another turning point in technology where many concepts that were previously limited to academic research or industry niches are now being considered for actual mainstream enterprise software applications.  I believe you’ll see Machine learning and cognitive systems becoming more and more visible in the coming years in the enterprise storage space. For the storage industry, this is very good news. As this technology takes off, it will result in the need to retain massive amounts of unstructured data in order to train the cognitive systems. Once machines can learn for themselves, they will collect and generate a huge amount of data to be stored, intelligently categorized and subsequently analyzed.

The standard joke about artificial intelligence (or machine learning in general) is that, like nuclear fusion, it has been the future for more than half a century now.  My goal in this post is to define the concepts, look at ways this technology has already been implemented, look at how it affects the storage industry, and investigate use cases for this technology.  I’m writing this paragraph before I start, so we’ll see how that goes. 🙂

What is Cognitive Computing?

Cognitive computing is the simulation of human thought processes using computerized modeling (the best-known example is probably IBM’s Watson). It incorporates self-learning systems that use data mining, pattern recognition, and natural language processing to imitate the way our brains process thoughts. The goal of cognitive computing is to create automated IT systems that are capable of solving problems without requiring human assistance.

This sounds like the stuff of science fiction, right? HAL (from the movie “2001: A Space Odyssey”) came to the logical conclusion that his crew had to be eliminated. It’s my hope that intelligent storage arrays utilizing cognitive computing will come to the conclusion that 99.9 percent of stored data has no value and therefore should be deleted.  It would eliminate the need for me to build my case for archiving year after year. 🙂

Cognitive computing systems work by using machine learning algorithms; the two are inescapably linked. These systems continuously gather knowledge from the data fed into them by mining it for information. They progressively refine the methods they use to look for and process data until they become capable of anticipating new problems and modeling possible solutions.

Cognitive computing is a new field that is just beginning to emerge. It’s about making computers more user friendly with an interface that understands more of what the user wants. It takes signals about what the user is trying to do and provides an appropriate response. Siri, for example, can answer questions but also understands context of the question. She can ascertain whether the user is in a car or at home, moving quickly and therefore driving, or moving more slowly while walking. This information contextualizes the potential range of responses, allowing for increased personalization.

What Is Machine Learning?

Machine Learning is a subset of the larger discipline of Artificial Intelligence, which involves the design and creation of systems that are able to learn based on the data they collect. A machine learning system learns by experience. Based on specific training, the system will be able to make generalizations from its exposure to a number of cases and will then be able to perform actions after new or unforeseen events. Amazon already uses this technology as part of its recommendation engine. It’s also commonly used by ad feed systems that serve ads based on web surfing history.

While machine learning is a tremendously powerful tool for extracting information from data, it’s not a silver bullet for every problem. The questions must be framed and presented in a way that allows the learning algorithms to answer them. Because the data needs to be set up in the appropriate way, that can add additional complexity. Sometimes the data needed to answer the questions may not be available. Once the results are available, they also need to be interpreted to be useful, and it’s essential to understand the context. A sales algorithm can tell a salesman what’s working best, but he still needs to know how to use that information to increase his profits.

What’s the difference?

Without cognition there cannot be good artificial intelligence, and without artificial intelligence cognition can never be expressed. Cognitive computing involves self-learning systems that use pattern recognition and natural language processing to mimic the way the human brain works, with the goal of creating automated systems that are capable of solving problems without requiring human assistance. Cognitive computing is used in A.I. applications, so cognitive computing can also be considered a subset of Artificial Intelligence.

If this seems like a slew of terms that all mean almost the same thing, you’d be right. Cognitive Computing and Machine Learning can both be considered subsets of Artificial Intelligence. What’s the difference between artificial intelligence and cognitive computing? Let’s use a medical example. In an artificial intelligence system, machine learning would tell the doctor which course of action to take based on its analysis. In cognitive computing, the system would provide information to help the doctor decide, quite possibly with a natural language response (like IBM’s Watson).

In general, Cognitive computing systems include the following ostensible characteristics:

  • Machine Learning
  • Natural Language Processing
  • Adaptive algorithms
  • Highly developed pattern recognition
  • Neural Networking
  • Semantic understanding
  • Deep learning (Advanced Machine Learning)

How is Machine Learning currently visible in our everyday lives?

Machine Learning has fundamentally changed the ways in which businesses relate to their customers. When you click “like” on a Facebook post, your feed is dynamically adjusted to contain more content like that in the future. When you buy a Sony PlayStation on Amazon and it recommends that you also buy an extra controller and a top-selling game for the console, that’s their recommendation engine at work. Both of those examples use machine learning technology, and both affect most people’s everyday lives. Machine learning technology delivers educated recommendations to people to help them make decisions in a world of almost endless choices.

Practical business applications of Cognitive Computing and Machine Learning

Now that we have a pretty good idea of what this all means, how is this technology actually being used today in the business world? Artificial Intelligence has been around for decades, but has been slow to develop due to the storage and compute requirements being too expensive to allow for practical applications. In many fields, machine learning is finally moving from science labs to commercial and business applications. With cloud computing and robust virtualized storage solutions providing the infrastructure and necessary computational power, machine learning developments are offering new capabilities that can greatly enhance enterprise business processes.

The major approaches today include using neural networks, case-based learning, genetic algorithms, rule induction, and analytic learning. The current uses of the technology combine all of these analytic methods, or a hybrid of them, to help guarantee effective, repeatable, and reliable results. Machine learning is a reality today and is being used very effectively and efficiently. Despite what many business people might assume, it’s no longer in its infancy. It’s used quite effectively across a wide array of industry applications and is going to be part of the next evolution of enterprise intelligence business offerings.

There are many other areas where machine learning can play an important role. This is most notable in systems so complex that explicit algorithms are difficult to design, when an application requires the software to adapt to an operational environment, or with applications that need to work with large and complex data sets. In those scenarios, machine learning methods play an increasing role in enterprise software applications, especially for the types of applications that need in-depth data analysis and adaptability, like analytics, business intelligence, and big data.

Now that I’ve discussed some general business applications for the technology, I’ll dive in to how this technology is being used today, or is in development and will be in use in the very near future.

  1. Healthcare and Medicine. Computers will never completely replace doctors and nurses, but in many ways machine learning is transforming the healthcare industry. It’s improving patient outcomes and in general changing the way doctors think about how they provide quality care. Machine learning is being implemented in health care in many ways: Improving diagnostic capabilities, medicinal research (medicines are being developed that are genetically tailored to a person’s DNA), predictive analytics tools to provide accurate insights and predictions related to symptoms, diagnoses, procedures, and medications for individual patients or patient groups, and it’s just beginning to scratch the surface of personalized care. Healthcare and personal fitness devices connected via the Internet of Things (IoT) can also be used to collect data on human and machine behavior and interaction. Improving quality of life and people’s health is one of the most exciting use cases of Machine Learning technologies.
  2. Financial services. Machine Learning is being used for predicting credit card risk, managing an individual’s finances, flagging criminal activity like money laundering and fraud, as well as automating business processes like call centers and insurance claim processing with trained AI agents. Product recommendation systems for a financial advisor or broker must leverage current interests, trends, and market movements for long periods of time, and ML is well suited to that task.
  3. Automating business analysis, reporting, and work processes. Machine learning automation systems that use detailed statistical analysis to process, analyze, categorize, and report on their data exist today. Machine learning techniques can be used for data analysis and pattern discovery and can play an important role in the development of data mining applications. Machine learning is enabling companies to increase growth and optimize processes, increase customer satisfaction, and improve employee engagement. As one specific example, adaptive analytics can be used to help stop customers from abandoning a website by analyzing and predicting the first signs they might log off and causing live chat assistance windows to appear. They are also good at upselling by showing customers the most relevant products based on their shopping behavior at that moment. A large portion of Amazon’s sales are based on their adaptive analytics; you’ll notice that you always see “Customers who purchased this item also viewed” when you view an item on their web site. Businesses are presently using machine learning to improve their operations in many other ways. Machine learning technology allows businesses to personalize customer service, for example with chatbots for customer relations. Customer loyalty and retention can be improved by mining customer actions and targeting their behavior. HR departments can improve their hiring processes by using ML to shortlist candidates. Security departments can use ML to assist with detecting fraud by building models based on historical transactions and social media. Logistics departments can improve their processes by allowing contextual analysis of their supply chain. The possibilities for applying this technology across many typical business challenges are truly exciting.
  4. Playing Games. Machine learning systems have been taught to play games, and I’m not just talking about video games: board games like Go, IBM’s Watson in games of chess and Jeopardy, as well as modern real-time strategy video games, all with great success. When Watson defeated Brad Rutter and Ken Jennings in the Jeopardy! Challenge of February 2011, it showcased Watson’s ability to learn, reason, and understand natural language with machine learning technology. In game development, machine learning has been used for gesture recognition in Kinect and camera-based interfaces, and it has also been used in some fighting-style games to analyze and mimic the style of the human player’s moves, such as the character ‘Mokujin’ in Tekken.
  5. Predicting the outcome of legal proceedings. A system developed by a team of British and American researchers was shown to correctly predict a court’s decision with a high degree of accuracy. The study can be viewed here: https://peerj.com/articles/cs-93/. While computers are not likely to replace judges and lawyers, the technology could very effectively be used to assist the decision-making process.
  6. Validating and Customizing News content. Machine learning can be used to create individually personalized news, and screening and filtering out “fake news” has become a more recent investigative priority, especially given today’s political landscape. Facebook’s director of AI research Yann LeCun was quoted saying that machine learning technology that could squash fake news “either exists or can be developed.” A challenge aptly named the “Fake News Challenge” was developed for technology professionals; you can view their site at http://www.fakenewschallenge.org/ for more information. Whether or not it actually works is dubious at the moment, but the application could have far-reaching positive effects for democracy.
  7. Navigation of self-driving cars. Using sensors and onboard analytics, cars are learning to recognize obstacles and react to them appropriately using machine learning. Google’s experimental self-driving cars currently rely on a wide range of radar, lidar, and other sensors to spot pedestrians and other objects. Eliminating some or all of that equipment would make the cars cheaper and easier to design, and would speed up mass adoption of the technology. Google has been developing its own video-based pedestrian detection system for years using machine learning algorithms. Back in 2015, its system was capable of accurately identifying pedestrians within 0.25 seconds, with 0.07-second identification being the benchmark needed for such a system to work in real time. This is all good news for storage manufacturers. Typical luxury cars have up to around 200 GB of storage today, primarily for maps and other entertainment functionality. Self-driving cars will likely need terabytes of storage, and not just for the car to drive itself. Storage will be needed for intelligent assistants in the car, advanced voice and gesture recognition, caching software updates, and caching files to storage to reduce peak network bandwidth utilization.
  8. Retail Sales. Applications of ML are almost limitless when it comes to retail. Product pricing optimization, sales and customer service trending and forecasting, precise ad targeting with data mining, website content customization, and prospect segmentation are all great examples of how machine learning can boost sales and save money. The digital trail left by customers’ interactions with a business, both online and offline, can provide huge amounts of data to a retailer, and that data is where machine learning comes in. Machine learning can look at history to determine which factors are most important, and find the best way to predict what will occur based on a much larger set of variables. To implement real-time personalization, systems must take into account not only the market trends of the past year, but what happened as recently as an hour ago. Machine learning applications can discover which items are not selling and pull them from the shelves before a salesperson notices, and even keep overstock from showing up in the store at all with improved procurement processes. A good example of the machine learning personalized approach to customers can be found in the Jackets and Vests section of the North Face website: click on “Shop with IBM Watson” and experience what is almost akin to a human sales associate helping you choose which jacket you need.
  9. Recoloring black and white images. Ted Turner’s dream come true. Using computers to recognize objects and learn what they should look like to humans, color can be returned to both black and white pictures and video footage. Google’s DeepDream (https://research.googleblog.com/2015/07/deepdream-code-example-for-visualizing.html) is probably the best-known example; it has been trained by examining millions of images of just about everything. It analyzes images in black and white and then colors them the way it thinks they should be colored. The “colorize” project is also taking up the challenge; you can view their progress at http://tinyclouds.org/colorize/ and download the code. A good online example is at Algorithmia, which allows you to upload and convert an image online: http://demos.algorithmia.com/colorize-photos/
  10. Enterprise Security. Security and data loss are major concerns for the modern enterprise. Some storage vendors are beginning to use artificial intelligence and machine learning to prevent data loss, increase availability, and reduce downtime via smart data recovery and systematic backup strategies. Machine learning allows for smart security features to detect data and packet loss during transit and within data centers. Years ago it was common practice to spend a great deal of time reviewing security logs on a daily basis. You were expected to go through everything and manually determine the severity of any of the alerts or warnings as you combed through mountains of information. As time progresses it becomes more and more unrealistic for this process to remain manual. Machine learning technology is currently implemented and is very effective at filtering out what deviates from normal behavior, be it with live network traffic or mountains of system log files. While humans are also very good at finding patterns and noticing odd things, computers are really good at doing that repetitive work at a much larger scale, complementing what an analyst can do. Interested in looking at some real-world examples of machine learning as it relates to security? There are many out there. Clearcut is one example of a tool that uses machine learning to help you focus on log entries that really need manual review. David Bianco created a relatively simple Python script that can learn to find malicious activity in HTTP proxy logs. You can download David’s script here: https://github.com/DavidJBianco/Clearcut. I also recommend taking a look at the Click Security project, which includes many code samples (http://clicksecurity.github.io/data_hacking/), as well as PatternEx, a SecOps tool that predicts cyber attacks (https://www.patternex.com/).
  11. Musical Instruments. Machine learning can also be used in more unexpected ways, even in creative outlets like making music. In the world of electronic music, new synthesizers and hardware are created and developed often, and the rise of machine learning is altering the landscape. Machine learning will allow instruments the potential to be more expressive, complex, and intuitive in ways previously experienced only through traditional acoustic instruments. A good example of a new instrument using machine learning is the Mogees instrument, a device that attaches to your iPhone and uses a contact microphone to pick up sound from everyday objects. Machine learning could make it possible to use a drum machine that adapts to your playing style, learning as much about the player as the player learns about the instrument. Simply awe inspiring.

What does this mean for the storage industry?

As you might expect, this is all very good news for the storage industry and very well may lead to more and more disruptive changes. Machine learning has an almost insatiable appetite for data storage. It will consume huge quantities of capacity while at the same time require very high levels of throughput. As adoption of Cognitive Computing, Artificial Intelligence, and machine learning grows, it will attract a growing number of startups eager to solve the many issues that are bound to arise.

The rise of machine learning is set to alter the storage industry in very much the same way that PCs helped reshape the business world in the 1980s. Just as PCs have advanced from personal productivity applications like Lotus 1-2-3 to large-scale Oracle databases, machine learning is poised to evolve from consumer functions like Apple’s Siri to full-scale data-driven programs that will drive global enterprises. So, in what specific ways is this technology set to alter and disrupt the storage industry? I’ll review my thoughts on that below.

  1. Improvements in Software-Defined Storage. I recently dove into software-defined storage in a blog post (https://thesanguy.com/2017/06/15/defining-software-defined-storage-benefits-strategy-use-cases-and-products/). As I described in that post, there are many use cases and a wide variety of software-defined storage products in the market right now. Artificial intelligence and machine learning will spark faster adoption of software-defined storage, especially as products are developed that use the technology to allow storage to be self-configuring. Once storage is all software-defined, algorithms can be integrated and far-reaching enough to process and solve complicated storage management problems because of the huge amount of data they can now access. This is a necessary step in building the monitoring, tuning, and healing capabilities needed for self-driving software-defined storage.
  2. Overall Costs will be reduced. Enterprises are moving towards cloud storage and fewer dedicated storage arrays. Dynamic software-defined storage that integrates machine learning could help organizations more efficiently utilize the capacity that they already own.
  3. Hybrid Storage Clouds. Public vs. private cloud has been a hot topic in the storage industry, and with the rise of machine learning and software-defined storage it’s becoming more and more of a moot point. Well-designed software-defined architectures should be able to transition data seamlessly from one type of cloud to another, and machine learning will be used to implement that concept without human intervention. Data will be analyzed and logic engines will automate data movement. The hybrid cloud is very likely to flourish as machine learning technologies are adopted in this space.
  4. Flash Everywhere. Yes, the concept of “flash first” has been promoted for years now, and machine learning simply furthers that simple truth. The vast amount of data that machine learning needs to process will further increase the demand for throughput and bandwidth, and flash storage vendors will be lining up to fill that need.
  5. Parallel File Systems. Storage systems will have to deliver performance and throughput at scale in order to support machine learning technologies. Parallel file systems can effectively reduce the problems of massive data storage and I/O bottlenecks. With their focus on high-performance access to large data sets, parallel file systems combined with flash could be considered an entry point to full-scale machine learning systems.
  6. Automation. Software-defined storage has had a large influence on the rise of machine learning in storage environments. Adding a heterogeneous software layer abstracted from the hardware allows the software to efficiently monitor many more tasks. The additional automation allows administrators like myself much more time for more strategic work.
  7. Neural Storage. Neural storage (“deep learning”) is designed to recognize and respond to problems and opportunities without any human intervention. It will drive the need for massive amounts of storage as it is utilized in modern businesses. It uses artificial neural networks, which are simplified computer simulations of how biological neurons behave, to extract rules and patterns from sets of data. Unsurprisingly (based on its name), the concept is inspired by the way biological nervous systems process information. In general, think of neural storage as many layers of processing on mountain-sized mounds of data. Data is fed through neural networks: logical constructions that ask a series of binary true/false questions of, or extract a numerical value from, every bit of data that passes through them, and classify it according to the answers that were tallied up. Deep learning work is focused on developing these networks, which is why they became what are known as Deep Neural Networks (logic networks of the complexity needed to classify enormous, Google-scale datasets). Using Google Images as an example, with datasets as massive and comprehensive as these and logical networks sophisticated enough to handle their classification, it becomes relatively trivial to take an image and state with a high probability of accuracy what it represents to humans.
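The “series of binary true/false questions” idea can be sketched in miniature. The example below is a toy illustration only, not an actual neural network; the questions, thresholds, and category names are all invented for demonstration:

```python
# Toy illustration of classification by tallying binary true/false questions.
# This is NOT a real neural network; real networks learn their own
# "questions" (weights) from training data instead of using hand-coded rules.

def classify_image(width, height, filesize_kb):
    """Classify an image by asking a series of yes/no questions."""
    answers = [
        width > height,               # Q1: landscape orientation?
        filesize_kb > 1024,           # Q2: larger than 1 MB?
        width * height > 2_000_000,   # Q3: more than ~2 megapixels?
    ]
    score = sum(answers)  # tally up the "true" answers
    return "high-resolution photo" if score >= 2 else "thumbnail or graphic"

print(classify_image(4000, 3000, 2500))  # high-resolution photo
print(classify_image(200, 200, 40))      # thumbnail or graphic
```

A deep neural network effectively stacks millions of such learned questions in layers, which is why classifying Google-scale image sets demands so much compute and storage.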

How does Machine Learning work?

At its core, Machine learning works by recognizing patterns (such as facial expressions or spoken words), extracting insight from those patterns, discovering anomalies in those patterns, and then making evaluations and predictions based on those discoveries.

The principle can be summed up with the following formula:

Machine Learning = Model Representation + Parameter Evaluation + Learning & Optimization

Model Representation: The system that makes predictions or identifications. It includes the use of an object represented in a formal language that a computer can handle and interpret.

Parameter Evaluation: A function needed to distinguish or score good and bad objects; the factors used by the model to form its decisions.

Learning & Optimization: The method used to search among these classifiers within the language to find the highest-scoring ones. This is the learning system that adjusts the parameters and compares predictions against actual outcomes.

How do we apply machine learning to a problem? First and foremost, a pattern must exist in the input data from which a conclusion can be drawn; the machine learning algorithm must have a pattern to deduce information from. Next, there must be a sufficient amount of data; if there isn’t enough data to analyze, the validity of the end result is compromised. Finally, machine learning is used to derive meaning from the data and perform structured learning, arriving at a mathematical approximation that describes the behavior of the problem. All of these conditions must be met for machine learning to be successful; if they aren’t, applying machine learning to the problem will be a waste of time.
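The three-part formula above can be shown as a minimal sketch. This is an illustrative example only, fitting a one-parameter model y = w * x to a few made-up data points: the model representation is the prediction function, the parameter evaluation is a mean squared error score, and the learning & optimization step repeatedly adjusts w by comparing predictions against actual outcomes:

```python
# Minimal sketch of Model Representation + Parameter Evaluation +
# Learning & Optimization, using made-up training data where y ~= 2 * x.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

def predict(w, x):
    """Model Representation: a one-parameter linear model."""
    return w * x

def loss(w):
    """Parameter Evaluation: mean squared error over the training data."""
    return sum((predict(w, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Learning & Optimization: gradient descent on the loss.
w = 0.0
learning_rate = 0.01
for _ in range(200):
    grad = sum(2 * (predict(w, x) - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad

print(round(w, 2))  # converges near 2.0, the pattern hidden in the data
```

If the data held no pattern at all, or there were too few points, the fitted parameter would be meaningless, which is exactly the point of the conditions above.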

Summary

Machines may not have reached the point where they can make full decisions without humans, but they have certainly progressed to the point where they can make educated, accurate recommendations to us so that we have an easier time making decisions. Current machine learning systems have delivered tremendous benefits by automating tabulation and harnessing computational processing and programming to improve both enterprise productivity and personal productivity.

Cognitive systems will learn and interact to provide expert assistance to scientists, engineers, lawyers, and other professionals in a fraction of the time it now takes. While they will likely never replace human thinking, cognitive systems will extend our cognition and free us to think more creatively and effectively, and be better problem solvers.

Scripting a VNX/Celerra to Isilon Data Migration with EMCOPY and Perl

datamigration

Below is a collection of perl scripts that make data migration from VNX/Celerra file systems to an Isilon system much easier.  I’ve already outlined the process of using isi_vol_copy_vnx in a prior post, however using EMCOPY may be more appropriate in a specific use case, or simply more familiar to administrators for support and management of the tool.  Note that while I have tested these scripts in my environment, they may need some modification for your use.  I recommend running them in a test environment prior to using them in production.

EMCOPY can be downloaded directly from DellEMC with the link below.  You will need to be a registered user in order to download it.

https://download.emc.com/downloads/DL14101_EMCOPY_File_migration_tool_4.17.exe

What is EMCOPY?

For those that haven’t used it before, EMCOPY is an application that allows you to copy files, directories, and subdirectories between NTFS partitions while maintaining security information, an improvement over the similar Robocopy tool that many veteran system administrators are familiar with. It allows you to back up the file and directory security ACLs, owner information, and audit information from a source directory to a destination directory.

Notes about using EMCOPY:

1) In my testing, EMCopy has shown up to a 25% performance improvement when copying CIFS data compared to Robocopy while using the same number of threads. I recommend using EMCopy over Robocopy as it has other feature improvements as well, for instance sidmapfile, which allows migrating local user data to Active Directory users. It’s available in version 4.17 or later.  Robocopy is also not an EMC supported tool, while EMCOPY is.

2) Unlike isi_vol_copy_vnx, EMCOPY is a Windows application and must be run from a Windows host.  I highly recommend a dedicated server for any migration tasks.  The isi_vol_copy_vnx utility runs directly on the Isilon OneFS CLI, which eliminates any intermediary copy hosts, theoretically providing a much faster solution.

3) There are multiple methods to compare data sizes between the source and destination. I would recommend maintaining a log of each EMCopy session as that log indicates how much data was copied and if there were any errors.

4) If you are migrating over a WAN connection, I recommend first restoring from tape and then using an incremental data sync with EMCOPY.

Getting Started

I’ve divided this post up into a four step process.  Each step includes the relevant script and a description of the process.

  • Export File System information (export_fs.pl  Script)

Export file system information from the Celerra & generate the Isilon commands to re-create them.

  • Export SMB information (export_smb.pl Script)

Export SMB share information from the Celerra & generate the Isilon commands to re-create them.

  • Export NFS information (export_nfs.pl Script)

Export NFS information from the Celerra & generate the Isilon commands to re-create them.

  • Create the EMCOPY migration script (EMCOPY_create.pl Script)

Perform the data migration with EMCOPY using the output from this script.

Exporting information from the Celerra to run on the Isilon

These Perl scripts are designed to be run directly on the Control Station and will subsequently create shell scripts that will run on the Isilon to assist with the migration.  You will need to manually copy the output files from the VNX/Celerra to the Isilon. The first three steps I’ve outlined do not move the data or permissions, they simply run a nas_fs query on the Celerra to generate the Isilon script files that actually make the directories, create quotas, and create the NFS and SMB shares. They are “scripts that generate scripts”. 🙂

Before you run the scripts, make sure you edit them to correctly specify the appropriate Data Mover.  Once complete, you’ll end up with three .sh files created for you to move to your Isilon cluster.  They should be run in the same order as they were created.

Note that EMC occasionally changes the syntax of certain commands when they update OneFS.  Below is a sample of the Isilon-specific commands that are generated by the first three scripts.  I’d recommend verifying that the syntax is still correct for your version of OneFS, and modifying the scripts if necessary with the new syntax.  I just ran a quick test with OneFS 8.0.0.2, and the base commands and switches appear to be compatible.

isi quota create --directory --path="/ifs/data1" --enforcement --hard-threshold="1032575M" --container=1
isi smb share create --name="Data01" --path="/ifs/Data01/data"
isi nfs exports create --path="/Data01/data" --roclient="Data" --rwclient="Data" --rootclient="Data"

 

Step 1 – Export File system information

This script will generate a list of the file system names from the Celerra and place the appropriate Isilon commands that create the directories and quotas into a file named “create_filesystems_xx.sh”.

#!/usr/bin/perl

# Export_fs.pl – Export File system information
# Export file system information from the Celerra & generate the Isilon commands to re-create them.

use strict;
my $nas_fs="nas_fs -query:inuse=y:type=uxfs:isroot=false -fields:ServersNumeric,Id,Name,SizeValues -format:'%s,%s,%s,%sQQQQQQ'";
my @data;

open (OUTPUT, ">> create_filesystems_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nas_fs |") || die "cannot open $nas_fs: $!\n\n";

while (<CMD>)

{
   chomp;
   @data = split("QQQQQQ", $_);
}

close(CMD);
foreach (@data)

{
   my ($dm, $id, $dir,$size,$free,$used_per, $inodes) = split(",", $_);
   print OUTPUT "mkdir /ifs/$dir\n";
   print OUTPUT "chmod 755 /ifs/$dir\n";
   print OUTPUT "isi quota create --directory --path=\"/ifs/$dir\" --enforcement --hard-threshold=\"${size}M\" --container=1\n";
}

The Output of the script looks like this (this is an excerpt from the create_filesystems_xx.sh file):

mkdir /ifs/data1
chmod 755 /ifs/data1
isi quota create --directory --path="/ifs/data1" --enforcement --hard-threshold="1032575M" --container=1
mkdir /ifs/data2
chmod 755 /ifs/data2
isi quota create --directory --path="/ifs/data2" --enforcement --hard-threshold="20104M" --container=1
mkdir /ifs/data3
chmod 755 /ifs/data3
isi quota create --directory --path="/ifs/data3" --enforcement --hard-threshold="100774M" --container=1

The output script can now be copied to and run from the Isilon.

Step 2 – Export SMB Information

This script will generate a list of the smb share names from the Celerra and place the appropriate Isilon commands into a file named “create_smb_exports_xx.sh”.

#!/usr/bin/perl

# Export_smb.pl – Export SMB/CIFS information
# Export SMB share information from the Celerra & generate the Isilon commands to re-create them.

use strict;

my $datamover = "server_8";
my $prot = "cifs";
my $nfs_cli = "server_export $datamover -list -P $prot -v |grep share";

open (OUTPUT, ">> create_smb_exports_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nfs_cli |") || die "cannot open $nfs_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $path = $vars[2];
   my $name = $vars[1];

   $path =~ s/^"/\"\/ifs/;
   print  OUTPUT "isi smb share create --name=$name --path=$path\n";
}

close(CMD);

The Output of the script looks like this (this is an excerpt from the create_smb_exports_xx.sh file):

isi smb share create --name="Data01" --path="/ifs/Data01/data"
isi smb share create --name="Data02" --path="/ifs/Data02/data"
isi smb share create --name="Data03" --path="/ifs/Data03/data"
isi smb share create --name="Data04" --path="/ifs/Data04/data"
isi smb share create --name="Data05" --path="/ifs/Data05/data"

 The output script can now be copied to and run from the Isilon.

Step 3 – Export NFS Information

This script will generate a list of the NFS export names from the Celerra and place the appropriate Isilon commands into a file named “create_nfs_exports_xx.sh”.

#!/usr/bin/perl

# Export_nfs.pl – Export NFS information
# Export NFS information from the Celerra & generate the Isilon commands to re-create them.

use strict;

my $datamover = "server_8";
my $prot = "nfs";
my $nfs_cli = "server_export $datamover -list -P $prot -v |grep export";

open (OUTPUT, ">> create_nfs_exports_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nfs_cli |") || die "cannot open $nfs_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $test = @vars;
   my $i=2;
   my ($ro, $rw, $root, $access, $name);
   my $path=$vars[1];

   for ($i; $i < $test; $i++)
   {
      my ($type, $value) = split("=", $vars[$i]);

      if ($type eq "ro") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $ro .= " --roclient=\"$_\""; }
      }
      if ($type eq "rw") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $rw .= " --rwclient=\"$_\""; }
      }

      if ($type eq "root") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $root .= " --rootclient=\"$_\""; }
      }

      if ($type eq "access") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $ro .= " --roclient=\"$_\""; }
      }

      if ($type eq "name") { $name=$value; }
   }
   print OUTPUT "isi nfs exports create --path=$path $ro $rw $root\n";
}

close(CMD);

The Output of the script looks like this (this is an excerpt from the create_nfs_exports_xx.sh file):

isi nfs exports create --path="/Data01/data" --roclient="Data" --roclient="BACKUP" --rwclient="Data" --rwclient="BACKUP" --rootclient="Data" --rootclient="BACKUP"
isi nfs exports create --path="/Data02/data" --roclient="Data" --roclient="BACKUP" --rwclient="Data" --rwclient="BACKUP" --rootclient="Data" --rootclient="BACKUP"
isi nfs exports create --path="/Data03/data" --roclient="Backup" --roclient="Data" --rwclient="Backup" --rwclient="Data" --rootclient="Backup" --rootclient="Data"
isi nfs exports create --path="/Data04/data" --roclient="Backup" --roclient="ProdGroup" --rwclient="Backup" --rwclient="ProdGroup" --rootclient="Backup" --rootclient="ProdGroup"
isi nfs exports create --path="/" --roclient="127.0.0.1" --roclient="127.0.0.1" --roclient="127.0.0.1" --rootclient="127.0.0.1"

The output script can now be copied to and run from the Isilon.

Step 4 – Generate the EMCOPY commands

Now that the scripts have been generated and run on the Isilon, the next step is the actual data migration using EMCOPY.  This script will generate the commands for a migration script, which should be run from a Windows server that has access to both the source and destination locations. It should be run after the previous three scripts have successfully completed.

This script will output the commands directly to the screen; they can then be cut and pasted into a Windows batch script on your migration server.

#!/usr/bin/perl

# EMCOPY_create.pl – Create the EMCOPY migration script
# Perform the data migration with EMCOPY using the output from this script.

use strict;

my $datamover = "server_4";
my $source = "\\\\celerra_path\\";
my $dest = "\\\\isilon_path\\";
my $prot = "cifs";
my $nfs_cli = "server_export $datamover -list -P $prot -v |grep share";

open (CMD, "$nfs_cli |") || die "cannot open $nfs_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $path = $vars[2];
   my $name = $vars[1];

   $name =~ s/\"//g;
   $path =~ s/^/\/ifs/;

   my $log = "c:\\" . $name;
   $log =~ s/ //g;   # strip all spaces from the log file name
   my $src = $source . $name;
   my $dst = $dest . $name;

   print "emcopy \"$src\" \"$dst\" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:$log\n";
}

close(CMD);

The Output of the script looks like this (this is an excerpt from the screen output):

emcopy "\\celerra_path\Data01" "\\isilon_path\billing_tmip_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_tmip_01
emcopy "\\celerra_path\Data02" "\\isilon_path\billing_trxs_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_trxs_01
emcopy "\\celerra_path\Data03" "\\isilon_path\billing_vru_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_vru_01
emcopy "\\celerra_path\Data04" "\\isilon_path\billing_rpps_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_rpps_01

That’s it.  Good luck with your data migration, and I hope this has been of some assistance.  Special thanks to Mark May and his virtualstoragezone blog, he published the original versions of these scripts here.

Open Source Storage Solutions

Storage solutions can generally be grouped into four categories: SoHo NAS systems, Cloud-based/object solutions, Enterprise NAS and SAN solutions, and Microsoft Storage Server solutions. Enterprise NAS and SAN solutions are generally closed systems offered by traditional vendors like EMC and NetApp with a very large price tag, so many businesses are looking at Open Source solutions to meet their needs. This is a collection of links and brief descriptions of Open Source storage solutions currently available. Open Source of course means it’s free to use and modify; however, some projects also offer commercially supported versions for enterprise customers who require them.

Why would an enterprise business consider an Open Source storage solution? The most obvious reason is that it’s free, and any developer can customize it to suit the needs of the business. With the right people on board, innovation can be rapid. Unfortunately, as is the case with most open source software, it can be needlessly complex and difficult to use, require expert or highly trained staff, have compatibility issues, and most don’t offer the support and maintenance that enterprise customers require. There’s no such thing as a free lunch, as they say, and using Open Source generally requires compromising on support and maintenance. I’d see some of these solutions as perfect for an enterprise development or test environment, and as an easy way for a larger company to allow their staff to get their feet wet in a new technology to see how it may be applied as a potential future solution. As I mentioned, tested and supported versions of some open source storage software are available, which can ease concerns regarding deployment, maintenance, and support.

I have the solutions loosely organized into Open Source NAS and SAN Software, File Systems, RAID, Backup and Synchronization, Cloud Storage, Data Destruction, Distributed Storage/Big Data Tools, Document Management, and Encryption tools.

Open Source NAS and SAN Software Solutions

Backblaze

Backblaze is an object data storage provider. Backblaze stores data on its customized, open source hardware platform called Storage Pods, and its cloud-based Backblaze Vault file system. It is compatible with Windows and Apple OSes. While they are primarily an online backup service, they opened up their Storage Pod design starting in 2009; it uses commodity hardware that anyone can build. They are self-contained 4U data storage servers. It’s interesting stuff and worth a look.

Enterprise Storage OS (ESOS)

Enterprise Storage OS is a Linux distribution based on the SCST project with the purpose of providing SCSI targets via a compatible SAN (Fibre Channel, InfiniBand, iSCSI, FCoE). ESOS can turn a server with the appropriate hardware into a disk array that sits on your enterprise Storage Area Network (SAN) and provides sharable block-level storage volumes.

OpenIO 

OpenIO is an open source object storage startup founded in 2015 by CEO Laurent Denel and six co-founders. The product is an object storage system for applications that scales from terabytes to exabytes. OpenIO specializes in software-defined storage and scalability challenges, with experience in designing and running cloud platforms. It offers a general-purpose object storage and data processing solution adopted by large companies for massive production workloads.

Open vStorage

Open vStorage is an open-source, scale-out, reliable, high-performance, software-based storage platform that offers a block and file interface on top of a pool of drives. It is a virtual appliance (called the “Virtual Storage Router”) that is installed on a host or cluster of hosts on which Virtual Machines are running. It adds value and flexibility in a hyperconverged / OpenStack provider deployment where you don’t necessarily want to be tied to a solution like VMware VSAN. Being hypervisor agnostic is a key advantage of Open vStorage.

OpenATTIC

OpenATTIC is an Open Source Ceph and storage management solution for Linux, with a strong focus on storage management in a datacenter environment. It allows for easy management of storage resources, features a modern web interface, and supports NFS, CIFS, iSCSI and FS. It supports a wide range of file systems including Btrfs and ZFS, as well as automatic data replication using DRBD (the distributed replicated block device) and automatic monitoring of shares and volumes using a built-in Nagios/Icinga instance. openATTIC 2 will support managing the Ceph distributed object store and file system.

OpenStack

OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.

The OpenStack Object Storage (swift) service provides software that stores and retrieves data over HTTP. Objects (blobs of data) are stored in an organizational hierarchy that offers anonymous read-only access, ACL defined access, or even temporary access. Object Storage supports multiple token-based authentication mechanisms implemented via middleware.
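As a rough sketch of what that HTTP interface looks like (the endpoint, account hash, container name and token below are all placeholders; in practice you would obtain the token from the identity service first), storing and retrieving an object is just a pair of authenticated requests:

```shell
# Upload a local file as an object into the "backups" container.
curl -i -X PUT \
     -H "X-Auth-Token: $TOKEN" \
     -T backup.tar.gz \
     https://swift.example.com/v1/AUTH_myaccount/backups/backup.tar.gz

# Retrieve the same object later over plain HTTPS.
curl -H "X-Auth-Token: $TOKEN" \
     -o restored.tar.gz \
     https://swift.example.com/v1/AUTH_myaccount/backups/backup.tar.gz
```

The `/v1/<account>/<container>/<object>` URL layout is the organizational hierarchy the paragraph above describes; ACLs and temporary URLs hang off the same paths.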

CryptoNAS

CryptoNAS (formerly CryptoBox) is a NAS project that makes encrypting your storage quick and easy. It is a multilingual, Debian-based Linux live CD with a web-based front end that can also be installed onto a hard disk or USB stick. CryptoNAS offers a choice of encryption algorithms (the default is AES) and encrypts disk partitions using LUKS (Linux Unified Key Setup), which means that any Linux operating system can also access them without using CryptoNAS software.

Ceph

Ceph is a distributed object store and file system designed to provide high performance, reliability and scalability. It’s built on the Reliable Autonomic Distributed Object Store (RADOS) and allows enterprises to build their own economical storage devices using commodity hardware. It has been maintained by Red Hat since its acquisition of Inktank in April 2014. It’s capable of block, object, and file storage. It is scale-out, meaning multiple Ceph storage nodes present a single storage system that easily handles many petabytes, with performance and capacity increasing simultaneously. Ceph has many basic enterprise storage features including replication (or erasure coding), snapshots, thin provisioning, auto-tiering and self-healing capabilities.

FreeNAS

The FreeNAS website touts itself as “the most potent and rock-solid open source NAS software,” and it counts the United Nations, The Salvation Army, The University of Florida, the Department of Homeland Security, Dr. Phil, Reuters, Michigan State University and Disney among its users. You can use it to turn standard hardware into a BSD-based NAS device, or you can purchase supported, pre-configured TrueNAS appliances based on the same software.

RockStor 

RockStor is a free and open source NAS (Network Attached Storage) solution. Its Personal Cloud Server is a powerful local alternative to public cloud storage that mitigates the cost and risks of public cloud storage. This NAS and cloud storage platform is suitable for small to medium businesses and home users who don’t have much IT experience, but who may need to scale to terabytes of data storage. If you are more interested in Linux and Btrfs, it’s a great alternative to FreeNAS. The RockStor NAS and cloud storage platform can be managed within a LAN or over the web using a simple and intuitive UI, and with the inclusion of add-ons (fittingly named “Rockons”), you can extend the feature set of your RockStor to include new apps, servers, and services.

Gluster

Red Hat-owned Gluster is a distributed, scale-out network attached storage file system that can handle really big data: up to 72 brontobytes. It has found applications in cloud computing, streaming media services and content delivery networks. It promises high availability and performance, an elastic hash algorithm, an elastic volume manager and more. GlusterFS aggregates various storage servers over Ethernet or InfiniBand RDMA interconnects into one large parallel network file system.

72 Brontobytes? I admit that I hadn’t seen that term used yet in any major storage vendor’s marketing materials. How big is that? Really, really big.

1 Bit = Binary Digit
8 Bits = 1 Byte
1,000 Bytes = 1 Kilobyte
1,000 Kilobytes = 1 Megabyte
1,000 Megabytes = 1 Gigabyte
1,000 Gigabytes = 1 Terabyte
1,000 Terabytes = 1 Petabyte
1,000 Petabytes = 1 Exabyte
1,000 Exabytes = 1 Zettabyte
1,000 Zettabytes = 1 Yottabyte
1,000 Yottabytes = 1 Brontobyte
1,000 Brontobytes = 1 Geopbyte

NAS4Free

Like FreeNAS, NAS4Free allows you to create your own BSD-based storage solution from commodity hardware. It promises a low-cost, powerful network storage appliance that users can customize to their own needs.

If FreeNAS and NAS4Free sound suspiciously similar, it’s because they share a common history. Both started from the same original FreeNAS code, which was created in 2005. In 2009, the FreeNAS team pursued a more extensible plugin architecture using OpenZFS; a project lead who disagreed with that direction departed to continue his work on Linux, and the original BSD code base was carried on as NAS4Free. NAS4Free dispenses with the fancy stuff and sticks with a more focused approach of “do one thing and do it well”. You don’t get BitTorrent clients or cloud servers and you can’t make a virtual server with it, but many feel that NAS4Free has a much cleaner, more usable interface.

OpenFiler

Openfiler is a storage management operating system based on rPath Linux. It is a full-fledged NAS/SAN that can be implemented as a virtual appliance for VMware and Xen hypervisors. It offers storage administrators a set of powerful tools for managing complex storage environments, and it supports software and hardware RAID, monitoring and alerting facilities, and volume snapshot and recovery features. Configuring Openfiler can be complicated, but there are many online resources available that cover the most typical installations. I’ve seen mixed reviews of the product online, so it’s worth a bit of research before you consider an implementation.

OpenSMT

OpenSMT is an open source storage management toolkit based on OpenSolaris. Like Openfiler, OpenSMT allows users to turn commodity hardware into a dedicated storage device with both NAS and SAN features. It uses the ZFS filesystem and includes a well-designed web GUI.

Open Media Vault

This NAS solution is based on Debian Linux and offers plug-ins to extend its capabilities. It boasts easy-to-use storage management with a web-based interface, fast setup, multi-language support, volume management, monitoring, UPS support, and statistics reporting. Plugins allow it to be extended with LDAP support, BitTorrent, and iSCSI. It is primarily designed for small offices or home offices, but is not limited to those scenarios.

Turnkey Linux

The Turnkey Linux Virtual Appliance Library is a free open source project which has developed a range of Debian based pre-packaged server software appliances (a.k.a. virtual appliances). Turnkey appliances can be deployed as a virtual machine (a range of hypervisors are supported), in cloud computing infrastructures (including AWS and others) or installed in physical computers.

Turnkey offers more than 100 different software appliances based on open source software. Among them is a file server that offers simple network attached storage, hence its inclusion in this list.

TurnKey File Server is an easy-to-use file server that combines Windows-compatible network file sharing with a web-based file manager. It includes support for SMB, SFTP, NFS, WebDAV and rsync file transfer protocols. The server is configured to allow server users to manage files in private or public storage. It is based on Samba and SambaDAV.

oVirt

oVirt is a free, open source virtualization management platform. It was founded by Red Hat as the community project on which Red Hat Enterprise Virtualization is based. It allows centralized management of virtual machines, compute, storage and networking resources from an easy-to-use, web-based front end with platform-independent access. With oVirt, IT can manage virtual machines, virtualized networks and virtualized storage via an intuitive web interface. It’s based on the KVM hypervisor.

Kinetic Open Storage

Backed by companies like EMC, Seagate, Toshiba, Cisco, NetApp, Red Hat, Western Digital, Dell and others, Kinetic is a Linux Foundation project dedicated to establishing standards for a new kind of object storage architecture. It’s designed to meet the need for scale-out storage for unstructured data. Kinetic is fundamentally a way for storage applications to communicate directly with storage devices over Ethernet. With Kinetic, storage use cases that are targeted consist largely of unstructured data like NoSQL, Hadoop and other distributed file systems, and object stores in the cloud like Amazon S3, OpenStack Swift and Basho’s Riak.

Storj DriveShare and MetaDisk

Storj (pronounced “Storage”) is a new type of cloud storage built on blockchain and peer-to-peer technology. Storj offers decentralized, end-to-end encrypted cloud storage. The DriveShare app allows users to rent out their unused hard drive space for use by the service, and the MetaDisk Web app allows users to save their files to the service securely.

The core protocol allows for peer-to-peer negotiation and verification of storage contracts. Providers of storage are called “farmers” and those using the storage, “renters”. Renters periodically audit whether the farmers are still keeping their files safe and, in a clever twist on similar architectures, immediately pay out a small amount of cryptocurrency for each successful audit. Conversely, farmers can decide to stop storing a file if its owner does not audit and pay for their services on time. Files are cut up into pieces called “shards” and stored 3 times redundantly by default. The network will automatically determine a new farmer and move data if copies become unavailable. In the core protocol, contracts are negotiated through a completely decentralized key-value store (Kademlia). The system puts measures in place that prevent farmers and renters from cheating on each other, e.g. through manipulation of the auditing process. Other measures are taken to prevent attacks on the protocol itself.

Storj, like other similar services, offers several advantages over more traditional cloud storage solutions: since data is encrypted and cut into “shards” at the source, there is almost no conceivable way for unauthorized third parties to access that data. Data storage is naturally distributed, which in turn increases availability and download speed thanks to the use of multiple parallel connections.

Open Source File Systems

Btrfs

Btrfs is a newer Linux filesystem being developed by Facebook, Fujitsu, Intel, the Linux Foundation, Novell, Oracle, Red Hat and some other organizations. It emphasizes fault tolerance and easy administration, and it supports files as large as 16 EiB.

It has been included in the Linux kernel (3.10 and later) as a stable filesystem. Because of its fast development pace, Btrfs noticeably improves with every new kernel version, so it’s always recommended to use the most recent stable kernel you can. Rockstor always runs a very recent kernel for that reason.

One of the big draws of Btrfs is its copy-on-write (CoW) design. When multiple users read a file, no separate copy is made until one of them changes the original file. This preserves prior versions of data, which allows file restorations using snapshots. Btrfs also has its own native RAID support built in, appropriately named Btrfs-RAID. A nice benefit of the Btrfs RAID implementation is that a RAID6 volume does not need additional re-syncing upon creation of the RAID set, greatly reducing the time requirement.
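A minimal sketch of both features, assuming two spare disks (/dev/sdb and /dev/sdc are placeholders; this requires root and will destroy any data on those disks):

```shell
# Create a mirrored (RAID1) Btrfs volume across two disks and mount it;
# no initial re-sync pass is needed.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool

# CoW snapshots are instant and initially share all blocks with the source.
btrfs subvolume create /mnt/pool/data
btrfs subvolume snapshot /mnt/pool/data /mnt/pool/data-snap
```

Because the snapshot only stores blocks that later diverge from the original, restoring an older file version is just a matter of copying it back out of the snapshot subvolume.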

Ext4

This is the latest version of one of the most popular filesystems for Linux. One of its key benefits is the ability to handle very large amounts of data: 16 TB maximum per file and 1 EB (exabyte, or 1 million terabytes) maximum per filesystem. It is the evolution of the most-used Linux filesystem, Ext3. In many ways, Ext4 is a deeper improvement over Ext3 than Ext3 was over Ext2. Ext3 was mostly about adding journaling to Ext2, but Ext4 modifies important data structures of the filesystem, such as the ones destined to store file data.

GlusterFS

Owned by Red Hat, GlusterFS is a scale-out distributed file system designed to handle petabytes worth of data. Features include high availability, fast performance, a global namespace, an elastic hash algorithm and an elastic volume manager.

GlusterFS combines the unused storage space on multiple servers to create a single, large, virtual drive that you can mount like a legacy filesystem using NFS or FUSE on a client PC. It also provides the ability to add or remove servers from the storage pool on the fly. GlusterFS functions like a “network RAID” device; many RAID concepts are apparent during setup. It really shines when you need to store huge quantities of data, have redundant file storage, or write data very quickly for later access. Geo-replication lets you mirror data on a volume across the wire; the target can be a single directory or another GlusterFS volume. It handles multiple petabytes easily and is also very easy to install and manage.
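A hedged sketch of that workflow (hostnames and brick paths below are placeholders; the full procedure is in the GlusterFS documentation):

```shell
# On server1: add a peer, then aggregate one brick (directory) from each
# server into a single replicated volume.
gluster peer probe server2
gluster volume create vol0 replica 2 \
    server1:/bricks/brick1 server2:/bricks/brick1
gluster volume start vol0

# On a client: mount the combined volume with the FUSE client.
mount -t glusterfs server1:/vol0 /mnt/gluster
```

Adding capacity later is the same idea in reverse: `gluster volume add-brick` grows the pool without unmounting clients.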

Lustre

Designed for “the world’s largest and most complex computing environments,” Lustre is a high-performance scale-out file system. It boasts that it can handle tens of thousands of nodes and petabytes of data with very fast throughput.

Lustre file systems are highly scalable and can be part of multiple computer clusters with tens of thousands of client nodes, multiple petabytes of storage on hundreds of servers, and more than 1TB/s of aggregate I/O throughput. This makes Lustre file systems a popular choice for businesses with large data centers.

OpenZFS

OpenZFS is an outstanding storage platform that encompasses the functionality of traditional filesystems, volume managers, and more, with consistent reliability, functionality and performance. This popular file system is incorporated into many other open source storage projects. It offers excellent scalability and data integrity, and it’s available for most Linux distributions.
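To give a flavor of that combined filesystem/volume-manager model (disk names are placeholders; run as root on a system with ZFS installed):

```shell
# One command creates the pool, the volume-manager layer, and a
# filesystem mounted at /tank.
zpool create tank mirror /dev/sdb /dev/sdc

# Datasets, compression and snapshots are all first-class operations.
zfs create tank/home
zfs set compression=lz4 tank/home
zfs snapshot tank/home@before-upgrade
```

There is no separate mkfs/LVM/fstab step; the pool and its datasets are managed entirely through `zpool` and `zfs`.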

IPFS

IPFS is short for “Interplanetary File System,” and is an unusual project that uses peer-to-peer technology to connect all computers with a single file system. It aims to supplement, or possibly even replace, the Hypertext Transfer Protocol that runs the web now. According to the project owner, “In some ways, IPFS is similar to the Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository.”

IPFS isn’t exactly a well-known technology yet, even among many in the Valley, but it’s quickly spreading by word of mouth among folks in the open-source community. Many are excited by its potential to greatly improve file transfer and streaming speeds across the Internet.

Open Source RAID Solutions

DRBD

DRBD is a distributed replicated storage system for the Linux platform. It is implemented as a kernel driver, several userspace management applications and some shell scripts. It is typically used in high availability (HA) computer clusters, but beginning with v9 it can also be used to create larger software defined storage pools with more of a focus on cloud integration. Support and training are available through the project owner, LinBit.

DRBD’s replication technology is fast and efficient. If you can live with an active-passive setup, it is an excellent storage replication solution: it keeps data synchronized between multiple nodes, even across datacenters, and failover between two nodes is quick.
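For flavor, a DRBD resource is defined with a small configuration file present on both nodes. This is a hedged sketch only; the hostnames (alpha/bravo), disks and addresses are placeholders:

```
# /etc/drbd.d/r0.res -- identical on both nodes
resource r0 {
  protocol C;                      # fully synchronous replication
  on alpha {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.10:7788;
    meta-disk internal;
  }
  on bravo {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.11:7788;
    meta-disk internal;
  }
}
```

The active node then mounts /dev/drbd0 like any block device, and every write is mirrored to the peer before it is acknowledged (that is what protocol C means).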

Mdadm

Mdadm is the standard Linux tool for managing the kernel’s software RAID (md) driver, making it possible to set up and manage your own software RAID array using standard hardware. While it is terminal-based, it offers a wide variety of options for monitoring, reporting, and managing RAID arrays.
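A minimal sketch (device names and the e-mail address are placeholders; this requires root and spare partitions):

```shell
# Assemble two partitions into a RAID1 (mirrored) array.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Monitoring and reporting.
cat /proc/mdstat              # quick kernel-level status
mdadm --detail /dev/md0       # full array report

# Background monitor that e-mails on failure events.
mdadm --monitor --scan --daemonise --mail=admin@example.com
```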

Raider

Raider applies RAID 1, 4, 5, 6 or 10 to hard drives. It can convert a single Linux system disk into a software RAID 1, 4, 5, 6 or 10 system with a simple two-pass command. Raider is a bash shell script that deals with the specific oddities of several Linux distros (Ubuntu, Debian, Arch, Mandriva, Mageia, openSuSE, Fedora, CentOS, PCLinuxOS, Linux Mint, Scientific Linux, Gentoo, Slackware and more; see the README) and uses Linux software RAID (mdadm; see http://en.wikipedia.org/wiki/Mdadm and https://raid.wiki.kernel.org/) to execute the conversion.

Open Source Backup and Synchronization Solutions

Zmanda

From their marketing staff… “Zmanda is the world’s leading provider of open source backup and recovery software. Our open source development and distribution model enables us to deliver the highest quality backup software such as Amanda Enterprise and Zmanda Recovery Manager for MySQL at a fraction of the cost of software from proprietary vendors. Our simple-to-use yet feature-rich backup software is complemented by top-notch services and support expected by enterprise customers.”

Zmanda offers a community and enterprise edition of their software. The enterprise edition of course offers a much more complete feature set.

AMANDA

The core of Amanda is the Amanda server, which handles all the backup operations, compression, indexing and configuration tasks. You can run it on any Linux server, as it doesn’t conflict with other processes, but it is recommended to run it on a dedicated machine, as that removes the associated processing load from the client machines and prevents the backup from negatively affecting client performance.

Overall it is an extremely capable file-level backup tool that can be customized to your exact requirements. While it lacks a GUI, the command line controls are simple and the level of control you have over your backups is exceptional. Because it can be called from within your own scripts, it can be incorporated into your own custom backup scheme no matter how complex your requirements are. Paid support and a cloud-based version are available through Zmanda, which is owned by Carbonite.

Areca Backup

Areca Backup is a free backup utility for Windows and Linux.  It is written in Java and released under the GNU General Public License. It’s a good option for backing up a single system and it aims to be simple and versatile. Key features include compression, encryption, filters and support for delta backup.

Backup

Backup is a system utility for Linux and Mac OS X, distributed as a RubyGem, that allows you to easily perform backup operations. It provides an elegant DSL in Ruby for modeling your backups. Backup has built-in support for various databases, storage protocols/services, syncers, compressors, encryptors and notifiers which you can mix and match. It was built with modularity, extensibility and simplicity in mind.

BackupPC

Designed for enterprise users, BackupPC claims to be “highly configurable and easy to install and maintain.” It backs up to disk only (not tape) and offers features that reduce storage capacity and IO requirements.

Bacula

Another enterprise-grade open source backup solution, Bacula offers a number of advanced features for backup and recovery, as well as a fairly easy-to-use interface. Commercial support, training and services are available through Bacula Systems.

Back In Time

Similar to FlyBack (see below), Back in Time offers a very easy-to-configure snapshot backup solution. GUIs are available for both Gnome and KDE (4.1 or greater).

Backupninja

This tool makes it easier to coordinate and manage backups on your network. With the help of programs like rdiff-backup, duplicity, mysqlhotcopy and mysqldump, Backupninja offers common backup features such as remote, secure and incremental file system backups, encrypted backup, and MySQL/MariaDB database backup. You can selectively enable status email reports, and can back up general hardware and system information as well. One key strength of backupninja is a built-in console-based wizard (called ninjahelper) that allows you to easily create configuration files for various backup scenarios. The downside is that backupninja requires other “helper” programs to be installed in order to take full advantage of all its features. While backupninja’s RPM package is available for Red Hat-based distributions, backupninja’s dependencies are optimized for Debian and its derivatives. Thus it is not recommended to try backupninja for Red Hat based systems.

Bareos

Short for “Backup Archiving Recovery Open Sourced,” Bareos is a 100% open source fork of the backup project from bacula.org. The fork has been in development since late 2010 and has a lot of new features. The source has been published on GitHub, licensed under the AGPLv3. It offers features like LTO hardware encryption, efficient bandwidth usage and practical console commands. A commercially supported version of the same software is available through Bareos.com.

Box Backup

Box Backup describes itself as “an open source, completely automatic, online backup system.” It creates backups continuously and can support RAID. Box Backup is stable but not yet feature complete. All of the facilities to maintain reliable encrypted backups and to allow clients to recover data are, however, already implemented and stable.

BURP

BURP, which stands for “BackUp And Restore Program,” is a network backup tool based on librsync and VSS. It’s designed to be easy to configure and to work well with disk storage. It attempts to reduce network traffic and the amount of space that is used by each backup.

Clonezilla

Conceived as a replacement for True Image or Norton Ghost, Clonezilla is a disk imaging application that can do system deployments as well as bare metal backup and recovery. Two types of Clonezilla are available: Clonezilla Live and Clonezilla SE (Server Edition). Clonezilla Live is suitable for single-machine backup and restore, while Clonezilla SE is for massive deployment and can clone many (40+) computers simultaneously. Clonezilla saves and restores only the used blocks of a hard disk, which increases clone efficiency. With some high-end hardware in a 42-node cluster, multicast restoring at a rate of 8 GB/min has been reported.

Create Synchronicity

Create Synchronicity is an easy, fast and powerful backup application whose claim to fame is its lightweight size: just 220 KB. It synchronizes files and folders, offers an intuitive interface for backing up standalone systems, and can schedule backups to keep your data safe. Plus, it’s open source, portable and multilingual. Windows 2000, Windows XP, Windows Vista, and Windows 7 are supported. To run Create Synchronicity, you must install the .NET Framework, version 2.0 or later.

DAR

DAR is a command-line backup and archiving tool that uses selective compression (not compressing already-compressed files) and strong encryption, may split an archive into different files of a given size, and provides on-the-fly hashing. DAR knows how to perform full, differential, incremental and decremental backups. It provides testing, diffing, merging, listing and, of course, data extraction from existing archives. The archive’s internal catalog allows very quick restoration of even a single file from a very large, possibly sliced, compressed and encrypted archive. DAR saves *all* UNIX inode types, takes care of hard links, sparse files and Extended Attributes (macOS file forks, Linux ACLs, SELinux tags, user attributes), has support for ssh, and is suitable for tapes and disks (floppy, CD, DVD, hard disks, etc.). An optional GUI is available from the DarGUI project.

DirSync Pro

DirSync Pro is a small but powerful utility for file and folder synchronization. It can be used to synchronize the content of one or many folders recursively; for example, to synchronize files from your desktop PC to a USB stick, external HD, PDA or notebook, and then from that device to another desktop PC. It also features incremental backups, a user-friendly interface, a powerful schedule engine, and real-time synchronization. It is written in Java.

Duplicati

Duplicati is designed to back up your data to a cloud storage service like Amazon S3, Microsoft OneDrive, Google Cloud or Rackspace. It includes AES-256 encryption and a scheduler, as well as features like filters, deletion rules, and transfer and bandwidth options. It saves space with incremental backups and data deduplication, runs backups on any machine through a web-based interface or a command line interface, and has an auto-updater.

Duplicity

Based on the librsync library, Duplicity creates encrypted archives and uploads them to remote or local servers. It can use GnuPG to encrypt and sign archives if desired.

Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server.

The duplicity package also includes the rdiffdir utility. Rdiffdir is an extension of librsync’s rdiff to directories—it can be used to produce signatures and deltas of directories as well as regular files. These signatures and deltas are in GNU tar format.
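A hedged usage sketch (the remote URL and passphrase are placeholders; duplicity and GnuPG must be installed):

```shell
# The first run produces a full backup; subsequent runs upload only
# encrypted incremental deltas.
export PASSPHRASE='correct-horse-battery-staple'   # placeholder
duplicity /home/user sftp://backup@backup.example.com//srv/backups/user

# Restore the most recent backup into a scratch directory.
duplicity restore sftp://backup@backup.example.com//srv/backups/user /tmp/restored
```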

FlyBack

Similar to Apple’s TimeMachine, FlyBack provides incremental backup capabilities and allows users to recover their systems from any previous time. The interface is very easy to use, but little customization is available. FlyBack creates incremental backups of files, which can be restored at a later date. FlyBack presents a chronological view of a file system, allowing individual files or directories to be previewed or retrieved one at a time. Flyback was originally based on rsync when the project began in 2007, but in October 2009 it was rewritten from scratch using Git.

FOG

An imaging and cloning solution, FOG makes it easy for administrators to back up networks of all sizes. FOG can be used to image Windows XP, Vista, Windows 7 and Windows 8 PCs using PXE and PartClone, with a web GUI to tie it all together. It includes features like memory and disk tests, disk wiping, antivirus scanning and task scheduling.

FreeFileSync

FreeFileSync is free open source software that helps you synchronize files and folders on Windows, Linux and Mac OS X. It is designed to save you time setting up and running data backups while providing nice visual feedback along the way. This file and folder synchronization tool can be very useful for backup purposes; it can save a lot of time and receives very good reviews from its users.

FullSync

FullSync is a powerful tool that helps you keep multiple copies of various data in sync. For example, it can update your website using (S)FTP, back up your data, or refresh a working copy from a remote server. It offers flexible rules, a scheduler and more. Built for developers, FullSync offers synchronization capabilities suitable for backup purposes or for publishing web pages. Features include multiple modes, flexible tools, support for multiple file transfer protocols and more.

Grsync

Grsync provides a graphical interface for rsync, a popular command-line synchronization and backup tool. It’s useful for backup, mirroring, replication of partitions, and more. A Windows (win32) port of Piero Orsoni’s GTK-based rsync front end is also available.

LuckyBackup

Award-winning LuckyBackup offers simple, fast backup. Note that while it is available in a Windows version, that version is still under development. Features include snapshot backups, various checks to keep data safe, a simulation mode, remote connections, an easy restore procedure, the ability to add or remove any rsync option, folder synchronization, excluding data from tasks, executing other commands before or after a task, scheduling, tray notification support, and e-mail reports.

Mondo Rescue

Mondo Rescue is a GPL disaster recovery solution. It supports Linux (i386, x86_64, ia64) and FreeBSD (i386). It’s packaged for multiple distributions (Fedora, RHEL, openSuSE, SLES, Mandriva, Mageia, Debian, Ubuntu, Gentoo). It supports tapes, disks, network and CD/DVD as backup media, multiple filesystems, LVM, software and hardware RAID, and both BIOS and UEFI.

Obnam

Winner of the most original name for backup software: “OBligatory NAMe”. This app performs snapshot backups that can be stored on local disks or online storage services. Features include easy usage, snapshot backups, data de-duplication across files and backup generations, encrypted backups, and support for both push (run on the client) and pull (run on the server) methods.

Partimage

Partimage is open source disk backup software. It saves partitions with a supported filesystem, on a sector basis, to an image file. Although it runs under Linux, Windows filesystems and most Linux filesystems are supported. The image file can be compressed to save disk space and transfer time, and can be split into multiple files to be copied to CDs or DVDs. Partitions can be saved across the network using partimage’s network support, or using Samba / NFS (network file systems). This provides the ability to perform a hard disk partition recovery after a disk crash. Partimage can be run as part of your normal system or stand-alone from the live SystemRescueCd, which is helpful when the operating system cannot be started; SystemRescueCd comes with most of the Linux data recovery software you may need.

Partimage will only copy data from the used portions of the partition (this is why it only works with supported filesystems). For speed and efficiency, free blocks are not written to the image file, unlike other tools, which also copy unused blocks. Since the partition is processed on a sequential sector basis, disk transfer time is maximized and seek time is minimized, so Partimage also works well for very full partitions. For example, a full 1 GB partition may be compressed down to 400 MB.

Redo

Redo is an easy rescue system with GUI tools for full system backup, bare metal recovery, partition editing, recovering deleted files, data protection, web browsing, and more. It uses partclone (like Clonezilla) with a UI like Ghost or Acronis, and runs from CD/USB.

Rsnapshot

Rsnapshot is a filesystem snapshot utility for making backups of local and remote systems. Using rsync and hard links, it is possible to keep multiple, full backups instantly available. The disk space required is just a little more than the space of one full backup, plus incrementals. Depending on your configuration, it is quite possible to set up in just a few minutes. Files can be restored by the users who own them, without the root user getting involved. There are no tapes to change, so once it’s set up, you may never need to think about it again. rsnapshot is written entirely in Perl. It should work on any reasonably modern UNIX compatible OS, including: Debian, Redhat, Fedora, SuSE, Gentoo, Slackware, FreeBSD, OpenBSD, NetBSD, Solaris, Mac OS X, and even IRIX.
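The hard-link mechanism rsnapshot depends on is easy to demonstrate with plain GNU coreutils; the paths below are just a scratch example, not rsnapshot’s own layout:

```shell
# Create a fake "snapshot" directory containing one file.
mkdir -p /tmp/snapdemo/daily.0
echo "version 1" > /tmp/snapdemo/daily.0/file.txt

# Rotate the way rsnapshot does: cp -al makes hard links, not copies.
cp -al /tmp/snapdemo/daily.0 /tmp/snapdemo/daily.1

# Both snapshots now list the file, but it occupies disk space only once,
# because both directory entries point at the same inode.
ls -li /tmp/snapdemo/daily.0/file.txt /tmp/snapdemo/daily.1/file.txt
```

When a file later changes, rsync replaces it with a fresh copy in the newest snapshot only, which is why a whole series of "full" backups costs little more than one full backup plus the deltas.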

Rsync

Rsync is a fast and extraordinarily versatile file copying tool for both remote and local files. Rsync uses a delta-transfer algorithm which provides a very fast method for bringing remote files into sync. It does this by sending just the differences in the files across the link, without requiring that both sets of files are present at one of the ends of the link beforehand. At first glance this may seem impossible because the calculation of diffs between two files normally requires local access to both files.

SafeKeep

SafeKeep is a centralized and easy to use backup application that combines the best features of a mirror and an incremental backup. It sets up the appropriate environment for compatible backup packages and simplifies the process of running them. For Linux users only, SafeKeep focuses on security and simplicity. It’s a command line tool that is a good option for a smaller environment.

Synkron

This application allows you to keep your files and folders updated and synchronized. Key features include an easy to use interface, blacklisting, analysis and restore. It is also cross-platform.

Synbak

Synbak is software designed to unify several backup methods. It provides a powerful reporting system and a very simple interface for configuration files. Synbak is a wrapper for several existing backup programs, supplying the end user with a common configuration method that manages the execution logic for every single backup and gives detailed reports of the results. Synbak can make backups using rsync over SSH, the rsync daemon, SMB and CIFS protocols (using internal automount functions), tar archives (tar, tar.gz and tar.bz2), tape devices (including multi-loader changer tapes), LDAP databases, MySQL databases, Oracle databases, CD-RW/DVD-RW, and wget to mirror HTTP/FTP servers. It offers official support for the GNU/Linux Red Hat Enterprise Linux and Fedora Core distributions only.

SnapBackup

Designed to be as easy to use as possible, SnapBackup backs up files with just one click. It can copy files to a flash drive, external hard drive, or the cloud, and it includes compression capabilities. The first time you run Snap Backup, you configure where your data files reside and where to create backup files. Snap Backup will also copy your backup to an archive location, such as a USB flash drive (memory stick), external hard drive, or cloud backup. Snap Backup automatically puts the current date in the backup file name, relieving you of the tedious task of renaming your backup file every time you back up. The backup file is a single compressed file that can be read by zip programs such as gzip, 7-Zip, The Unarchiver, and the Mac’s built-in Archive Utility.

Syncovery

File synchronization and backup software. Back up data and synchronize PCs, Macs, servers, notebooks, and online storage space. You can set up as many different jobs as you need and run them manually or using the scheduler. Syncovery works with local hard drives, network drives and any other mounted volumes. In addition, it comes with support for FTP, SSH, HTTP, WebDAV, Amazon S3, Google Drive, Microsoft Azure, SugarSync, box.net and many other cloud storage providers. You can use ZIP compression and data encryption. On Windows, the scheduler can run as a service – without users having to log on. There are powerful synchronization modes, including Standard Copying, Exact Mirror, and SmartTracking. Syncovery features a well designed GUI to make it an extremely versatile synchronizing and backup tool.

XSIbackup

XSIbackup can back up VMware ESXi environments, version 5.1 or greater. It’s a command line tool with a scheduler, and it runs directly on the hypervisor. XSIBackup is a free alternative to commercial software like Veeam Backup.

UrBackup

UrBackup is an easy-to-set-up open source client/server backup system that, through a combination of image and file backups, accomplishes both data safety and fast restoration times. File and image backups are made while the system is running, without interrupting current processes. UrBackup also continuously watches the folders you want backed up in order to quickly find differences from previous backups; because of that, incremental file backups are very fast. Your files can be restored through the web interface, via the client, or through Windows Explorer, while backups of drive volumes can be restored with a bootable CD or USB stick (bare metal restore). A web interface makes setting up your own backup server easy.

Unison

This file synchronization tool goes beyond the capabilities of most backup systems, because it can reconcile several slightly different copies of the same file stored in different places. It can work between any two (or more) computers connected to the Internet, even if they don’t have the same operating system. It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other.

Unison shares a number of features with tools such as configuration management packages (CVS, PRCS, Subversion, BitKeeper, etc.), distributed filesystems (Coda, etc.), uni-directional mirroring utilities (rsync, etc.), and other synchronizers (Intellisync, Reconcile, etc). Unison runs on both Windows and many flavors of Unix (Solaris, Linux, OS X, etc.) systems. Moreover, Unison works across platforms, allowing you to synchronize a Windows laptop with a Unix server, for example. Unlike simple mirroring or backup utilities, Unison can deal with updates to both replicas of a distributed directory structure. Updates that do not conflict are propagated automatically. Conflicting updates are detected and displayed.

Win32DiskImager

This program is designed to write a raw disk image to a removable device or backup a removable device to a raw image file. It is very useful for embedded development, namely Arm development projects (Android, Ubuntu on Arm, etc). Averaging more than 50,000 downloads every week, this tool is a very popular way to copy a disk image to a new machine. It’s very useful for systems administrators and developers.

Open Source Cloud Data Storage Solutions

Camlistore

Camlistore is short for “Content-Addressable Multi-Layer Indexed Storage.” Camlistore is a set of open source formats, protocols, and software for modeling, storing, searching, sharing and synchronizing data in the post-PC era. Data may be files or objects, tweets or 5TB videos, and you can access it via a phone, browser or FUSE filesystem. It is still under active development. If you’re a programmer or fairly technical, you can probably get it up and running and get some utility out of it. Many bits and pieces are actively being developed, so be prepared for bugs and unfinished features.

CloudStack

Apache’s CloudStack project offers a complete cloud computing solution, including cloud storage. Key storage features include tiering, block storage volumes and support for most storage hardware.

CloudStack is open source software designed to deploy and manage large networks of virtual machines, as a highly available, highly scalable Infrastructure as a Service (IaaS) cloud computing platform. CloudStack is used by a number of service providers to offer public cloud services, and by many companies to provide an on-premises (private) cloud offering, or as part of a hybrid cloud solution.

CloudStack is a turnkey solution that includes the entire “stack” of features most organizations want with an IaaS cloud: compute orchestration, Network-as-a-Service, user and account management, a full and open native API, resource accounting, and a first-class User Interface (UI). It currently supports the most popular hypervisors: VMware, KVM, Citrix XenServer, Xen Cloud Platform (XCP), Oracle VM server and Microsoft Hyper-V.

CloudStore

CloudStore synchronizes files between multiple locations. It is similar to Dropbox, but it’s completely free and, as noted by the developer, does not require the user to trust a US company.

Cozy

Cozy is a personal cloud solution that allows users to “host, hack and delete” their own files. It stores calendar and contact information in addition to documents, and it also has an app store with compatible applications.

DREBS

Designed for Amazon Web Services users, DREBS stands for “Disaster Recovery for Elastic Block Store.” It runs on Amazon’s EC2 service and takes periodic snapshots of EBS volumes for disaster recovery purposes. It is designed to run on the EC2 host to which the EBS volumes to be snapshotted are attached.

DuraCloud

DuraCloud is a hosted service and open technology developed by DuraSpace that makes it easy for organizations and end users to use cloud services. DuraCloud leverages existing cloud infrastructure to enable durability and access to digital content. It is particularly focused on providing preservation support services and access services for academic libraries, academic research centers, and other cultural heritage organizations. The service builds on the pure storage from expert storage providers by overlaying the access functionality and preservation support tools that are essential to ensuring long-term access and durability. DuraCloud offers cloud storage across multiple commercial and non commercial providers, and offers compute services that are key to unlocking the value of digital content stored in the cloud. DuraCloud provides services that enable digital preservation, data access, transformation, and data sharing. Customers are offered “elastic capacity” coupled with a “pay as you go” approach. DuraCloud is appropriate for individuals, single institutions, or for multiple organizations that want to use cross-institutional infrastructure. DuraCloud became available as a limited pilot in 2009 and was released broadly as a service of the DuraSpace not-for-profit organization in 2011.

FTPbox

This app allows users to set up cloud-based storage services on their own servers. It supports FTP, SFTP or FTPS file syncing.

Pydio

Pydio is a mature open source alternative to Dropbox and Box for the enterprise. Formerly known as AjaXplorer, this app helps enterprises set up a file-sharing service on their own servers. It’s very easy to install and offers an attractive, intuitive interface.

Seafile

With Seafile you can set up your own private cloud storage server or use their hosted service, which is free for up to 1GB. Seafile is an open source cloud storage system with privacy protection and teamwork features. Collections of files are called libraries, and each library can be synced separately. A library can also be encrypted with a user-chosen password. Seafile also allows users to create groups and easily share files within them.

SparkleShare

Another self-hosted cloud storage solution, SparkleShare is a good storage option for files that change often and are accessed by a lot of people. (It’s not as good for complete backups.) Because it was built for developers, it also includes Git. SparkleShare is open-source client software that provides cloud storage and file synchronization services. By default, it uses Git as a storage backend. SparkleShare is comparable to Dropbox, but the cloud storage can be provided by the user’s own server, or a hosted solution such as GitHub. The advantage of self-hosting is that the user retains absolute control over their own data. In the simplest case, self-hosting only requires SSH and Git.

Syncany

Syncany is a cloud storage and filesharing application with a focus on security and abstraction of storage. It is similar to Dropbox, but you can use it with your own server or one of the popular public cloud services like Amazon, Google or Rackspace. It encrypts files locally, adding security for sensitive files.

Syncthing

Syncthing was designed to be a secure and private alternative to public cloud backup and synchronization services. It is a continuous file synchronization program. It synchronizes files between two or more computers. It offers strong encryption and authentication capabilities and includes an easy-to-use GUI.

PerlShare

PerlShare is another Dropbox alternative, allowing users to set up their own cloud storage servers. Windows and OS X support is under development, but it works on Linux today.

Storage Management / SDS

OpenSDS

OpenSDS provides advanced APIs that enable enterprise storage features to be fully utilized by OpenStack. For end users, OpenSDS offers free choice, allowing you to select solutions from different vendors, transform your IT infrastructure into a platform for cloud-native workloads, and accelerate new business rollouts.

CoprHD

CoprHD is an open source software defined storage controller and API platform by Dell EMC. It enables policy-based management and cloud automation of storage resources for block, object and file storage providers.

REX-Ray

REX-Ray is a Dell EMC open source project. It’s a container storage orchestration engine enabling persistence for cloud native workloads. New updates and features contribute to enterprise readiness, as {code} by Dell EMC through REX-Ray and libStorage works with industry organizations to ensure long-lasting interoperability of storage in Cloud Native through a universal Container Storage Interface.

Nexenta

From their website: “Nexenta is the global leader in Open Source-driven Software-Defined Storage – what we call Open Software-Defined Storage (OpenSDS). We uniquely integrate software-only “Open Source” collaboration with commodity hardware-centric “Software-Defined Storage” (SDS) innovation.”

Libvirt Storage Management

libvirt is an open source API, daemon and management tool for managing platform virtualization. It can be used to manage KVM, Xen, VMware ESX, QEMU and other virtualization technologies. These APIs are widely used in the orchestration layer of hypervisors in the development of cloud-based solutions.

OHSM

Online Hierarchical Storage Manager (OHSM) is the first attempt at an enterprise-level open source data storage manager that automatically moves data between high-cost and low-cost storage media. HSM systems exist because high-speed storage devices, such as hard disk drive arrays, are more expensive (per byte stored) than slower devices, such as optical discs and magnetic tape drives. While it would be ideal to have all data available on high-speed devices all the time, this is prohibitively expensive for many organizations. Instead, HSM systems store the bulk of the enterprise’s data on slower devices, and then copy data to faster disk drives when needed. In effect, OHSM turns the fast disk drives into caches for the slower mass storage devices. Data center administrators set policies as to which data can safely be moved to slower devices and which data should stay on the fast devices; doing this manually leads to downtime and changes in the namespace. Policy rules specify both initial allocation destinations and relocation destinations as priority-ordered lists of placement classes. Files are allocated in the first placement class in the list if free space permits, in the second class if no free space is available in the first, and so forth.

Open Source Data Destruction Solutions

BleachBit

With BleachBit you can free cache space, delete cookies, clear Internet history, shred temporary files, delete logs, and discard junk you didn’t know was there. Designed for Linux and Windows systems, it wipes clean thousands of applications including Firefox, Internet Explorer, Adobe Flash, Google Chrome, Opera, Safari, and more. Beyond simply deleting files, BleachBit includes advanced features such as shredding files to prevent recovery, wiping free disk space to hide traces of files deleted by other applications, and vacuuming Firefox to make it faster.

Darik’s Boot And Nuke

Darik’s Boot and Nuke (“DBAN”) is a self-contained boot image that securely wipes the hard disks of most computers. DBAN is appropriate for bulk or emergency data destruction. This app can securely wipe an entire disk so that the data cannot be recovered. The owner of the app, Blancco, also offers related paid products, including some that support RAID.

Eraser

Eraser is a secure data removal tool for Windows. It completely removes sensitive data from your hard drive by overwriting it several times with carefully selected patterns. It erases residue from deleted files, erases MFT and MFT-resident files (for NTFS volumes) and Directory Indices (for FAT), and has a powerful and flexible scheduler.

FileKiller

FileKiller is another option for secure file deletion. It allows the user to determine how many times deleted data is overwritten depending on the sensitivity of the data being deleted. It offers fast performance and can handle large files.

It features high performance, a user-selectable number of overwrite iterations (1 to 100), a choice of overwrite method (blanks, random data, or a user-defined ASCII character), and deletion of filenames as well as data. No setup is needed; you get just a single executable, and it requires .NET 3.5.

Open Source Distributed Storage/Big Data Solutions

BigData

BigData describes itself as an ultra high-performance graph database supporting the RDF data model. It can scale to 50 billion edges on a single machine. Paid commercial support is available for this product.

Hadoop

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. The Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thereby delivering a highly available service on top of a cluster of computers, each of which may be prone to failure. This project is so well known that it has become nearly synonymous with big data.

HPCC

HPCC Systems (High Performance Computing Cluster) is an open source, massively parallel-processing computing platform for big data processing and analytics, intended as an alternative to Hadoop. It is a distributed data storage and processing platform that scales to thousands of nodes. It was developed by LexisNexis Risk Solutions, which also offers paid enterprise versions of the software.

Sheepdog

Sheepdog is a distributed object storage system for volume and container services that manages disks and nodes intelligently. Sheepdog features ease of use and simplicity of code, and can scale out to thousands of nodes. The block-level volume abstraction can be attached to QEMU virtual machines and Linux SCSI Target, and supports advanced volume management features such as snapshots, cloning, and thin provisioning. The object-level container abstraction is designed to be OpenStack Swift and Amazon S3 API compatible, and can be used to store and retrieve any amount of data through a simple web services interface.

Open Source Document Management Systems (DMS) Solutions

bitfarm-Archiv Document Management

bitfarm-Archiv is an intuitive, award-winning document management software with fast user acceptance. Its extensive, practical functionality and excellent adaptability make this open source DMS one of the most powerful document management, archiving, and ECM solutions for institutions in all sectors at low cost. A paid enterprise version and paid services are available.

DSpace

Highly rated DSpace describes itself as “the software of choice for academic, non-profit, and commercial organizations building open digital repositories.” It offers a Web-based interface and very easy installation.

Epiware

Epiware offers customizable, Web-based document capture, management, storage, and sharing. Paid support is also available.

LogicalDOC

LogicalDOC is a Web-based, open source document management software that is very simple to use and suitable for organizations of any size and type. It uses best-of-breed Java technologies such as Spring, Hibernate and AJAX and can run on any system, from Windows to Linux or Mac OS X. The features included in the community edition, such as lightweight workflow, version control and the full-text search engine, help manage the document lifecycle, encourage cooperation, and let you quickly find the documents you need without wasting time. The application is implemented as a plugin system that allows you to easily add new features through the various extension points provided. Moreover, the presence of Web services ensures that LogicalDOC can be easily integrated with other systems.

OpenKM

OpenKM integrates all essential document management, collaboration and advanced search functionality into one easy-to-use solution. The system also includes administration tools to define the roles of various users, access control, user quotas, levels of document security, detailed activity logs and automation setup. OpenKM builds a highly valuable repository of corporate information assets to facilitate knowledge creation and improve business decision making, boosting workgroup and enterprise productivity through shared practices, better customer relations, faster sales cycles, improved product time-to-market, and better-informed decision making.

Open Source Encryption Solutions

AxCrypt

Downloaded nearly 3 million times, AxCrypt is one of the leading open source file encryption programs for Windows. It works with the Windows file manager and with cloud-based storage services like Dropbox, Live Mesh, SkyDrive and Box.net. It offers personal privacy and security with AES-256 file encryption and compression for Windows; double-click to automatically decrypt and open documents.

Crypt

Extremely lightweight, the 44KB Crypt promises very fast encryption and decryption. You don’t need to install it, and it can run from a thumb drive. This tool is command line only, as expected for such a lightweight application.

Gnu Privacy Guard (GPG)

GNU Privacy Guard (GnuPG or GPG) is a free software replacement for Symantec’s PGP cryptographic software suite. GnuPG is compliant with RFC 4880, the IETF standards-track specification of OpenPGP. GNU’s implementation of the OpenPGP standard allows users to encrypt and sign data and communications. It’s a very mature project that has been under active development for well over a decade.

gpg4win (GNU privacy guard for Windows)

See above. This is a port of the Linux version of GPG. It’s easy to install and includes plug-ins for Outlook and Windows Explorer.

GPG Tools

See above. This project ports GPG to the Mac.

TrueCrypt

TrueCrypt is a discontinued source-available freeware utility for on-the-fly encryption (OTFE). It can create a virtual encrypted disk within a file, or encrypt a single partition or the whole storage device (with pre-boot authentication). Extremely popular, this utility has been downloaded millions of times.

Custom Reporting with Isilon OneFS API Calls

Have you ever wondered where the metrics you see in InsightIQ come from?  InsightIQ uses OneFS API calls to gather information, and you can use the same API calls for custom reporting and scripting.  Whether you’re interested in performance metrics relating to cluster capacity, CPU utilization, network latency & throughput, or disk activities, you have access to all of that information.

I’ve already spent a good deal of time on making this work and investigating the options available for presenting the gathered data in a useful manner. This is really just the beginning; I’m hoping to take more time later to work on additional custom script examples that gather specific info and have useful output options. For now, this should get anyone started who’s interested in trying this out. This post also includes a list of available API calls you can make. I cover these basic steps in this post to get you started:

  1. How to authenticate to the Isilon Cluster using cookies.
  2. How to make the API call to the Isilon to generate the JSON output.
  3. How to install the jq utility to parse JSON output files.
  4. Some examples of using the jq utility to parse the JSON output.

Authentication

First I’ll go over how to authenticate to the Isilon cluster using cookies. You’ll need to create a credentials file: name it auth.json and enter the following into it:

{
  "username": "root",
  "password": "<password>",
  "services": ["platform", "namespace"]
}

Note that I am using root for this example, but it would certainly be possible to create a separate account on the Isilon to use for this purpose.  Just give the account the Platform API and Statistics roles.

Once the file is created, you can make a session call to get a cookie:

curl -v -k -H "Content-Type: application/json" -c cookiefile -X POST -d @auth.json https://10.10.10.10:8080/session/1/session

The output will be over one page long, but you’re looking to verify that the cookie was generated.  You should see two lines similar to this:

* Added cookie isisessid="123456-xxxx-xxxx-xxxx-a193333a99bc" for domain 10.10.10.10, path /, expire 0
< Set-Cookie: isisessid=123456-xxxx-xxxx-xxxx-a193333a99bc; path=/; HttpOnly; Secure
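Rather than eyeballing the verbose output each time, you can check the cookie jar programmatically before moving on. Here’s a minimal sketch; the cookie file name matches the -c cookiefile argument above, the helper function is my own (not part of OneFS), and the cookie-jar line is fabricated for the demo:

```shell
#!/bin/sh
# Verify that a curl cookie jar contains the Isilon session cookie.
# Returns success (0) if an isisessid entry is present in the file.
has_session_cookie() {
    grep -q 'isisessid' "$1"
}

# Demo with a fabricated cookie-jar entry (Netscape cookie file format):
cookiefile=$(mktemp)
printf '10.10.10.10\tFALSE\t/\tTRUE\t0\tisisessid\t123456-xxxx\n' > "$cookiefile"

if has_session_cookie "$cookiefile"; then
    echo "session cookie present"
else
    echo "authentication failed" >&2
fi
rm -f "$cookiefile"
```

In a real script you’d call has_session_cookie cookiefile right after the curl session call and re-authenticate on failure.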

Run a test command to gather data

Below is a sample query string that gathers some basic statistics. Later in this post I’ll review all of the possible strings you can use to gather info on CPU, disk, performance, etc.

curl -k -b cookiefile 'https://10.10.10.10:8080/platform/1/statistics/current?key=ifs.bytes.total'

The command above generates the following JSON output:

{
  "stats" : [
    {
      "devid" : 0,
      "error" : null,
      "error_code" : null,
      "key" : "ifs.bytes.total",
      "time" : 1398840008,
      "value" : 433974304096256
    }
  ]
}
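If all you need from that response is the raw number, you don’t even need a JSON parser yet; a sed one-liner can pull out the value field. This is fragile compared with real JSON parsing (which jq, covered next, does properly), and the extract_value helper is just my own quick hack for single-value responses like the one above:

```shell
#!/bin/sh
# Quick-and-dirty extraction of the "value" field from the stats JSON.
# Assumes a single numeric "value" key, as in the sample output above.
extract_value() {
    sed -n 's/.*"value"[ ]*:[ ]*\([0-9]*\).*/\1/p'
}

# The sample response from the ifs.bytes.total query, condensed to one line:
json='{"stats":[{"devid":0,"error":null,"error_code":null,"key":"ifs.bytes.total","time":1398840008,"value":433974304096256}]}'

printf '%s\n' "$json" | extract_value
```

This prints 433974304096256, the byte count from the sample output.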

Install JQ

Now that we have data in JSON format, we need to parse it and convert it into a more readable format; I’m looking to convert it to CSV. There are many different scripts, tools, and languages available online for that purpose. I personally looked for a method that could be used in a simple bash script, and jq is a good solution for that. I use Cygwin on a Windows box for my scripts, but you can download whichever version fits your flavor of OS. You can download the jq parser here: https://github.com/stedolan/jq/releases.

Instructions for the installation of jq for Cygwin:

  1. Download the latest source tarball for jq from https://stedolan.github.io/jq/download/
  2. Open Cygwin and create the folder you’d like to extract it into
  3. Copy the ‘jq-1.5.tar.gz’ file into your folder to make it available within Cygwin
  4. From a Cygwin command shell, enter the following to uncompress the tarball file : ‘tar -xvzf jq-1.5.tar.gz’
  5. Change folder location to the uncompressed folder e.g. ‘cd /jq-1.5’
  6. Next enter ‘./configure’ and wait for the command to complete (about 5 minutes)
  7. Then enter the commands ‘make’, followed by ‘make install’
  8. You’re done.

Once jq is installed, we can play around with using it to make our JSON output more readable. One simple way to produce comma-separated output is with this command:

cat sample.json | jq ".stats | .[]" --compact-output

It will turn json output like this:

{
  "stats" : [
    {
      "devid" : 8,
      "error" : null,
      "error_code" : null,
      "key" : "node.ifs.bytes.used",
      "values" : [
        {
          "time" : 1498745964,
          "value" : 51694140276736
        },
        {
          "time" : 1498746264,
          "value" : 51705407610880
        }
      ]
    },

Into single line, comma separated output like this:

{"devid":8,"error":null,"error_code":null,"key":"node.ifs.bytes.used","values":[{"time":1498745964,"value":51694140276736},{"time":1498746264,"value":51705407610880}]}

You can further improve the output by removing the quote marks with sed:

cat sample.json | jq ".stats | .[]" --compact-output | sed 's/"//g'

At this point the data is formatted well enough to easily modify it to suit my needs in Excel.

{devid:8,error:null,error_code:null,key:node.ifs.bytes.used,values:[{time:1498745964,value:51694140276736},{time:1498746264,value:51705407610880}]}
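For repeated use, the fetch-and-flatten steps can be wrapped in one small script. This is just a sketch, not a supported tool: the cluster address is the example one used throughout this post, cookiefile is the cookie jar created earlier, and curl and jq are assumed to be on the PATH.

```shell
#!/bin/sh
# isi_stat.sh -- query one statistics key and emit compact, quote-free
# output, mirroring the manual jq/sed pipeline shown above.
# Usage: ./isi_stat.sh ifs.bytes.total

CLUSTER="10.10.10.10"   # example cluster address used throughout this post

# Strip the double quotes from jq's compact output (the sed step above).
strip_quotes() {
    sed 's/"//g'
}

# Only query the cluster when a statistics key was actually supplied.
if [ -n "$1" ]; then
    curl -s -k -b cookiefile \
        "https://${CLUSTER}:8080/platform/1/statistics/current?key=$1" |
        jq --compact-output '.stats | .[]' |
        strip_quotes
fi
```

The output lines paste straight into Excel or any CSV-aware tool, exactly like the example above.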

JQ CSV

Using the --compact-output switch isn’t the only way to manipulate the data, and it’s probably not the best way. I haven’t had much time to work with the @csv option in jq, but it looks very promising for this. Below are a few notes on using it; I’ll include more samples in an edit to this post or a future post that relate it more directly to the Isilon-generated output. I prefer to use CSV files for report output due to the ease of working with them and manipulating them with scripts.

Order is significant for CSV, but not for JSON fields. Specify the mapping from JSON named fields to CSV positional fields by constructing an array of those fields, e.g. [.date,.count,.title]:

input: { "date": "2011-01-12 13:14", "count": 17, "title": "He's dead, Jim!" }
jq -r '[.date,.count,.title] | @csv'
"2011-01-12 13:14",17,"He's dead, Jim!"

You also may want to apply this to an array of objects, in which case you’ll need to use the .[] operator, which streams each item of an array in turn:

jq -r '.[] | [.date, .count, .title] | @csv'
"2017-06-12 08:19",17,"You're going too fast!"
"2017-06-15 11:50",4711,"That's impossible""?"
"2017-06-19 00:01",,"I can't drive 55!"

You’ll likely also want the CSV file to have the field names at the top. The easiest way to do this is to add them in manually:

jq -r '["date", "count", "title"], (.[] | [.date, .count, .title]) | @csv'
"date","count","title"
"2017-06-12 08:19",17,"You're going too fast!"
"2017-06-15 11:50",4711,"That's impossible""?"
"2017-06-19 00:01",,"I can't drive 55!"

We can avoid repeating the same list of field names by reusing the header array to look up the fields in each object.

jq -r '["date", "count", "title"] as $fields| $fields, (.[] | [.[$fields[]]]) | @csv'

Here it is as a function, with a slightly nicer field syntax, using path():

def csv(fs): [path(null|fs)[]] as $fields| $fields, (.[] | [.[$fields[]]]) | @csv;
USAGE: csv(.date, .count, .title)

If the input is not an array of objects but just a sequence of objects, then we can omit the .[], but then we can’t get the header at the top. It’s best to convert the input to an array using the --slurp/-s option (or put [] around it if it’s generated within jq).

More to come on formatting JSON for Isilon in the future…

Isilon API calls

All of these specific API calls were pulled from the EMC community forum; I didn’t compose this list myself. It’s a list of the calls that InsightIQ makes to the OneFS API, and they can be queried in exactly the same way that I demonstrated in the examples earlier in this post.

Please note the following about the API calls regarding time ranges:

  1. Calls to the “/platform/1/statistics/current” APIs do not contain query parameters for a &begin and &end time range.
  2. Calls to the “/platform/1/statistics/history” APIs always contain query parameters for a &begin and &end POSIX time range.
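Since the history endpoints take POSIX timestamps for &begin and &end, a convenient pattern is to compute the range with date. Here’s a sketch for “the last hour,” using the example cluster address and the node.cpu.idle.avg key from the CPU list below:

```shell
#!/bin/sh
# Build a one-hour &begin/&end range (POSIX epoch seconds) for the
# /platform/1/statistics/history endpoints.
END=$(date +%s)
BEGIN=$((END - 3600))

echo "https://10.10.10.10:8080/platform/1/statistics/history?key=node.cpu.idle.avg&devid=all&begin=${BEGIN}&end=${END}"
```

The resulting URL can be passed to the same curl invocation with the cookie file shown earlier.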

Capacity

https://10.10.10.10:8080/platform/1/statistics/current?key=ifs.bytes.total&key=ifs.ssd.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.bytes.avail&key=ifs.ssd.bytes.avail&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&key=node.ifs.bytes.used&key=node.disk.count&key=node.cpu.count&key=node.uptime&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=ifs.bytes.avail&key=ifs.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.ssd.bytes.avail&key=ifs.ssd.bytes.total&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.ifs.bytes.used.all&key=node.disk.ifs.bytes.total.all&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.out.rate&key=node.ifs.bytes.in.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.total&key=node.ifs.ssd.bytes.used&key=node.ifs.ssd.bytes.total&key=node.ifs.bytes.used&devid=all&degraded=true&interval=300&memory_only=true

CPU

https://10.10.10.10:8080/platform/1/statistics/history?key=node.cpu.idle.avg&devid=all&degraded=true&interval=30&memory_only=true

Network

https://10.10.10.10:8080/platform/1/statistics/current?key=node.net.iface.name.0&key=node.net.iface.name.1&key=node.net.iface.name.2&key=node.net.iface.name.3&key=node.net.iface.name.4&key=node.net.iface.name.5&key=node.net.iface.name.6&key=node.net.iface.name.7&key=node.net.iface.name.8&key=node.net.iface.name.9&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.ext.packets.in.rate&key=node.net.ext.errors.in.rate&key=node.net.ext.bytes.out.rate&key=node.net.ext.errors.out.rate&key=node.net.ext.bytes.in.rate&key=node.net.ext.packets.out.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.iface.bytes.out.rate.0&key=node.net.iface.bytes.out.rate.1&key=node.net.iface.bytes.out.rate.2&key=node.net.iface.bytes.out.rate.3&key=node.net.iface.bytes.out.rate.4&key=node.net.iface.bytes.out.rate.5&key=node.net.iface.bytes.out.rate.6&key=node.net.iface.bytes.out.rate.7&key=node.net.iface.bytes.out.rate.8&key=node.net.iface.bytes.out.rate.9&key=node.net.iface.errors.in.rate.0&key=node.net.iface.errors.in.rate.1&key=node.net.iface.errors.in.rate.2&key=node.net.iface.errors.in.rate.3&key=node.net.iface.errors.in.rate.4&key=node.net.iface.errors.in.rate.5&key=node.net.iface.errors.in.rate.6&key=node.net.iface.errors.in.rate.7&key=node.net.iface.errors.in.rate.8&key=node.net.iface.errors.in.rate.9&key=node.net.iface.errors.out.rate.0&key=node.net.iface.errors.out.rate.1&key=node.net.iface.errors.out.rate.2&key=node.net.iface.errors.out.rate.3&key=node.net.iface.errors.out.rate.4&key=node.net.iface.errors.out.rate.5&key=node.net.iface.errors.out.rate.6&key=node.net.iface.errors.out.rate.7&key=node.net.iface.errors.out.rate.8&key=node.net.iface.errors.out.rate.9&key=node.net.iface.packets.in.rate.0&key=node.net.iface.packets.in.rate.1&key=node.net.iface.packets.in.rate.2&key=node.net.iface.packets.in.rate.3&key=node.net.iface.packets.in.rate.4&key=node.net.iface.packets.in.rate.5&key=node.net.iface.packets.in.rate.6&key=node.net.iface.packets.in.rate.7&key=node.net.iface.packets.in.rate.8&key=node.net.iface.packets.in.rate.9&key=node.net.iface.bytes.in.rate.0&key=node.net.iface.bytes.in.rate.1&key=node.net.iface.bytes.in.rate.2&key=node.net.iface.bytes.in.rate.3&key=node.net.iface.bytes.in.rate.4&key=node.net.iface.bytes.in.rate.5&key=node.net.iface.bytes.in.rate.6&key=node.net.iface.bytes.in.rate.7&key=node.net.iface.bytes.in.rate.8&key=node.net.iface.bytes.in.rate.9&key=node.net.iface.packets.out.rate.0&key=node.net.iface.packets.out.rate.1&key=node.net.iface.packets.out.rate.2&key=node.net.iface.packets.out.rate.3&key=node.net.iface.packets.out.rate.4&key=node.net.iface.packets.out.rate.5&key=node.net.iface.packets.out.rate.6&key=node.net.iface.packets.out.rate.7&key=node.net.iface.packets.out.rate.8&key=node.net.iface.packets.out.rate.9&devid=all&degraded=true&interval=30&memory_only=true

Disk

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.count&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.name.0&key=node.disk.name.1&key=node.disk.name.2&key=node.disk.name.3&key=node.disk.name.4&key=node.disk.name.5&key=node.disk.name.6&key=node.disk.name.7&key=node.disk.name.8&key=node.disk.name.9&key=node.disk.name.10&key=node.disk.name.11&key=node.disk.name.12&key=node.disk.name.13&key=node.disk.name.14&key=node.disk.name.15&key=node.disk.name.16&key=node.disk.name.17&key=node.disk.name.18&key=node.disk.name.19&key=node.disk.name.20&key=node.disk.name.21&key=node.disk.name.22&key=node.disk.name.23&key=node.disk.name.24&key=node.disk.name.25&key=node.disk.name.26&key=node.disk.name.27&key=node.disk.name.28&key=node.disk.name.29&key=node.disk.name.30&key=node.disk.name.31&key=node.disk.name.32&key=node.disk.name.33&key=node.disk.name.34&key=node.disk.name.35&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&key=node.ifs.bytes.used&key=node.disk.count&key=node.cpu.count&key=node.uptime&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.slow.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.busy.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.ifs.bytes.used.all&key=node.disk.ifs.bytes.total.all&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.queue.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.in.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.out.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

Complete List of API calls made by InsightIQ

Here is a complete list of all of the API calls that InsightIQ makes to the Isilon cluster using the OneFS API. For a complete reference of what these APIs actually do, you can refer to the OneFS API Info Hub and the OneFS API Reference documentation.

https://10.10.10.10:8080/platform/1/cluster/config

https://10.10.10.10:8080/platform/1/cluster/identity

https://10.10.10.10:8080/platform/1/cluster/time

https://10.10.10.10:8080/platform/1/dedupe/dedupe-summary

https://10.10.10.10:8080/platform/1/dedupe/reports

https://10.10.10.10:8080/platform/1/fsa/path

https://10.10.10.10:8080/platform/1/fsa/results

https://10.10.10.10:8080/platform/1/job/types

https://10.10.10.10:8080/platform/1/license/licenses

https://10.10.10.10:8080/platform/1/license/licenses/InsightIQ

https://10.10.10.10:8080/platform/1/quota/reports

https://10.10.10.10:8080/platform/1/snapshot/snapshots-summary

https://10.10.10.10:8080/platform/1/statistics/current?key=cluster.health&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=ifs.bytes.total&key=ifs.ssd.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.bytes.avail&key=ifs.ssd.bytes.avail&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.count&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.disk.name.0&key=node.disk.name.1&key=node.disk.name.2&key=node.disk.name.3&key=node.disk.name.4&key=node.disk.name.5&key=node.disk.name.6&key=node.disk.name.7&key=node.disk.name.8&key=node.disk.name.9&key=node.disk.name.10&key=node.disk.name.11&key=node.disk.name.12&key=node.disk.name.13&key=node.disk.name.14&key=node.disk.name.15&key=node.disk.name.16&key=node.disk.name.17&key=node.disk.name.18&key=node.disk.name.19&key=node.disk.name.20&key=node.disk.name.21&key=node.disk.name.22&key=node.disk.name.23&key=node.disk.name.24&key=node.disk.name.25&key=node.disk.name.26&key=node.disk.name.27&key=node.disk.name.28&key=node.disk.name.29&key=node.disk.name.30&key=node.disk.name.31&key=node.disk.name.32&key=node.disk.name.33&key=node.disk.name.34&key=node.disk.name.35&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.ifs.bytes.total&key=node.ifs.bytes.used&key=node.disk.count&key=node.cpu.count&key=node.uptime&devid=all

https://10.10.10.10:8080/platform/1/statistics/current?key=node.net.iface.name.0&key=node.net.iface.name.1&key=node.net.iface.name.2&key=node.net.iface.name.3&key=node.net.iface.name.4&key=node.net.iface.name.5&key=node.net.iface.name.6&key=node.net.iface.name.7&key=node.net.iface.name.8&key=node.net.iface.name.9&devid=all

https://10.10.10.10:8080/platform/1/statistics/history?key=cluster.dedupe.estimated.saved.bytes&key=cluster.dedupe.logical.deduplicated.bytes&key=cluster.dedupe.logical.saved.bytes&key=cluster.dedupe.estimated.deduplicated.bytes&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=ifs.bytes.avail&key=ifs.bytes.total&key=ifs.bytes.free&key=ifs.ssd.bytes.free&key=ifs.ssd.bytes.avail&key=ifs.ssd.bytes.total&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.ftp&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.hdfs&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.http&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.nfs3&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.nfs4&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.nlm&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.papi&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.siq&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.smb1&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.active.smb2&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.ftp&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.hdfs&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.http&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.nfs&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.nlm&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.papi&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.siq&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.connected.smb&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.ftp&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.hdfs&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.http&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.nfs3&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.nfs4&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.nlm&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.papi&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.siq&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.smb1&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.clientstats.proto.smb2&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.cpu.idle.avg&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.access.slow.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.busy.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.bytes.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.ifs.bytes.used.all&key=node.disk.ifs.bytes.total.all&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.latency.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.iosched.queue.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.in.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfer.size.out.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.in.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.disk.xfers.out.rate.all&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.out.rate&key=node.ifs.bytes.in.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.bytes.total&key=node.ifs.ssd.bytes.used&key=node.ifs.ssd.bytes.total&key=node.ifs.bytes.used&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.cache&key=node.ifs.cache.l3.data.read.miss&key=node.ifs.cache.l3.meta.read.hit&key=node.ifs.cache.l3.data.read.hit&key=node.ifs.cache.l3.data.read.start&key=node.ifs.cache.l3.meta.read.start&key=node.ifs.cache.l3.meta.read.miss&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.blocked&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.blocked.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.contended&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.contended.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.deadlocked&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.deadlocked.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.getattr&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.getattr.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.link&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.link.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lock&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lock.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lookup&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.lookup.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.read&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.read.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.rename&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.rename.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.setattr&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.setattr.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.unlink&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.unlink.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.write&devid=all&degraded=true&interval=300&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.ifs.heat.write.total&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.je.num_workers&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.ext.packets.in.rate&key=node.net.ext.errors.in.rate&key=node.net.ext.bytes.out.rate&key=node.net.ext.errors.out.rate&key=node.net.ext.bytes.in.rate&key=node.net.ext.packets.out.rate&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.net.iface.bytes.out.rate.0&key=node.net.iface.bytes.out.rate.1&key=node.net.iface.bytes.out.rate.2&key=node.net.iface.bytes.out.rate.3&key=node.net.iface.bytes.out.rate.4&key=node.net.iface.bytes.out.rate.5&key=node.net.iface.bytes.out.rate.6&key=node.net.iface.bytes.out.rate.7&key=node.net.iface.bytes.out.rate.8&key=node.net.iface.bytes.out.rate.9&key=node.net.iface.errors.in.rate.0&key=node.net.iface.errors.in.rate.1&key=node.net.iface.errors.in.rate.2&key=node.net.iface.errors.in.rate.3&key=node.net.iface.errors.in.rate.4&key=node.net.iface.errors.in.rate.5&key=node.net.iface.errors.in.rate.6&key=node.net.iface.errors.in.rate.7&key=node.net.iface.errors.in.rate.8&key=node.net.iface.errors.in.rate.9&key=node.net.iface.errors.out.rate.0&key=node.net.iface.errors.out.rate.1&key=node.net.iface.errors.out.rate.2&key=node.net.iface.errors.out.rate.3&key=node.net.iface.errors.out.rate.4&key=node.net.iface.errors.out.rate.5&key=node.net.iface.errors.out.rate.6&key=node.net.iface.errors.out.rate.7&key=node.net.iface.errors.out.rate.8&key=node.net.iface.errors.out.rate.9&key=node.net.iface.packets.in.rate.0&key=node.net.iface.packets.in.rate.1&key=node.net.iface.packets.in.rate.2&key=node.net.iface.packets.in.rate.3&key=node.net.iface.packets.in.rate.4&key=node.net.iface.packets.in.rate.5&key=node.net.iface.packets.in.rate.6&key=node.net.iface.packets.in.rate.7&key=node.net.iface.packets.in.rate.8&key=node.net.iface.packets.in.rate.9&key=node.net.iface.bytes.in.rate.0&key=node.net.iface.bytes.in.rate.1&key=node.net.iface.bytes.in.rate.2&key=node.net.iface.bytes.in.rate.3&key=node.net.iface.bytes.in.rate.4&key=node.net.iface.bytes.in.rate.5&key=node.net.iface.bytes.in.rate.6&key=node.net.iface.bytes.in.rate.7&key=node.net.iface.bytes.in.rate.8&key=node.net.iface.bytes.in.rate.9&key=node.net.iface.packets.out.rate.0&key=node.net.iface.packets.out.rate.1&key=node.net.iface.packets.out.rate.2&key=node.net.iface.packets.out.rate.3&key=node.net.iface.packets.out.rate.4&key=node.net.iface.packets.out.rate.5&key=node.net.iface.packets.out.rate.6&key=node.net.iface.packets.out.rate.7&key=node.net.iface.packets.out.rate.8&key=node.net.iface.packets.out.rate.9&devid=all&degraded=true&interval=30&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.ftp&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.hdfs&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.http&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.nfs3&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.nfs4&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.nlm&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.papi&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.siq&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.smb1&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/history?key=node.protostats.smb2&devid=all&degraded=true&interval=120&memory_only=true

https://10.10.10.10:8080/platform/1/statistics/keys

https://10.10.10.10:8080/platform/1/statistics/protocols

https://10.10.10.10:8080/platform/1/storagepool/nodepools

https://10.10.10.10:8080/platform/1/storagepool/tiers

https://10.10.10.10:8080/platform/1/storagepool/unprovisioned

https://10.10.10.10:8080/session/1/session

Multiprotocol VNX File Systems: Listing and counting Shares & Exports by file system

I’m in the early stages of planning a NAS data migration from VNX to Isilon, and one of the first steps I wanted to accomplish was to identify which of our VNX file systems are multiprotocol (with both CIFS shares and NFS exports from the same file system). In the environment I support, which has over 10,000 CIFS shares, it’s not a trivial task to identify which shares are multiprotocol. After some research, it doesn’t appear that EMC provides a built-in method for determining this information from within the Unisphere GUI. From the CLI, however, the server_export command can be used to view the shares and exports.

Here’s an example of listing shares and exports with the server_export command:

[nasadmin@VNX1 ~]$ server_export ALL -Protocol cifs -list | grep filesystem01

share "share01$" "/filesystem01/data" umask=022 maxusr=4294967294 netbios=NASSERVER comment="Contact: John Doe"
 
[nasadmin@VNX1 ~]$ server_export ALL -Protocol nfs -list | grep filesystem01

export "/root_vdm_01/filesystem01/data01" rw=admins:powerusers:produsers:qausers root=storageadmins access=admins:powerusers:produsers:qausers:storageadmins
export "/root_vdm_01/filesystem01/data02" rw=admins:powerusers:produsers:qausers root=storageadmins access=admins:powerusers:produsers:qausers:storageadmins

The output above shows me that the file system named “filesystem01” has one CIFS share and two NFS exports. That’s a good start, but I want a count of the shares and exports rather than a detailed list of them. I can accomplish that by adding ‘wc’ (word count) to the command:

[nasadmin@VNX1 ~]$ server_export ALL -Protocol cifs -list | grep filesystem01 | wc
 1 223 450

[nasadmin@VNX1 ~]$ server_export ALL -Protocol nfs -list | grep filesystem01 | wc
 2 15 135

That’s closer to what I want. The output includes three numbers, and the first number is the line count. I only want that number, so I’ll grab it with awk. Ultimately I want the output to go to a single file, with each line containing the name of the file system, the number of CIFS shares, and the number of NFS exports. This line of code will give me what I want:

[nasadmin@VNX1 ~]$ echo -n "filesystem01",`server_export ALL -Protocol cifs -list | grep filesystem01 | wc | awk '{print $1}'`, `server_export ALL -Protocol nfs -list | grep filesystem01 | wc | awk '{print $1}'` >> multiprotocol.txt ; echo " " >> multiprotocol.txt

The output is below.  It’s perfect, as it’s in the format of a comma delimited file and can be easily exported into Microsoft Excel for reporting purposes.

filesystem01, 1, 2

Here’s a more detailed explanation of the command:

echo -n “filesystem01”, : Echo writes the name of the file system to the screen, or to a file if you’ve redirected it with “>” at the end of the command. Adding the “-n” suppresses the newline that is automatically added after the text is output, as I want each file system and its share & export count to be on the same line in the report.

`server_export ALL -Protocol cifs -list | grep filesystem01 | wc | awk ‘{print $1}’`,: The server_export command lists all of the CIFS shares, and grep filters for the file system we’re interested in. The wc (word count) command counts the output lines, which tells us how many shares exist for the specified file system. The awk ‘{print $1}’ command outputs only the first whitespace-separated field, which for wc is the line count. If the output is “1 23 34 32 43 1”, running ‘{print $1}’ will output only the 1.

`server_export ALL -Protocol nfs -list | grep filesystem01 | wc | awk ‘{print $1}’` >> multiprotocol.txt: This is the same command as above, but we’re now counting the number of NFS exports rather than CIFS shares.

; echo ” ” >> multiprotocol.txt: After the count is complete and the data has been output, I run an echo command without the “-n” option to force a line break, in preparation for the next line of the script. When redirecting, “>” outputs the results to a file and overwrites the file if it already exists, while “>>” appends to the file if it already exists. In this case we want to append each line. In an actual script you’d want to create an empty file first (for example with “echo -n > filename.ext”). Also, the “;” before the echo instructs the shell to start a new command regardless of the success or failure of the prior command.
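To see the awk step in isolation, here’s a quick sanity check using sample wc-style output (the sample lines are my own placeholders):

```shell
# wc prints "lines words chars"; awk '{print $1}' keeps only the line count.
echo "  2      15     135" | awk '{print $1}'    # prints: 2

# The same idea applied to a small pipeline:
printf 'export one\nexport two\n' | grep export | wc | awk '{print $1}'    # prints: 2
```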

At this point, all that needs to be done is to create a script that includes the line above with every file system on the VNX. I copied the line of code above into excel into multiple columns, allowing me to copy and paste the file system list from Unisphere and then concatenate the results into a single script file. I’m including a screenshot of one of my script lines from Excel as an example.  The final column (AG) has the following formula:

=CONCATENATE(A4,B4,C4,D4,E4,F4,G4,H4,I4,J4,K4,L4,M4,N4,O4,P4,Q4,R4,S4,T4,U4,V4,W4,X4,Y4,Z4,AA4,AB4,AC4,AD4,AE4)

Spreadsheet example:

multiprotocol count
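As an alternative to building the script in Excel, the same report could be generated with a small shell loop over a file system list. This is only a sketch: fs_list.txt and the sample names are my own placeholders, and it assumes the server_export command available on the Control Station.

```shell
# Placeholder list of file systems, one per line -- replace with your own
# list (e.g. pasted from Unisphere).
printf 'filesystem01\nfilesystem02\n' > fs_list.txt

> multiprotocol.txt    # start with an empty report file
while read -r FS; do
  # wc -l is shorthand for the wc | awk '{print $1}' line count used above
  CIFS=$(server_export ALL -Protocol cifs -list 2>/dev/null | grep "$FS" | wc -l)
  NFS=$(server_export ALL -Protocol nfs -list 2>/dev/null | grep "$FS" | wc -l)
  echo "$FS,$CIFS,$NFS" >> multiprotocol.txt
done < fs_list.txt

cat multiprotocol.txt
```

This produces the same one-line-per-file-system CSV without any spreadsheet concatenation.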

 

Defining Software-Defined Storage: Benefits, Strategy, Use Cases, and Products

This blog entry was made out of personal interest. I was curious about the current state of software-defined storage in the industry and decided to get myself up to speed. I’ve done some research and reading on SDS off and on over the course of the last week, and this is a summary of what I’ve learned from various sources around the internet.

What is SDS?

First things first: what is “Software-Defined Storage?” The term is used broadly to describe many different products with varying features and capabilities, and it is often overused and loosely defined. It is, however, the preferred term for the trend toward data storage becoming independent of the underlying hardware. In general, SDS describes data storage software that provides policy-based provisioning and management of data storage independent of the underlying hardware. The term remains open to interpretation among industry experts and vendors, but it usually encompasses software abstraction from hardware, policy-based provisioning and data management, and a hardware-agnostic implementation.

How the industry defines SDS

Because of the ambiguity surrounding the definition, I looked up how multiple respected sources define the term in the industry, starting with IDC and Gartner. IDC defines software-defined storage as solutions that deploy controller software (the storage software platform) that is decoupled from the underlying hardware, runs on industry-standard hardware, and delivers a complete set of enterprise storage services. Gartner defines SDS in two separate parts, Infrastructure and Management:

  • Infrastructure SDS (what most of us are familiar with) utilizes commodity hardware such as x86 servers, JBOD, or JBOF, and offers features through software orchestration. It creates and provides data center services to replace or augment traditional storage arrays.
  • Management SDS controls hardware but also controls legacy storage products to integrate them into a SDS environment. It interacts with existing storage systems to deliver greater agility of storage services.

The general characteristics of SDS

So, based on what I just discussed, what follows is my summary and explanation of the general defining characteristics of software defined storage. These key characteristics are common among all vendor offerings.

  • Hardware and Software Abstraction. SDS always includes abstraction of logical storage services and capabilities from the underlying physical storage systems.
  • Storage Virtualization. External controller-based arrays include storage virtualization to manage usage and access across the drives within their own pools, while other products exist independently to manage across arrays and/or directly attached server storage.
  • Automation and Orchestration. SDS includes automation with policy-driven storage provisioning, and service-level agreements (SLAs) generally replace the precise details of the actual hardware. This requires management interfaces that span traditional storage-array products.
  • Centralized Management. SDS includes management capabilities with a centralized point of management.
  • Enterprise storage features. SDS includes support for all the features desired in an enterprise storage offering, such as compression and deduplication, replication, snapshots, data tiering, and thin provisioning.

Choosing a strategy for SDS

There are a host of considerations when developing a software defined storage strategy. Below is a list of some of the important items to consider during the process.

  • Cloud Integration. It’s important to ensure any SDS implementation will integrate with the cloud, even if you’re not currently using cloud services in your environment. The storage industry is moving heavily to cloud workloads and you need to be ready to accommodate business demands in that area. In addition, Amazon’s S3 interface has become the de facto standard protocol for cloud communication, so choose an SDS solution that supports S3 for seamless integration.
  • Storage Management Analysis. You’ll need a deep understanding of how SDS will be managed alongside all of your legacy storage, along with a clear picture of the capacity and performance being used in your environment. Determine where you might need more performance and where you might need more capacity. It’s common in the industry to lack a deep understanding of how storage impacts the business, to lack a service catalog portfolio, and to have limited resources managing critical storage. If your organization is on top of those common issues, you’re well ahead of the game.
  • Research your options well. SDS really marks the end of large isolated storage environments. It allows organizations to move away from silos and customize solutions to their specific business needs. SDS allows organizations to build a hybrid of pretty much anything. Taking advantage of high-density NL-SAS disks right next to the latest high-performance all-flash array is easily done, and the environment can be tuned to specific needs and use cases.
  • Pay attention to Vendor Support. There are also concerns about support. A software vendor will of course support its own software-defined storage product, but will they offer support when there is a conflict between a heterogeneous hardware environment and their software? Organizations should plan and architect the environment very carefully. All competent software vendors will offer a support matrix for hardware, but only so much can be done if there is a bug in the underlying hardware.
  • Performance Impact analysis. Just like any traditional storage implementation, predictability of performance is an important item to consider when implementing an SDS architecture. A workload analysis and a working knowledge of your precise performance requirements will go a long way toward a successful implementation. Many organizations run SDS on general-purpose, server-class hardware rather than purpose-built systems designed solely for storage. Performance predictability can be especially concerning when SDS is implemented in a hyper-converged environment, as the hosts must run the SDS software while also running business applications.
  • Implementation Timeframe. SDS technology can make initial implementation more time consuming and difficult, especially if you choose a software-only solution. The flexibility SDS offers gives a storage architect many more design options, which of course translates into a much more extensive hardware selection process. Organizations must carefully evaluate the various SDS components and the total time required to select the appropriate storage, networking, and server hardware for the project.
  • Overall Cost and ROI. I’m sure you’ll hear this from your vendor – they will promise that SDS will decrease both acquisition and operational costs while simultaneously increasing storage infrastructure flexibility. Your results may vary, and be aware that the software-based products more closely resemble the original intention of this technology and are best suited to provide those promised benefits. A software-based SDS architecture will likely involve a more complex initial implementation with higher costs, while bundled products may offer a better implementation experience at the expense of flexibility. Whether a software-only solution or a bundled hardware solution is the better fit largely depends on whether your IT team has the time and skills to research and identify the required hardware components on their own. If so, a software-only product can provide significant savings and maximum flexibility.
  • Avoid Forklift Upgrades. One of the original purposes of SDS was to be hardware agnostic so there should be no reason to remove and replace all of your existing hardware. Organizations should research solutions that enable you to protect your existing investment in hardware as opposed to requiring a forklift upgrade of all of your hardware. A new SDS implementation should complement your environment and protect your investment in existing servers, storage, networking, management tools and employee skill sets.
  • Expansion and Upgrade capability. Before you buy new hardware to expand your environment, confirm that the additional hardware can seamlessly integrate with your existing cloud or datacenter environment. Organizations should look for products that allow easy and non-disruptive hardware and software expansions & upgrades, without the need for additional time consuming customization.
  • Storage architecture. The fundamental design of the hardware can expose both efficiencies and deficiencies in the solution stack. Everything should be scrutinized from the tiniest details. Pay particular attention to features that affect storage overhead (deduplication, compression, etc).
  • Test your application workloads. Often overlooked is the fact that a storage infrastructure exists entirely to facilitate data access by applications. It’s a common mistake to downplay the importance of an application workload analysis. Consider a proof of concept or extensive testing with a value-added reseller using your own data if possible; it’s the only way to ensure the solution will meet your expectations once it’s placed into a production environment. If possible, test SDS software solutions with your existing storage infrastructure before a purchase is made, as it will help reveal just how hardware independent the SDS software actually is.
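
Both the performance-analysis and workload-testing points above come down to measuring before you buy. As a rough illustration of the idea (not a substitute for a proper tool like fio, or for replaying your actual application I/O pattern), here is a minimal Python sketch that measures synchronous write throughput and per-operation latency on whatever storage backs a given path:

```python
import os
import statistics
import tempfile
import time

def measure_sync_writes(path, block_size=64 * 1024, count=100):
    """Time synchronous writes of `count` blocks to a temp file under `path`.

    Returns (throughput in MB/s, median write+fsync latency in ms).
    This is only a crude first data point; a real evaluation should use a
    dedicated tool like fio with your application's actual I/O pattern.
    """
    block = os.urandom(block_size)
    latencies = []
    fd, name = tempfile.mkstemp(dir=path)
    try:
        start = time.perf_counter()
        for _ in range(count):
            t0 = time.perf_counter()
            os.write(fd, block)
            os.fsync(fd)  # force each write to stable storage before timing it
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(name)
    mb_per_s = (block_size * count) / elapsed / (1024 * 1024)
    return mb_per_s, statistics.median(latencies) * 1000

if __name__ == "__main__":
    mbps, lat_ms = measure_sync_writes(tempfile.gettempdir())
    print(f"{mbps:.1f} MB/s, median write+fsync latency {lat_ms:.2f} ms")
```

Pointing it at a datastore presented by a candidate SDS solution versus your existing array gives a concrete baseline comparison, which is exactly the kind of number you want in hand before the proof of concept.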

Potential use cases and justification for SDS 

SDS solutions will continue to have a significant impact on the traditional storage market moving into the future. IDC research suggests that traditional stand-alone hybrid systems are expected to start declining while all-flash, hyper-converged, and software-defined storage adoption grows at a much faster rate.  So, on to some potential use cases:

  • Non-disruptive data migrations.  This is where appliance and storage controller based virtual solutions have already been used quite successfully for many years. I have experience installing and managing the VPLEX storage virtualization device in an existing storage infrastructure, where it was used extensively for non-disruptive data migrations in the environment that I supported. By inserting an appliance or storage controller based SDS solution into an existing storage network between the server and backend storage, it’s then easy to virtualize the storage volumes on both existing and new storage arrays and then migrate data seamlessly and non-disruptively from old arrays to new ones. Weekend outages were turned into much shorter non-disruptive upgrades that the application was completely unaware of.  Great stuff.
  • Better managing deployments of archival/utility storage.  Organizations in general seem to have a growing need for deployments of large amounts of archive storage in their environments (low cost, high density disk). It’s not uncommon to have vast amounts of data with an undefined business value, but it’s sufficiently valuable that it cannot easily or justifiably be deleted. In cases like that, storage arrays that are reliable, stable, and economical, perform moderately well, and remain easy to manage and scale are a good fit for an SDS solution. The storage that this data resides on would need very few extra enterprise features like auto-tiering, VM integration, deduplication, etc. Cheap and deep storage will work here, and SDS solutions work in these environments. Whether the SDS software resides on a storage controller or on an appliance, more storage capacity can be quickly and easily added to these environments and then easily managed and scaled. Many of the interoperability and performance issues that have hurt SDS deployments in the past don’t make much difference when it’s simply archive data.
  • Managing heterogeneous storage environments. One of the big issues with appliance and storage controller-based SDS solutions early on was that they attempted to do it all, virtualizing storage arrays from every vendor under the sun, yet failed to deliver a single pane of glass to manage all of the storage capacity with a common, standardized set of storage management features.  That single pane of glass is now a game changer in complex environments and is offered by most vendors. Implementing SDS can dramatically reduce administrative time and allow your top staff to focus on more important business needs.

The benefits of SDS

What follows is a summary of some of the key benefits of implementing SDS. This list is what you’re most likely to hear from your friendly local salesperson and in the marketing materials from each vendor. 🙂

  • Non-disruptive hardware expansion. SDS solutions can enable storage capacity expansion without disruption.   New arrays can be added to the environment and data can be migrated completely non-disruptively.
  • Cloud Automation. SDS provides an optimal storage platform for the next generation of on-prem and private data centers, offering public-cloud-scale economics, universal access, and self-service automation to private clouds.
  • Economics. SDS has potential to significantly reduce operational and management expenses using policy based automation, ease of deployment, programmable flexibility, and centralized management while providing hardware independence and using off the shelf industry-standard components to lower storage system costs. Some vendor offerings will allow the user to leverage existing hardware.
  • Increased ROI. SDS allows policy-driven data center automation that provides the ability to provision storage resources immediately based on VM workload demand. This capability of SDS will encourage organizations to deploy SDS offerings to improve their opex and capex, providing a quick return on investment (ROI).
  • Real-time scalability. SDS offers tiered capacity by service level and the ability to provision storage on demand, which enables optimal capacity based on current business requirements. It also provides detailed metrics for reporting on storage infrastructure usage.
  • High Availability. SDS architectures can provide improved business continuity. In the event of a hardware failure, an SDS environment can shift load and data automatically to another available node. Because the storage infrastructure sits above the physical hardware, any hardware can be used to replace a failed node. Older systems could even be recycled to improve disaster recovery provisions in SDS, further improving your ROI.
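
The automatic shifting of data to a surviving node described in that last bullet is typically built on some form of deterministic data placement. As a generic illustration (not any particular vendor’s implementation), a minimal consistent-hashing sketch in Python shows why only the data that lived on a failed node needs to move:

```python
import hashlib
from bisect import bisect

def ring(nodes, vnodes=100):
    """Build a consistent-hash ring: each node owns `vnodes` points on it."""
    points = []
    for node in nodes:
        for i in range(vnodes):
            h = int(hashlib.md5(f"{node}:{i}".encode()).hexdigest(), 16)
            points.append((h, node))
    return sorted(points)

def locate(points, key):
    """Map a key to the owner of the first ring point at or after its hash."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    idx = bisect(points, (h,)) % len(points)
    return points[idx][1]

nodes = ["node-a", "node-b", "node-c", "node-d"]
before = ring(nodes)
after = ring([n for n in nodes if n != "node-b"])  # node-b fails

keys = [f"object-{i}" for i in range(1000)]
moved = sum(locate(before, k) != locate(after, k) for k in keys)
print(f"{moved} of {len(keys)} objects remapped after losing one of four nodes")
```

Because every surviving node’s points stay put on the ring, only the objects that lived on the failed node get remapped, roughly a quarter of the keys in this four-node example, rather than a full reshuffle of the cluster.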

 

The trends for SDS in 2017 and beyond

The SAN guy is not a fortune teller, but these predictions are all creating a buzz in the industry and you’re likely to see them start to materialize in 2017.

  • SDS catches up to traditional storage. SDS is finally catching up with traditional storage. Now that enterprise-class storage features like inline deduplication, compression and QoS have been introduced across the market leaders in SDS solutions, it’s finally becoming a more mainstream solution. The rapidly declining cost of EFD along with the performance and reliability of SDS are really making it well suited for the virtual workloads of many organizations.
  • Multiple Cloud implementations. Analysts are predicting that SDS will usher in a new multi-cloud era in 2017 by leveraging the power of a software-defined infrastructure that is not tied to a specific hardware platform and configuration. SDS users will finally have a defined cloud strategy that is evolutionary to what they are doing today. As a result, IT has to be prepared to support new application models designed to bring the simplicity and agility of cloud to on-premises infrastructure. At the same time, new software-defined infrastructure enables a flexible multi-cloud architecture that extends a common and consistent operating environment from on-prem to off-prem, including public clouds.
  • Management integration improves. Integration will continue to improve. The continued integration of management into hypervisor tools, computational platforms, hyper-converged systems, and next-generation service based infrastructures will continue to enhance SDS capabilities.
  • Storage leaves the island. Traditional storage implementations typically have many different islands of storage in independent silos. It’s been difficult to break that mold based on business requirements and the hardware and software available to provide the necessary multi-tenancy and still meet those requirements. SDS will begin to allow organizations to consolidate those islands of storage and break the artificial barriers.
  • Increased Hybrid SDS deployments. The use of SDS will continue to move toward hybrid implementations. Organizational requirements will drive the change. It’s no secret that more workloads are moving toward the cloud, and SDS will help break down that boundary. SDS will also start to blur the lines between data that is in the cloud and data that is locally stored and help make data mobility more seamless, improving fluidity while taking into account regulatory requirements, cost, and performance.
  • The Software-Defined Data Center starts to materialize. The ultimate goal for SDS is the software defined data center. Implementing a Hyper-converged infrastructure (HCI) is important to reach that goal, but in order to achieve it HCI must deliver consistent and predictable performance to all elements of data center management, not just storage. SDS and HCI are the stepping stones for that goal.

Software Defined Storage Vendors

Now that we have an idea of what SDS is and what it can be used for, let’s take a look at the vendors that offer SDS solutions. I put together a vendor list below along with a brief description of the product that is based mostly on the company’s marketing materials.

SwiftStack

SwiftStack’s design goal is to make it easy to deploy, operate, and scale, as well as to provide the fastest experience when deploying and managing a private cloud storage system. Another key design element is to enable large-scale growth without any disruption to performance. It has no fixed hardware configurations and can be configured using any server hardware. It is also licensed for the amount of data capacity utilized, not the total amount of hardware capacity deployed, allowing organizations to pay-as-they-grow using annual licenses.

SwiftStack offers a reliable, massively scalable, software defined storage platform. It seamlessly integrates with existing IT infrastructures, running on standard hardware, and replicated across globally distributed data centers.

HPE StoreVirtual VSA

HPE StoreVirtual VSA is storage software that runs on commodity hardware in a virtual machine in any virtualized server environment, including VMware, Hyper-V, and KVM. It turns any media presented to it via the hypervisor into shared storage. It presents the storage to all physical and virtual hosts in the environment as an iSCSI array. Additionally, StoreVirtual VSA is part of an integrated family of solutions that all share the same storage operating system, including StoreVirtual arrays and HPE’s hyper-converged systems. It has a full enterprise storage feature set that provides the capabilities and performance you would expect from a traditional storage area network. It provides low cost data protection that delivers fast, efficient, and scalable backup and does not require dedicated hardware.

HPE StoreOnce VSA

StoreOnce VSA is a SDS solution that provides backup and recovery for virtualized environments. It enables organizations to reduce the cost of secondary storage by eliminating the need for a dedicated backup appliance. It shares the same deduplication algorithm and storage features as the StoreOnce Disk Backup family, including the ability to replicate bi-directionally from a physical backup appliance to SDS.

Metalogix StoragePoint

StoragePoint is a SharePoint storage optimization solution that offloads unstructured SharePoint content data, which is known as Binary Large Objects (BLOBs), from SharePoint’s underlying SQL database to alternate tiers of storage. BLOBs quickly overwhelm the SQL database that powers SharePoint, resulting in poor performance that is expensive to maintain and grow. Many rich media formats are too large to store in SQL Server due to technical limitations, resulting in a collaboration platform that cannot address all the content needs of an organization.

StoragePoint optimizes SharePoint Storage using Remote Blob Storage (RBS). It provides a method to address file content storage issues related to large file size, slow user query times and backup failures. It externalizes SharePoint content so it can be stored and managed anywhere. An automated rules engine places content in the most appropriate storage locations based on the type, criticality, age and frequency of use.

VMware vSAN

Previously known as VMware Virtual SAN, vSAN addresses hyper-converged infrastructure systems. It aggregates locally attached disks in a vSphere cluster to create storage that can be provisioned and managed from vCenter and vSphere Web Client tools. This enables organizations to evolve their existing virtualization environment with the only natively integrated vSphere solution and leverages multiple server hardware platforms. It reduces TCO due to the cost savings of utilizing server side storage, with more affordable flash storage, on demand scaling, and simplified storage management. It can also be expanded into a complete SDS solution that can provide the foundation for a cloud architecture.

Using the VMware SDS model, the data plane that’s responsible for storing data and implementing data services such as replication and snapshots is virtualized by abstracting physical hardware resources and aggregating them into logical pools of capacity (called virtual datastores) that can be used and managed with a high degree of flexibility. By making the virtual disk the basic unit of management for storage operations in the virtual datastores, precise combinations of hardware resources and storage services can be configured and controlled independently for each virtual machine.

Microsoft S2D

Microsoft Storage Spaces Direct (or S2D) is part of Windows Server 2016. It can be combined with Storage Replica (SR) and Resilient File System (ReFS) cache tiering to create scale-out, converged and hyper-converged SDS infrastructure for Windows Server and Hyper-V environments. It works with existing tools and offers many flexible configuration and deployment options.

Infinidat InfiniBox

InfiniBox is based upon a fully abstracted set of software-driven storage functions layered on top of industry standard hardware, and delivers a fast, highly available, and easy-to-deploy storage system. Extreme reliability and performance are delivered through an innovative self-healing architecture, high performance double-parity RAID, and comprehensive end-to-end data verification capability. It also features an efficient data distribution architecture that uses all of the installed drives all the time. It has a large flash cache that delivers ultra-high performance, matching or exceeding 12GB/s throughput (yes, it’s a marketing number).

Pivot3

Pivot3’s virtual storage and compute operating environment, known as vSTAC, is designed to maximize overall resource utilization, providing efficient fault tolerance and giving IT the flexibility to deploy on a wide range of commodity x86 hardware. A distributed scale-out architecture pools compute and storage from each HCI node into high-availability clusters, accessible by every VM and application. Its Scalar Erasure Coding is said to be more efficient than network RAID or replication protection schemes, and it maintains performance during degraded mode conditions. Pivot3 owns multiple SDS patents, one covering their technology that creates a cross-node virtual SAN that can be accessed as a unified storage target by any application running on the cluster. By converging compute, storage and VM management, they automate system management with self-optimizing, self-healing and self-monitoring features. Their vCenter plugin provides a single pane of glass to simplify management of single and multi-site deployments.

EMC VIPR Controller

EMC ViPR Controller provides software-defined storage automation that centralizes and transforms multivendor storage into a simple and extensible platform. It also performs infrastructure provisioning on VCE Vblock Systems. It abstracts and pools resources to deliver automated, policy-driven storage-as-a-service on demand through a self-service catalog across a multi-vendor environment. It integrates with cloud stacks like VMware, OpenStack, and Microsoft, and offers RESTful APIs for integration with other management systems along with multi-vendor platform support.

EMC ECS (Elastic Cloud Storage)

ECS provides a complete software-defined cloud storage platform for commodity infrastructure. Deployed as a software-only solution or as a turnkey appliance, ECS offers all the cost savings of commodity infrastructure with enterprise reliability, availability, and serviceability. EMC launched it as its next-generation hyperscale object-based storage solution, originally designed to overcome the limitations of Centera. It is used to store, archive, and access unstructured content at scale. It’s designed to allow businesses to deploy massively scalable storage in a private or public cloud, and allows customizable metadata for data placement, protection, and lifecycle policies. Data protection is provided by a hybrid encoding approach that utilizes local and distributed erasure coding for site-level and geographic protection.
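
Erasure coding, the protection mechanism mentioned above, spreads data plus computed parity across chunks so that lost pieces can be rebuilt from the survivors. The simplest possible instance is single XOR parity; real systems like ECS use more sophisticated Reed-Solomon-style codes that survive multiple simultaneous losses, but this toy Python sketch illustrates the principle:

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks):
    """Compute a single parity chunk over equal-length data chunks."""
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return parity

def reconstruct(surviving, parity):
    """Rebuild one lost data chunk from the surviving chunks plus parity."""
    missing = parity
    for c in surviving:
        missing = xor_bytes(missing, c)
    return missing

data = [b"chunk-one!", b"chunk-two!", b"chunk-3333"]  # equal-length chunks
parity = encode(data)
# lose the middle chunk, then rebuild it from the rest
rebuilt = reconstruct([data[0], data[2]], parity)
print(rebuilt)  # b'chunk-two!'
```

The capacity overhead here is one parity chunk per stripe; codes that tolerate more simultaneous failures add more parity chunks, trading raw capacity for resilience, which is exactly the knob a "cheap and deep" object store tunes.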

EMC ScaleIO

ScaleIO is a software-only, server-based storage area network that combines storage and compute resources into a single layer. It uses existing local disks and LANs so that the host can realize a virtual SAN with all the benefits of external storage. It provides virtual and bare metal environments with scale, elasticity, multi-tenant capabilities, and the service quality that enables service providers to build high performance, low cost cloud offerings. It enables full data protection and persistence; the software ensures enterprise-grade resilience through meshed mirroring of randomly sliced and distributed data chunks across multiple servers.

IBM Spectrum Storage

IBM Spectrum software is part of a comprehensive family of software-defined storage solutions. It is specifically structured to meet changing storage needs, including hybrid cloud, and is designed for organizations just starting out with software-defined storage as well as those with established infrastructures that need to expand their capabilities.

NetApp StorageGrid

NetApp’s SDS offerings include the NetApp clustered Data ONTAP OS, NetApp OnCommand, the NetApp FAS series, and NetApp FlexArray virtualization software. Features of NetApp’s SDS include virtualized storage services with effective provisioning of data storage and access based on service levels, multiple hardware options that support deployment on a variety of enterprise platforms, and application self-service, which delivers APIs for workflow automation and custom applications.

DataCore

DataCore’s storage virtualization software allows organizations to seamlessly manage and scale data storage architectures, delivering massive performance gains at a much lower cost than solutions offered by legacy storage hardware vendors. DataCore has a large customer base around the globe. Their adaptive, self-learning and self-healing technology eases management, and their solution is completely hardware agnostic.

Nexenta

Nexenta integrates software-only “Open Source” collaboration with commodity hardware. Their software is installed in thousands of companies around the world serving a wide variety of workloads and business-critical situations. It powers some of the world’s largest cloud deployments. With their complete Software-Defined Storage portfolio and recent updates to NexentaConnect for VMware VSAN and the launch of NexentaEdge, they offer a robust SDS solution.

Hitachi Data Systems G-Series

Hitachi Virtual Storage Platform G1000 provides the always-available, agile and automated foundation needed for an on-prem or hybrid cloud infrastructure. Their software enables IT agility and a low TCO. They deliver a top-notch combination of enterprise-ready software-defined storage, global storage virtualization, and efficient, scalable, high-performance hardware. It also supports self-managing, policy-driven management. Their SDS implementation includes the Hitachi Virtual Storage Platform G1000 (VSP G1000) and the Hitachi Storage Virtualization Operating System (SVOS).

StoneFly SCVM

The StoneFly Storage Concentrator Virtual Machine (SCVM) Software-Defined Unified Storage (SDUS) is a Virtual IP Storage Software Appliance that creates a virtual network storage appliance using the existing resources of an organization’s virtual server infrastructure. It is a virtual SAN storage platform for VMware vSphere ESX/ESXi, VMware vCloud and Microsoft Hyper-V environments and provides an advanced, fully featured iSCSI, Fibre Channel SAN and NAS within a virtual machine to form a Virtual Storage Appliance.

Nutanix

Nutanix’s software-driven Xtreme Computing Platform natively converges compute, virtualization and storage into a single solution. It offers predictable performance, linear scalability and cloud-like infrastructure consumption.

PernixData FVP

PernixData FVP software is a 100% software solution that clusters server flash and RAM to create a low latency I/O acceleration tier for any shared storage environment.

StorPool

StorPool is a storage software solution that runs on standard commodity servers and builds a scalable, high-performance SDS system. It offers great flexibility and can be deployed either converged or on separate storage nodes. It has an advanced fully-distributed architecture and is one of the fastest and most efficient cloud-ready block-storage software solutions available.

Hedvig

Hedvig collapses traditional tiers of storage into a single, software platform designed for primary and secondary data. Their patented “Universal Data Plane” architecture stores, protects, and replicates data across multiple private and public clouds. The Hedvig Distributed Storage Platform is a single software-defined storage solution that is designed to meet the needs of primary, secondary, and cloud data requirements. It is a distributed system that provides cloud-like elasticity, simplicity, and flexibility.

Amax StorMax SDS

StorMax SDS is a highly available software-defined storage solution that delivers unified file and block storage services with enterprise-grade data management, data integrity, and performance that can scale from tens of terabytes to petabytes. It is seamlessly integrated with NexentaStor, and the plug-and-play appliances are designed to be a simple swap-in replacement for legacy block and file storage appliances, offering unlimited file system sizes, unlimited snapshots and clones, and inline data reduction for additional storage cost savings. It’s well suited for VMware, OpenStack, or CloudStack backend storage, generic NAS file services, home directory storage, near-line archive, and large backup & archive repositories.

Atlantis USX

USX is Atlantis’ SDS software solution. It includes policy-based management of storage resources, storage pooling and automation of storage functions. It also provides a REST API to allow organizations to automate storage functions. It promises to deliver the performance of an all-flash storage array at a lower cost than that of traditional SAN or NAS. The marketing materials state that you can pool any SAN, NAS or DAS storage and accelerate its performance by up to 10x, while at the same time consolidating storage to increase storage capacity by up to 10x.

LizardFS

The LizardFS SDS solution is a distributed, scalable, fault-tolerant and highly available file system that runs on commodity hardware. It allows users to combine disk space located on many servers into a single namespace that is visible on Unix and Windows. LizardFS ensures file security by keeping all data in multiple replicas spread over the available servers. Disk and server failures are handled transparently without any downtime or loss of data. As your storage requirements grow, it scales by adding new servers without any downtime. The system automatically redistributes data to the newly added servers as it continuously balances disk usage across all connected nodes. Removing servers is as easy as adding a new one.

That’s a large portion of the SDS vendor playing field, but there are others. You can also check out the offerings from Maxta, Tarmin, Coraid, Cohesity, Scality, Starwind, and Red Hat Storage Server (Ceph).

There were long pauses in between as I worked on this blog post in an on and off manner, so I may make some editorial changes and additions in the coming weeks.  Feedback is welcomed and appreciated.

What’s the difference between a Storage Engineer and a Storage Administrator?

During my recent job search a recruiter asked me if there was a difference between a Storage Administrator and a Storage Engineer. He had no idea. I was initially a bit surprised at the question, as I’ve always assumed that it was widely accepted that an engineer is more involved in the architecture of systems whereas an administrator is responsible for managing them.  While his question was about Storage, it applies to many different disciplines in the IT industry as both the Administrator and Engineer titles are routinely appended to “System”, “Network”, “Database”, etc.  Many companies use the terms completely interchangeably and many storage professionals perform both roles. In my experience HR Departments generally label all technical IT employees as “Analysts”, no matter which discipline you specialize in.

From my own personal perspective, I present the following definitions:

Storage Engineer: A person who uses a disciplined, methodical approach to the design, realization, technical management, operation, and life-cycle management of a storage environment.

Storage Administrator: A person who is responsible for the daily upkeep, technical configuration, support, and reliable operation of a storage environment.

To all of my recruiter friends and associates, please think of the System Engineer as the person who is responsible for laying the foundation and ensuring that it is implemented properly.  Afterwards, the Administrator is responsible for carrying out the daily routines and supporting the vision of the engineer.

Does one title outrank the other? No, in my opinion they’re equal. As I mentioned before, HR departments generally don’t distinguish between the two, both are usually in the same pay grade, and the overlap of responsibilities is such that many people perform the duties of both regardless of which title they hold.

In my experience performing both roles at multiple companies, a Storage Engineer at any given company is given a problem, and in a nutshell their job is to find the best solution for it. What does that process look like? The Engineer investigates a multitude of vendors and technologies and develops the best possible combination of network, compute, and storage resources along with all the required software features and functionality. Storage industry trends and new technologies are usually researched as well. Following that research, they determine the best fit based on cost, the specific business use case, expansion and scalability, and performance testing in a lab or onsite with a proof of concept, all while taking into account the ease of administration and supportability of the hardware and software from both a vendor and internal admin standpoint.

A Storage Administrator is generally heavily involved in this decision-making process, as they will be responsible for tuning the environment to optimize performance and reliability for the customer; their opinion during the research phase is crucial. Administrator feedback based on job experience is critical in the research and testing phase across the board, and it’s simply not something that’s taught in a book or degree program.
With the considerable overlap between the two jobs in most companies, it’s not surprising that the titles are used so interchangeably and that there is general market confusion about the difference. A company isn’t going to hire a group of storage administrators to simply sit at a desk and monitor a group of storage arrays; they will be required to understand the process of building a complex storage environment and how it fits into the specific business environment. Engineering and administering a Petabyte-scale global storage environment is very complex no matter which title you’re given. A seasoned Storage Administrator or Storage Engineer should both be up to the task, regardless of how you define their roles. At the end of the day, I’m proud to be a SeniorSANStorageAnalystAdminEngineerSpecialist professional. 🙂

Did I get any of this wrong?  Share your feedback with me in the comments section.

Generating and installing SSL requests, keys, and certificates on EMC ECS

ecshttps

In this post I’ve outlined the procedure for generating SSL requests, keys, and certificates for ECS, as well as the process for uploading them to ECS and verifying the installed certificates afterwards.  This was a new process for me, so I created very detailed documentation as I went; hopefully it will help someone else out.

I mention using the ECS CLI a few times in this document.  If you’d like to use the ECS CLI, I have another blog post here that reviews the details on its implementation.  It requires Python.

Part 1: Generating SSL Requests, Keys, and Certificates.

The procedure for generating SSL requests, keys, and certificates is unnecessary if you will be given the certificate and key files from a trusted source within your organization.  If you’ve been provided the certificate and key file already, you can skip to Part 2, which details how to upload and import the keys and certificates to ECS.  This is of course a sample procedure showing how I did it in my organization; specific details may have to be altered depending on the use case.

a.       Prepare to create (or edit) the SSL request file

  • The first step in this process is to generate an SSL request file.  As OpenSSL does not allow you to pass Subject Alternative Names (SANs) through the command line, they must be added to a configuration file first.
  • On ECS, the OpenSSL configuration file is located at /etc/ssl/openssl.cnf by default.  Copy that file to a temporary directory where you will be generating your certificates.
  • Run this command to copy the request file for editing:
admin@ecs-node1:~# cp /etc/ssl/openssl.cnf /tmp/request.conf

b.      Make changes to the request.conf file.  Edit it with vi and make the edits outlined below.  Each bullet reviews a specific section of the file where changes are required.

  • [ alternate_names ] Edit the [ alternate_names ] section.  In a typical request file these are included at the very end of the configuration file.  Note that this request example includes the wildcard as the first entry (which is required by S3).

Sample:

DNS.1 = *.prod.os.example.com
DNS.2 = atmos.example.com
DNS.3 = swift.example.com
  • [ v3_ca ]  Edit the [ v3_ca ] section.

This line should be added directly below the [ v3_ca ] header:

subjectAltName = @alternate_names

Search for “basicConstraints” in the [ v3_ca ] section.  You may see “basicConstraints = CA:true”.  Make sure it is commented out – add the # to the beginning of the line.

# basicConstraints = CA:true

Search for “keyUsage = cRLSign, keyCertSign” in the [ v3_ca ] section.  You may see “# keyUsage = cRLSign, keyCertSign”.  Make sure it is commented out.

# keyUsage = cRLSign, keyCertSign
  • [ v3_req ] Verify the configuration in the [ v3_req ] section.  The line below must exist.
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
  • [ usr_cert ] Verify the configuration in the [ usr_cert ] section.

Search for the entry below and uncomment it (add it if it doesn’t exist).

extendedKeyUsage = serverAuth

The following line is likely to already exist.  The authorityKeyIdentifier line exists in multiple locations in the config file; in the [ v3_ca ] section it must have “always,issuer” as its option.

# authorityKeyIdentifier=keyid:always,issuer
  • [ req ] Verify the configuration in the [ req ] section.

For our dev environment, in the testing phase with a self-signed certificate, the following entry was made six lines below the [ req ] header:

x509_extensions = v3_ca         # The extensions to add to the self-signed cert

The x509_extensions line also exists in the [ CA_default ] section.  This was left untouched in my configuration.

x509_extensions = usr_cert      # The extensions to add to the cert

Change based on certificate type.  Note that this will change if you’re not using a self-signed certificate, which I did not test.  The req_extensions line exists in the default configuration file and is commented out.

x509_extensions = v3_ca           #  for a self-signed cert
req_extensions = v3_ca              # for cert signing req

Change the default_bits entry.

Search for default_bits = 1024, it should be default_bits = 2048
  • [ CA_default ]  In the CA_default section, uncomment or add the line below.  The line exists in the default configuration file and simply needs to be uncommented.
copy_extensions = copy

The following additional changes were made in my configuration:

Search for dir = ./demoCA, change to dir = /etc/pki/CA
Search for default_md = default, change to default_md = sha256
  • [ req_distinguished_name ] Verify the configuration in the [ req_distinguished_name ] section.

The following changes were made in my configuration:

countryName_default = AU, change to countryName_default = XX
stateOrProvinceName_default = SomeState, change to stateOrProvinceName_default = Default Province
localityName_default doesn’t exist in the default file, added as localityName_default = Default City
0.organizationName_default = Internet Widgits Pty Ltd, change to 0.organizationName_default = Default Company
commonName = Common Name (e.g. server FQDN or YOUR name), it was changed to commonName = Common Name (eg, your name or your server’s hostname)
  • [ tsa_config1 ] Verify the configuration in the [ tsa_config1 ] section.

The following additional change was made in my configuration:

digests = md5, sha1, change to digests = sha1, sha256, sha384, sha512
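Pulling the edits above together, a minimal sketch of the relevant sections of request.conf looks like the following (section contents abbreviated; the real file contains many more lines that are left at their defaults):

```ini
[ req ]
default_bits       = 2048
x509_extensions    = v3_ca          # for a self-signed cert
distinguished_name = req_distinguished_name

[ CA_default ]
dir             = /etc/pki/CA
default_md      = sha256
copy_extensions = copy

[ v3_req ]
keyUsage = nonRepudiation, digitalSignature, keyEncipherment

[ v3_ca ]
subjectAltName = @alternate_names
# basicConstraints = CA:true
# keyUsage = cRLSign, keyCertSign

[ usr_cert ]
extendedKeyUsage = serverAuth

[ alternate_names ]
DNS.1 = *.prod.os.example.com
DNS.2 = atmos.example.com
DNS.3 = swift.example.com
```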

c.       Generate the Private Key.  Save the key file in a secure location; the security of your certificate depends on the private key.

  • Run this command to generate the private key:
admin@ecs-node1:~# openssl genrsa -des3 -out server.key 2048
Generating RSA private key, 2048 bit long modulus
............................................................+++
Enter pass phrase for server.key: <enter a password>
Verifying - Enter pass phrase for server.key: <enter a password>
  • Modify the permissions of the server key:
admin@ecs-node1:~# chmod 0400 server.key
  • Now that the private key is generated, you can either create a certificate request (the .csr file) to request a certificate from a CA or generate a self-signed certificate.  In the samples below, I’m setting the Common Name (CN) on the certificate to *.os.example.com.

d.      Generate the Certificate Request.  Next we will look at the steps used to generate a certificate request.

  • Run the command below to generate the request.
admin@ecs-node1:~# openssl req -new -key server.key -config request.conf -out server.csr
Enter pass phrase for server.key: <your passphrase from above>
  • Running the command above will prompt for additional information that will be incorporated into the final certificate request (the Distinguished Name, or DN). Some fields may be left blank and some will have default values; if you enter ‘.’ the field will be left blank.
Country Name (2 letter code) [US]: <Enter value>
State or Province Name (full name) [Province]: <Enter value>
Locality Name (eg, city) []: <Enter value>
Organization Name (eg, company) [Default Company Ltd]: <Enter value>
Organizational Unit Name (eg, section) []: <Enter value>
Common Name (e.g. server FQDN or YOUR name) []: <*.os.example.com>
Email Address []: <admin email>
  • Enter the following extra attributes to be sent with the certificate request:
A challenge password []: <optional>
An optional company name []: <optional>
  • Check request contents.  Use OpenSSL to verify the contents of the request and verify that the SANs are set correctly.
admin@ecs-node1:~# openssl req -in server.csr -text -noout
Certificate Request:
Data:
Version: 0 (0x0)
Subject: C=US, ST=North Dakota, L=Fargo, O=EMC, OU=ASD,
CN=*.os.example.com/emailAddress=admin@example.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
a7:5a:dc:ca:ff:73:53:6b:ab:a7:ff:7a:20:c1:ff:
   … <removed a portion of the output for this example> ..
ff:9e:66:ff:43:0a:fd:31:3d:69:b1:03:20:51:ff:
Exponent: 65537 (0x10001)
Requested Extensions:
X509v3 Subject Alternative Name:
DNS:os.example.com, DNS:atmos.example.com, DNS:swift.example.com
X509v3 Basic Constraints:
CA:FALSE
X509v3 Key Usage:
Digital Signature, Non Repudiation, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
Signature Algorithm: sha256WithRSAEncryption
ff:7a:f3:7d:8e:8d:37:8f:66:c8:91:16:c0:00:39:df:03:c1:
… <removed a portion of the output for this example> ..
ff:d9:68:ff:be:e4:4e:e1:78:16:67:47:14:01:31:32:0e:a2:
  • Now that the certificate request is completed, it may be submitted to the CA, who will then return a signed certificate file.
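The whole request flow above can also be run non-interactively, which is handy when scripting it. The sketch below uses a stripped-down request.conf, a placeholder passphrase (pass:changeit), and placeholder names; adjust everything to your environment:

```shell
# Minimal request.conf with only the sections this sketch needs.
cat > /tmp/request.conf <<'EOF'
[ req ]
default_bits       = 2048
distinguished_name = req_distinguished_name
req_extensions     = v3_req
[ req_distinguished_name ]
[ v3_req ]
keyUsage       = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alternate_names
[ alternate_names ]
DNS.1 = *.prod.os.example.com
DNS.2 = atmos.example.com
DNS.3 = swift.example.com
EOF

# Generate the encrypted private key without a passphrase prompt.
openssl genrsa -des3 -passout pass:changeit -out /tmp/server.key 2048
chmod 0400 /tmp/server.key

# Generate the request, supplying the DN with -subj instead of prompts.
openssl req -new -key /tmp/server.key -passin pass:changeit \
  -config /tmp/request.conf \
  -subj "/C=US/ST=North Dakota/L=Fargo/O=EMC/OU=ASD/CN=*.os.example.com" \
  -out /tmp/server.csr

# Confirm the SANs made it into the request.
openssl req -in /tmp/server.csr -noout -text | grep -A1 "Subject Alternative Name"
```

The req_extensions line in [ req ] is what pulls the v3_req section (and its SANs) into the request without any extra command-line arguments.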

e.      Generate a Self-Signed Certificate.  Generating a self-signed certificate is almost identical to generating the certificate request. The main difference is that instead of generating a request file, you add an -x509 argument to the openssl req command to generate a certificate file.

admin@ecs-node1:~#  openssl req -x509 -new -key server.key -config request.conf -out server.crt
Enter pass phrase for server.key: <your passphrase from above>
  • Running that command will prompt for additional information that will be incorporated into the certificate.  This is called a Distinguished Name (DN). Some fields may be left blank and some will have default values; if you enter ‘.’ the field will be left blank.
Country Name (2 letter code) [US]: <Enter value>
State or Province Name (full name) [Province]: <Enter value>
Locality Name (eg, city) []: <Enter value>
Organization Name (eg, company) [Default Company Ltd]: <Enter value>
Organizational Unit Name (eg, section) []: <Enter value>
Common Name (e.g. server FQDN or YOUR name) []: <*.os.example.com>
Email Address []: <admin email>
  • Enter the following extra attributes to be sent with the certificate request:
A challenge password []: <optional>
An optional company name []: <optional>
  • Check certificate contents.  Use OpenSSL to verify the contents of the certificate and verify that the SANs are set correctly.
admin@ecs-node1:~# openssl x509 -in server.crt -noout -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 9999999999990453326 (0x11fc66cf7c09d762)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=North Dakota, L=Fargo, O=EMC, OU=ASD, CN=*.os.example.com/
emailAddress=admin@example.com
Validity
Not Before: Oct 14 16:47:40 2014 GMT
Not After : Nov 13 16:47:40 2014 GMT
Subject: C=US, ST=Minnesota, L=Minneapolis, O=EMC, OU=ASD,
CN=*.os.example.com/emailAddress=admin@example.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
ff:bc:8f:83:7b:57:72:3d:70:ef:ff:d0:f9:97:ff:
   … <removed a portion of the output for this example> ..
ff:9e:66:86:43:0a:fd:ff:3d:69:b1:03:20:51:ff:
db:77
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Subject Alternative Name:
DNS:os.example.com, DNS:atmos.example.com, DNS:swift.example.com
X509v3 Basic Constraints:
CA:FALSE
X509v3 Key Usage:
Digital Signature, Non Repudiation, Key Encipherment
Signature Algorithm: sha256WithRSAEncryption
ff:bc:8f:83:7b:57:72:ff:70:ef:b9:d0:f9:97:ff:
   … <removed a portion of the output for this example> ..
ff:9e:66:ff:43:0a:fd:31:3d:69:ff:03:20:51:39:
db:77
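As with the request, the self-signed path can be run without prompts. The sketch below (all names are placeholders) uses -newkey to create the key and certificate in one step and -addext to supply the SANs inline; note that -addext requires OpenSSL 1.1.1 or later, and -days extends validity past the 30-day default visible in the output above:

```shell
# -nodes leaves the key unencrypted for brevity; subject and SANs are placeholders.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/selfsigned.key \
  -subj "/C=US/ST=North Dakota/L=Fargo/O=EMC/CN=*.os.example.com" \
  -addext "subjectAltName=DNS:*.prod.os.example.com,DNS:atmos.example.com,DNS:swift.example.com" \
  -days 365 -out /tmp/selfsigned.crt

# Quick check of the subject and validity window.
openssl x509 -in /tmp/selfsigned.crt -noout -subject -dates
```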

f.        Chain File. In either a self-signed or a CA signed use case, you now have a certificate file.  In the case of a self-signed certificate, the certificate is the chain file.  If your certificate was signed by a CA, you’ll need to append the intermediate CA cert(s) to your certificate.  I used a self-signed certificate in my implementation and did not perform this step.

  • Append the CA cert if it was signed by a CA.  Do not append the root CA certificate:
admin@ecs-node1:~# cp server.crt serverCertChain.crt
admin@ecs-node1:~# cat intermediateCert.crt >> serverCertChain.crt

Part 2: Upload the Keys and Certificates.  The next section outlines the process for installing the key and certificate pair on ECS.

a.       First log in to the management API to get a session token.  You will need the root password for the ECS node.

  • Run this command (change IP and password as needed): (ctrl+c to break)
admin@ecs-node1:/> curl -L --location-trusted -k https://10.10.10.10:4443/login -u "root:password" -v
  • Note that the prior command will leave the root password in the command history.  You can run it without the password and have it prompt you instead:
curl -L --location-trusted -k https://10.10.10.10:4443/login -v -u root
Enter host password for user 'root': <enter password>
  • From the output of the command above, copy the value of the X-SDS-AUTH-TOKEN response header and set an environment variable to hold the token for later use (the commands below reference it as $TOKEN).
admin@ecs-node1:/> export TOKEN=x-sds-auth-token-value
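Rather than copying the header value by hand, the token can be captured into the variable directly. This is a sketch based on the assumption that the token comes back in an X-SDS-AUTH-TOKEN response header; a saved header dump stands in for the live curl output here:

```shell
# In practice, save the response headers from the login request, e.g.:
#   curl -sk -D /tmp/ecs_headers.txt -o /dev/null --location-trusted -u root https://10.10.10.10:4443/login
# A sample header dump (hypothetical token value) stands in for the real one:
cat > /tmp/ecs_headers.txt <<'EOF'
HTTP/1.1 200 OK
X-SDS-AUTH-TOKEN: BAAcT0tPSzFQ-example-token
Content-Type: application/xml
EOF

# Pull the token out of the dump (case-insensitive header match, strip any CR)
# and export it under the name the later commands reference ($TOKEN).
export TOKEN=$(awk -F': ' 'tolower($1) == "x-sds-auth-token" {print $2}' /tmp/ecs_headers.txt | tr -d '\r')
echo "$TOKEN"  # prints BAAcT0tPSzFQ-example-token
```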

b.      Commands used for installing a key & certificate pair for Management requests/users:

  • Use ECSCLI to run it from a client PC:
admin@ecs-node1:/> python ecscli.py vdc_keystore update -hostname <ecs host ip> -port 4443 -cf <cookiefile> -privateKey <privateKey> -certificateChain <certificateChainFile>
  • Use CURL to run it directly from the ECS management console.  Note that this command uses the TOKEN environment variable that was set earlier.

Sample Command:

admin@ecs-node1:/> curl -svk -H "X-SDS-AUTH-TOKEN: $TOKEN" -H "Content-type: application/xml" -H "X-EMC-REST-CLIENT: TRUE" -X PUT -d "<rotate_keycertchain><key_and_certificate><private_key>`cat privateKeyFile`</private_key><certificate_chain>`cat certChainFile`</certificate_chain></key_and_certificate></rotate_keycertchain>" https://localhost:4443/vdc/keystore

c.       Commands used for installing a key & certificate pair for Object requests/users.  Use the actual private key and certificate chain files here, and a successful response code should be an HTTP 200.

  • Use ECSCLI to run it from a client PC:
admin@ecs-node1:/> python ecscli.py keystore update -hostname <ecs host ip> -port 4443 -cf <cookiefile> -pkvf <privateKey> -cvf <certificateChainFile>
  • Use CURL to run it directly from the ECS management console.  If curl is used, the XML format is required so that carriage returns and similar characters in the key and certificate files are handled correctly via the embedded `cat` commands.

Sample Command:

admin@ecs-node1:/> curl -svk -H "X-SDS-AUTH-TOKEN: $TOKEN" -H "Content-type: application/xml" -H "X-EMC-REST-CLIENT: TRUE" -X PUT -d "<rotate_keycertchain><key_and_certificate><private_key>`cat privateFile.key`</private_key><certificate_chain>`cat certChainFile.pem`</certificate_chain></key_and_certificate></rotate_keycertchain>" https://localhost:4443/object-cert/keystore

d.      Important Notes:

  • Though this is the object certificate to be used for object requests sent on port 9021, the upload command is a management command which is sent on port 4443.
  • Once this is done it can take up to 2 hours for the certificate to be distributed to all of the nodes.
  • The certificate is immediately distributed upon the service restart of the node where the certificate was uploaded.

e.      Restart management services to propagate the management certificate.  Using viprexec will run the command on all of the nodes in the cluster.

admin@ecs-node1:/> sudo -i viprexec -i -c '/etc/init.d/nginx restart;sleep 10;/etc/init.d/nginx status'

Output from host : 192.168.1.1
Stopping nginx service ..done
Starting nginx service
..done
nginx service is running (pid=75447)

Output from host : 192.168.1.2
Stopping nginx service ..done
Starting nginx service
..done
nginx service is running (pid=85534)

Output from host : 192.168.1.3

Stopping nginx service ..done
Starting nginx service
..done
nginx service is running (pid=87325)

Output from host : 192.168.1.4
Stopping nginx service ..done
Starting nginx service
..done
nginx service is running (pid=59112)

Output from host : 192.168.1.5
Stopping nginx service ..done
Starting nginx service
..done
nginx service is running (pid=77312)

f.        Verify that the certificate was propagated to each node.  The output will show the certificate; scroll up and verify all of the information is correct.  At a minimum, the first and last node should be checked.

admin@ecs-node1:/> openssl s_client -connect 10.10.10.1:4443 | openssl x509 -noout -text 
admin@ecs-node1:/> openssl s_client -connect 10.10.10.2:4443 | openssl x509 -noout -text 
admin@ecs-node1:/> openssl s_client -connect 10.10.10.3:4443 | openssl x509 -noout -text 
admin@ecs-node1:/> openssl s_client -connect 10.10.10.4:4443 | openssl x509 -noout -text 
admin@ecs-node1:/> openssl s_client -connect 10.10.10.5:4443 | openssl x509 -noout -text

g.       Wait at least 2 minutes and then restart the object head services to propagate the object head certificate:

admin@ecs-node1:/> sudo -i viprexec -i -c 'kill \`pidof dataheadsvc\`'
  • Wait for the service to come back up, which you can verify with the next few commands.
  • Run netstat to verify the datahead service is listening.
admin@ecs-node1:/tmp> netstat -an | grep LIST | grep 9021
tcp        0      0 10.10.10.1:9021     :::*    LISTEN
admin@ecs-node1:/tmp> sudo netstat -anp | grep 9021
tcp  0  0 10.10.10.1:9021 :::* LISTEN 67064/dataheadsvc
  • You can run the ps command to verify the start time of the datahead service compared to the current time on the node.
admin@ecs-node1:/tmp> ps -ef | grep dataheadsvc
storage+  29052  11163  0 May19 ? 00:00:00 /opt/storageos/bin/monitor -u 444 -g 444 -c / -l /opt/storageos/logs/dataheadsvc.out -p /var/run/dataheadsvc.pid /opt/storageos/bin/dataheadsvc file:/opt/storageos/conf/datahead-conf.xml
storage+  57064  29052 88 20:27 ? 00:00:51 /opt/storageos/bin/dataheadsvc -ea -server -d64 -Xmx9216m -Dproduct.home=/opt/storageos -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/storageos/logs/dataheadsvc-78517.hprof -XX:+PrintGCDateStamps -XX:+PrintGCDetails -Dlog4j.configurationFile=file:/opt/storageos/conf/dataheadsvc-log4j2.xml -Xmn2560m -Dsun.net.inetaddr.ttl=0 -Demc.storageos.useFastMD5=1 -Dcom.twmacinta.util.MD5.NATIVE_LIB_FILE=/opt/storageos/lib/MD5.so -Dsun.security.jgss.native=true -Dsun.security.jgss.lib=libgssglue.so.1 -Djavax.security.auth.useSubjectCredsOnly=false -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC -XX:+ExplicitGCInvokesConcurrent -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime -XX:+PrintTenuringDistribution -XX:+PrintGCDateStamps -Xloggc:/opt/storageos/logs/dataheadsvc-gc-9.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=3 -XX:GCLogFileSize=50M com.emc.storageos.data.head.Main file:/opt/storageos/conf/datahead-conf.xml

admin@ecs-node1:/tmp> date
Wed Jun  7 20:28:41 UTC 2017

Part 3: Verify the Installed Certificates.  The object certificate and management certificate each have their own GET request to retrieve the installed certificate.  Note that these commands are management requests.

a.       Verify the Installed (Active) Management Certificate

An alternative to this method, which I used personally, is the OpenSSL s_client command.  The details in step 3a below aren’t necessary if you are going to use s_client for verification; I’ve simply included them here for completeness.  You can skip to step 3b for the s_client method.

  • Use ECSCLI to run it from a client PC:
python ecscli.py vdc_keystore get -hostname <ecs host ip> -port 4443 -cf <cookiefile>
  • Use CURL to run it directly from the ECS management console:
curl -svk -H "X-SDS-AUTH-TOKEN: $TOKEN" https://10.10.10.1:4443/vdc/keystore

Verify the installed (active) Object Certificate.  This can be done using a variety of methods, outlined below.

  • Use ECSCLI to run it from a client PC:
python ecscli.py keystore show -hostname <ecs host ip> -port 4443 -cf <cookiefile>
  • Use CURL to run it directly from the ECS management console:
curl -svk -H "X-SDS-AUTH-TOKEN: $TOKEN" https://10.10.10.1:4443/object-cert/keystore

b.      The certificate presented by a port can also be verified using OpenSSL’s s_client tool.  If you used the method in step 3a, this is unnecessary as it will give you the same information.

  • Sample command syntax:
openssl s_client -connect host:port -showcerts
  • The command syntax I used and some sample output from my ECS environment are below.  Verify the certificate on the last node as well as the expected SAN entries.
openssl s_client -connect 10.10.10.1:9021 | openssl x509 -noout -text
openssl s_client -connect 10.10.10.1:9021 -showcerts
CONNECTED(00000003)
depth=0 C = US, ST = North Dakota, L = Fargo, OU = server, O = CompanyName Worldwide, CN = *.nd.dev.ecs.CompanyName.int
verify error:num=18:self-signed certificate
verify return:1
depth=0 C = US, ST = North Dakota, L = Fargo, OU = server, O = CompanyName Worldwide, CN = *.nd.dev.ecs.CompanyName.int
verify return:1
---
Certificate chain
0 s:/C=US/ST=North Dakota/L=Fargo/OU=server/O=CompanyName Worldwide/CN=*.nd.dev.ecs.CompanyName.int
   i:/C=US/ST=North Dakota/L=Fargo/OU=server/O=CompanyName Worldwide/CN=*.nd.dev.ecs.CompanyName.int
-----BEGIN CERTIFICATE-----
MIIFQDCCBCigAwIBAgIJANzBojR+ij2xMA0GCSqGSIb3DQEBBQUAMIGRMQswCQYD
VQQGEwJVUzERMA8GA1UECBMITWlzc291cmkxFDASBgNVBAcTC1NhaW50IExvdWlz
   … <output truncated for this example> …

c.       The process is now complete.  You can have your application team test SSL access to ensure everything is working properly.

Installing the EMC ECS CLI Package

Below is a brief outline on installing EMC’s ECS CLI package.  I have another blog post that outlines all of the ECSCLI commands here.

Getting Started

Prerequisites:

Install Python Requests Package:

  • Versions of ECSCLI prior to 3.x may require a manual install of the python requests package.  When I installed v3.1.9, the pip install process appears to have taken care of installing the requests package for me, but I saw reports of this issue while reading other documentation.  Either way, you can manually install the requests package by using “pip install requests” or by downloading the code from GitHub and running “python setup.py install”.

Install ECSCLI using Python PIP:

  • There are frequent updates and fixes being made to the ECSCLI package. The latest version of ECSCLI can always be downloaded and installed via pip using “pip install ecscli” from a Windows command prompt.  pip will be in your system path once you’ve installed Python, so it can be run from any directory.  If you want to archive a copy, use “pip download ecscli” rather than “pip install ecscli”.  As an alternative, you can also find the ECSCLI install package available for download at EMC’s support site (v2 is available here).

ECS CLI PIP Installation and Configuration

You will need to set up a configuration profile once ECSCLI is installed.  Configuration profiles address issues with older versions of the ECSCLI regarding authentication and python dependencies.  A profile simply contains the hostname and port along with an existing management user who will be authenticating to that host.  Several profiles can be created but only one can be active.  Once the active profile is set, ECSCLI will then use that profile for authenticating and sending commands.

To install the ecscli via pip:

pip install ecscli

Collecting ecscli
Downloading ecscli-2.2.0a5.tar.gz (241kB)
100% |████████████████████████████████| 245kB 568kB/s
Requirement already satisfied (use --upgrade to upgrade): requests in ./anaconda/envs/ecscli_demoenv/lib/python2.7/site-packages (from ecscli)
Building wheels for collected packages: ecscli
Running setup.py bdist_wheel for ecscli ... done
Stored in directory: /Users/conerj/Library/Caches/pip/wheels/92/7f/c3/129ffe5cd1b3b20506264398078bdd886c27fefe89b062b711
Successfully built ecscli
Installing collected packages: ecscli
Successfully installed ecscli-2.2.0a5

To see a list of profiles:

ecscli config list

Running without an acive config profile
list of existing configuration profiles:

Since the ecscli was just installed, no profiles exist yet.

Once you have an active profile, the output will look like this:

Running with config profile: C:\python\ecscli/ecscliconfig_demouser_.json
user: root host:port: 10.10.10.1:4443
list of existing configuration profiles:
ACTIVE  |PROFILE   |HOSTNAME   |PORT   |MGMT USER   |ECS VERSION
----------------------------------------------------------------
        |demouser  |10.10.10.1 |4443   |root        |3.0

To create a profile:

ecscli config -pf demoprofile

Running without an acive config profile
Please enter the default ECS hostname or ip (127.0.0.1):10.10.10.11
Please enter the default command port (4443):
Please enter the default user for the profile (root):
Entered saveConfig profileName = demoprofile
will be saved to base path: /Users/demo_user/ecscliconfig_
Saving profile config to: /Users/demo_user/ecscliconfig_demoprofile_.json
list of existing configuration profiles:
     * demoprofile - hostname:10.10.10.11:4443       user:root

Normally one profile will always be active.  Because this is the first time a profile is being created, ECSCLI will run without an active profile. The CLI will prompt the user to enter the hostname, IP, port and management user for the profile. The “*” shows the active profile that will be used. Several profiles can be configured, however only one profile can be active at a time. The profiles are stored in .json files in the home directory with the name prefix “ecscliconfig_”.

To see a list of profiles and the active profile:

ecscli config list

Running with config profile: demoprofile
user: demo_user    host:port: 10.10.10.10:4443
list of existing configuration profiles:
    * demoprofile2 - hostname:10.10.10.11:4443 user:demouser
      demoprofile  - hostname:10.10.10.10:4443 user:root

The currently active profile is denoted by “*” before the profile name.

To change the active profile:

ecscli config set -pf demoprofile2

Running with config profile: demoprofile2
user: demo_user    host:port: 10.10.10.11:4443
list of existing configuration profiles:
   * demoprofile2 - hostname:10.10.10.11:4443 user:demouser
     demoprofile  - hostname:10.10.10.10:4443 user:root

To delete a profile:

ecscli config delete -pf demoprofile

Running with config profile: demoprofile
user: root  host:port: 10.10.10.10:4443
list of existing configuration profiles:
* demoprofile2 - hostname:10.10.10.11:4443 user:demouser

Since the currently active profile was deleted in this example, the ecscli chose another profile to set as the active profile.

The ecscli configuration handles the “-hostname” and “-port” arguments and manages the tokens for subsequent management requests.  Authentication is still required, but this and all other requests are simplified since cookie-related arguments are no longer required.

To Authenticate:

ecscli authenticate

Running with config profile: demoprofile2
user: root  host:port: 10.10.10.10:4443
Password :
authentication result: root : Authenticated Successfully
/Users/demo_user/demo_profile/rootcookie : Cookie saved successfully

Another sample command:

This command example will list the storage pools:

ecscli objectvpool list

Running with config profile: demoprofile
user: root    host:port: 10.10.10.10:4443
{'global_data_vpool': [{'isAllowAllNamespaces': True, 'remote': None, 'name': 'lab_env', 'enable_rebalancing': True, 'global': None, 'creation_time': 1033186012844, 'isFullRep': False, 'vdc': None, 'inactive': False, 'varrayMappings': [{'name': 'urn:storageos:VirtualDataCenterData:823c6f4c-bda2-6ca2-69d7-110df3e9f022', 'value': 'urn:storageos:VirtualArray:19f03490-3f30-25dd-5f5c-8b208f64e3f0'}], 'id': 'urn:storageos:ReplicationGroupInfo:8066234b-bdc2-6234-f066-81f0aa61e7bf:global', 'description': ''}]}
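Output like the above is a Python dict literal, so it can be parsed and filtered directly in a short script. A sketch (the dict below is trimmed from the sample output, keeping only the keys it uses):

```python
import ast

# Trimmed copy of the objectvpool output above (most keys omitted).
raw = "{'global_data_vpool': [{'name': 'lab_env', 'inactive': False}]}"

# The CLI prints a Python dict literal, so ast.literal_eval can parse it safely
# without executing arbitrary code the way eval() would.
response = ast.literal_eval(raw)

# Keep only the names of pools that are not marked inactive.
pools = [p['name'] for p in response['global_data_vpool'] if not p['inactive']]
print(pools)  # ['lab_env']
```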
