Machine Learning, Cognitive Computing, and the Storage Industry

Following on from my recent posts about object storage and software defined storage, this is another topic that interested me enough to do a bit of research, both on the subject in general and on how it relates to the industry I work in.  I discovered that there is a wealth of information on the topics of Machine Learning, Cognitive Computing, Artificial Intelligence, and Neural Networking, so much that writing a summary is difficult to do.  Well, here's my attempt.

There is pressure in the enterprise software space to incorporate new technologies in order to keep up with the needs of modern businesses. As we move farther into 2017, I believe we are approaching another turning point in technology, where many concepts that were previously limited to academic research or industry niches are now being considered for mainstream enterprise software applications.  I believe you'll see machine learning and cognitive systems becoming more and more visible in the coming years in the enterprise storage space. For the storage industry, this is very good news. As this technology takes off, it will result in the need to retain massive amounts of unstructured data in order to train the cognitive systems. Once machines can learn for themselves, they will collect and generate a huge amount of data to be stored, intelligently categorized and subsequently analyzed.

The standard joke about artificial intelligence (or machine learning in general) is that, like nuclear fusion, it has been the future for more than half a century now.  My goal in this post is to define the concepts, look at ways this technology has already been implemented, look at how it affects the storage industry, and investigate use cases for this technology.  I’m writing this paragraph before I start, so we’ll see how that goes. 🙂

What is Cognitive Computing?

Cognitive computing is the simulation of human thought processes using computerized modeling (the best-known example is probably IBM's Watson). It incorporates self-learning systems that use data mining, pattern recognition and natural language processing to imitate the way our brains process thoughts. The goal of cognitive computing is to create automated IT systems that are capable of solving problems without requiring human assistance.

This sounds like the stuff of science fiction, right? HAL (from the movie "2001: A Space Odyssey") came to the logical conclusion that his crew had to be eliminated. It's my hope that intelligent storage arrays utilizing cognitive computing will come to the conclusion that 99.9 percent of stored data has no value and therefore should be deleted.  It would eliminate the need for me to build my case for archiving year after year. 🙂

Cognitive computing systems work by using machine learning algorithms; the two are inescapably linked. They continuously gather knowledge by mining the data fed into them for information. The systems progressively refine the methods they use to look for and process data until they become capable of anticipating new problems and modeling possible solutions.

Cognitive computing is a new field that is just beginning to emerge. It's about making computers more user friendly, with an interface that understands more of what the user wants. It takes signals about what the user is trying to do and provides an appropriate response. Siri, for example, can answer questions but also understands the context of the question. She can ascertain whether the user is in a car or at home, moving quickly and therefore driving, or moving more slowly while walking. This information contextualizes the potential range of responses, allowing for increased personalization.

What Is Machine Learning?

Machine Learning is a subset of the larger discipline of Artificial Intelligence, which involves the design and creation of systems that are able to learn based on the data they collect. A machine learning system learns by experience. Based on specific training, the system will be able to make generalizations from its exposure to a number of cases, and will then be able to perform actions after new or unforeseen events. Amazon already uses this technology as part of its recommendation engine. It's also commonly used by ad feed systems that serve ads based on web surfing history.

While machine learning is a tremendously powerful tool for extracting information from data, it's not a silver bullet for every problem. The questions must be framed and presented in a way that allows the learning algorithms to answer them. The data also needs to be set up in the appropriate way, which can add complexity. Sometimes the data needed to answer the questions may simply not be available. Once the results are available, they also need to be interpreted to be useful, and it's essential to understand the context. A sales algorithm can tell a salesman what's working best, but he still needs to know how to use that information to increase his profits.
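To make the idea of "learning from cases" concrete, here's a minimal sketch of a 1-nearest-neighbor classifier, written in Perl to match the scripts elsewhere on this blog. The features, labels, and numbers are all invented for illustration; the point is simply that the program labels a case it has never seen based purely on its exposure to prior cases.

#!/usr/bin/perl
# knn_sketch.pl - a toy 1-nearest-neighbor "learner" (hypothetical data)
use strict;
use warnings;

# Prior cases the system has been exposed to: two numeric features and a label
my @cases = (
    { features => [1.0, 1.2], label => 'spam' },
    { features => [0.9, 1.1], label => 'spam' },
    { features => [5.0, 4.8], label => 'ham'  },
    { features => [5.2, 5.1], label => 'ham'  },
);

# A new, unforeseen case: predict its label from the closest prior case
my @unseen = (4.7, 5.0);

my ($best_label, $best_dist);
for my $case (@cases) {
    # Squared Euclidean distance between the unseen case and a known case
    my $dist = 0;
    $dist += ($unseen[$_] - $case->{features}[$_]) ** 2 for 0 .. $#unseen;
    if (!defined $best_dist or $dist < $best_dist) {
        ($best_dist, $best_label) = ($dist, $case->{label});
    }
}

print "Predicted label: $best_label\n";   # a generalization from prior cases

Real systems use far richer models, but the principle is the same: more (and better) training cases yield better generalizations.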

What’s the difference?

Without cognition there cannot be good artificial intelligence, and without artificial intelligence cognition can never be expressed. Cognitive computing involves self-learning systems that use pattern recognition and natural language processing to mimic the way the human brain works. The goal of cognitive computing is to create automated systems that are capable of solving problems without requiring human assistance. Cognitive computing is used in A.I. applications, hence cognitive computing is itself effectively a subset of Artificial Intelligence.

If this seems like a slew of terms that all mean almost the same thing, you’d be right. Cognitive Computing and Machine Learning can both be considered subsets of Artificial Intelligence. What’s the difference between artificial intelligence and cognitive computing? Let’s use a medical example. In an artificial intelligence system, machine learning would tell the doctor which course of action to take based on its analysis. In cognitive computing, the system would provide information to help the doctor decide, quite possibly with a natural language response (like IBM’s Watson).

In general, cognitive computing systems include the following characteristics:

  • Machine Learning
  • Natural Language Processing
  • Adaptive algorithms
  • Highly developed pattern recognition
  • Neural Networking
  • Semantic understanding
  • Deep learning (Advanced Machine Learning)

How is Machine Learning currently visible in our everyday lives?

Machine Learning has fundamentally changed the way businesses relate to their customers. When you click "like" on a Facebook post, your feed is dynamically adjusted to contain more content like that in the future. When you buy a Sony PlayStation on Amazon and it recommends that you also buy an extra controller and a top selling game for the console, that's their recommendation engine at work. Both of those examples use machine learning technology, and both affect most people's everyday lives. Machine learning technology delivers educated recommendations to people to help them make decisions in a world of almost endless choices.
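Amazon's real recommendation engine is of course far more sophisticated than anything I could post here, but the co-occurrence idea at the heart of "customers who bought this also bought" can be sketched in a few lines of Perl. The order data below is invented for illustration:

#!/usr/bin/perl
# recommend_sketch.pl - toy "customers also bought" co-occurrence counts
use strict;
use warnings;

# Hypothetical purchase histories, one array ref per order
my @orders = (
    ['playstation', 'controller', 'game'],
    ['playstation', 'controller'],
    ['playstation', 'game'],
    ['controller',  'game'],
);

# Count how often each pair of items shows up in the same order
my %also_bought;
for my $order (@orders) {
    for my $x (@$order) {
        for my $y (@$order) {
            $also_bought{$x}{$y}++ unless $x eq $y;
        }
    }
}

# Recommend the items most often bought alongside a given one
my $item   = 'playstation';
my $counts = $also_bought{$item};
for my $rec (sort { $counts->{$b} <=> $counts->{$a} } keys %$counts) {
    print "Customers who bought $item also bought: $rec ($counts->{$rec} orders)\n";
}

The same counting idea, scaled up to millions of orders and combined with weighting and filtering, is what makes those educated recommendations feel so uncannily relevant.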

Practical business applications of Cognitive Computing and Machine Learning

Now that we have a pretty good idea of what this all means, how is this technology actually being used today in the business world? Artificial Intelligence has been around for decades, but has been slow to develop due to the storage and compute requirements being too expensive to allow for practical applications. In many fields, machine learning is finally moving from science labs to commercial and business applications. With cloud computing and robust virtualized storage solutions providing the infrastructure and necessary computational power, machine learning developments are offering new capabilities that can greatly enhance enterprise business processes.

The major approaches today include using neural networks, case-based learning, genetic algorithms, rule induction, and analytic learning. The current uses of the technology combine all of these analytic methods, or a hybrid of them, to help guarantee effective, repeatable, and reliable results. Machine learning is a reality today and is being used very effectively and efficiently. Despite what many business people might assume, it's no longer in its infancy. It's used quite effectively across a wide array of industry applications and is going to be part of the next evolution of enterprise intelligence business offerings.

There are many other areas where machine learning can play an important role. This is most notable in systems with so much complexity that algorithms are difficult to design by hand, when an application requires the software to adapt to an operational environment, or with applications that need to work with large and complex data sets. In those scenarios, machine learning methods play an increasing role in enterprise software, especially for applications that need in-depth data analysis and adaptability, like analytics, business intelligence, and big data.

Now that I’ve discussed some general business applications for the technology, I’ll dive in to how this technology is being used today, or is in development and will be in use in the very near future.

  1. Healthcare and Medicine. Computers will never completely replace doctors and nurses, but in many ways machine learning is transforming the healthcare industry. It's improving patient outcomes and in general changing the way doctors think about how they provide quality care. Machine learning is being implemented in health care in many ways: improving diagnostic capabilities, supporting medicinal research (medicines are being developed that are genetically tailored to a person's DNA), and providing predictive analytics tools that deliver accurate insights and predictions related to symptoms, diagnoses, procedures, and medications for individual patients or patient groups, and it's just beginning to scratch the surface of personalized care. Healthcare and personal fitness devices connected via the Internet of Things (IoT) can also be used to collect data on human and machine behavior and interaction. Improving quality of life and people's health is one of the most exciting use cases of machine learning technologies.
  2. Financial services. Machine Learning is being used for predicting credit card risk, managing an individual’s finances, flagging criminal activity like money laundering and fraud, as well as automating business processes like call centers and insurance claim processing with trained AI agents. Product recommendation systems for a financial advisor or broker must leverage current interests, trends, and market movements for long periods of time, and ML is well suited to that task.
  3. Automating business analysis, reporting, and work processes. Machine learning automation systems that use detailed statistical analysis to process, analyze, categorize, and report on their data exist today. Machine learning techniques can be used for data analysis and pattern discovery, and can play an important role in the development of data mining applications. Machine learning is enabling companies to increase growth and optimize processes, increase customer satisfaction, and improve employee engagement. As one specific example, adaptive analytics can be used to help stop customers from abandoning a website by analyzing and predicting the first signs they might log off and causing live chat assistance windows to appear. They are also good at upselling by showing customers the most relevant products based on their shopping behavior at that moment. A large portion of Amazon's sales are based on their adaptive analytics; you'll notice that you always see "Customers who purchased this item also viewed" when you view an item on their web site. Businesses are presently using machine learning to improve their operations in many other ways. Machine learning technology allows businesses to personalize customer service, for example with chatbots for customer relations. Customer loyalty and retention can be improved by mining customer actions and targeting their behavior. HR departments can improve their hiring processes by using ML to shortlist candidates. Security departments can use ML to assist with detecting fraud by building models based on historical transactions and social media. Logistics departments can improve their processes by allowing contextual analysis of their supply chain. The possibilities for the application of this technology across many typical business challenges are truly exciting.
  4. Playing Games. Machine learning systems have been taught to play games, and I'm not just talking about video games. They have taken on board games like Go and chess, IBM's Watson has won at Jeopardy!, and they appear in modern real-time strategy video games, all with great success. When Watson defeated Brad Rutter and Ken Jennings in the Jeopardy! challenge of February 2011, it showcased Watson's ability to learn, reason, and understand natural language with machine learning technology. In game development, machine learning has been used for gesture recognition in Kinect and camera-based interfaces, and it has also been used in some fighting-style games to analyze the moves of a human player and mimic them, such as the character 'Mokujin' in Tekken.
  5. Predicting the outcome of legal proceedings. A system developed by a team of British and American researchers was shown to correctly predict a court's decision with a high degree of accuracy. The study can be viewed here: https://peerj.com/articles/cs-93/. While computers are not likely to replace judges and lawyers, the technology could very effectively be used to assist the decision-making process.
  6. Validating and Customizing News content. Machine learning can be used to create individually personalized news, and screening and filtering out "fake news" has become a more recent investigative priority, especially given today's political landscape. Facebook's director of AI research Yann LeCun was quoted as saying that machine learning technology that could squash fake news "either exists or can be developed." A challenge aptly named the "Fake News Challenge" was developed for technology professionals; you can view their site at http://www.fakenewschallenge.org/ for more information. Whether or not it actually works remains to be seen, but the application could have far-reaching positive effects for democracy.
  7. Navigation of self-driving cars. Using sensors and onboard analytics, cars are learning to recognize obstacles and react to them appropriately using machine learning. Google's experimental self-driving cars currently rely on a wide range of radar, lidar and other sensors to spot pedestrians and other objects. Eliminating some or all of that equipment would make the cars cheaper and easier to design and speed up mass adoption of the technology. Google has been developing its own video-based pedestrian detection system for years using machine learning algorithms. Back in 2015, its system was capable of accurately identifying pedestrians within 0.25 seconds, with 0.07-second identification being the benchmark needed for such a system to work in real time. This is all good news for storage manufacturers. Typical luxury cars have up to around 200 GB of storage today, primarily for maps and other entertainment functionality. Self-driving cars will likely need terabytes of storage, and not just for the car to drive itself. Storage will be needed for intelligent assistants in the car, advanced voice and gesture recognition, caching software updates, and caching files to storage to reduce peak network bandwidth utilization.
  8. Retail Sales. Applications of ML are almost limitless when it comes to retail. Product pricing optimization, sales and customer service trending and forecasting, precise ad targeting with data mining, website content customization, and prospect segmentation are all great examples of how machine learning can boost sales and save money. The digital trail left by a customer's interactions with a business, both online and offline, can provide huge amounts of data to a retailer, and all of that data is where machine learning comes in. Machine learning can look at history to determine which factors are most important, and find the best way to predict what will occur based on a much larger set of variables. Systems must take into account market trends not only from the past year, but from as recently as an hour ago, in order to implement real-time personalization. Machine learning applications can discover which items are not selling and pull them from the shelves before a salesperson notices, and even keep overstock from showing up in the store at all with improved procurement processes. A good example of the machine learning personalized approach to customers can be found in the Jackets and Vests section of the North Face website. Click on "Shop with IBM Watson" and experience what is almost akin to a human sales associate helping you choose which jacket you need.
  9. Recoloring black and white images. Ted Turner's dream come true. 🙂 Using computers to recognize objects and learn what they should look like to humans, color can be returned to both black and white pictures and video footage. Google's DeepDream (https://research.googleblog.com/2015/07/deepdream-code-example-for-visualizing.html) is probably the best-known example; it has been trained by examining millions of images of just about everything, and can analyze images in black and white and then color them the way it thinks they should be colored. The "colorize" project is also taking up the challenge; you can view their progress at http://tinyclouds.org/colorize/ and download the code. A good online example is at Algorithmia, which allows you to upload and convert an image online: http://demos.algorithmia.com/colorize-photos/
  10. Enterprise Security. Security and data loss are major concerns for the modern enterprise. Some storage vendors are beginning to use artificial intelligence and machine learning to prevent data loss, increase availability and reduce downtime via smart data recovery and systematic backup strategies. Machine learning allows for smart security features to detect data and packet loss during transit and within data centers. Years ago it was common practice to spend a great deal of time reviewing security logs on a daily basis. You were expected to go through everything and manually determine the severity of any of the alerts or warnings as you combed through mountains of information. As time progresses it becomes more and more unrealistic for this process to remain manual. Machine learning technology is currently implemented and is very effective at filtering out what deviates from normal behavior, be it with live network traffic or mountains of system log files. While humans are also very good at finding patterns and noticing odd things, computers are really good at doing that repetitive work at a much larger scale, complementing what an analyst can do (see the short sketch after this list for the basic idea). Interested in looking at some real-world examples of machine learning as it relates to security? There are many out there. Clearcut is one example of a tool that uses machine learning to help you focus on log entries that really need manual review. David Bianco created a relatively simple Python script that can learn to find malicious activity in HTTP proxy logs. You can download David's script here: https://github.com/DavidJBianco/Clearcut. I also recommend taking a look at the Click Security project, which includes many code samples (http://clicksecurity.github.io/data_hacking/), as well as PatternEx, a SecOps tool that predicts cyber attacks (https://www.patternex.com/).
  11. Musical Instruments. Machine learning can also be used in more unexpected ways, even in creative outlets like making music. In the world of electronic music, new synthesizers and hardware are created and developed often, and the rise of machine learning is altering the landscape. Machine learning will allow instruments the potential to be more expressive, complex and intuitive in ways previously experienced only through traditional acoustic instruments. A good example of a new instrument using machine learning is the Mogees instrument, a device with a contact microphone that picks up sound from everyday objects and attaches to your iPhone. Machine learning could make it possible to use a drum machine that adapts to your playing style, learning as much about the player as the player learns about the instrument. Simply awe-inspiring.
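As promised in the security item above, here's a minimal sketch of the frequency-based idea behind log-triage tools: treat messages that match a common template as normal, and flag the rare ones for manual review. To be clear, this is an illustration of the general approach, not how Clearcut or PatternEx actually work internally, and the log lines are invented:

#!/usr/bin/perl
# log_outliers.pl - naive frequency-based log triage (illustration only)
use strict;
use warnings;

my @log = (
    'sshd: accepted password for admin from 10.0.0.5',
    'sshd: accepted password for admin from 10.0.0.5',
    'sshd: accepted password for admin from 10.0.0.9',
    'sshd: FAILED password for root from 203.0.113.44',
);

# "Learn" what normal looks like: count each message template,
# with the variable fields (numbers) masked out.
my %freq;
for my $line (@log) {
    (my $template = $line) =~ s/\d+/N/g;
    $freq{$template}++;
}

# Flag the rarest templates for manual review
for my $line (@log) {
    (my $template = $line) =~ s/\d+/N/g;
    print "REVIEW: $line\n" if $freq{$template} == 1;
}

A production system would model far more than message frequency, but even this crude filter shows how a machine can shrink a mountain of log data down to the handful of entries worth an analyst's time.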

What does this mean for the storage industry?

As you might expect, this is all very good news for the storage industry and very well may lead to more and more disruptive changes. Machine learning has an almost insatiable appetite for data storage. It will consume huge quantities of capacity while at the same time require very high levels of throughput. As adoption of Cognitive Computing, Artificial Intelligence, and machine learning grows, it will attract a growing number of startups eager to solve the many issues that are bound to arise.

The rise of machine learning is set to alter the storage industry in very much the same way that PCs helped reshape the business world in the 1980s. Just as PCs have advanced from personal productivity applications like Lotus 1-2-3 to large-scale Oracle databases, machine learning is poised to evolve from consumer functions like Apple's Siri to full-scale data-driven programs that will drive global enterprises. So, in what specific ways is this technology set to alter and disrupt the storage industry? I'll review my thoughts on that below.

  1. Improvements in Software-Defined Storage. I recently dove into software-defined storage in a blog post (https://thesanguy.com/2017/06/15/defining-software-defined-storage-benefits-strategy-use-cases-and-products/). As I described in that post, there are many use cases and a wide variety of software-defined storage products in the market right now. Artificial intelligence and machine learning will spark faster adoption of software-defined storage, especially as products are developed that use the technology to allow storage to be self-configurable. Once storage is all software-defined, integrated algorithms can be far-reaching enough to process and solve complicated storage management problems, because of the huge amount of data they can now access. This is a necessary step toward building the monitoring, tuning, and healing service abilities needed for self-driving software-defined storage.
  2. Overall Costs will be reduced. Enterprises are moving towards cloud storage and fewer dedicated storage arrays. Dynamic software-defined storage that integrates machine learning could help organizations more efficiently utilize the capacity that they already own.
  3. Hybrid Storage Clouds. Public vs. private clouds has been a hot topic in the storage industry, and with the rise of machine learning and software-defined storage it's becoming more and more of a moot point. Well-designed software-defined architectures should be able to transition data seamlessly from one type of cloud to another, and machine learning will be used to implement that concept without human intervention. Data will be analyzed and logic engines will automate data movement. The hybrid cloud is very likely to flourish as machine learning technologies are adopted into this space.
  4. Flash Everywhere. Yes, the concept of "flash first" has been promoted for years now, and machine learning only reinforces it. The vast amount of data that machine learning needs to process will further increase the demand for throughput and bandwidth, and flash storage vendors will be lining up to fill that need.
  5. Parallel File Systems. Storage systems will have to deliver performance and throughput at scale in order to support machine learning technologies. Parallel file systems can effectively reduce the problems of massive data storage and I/O bottlenecks. With their focus on high-performance access to large data sets, parallel file systems combined with flash could be considered an entry point to full-scale machine learning systems.
  6. Automation. Software-defined storage has had a large influence on the rise of machine learning in storage environments. Adding a heterogeneous software layer abstracted from the hardware allows the software to efficiently monitor many more tasks. The additional automation allows administrators like myself much more time for more strategic work.
  7. Neural Storage. Neural storage ("deep learning") is designed to recognize and respond to problems and opportunities without any human intervention. It will drive the need for massive amounts of storage as it is utilized in modern businesses. It uses artificial neural networks, which are simplified computer simulations of how biological neurons behave, to extract rules and patterns from sets of data. Unsurprisingly (based on its name), the concept is inspired by the way biological nervous systems process information. In general, think of neural storage as many layers of processing on mountain-sized mounds of data. Data is fed through neural networks: logical constructions that ask a series of binary true/false questions of every bit of data that passes through them (or extract a numerical value from it) and classify it according to the answers that were tallied up. Deep learning work is focused on developing these networks, which is why they became what are known as Deep Neural Networks (logic networks of the complexity needed to deal with classifying enormous datasets, think Google-scale data). Using Google Images as an example, with datasets as massive and comprehensive as these and logical networks sophisticated enough to handle their classification, it becomes relatively trivial to take an image and state with a high probability of accuracy what it represents to humans. A toy example of the "layers of true/false questions" idea follows below.
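Here's that promised toy example: a Perl forward pass through a two-layer network with three hand-picked "neurons" that together compute XOR. A real deep neural network has millions of weights learned from data rather than weights chosen by hand, but the mechanics are the same: each neuron asks a weighted true/false question of the layer below, and the answers are tallied into a classification.

#!/usr/bin/perl
# tiny_net.pl - a two-layer neural network forward pass with hand-picked
# weights (real networks learn these weights from training data)
use strict;
use warnings;

sub step { return $_[0] > 0 ? 1 : 0 }

# Each "neuron" asks a weighted true/false question of its inputs
sub forward {
    my ($x1, $x2) = @_;
    my $h1 = step($x1 + $x2 - 0.5);   # hidden layer: roughly "x1 OR x2"
    my $h2 = step($x1 + $x2 - 1.5);   # hidden layer: roughly "x1 AND x2"
    return step($h1 - $h2 - 0.5);     # output layer tallies the answers: XOR
}

for my $pair ([0,0], [0,1], [1,0], [1,1]) {
    printf "%d XOR %d = %d\n", @$pair, forward(@$pair);
}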

How does Machine Learning work?

At its core, Machine learning works by recognizing patterns (such as facial expressions or spoken words), extracting insight from those patterns, discovering anomalies in those patterns, and then making evaluations and predictions based on those discoveries.

The principle can be summed up with the following formula:

Machine Learning = Model Representation + Parameter Evaluation + Learning & Optimization

Model Representation: The system that makes predictions or identifications. It includes the use of an object represented in a formal language that a computer can handle and interpret.

Parameter Evaluation: A function needed to distinguish or evaluate the good and bad objects; the factors used by the model to form its decisions.

Learning & Optimization: The method used to search among these classifiers within the language to find the highest scoring ones. This is the learning system that adjust the parameters and looks at predictions vs. actual outcome.

How do we apply machine learning to a problem? First and foremost, a pattern must exist in the input data that would allow a conclusion to be drawn; the machine learning algorithm must have a pattern from which to deduce information. Next, there must be a sufficient amount of data. If there isn't enough data to analyze, it will compromise the validity of the end result. Finally, machine learning is used to derive meaning from the data and perform structured learning to arrive at a mathematical approximation describing the behavior of the problem. All of these conditions must be met for machine learning to be successful; if they aren't, applying it will be a waste of time.

Summary

Machines may not have reached the point where they can make full decisions without humans, but they have certainly progressed to the point where they can make educated, accurate recommendations to us so that we have an easier time making decisions. Current machine learning systems have delivered tremendous benefits by automating tabulation and harnessing computational processing and programming to improve both enterprise productivity and personal productivity.

Cognitive systems will learn and interact to provide expert assistance to scientists, engineers, lawyers, and other professionals in a fraction of the time it now takes. While they will likely never replace human thinking, cognitive systems will extend our cognition and free us to think more creatively and effectively, and be better problem solvers.


Scripting a VNX/Celerra to Isilon Data Migration with EMCOPY and Perl


Below is a collection of Perl scripts that make data migration from VNX/Celerra file systems to an Isilon system much easier.  I've already outlined the process of using isi_vol_copy_vnx in a prior post; however, EMCOPY may be more appropriate for a specific use case, or simply more familiar to administrators for support and management of the tool.  Note that while I have tested these scripts in my environment, they may need some modification for your use.  I recommend running them in a test environment prior to using them in production.

EMCOPY can be downloaded directly from DellEMC with the link below.  You will need to be a registered user in order to download it.

https://download.emc.com/downloads/DL14101_EMCOPY_File_migration_tool_4.17.exe

What is EMCOPY?

For those that haven’t used it before, EMCOPY is an application that allows you to copy a file, directory, and subdirectories between NTFS partitions while maintaining security information, an improvement over the similar robocopy tool that many veteran system administrators are familiar with. It allows you to back up the file and directory security ACLs, owner information, and audit information from a source directory to a destination directory.

Notes about using EMCOPY:

1) In my testing, EMCopy has shown up to a 25% performance improvement when copying CIFS data compared to Robocopy while using the same number of threads. I recommend using EMCopy over Robocopy as it has other feature improvements as well, for instance sidmapfile, which allows migrating local user data to Active Directory users. It’s available in version 4.17 or later.  Robocopy is also not an EMC supported tool, while EMCOPY is.

2) Unlike isi_vol_copy_vnx, EMCOPY is a Windows application and must be run from a Windows host.  I highly recommend a dedicated server for any migration tasks.  The isi_vol_copy_vnx utility runs directly on the Isilon OneFS CLI, which eliminates any intermediary copy hosts, theoretically providing a much faster solution.

3) There are multiple methods to compare data sizes between the source and destination. I would recommend maintaining a log of each EMCopy session as that log indicates how much data was copied and if there were any errors.

4) If you are migrating over a WAN connection, I recommend first restoring from tape and then using an incremental data sync with EMCOPY.

Getting Started

I've divided this post up into a four-step process.  Each step includes the relevant script and a description of the process.

  • Export File System information (export_fs.pl  Script)

Export file system information from the Celerra & generate the Isilon commands to re-create them.

  • Export SMB information (export_smb.pl Script)

Export SMB share information from the Celerra & generate the Isilon commands to re-create them.

  • Export NFS information (export_nfs.pl Script)

Export NFS information from the Celerra & generate the Isilon commands to re-create them.

  • Create the EMCOPY migration script (EMCOPY_create.pl Script)

Perform the data migration with EMCOPY using the output from this script.

Exporting information from the Celerra to run on the Isilon

These Perl scripts are designed to be run directly on the Control Station and will subsequently create shell scripts that will run on the Isilon to assist with the migration.  You will need to manually copy the output files from the VNX/Celerra to the Isilon. The first three steps I’ve outlined do not move the data or permissions, they simply run a nas_fs query on the Celerra to generate the Isilon script files that actually make the directories, create quotas, and create the NFS and SMB shares. They are “scripts that generate scripts”. 🙂

Before you run the scripts, make sure you edit them to correctly specify the appropriate Data Mover.  Once complete, you'll end up with three .sh files created for you to move to your Isilon cluster.  They should be run in the same order as they were created.

Note that EMC occasionally changes the syntax of certain commands when they update OneFS.  Below is a sample of the Isilon-specific commands that are generated by the first three scripts.  I'd recommend verifying that the syntax is still correct with your version of OneFS, and then modifying the scripts if necessary with the new syntax.  I just ran a quick test with OneFS 8.0.0.2, and the base commands and switches appear to be compatible.

isi quota create --directory --path="/ifs/data1" --enforcement --hard-threshold="1032575M" --container=1
isi smb share create --name="Data01" --path="/ifs/Data01/data"
isi nfs exports create --path="/Data01/data" --roclient="Data" --rwclient="Data" --rootclient="Data"


Step 1 – Export File system information

This script will generate a list of the file system names from the Celerra and place the appropriate Isilon commands that create the directories and quotas into a file named "create_filesystems_xx.sh".

#!/usr/bin/perl

# Export_fs.pl – Export File system information
# Export file system information from the Celerra & generate the Isilon commands to re-create them.

use strict;
my $nas_fs="nas_fs -query:inuse=y:type=uxfs:isroot=false -fields:ServersNumeric,Id,Name,SizeValues -format:'%s,%s,%s,%sQQQQQQ'";
my @data;

open (OUTPUT, ">> create_filesystems_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nas_fs |") || die "cannot open $nas_fs: $!\n\n";

while (<CMD>)
{
   chomp;
   push @data, split("QQQQQQ", $_);
}

close(CMD);
foreach (@data)
{
   my ($dm, $id, $dir,$size,$free,$used_per, $inodes) = split(",", $_);
   print OUTPUT "mkdir /ifs/$dir\n";
   print OUTPUT "chmod 755 /ifs/$dir\n";
   print OUTPUT "isi quota create --directory --path=\"/ifs/$dir\" --enforcement --hard-threshold=\"${size}M\" --container=1\n";
}

The output of the script looks like this (this is an excerpt from the create_filesystems_xx.sh file):

mkdir /ifs/data1
chmod 755 /ifs/data1
isi quota create --directory --path="/ifs/data1" --enforcement --hard-threshold="1032575M" --container=1
mkdir /ifs/data2
chmod 755 /ifs/data2
isi quota create --directory --path="/ifs/data2" --enforcement --hard-threshold="20104M" --container=1
mkdir /ifs/data3
chmod 755 /ifs/data3
isi quota create --directory --path="/ifs/data3" --enforcement --hard-threshold="100774M" --container=1

The output script can now be copied to and run from the Isilon.

Step 2 – Export SMB Information

This script will generate a list of the SMB share names from the Celerra and place the appropriate Isilon commands into a file named "create_smb_exports_xx.sh".

#!/usr/bin/perl

# Export_smb.pl – Export SMB/CIFS information
# Export SMB share information from the Celerra & generate the Isilon commands to re-create them.

use strict;

my $datamover = "server_8";
my $prot = "cifs";
my $smb_cli = "server_export $datamover -list -P $prot -v |grep share";

open (OUTPUT, ">> create_smb_exports_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$smb_cli |") || die "cannot open $smb_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $path = $vars[2];
   my $name = $vars[1];

   $path =~ s/^"/\"\/ifs/;
   print  OUTPUT "isi smb share create --name=$name --path=$path\n";
}

close(CMD);

The output of the script looks like this (this is an excerpt from the create_smb_exports_xx.sh file):

isi smb share create --name="Data01" --path="/ifs/Data01/data"
isi smb share create --name="Data02" --path="/ifs/Data02/data"
isi smb share create --name="Data03" --path="/ifs/Data03/data"
isi smb share create --name="Data04" --path="/ifs/Data04/data"
isi smb share create --name="Data05" --path="/ifs/Data05/data"

The output script can now be copied to and run from the Isilon.

Step 3 – Export NFS Information

This script will generate a list of the NFS export names from the Celerra and place the appropriate Isilon commands into a file named “create_nfs_exports_xx.sh”.

#!/usr/bin/perl

# Export_nfs.pl – Export NFS information
# Export NFS information from the Celerra & generate the Isilon commands to re-create them.

use strict;

my $datamover = "server_8";
my $prot = "nfs";
my $nfs_cli = "server_export $datamover -list -P $prot -v |grep export";

open (OUTPUT, ">> create_nfs_exports_$$.sh") || die "cannot open output: $!\n\n";
open (CMD, "$nfs_cli |") || die "cant open $nfs_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $test = @vars;
   my $i=2;
   my ($ro, $rw, $root, $access, $name);
   my $path=$vars[1];

   for ($i; $i < $test; $i++)
   {
      my ($type, $value) = split("=", $vars[$i]);

      if ($type eq "ro") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $ro .= " --roclient=\"$_\""; }
      }
      if ($type eq "rw") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $rw .= " --rwclient=\"$_\""; }
      }

      if ($type eq "root") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $root .= " --rootclient=\"$_\""; }
      }

      if ($type eq "access") {
         my @tmp = split(":", $value);
         foreach(@tmp) { $ro .= " --roclient=\"$_\""; }
      }

      if ($type eq "name") { $name=$value; }
   }
   print OUTPUT "isi nfs exports create --path=$path $ro $rw $root\n";
}

close(CMD);

The output of the script looks like this (this is an excerpt from the create_nfs_exports_xx.sh file):

isi nfs exports create --path="/Data01/data" --roclient="Data" --roclient="BACKUP" --rwclient="Data" --rwclient="BACKUP" --rootclient="Data" --rootclient="BACKUP"
isi nfs exports create --path="/Data02/data" --roclient="Data" --roclient="BACKUP" --rwclient="Data" --rwclient="BACKUP" --rootclient="Data" --rootclient="BACKUP"
isi nfs exports create --path="/Data03/data" --roclient="Backup" --roclient="Data" --rwclient="Backup" --rwclient="Data" --rootclient="Backup" --rootclient="Data"
isi nfs exports create --path="/Data04/data" --roclient="Backup" --roclient="ProdGroup" --rwclient="Backup" --rwclient="ProdGroup" --rootclient="Backup" --rootclient="ProdGroup"
isi nfs exports create --path="/" --roclient="127.0.0.1" --roclient="127.0.0.1" --roclient="127.0.0.1" -rootclient="127.0.0.1"

The output script can now be copied to and run from the Isilon.

Step 4 – Generate the EMCOPY commands

Now that the scripts have been generated and run on the Isilon, the next step is the actual data migration using EMCOPY.  This script will generate the commands for a migration script, which should be run from a Windows server that has access to both the source and destination locations. It should be run after the previous three scripts have successfully completed.

This script will output the commands directly to the screen; they can then be cut and pasted into a Windows batch script on your migration server.

#!/usr/bin/perl

# EMCOPY_create.pl – Create the EMCOPY migration script
# Perform the data migration with EMCOPY using the output from this script.

use strict;

my $datamover = "server_4";
my $source = "\\\\celerra_path\\";
my $dest = "\\\\isilon_path\\";
my $prot = "cifs";
my $smb_cli = "server_export $datamover -list -P $prot -v |grep share";

open (CMD, "$smb_cli |") || die "cannot open $smb_cli: $!\n\n";

while (<CMD>)
{
   chomp;
   my (@vars) = split(" ", $_);
   my $path = $vars[2];
   my $name = $vars[1];

   $name =~ s/\"//g;
   $path =~ s/^/\/ifs/;

   my $log = "c:\\" . $name . "";
   $log =~ s/ //;
   my $src = $source . $name;
   my $dst = $dest . $name;

   print "emcopy \"$src\" \"$  dst\" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:$log\n";
}

close(CMD);

The output of the script looks like this (this is an excerpt from the screen output):

emcopy "\\celerra_path\Data01" "\\isilon_path\billing_tmip_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_tmip_01
emcopy "\\celerra_path\Data02" "\\isilon_path\billing_trxs_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_trxs_01
emcopy "\\celerra_path\Data03" "\\isilon_path\billing_vru_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_vru_01
emcopy "\\celerra_path\Data04" "\\isilon_path\billing_rpps_01" /o /s /d /q /secfix /purge /stream /c /r:1 /w:1 /log:c:\billing_rpps_01

That’s it.  Good luck with your data migration, and I hope this has been of some assistance.  Special thanks to Mark May and his virtualstoragezone blog, he published the original versions of these scripts here.

Open Source Storage Solutions

Storage solutions can generally be grouped into four categories: SoHo NAS systems, Cloud-based/object solutions, Enterprise NAS and SAN solutions, and Microsoft Storage Server solutions. Enterprise NAS and SAN solutions are generally closed systems offered by traditional vendors like EMC and NetApp with a very large price tag, so many businesses are looking at Open Source solutions to meet their needs. This is a collection of links and brief descriptions of Open Source storage solutions currently available. Open Source of course means it's free to use and modify; however, some projects do have commercially supported versions as well, for enterprise customers who require them.

Why would an enterprise business consider an Open Source storage solution? The most obvious reason is that it's free, and any developer can customize it to suit the needs of the business. With the right people on board, innovation can be rapid. Unfortunately, as is the case with most open source software, it can be needlessly complex and difficult to use, require expert or highly trained staff, have compatibility issues, and most don't offer the support and maintenance that enterprise customers require. There's no such thing as a free lunch, as they say, and using Open Source generally requires compromising on support and maintenance. I'd see some of these solutions as perfect for an enterprise development or test environment, and as an easy way for a larger company to allow their staff to get their feet wet in a new technology to see how it may be applied as a potential future solution. As I mentioned, tested and supported versions of some open source storage software are available, which can ease the concerns regarding deployment, maintenance and support.

I have the solutions loosely organized into Open Source NAS and SAN Software, File Systems, RAID, Backup and Synchronization, Cloud Storage, Data Destruction, Distributed Storage/Big Data Tools, Document Management, and Encryption tools.

Open Source NAS and SAN Software Solutions

Backblaze

Backblaze is an object data storage provider. Backblaze stores data on its customized, open source hardware platform called Storage Pods, and its cloud-based Backblaze Vault file system. It is compatible with Windows and Apple OSes. While they are primarily an online backup service, they opened up their Storage Pod design starting in 2009; it uses commodity hardware that anyone can build, in self-contained 4U data storage servers. It's interesting stuff and worth a look.

Enterprise Storage OS (ESOS)

Enterprise Storage OS is a Linux distribution based on the SCST project with the purpose of providing SCSI targets via a compatible SAN (Fibre Channel, InfiniBand, iSCSI, FCoE). ESOS can turn a server with the appropriate hardware into a disk array that sits on your enterprise Storage Area Network (SAN) and provides sharable block-level storage volumes.

OpenIO 

OpenIO is an open source object storage startup founded in 2015 by CEO Laurent Denel and six co-founders. The product is an object storage system for applications that scales from terabytes to exabytes. OpenIO specializes in software-defined storage and scalability challenges, with experience in designing and running cloud platforms. It offers a general-purpose object storage and data processing solution adopted by large companies for massive production workloads.

Open vStorage

Open vStorage is an open-source, scale-out, reliable, high-performance, software-based storage platform which offers a block & file interface on top of a pool of drives. It is a virtual appliance (called the "Virtual Storage Router") that is installed on a host or cluster of hosts on which Virtual Machines are running. It adds value and flexibility in a hyperconverged / OpenStack provider deployment where you don't necessarily want to be tied to a solution like VMware VSAN. Being hypervisor agnostic is a key advantage of Open vStorage.

OpenATTIC

OpenATTIC is an Open Source Ceph and storage management solution for Linux, with a strong focus on storage management in a datacenter environment. It allows for easy management of storage resources, features a modern web interface, and supports NFS, CIFS and iSCSI. It supports a wide range of file systems including Btrfs and ZFS, as well as automatic data replication using DRBD (the distributed replicated block device) and automatic monitoring of shares and volumes using a built-in Nagios/Icinga instance. openATTIC 2 will support managing the Ceph distributed object store and file system.

OpenStack

OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.

The OpenStack Object Storage (swift) service provides software that stores and retrieves data over HTTP. Objects (blobs of data) are stored in an organizational hierarchy that offers anonymous read-only access, ACL defined access, or even temporary access. Object Storage supports multiple token-based authentication mechanisms implemented via middleware.

CryptoNAS

CryptoNAS (formerly CryptoBox) is one NAS project that makes encrypting your storage quick and easy. It is a multilingual, Debian-based Linux live CD with a web-based front end that can be installed onto a hard disk or USB stick. CryptoNAS offers various choices of encryption algorithms (the default is AES) and encrypts disk partitions using LUKS (Linux Unified Key Setup), which means that any Linux operating system can also access them without using CryptoNAS software.

Ceph

Ceph is a distributed object store and file system designed to provide high performance, reliability and scalability. It's built on the Reliable Autonomic Distributed Object Store (RADOS) and allows enterprises to build their own economical storage devices using commodity hardware. It has been maintained by Red Hat since their acquisition of Inktank in April 2014. It's capable of block, object, and file storage.  It is scale-out, meaning multiple Ceph storage nodes present a single storage system that easily handles many petabytes, and performance and capacity increase simultaneously. Ceph has many basic enterprise storage features including replication (or erasure coding), snapshots, thin provisioning, auto-tiering and self-healing capabilities.

FreeNAS

The FreeNAS website touts itself as “the most potent and rock-solid open source NAS software,” and it counts the United Nations, The Salvation Army, The University of Florida, the Department of Homeland Security, Dr. Phil, Reuters, Michigan State University and Disney among its users. You can use it to turn standard hardware into a BSD-based NAS device, or you can purchase supported, pre-configured TrueNAS appliances based on the same software.

RockStor 

RockStor is a free and open source NAS (Network Attached Storage) solution. Its Personal Cloud Server is a powerful local alternative to public cloud storage that mitigates the cost and risks of public cloud storage. This NAS and cloud storage platform is suitable for small to medium businesses and home users who don't have much IT experience, but who may need to scale to terabytes of data storage.  If you are more interested in Linux and Btrfs, it's a great alternative to FreeNAS. The RockStor NAS and cloud storage platform can be managed within a LAN or over the Web using a simple and intuitive UI, and with the inclusion of add-ons (fittingly named 'Rockons'), you can extend the feature set of your RockStor to include new apps, servers, and services.

Gluster

Red Hat-owned Gluster is a distributed scale-out network attached storage file system that can handle really big data—up to 72 brontobytes.  It has found applications including cloud computing, streaming media services and content delivery networks. It promises high availability and performance, an elastic hash algorithm, an elastic volume manager and more. GlusterFS aggregates various storage servers over Ethernet or InfiniBand RDMA interconnect into one large parallel network file system.

72 Brontobytes? I admit that I hadn’t seen that term used yet in any major storage vendor’s marketing materials. How big is that? Really, really big.

1 Bit = Binary Digit
8 Bits = 1 Byte
1,000 Bytes = 1 Kilobyte
1,000 Kilobytes = 1 Megabyte
1,000 Megabytes = 1 Gigabyte
1,000 Gigabytes = 1 Terabyte
1,000 Terabytes = 1 Petabyte
1,000 Petabytes = 1 Exabyte
1,000 Exabytes = 1 Zettabyte
1,000 Zettabytes = 1 Yottabyte
1,000 Yottabytes = 1 Brontobyte
1,000 Brontobytes = 1 Geopbyte

NAS4Free

Like FreeNAS, NAS4Free allows you to create your own BSD-based storage solution from commodity hardware. It promises a low-cost, powerful network storage appliance that users can customize to their own needs.

If FreeNAS and NAS4Free sound suspiciously similar, it's because they share a common history. Both started from the same original FreeNAS code, which was created in 2005. In 2009, the FreeNAS team pursued a more extensible plugin architecture using OpenZFS, and a project lead who disagreed with that direction departed to continue work on the original code base, which became NAS4Free. NAS4Free dispenses with the fancy stuff and sticks with a more focused approach of "do one thing and do it well". You don't get bittorrent clients or cloud servers and you can't make a virtual server with it, but many feel that NAS4Free has a much cleaner, more usable interface.

OpenFiler

Openfiler is a storage management operating system based on rPath Linux. It is a full-fledged NAS/SAN that can be implemented as a virtual appliance for VMware and Xen hypervisors. It offers storage administrators a set of powerful tools for managing complex storage environments, and supports software and hardware RAID, monitoring and alerting facilities, volume snapshots and recovery features. Configuring Openfiler can be complicated, but there are many online resources available that cover the most typical installations. I've seen mixed reviews about the product online; it's worth a bit of research before you consider an implementation.

OpenSMT

OpenSMT is an open source storage management toolkit based on OpenSolaris. Like Openfiler, OpenSMT allows users to use commodity hardware for a dedicated storage device with both NAS and SAN features. It uses the ZFS filesystem and includes a well-designed Web GUI.

Open Media Vault

This NAS solution is based on Debian Linux and offers plug-ins to extend its capabilities. It boasts really easy-to-use storage management with a web-based interface, fast setup, multi-language support, volume management, monitoring, UPS support, and statistics reporting. Plugins allow it to be extended with LDAP support, bittorrent, and iSCSI. It is primarily designed to be used in small offices or home offices, but is not limited to those scenarios.

Turnkey Linux

The Turnkey Linux Virtual Appliance Library is a free open source project which has developed a range of Debian based pre-packaged server software appliances (a.k.a. virtual appliances). Turnkey appliances can be deployed as a virtual machine (a range of hypervisors are supported), in cloud computing infrastructures (including AWS and others) or installed in physical computers.

Turnkey offers more than 100 different software appliances based on open source software. Among them is a file server that offers simple network attached storage, hence its inclusion in this list.

Turnkey file server is an easy to use file server that combines Windows-compatible network file sharing with a web based file manager. TurnKey File Server includes support for SMB, SFTP, NFS, WebDAV and rsync file transfer protocols. The server is configured to allow server users to manage files in private or public storage. It is based on Samba and SambaDAV.

oVirt

oVirt is a free, open-source virtualization management platform. It was founded by Red Hat as a community project on which Red Hat Enterprise Virtualization is based. It allows centralized management of virtual machines, compute, storage and networking resources, from an easy-to-use web-based front end with platform-independent access. With oVirt, IT can manage virtual machines, virtualized networks and virtualized storage via an intuitive Web interface. It's based on the KVM hypervisor.

Kinetic Open Storage

Backed by companies like EMC, Seagate, Toshiba, Cisco, NetApp, Red Hat, Western Digital, Dell and others, Kinetic is a Linux Foundation project dedicated to establishing standards for a new kind of object storage architecture. It’s designed to meet the need for scale-out storage for unstructured data. Kinetic is fundamentally a way for storage applications to communicate directly with storage devices over Ethernet. With Kinetic, storage use cases that are targeted consist largely of unstructured data like NoSQL, Hadoop and other distributed file systems, and object stores in the cloud like Amazon S3, OpenStack Swift and Basho’s Riak.

Storj DriveShare and MetaDisk

Storj (pronounced “Storage”) is a new type of cloud storage built on blockchain and peer-to-peer technology. Storj offers decentralized, end-to-end encrypted cloud storage. The DriveShare app allows users to rent out their unused hard drive space for use by the service, and the MetaDisk Web app allows users to save their files to the service securely.

The core protocol allows for peer to peer negotiation and verification of storage contracts. Providers of storage are called “farmers” and those using the storage, “renters”. Renters periodically audit whether the farmers are still keeping their files safe and, in a clever twist of similar architectures, immediately pay out a small amount of cryptocurrency for each successful audit. Conversely, farmers can decide to stop storing a file if its owner does not audit and pay their services on time. Files are cut up into pieces called “shards” and stored 3 times redundantly by default. The network will automatically determine a new farmer and move data if copies become unavailable. In the core protocol, contracts are negotiated through a completely decentralized key-value store (Kademlia). The system puts measures in place that prevent farmers and renters from cheating on each other, e.g. through manipulation of the auditing process. Other measures are taken to prevent attacks on the protocol itself.

Storj, like other similar services, offers several advantages over more traditional cloud storage solutions: since data is encrypted and cut into “shards” at source, there is almost no conceivable way for unauthorized third parties to access that data. Data storage is naturally distributed and this, in turn, increases availability and download speed thanks to the use of multiple parallel connections.
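The audit mechanism is the clever part, and the challenge-response idea behind it can be sketched in a few lines of Perl. To be clear, this is a simplification for illustration only: Storj's actual protocol uses Merkle proofs over a decentralized contract store, and everything below is hypothetical.

#!/usr/bin/perl
# audit_sketch.pl - simplified challenge-response storage audit
# (illustration only; not Storj's actual protocol)
use strict;
use warnings;
use Digest::SHA qw(sha256_hex);

my $shard = "contents of an encrypted file shard";

# The renter precomputes challenge answers before handing the shard
# to a farmer, keeping only the nonces and expected hashes.
my @nonces  = map { sprintf "nonce-%04d", $_ } 1 .. 3;
my %answers = map { $_ => sha256_hex($_ . $shard) } @nonces;

# Later, a farmer can only answer a challenge correctly if it still
# holds the complete, unmodified shard.
sub farmer_response {
    my ($nonce, $stored_shard) = @_;
    return sha256_hex($nonce . $stored_shard);
}

for my $nonce (@nonces) {
    my $resp = farmer_response($nonce, $shard);
    print "$nonce: ", $resp eq $answers{$nonce}
        ? "audit passed, farmer gets paid\n"
        : "audit failed, find a new farmer\n";
}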

Open Source File Systems

Btrfs

Btrfs is a newer Linux filesystem being developed by Facebook, Fujitsu, Intel, the Linux Foundation, Novell, Oracle, Red Hat and some other organizations. It emphasizes fault tolerance and easy administration, and it supports files as large as 16 EiB.

It has been included in the Linux 3.10 kernel as a stable filesystem since July 2014. Because of the fast development speed, btrfs noticeably improves with every new kernel version, so it’s always recommended to use the most recent, stable kernel version you can. Rockstor always runs a very recent kernel for that reason.

One of the big draws of Btrfs is its Copy on Write (CoW) design. When data is shared, for example between the live filesystem and a snapshot, Btrfs does not duplicate the underlying blocks until one of the copies is modified. This makes snapshots nearly free and allows files to be restored from them easily. Btrfs also has its own native RAID support built in, appropriately named Btrfs-RAID. A nice benefit of the Btrfs RAID implementation is that a RAID6 volume does not need an additional re-sync upon creation of the RAID set, greatly reducing the time requirement.
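
As a quick illustration of how cheap CoW snapshots are in practice, here's a minimal Python sketch that shells out to the btrfs command-line tool. The paths are illustrative, and it assumes a mounted Btrfs filesystem with a subvolume to snapshot:

```python
import subprocess

def btrfs_snapshot(subvolume, dest, read_only=True):
    """Create a CoW snapshot of a Btrfs subvolume. The snapshot is nearly
    instant and takes almost no space until either copy diverges,
    because unchanged blocks are shared rather than duplicated."""
    cmd = ["btrfs", "subvolume", "snapshot"]
    if read_only:
        cmd.append("-r")
    cmd += [subvolume, dest]
    subprocess.run(cmd, check=True)

# e.g. btrfs_snapshot("/mnt/data", "/mnt/.snapshots/data-2017-03-01")
```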

Ext4

This is the latest version of one of the most popular filesystems for Linux. One of its key benefits is the ability to handle very large amounts of data: 16 TB maximum per file and 1 EB (exabyte, or roughly 1 million terabytes) maximum per filesystem. It is the evolution of the most used Linux filesystem, Ext3. In many ways, Ext4 is a deeper improvement over Ext3 than Ext3 was over Ext2. Ext3 was mostly about adding journaling to Ext2, but Ext4 modifies important data structures of the filesystem, such as those used to store file data.

GlusterFS

Owned by Red Hat, GlusterFS is a scale-out distributed file system designed to handle petabytes worth of data. Features include high availability, fast performance, a global namespace, an elastic hash algorithm and an elastic volume manager.

GlusterFS combines the unused storage space on multiple servers to create a single, large, virtual drive that you can mount like a legacy filesystem, using NFS or FUSE, on a client PC. It also provides the ability to add more servers to, or remove existing servers from, the storage pool on the fly. GlusterFS functions like a "network RAID" device, and many RAID concepts are apparent during setup. It really shines when you need to store huge quantities of data, have redundant file storage, or write data very quickly for later access. Geo-replication lets you mirror the data on a volume across the wire; the target can be a single directory or another GlusterFS volume. On top of handling multiple petabytes with ease, it is very easy to install and manage.
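
Here's a minimal sketch of what that looks like from the client and admin side, assuming an existing volume named gv0 and illustrative host names (run as root; on a replicated volume, add-brick also needs a replica count):

```python
import subprocess

# Mount the volume on a client like any legacy filesystem (FUSE client).
subprocess.run(
    ["mount", "-t", "glusterfs", "server1:/gv0", "/mnt/gluster"],
    check=True)

# Grow the pool on the fly by adding another server's brick to the volume.
subprocess.run(
    ["gluster", "volume", "add-brick", "gv0", "server3:/bricks/brick1"],
    check=True)
```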

Lustre

Designed for “the world’s largest and most complex computing environments,” Lustre is a high-performance scale-out file system. It boasts that it can handle tens of thousands of nodes and petabytes of data with very fast throughput.

Lustre file systems are highly scalable and can be part of multiple computer clusters with tens of thousands of client nodes, multiple petabytes of storage on hundreds of servers, and more than 1TB/s of aggregate I/O throughput. This makes Lustre file systems a popular choice for businesses with large data centers.

OpenZFS

OpenZFS is an outstanding storage platform that encompasses the functionality of traditional filesystems, volume managers, and more, with consistent reliability, functionality and performance. This popular file system is incorporated into many other open source storage projects. It offers excellent scalability and data integrity, and it’s available for most Linux distributions.

IPFS

IPFS is short for “Interplanetary File System,” and is an unusual project that uses peer-to-peer technology to connect all computers with a single file system. It aims to supplement, or possibly even replace, the Hypertext Transfer Protocol that runs the web now. According to the project owner, “In some ways, IPFS is similar to the Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository.”

IPFS isn’t exactly a well-known technology yet, even among many in the Valley, but it’s quickly spreading by word of mouth among folks in the open-source community. Many are excited by its potential to greatly improve file transfer and streaming speeds across the Internet.

Open Source RAID Solutions

DRBD

DRBD is a distributed replicated storage system for the Linux platform. It is implemented as a kernel driver, several userspace management applications and some shell scripts. It is typically used in high availability (HA) computer clusters, but beginning with v9 it can also be used to create larger software defined storage pools with more of a focus on cloud integration. Support and training are available through the project owner, LinBit.

DRBD's replication technology is fast and efficient. If you can live with an active-passive setup, DRBD is an excellent storage replication solution: it keeps data synchronized between multiple nodes, including nodes in different datacenters, and failover between two nodes is very fast.

Mdadm

Mdadm is the standard Linux utility for setting up and managing software RAID arrays, built on the kernel's md driver and using ordinary hardware. While it is terminal-based, it offers a wide variety of options for monitoring, reporting, and managing RAID arrays.
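
For example, here's a hedged sketch of creating and inspecting a two-disk RAID 1 mirror from Python. The device names are illustrative, and mdadm will destroy any existing data on them:

```python
import subprocess

# Assemble two partitions into a RAID 1 mirror.
subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=1", "--raid-devices=2",
     "/dev/sdb1", "/dev/sdc1"],
    check=True)

# Inspect the array's build progress and health afterwards.
subprocess.run(["mdadm", "--detail", "/dev/md0"], check=True)
```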

Raider

Raider applies RAID 1, 4, 5, 6 or 10 to hard drives. It can convert a single Linux system disk into a software RAID 1, 4, 5, 6 or 10 system with a simple two-pass command. Raider is a bash shell script that deals with the specific oddities of several Linux distros (Ubuntu, Debian, Arch, Mandriva, Mageia, openSuSE, Fedora, Centos, PCLinuxOS, Linux Mint, Scientific Linux, Gentoo, Slackware… – see the README) and uses Linux software RAID (mdadm; see http://en.wikipedia.org/wiki/Mdadm and https://raid.wiki.kernel.org/ ) to execute the conversion.

Open Source Backup and Synchronization Solutions

Zmanda

From their marketing staff… “Zmanda is the world’s leading provider of open source backup and recovery software. Our open source development and distribution model enables us to deliver the highest quality backup software such as Amanda Enterprise and Zmanda Recovery Manager for MySQL at a fraction of the cost of software from proprietary vendors. Our simple-to-use yet feature-rich backup software is complemented by top-notch services and support expected by enterprise customers.”

Zmanda offers a community and enterprise edition of their software. The enterprise edition of course offers a much more complete feature set.

AMANDA

The core of Amanda is the Amanda server, which handles all the backup operations, compression, indexing and configuration tasks. You can run it on any Linux server, as it doesn't conflict with other processes, but it is recommended to run it on a dedicated machine: that removes the associated processing load from the client machines and prevents the backup from negatively affecting the clients' performance.

Overall it is an extremely capable file-level backup tool that can be customized to your exact requirements. While it lacks a GUI, the command line controls are simple and the level of control you have over your backups is exceptional. Because it can be called from within your own scripts, it can be incorporated into your own custom backup scheme no matter how complex your requirements are. Paid support and a cloud-based version are available through Zmanda, which is owned by Carbonite.

Areca Backup

Areca Backup is a free backup utility for Windows and Linux.  It is written in Java and released under the GNU General Public License. It’s a good option for backing up a single system and it aims to be simple and versatile. Key features include compression, encryption, filters and support for delta backup.

Backup

Backup is a system utility for Linux and Mac OS X, distributed as a RubyGem, that allows you to easily perform backup operations. It provides an elegant DSL in Ruby for modeling your backups. Backup has built-in support for various databases, storage protocols/services, syncers, compressors, encryptors and notifiers which you can mix and match. It was built with modularity, extensibility and simplicity in mind.

BackupPC

Designed for enterprise users, BackupPC claims to be “highly configurable and easy to install and maintain.” It backs up to disk only (not tape) and offers features that reduce storage capacity and IO requirements.

Bacula

Another enterprise-grade open source backup solution, Bacula offers a number of advanced features for backup and recovery, as well as a fairly easy-to-use interface. Commercial support, training and services are available through Bacula Systems.

Back In Time

Similar to FlyBack (see below), Back in Time offers a very easy-to-configure snapshot backup solution. GUIs are available for both Gnome and KDE (4.1 or greater).

Backupninja

This tool makes it easier to coordinate and manage backups on your network. With the help of programs like rdiff-backup, duplicity, mysqlhotcopy and mysqldump, Backupninja offers common backup features such as remote, secure and incremental file system backups, encrypted backup, and MySQL/MariaDB database backup. You can selectively enable status email reports, and can back up general hardware and system information as well. One key strength of backupninja is a built-in console-based wizard (called ninjahelper) that allows you to easily create configuration files for various backup scenarios. The downside is that backupninja requires other “helper” programs to be installed in order to take full advantage of all its features. While backupninja’s RPM package is available for Red Hat-based distributions, backupninja’s dependencies are optimized for Debian and its derivatives. Thus it is not recommended to try backupninja for Red Hat based systems.

Bareos

Short for "Backup Archiving Recovery Open Sourced," Bareos is a 100% open source fork of the backup project from bacula.org. The fork has been in development since late 2010 and has gained a lot of new features. The source is published on GitHub under the AGPLv3 license. It offers features like LTO hardware encryption, efficient bandwidth usage and practical console commands. A commercially supported version of the same software is available through Bareos.com.

Box Backup

Box Backup describes itself as “an open source, completely automatic, online backup system.” It creates backups continuously and can support RAID. Box Backup is stable but not yet feature complete. All of the facilities to maintain reliable encrypted backups and to allow clients to recover data are, however, already implemented and stable.

BURP

BURP, which stands for “BackUp And Restore Program,” is a network backup tool based on librsync and VSS. It’s designed to be easy to configure and to work well with disk storage. It attempts to reduce network traffic and the amount of space that is used by each backup.

Clonezilla

Conceived as a replacement for True Image or Norton Ghost, Clonezilla is a disk imaging application that can do system deployments as well as bare metal backup and recovery. Two editions are available: Clonezilla live and Clonezilla SE (server edition). Clonezilla live is suitable for single-machine backup and restore, while Clonezilla SE is for massive deployment and can clone many (40+) computers simultaneously. Clonezilla saves and restores only the used blocks on the hard disk, which increases clone efficiency. With some high-end hardware in a 42-node cluster, a multicast restore at a rate of 8 GB/min has been reported.

Create Synchronicity

Create Synchronicity's claim to fame is its tiny size, around 220KB. It's an easy, fast and powerful backup application that synchronizes files and folders, offers an intuitive interface, and can schedule backups to keep your data safe. Plus, it's open source, portable and multilingual. Windows 2000, Windows XP, Windows Vista, and Windows 7 are supported; to run Create Synchronicity, you must install the .NET Framework, version 2.0 or later.

DAR

DAR is a command-line backup and archiving tool that uses selective compression (not compressing already-compressed files) and strong encryption, can split an archive into multiple files of a given size, and provides on-the-fly hashing. DAR knows how to perform full, differential, incremental and decremental backups. It provides testing, diffing, merging, listing and, of course, extraction of data from existing archives. The archive's internal catalog allows very quick restoration of even a single file from a very large, possibly sliced, compressed and encrypted archive. DAR saves *all* UNIX inode types, takes care of hard links, sparse files and Extended Attributes (MacOS X file forks, Linux ACLs, SELinux tags, user attributes), has support for ssh, and is suitable for tapes and disks (floppy, CD, DVD, hard disks, …). An optional GUI is available from the DarGUI project.

DirSync Pro

DirSync Pro is a small but powerful utility for file and folder synchronization. It can synchronize the contents of one or many folders recursively: for example, from your desktop PC to a USB stick, external HD, PDA or notebook, and from that device on to another desktop PC. It also features incremental backups, a user-friendly interface, a powerful schedule engine, and real-time synchronization. It is written in Java.

Duplicati

Duplicati is designed to back up your network to a cloud computing service like Amazon S3, Microsoft OneDrive, Google Cloud or Rackspace. It includes AES-256 encryption and a scheduler, as well as features like filters, deletion rules, and transfer and bandwidth options. It saves space with incremental backups and data deduplication, runs backups on any machine through the web-based interface or via the command line, and has an auto-updater.

Duplicity

Based on the librsync library, Duplicity creates encrypted archives and uploads them to remote or local servers. It can use GnuPG to encrypt and sign archives if desired.

Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server.

The duplicity package also includes the rdiffdir utility. Rdiffdir is an extension of librsync’s rdiff to directories—it can be used to produce signatures and deltas of directories as well as regular files. These signatures and deltas are in GNU tar format.
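
Here's a minimal sketch of a duplicity backup and restore driven from Python; the paths, SFTP target and GnuPG key ID are all illustrative:

```python
import subprocess

SOURCE = "/home/me/documents"
TARGET = "sftp://backup@backups.example.com/duplicity/documents"

# Force a full backup; subsequent runs without "full" are incremental.
subprocess.run(
    ["duplicity", "full", "--encrypt-key", "0xDEADBEEF", SOURCE, TARGET],
    check=True)

# Restoring simply reverses source and target.
subprocess.run(
    ["duplicity", TARGET, "/home/me/restored-documents"],
    check=True)
```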

FlyBack

Similar to Apple's Time Machine, FlyBack provides incremental backup capabilities and allows users to recover their systems from any previous point in time. The interface is very easy to use, but little customization is available. FlyBack presents a chronological view of the file system, allowing individual files or directories to be previewed or retrieved one at a time. FlyBack was originally based on rsync when the project began in 2007, but in October 2009 it was rewritten from scratch using Git.

FOG

An imaging and cloning solution, FOG makes it easy for administrators to back up networks of all sizes. FOG can image Windows XP, Vista, Windows 7 and Windows 8 PCs using PXE and PartClone, with a Web GUI to tie it all together. It includes features like memory and disk tests, disk wiping, AV scanning and task scheduling.

FreeFileSync

FreeFileSync is free, open source software that synchronizes files and folders on Windows, Linux and Mac OS X. It is designed to save you time setting up and running data backups while providing nice visual feedback along the way. It can save a lot of time and receives very good reviews from its users.

FullSync

FullSync is a powerful tool that helps you keep multiple copies of various data in sync: it can update your website via (S)FTP, back up your data, or refresh a working copy from a remote server. Built for developers, it offers flexible rules, a scheduler, multiple synchronization modes, and support for several file transfer protocols, making it suitable for backup purposes as well as for publishing Web pages.

Grsync

Grsync provides a graphical (GTK) interface for rsync, a popular command-line synchronization and backup tool. It's useful for backup, mirroring, replication of partitions, and so on. A hack/port of Piero Orsoni's Grsync to Windows (win32) also exists.

LuckyBackup

Award-winning LuckyBackup offers simple, fast backup. Note that while it is available in a Windows version, that version is still under development. Features include snapshot backups, various checks to keep data safe, a simulation mode, remote connections, an easy restore procedure, the ability to add or remove any rsync option, folder synchronization, excluding data from tasks, executing other commands before or after a task, scheduling, tray notification support, and e-mail reports.

Mondo Rescue

Mondo Rescue is a GPL disaster recovery solution. It supports Linux (i386, x86_64, ia64) and FreeBSD (i386), and it's packaged for multiple distributions (Fedora, RHEL, openSuSE, SLES, Mandriva, Mageia, Debian, Ubuntu, Gentoo). It supports tapes, disks, network and CD/DVD as backup media, multiple filesystems, LVM, software and hardware RAID, and both BIOS and UEFI.

Obnam

Winner of the most original name for backup software: "OBligatory NAMe". This app performs snapshot backups that can be stored on local disks or online storage services. Features include easy usage, snapshot backups, data de-duplication (across files and backup generations), encrypted backups, and support for both PUSH (run on the client) and PULL (run on the server) methods.

Partimage

Partimage is open source disk backup software. It saves partitions with a supported filesystem, on a sector basis, to an image file. Although it runs under Linux, Windows and most Linux filesystems are supported. The image file can be compressed to save disk space and transfer time, and can be split into multiple files to be copied to CDs or DVDs. Partitions can be saved across the network using Partimage's network support, or using Samba/NFS (Network File Systems). This makes it possible to recover a hard disk partition after a disk crash. Partimage can be run as part of your normal system or stand-alone from the live SystemRescueCd, which is helpful when the operating system cannot be started; SystemRescueCd comes with most of the Linux data recovery software you may need.

Partimage will only copy data from the used portions of the partition (this is why it only works for supported filesystems): for speed and efficiency, free blocks are not written to the image file, unlike other tools that also copy unused blocks. Since the partition is processed on a sequential sector basis, disk transfer time is maximized and seek time is minimized, so Partimage also works well for very full partitions. For example, a full 1 GB partition may be compressed down to 400MB.

Redo

Redo is an easy rescue system with GUI tools for full system backup, bare metal recovery, partition editing, recovering deleted files, data protection, web browsing, and more. It uses partclone (like Clonezilla) with a UI like Ghost or Acronis, and runs from CD/USB.

Rsnapshot

Rsnapshot is a filesystem snapshot utility for making backups of local and remote systems. Using rsync and hard links, it is possible to keep multiple, full backups instantly available. The disk space required is just a little more than the space of one full backup, plus incrementals. Depending on your configuration, it is quite possible to set up in just a few minutes. Files can be restored by the users who own them, without the root user getting involved. There are no tapes to change, so once it’s set up, you may never need to think about it again. rsnapshot is written entirely in Perl. It should work on any reasonably modern UNIX compatible OS, including: Debian, Redhat, Fedora, SuSE, Gentoo, Slackware, FreeBSD, OpenBSD, NetBSD, Solaris, Mac OS X, and even IRIX.
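
The hard-link trick is simple enough to sketch in a few lines of Python. This toy version (illustrative paths, two generations only) captures the idea behind rsnapshot's rotation, though the real tool handles many generations, locking and remote sources:

```python
import os
import shutil
import subprocess

SRC = "/home/"            # what to back up
SNAP = "/backups"         # snapshot root

def rotate_and_sync():
    newest = os.path.join(SNAP, "daily.0")
    older = os.path.join(SNAP, "daily.1")
    if os.path.isdir(older):
        shutil.rmtree(older)                      # drop the oldest copy
    if os.path.isdir(newest):
        # Hard-link copy of the whole tree: a full snapshot for almost
        # no extra disk space, since unchanged files share inodes.
        subprocess.run(["cp", "-al", newest, older], check=True)
    subprocess.run(["rsync", "-a", "--delete", SRC, newest], check=True)
```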

Rsync

Rsync is a fast and extraordinarily versatile file copying tool for both remote and local files. Rsync uses a delta-transfer algorithm which provides a very fast method for bringing remote files into sync, sending just the differences in the files across the link without requiring that both sets of files be present at one end of the link beforehand. At first glance this may seem impossible, because calculating the diff between two files normally requires local access to both of them. Rsync gets around this by exchanging checksums: the receiver sends a weak rolling checksum (plus a stronger hash) for each block of its copy, and the sender scans its own file for matching blocks, transmitting only the literal data in between.
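
A toy version of the weak rolling checksum shows why scanning for matches is cheap. This is a simplified illustration of the idea, not rsync's exact implementation:

```python
def weak_checksum(block, mod=1 << 16):
    """Adler-style weak checksum over one block, returned as (a, b)
    so the value can later be 'rolled' one byte at a time."""
    a = sum(block) % mod
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % mod
    return a, b

def roll(a, b, out_byte, in_byte, block_len, mod=1 << 16):
    """Slide the window one byte without rehashing the whole block."""
    a = (a - out_byte + in_byte) % mod
    b = (b - block_len * out_byte + a) % mod
    return a, b

data = b"The quick brown fox"
a, b = weak_checksum(data[0:4])
a, b = roll(a, b, data[0], data[4], 4)
assert (a, b) == weak_checksum(data[1:5])  # same result, O(1) per step
```

Because the checksum rolls in constant time, the sender can test every byte offset of its file against the receiver's block checksums without re-reading each block.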

SafeKeep

SafeKeep is a centralized and easy to use backup application that combines the best features of a mirror and an incremental backup. It sets up the appropriate environment for compatible backup packages and simplifies the process of running them. For Linux users only, SafeKeep focuses on security and simplicity. It’s a command line tool that is a good option for a smaller environment.

Synkron

This application allows you to keep your files and folders updated and synchronized. Key features include an easy to use interface, blacklisting, analysis and restore. It is also cross-platform.

Synbak

Synbak is a tool designed to unify several backup methods. It provides a powerful reporting system and a very simple interface for configuration files. Synbak is a wrapper for several existing backup programs, supplying the end user with a common configuration method, managing the execution logic for every backup, and giving detailed reports of backup results. Synbak can make backups using RSync (over ssh, rsync daemon, SMB and CIFS protocols, using internal automount functions), Tar archives (tar, tar.gz and tar.bz2), tape devices (including multi-loader changer tapes), LDAP databases, MySQL databases, Oracle databases, CD-RW/DVD-RW, and Wget to mirror HTTP/FTP servers. It offers official support for Red Hat Enterprise Linux and Fedora Core distributions only.

SnapBackup

Designed to be as easy to use as possible, SnapBackup backs up files with just one click. The first time you run SnapBackup, you configure where your data files reside and where to create backup files. SnapBackup can also copy your backup to an archive location, such as a USB flash drive (memory stick), external hard drive, or cloud backup. It automatically puts the current date in the backup file name, freeing you from the tedious task of renaming your backup file every time you back up. The backup file is a single compressed file that can be read by zip programs such as gzip, 7-Zip, The Unarchiver, and the Mac's built-in Archive Utility.

Syncovery

File synchronization and backup software. Back up data and synchronize PCs, Macs, servers, notebooks, and online storage space. You can set up as many different jobs as you need and run them manually or using the scheduler. Syncovery works with local hard drives, network drives and any other mounted volumes. In addition, it comes with support for FTP, SSH, HTTP, WebDAV, Amazon S3, Google Drive, Microsoft Azure, SugarSync, box.net and many other cloud storage providers. You can use ZIP compression and data encryption. On Windows, the scheduler can run as a service – without users having to log on. There are powerful synchronization modes, including Standard Copying, Exact Mirror, and SmartTracking. Syncovery features a well designed GUI to make it an extremely versatile synchronizing and backup tool.

XSIbackup

XSIbackup can back up VMware ESXi environments, version 5.1 or greater. It's a command-line tool with a scheduler, and it runs directly on the hypervisor. XSIBackup is a free alternative to commercial software like Veeam Backup.

UrBackup

A client-server system, UrBackup does both file and image backups. It is an easy-to-set-up open source client/server backup system that, through a combination of image and file backups, accomplishes both data safety and fast restoration times. File and image backups are made while the system is running, without interrupting current processes. UrBackup also continuously watches the folders you want backed up in order to quickly find differences from previous backups, so incremental file backups are really fast. Files can be restored through the web interface, via the client, or with Windows Explorer, while backups of drive volumes can be restored with a bootable CD or USB stick (bare metal restore). A web interface makes setting up your own backup server easy.

Unison

This file synchronization tool goes beyond the capabilities of most backup systems, because it can reconcile several slightly different copies of the same file stored in different places. It can work between any two (or more) computers connected to the Internet, even if they don’t have the same operating system. It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other.

Unison shares a number of features with tools such as configuration management packages (CVS, PRCS, Subversion, BitKeeper, etc.), distributed filesystems (Coda, etc.), uni-directional mirroring utilities (rsync, etc.), and other synchronizers (Intellisync, Reconcile, etc). Unison runs on both Windows and many flavors of Unix (Solaris, Linux, OS X, etc.) systems. Moreover, Unison works across platforms, allowing you to synchronize a Windows laptop with a Unix server, for example. Unlike simple mirroring or backup utilities, Unison can deal with updates to both replicas of a distributed directory structure. Updates that do not conflict are propagated automatically. Conflicting updates are detected and displayed.

Win32DiskImager

This program is designed to write a raw disk image to a removable device or backup a removable device to a raw image file. It is very useful for embedded development, namely Arm development projects (Android, Ubuntu on Arm, etc). Averaging more than 50,000 downloads every week, this tool is a very popular way to copy a disk image to a new machine. It’s very useful for systems administrators and developers.

Open Source Cloud Data Storage Solutions

Camlistore

Camlistore is short for “Content-Addressable Multi-Layer Indexed Storage.” Camlistore is a set of open source formats, protocols, and software for modeling, storing, searching, sharing and synchronizing data in the post-PC era. Data may be files or objects, tweets or 5TB videos, and you can access it via a phone, browser or FUSE filesystem. It is still under active development. If you’re a programmer or fairly technical, you can probably get it up and running and get some utility out of it. Many bits and pieces are actively being developed, so be prepared for bugs and unfinished features.

CloudStack

Apache’s CloudStack project offers a complete cloud computing solution, including cloud storage. Key storage features include tiering, block storage volumes and support for most storage hardware.

CloudStack is open source software designed to deploy and manage large networks of virtual machines, as a highly available, highly scalable Infrastructure as a Service (IaaS) cloud computing platform. CloudStack is used by a number of service providers to offer public cloud services, and by many companies to provide an on-premises (private) cloud offering, or as part of a hybrid cloud solution.

CloudStack is a turnkey solution that includes the entire “stack” of features most organizations want with an IaaS cloud: compute orchestration, Network-as-a-Service, user and account management, a full and open native API, resource accounting, and a first-class User Interface (UI). It currently supports the most popular hypervisors: VMware, KVM, Citrix XenServer, Xen Cloud Platform (XCP), Oracle VM server and Microsoft Hyper-V.

CloudStore

CloudStore synchronizes files between multiple locations. It is similar to Dropbox, but it’s completely free and, as noted by the developer, does not require the user to trust a US company.

Cozy

Cozy is a personal cloud solution that allows users to "host, hack and delete" their own files. It stores calendar and contact information in addition to documents, and it also has an app store with compatible applications.

DREBS

Designed for Amazon Web Services users, DREBS stands for "Disaster Recovery for Elastic Block Store." It takes periodic snapshots of EBS volumes for disaster recovery purposes and is designed to run on the EC2 host to which the EBS volumes being snapshotted are attached.

DuraCloud

DuraCloud is a hosted service and open technology developed by DuraSpace that makes it easy for organizations and end users to use cloud services. DuraCloud leverages existing cloud infrastructure to enable durability of and access to digital content. It is particularly focused on providing preservation support and access services for academic libraries, academic research centers, and other cultural heritage organizations. The service builds on pure storage from expert storage providers by overlaying the access functionality and preservation support tools that are essential to ensuring long-term access and durability.

DuraCloud offers cloud storage across multiple commercial and non-commercial providers, and offers compute services that are key to unlocking the value of digital content stored in the cloud. It provides services that enable digital preservation, data access, transformation, and data sharing. Customers are offered "elastic capacity" coupled with a "pay as you go" approach. DuraCloud is appropriate for individuals, single institutions, or multiple organizations that want to use cross-institutional infrastructure. It became available as a limited pilot in 2009 and was released broadly as a service of the DuraSpace not-for-profit organization in 2011.

FTPbox

This app allows users to set up cloud-based storage services on their own servers. It supports FTP, SFTP or FTPS file syncing.

Pydio

Pydio is a mature open source alternative to Dropbox and Box for the enterprise. Formerly known as AjaXplorer, this app helps enterprises set up a file-sharing service on their own servers. It's very easy to install and offers an attractive, intuitive interface.

Seafile

With Seafile you can set up your own private cloud storage server, using the free community or paid professional edition, or use their hosted service, which is free for up to 1GB. Seafile is an open source cloud storage system with privacy protection and teamwork features. Collections of files are called libraries; each library can be synced separately and can be encrypted with a user-chosen password. Seafile also allows users to create groups and easily share files within them.

SparkleShare

Another self-hosted cloud storage solution, SparkleShare is a good storage option for files that change often and are accessed by a lot of people. (It’s not as good for complete backups.) Because it was built for developers, it also includes Git. SparkleShare is open-source client software that provides cloud storage and file synchronization services. By default, it uses Git as a storage backend. SparkleShare is comparable to Dropbox, but the cloud storage can be provided by the user’s own server, or a hosted solution such as GitHub. The advantage of self-hosting is that the user retains absolute control over their own data. In the simplest case, self-hosting only requires SSH and Git.

Syncany

Syncany is a cloud storage and filesharing application with a focus on security and abstraction of storage. It is similar to Dropbox, but you can use it with your own server or one of the popular public cloud services like Amazon, Google or Rackspace. It encrypts files locally, adding security for sensitive files.

Syncthing

Syncthing was designed to be a secure and private alternative to public cloud backup and synchronization services. It is a continuous file synchronization program. It synchronizes files between two or more computers. It offers strong encryption and authentication capabilities and includes an easy-to-use GUI.

PerlShare

PerlShare is another Dropbox alternative, allowing users to set up their own cloud storage servers. Windows and OS X support is under development, but it works on Linux today.

Storage Management / SDS

OpenSDS

OpenSDS provides advanced APIs that enable enterprise storage features to be fully utilized by OpenStack. For end users, OpenSDS offers freedom of choice, letting you pick solutions from different vendors as you transform your IT infrastructure into a platform for cloud-native workloads and accelerate new business rollouts.

CoprHD

CoprHD is an open source software defined storage controller and API platform by Dell EMC. It enables policy-based management and cloud automation of storage resources for block, object and file storage providers.

REX-Ray

REX-Ray is a Dell EMC open source project: a container storage orchestration engine enabling persistence for cloud-native workloads. New updates and features contribute to enterprise readiness, as the {code} team at Dell EMC works through REX-Ray and libStorage with industry organizations to ensure long-lasting interoperability of storage for cloud-native applications through a universal Container Storage Interface.

Nexenta

From their website: "Nexenta is the global leader in Open Source-driven Software-Defined Storage – what we call Open Software-Defined Storage (OpenSDS). We uniquely integrate software-only 'Open Source' collaboration with commodity hardware-centric 'Software-Defined Storage' (SDS) innovation."

Libvirt Storage Management

libvirt is an open source API, daemon and management tool for managing platform virtualization. It can be used to manage KVM, Xen, VMware ESX, QEMU and other virtualization technologies. Its APIs are widely used in the orchestration layer of hypervisors in the development of cloud-based solutions.

OHSM

Online Hierarchical Storage Manager (OHSM) is the first attempt at an enterprise-level open source data storage manager that automatically moves data between high-cost and low-cost storage media. HSM systems exist because high-speed storage devices, such as hard disk drive arrays, are more expensive (per byte stored) than slower devices, such as optical discs and magnetic tape drives. While it would be ideal to have all data available on high-speed devices all the time, this is prohibitively expensive for many organizations. Instead, HSM systems store the bulk of the enterprise's data on slower devices and then copy data to faster disk drives when needed. In effect, OHSM turns the fast disk drives into caches for the slower mass storage devices. Data center administrators set policies determining which data can safely be moved to slower devices and which data should stay on the fast devices; done manually, such migrations tend to mean downtime and namespace changes. Policy rules specify both initial allocation destinations and relocation destinations as priority-ordered lists of placement classes: files are allocated in the first placement class in the list if free space permits, in the second class if no free space is available in the first, and so forth.
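
A toy relocation pass shows the flavor of such a policy. This sketch simply demotes files not read in 90 days; the paths and threshold are illustrative, and a real HSM like OHSM preserves the namespace rather than physically moving files like this:

```python
import os
import shutil
import time

FAST_TIER = "/mnt/fast"   # illustrative mount points
SLOW_TIER = "/mnt/slow"

def demote_cold_files(days=90):
    """Move files not read in `days` from the fast tier to the slow tier."""
    cutoff = time.time() - days * 86400
    for root, _dirs, files in os.walk(FAST_TIER):
        for name in files:
            path = os.path.join(root, name)
            if os.stat(path).st_atime < cutoff:
                rel = os.path.relpath(path, FAST_TIER)
                dest = os.path.join(SLOW_TIER, rel)
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                # A real HSM would leave a stub behind so the namespace
                # does not change; we simply relocate the file.
                shutil.move(path, dest)
```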

Open Source Data Destruction Solutions

BleachBit

With BleachBit you can free cache, delete cookies, clear Internet history, shred temporary files, delete logs, and discard junk you didn't know was there. Designed for Linux and Windows systems, it wipes clean thousands of applications including Firefox, Internet Explorer, Adobe Flash, Google Chrome, Opera, Safari, and more. Beyond simply deleting files, BleachBit includes advanced features such as shredding files to prevent recovery, wiping free disk space to hide traces of files deleted by other applications, and vacuuming Firefox to make it faster.

Darik’s Boot And Nuke

Darik’s Boot and Nuke (“DBAN”) is a self-contained boot image that securely wipes the hard disks of most computers. DBAN is appropriate for bulk or emergency data destruction. This app can securely wipe an entire disk so that the data cannot be recovered. The owner of the app, Blancco, also offers related paid products, including some that support RAID.

Eraser

Eraser is a secure data removal tool for Windows. It completely removes sensitive data from your hard drive by overwriting it several times with carefully selected patterns. It erases residue from deleted files, erases MFT and MFT-resident files (for NTFS volumes) and Directory Indices (for FAT), and has a powerful and flexible scheduler.

FileKiller

FileKiller is another option for secure file deletion. It allows the user to determine how many times deleted data is overwritten depending on the sensitivity of the data being deleted. It offers fast performance and can handle large files.

Features include high performance, a choice of 1 to 100 overwrite iterations, overwrite methods using blanks, random data, or a user-defined ASCII character, and deletion of filenames along with the data. No setup is needed (you get just a single executable), though it requires .NET 3.5.
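
The core overwrite loop of tools like this is easy to sketch in Python. Note this is a best-effort illustration: on SSDs and copy-on-write filesystems the old blocks may survive remapping, so it is not a substitute for a dedicated tool:

```python
import os

def shred(path, passes=3, pattern=None):
    """Overwrite a file in place `passes` times, then delete it.
    Uses random data by default or a fixed byte pattern if given."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write((pattern * size)[:size] if pattern else os.urandom(size))
            f.flush()
            os.fsync(f.fileno())   # push the overwrite to disk each pass
    os.remove(path)

# shred("secrets.txt")                      # random data, 3 passes
# shred("secrets.txt", pattern=b"\x00")     # zero-fill instead
```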

Open Source Distributed Storage/Big Data Solutions

BigData

BigData describes itself as an ultra high-performance graph database supporting the RDF data model. It can scale to 50 billion edges on a single machine. Paid commercial support is available for this product.

Hadoop

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. The Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failure. This project is so well known that it has become nearly synonymous with big data.

HPCC

HPCC Systems (High Performance Computing Cluster) is an open source, massively parallel-processing computing platform for big data processing and analytics, intended as an alternative to Hadoop. It is a distributed data storage and processing platform that scales to thousands of nodes. It was developed by LexisNexis Risk Solutions, which also offers paid enterprise versions of the software.

Sheepdog

Sheepdog is a distributed object storage system for volume and container services that manages disks and nodes intelligently. Sheepdog features ease of use and simplicity of code, and can scale out to thousands of nodes. Its block-level volume abstraction can be attached to QEMU virtual machines and Linux SCSI Target, and supports advanced volume management features such as snapshots, cloning, and thin provisioning. Its object-level container abstraction is designed to be OpenStack Swift and Amazon S3 API compatible, and can be used to store and retrieve any amount of data through a simple web services interface.

Open Source Document Management Systems (DMS) Solutions

bitfarm-Archiv Document Management

bitfarm-Archiv is an intuitive, award-winning document management system with fast user acceptance. Its extensive, practical functionality and excellent adaptability make this open source DMS one of the most powerful document management, archiving and ECM solutions for institutions in all sectors, at low cost. A paid enterprise version and paid services are available.

DSpace

Highly rated DSpace describes itself as “the software of choice for academic, non-profit, and commercial organizations building open digital repositories.” It offers a Web-based interface and very easy installation.

Epiware

Epiware offers customizable, Web-based document capture, management, storage, and sharing. Paid support is also available.

LogicalDOC

LogicalDOC is a Web-based, open source document management application that is very simple to use and suitable for organizations of any size and type. It uses best-of-breed Java technologies such as Spring, Hibernate and AJAX and can run on any system, from Windows to Linux or Mac OS X. The features included in the community edition, including a light workflow, version control and a full-text search engine, help manage the document lifecycle, encourage cooperation, and let you quickly find the documents you need without wasting time. The application is implemented as a plugin system, so new features can easily be added through the available extension points. Moreover, the presence of Web services ensures that LogicalDOC can be easily integrated with other systems.

OpenKM

OpenKM integrates all essential document management, collaboration and advanced search functionality into one easy-to-use solution. The system also includes administration tools for defining user roles, access control, user quotas, document security levels, detailed activity logs and automation setup. OpenKM builds a highly valuable repository of corporate information assets to facilitate knowledge creation and improve business decision making, boosting workgroup and enterprise productivity through shared practices, better customer relations, faster sales cycles, improved product time-to-market, and better-informed decision making.

Open Source Encryption Solutions

AxCrypt

Downloaded nearly 3 million times, AxCrypt is one of the leading open source file encryption tools for Windows. It works with the Windows file manager and with cloud-based storage services like Dropbox, Live Mesh, SkyDrive and Box.net. It offers personal privacy and security with AES-256 file encryption and compression; double-click to automatically decrypt and open documents.
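
For a feel of what AES-256 file encryption involves, here's a minimal Python sketch using the third-party cryptography package (pip install cryptography). It illustrates the primitive only and is in no way AxCrypt's file format; the file names are made up:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file(src, dst, key):
    nonce = os.urandom(12)                  # must be unique per encryption
    with open(src, "rb") as f:
        ciphertext = AESGCM(key).encrypt(nonce, f.read(), None)
    with open(dst, "wb") as f:
        f.write(nonce + ciphertext)         # store nonce alongside the data

key = AESGCM.generate_key(bit_length=256)   # a 32-byte AES-256 key
encrypt_file("report.docx", "report.docx.enc", key)
```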

Crypt

Extremely lightweight, the 44KB Crypt promises very fast encryption and decryption. You don't need to install it, and it can run from a thumb drive. The tool is command line only, as you'd expect for such a lightweight application.

Gnu Privacy Guard (GPG)

GNU Privacy Guard (GnuPG or GPG) is a free software replacement for Symantec's PGP cryptographic software suite. GnuPG is compliant with RFC 4880, the IETF standards-track specification of OpenPGP. GNU's implementation of the OpenPGP standard allows users to encrypt and sign data and communications. It's a very mature project that has been under active development for well over a decade.
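
Basic GPG usage is easily scripted. Here's a sketch that encrypts and signs a file for a recipient and then decrypts it; the recipient address and file names are illustrative, and the keys must already be in your keyring:

```python
import subprocess

# Encrypt and sign a file for a recipient; gpg writes backup.tar.gz.gpg.
subprocess.run(
    ["gpg", "--encrypt", "--sign",
     "--recipient", "alice@example.com",
     "backup.tar.gz"],
    check=True)

# The recipient decrypts it (the signature is verified automatically).
subprocess.run(
    ["gpg", "--output", "backup.tar.gz", "--decrypt", "backup.tar.gz.gpg"],
    check=True)
```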

gpg4win (GNU privacy guard for Windows)

See above. This is a port of the Linux version of GPG. It’s easy to install and includes plug-ins for Outlook and Windows Explorer.

GPG Tools

See above. This project ports GPG to the Mac.

TrueCrypt

TrueCrypt is a discontinued, source-available freeware utility for on-the-fly encryption (OTFE). It can create a virtual encrypted disk within a file, or encrypt a partition or an entire storage device (with pre-boot authentication). Extremely popular, this utility has been downloaded millions of times.