Storage Class Memory and Emerging Technologies

I mentioned in my earlier post, The Future of Storage Administration, that flash will continue to dominate the industry and be embraced by the enterprise, which I believe will drive newer technologies like NVMe and diminish older technologies like Fibre Channel. While there is broad agreement on the technologies driving flash adoption in the enterprise, including the aforementioned NVMe, there is far less agreement on what the "next big thing" in enterprise storage will be. NVMe and NVMe-oF are clearly being driven by the trend toward the all-flash data center, and Storage Class Memory (SCM) could well be that "next big thing". Before I continue, what are NVMe, NVMe-oF, and SCM?

  • NVMe is a protocol that provides fast access to direct-attached flash storage. NVMe is considered an evolutionary step toward exploiting the inherent parallelism built into SSDs.
  • NVMe-oF extends the advantages of NVMe across a fabric connecting hosts with networked storage. With the increased adoption of low-latency, high-bandwidth network fabrics like 10Gb+ Ethernet and InfiniBand, it becomes possible to build an infrastructure that carries the performance advantages of NVMe over standard fabrics to low-latency persistent storage.
  • SCM (Storage Class Memory) is a technology that places memory and storage on what looks like a standard DIMM board, which can be connected over NVMe or the memory bus.  I'll dive into it a bit more later on.
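To make these definitions a bit more concrete, here's a minimal sketch of how a host talks to an NVMe device through the Linux kernel's admin passthrough interface, issuing the standard Identify Controller command from the NVMe spec. The device path /dev/nvme0 is an assumption, and you'd need permission to open the controller node; this is illustrative, not production code.

```c
/*
 * Minimal sketch: issue an NVMe Identify Controller admin command
 * through the Linux NVMe passthrough ioctl. The device path
 * /dev/nvme0 is an assumption; the program must have permission
 * to open the controller node.
 */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
    int fd = open("/dev/nvme0", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/nvme0");
        return 1;
    }

    unsigned char data[4096] = {0};     /* Identify data is 4 KiB */
    struct nvme_admin_cmd cmd = {0};
    cmd.opcode   = 0x06;                /* Identify (NVMe spec)   */
    cmd.addr     = (uintptr_t)data;
    cmd.data_len = sizeof(data);
    cmd.cdw10    = 1;                   /* CNS=1: Identify Controller */

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
        perror("NVME_IOCTL_ADMIN_CMD");
        close(fd);
        return 1;
    }

    /* Bytes 24-63 of the Identify Controller data hold the model number. */
    printf("Model: %.40s\n", (char *)&data[24]);
    close(fd);
    return 0;
}
```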

In the coming years, you'll likely see every major storage vendor rolling out its own solutions for NVMe, NVMe-oF, and SCM.  The technologies alone won't mean much without optimization of the OS/hypervisor, drivers, and protocols, however. The NVMe software stack will need to be designed to take advantage of the low-latency transport and media.

Enter Storage Class Memory

SCM is a hybrid memory and storage paradigm that places memory and storage on what looks like a standard DIMM board, and it has been gaining a lot of attention at storage industry conferences for the past year or two.  Modern solid-state drives are a compromise: the media is all-flash, but the drives are still saddled with the bottlenecks of legacy disk interfaces, even when bundled into modern enterprise arrays.  SCM is not exactly memory, and it's not exactly storage.  It physically connects to memory slots in a mainboard just like traditional DRAM.  It is a little slower than DRAM, but it is persistent, so just like traditional storage, all content is preserved across a power cycle.  Compared to flash, SCM is orders of magnitude faster, and it delivers its performance gains equally on read and write operations.  In addition, SCM tiers are much more resilient and do not have the same wear pattern problems as flash.

A large gap exists between DRAM as a main memory and traditional SSD and HDD storage in terms of performance vs. cost, and SCM looks to address that gap.

The next-generation technologies that will drive SCM aim to be denser than current DRAM while being faster, more durable, and hopefully cheaper than NAND solutions.  SCM, whether connected over NVMe or directly on the memory bus, should deliver device latencies about 10x lower than those of NAND-based SSDs.  That speed comes at a higher cost per gigabyte than NAND-based SSDs today, but NAND flash itself started out at least 10x more expensive than the then-dominant 15K RPM HDD media when it was introduced. Prices will come down.
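As a back-of-the-envelope illustration of why that latency ratio matters, the sketch below models the effective latency of an SCM tier acting as a cache in front of NAND flash. The 2us and 20us figures are assumed round numbers consistent with the ~10x gap described above, not vendor measurements.

```c
/*
 * Back-of-the-envelope model of the effective latency of an SCM
 * tier acting as a cache in front of NAND flash. The 2us and 20us
 * device latencies are assumed round numbers, not measurements.
 */
#include <stdio.h>

int main(void)
{
    const double scm_us  = 2.0;    /* assumed SCM media latency     */
    const double nand_us = 20.0;   /* assumed NAND SSD read latency */

    /* Effective latency = hit * scm + (1 - hit) * nand */
    for (double hit = 0.50; hit <= 0.951; hit += 0.15) {
        double eff = hit * scm_us + (1.0 - hit) * nand_us;
        printf("hit ratio %3.0f%% -> effective latency %4.1f us\n",
               hit * 100.0, eff);
    }
    return 0;
}
```

Even a modest SCM hit ratio pulls the blended latency well below what NAND alone can deliver.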

Because the expected media latencies for SCM (<2us) are lower than the network latencies (<5us), SCM will probably end up in servers rather than out on the network.  Either way, SCM in a storage system will help accelerate metadata access and improve overall system performance.  And by using NVMe-oF to provide low-latency access to networked storage, SCM could potentially be used to create a new tier of network storage.

The SCM Vision

It sounds great, right?  The concept of Storage Class Memory has been around for a while, but it remains a hard-to-reach, albeit very desirable, goal for storage professionals. The common vision is a new paradigm in which data lives in fast, DRAM-like storage, making the data in memory, rather than the compute functions, the center of the computer. The main problem with this vision is getting the system and applications to recognize that something beyond plain DRAM is available, and that it can be used either as data storage or as persistent memory.

We know that SCM will allow huge volumes of I/O to be served from memory and potentially stored in memory.  There will be less need to create multiple copies to protect against controller or server failure.  Exactly how this will be done remains to be seen, but there are obvious benefits to not having to continuously commit data to slow external disks.  Once all the hurdles are overcome, SCM should have broad applicability in SSDs, storage controllers, PCIe or NVMe boards, and DIMMs.

Software Support

With SCM, applications won't need to execute write IOs to get data into persistent storage; a memory-level, zero-copy operation moving data into a medium like XPoint will take care of that (a minimal sketch of this model follows the list below). That is just one example of the changes that systems and software will have to take on board when a hardware option like XPoint is treated as persistent storage-class memory.  Most importantly, the following must also be developed:

  • File systems that are aware of persistent memory
  • Operating system support for storage-class memory
  • Processors designed to use hybrid DRAM and XPoint memory
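
Today's closest approximation of that zero-copy model is PMDK's libpmem, which lets an application store directly into a mapped persistent region and flush it from user space, with no write IO on the data path. The sketch below assumes a DAX-capable filesystem mounted at /mnt/pmem; the path and size are illustrative. Build with -lpmem.

```c
/*
 * Minimal sketch of the persistent-memory programming model using
 * PMDK's libpmem: store directly into a mapped persistent region,
 * then flush from user space -- no write IO on the data path.
 * Assumes a DAX-capable filesystem mounted at /mnt/pmem (path and
 * size are illustrative). Build with -lpmem.
 */
#include <stdio.h>
#include <string.h>
#include <libpmem.h>

#define POOL_SIZE (4 * 1024 * 1024)   /* 4 MiB example region */

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map (and create, if needed) a file on the DAX filesystem. */
    char *addr = pmem_map_file("/mnt/pmem/example", POOL_SIZE,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* A memory-level store: no write() system call is issued. */
    strcpy(addr, "hello, persistent world");

    /* Make the store durable. On true pmem this is a user-space
     * CPU cache flush; otherwise fall back to msync() semantics. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);
    else
        pmem_msync(addr, mapped_len);

    pmem_unmap(addr, mapped_len);
    return 0;
}
```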

With that said, the industry is well on its way. Microsoft has added XPoint storage-class memory support to Windows Server 2016, which provides zero-copy access through Direct Access Storage volumes, known as DAX volumes.  On the Linux side, Red Hat operating system support is in place to use these devices as fast disks in sector mode with btt (the Block Translation Table), and this use case is fully supported in RHEL 7.3.
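
For a sense of what a DAX volume actually exposes, here's a minimal Linux sketch: a file on a DAX-mounted filesystem is mapped with MAP_SYNC, giving the application direct load/store access with no page cache in between. The mount point is illustrative, and a real application would still flush CPU caches (for example with libpmem's pmem_persist) before relying on a store being durable.

```c
/*
 * Sketch of what a DAX volume exposes at the Linux system-call
 * level: mapping a file with MAP_SYNC gives the application direct
 * load/store access to the media with no page cache in between.
 * Requires a filesystem mounted with -o dax; the path is
 * illustrative.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#ifndef MAP_SYNC
#include <linux/mman.h>   /* MAP_SYNC / MAP_SHARED_VALIDATE on older glibc */
#endif

int main(void)
{
    int fd = open("/mnt/pmem/dax-file", O_RDWR | O_CREAT, 0666);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    /* MAP_SHARED_VALIDATE makes the kernel reject the mapping
     * (EOPNOTSUPP) if MAP_SYNC cannot be honored. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) { perror("mmap MAP_SYNC"); return 1; }

    /* Plain stores reach the media; once flushed from the CPU
     * caches they are durable, with no fsync() required. */
    strcpy(p, "stores go straight to persistent media");

    munmap(p, 4096);
    close(fd);
    return 0;
}
```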

Hardware

SCM can be implemented with a variety of current technologies, notably Intel Optane, ReRAM, and NVDIMM-P.

Intel has introduced Optane-branded XPoint SSDs as well as XPoint DIMMs; the DIMMs sit directly on the memory bus instead of the relatively slower PCIe bus used by the NVMe XPoint drives.

Resistive Random-Access Memory (ReRAM) is still an up-and-coming technology comparable to Intel's XPoint. It is under development by a number of companies as a potential replacement for flash memory, but its cost and performance are not yet at a level that makes it ready for the mass market. Developers of ReRAM all face similar challenges: overcoming temperature sensitivity, integrating with standard CMOS technology and manufacturing processes, and limiting the effects of sneak path currents, which would otherwise disrupt the stability of the data in each memory cell.

NVDIMM stands for "Nonvolatile Dual In-line Memory Module." The NVDIMM-P specification is being developed to support NAND flash directly on the host memory interface.  NVDIMMs use predictive software that allocates data in advance between DRAM and NAND.  NVDIMM-P is limited in that, even though the NAND flash sits on the DIMM alongside the DRAM, the traditional memory hierarchy stays the same: the NAND still behaves as a storage device and the DRAM still behaves as main memory.

HPE worked for years on its Machine project.  The effort revolved around memory-driven computing and an architecture aimed at big data workloads, with the goal of eliminating inefficiencies in how memory, storage, and processors interact.  While the project now appears to be dead, the technologies it developed will live on in current and future HPE products. Here's what we'll likely see come out of that research:

  • Now: ProLiant boxes with persistent memory for applications to use, mixing DRAM and flash.
  • Next year: Improved DRAM-based persistent memory.
  • Two to three years: True non-volatile memory (NVM) for software to use as slow but high-volume RAM.
  • Three to four years: NVM technology across many product categories.

SCM Use Cases

I think SCM’s possibly most exciting use case for high performance computing will be its use as nonvolatile memory that is tightly coupled to an application. SCM has the potential to dramatically affect the storage landscape in high performance computing, and application and storage developers will have fantastic opportunities to take advantage of this unique new technology.

Intel touts fast storage, cache, and extended memory as the primary use cases for its Optane product line.  Fast storage or cache refers to tiering and layering that enable a better memory-to-storage hierarchy: Optane provides a new storage tier that breaks through the bottlenecks of traditional NAND storage to accelerate applications and enable more work to get done per server. The extended memory use case has an Optane SSD participate in a shared memory pool with DRAM, at either the OS or application level, enabling bigger or more affordable memory.
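
As a rough sketch of the extended memory idea at the application level, the snippet below maps a large file on a fast NVMe/Optane device and uses it like ordinary memory, letting the OS demand-page between DRAM and the device. The mount point and sizing are assumptions for illustration; the OS-level pooling Intel describes happens transparently beneath the application, but the access pattern looks much like this.

```c
/*
 * Application-level sketch of the "extended memory" idea: a large
 * file on a fast NVMe/Optane device is mapped into the address
 * space and used like ordinary memory, with the OS demand-paging
 * between DRAM and the device. The mount point and size are
 * assumptions for illustration.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define EXT_SIZE (1ULL << 30)   /* 1 GiB of "extended" memory */

int main(void)
{
    int fd = open("/mnt/optane/extmem", O_RDWR | O_CREAT, 0666);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, EXT_SIZE) < 0) { perror("ftruncate"); return 1; }

    /* To the application this looks like RAM; cold pages are
     * transparently backed by the low-latency device. */
    long *pool = mmap(NULL, EXT_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (pool == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touch one word per 4 KiB page across the whole pool. */
    size_t words_per_page = 4096 / sizeof(long);
    size_t n = EXT_SIZE / sizeof(long);
    for (size_t i = 0; i < n; i += words_per_page)
        pool[i] = (long)i;

    munmap(pool, EXT_SIZE);
    close(fd);
    return 0;
}
```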

What the next generation of SCM will require is for the industry to come together, agree on what we're all talking about, and generate some standards; those standards will be critical to supporting innovation. Industry experts seem to be saying that SCM adoption will evolve around use cases and workloads, and around task-specific, engineered machines built with real-time analytics in mind.  We'll see what happens.

No matter what, the new NVMe-based products coming out will do a lot to enable fast data processing at large scale, especially solutions that support the new NVMe-oF specification. SCM combined with software-defined storage controllers and NVMe-oF will let users pool flash drives and treat them as if they were one big local flash drive. Exciting indeed.

SCM may not turn out to be a panacea, and current NVMe flash storage systems will provide enough speed and bandwidth to handle even the most demanding compute requirements for the foreseeable future.  I'm looking forward to seeing where this technology takes us.
