I’ve written recently about the differences between solid-state drives (SSDs), PCIe SSDs, and solid-state memory over PCIe. But a new technology was presented to me this month that’s even more radical: What if NAND flash were placed on a DIMM for direct access by the CPU? This is exactly what Diablo Technologies just announced as “Memory Channel Storage”.
In theory, the idea of placing NAND flash on the memory bus makes perfect sense. After all, NAND really is random-access memory (RAM), just like the fancy DRAM chips used in today’s systems. And these memory channels are seriously fast, blowing even PCIe out of the water. Then there’s the fact that memory channels are highly “deterministic” – that is, they are predictably quick all the time, unlike a shared channel like PCIe, SAS, or Fibre Channel!
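To put rough numbers on that "seriously fast" claim, here's a back-of-the-envelope comparison. These are my own ballpark figures for a 2013-era server, not anything from Diablo's announcement: a single DDR3-1600 channel moves 8 bytes per transfer at 1600 MT/s, a typical Xeon socket has four such channels, and a PCIe 2.0 x8 link tops out around 4 GB/s after encoding overhead.

```python
# Back-of-the-envelope bandwidth comparison (illustrative figures, not Diablo's specs).

# One DDR3-1600 channel: 1600 million transfers/sec * 8 bytes per transfer.
ddr3_1600_channel_gbs = 1600e6 * 8 / 1e9        # ~12.8 GB/s per channel
channels_per_socket = 4                          # common for a 2013-era Xeon socket
memory_bus_gbs = ddr3_1600_channel_gbs * channels_per_socket

# PCIe 2.0 x8: ~500 MB/s per lane after 8b/10b encoding, times 8 lanes.
pcie2_x8_gbs = 0.5 * 8                           # ~4 GB/s

print(f"DDR3-1600, 4 channels: ~{memory_bus_gbs:.0f} GB/s")   # ~51 GB/s
print(f"PCIe 2.0 x8:           ~{pcie2_x8_gbs:.0f} GB/s")     # ~4 GB/s
```

Real-world NAND won't saturate those channels, of course, but the headroom (and the point about predictable, unshared access) is why the memory bus is such an attractive place to put it.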
But putting NAND flash on DIMMs isn’t trivial. NAND has radically different performance characteristics from DRAM, and today’s memory controllers were never intended to address it. System software, too, is unprepared to deal with a situation where some system memory is fast and volatile while other regions are slower and non-volatile. Looked at this way, the Diablo MCS concept is just plain bizarre!
How could this possibly be made to work? The most critical element is buy-in from the companies making the systems that would use MCS. This is not a simple, universal peripheral; servers must be engineered to handle MCS modules, and software drivers are required, too. As a small company, Diablo really has its work cut out for it!
First, Diablo must ink strong relationships with major server and system software vendors. This means at least one of the big blade and server companies (Cisco, Dell, HP, IBM) as well as the major hypervisor and OS vendors (VMware, Microsoft, Citrix, Oracle). This will not be easy. Although every one of these companies wants this kind of memory, there are a huge number of conflicting technology advances right now, from distributed storage to caching. Plus, existing storage OEMs from EMC, HDS, and NetApp to Fusion-io, Intel, and Micron have their own alternatives on offer.
But what if it worked? Wouldn’t it be cool to have multiple terabytes of tiered dynamic memory? How about terabytes of storage with ridiculously low latency? This could change everything. Imagine how many virtual machines you could run on an affordable server with a terabyte of (DRAM+NAND) RAM. You could buy fewer systems, scale them bigger, and have less overhead for sharing and management. You could save money, perhaps using “no-SAN” shared storage and skipping the whole storage infrastructure.
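As a rough illustration of that consolidation argument, here's a sketch with numbers I'm assuming purely for the sake of example (not anyone's benchmark): if a typical VM needs a few gigabytes of memory, a 1 TB tiered-memory host holds several times as many VMs as a DRAM-only box.

```python
# Illustrative VM-density arithmetic with assumed sizes; real capacity planning
# would also weigh CPU, I/O, and hypervisor memory overcommit.

vm_memory_gb = 4                 # assumed memory footprint of a typical VM
dram_only_server_gb = 256        # assumed DRAM-only configuration
tiered_server_gb = 1024          # 1 TB of tiered DRAM+NAND on the memory bus

vms_dram_only = dram_only_server_gb // vm_memory_gb   # 64
vms_tiered = tiered_server_gb // vm_memory_gb         # 256

print(f"DRAM-only (256 GB): ~{vms_dram_only} VMs per host")
print(f"Tiered 1 TB:        ~{vms_tiered} VMs per host")
```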
Stephen’s Stance
The Diablo Technologies MCS announcement is seriously cool, but I’m definitely skeptical about their ability to execute. Flash on DIMM would be awesome, and would open up a whole new world for datacenter architects. If Diablo were to announce partnerships with major server and OS vendors, I’d really take notice. Until then, it’s merely a cool idea.
Bill Plein says
Intel Corporation found it hard to put NAND flash on the motherboard. While they had a different concept for its use, I’d go so far as to say that if Intel couldn’t make it work, others will find it even tougher. But times have changed, and maybe the time is right for a startup to come along and change the world.
Paul Braren | TinkerTry.com says
Such a good article, thank you! Inspired me to write up a related article today:
USB 3.1, Flash in DIMM slots, NGFF and Thunderbolt 2 are promising for virtualization at home:
http://TinkerTry.com/usb-3-1-flash-in-dimm-slots-ngff-thunderbolt-2-promising-for-virtualization-at-home/
Dilip Naik says
Really interesting – learnt a lot from this short, power-packed blog post.