Solid state (NAND flash) storage is all the rage right now, but there are many lingering questions regarding its true performance, reliability, and cost. But no question is more important in determining its ultimate usefulness than that of location: Where should flash storage be placed to maximize return on investment?
Storage companies have argued that flash disks can be used most effectively in external storage devices, since it’s simpler to just leverage existing storage technologies. Server companies have tended to prefer to place it inside the server, asking why, if flash disks are capable of massive random I/O performance and extremely low latency, one would put them at the other end of a Fibre Channel or iSCSI connection, which adds latency and tends to coalesce I/O operations.
The Case For Servers
The argument for placing flash in (or very close to) servers boils down to two key contentions:
- Distance = latency, so moving fast flash devices away from I/O-hungry CPUs erodes their effectiveness
- Granularity (or lack thereof) is the core problem facing storage management, so moving flash (and other types of storage) closer to the omniscient application is likely to bring greater effectiveness
The server folks are relying on a technical argument – that placing high-speed cache where it could theoretically do the most good is the right decision. And they are right, in a perfect world: A flash-aware application talking to a low-latency flash device over PCI ought to really fly!
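To put a rough number on that, here’s a crude random-read latency probe, a minimal sketch rather than a real benchmark: the device path is hypothetical, reading a raw block device usually needs root, and the OS page cache will flatter any device unless you bypass it (O_DIRECT, raw I/O) or test a target far larger than RAM. Tools like fio do all of this properly; this just illustrates the idea.

```python
import os
import random
import time

PATH = "/dev/sdb"     # hypothetical test target: a local flash device or a SAN LUN
BLOCK = 4096          # 4 KiB random reads
SAMPLES = 1000

fd = os.open(PATH, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)   # device/file size in bytes

samples = []
for _ in range(SAMPLES):
    offset = random.randrange(size // BLOCK) * BLOCK   # block-aligned offset
    t0 = time.perf_counter()
    os.pread(fd, BLOCK, offset)
    samples.append(time.perf_counter() - t0)
os.close(fd)

samples.sort()
print("median: %6.1f us" % (samples[len(samples) // 2] * 1e6))
print("99th:   %6.1f us" % (samples[int(len(samples) * 0.99)] * 1e6))
```

Run that against a local flash device and then against a LUN on the far side of a Fibre Channel fabric, and the gap between the two medians is the server-side case in a single number.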
There is some disagreement in the server-side argument as well: Is flash a “fake disk” or a new level of caching between RAM and storage? It seems that the pitch leans toward the latter, even when the SSD appears as a disk drive. This is what Fusion-IO is pitching: They skip old-school disk connections like SATA and SAS altogether, placing their storage on PCI Express and asking hardware and software vendors to integrate it as best they can. Consider Sun’s flash integration for ZFS, for example. Note, by the way, that Intel’s Turbo Memory products also offer PCIe flash, despite what you might be hearing.
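The cache-tier view is easy to picture in code. Here’s a minimal conceptual sketch (the class names and structure are mine, not any vendor’s API): a small RAM tier in front of a larger flash tier in front of disk, with misses falling through and fetched data promoted upward.

```python
from collections import OrderedDict

class Tier:
    """A fixed-size LRU tier; misses fall through to the next, slower tier."""
    def __init__(self, capacity, backing):
        self.capacity = capacity
        self.backing = backing            # next tier down (or the disk itself)
        self.blocks = OrderedDict()       # key -> data, kept in LRU order

    def read(self, key):
        if key in self.blocks:
            self.blocks.move_to_end(key)  # hit: refresh recency
            return self.blocks[key]
        data = self.backing.read(key)     # miss: fall through
        self.blocks[key] = data           # promote into this tier
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the coldest block
        return data

class Disk:
    """Stand-in for spinning disk at the bottom of the hierarchy."""
    def read(self, key):
        return "block %d from disk" % key

# RAM in front of flash in front of disk: one read path, three speeds.
ram = Tier(capacity=64, backing=Tier(capacity=4096, backing=Disk()))
print(ram.read(17))
```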
Of course, we can just use a flash drive in place of an internal hard drive. Just about everyone makes something like this now, and they work pretty well in some cases. Then there are hybrid drives, which have gone nowhere so far.
But will this work? We need operating systems and applications that can make use of this local flash, and that has been a problem. Intel’s flash-on-the-motherboard idea never caught on, even with ReadyBoost support in Vista, because the truth is that operating systems, file systems, and applications must be re-engineered to really make use of flash in a server. That’s happening, but slowly.
The Case for Arrays
Then we turn to the other end of the storage pipe. EMC put flash in the DMX in January, and Compellent is doing it as we speak. IBM went wild with a bunch of Fusion-IO drives and an SVC over the summer, too. All this proves that flash works in storage arrays!
Why? Simply because modern storage arrays are already engineered to make good use of disk drive capabilities. This is a “what works” strategy – even though it doesn’t sound as nice in theory, the array doesn’t need lots of re-engineering to see some benefit from flash. And post-RAID virtualized systems like Compellent’s can really make hay with a few super-speed flash drives, since they can move hot blocks to flash dynamically.
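To make “hot blocks” concrete, here is a toy illustration of dynamic tiering (my own sketch, not Compellent’s actual algorithm): tally accesses per block and periodically promote the busiest ones to the flash tier.

```python
import random
from collections import Counter

FLASH_SLOTS = 4        # blocks that fit on the (tiny, for demo) flash tier
REBALANCE_EVERY = 100  # re-evaluate placement after this many I/Os

class TieringArray:
    def __init__(self):
        self.counts = Counter()   # per-block access tally
        self.on_flash = set()     # blocks currently promoted to flash
        self.ios = 0

    def read(self, block):
        self.counts[block] += 1
        self.ios += 1
        if self.ios % REBALANCE_EVERY == 0:
            # Promote the hottest blocks; everything else stays on disk.
            self.on_flash = {b for b, _ in self.counts.most_common(FLASH_SLOTS)}
        return "flash" if block in self.on_flash else "disk"

# Skewed workload: block 7 gets half of all reads, so it lands on flash.
array = TieringArray()
for _ in range(500):
    array.read(7 if random.random() < 0.5 else random.randrange(1000))
print(array.read(7))   # -> flash
```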
Sure, there is latency between the CPU and the flash drive, but storage arrays are really computers in their own right. So they can derive the same benefit from flash that a server could, and they can share that benefit with connected servers rather than leaving it locked up in a single machine.
Why Not Everywhere?
How about we end the debate? Flash works great in the server, and it works great in the array. Why not just put it anywhere it makes sense in your particular environment? Have an operating system, application, or file system that can make use of server-side flash? Go buy a Fusion-IO card! Have a virtualized enterprise storage array? Get some SSD there, too. And remember that it’s not all about NAND flash – RAM-based solid state storage from companies like Texas Memory Systems, Gear6, and Violin is even faster!
But remember one thing: This stuff is still very, very expensive, so you have to really need the performance to make a case for flash.
Image by Kim Hansen, GFDL or CC-BY-SA (source)
the storage anarchist says
Hear! Hear!!!
(Thanks for the support)
Kim Hansen says
How strange. I just came by in passing, noting that an image I took two years ago in Greenland is used in an article debating solid state storage! I had never envisioned, when I published the image, that it could be used in such a context. That is one of the pleasures of publishing free content media. I would also like to thank whoever put the image here for attributing me as the creator, as is required by its license. Had it been perfect, the license (GFDL or CC-BY-SA) should also have been mentioned, and it would have been good courtesy to link to the source: http://commons.wikimedia.org/wiki/File:Fram_approaching_in_front_of_iceberg_upernavik_2007-08-19_1.jpg, but it is a minor issue. So many other reusers don’t do attribution at all, or even copyright the material themselves…
sfoskett says
I had originally linked your name in the caption to your Wikimedia Commons page, but it disappeared! WordPress seems to ignore/nuke hyperlinks in image captions. Weird. So I added a link at the bottom.
Thanks for making such a beautiful image available CC-BY-SA!