The world of storage can be confusing, with obscure terms hiding massive differences in technology and performance. Such is the case with the latest PCI Express SSDs: They are much faster than traditional SAS or SATA SSDs, but many aren’t sure exactly why. In this article, I will try to explain the real difference.
Let’s get it out of the way from the start: All SSDs are fast compared to spinning hard disk drives. When we say that one solid-state drive (SSD) is slower than another, it would be wrong to assume it is slow in general. Even the wimpiest SSDs beat the fastest hard disk drives (HDDs) in terms of I/O operations per second (IOPS) and throughput.
Metrics are also very important: The number of IOPS a drive can sustain is critical for user experience, yet many people still talk only in terms of megabytes per second of throughput. And there are important considerations even within these metrics: What size I/Os are we measuring? What is the mix of read and write operations? And how does the drive sustain performance over hours or years of use? I am not going to dive deep into these questions in this piece, but they are definitely worth considering!
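To illustrate why both metrics matter, here is a minimal Python sketch converting between IOPS and throughput. The I/O sizes and rates are hypothetical, chosen only for illustration, not taken from any particular drive’s spec sheet:

```python
def throughput_mbps(iops: float, io_size_kib: float) -> float:
    """Approximate throughput in MB/s for a given IOPS rate and I/O size."""
    return iops * io_size_kib * 1024 / 1_000_000

# IOPS specs usually assume small (4 KiB) random I/O...
print(throughput_mbps(40_000, 4))    # ~164 MB/s from many small random reads
# ...while throughput specs usually assume large sequential I/O.
print(throughput_mbps(3_200, 128))   # ~419 MB/s from far fewer, larger I/Os
```

The same drive can look fast or slow depending on which workload you measure, which is why I/O size and read/write mix matter so much.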
A Typical SSD: SATA or SAS is the Bottleneck
Let’s consider a typical SSD. Crack open the drive case and you’ll find a number of NAND flash chips, an SSD controller chip, and a SAS or SATA controller. There’s a lot going on inside that case to make these memory chips appear to be a hard disk drive, and even more to maintain performance and reliability over the long haul. In fact, that SSD controller is usually a very powerful CPU: Many use multi-core ARM chips similar to those used in smartphones and tablets!
When a computer accesses this sort of SSD, it must pass all I/O through its own SATA or SAS controller. The computer “knows” nothing about the SSD: It could just as well be accessing a decade-old spinning platter of rust! Even though today’s computers feature many lanes of fast 5 Gbps PCIe I/O, all data access is squeezed through a slow SATA or SAS channel running at 1.5 Gbps, 3 Gbps, or (recently) 6 Gbps.
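To put those link rates in perspective: SATA and SAS (like PCIe 1.x and 2.x) use 8b/10b encoding, so only 80% of the raw line rate carries data. A quick sketch of the usable bandwidth per link:

```python
# Usable bandwidth of a serial link that uses 8b/10b encoding (SATA, SAS, PCIe 1.x/2.x).
def usable_mbps(line_rate_gbps: float, encoding_efficiency: float = 0.8) -> float:
    # raw bits per second -> data bits -> bytes -> MB/s
    return line_rate_gbps * 1e9 * encoding_efficiency / 8 / 1e6

for rate in (1.5, 3.0, 6.0):   # the three SATA/SAS generations mentioned above
    print(f"{rate} Gbps link -> ~{usable_mbps(rate):.0f} MB/s usable")
# 1.5 Gbps -> ~150 MB/s, 3 Gbps -> ~300 MB/s, 6 Gbps -> ~600 MB/s
```

And those figures are ceilings: protocol and controller overhead eat into them before any data reaches the drive.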
All this translation really takes a toll, too. The computer’s own SATA or SAS controller is probably not optimized to handle thousands of IOPS, since no hard disk drive can offer that kind of performance. And there is a tremendous amount of latency (“wait time”) added as each chip and bus unpacks and retransmits the I/O requests.
Again, SSDs are incredibly fast compared to hard disk drives. This is especially true in mobile devices like laptops and tablets, which traditionally used thin, lightweight hard disks optimized for low power consumption, not performance. It’s no exaggeration to say that today’s SSD-equipped laptop is 100x faster in terms of storage I/O than a similar model with a hard disk drive!
Typical hard disk drives are capable of 100 to 200 MBps of throughput and 100 to 200 IOPS in the real world. Yet most consumer SSDs today offer 300 to 400 MBps of throughput and a few thousand IOPS. And some do much better: The 2011-era Micron C400 SSD in my desktop tops out around 40,000 IOPS, and the Samsung SSD in my Retina MacBook Pro is just as quick!
PCIe SSDs: Skip the SATA
High-end workstations and servers have begun to use a different kind of SSD that connects via PCI Express (PCIe) rather than SATA or SAS. Despite using similar NAND chips, these drives are much, much faster than conventional SSDs. There are two reasons for this: They handle I/O at full low-latency PCIe speed, and they are optimized internally to offer greater performance.
Let’s take the most obvious difference first: PCIe SSDs connect using the PCI Express bus rather than SATA or SAS. This alone makes a huge difference, both in terms of latency and throughput. Obviously, moving from a single 3 or 6 Gbps SATA or SAS link to multiple PCIe lanes running at 5 Gbps or more each increases raw speed, but eliminating the SATA or SAS bus and host controller reduces latency, too. Just removing that controller can double or triple performance, all other things being equal.
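Extending the usable-bandwidth sketch above to multiple lanes shows why even a modest PCIe 2.0 connection (5 Gbps per lane, also 8b/10b encoded) dwarfs a single SATA link:

```python
# Rough ceiling of a single SATA 3 link vs. multi-lane PCIe 2.0 connections.
def link_mbps(gbps_per_lane: float, lanes: int = 1, efficiency: float = 0.8) -> float:
    return gbps_per_lane * lanes * 1e9 * efficiency / 8 / 1e6

print(link_mbps(6.0))            # SATA 3:       ~600 MB/s
print(link_mbps(5.0, lanes=4))   # PCIe 2.0 x4: ~2,000 MB/s
print(link_mbps(5.0, lanes=8))   # PCIe 2.0 x8: ~4,000 MB/s
```

These are raw link ceilings rather than real-world drive numbers, but the gap is obvious.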
But many PCIe SSDs are an order of magnitude or more faster, offering hundreds of thousands or even millions of IOPS. How is this possible? PCIe SSDs typically include more NAND channels than lower-end SSDs, enabling more parallelization of I/O inside the drive. Where a consumer SSD might have just 4 NAND channels, a high-end PCIe card usually uses 8 or more paths to access chips, easily doubling performance. And PCIe SSDs use much more powerful and optimized controllers to handle all this I/O, too.
The result is astounding. A relatively accessible SSD like the Micron P320h boasts 32 NAND channels and connects using 8 PCIe lanes, delivering 2-3 GBps throughput and 300k-750k IOPS. That’s 10 hard disk drives of throughput and a thousand hard disk drives (!) of I/O.
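As a rough sanity check of those numbers, consider what the hardware could support. Note that the per-channel NAND bandwidth below is my own illustrative assumption, not a published spec:

```python
# Hypothetical back-of-the-envelope check of the P320h throughput figure.
nand_channels = 32        # channel count cited above
mb_per_channel = 100      # assumed per-channel NAND bandwidth (illustrative only)
flash_ceiling = nand_channels * mb_per_channel
print(f"Flash-side ceiling: ~{flash_ceiling} MB/s")   # ~3,200 MB/s
# The x8 PCIe 2.0 link offers roughly 4,000 MB/s usable (see the earlier sketch),
# so under these assumptions the flash channels, not the bus, set the limit.
```

Under those assumptions, the 2-3 GBps the drive actually delivers sits comfortably within both the flash-side and bus-side ceilings.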
What About PCIe Memory?
But wait, there’s more! PCIe SSDs still translate flash into block SCSI storage for access. What if all that NAND flash could be accessed directly by the CPU, doing random I/O just like any other kind of memory? This is the concept behind PCIe memory cards from companies like Virident and Fusion-io, as well as new industry standards like NVMe and SCSI Express.
In theory, skipping the SSD controller should offer a massive increase in performance. After all, software can be optimized for flash without a translation layer shuffling blocks around in the background, and every bit of latency removed leaves more headroom for further optimization.
This concept has worked out well for Fusion-io in particular. The company’s ioDrive products are ubiquitous in headline-grabbing data centers and applications, from Facebook to Apple. And these “ioMemory” devices perform very, very well, delivering 6 GB/s of throughput and over a million IOPS.
But, in practice, PCIe memory isn’t dramatically better performing than a PCIe SSD. Since most applications don’t support direct PCIe memory access, these products typically use a SCSI translation layer in software or firmware, too. And today’s higher-end SSD controllers do a fine job wringing out performance, even after the overhead of translation. So PCIe memory isn’t a slam dunk win like PCIe SSD, but it’s still remarkably good compared to every other storage option!
Stephen’s Stance
Moving SSDs from conventional SATA and SAS interfaces to faster, lower-latency PCI Express is a huge performance win and represents the future of storage. Apple recently announced a move to PCIe not just in the high-end Mac Pro but also in the mainstream MacBook Air. This switch shows that PCIe SSDs will soon be displacing SATA and SAS buses across the industry!
Guest says
Great explanation, and I agree 100% with your stance. Don’t forget that the PCIe bus now ties straight into the CPU, bypassing the chipset for even more bandwidth and reduced latency.
http://www.intel.com/content/www/us/en/chipsets/server-chipsets/server-chipset-c600.html
Christopher Wilkes says
I agree 100% with your stance. Don’t forget that the PCIe bus now ties straight into the CPU for even more bandwidth and reduced latency.
http://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xeon-e3-1200v3-vol-1-datasheet.pdf
sfoskett says
I included that in the original SSD illustration but removed it for the PCIe ones. I added it back in. Good catch.
Paul Braren | TinkerTry.com says
This is a great article. Thank you Stephen!
Really sheds light on the limitations of “old school” SATA3/6Gbps.
Perspective has changed quite a bit in two short years. Here’s a look at a related discussion from April 2011:
http://www.tomshardware.com/forum/267959-32-sata-sata-worthwhile