
The history of technology moves in fits and starts, but one trend trumps all others: an inevitable shift from fine-tuned, specialized gear to general-purpose commodity building blocks. We see it in both hardware and software, and at every level of the industry, from chips and wafers to operating systems and networking devices. Take a step back and you’ll certainly agree: commodity hardware always wins (eventually).
Doing the Impossible
Building something new often requires amazing feats of engineering. Steve Wozniak’s hardware wizardry in the original Apple computer is now legendary, but similar tales abound throughout the industry.
It’s frightfully difficult to do the impossible without at least some wizardry. Most new technologies are therefore built on specialized hardware and hand-tuned software. Engineers at the forefront have no choice but to eke every last ounce of performance out of components that trail the demands of their designs.
This is why we see so many special-purpose processors in high-performance devices, and why companies employing custom ASICs often enjoy a performance advantage. In enterprise storage, look to companies like HDS and BlueArc, which pack their arrays with special-purpose hardware, pointing the way to the future. We also see impressive developments from Fusion-io and SandForce in the SSD space, leading to the next generation of storage.
Can You Keep Ahead Of Intel?
Just about every technology sector progresses from the impossible to the commonplace, and the transition is often remarkably quick. High-performance storage systems were once exotic multi-million-dollar devices, but millions of IOPS are now available for under $100k from dozens of vendors.
This progression typically includes a move from special-purpose to commodity underpinnings. Exotic real-time operating systems have been pushed aside in favor of Linux, BSD, and even Windows, while ASICs and FPGAs give way to Intel’s CPU juggernaut. Even Apple’s computers are now almost entirely commodity PCs.
I love how Denton Gentry phrases this in his Monday blog entry about Intel and Achronix:
“Once you commit to a specialized hardware design, the clock starts ticking. There will come a day when a software implementation could meet the requirements, and at that point the FPGA becomes an expensive liability in the BOM cost. You have to make enough profit from the hardware offload product to pay for its own design, plus a redesign in software, or the whole exercise turns out to be a waste of money.”
In other words, specialized software on proprietary hardware will eventually be overtaken by general-purpose software on commodity hardware. The decision must weigh not just what one can do today but what that baggage will cost in the future. Designing a system around proprietary components might look good now, but it puts the next-generation product at risk.
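To put rough numbers on Gentry’s break-even logic, here’s a minimal sketch in Python. Every figure in it is hypothetical, invented purely for illustration: the offload pays off only if the profit earned before software catches up covers both the hardware design and the later software redesign.

```python
# Break-even sketch for Gentry's argument. All figures are hypothetical.
hw_design_cost = 2_000_000     # one-time cost to design the ASIC/FPGA product
sw_redesign_cost = 1_000_000   # cost to reimplement the feature in software later
profit_per_unit = 500          # margin earned on each hardware unit sold
units_per_year = 1_000         # sales volume while the hardware edge lasts
years_of_advantage = 4         # window before commodity CPUs close the gap

# Profit accumulated during the hardware's window of advantage
cumulative_profit = profit_per_unit * units_per_year * years_of_advantage

# The offload must pay for its own design plus the eventual software redesign
break_even = hw_design_cost + sw_redesign_cost

print(f"Profit over the hardware's window: ${cumulative_profit:,}")
print(f"Needed to justify the offload:     ${break_even:,}")
print("Worth it" if cumulative_profit >= break_even else "A waste of money")
```

With these made-up numbers, the offload earns $2 million against a $3 million bar, so it lands on the “waste of money” side of Gentry’s line before the software crossover ever arrives.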
Stephen’s Stance
My opinion is right there in the title of this piece: Commodity hardware always wins. No matter how great your ASIC is, industry-standard CPUs will outperform it sooner or later. No matter how much effort you put into tuning your software, Linux-based systems will eventually do just as well.
The rise of commodity hardware is everywhere: EMC, HDS, IBM, Oracle, and HP have all embraced Intel CPUs, and their hardware looks more and more like Intel’s reference designs. Startups increasingly rely on software rather than hardware for their differentiation, and we’re seeing Supermicro servers shipped with just about everyone’s name on them. The Storage Bridge Bay specification is looking better all the time, too. Commodity hardware is winning in storage, just like it always does.