The history of technology moves in fits and starts, but one trend trumps all else: an inevitable shift from fine-tuned, specialized gear to general-purpose commodity building blocks. We see it in both hardware and software, and at all levels of the industry, from chips and wafers to operating systems and networking devices. Take a step back and you’ll certainly agree: Commodity hardware always wins (eventually).
Doing the Impossible
Building something new often requires amazing feats of engineering. Steve Wozniak’s hardware wizardry with the original Apple computer is now legend, but similar tales are found everywhere.
It’s frightfully difficult to do the impossible without at least some wizardry. Most new technologies are therefore built on specialized hardware and hand-tuned software. Engineers at the forefront of technology must eke out every ounce of performance from the technologies available to them.
This is why we see so many special-purpose processors in high-performance devices, and why companies employing custom ASICs often enjoy a performance advantage. In enterprise storage, we look to companies like HDS and BlueArc who packed their arrays with special-purpose hardware, pointing the way to the future. We also see impressive developments from Fusion IO and SandForce in the SSD space, leading to the next generation of storage.
Can You Keep Ahead Of Intel?
Just about every technology sector progresses from the impossible to the commonplace, and these changes are often very quick. High-performance storage systems were once exotic, multi-million-dollar devices, but millions of IOPS are now available for under $100k from dozens of vendors.
This progression typically includes a move from special-purpose to commodity underpinnings. Exotic real-time operating systems have been pushed aside in favor of Linux, BSD, and even Windows, while ASICs and FPGAs give way to Intel’s CPU juggernaut. Even Apple computers are today almost entirely commodity PCs.
I love how Denton Gentry phrases this in his Monday blog entry about Intel and Achronix:
“Once you commit to a specialized hardware design, the clock starts ticking. There will come a day when a software implementation could meet the requirements, and at that point the FPGA becomes an expensive liability in the BOM cost. You have to make enough profit from the hardware offload product to pay for its own design, plus a redesign in software, or the whole exercise turns out to be a waste of money.”
In other words, specialized software on proprietary hardware will eventually be overtaken by general-purpose software on commodity hardware. The decision must include not just what one can do today but what that baggage will mean in the future. Designing a system around proprietary components might look good now, but the next-generation product will be put at risk by this decision.
Stephen’s Stance
My opinion is right there in the title of this piece: Commodity hardware always wins. No matter how great your ASIC is, industry-standard CPUs will outperform it sooner or later. No matter how much effort you put into tuning your software, Linux-based systems will eventually do just as well.
The rise of commodity hardware is everywhere: EMC, HDS, IBM, Oracle, and HP have all embraced Intel CPUs and their hardware is looking more and more like Intel’s reference designs, too. Startups are increasingly relying on software rather than hardware for their differentiation, and we’re seeing Supermicro servers shipped with just about everyone’s name on them. The Storage Bridge Bay specification is looking better all the time, too. Commodity hardware is winning in storage, just like it always does.
DGentry says
Yup. I wasted four years of my career on a custom CPU which was a fire breathing monster when it debuted, but became old and busted before we could get its replacement out.
The chip business is a marathon, not a sprint. Startups tire out after the first few miles.
Nick says
Totally agree, Steve. Commodity will always win, and it provides end users with a base platform they know is being used by the masses.
We regularly shift to the latest best-in-class commodity tin to ensure our customers get the best that is out there to run MatrixStore on. We decided from day one not to dedicate our limited resources to hardware development, and that decision is now paying dividends.
jmartins says
As I often say… the only true commodities are found in the periodic table.
The word commodity aside…what’s your point?
We all know products become less expensive and more widely available year over year. [Though at a ZDNet Storage Summit a couple years back I forecast that trend will eventually slow down, stop and perhaps even reverse in the not-too-distant future as real commodities become more scarce.]
Early adopters will continue to pay a premium for an early competitive advantage and their behavior will continue to fuel more widespread adoption. By the time the tech goes mainstream, they’ve moved on to the next big thing. In many, but not all cases, what was once a low-volume high-margin market dominated by a handful of companies becomes a high-volume, pathetically-low-margin market overflowing with competition.
From a Joe Average Consumer perspective it’s fantastic. From a business perspective (sellers and bleeding edge adopters) I’m not so sure I’d call that a win.
Sshottan says
While it captures a clear trend in the storage business, Stephen Foskett’s extrapolation is too general and has led him to erroneous conclusions.
Three aspects are missing from his analysis: the essence of Intel’s roadmap, the voice of the customer who deploys such a system, and a timeline.
Intel, the world’s largest chip company, is taking a page from its own playbook: identify a mature, growing market, then embed the key peripherals into its chipsets. Does anyone remember when motherboards included only compute and core logic elements? Intel identified graphics and networking as ubiquitous in previous decades, and indeed most NIC and graphics chip providers vanished.
Yet by integrating peripherals into its chipsets, Intel is aiming at the sweet spot of “good enough.” Companies that continued to innovate, focusing on the needs of users not satisfied with mediocrity, continued to excel. Look at NVIDIA, with its focus on high-end graphics and GPUs. Others, who elected to compete with Intel on cost, perished. The lesson is this: when a segment matures and becomes a substantial market, expect commoditization, and innovate, since one size does not fit all.
So, what has Intel added to its roadmap? SAS connectivity and a core RAID engine. These are indeed challenging times for HBA and RAID vendors. But how does such a trend impact a vendor of scalable, high-performance file systems?
Stephen Foskett carries his argument further. Not only are all storage systems predicted to run on the same Intel platform from one’s favorite OEM, but even software will become just “general-purpose software.” According to the post, efforts put into software development are a waste, since Linux will eventually do just as well. This statement is fundamentally flawed. Stephen fails to distinguish between the underlying OS and the value-added software running on top of it. BlueArc has embraced Linux. BlueArc does not pretend to improve on Linux OS fundamentals. BlueArc innovates and creates value by marrying its scalable file system to Linux.
According to Stephen’s brave new world, channels, brands and size are all there is. Buy from EMC, HDS, IBM, HP or Dell, and you get the same product. Well, I bless my luck for being the last innovator standing.
If all storage users were the same, if all access patterns were identical, if all data sets and demands were similar, commodity would suffice. Luckily, it isn’t so. Storage is not “just” for access. BlueArc’s customers deploy storage to serve their applications. Rendering movies, performing genomic research, deduplicating primary data at speed, or hosting a multiplicity of VM images on a storage system all demand performance and scalability. Commodity is not good enough.
Customers’ demands and usage patterns are not addressed by Stephen, and adding them to the equation makes commodity less ubiquitous.
Ray from Silverton Consulting has already addressed some user-related aspects in his post, and I will not repeat his well-articulated arguments here.
So, where does the current commoditization trend fit in the continuum? Commoditizing NICs and graphics is the past; commoditizing HBAs and RAID is the present. What’s next? In any possible scenario, BlueArc wins. Innovating at the file system level and delivering the only enterprise-scalable file system assures success. Since the file system remains the answer to customer requirements, and since new applications and compute grids will only create more demands, being a provider of enterprise scalable NAS is a winning proposition. Should file systems ever appear on the radar screen of future consolidation, I know of no other vendor better positioned to benefit from such a trend.
The end of innovation is not near. I would have suggested changing the title of Stephen’s post to “Innovate or get commoditized.”
Shmuel Shottan, CTO
BlueArc