Ask any project manager if it’s possible to deliver something that is fast, good, and cheap, and they’ll laugh. The constraint known as the Iron Triangle prevents just about everything in the world from meeting all three conflicting requirements at once. Yet, for the last two decades, enterprise storage array vendors have been trying to deliver just this. How’s that working out?
Data storage isn’t as easy as it sounds, especially at enterprise or cloud scale. It’s simple enough to read and write a bit of data, but much harder to build a system that scales to store petabytes. That’s why I’m keenly focused on a new wave of storage systems built from the ground up for scaling!
Every day, I’m briefed by another company with a range of products from entry-level to high-end. And every day I try to decipher their naming scheme: most IT vendors seem to take their cues from car companies, but no two use the same system!
Storage arrays are big, expensive, and difficult to manage. Plus, concentrating storage in a single device puts everything at risk if there is an outage. So why buy a storage array at all? Arrays do a few things very well, and on balance that often makes up for their drawbacks.
I am often questioned during my Storage for Virtual Environments seminar presentations about VMware’s Pluggable Storage Architecture (PSA). The system is fairly straightforward in concept: VMware provides native multipathing support for a variety of storage arrays, and allows third parties to substitute their own plug-ins at various points in the stack. But the profusion of acronyms and third-party options makes it difficult for end users to figure out what is going on.
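One way to demystify the acronyms is to look at what an ESXi host itself reports. As a rough sketch (run on an ESXi shell; output will vary by host and array, and this assumes the stock Native Multipathing Plug-in rather than a third-party replacement), the `esxcli storage nmp` namespace exposes the two main plug-in layers of the PSA:

```shell
# Illustrative ESXi commands only — requires an ESXi host shell.

# List the Storage Array Type Plug-ins (SATPs): the array-aware modules
# that handle failover behavior for specific storage hardware.
esxcli storage nmp satp list

# List the Path Selection Plug-ins (PSPs): the policies (fixed, MRU,
# round robin) that decide which path each I/O actually takes.
esxcli storage nmp psp list

# Show which SATP/PSP pair is claiming each storage device on this host.
esxcli storage nmp device list
```

Seeing the SATP and PSP assigned to a real device usually makes the layering click: the SATP knows the array, the PSP picks the path, and third-party suites such as EMC PowerPath/VE replace these layers wholesale.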