Ask any project manager if it’s possible to deliver something that is fast, good, and cheap, and they’ll laugh. The constraint known as the Iron Triangle keeps just about everything in the world from meeting all three conflicting requirements at once. Yet, for the last two decades, enterprise storage array vendors have been trying to deliver just this. How’s that working out?
Data storage isn’t as easy as it sounds, especially at enterprise or cloud scale. It’s simple enough to read and write a bit of data, but much harder to build a system that scales to store petabytes. That’s why I’m keenly focused on a new wave of storage systems built from the ground up for scaling!
Every day, I’m briefed by another company with a range of products from entry-level to high-end. And every day I try to figure out their naming scheme: It seems most IT vendors follow the naming schemes of car companies, but few use the same naming system!
Storage arrays are big, expensive, and difficult to manage. Plus, concentrating storage in a single device puts everything at risk if there is an outage. So why buy a storage array at all? Arrays do a few things very well, and on balance those strengths often outweigh the drawbacks.
I am often questioned during my Storage for Virtual Environments seminar presentations about VMware’s Pluggable Storage Architecture (PSA). This system is fairly straightforward in concept: VMware provides native multipathing support for a variety of storage arrays, and allows third parties to substitute their own plug-ins at various points in the stack. But the profusion of acronyms and third-party options makes it difficult for end users to figure out what is going on.
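The "pluggable" idea is easier to see in code than in acronyms. Here is a minimal, purely illustrative Python sketch of the concept: the hypervisor core calls a path-selection policy through a common interface, and any policy (native or third-party) can be slotted in. The class and function names are my own invention, not VMware APIs.

```python
from itertools import cycle

class PathSelectionPolicy:
    """Stand-in for a path-selection plug-in: given a set of paths to
    the array, decide which one carries the next I/O."""
    def __init__(self, paths):
        self.paths = list(paths)

    def select_path(self):
        raise NotImplementedError

class FixedPSP(PathSelectionPolicy):
    """Mimics a 'fixed' policy: always use the preferred (first) path."""
    def select_path(self):
        return self.paths[0]

class RoundRobinPSP(PathSelectionPolicy):
    """Mimics a 'round robin' policy: rotate through all active paths."""
    def __init__(self, paths):
        super().__init__(paths)
        self._rotation = cycle(self.paths)

    def select_path(self):
        return next(self._rotation)

def issue_io(policy, request):
    """The core I/O path doesn't know or care which policy is loaded --
    that indifference is the whole point of a pluggable architecture."""
    path = policy.select_path()
    return f"{request} via {path}"
```

A third party substituting its own plug-in is just another subclass; nothing in `issue_io` changes.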
One of the most exciting enhancements in VMware vSphere 4.1 is the addition of the vStorage API for Array Integration (VAAI). This new API allows VMware ESX to offload storage processing functions to capable storage arrays, reducing the workload on the server hardware and introducing new and exciting possibilities for performance and efficiency. VAAI in ESX 4.1 includes three separate capabilities: block zeroing, full copy, and hardware-assisted locking.
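To make the offload idea concrete, here is an illustrative Python sketch of block zeroing, counting the commands a host would issue with and without the offload. The function names and command tuples are my own simplification, not the actual SCSI or ESX interfaces; the point is simply that one array-side command replaces a long stream of host-side writes.

```python
def zero_without_offload(start_lba, block_count, block_size=512):
    """Host-side zeroing: the server generates and sends a zero-filled
    write for every single block in the range."""
    zeros = bytes(block_size)
    return [("WRITE", lba, zeros)
            for lba in range(start_lba, start_lba + block_count)]

def zero_with_offload(start_lba, block_count):
    """Array-side zeroing: one command describes the whole range, and
    the array fills it with zeros itself -- no payload crosses the wire."""
    return [("ZERO_RANGE", start_lba, block_count)]
```

Zeroing a 512 MB region in 512-byte blocks means roughly a million host commands the slow way, versus one command with the offload; that difference is where the CPU and fabric savings come from.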
Although I consider it the main stumbling block for thin provisioning, communication (or lack thereof) is being addressed with metadata monitoring, WRITE_SAME, the Veritas Thin API, and other ideas. But communication isn’t the only issue. Let’s talk about page sizes. You’ll often see vendors tossing this “softball” objection at their competitors, claiming that their (smaller) page size makes for more-effective thin provisioning. And that’s true, to some extent.
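The page-size argument is simple arithmetic: a thin pool allocates whole pages, so every write rounds up to a page boundary. A quick sketch, with illustrative page sizes of my choosing:

```python
import math

def allocated_bytes(write_bytes, page_bytes):
    """A thin pool allocates in whole pages, so any write is rounded
    up to the next page boundary."""
    return math.ceil(write_bytes / page_bytes) * page_bytes

# A 100 KB write against two hypothetical page sizes:
small_page = allocated_bytes(100 * 1024, 4 * 1024)         # 4 KB pages
large_page = allocated_bytes(100 * 1024, 16 * 1024 * 1024) # 16 MB pages
```

With 4 KB pages the 100 KB write consumes exactly 100 KB of pool capacity; with 16 MB pages it consumes a full 16 MB. Smaller pages really do waste less space on scattered writes, though they also mean more metadata for the array to track, which is the part of the story the “softball” version leaves out.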
Perhaps the previous discussion of spindles left you exhausted, imagining a spindly-legged centipede of a storage system, trying and failing to run on stilts. The Rule of Spindles would be the end of the story were it not for the second horseman: Cache. He stands in front of the spindles, quickly dispatching requests using solid state memory rather than spinning disks. Cache also acts as a buffer, allowing writes to queue up without forcing the requesters to wait in line.
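The cache-in-front-of-spindles arrangement can be sketched in a few lines of Python. This is a toy model of my own, not any vendor’s implementation: a dict stands in for the slow spindles, reads are served from cache when possible, and writes are acknowledged immediately and destaged to disk later (write-back buffering).

```python
class CachedArray:
    """Toy model: a fast cache fronting slow 'spindles' (a dict)."""
    def __init__(self, disk):
        self.disk = disk      # stands in for the spinning disks
        self.cache = {}       # stands in for solid state memory
        self.dirty = set()    # blocks written but not yet destaged
        self.hits = 0
        self.misses = 0

    def read(self, block):
        # Serve from cache when possible; only a miss touches the spindles.
        if block in self.cache:
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        data = self.disk[block]       # the slow, mechanical part
        self.cache[block] = data
        return data

    def write(self, block, data):
        # Write-back: acknowledge at memory speed, destage later,
        # so the requester never waits in line for the spindles.
        self.cache[block] = data
        self.dirty.add(block)

    def destage(self):
        # Flush buffered writes down to the spindles at their own pace.
        for block in self.dirty:
            self.disk[block] = self.cache[block]
        self.dirty.clear()
```

The first read of a block goes all the way to disk; repeats are served from memory, and writes queue up in cache until the spindles can absorb them.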
Hard disk drive makers are adding flash storage to their conventional spinning-platter drives to improve performance and are targeting the performance PC market. Wait a second, haven’t we seen this before? As Rocky eventually said to Bullwinkle, “but that trick never works!”
EMC’s Iomega unit today released the rack-mount storage product we have all been waiting for. The new ix12-300r packs 12 drive bays, scaling from 4 TB all the way to 24 TB, and backs it with quad gigabit iSCSI, redundant power, and everything else the small data center needs.