Virtualization is a disruptive technology in every sense of the word. By abstracting and simplifying physical resources, virtualization enables dynamic utilization. But this “translation” from physical to virtual disrupts the assumptions that enable the performance and flexibility of physical devices such as storage arrays.
Your Assumption Is Invalid
See Storage Arrays Do A Few Things Very Well to learn exactly what is disrupted by virtualization.
Traditionally, storage arrays relied on a static and predictable mapping between server, HBA, controller, LUN, and RAID set. This enabled the array to deliver the performance, data protection, and data movement features that would become the key selling point for enterprise storage companies. This was the topic of part 1 of this series.
But server virtualization changes all that. By presenting storage at the hypervisor level, server virtualization environments deprive the storage array of the information it needs to function effectively.
No longer can the array prefetch cache content, since a single front-end port might carry the interleaved I/O of a dozen servers. The hypervisor maintains its own virtual LUNs (vDisks) on top of a few large LUNs mounted from the array, and it mixes I/O in real time from multiple virtual machines without indicating which “flow” is which. Plus, most virtualization environments include clustered datastores, so I/O for the same datastore can arrive from multiple HBAs at once.
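To make the blender concrete, here is a toy sketch of the idea. It is purely illustrative: the function names, block counts, and round-robin queueing are my own assumptions, not how any hypervisor or array actually schedules I/O.

```python
# Illustrative sketch only: a toy model of the "I/O blender".
# Three VMs each read their own vDisk sequentially, but the hypervisor
# interleaves the requests onto one shared LUN, so the array's simple
# sequential-prefetch detector sees almost no sequential runs.

from itertools import chain, zip_longest

# Hypothetical layout: each vDisk occupies a contiguous region of the LUN.
VDISK_BASE = {"vm-a": 0, "vm-b": 100_000, "vm-c": 200_000}

def vm_stream(vm, blocks=8):
    """A perfectly sequential read stream as seen inside one VM."""
    base = VDISK_BASE[vm]
    return [base + i for i in range(blocks)]

def blend(streams):
    """Round-robin interleaving, a crude stand-in for the hypervisor queue."""
    mixed = chain.from_iterable(zip_longest(*streams))
    return [lba for lba in mixed if lba is not None]

def sequential_runs(lbas):
    """Count requests that continue the previous LBA -- what a naive
    prefetch detector on the array front-end port would key on."""
    return sum(1 for prev, cur in zip(lbas, lbas[1:]) if cur == prev + 1)

per_vm = [vm_stream(vm) for vm in VDISK_BASE]
blended = blend(per_vm)

print("sequential hits per VM  :", [sequential_runs(s) for s in per_vm])  # [7, 7, 7]
print("sequential hits blended :", sequential_runs(blended))              # 0
```

Each VM's stream is perfectly sequential on its own, but the blended stream the array sees never repeats a neighboring address, so there is nothing left to prefetch.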
The storage array can’t effectively move or copy a LUN, either, since each likely contains the data of a number of different servers. Increasingly, “hypervisor huggers” are relying on VMware for data protection features and storage presentation. This takes the array out of the picture entirely. And even if they want to use array-based copying or replication, the large LUNs they use are not granular, singular, or consistent with respect to servers.
Do Not Want!
If server virtualization disrupts the fundamental assumptions that enabled storage arrays to be of compelling value, what does this mean for the future of the enterprise storage industry? After all, server virtualization has its own compelling value even without traditional storage arrays in the mix. Many are of the opinion that server virtualization even trumps the need for the expensive, dedicated storage devices that are the hallmark of this industry.
Conventional storage arrays performed well because they could predict the I/O stream; now they perform poorly because prefetching doesn’t work very well and hard disk drives are terrible at random I/O. Buyers loved snapshot and replication technology, but these are no longer consistent because the LUN is not a valid unit of storage. Virtualization requires shared storage, but whole-LUN SCSI reservations limit scalability and shareability.
Old-school RAID arrays just don’t work well in a virtual world. Needless to say, this is a major problem for the incumbent storage array vendors.
VAAI Is Half of the Solution
VMware recognized that storage arrays were no longer able to function effectively in a virtual world. The company responded by introducing the vStorage APIs for Array Integration (VAAI) in vSphere 4.1.
It is particularly telling that the three initial “primitives” of VAAI directly address the challenges raised by the “I/O blender”:
- Atomic Test and Set (ATS) introduces sub-LUN locking, dramatically reducing contention for shared LUNs (see the sketch after this list)
- Cloning Blocks (also called full copy or extended copy) allows storage arrays to make copies of data independent of large, static LUNs
- Zeroing File Blocks addresses the communication barrier facing thin-provisioned storage arrays, allowing them to free up unused capacity
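To see why the ATS primitive matters, here is a rough sketch of the difference between a whole-LUN reservation and a compare-and-write scoped to one block. It is purely illustrative: the class, locks, and method names are hypothetical stand-ins for SCSI-level behavior, not VMware's or any array vendor's implementation.

```python
# Illustrative sketch only (names and structures hypothetical): contrasting a
# whole-LUN reservation with ATS-style compare-and-write on a single block.
import threading

class ToyLUN:
    def __init__(self, blocks):
        self.blocks = [0] * blocks
        self.reservation = threading.Lock()   # stands in for a whole-LUN SCSI reserve
        self.block_locks = [threading.Lock() for _ in range(blocks)]

    def reserve_and_write(self, block, old, new):
        """Old way: reserve the entire LUN just to update one metadata block."""
        with self.reservation:                # every host contends here
            if self.blocks[block] == old:
                self.blocks[block] = new
                return True
            return False

    def atomic_test_and_set(self, block, old, new):
        """ATS-style: compare-and-write scoped to a single block."""
        with self.block_locks[block]:         # contention only on that block
            if self.blocks[block] == old:
                self.blocks[block] = new
                return True
            return False

lun = ToyLUN(blocks=16)
# Hosts touching different blocks never block each other with ATS,
# but every caller funnels through one lock with the whole-LUN reservation.
print(lun.atomic_test_and_set(3, old=0, new=1))   # True
print(lun.atomic_test_and_set(7, old=0, new=1))   # True
print(lun.atomic_test_and_set(3, old=0, new=2))   # False -- lost the race
print(lun.reserve_and_write(9, old=0, new=1))     # True, but serialized LUN-wide
```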
Thanks to these primitives, VAAI recovers some of the value lost when enterprise storage arrays are used to support server virtualization workloads. Microsoft is introducing similar functionality with ODX in Windows Server 2012 and Hyper-V 3 this summer.
But VAAI cannot alleviate the “random I/O” issue facing block storage protocols. Once the “fruit” (virtual disks) is chopped up in the “blender”, there is no way for the storage array to reconstruct it. This dramatically reduces the impact of traditional caching and forces the use of flash, rather than DRAM, as a tier of storage. But solid-state storage remains much more expensive than spinning disks, and not all arrays are capable of managing it effectively.
File Servers Don’t Go Far Enough Either
Because only a few storage arrays supported VAAI, and because the quality of the support varied dramatically, many customers turned to NFS for virtual machine storage. Don’t get me wrong: NFS helps quite a bit, and is much more friendly to “non-storage people”, but it’s still not quite good enough.
NFS never needed many of the mitigations VAAI provides for block storage. It never had the LUN-locking issue, since, as a file protocol, each virtual machine disk has its own connection through the network. Similarly, NFS natively passes thin-provisioning information to the storage array, so it does not need block-zeroing assistance.
In fact, because each virtual machine disk is a file on the NFS server, it retains much of the information that is lost in the I/O blender on the block side. A specialized NFS server could act on this information intelligently to provide data protection and replication of individual virtual machine disk files. This is precisely what Tintri does, as a matter of fact.
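As a rough illustration of why the file boundary matters, consider the sketch below. Everything in it is hypothetical (paths, structures, the snapshot_vm helper); it is not Tintri's or anyone else's actual implementation, just the concept.

```python
# Illustrative sketch only: why per-file presentation preserves VM identity.
# On an NFS datastore each virtual disk is a distinct file, so a (hypothetical)
# smart NAS can snapshot or replicate exactly one VM. On a block LUN the array
# sees only offsets, with no record of which VM owns them.

nfs_datastore = {
    "vm-a/vm-a-flat.vmdk": b"...",   # one file per virtual disk
    "vm-b/vm-b-flat.vmdk": b"...",
}

block_lun = {
    0x0000_0000: b"...",             # just LBAs: owner unknown to the array
    0x0001_0000: b"...",
}

def snapshot_vm(datastore, vm):
    """Per-VM data protection is trivial when the unit is a file."""
    return {path: data for path, data in datastore.items()
            if path.startswith(f"{vm}/")}

print(list(snapshot_vm(nfs_datastore, "vm-a")))   # ['vm-a/vm-a-flat.vmdk']
# There is no equivalent lookup for block_lun: the mapping of blocks to VMs
# lives only in the hypervisor's VMFS metadata, not on the array.
```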
But most legacy NFS servers are poorly suited to operations on individual files. Additionally, many NFS servers are simply incapable of handling the volume of I/O load generated by server virtualization in production. As is the case with block storage, the vast majority of NFS servers are not up to the job when it comes to supporting virtual machine workloads. On the whole, however, NFS is the superior protocol today.
Stephen’s Stance
Server virtualization is a big problem for conventional enterprise storage arrays. It reduces or eliminates the value of the very features customers seek when selecting an array. Although VAAI helps, it cannot completely eliminate the impact of the I/O blender.
Andy says
Stephen – a very thought inducing post.
How would you characterize the impact of this virtualization consolidated-I/O problem on features such as EMC’s FAST Cache or 3PAR’s AO and the use of “sub-LUN” tiering?
Would not this data-localization problem also apply here? Both vendors seem to promote these features as highly desirable even in virtual environments.
Andy
Nick says
Very nice post Stephen, as always.
On the I/O blender point, I would contend that even outside server virtualization, the I/O blender still exists as long as we’re dealing with shared disk arrays. A physical target port in a shared array typically handles I/O for multiple applications with different workload characteristics. The I/O path characteristics were never truly protected to begin with unless you dedicated specific front-end ports to specific applications, used granular cache-partitioning techniques, and dedicated specific back-end disks to them.
I agree with you that a LUN is no longer a valid unit of management, unless arrays become intelligent enough to be able to interpret filesystems, of course the OS vendors will need to help by providing the necessary hooks.
sfoskett says
Assuming predictable I/O was practical in the old days, but it was never a very sound practice. Indeed, there’s no reason the blender couldn’t hit a SCSI or SAS implementation just as hard! But the blender absolutely destroys storage for virtualization. No doubt about that. And VMware is the first company to actually try to do something about this, as I’ll discuss tomorrow!
sfoskett says
These caching and tiering features are desirable inasmuch as they let those old-school arrays function at all in virtualization worlds. But they’re not a fix for the blender! Just a patch that makes things not suck so much.
Andy says
The blender cracks me up