Hiding in the shadow of the huge VMware vSphere 4 announcement was a very interesting introduction by EMC: PowerPath/VE. As I mentioned in my post on storage changes in vSphere 4, PowerPath/VE plugs into the new pluggable storage architecture (PSA) found in vSphere 4 versions of ESX and takes over the decision-making and heavy-lifting tasks related to communicating with storage systems.
Driving Massive I/O
Chuck Hollis treated us to a discussion of vSphere as an I/O Engine on his blog this morning with some background on multipath IO (MPIO for short), but I’m not sure he did the topic justice. In my opinion, server virtualization is the greatest I/O driver ever brought into the data center, and it messes with all of our preconceived notions about I/O at the same time.
What’s so special about server virtualization?
- Hypervisors concentrate I/O, shifting loads that were formerly distributed across a large number of I/O channels onto far fewer channels. Picture 10 servers doing what they do. Now put all 10 in a single physical box. All of their storage access must now share a bus, a host adapter, a cable, and perhaps a LUN on the storage system. It’s the difference between lemonade and lemon juice!
- Hypervisors randomize I/O, chunking everything up and mixing it together. Forget about the carefully designed read-ahead algorithms and caching used in enterprise storage – VMware, Hyper-V, and the rest throw those expectations out the window! Virtualization is a blender – it grinds up your lemons, skin, seeds, and all! (There’s a little sketch of this blender effect right after this list.)
- Hypervisors demand low I/O latency, forcing infrastructure to get quicker, not just faster. This is one reason that caching, solid state disks, and 10 GbE are going to be huge in virtual environments – all reduce latency by orders of magnitude! As any car guy will tell you, quick and fast are two very different things!
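If you want to see that blender effect for yourself, here’s a tiny Python sketch – my own toy model, not tied to any particular hypervisor – that interleaves ten perfectly sequential guest read streams onto one shared channel, the way a hypervisor does:

```python
# Toy illustration of the "I/O blender" effect: each VM issues a purely
# sequential stream of block reads, but once the hypervisor interleaves
# them onto one shared channel, the array sees what looks like random I/O.
import random

def vm_sequential_stream(start_block, length):
    """One guest doing a perfectly sequential read of its own virtual disk."""
    return [start_block + i for i in range(length)]

# Ten guests, each sequential within its own region of the shared LUN
streams = [vm_sequential_stream(start_block=vm * 100_000, length=8) for vm in range(10)]

# The hypervisor services whichever guest is ready next; model that as a shuffle
blended = [(vm, block) for vm, stream in enumerate(streams) for block in stream]
random.shuffle(blended)

for vm, block in blended[:12]:
    print(f"VM{vm:02d} -> block {block}")
# Each guest was sequential, but the merged stream jumps all over the LUN,
# defeating read-ahead and cache locality on the array.
```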
The upshot of all of this is that virtual servers are very, very hard to satisfy when it comes to I/O. And the “back end” has always been a bit of a bottleneck for virtualization software. Now we have VMware claiming that vSphere 4 can push over 300,000 I/O operations per second (IOPS) without resorting to VMDirectPath and similar “cheater” measures. Of course not all IOPS are equal, and I doubt that 300k number would hold up with a real-world workload, but it’s impressive nonetheless!
A Brief History of MPIO
Let’s turn back to multipath I/O. PowerPath/VE is just the latest in a long line of path managers, not all of which have been well-loved. Back in my HP-UX days I learned to make the most of PVlinks, the native path management on that operating system. It wasn’t always easy to get it to work well, but it sure was nice to have a path manager built into the operating system! Veritas also offered a multi-platform path manager, DMP, which worked with a variety of array types. Back in the day, both were limited to simple failover and lacked the “intelligence” to deal with the peculiarities of the weird storage arrays we learned to not hate.
Array-specific path managers from storage vendors were much more successful. CLARiiONs used ATF, Hitachi arrays used HDLM, IBM had SDD, and of course EMC had PowerPath. EMC introduced PowerPath in 1997; the software was reportedly developed by Conley Corporation, which EMC acquired the next year and turned into its Cambridge (MA) development center. After acquiring Data General, EMC adapted PowerPath to support CLARiiON, pushing ATF off stage right. Then they kept right on developing the software, adding support for IBM, HDS, and HP arrays as well as data migration capabilities.
Meanwhile, Microsoft decided that HP and Veritas were on to something when they developed standard path management software, so they began working on a standard multipath I/O (MPIO) driver for Windows. But Microsoft learned a thing or two from the mediocre device support in those old solutions, so they decided to allow vendors to plug their own smarts into the standard Windows Server 2000/2003 MPIO framework. Microsoft provided basic failover capability, and third parties, including EMC, wrote their own device-specific modules (DSMs). This MPIO support evolved and spread, becoming standard with Microsoft’s iSCSI initiator and the Hyper-V virtualization platform. PowerPath 5.2.1 for Windows already supported Hyper-V thanks to this.
PowerPath and VMware PSA
VMware also learned a thing or two from HP and Microsoft. Although basic path failover support has been included in ESX for years, vSphere 4 takes it to a new level with the pluggable storage architecture (PSA). Every version of ESX 4 includes native multipathing (NMP), but Enterprise Plus licensees can use vendor-supplied plugins to enable more advanced path management. As I noted on Tuesday, there are three different levels of path-management plugins: basic path selection plugins (PSPs), more advanced storage array type plugins (SATPs), and complete multipathing plugins (MPPs).
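To make that layering concrete, here’s a rough Python sketch of how the three levels divide the work. The class and method names are mine, invented purely for illustration – they are not VMware’s actual PSA interfaces:

```python
# Illustrative-only sketch of the PSA division of labor. The class and
# method names are invented for this example; they are not VMware's APIs.
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Path:
    name: str
    alive: bool = True

class RoundRobinPSP:
    """Path selection plugin: answers one question -- which path gets the next I/O?"""
    def __init__(self, paths):
        self.paths = paths
        self._rotation = cycle(paths)

    def select_path(self):
        for _ in range(len(self.paths)):
            path = next(self._rotation)
            if path.alive:
                return path
        raise RuntimeError("no live paths")

class ActivePassiveSATP:
    """Storage array type plugin: knows array-specific rules, such as which
    controller currently owns the LUN, and marks paths alive or dead."""
    def __init__(self, paths, owning_controller):
        self.paths = paths
        self.owning_controller = owning_controller

    def update_path_states(self):
        for path in self.paths:
            # On an active/passive array, only paths to the owning controller
            # should carry I/O; a real SATP would probe the array to decide.
            path.alive = path.name.startswith(self.owning_controller)

# An MPP like PowerPath/VE would replace both layers (and NMP itself) with
# its own path discovery, testing, and load-balancing logic.
paths = [Path("SPA:0"), Path("SPA:1"), Path("SPB:0"), Path("SPB:1")]
satp = ActivePassiveSATP(paths, owning_controller="SPA")
psp = RoundRobinPSP(paths)

satp.update_path_states()
for _ in range(4):
    print("issue I/O down", psp.select_path().name)
```

The point is the separation of concerns: a PSP only picks a path, an SATP knows the quirks of a particular array family, and an MPP replaces the whole stack with its own logic.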
This is what EMC has introduced: an MPP for vSphere 4 called PowerPath/VE. Like its DSM for Windows MPIO, EMC’s PowerPath/VE for vSphere slots right into an existing multipathing framework and enables advanced path selection and load balancing without mucking with the internals of the hypervisor. And PowerPath/VE has all sorts of smarts in it: eight different predictive load-balancing policies, proactive disconnect, bus testing, and HBA monitoring.
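To give a feel for what a load-balancing policy does, here’s a toy “least outstanding I/O” decision – one common approach, and emphatically not EMC’s actual algorithm, which is their own and far more sophisticated:

```python
# A toy "least outstanding I/O" path choice -- one common load-balancing
# idea, not EMC's algorithm. Paths with fewer queued requests are presumed
# to complete the next request sooner.
def pick_least_busy(paths):
    live = [p for p in paths if p["alive"]]
    return min(live, key=lambda p: p["pending_io"])

paths = [
    {"name": "vmhba1:C0:T0:L0", "alive": True,  "pending_io": 12},
    {"name": "vmhba1:C0:T1:L0", "alive": True,  "pending_io": 3},
    {"name": "vmhba2:C0:T0:L0", "alive": False, "pending_io": 0},
]
print(pick_least_busy(paths)["name"])   # -> vmhba1:C0:T1:L0
```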
Super VMware guy Chad Sakac described PowerPath/VE as part of the launch. He notes that EMC is first out of the gate with a multipathing plugin for vSphere, but I suspect that just about every vendor will release similar functionality pretty quickly. In particular I expect support to come from NetApp and 3PAR, since they’re so interested in VMware support.
Licensing Questions
One thing really stuck out in the vSphere launch: third-party PSA plugins are only supported with the top-of-the-line Enterprise Plus license. Presumably this means that, in addition to paying for a PowerPath/VE license, users will have to spring for the priciest edition of ESX, too. This is a dumb move, if you ask me. Microsoft made MPIO successful by giving it away with every copy of Windows; they even included it in the free iSCSI initiator download. VMware, in contrast, seems to be actively limiting PSA’s usefulness to the top tier of users. If it were up to me, I would set the VMware MPIO free!
I’m working with EMC and VMware to determine the extent of the NMP/PSA/PowerPath licensing mess. I’ll update this post as I find out the answers!
- Does every edition of ESX 4 include the basic VMware native multipathing (NMP)?
- Can one use a vendor-supplied PSA plugin like PowerPath/VE without an Enterprise Plus license?
- Does it matter (to licensing) if the plugin is a PSP or an SATP?
- If the answer to either of the previous two questions is “no,” can PSA plugin support be added separately, without the Enterprise Plus license, for someone who wants to use something like PowerPath/VE?
Update: I received a nice email from an EMC engineer correcting me about the plugin types. This kind of open communication is why the web is so great! It turns out that PowerPath/VE is a sort of super plugin called an MPP, not “just” an SATP or PSP. I’ve updated the section above!
Chuck Hollis says
Hi Stephen
Great post — couldn’t agree more!
— Chuck
John says
Hi Stephen.
Good stuff! This is really the explanation I’ve been looking for.
Thanks
Steve says
PowerPath/VE is very similar to MS MPIO framework. The default MPIO setup for Windows will allow for path failover, but won’t do true DMP or even select the controller that owns the LUN. In essence it’s a fail over setup. You can purchase PowerPath, which is just a DSM plugin for the Windows MPIO framework on 2008 (HDLM is also a DSM for MPIO on 2008).
sfoskett says
You got it, Steve! PSP on vSphere is very much like PowerPath on MS Windows MPIO. But you can run a third-party MPIO DSM on any version of Windows Server, while VMware makes you buy Enterprise Plus to get any PSP support…