I had two great storage virtualization seminars this week, in New York and Philadelphia. As usual, audience participation was key, and interest in VMware and Hyper-V remains high.
One of the main questions I always get is which protocol one should use for VMware storage. My recommendation remains that the answer is an organizational one more than a technical one. There are certainly performance, CPU utilization, and support differences between Fibre Channel, iSCSI, and NFS on VMware, but all of these can work fine in many situations. Although this is addressed in my presentation, I thought it wise to point out some of my sources and (concurring) opinions.
First, I point you to the official VMware VI Team blog, where they reiterate that VMware is protocol-agnostic. They commit to supporting all storage protocols equally, and promise to add missing support as soon as possible. See especially their table of support, which shows that iSCSI currently can’t be used for clustering (!), among other insights.
I’d also like to point out three sources for my seminar slides:
- VMware’s Comparison of Storage Protocol Performance paper, which pits Fibre Channel against hardware iSCSI, software iSCSI, and NFS.
- VMware’s Performance Characterization of VMFS and RDM Using a SAN, which shows that there is a negligible performance difference between shared storage and the two varieties of RDM.
- NetApp’s NetApp and VMware Virtual Infrastructure 3 Storage Best Practices (recently updated), which is a wealth of information on shared storage with NFS, even if you’re not a NetApp customer.
The only real gotchas at this point are the lack of clustering support for iSCSI, the inability to boot from software iSCSI, and the learning curve for Fibre Channel. Make your choice based on what you have and what you know – that’s the best choice to make!
For more information, check out this post from vmguy.com!