Did some pNFS proponent slip a love potion into the coffee at EMC? Suddenly it’s pNFS time at the company known for its reluctance to embrace file sharing and filesystems in general. The purple prose is flying, with Chad Sakac declaring himself “a big fan of the application of NFS” and Chuck Hollis extolling the “inherent simplicity and ease-of-management of NFS.” The NetApp guys must be amused by the bear hug from Hopkinton, but many are seeing déjà vu all over again.
Chad’s Icky Bits
(Apologies for that heading, but those are Chad’s words, not mine)
Chad Sakac’s red rose for pNFS included a few thorns aimed at good old NFSv3. He calls these the “icky bits” and spills some ink over them:
- “NFS Server failure behavior,” says Chad, leads to issues as serious as “a guest OS crash” and administrators “resorting to unnatural acts” to compensate. He talks about EMC’s DART OS being optimized to fail over in under a minute to avoid application issues, and about how difficult that feat actually is to accomplish.
- Chad also points out that “NFS client limitations” can lead to “unexpected bottlenecks.” Load balancing large workloads across multiple gigabit Ethernet NICs requires hand-tuning, since NFS pins each mount’s traffic to a single MAC address.
Certainly these limitations were known to many in the storage industry, but haven’t they also been addressed repeatedly? NetApp, EMC, and BlueArc do indeed suggest adjusting NFS heartbeat values to allow time for the cluster to recover, but this seems more a limitation of their clustered server architecture than of NFS itself. Scale-out NFS servers from Isilon and HP don’t seem to require these “unnatural acts.”
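To put some numbers on that heartbeat tuning, here is a back-of-the-envelope sketch in Python. The setting names mirror the ESX advanced NFS heartbeat options administrators typically adjust, but the values and the simple additive formula are illustrative assumptions on my part, not vendor-verified defaults.

```python
# Rough arithmetic for NFSv3 datastore failover tolerance on an ESX host.
# Setting names follow the commonly documented advanced options
# (NFS.HeartbeatFrequency, NFS.HeartbeatTimeout, NFS.HeartbeatMaxFailures);
# the numeric values below are assumptions for illustration only.

def seconds_until_datastore_down(frequency_s, timeout_s, max_failures):
    """Approximate seconds of server silence before the client gives up:
    every heartbeat must fail in turn, plus the timeout on the last one."""
    return frequency_s * max_failures + timeout_s

tolerance_s = seconds_until_datastore_down(frequency_s=12,
                                           timeout_s=5,
                                           max_failures=10)
array_failover_s = 60  # "under a minute," per the DART figure in the post

verdict = "failover fits" if array_failover_s < tolerance_s else "guest I/O errors likely"
print(f"client tolerates ~{tolerance_s}s of silence; "
      f"takeover needs ~{array_failover_s}s -> {verdict}")
```

The point is simply that this tuning buys headroom on the client side; it doesn’t make the cluster take over any faster.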
As for client limitations, manually balancing client loads is a reality in many large storage architectures, not just NFS. Perhaps the fact that NFS can handle so many more I/O requests in a given timeslice makes this more of an issue, but it tends to be transient.
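For a sense of what that manual balancing looks like, here is a toy sketch in Python. Because each NFSv3 mount follows a single server IP over a single link, spreading datastores across NICs becomes a static assignment problem the administrator solves by hand; the datastore names, addresses, and throughput figures below are made up for illustration.

```python
# Toy model of hand-balancing NFSv3 datastores across two gigabit links.
# Each datastore is mounted from a different server IP so its (single) TCP
# connection lands on a particular NIC; a greedy "least-loaded link first"
# pass approximates what gets worked out on a whiteboard.

datastores = {          # estimated steady-state load, MB/s (made up)
    "ds_vm_boot":  40,
    "ds_sql_logs": 85,
    "ds_file_srv": 60,
    "ds_scratch":  25,
}

links = {"vmnic2 via 192.168.10.20": 0.0,
         "vmnic3 via 192.168.11.20": 0.0}

assignment = {}
for ds, load in sorted(datastores.items(), key=lambda kv: -kv[1]):
    target = min(links, key=links.get)   # currently least-loaded link
    links[target] += load
    assignment[ds] = target

for ds, link in assignment.items():
    print(f"{ds:12s} -> mount via {link}")
print({link: f"{total:.0f} MB/s" for link, total in links.items()})
```

None of this is hard, but it is per-datastore bookkeeping rather than anything the protocol does for you.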
Chad has repeatedly expressed his love for NFS, especially as a datastore for VMware. Clearly, he intended to point out these “icky bits” to highlight the possibilities for pNFS. But the method he used (calling them “icky,” for one) resembles mudslinging.
Chuck Wants pNFS
(Chuck’s titles also lend themselves to mis-reading)
Chuck Hollis is more careful in his wording, extolling the virtues of pNFS without calling anything “icky.” Indeed, there’s just one NetApp dig (he says their “emulated containers of LUNs” are “hardly optimized”), which is a welcome change of tone from previous debates.
But the underlying message is the same: pNFS is new and wonderful, encouraging a proliferation of hand-holding, flower distribution, and rainbows. Again I ask: is this really true? Is pNFS ready for this kind of adulation when, as Chuck points out, “it’s going to take a while before the rest of the portfolio, industry and ecosystem catches up. Maybe a year or so.”
Seriously? A year until pNFS is ready for mass enterprise adoption? Admittedly, EMC has been working on pNFS (as MPFS) for a long time, but predictions of “just another year” for a major protocol transition set off warning bells. This is doubly true when most clients (including VMware) don’t yet offer even basic support.
Stephen’s Stance
One wonders if airing this dirty laundry is an attempt to highlight EMC’s pNFS work or to discredit plain old NFS as a datacenter protocol. As I wrote about in Our New Thing Is Awesome (‘Cause Our Old Thing Sucked), the “parade of progress” sometimes degenerates into “out with the old,” and this is perilous for purveyors of durable goods like storage systems.
I am also very concerned with the proliferation of “layout types” within pNFS. It seems that every vendor has a hand in the protocol, and each is adding their own technology to the mix. We started with files and now have both objects and blocks. Will these be widely supported? Do we really need them? Or will pNFS start looking like Bluetooth: bloated, incompletely implemented, and ignored except for special use cases?
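For context, the three layout types registered so far each imply a different data path on the client. Here is a minimal sketch (Python, with a hypothetical client_can_use helper, not any real API) of what a “universal” client would have to negotiate:

```python
# The three pNFS layout types, as registered in the NFSv4.1 spec (RFC 5661).
# Each implies a different layout driver and data path in the client;
# client_can_use is a hypothetical helper for illustration only.
from enum import IntEnum

class PnfsLayoutType(IntEnum):
    FILES   = 1  # LAYOUT4_NFSV4_1_FILES: data servers speak NFSv4.1
    OBJECTS = 2  # LAYOUT4_OSD2_OBJECTS: client does I/O to object storage (OSD)
    BLOCKS  = 3  # LAYOUT4_BLOCK_VOLUME: client does I/O to SCSI block volumes

def client_can_use(layout: PnfsLayoutType, drivers: set) -> bool:
    """Without a matching layout driver, a client simply falls back to
    ordinary NFS I/O through the metadata server."""
    return layout in drivers

# e.g. a client that only ships a files layout driver:
print(client_can_use(PnfsLayoutType.BLOCKS, {PnfsLayoutType.FILES}))  # False
```

A server advertising a block or object layout to a client that only has a files driver still works, but only in plain-NFS mode, which is exactly where the “incompletely implemented” worry comes from.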
But my motivation behind this post is simpler than that. I would like to pose a question: Is NFS (v3) really that “icky”? Do we really need pNFS? Or have these problems been solved previously?
Storagezilla says
Suddenly?
http://www.emc.com/products/detail/software/celerra-multipath-file-system.htm
MPFS or pNFS, EMC has already bought into the concept. pNFS mainstreams it.
Was NFSv3 better than NFSv2?
Yes.
Michael Shea says
MPFS is a different beast from pNFS. Completely different.
Storagezilla says
MPFS (HighRoad) is a precursor to pNFS and it’s been shipping since 2001.
pNFS splits metadata and data to achieve parallelism. MPFS splits metadata and data to achieve parallelism. But MPFS uses NAS for metadata and SAN for data.
I’m not saying they’re the same thing. I am saying that in the draft documents it’s clearly stated that:
“This draft draws extensively on the authors’ familiarity with the mapping functionality and protocol in EMC’s HighRoad system [HighRoad]. The protocol used by HighRoad is called FMP (File Mapping Protocol); it is an add-on protocol that runs in parallel with file system protocols such as NFSv3 to provide pNFS-like functionality for block/volume storage. While drawing on HighRoad FMP, the data structures and functional considerations in this draft differ in significant ways, based on lessons learned and the opportunity to take advantage of NFSv4 features such as COMPOUND operations. The design to support pNFS client participation in copy-on-write is based on text and ideas contributed by Craig Everhart (formerly with IBM).”
http://tools.ietf.org/html/draft-ietf-nfsv4-pnfs-block-06#section-7
pNFS is not a new idea to EMC, which is why we were the first commercial provider to support it in a shipping product (DART 6.0).
To EMC, pNFS is to NFSv4 what MPFS is to NFSv3, and we’re behind both of them 100%.
sfoskett says
It sounds like MPFS is a LOT different from pNFS. They’re about as similar as a SAN filesystem and NFS itself, IMHO. I’ll grant you that EMC’s experience with MPFS helped them implement pNFS and gave them something to build on in the pNFS working groups, but it’s not the same thing.
Was EMC responsible for the pNFS block layout type? Is this the son of MPFS?
Chuck Hollis says
Hi Stephen — mostly good post.
I did take a bit of exception as to your characterization of EMC as reluctant to embrace file sharing and filesystems in general.
As you know, we’ve been in the NAS/CIFS market a very long time, and IDC has given us credit for #1 market share in NAS for many years now. Although we do have competitors that like to paint us with that anti-NAS brush.
That being said, we do have the advantage of strong SAN technology in our portfolio, so we’re not usually forced into a position of advocating one over the other for product reasons.
I wasn’t aware that my blog post titles lend themselves to mis-reading — I’ll have to go back and take a look 🙂
— Chuck
Storagezilla says
I wouldn’t call it the son of MPFS; MPFS was an ancestor, the way StorNext and SANergy were.
In 2004 the IETF pNFS Block Layout was designed around a modified version of the FMP protocol, which EMC open sourced back in 2003, so there are genes there.
EMC has funded the pNFS Block layout for the Linux kernel over the past five years, so this isn’t new love; this is a long-term investment starting to pay off.
sfoskett says
EMC loves NAS. EMC has always loved NAS. Ok, I get it.
As for the titles, I suppose it depends on how you pronounce “pNFS.” I read it as “pee-nifs,” which leads to some really childish and inappropriate mis-readings.
Storagezilla says
I’ve written up my take on NFSv3/NFSv4 here.
http://storagezilla.typepad.com/storagezilla/2010/10/nfsv4-vs-nfsv3-fight.html