Last week, J Metz penned an article entitled “Thoughts on #OpenStack and Software-Defined Storage” in which he argues (rightly) that OpenStack Cinder should take storage networks into account and (wrongly) that it should also encompass existing protocols such as “the 11 Billion ports of Fibre Channel that currently exist as part of a holistic system of storage networking.”
Although it might not seem like it all the time, Dr. Metz and I are good friends and agree more than we disagree. And although it might not seem like it all the time, I actually respect and approve of Fibre Channel in the data center. But this doesn’t change the fact that OpenStack doesn’t need Fibre Channel to achieve the goal of software-defined storage.
Storage Is More Than The Sum Of Its Parts
Let’s start with where Dr. Metz gets it right. Storage isn’t just about media. Storage isn’t even just arrays or server-side software. Storage is a continuum that begins at the application, ends at the media, and includes protocols, attachments, virtualization, and (yes) networks. In short, storage is a relationship, to borrow Metz’s phrasing.
Metz next dives into the fact that storage has “an intimate and symbiotic relationship with its corresponding network.” We can see all around us that this is a true statement. The history of data generally and IT storage specifically is one of creating and defeating bottlenecks between data sources and data destinations. And the last 20 years have been all about transforming storage from a simple end-to-end bus into a more dynamic multi-point network.
We have had varying levels of success on this point. Indeed, many of the success stories in enterprise storage (e.g. Fibre Channel SAN and PCIe flash) as well as the failures (e.g. CIFS and iSCSI a decade ago) have more to do with the inherent capabilities of the storage interconnect than with the rest of the technology stack.
If Cinder is indeed to be the center of software-defined storage for OpenStack, it needs to take the network into account. This is especially true if Cinder is to be as reliant on iSCSI going forward as it is now: The chief roadblock there isn’t the iSCSI or TCP/IP protocols but the reliability of the Ethernet network itself. This is why so many enterprise IT folks have a sour view of iSCSI: It was deployed initially on lossy, slow, and congested networks, which slowed it to a crawl with TCP retransmissions.
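To put a rough number on how badly loss hurts TCP (and thus iSCSI), the well-known Mathis et al. approximation bounds steady-state TCP throughput at MSS × C / (RTT × √p), where p is the packet loss rate. A back-of-the-envelope sketch, with purely illustrative parameters rather than measurements:

```python
from math import sqrt

def mathis_throughput(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Approximate steady-state TCP throughput in bytes/sec under random
    loss, per the Mathis et al. model: BW <= MSS * C / (RTT * sqrt(p))."""
    return mss_bytes * c / (rtt_s * sqrt(loss_rate))

# Illustrative numbers: 1460-byte MSS, 1 ms RTT on a data-center LAN.
clean = mathis_throughput(1460, 0.001, 0.0001)  # 0.01% packet loss
lossy = mathis_throughput(1460, 0.001, 0.01)    # 1% packet loss

print(f"0.01% loss: {clean / 1e6:.0f} MB/s")  # roughly 178 MB/s
print(f"1% loss:    {lossy / 1e6:.0f} MB/s")  # roughly 18 MB/s
```

A hundredfold increase in loss costs an order of magnitude in throughput, which is exactly the “crawl” early iSCSI deployments experienced on congested Ethernet.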
Where OpenStack Fits
OpenStack is a conscious departure from the old ways of building servers, networks, and storage. It is different from enterprise IT and isn’t intended to be run on the same old infrastructure. OpenStack isn’t VMware, where existing data center technology was used to run a new virtual data center. It’s Real Cloud™, built on commodity hardware.
Dr. Metz doesn’t outright say that OpenStack should adopt Fibre Channel but he does suggest that conventional storage technology be brought into the fold. More importantly, he charges the OpenStack community to work with “the T11 and the Fibre Channel Industry Association (FCIA), as well as the Storage Networking Industry Association (SNIA).” In his view, OpenStack and the giants of enterprise storage would together “work towards the next level of technological innovation”.
This is a fool’s errand. Cloud generally, and OpenStack specifically, is no more related to conventional open systems enterprise storage than that world of FC and NFS is related to the minicomputers and mainframes that still run the core of the world’s business. I have watched for 20 years as sad pundits have bemoaned open systems “reinventing the wheel” for data management issues that were long since solved on their beloved mainframes. And where has this discussion gotten us? Nowhere.
In fact, the success of today’s enterprise systems came because the innovators broke from tradition and took us in new directions, just as the success of cloud computing came from ignorance of conventional IT. Yes, we spun our wheels and patted ourselves on the back as we rediscovered fire, but we also created the modern world of enterprise computing.
Stephen’s Stance
Now OpenStack is doing it again. Just as the mainframe hasn’t disappeared, conventional IT won’t be swept away by the cloud. This is a new paradigm for computing that will adopt what it wants and ignore the rest, just as we in open systems did 20 years ago. The best that people like Dr. Metz and I can do is make suggestions and recommendations from our enterprise IT experience and hope the OpenStack Cinder community will listen.
OpenStack should consider the network. It should not just assume that data will arrive on time and intact. Cinder and Neutron should merge software-defined storage and networking so protocols like iSCSI can do their very best. But it would be a ludicrous waste of time to wedge Fibre Channel into OpenStack beyond whatever limited support current implementers demand. Linux didn’t need CKD support and OpenStack doesn’t need Fibre Channel.
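For what it’s worth, Cinder already treats the transport as a pluggable detail: a volume driver hands back a connection-type string and a blob of transport-specific data, and the compute side attaches accordingly. That is why “limited support” for FC can live in individual drivers without reshaping the project. A simplified, hypothetical sketch of that hand-off (the class and addresses here are illustrative, not the real cinder.volume.driver interface):

```python
class SketchISCSIDriver:
    """Hypothetical, minimal stand-in for a Cinder volume driver.

    Real drivers implement initialize_connection(volume, connector) and
    return a dict whose 'driver_volume_type' tells the compute side how
    to attach: 'iscsi' here, 'fibre_channel' for the FC drivers.
    """

    def initialize_connection(self, volume, connector):
        # All values below are illustrative placeholders.
        return {
            "driver_volume_type": "iscsi",
            "data": {
                "target_portal": "10.0.0.5:3260",
                "target_iqn": f"iqn.2010-10.org.openstack:{volume['id']}",
                "target_lun": 1,
                "access_mode": "rw",
            },
        }

driver = SketchISCSIDriver()
info = driver.initialize_connection(
    {"id": "vol-42"}, {"initiator": "iqn.example:host1"}
)
print(info["driver_volume_type"])  # iscsi
```

The transport choice is an attribute of the returned dictionary, not of Cinder’s core, which is precisely why FC support can stay at the margins.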
Now read J’s response to my response, “OpenStack and Storage, a Response”
Terafirma says
What does Open in OpenStack mean? To ignore FC is to say your choice is the only correct way. What about RDMA or IFB or HPC? Next you will be telling everyone that OpenStack should drop all hypervisor support except one, as whatever one you choose is the correct one for everyone.
Also, how is allowing FC for storage access any different from iSCSI? Last I checked, OpenStack didn’t provision the underlying iSCSI network, so why should it do so for the FC network? Just allowing Cinder to manage the endpoints (host/array) would be enough.