Everyone is talking about “software-defined” everything lately, so it was only a matter of time before industry buzz turned to software-defined storage. VMware and EMC really stoked the flames with a constant barrage of marketing on the subject. But how exactly do you software-define storage? And what does this even mean?
It’s always amusing to see how others react when one company coins a new buzzword: Some obediently hop on board, others claim they’ve always been there, and a few deride the whole thing as just more hot air. Analysts and end users tend to align with these viewpoints, too, as if it’s a requirement to pick a side on every issue.
Clearly, VMware has a lot to gain in a software-defined data center (SDDC). If VMware can successfully leverage their hypervisor dominance to become the controller for network and storage devices, they will cement their position in the enterprise for another decade. This is what VMware’s SDDC strategy is all about: Put VMware in the driver’s seat, controlling network and storage resources in real time to enable new levels of integration and flexibility in the datacenter.
VMware has made a credible case for software-defined network (SDN) integration thanks to their purchase of Nicira and introduction of the NSX product. Using this technology, VMware can dynamically “reprogram” an Ethernet network made up of both virtual and physical switches in reaction to topology changes caused by the mobility of virtual machines. Networks lend themselves to this sort of orchestration by virtue of their transient nature: Once a packet has traversed a switch, no trace of its path needs to remain. An SDN controller leverages this transience to create a dynamic network topology.
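The contrast this sets up with storage is easiest to see in code. Below is a minimal, entirely hypothetical sketch (the class and method names are invented for illustration, not any real controller’s API) showing why SDN reconfiguration is cheap: reacting to a VM mobility event only means rewriting a forwarding rule, since the fabric holds no persistent data that has to be relocated.

```python
# Toy "SDN controller" sketch. Hypothetical names for illustration only;
# this is not the API of NSX or any real controller.

class ToySdnController:
    def __init__(self):
        # Flow table: VM MAC address -> (switch, port) where traffic
        # destined for that VM should be delivered.
        self.flows = {}

    def learn(self, vm_mac, switch, port):
        """Install an initial forwarding rule for a VM."""
        self.flows[vm_mac] = (switch, port)

    def vm_migrated(self, vm_mac, new_switch, new_port):
        """React to a VM mobility event. Because packets leave no state
        behind in the network, the controller only rewrites a rule;
        nothing has to be copied or moved."""
        self.flows[vm_mac] = (new_switch, new_port)

    def lookup(self, vm_mac):
        return self.flows.get(vm_mac)


controller = ToySdnController()
controller.learn("00:50:56:aa:bb:cc", switch="tor-1", port=12)
controller.vm_migrated("00:50:56:aa:bb:cc", new_switch="tor-2", new_port=7)
print(controller.lookup("00:50:56:aa:bb:cc"))  # ('tor-2', 7)
```

The rule rewrite takes effect as soon as the next packet arrives; there is no migration window. As the next section argues, storage offers no such luxury.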
So what about software-defined storage (SDS)? Some would argue that any storage device with an open management interface is already “software-defined”. Others claim it is sufficient for a storage system to be made up entirely of software, as opposed to the traditional specialized hardware model. But viewed in light of the highly configurable and flexible nature of SDN, it seems clear that neither of these is sufficient.
True software-defined storage must be extremely flexible, responsive to a standard controller and deeply integrated with the virtual environment, and capable of dramatic reconfiguration, including data distribution and changes in scale.
- Flexibility demands movement, and this is a key issue for storage systems. Data has inertia, requiring substantial amounts of time, bandwidth, and system resources to move. And most storage systems are not flexible enough to enable the kind of “read and write anywhere and everywhere” access a true SDDC vision requires.
- Standardization is a long-standing bugbear in the storage industry. For decades, industry groups have labored to create standard interface and management protocols while entrenched vendors dragged their feet or adopted an “embrace and extend” strategy to avoid losing market share. Even incessant customer demand for VMware-integrated storage has resulted in haphazard delivery of products. How can we expect SDS to be any different?
- This is an era of innovation in scale-out storage, yet most established vendors face extreme technical hurdles to delivering products that can grow dynamically. Existing protocols and architectures were simply not designed to allow growth and distribution of data across multiple loosely-coupled nodes.
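The inertia argument in the first point above is easy to quantify with a back-of-envelope calculation. The figures below are round numbers chosen purely for illustration, not measurements of any particular system:

```python
# Back-of-envelope illustration of data inertia: how long does it take
# just to copy a storage array's contents over the network?

def transfer_hours(capacity_tb, link_gbps):
    """Hours to move capacity_tb terabytes over a link_gbps link,
    assuming the link is fully and exclusively utilized (it won't be)."""
    bits = capacity_tb * 8e12          # 1 decimal TB = 8e12 bits
    seconds = bits / (link_gbps * 1e9)
    return seconds / 3600

# Even a modest 100 TB array takes the better part of a day at 10 Gb/s:
print(round(transfer_hours(100, 10), 1))  # 22.2
```

And that is the best case: real migrations contend with production I/O, protocol overhead, and the need to keep source and destination consistent while data is in flight.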
This is not to say that software-defined storage is impossible. On the contrary, many recent developments suggest that the SDS vision will be productized and delivered by VMware and others in the coming years. But these SDS products will look very different from the storage arrays customers are used to buying, and many existing systems will never get there. This will be a critical limiter to VMware’s grab at storage domination, since much of the industry will be incapable of being integrated into this vision.
VMware themselves have offered two key storage platforms to enable software-defined storage. vSAN is not a SAN at all, but rather a distributed storage software layer that promises to enable dynamic data movement without impacting daily operations. Numerous other companies have introduced distributed storage concepts in recent years, including EMC ScaleIO and Maxta, and established names like Symantec and IBM should not be counted out either. Nutanix and SimpliVity are already delivering virtualization-integrated distributed storage deserving of the SDS name.
VMware has also announced vVOL, an enhancement to traditional protocols that would make conventional storage arrays more responsive and capable as demands change. Again, many other companies are working on flexible and scalable integrated storage, from Tintri and Coho Data (which gets bonus points for leveraging SDN as well) to established players like EMC, HP, and Dell. All of these are likely to release vVOL-capable products once VMware is ready to bring that product concept to market.
A dark horse is the evolution of today’s caching software companies like Proximal Data, PernixData, Infinio, SanDisk FlashSoft and Avere into a virtual distributed storage layer or scale-out storage gateway. VMware’s recent purchase of Virsto gives them another avenue to software-define storage, though it is not clear what they intend to do with this team long-term.
And what of Microsoft? Though a distant number two in virtual datacenter mindshare, Hyper-V has made great strides recently, both in terms of technical capabilities and market share. Although Microsoft has introduced their own storage integration options (SMB3, ODX, etc.) and software storage solutions (scale-out file server), they have not yet articulated a wide-ranging SDDC vision. They have been reluctant to talk about their intentions for SDN, let alone to delve into SDS beyond their own Hyper-V/Windows file server combination.
If this article has piqued your interest in software-defined storage, pro or con, I urge you to join me for my Interop Las Vegas session, “Software-Defined Storage: Reality or BS?” I’ll discuss the topic in greater detail, presenting the reality and limits of SDS as well as the promise and prospects of this concept. And I especially look forward to a vigorous question and answer time at the end! I’m also doing a half-day workshop at Interop, “The Realities of Enterprise Cloud Storage.”
calvinz says
(Disclosure: I work for HP Storage) Good read! You asked what about Microsoft. With the HP StoreVirtual VSA (block-based VSA), it can run as a guest on either VMware or Hyper-V and the storage can be shared by either hypervisor. I honestly don’t know if other solutions can do this but certainly most can’t. HP also has the StoreOnce VSA – a deduplicated backup VSA that today supports VMware.
We have free full-featured trials of both VSAs that you can download at http://www.hp.com/go/TryVSA.
Pete (@vmpete) says
Very well put Stephen. Referencing Dave McCrory’s “data gravity” content reminded me of what good work that was (and is). As for SDS, it seems to me that the ones in trouble are solutions that provide storage arrays with their value-adds and IP baked into their controllers.
Kale Blankenship says
Do you feel that it is beneficial to try to define SDx or any other buzzword? Does it really add any value or product differentiation?
In my view, since every vendor seems to spin the buzzword of the day to fit their product, it only serves to confuse the consumer. I prefer to suggest that my clients not focus on such things but rather describe the pain points in their environment so we can find an appropriate solution for their business.
Hanna says
It is great!