On reading my thoughts about the evolution of enterprise storage, culminating in what I called "The Rack Endgame," where a rack of storage, networking, and compute becomes the basic entity of the datacenter, many pointed out that this looks an awful lot like the Facebook-led Open Compute Project (OCP). This is entirely intentional. But OCP is simply one expression of this new architecture, and perhaps not the best one for the enterprise.
Hyperscale Versus Enterprise
Years ago, companies like Facebook, Yahoo, and Google discovered that conventional IT infrastructure just won't cut it for their hyperscale data centers. What makes sense for even a large number of heterogeneous servers is entirely illogical for a massive, homogeneous, software-defined cloud. So they set about creating a new kind of infrastructure that matched their needs and pared away the rest.
Hyperscale environments must shift as much intelligence as possible to free-to-use software, ruthlessly eliminating proprietary hardware. This is due to a simple economic fact: Scaling licensed or proprietary hardware costs serious money, while scaling license-free software on commodity hardware is much cheaper. The only way to be competitive in the cloud is to eliminate the IT infrastructure tax.
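To put rough numbers on that "infrastructure tax," here is a back-of-the-envelope sketch in Python. The per-node figures are invented purely to illustrate the arithmetic and are not real pricing:

# Back-of-the-envelope comparison of scaling costs, using made-up numbers
# purely to illustrate the "infrastructure tax" argument above.

NODES = 1000

# Hypothetical per-node costs (not real pricing)
commodity_hw = 3000          # commodity server running a license-free software stack
proprietary_hw = 3000        # same class of hardware...
license_per_node = 2500      # ...plus a per-node software/hardware license

open_total = NODES * commodity_hw
licensed_total = NODES * (proprietary_hw + license_per_node)

print(f"License-free stack:  ${open_total:,}")
print(f"Licensed stack:      ${licensed_total:,}")
print(f"Infrastructure tax:  ${licensed_total - open_total:,}")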
Another issue with conventional enterprise IT gear is that it was never designed for serious scale. “Scalable” solutions for the business typically use shared-memory clusters and can grow only by a factor of 10 or so. True hyperscale solutions reach thousands of nodes thanks to shared-nothing “sharded” architecture and software-defined integration.
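To make the shared-nothing idea concrete, here is a minimal sketch of sharded data placement. It uses plain hash-modulo placement and hypothetical node names rather than the consistent hashing and replication a production system would use; the point is simply that each key lives on exactly one node, with no shared memory or shared disk between nodes:

import hashlib

# A hypothetical 1,000-node shared-nothing cluster: capacity and throughput
# grow by adding nodes, not by growing a single shared cluster.
NODES = [f"node-{i:04d}" for i in range(1000)]

def shard_for(key: str) -> str:
    """Map a key to its owning node with a stable hash (simple modulo placement)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

print(shard_for("user:12345"))   # always lands on the same node
print(shard_for("user:67890"))   # likely a different node entirely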
The Open Compute Project Rack
Facebook and the rest of the Internet gang have spent the last decade driving this software-centric architecture, a push that culminated in the introduction of the Open Compute Project (OCP).
First, they abandoned proprietary hardware (blade servers, storage arrays, and core switches) in favor of simplified alternatives. The mainstream vendors played along, introducing hyperscale servers that often looked like blades but were entirely different, typically with passive backplanes and no management hardware. Thus were born the Dell C5000, HP SL, and IBM iDataPlex. Although the margin on these servers was much lower, the volume of sales more than made up the difference.
But Facebook and friends kept pushing. They began exploring "white-box" servers, with SuperMicro stealing the spotlight if not the sales. The same was happening in networking, with OpenFlow and SDN opening the door to new vendors, including some no-name alternatives. Storage, too, moved inside the server, leaving Fusion-io's PCIe cards as the only big-name brand in hyperscale storage.
Open Compute was the next logical shift. Facebook and company invited hardware vendors like Intel and AMD to create a new simplified hyperscale server based around the "Group Hug" interconnect. These tiny servers fit into standard racks and use PCIe as their common I/O channel. The project has been moving forward over the last year, with many companies jumping on board.
The endgame is a “Disaggregated Rack” where server components are dispersed within a rack, with Intel’s 100 GbE silicon photonics (SiPho) serving as the high-speed, low-latency interconnect. This rack would include a simple bulk storage server as “bottom of rack” capacity as well as a flash-based server for “top of rack” performance. This technology isn’t ready for production yet, but parts are becoming available and development is proceeding.
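Described as data rather than hardware, the disaggregated rack might look something like the sketch below. The component names, counts, and fabric label are my own illustrative assumptions, not an OCP specification:

from dataclasses import dataclass, field

# An illustrative model of the disaggregated rack described above: compute
# sleds in the middle, flash at the top for performance, bulk disk at the
# bottom for capacity, all tied together by one photonic PCIe-style fabric.
# Counts and names are invented for illustration only.

@dataclass
class RackUnit:
    role: str          # "compute", "flash", or "bulk-storage"
    interconnect: str  # the shared fabric linking every unit in the rack

@dataclass
class DisaggregatedRack:
    fabric: str = "silicon photonics (PCIe over 100G optics)"
    units: list = field(default_factory=list)

    def add(self, role: str, count: int):
        self.units += [RackUnit(role, self.fabric) for _ in range(count)]

rack = DisaggregatedRack()
rack.add("flash", 2)          # "top of rack" performance tier
rack.add("compute", 30)       # simple compute sleds with little local storage
rack.add("bulk-storage", 2)   # "bottom of rack" capacity tier

print(f"{len(rack.units)} units sharing one fabric: {rack.fabric}")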
OCP Rack Versus Cisco UCS
All this may sound an awful lot like what I believe Cisco is building, but the similarities are somewhat superficial. Cisco UCS is a highly integrated and tightly managed enterprise product, while OCP Rack will be much simpler, relying on software for integration. Although there is some crossover between enterprise and hyperscale applications (name-drop Hadoop and Docker at your next business meeting!), the overlap only goes so far.
Consider Cisco’s new M-Series modular UCS servers. These are said to have been designed for hyperscale workloads but they don’t look much like Group Hug. M-Series uses a proprietary ASIC to aggregate servers on cartridges and share I/O resources. Like Group Hug, M-Series uses PCIe as the sole I/O protocol (a major departure for Ethernet-loving Cisco), but UCS is neither open nor really software-defined. M-Series relies on proprietary management for every function and is really designed to fit into an existing UCS environment, not an open cloud.
The same is true of Cisco's Invicta flash appliance. It may look a bit like "Project Dragon," Facebook's Fusion-io-powered all-flash server, but Invicta will be as proprietary and enterprise-ready as anything from EMC and the rest of the enterprise storage old guard.
In short, UCS is the kind of hyperscale that enterprises want and not at all what a software-driven cloud provider would be interested in. It’s very smart of Cisco to focus on enterprise needs rather than madly running off the cloud-shrouded cliff with an OCP clone.
Stephen’s Stance
The Open Compute Project (OCP) is developing its own "Rack Endgame" solution, disaggregating conventional servers into rack-scale complexes interconnected by PCIe and driven by software. But this approach doesn't really fit the needs of the enterprise datacenter, which is much more interested in support and integration than in acquisition cost and openness. The two are next-generation cousins rather than twins.