Blade systems allow compact, stripped-down servers to be mounted in a chassis with shared power and I/O support. Among the benefits of blade servers, efficiency, manageability, reliability, performance, and flexibility stand out as universal. There are many blade systems available today, and even more produced in the past, that include some or all of these capabilities and meet some or all of these requirements. But are they all blade servers? And does it matter whether they are or not?
RLX Technologies ServerBlade
Most folks credit RLX Technologies with inventing the modern blade system. This Houston company, staffed largely by ex-Compaq employees, sought to produce a compact web server platform using Transmeta’s energy-efficient x86 CPUs. By June 2002, they were clearly using the “blade” name all over their web site.
The big innovation introduced by RLX (and adopted by HP, IBM, Dell, and everyone else) was integrated, hot-pluggable blades. The RLX blades included everything onboard except I/O, which was consolidated entirely on a standard rear connector. Every part of the system could be hot plugged, including the blades themselves, with none of the ribbon cables or expansion cards required by “high density servers” from the 1990’s.
Although RLX started strong in 2001, the company collapsed in 2004, laying off most of its staff and selling to HP (née Compaq) a year later. But where did RLX come from? They patented a “bladed” web server, but a read through the archive of their web site reveals a company that wasn’t claiming to have invented the idea of blades.
Another blade pioneer worth mentioning is Egenera, which began producing their BladeFrame line in late 2001. Egenera’s stripped-down “pBlades” resemble earlier single-board computers more than modern “server-in-a-sled” designs, with a focus on converged I/O and management software. Today, Egenera is mostly focused on their PAN Manager software, which supports hardware from major vendors like HP and Dell.
Earlier Blades: Cubix and Company
RLX began using the term “bladed server” in 2001, around the same time Egenera, IBM, and others adopted “blade” as a description for servers like this. But the concept of a blade server, and even the term itself, were already widely understood at the time.
Another company selling “blade servers” in 2002 was Cubix, which had pioneered the concept a decade earlier. They adopted the “blade server” name in mid-2002 (while RLX still called theirs “bladed servers”) after more than a decade of selling “space-saving servers”. Cubix is still a going concern outside the blade space, but the same can’t be said of long-forgotten proto-blade names like J&L Information Systems, ChatCom, and CommVision.
These early pre-blade vendors sold single-board computers in a rack-mount chassis that included shared power and cooling and multiplexed I/O. The “class of 1995”, including the Cubix ERS, ChatCom Chatterbox, and CommVision CommSwitch 2500, all included most of the fundamental elements of modern blade servers. They combined multiple servers in a chassis, shared power and cooling, and even had variations on the consolidated I/O theme. The Cubix ERS even had an advanced supervisor card and management software.
The biggest difference was redundancy: none of these systems was truly a hot-swappable, N+1 design. Most had cold-replaceable power supplies and server boards that plugged in at the bottom (like expansion cards) rather than at the back (like modern blades). And all used ribbon cables in addition to the bus connector for I/O connectivity. It was possible to hot-replace a Cubix board, but the process was much more involved than the simple “pull-and-swap” on an RLX ServerBlade enclosure.
These early systems gradually evolved or died off, a process accelerated by the entry of major vendors like IBM, HP, and Dell into the market in the mid-2000’s. Eventually, these smaller concerns fell apart, and even larger ones (like Rackable Systems) found themselves needing to “get large” to take on the big systems houses. Another key market entrant was SuperMicro, which today manufactures hardware for many vendors of blade solutions.
VMEbus Blades from Sun, HP, and Others
The late 1980’s and early 1990’s saw an even earlier wave of pre-blade single-board servers in enclosures, a development made possible by the standardization of VMEbus. An outgrowth of the Motorola 68000 bus, VMEbus became a standard interface used by a variety of minicomputer/microcomputer descendants. Sun, HP, Data General, Symbolics, and others (even Atari) adopted VMEbus for chassis expansion, and many eventually produced single-board computers and even rack-mount multi-system enclosures.
VMEbus was not intended for blade servers (since no such thing yet existed), and the resulting systems really aren’t much like what we would consider blades. Most production VMEbus computers, even those that used a VMEbus backplane and multiple single-board computers, were intended more for parallel computing than efficiency and manageability. Although most of Sun’s 1990’s-era workstations and servers used a VMEbus backplane, including the wildly successful Sun-4 lineup, only the “/E” models were single-board computers.
Some customers were known to gather Sun’s VMEbus computers (like the Sun 3/E and Sun 4/E) and run them as independent systems in a rack-mount VMEbus chassis. The innovative element here was the VMEbus connector and chassis form factor: much like current blade servers, these single-board computers collected all external I/O and power in a single rear connector and shared a common mounting system. This meant customers could construct dense rack-mount server collections.
But these were not blade servers in the modern sense. For one thing, they did not include any sort of management supervisor. More importantly, the attractiveness of a massive multiprocessing solution with partitioning or domains made blades irrelevant in the world of HP-UX or Solaris: why collect individual systems when they could scale together or be partitioned logically? Blade servers just weren’t all that attractive until the 2000’s, when Sun finally adopted the terminology and developed modern blades.
What’s Next for Blade Servers?
Amusingly, the earliest systems were lacking in exactly the areas that next-generation systems might neglect: hyper-scale servers don’t need redundancy, management systems, or much integration, as we’ll discuss in the next article in this series.
Sun SPARCserver-1000E system board photo by Shieldforyoureyes. RLX ServerBlade photo by RLX Technologies.