Ask any project manager if it’s possible to deliver something that is fast, good, and cheap, and they’ll laugh. The phenomenon known as the Iron Triangle prevents just about everything in the world from meeting all three conflicting requirements at once. Yet, for the last two decades, enterprise storage array vendors have been trying to deliver just this. How’s that working out?
Software Defined Networking (SDN) has always looked a bit like a solution in search of a problem, at least in the enterprise data center. But there are lots of potential applications that need a dynamic and scalable network. In my mind, storage is chief among these, since scalability and flexibility have always been extremely difficult to achieve.
This old-fashioned storage I/O path was deterministic and decipherable: The server, the switch, and the array all had enough information to do their jobs effectively and efficiently.
The time has come to take sides on the core question of storage for virtual servers: Do you want storage intelligence to live in the hypervisor or the array? Most administrators are already lining up on one side or the other, unintentionally casting their vote while the rest flounder. But the storage industry must wake up and embrace the divide.
The battle lines are drawn between 8 Gb Fibre Channel and 1 Gb or 10 Gb iSCSI and NFS. This is the baseline for my Interop debate. I am not arguing about the future of SAN, or even iSCSI versus NFS. Rather, I am arguing that most businesses would be best served by implementing an iSCSI SAN rather than purchasing Fibre Channel today.
Although the SANLink appears to be something of an oddball, it indicates the shape of things to come. Thunderbolt will transform the use cases for portable and all-in-one computers, likely spelling the end of the empty box for desktop use. In fact, I would not be at all surprised if Apple soon canceled the Mac Pro line entirely in favor of a beefed-up Mac Mini and iMac stable. And the dozen or so MacBook Pro users wanting to connect to a Fibre Channel SAN will finally have the opportunity to do so sometime later this year.
Many storage challenges focus on the conflict between data management, which demands an ever-smaller unit of management, and storage management, which benefits most from consolidation. Developing data management capability that is both granular enough for applications and scalable enough for storage is one key to the future of storage.
HP has always been an alphabet soup company, assigning just about every item in its bewildering array of products a unique product number. Like Mercedes-Benz cars, even the product names are a mix of letters and numbers that can be off-putting to browsers. Now that the company has grown to supersize proportions through internal expansion and acquisition, just about everyone outside it seems to have trouble decoding the product line, so I decided to take a stab at decoding the enterprise lineup in plain English.
I’ve been talking about storage capacity utilization for my entire career, but the storage industry doesn’t seem to be getting anywhere. Every year or so, a new study is performed showing that half of storage capacity in the data center is unused. And every time there is a predictable (and poorly thought through) “networked storage is […]
This is part of an ongoing series of longer articles I will be posting every Sunday as part of an experiment in offering more in-depth content. There has been a lot of discussion in the storage industry about Fibre Channel over Ethernet (FCoE), making it the toast of Storage Networking World, but this technology remains relatively unknown […]