It’s clear how this fairy tale ends. So many companies are using “S3 plus” as their standard interface, and even inside their solutions, that it’s safe to say it’s won the cloud storage API battle. But S3 isn’t a finalized spec – the industry will extend and improve it over the coming years. Soon we’ll have a cloud storage standard based on S3, just like we have a LAN file services standard based on CIFS.
This week, SpectraLogic announced DS3 and “BlackPearl”, an innovative product for tape storage using a cloud API. Although BlackPearl sounds like an Amazon Glacier clone, it’s really nothing of the sort. BlackPearl extends the S3 API for tape storage, but this “DS3” API requires well-behaved clients and disciplined access. BlackPearl is exciting, it’s novel, and it’s useful. But it’s not S3 or Glacier, despite what some initial coverage may say.
When I say “cloud storage”, you probably think of Amazon S3: Big, slow, cheap, and distributed. That’s probably why the people I talk to about SolidFire usually start shaking their heads and denouncing the company. After all, who would be crazy enough to create an all flash storage array for cloud storage applications? But maybe it’s not so crazy; maybe SolidFire is simply playing a different ballgame.
I’ve never been a fan of thin provisioning as a storage management tool. Don’t get me wrong, I love having thin provisioning in my toolkit to overcome the limitations of conventional filesystems. Thin provisioning just gets under my skin when folks try to use it to solve business problems like long deployment time and slow purchasing cycles. If you attended any of the thin provisioning sessions I’ve presented at Storage Decisions, Interop, E-Storm, or elsewhere, then you’ve heard my wistful dreaming of real automatic provisioning without the hackery of thin provisioning systems. But perhaps I didn’t mention that real automatic provisioning exists today! It’s one of the many things I love about API-driven cloud storage!
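To make the contrast concrete, here is a minimal sketch of what “API-driven provisioning” means. The `ObjectStore` class is a toy in-memory stand-in (not any real S3 SDK): the point is that capacity is never carved out in advance, and space is consumed only as objects are written — no thin-provisioning illusion required.

```python
class ObjectStore:
    """Toy in-memory stand-in for an S3-style object storage service."""

    def __init__(self):
        self.buckets = {}

    def create_bucket(self, name):
        # Provisioning is one API call: the bucket exists the moment
        # this returns, with no pre-allocated capacity behind it.
        self.buckets.setdefault(name, {})

    def put_object(self, bucket, key, data):
        # Space is consumed only as objects land -- automatic
        # provisioning, not a thinly-provisioned fake disk.
        self.buckets[bucket][key] = data

    def used_bytes(self, bucket):
        return sum(len(v) for v in self.buckets[bucket].values())


store = ObjectStore()
store.create_bucket("app-logs")                # "purchasing cycle" done
store.put_object("app-logs", "day1.log", b"hello")
```

Contrast this with a thin-provisioned LUN, where the array still presents a fixed-size fake disk and merely defers the lie about free space.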
Change is not a word normally associated with storage, and revolution is practically unheard of. Today’s modern enterprise storage systems and networks employ massive resources to do one simple thing: Emulate the basic hard disk drives used over three decades ago. But cracks are appearing in our mausoleum of fake disks: Application developers are discovering the value of object storage, and new storage systems are emerging to meet this need.
Before Google could even take to the stage to announce their new “Google Storage for Developers” cloud storage offering in their I/O conference keynote, Amazon hit back with a new low-cost “Reduced Redundancy Storage” option for S3. The titans are at war, and cloud storage is the new battle ground. But what was really announced? And should you care?
Championing “open” and calling for standards has become the first stalling action by late-movers in technology spaces. They see opportunity passing by and try to hold back progress and FUD the market by yelling about proprietary solutions, vendor lock-in, and a lack of standards. Many well-intentioned IT folks follow along: After all, who doesn’t want openness, standardization, and interoperability?
After a successful presentation on enterprise storage in the cloud on Monday, I’m looking forward to wrapping up Cloud Slam ’09 in style. And what better way to discuss the storage implications of cloud computing than to get just about everybody doing storage in the cloud on one panel? I’ve got an idea! How about […]