We were never able to achieve storage virtualization in mainstream enterprise IT because we lacked the ability to identify and move data non-disruptively. This has been solved by caching and distributed storage solutions, and it’s only a matter of time before the legacy need for centralized storage falls away.
Join me for “Storage I/O is About to Get Crazy”! I’ll be speaking on Tuesday morning, August 26, at 7:30 AM at Jillian’s San Francisco, right on the corner next to Moscone and the rest of VMworld. SolarWinds is sponsoring this talk and will provide breakfast (including gourmet coffee) to any and all registered VMworld attendees.
Lots of folks conflate cloud computing and virtualization, but these are not necessarily intrinsically related. Although most cloud servers today use a hypervisor like KVM or Xen to share compute hardware, there’s no reason it has to be this way. My takeaway from Gigaom Structure this week is that an alternative paradigm is emerging: Cloud without […]
Software Defined Networking (SDN) has always looked a bit like a solution in search of a problem, at least in the enterprise data center. But there are lots of potential applications that need a dynamic and scalable network. In my mind, storage is chief among these, since scalability and flexibility have always been extremely difficult to achieve.
I’ve written and spoken quite a bit on the “software-defined” future, what it means and how it will come about. Although it seems like a marketing buzzword to some, I feel it is a fairly accurate description of the future of the enterprise and service provider data center. That’s why I’m working to organize the next Software-Defined Data Center Symposium, and am happy to announce that it will be held in Santa Clara, CA on April 22, 2014.
The only way to build a datacenter with flexibility and scale is automation. And this is as true for networks and servers as it is for storage. IT architects increasingly design integrated and automated systems, not static interconnects. They must learn scripting and look for solutions that are responsive to changing demand. And they have to start getting excited about companies playing in this space.
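The "learn scripting" advice need not mean anything elaborate: even a short declarative plan beats provisioning by hand. Here is a minimal sketch of the idea, using a hypothetical VolumeSpec format rather than any real array's API — compare a desired state against what exists and act only on the difference, so re-running the script is safe:

```python
# Declarative storage provisioning sketch. VolumeSpec and the plan are
# hypothetical illustrations, not a real vendor API.
from dataclasses import dataclass


@dataclass(frozen=True)
class VolumeSpec:
    name: str
    size_gb: int
    tier: str  # e.g. "ssd" or "hdd"


def plan_provisioning(desired, existing):
    """Return only the specs that still need to be created.

    `desired` is the declarative target state; `existing` is the set of
    volume names already present. Running the plan twice changes nothing
    the second time (idempotent).
    """
    return [spec for spec in desired if spec.name not in existing]


desired = [
    VolumeSpec("web-logs", 100, "hdd"),
    VolumeSpec("db-data", 500, "ssd"),
]
existing = {"web-logs"}  # already provisioned

for spec in plan_provisioning(desired, existing):
    # In a real system this would call the array's management API.
    print(f"create volume {spec.name}: {spec.size_gb} GB on {spec.tier}")
```

The same pattern — desired state in, diff out — is what makes automated systems responsive to changing demand instead of static interconnects.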
Once again, it’s time for vSphere-Land.com’s “Top vBlog” voting. And once again Feedbin and Twitter are full of hundreds of bloggers lamely begging me to vote for them. And once again, I didn’t base my votes on their pleas or my own hunches. Follow along as I explain how I actually voted and why I think you should use the same mechanism. And no, I’m not going to say who I voted for!
Everyone is talking about “software-defined” everything lately, so it was only a matter of time before industry buzz turned to software-defined storage. VMware and EMC really stoked the flames with a constant barrage of marketing on the topic. But how exactly do you software-define storage? And what does this mean?
Scaling storage is a serious challenge for the industry, but there is a great deal of thought, effort, and creativity going into it right now. Companies like Gridstore, Oxygen Cloud, and Cleversafe have come up with effective client-side solutions to enable scale-out storage to sing. If you’ve got an appropriate application, client, or gateway, scale-out is a real possibility!
It is amazing that something as simple-sounding as making an array get bigger can be so complex, yet scaling storage is notoriously difficult. Our storage protocols just weren’t designed with scaling in mind, and they lack the flexibility needed to dynamically address multiple nodes. So my hat is off to these companies and others who have come up with clever ways to maintain compatibility while scaling out beyond the bounds of a single storage array.