Data storage has always been one of the most conservative areas of enterprise IT. There is little tolerance for risk, and rightly so: Storage is persistent, long-lived, and must be absolutely reliable. Lose a server or network switch and there is the potential for service disruption or transient data corruption, but lose a storage array (and thus the data on it) and there can be serious business consequences.
Storage virtualization never took hold in mainstream enterprise IT because we lacked the ability to identify and move data non-disruptively. Caching and distributed storage solutions have now solved that problem, and it’s only a matter of time before the legacy need for centralized storage falls away.
Ask any project manager if it’s possible to deliver something that is fast, good, and cheap, and they’ll laugh. The constraint known as the Iron Triangle prevents just about everything in the world from meeting all three conflicting requirements at once. Yet, for the last two decades, enterprise storage array vendors have been trying to deliver exactly that. How’s that working out?