Since all the cool companies are offering capacity guarantees these days, I thought I might as well throw my hat into the ring and offer one, too. Starting now, I guarantee any takers an easy plan to write 50% more production data to your existing storage environment. Even better, I’ll do it with no additional hardware or software to purchase and install, and no complicated terms and conditions. You won’t even have to delete anything, but if you do, I’ll guarantee double your data! I’ll charge just 50% of the storage hardware and software spend you defer, and if I can’t deliver, you pay nothing. What have you got to lose?
You’re Holding It Wrong
How can I make this fantastic offer without knowing anything about your environment and without including any asterisks? Simply because, when it comes to data storage, in the immortal words of Steve Jobs, you’re holding it wrong. In 15 years of enterprise storage consulting, I have never seen a single environment using anything close to its total usable capacity. In fact, I’ve never seen an environment that was using even half of its usable capacity.
This makes my job easy. If you have a half dozen storage arrays with a total of 500 TB of raw storage, about 60% of that capacity will be usable (once you take RAID, spares, and other overhead into account). Of that 300 TB of usable capacity, you’re probably only storing 70 TB of data if you’re like the average enterprise shop. Since 50% more than 70 TB is just 35 TB, I can hand you a report that says “write 35 TB more” and walk out. My job will be done.
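If you want to sanity-check that math for your own shop, it’s trivial to pencil out. Here’s a quick back-of-the-envelope sketch in Python using the example figures above; every number is an illustrative assumption, so plug in your own:

```python
# Back-of-the-envelope capacity math from the example above.
# All figures are illustrative assumptions, not measurements.
raw_tb = 500         # total raw capacity across the arrays
usable_ratio = 0.60  # what survives RAID, spares, and other overhead
stored_tb = 70       # data actually written (typical enterprise shop)

usable_tb = raw_tb * usable_ratio  # 300 TB usable
target_tb = stored_tb * 1.5        # 50% more data: 105 TB
extra_tb = target_tb - stored_tb   # the 35 TB in my report

print(f"Usable capacity: {usable_tb:.0f} TB")
print(f"Write {extra_tb:.0f} TB more to hit the guarantee")
print(f"Still free afterward: {usable_tb - target_tb:.0f} TB")
```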
But let’s say you’re better than average. Let’s say you run a really tight ship and don’t waste expensive capacity like most people. I bet you’ve still got slack capacity you don’t know about. Maybe a project manager demanded a 30 TB LUN for his new database and won’t let you run your monitoring tools to see what he’s really using. Or perhaps another project never got off the ground, but they won’t share the disk space “they paid for.” Then there’s that other system that was turned off without your knowledge, so its storage is still allocated. There’s always plenty of perfectly good, usable primary storage capacity going free.
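If your arrays can export per-LUN allocated and used figures, even a trivial script will surface this kind of slack. Here’s a minimal sketch; the field names, sample data, and thresholds are all hypothetical, not any vendor’s actual API:

```python
# Flag reclaimable allocations from a (hypothetical) per-LUN inventory.
# Fields and thresholds are illustrative; adapt to your array's reports.
luns = [
    {"name": "db-proj-01",  "allocated_tb": 30, "used_tb": 4, "idle_days": 2},
    {"name": "proj-x",      "allocated_tb": 10, "used_tb": 0, "idle_days": 400},
    {"name": "retired-app", "allocated_tb": 8,  "used_tb": 8, "idle_days": 365},
]

for lun in luns:
    slack = lun["allocated_tb"] - lun["used_tb"]
    if lun["idle_days"] > 180:
        # No I/O in six months: the whole allocation is a reclaim candidate.
        print(f"{lun['name']}: idle {lun['idle_days']} days -- "
              f"reclaim all {lun['allocated_tb']} TB?")
    elif slack > 0.5 * lun["allocated_tb"]:
        # Mostly-empty LUN: negotiate it down to size.
        print(f"{lun['name']}: {slack} TB allocated but never used")
```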
Wringing Out the Slack
See my post, “Use Process Solutions For Process Problems, Technical Solutions For Technical Ones”
Despite what the warring vendors might say, the issue isn’t the equipment or software you’re using; it’s the way you’re using it. Storage isn’t bought as an integrated piece of a compute environment these days, and it isn’t managed that way, either. Enterprise storage arrays are purchased in fits and starts, a little here and a lot there, according to the whims of the budget and project planning process. It’s not at all unusual to see tight storage constraints delaying projects even as a new and totally unused array sits idle in the corner.
The root cause lies with how capacity is purchased, configured, allocated, and charged to projects, not with the technical capabilities of the platform. Nearly every modern array can be shared by many servers, and nearly every environment has ample storage networking potential. Are Fibre Channel directors and HBAs too expensive? Switch to iSCSI or NAS! Every server has a spare gigabit Ethernet port or two, and I bet your networking guys have a decent switch you could use.
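Don’t take my word on those spare ports; inventory them. Here’s a rough sketch using the third-party psutil package to spot gigabit interfaces that are up but have moved almost no traffic; the “idle” threshold is an arbitrary assumption:

```python
# List up-but-idle gigabit NICs as candidates for iSCSI or NAS traffic.
# Requires the third-party psutil package (pip install psutil).
import psutil

stats = psutil.net_if_stats()             # link state and speed per NIC
io = psutil.net_io_counters(pernic=True)  # lifetime byte counters per NIC

for name, st in stats.items():
    counters = io.get(name)
    if name == "lo" or not st.isup or counters is None:
        continue
    moved_gb = (counters.bytes_sent + counters.bytes_recv) / 1e9
    if st.speed >= 1000 and moved_gb < 1:  # arbitrary "idle" threshold
        print(f"{name}: {st.speed} Mb/s link, only {moved_gb:.2f} GB moved "
              f"since boot -- spare capacity for iSCSI?")
```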
All this applies mainly to primary storage, but backups are an equally big opportunity. Most daily incremental backup tapes are left half-empty thanks to job scheduling, connectivity constraints, and inappropriate manual media assignments. And those jam-packed weekly full tapes are probably a waste of time and capacity, too. How about re-thinking your backup process with fewer fulls, virtual tape, elimination of useless data, or even snapshots? I bet my friend W. Curtis Preston could offer some great advice there!
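To see what fewer fulls might actually buy you, sketch the math before touching a single job. The figures below are assumptions (a 70 TB full, 3% daily change); swap in your own:

```python
# Rough monthly backup capacity under two schedules. All inputs are
# assumed example figures; adjust for your data size and change rate.
full_tb = 70        # size of one full backup of all primary data
change_rate = 0.03  # assumed daily change rate driving incrementals
weeks = 4

# Weekly full plus six daily incrementals per week:
weekly_fulls = weeks * full_tb + weeks * 6 * full_tb * change_rate
# One monthly full plus daily incrementals the rest of the month:
monthly_full = full_tb + (weeks * 7 - 1) * full_tb * change_rate

print(f"Weekly fulls + incrementals: {weekly_fulls:.0f} TB/month")
print(f"Monthly full + incrementals: {monthly_full:.0f} TB/month")
print(f"Capacity saved: {weekly_fulls - monthly_full:.0f} TB/month")
```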
Stephen’s Stance
You don’t need to get crazy to wring out a bit more storage capacity. Deduplication and data optimization sound great, but what’s the point if you’ve got ample unused capacity already? Aren’t all these guarantees just an attempt to grab more business, make more money, and sell more gear?
The leading cause of poor storage capacity utilization is failure to use storage capacity!
I’m serious about the offer. I’ve done exactly this kind of work before and have the resources to do it for you, too. Bring me in and I’ll give you a plan to write 50% more primary data. Guaranteed success or you don’t pay. But I bet you could do the same thing without me!