If there is a universal best practice, it’s the simple idiom, “use the right tool for every job.” Yet we in IT spend so much time trying to fit square pegs into round holes that it becomes second nature. How else to describe our attempts to solve virtually every business need through added technology? No matter what the request, we assume there must be a technological solution. But the time has come to adopt a new best practice: Use process solutions to solve process problems, and technical solutions to solve technical ones.
Consider Storage Utilization
The effectiveness of data storage allocation and utilization is pathetic. If they really investigate the matter, most enterprise IT organizations will discover that they are using less than one quarter of the expensive disk storage capacity they have purchased. Yet storage has the largest data center footprint in terms of floor space, power, and cooling, and ranks near the top in both operational and capital expense. In short, businesses spend heavily on storage systems they barely use, and the situation has not improved in a decade.
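To make the waste concrete, here is a back-of-the-envelope calculation. All figures are hypothetical, chosen only to illustrate the utilization figure above: at one-quarter utilization, every usable terabyte effectively costs four times its sticker price.

```python
# Illustrative arithmetic with hypothetical numbers: at 25% utilization,
# each usable terabyte effectively costs 4x its purchase price.
purchased_tb = 400      # raw capacity bought
utilization = 0.25      # fraction actually holding data
cost_per_tb = 2_000     # hypothetical purchase price, USD per TB

usable_tb = purchased_tb * utilization
effective_cost_per_tb = (purchased_tb * cost_per_tb) / usable_tb
print(f"Effective cost per usable TB: ${effective_cost_per_tb:,.0f}")  # $8,000
```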
A combination of technical and process roadblocks impedes higher utilization of storage capacity. The technical issues are many: inflexibility of storage allocation (it’s hard to “grow” a file system or RAID set while it is in use); a lack of communication between the various layers that storage touches (applications, operating systems, networks, systems, disks); static mapping between files and storage systems; and the perverse interaction between capacity and performance (spindles and heads, platter capacity, interface speed). These are compounded by business process issues that are probably more serious: problematic interactions between business users of IT systems, IT application managers, and IT infrastructure teams; opaque or downright fraudulent growth projections leading to ineffective capacity planning; excessive lead time for storage provisioning up and down the stack; and bulk purchasing driven by vendor and customer financial and budget calendars. I’m sure the reader can imagine a few more drivers besides, but all conspire to limit effective utilization of storage.
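One of these roadblocks, the capacity/performance interaction, is easy to quantify. The sketch below uses hypothetical drive specifications: per-spindle IOPS stays roughly flat while platter capacity grows, so IOPS per terabyte falls as drives get larger.

```python
# A sketch of the capacity/performance trade-off, with hypothetical
# drive specs: random IOPS per spindle is roughly constant across
# generations, so IOPS per TB collapses as capacities grow.
drives = [
    {"capacity_tb": 0.3, "iops": 180},   # e.g. a 300 GB 15K RPM drive
    {"capacity_tb": 1.0, "iops": 120},   # e.g. a 1 TB 7.2K RPM drive
    {"capacity_tb": 2.0, "iops": 120},   # e.g. a 2 TB 7.2K RPM drive
]
for d in drives:
    print(f'{d["capacity_tb"]:.1f} TB drive: '
          f'{d["iops"] / d["capacity_tb"]:.0f} IOPS per TB')
```

With these hypothetical numbers, a workload needing 6,000 IOPS on 2 TB drives requires fifty spindles and 100 TB of raw capacity, most of which will sit empty: capacity stranded by performance requirements.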
I have a long-standing love/hate relationship with thin provisioning, one of the many proposed technical solutions to the utilization problem. Thin provisioning eliminates many technical challenges: it simplifies adding capacity to the Drobo that serves as my home office storage center; the ability to automatically grow VMware images makes virtualization practical in the tight confines of a laptop; and it contributes to the usefulness of advanced solid-state storage systems like the new Nimbus S-Class. But I have serious reservations about using thin provisioning to over-subscribe enterprise storage systems, because it papers over failures of capacity planning and IT-to-business communication. Thin provisioning will only make those process issues worse.
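For readers unfamiliar with the mechanics, here is a minimal sketch of what over-subscription looks like; the pool size, volume names, and usage figures are all hypothetical. The array promises more logical capacity than it physically has, which is harmless until actual writes approach the physical pool.

```python
# A minimal sketch of thin-provisioned over-subscription.
# All sizes are hypothetical, in TB.
physical_pool_tb = 100

volumes = {                 # logical size promised : actual data written
    "erp":       (50, 12),
    "email":     (40, 9),
    "analytics": (60, 20),
}

promised = sum(size for size, _ in volumes.values())
written = sum(used for _, used in volumes.values())

print(f"Over-subscription ratio: {promised / physical_pool_tb:.1f}x")
print(f"Physical pool used: {written / physical_pool_tb:.0%}")
# Promised 150 TB against 100 TB physical: everything looks healthy at
# 41% used, yet every volume holds a legitimate claim on capacity that
# does not exist. That gap is where planning failures bite.
```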
Fix the Process
Thin provisioning is fine if your problem is a technical one, but enterprise IT should focus instead on the process. Why can’t IT communicate with the business? Application owners have so little faith in IT’s ability to respond to simple requests (“I need more storage soon”) that they overestimate their needs by 100% and hope never to have that conversation at all. Thin provisioning blunts the cost of this excess capacity but does nothing to improve the underlying problem.
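And that 100% padding rarely happens just once. A hedged illustration, with hypothetical roles and multipliers: if each hand-off in the request chain pads the estimate to be safe, modest padding compounds toward the one-quarter utilization described earlier.

```python
# Hypothetical padding factors at each hand-off in the request chain.
actual_need_tb = 10
padding = {"application owner": 2.0, "DBA": 1.5, "storage admin": 1.25}

provisioned = actual_need_tb
for factor in padding.values():
    provisioned *= factor

print(f"{provisioned:.1f} TB provisioned for {actual_need_tb} TB of data "
      f"-> {actual_need_tb / provisioned:.0%} utilization")   # ~27%
```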
As far as storage utilization goes, fixing the process starts with storage as a service. Once storage is standardized, it can be provisioned much more quickly and smoothly, which will reassure application owners. Standardization also reduces the impact of bulk purchases and makes forecasting and management easier. I have seen enterprises build and deploy uniform “cookie-cutter” storage solutions, and it really does work.
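Here is what a cookie-cutter offering might look like in practice; the tier names, sizes, and attributes below are all hypothetical. The idea is to treat storage as a small fixed menu rather than a bespoke negotiation: every request maps to a pre-approved unit, so fulfillment is fast and forecasting reduces to counting units.

```python
# A sketch of a standardized storage catalog, with hypothetical tiers.
from dataclasses import dataclass

@dataclass(frozen=True)
class StorageOffering:
    name: str
    size_gb: int
    iops_target: int
    replicated: bool

CATALOG = [
    StorageOffering("small-standard", 100, 500, False),
    StorageOffering("medium-standard", 500, 2_000, False),
    StorageOffering("large-protected", 2_000, 8_000, True),
]

def provision(offering_name: str) -> StorageOffering:
    """Requests map to a fixed menu: no custom sizes, no negotiation,
    and demand forecasting is just counting units of each offering."""
    for offering in CATALOG:
        if offering.name == offering_name:
            return offering
    raise ValueError(f"Not in catalog: {offering_name!r}")

print(provision("medium-standard"))
```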
“As a service” methodology is difficult to implement, however. It takes guts to make it stick, and open-mindedness to even give it a try. But it’s the only way to fix the process.