The first storage performance horseman is spindles: If you don’t have enough disk units, performance will suffer. I have been laying out storage on enterprise arrays since the dark ages, and one of the first lessons I learned was allocating data to avoid hotspots. I remember spending hours back in the 1990s hunched over custom Excel spreadsheets trying to get my storage layout just right, balancing the workload across every available disk.
Each disk drive consists of a spindle of spinning platters with read/write heads that move back and forth across them. Each time you access a piece of data that’s not in cache, the drive must move its arm over the platter to reach the correct piece of data. Since each drive can access only one piece of data at a time, and since caches can hold only so much data, tuning a system to minimize the number of requests per drive is essential.
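To make the problem concrete, here is a minimal sketch of the kind of arithmetic those spreadsheets did: spread each LUN’s requests evenly across its drives and flag any spindle pushed past its limit. The LUN names, drive names, IOPS figures, and the 180-IOPS ceiling are all illustrative assumptions, not measurements from any real array.

```python
# Hypothetical sketch: estimate per-drive request load for a manual layout.
# LUNs, drives, workloads, and the IOPS ceiling are made-up illustrations.

# LUN -> (IOPS, list of drives the LUN is striped across)
layout = {
    "lun_db":   (900, ["d1", "d2", "d3"]),
    "lun_mail": (300, ["d3", "d4"]),
    "lun_arch": (20,  ["d5", "d6"]),
}

MAX_IOPS_PER_DRIVE = 180  # rough ceiling assumed for one fast spindle

# Spread each LUN's requests evenly over its drives and sum per drive.
load = {}
for iops, drives in layout.values():
    for d in drives:
        load[d] = load.get(d, 0) + iops / len(drives)

for drive, iops in sorted(load.items()):
    flag = "  <-- hotspot" if iops > MAX_IOPS_PER_DRIVE else ""
    print(f"{drive}: {iops:.0f} IOPS{flag}")
```

Even in this toy example, three of six drives end up over their ceiling while two sit nearly idle, which is exactly the balancing act the spreadsheets tried (and usually failed) to solve.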
Manual storage array layout was an art, but we never fooled ourselves into thinking our designs were optimal. There were just too many intractable problems, so we had to compromise at every turn:
- We usually had no performance data to base our layout decisions on, so we had to rely on guesses and rules of thumb
- Workloads tend to change over time and manual layouts are painful to modify
- The smallest unit of allocation was an entire LUN or drive, so even the best disk layout mixed hot and rarely-accessed data everywhere
- Much of the allocated space was unused, so we used expensive disks to store nothing
One might think that, 10 years later, advances in technology would have solved these basic issues. But for many people using many of the so-called modern mainstream enterprise storage systems, these problems remain.
Like all good systems administrators, I’m a natural control freak. I am uncomfortable letting the system manage itself, having been burned too many times by computers (well, software really) making stupid decisions. It’s analogous to the backlash against anti-lock brakes, traction control, and automatic transmissions among racing enthusiasts.
But the time has come to let go. We don’t have to micro-manage storage anymore, and we have much to gain by letting the array do the work:
- Just as traction control can manage each wheel independently, something a driver could never do, modern virtualized storage systems can allocate small “chunks” to the optimal drive type, creating a better layout than anyone could manage with LUNs
- Dynamic optimization technology can move these chunks around, adapting as loads change (see the sketch after this list)
- Thin provisioning can go a step further, refusing to waste drive capacity on space that is allocated but never used
- Wide striping and post-RAID storage systems have a higher threshold before performance suffers due to spindle hotspots
- Widespread availability of tiered storage, including advanced caches, solid state drives, high-performance SAS and FC, and cheap bulk disks, gives us many more options
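To illustrate what chunk-level dynamic optimization might do under the hood, here is a minimal sketch that greedily parks the hottest chunks on the fastest tier and spills the rest downward. The chunk IDs, tier names, capacities, and access counts are made-up assumptions; no vendor’s actual placement algorithm is this simple.

```python
# Hypothetical sketch of chunk-level tiering: place the hottest chunks on the
# fastest tier until it fills, then spill to the next tier down. All names
# and numbers are assumptions for illustration only.

chunks = {  # chunk id -> accesses observed in the last interval
    "c01": 5400, "c02": 4100, "c03": 90, "c04": 12,
    "c05": 3800, "c06": 7,    "c07": 2500, "c08": 3,
}

tiers = [  # (tier name, capacity in chunks), fastest first
    ("ssd", 2),
    ("fc_15k", 4),
    ("sata", 100),
]

# Sort chunks hottest-first, then fill each tier in order.
placement = {}
hot_first = sorted(chunks, key=chunks.get, reverse=True)
i = 0
for name, capacity in tiers:
    for chunk in hot_first[i:i + capacity]:
        placement[chunk] = name
    i += capacity

for chunk in sorted(placement):
    print(f"{chunk} ({chunks[chunk]:>5} accesses) -> {placement[chunk]}")
```

The point is the unit of placement: a small chunk, not a whole LUN, which is why the array can adapt to shifting loads better than any manual layout could.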
As I mentioned, not all systems have these capabilities, and not all implementations are created equal. I’m concerned about misuse of thin provisioning, for example, but it’s hard to argue with its effectiveness in many circumstances. Find out how granular your system’s allocation is – some remain LUN-only, while others are much more effective, using tiny chunks.
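As a back-of-the-envelope illustration of why that granularity matters, here is a sketch comparing the physical capacity consumed by LUN-only allocation versus small-chunk allocation for the same 10 GB of scattered data. All sizes are assumptions chosen for the example, not any vendor’s actual allocation unit sizes.

```python
# Hypothetical illustration of allocation granularity: a 100 GB volume
# holding 10 GB of scattered data. Sizes are assumptions for the example.
import math

MB = 1024 ** 2
GB = 1024 * MB
VOLUME_SIZE = 100 * GB
writes = [200 * MB] * 50  # fifty scattered 200 MB regions = 10 GB of data

def consumed(unit_size):
    """Physical capacity allocated when each written region is rounded up
    to whole allocation units (regions assumed not to share units)."""
    if unit_size >= VOLUME_SIZE:  # LUN-only: the entire volume is allocated
        return VOLUME_SIZE
    return sum(math.ceil(w / unit_size) * unit_size for w in writes)

for label, unit in [("LUN-only allocation", VOLUME_SIZE), ("16 MB chunks", 16 * MB)]:
    print(f"{label}: {consumed(unit) / GB:.1f} GB consumed for 10 GB written")
```

Under these assumptions, LUN-only allocation burns the full 100 GB while chunk-level allocation consumes barely more than the 10 GB actually written.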
These new storage automation technologies really become essential once high-dollar flash storage is added to the mix. If you’re paying 30 times more for a flash drive, you want to make sure you’re making the best use of it that you can! Look at IBM’s recently announced SAN Volume Controller (SVC) and solid state drive (SSD) combination, for example: It will almost certainly have fine-grained thin provisioning of SSDs, and should be able to dynamically move data between flash and disk storage and even between different storage arrays, but I still have questions about how granular this capability will be. HDS can do similar things with their USP-V. NetApp’s V-Series NAS systems will do dynamic allocation, thin provisioning, and data deduplication to enable a better return on the flash drive investment. I’d love to see 3PAR, Compellent, Dell/EqualLogic, and HP/LeftHand apply their solid dynamic allocation tech to solid state storage as well!
Then there’s the 800 lb gorilla: EMC. More enterprise SSD has probably been shipped out of Hopkinton than by every other vendor combined, and both the CX and DMX support (optional/expensive) “virtual provisioning” (aka thin provisioning) of flash storage. But EMC’s Optimizer is not widely used, and only migrates entire LUNs based on user input – hardly the kind of dynamic and granular technology needed to optimally use all of that flash storage. I’m sure the company is working on addressing this issue, though. Perhaps it will appear in the DMX-5 announcement we are all expecting this year?
This article can also be found on Gestalt IT: How I Learned to Stop Worrying and Love Storage Automation
Marc Farley says
Stephen, your focus on SSDs is understandable, considering it’s the “NEXT BIG THING FOR STORAGE VENDORS TO SELL,” but how many people really give a rip about SSDs today, given their budget circumstances? For most IT organizations today, it’s just technology for technology’s sake. By comparison, wide striping on a large number of high-RPM disks provides excellent performance for almost everything, with single-threaded transaction processes being the exception. But most transaction applications today are multi-threaded and can take advantage of wide striping. Of course, wide striping across all types of drives provides a more cost-effective approach because you can match data with resources most effectively.
The real killer for most customers is the inability to mix workloads effectively on a storage array. Wide striping provides excellent performance for mixed workload environments, which means customers can consolidate storage onto single arrays as opposed to having specialized arrays for transaction, streaming, email and file/print applications. If you think about all the different application types that people are putting on virtual server platforms, it’s pretty important that storage arrays can handle mixed workloads well.
You mention in your post that you’d like to see 3PAR introduce fine-grained control of SSDs. Of course, fine-grained control of storage resources is our bread and butter – from Thin Provisioning to Wide Striping on Chunklets (I like to think of them as mini-disks). We’re not going to release SSDs that are bulk devices and difficult for customers to leverage, and we’re not anxious to rush them out the door in order to stake our claim with the press or blog-nerds – especially when customers are trying to figure out how to get more done with less expensive resources for the foreseeable future.
sfoskett says
Marc, I’m not focused on SSDs at all, and certainly not because vendors want to sell them. What I said was that technologies like dynamic optimization, thin provisioning, and wide striping are essential if one is to get value from SSDs. But I also said that these technologies are great no matter what type of storage is used!
Perhaps the SSD focus is in the mind of the reader, not the writer? And did you just call me a nerd? Pot kettle black!
Marc Farley says
Did I call you a nerd? Let me think about that… yes, but it’s not exactly fair, and I am wearing a black shirt right now. How about Geek? I know you have a life outside of technology – using technology to enhance a life full of technology, like others of us.
BTW, did you watch the Grammy awards? I thought the Radiohead number with the USC marching band was by far the best mini-show.
the storage anarchist says
For the record, Symm Virtual Provisioning is priced almost identically to Dynamic Provisioning on the USP-V, and in fact very close to the way 3PAR charges for the same functionality. Storagebod might not like paying a little to save a lot, but it’s a misrepresentation of the facts to complain about VP pricing as if it were the only product that carried a price tag.
And watching all the speculation about “DMX5” is fun, especially for those of us who know how Cold (and Hot) you are. While we’re waiting for that future, let’s imagine instead what IBM must be doing to improve the DS8000 back-end efficiency enough to deliver ANY benefit from flash drives! Fast drives in a slow system are still going to be slow!!!
Of course, they could always just put a few flash-laden SVC nodes in front of the DS8000 and call it a day!
No wait – that won’t work for mainframes…
sfoskett says
Of course you are right about pricing. Add-on software and features are expensive almost everywhere – I just love how EqualLogic includes everything with the base cost of the array, but this drives that base cost up enough to cost them some sales, I’m sure.
So let’s sum up: Storage automation is good, but expensive. But it’s gotten so good (in general) that everyone should be using it instead of relying on spreadsheets and hopes like I used to a decade ago!
Barry Whyte says
Well, given that the DMX is slower than the DS8000, what does that say? (Unless you have some independent benchmark that shows otherwise you’d like to share?)
You could put SVC nodes in front of anything in the open systems world and get thin provisioning for no extra charge – still cheaper than paying for it on DMX or USP.
Storagezilla says
How much would an SVC config cost me for 64,000 devices? Indeed, how much would SVC cost me to add thin provisioning to an entry-level DMX? You support what, 2,000 devices per I/O group, 8,000 per cluster total?
SVC is only cheap when you’re not operating at scale. You’ve been living off the fat of the land in the midrange market, but when we talk DMX scale, the wheels fly off the SVC cart and the engine drops out.