I often receive storage devices for review, but it’s unusual that two such similar ones arrive at once. After giving each a fair amount of testing and use, I come away unimpressed.
Storage protocols continue to mimic direct-attached storage, with the concepts of block and file at their core. No amount of virtualization, and no new protocol, will fix this – we need a storage revolution.
When Apple announced the new MacBook Pro at the end of February, there were just two Thunderbolt peripherals featured: the LaCie Little Big Disk and the Promise Pegasus. Both storage devices were on display at the NAB Show in Las Vegas last week, and each appeals to a different market segment. The 2-drive Little Big Disk is a portable unit matched with the MacBook Pro, while the Promise Pegasus is a 4- or 6-drive desktop RAID system. Promise expects to deliver the Pegasus to market sometime after the summer.
LaCie looks to be the first out of the gate with a Thunderbolt storage system. They promise to deliver their Little Big Disk portable RAID storage device sometime this summer, and the polished look of the devices on display at the NAB Show suggests that they will meet this target.
Many storage challenges focus on the conflict between data management, which demands an ever-smaller unit of management, and storage management, which benefits most from consolidation. Developing data management capability that is both granular enough for applications and scalable enough for storage is one key to the future of storage.
I spent last week tying up loose ends before Tech Field Day 5 in San Jose. It’s going to be a great event, with presentations by Symantec, Drobo, Xangati, NetEx, InfoBlox, HP, and a new company making their US launch! In the meantime, I am working hard to wrap up the Small Enterprise Storage Array Buyers’ Guide for DCIG and continuing my regular work – spreading the word about state-of-the-art IT! I’ve been researching VMware extensively, and building a home lab server, in preparation for my Storage for Virtual Servers seminar, too.
Prognostication is a perilous business, but pundits are drawn to the topic in the month of December. The fact that most predictions fall on their faces demonstrates the intoxicating mix of hope, dreams, and irrationality that mark both geniuses and fools. I am neither, so I like to make predictions after the fact! But this year I’ve been asked to look to the future, so I’ll stick with the safe road and pick current trends rather than guessing what I hope will come.
Wrapping up this week, here are my shared items.
Perhaps the previous discussion of spindles left you exhausted, imagining a spindly-legged centipede of a storage system, trying and failing to run on stilts. The Rule of Spindles would be the end of the story were it not for the second horseman: Cache. He stands in front of the spindles, quickly dispatching requests using solid state memory rather than spinning disks. Cache also acts as a buffer, allowing writes to queue up without forcing the requesters to wait in line.
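The two jobs described above – serving reads from solid-state memory and queuing writes so requesters don't wait on the spindles – can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation; the names (`SpindleCache`, `CACHE_SIZE`) and the LRU eviction policy are assumptions for the example.

```python
from collections import OrderedDict

CACHE_SIZE = 4  # blocks the (assumed) solid-state cache can hold

class SpindleCache:
    """Toy cache standing in front of slow spinning disks."""

    def __init__(self, disk):
        self.disk = disk              # backing store: block number -> data
        self.cache = OrderedDict()    # LRU read cache in fast memory
        self.write_queue = []         # buffered writes, flushed to disk later

    def read(self, block):
        if block in self.cache:             # cache hit: no spindle touched
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.disk[block]             # cache miss: slow spindle access
        self._insert(block, data)
        return data

    def write(self, block, data):
        # Write-back: acknowledge immediately, let the spindles catch up.
        self._insert(block, data)
        self.write_queue.append((block, data))

    def flush(self):
        # Drain queued writes to the spinning disks.
        for block, data in self.write_queue:
            self.disk[block] = data
        self.write_queue.clear()

    def _insert(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > CACHE_SIZE:
            self.cache.popitem(last=False)  # evict least recently used
```

A `read` that hits the cache never touches the disk at all, and a `write` returns before the disk is updated – exactly the two ways cache hides spindle latency.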
Why do some data storage solutions perform better than others? Mechanical performance, RAM caching, I/O capacity, and the intelligence of the system all have a part to play. Today we examine the Rule of Spindles: Adding more disk spindles is generally more effective than using faster spindles.
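A back-of-envelope calculation shows why the rule holds. The per-drive random-IOPS figures below are rough, assumed values for illustration only (not measurements): swapping every spindle for a faster one buys at most a modest multiplier, while aggregate IOPS scales linearly with spindle count.

```python
# Rough, assumed per-drive random-IOPS figures (illustrative only).
IOPS_7200_RPM = 75     # typical 7,200 rpm SATA drive
IOPS_15000_RPM = 175   # typical 15,000 rpm SAS drive

# Faster spindles: at best roughly a 2.3x gain per drive.
speed_gain = IOPS_15000_RPM / IOPS_7200_RPM

# More spindles: aggregate IOPS grows linearly with drive count.
four_drives = 4 * IOPS_7200_RPM      # 300 aggregate IOPS
twelve_drives = 12 * IOPS_7200_RPM   # 900 aggregate IOPS, a 3x gain

print(f"speed gain: {speed_gain:.1f}x, "
      f"spindle gain: {twelve_drives / four_drives:.0f}x")
```

Tripling the spindle count outruns the fastest rotational upgrade available, and there is no mechanical ceiling on how many spindles you can add.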