Waves of innovation and waves of companies crash on the storage market, yet the same incumbent leaders and product lines survive for decades. Are things changing? It’s hard to see sometimes, but real progress has been made.
The difference between traditional compression and modern data deduplication is somewhat hazy. And it doesn’t help that various implementations fall all along the spectrum from “mildly interesting” to “cutting edge!”
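To make that spectrum concrete, here is a minimal sketch (in Python, with invented sample data) of the core distinction: compression shrinks redundancy *within* a file, while deduplication stores identical chunks only once *across* files. The fixed 4 KB chunk size is an assumption for illustration; real products typically use variable-size chunking.

```python
import hashlib
import zlib

# Hypothetical sample "files": the second largely repeats the first.
files = {
    "report_v1.txt": b"quarterly results " * 500,
    "report_v2.txt": b"quarterly results " * 500 + b"appendix " * 50,
}

CHUNK = 4096  # fixed-size chunks; real engines often use variable-size chunking

def dedup_size(blobs):
    """Store each unique chunk once, keyed by its SHA-256 digest,
    and return the total bytes actually stored."""
    store = {}
    for data in blobs:
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            store[hashlib.sha256(chunk).hexdigest()] = len(chunk)
    return sum(store.values())

raw = sum(len(d) for d in files.values())
compressed = sum(len(zlib.compress(d)) for d in files.values())
deduped = dedup_size(files.values())

print(f"raw: {raw}  compressed per-file: {compressed}  deduplicated: {deduped}")
```

Because the two files share their first 8 KB, deduplication stores those chunks once; compression, applied per file, still wins big here only because the sample data is so repetitive.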
The latest beta of the server version of Microsoft’s forthcoming Windows 8 operating system includes a handy tool related to the new data deduplication feature. DDPEVAL will test a given dataset using the new deduplication and compression engine and report the savings to be expected. And it works even on non-Windows 8 systems!
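For anyone who wants to try it, the basic invocation is simply the tool pointed at a path to evaluate; the target folder below is a made-up example, and exact options may vary between beta builds.

```shell
# DDPEval.exe ships with the Windows 8 Server beta; point it at the
# volume or folder you want sized up (E:\Shares\Projects is hypothetical).
DDPEval.exe E:\Shares\Projects
```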
Tomorrow, I will be in San Francisco for TechTarget’s Storage Decisions conference. This show does a good job on the editorial side, suggesting timely topics and bringing in folks like Dennis Martin, Mark Staimer, and Jon Toigo. I will have two presentations on data reduction and storage virtualization in the main conference track – both are updated from my New York sessions.
Native Format Optimization (NFO) makes a lot of sense, since it addresses a common user error in a practical way, and allows capacity savings to “trickle down” to backups, e-mail systems, and archives. But wholesale compression and deduplication of primary storage may not be worth much, especially since the cost of disk keeps dropping dramatically.
Later this month, I will be heading to New York for TechTarget’s Storage Decisions conference. I will have two presentations on data reduction and storage virtualization in the main conference track. Registration is free for qualified end-users, and I urge you to attend on September 19 and 20, 2011.
Next month, I will be heading to Chicago for TechTarget’s Storage Decisions conference. This show does a good job on the editorial side, suggesting timely topics and bringing in independent voices like Howard Marks. I will have three presentations to give: Sessions on data reduction and storage virtualization in the main conference track, as well as a dinner discussion focusing on controlling the growth of data. Registration is free for qualified end-users, and I urge you to attend.
Today, IBM alerted the world that they had not fallen asleep at the wheel by kicking out an awfully impressive midrange storage array, the Storwize V7000. This seems like an excellent device, filled with proven engineering borrowed from the successful SAN Volume Controller (SVC) line of storage virtualization products. But closer examination (and IBM’s own Tony Pearson) reveals that it contains exactly nothing from their Storwize acquisition apart from the name.
I don’t usually excerpt large amounts of text from other blogs. But this is just too cool. UNIX nerds and Mac OS X weenies alike will either shake their heads and jump out a window or laugh out loud at one of the under-reported changes in Snow Leopard. See, Snow Leopard’s version of HFS+ allows […]
One of the great ironies of storage technology is the inverse relationship between efficiency and security: Adding performance or reducing storage requirements almost always results in reducing the confidentiality, integrity, or availability of a system. Many of the advances in capacity utilization put into production over the last few years rely on deduplication of data. […]
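One way to see the tension between deduplication and confidentiality is the classic confirmation side channel: a content-addressed store that reports dedup hits lets anyone who can observe those hits test whether a guessed plaintext already exists. This toy store (class name and data are my own illustration, not any vendor's design) sketches the idea.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: each chunk is kept once, keyed by hash."""

    def __init__(self):
        self._chunks = {}

    def put(self, data: bytes) -> bool:
        """Store a chunk; return True if it was already present (a dedup hit)."""
        key = hashlib.sha256(data).hexdigest()
        existed = key in self._chunks
        self._chunks[key] = data
        return existed

store = DedupStore()
store.put(b"confidential payroll 2011")

# Side channel: observing dedup hits reveals whether a guessed
# plaintext is already in the store.
print(store.put(b"confidential payroll 2011"))  # True  -> data exists
print(store.put(b"a wrong guess"))              # False -> not present
```

The efficiency win (store once) is exactly what leaks the information, which is the inverse relationship the post describes.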