I recently added a mirror to a very simple ZFS pool and decided to document it here for posterity. This worked flawlessly on a FreeNAS 10 system with two 4 TB drives.
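For reference, the underlying operation boils down to a single zpool attach. Here is a minimal sketch, wrapped in Python purely for illustration; the pool name (tank) and device names (ada0, ada1) are placeholders rather than the ones from my system, and FreeNAS would normally drive this through its web UI:

```python
# Hypothetical sketch: convert a single-disk ZFS pool into a two-way mirror
# by attaching a second drive. All names below are placeholders.
import subprocess

POOL = "tank"            # existing single-disk pool
EXISTING_DISK = "ada0"   # disk already in the pool
NEW_DISK = "ada1"        # blank disk of equal or greater size

# "zpool attach <pool> <existing-device> <new-device>" turns the existing
# vdev into a mirror and kicks off a resilver automatically.
subprocess.run(["zpool", "attach", POOL, EXISTING_DISK, NEW_DISK], check=True)

# The pool stays online during the resilver; check progress with zpool status.
subprocess.run(["zpool", "status", POOL], check=True)
```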
ZFS should have been great, but I kind of hate it: ZFS seems to be trapped in the past, before it was sidelined as the cool storage project of choice; it’s inflexible; it lacks modern flash integration; and it’s not directly supported by most operating systems. But I put all my valuable data on ZFS because it simply offers the best level of data protection in a small office/home office (SOHO) environment. Here’s why.
Hard disk drives encounter errors from time to time, so it’s a good thing that most have the ability to recover the data anyway. But RAID systems usually have their own error recovery capabilities, and they can be thrown off when a hard disk pauses I/O to attempt its own lengthy recovery. So when building a RAID system, it’s a good idea to use hard disk drives that allow you to disable or limit their internal error recovery.
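On drives that support it, the knob in question is SCT Error Recovery Control (marketed as TLER, ERC, or CCTL depending on the vendor). Here is a minimal sketch of checking and capping it with smartctl, wrapped in Python for illustration; the device path and the 7-second timeouts are my assumptions, not values from the post:

```python
# Hypothetical sketch: cap a drive's internal error-recovery time so it
# reports the failure quickly and lets the RAID layer handle the bad sector.
import subprocess

DEVICE = "/dev/ada0"  # placeholder device node

# Show the drive's current SCT Error Recovery Control settings.
subprocess.run(["smartctl", "-l", "scterc", DEVICE], check=True)

# Limit read and write recovery to 7.0 seconds (values are in tenths of a
# second); drives without SCT ERC support will reject the command.
subprocess.run(["smartctl", "-l", "scterc,70,70", DEVICE], check=True)
```

On many drives this setting is volatile and resets at power-up, so it has to be reapplied at boot.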
I’ve dabbled with FreeNAS in the past and had such a great experience with pfSense (a similar FreeBSD-based project) that I jumped in with both feet on my home office server build. But my initial impressions were, frankly, terrible. I’ve got the system running and stable now, but I’m finding it difficult to recommend FreeNAS at this point.
A few years back, I wrote an immensely popular series of blog posts outlining the four things that were holding storage system performance back and the ways to fix them. At the time, I created some presentation content to go along with these posts and even considered pulling them into a white paper, but nothing came of that. Now, however, I’m pleased to announce that my Four Horsemen are accompanying me to the stage on November 10, 2015 at the DeltaWare Data Solutions Emerging Technology Summit in Edina, Minnesota.
Waves of innovation and waves of companies crash on the storage market, but the same incumbent leaders and product lines survive for decades. Are things changing? It’s hard to see sometimes, but real progress has been made.
Data storage has always been one of the most conservative areas of enterprise IT. There is little tolerance for risk, and rightly so: Storage is persistent, long-lived, and must be absolutely reliable. Lose a server or network switch and there is the potential for service disruption or transient data corruption, but lose a storage array (and thus the data on it) and there can be serious business consequences.
Hard disk drives keep getting bigger, meaning capacity just keeps getting cheaper. But storage capacity is like money: The more you have, the more you use. And this growth in capacity means that data is at risk from a very old nemesis: unrecoverable read errors (UREs).
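To put numbers on it, here is a quick back-of-the-envelope calculation. The error rate (one unrecoverable error per 10^14 bits read, a common consumer-drive spec) and the 12 TB figure are illustrative assumptions, not numbers from the post:

```python
# Rough URE math: how likely is a full read of a large drive or array to
# hit at least one unrecoverable read error? Figures are illustrative.
URE_RATE = 1e-14        # spec: one unrecoverable error per 1e14 bits read
CAPACITY_BYTES = 12e12  # 12 TB to read end to end (e.g. a rebuild)

bits_read = CAPACITY_BYTES * 8
expected_errors = bits_read * URE_RATE   # ~0.96 expected UREs

# Treating each bit as an independent trial, the chance of a clean read is
# (1 - p)^n, roughly exp(-expected_errors).
p_clean = (1 - URE_RATE) ** bits_read

print(f"Expected UREs over the read: {expected_errors:.2f}")
print(f"Chance of at least one URE:  {1 - p_clean:.0%}")   # ~62%
```

By this math, a full read of 12 TB is more likely than not to trip over at least one unreadable sector, which is why the published error rate matters more and more as capacities grow.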
This year for my Truth in IT seminars, I’m shifting away from virtualization to focus on enterprise storage once again. But this won’t be any ordinary “storage 101” seminar. Rather than trying to talk about every element, I’m focused on what’s new!
Virtualization is a disruptive technology in every sense of the word. By abstracting and simplifying physical resources, virtualization enables dynamic utilization. But this “translation” from physical to virtual disrupts the assumptions that enable performance and flexibility of physical devices such as storage arrays.