In the previous post, I talked about how the Drobo uses metadata monitoring to solve the telephone game and make de-allocation possible. But that approach is challenging in complex enterprise environments. Instead, most enterprise arrays use a complex chain of semaphores to interpret signals from the connected hosts about the capacity that can be un-provisioned.
On the storage side, arrays can only use the information they have to decide what to de-allocate: the data that's stored on them. They don't know which application is using that data or which file system wrote it. They know nothing about its meaning at all.
But, somewhere along the line, someone had a big idea and said, “wait a second, what if we look for pages that are all zeros?” We’ll talk about pages a bit later, but for now, let’s talk about zeros. A zero is kind of a smoke signal coming up from over the hills that says, “there’s nothing valuable here.”
So the storage array watches for pages that are all zero and reclaims them. As protection against making a stupid mistake (what if you actually wanted to write all zeros?), anybody who asks for a page that has been reclaimed just gets all zeros back.
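The mechanism above can be sketched in a few lines. This is a toy model, not any vendor's implementation: the class name, page size, and dictionary-backed storage are all my own illustration. The key behaviors are the ones just described: an all-zero write de-allocates the page, and a read of a reclaimed (or never-written) page just gets zeros back.

```python
PAGE_SIZE = 4096  # illustrative; real arrays use a variety of page sizes

class ThinVolume:
    """Toy thin-provisioned volume with inline zero page reclaim."""

    def __init__(self):
        self.pages = {}  # page number -> bytes, kept only for non-zero pages

    def write(self, page_no, data):
        assert len(data) == PAGE_SIZE
        if data.count(0) == PAGE_SIZE:      # all zeros: nothing valuable here
            self.pages.pop(page_no, None)   # reclaim any existing allocation
        else:
            self.pages[page_no] = data

    def read(self, page_no):
        # Reclaimed and never-written pages both read back as zeros, so the
        # host can't tell the difference -- which is exactly the safety net.
        return self.pages.get(page_no, bytes(PAGE_SIZE))

    def allocated_pages(self):
        return len(self.pages)
```

Note that writing a page of zeros and reclaiming a page are indistinguishable from the host's point of view, which is what makes the "stupid mistake" protection work.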
Most of the major vendors support this kind of zero page reclaim. This is good stuff. I don't want to sound too critical of them, because I appreciate that they implemented at least this much.
The problem is that those zeros rarely get written in the first place. Almost no operating system writes zeros over deleted space; a delete typically just updates file system metadata and leaves the old data sitting there. If operating systems actually wrote pages of zeros on delete, thin provisioning would work great.
So what do the storage vendors do? They come up with utilities that write zeros!
NetApp has SnapDrive, which can zero out empty space so that the Filer can go and recover it. You run it whenever you want, and eventually the storage array notices the zeroed space and reclaims it. Compellent and Symantec's Veritas Storage Foundation have something like that, too. On Windows you can force zeros to be written with Microsoft's SDelete utility, and VMware ESX can write zeros as well, as it does when creating eagerzeroedthick virtual disks.
Zero page reclaim is pretty straightforward. It doesn't take a lot of computing power: you're not watching the file system for changes or anything. All you're doing is occasionally scanning for pages full of zeros and de-allocating them. So you can run it as a post-process, much like de-duplication.
There are quite a few issues with zero page reclaim, though:
- Almost nothing actually writes zeros to deleted space
- Most implementations are page-based, so a single non-zero byte anywhere in a page keeps the whole page from being reclaimed
- It drives more IO through the system, not less
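A quick back-of-the-envelope calculation makes the third point concrete. A normal delete touches only file system metadata, while a zero-fill "delete" rewrites every freed byte. The numbers below are illustrative assumptions, not measurements:

```python
def zeroing_io(freed_bytes, metadata_write_bytes=4096):
    """Compare the IO cost of a metadata-only delete (size assumed here
    to be one 4 KiB block) with a zero-fill delete of `freed_bytes`."""
    zero_fill = metadata_write_bytes + freed_bytes
    return {
        "normal_delete_bytes": metadata_write_bytes,
        "zero_fill_bytes": zero_fill,
        "amplification": zero_fill / metadata_write_bytes,
    }

# Freeing 100 GiB this way means pushing roughly 100 GiB of zeros
# through the host, the fabric, and the array's write path first.
```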
This last is the biggest problem, really. In most enterprise storage environments, IO performance is a scarcer resource than capacity. Offer people all the capacity they could possibly want or all the performance they could possibly want, and most would pick performance. It used to be capacity, but now it's all about performance: if infrastructure folks could get one for free and had to pay for the other, they would definitely pay for performance.
And zero page reclaim, the way it's implemented with SDelete or with eagerzeroedthick, drives tons of IO. A delete effectively becomes a write, because all those zeros have to travel over the bus to the array. But there's a way around that, too. And that's the topic for the next piece in this series.