One of my criticisms of ZFS is that it’s not flexible: Once you set up a pool, you’re pretty much stuck with its layout. But this isn’t entirely true: You can add a mirror to an existing pool without destroying the data. Since this is quite a common operation, I decided to write it up for posterity.
A Very Simple ZFS Pool
Let’s say you built the world’s most basic ZFS pool: A single disk. Then you learned that this is a Very Bad Idea and want to add some data protection. It’s actually quite simple to do, though it does require the command line in FreeNAS.
Here’s our setup:
- We have a single disk attached to our system, /dev/da1
- This disk is used as a simple zfs pool, Tank
- You bought another disk and want to mirror the pool to the new disk, /dev/da2
First, make sure you’ve got the disk names correct. I use dmesg to see the system messages, which usually make it pretty obvious which disk is which. This process is only non-destructive if you specify the correct disk drives. If you’re not sure, do not continue!
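If you want to see what those dmesg lines look like, here’s a sketch. The controller, drive model, and serial number below are made up for illustration; the point is that the model and serial should match the physical drive you just installed.

```shell
# Sketch: dmesg lines for a disk typically look something like this
# (controller, model, and serial number here are illustrative):
sample='da2 at mpr0 bus 0 scbus0 target 2 lun 0
da2: <ATA WDC WD40EFRX 0A82> Fixed Direct Access SPC-4 SCSI device
da2: Serial Number WD-WCC7K1234567'

# Filter for the disk you care about:
printf '%s\n' "$sample" | grep '^da2'
```

On the live system you’d run `dmesg | grep '^da2'` instead of the canned sample.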
Prepare Your Drive
My drive was formatted from the factory, so I had to delete the original partitions:
sudo gpart destroy -F /dev/da2
Then I created a fresh GPT partition table on the drive:
sudo gpart create -s gpt /dev/da2
And created a zfs partition:
sudo gpart add -t freebsd-zfs /dev/da2
This new partition is /dev/da2p1, and I’ll add it to my existing pool in a moment.
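You can sanity-check the result with `gpart show da2`. As a rough sketch (the numbers below are illustrative for a 4 TB disk, not copied from my system), the output should include a freebsd-zfs partition:

```shell
# Sketch: `gpart show da2` output looks roughly like this after the
# steps above (sector counts here are illustrative):
sample='=>        40  7814037088  da2  GPT  (3.6T)
          40  7814037080    1  freebsd-zfs  (3.6T)'

# Confirm a freebsd-zfs partition now exists on the disk:
printf '%s\n' "$sample" | grep -q 'freebsd-zfs' && echo "partition ready"
# prints: partition ready
```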
Get the Partition IDs
Next we need to get the information we’ll feed zfs so it will configure the drives correctly.
We will need the gpt ID of the existing partition as well as the new one we’re adding. The “zpool status” command will show all we need to know about the existing drive, partition, and pool.
$ zpool status
  pool: Tank
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        Tank                                          ONLINE       0     0     0
          gptid/a8675309-b52e-12e6-a3fa-c13ed46d88b1  ONLINE       0     0     0

errors: No known data errors
See that bit after “gptid/”? That’s the unique ID of our existing zfs disk: “a8675309-b52e-12e6-a3fa-c13ed46d88b1”
Next we need to know the ID of the new partition we just created. Here’s an abbreviated output of “gpart list”:
$ gpart list
Geom name: da2
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da2p1
   Mediasize: 4000786984960 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 7d42ec0f-dfb7-12f7-a1fa-c13ee57d68b2
   rawtype: 517e7bbb-7ece-21d5-7fe8-00122d08722b
   label: (null)
   length: 4000786984960
   offset: 20480
   type: freebsd-zfs
   index: 1
   end: 7814037119
   start: 40
Consumers:
1. Name: da2
   Mediasize: 4000787029504 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
Note that I deleted a lot of lines from this output!
The thing you need is the “rawuuid”. In my example, this is “7d42ec0f-dfb7-12f7-a1fa-c13ee57d68b2”
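If you’d rather not eyeball the full listing, a small awk pipeline can pull the rawuuid for you. This is a sketch that assumes gpart’s output format shown above; the sample text is inlined here so the pipeline is self-contained.

```shell
# Extract the rawuuid of partition da2p1 from gpart-style output.
# (Sample text inlined for illustration; in practice, pipe `gpart list da2`.)
sample='1. Name: da2p1
   rawuuid: 7d42ec0f-dfb7-12f7-a1fa-c13ee57d68b2'

printf '%s\n' "$sample" | \
    awk '/Name: da2p1/{found=1} found && /rawuuid/{print $2; exit}'
# prints: 7d42ec0f-dfb7-12f7-a1fa-c13ee57d68b2
```

On the live system: `gpart list da2 | awk '/Name: da2p1/{found=1} found && /rawuuid/{print $2; exit}'`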
Putting It Together
Now we just need to tell zfs that these two disks need to be joined together as a mirrored set. Again, here are the inputs:
- The pool name: Tank
- The existing partition uuid: a8675309-b52e-12e6-a3fa-c13ed46d88b1
- The new partition uuid: 7d42ec0f-dfb7-12f7-a1fa-c13ee57d68b2
So here’s the magic command:
sudo zpool attach Tank /dev/gptid/a8675309-b52e-12e6-a3fa-c13ed46d88b1 /dev/gptid/7d42ec0f-dfb7-12f7-a1fa-c13ee57d68b2
Now zfs will attach the new partition to the existing drive set, mirroring all data from the old drive to the new one. This process of copying data (“resilvering” in zfs parlance) will take a while. You can use the “zpool status Tank” command to see the progress, including an estimated time of completion. In my case (with two 4 TB drives containing a little over 1 TB of data) it took about 3 hours.
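If you want to check for completion from a script rather than by rereading the output, the “scan:” line is the thing to match on. The exact wording varies by version, so treat the sample line below as illustrative:

```shell
# Sketch: while resilvering, `zpool status` includes a line roughly like
# the sample below (wording varies by ZFS version):
status_line='  scan: resilver in progress since Tue Jan  1 10:00:00 2019, 42.0% done'

case "$status_line" in
  *'resilver in progress'*) echo "still resilvering" ;;
  *)                        echo "no resilver running" ;;
esac
# prints: still resilvering
```

On a live system you’d feed it the real output, e.g. `zpool status Tank | grep 'scan:'`, and re-check periodically.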
Note that zpool attach can only turn a single disk (or an existing mirror) into a larger mirror. You can’t grow RAID-Z sets in this manner, even though that would be much more useful. So don’t get your hopes up, but protect your data anyway!