Important note
Before you start, keep in mind that the chosen name will also become the mount point under the filesystem root. So if you call the raidz pool “storagebox”, it will be auto-mounted as /storagebox.
For the official ZFS documentation, see the OpenZFS project website.
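If you’d rather mount the pool somewhere else, you can override the default afterwards. A minimal sketch, assuming the example pool name “storagebox” and an illustrative target path:
zfs set mountpoint=/mnt/storage storagebox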
Basic installation
First install the requirements
dnf install epel-release kernel-devel
dnf install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm
dnf install zfs
Now ensure ZFS gets loaded on boot
echo "zfs" > /etc/modules-load.d/zfs.conf
Create drive
Create a ZFS raidz2 pool (comparable to a RAID6 setup; use raidz1 for a RAID5-like setup)
zpool create <name> raidz2 <devices>
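For example, to build the raidz2 pool “testraid” shown in the statistics below out of five disks (device names are illustrative, adjust them to your system):
zpool create testraid raidz2 /dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf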
Now, if applicable, add a mirrored log device (SLOG); this will be a RAID1-like mirror
zpool add <name> log mirror <devices>
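Continuing the example, this mirrors two additional disks as the log device (again, device names are illustrative):
zpool add testraid log mirror /dev/vdg /dev/vdh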
Statistics
See advanced statistics with the zpool status and zpool iostat commands
zpool status
# zpool status
  pool: testraid
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        testraid    ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            vdb     ONLINE       0     0     0
            vdc     ONLINE       0     0     0
            vdd     ONLINE       0     0     0
            vde     ONLINE       0     0     0
            vdf     ONLINE       0     0     0
        logs
          mirror-4  ONLINE       0     0     0
            vdg     ONLINE       0     0     0
            vdh     ONLINE       0     0     0

errors: No known data errors
zpool iostat
# zpool iostat
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testraid    17.9G  6.60G     63    350  7.88M  50.8M
zpool iostat extended
# zpool iostat -v
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testraid    18.3G  6.20G     63    350  7.82M  50.9M
  raidz2-0  18.3G  6.20G     63    220  7.82M  23.2M
    vdb         -      -     12     43  1.56M  4.65M
    vdc         -      -     12     44  1.56M  4.65M
    vdd         -      -     12     44  1.56M  4.65M
    vde         -      -     12     44  1.56M  4.65M
    vdf         -      -     12     44  1.56M  4.65M
logs            -      -      -      -      -      -
  mirror-4   146M  1.73G      0    130    519  27.7M
    vdg         -      -      0     65    259  13.8M
    vdh         -      -      0     65    259  13.8M
----------  -----  -----  -----  -----  -----  -----
Drive scrubbing
Use this to auto-scrub the pool once a week (here: every Sunday at midnight)
cat << EOF > /etc/cron.d/zfs-check
0 0 * * 0 root /usr/sbin/zpool scrub <name>
EOF
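You can also start a scrub by hand and check its progress; <name> is the pool name you chose earlier:
zpool scrub <name>
zpool status <name>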
Replace a faulty disk
After every command, zpool status will show you what it actually did.
Take the faulty device offline
zpool offline <name> /dev/<faulty disk>
Now remove the disk from the system and insert a new one
Now add the new disk to the ZFS pool. This can take a long time, since ZFS needs to resilver the data onto the new disk in the raidz configuration.
zpool replace <name> /dev/<faulty disk> /dev/<new disk>
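As a concrete sketch, if vdd died in the example pool “testraid” and the replacement shows up as vdi (device names are illustrative), the sequence looks like this; when the new disk appears under the same device name as the old one, the last argument can be omitted:
zpool offline testraid /dev/vdd
zpool replace testraid /dev/vdd /dev/vdi
zpool status testraid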
Expand a pool with new disks (or remove a disk)
Please keep in mind that either operation will cause a lot of data shuffling to restore redundancy.
Add a disk to an existing pool/vdev
zpool attach <name> <vdev> /dev/<new disk>
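For example, to grow the raidz2 vdev of the “testraid” pool with one more disk (raidz expansion requires OpenZFS 2.3 or newer; the device name is illustrative):
zpool attach testraid raidz2-0 /dev/vdj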
Optimizations
See all pool settings
zfs get all <name>
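You can also query just the properties tuned below:
zfs get atime,xattr,compression,recordsize <name>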
Disable atime so the kernel doesn’t update the access time on every read
zfs set atime=off <name>
Store extended attributes directly in the inodes
zfs set xattr=sa <name>
Set compression to LZ4, it’s a win!
zfs set compression=lz4 <name>
Set your record size to either 64K for databases or 1M for movies
zfs set recordsize=1M <name>
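Since recordsize is a per-dataset property, you can also give different datasets different values; a sketch with illustrative dataset names on the example pool:
zfs create testraid/movies
zfs set recordsize=1M testraid/movies
zfs create testraid/db
zfs set recordsize=64K testraid/db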
Troubleshooting
- When creating a new pool, I’ve found that it sometimes fails with the error “disk or device busy” during the partitioning phase. To resolve this, remove all partitions from the disk, reboot, and create a new partition manually. You’ll find that a RAID_MEMBER flag has been set in the GPT or MBR table; remove this flag and then remove the partition you just created. An alternative way to clear the stale signatures is sketched below the list.
- By default, ZFS won’t route asynchronous writes through the log device, causing them to slow down when copying from iSCSI and USB devices. To fix this, run zfs set sync=always <name> (see also the note below).
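For the busy-disk issue, an alternative to manual repartitioning is wiping the stale RAID/filesystem signatures directly; this destroys all signature metadata on the disk, and the device name is only an example:
wipefs -a /dev/vdb
For the sync tuning, you can check the current setting first and revert to the default later if the trade-off doesn’t pay off:
zfs get sync <name>
zfs set sync=standard <name>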