
Cheatsheet

Show ZFS system configuration in FreeBSD

To show the ZFS configuration in FreeBSD, use the following command:

sysctl vfs.zfs

A helpful tool is zfs-stats, which has to be installed first:

pkg install zfs-stats
zfs-stats -a

Prepare creating a zpool

Summary

To summarize the preparation of disks for a new ZFS pool, the following commands can be used. The disk will be prepared using GPT labels. This assumes that the disk already has one partition and was therefore repurposed from another system. The disk is known to the system as da10 and is inserted in a hot swap bay in tray 13. After those commands the disk will be known to the system as gpt/tray13 and can be used as a ZFS disk.

gpart delete -i 1 da10   # delete existing partition
gpart destroy da10   # destroy existing partition table
gpart create -s gpt da10   # create new partition table
gpart add -t freebsd-zfs -l tray13 da10   # create GPT label

Remove previous GPT partitions

Show available storage devices:

FreeBSD

geom disk list

Linux

df -T

If the device was used before and has a GPT table defined, it should first be destroyed. Make sure the device is not mounted before doing this. First delete all existing partitions from the drive. List all current information about the disk using the following command:

gpart list <device_name>

Then delete all partitions using the following command:

gpart delete -i <index> <device_name>

If that does not work because the GPT table is corrupted, first recover it:

gpart recover <device_name>

When all partitions are deleted, destroy the current partitioning scheme:

FreeBSD

gpart destroy <device_name>

Identifying Disks

Source: https://forums.freebsd.org/threads/best-practice-for-specifying-disks-vdevs-for-zfs-pools-in-2021.79161/

In order to identify disks uniquely, even after they were swapped to different SATA / ATA ports, two main methods can be used. However, depending on which method is used, the disks will also be mounted under the corresponding name. This means that if a disk is mounted in /etc/fstab via its device node /dev/ada3p1, it will be mounted under this location; even if it has a GPT label associated, it will not be available in /dev/gpt/<label>.
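For example, an /etc/fstab entry that mounts a partition via its GPT label instead of its device node might look like the following (the label, mount point, and file system are made up for illustration):

```
# Device          Mountpoint   FStype  Options  Dump  Pass
/dev/gpt/tray13   /mnt/backup  ufs     rw       2     2
```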

Using Disk IDs

It is possible to use the individual Disk IDs instead of the /dev/adaX nomenclature when identifying disks. The Disk IDs and GPT IDs have to be enabled in /boot/loader.conf:

kern.geom.label.disk_ident.enable="1"  
kern.geom.label.gptid.enable="1"

These Disk IDs are unique and will help to identify your disks unambiguously. However, they are not easy to remember and will not help you find a disk in your system without checking the serial number every time. Thus, it can be helpful to define custom labels for each disk. This can be done using GPT labels as explained below.

Using GPT Labels

Source: https://forums.freebsd.org/threads/labeling-partitions-done-right-on-modern-computers.69250/

Another method to uniquely identify disks, even when they were removed and added again to the system, is to create GPT labels. These GPT labels can be defined manually and can therefore be used to define disk labels that, for instance, show the physical location of the drive in an enclosure.

Assuming, for instance, there is a disk ada0 and a disk ada1 in the system that should be labelled for use with ZFS, the following commands can be used to create a GPT partition spanning each whole disk. After that, they are labelled with a name showing their location in the system:

gpart create -s gpt ada0
gpart create -s gpt ada1
gpart add -t freebsd-zfs -l backup-left ada0
gpart add -t freebsd-zfs -l backup-right ada1

If you are using this method of identifying your disks, it is advisable to disable the other methods so that only the custom labels will be used. To do so, add the following lines to /boot/loader.conf:

kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
kern.geom.label.ufsid.enable="0"

To visualize existing GPT labels use the following command:

gpart list

Thinking about ZFS configuration

Source:

Alignment Shift / ashift

Source:

The ashift defines how writes are done to your disks. Common sector sizes today are 512 bytes (ashift=9) and 4096 bytes (ashift=12); some new drives even have 8192-byte sectors (ashift=13). However, take that with a grain of salt, since the sector size reported by your disk might not be correct! This is caused by the fact that some operating systems reject sector sizes bigger than a given value - as a result, disks today lie about their actual sector size or even report multiple sizes to choose from.

When choosing an ashift value you should currently default to 4096-byte sectors if you have a mix of 4k and 512-byte sector drives, i.e. always default to the biggest alignment shift. This is because on a drive that has 4k sectors but is used with ashift set for 512-byte sectors, each 512-byte write will cause the drive to read an entire 4k sector, modify just an eighth of it (512 bytes) and write it back. Thus, there is a lot of overhead when writing. The other way around, the system simply breaks a bigger write request up into 512-byte chunks, which is a much less intensive task.
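The relationship between ashift and sector size, and the write amplification described above, can be sketched with a little shell arithmetic (the numbers are purely illustrative):

```shell
# ashift is the base-2 logarithm of the sector size
for ashift in 9 12 13; do
  echo "ashift=$ashift -> $((1 << ashift))-byte sectors"
done

# a 512-byte write on a 4096-byte-sector drive rewrites 8x the data
echo "write amplification: $((4096 / 512))x"
```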

To set the target alignment shift as a default in FreeBSD, add the following line to /etc/sysctl.conf:

vfs.zfs.min_auto_ashift=12

Create a zpool

Create a mirrored zpool:

zpool create <pool_name> mirror <device_name_01> <device_name_02> ...

Remove a zpool:

zpool destroy <pool_name>

Create dataset

First take a look at the currently available datasets by using the following command:

zfs list

If the dataset is not already present create a new one using this command:

zfs create <pool_name>/<path_to_dataset>
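For instance, to create a dataset named backup inside a pool data01 (both names are hypothetical), the command would be:

```
zfs create data01/backup
```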

Show pool status

To check the status of a ZFS pool use this command:

zpool status -v <pool_name>

Replace disk in pool

If there is a problem with a vdev / disk in an existing pool, the disk can be replaced if the pool has redundancy. A disk could for instance be broken; if it is no longer recognized, it will just be shown as REMOVED in the output of zpool status.

To replace such a disk use the following command:

zpool replace <pool_name> <old_disk> <new_disk>

Using GPT labelling the command could look like this:

zpool replace data01 gpt/tray11 gpt/tray01

Set recordsize for dataset

Source:

The recordsize can help to achieve higher performance when using datasets for specific types of data. It defines the size of the blocks that have to be read or written when a file is accessed. If, for instance, a 60 kB file is read on a dataset with 128 kB records, the whole record will have to be read. Files smaller than the recordsize are however stored in a single, correspondingly smaller block.

Thus, big files like movies or ISOs can profit from higher record sizes, while databases can benefit from lower ones - it always depends on the anticipated size of your changes. If the changes to a file will always be very small, for instance, it can help to use a small recordsize.
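As a rough illustration of this trade-off, the number of records touched by a request can be computed from the recordsize (the file size and record sizes are made up):

```shell
# records that must be read to fetch a 60 kB file (ceiling division)
filesize=$((60 * 1024))
for recordsize in $((16 * 1024)) $((128 * 1024)); do
  records=$(( (filesize + recordsize - 1) / recordsize ))
  echo "recordsize=$recordsize bytes -> $records record(s) for a 60 kB file"
done
```

With 16 kB records only the changed parts of a file need to be touched, while a single 128 kB record covers the whole file in one read.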

The following table shows some suggestions for recordsize:

file type                                  | recordsize
databases, logging, webserver              | 16K - 32K
VMs, normal documents, operating systems   | 64K - 256K
pictures, music, videos, ISOs              | 256K - 2M

To set the recordsize on a dataset use the following command (it only affects files written after the change):

zfs set recordsize=<size> <path_to_dataset>

Share via NFS

Source: https://docs.freebsd.org/de/books/handbook/network-servers/#network-configuring-nfs

To share a ZFS pool or dataset via NFS use the following command. Make sure to set the maproot option correctly. It defines which user of the client machine will be mapped to the root user of the host server, so that access rights are correctly mapped. If the client does not connect as root in this example's case, it will not have write access to the share.

zfs set sharenfs="-maproot=root,-network=<ip-range>" <path_to_zfs>
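A concrete invocation might look like this (the pool name and network range are hypothetical):

```
zfs set sharenfs="-maproot=root,-network=192.168.1.0/24" data01/backup
```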

Snapshots handling

Source:

Create a Snapshot

To create a ZFS snapshot, simply use the snapshot command on the target dataset. If descendant file systems should also be snapshotted, use the -r option like this:

zfs snapshot -r <path_to_dataset>@<snapshot_name>

An example could look like this:

zfs snapshot -r pool/home@yesterday

Info

Snapshots can also be renamed using zfs rename. So there is no need to destroy and recreate a snapshot because of a typo.
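A hypothetical rename, fixing a typo in a snapshot name, could look like this:

```
zfs rename pool/home@yesterdy pool/home@yesterday
```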

Remove a Snapshot

To remove an unwanted old snapshot and all descendant ones use the destroy command:

zfs destroy -r <path_to_dataset>@<snapshot_name>

List Snapshots

To show all snapshots use the following command:

zfs list -t snapshot

Rollback to a Snapshot

Source:

Info

When rolling back to a snapshot, the -r option does not roll back all snapshots in a pool recursively; instead it makes it possible to roll back to a snapshot older than the most recent one, destroying all newer snapshots of that dataset in the process. Rolling back all child datasets is not possible with just one rollback command.
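A rollback to the pool/home@yesterday snapshot from the example above could look like this; -r is required if snapshots newer than the target exist, and it destroys them:

```
zfs rollback -r pool/home@yesterday
```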