Cheatsheet

Show hardware names and interfaces of network cards in the system

This command will show the hardware name of all network devices in the system together with the interface names used by ifconfig:

pciconf -lv | grep -A1 -B3 network

Check if RAM is running in ECC mode

Run the following command and check if the RAM slot is running with Total Width: 72 bits:

dmidecode -t memory

Show available users on a system

pw usershow -P -a

Show available temperature readings

To show the temperatures of, for instance, an AMD Ryzen processor, first load the kernel module and then query sysctl:

kldload amdtemp
sysctl -a | grep temperature

You can also load the module automatically at startup by adding the following line to /boot/loader.conf:

amdtemp_load="YES"

Troubleshooting

Too many open files


To set the maximum number of open files manually, add the following lines to /etc/sysctl.conf:

kern.maxfiles=<max_files>
kern.maxfilesperproc=<max_files_per_proc>

Exemplary values are:

kern.maxfiles=2000000
kern.maxfilesperproc=400000

You can run service sysctl restart to load the new settings; however, you will have to reboot before the change is visible with ulimit -a or via:

sysctl -a | grep files

Read only file system in Single User Mode

See Single User Mode.


Post Installation Tasks

Load necessary drivers

Drivers may need to be loaded manually for special hardware in the system. This can be done as follows:

Loading Drivers


If special PCIe devices like network cards are installed in the system it may be necessary to load the corresponding drivers. If the drivers for this hardware are already available in FreeBSD they can be loaded via an addition to /boot/loader.conf.

A list of the available drivers for a given release of FreeBSD can be found on the official website. Take a look at the 14.0 Release drivers here.

To load for instance the driver for Mellanox ConnectX network cards add the following line to /boot/loader.conf:

mlx4en_load="YES"

Make ZFS datasets

It is advisable to create ZFS datasets for directories of which you will probably want fallback versions in the future. One of these directories could be, for instance, /usr/local.

See the following documentation on how to create a ZFS dataset.

Create dataset

First take a look at the currently available datasets by using the following command:

zfs list

If the dataset is not already present create a new one using this command:

zfs create <pool_name><path_to_directory>

To create the exemplary directory from above use the following command:

zfs create zroot/usr/local

Make first snapshot of your datasets to roll back when needed

If you create a snapshot of the root datasets directly after installation it will be possible to roll back in case something goes wrong during setup of the server. To create a snapshot follow this example:

Create a Snapshot

To create a ZFS snapshot simply use the snapshot command on the target dataset. To also snapshot descendant file systems, use the -r option like this:

zfs snapshot -r <path_to_dataset>@<snapshot_name>

An example could look like this:

zfs snapshot -r pool/home@yesterday

Info

Snapshots can also be renamed using zfs rename. So there is no need to destroy and recreate a snapshot because of a typo.
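If something does go wrong during setup, the dataset can be returned to the snapshot state with zfs rollback. A minimal sketch, reusing the exemplary pool/home@yesterday snapshot from above (run as root on the server; not runnable outside a system with this pool):

```shell
# Roll the dataset back to the state captured in the snapshot.
# -r destroys any snapshots newer than the rollback target, so use it with care.
zfs rollback -r pool/home@yesterday
```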


Setup periodic tasks


Periodic tasks include for instance automatic scrubbing. These periodic tasks are configured in the /etc/periodic.conf file. This file has to be created and can initially be copied over from the default file /etc/defaults/periodic.conf.

Also make sure to set the value for daily_output at the top of the file. The given absolute path or space-separated mail addresses are used to store or send the output of the scripts run by the daily tasks configuration.
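For example, to mail the daily output to root, the top of /etc/periodic.conf could contain a line like this (the address is only an illustration):

```shell
# In /etc/periodic.conf: send the daily report to the root mailbox.
daily_output="root"
```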

Also take a look here on how to set up automatic scrubs:

Setting up automatic scrub for ZFS pools

By default in FreeBSD, scrubs are scheduled every 35 days, but scrubbing is disabled. This is configured in /etc/defaults/periodic.conf with the following lines:

# 800.scrub-zfs  
daily_scrub_zfs_enable="NO"  
daily_scrub_zfs_pools=""                        # empty string selects all pools  
daily_scrub_zfs_default_threshold="35"          # days between scrubs  
#daily_scrub_zfs_${poolname}_threshold="35"     # pool specific threshold

To enable automatic scrubbing copy the default periodic.conf to /etc:

cp /etc/defaults/periodic.conf /etc/periodic.conf

Then enable daily_scrub_zfs_enable by setting it to "YES".
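The resulting line in /etc/periodic.conf should then look like this; alternatively, sysrc(8) can set the value non-interactively (shown as a comment, as a sketch):

```shell
# In /etc/periodic.conf:
daily_scrub_zfs_enable="YES"
# Or set it non-interactively:
# sysrc -f /etc/periodic.conf daily_scrub_zfs_enable=YES
```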

Info

It is important to check at which time the daily, weekly and monthly periodic tasks are scheduled. If the server is not running at that time, the tasks will not be executed. The tasks are scheduled in the /etc/crontab file.

To check when the last scrub ran and whether it repaired any errors, this command can be used. Note that if no scrub has been performed so far, no scrub information will be displayed:

zpool status <pool_name>


Single User Mode

When in single user mode, many system services are not started by default. Also, the file system will be mounted read-only. To edit configuration files, the file system first has to be remounted with write access. This can be done as follows, depending on the file system:

UFS

mount -u /
mount -a -t ufs

ZFS

mount -u /
zfs mount -a

Manage SSH Daemon

To start or restart the SSH daemon use the following command:

FreeBSD

/etc/rc.d/sshd <start/restart/stop>

If you want it to start automatically upon system startup create the following entry in the /etc/rc.conf file:

sshd_enable="YES"

It is advantageous to label disks so that they can be identified even after removing and adding them again to a system.

Identifying Disks

Source: https://forums.freebsd.org/threads/best-practice-for-specifying-disks-vdevs-for-zfs-pools-in-2021.79161/

In order to identify disks uniquely, even after they were swapped to different SATA / ATA ports, two main methods can be used. However, depending on which method is used, the disks will also be mounted under the corresponding name. This means that if a disk is mounted in /etc/fstab as /dev/ada3p1, it will be mounted under this device name, and even if it has a GPT label associated with it, it will not be available as /dev/gpt/<label>.

Using Disk IDs

It is possible to use the individual disk IDs instead of the /dev/adaX nomenclature when identifying disks. The disk IDs and GPT IDs have to be enabled in /boot/loader.conf:

kern.geom.label.disk_ident.enable="1"  
kern.geom.label.gptid.enable="1"

These disk IDs are unique and will help to uniquely identify your disks. However, they are not easy to remember and will not help you find a disk in your system without checking the serial number every time. Thus, it can be helpful to define custom labels for each disk. This can be done using GPT labels as explained below.
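Once these labels are enabled and the system has been rebooted, the stable names appear as device nodes. A quick way to check, assuming the GEOM label settings above are active on a FreeBSD system:

```shell
# Stable, hardware-derived names appear under these directories:
ls /dev/diskid /dev/gptid
# glabel also lists all active labels together with their providers:
glabel status
```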

Using GPT Labels

Source: https://forums.freebsd.org/threads/labeling-partitions-done-right-on-modern-computers.69250/

Another method to uniquely identify disks, even when they were removed and added again to the system, is to create GPT labels. These GPT labels can be defined manually and can therefore be used to define disk labels that, for instance, show the physical location of the drive in an enclosure.

Assuming, for instance, there is a disk ada0 and a disk ada1 in the system that should be labelled for use with ZFS, the following commands can be used to create a GPT partition on the whole disks. After that they are labelled with a name showing their location in the system:

gpart create -s gpt ada0
gpart create -s gpt ada1
gpart add -t freebsd-zfs -l backup-left ada0
gpart add -t freebsd-zfs -l backup-right ada1

If you are using this method of identifying your disks it is advisable to disable the other methods so that only the custom labels will be used. To do so add the following lines to /boot/loader.conf:

kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
kern.geom.label.ufsid.enable="0"

To visualize existing GPT labels use the following command:

gpart list

Setting up NFS shares


An NFS share can be set up in FreeBSD quite easily. First, all corresponding services have to be started at startup of the FreeBSD system. This can be configured in the /etc/rc.conf file like this:

# NFS setup  
rpcbind_enable="YES"  
mountd_enable="YES"  
nfs_server_enable="YES"  
nfsv4_server_enable="YES"  
nfsuserd_enable="YES"
nfs_server_flags="-u -t -n 8"

Once this is set up, a share can be exported using these steps:

Export NFS share

To export an NFS share the exports configuration file has to be adjusted. On a FreeBSD server the /etc/exports file could look something like this:

V4: /data
/data/testshare -maproot=0:10

This specific configuration shares the testshare directory of the data share via NFSv4 and uses -maproot=0:10 to map the client's root user to UID 0 and GID 10 on the server, so root on a client can edit the share files. All other users can only edit files whose owner UID matches their own UID.

This means if you want non-root users to edit files in a share, it is necessary that the UIDs of the file owner on the server and of the user on the client match.
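To verify the match, compare the numeric UIDs on both machines with id(1). root is UID 0 on every system, which is why the -maproot option above covers the root case:

```shell
# Print the numeric UID of a user; run this on server and client and
# compare the numbers for the account that should write to the share.
id -u root    # root is always UID 0
```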

Export in FreeBSD

To setup a NFS share on a FreeBSD server take a look at Setting up NFS shares.

Info

Make sure to set up the nfs_server_flags correctly on FreeBSD machines, otherwise the NFS shares will cause problems if they are used for appdata storage of Docker containers or similar. Start with the exemplary configuration in Setting up NFS shares.


After these configuration steps have been done, the nfsd daemon can be started (if not already running) and the mountd daemon reloaded:

service nfsd start
service mountd reload


Set up Mailing


To set up mailing via SMTP the file /etc/dma/dma.conf has to be adjusted. To set it up with a Gmail account the following has to be added to the file:

SMARTHOST smtp.gmail.com
PORT 587
AUTHPATH /etc/dma/auth.conf
SECURETRANSFER
STARTTLS
MASQUERADE <gmail_username>@gmail.com

Then the authentication file /etc/dma/auth.conf has to be set up as follows:

<gmail_username>@gmail.com|smtp.gmail.com:<password>

To get the Gmail password follow the instructions in Gmail SMTP Setup and get an App password.

To test the setup send an exemplary mail with this command:

echo this is a test | mail -v -s testing-email <recipient_mail_address>

Send a mail via preconfigured file

Mails can also be preconfigured in a file when using the sendmail command like this:

sendmail -t -i <recipient_mail_address> < <file_name>

The content of the file should look like this:

From: <from_mail_address>  
Subject: <subject>  
<mail_content>
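The template above can be written with a heredoc and then piped to sendmail. A sketch with placeholder addresses and file name; a To: header and a blank line before the body are added here, since sendmail -t reads the recipients from the headers and the blank line separates headers from content:

```shell
# Write the mail template to a temporary file; all addresses are placeholders.
cat > /tmp/statusmail.txt <<'EOF'
From: admin@example.com
To: ops@example.com
Subject: server status report

Everything is running normally.
EOF

# Send it via the configured dma (commented out, as it needs a working setup):
# sendmail -t -i < /tmp/statusmail.txt
```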