If your shares are not displayed anymore but you are able to locate the share data within the hard drives directly, a restart should resolve the issue and the shares should be visible again.
This bug can appear for instance when a new pool is added.
Move data when target location of user share was changed
If you have changed the target location of a user share, only newly added data will be written to that new location. Old data will remain in the old location.
If the old location is a pool and you have set up regular moving to the array, the data will eventually be written to the array - or you just start the mover manually.
However, if you moved the user share from one array disk to another or if you moved the user share to a pool without the array as secondary location the data will remain in the old location.
In this case you have to manually move the files from the old to the new location.
Attention
Make sure not to copy between user shares and physical shares / drives simultaneously or before unraid was able to recognize changes of the user shares. This can result in data loss!
Generally you can follow this routine to move user share data from one drive to another: Move data between drives
Server can not be accessed after change of network interface
When you change the network interface in any way - like adding or removing network interface cards - you may have to update the network interface settings of unraid.
In this case you may have to boot into GUI mode and adjust the network settings locally on the server. The new network interface has to be set as the main interface; otherwise no network connection can be established.
It can happen that the new network card cannot be selected as primary card because it was not recognized properly. In this case you will have to remove all other network interfaces and force unraid to select the new / primary one as its network interface.
Array drive failed / Replace Array drive
If an array drive has failed and you want to exchange it, you simply have to replace it physically and then set up the new drive in the spot where the old drive was located in the array configuration of unraid.
If you have set up two parity disks and two drives have failed, it is advisable to add the new drives one at a time, especially when one parity and one array drive have failed. Adding the drives one at a time speeds up rebuilding the missing array drive. Thus, the array will be unprotected for a shorter amount of time.
Post Install Tasks
Things that should be done after a fresh installation of unRAID, or after a reset of an existing server, are mentioned in this section. These tasks mainly focus on setting up an unraid server with recommended QoL improvements and a good foundation for a functioning file hosting server.
Setup an array - First of all it is important to check exactly how to set up your storage solution. More information about that can be found in the chapter Storage Management. This will allow you to set up an array for your unraid server. This array is mandatory for most of the services that will run on this server - e.g. Docker Containers or VMs.
Setup shares - When an array is set up it is important to define how your shares should be structured. Note that unraid offers no basic per-user share / folder configuration: if you want separate folders for users that can only be accessed by them, this has to be done by additional software, for example specialized Docker Containers like Nextcloud. The shares you create should therefore make room for all purposes you may have on your server. Some examples are the following:
| Share name | Native | Description |
| --- | --- | --- |
| appdata | x | docker data of your containers |
| system | x | system data |
| isos | x | VM isos |
| domains | x | VM instances |
| git | | git repository storage |
| data_userxyz | | data storage for users |
| backup | | storage for local backups (e.g. for appdata) |
| syslog | | syslog storage for the local syslog server |
Install plugins - Some plugins for unraid are really helpful for standard tasks like setting up cronjobs or maintaining server operation. These plugins are obtained by installing Community Applications. With that plugin you have access to all plugins developed by the community. The following are especially helpful:
| App name | Description |
| --- | --- |
| Fix common problems | Checks for common problems in unraid |
| Enhanced Log Viewer | Better log viewer with highlighting and filter capabilities |
| Mover Tuning | Tune when and what to move from cache to array and vice versa |
| Nerd Tools | Get additional command line tools like rclone easily |
| Unassigned Devices | Makes it possible to mount currently unassigned devices and make them usable |
| User Scripts | Lets you set up custom scripts with custom scheduling |
| Dynamix Stop Shell | Supports array stop by cutting all open terminals |
| Dynamix File Manager | Adds file manager capabilities to unraid webui |
| Dynamix Active Streams | Visualizes the currently open streams from and to your server - e.g. via SMB |
| Appdata Backup | Lets you create automatic backups of your Docker appdata |
Setup Mail notification - Mail notification can be very useful to send error logs or crash notifications that need immediate action. The setup of SMTP within unraid is explained in SMTP configuration.
Setup periodic tasks - Setting up periodic tasks that should be performed by unraid should also be one of the first things to do. Those tasks can include
Setting up rclone tasks if you want to sync specific local or remote folders - via user scripts like in chapter Filehosting for more than one user.
Setting up automatic parity checks. The browser link to the corresponding settings page is /Main/Settings/Scheduler.
Setting up automatic balance and / or scrub tasks for SSD caches.
Setup SSH access - Setting up SSH access can be useful for debugging in the future. This can be done like explained in this chapter: Set up public keys for SSH
Setup Docker Stacks - As one of the last steps Docker stacks / containers can be set up. The first container to set up could be Portainer.
Create bootable USB stick on Linux
Since there is no media creation tool for Linux, bootable USB drives for unRAID have to be created manually.
These are the steps to create such a bootable unRAID USB device on a Linux machine:
Insert USB drive.
Create a msdos partition table. A GPT one will not be able to boot!
Format to FAT (or FAT32) or create a FAT partition.
Download a unRAID release. They are found in their archive.
Extract the zip-file.
Copy the files inside the extracted unRAID directory to the USB device.
Install syslinux.
Navigate to the directory you originally extracted to.
Run the make_bootable_linux script from here (NOT from the USB drive).
If this does not work the first time the script is run, run it again.
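The partitioning and formatting steps can be sketched with parted and mkfs.vfat. This assumes the stick shows up as /dev/sdX (a placeholder, double-check with lsblk!) and wipes the device; unRAID also expects the flash drive to carry the volume label UNRAID:

```bash
# ASSUMPTION: the USB stick is /dev/sdX - verify with lsblk before running!
parted /dev/sdX --script mklabel msdos mkpart primary fat32 1MiB 100%
mkfs.vfat -F 32 -n UNRAID /dev/sdX1   # unRAID expects the volume label UNRAID
```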
It is recommended to use NFS shares as network storage for images of virtual machines or containers, because NFS supports symbolic links while SMB does not. This can lead to problems with mounting VM or container images when using SMB shares to host them.
In Proxmox especially, hosting images on an SMB share will not work.
An NFS share has no user / password authentication. Instead, you have to set a connection rule on the NFS server. This rule allows a specified IP address to write to the share; reading is however allowed to anyone in the network. The rule looks like this:
<client_ip_address>(sec=sys,<rw/ro>)
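For example, a rule granting read/write access to a single client at the hypothetical address 192.168.1.50 would look like this:

```
192.168.1.50(sec=sys,rw)
```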
For more information on network shares take a look at Network Shares.
syslog Server
When using unraid the built in syslog server can be used. This will log all incoming logs to corresponding log files. The evaluation of those files however has to be done manually.
More information on syslog can be found here: Systemlog
Plugins
Dynamic Cache Directories
Warning
Only use Dynamic Cache Directories when you really want to stop your hard drives from spinning up.
The plugin will increase the CPU usage significantly and can easily draw more power than the hard drives would need when spinning - if you have just a few hard drives in the array. This plugin is therefore not suitable for power efficiency.
Nerd tools
Nerdtools can be used to install packages like python on the unRAID server.
python packages
To install python packages however a few more steps are necessary. First pip has to be installed in order to install packages successfully.
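A minimal sketch of bootstrapping pip, assuming python3 itself is already installed through NerdTools (ensurepip ships with python's standard library, so no network is needed for this step):

```shell
# bootstrap pip from python's own standard library
python3 -m ensurepip --upgrade || true  # no-op if pip is already present or distro-managed
# verify pip is now available
python3 -m pip --version
# packages can then be installed the usual way, e.g.:
# python3 -m pip install <package>
```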
Create an authorized_keys file for the unraid server containing the id_rsa.pub files of all machines which require access.
Copy this file to the unraid servers /root/.ssh/ folder.
This will only work until a reboot, since unraid always resets the root directory! For a persistent setup there are at least two ways, explained below.
Change root SSH configuration
The best / the official way to maintain the keys across reboots is to
copy the authorized_keys file to /boot/config/ssh/root.pubkeys
copy /etc/ssh/sshd_config to /boot/config/ssh
modify /boot/config/ssh/sshd_config to set the following line:
AuthorizedKeysFile /etc/ssh/%u.pubkeys
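Put together, the steps above boil down to the following commands (paths as on a standard unraid install):

```bash
cp /root/.ssh/authorized_keys /boot/config/ssh/root.pubkeys
cp /etc/ssh/sshd_config /boot/config/ssh/sshd_config
# then edit /boot/config/ssh/sshd_config and set:
#   AuthorizedKeysFile /etc/ssh/%u.pubkeys
```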
Setup cronjob for key setup
Another, more brute-force way of copying those key files can be implemented like this:
Copy the authorized_keys file to /boot/config/ssh/.
Add this to the end of your /boot/config/go, using your preferred editor:
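The exact lines are not preserved in these notes; a sketch of what the go additions could look like (the go file runs at every boot, so this restores the keys each time):

```bash
# restore SSH keys on every boot (appended to /boot/config/go)
mkdir -p /root/.ssh
cp /boot/config/ssh/authorized_keys /root/.ssh/authorized_keys
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys
```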
The SSH service can be restarted on slackware Linux with the following command: ![[SSH#Manage SSH Daemon#Slackware / unRAID]]
Filehosting for more than one user
If you want to have data shared between different user shares so that each user can read and write those shared files, a custom setup has to be created. If those files also have to be synced to several client devices, symbolic links will most probably not be viable, since most syncing solutions (like syncthing) ignore symbolic links due to the possibility of data loss if they are not set up correctly.
Thus, different approaches can be used:
sync the target folders locally between the user shares using a cronjob and a local bidirectional syncing tool. This should only be done if the data that has to be synced is not too big, because it will obviously use double the space compared to a more efficient sharing setup.
However, this can be done using the rclone tool, accessible on unraid via the nerdtools plugin. The command to sync two folders looks like this:
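The exact invocation is not preserved here; a sketch with hypothetical share paths (note that the very first bisync run additionally needs --resync to establish its baseline):

```bash
# hypothetical folder paths; add --resync on the very first run
rclone bisync /mnt/user/userA/shared /mnt/user/userB/shared \
    --workdir /mnt/user/appdata/rclone-bisync --verbose
```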
Make sure to set a persistent working directory so that the cache for the two folders that shall be synced is still available after reboots.
To setup a cronjob for this in unraid it is recommended to use the userscript plugin. This way the cronjob will be deployed persistently.
Another possible solution is to create a file share especially for data that is meant to be shared with several user accounts. This is obviously only viable if the data is specifically needed for collaboration and if a great amount of data is to be expected in this share. This way the syncing tool could point directly to this share and distribute it to all users.
Setting up a rclone schedule
To set up rclone like explained before a userscript can be defined that is run every few minutes. A script that would call the rclone bisync command and would also send notifications (Create notification via CLI) in unraid if it fails could look like this:
```bash
#!/bin/bash
if rclone bisync <path_to_folder_A> <path_to_folder_B> --workdir <path_to_persistent_work_dir> --verbose; then
    echo success
else
    # send mail notification
    /usr/local/emhttp/plugins/dynamix/scripts/notify \
        -i warning \
        -m 'The rclone bisync command has failed.' \
        -d 'rclone bisync failed' \
        -e 'rclone bisync failed' \
        -t \
        -r <your_mail_address>
    # create pop up notification in unraid
    /usr/local/emhttp/plugins/dynamix/scripts/notify \
        -i warning \
        -m 'The rclone bisync command has failed.' \
        -d 'rclone bisync failed' \
        -e 'rclone bisync failed'
fi
```
Setting up automatic S.M.A.R.T. Tests
To set up automatic short and extended S.M.A.R.T. tests in unraid it is recommended to create a script run by the userscripts plugin. The script could look like this:
```bash
for i in {b..o}; do
    smartctl --test=<short/long> /dev/sd$i
done
```
However, make sure that spin down of the disks is disabled if an extended test is to be done.
Using SMTP for mail notifications is very easy in unraid. The settings for SMTP can be found in the webui under the URL suffix /Settings/Notifications. SMTP can then be set up to use for instance a Google Gmail account to send mails to a target recipient.
Make sure to setup the credentials correctly if using Gmail as explained in Gmail SMTP Setup.
Create notification via CLI
It can be useful to send notifications from custom scripts, for example to report problems with their execution or to monitor specific information.
To set up a notification via the CLI the following snippets can be used. If the /dynamix/scripts directory is not available make sure to install the corresponding plugin. First a notification only in the unraid webui can be created like this:
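Using the notify script and the same flags that appear in the rclone script above, a webui-only notification could look like this (subject, description, and message are free text; the -t / -r flags would additionally send it via mail):

```bash
/usr/local/emhttp/plugins/dynamix/scripts/notify \
    -i normal \
    -e "my custom script" \
    -d "script finished" \
    -m "The custom script completed successfully."
# add "-t -r <your_mail_address>" to also send the notification via mail
```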
Moving data between drives can be necessary when the mover setup or other configurations are not as expected and data gets moved unintentionally onto specific drives. For instance, if data shall remain on an SSD cache pool but was moved to the array, this data has to be moved back to the cache manually.
However, before moving make sure that the following is true:
make a backup of your data before moving data between disks.
make sure your mover and other configurations are now correct so that the data will not be moved back again.
make sure no user scripts are running that work on the data (read/write/validate) that will be moved, i.e. stop syncthing and Backup scripts.
make sure no one will access the data during moving.
make sure that the user share you try to adjust has both drives you are copying between set up in its share configuration. If you move the data into a drive that is not known by the share the data will not be accessible through the user share.
Warning
If data that is moved manually between disks is accessed through a user share data can irreversibly get lost!
If all the precautions are met you can start moving the files.
To move from one disk to another use the following command:
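The original command is not preserved in these notes; rsync is a common choice for this. A runnable sketch with temporary directories standing in for two array disks (on unraid the real paths would be e.g. /mnt/disk1/myshare and /mnt/disk2/myshare; "myshare" is a hypothetical share name):

```shell
# temp dirs stand in for two array disks
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/myshare/docs" "$dst/myshare"
echo "hello" > "$src/myshare/docs/a.txt"
echo "cfg"   > "$src/myshare/.hidden"   # a hidden file in the share root
# the trailing /* copies the *contents* of the share folder, keeping the
# folder structure identical on both disks - but it does NOT match hidden files
rsync -avh "$src"/myshare/* "$dst"/myshare/
ls -A "$dst/myshare"   # docs arrived; .hidden must be copied separately
```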
This command will copy data from one drive to another. Make sure that the path within the drive is the same for source and destination. Otherwise your folder structure will not be identical after moving.
Also make sure to type in the asterisk at the end of the source path. Otherwise you will copy the parent folder into the target folder and the folder structure will not align.
Make also sure that the hidden files in the root folder of the share are moved, since the asterisk does not match them!
Set up blacklist for mover
With the mover tuning plugin and a custom script it is possible to set up a blacklist of files that should not be moved by the mover.
This script needs to run before every mover start, which can be configured in the mover tuning plugin. The script consumes a file in which all directories that should not be moved are defined, and from it creates a list of all files they contain.
This list can then be added to the mover tuning plugin too.
The initial proposal from the forum post looks like this:
```bash
#!/bin/bash
# (https://forums.unraid.net/topic/70783-plugin-mover-tuning/)
# This script enables the Mover Tuning plugin for Unraid to be able to exclude files and/or folders.
# Script activity can be viewed in Unraid's system log (Tools -> System Log).
# First, it reads lists of directories and files from provided text files.
# Then, it converts any Windows line endings to Unix line endings in these lists.
# Next, it checks if the paths listed in the files are valid directories or files.
# Finally, valid file and directory paths are compiled into a single output file for use by the Mover Tuning plugin.
#
# Mover Tuning plugin settings:
# "Ignore files listed inside of a text file" - set to Yes
# "File list path" - set to output file path (mover_tuning_exclusions.txt)
# "Script to run before mover (No checks, always runs)" - set to the script file path (mover_tuning_exclusions.sh)
# "Move Now button follows plug-in filters" - set to yes
#
# Note: Ensure this script file has execute permission and the correct owner and group IDs.
# Set permission and owner/group IDs with the following commands:
# chmod +x /path/to/this/script.sh
# chown 99:100 /path/to/this/script.sh

# Script folder path
SCRIPT_FOLDER="/mnt/user/appdata/misc"

# Temporary file for intermediate processing
TEMP_FILE="$SCRIPT_FOLDER/mover_tuning_temp.txt"

# File containing the list of directories to recursively search for files (one per line)
EXCLUDED_DIRECTORIES_LIST="$SCRIPT_FOLDER/mover_tuning_excluded_directories.txt"

# File containing a list of individual files to exclude, each with a complete path (one per line)
EXCLUDED_FILES_LIST="$SCRIPT_FOLDER/mover_tuning_excluded_files.txt"

# Output file that will contain the complete, validated list of excluded files
OUTPUT_EXCLUSIONS="$SCRIPT_FOLDER/mover_tuning_exclusions.txt"

# Function to clean up temporary file on exit
cleanup() {
    rm -f "$TEMP_FILE"
}
trap cleanup EXIT

# Function to convert line endings and validate paths
convert_line_endings_and_validate_paths() {
    local file_list=$1
    local check_type=$2 # 'file' or 'directory'
    local line_number=0
    # Convert Windows line endings to Unix line endings
    sed -i 's/\r$//' "$file_list"
    while IFS= read -r path || [ -n "$path" ]; do
        ((line_number++))
        if [ "$check_type" = "file" ] && [ -f "$path" ]; then
            echo "$path" >> "$TEMP_FILE"
        elif [ "$check_type" = "directory" ] && [ -d "$path" ]; then
            find "$path" -type f -print >> "$TEMP_FILE"
        else
            echo "script: Error: Skipping invalid $check_type path, line $line_number in $(basename "$file_list")"
        fi
    done < "$file_list"
}

echo "script: Generating exclusions list..."

# Process excluded files list if it exists
if [ -f "$EXCLUDED_FILES_LIST" ]; then
    echo "script: Validating $EXCLUDED_FILES_LIST"
    convert_line_endings_and_validate_paths "$EXCLUDED_FILES_LIST" "file"
else
    echo "script: Warning: Skipping excluded files, cannot find $EXCLUDED_FILES_LIST"
fi

# Process excluded directories list if it exists
if [ -f "$EXCLUDED_DIRECTORIES_LIST" ]; then
    echo "script: Validating $EXCLUDED_DIRECTORIES_LIST"
    convert_line_endings_and_validate_paths "$EXCLUDED_DIRECTORIES_LIST" "directory"
else
    echo "script: Warning: Skipping excluded directories, cannot find $EXCLUDED_DIRECTORIES_LIST"
fi

# Remove blank lines and move the temporary file to the final output
sed '/^$/d' "$TEMP_FILE" > "$OUTPUT_EXCLUSIONS"

# Signal completion of the process with a message
echo "script: Generated exclusions list $OUTPUT_EXCLUSIONS"
```
Create blacklist to keep cache full to a specified level
This script creates a list of the newest files that together add up to a specified usage level of the cache. These files will then not be moved if the list is configured as blacklist.
This way the cache will always hold the newest data but keep a specified space free for new data.
Exemplary script
```bash
#!/bin/bash
# Define variables
TARGET_DIR="/mnt/cache/data"
OUTPUT_DIR="/mnt/user/appdata"
OUTPUT_FILE="$OUTPUT_DIR/moverignore.txt"
MAX_SIZE="500000000000" # 500 gigabytes in bytes
EXTENSIONS=("mkv" "srt" "mp4" "avi" "rar")

# Ensure the output directory exists
mkdir -p "$OUTPUT_DIR"

# Cleanup previous temporary files
rm -f "$OUTPUT_DIR/temp_metadata.txt" "$OUTPUT_DIR/temp_filtered_metadata.txt"
rm -f "$OUTPUT_FILE"

# Step 1: Change directory to the target directory
cd "$TARGET_DIR" || exit

# Step 2: Find files with specified extensions and obtain metadata (loop through extensions)
for ext in "${EXTENSIONS[@]}"; do
    find "$(pwd)" -type f -iname "*.$ext" -exec stat --printf="%i %Z %n\0" {} + >> "$OUTPUT_DIR/temp_metadata.txt"
done

# Step 3: Sort metadata by ctime (second column) in descending order
sort -z -k 2,2nr -o "$OUTPUT_DIR/temp_metadata.txt" "$OUTPUT_DIR/temp_metadata.txt"

# Step 4: Get the newest files up to the specified size limit
total_size=0
processed_inodes=()
while IFS= read -r -d $'\0' line; do
    read -r inode ctime path <<< "$line"
    # Skip if the inode has already been processed
    if [[ "${processed_inodes[*]}" =~ $inode ]]; then
        continue
    fi
    size=$(stat --printf="%s" "$path")
    if ((total_size + size <= MAX_SIZE)); then
        echo "Processing file: $total_size $path" # Debug information to screen
        total_size=$((total_size + size))
        # Mark the current inode as processed
        processed_inodes+=("$inode")
        # Step 4a: List hardlinks for the current file
        hard_links=$(find "$TARGET_DIR" -type f -samefile "$path")
        if [ -n "$hard_links" ]; then
            echo "$hard_links" >> "$OUTPUT_FILE"
        else
            echo "$path" >> "$OUTPUT_FILE"
        fi
    else
        break
    fi
done < "$OUTPUT_DIR/temp_metadata.txt"

# Step 5: Cleanup temporary files
rm "$OUTPUT_DIR/temp_metadata.txt"

echo "File list generated and saved to: $OUTPUT_FILE"
```