It can happen that an LXC container crashes. This is common, for instance, when the backend storage server was shut down while a dependent LXC container was still running.
Such a container can then no longer be shut down from the web UI and has to be killed via the command line interface.
To kill such a container, search for the respective processes with this command:
ps ax | grep lxc
Take a look at all processes that contain the ID of the crashed LXC container, e.g. 101. Then kill all those processes via their PIDs:
kill -9 <PID>
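Alternatively, the matching processes can be killed in one go with pkill. This is only a sketch assuming the crashed container has the ID 101; check the matches with pgrep first, since the pattern may also catch unrelated processes:
pgrep -af "lxc.*101"
pkill -9 -f "lxc.*101"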
Can’t get Shell from the web UI
If the shell cannot be opened from the web UI because of an error, this is most probably due to missing quorum.
Either try to set the quorum to one using the following command via SSH, or try to start another node to regain access to a shell:
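On the node that is still reachable this is typically:
pvecm expected 1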
Configure quorum
If you expect one or more of the nodes of your cluster to be offline regularly, you should adjust the number of nodes that need to agree on a command.
If, for instance, two nodes are in a cluster, the default is that both nodes have to agree on a command. Thus, if one of them is offline, no command can be executed.
To change the number of nodes expected to agree on a command, use this command:
pvecm expected <number_of_nodes>
For two nodes, set the number of nodes to 1, for instance. Now one node alone can execute commands without needing quorum with the other node.
Info
Updating the quorum count only applies to the current session. After the next restart this configuration has to be done again.
The most important information about the current cluster state can be visualized with the following commands:
pvecm status
ha-manager status
Install Proxmox on a Raspberry Pi / ARM device
To install Proxmox on an ARM device a port repository can be used. For more information about that check out Setup Proxmox on Raspberry Pi.
Pre Install Tasks
Enable IOMMU if not available automatically
Enabling IOMMU support can be done like in the following chapter:
Enabling IOMMU
To enable IOMMU - an essential virtualization feature for the passthrough of PCIe devices - it can be necessary to edit the GRUB configuration of Proxmox so that the feature is enabled during startup.
First of all, the CPU and the motherboard have to support virtualization (e.g. VT-d / VT-x for Intel CPUs) and IOMMU. This has to be enabled in the BIOS, since it is mostly deactivated by default.
If Proxmox cannot find the IOMMU feature, the GRUB configuration has to be edited. To check whether IOMMU is available, the following command can be used:
dmesg | grep -e DMAR -e IOMMU
It should output something like this:
[ 0.028730] DMAR: IOMMU enabled
If this is not the case, the GRUB configuration located at /etc/default/grub has to be edited.
The file should be edited to include the following line. Make sure to adjust it based on the CPU manufacturer (Intel / AMD):
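Depending on the platform, the added kernel parameter typically looks like one of the following; this is a sketch of the common Intel / AMD variants and has to be merged into your existing GRUB_CMDLINE_LINUX_DEFAULT:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"
Afterwards apply the change with update-grub and reboot.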
If you cannot use your IOMMU groups after enabling the feature because several devices end up in one group that should not all be passed to a virtual machine, try the PCIe ACS override as described in this chapter: Split up IOMMU Groups
Sending mails from a Proxmox server directly will only work if you have an IP address that is listed as authorized to send mails by your ISP, or if you host your very own mail server that doesn't care about blocklists. Gmail for instance uses Spamhaus to block incoming mails from unauthorized IPs (you can check listed IPs with this link). Standard consumer IPs are normally not authorized at all! Thus only relaying will work for mail delivery.
To set up mail relaying for Proxmox, complete these steps as root:
apt-get install postfix mailutils libsasl2-2 ca-certificates libsasl2-modules
Note: Choose “internet site” and the other default options if prompted with questions in the terminal. Run dpkg-reconfigure postfix if you need to reconfigure postfix again.
Create your password file with vi /etc/postfix/sasl_passwd
Populate the password file. Example: [smtp.gmail.com]:587 [email protected]:mypassword
Secure the file by running chmod 600 /etc/postfix/sasl_passwd
Replace the contents of the config file by running vi /etc/postfix/main.cf
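For the Gmail example above the relay-related settings in main.cf would typically look like this sketch; the relayhost and the CA file path are assumptions that may differ on your system:
relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt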
Encode password file by running postmap /etc/postfix/sasl_passwd
Restart postfix service by running systemctl restart postfix.service
Replace [email protected] with your email in the following code and test sending mail: echo "Test mail from postfix" | mail -s "Test Postfix" [email protected]
Give Google a minute to process. You should see the sent mail in the sent folder of your Gmail account and in the inbox of the specified destination account. If the mail doesn't arrive, check cat /var/mail/root, cat /var/log/mail.log or other places depending on your distribution for errors.
You can also check the configuration using the following commands. The first one should display the postfix configuration including the location of your sasl_passwd file, and most importantly the second command should show the login info for your relay mail server (e.g. [email protected]:password).
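Assuming the configuration sketched above, the two checks could for instance be:
postconf -n | grep sasl
cat /etc/postfix/sasl_passwd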
The startup and shutdown schedule of VMs and containers that shall start automatically at boot can be defined in their options.
The order of the startup / shutdown schedule can be set there. The VM / container with the lowest number will start up at first and shut down at last.
If there are additional processes that take time after the startup of one of the first VMs / containers, such as mounting network shares, you can add a time delay after startup. The VM / container that is scheduled next will then wait for the given amount of time before starting. This way it can be ensured that specific processes have time to finish after startup.
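For a QEMU VM the same settings can also be applied on the command line with qm set; this is a sketch with an assumed VM ID of 101, startup order 1 and a delay of 120 seconds:
qm set 101 --startup order=1,up=120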
Add pam user
First install sudo on proxmox if not already done:
apt install sudo
Then create the new user you want to add:
adduser <username>
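If the new user should also be allowed to use sudo, it can be added to the sudo group; this optional step assumes the default Debian group name:
usermod -aG sudo <username>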
Now you can add the user to the users table in your Proxmox GUI, too. Then assign the permissions your new user shall have.
Schedule a system sleep period
If the server should not run all the time a sleep phase can be scheduled. This can be done via the command explained here:
If the hardware supports it, a PC or server can be sent to sleep for a given time with the following command:
/usr/sbin/rtcwake -m off -s <sleep_time_in_seconds>
This can of course be used as a cronjob if it should be done regularly:
# m h dom mon dow command
30 00 * * * /usr/sbin/rtcwake -m off -s <sleep_time_in_seconds>
Depending on the system, it can be the case that the off state is not supported for RTC startups. This is for instance true for most ARM systems. For those it can work to use a different suspend method like mem or freeze:
rtcwake -m freeze -s <sleep_time_in_seconds>
To find out which suspend methods are supported use:
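On most Linux systems the available states can be read from sysfs, for example:
cat /sys/power/state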
Update network configuration
In case the Proxmox machine is upgraded with a new motherboard or NIC, it is most probably necessary to update the network configuration.
This will have to be in the CLI since there will be no connection to the network at first.
To find out the currently installed NIC use this command:
ip link show
Or this command:
ip addr
To edit the current interface configuration update the file /etc/network/interfaces.
To load the new configuration, use one of the following commands.
The command ifreload -a is enough to update the configuration of the Proxmox machine. The systemctl call, however, completely restarts the interfaces so that a new IP will be requested from the router. Thus, it helps with getting a correct IP from the DHCP server. However, do not forget to update the MAC address of your new network interface if the DHCP server assigns static IP addresses based on MAC addresses.
ifreload -a
systemctl restart networking
Example network configuration
An example network configuration for a server with dual NIC and link aggregation looks like this:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp6s0 inet manual

iface enp7s0 inet manual

iface enx1628575feb94 inet manual

iface enp1s0 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves enp6s0 enp7s0
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet dhcp
    gateway 10.1.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
The interface bond0 defines the bonded interface of the two NICs enp6s0 and enp7s0. The link aggregation is set up via bond-mode 802.3ad.
It is not recommended to change the name of a node if there are already VMs or CTs set up!
If your host is even part of a cluster, please consult the documentation!
To change the name of a node several files have to be changed manually.
First of all, /etc/hosts and possibly /etc/hostname have to be updated. All occurrences of the old node name have to be replaced by the new one.
If the node is part of a cluster it is strongly recommended not to rename it. However, if you still want to go on, also update the file /etc/pve/corosync.conf.
Also move all files from the old node name path /etc/pve/nodes/<nodename> to the corresponding path with the new node name. Make sure to move those files and not copy them. If there are VMs or CTs installed on the node, Proxmox will not allow copying their configuration files, to make sure they stay unique.
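As a sketch, with oldnode and newnode as hypothetical node names (the directory for the new name is created by the cluster filesystem once the hostname has been changed):
mv /etc/pve/nodes/oldnode/* /etc/pve/nodes/newnode/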
Migrate VM between systems (no cluster)
Migrating a VM from one Proxmox system to another without setting up a cluster can be done using the backup and restore features.
First create a backup of the VM on the source system and copy it to the target system using scp. The backups are generally stored in this path on Proxmox machines:
/var/lib/vz/dump/
The backup will probably be of the .zst type.
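As a sketch, copying such a backup with scp could look like this, with the file name and target host as placeholders:
scp /var/lib/vz/dump/vzdump-qemu-<vmid>-<timestamp>.vma.zst root@<target_ip>:/var/lib/vz/dump/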
After it was copied over to the new system it should be found in the local storage section under Backups.
The backup can then be restored with all its configurations.
A cluster in Proxmox can be used to connect several nodes so that they can be configured and used from one central point with shared resources like a central storage. Many additional features of Proxmox can also only be used if a cluster is set up.
Generally a cluster has to consist of at least three nodes to be able to use those advanced cluster features. However, two nodes can also be connected to a cluster. Just make sure that you always configure the quorum based on the number of your nodes and their uptime - see Configure quorum.
To set up a cluster the node that shall be added must not have any VMs or CTs installed, since it is mandatory that all virtual guests - as VMs and CTs are called by Proxmox - have a unique number as their ID. Thus, it is advisable to only add nodes to the cluster that have a completely fresh install of Proxmox.
Furthermore, all nodes have to have a static IP address set in their own network configuration. How to update the network configuration can be seen in Update network configuration. It must not be set to DHCP!
Info
If the network interface of the cluster nodes is not properly configured it can not be selected in the cluster setup process.
Info
Also make sure to update the IP addresses of configured hosts in /etc/hosts. This will make sure the cluster configuration is consistent and successful.
The configuration for vmbr0 should look something like this:
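A minimal sketch of such a static vmbr0 configuration, reusing the bridge and the example addresses from this article (address, gateway and bridge-ports are assumptions to adjust to your setup):
auto vmbr0
iface vmbr0 inet static
    address 10.1.10.180/24
    gateway 10.1.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0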
It is important to have a static interface with a defined address. You can also still set a static IP for the nodes in the DHCP service on your router, to make sure that the device is defined properly in the router's network configuration as well.
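Once the network is prepared, the cluster itself can be created and joined from the command line as well; this is only a sketch with placeholder names and IPs, and the same can be done via the web UI:
pvecm create <cluster_name>
pvecm add <ip_of_existing_cluster_node>
The first command is run on the initial node, the second on each node that joins.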
Setup Quorum Device (qdevice)
A qdevice is a device that does not run Proxmox itself but only the voting part, to enable clusters with an even number of nodes to have a valid quorum situation.
This qdevice also does not need to be x86, a powerful machine or even run a specific OS. Thus, it is possible to use for instance Raspberry Pis for this purpose.
First the necessary packages have to be installed on the qdevice. This is assuming it runs Debian:
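On Debian the external vote daemon is provided by the corosync-qnetd package, so the installation typically is:
apt install corosync-qnetd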
On the Proxmox nodes as well the following package has to be installed:
apt install corosync-qdevice
To add the new qdevice it is necessary to set up RSA keys for the SSH connection between the nodes. The Proxmox nodes need root access to the qdevice. Thus, make sure to add the id_rsa.pub to the authorized keys of the root account on the qdevice and to enable root login via key authentication.
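A sketch of distributing the key from a Proxmox node, with <qdevice_ip> as a placeholder and assuming root login on the qdevice is allowed:
ssh-copy-id root@<qdevice_ip>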
Now add the new qdevice via this command on one of the Proxmox nodes:
pvecm qdevice setup <qdevice_ip> -f
Check out the cluster status using this command:
pvecm status
If everything worked the membership information of pvecm status should look similar to this:
Membership information
----------------------
Nodeid Votes Qdevice Name
0x00000001 1 A,V,NMW 10.1.10.180 (local)
0x00000002 1 A,V,NMW 10.1.10.190
0x00000000 1 Qdevice