I installed 2 more Proxmox servers using the same process I used to set up the first one, and this post is my notes on adding them to the cluster. Note that I already created the cluster during the first node’s setup, although at the time it was a single-node “cluster”.

After the initial install, I added the Ceph and Proxmox no-subscription repositories and disabled the Proxmox enterprise repository. I then manually edited the apt sources files (/etc/apt/sources.list and /etc/apt/sources.list.d/*) and pointed them at my local mirror. Since I was already at the root prompt, I manually applied the updates with apt-get dist-upgrade and installed Ceph with apt-get install ceph (to pre-empt any issues with Proxmox resetting the repository configuration when it tries to install Ceph as part of joining the cluster).
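For reference, the repository changes amounted to something like the following sketch, assuming Proxmox VE 8 on Debian bookworm and the Quincy Ceph release (the exact file and suite names vary by version, and the download.proxmox.com URLs were subsequently swapped for my local mirror):

    # Disable the enterprise repositories by commenting out their deb lines
    sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list
    sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/ceph.list

    # Add the no-subscription Proxmox and Ceph repositories
    echo 'deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription' \
        > /etc/apt/sources.list.d/pve-no-subscription.list
    echo 'deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription' \
        > /etc/apt/sources.list.d/ceph-no-subscription.list

    # Apply updates and pre-install Ceph
    apt-get update
    apt-get dist-upgrade
    apt-get install ceph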

I also added all of the nodes to each node’s hosts configuration (under “System” -> “Hosts” in the web interface).
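The end result is each node having entries for all of the nodes in /etc/hosts, along these lines (the names and addresses here are invented for illustration):

    192.168.10.21 pve1.example.org pve1
    192.168.10.22 pve2.example.org pve2
    192.168.10.23 pve3.example.org pve3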

I had already set up a cluster on the first node, so on the two new ones I went into “Cluster” and clicked “Join Cluster” in the web interface. The encoded join information can be copied from the first Proxmox system and pasted in, then the root password for that system entered in the relevant box. Click “Join pve” and wait for the node to join.
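The same join can be done from a shell on the new node instead of the web interface, which looks roughly like this (pve1 standing in for the name or address of the existing cluster node):

    # On the new node, join the existing cluster via a current member
    pvecm add pve1

    # Confirm the node now shows up in the cluster
    pvecm status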

For some reason my second node failed to join the cluster cleanly (I think because I did not run through installing the updates and rebooting first), so I decided to expel it from the cluster and reinstall. The first thing to do is to shut it down: the Proxmox documentation on this process makes it very clear that the node must not be powered on again once it has been removed from the cluster, or the cluster will end up broken.

After powering off the node, it can be removed from a shell on another node with the command pvecm delnode pve2 (where pve2 is the name of the node; the list of cluster nodes can be found by running pvecm nodes). Once deleted from the cluster, it can be reinstalled and re-added as a new node. I followed the same process with the first node (which had the wrong name and IP address from the first time I set it up) to get the nodes configured as I wanted.
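Put together, the removal looks like this from a shell on one of the remaining nodes (pve2 again being the powered-off node to expel):

    # List the current cluster members
    pvecm nodes

    # Remove the powered-off node from the cluster
    pvecm delnode pve2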

Once the nodes were all shown as available I was able to (for example) migrate my Windows Domain Controller between them (which took a while, as the disk had to be copied from one node’s local storage to the other’s). I did encounter problems with “invalid PVE ticket” errors in the UI, but the root cause seemed to be clock-skew between the physical hosts.
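For the record, the same migration can be kicked off from the command line, and a crude way to spot the clock-skew is simply to compare the clocks. A sketch, with 100 and pve2/pve3 as placeholder VM ID and node names:

    # Live-migrate a VM, copying its local disks to the target node
    qm migrate 100 pve2 --online --with-local-disks

    # Compare the clocks across the nodes to spot skew
    date; ssh pve2 date; ssh pve3 date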

To increase fault-tolerance and reduce migration times, I attempted to continue setting up a Ceph cluster in Proxmox by adding the 2 new nodes as both “Monitor” and “Manager” nodes (under “Monitor” in the “Ceph” part of the UI), as well as adding them as “Metadata Servers” in “CephFS”. However, Ceph was even more sensitive to clock-skew, refusing to tolerate more than a 0.05s difference between the systems, which was difficult to achieve without NTP available on the network, so I parked this until I have NTP available.
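For when NTP is in place, the command-line equivalent of those UI steps is roughly the following, run on each new node (0.05s being, as I understand it, Ceph's default mon_clock_drift_allowed):

    # Add this node as a Ceph monitor, manager and metadata server
    pveceph mon create
    pveceph mgr create
    pveceph mds create

    # Check whether Ceph is still complaining about clock skew
    ceph health detail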