CephFS on Proxmox Virtual Environment
One of the things I did not set up in my new Proxmox VE cluster was CephFS. To begin with, I had PXE booted my VMs into an installer, but I wanted to experiment with getting Grub to chainload another distribution's shim to load their kernel (spoiler alert: without success). The easiest way I could think of to play with this was to boot from a Fedora install CD, whose Grub has TFTP and HTTP support, and try to chainload a Debian installer. To do this quickly, I wanted to attach an ISO to a new VM (as Fedora isn't available via my PXE server, and I thought this would be convenient in future), which required setting up CephFS to store the ISO.
Ceph Metadata servers
CephFS requires metadata servers (MDS), which I hadn't set up as they're not required for anything else. As with the monitors, I decided to list the servers in a variable and, in the absence of advice on how many to run (like the managers, only one is active at a time), I set up five to match the monitor/manager configuration:
pve_ceph_metadata_servers:
- pve01
- pve02
- pve03
- pve04
- pve05
Adding the MDS is a simple process, modelled on the existing approach for monitors:
- name: MDS are configured
  block:
    - name: Current MDS metadata is known
      become: true
      ansible.builtin.command: /usr/bin/ceph mds metadata
      register: ceph_mds_metadata_out
      changed_when: false  # Read-only operation
    - name: List of configured MDS is known
      ansible.builtin.set_fact:
        pve_configured_mds: >-
          {{
            ceph_mds_metadata_out.stdout
            | from_json
            | map(attribute="name")
            | list
          }}
    - name: Metadata servers are set up
      become: true
      ansible.builtin.command: /usr/bin/pveceph mds create
      when: inventory_hostname not in pve_configured_mds
  when: inventory_hostname in pve_ceph_metadata_servers
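As a sanity check after the play has run, the registered MDS names can be compared against the configured list. This is a sketch I have not run against a live cluster; the task names, the `run_once` usage and the `difference` filter are my own additions, not part of the playbook above:

```yaml
# Sketch only: assumes the same pve_ceph_metadata_servers variable as above.
- name: Current MDS metadata is fetched for verification
  become: true
  run_once: true  # Cluster-wide state, so querying one node is enough
  ansible.builtin.command: /usr/bin/ceph mds metadata
  register: ceph_mds_verify_out
  changed_when: false  # Read-only operation
- name: Every configured metadata server has registered
  run_once: true
  ansible.builtin.assert:
    that:
      # No configured server should be missing from the live metadata
      - >-
        pve_ceph_metadata_servers
        | difference(ceph_mds_verify_out.stdout | from_json
                     | map(attribute="name") | list)
        | length == 0
```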
CephFS
Adding a CephFS called cephfs, largely default settings can be used (this makes it available for backup files, ISO images and container templates):
- name: Status of CephFS storage is known
  become: true
  ansible.builtin.command: /usr/sbin/pvesm status --storage cephfs
  register: pvesm_status_cephfs
  # Returns 255 if storage doesn't exist
  failed_when: pvesm_status_cephfs.rc not in [0, 255]
  changed_when: false  # Read-only operation
- name: CephFS storage is created
  become: true
  ansible.builtin.command: /usr/bin/pveceph fs create --pg_num 32 --add-storage
  when: pvesm_status_cephfs.rc == 255
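Since --add-storage mounts the new filesystem on each node (by convention under /mnt/pve/&lt;storage name&gt;, so /mnt/pve/cephfs here), a follow-up check can confirm the mount appeared. The stat/assert tasks below are a hedged sketch of mine, not part of the playbook above:

```yaml
- name: CephFS mount point status is known
  ansible.builtin.stat:
    path: /mnt/pve/cephfs  # Assumed mount path for a storage named 'cephfs'
  register: cephfs_mount
- name: CephFS is mounted on this node
  ansible.builtin.assert:
    that:
      - cephfs_mount.stat.exists
      - cephfs_mount.stat.isdir
```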
Uploading the ISO
I found it was easiest to navigate to cephfs under one of the nodes in the Proxmox UI and upload an ISO to ISO Images for it to be available to VMs. According to the internet, uploading directly to /mnt/pve/cephfs/template/iso should also work, but I have not tested this.
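If the direct-path route does work, it could be automated with a copy task along these lines. This is as untested as the manual approach it relies on, and the ISO filename is a placeholder:

```yaml
- name: Installer ISO is present on CephFS  # Untested: relies on the direct path working
  become: true
  run_once: true  # Shared storage, so uploading from one node suffices
  ansible.builtin.copy:
    src: fedora-installer.iso  # Placeholder filename
    dest: /mnt/pve/cephfs/template/iso/
    mode: "0644"
```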