Updating my Proxmox Virtual Environment hosts often installs unsigned kernel packages, which renders the system unbootable in a Secure Boot-enabled environment. I modified my update playbook to ensure that the signed versions are installed, as I keep forgetting to check this before rebooting after updates.
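The manual check I keep forgetting is just a package listing. A minimal sketch of it, assuming a Debian-based PVE host (the commented-out commands are what you would actually run there; the `printf` list below stands in for real `dpkg-query` output so the example is reproducible):

```shell
# On the real host you would run:
#   mokutil --sb-state                         # confirm Secure Boot is on
#   dpkg-query -W -f='${Package}\n' 'proxmox-kernel-*'
# A sample package list stands in for the dpkg-query output here:
printf '%s\n' \
  proxmox-kernel-6.8 \
  proxmox-kernel-6.8.12-4-pve \
  proxmox-kernel-6.8.12-4-pve-signed |
  grep -E '^proxmox-kernel-.*-pve$'
# Any line printed is an unsigned kernel that is still installed.
```

With the sample input this prints only `proxmox-kernel-6.8.12-4-pve`: the meta-package lacks the `-pve` suffix and the signed variant ends in `-signed`, so neither matches.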

The revised playbook, in full, is:

---
- hosts: all:!dummy
  tasks:
    - name: Ansible sudo password is retrieved from vault, if known
      delegate_to: localhost
      community.hashi_vault.vault_read:
        # So many things can determine the remote username (
        # ansible_user variable, SSH_DEFAULT_USER environment
        # variable, .ssh/config, etc. etc.) it's safer to use the
        # discovered fact.
        path: kv/hosts/{{ inventory_hostname }}/users/{{ ansible_facts.user_id }}
      register: sudo_pass
      # No password in vault is fine - will just not set it.
      failed_when: false
    - name: sudo password is set for host, if found in the vault
      ansible.builtin.set_fact:
        ansible_become_password: '{{ sudo_pass.data.data.password }}'
      when: "'data' in sudo_pass"
    - name: Updates are installed (apt systems)
      become: true
      ansible.builtin.apt:
        update_cache: true
        upgrade: true
      when: ansible_facts['os_family'] == 'Debian'
    # Proxmox updates keep installing the unsigned kernels in my Secure
    # Boot-enabled environment. Ensure the signed variants are installed.
    - name: Package facts are known
      ansible.builtin.package_facts:
    - name: All non-signed proxmox kernels are replaced with signed variants
      become: true
      ansible.builtin.package:
        name: '{{ item }}-signed'
      loop: "{{ ansible_facts.packages.keys() | select('ansible.builtin.match', '^proxmox-kernel-.*-pve$') }}"

...
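The selection in the final task can be sketched outside Ansible: take the installed package names, keep those matching `^proxmox-kernel-.*-pve$`, and append `-signed` to get the install targets. A stand-in sketch (the package names below are hypothetical examples of `ansible_facts.packages.keys()`):

```shell
# Hypothetical installed-package names:
pkgs='proxmox-kernel-6.8
proxmox-kernel-6.8.12-4-pve
pve-manager'
# Select unsigned PVE kernels and map each to its signed variant, mirroring
# select('match', '^proxmox-kernel-.*-pve$') plus name: '{{ item }}-signed':
echo "$pkgs" | grep -E '^proxmox-kernel-.*-pve$' | sed 's/$/-signed/'
# → proxmox-kernel-6.8.12-4-pve-signed
```

Note that `match` anchors at the start of the string only, which is why the pattern carries its own `$` anchor to exclude the already-signed packages.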