Adding a bastion host - bootstrapping for Ansible and migrating security from SaltStack to Ansible
This post continues the chain of posts that began with trying to get started with Ansible for managing my own infrastructure in October, and continued with working around Ansible not playing nicely with two-factor sudo authentication. It is one of three posts that I split out from the second in the series on 2nd January 2023, and is the blog content I added around 13th December 2022 - describing bootstrapping the monitoring server and migrating the first role from SaltStack to Ansible.
Setting up the bastion
The first challenge is that I have not set up the monitoring host in my lab, for one very simple reason - I only have one Raspberry Pi, which uses an ARM-based processor, and no other general-purpose ARM hardware. As a result, my simple ‘restore the entire system from backup’ method of duplicating the live service will not work.
I began by doing a bare install of Debian on one of the unused lab machines. For now, I selected a simple single-filesystem install (this is how the Raspberry Pi is set up). In the future I should enable encryption (particularly given this host’s new supposed ‘bastion’ status), but I will want to link that with network-based unlocking, which is on my (long) todo list.
As this is an exercise in moving the iPXE configuration to Ansible, I decided that configuring the new host by installing salt-minion and using SaltStack was not the way forward, despite it being the path of least resistance. The live host was set up fairly recently, so I am confident that everything on it comes from its state configuration in salt.
From the salt state and pillar configuration for this host, I can see it has the following salt roles from my states tree, where nesting indicates one role including another (a sketch of how these might be assigned follows the list):
- server
- remotely-accessible (installs and configures fail2ban)
- monitoring.client (installs and configures the icinga2 client)
  - monitoring.common (installs the monitoring-plugins and nagios-plugins-contrib packages, and installs & configures the munin client and nagios’ kernel & raid checks)
- monitoring.server (installs and configures the icinga2 server, php support for nginx, icingaweb2 and the munin server)
  - webserver (installs and configures nginx)
  - monitoring.common (see above)
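For context, these roles are assigned through Salt’s top file. A hypothetical sketch of the relevant entry - the hostname pattern and file location are assumptions, not my actual configuration:

# /srv/salt/top.sls (illustrative only - matcher and path are assumptions)
base:
  'monitoring*':
    - server
    - remotely-accessible
    - monitoring.client
    - monitoring.server

The nested roles (monitoring.common and webserver) are not assigned directly - they are pulled in by include statements inside the roles that contain them.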
As servers go, it is one of the simpler setups (another reason why it is the best candidate from my servers for the bastion).
Bootstrapping with-and-for Ansible
To start, I need to get some basic setup done to enable the host to be managed by Ansible. As I used the network console feature to do the install, Debian installed the ssh server as part of the base OS, but to allow for the playbook being run locally on a machine without ssh, I have included installing it anyway. As the newly installed system only has a password for the user to login with, the host I am running Ansible on needs the sshpass program installed (which I added to my existing admin-workstation role’s list of packages):
apt-get install sshpass
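For completeness, a minimal sketch of how that addition might look in the role - the file path and task shape are assumptions about the role’s layout, not its actual contents:

# roles/admin-workstation/tasks/main.yaml (hypothetical layout)
- name: Install admin workstation packages
  ansible.builtin.package:
    name:
      - sshpass # needed for password-based ssh until keys are deployed
    state: present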
By default (with my usual “as minimal as possible through the installer” package selection), python is not installed on the host being bootstrapped, but it is a prerequisite of Ansible. Likewise sudo, which I am using as my default escalation tool, and acl, which is required to allow escalation to users other than root, are not installed. To prepare the host to run any of my existing (or soon to exist) playbooks and roles, I need to create an initial bootstrap playbook which installs python, sudo and acl, and then adds the user Ansible is running as to the sudo group.
The bootstrap playbook does just the bare minimum to enable other playbooks to be run remotely. It consists of two plays: the first uses the raw module to install python if it is not already present, and the second installs and configures sudo and acl and (if desired) configures the secure shell server (ssh). I made configuring ssh optional (based on whether Ansible’s connection is over ssh) because playbooks can be run locally and I may want to use this to bootstrap systems that do not require remote access.
---
- hosts: all # Limit hosts on the commandline
  gather_facts: no # Facts will fail if python is not yet installed
  tasks:
    - name: Check for and (if required) install python
      # We will set up sudo, so have to presume it is not available until
      # this playbook finishes.
      become_method: su
      become: yes # This task requires root
      # Redirects all which/apt-get output to stderr, so it can be seen if
      # a failure happens but stdout is only 'Present', 'Installed' or
      # 'Failed'
      ansible.builtin.raw: bash -c '(which python3 >&2 && echo "Present") || (apt-get -y install python3 >&2 && echo "Installed") || echo "Failed"'
      register: python_install_output
      # Changed when we had to install python
      changed_when: python_install_output.stdout_lines[-1] == 'Installed'
      # Failed if python wasn't there and (in the logical sense) it
      # didn't install.
      failed_when: python_install_output.stdout_lines[-1] not in ['Installed', 'Present']
- hosts: all
  tasks:
    - block:
        - name: Install sudo
          ansible.builtin.package:
            name: sudo
            state: present
        - name: Install acl package (to allow switching to non-root users)
          ansible.builtin.package:
            name: acl
            state: present
        - name: Add Ansible user to group for sudo
          ansible.builtin.user:
            name: "{{ ansible_facts['user_id'] }}"
            append: yes
            groups:
              - sudo # wheel on RedHat
        - name: Secure ssh server, if connected via ssh
          include_role:
            name: ssh
          vars:
            components:
              - server
          when: ansible_connection == 'ssh'
      # We will set up sudo, so have to presume it is not available until
      # this playbook finishes.
      become_method: su
      become: yes # Everything in this block requires root
...
For the ssh role, I have taken some inspiration from a blog post I found about the Sensu Go roles. The default value for the components argument is just to install the client - in this case, I only want the server, so I passed that as the only item in the list of components to install/configure. Internally, the role:
- Installs the secure shell server
- Creates a group for ssh users
- Sets the sshd_config file’s ownership to root and denies access to all other users (to prevent users being able to inspect the security posture)
- Adds the ansible user to that group if the value of ansible_connection is ssh
- Restricts login to members of the ssh access group and denies root login
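As an illustration of those steps, here is a minimal sketch of what the server component’s tasks might look like - the group name (ssh-users), file layout and lineinfile approach are all assumptions about how the real role is implemented, and a handler to restart sshd (not shown) would also be needed:

# roles/ssh/tasks/server.yaml (hypothetical sketch)
- name: Install the OpenSSH server
  ansible.builtin.package:
    name: openssh-server
    state: present

- name: Create a group for ssh users
  ansible.builtin.group:
    name: ssh-users
    state: present

- name: Deny other users access to sshd_config
  ansible.builtin.file:
    path: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: "0600"

- name: Add the Ansible user to the ssh access group
  ansible.builtin.user:
    name: "{{ ansible_facts['user_id'] }}"
    groups:
      - ssh-users
    append: yes
  when: ansible_connection == 'ssh'

- name: Restrict logins to the access group and deny root login
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?{{ item.key }} '
    line: '{{ item.key }} {{ item.value }}'
  loop:
    - { key: AllowGroups, value: ssh-users }
    - { key: PermitRootLogin, value: 'no' }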
I ran this playbook against my host like this (where new-host is the system’s hostname), entering my user’s ssh password and root’s password (for su) when prompted:
ansible-playbook -i new-host, -k -K bootstrap.yaml
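For bootstrapping a machine locally instead (where the ssh role is skipped, because the connection is not ssh), an invocation along these lines should work - this exact command is my assumption rather than one from my notes:

ansible-playbook -i localhost, -c local -K bootstrap.yaml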
Migrating existing SaltStack roles - remotely-accessible
After bootstrapping, I began migrating my SaltStack roles to Ansible. The first, remotely-accessible, only installs fail2ban, so I added this to my existing os-lockdown role (which I created for disabling services and configuring firewalld on my laptop), having decided it was sensible to install and configure fail2ban on all hosts.
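As an illustration, the fail2ban addition could be as simple as the following sketch - assuming Debian’s packaged default jail configuration is acceptable, which is not necessarily what my real os-lockdown role does:

# Added to roles/os-lockdown/tasks/main.yaml (illustrative only)
- name: Install fail2ban
  ansible.builtin.package:
    name: fail2ban
    state: present

- name: Enable and start fail2ban
  ansible.builtin.service:
    name: fail2ban
    state: started
    enabled: yes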
I next picked this up around New Year, migrating the monitoring client roles from SaltStack to Ansible.