After restoring the configuration management server I needed to start making some changes specific to the lab environment (initially the MAC addresses of the hosts for the DHCP server). I also want to give the lab a separate domain, like I did when testing Proxmox.

There are a number of ways to go about doing this (further illustrating the flexibility of SaltStack I talked about in an earlier post). One option seems to be to use multiple environments although it does look to me like they are intended for a single salt master controlling both production and development environments at the same time, rather than a clone managing one or the other separately. Since I intend to have a different domain name for the lab environment, I instead opted to update the router’s minion ID and add lab-specific patterns to the SaltStack configuration.

To do this, I updated /etc/salt/minion_id on the host to the new hostname (e.g. foo.lab.my.domain.tld) and restarted the minion.
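
On Debian this amounts to a file edit and a service restart; a sketch (the hostname is an assumption based on the file names later in the post, and the service name assumes Debian's salt-minion package):

```shell
# Run as root on the host being renamed (hostname is illustrative)
echo 'foo.lab.my.domain.tld' > /etc/salt/minion_id
systemctl restart salt-minion
```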

On the Salt master node, I created a new pillar file for the lab network (called network/lab.sls; in my setup this file principally maps MAC addresses to IP addresses and hostnames), copied the existing host-specific file to match the new hostname pattern (hosts/custom/foo-lab-my-domain-tld.sls) and edited it as required (just the network interface names). I then assigned the lab network file to * in the pillar's top.sls (the host-specific file is picked up by existing pattern-matching includes). Finally, I accepted the "new" minion's key (Salt does not seem to notice it is the same key as another minion's) and ran state.highstate on the new minion.
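
For context, the pillar assignment ends up looking roughly like this (the file layout and the name of the existing live-network pillar are assumptions; only the * assignment of the lab file is from this change):

```yaml
# pillar top.sls (sketch)
base:
  '*':
    - network.home   # hypothetical name for the existing live-network pillar
    - network.lab    # new lab network pillar, assigned to everything
```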

On the router, this duplicated (rather than replaced) the network configurations in /etc/dhcp/dhcpd.conf, so the DHCP server failed to start because each subnet was declared twice. The cause is that the pillar for *.my.domain.tld (the live network) is assigned with a * match in my pillar's top.sls, which the lab minion also matches. My initial reaction was to change my original match for internal systems from * to *.my.domain.tld and not *.lab.my.domain.tld, adding - match: compound to turn it from a simple glob into a compound match, but I decided that using nodegroups to distinguish the lab and live environments was more sensible. I added nodegroups.conf, with this content, to the configuration Salt pushes to the master's /etc/salt/master.d folder, then updated top.sls to use the group names with - match: nodegroup:

nodegroups:
  home_live: '*.my.domain.tld and not *.lab.my.domain.tld'
  home_lab: '*.lab.my.domain.tld'
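
In top.sls the groups are then referenced like this (the pillar file names are illustrative; the matching structure is the point):

```yaml
# pillar top.sls (sketch; the network.* names are assumptions)
base:
  home_live:
    - match: nodegroup
    - network.home
  home_lab:
    - match: nodegroup
    - network.lab
```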

(As a bit of an aside: a lot of the network information about hosts is stored in the pillar. I had thought that the way Ansible makes all host-specific variables (including facts) available to any host, via hostvars, would be a neat solution to gathering (e.g.) MAC addresses and public keys. SaltStack also has a mechanism for achieving this, the Salt Mine, which my code pre-dates (it was introduced in 2013), so I need to revisit this approach at some point.)
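
For reference, a minimal Salt Mine setup would look something like this: each minion publishes the results of selected execution functions (the function choices here are illustrative), and any other minion can then query them.

```yaml
# pillar (or minion config) - functions whose results each minion publishes
mine_functions:
  network.ip_addrs: []
  network.get_hostname: []
```

A template can then read another host's data with, e.g., {{ salt['mine.get']('*', 'network.ip_addrs') }}, which returns a dict keyed by minion ID.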

I also made Salt start managing the system hostname, so modifying a system's name can be done just by changing its minion ID in Salt. This is based on a formula I found online, but simplified for my Debian-only environment:

# Based upon a formula found online
# Cut-down for recent Debian versions (systemd) only

{# Use minion id as the FQDN #}
{%- set fqdn = grains['id'] %}
{%- set hostname = fqdn.split('.', 1)[0] %}

/etc/hostname:
  file.managed:
    - contents: {{ hostname }}
    - backup: False
    - onchanges_in:
      - cmd: hostfile-set-fqdn

# This is completely different to upstream - 127.0.1.1 is set to the FQDN by default on Debian so maintained that
# There are good reasons for setting it to the real ip (connections to the local machine by name take a more natural route) but that is also problematic on multi-homed systems (which IP to set it to?)
hosts-fqdn-entry:
  host.only:
    - name: 127.0.1.1
    - hostnames:
      - {{ fqdn }}
      - {{ hostname }}  # Upstream also did not include the shortname
    - require_in:
      - cmd: hostfile-set-fqdn

hostfile-set-fqdn:
  cmd.run:
    - name: hostnamectl set-hostname {{ hostname }}
    - unless: test "{{ hostname }}" = "$(hostname)" -a "{{ fqdn }}" = "$(hostname -f)"
I added this state to my linux role, which is applied to all Linux hosts.
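
As a sanity check on the name handling: the Jinja fqdn.split('.', 1)[0] in the state above is equivalent to trimming everything after the first dot, e.g. in shell (the example minion ID is hypothetical):

```shell
fqdn='foo.lab.my.domain.tld'   # example minion id
hostname="${fqdn%%.*}"         # shortname, same result as the Jinja split
echo "$hostname"               # → foo
```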

I then started iterating through all the bits that would not work in an environment with no internet access (a couple of bits are fetched from GitHub or my own Git repository) or that rely on host-specific parts missing for the "new" minion's ID (certificates for OpenVPN and Icinga2, and SSH keys).