From Salt to Puppet
I have been using SaltStack for many years (my current SaltStack configuration Git history goes back to 2013, so I had been using it for at least a few years before the linked post). Prior to that I had some experience with Puppet and cfengine, but that was before I started using SaltStack, so my Puppet knowledge is at least 10 years old.
Installing puppetserver
I initially installed Puppet (version 5.5.10) from Debian’s distribution repositories (with apt-get install puppet-master - which annoyingly recommends lsb-release, pulling in a Python interpreter as well as Ruby). However, I later hit a bug with user management that was fixed in 6.23.0 and 7.8.0 but not in the 5.x line. I therefore ended up installing from Puppet’s provided repositories - as this is a completely new install, I opted for the latest release (version 7). Unlike the version 5 release, versions 6 and 7 require Java.
To install from Puppet’s repository by hand:
curl -O https://apt.puppet.com/puppet7-release-bullseye.deb # Replace bullseye with your current Debian version codename
apt-get install ./puppet7-release-bullseye.deb
apt-get update # Get repository cache for Puppet's repository
apt-get install puppetserver
I later reduced the JVM heap size to 1g (it started at 2g) as it was very memory-heavy and pushing the system into swap (even with the heap set to 1GB, the Puppet Java process was using just under 4GB in total). For comparison, SaltStack (which launches a number of discrete salt-master processes) uses around 1/5th of this when all added up. To change this, edit the -Xms and -Xmx values in /etc/default/puppetserver and restart the service. According to the Puppet docs, Oracle recommends setting these both to the same value, so I did that.
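For reference, the change itself is a one-liner - this is a sketch assuming the stock 2g values and a systemd service, as on my Debian install:
sed -i 's/-Xms2g/-Xms1g/; s/-Xmx2g/-Xmx1g/' /etc/default/puppetserver # Shrink initial and maximum heap to 1g
systemctl restart puppetserver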
Configuration approach background
Puppet recommends using a roles and profiles method, assigning a single role to each node (server), which then includes the profiles to be applied. They suggest that almost all code will live in modules. In some ways this mirrors the approach I use with SaltStack: attach roles to servers in the pillar data, then include states based on those roles in the states tree.
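As an entirely hypothetical sketch of that pattern (the role name, profile names and file path are all invented for illustration):
cat - >role/manifests/webserver.pp <<EOF
# Each node is assigned exactly one role; the role does nothing except
# aggregate the profiles that make up this kind of server.
class role::webserver {
  include profile::base
  include profile::webserver
}
EOF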
Tools like r10k, g10k and Code Manager push modules out of environments (in which the desired state is defined) and into their own individual Git repositories, with a “Puppetfile” in the environment defining the modules the environment needs. Code Manager looks very interesting - it uses r10k to automate deploying environments without disruption and to implement CI and CD pipelines with integrated testing - however it is part of Puppet Enterprise and I am an individual with very limited funds. I went with r10k, which also required installing with apt-get install r10k (which, on Debian, pulled in Perl and Ruby, alongside the Java already installed for puppetserver).
First configuration steps
Puppet defaults to production for its environment, so create a skeleton for it and initialise a branch with the same name:
mkdir -p ~/project/puppet
cd ~/project/puppet
mkdir -p {modules,manifests,data}
touch data/common.yaml
touch manifests/site.pp
touch Puppetfile
cat - >hiera.yaml <<EOF
---
version: 5
defaults:
  data_hash: yaml_data
  datadir: data
hierarchy:
  - name: "Per-node data"
    path: "nodes/%{trusted.certname}.yaml"
  - name: "Per-OS defaults"
    path: "os/family/%{facts.os.family}.yaml"
  - name: "Common data"
    path: "common.yaml"
EOF
git init -b production
git add .
git commit -m 'Initial commit of template environment.'
git remote add origin https://youruser@git-server.domain.tld/repository # Replace with your real repository
git push -u origin production
Creating the test environment
To start as I mean to go on, I created a test environment by adding a new branch called test in my Git front-end, forked from the production branch I had just pushed up.
Setup r10k
Back on my Puppet server I created a new user for r10k, adding it to the puppet group that was created by installing puppetserver, so I could set up some sensible permissions:
adduser --system --group --home /var/lib/r10k --disabled-login r10k
gpasswd -a r10k puppet
Next I configured r10k by creating /etc/puppetlabs/r10k/r10k.yaml. The Debian package is woefully lacking in any documentation on how to configure r10k - there is nothing in the man pages or the /usr/share/doc hierarchy - so I found this path through trial and error, after testing various paths in /etc, /etc/puppet (the Debian puppet-master package’s configuration directory) and /etc/puppetlabs, which it either did not pick up or complained about:
mkdir /etc/puppetlabs/r10k
cat - >/etc/puppetlabs/r10k/r10k.yaml <<EOF
---
sources:
  main:
    remote: 'https://git-server.domain.tld/puppet-repository'
    basedir: '/etc/puppetlabs/code/environments'
EOF
chown -R root:r10k /etc/puppetlabs/r10k
chmod 750 /etc/puppetlabs/r10k
chmod 640 /etc/puppetlabs/r10k/r10k.yaml
Next, I provided my read-only deployment user’s credentials to the r10k user via Git’s configuration in the ~r10k/.gitconfig file (modified from a recipe in the gitcredentials manual, to allow other usernames to still be prompted for a password):
[credential "https://git-server.domain.tld/puppet-repository"]
	username = deploy
	helper = "!f() { test \"$1\" = get && grep -q username=deploy && echo \"password=$(cat $HOME/.git_server_puppet_deploy_secret)\"; }; f"
and dropped the password into its ~r10k/.git_server_puppet_deploy_secret file (with suitably protective permissions).
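Something along these lines satisfies “suitably protective” (the password value is obviously a placeholder):
install -m 600 -o r10k -g r10k /dev/null ~r10k/.git_server_puppet_deploy_secret # Create the file empty, owned and locked down
printf '%s\n' 'deploy-password-here' >~r10k/.git_server_puppet_deploy_secret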
I gave r10k permission to modify Puppet’s code directory, and gave puppetserver (which runs as the puppet user) read-only access. I also set the setgid bit on the directories so that files and folders created within them (e.g. by the r10k user) default to puppet group ownership:
chown -R r10k:puppet /etc/puppetlabs/code
find /etc/puppetlabs/code -type d -print0 | xargs -0 chmod 2750
find /etc/puppetlabs/code -type f -print0 | xargs -0 chmod 0640
Then I deployed this by running r10k after becoming the user, setting the umask first so that files in /etc/puppetlabs/code are created with appropriate permissions (-p tells r10k to process the Puppetfile):
su -s /bin/bash -l r10k
umask 0027
r10k deploy environment -p
or, if running with sudo:
sudo -u r10k bash -c "umask 0027; r10k deploy environment -p"
A note about permissions
The Puppet server runs as the puppet user, so changing the (default root) ownership of /etc/puppetlabs/code to that user, and giving it read access to the r10k.yaml file, would allow r10k to be run as the puppet user. However, allowing the puppet user to change the configuration definitions is risky, because that potentially allows it to modify anything and everything on any system managed by Puppet (and, conversely, gives r10k access to anything puppet can access, which it does not require).
A slightly safer approach, which is what I have done, is to create a dedicated user to run r10k as, with read access to the r10k.yaml file and write access to /etc/puppetlabs/code (although strictly r10k probably only needs to write to /etc/puppetlabs/code/environments), and to give the puppet user (via its private user-group) read-only access to everything below /etc/puppetlabs/code - denying everyone else (“other”) any access at all, to avoid exposing configuration information that could be useful to an attacker. Then, even if the puppet account is compromised, it can read but not modify the configurations.
Hooking up first client
In DNS, I created an alias called puppet for the puppet-master server so that clients would just work (mirroring what I had done for Salt). After installing the agent on my test box, I set the environment to test in the agent’s puppet.conf (/etc/puppetlabs/puppet/puppet.conf on my Red Hat compatible systems):
[agent]
environment=test
After starting the agent, I was able to sign the certificate on the puppet-master server:
puppetserver ca sign --certname client.vm.domain.tld
Restarting the agent, I was able to verify in the logs that the agent had successfully connected and retrieved information from the master.
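For reference, both halves of this can be done interactively rather than via the logs - these are stock commands from the standard tooling, shown here as a sketch:
puppetserver ca list # On the server: list pending (unsigned) certificate requests
puppet agent --test --environment=test # On the client: run the agent once in the foreground with verbose output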
Adding configuration to the client
Firstly, I added an entry to manifests/site.pp to put a class (essentially a module) onto the node:
node /\.vm\.domain\.tld/ {
  include profiles::profile1
}
I then created the profile module skeleton in a new directory and uploaded it to a new Git repository:
cd ~/projects
mkdir puppet-profiles
cd puppet-profiles
mkdir manifests
cat - >metadata.json <<EOF
{
  "name": "puppet-profiles",
  "version": "1.0.0",
  "author": "Laurence Hurst",
  "summary": "Local profiles for my systems",
  "license": "GPL-3.0-or-later",
  "source": "https://git-server.domain.tld/puppet-profiles-repository",
  "dependencies": [
    { "name": "puppet/epel", "version_requirement": ">=3.1.0" }
  ],
  "requirements": [
    { "name": "puppet", "version_requirement": ">=5.5.10" }
  ]
}
EOF
cat - >manifests/profile1.pp <<EOF
class profiles::profile1 {
  include epel
}
EOF
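The upload itself is the usual Git routine (the branch name here is an assumption - use whatever default your front-end expects):
git init -b main
git add .
git commit -m 'Initial version of profiles module.'
git remote add origin https://youruser@git-server.domain.tld/puppet-profiles-repository # Replace with your real repository
git push -u origin main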
And added the new profiles module to the Puppetfile, pushing those changes up to the Puppet configuration repository:
mod 'profiles',
    git: 'https://git-server.domain.tld/puppet-profiles-repository'
N.B. I initially used the British English noun spelling of “license” (“licence”) - which obviously failed, but it took me a while to spot that oversight.
Dependencies
As I discovered when I checked whether the configuration had applied on my test node, r10k does not automatically install module dependencies (according to the docs, neither does Code Manager, so this is not a freemium-reserved feature). The gist of the r10k feature-request discussion is that global dependency resolution is a hard problem (which it is), so they are not implementing it. Instead, the approach adopted by the community seems to be to manually resolve, and then list, all dependencies in the environment’s Puppetfile, so I added epel to mine:
cat - >>Puppetfile <<EOF
mod 'epel', :latest
EOF
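For the change to reach the test node, push it and redeploy with r10k as before - optionally naming just the one environment (here assuming the branch, and therefore the environment, is called test):
sudo -u r10k bash -c "umask 0027; r10k deploy environment test -p"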
Install a package
After solving the dependency problem, I was able to deploy a package from EPEL like this:
class profiles::profile1 {
  $exim = $facts['os']['family'] ? {
    'Debian' => 'exim4',
    default  => 'exim',
  }

  if $facts['os']['family'] == 'RedHat' {
    require epel
  }

  package { 'exim':
    name => $exim,
  }
}
Note that epel has changed from include to require. The difference is that require ensures everything in epel is applied before the enclosing stanza (profiles::profile1 in this case), whereas include applies everything alongside profiles::profile1 but says nothing about ordering. require can easily introduce circular dependencies, but in this case, on Red Hat family systems, we need EPEL in place before the package from it can be installed.
Network time
One of the things I quickly discovered playing with Puppet and VMs is that its use of certificates is very sensitive to clock skew at signing time - if the client’s clock has fallen behind the server’s, it will refuse to connect because the certificate is not yet valid.
Although this is a catch-22 (you cannot use Puppet to configure it until Puppet is working), it is possible to use Puppet to install and configure NTP - it is one of the quick-start examples in Puppet’s documentation. This at least ensures everything stays in sync once Puppet is working. To make things slightly more challenging, Red Hat dropped the traditional NTP package in favour of Chrony with Red Hat 8.
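To break the catch-22 on a brand new machine, a one-off manual sync before the first agent run is enough. A minimal sketch, assuming chrony is installed (on systems still shipping the traditional tools, ntpdate pool.ntp.org achieves the same):
chronyd -q 'pool pool.ntp.org iburst' # Step the clock once, then exit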
To choose the appropriate package, I came up with this:
class profiles::base {
  $ntp = $facts['os']['family'] ? {
    'RedHat' => $facts['os']['release']['major'] ? {
      '8'     => 'chrony',
      default => 'ntp',
    },
    default => 'ntp',
  }

  # Setup NTP
  include $ntp
}
This did require adding three dependencies to my Puppetfile: puppetlabs/ntp, puppet/chrony and puppetlabs/stdlib (the last of which both ntp and chrony depend on).
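In Puppetfile terms the additions look something like this (whether and how to pin versions is discussed below - :latest here is illustrative):
cat - >>Puppetfile <<EOF
mod 'puppetlabs/ntp', :latest
mod 'puppet/chrony', :latest
mod 'puppetlabs/stdlib', :latest
EOF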
My current opinion
(Although arguably all of my blog posts are opinion pieces, in the sense that they describe how I approached, configured or solved something, and other ways will always be available…)
I have recently had experience of using SaltStack, Ansible and (now) Puppet. I would summarise my impressions of each as follows:
- SaltStack The most flexible of the three; easy to make it work in whichever way suits you. The configuration information is just data passed through a template engine, making it easy to understand, although you sometimes feel like you are working against its patterns to deal with some edge cases.
- Ansible Thinks in terms of recipes (which it calls Playbooks) - a sequence of steps. It can seem more straightforward and logical initially, but I do not feel it scales as well once the complexity of real-world infrastructures starts to show through. It is fun to work with, however, and its agent-less approach is absolutely fantastic for orchestration on systems where you do not have elevated (root, or otherwise) access.
- Puppet If one follows the documentation, it is very opinionated about “the right way” to use it - fitting the common Ruby idiom of convention-over-configuration where possible. This can feel a bit dictatorial, but a common approach also has its advantages, particularly around getting help and advice from the community, and the plethora of reusable modules from its “think small components” approach to defining configuration (the Salt community has started heading in this direction too with Salt Formulas, which are very similar to Puppet’s modules).
Will I be switching to Puppet from SaltStack? That is a difficult question:
Salt seems to be more flexible and has a “batteries included” approach, with many specific state modules (e.g. logrotate, acme, dellchassis) shipping as part of the base software.
In contrast, Puppet seems more dictatorial about how it should be used and includes a minimal amount of capability in the base software, with most things provided by small modules that are retrieved separately.
One of the things I use most frequently is Salt’s test=True for highstate - an easy way to review what changes will be made before applying them. The closest I found to replicating this with Puppet is the function below. It requires using a new environment, otherwise the running agent will apply the changes in the background (using a different environment to the one configured in puppet.conf also helps prevent accidental application):
runtest() {
  if [ -z "$1" ]
  then
    echo "Must specify which environment to test."
    return 1
  else
    tmpcache=$( mktemp -d ) # Need a temporary cache or puppet will delete/add to the real one even with --noop
    puppet agent --noop --test --vardir="$tmpcache" --environment="$1"
    stat=$?
    rm -rf "$tmpcache"
    return $stat
  fi
}
and use it like:
runtest test-environment
Salt’s large collection of built-in state modules, and its gathering of “Formulas” into Salt’s own GitHub organisation (which people have to apply to join), feels more centralised. Puppet, by contrast, provides a small core plus modules on Puppet Forge, where anyone can create an account and start uploading - which seems more decentralised and more democratic. There are advantages to the centralised approach, such as a higher expectation of support and quality: anything included with Salt should be expected to work well, and the central organisation takes responsibility for bugs and fixing them. On the other hand, the more democratic model makes it easier for the community to “vote with their feet” and choose the best modules, since the centrally provided ones sit on a more-or-less equal footing (at least in terms of how they are downloaded/accessed).
Puppet also seems to have an interesting policy of not duplicating the functionality of community modules, as seen in the ticket requesting EL 8 support in the NTP module, “puppetlabs-ntp : RHEL 8 Support”, which leads to another ticket, “Add RedHat 8 support to ntp”, which in turn leads to a ticket suggesting a new module for Chrony (EL 8’s supported NTP daemon), “Investigate creation of a chrony module”, that was closed with “Closing this as there is https://forge.puppet.com/modules/puppet/chrony and we won’t duplicate community work.”. Yes, that is the chain of links I followed from Googling “NTP with Puppet on Red Hat 8”. As admirable as not duplicating others’ work is, it does mean there are no supported Chrony modules (nor even “approved” ones - modules which are not supported but have been reviewed), so Puppet Enterprise customers running EL 8 have no approved way to configure network time, in contrast to users of distributions still using ntpd as the client.
The decentralised module system may also create a maintenance burden: managing the uncoordinated dependencies and working out the highest common version of each module supported by all of the modules one wants to use. I also found it very unclear from the documentation (perhaps deliberately so?) whether it is considered good practice to specify a specific version of each module, to specify nothing (which installs the latest at install time but never updates it), or to specify :latest (which installs and tracks the latest version) - the three styles are sketched in the Puppetfile fragment after this list:
- In general, specifying specific versions helps with the dependencies-of-dependencies (and their potentially conflicting versions of common dependencies) within the environment, but it is also an approach that contributes to the situation in the first place.
- Not specifying a version seems to me the worst choice, as each install will get a different version depending on when the modules were installed (which may be when the environment was set up, or later if modules are subsequently added to the Puppetfile).
- Using :latest is the most likely to break things, and indicates that, at best, the code was known to work with whatever version was current when it was committed (which might not be the latest when checked out in future).
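A hypothetical Puppetfile fragment showing all three styles (the module names and version are purely illustrative):
mod 'puppetlabs/stdlib', '8.1.0' # Pinned: always this exact release
mod 'epel' # Unpinned: latest at first deploy, then frozen
mod 'puppet/chrony', :latest # Tracks the newest release on every deploy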
The book Puppet Best Practices: Design Patterns for Maintainable Code (although I object to the title - search the internet for “why we shouldn’t talk about best practices”, or a similar variant, for a plethora of people reasoning why “best practices” don’t exist and are a bad thought pattern to get into) recommends going beyond pinning the version: pin to a specific commit hash, by finding the module’s Git repository and using that instead of the module forge. Their reasoning is that only by pinning to a commit hash can you guarantee no changes (“a full commit hash is cryptographically secure; you get exactly the code you expect, and there’s no way for the repository owner to accidentally break you” (Barbour C. and Rhett J. 2018)). They do advocate using :latest for automated testing: “For testing purposes, it is possible to select a branch name or :latest as the version to deploy. This is useful for automatically testing the latest updates to public modules, but it introduces considerable risk in deployed environments. A release-ready branch should always specify explicit versions or Git hashes.” (Barbour C. and Rhett J. 2018). It seems to me that “always specify” is a bit strong - it should be a judgement based on risk appetite. They also recommend mirroring non-Git repositories to Git so you can use the commit hash, which takes the idea of guaranteeing module versions to an extreme that is unnecessary for most environments.
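In a Puppetfile, that commit-hash pinning looks something like this (the hash itself is made up for illustration):
mod 'profiles',
    git: 'https://git-server.domain.tld/puppet-profiles-repository',
    ref: '0123456789abcdef0123456789abcdef01234567' # Full commit hash, not a branch or tag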
Issues with conflicting dependencies, in my experience, quickly become unmanageable - npm and RubyGems are amongst the worst for this. It is rare that updating a single dependency (e.g. for a bug-fix needed by my project) does not pull in a newer version of some dependency that conflicts with the version requirements on that same dependency in two or three other dependencies of my project. Often these version requirements are deep in the dependency tree, and resolving them is time-consuming and laborious. Whether this is a problem with Puppet’s modules, I do not yet know.
As you can probably tell, whether to go with the batteries-included or the downloadable approach (and the implications for expecting timely fixes to bugs) is the choice I am most conflicted about.
References
Barbour, C. and Rhett, J. (2018) Puppet Best Practices: Design Patterns for Maintainable Code. O’Reilly Media. ISBN: 9781491923009