This post is about launching a Docker Compose orchestrated collection of containers as a system service using systemd. I provisioned a VM running Docker (it's a loooong story, but the short version is that Azure Kubernetes and Azure container services cannot be deployed without public IP addresses, which goes against some of our secure-by-design decisions), using Terraform and Ansible to deploy and configure it. The service it runs is a web application made up of two Docker containers, and I have written a Docker Compose file that builds and runs the infrastructure.
To deploy the two containers I created this `docker-compose.yml`:
```yaml
version: "3.1"

services:
  api:
    build:
      context: https://deployuser:email@example.com/group/repo/
      args:
        http_proxy: http://192.168.1.10:7890/
        https_proxy: http://192.168.1.10:7890/
        no_proxy: localhost,127.0.0.1
  web:
    build:
      context: https://deployuser:firstname.lastname@example.org/group/repo/
      args:
        http_proxy: http://192.168.1.10:7890/
        https_proxy: http://192.168.1.10:7890/
        no_proxy: localhost,127.0.0.1
        api_id: someid
        api_secret: somesecret
    environment:
      http_proxy: http://192.168.1.10:7890/
      https_proxy: http://192.168.1.10:7890/
      no_proxy: localhost,127.0.0.1
    ports:
      - "80:8080"
    depends_on:
      - api
```
Internally, Docker provides a hosts entry so that the "web" container can connect to the api one using the hostname `api`. As there are no internal network restrictions or firewall (the default settings for Docker), there was no need to configure any ports etc. for the "api" container. The proxy settings are only required to connect out through our restrictive firewall to the git service (and out to the database server in the live "web" environment). I think it would be preferable to configure the web application (and possibly the api) to communicate via a file-based socket (from the hardened in-bound proxy in front of it).
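As a sketch of that internal connectivity (the port and path here are assumptions, not from the compose file, since "api" never publishes a port — its traffic stays on the internal Docker network):

```shell
# From inside the "web" container, the "api" service resolves by its
# Compose service name. Port 5000 is hypothetical -- substitute whatever
# the api actually listens on (and this assumes curl exists in the image).
docker-compose exec web curl --silent http://api:5000/
```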
Based on a recipe online, I created a template systemd unit that can be used to launch any Docker Compose orchestrated set of containers. The unit file goes in `/etc/systemd/system/docker-compose@.service`:
```ini
[Unit]
Description=%i service with docker compose
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/%i
ExecStart=/usr/bin/docker-compose up -d --remove-orphans
ExecStop=/usr/bin/docker-compose down

[Install]
WantedBy=multi-user.target
```
In contrast to the example I found, I had already put my `docker-compose.yml` file in an application-specific directory in `/opt`, mirroring the proof-of-concept version where the code was initially cloned from git into subdirectories and I developed the Docker Compose file from scratch. On the live system, only the `docker-compose.yml` file is required, as it fetches the code directly from git, so I think the design of putting it in `/etc/docker/compose` instead of `/opt` is better.
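If I do move the compose files, the template doesn't need rewriting wholesale; a systemd drop-in can override just the working directory. A sketch, assuming the files live under a per-application directory in `/etc/docker/compose`:

```ini
# /etc/systemd/system/docker-compose@.service.d/override.conf
[Service]
WorkingDirectory=/etc/docker/compose/%i
```

After adding a drop-in, run `systemctl daemon-reload` for the change to take effect.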
To configure the application (e.g. `my-app`), with a `docker-compose.yml` in the corresponding directory (e.g. `/opt/my-app/docker-compose.yml`), to start on boot, we simply enable a systemd unit for this template:

```shell
systemctl enable docker-compose@my-app
```
And to start it:

```shell
systemctl start docker-compose@my-app
```
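Once the instance is running, the usual systemd tooling applies. For example, to check that the oneshot unit ran successfully and to follow its output:

```shell
# Show the unit's state (active/exited is expected for a oneshot
# with RemainAfterExit=true)
systemctl status docker-compose@my-app

# Follow the journal for the unit (docker-compose build/up messages)
journalctl -u docker-compose@my-app -f
```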