In order to get time synchronisation, and to solve the annoyance of my TOTP-based 2nd factor for sudo not working, I bought a cheap VK-172 USB GPS dongle.

The chipset it uses is very common, judging by OpenWRT’s instructions for setting up NTP with it. Being cheap, the device does not emit a 1PPS signal, which would allow for very accurate timekeeping (although over USB the transfer delay becomes significant, dropping the accuracy of a 1PPS receiver from 1μs (1 microsecond) to around 1ms (1 millisecond), according to https://gpsd.gitlab.io/gpsd/gpsd-time-service-howto.html). Estimating the timestamp without 1PPS means “you can’t expect accuracy to UTC much better than 1 second from this method”, but time sync to within 1s is substantially better than the minutes to hours of drift I am currently experiencing.

Accuracy to within several seconds is more than good enough for Kerberos and TOTP, which usually allow several seconds of tolerance for client clock drift. It should be noted that network-based NTP should be accurate to within 30ms (30 milliseconds), and is therefore much better than non-1PPS GPS time sync, if internet connectivity is available.

Prerequisite work on my lab network

Before setting it up, I reconfigured the switch port that the lab’s NAS is plugged into, migrating it from the Proxmox test network to my home network mirror environment. Ultimately the NAS may need to be migrated to being “outside” the lab’s network, to better mirror the live network with its internet connectivity. In configure mode:

interface gigabitEthernet 1/0/24
description isolinear
switchport pvid 20
switchport general allowed vlan 20 untagged
no switchport general allowed vlan 1
exit

Once I was happy with the configuration, I copied it to the start up configuration (in enable mode):

copy running-config startup-config

In my salt configuration management tool, I updated the lab-specific configuration so that the live network’s NAS entry uses this old NAS’s MAC address, and added ‘mirror’ as an alias (which was then pushed out to DNS).

Next I wanted to configure apt in the lab network to use the mirror; it was at this point I discovered that versions of salt before 2019.2.0 do not support matching on nodegroups in compound matches. The only solution with older versions (I’m currently on 2018.3.4 from Debian Buster) is to create a new group in the nodegroups configuration, which can reference other nodegroups in its match but also requires restarting the salt master:

nodegroups:
  home_live: '*.home.domain.tld and not *.lab.home.domain.tld'
  home_lab: '*.lab.home.domain.tld'
  home_lab_debian: 'N@home_lab and G@os:Debian'

I decided not to go down this route; instead I added a new state, linux.apt.repos.debian, to my roles.debian state (which is applied to all Debian machines) and had it configure the main repositories (the main Debian repo, security and updates (previously called ‘volatile’)) for all machines. It takes a pillar value (which is applied to all home_lab machines) for each repository, or defaults to the UK Debian mirror pool:

# file.managed turned out to be more reliable than the obvious
# pkgrepo.managed at configuring everything correctly (without duplication
# or leaving incorrect entries behind)
debian-main:
  file.managed:
    - name: /etc/apt/sources.list.d/debian-main.list
    - contents: |
        deb {{ salt['pillar.get']('apt:sources:main', 'http://ftp.uk.debian.org/debian/') }} {{ grains.oscodename }} main
        deb-src {{ salt['pillar.get']('apt:sources:main', 'http://ftp.uk.debian.org/debian/') }} {{ grains.oscodename }} main
    - owner: root
    - group: root
    - mode: 0o444
debian-security:
  file.managed:
    - name: /etc/apt/sources.list.d/debian-security.list
    - contents: |
        deb {{ salt['pillar.get']('apt:sources:security', 'http://security.debian.org/debian-security') }} {{ grains.oscodename }}{% if grains.osmajorrelease < 11 %}/updates{% else %}-security{% endif %} main
        deb-src {{ salt['pillar.get']('apt:sources:security', 'http://security.debian.org/debian-security') }} {{ grains.oscodename }}{% if grains.osmajorrelease < 11 %}/updates{% else %}-security{% endif %} main
    - owner: root
    - group: root
    - mode: 0o444
debian-updates:
  file.managed:
    - name: /etc/apt/sources.list.d/debian-updates.list
    - contents: |
        deb {{ salt['pillar.get']('apt:sources:updates', 'http://ftp.uk.debian.org/debian/') }} {{ grains.oscodename }}-updates main
        deb-src {{ salt['pillar.get']('apt:sources:updates', 'http://ftp.uk.debian.org/debian/') }} {{ grains.oscodename }}-updates main
    - owner: root
    - group: root
    - mode: 0o444

# Remove now redundant default configuration
apt-default-sources:
  file.absent:
    - name: /etc/apt/sources.list
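
For reference, the pillar those pillar.get lookups read from looks something like this on the lab machines (the mirror hostname here is illustrative - it is whatever the ‘mirror’ alias added earlier resolves to):

# Hypothetical home_lab pillar - the hostname is illustrative
apt:
  sources:
    main: http://mirror.lab.home.domain.tld/debian/
    security: http://mirror.lab.home.domain.tld/debian-security
    updates: http://mirror.lab.home.domain.tld/debian/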

Setting up the GPS device

Once I had a working way to install software, I plugged in the USB dongle and checked that /dev/ttyACM0 existed (the cdc_acm kernel module should have been auto-loaded - see the quick check after the install command below). Then I installed the necessary packages (these are the equivalent manual commands; I actually did it through my configuration management tool):

apt-get install gpsd gpsd-clients chrony
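
As an aside, the “did the dongle appear?” check mentioned above amounts to something like:

ls -l /dev/ttyACM0      # device node created for the dongle
lsmod | grep cdc_acm    # confirm the kernel module auto-loaded
dmesg | tail            # USB enumeration messages, useful if it did not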

I edited this line in /etc/default/gpsd to tell it not to wait for a client before trying to get a GPS fix:

# -n = do not wait for client to connect before polling GPS
GPSD_OPTIONS="-n"

After restarting gpsd, I unplugged and re-plugged the dongle (so udev launched gpsd for it) and ran cgps to see if it was working. gpsmon will also show a nice summary of the status. I did have to relocate the dongle (via a USB extension lead) to near a window to get a strong enough signal, but it quickly synced and got a very accurate location (Google Maps pointed to the precise point in my house where the dongle was located when I entered the reported longitude and latitude). More importantly, it reported the time offset from my system’s clock, which had grown to nearly 1 hour out of sync since I last manually synced it with my mobile phone’s clock.
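
For reference, the restart and checks amount to something like this (cgps and gpsmon both come from the gpsd-clients package):

systemctl restart gpsd
cgps -s     # summary of fix, satellites and time (press 'q' to quit)
gpsmon      # lower-level view of the NMEA sentences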

Next, I configured chrony by editing /etc/chrony/chrony.conf to remove (or comment out) any default pool and server line(s) (which will not work without internet connectivity) and add lines as described in the documentation for feeding chrony from gpsd (bearing in mind this receiver has no 1PPS support). Based on empirical evidence, the socket interface only works if your GPS supports 1PPS - for NMEA (non-1PPS) time you have to use shared memory. As this type of GPS timekeeping is relatively inaccurate (compared to 1PPS or network-based NTP), I also told chrony to consider it a high-stratum source:

refclock SHM 0 refid GPS precision 1e-1 offset 0.9999 delay 0.2 stratum 10

After this, chrony needs restarting - the requirement that chrony be running before gpsd (or that gpsd be restarted afterwards) only applies to the socket interface and 1PPS.
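
The restart plus a quick sanity check look roughly like this (the GPS refid matches the refclock line above):

systemctl restart chrony
chronyc sources -v    # the SHM refclock should be listed with refid GPS
chronyc tracking      # shows the reference in use, stratum and current offset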

Running date, followed by a quick test of sudo -l with my TOTP 2nd factor, confirmed the clock was in sync and that TOTP was now happy.

Making chrony the NTP server for the network

In order to allow other devices to sync with this server, I needed to tell chrony to change its default “allow nothing” behaviour so that other clients can update from it. This is done by adding an allow line to /etc/chrony/chrony.conf and restarting chrony:

allow 192.168.0.0/16

Strictly speaking this is much broader than needed (I have multiple /24 networks) but, as 192.168.0.0/16 is not routable on the internet and there is a restrictive firewall on the system, I felt this struck a balance between security and ease compared to configuring each network individually.

Speaking of firewalls, I then had to allow UDP to port 123 from my management and internet-of-things networks (the trusted network has a default accept; the guest network will not be allowed to NTP from my server) in my /etc/network/if-pre-up.d/00-iptables script.
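
As a sketch (the subnets here are illustrative, not my actual management and IoT ranges), the additions to that script look something like:

# Allow NTP queries from the management and IoT networks (example ranges)
iptables -A INPUT -p udp --dport 123 -s 192.168.10.0/24 -j ACCEPT
iptables -A INPUT -p udp --dport 123 -s 192.168.30.0/24 -j ACCEPT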

I added ‘ntp’ as an alias (a CNAME, with network-specific aliases for the other networks, e.g. ntp-mgmt, ntp-iot, ntp-cctv etc.) in DNS for the system running the NTP server, to make client configuration easier and portable if I relocate where the service is running.
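
Illustratively (my records are actually generated by salt, and “timehost” here just stands in for whichever system is running chrony), the aliases look something like:

; Illustrative zone entries - the real records are generated by salt
ntp       IN  CNAME  timehost
ntp-mgmt  IN  CNAME  timehost-mgmt
ntp-iot   IN  CNAME  timehost-iot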

Configuring NTP clients

Switch

When I configured the lab switch, I said I had configured it to match the live network “excepting the ntp settings - will need to do something about this later”. To put this right, now that “later” has arrived, I logged onto the switch and ran the command (in configure mode) to tell it to use the router’s NTP server on the management interface (as both primary and backup, since the switch requires both to be specified), syncing every 12 hours. Frustratingly, the switch only supports IP addresses and not hostnames, so it will have to be manually reconfigured if the service moves:

system-time ntp UTC 192.168.10.250 192.168.10.250 12

And, once sync is established (checked with show system-time ntp), save with the usual copy running-config startup-config in enable mode.

Other Linux systems

Since these do not need a full-blown NTP service, I configured systemd-timesyncd as previously described, using the NTP service hostname (e.g. ‘ntp.lab.home.domain.tld’) instead of an IP address.
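
For reference, that boils down to something like this in /etc/systemd/timesyncd.conf (lab hostname shown; live machines use the live alias):

# /etc/systemd/timesyncd.conf
[Time]
NTP=ntp.lab.home.domain.tld

A systemctl restart systemd-timesyncd and a check with timedatectl (look for the synchronised status) confirms it is working.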

Windows

For systems permanently on the network, I also configured the Windows NTP settings as previously described. For portable machines, which only exist in the live network (i.e. my laptop), I left the default settings so they can sync outside of my network.
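
The gist of the Windows configuration (run from an elevated command prompt; the hostname is the live-network alias and is illustrative here) is along these lines:

rem Point the Windows Time service at the local NTP server, then resync
w32tm /config /manualpeerlist:"ntp.home.domain.tld" /syncfromflags:manual /update
w32tm /resync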

Extending this to the live network

From the NTP Pool Project’s documentation: “If you are synchronising a network to pool.ntp.org, please set up one of your computers as a time server and synchronize the other computers to that one (you’ll have some reading to do - it’s not difficult though).” Having found my home network blocked from debian.pool.ntp.org, this seems like a good idea. I only have 4 Debian systems in the network, all with default configuration, but I was behind my [ISP’s carrier-grade NAT](https://en.wikipedia.org/wiki/Carrier-grade_NAT) for a while, which (as well as being blocked by the NTP Pool Project) was also blocked by Sky’s on-demand video service and one of my banks’ internet banking sites.

On the server side, I just had to install chrony and add the appropriate allow configuration (keeping the default pool). Clients are all configured exactly as in the lab network (which is rather the point of having a lab network), once the DNS entries for the NTP server are added.