Chris Dzombak

Setting up a secondary Pi-Hole on my home network

I run Pi-Hole as the DNS server for my home network. It provides ad and nuisance blocking for a subset of the systems in the house. Having a single DNS server for your network is very stressful; it’s a single point of failure, so even routine maintenance feels touch-and-go. I finally got around to setting up a secondary DNS server for my home network, running a second Pi-Hole instance and using Gravity Sync to keep it in sync with the primary instance.

This (rather terse) blog post walks through how I set this up. Thankfully, there’s not that much to say, because both Pi-Hole and Gravity Sync are well-documented and easy to set up.

The machine

I opted to set this DNS server up as a VM on my home NAS rather than running Pi-Hole in Docker.

First, I set up the NAS to support running persistent VMs with KVM. I wrote a blog post about that last week, which you should read for more detail. The following command created a VM with 2 vCPUs, 2 GB of memory, and a 32 GB disk:

virt-install \
  --name altdns \
  --description "alternate DNS server for the home network" \
  --memory 2048 \
  --vcpus 2 \
  --disk path=/mnt/storage/vm/altdns/disk.qcow2,size=32 \
  --cdrom /mnt/scratch/ubuntu-22.04.3-live-server-amd64.iso \
  --graphics vnc \
  --os-variant ubuntu22.04 \
  --virt-type kvm \
  --autostart \
  --network network=hostbridge
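
Since the VM was created with --graphics vnc, the Ubuntu installer’s console is reachable over VNC; virsh can report which display the VM is using (a quick sketch, using the VM name from the command above):

virsh vncdisplay altdns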

Ubuntu setup

I installed the “minimal” version of Ubuntu 22.04 LTS on the VM. This isn’t the interesting part of the project, so I’ll mostly skip over it.

Pi-Hole setup

I first installed Pi-Hole with their one-step automated installation process:

curl -sSL https://install.pi-hole.net | bash
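
One post-install detail worth noting: the installer prints a randomly generated web admin password at the end of its run. If you’d rather choose your own, the Pi-Hole CLI (as of Pi-Hole v5) can set one:

pihole -a -p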

Gravity Sync doesn’t sync everything, so I needed to make sure the rest of the new server’s configuration matched the primary’s.

First, I needed to copy /etc/pihole/pihole-FTL.conf from the primary to the new secondary server, since I have a few customizations in there.
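
That copy is a quick scp job; something like the following, run from the new server (pihole-primary is a placeholder for the primary server’s hostname, and this assumes your user can read the file there):

scp pihole-primary:/etc/pihole/pihole-FTL.conf /tmp/
sudo cp /tmp/pihole-FTL.conf /etc/pihole/pihole-FTL.conf
sudo systemctl restart pihole-FTL   # FTL reads its config at startup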

Then I logged into the alternate server’s Pi-Hole web admin console and walked through all the settings, configuring them identically to the primary server.

Since Netdata is installed on this VM, I also set up my lighttpd config workaround for Netdata flooding the Pi-Hole admin console access logs.

Finally, since 32GB is a relatively small disk, I installed a daily cron job to clean the apt cache.
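
The job itself is trivial; here’s a sketch of the sort of script that works (the filename is arbitrary, but it must be executable, and run-parts skips names containing dots):

#!/bin/sh
# /etc/cron.daily/clean-apt-cache: remove downloaded package files to save disk
apt-get clean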

Gravity Sync setup

Gravity Sync’s wiki has excellent instructions, which I’m not going to repeat here. I followed the instructions to install & configure the software, perform a dry run then a real synchronization to the alternate DNS server, and schedule the sync process to run frequently.
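
For flavor, the core commands look roughly like this (command names as I recall them from the Gravity Sync 4.x wiki; check the wiki for current syntax before copying anything):

gravity-sync compare   # dry run: report differences without changing anything
gravity-sync pull      # real sync: pull the primary's data to this host

The recurring sync runs via the gravity-sync.timer systemd unit (the same one mentioned in the footnote below).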

Gravity Sync SSH key management

Gravity Sync creates an SSH keypair on each system, which it uses for the sync process. By default, it adds each host’s public key to ~/.ssh/authorized_keys for the user it runs as on the other host.

For reasons, I don’t like having machine-specific contents in ~/.ssh/authorized_keys. To move this key to another file, on each server, I:

  1. Moved the line containing the new SSH key to a new file, ~/.ssh/authorized_keys_gravity-sync
  2. chmod 0644 ~/.ssh/authorized_keys_gravity-sync
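
In shell terms, that amounts to something like the following (a sketch: it assumes the Gravity Sync key’s line can be matched with the pattern gravity-sync; inspect your authorized_keys and adjust the pattern before running this):

grep 'gravity-sync' ~/.ssh/authorized_keys > ~/.ssh/authorized_keys_gravity-sync
grep -v 'gravity-sync' ~/.ssh/authorized_keys > ~/.ssh/authorized_keys.tmp
mv ~/.ssh/authorized_keys.tmp ~/.ssh/authorized_keys
chmod 0644 ~/.ssh/authorized_keys_gravity-sync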

To make sshd pick up on this new authorized_keys file, on each server, I edited /etc/ssh/sshd_config so that the AuthorizedKeysFile line reads:

AuthorizedKeysFile  .ssh/authorized_keys .ssh/authorized_keys_gravity-sync
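
Before reloading, sshd’s built-in syntax check is cheap insurance against locking yourself out of a headless VM:

sudo sshd -t   # prints nothing if the config is valid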

Finally, I made that change take effect via sudo systemctl reload sshd.

Monitoring

In addition to using Netdata, I monitor various parts of my personal computing infrastructure with Uptime Kuma, and DNS servers are no different; since everything else on the network depends on them, they’re among the most important things to monitor.

I have Uptime Kuma monitors set up for both the primary and secondary DNS servers.

To monitor Gravity Sync, I created a systemd override file for the gravity-sync service that updates an Uptime Kuma push monitor every time the sync process completes without error. I did that via sudo systemctl edit gravity-sync.service, editing the override file to add:

[Service]
Type=oneshot
ExecStartPost=curl "http://192.168.1.10:9001/api/push/XXXXXXXX?status=up&msg=OK&ping="

Changing the service from a simple type to oneshot is necessary¹ because for a Type=simple service, systemd runs ExecStartPost as soon as the main process has started, not when it exits. For a Type=oneshot service, systemd waits for the process to exit successfully and only then runs ExecStartPost.

See the systemd docs for more details.
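
To double-check the override, systemctl can print the fully merged unit, and a one-off start exercises the push URL:

systemctl cat gravity-sync.service        # the override's [Service] lines should appear last
sudo systemctl start gravity-sync.service # runs a sync now; the push monitor should update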

Testing and network setup

I verified the new DNS server was working by running the dig command from my desktop. Assuming the new DNS server has the address 192.168.1.100, this looks like:

dig @192.168.1.100 dzombak.com
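
It’s also worth confirming that blocking works, not just resolution. With Pi-Hole’s default blocking mode, a query for a blocklisted domain is answered with 0.0.0.0 (doubleclick.net is just a commonly-blocked example; substitute a domain from your own lists):

dig @192.168.1.100 doubleclick.net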

Once that was done, I just needed to configure my router to return both the old/primary and the new/secondary DNS servers in DHCP responses.

  1. Since this service runs periodically (it’s triggered by gravity-sync.timer), I think oneshot is a better fit in general.