Self-hosting multiple services
I’ve been running various services at home for a while now. My adventures in self-hosting started with an Ethereum node on a Raspberry Pi back at the end of 2020. Today, I’m hosting about half a dozen services across two single-board computers, and I expect my self-hosted ecosystem to keep growing.
Complexities arise
When I started self-hosting things, it was an opportunity to learn about all the layers of the onion I had to peel back to get a service on my home server exposed to the internet. First, the firewall on the server had to allow traffic to the port that exposed the service. Then, my router had to be configured for port forwarding. When I began to run multiple services on ports 80 and 443, I needed a reverse proxy outside of my home network to forward those services’ endpoints to non-colliding ports on my home router. Then, I had to put together a solution to keep a DNS entry updated with my residential IP (since I’m on a residential plan, I’m not eligible for a static IP). Finally, I needed a way to provision certificates and enforce TLS.
Suffice it to say, exposing multiple services from behind a home router can become a complex management task. Fortunately, after several months of trial and error, I arrived at a very reliable and relatively simple setup.
Caddy for TLS
Caddy is an awesome solution for the reverse-proxy and TLS requirements. It takes just a few minutes to set up Caddy on any server, and it handles automatic TLS certificate provisioning via Let's Encrypt. It’s about as easy and low-maintenance as I could hope for.
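As a rough sketch (the hostname and the Wireguard address below are placeholders, not my actual values), a single service entry in a Caddyfile can look like this:

```
# Illustrative Caddyfile entry on the cloud VM.
# service.example.com is a placeholder hostname; 10.8.0.2 is an assumed
# Wireguard address of a home server on the tunnel network.
service.example.com {
    reverse_proxy 10.8.0.2:8080
}
```

Because the site address is a public hostname, Caddy obtains and renews the certificate on its own; there’s no separate TLS configuration to maintain.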
Wireguard in the Cloud for DDNS and Networking
My ISP doesn’t change my residential IP address often, but it does happen a few times per year. Because of those changes, I knew I needed some kind of dynamic DNS solution. At first, I wrote cron jobs that hit my DNS provider’s configuration API and updated the IP address on my DNS records. That solution felt janky.
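For illustration only, that cron approach amounted to a small script along these lines; the API URL, record name, and token are hypothetical stand-ins for whatever your DNS provider exposes:

```
#!/bin/sh
# Hypothetical dynamic-DNS updater. The API endpoint, record path, and
# API_TOKEN environment variable are placeholders, not a real provider's API.
IP=$(curl -s https://ifconfig.me)
curl -s -X PUT "https://dns.example.com/api/records/home" \
  -H "Authorization: Bearer $API_TOKEN" \
  -d "{\"type\": \"A\", \"content\": \"$IP\"}"
```

Run from cron every few minutes, this works, but it’s reactive: the record is only correct after the next run following an IP change.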
I had been playing with Wireguard for a few months as a self-hosted VPN solution, and I had set up a free cloud server (with a static public IP) as my Wireguard endpoint. One of my home servers was already configured as a Wireguard peer, and a light went on in my head when I realized I could network all of my servers this way without requiring any port forwarding on my home router.
The key setting that makes this work is `PersistentKeepalive` in the `[Peer]` section of each of my home servers’ Wireguard configs. I have it set to 25, which means each home server sends a keepalive packet to the endpoint every 25 seconds, keeping the tunnel (and the NAT mapping on my home router) open. Practically, this means that any disconnection of my home internet, or any IP address re-assignment, has minimal lasting effect: my servers simply re-establish their Wireguard tunnels as soon as connectivity is restored.
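Here’s a minimal sketch of a home server’s config; the keys, addresses, port, and endpoint hostname are placeholders:

```
# /etc/wireguard/wg0.conf on a home server (illustrative values only)
[Interface]
Address = 10.8.0.2/24
PrivateKey = <home-server-private-key>

[Peer]
# The cloud VM with the static public IP
PublicKey = <cloud-vm-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.8.0.0/24
# Send a keepalive every 25 seconds so the NAT mapping on the home
# router stays open and the tunnel recovers after IP changes
PersistentKeepalive = 25
```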
Automation for restarts
I use systemd to run Wireguard, and I’m starting to explore other solutions for keeping the rest of my services running. Wireguard under systemd has been extremely reliable, so much so that I’m comfortable building the other pieces on top of it.
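Assuming a config like the sketch above at /etc/wireguard/wg0.conf, and a distribution that ships wg-quick, putting the tunnel under systemd is a single command:

```
# Start the tunnel now and on every boot
sudo systemctl enable --now wg-quick@wg0
```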
Putting it all together
All in all, these are the high-level pieces that let my homelab services run with minimal effort and configuration on my part:
- a cloud-hosted VM with a static public IP, running Wireguard (as the Wireguard endpoint; a sketch of that side of the config follows this list) and Caddy
- DNS records pointing to the static IP of the cloud VM
- home servers running Wireguard as peers, with `PersistentKeepalive = 25` set
- Caddy configured to serve the HTTP and HTTPS endpoints by reverse-proxying over the Wireguard network
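For completeness, here’s a sketch of the endpoint side on the cloud VM; again, the keys and addresses are placeholders:

```
# /etc/wireguard/wg0.conf on the cloud VM (illustrative values only)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <cloud-vm-private-key>

[Peer]
# One [Peer] block per home server
PublicKey = <home-server-public-key>
AllowedIPs = 10.8.0.2/32
```

Caddy on the same VM then proxies to the peers’ Wireguard addresses (10.8.0.2 and so on), so nothing on the home side ever needs to accept an inbound connection.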
Benefits
The benefits of the architecture that I’ve described here include the following:
- no port forwarding configuration on the home router
- home IP is hidden from DNS records
- home servers are not directly exposed to web traffic
- TLS automatically works with no load on home servers
- Wireguard encryption secures all traffic
- exposing a new service on an existing server requires a simple Caddy update
- adding a new server requires a simple Wireguard config update
- servers can be moved to anywhere with an internet connection, and when they start up again, everything will just work
Conclusion
I’ve enjoyed months of reliability and constant uptime with the architecture described here. I hope it saves someone else some time in making these kinds of decisions. I’ve purposely kept the config examples minimal and illustrative, because specifics tend to get stale over time, and the exercise of writing your own is worthwhile.