After around 4 days of frustrating yet rewarding work, I have set up a reasonably stable “homelab” server running the following stack:
- WireGuard for all connectivity
- Planka for personal project management and to-do lists
- Synapse/Matrix to connect all of my messaging services together in one place
- Nextcloud for a personal “Google Drive” alternative
- Caddy to route requests to these different services via reverse proxy
- dnsmasq to allow me to access services by subdomains (e.g. planka.home.arpa vs. nextcloud.home.arpa)
- Docker Compose to rule them all
I made many mistakes along the way, due in part to the particular technological limitations of each solution - some of which are entirely undocumented. With this writeup I hope to save fellow first-time homelabbers a bit of pain.
The WireGuard Server
WireGuard is an extremely lightweight traffic tunneling protocol that provides encryption and private networking with a beautifully low attack surface. Most people would probably call it a VPN.
The problem with WireGuard is that it’s so lightweight and elegant that troubleshooting is nearly impossible. Logging can be enabled, but only at the kernel level; most of my debugging consisted of installing tcpdump on the server and trying my hardest to find any inbound traffic on the right port at all.
Part of the problem was my choice of VPS. Initially I used Linode with a VPC (virtual private cloud), which provides private networking for a group of servers. I wanted to put WireGuard on one server and individual services on the others. That would have been an elegant solution - but Linode doesn’t allow arbitrary inbound traffic to the server in the VPC that you designate as its router. You’re allowed ports 22, 80, 443, and a few other well-known ports - which doesn’t help when you need to deconflict WireGuard with other services.
So I took it all down and moved everything to Vultr, running on a single server. Every service works happily that way - and the reason it works so well is Docker Compose, but I’m getting ahead of myself.
I recommend this WireGuard installation script. It sets up the initial server configuration and manages the installation for you afterwards. You can even modify the base configuration it generates by editing /etc/wireguard/params - which is important, because we’ll be hosting DNS on this server too (so change CLIENT_DNS_1 to the server’s IP address on the WireGuard interface).
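For illustration - assuming the script assigned the server 10.66.66.1 on its WireGuard interface, and with SERVER_WG_IPV4 as my guess at the companion variable name (CLIENT_DNS_1 is the one that matters):

```sh
# /etc/wireguard/params (excerpt)
SERVER_WG_IPV4=10.66.66.1   # the server's address on the WireGuard interface
CLIENT_DNS_1=10.66.66.1     # hand that same address to new clients as their DNS server
```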
Docker Compose
In theory, the only server software I have running outside of Docker is WireGuard. In practice, I confused myself while configuring dnsmasq (DNS server) and ran it on bare metal to troubleshoot, which ended with me resolving the core issue. Since I already had it running, I vowed to “come back to it later” and move it to a Docker container. That was about 5 months ago.
I set up each service (besides dnsmasq) in its own folder under /opt. Each service folder minimally contains a docker-compose.yml file. More often, it also contains the configuration files the service uses (mounted read-only into the container by the Compose file) and the caches/persistent storage the container needs (excluding Nextcloud, which as of writing uses about 350 GB of storage and as such keeps its data on an externally mounted volume).
Additionally, I set up systemd unit files for each Composified (Composited? Composed?) service, each consisting of a rather simple set of directives.
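A minimal sketch of one such unit, assuming the Docker Compose V2 plugin (the unit name and paths are illustrative):

```ini
# /etc/systemd/system/planka.service (sketch)
[Unit]
Description=Planka via Docker Compose
Requires=docker.service
After=docker.service network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/planka
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```

One systemctl enable --now planka.service later, and the stack comes back up after every reboot.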
I’m sure you could automate this. But I wasn’t going to do that with only 4 services running on Compose.
Let’s dive into each service.
Planka
We’re starting off easy, with a configuration as simple as a lone docker-compose.yml file.
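Mine boils down to the following sketch - the image tag, service names, and volume paths are assumptions, and XXXX is the arbitrary host-side port discussed below:

```yaml
services:
  planka:
    image: ghcr.io/plankanban/planka:latest
    restart: unless-stopped
    depends_on:
      - postgres
    ports:
      - "127.0.0.1:XXXX:1337"   # bound to localhost; Caddy proxies traffic in
    environment:
      - BASE_URL=https://planka.home.arpa
      - DATABASE_URL=postgresql://postgres@postgres/planka
      - SECRET_KEY=...          # openssl rand -hex 64
    volumes:
      - ./user-avatars:/app/public/user-avatars

  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_DB=planka
      - POSTGRES_HOST_AUTH_METHOD=trust   # see the security sidenote below
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
```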
As described by their official docs, the SECRET_KEY is OpenSSL-generated (openssl rand -hex 64).
I chose the host-side port (XXXX) arbitrarily (remember, 1337 is what the container believes it’s exposing). As long as this value is unique system-wide, it can be anything you’d like - just avoid conflicts with well-known ports, and ideally leave 80 and 443 open for Caddy (set up later). Note also that I prefaced the ports with 127.0.0.1 to force binding to localhost, as opposed to 0.0.0.0 (exposed to the outside world).
Quick security sidenote on POSTGRES_HOST_AUTH_METHOD=trust: trust means “skip authentication.” Yes, really. I can get away with it here because this Postgres container isn’t exposed back to the host - it’s accessed purely by the Planka server, and it holds only the one database that server uses, at which point it doesn’t much matter whether a password is involved. If you wanted to go the more computationally conservative route and set up one Postgres container shared by every service on your system, you really would want authentication. What I’ve done instead is set up one database container per service.
Nextcloud
This one is a bit more involved. Here’s my docker-compose.yml:
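What follows is a sketch rather than a verbatim copy: the database engine, image tags, and mount paths are assumptions, while the overall shape (two Nextcloud containers from one image, a dedicated database container, Block Storage for the heavy data) matches the description below.

```yaml
services:
  app:
    image: nextcloud:latest
    restart: unless-stopped
    depends_on:
      - db
    ports:
      - "127.0.0.1:XXXX:80"
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_DB         # no values given: Compose pulls these
      - POSTGRES_USER       # from .env (see below)
      - POSTGRES_PASSWORD
    volumes:
      - /mnt/blockstorage/nextcloud:/var/www/html   # the ~350 GB lives on Block Storage

  cron:
    image: nextcloud:latest   # same image as app...
    restart: unless-stopped
    entrypoint: /cron.sh      # ...different entrypoint: the maintenance scheduler
    depends_on:
      - db
    volumes:
      - /mnt/blockstorage/nextcloud:/var/www/html

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
    volumes:
      - ./postgres-data:/var/lib/postgresql/data    # DB data stays in the local directory
```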
A nifty feature of Docker Compose (Podman untested) is that you can provide names for required environment variables but not values (as I’ve done for every environment: block here), and it’ll hunt for alternative sources. In my case, that’s a .env file in the same directory (although if any of those variables were already defined in the shell environment, Docker would prefer those values over the .env file’s).
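A short illustration of the mechanism (the names match the sketch above):

```sh
# .env - sits next to docker-compose.yml
POSTGRES_DB=nextcloud
POSTGRES_USER=nextcloud
POSTGRES_PASSWORD=change-me
```

Had POSTGRES_PASSWORD already been exported in the shell, that value would take precedence and the .env entry would be ignored.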
The Compose file specifies that the database stores its persistent data in the local directory, but everything else goes to the “Block Storage” volume I lease from my VPS provider. It also creates two containers from the same image - the difference is in their entrypoint (Nextcloud requires a maintenance task scheduler, and conveniently the Nextcloud image ships a cron setup for that purpose).
While I do keep it under /opt/nextcloud for convenience, the config.php file that controls most Nextcloud settings is not mounted locally. I had some bizarre issues whenever I tried that, and - yet again - told myself I’d “come back to it later” 5 months ago (in hindsight, the problem was perhaps that I mounted it read-only when it must, in fact, be writable).
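In place of the full file, here’s a sketch of its proxy-related section - the keys are standard Nextcloud options, the values are illustrative, and the many instance-specific keys a real config.php accumulates (instanceid, secret, database credentials) are omitted:

```php
<?php
$CONFIG = array (
  'trusted_domains' =>
  array (
    0 => 'nextcloud.home.arpa',
  ),
  'trusted_proxies' =>
  array (
    0 => '127.0.0.1',               // Caddy connects from localhost
  ),
  'overwritehost' => 'nextcloud.home.arpa',
  'overwriteprotocol' => 'https',   // Caddy terminates TLS before Nextcloud sees it
  'overwrite.cli.url' => 'https://nextcloud.home.arpa',
);
```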
Many of the options here are designed to mitigate the problems that crop up when you reverse proxy through Docker and Caddy.
P.S. Whenever the config file doesn’t cut it, you can run the OCC utility via docker exec --user www-data nextcloud-app-1 php occ [rest of command]. If nextcloud-app-1 isn’t your container’s name, replace that part.
Synapse
Of all these services, Synapse is the one I’m most familiar with, so I made a more thoroughly customized setup. Here’s my Compose file:
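In sketch form, with the bridges elided and the image tags, volume paths, and database settings assumed:

```yaml
services:
  synapse:
    build:
      context: .
      dockerfile: synapse.Dockerfile   # the custom image below
    restart: unless-stopped
    depends_on:
      - postgres
    ports:
      - "8448:8448/tcp"
    volumes:
      - ./files:/data   # homeserver.yaml, signing keys, bridge registrations

  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_USER=synapse
      - POSTGRES_DB=synapse
      - POSTGRES_PASSWORD                                 # from .env
      - POSTGRES_INITDB_ARGS=--encoding=UTF8 --locale=C   # Synapse wants a C-locale database
    volumes:
      - ./postgres-data:/var/lib/postgresql/data

  # ...plus one service per bridge, each with its own image and config volume
```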
As you can see, I’ve got a lot going on here - like a custom-built Synapse image via Dockerfile. Here’s that synapse.Dockerfile:
```dockerfile
FROM docker.io/matrixdotorg/synapse:latest
```
Coupled with a config change to include the correct module, the image built from this Dockerfile automatically accepts invites to new chats (appropriate in my case, where I’m using Synapse just to bridge to my other services, not exposing it publicly).
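Sketched out, assuming the matrix-org auto-accept-invite module (the specific module and its config key are my guesses):

```dockerfile
# the install step following the FROM line above (assumed)
RUN pip install synapse-auto-accept-invite
```

```yaml
# homeserver.yaml excerpt (sketch)
modules:
  - module: synapse_auto_accept_invite.InviteAutoAccepter
    config:
      accept_invites_only_for_direct_messages: false   # accept room invites, not just DMs
```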
You may notice that I was rather uncreative with the ports in this one (8448:8448/tcp being the default). Unfortunately, I underestimated the difficulty of changing those values after the server went online. Eventually I let Synapse win that particular fight and left the ports alone - but if you change them before you set anything up, I don’t see how it could fail.
I’ve also included a bunch of bridges in this Compose file - I won’t include their configurations here in the interest of space. There’s nothing really Docker-specific about them besides basic setup. That’s also true of the main Synapse homeserver.yaml config file.
The only other thing I should mention is that all the bridge registration files were copied under /opt/matrix/files, which is mounted as /data in the container. That made things a lot easier, if slightly less automated (as opposed to setting up mounts to each bridge’s configuration folder, which ended in disaster when I first tried it).
Caddy
Caddy, like WireGuard, is just there to provide infrastructure. Think of it as a switch for the web traffic reaching your server - it proxies to the correct service depending on the subdomain you accessed it with. Here’s the Compose file I use:
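Sketched, with the image tag and mount paths assumed - cap_add and network_mode are the parts that matter:

```yaml
services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    network_mode: host   # reach services bound to the host's 127.0.0.1
    cap_add:
      - NET_ADMIN
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./certs:/etc/caddy/certs:ro   # the self-signed certificates generated below
      - ./data:/data                  # Caddy's internal storage
      - ./config:/config
```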
Providing the NET_ADMIN capability and network_mode: host mitigated every issue I experienced - without those two concessions, Caddy was unable to reach any of the ports the other services exposed to the host system, so there was no reverse proxying at all. Caddy is unfortunately less containerized than the other services because of this.
Here’s my Caddyfile:
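Or rather, a representative sketch of it - one block per service, where each XXXX stands for that service’s localhost-bound host port, and the certificate paths and matrix subdomain are my assumptions:

```
planka.home.arpa {
	tls /etc/caddy/certs/home.arpa.crt /etc/caddy/certs/home.arpa.key
	reverse_proxy 127.0.0.1:XXXX
}

nextcloud.home.arpa {
	tls /etc/caddy/certs/home.arpa.crt /etc/caddy/certs/home.arpa.key
	reverse_proxy 127.0.0.1:XXXX
}

matrix.home.arpa {
	tls /etc/caddy/certs/home.arpa.crt /etc/caddy/certs/home.arpa.key
	reverse_proxy 127.0.0.1:8448
}
```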
You’ll notice that I’m manually specifying certificates… even though the main selling point of Caddy is that HTTPS is automatic. Unfortunately that doesn’t extend to self-signed certificates (required in the case of localhost/home.arpa domain usage). Here’s the script I used to generate those certificates:
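Reconstructed as a sketch (the output file names and ten-year lifetime are assumptions):

```sh
#!/bin/sh
# Self-signed certificate for the home.arpa names, with SANs taken from openssl.cnf
openssl req -x509 -nodes -newkey rsa:4096 \
    -keyout home.arpa.key \
    -out home.arpa.crt \
    -days 3650 \
    -config openssl.cnf
```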
And my openssl.cnf:
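Again as a sketch - the section layout is standard OpenSSL config syntax, while the exact SAN list is my guess based on the services involved:

```ini
[ req ]
default_bits       = 4096
prompt             = no
distinguished_name = req_dn
x509_extensions    = v3_req

[ req_dn ]
CN = home.arpa

[ v3_req ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = planka.home.arpa
DNS.2 = nextcloud.home.arpa
DNS.3 = matrix.home.arpa
IP.1  = X.X.X.X
```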
Where IP.1 is the server’s IP address under the WireGuard interface.
dnsmasq
As mentioned before, I have dnsmasq running on bare metal. Here’s my configuration for that:
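In sketch form - domain-needed and the 1.1.1.1 upstream are the load-bearing lines, and the rest are typical companions that may differ from my exact file:

```
# never forward bare hostnames upstream
domain-needed
# never forward reverse lookups for private IP ranges
bogus-priv
# upstream resolver for everything not in /etc/hosts
server=1.1.1.1
# listen on the WireGuard interface address
listen-address=X.X.X.X
# log every lookup - handy while debugging
log-queries
```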
“But wait,” I hear you asking. “Shouldn’t it define records? Isn’t it just passing all traffic to 1.1.1.1 and logging it?”
dnsmasq actually reads /etc/hosts and creates records from it on the fly. That’s why I love dnsmasq: all of the domain name resolution, none of the zone files. (All my memories of bind9 are universally negative.)
My /etc/hosts reads as follows:
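Approximately - the loopback lines are stock Debian-style boilerplate, and the matrix name is my guess at the third service:

```
127.0.0.1	localhost
127.0.1.1	wireguard wireguard

X.X.X.X	planka.home.arpa
X.X.X.X	nextcloud.home.arpa
X.X.X.X	matrix.home.arpa
```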
Where X.X.X.X is your server’s IP address under the WireGuard interface. You just repeat it for every applicable service, and let Caddy do the actual work from there.
(server=1.1.1.1 is the fallback DNS server for when your clients try to access anything besides the homelab services.)
Summary
It’s been great running my own “homelab” server via VPS. I get to manage my own infrastructure, have reasonable faith in the software that handles my most personal information, and learn some new things along the way.
There’s also the great cost savings! I used to pay $22 per month combined for all these services (where they even had costs to begin with), and now that it’s all on my VPS, I pay… $36 per month.
Admittedly, cost efficiency didn’t factor into my decision to try “homelabbing.” And it will be significantly cheaper once I can actually remove the scare quotes from “homelabbing,” because I plan to migrate to “on-prem” (once I move to a new place, I’ll buy a PC and run it all off that). Most of my expenses currently go towards storage ($18 per month for 400 gigabytes through my current VPS provider), which I can eliminate by plugging a 1 terabyte hard drive into my real (planned) homelab setup.
But aside from that one hang-up, I thoroughly enjoy this setup, and I doubt I’ll move back to the old way anytime soon. Hopefully my documentation here can help more people set up their own “homelabs” (or real ones too!).
“WireGuard” and the “WireGuard” logo are registered trademarks of Jason A. Donenfeld.