A Homelabbing Experiment

After around 4 days of frustrating yet rewarding work, I have set up a reasonably stable “homelab” server running the following stack:

  • WireGuard for all connectivity
  • Planka for personal project management and to-do lists
  • Synapse/Matrix to connect all of my messaging services together in one place
  • Nextcloud for a personal “Google Drive” alternative
  • Caddy to route requests to these different services via “reverse” proxy
  • dnsmasq to allow me to access services by subdomains (e.g. planka.home.arpa vs. nextcloud.home.arpa)
  • Docker Compose to rule them all

I made many mistakes along the way due in part to the particular technological limitations of each solution, including some limitations which are entirely undocumented. With this writeup I hope to save fellow first-time homelabbers a bit of pain.

The WireGuard Server

WireGuard is an extremely lightweight traffic tunneling protocol that provides encryption and private networking with a beautifully low attack surface. Most people would probably call it a VPN.

The problem with WireGuard is that it’s so lightweight and elegant that troubleshooting is nearly impossible. Logging can be enabled, but only at the kernel level; most of my debugging consisted of installing tcpdump on my server and trying my hardest to find any inbound traffic on the right port at all.
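
When I say “find any traffic,” I mean something like the following - a sketch of the commands I leaned on, assuming the default WireGuard port of 51820 and a public interface named eth0 (adjust both to your setup):

# Watch for inbound WireGuard packets (it's all UDP)
tcpdump -ni eth0 udp port 51820

# Check whether the interface is up and peers have completed a recent handshake
wg show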

Part of the problem was my choice of VPS. Initially I used Linode with a VPC (virtual private cloud), which provides private networking for a group of servers. I wanted to put WireGuard on one server and individual services on the others. That would have been an elegant solution - but Linode doesn’t allow arbitrary inbound traffic to the server in the VPC that you designate as its router. You’re allowed ports 22, 80, 443, and a few other well-known ports - which doesn’t help when deconflicting WireGuard with other services.

So I took it all down and put it on Vultr instead, running everything on a single server. Every service works that way, and the reason it works so well is Docker Compose - but I’m getting ahead of myself.

I recommend this WireGuard installation script. It sets up the initial server configuration and manages the installation for you afterwards. You can even modify the base configuration it generates by changing /etc/wireguard/params - which is important, because we’ll be hosting DNS on this server too (so change CLIENT_DNS_1 to the server’s IP address on the WireGuard interface).
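
Your params file will vary with the script version, so treat this as illustrative, but on my server the relevant part of /etc/wireguard/params looks roughly like the following (10.66.66.1 stands in for the server’s address on the WireGuard interface; check your own file for the exact variable names):

# /etc/wireguard/params (excerpt, illustrative values)
# Point new clients at the WireGuard server itself for DNS,
# since dnsmasq will be listening there (see the dnsmasq section below).
SERVER_WG_IPV4=10.66.66.1
CLIENT_DNS_1=10.66.66.1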

Docker Compose

In theory, the only server software I have running outside of Docker is WireGuard. In practice, I confused myself while configuring dnsmasq (DNS server) and ran it on bare metal to troubleshoot, which ended with me resolving the core issue. Since I already had it running, I vowed to “come back to it later” and move it to a Docker container. That was about 5 months ago.

I set up each service (besides dnsmasq) in its own folder under /opt. Each service folder under /opt minimally contains a docker-compose.yml file. More often, they also contain the configuration files which the service uses (mounted by the Compose file as read-only in the container itself) and the caches/persistent storage needed by the container (excluding Nextcloud, which as of writing uses about 350 GB of storage and as such keeps its data on externally mounted block storage).
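
To make that concrete, the layout ends up looking roughly like this (the planka and caddy folder names are my choice; the contents reflect the Compose files shown later):

/opt
├── planka
│   └── docker-compose.yml
├── nextcloud
│   ├── docker-compose.yml
│   ├── .env
│   └── db_data/
├── matrix
│   ├── docker-compose.yml
│   ├── synapse.Dockerfile
│   ├── files/          # homeserver.yaml, bridge registrations, media
│   ├── schemas/        # postgres data
│   └── mautrix-*/      # per-bridge configuration folders
└── caddy
    ├── docker-compose.yml
    ├── Caddyfile
    └── certificates/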

Additionally, I set up systemd unit files for each Composified (Composited? Composed?) service, each consisting of a rather simple set of directives:

[Unit]
Description=SERVICE_NAME
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/opt/SERVICE_NAME
ExecStartPre=/usr/bin/docker compose -f docker-compose.yml down
ExecStart=/usr/bin/docker compose -f docker-compose.yml up
ExecStop=/usr/bin/docker compose -f docker-compose.yml down

[Install]
WantedBy=multi-user.target

I’m sure you could automate this. But I wasn’t going to do that with only 4 services running on Compose.
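
If you did want to automate it, a minimal sketch (assuming the template above is saved as compose@.template with SERVICE_NAME left in as the placeholder) could be:

#!/usr/bin/env bash
# Hypothetical helper: stamp out one unit file per Compose directory under /opt.
set -e

for dir in /opt/*/; do
    name=$(basename "$dir")
    # Skip folders that don't hold a Compose project
    [ -f "$dir/docker-compose.yml" ] || continue
    sed "s/SERVICE_NAME/$name/g" compose@.template > "/etc/systemd/system/$name.service"
done

systemctl daemon-reload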

Let’s dive into each service.

Planka

We’re starting off easy: Planka’s entire configuration is just a docker-compose.yml file.

services:
  planka:
    image: ghcr.io/plankanban/planka:1.17.4
    restart: on-failure
    volumes:
      - user-avatars:/app/public/user-avatars
      - project-background-images:/app/public/project-background-images
      - attachments:/app/private/attachments
    ports:
      - 127.0.0.1:XXXX:1337
    environment:
      - BASE_URL=https://planka.home.arpa
      - DATABASE_URL=postgresql://postgres@postgres/planka
      - SECRET_KEY=nottellingu
      - DEFAULT_ADMIN_EMAIL=[email protected]
      - DEFAULT_ADMIN_PASSWORD=notmypassword
      - DEFAULT_ADMIN_NAME=Your Name
      - DEFAULT_ADMIN_USERNAME=yours
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:14-alpine
    restart: on-failure
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=planka
      - POSTGRES_HOST_AUTH_METHOD=trust
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d planka"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  user-avatars:
  project-background-images:
  attachments:
  db-data:

As described in their official docs, the SECRET_KEY is generated with OpenSSL (openssl rand -hex 64).

I chose the host-side port (XXXX) arbitrarily (remember, 1337 is what the container believes it’s exposing). As long as this value is unique system-wide, it can be anything you’d like. Just avoid conflicts with well-known ports - and, ideally, leave 80 and 443 open for Caddy (set up later). Note also that I prefixed the ports entry with 127.0.0.1 to force binding to localhost as opposed to 0.0.0.0 (which would expose it to the outside world).

Quick security sidenote on POSTGRES_HOST_AUTH_METHOD=trust: trust means “skip authentication.” Yes, really. I can use it here because this postgres container isn’t exposed back to the host - it’s accessed purely by the Planka server, and only has one database used by that server, at which point it doesn’t really matter whether a password is used or not. If you wanted to go down a more computationally-conservative route and set up one postgres container for use by every service on your system… you really would want authentication for that. What I’ve done instead is set up one database container per service.
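
If you want to verify both of those claims on the host, something like this should show the Planka web port listening only on 127.0.0.1 and nothing listening for Postgres at all (since the postgres container publishes no ports):

# Substitute your chosen host-side port for XXXX
ss -tlnp | grep -E ':XXXX|:5432'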

Nextcloud

This one is a bit more involved. Here’s my docker-compose.yml:

services:
  db:
    image: mariadb:11.3
    restart: unless-stopped
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    volumes:
      - ./db_data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD
      - MYSQL_PASSWORD
      - MYSQL_DATABASE
      - MYSQL_USER

  app:
    image: nextcloud:29
    restart: unless-stopped
    ports:
      - 127.0.0.1:XXXX:80
    links:
      - db
    volumes:
      - /mnt/blockstorage/nextcloud/nextcloud_data:/var/www/html
    environment:
      - MYSQL_PASSWORD
      - MYSQL_DATABASE
      - MYSQL_USER
      - MYSQL_HOST

  cron:
    image: nextcloud:29
    restart: unless-stopped
    volumes:
      - /mnt/blockstorage/nextcloud/nextcloud_data:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - app
      - db

A nifty feature of Docker Compose (Podman untested) is that you can provide the names of required environment variables without values (as I’ve done for every environment: block here) and it’ll hunt for alternative sources. In my case, that’s a .env file in the same directory (although if any of those variables were already defined in my environment, Docker wouldn’t bother with the .env file at all).
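
For illustration, that .env file is nothing more than KEY=value pairs matching the names referenced in the Compose file (values here are placeholders):

# /opt/nextcloud/.env - placeholder values only
MYSQL_ROOT_PASSWORD=changeme-root
MYSQL_PASSWORD=changeme
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
MYSQL_HOST=db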

The Compose file specifies that the database stores its persistent data in a local directory, while everything else lives on the “Block Storage” option I lease from my VPS provider. It also creates two containers from the same image - the difference is their entrypoint (Nextcloud requires a maintenance task scheduler, and conveniently the Nextcloud image provides a cron setup for that purpose).

While I do keep a copy under /opt/nextcloud for convenience, the config.php file that controls most Nextcloud settings is not mounted locally. I had some bizarre issues whenever I tried that, and - yet again - told myself I’d “come back to it later” 5 months ago (in hindsight, the problem was perhaps that I mounted it as read-only when it must, in fact, be writable). Here’s that config.php:

<?php
$CONFIG = array (
  'htaccess.RewriteBase' => '/',
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'apps_paths' =>
  array (
    0 =>
    array (
      'path' => '/var/www/html/apps',
      'url' => '/apps',
      'writable' => false,
    ),
    1 =>
    array (
      'path' => '/var/www/html/custom_apps',
      'url' => '/custom_apps',
      'writable' => true,
    ),
  ),
  'upgrade.disable-web' => true,
  'instanceid' => 'redacted',
  'passwordsalt' => 'hush',
  'secret' => 'terces',
  'trusted_proxies' =>
  array (
    0 => 'yourserverip',
  ),
  'trusted_domains' =>
  array (
    0 => 'nextcloud.home.arpa',
  ),
  'datadirectory' => '/var/www/html/data',
  'dbtype' => 'mysql',
  'version' => '29.0.1.1',
  'overwrite.cli.url' => 'https://nextcloud.home.arpa',
  'overwriteprotocol' => 'https',
  'dbname' => 'nextcloud',
  'dbhost' => 'db',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => 'nextcloud',
  'dbpassword' => 'thatsasecret',
  'maintenance_window_start' => 9,
  'installed' => true,
  'loglevel' => 2,
  'maintenance' => false,
);

Many of the options here are designed to mitigate the problems that crop up when you reverse proxy through Docker and Caddy.

P.S. Whenever the config file doesn’t cut it, you can run the OCC utility via docker exec --user www-data nextcloud-app-1 php occ [rest of command]. If nextcloud-app-1 isn’t your container’s name, replace that part.
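
For example, two occ invocations I find myself reaching for (the subcommands are standard Nextcloud; the extra domain is just an illustration):

# Toggle maintenance mode before poking at storage or the database
docker exec --user www-data nextcloud-app-1 php occ maintenance:mode --on
docker exec --user www-data nextcloud-app-1 php occ maintenance:mode --off

# Add a second trusted domain without editing config.php by hand
docker exec --user www-data nextcloud-app-1 php occ config:system:set trusted_domains 1 --value=cloud.home.arpa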

Synapse

I’m most familiar with Synapse out of all of these services, so I made a more thoroughly customized setup. Here’s my compose file:

services:
  synapse:
    build:
      context: .
      dockerfile: synapse.Dockerfile
    restart: unless-stopped
    environment:
      - SYNAPSE_CONFIG_PATH=/data/homeserver.yaml
    volumes:
      - ./files:/data
    depends_on:
      - db
    ports:
      - 8448:8448/tcp

  db:
    image: docker.io/postgres:12-alpine
    environment:
      - POSTGRES_USER=synapse
      - POSTGRES_PASSWORD=nope
      - POSTGRES_INITDB_ARGS=--encoding=UTF-8 --lc-collate=C --lc-ctype=C
    volumes:
      - ./schemas:/var/lib/postgresql/data

  mautrix-discord:
    image: dock.mau.dev/mautrix/discord:latest
    restart: unless-stopped
    volumes:
      - ./mautrix-discord:/data
    depends_on:
      - synapse
      - db

  mautrix-telegram:
    image: dock.mau.dev/mautrix/telegram:latest
    restart: unless-stopped
    volumes:
      - ./mautrix-telegram:/data
    depends_on:
      - synapse
      - db

  mautrix-whatsapp:
    image: dock.mau.dev/mautrix/whatsapp:latest
    restart: unless-stopped
    volumes:
      - ./mautrix-whatsapp:/data
    depends_on:
      - synapse
      - db

As you can see, I’ve got a lot going on here - like a custom-built Synapse image via Dockerfile. Here’s that synapse.Dockerfile:

FROM docker.io/matrixdotorg/synapse:latest

RUN pip install synapse-auto-accept-invite

Coupled with a config change to include the correct module, the image built from this Dockerfile automatically accepts invites to new chats (appropriate in my case where I am using this just to bridge to my other services, and not publicly exposing my Synapse instance).
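
For reference, that config change is a modules block in homeserver.yaml; going off the synapse-auto-accept-invite README, it looks roughly like this (double-check the module path and options against the project’s current docs):

# homeserver.yaml (excerpt)
modules:
  - module: synapse_auto_accept_invite.InviteAutoAccepter
    config:
      # Accept every invite, not only direct messages
      accept_invites_only_for_direct_messages: false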

You may notice that I was rather uncreative with the ports in this one (8448:8448/tcp being the default). Unfortunately, I underestimated the difficulty of changing those values after the server went online. Eventually I let Synapse win that particular fight and left the ports alone, but if you change them before you set anything up, I don’t see how it could fail.

I’ve also included a bunch of bridges in this Compose file - I won’t include their configurations here in the interest of space. There’s nothing really Docker-specific about them besides basic setup. That’s also true of the main Synapse homeserver.yaml config file.

The only other thing I should mention is that all the bridge registration files were copied under /opt/matrix/files, which is mounted as /data in the container. That made things a lot easier, if slightly less automated (as opposed to setting up mounts to each bridge’s configuration folder, which ended in disaster when I first tried it).
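
Synapse then picks those registrations up through the app_service_config_files setting in homeserver.yaml, which ends up as a list of paths inside the container (the file names here are just examples - use whatever you saved each registration as):

# homeserver.yaml (excerpt)
app_service_config_files:
  - /data/mautrix-discord-registration.yaml
  - /data/mautrix-telegram-registration.yaml
  - /data/mautrix-whatsapp-registration.yaml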

Caddy

Caddy, similar to WireGuard, is just there to provide infrastructure. Think of it as a switch for the web traffic reaching your server - it proxies to the correct service depending on the subdomain you access it with. Here’s the Compose file I use:

services:
  caddy:
    image: caddy:2-alpine
    cap_add:
      - NET_ADMIN
    volumes:
      - ./data:/data
      - ./config:/config
      - ./certificates:/var/certs
      - ./Caddyfile:/etc/caddy/Caddyfile
    network_mode: host

Providing the NET_ADMIN capability and network_mode: host mitigated every issue I experienced - without those two concessions, Caddy was unable to reach any of the other ports exposed to the host system, so there was no reverse proxying at all. It is unfortunately less containerized than the other services because of this.

Here’s my Caddyfile:

planka.home.arpa {
    tls /var/certs/server.crt /var/certs/server.key

    reverse_proxy localhost:XXXX
}

matrix.home.arpa {
    tls /var/certs/server.crt /var/certs/server.key

    reverse_proxy /_matrix/* localhost:8448
    reverse_proxy /_synapse/client/* localhost:8448
}

nextcloud.home.arpa {
    tls /var/certs/server.crt /var/certs/server.key

    request_body {
        max_size 1TB
    }

    reverse_proxy localhost:XXXX
}

You’ll notice that I’m manually specifying certificates… even though the main selling point of Caddy is that HTTPS is automatic. Unfortunately that doesn’t extend to self-signed certificates (required in the case of localhost/home.arpa domain usage). Here’s the script I used to generate those certificates:

#! /usr/bin/env bash

# This script is presented as an archived version of the commands which
# succeeded in setting up a self-signed CA with a wildcard certificate for
# the home.arpa domain.

# Assumes the existence of a correct openssl.cnf in the CWD.

set -e

openssl genrsa -out ca.key 4096
openssl genrsa -out server.key 2048
openssl req -new -nodes -key server.key -out server.csr -config openssl.cnf
openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 -out ca.pem -config openssl.cnf
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out server.crt -days 825 -sha256

And my openssl.cnf:

[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_req

[ req_distinguished_name ]
C = US
ST = XX
L = Anytown
O = Your Name
CN = *.home.arpa

[ req_ext ]
subjectAltName = @alt_names

[ v3_req ]
subjectAltName = @alt_names

[ alt_names ]
IP.1 = X.X.X.X

Where IP.1 is the server’s IP address under the WireGuard interface.
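
To sanity-check the output, I like to dump the signed certificate and confirm the wildcard subject and the SAN entry came out as intended:

# Look for CN = *.home.arpa and a Subject Alternative Name containing your WireGuard IP
openssl x509 -in server.crt -noout -subject
openssl x509 -in server.crt -noout -text | grep -A 1 'Subject Alternative Name'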

dnsmasq

As mentioned before, I have dnsmasq running on bare metal. Here’s my configuration for that:

domain-needed
bogus-priv
log-queries
server=1.1.1.1

“But wait,” I hear you asking. “Shouldn’t it define records? Isn’t it just passing all traffic to 1.1.1.1 and logging it?”

dnsmasq actually reads /etc/hosts and creates records from that on the fly. That’s why I love dnsmasq: all of the domain name resolution, none of the zone files. (All my memories of bind9 are universally negative.)

My /etc/hosts reads as follows:

127.0.1.1 wireguard wireguard
127.0.0.1 localhost
X.X.X.X wg.home.arpa
X.X.X.X planka.home.arpa
X.X.X.X matrix.home.arpa
X.X.X.X nextcloud.home.arpa

Where X.X.X.X is your server’s IP address under the WireGuard interface. You just repeat it for every applicable service, and let Caddy do the actual work from there.

(server=1.1.1.1 is the fallback DNS server for when your clients try to access anything besides the homelab services.)
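
Once a client is connected over WireGuard, a quick way to confirm the whole chain is to query dnsmasq directly (substituting your server’s WireGuard IP for X.X.X.X):

# Should answer with X.X.X.X, served from /etc/hosts
dig @X.X.X.X planka.home.arpa +short

# Anything else gets forwarded upstream to 1.1.1.1
dig @X.X.X.X example.com +short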

Summary

It’s been great running my own “homelab” server via VPS. I get to manage my own infrastructure, have reasonable faith in the software that handles my most personal information, and learn some new things along the way.

There’s also the great cost savings! I used to pay $22 per month combined for all these services (where they even had costs to begin with) and now that it’s on my VPS, I pay… $36 per month.

Admittedly, cost efficiency didn’t factor into my decision to try “homelabbing.” And it will be significantly cheaper once I actually get to remove the scoff quotes from “homelabbing,” because I plan to migrate to “on-prem” (once I move to a new place, I’ll buy a PC and run it off that). Most of my expenses currently go towards storage ($18 per month for 400 gigabytes through my current VPS provider), which I can eliminate by plugging a 1 terabyte hard drive into my real (planned) homelab setup.

But aside from that one hang-up, I do thoroughly enjoy this setup, and I doubt I’ll move back to the old way anytime soon. Hopefully my documentation here can help more people set up their own “homelabs” (or real ones too!).


“WireGuard” and the “WireGuard” logo are registered trademarks of Jason A. Donenfeld.