Homelab Tour
In this post, I'll give a walkthrough of my homelab, including what hardware and software I have running and the design choices I made while building it. It's kinda labgore, but that's how I like it.
Hardware
Let's start off with the hardware.
- ISP: Sonic.com, 1G symmetric
I can't sing Sonic's praises loudly enough. I'm eternally grateful to them for saving me from Comcast and AT&T.
- HP Thin Client T730 (Farore)
- CPU: RX-427BB (4 cores, 2.7 GHz)
- 4GB RAM
- 120GB SSD
- Intel 1G Dual NIC
I bought this new from eBay to serve as something lightweight and reliable. Very happy with it.
- Old gaming PC frankenstein (Hyrule)
- AMD FX-6300 (6 cores, 4GHz)
- 24GB RAM (DDR3 oof)
- Nvidia GTX 960
- 120GB internal SSD
- 1TB external SSD
This is a combination of parts from old gaming PCs. It's very power hungry, so ideally I suspend it when I'm not using it, although that precludes hosting anything on it where uptime is important.
- Netgear GS108Ev3 (Mido)
- 8 port managed switch
A user on /r/homelab kindly sent me this for free many years ago after I posted a question about buying a similar one. I'm extremely grateful for that, because this switch is still the backbone of my lab today, and enabled me to learn so much.
Software
The running theme in my lab is to do everything the hard way. I eschew high-level system-management tools in favor of low-level fundamental utilities. For example, NetworkManager, Proxmox, and OPNsense are out. Custom python init scripts, libvirt, and nftables are in (respectively). This approach has pros and cons.
Pros:
- Flexibility to configure your system however you want
- High-level tools have their own learning curve, and yet they don't free you from having to understand their low-level backend either. So you might as well learn the low-level backend thoroughly and reap the flexibility that comes with it.
- Lots of learning experiences
- You can put your configuration and tools in git
Cons:
- More difficult (although you could possibly argue it's less difficult in some sense)
- No nice web interface to make quick changes
- Not very battle tested (possible bugs, security issues)
That was all very abstract and vague. Let's make it a little more concrete.
The magic of my network all happens in my little HP T730 box, "Farore", which works as both a router and a host for high-uptime services. It's a plain Arch Linux install with a bunch of custom configuration to turn it into a router. It started several years ago when I followed this Ars guide, but it has since evolved into something a little cooler (more bespoke). Here are some of the features I've added.
nftables
I switched from iptables as in the guide to nftables. An nftables config file is surprisingly pleasant to read and write (compared to the hellscape that is iptables). It's so good that I use it on every one of my systems except those that use docker, because docker creates iptables rules which interact with nftables rules in ways I don't understand yet.
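For flavor, here's a stripped-down sketch of what a router-style ruleset looks like. It's nowhere near my actual ruleset, and "wan0"/"lan0" are placeholder interface names.

```
# Minimal router-flavored nftables ruleset, written via a heredoc so it can
# live at /etc/nftables.conf. "wan0" and "lan0" are placeholder interfaces.
cat > /etc/nftables.conf <<'EOF'
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iifname "lo" accept
        ct state established,related accept
        # SSH plus DNS/DHCP, but only from the LAN side
        iifname "lan0" tcp dport 22 accept
        iifname "lan0" udp dport { 53, 67 } accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        # LAN hosts may reach the internet; only return traffic comes back in
        iifname "lan0" oifname "wan0" accept
        iifname "wan0" oifname "lan0" ct state established,related accept
    }
}

table inet nat {
    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        oifname "wan0" masquerade
    }
}
EOF
nft -f /etc/nftables.conf
```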
nftrace is an extremely useful tool for debugging firewall issues.
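My usual debugging flow is roughly: tag the packets I care about with `meta nftrace set 1` in a chain that runs early, then watch them walk the ruleset with `nft monitor trace`. Something like this, where the source address is a placeholder:

```
# Tag packets from one misbehaving host before the normal chains run,
# then watch every rule they hit. 192.0.2.10 is a placeholder address.
nft add table inet debug
nft add chain inet debug trace_pre '{ type filter hook prerouting priority -350; }'
nft add rule inet debug trace_pre ip saddr 192.0.2.10 meta nftrace set 1
nft monitor trace
```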
VLANs
I recently added VLANs that isolate my home network from my homelab network. Before VLANs, they were either not isolated, or the homelab network was in a double NAT behind Farore and then an ISP router.
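On the Linux side, the VLAN sub-interfaces are just iproute2 commands (or whatever your network init does with them); the matching 802.1Q port tagging lives on the managed switch. The parent NIC name, VLAN IDs, and subnets below are placeholders rather than my real values:

```
# VLAN sub-interfaces on the trunk port: 10 = home, 20 = homelab (made-up IDs)
ip link add link eth1 name eth1.10 type vlan id 10
ip link add link eth1 name eth1.20 type vlan id 20
ip addr add 192.168.10.1/24 dev eth1.10
ip addr add 192.168.20.1/24 dev eth1.20
ip link set eth1.10 up
ip link set eth1.20 up
```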
VPNs and rule-based routing madness
I wanted to proxy everything in the home network behind Mullvad VPN, but it was surprisingly tricky. It was made difficult by two confounding factors: (1) I only wanted traffic from the home VLAN to go through the VPN, not traffic from the homelab VLAN, and (2) I was confused by how NAT interacts with the VPN.
The wireguard website recommends network namespaces as the modern solution for forcing traffic through the VPN, but I found that to be heavy-handed and tedious. Instead, I used rule-based routing to send everything from the home VLAN through the wireguard interface, while letting everything else go out the WAN interface directly as normal.
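Here's the shape of it, with placeholder names: wg0 for the Mullvad wireguard interface, 192.168.10.0/24 for the home VLAN, and routing table 100 chosen arbitrarily.

```
# Policy routing sketch: traffic sourced from the home VLAN gets its route
# lookup from table 100, whose only default route points into the tunnel.
# Everything else (homelab VLAN, the router itself) falls through to the
# main table and leaves via the WAN as usual.
ip route add default dev wg0 table 100
ip rule add from 192.168.10.0/24 table 100 priority 1000
```

(The NAT half of the confusion mostly comes down to the fact that home-VLAN sources also need to be masqueraded onto the wireguard interface, just like ordinary LAN traffic gets masqueraded onto the WAN, because Mullvad only routes the single tunnel address.)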
DNS
I have two dnsmasq instances running, one for the home network and one for the homelab network, each one doing both DNS and DHCP.
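The homelab instance looks roughly like this; the home instance is the same shape on its own VLAN interface. The interface name, domain, and address ranges are placeholders:

```
# One of the two dnsmasq instances. bind-interfaces keeps the two instances
# from fighting over the wildcard address.
cat > /etc/dnsmasq-homelab.conf <<'EOF'
interface=eth1.20
bind-interfaces
domain=homelab.lan
local=/homelab.lan/
dhcp-range=192.168.20.50,192.168.20.150,12h
dhcp-option=option:router,192.168.20.1
EOF
dnsmasq --conf-file=/etc/dnsmasq-homelab.conf
```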
Port-forwarding ❌ Reverse-proxy ✔️
At some point I realized that I don't even need to port forward anymore. All of my services either run directly on Farore, the machine with the WAN uplink, or are HTTP-based and can therefore be forwarded by running a reverse proxy on the router itself.
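Since that reverse proxy is Nginx Proxy Manager (see the services list below), which is plain nginx under the hood, the hand-written equivalent for a single service looks roughly like this. The hostname, certificate paths, and backend address are placeholders:

```
cat > /etc/nginx/conf.d/jellyfin.conf <<'EOF'
server {
    listen 443 ssl;
    server_name jellyfin.pigasus.net;

    # wildcard cert from the SSL section below
    ssl_certificate     /etc/letsencrypt/live/pigasus.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pigasus.net/privkey.pem;

    location / {
        # forward to the backend on the homelab VLAN
        proxy_pass http://192.168.20.10:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
```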
SSL
I have certbot renewing wildcard certificates for both *.pigasus.net and *.<local-prefix>.pigasus.net, where I use the local prefix to point to services that are only available locally. The local DNS records also only exist in the homelab dnsmasq instance. As I mentioned before, all HTTP/HTTPS traffic is routed through the reverse proxy running on my router, which handles SSL termination.
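Because wildcards can only be issued via the DNS-01 challenge, the renewal setup is shaped roughly like the command below. The DNS plugin shown (Cloudflare) is purely illustrative; whichever plugin matches your DNS host goes there, and the credentials path is made up.

```
# Wildcard issuance sketch: DNS-01 challenge through a certbot DNS plugin.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/certbot/dns.ini \
  -d '*.pigasus.net' \
  -d '*.<local-prefix>.pigasus.net'
```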
Software repos
Farore hosts a docker registry and an arch package repository. I use the docker registry to host applications I've written or customized myself, and I use the arch repository to distribute tools that are useful for systems in the homelab.
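Neither of these is fancy. The registry is the stock registry:2 image, and the arch repo is just a directory of packages plus a repo-add database served over HTTP by the reverse proxy. Paths and names below are placeholders:

```
# Private docker registry: the official registry:2 image with a host volume
# so images survive container recreation.
docker run -d --name registry --restart unless-stopped \
  -p 5000:5000 -v /srv/registry:/var/lib/registry registry:2

# Arch repo: drop built packages into a directory and (re)generate the database.
repo-add /srv/archrepo/homelab.db.tar.gz /srv/archrepo/*.pkg.tar.zst

# Clients then add a section like this to /etc/pacman.conf:
#   [homelab]
#   Server = https://pkgs.<local-prefix>.pigasus.net
```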
Management helpers
Since I mess with my router so much, I often screw something up and lose SSH access to it. The ultimate backup is the keyboard and monitor I keep in the closet with my network equipment. But I also set up a cool thing where I gave the MAC address of a USB-to-Ethernet adapter a reserved IP address in both the homelab and home domains. So if I bring my laptop over to the network closet and plug it into a VLAN port for the home or homelab network, it automatically gets an IP address on the respective network segment. This lets me easily get onto the hylia network with a device that's convenient to work on, even when there's no way to reach it through the home network.
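In dnsmasq terms it's just a static lease for the adapter's MAC in each instance (the MAC and addresses here are made up):

```
# Same adapter MAC, one reserved address per network segment.
echo 'dhcp-host=aa:bb:cc:dd:ee:ff,192.168.10.250' >> /etc/dnsmasq-home.conf
echo 'dhcp-host=aa:bb:cc:dd:ee:ff,192.168.20.250' >> /etc/dnsmasq-homelab.conf
```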
Infrastructure as code
This is still a goal of mine. But across like 5 different Linux hosts (virtual and physical) I've made so many small configuration file changes, and it really bothers me that I have no good way to track what I've changed. I'm gradually working toward maintaining all configuration in a git repo in the root directory of each system and symlinking config files from their actual locations into this repo, but it's a little tedious and easy to forget to do.
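The workflow, sketched with a made-up repo path and example file, looks like this:

```
# Move the real file into the repo, leave a symlink behind, commit.
cd /root/config-repo
mkdir -p etc
mv /etc/nftables.conf etc/nftables.conf
ln -s /root/config-repo/etc/nftables.conf /etc/nftables.conf
git add etc/nftables.conf
git commit -m "track nftables config"
```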
Another idea is to do some overlay filesystem tricks or something similar. I believe SteamOS does something like this. But this sounds complicated and doesn't solve the change tracking problem like putting config in git would.
I like the idea of a functionally pure operating system, where the state of the system is completely determined by a central configuration source that can be kept in version control. I haven't looked deeply into NixOS, but I think it's intended to be a solution in exactly this domain.
SSH notifications
I get a notification to my phone whenever an SSH login is successfully made to Farore.
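One way to wire something like this up (a sketch, not necessarily exactly how mine works): pam_exec runs a small script on every successful sshd session, and the script posts a message to the self-hosted ntfy instance from the services list below. The ntfy URL and topic are placeholders.

```
# pam_exec exports PAM_USER, PAM_RHOST, and PAM_TYPE to the script it runs.
cat > /usr/local/bin/ssh-notify.sh <<'EOF'
#!/bin/sh
if [ "$PAM_TYPE" = "open_session" ]; then
    curl -s -d "SSH login: $PAM_USER from $PAM_RHOST on $(hostname)" \
        https://ntfy.pigasus.net/ssh-logins
fi
EOF
chmod +x /usr/local/bin/ssh-notify.sh

# Hook the script into sshd's PAM session stack.
echo 'session optional pam_exec.so /usr/local/bin/ssh-notify.sh' >> /etc/pam.d/sshd
```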
Services
Here's a list of services I run. (I know, it's absurdly small compared to how much effort I've put into the infrastructure running them.)
User-facing: Vaultwarden, Seafile, Jellyfin
Internal: ntfy, Nginx Proxy Manager, docker registry, arch repo
(Actually, Vaultwarden is on a DigitalOcean droplet right now, but I'm gonna move it back on-prem as soon as I can.)