FreshRSS moves from M1 to NixOS on Proxmox
2023-08-12

Background
My FreshRSS deployment has migrated a few times: from a DigitalOcean Droplet to a Hetzner CX shared VM to a local M1 MacBook Air, and finally to a NixOS VM in Proxmox, running on what used to be my TrueNAS hardware (without its disks). The hardware itself is generally good, apart from one of its six SATA ports being faulty. I’ve since swapped out the motherboards, and this machine was left looking for a new role.
This final move was triggered when I noticed a marked drop in response times and general performance when syncing and marking tens to hundreds of articles as Read in bulk.
Before this migration, the FreshRSS app was deployed in a k3d cluster, which is simply k3s in Docker. At the time, running k3s on the M1 CPU appeared to require the cluster to live inside a VM or a Docker container. Wanting to avoid something like multipass, I went with k3d.
When I made the time for this migration, I threw Proxmox and NixOS into the mix, as both are new to me but have some baseline level of community usage and support. The key to this combination is that I didn’t want to have to reformat the physical server over the BMC and a virtual ISO/CD-ROM if I hosed the NixOS system, so a hypervisor with a management UI made everything a lot easier. Plus, snapshots. Spoiler: I don’t hate either of them.
Desired State
The goal was to get Proxmox onto the hardware that used to run my TrueNAS server, with a VM running NixOS on top of it. Both were straightforward to install and set up.
Once in the NixOS system, I originally wanted to run k3s directly on the host with tailscale, but there were a few hiccups around the k3s package not having the tailscale binary in its PATH. This was probably a solvable problem had I dug into configuration.nix or other NixOS-specific patterns, but I didn’t want to spend the time on that.
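For posterity, I suspect the fix would have been a couple of lines in configuration.nix along these lines. This is only a sketch of the route I didn’t take, and the systemd PATH tweak is my guess at the missing piece.

# Sketch of the native k3s + tailscale route I did not pursue.
{ config, pkgs, ... }:
{
  services.tailscale.enable = true;
  services.k3s.enable = true;

  # Presumably the missing piece: put the tailscale binary on the
  # PATH of the k3s systemd unit so k3s can shell out to it.
  systemd.services.k3s.path = [ pkgs.tailscale ];
}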
Enter k3d. Again.
So in the configuration.nix, I was able to install k3d from unstable, and redeploy the k3s cluster using the k3d.yaml config from the M1. There were only a few modifications around the host paths, but everything else worked as you’d expect. Shortly after, k3s was up, I was able to helm install my FreshRSS package to the cluster, and had it all accessible over the tailnet, just like it was on the M1.
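One prerequisite worth calling out: k3d drives a Docker daemon on the host, and on NixOS that’s enabled declaratively rather than by just installing the docker package. Roughly something like the below, where the username is a placeholder, not a line from my actual config:

# Sketch: enabling the Docker daemon that k3d talks to, the NixOS way.
{ config, pkgs, ... }:
{
  virtualisation.docker.enable = true;
  # Let a non-root login user reach the Docker socket; "me" is a placeholder.
  users.users.me.extraGroups = [ "docker" ];
}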
The application is now orders of magnitude snappier than it was, and syncs as fast as I’d reasonably expect through lire on my phone.
And, of course, this is all accessible only over the tailnet.
Minor Details
Looking at the VM resource utilization, it’s not a surprise that the M1 was struggling. My MBA was spec’d with only 8GB of RAM, and I had provisioned limited cores for the k3d containers through the macOS Docker daemon.
A gotcha that I didn’t expect was figuring out how to cherry-pick packages from unstable while keeping the rest of the system on stable. This was necessary because I needed the latest version of k3d to resolve this issue. At the time of implementation, k3d-v5.5.2 was only available on the unstable release train. To accomplish this, the general changes required looked like:
# /etc/nixos/configuration.nix:
{ config, pkgs, ... }:
let
  baseconfig = { allowUnfree = true; };
  # Pull in the unstable channel alongside the system's stable nixpkgs.
  unstable = import <nixos-unstable> { config = baseconfig; };
in
{
  # ...
  nixpkgs.config = baseconfig // {
    allowUnfree = true;
    # Override selected packages with their unstable versions.
    packageOverrides = pkgs: {
      k3d = unstable.k3d;
      docker = unstable.docker;
    };
  };
  # ...
  environment.systemPackages = with pkgs; [
    # ...
    k3d
    docker
    # ...
  ];
  # ...
  system.stateVersion = "23.05";
}
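One thing the snippet glosses over: import <nixos-unstable> only resolves if a nixos-unstable channel is registered on the machine (or is otherwise on NIX_PATH). If you’d rather skip the extra channel, a common alternative, which isn’t what I’m running here, is to pin unstable with fetchTarball:

# Alternative sketch: pull in unstable without registering a channel.
{ config, pkgs, ... }:
let
  baseconfig = { allowUnfree = true; };
  # fetchTarball grabs the current tip of the nixos-unstable branch at eval time.
  unstable = import (builtins.fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/nixos-unstable.tar.gz")
    { config = baseconfig; };
in
{
  nixpkgs.config = baseconfig // {
    packageOverrides = pkgs: {
      k3d = unstable.k3d;
    };
  };
}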
What’s Next
Next steps are to monitor resource utilization and see whether FreshRSS needs more (or could get by with less).
But the more exciting prospect is deploying more applications, either on the k3d cluster or on a separate VM that I can now spin up (and snapshot) with the newly available Proxmox.