A giant Docker whale stomping through a server room, crushing iptables chains, while a furious sysadmin stands defiant on a server rack

It is 2026, and we are still fighting with Docker’s absolute arrogance regarding Linux networking.

Here is the scenario: I run a hybrid host. On one side, I have a KVM virtual machine running Home Assistant (because I need full OS control and full-disk encryption). On the other, running on the bare-metal host, the usual suspects as Docker containers: NUT to monitor my shitty Lakeview (Vultech) UPS, and Technitium for DNS and DHCP.

It sounds simple. It should be simple.

But the moment I installed Docker, communication with my Home Assistant VM died. Just ceased to exist.

The Problem: Docker is a Dictator

Docker, by default, treats your iptables rules as mere suggestions. When the daemon starts, it clobbers the FORWARD chain, inserts its own logic, and sets the chain's default policy to DROP, effectively isolating anything that is not a container it manages itself.

If you have a bridge interface for a VM (like br0 or virbr0), Docker’s rules often end up dropping packets destined for that VM because they don’t match its internal logic for container traffic.
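The supported escape hatch is the DOCKER-USER chain, which Docker evaluates before any of its own FORWARD rules and leaves alone across daemon restarts. A minimal sketch, assuming the VM bridge is named br0 (adjust the interface name to your setup, e.g. virbr0):

```shell
# Accept forwarded traffic entering or leaving the VM bridge before
# Docker's own rules get a chance to drop it. DOCKER-USER is consulted
# first in the FORWARD chain and is not clobbered by the daemon.
iptables -I DOCKER-USER -i br0 -j ACCEPT
iptables -I DOCKER-USER -o br0 -j ACCEPT
```

These two rules only restore VM connectivity; they do not touch Docker's container isolation logic.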

The Naive Fix (and why it fails)

My first reaction—like any sysadmin who has been doing this since the early 2000s—was to fix the rules manually and then run:

A locked server rack in a dark data center, an SSH connection beam reaching toward a glowing encryption padlock

Remote LUKS? Pfft. Here is how to SSH-Unlock a ZFS-Encrypted FreeBSD Root (The Hard Way)

If you run FreeBSD like I do, on a remote server with full-disk encryption (ZFS on GELI), you know the panic of rebooting. You are always at the mercy of a KVM-over-IP or an in-browser VNC connection to type the root filesystem passphrase at the boot prompt.

Worse, if you (like me) run a system with kern.securelevel > 0, installing a new libc means rebooting into single-user mode and applying the updates over said KVM or VNC connection, which is not ergonomic, to say the least.

The standard solution is usually a pre-boot SSH environment. On Linux, dropbear-initramfs makes this trivial. On FreeBSD? You are building a custom mfsroot (memory file system) from scratch.
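For a rough idea of what "from scratch" means here, this is an illustrative sketch only: populate a staging directory with FreeBSD's statically linked /rescue binaries, pack it into a UFS image with makefs, and compress it for the loader. Paths, options, and the loader.conf knob are assumptions, not a complete recipe:

```shell
# Hypothetical sketch: build a minimal memory root filesystem.
mkdir -p mfsroot/dev mfsroot/etc
cp -a /rescue mfsroot/rescue          # static sh, ifconfig, geli, zfs, ...
makefs -t ffs -o version=2 mfsroot.img mfsroot   # pack staging dir into UFS2
gzip -9 mfsroot.img                   # loader loads it via mfsroot_load="YES"
```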

Most guides out there suggest using a static shell script as init. This works, but it’s miserable. You lose job control (no Ctrl+C), you have no proper TTY, and good luck if you need to debug network issues interactively.

I didn’t want a hacky script. I wanted a real environment: init, getty, login, PAM authentication, and a ZFS chroot for maintenance, so I could install updates.

A Raspberry Pi 5 on a desk with an SSD depicted as a glowing vault, a padlock icon floating above

Preface

I started running Home Assistant at home on a Raspberry Pi 5, initially by installing HAOS straight onto an SD card. I then grew deeply uncomfortable about storing credentials in the HA filesystem in clear text (any obfuscation is not enough).

Since configuring an encrypted root with HAOS is simply not possible without forking it, and dedicating a Pi 5 entirely to HAOS is a waste of resources, I decided to add an SSD to the Pi, boot Raspbian from it, and run HAOS inside a VM.

This way, I can have an encrypted root on the main host, thus encrypting the entire HAOS VM.
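As a sketch of how the VM side can look, here is an illustrative virt-install invocation importing the HAOS aarch64 disk image into libvirt. The image path, VM sizing, and bridge name are assumptions for this example; HAOS does require UEFI boot:

```shell
# Hypothetical sketch: import a pre-built HAOS qcow2 into a libvirt VM.
virt-install --name haos --memory 4096 --vcpus 2 \
  --import --disk /var/lib/libvirt/images/haos.qcow2,bus=virtio \
  --osinfo generic --boot uefi \
  --network bridge=br0,model=virtio \
  --noautoconsole
```

Because the qcow2 lives on the host's encrypted root, the whole VM disk is encrypted at rest for free.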

Furthermore, I can now snapshot the entire HAOS VM, and I have much more flexibility in managing it. Last but not least, I can use the remaining Pi CPU and RAM for something else.

Credits

First, a big thank you to this post that gave me the initial pointers on how to set this up. But that 2021 post is now slightly outdated, and many steps are no longer necessary.

Second, a big thank you to Eric Fjøsne for actually using this guide and fixing it as I wrote it mostly after-the-fact :-).

MetalOS logo

I presented a talk at All Systems Go 2025, the foundational Linux userspace conference. The conference is organised mostly by the systemd team, and it’s a yearly meeting for all people working on Linux systems software.

This year’s theme was mostly “containers, containers, containers”, with many new features in systemd to support containerisation, as well as practical experiences from people working in the field on how they use systemd and related software to build container infrastructures.

Together with my colleague Serge Dubrouski, I presented our work on building an operating system at Meta scale. We run an image-based operating system, but the company comes from two decades of updating the OS online, so we had to design a suitable migration strategy and lay the foundation for the future.

We described how we cut CentOS releases from upstream, the OSS tools we’ve built to create OS images, and the internal technology (MetalOS) we came up with to build an OS that runs on millions of Linux servers.

About the logo: it’s metal because MetalOS runs on bare metal, and the antlers are a nod to Antlir (ANoTher Linux Image buildeR), the open-source build system we use to produce the OS images.

Slides

📄 Download the slide deck (PDF, 482KB)


Video

You can also download the video for offline viewing.

A fortress wall of glowing firewall rules with Beastie standing guard, deflecting port scanners with a ban hammer

TL;DR

FreeBSD: How to block port scanners from enumerating open ports on your server, by using fail2ban and an ASCII representation of pf logs.

Preface

I use fail2ban to keep away attackers and bots alike that attempt to scan my websites or brute-force my mailboxes. Fail2ban scans log files for specific patterns, keeps a count of matches per IP, and lets the system administrator define what to do when that count exceeds a defined threshold.

The patterns are indicative of malicious activity, such as attempting to guess a mailbox password or scanning a website for vulnerabilities.

Most of the time, the action is to block the offending IP address via the machine’s firewall, but fail2ban supports any mechanism you can conceive, as long as it can be enacted by a UNIX command.
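As a concrete sketch, a fail2ban jail ties a filter (the log patterns) to a threshold and a ban action. The jail name, log path, and values below are illustrative; `postfix-sasl` and the `pf` ban action ship with fail2ban:

```ini
# /etc/fail2ban/jail.local -- illustrative values
[postfix-sasl]
enabled   = true
filter    = postfix-sasl    ; pattern definitions live in filter.d/
logpath   = /var/log/maillog
maxretry  = 5               ; matches per IP before the action fires
findtime  = 600             ; ...counted within this many seconds
bantime   = 3600            ; block duration, in seconds
banaction = pf              ; on FreeBSD, ban via a pf table
```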

PF and its logs

On my FreeBSD server I use the excellent pf packet filter to police incoming traffic and to perform traffic normalization.

The PF logging mechanism is very UNIX-y, as it provides a virtual network interface (pflog0) onto which the initial bytes of packets blocked by a rule that has the log specifier are forwarded, so that real-time block logs can be inspected via a simple:
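The standard way to watch that interface is tcpdump; this is the canonical invocation from the pf documentation:

```shell
# Decode pf block logs in real time from the pflog0 pseudo-interface:
# -n  no name resolution, -e  show the pflog header (rule, action, iface),
# -ttt print deltas between packets.
tcpdump -n -e -ttt -i pflog0
```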

FreeBSD encrypted root on ZFS

An encrypted FreeBSD server wrapped in translucent shields, AWS cloud crumbling in the background

Preface

In 2023, I still run my own mailserver. Yes, because I like to keep control of (at least part of) my own digital life, and I enjoy having multiple domain names on which I have stuff. However, I was paying 30€/month to AWS to get in exchange 2 cores, 2 GiB of RAM and 40 GiB of disk, barely sufficient to run IMAP+SMTP+MySQL+Clamd, let alone any form of spam protection or full-text search on email bodies.

So, I was paying a lot of money to run a shitty service, and I even thought about shutting everything off and moving my mail and my web sites onto some form of fully hosted service.

I still want to do it

Guess what: to host four domains with just some email redirects, plus the websites I run, I would have spent more than I was already paying, only to shackle myself to some service vendor and their policies.

So, I wanted to run FreeBSD, and I started scouting ISP listings until I decided to review Hetzner and netcup, which both offer aggressive pricing, an old-fashioned VPS, and little more.

Settling on a vendor

Eventually, I settled on a netcup VPS 1000, which gives me, for a third of the price I was paying AWS, four times the resources: 6 cores, 8 GiB of RAM, 160 GiB of RAID10 SSD, and an uncrippled, completely unrestricted FreeBSD installation.

I was tasked with integrating OneSpan (formerly VASCO) hardware token two-factor authentication into a Ruby stack — wrapping their proprietary VACMAN Controller C SDK for local OTP validation, and building a client for their OneSpan Authentication Server (originally named Identikey Authentication Server, and renamed mid-project) SOAP API. Neither had a Ruby library.

For vacman_controller there was a starting point: a Ruby C extension by Marcus Lankenau wrapping the AAL2 SDK. One commit, no releases, rough around the edges, but a solid foundation — linking, importing tokens and basic wrappers — was there. I forked it at IFAD, fixed it, extended it, and pushed 97 additional commits on top. 14 releases, v0.1.0 through v0.9.3.

For identikey there was nothing — OneSpan ships a Java SDK, no Ruby library exists. I wrote one from scratch: 123 commits, 18 tags, v0.2.0 through v0.9.1.

Both are on GitHub. Here’s what’s inside.

🔍 2026 retrospective
My last release was v1.2.2 in May 2019. After that, Geremia Taglialatela took over and pushed it to v5.0.0 with Rails 8.1 and Ruby 4.0 support. 34 releases spanning 14 years, 201 stars, and still actively maintained. The API documentation and the repo are both alive.

Seven years ago I released ChronoModel v0.1.0 — a Ruby gem that gives ActiveRecord models temporal capabilities on PostgreSQL. Five days of hacking, thirty-six commits, no tests, and a confession about monkey-patching the PostgreSQL adapter constant.

Today I’m tagging v1.0.0. The commit message is `:gem: this is v1.0.0`. Not much of a speech, but the code speaks for itself: 506 commits, 31 releases, 52 files changed, 5,392 lines added. The core idea — updatable views on `public`, current data on `temporal`, history on `history` with table inheritance — never changed. Everything else did.

🔍 2026 retrospective
The repo at github.com/ifad/translation-memory is still public, still has no README, and the Pontoon fork it talks to remains private. Mozilla’s upstream is open and very much alive. Whether anyone at IFAD still runs Pontoon eight years on, I honestly don’t know — I built this for one project on my desk, not as a corporate workflow change. The hyphen-stripping regex did its job for the months I needed it. Then, presumably, the next Pontoon schema migration broke something. That’s what happens to integrations that talk to a database directly.

IFAD is a UN agency that operates in English, French, Spanish, and Arabic. Every public-facing string in our Rails apps needs to exist in four languages, which means we have a translation team, which means we have a translation workflow, which on most projects involves a desktop CAT tool, files attached to emails, and translation memories shipped around as XML.

That workflow does not survive a project I’m building right now. It’s a Rails web app on a tight schedule, the source strings change every week, and by the time a translator has finished a TM file and emailed it back the strings have already moved. I need translators and developers looking at the same database in real time. I pick Mozilla Pontoon — open-source, free, adaptable, written in Django, backed by Postgres — and stand it up for my project. The catch: there is a corpus of translations from the previous tool that I want to seed Pontoon with on day one, so the translators don’t start from a blank slate.

Today I start a translation-memory repo and write the first parser. The project is described, with all due engineering humility, as “Parser for TMX, SDL/XLIFF and TXML files and shameless importer into Mozilla Pontoon”. The “shameless” part is doing a lot of work in that sentence.

🔍 2026 retrospective
Colore is still alive at github.com/ifad/colore. Geremia Taglialatela took over after I drifted onto other things and pushed the project forward through Ruby 2.7, 3.0, 3.1, 3.2, Sidekiq 6, and modern CI. He sits at 354 commits, three times mine. The nginx C module Joe wrote in February 2015 is unchanged. Heathen, the standalone service, was eventually folded directly into Colore as a library; the original repo is archived, but the code lives on inside lib/heathen/ of Colore. Same idea, fewer moving parts.

IFAD is a UN agency that runs on documents. Loan agreements, evaluation reports, country strategy notes, board decisions, project briefs — every web application we build sooner or later needs to take a Word file and give back a PDF, or take a scan and give back something searchable, or take an arbitrary blob and turn it into a thumbnail. Three years ago we decided to stop solving this problem one application at a time and put it behind a single service.

Today I’m merging v1.0.0 of Colore. It’s the second attempt at that service, and it’s the one we get to keep. This is the story of both attempts and the people who built them — because almost none of the code below is mine.

