Not that the alarm itself works badly: the SDVECU panel is solid, the sensors
are reliable, the installation is professional. But the app. Good God,
the app.
The problem
You open the app to check the alarm status and you are greeted by an ad
from Verisure itself. I pay a small fortune for the service and they
shove ads inside the app. It is 2026 and a security company shows me
banner ads while I am trying to check whether my house is protected.
But the ads are the least of it. The real problems are:
Blind routines. Yes, the app has “routines”: arm at
midnight, disarm at 7. But they don't know where you are. It's midnight
and you're still in the garden? The alarm arms and the sensors trip.
A window left open? The panel announces it can't arm,
but if you don't hear it, the alarm stays off. You go on vacation and
forget to disable the morning disarm routine?
Alarm off with the house empty. And routine changes
take 20 minutes to propagate, “or until the next day”. In 2026.
Zero presence. The app doesn't know where you are. It doesn't know who's home.
It doesn't know whether the housekeeper has left. No automation
based on location.
One camera at a time. Want to see all the cameras? Tap,
wait, go back, tap the next one, wait. No overview.
No “capture all”.
Biblical slowness. You request a snapshot, you wait, you wait, maybe
it arrives. Sometimes you reload the app and try again. In 2026.
No permanent storage. Captured images vanish. There is
no browsable history.
Generic notifications. One identical notification for everyone. No
actionable notifications, no critical notifications that bypass
Do Not Disturb.
What I wanted: my alarm, integrated into my home automation, with
smart automations, notifications for every resident, and a
dashboard that shows everything at a glance. Without ads.
It started with WiFi presence detection. I had built a system that tracks which room everyone is in by scraping RSSI from my OpenWrt APs. It worked — but the room assignments kept flickering. Kitchen. Office. Kitchen. Office. Three times in ten seconds. The state machine was fine. The WiFi wasn’t.
My home network runs six OpenWrt APs across three floors, two SSIDs — Mercury on 5 GHz, Saturn on 2.4 GHz — all backed by 802.11r for fast roaming. From the outside, it looks like a proper mesh. From the inside, one phone was bouncing between access points 129 times in 24 hours.
I didn’t know this until I built the tool to see it.
Each row is a WiFi client; the color shows which AP it’s connected to. Healthy clients show long solid bars. Sick ones look like barber poles. See sara-iphone? That rainbow stripe is 129 connects in 24 hours: the phone is walking through an overlap zone between two APs where both have roughly equal (and terrible) signal.
The Problem You Can’t See
WiFi roaming is invisible. Your phone shows full bars, Netflix buffers for a moment, and you blame the internet. But what actually happened is your phone disconnected from one AP, scanned for alternatives, picked another one with a marginally different signal, associated, authenticated, and started streaming again — all in under a second if 802.11r is working, several seconds if it’s not.
Do this 15 times in 2 minutes between two APs that both have garbage signal, and you get what I call thrashing: rapid, pointless AP bouncing that kills throughput and wastes airtime.
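As a back-of-the-envelope illustration (my own sketch, not part of any tool mentioned here), detecting thrashing is just counting AP transitions inside a sliding time window; the 15-roams-in-2-minutes figure above maps directly onto the defaults:

```python
from collections import deque

def make_thrash_detector(max_roams: int = 15, window_s: float = 120.0):
    """Report thrashing: at least max_roams AP changes within window_s seconds.
    Defaults mirror the illustrative 15-roams-in-2-minutes figure."""
    roams: deque = deque()    # timestamps of AP changes
    state = {"ap": None}      # last AP seen for this client

    def on_event(ts: float, ap: str) -> bool:
        """Feed one (timestamp, associated AP) observation; True = thrashing."""
        if state["ap"] is not None and ap != state["ap"]:
            roams.append(ts)
        state["ap"] = ap
        while roams and ts - roams[0] > window_s:
            roams.popleft()   # drop roams that fell out of the window
        return len(roams) >= max_roams

    return on_event

detect = make_thrash_detector(max_roams=3, window_s=10.0)
events = [(0.0, "ap1"), (2.0, "ap2"), (4.0, "ap1"), (6.0, "ap2")]
print([detect(ts, ap) for ts, ap in events])  # [False, False, False, True]
```

The real logic in a roaming monitor would key this per client MAC, but the core idea is just this windowed counter.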
I had two problems with Home Assistant’s presence detection.
The first: GPS tells you if someone is home, but not where in the house they are. My home has six OpenWrt access points spread across three floors. They already know exactly which phone is connected to which AP at every moment — that’s room-level presence data, sitting right there in the WiFi stack, screaming to be used. Knowing who’s in which room opens up a whole class of automations that GPS can’t touch: lights that follow you, climate control per occupied room, a dashboard that shows the household at a glance.
The second: our housekeeper stays at our place a couple days a week. I don’t want to set up a full HA account for her, install the companion app on her phone, or deal with GPS permissions. But I do need to know if she’s home — because my alarm automation needs to know whether the house is actually empty before arming. Her phone connects to WiFi. That’s all I need.
So I wrote openwrt-ha-presence: a state machine that scrapes RSSI metrics directly from your OpenWrt APs, figures out which room each person is in by signal strength, and publishes per-person home/away state to Home Assistant via MQTT Discovery. No cloud, no beacons, no log parsing, no time-series database. Python, async, ~600 lines of actual logic.
Every 5 seconds, openwrt-presence hits the /metrics endpoint on each AP and grabs wifi_station_signal_dbm for every associated station. That’s the RSSI — how loud your phone’s signal is at that AP. The engine then processes the snapshot:
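In rough terms, the per-snapshot selection amounts to “for each station, keep the AP where it is loudest.” A minimal sketch (the metric line format, label names, and AP names are my assumptions for illustration, not the tool’s actual code):

```python
import re

# Assumed Prometheus-style metric line:
#   wifi_station_signal_dbm{ifname="wlan0",mac="aa:bb:..."} -58
METRIC_RE = re.compile(
    r'wifi_station_signal_dbm\{[^}]*mac="(?P<mac>[0-9a-f:]+)"[^}]*\}\s+(?P<dbm>-?\d+)'
)

def strongest_ap(metrics_by_ap):
    """Map each station MAC to the (ap, rssi_dbm) pair with the loudest signal."""
    best = {}
    for ap, text in metrics_by_ap.items():
        for m in METRIC_RE.finditer(text):
            mac, dbm = m.group("mac"), int(m.group("dbm"))
            if mac not in best or dbm > best[mac][1]:
                best[mac] = (ap, dbm)
    return best

# Fabricated /metrics snippets from two APs seeing the same phone:
sample = {
    "attic-ap":   'wifi_station_signal_dbm{ifname="wlan0",mac="aa:bb:cc:dd:ee:ff"} -71\n',
    "kitchen-ap": 'wifi_station_signal_dbm{ifname="wlan0",mac="aa:bb:cc:dd:ee:ff"} -58\n',
}
print(strongest_ap(sample))  # {'aa:bb:cc:dd:ee:ff': ('kitchen-ap', -58)}
```

The state machine then sits on top of this raw winner-per-snapshot signal to debounce the kitchen/office flickering described earlier.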
A couple of months ago, my fiber went down. As per Murphy’s first corollary, it happened at the absolute worst moment: right before a crucial meeting with a partner company. I found myself frantically switching between a distant neighbor’s AP and my phone’s hotspot, but both sucked hard. We’re talking 200 ms RTT and 15% packet loss. I was apologizing profusely while my video feed turned into a 1998 slideshow; no one could parse a word I was saying. I ended up cutting the video and staying silent. Missed opportunity. Never. Again.
So I went full paranoid and built a proper 5G backup setup.
Poynting XPOL-24 directional antenna mounted on the wall outside my home office
5G signal here is nonexistent, so I had to bring in the heavy artillery. The Poynting is a beast: 11 dBi gain, real 4x4 MIMO, cross-polarized, weather-sealed. Point it at the nearest tower and suddenly your SINR jumps from “meh” to “holy shit.”
But pointing a directional antenna without visual feedback is painful. You’re basically spinning in circles, refreshing a web UI, cursing at the sky.
5g-info dumps everything your modem knows in a readable format:
5g-monitor is an ncurses TUI that refreshes in real-time and—here’s the good part—beeps based on your SINR. Higher signal quality = more beeps. Point the antenna, listen for beeps, tighten the bolts. Done.
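The SINR-to-beep mapping can be as simple as a clamped linear interpolation. This is a toy sketch of the idea, not 5g-monitor’s actual code; the 0–20 dB range and the 0.1–2.0 s beep periods are arbitrary illustration values:

```python
def beep_interval(sinr_db: float) -> float:
    """Map SINR (dB) to seconds between beeps: the better the signal,
    the faster the beeps. Range and bounds are illustration values."""
    lo, hi = 0.0, 20.0            # SINR range worth distinguishing
    slowest, fastest = 2.0, 0.1   # beep period at each extreme
    frac = min(max((sinr_db - lo) / (hi - lo), 0.0), 1.0)
    return slowest - frac * (slowest - fastest)

print(beep_interval(0.0))             # 2.0  (weak signal, slow beeps)
print(round(beep_interval(20.0), 3))  # 0.1  (strong signal, rapid beeps)
```

A loop would then sleep for `beep_interval(current_sinr)` between terminal bells while you tweak the antenna azimuth.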
It is 2026, and we are still fighting with Docker’s absolute arrogance regarding Linux networking.
Here is the scenario: I run a hybrid host. On one side, I have a KVM virtual machine running Home Assistant (because I need full OS control and full-disk encryption).
On the other, the usual suspects run as Docker containers on the bare-metal host: NUT, monitoring my shitty Lakeview (Vultech) UPS, and Technitium for DNS and DHCP.
It sounds simple. It should be simple.
But the moment I installed Docker, communication with my Home Assistant VM died. Just ceased to exist.
The Problem: Docker is a Dictator
Docker, by default, treats your iptables rules like they are merely suggestions. When the daemon starts, it essentially clobbers the FORWARD chain, inserts its own logic, and sets policies that effectively isolate anything that isn’t a container managed by itself.
If you have a bridge interface for a VM (like br0 or virbr0), Docker’s rules often end up dropping packets destined for that VM because they don’t match its internal logic for container traffic.
The Naive Fix (and why it fails)
My first reaction—like any sysadmin who has been doing this since the early 2000s—was to fix the rules manually and then run:
iptables-save > /etc/iptables/rules.v4
This is a trap!
If you use iptables-persistent (or netfilter-persistent) with Docker, you are entering a world of pain for two reasons:
Garbage Persistence: When you run iptables-save while Docker is running, you aren’t just saving your custom rules. You are saving Docker’s dynamic state—including rules for ephemeral veth interfaces and dynamic IP masquerading. When you reboot, iptables-restore tries to apply rules to interfaces that do not exist yet, causing the restore to fail or leave the firewall in an inconsistent state.
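For what it’s worth, the escape hatch Docker’s own documentation points to is the DOCKER-USER chain: the daemon creates it, evaluates it before its own FORWARD logic, and leaves your rules in it alone. A sketch, assuming the VM bridge is called br0 (substitute your own bridge name):

```shell
# Allow traffic to/from the VM bridge before Docker's FORWARD rules can
# drop it. DOCKER-USER is evaluated first and is not managed by the daemon.
iptables -I DOCKER-USER -i br0 -j ACCEPT
iptables -I DOCKER-USER -o br0 -j ACCEPT
```

This keeps the fix out of the daemon’s clobbering range, though you still need a clean way to reapply it at boot rather than a raw iptables-save dump.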
Remote LUKS? Pfft. Here is how to SSH-Unlock a ZFS-Encrypted FreeBSD Root (The Hard Way)
If you run FreeBSD like I do, on a remote server with full-disk encryption (ZFS on GELI), you know the panic of rebooting. You are always at the mercy of a KVM-over-IP or a browser VNC console to type the root filesystem passphrase at the boot prompt.
Moreover, if you (like me) run a system with kern.securelevel > 0, installing a new libc means rebooting into single-user mode and applying the updates over said KVM or VNC connection, which is not ergonomic, to say the least.
The standard solution is usually a pre-boot SSH environment. On Linux, dropbear-initramfs makes this trivial. On FreeBSD? You are building a custom mfsroot (memory file system) from scratch.
Most guides out there suggest using a static shell script as init. This works, but it’s miserable. You lose job control (no Ctrl+C), you have no proper TTY, and good luck if you need to debug network issues interactively.
I didn’t want a hacky script. I wanted a real environment. I wanted init, getty, login, PAM authentication, and a ZFS chroot for maintenance, to install updates.
Here is how I built a robust remote unlocker for FreeBSD.
The Problem with /bin/sh as Init
The naive approach is to compile a tiny ramdisk, shove a static sh binary in it, and tell the loader to run it as PID 1.
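For reference, the loader plumbing for booting such a ramdisk looks roughly like this. File names and the md0 device are illustrative, and init_path is the kernel environment variable that selects what runs as PID 1:

```shell
# /boot/loader.conf (illustrative names): load a ramdisk and choose PID 1.
mfsroot_load="YES"
mfsroot_type="mfs_root"
mfsroot_name="/boot/mfsroot"        # the custom memory filesystem image
vfs.root.mountfrom="ufs:/dev/md0"   # mount the ramdisk as root
init_path="/rescue/sh"              # the naive way: a static shell as PID 1
```

Swapping init_path to a real /sbin/init inside the ramdisk is what the rest of this post builds toward.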
So I started running Home Assistant at home on a Raspberry Pi 5, initially by
just installing HAOS on an SD card. I then grew
deeply uncomfortable with storing credentials in the HA filesystem in
clear text (any obfuscation is not enough).
Considering that configuring an encrypted root with HAOS is simply not possible
without forking it, and that dedicating an RPi 5 entirely to HAOS
is a waste of resources, I decided to add an SSD to the Pi, boot it with
Raspbian, and run HAOS inside a VM.
This way, I can have an encrypted root on the main host, thus encrypting the
entire HAOS VM.
Furthermore, I can now snapshot the entire HAOS VM, and I have much more
flexibility in managing it. Last but not least, I can use the remaining RPi
CPU and RAM for something else.
Credits
First, a big thank you to this
post that gave me the
initial pointers on how to set this up. But that 2021 post is now slightly
outdated, and many steps are no longer necessary.
I presented a talk at All Systems Go 2025, the foundational Linux userspace conference. The conference is organised mostly by the systemd team, and it’s a yearly meeting for all people working on Linux systems software.
This year’s theme was mostly “containers, containers, containers”, with many new features in systemd to support containerisation, as well as practical experiences from people working in the field on how they’re using systemd and related software to build container infrastructures.
I presented together with my colleague Serge Dubrouski our work in building an Operating System at Meta scale. We run an image-based operating system, but the company comes from two decades of updating the OS online, so we had to design a suitable migration strategy and set the foundation for the future.
We described how we cut CentOS releases from upstream, the OSS tools we built to create OS images, and the internal technology (MetalOS) we came up with to build an OS that runs on millions of Linux servers.
FreeBSD: How to block port scanners from enumerating open ports on your
server, using fail2ban and an ASCII representation of pf logs.
Preface
I use fail2ban to keep away attackers and bots alike
that attempt to scan my websites or brute-force my mailboxes. Fail2ban works by
scanning log files for specific patterns and keeping a count of matches per IP,
and it allows the system administrator to define what to do when that count
exceeds a defined threshold.
The patterns are indicative of malicious activity, such as attempting to guess
a mailbox password or scanning a web site for vulnerabilities.
The action to perform is most of the time to block the offending IP address via
the machine’s firewall, but fail2ban supports any mechanism you can conceive,
as long as it can be enacted by a UNIX command.
PF and its logs
On my FreeBSD server I use the excellent pf
packet filter to police incoming traffic and to perform traffic normalization.
The PF logging mechanism is very UNIX-y, as it provides a virtual network
interface (pflog0) onto which the initial bytes of packets blocked by a
rule that has the log specifier are forwarded, so that real-time block
logs can be inspected via a simple:
# tcpdump -eni pflog0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on pflog0, link-type PFLOG (OpenBSD pflog file), capture size 262144 bytes
01:48:13.748353 rule 1/0(match): block in on vtnet0: 121.224.77.46.41854 > 46.38.233.77.6379: Flags [S], seq 1929621329, win 29200, options [mss 1460,sackOK,TS val 840989709 ecr 0,nop,wscale 7], length 0
01:48:15.726215 rule 1/0(match): block in on vtnet0: 192.241.235.20.37422 > 46.38.233.77.5632: UDP, length 3
01:48:17.993439 rule 1/0(match): block in on vtnet0: 145.239.244.34.54154 > 46.38.233.77.1024: Flags [S], seq 3365362952, win 1024, length 0
^C
3 packets captured
3 packets received by filter
0 packets dropped by kernel
These logs can also be saved by pflogd into a pcap-format file in
/var/log/pflog, which can be used for asynchronous troubleshooting and
inspection using tcpdump or anything else that can parse pcap files (such as
Wireshark).
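To give a flavor of what a fail2ban filter over this ASCII output needs to match, here is the kind of regex involved, tested in Python against the first capture line shown earlier. This is my sketch: an actual fail2ban failregex would spell the IP group as <HOST> instead.

```python
import re

# First line of the tcpdump pflog0 capture shown earlier.
line = ("01:48:13.748353 rule 1/0(match): block in on vtnet0: "
        "121.224.77.46.41854 > 46.38.233.77.6379: Flags [S], seq 1929621329, "
        "win 29200, options [mss 1460,sackOK,TS val 840989709 ecr 0,nop,"
        "wscale 7], length 0")

# Capture the source IP of an inbound blocked packet; the trailing \.\d+
# strips the source port that tcpdump glues onto the address.
BLOCK_RE = re.compile(
    r"rule \d+/\d+\(match\): block in on \w+: "
    r"(?P<ip>\d{1,3}(?:\.\d{1,3}){3})\.\d+ >"
)

print(BLOCK_RE.search(line).group("ip"))  # 121.224.77.46
```

fail2ban then counts how many distinct ports a single source IP has probed and bans it once the threshold is crossed.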
In 2023, I still run my own mailserver. Yes, because I like to keep control of
(at least part of) my own digital life, and I enjoy having multiple domain names
to put stuff on. However, I was paying €30/month to AWS to get, in
exchange, 2 cores, 2 GiB of RAM and 40 GB of disk: barely sufficient to run
IMAP+SMTP+MySQL+Clamd, let alone any form of spam protection or full-text
search on email bodies.
So, I was paying a lot of money to run a shitty service, and I even thought
about shutting everything down and moving my mail and my web sites onto some
form of fully hosted service.
I still want to do it
Guess what: to host four domains with just a few email redirects plus the web
sites I run, I would have spent more than I was already paying, while also
shackling myself to some service vendor and their policies.
So, since I wanted to run FreeBSD, I started scouting the ISPs
page until I shortlisted
Hetzner and
netcup, both of which offer aggressive
pricing, an old-fashioned VPS, and little more.
Settling on a vendor
Eventually, I settled on a netcup VPS 1000, which gives me, for 1/3 of the price
I was paying AWS, 4 times the resources: 6 cores, 8 GiB of RAM, 160 GiB of
RAID10 SSD, and an uncrippled, completely free FreeBSD installation.
However, the base image that Netcup provides has some limitations:
It runs on UFS
It lacks a swap partition
It has no encryption
Making a plan
As I was already well into the configuration stage and didn’t want to restart
from scratch (this is an old-fashioned server, manually managed, no automation),
I decided to: