TL;DR

FreeBSD: How to block port scanners from enumerating open ports on your server, using fail2ban and an ASCII representation of pf logs.

Preface

I use fail2ban to keep away attackers and bots alike that attempt to scan my websites or brute-force my mailboxes. Fail2ban works by scanning log files for specific patterns and keeping a per-IP count of matches, and it allows the system administrator to define what to do when that count exceeds a configured threshold.

The patterns are indicative of malicious activity, such as attempts to guess a mailbox password or to scan a web site for vulnerabilities.

Most of the time the action is to block the offending IP address via the machine's firewall, but fail2ban supports any mechanism you can conceive of, as long as it can be enacted by a UNIX command.
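
For reference, a jail in fail2ban ties a filter and a log file to a threshold and an action. A minimal sketch in jail.local (names, paths and values here are purely illustrative, not my actual configuration) looks something like:

[nginx-botsearch]
enabled   = true
filter    = nginx-botsearch
logpath   = /var/log/nginx/access.log
findtime  = 600
maxretry  = 5
bantime   = 86400
banaction = pf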

PF and its logs

On my FreeBSD server I use the excellent pf packet filter to police incoming traffic and to perform traffic normalization.

The PF logging mechanism is very UNIX-y: it provides a virtual network interface (pflog0) onto which the initial bytes of packets blocked by a rule carrying the log specifier are copied, so that real-time block logs can be inspected with a simple:

# tcpdump -eni pflog0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on pflog0, link-type PFLOG (OpenBSD pflog file), capture size 262144 bytes
01:48:13.748353 rule 1/0(match): block in on vtnet0: 121.224.77.46.41854 > 46.38.233.77.6379: Flags [S], seq 1929621329, win 29200, options [mss 1460,sackOK,TS val 840989709 ecr 0,nop,wscale 7], length 0
01:48:15.726215 rule 1/0(match): block in on vtnet0: 192.241.235.20.37422 > 46.38.233.77.5632: UDP, length 3
01:48:17.993439 rule 1/0(match): block in on vtnet0: 145.239.244.34.54154 > 46.38.233.77.1024: Flags [S], seq 3365362952, win 1024, length 0
^C
3 packets captured
3 packets received by filter
0 packets dropped by kernel
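
Only packets matched by a rule carrying the log keyword end up on pflog0. A hedged pf.conf fragment (interface and ports are just an example, not my actual ruleset) would be:

block in log on vtnet0 all
pass in quick on vtnet0 proto tcp to port { 22 25 80 443 }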

These logs can also be saved by pflogd into a pcap-format file in /var/log/pflog, which can be used for offline troubleshooting and inspection with tcpdump or anything else that can parse pcap files (such as Wireshark).
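
For instance, the saved file can be replayed later with:

# tcpdump -n -e -ttt -r /var/log/pflog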

Limits of binary logs

I had already configured fail2ban to parse the postfix, dovecot and nginx logs, so if you try to brute-force an SMTP or IMAP password on my box, or you run something like nikto against my web site, you'll soon be banned by fail2ban and your incoming connections will be dropped by pf.

However, I could not ask fail2ban to read the binary pflog file produced by pflogd, as fail2ban is regex-based and only understands text input.

Python to the rescue

I thought of a small piece of software that would (a rough approximation is sketched after the list):

  • Start an async loop
  • Run tcpdump and attach to its stdout and stderr
  • Write the stdout and stderr to a file
  • Trap the HUP and USR1 signals and re-open the file, to aid log rotation
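
The gist, minus the signal handling and the log re-opening, can be approximated with a line-buffered tcpdump appending to a text file (the file name is arbitrary; newsyslog would have to restart the process rather than just signal it):

tcpdump -l -n -e -i pflog0 >> /var/log/pflog.txt 2>&1 &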

Can I haz it?

Sure thing! Head over to GitHub and check out pfasciilogd and the supporting fail2ban configuration.
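
To give an idea of the fail2ban side, here is a sketch of a filter matching the tcpdump output shown earlier; the configuration in the repository is the authoritative version, and depending on the fail2ban version a datepattern matching tcpdump's HH:MM:SS timestamps may also be needed:

[Definition]
failregex = rule \d+/\d+\(match\): block in on \S+: <HOST>\.\d+ >
ignoreregex =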

I hope you find this useful.

Have fun!

FreeBSD encrypted root on ZFS

Preface

In 2023, I still run my own mailserver. Yes, because I like to keep control of (at least part of) my own digital life, and I enjoy having multiple domain names to host my stuff on. However, I was paying 30€/month to AWS to get in exchange 2 cores, 2 GiB of RAM and 40 GB of disk, barely sufficient to run IMAP+SMTP+MySQL+Clamd, let alone any form of spam protection or full-text search on email bodies.

So, I was paying a lot of money to run a shitty service, and I even thought about shutting everything off and moving my mail and my web sites onto some form of fully hosted service.

I still want to do it

Say what: to host four domains with just a few email redirects, plus the web sites I run, I would have spent more than I was already paying, and I would also have tied myself to some service vendor and their politics.

So, I wanted to run FreeBSD, and I started scouting the ISPs page until I decided to review Hetzner and netcup, which both offer aggressive pricing, an old-fashioned VPS and little more.

Settling on a vendor

Eventually, I settled on a netcup VPS 1000 that gives me, for 1/3 of the price I was paying to AWS, 4 times the resources: 6 cores, 8 GiB of RAM, 160 GiB of RAID10 SSD and an uncrippled, completely free FreeBSD installation.

However, the base image that Netcup provides has some limitations:

  • It runs on UFS
  • It lacks a swap partition
  • It has no encryption

Making a plan

As I was already well into the configuration stage and I didn't want to restart from scratch (this is an old-fashioned server, manually managed, no automation), I decided to:

  • Spin up temporary servers on hetzner to experiment
  • Hunt for the incantation required to get a bootable machine with full-disk encryption
  • Copy over the / from the netcup server to hetzner and see whether it boots
  • Rinse and repeat
  • Once the incantation is stable:
    • Boot a hetzner target server to temporarily hold all the data
    • Reboot the netcup source server from a CD so as to rsync all the data over to hetzner
    • Scratch the netcup server disk and recreate all the partitions and filesystems the way I like
    • Rsync all data back from hetzner to netcup and reboot

Executing it

Turns out, it actually works. I started with the FreeBSD installation CD, only to realise I didn't need the installer at all, because I already had a live system to migrate; so I ended up using mfsbsd both to spin up the target server and to boot the source server when it was time to copy everything back and forth.

Starting from this FreeBSD forum thread and this wiki page on ZFS boot, I ended up cooking up the following incantation:

Reboot from ramdisk and copy over the data to the temp server

This configures the network, installs rsync and upgrades libiconv, mounts the existing filesystem on /mnt, and rsyncs everything over to a temporary storage location:

ifconfig vtnet0 inet6 2a03:4000:2:33c::42 prefixlen 64
route -6 add default fe80::1%vtnet0
echo 'nameserver 2a03:4000:0:1::e1e6' > /etc/resolv.conf

pkg install rsync
pkg upgrade libiconv

mount /dev/vtbd0p2 /mnt

cd /mnt
rsync --archive --recursive --times --executability --hard-links \
  --links --perms --compress --exclude .sujournal --exclude .swapfile \
  --exclude .snap --exclude 'dev/*' --exclude 'srv/www/*/dev/*' \
  . root@m17.openssl.it:/mnt

Create the partitions

Here we create a boot partition holding the gptboot executable, whose responsibility is to load and execute the FreeBSD loader from the cleartext /boot partition.

Then we create a swap partition and, finally, a freebsd-zfs partition that will contain our ZFS pool.

gpart destroy -F vtbd0
gpart create -s GPT vtbd0

gpart add -s 472 -t freebsd-boot vtbd0
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 vtbd0

gpart add -s 1G -t freebsd-ufs -l boot vtbd0
gpart set -a bootme -i 2 vtbd0

gpart add -s 2G -t freebsd-swap -l swap vtbd0
gpart add -t freebsd-zfs -l root vtbd0
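
A quick sanity check before moving on; the freebsd-boot, freebsd-ufs, freebsd-swap and freebsd-zfs partitions should all show up:

gpart show vtbd0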

Create /boot and the encrypted root device

Here we create a UFS filesystem for the unencrypted /boot partition, which will hold the kernel, the loader and part of the encryption key used to encrypt the root. That key file alone is not sufficient to gain access to the filesystem, as an additional passphrase is also needed.

newfs -O 2 -U -m 8 -o space /dev/vtbd0p2
mkdir /tmp/ufsboot
mount /dev/vtbd0p2 /tmp/ufsboot
mkdir -p /tmp/ufsboot/boot/geli
dd if=/dev/random of=/tmp/ufsboot/boot/geli/vtbd0p4.key bs=64 count=1

geli init -e AES-XTS -l 256 -s 4096 -bd -K /tmp/ufsboot/boot/geli/vtbd0p4.key /dev/vtbd0p4
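# geli init also saves a metadata backup as /var/backups/vtbd0p4.eli; keep a copy next to the key file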
cp /var/backups/vtbd0p4.eli /tmp/ufsboot/boot/geli
geli attach -k /tmp/ufsboot/boot/geli/vtbd0p4.key /dev/vtbd0p4
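
At this point the encrypted provider should be available as /dev/vtbd0p4.eli, which can be confirmed with:

geli status vtbd0p4.eli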

Create ZFS pool

This is my layout, which I mostly use to limit executability of paths that should not be executable, and to ease snapshotting separate parts of the filesystem that need different retention strategies:

zpool create -R /mnt -O canmount=off -O mountpoint=none -O atime=off -O compression=lz4 tank /dev/vtbd0p4.eli
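# note: -R /mnt above is a temporary altroot, so mountpoint=/ and the other mountpoints end up under /mnt while working from the ramdisk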
zfs create -o mountpoint=/ tank/ROOT
zfs create -o mountpoint=/tmp  -o exec=off     -o setuid=off  tank/tmp
zfs create -o canmount=off -o mountpoint=/usr                 tank/usr
zfs create                                     -o setuid=off  tank/usr/ports
zfs create -o canmount=off -o mountpoint=/var                 tank/var
zfs create                     -o exec=off     -o setuid=off  tank/var/log
zfs create -o atime=on         -o exec=off     -o setuid=off  tank/var/spool
zfs create                     -o exec=off     -o setuid=off  tank/var/tmp
zfs create -o canmount=off -o mountpoint=/srv                 tank/srv
zfs create                     -o exec=off     -o setuid=off  tank/srv/mail
zfs create                     -o exec=off     -o setuid=off  tank/srv/www
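
With the datasets in place, the layout can be double-checked with:

zfs list -r -o name,mountpoint,canmount,exec,setuid tank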

Finally, mount the unencrypted UFS boot partition below the ZFS filesystem hierarchy:

umount /dev/vtbd0p2
mkdir /mnt/ufsboot
mount /dev/vtbd0p2 /mnt/ufsboot

Copy everything back!

Now it's time to fetch the data back from the temporary location it was placed in, and write it onto the shiny new ZFS pool on the GELI-encrypted root:

rsync --archive --recursive --times --executability --hard-links \
  --links --perms --compress root@m17.openssl.it:/mnt/ /mnt

mv /mnt/boot/* /mnt/ufsboot/boot
rm -rf /mnt/boot
ln -s ufsboot/boot /mnt

We use a symlink to point /boot to /ufsboot/boot, so the system behaves as if /boot were a normal directory in /. Keeping a /boot subdirectory inside the boot partition is required because plenty of loader code depends on hard-coded /boot paths.

What’s left

/etc/fstab, with encrypted swap of course (the .eli suffix makes the rc scripts attach the swap partition as a one-time-key GELI device at boot):

/dev/vtbd0p2 /ufsboot ufs rw 0 1
/dev/vtbd0p3.eli none swap sw,ealgo=AES-XTS,keylen=128,sectorsize=4096 0 0

/boot/loader.conf.d/geli.conf:

geom_eli_load="YES"
geli_vtbd0p4_keyfile0_load="YES"
geli_vtbd0p4_keyfile0_type="vtbd0p4:geli_keyfile0"
geli_vtbd0p4_keyfile0_name="/boot/geli/vtbd0p4.key"
zfs_load="YES"
vfs.root.mountfrom="zfs:tank/ROOT"

/etc/rc.conf:

zfs_enable="YES"

Did it blend?

Yes, of course it did! And it's been happily working ever since :-)

 03:44:10 root@m42:/srv/www/sindro.me/staging
 # uname -a
FreeBSD m42.openssl.it 13.2-RELEASE-p2 FreeBSD 13.2-RELEASE-p2 GENERIC amd64

 03:44:13 root@m42:/srv/www/sindro.me/staging
 # df -hT
Filesystem      Type      Size    Used   Avail Capacity  Mounted on
tank/ROOT       zfs       140G    6.7G    134G     5%    /
devfs           devfs     1.0K    1.0K      0B   100%    /dev
/dev/vtbd0p2    ufs       992M    189M    723M    21%    /ufsboot
tank/var/spool  zfs       134G    1.1M    134G     0%    /var/spool
tank/tmp        zfs       134G    220K    134G     0%    /tmp
tank/srv/mail   zfs       138G    4.8G    134G     3%    /srv/mail
tank/srv/www    zfs       136G    2.1G    134G     2%    /srv/www
tank/var/log    zfs       134G     13M    134G     0%    /var/log
tank/var/tmp    zfs       134G    224K    134G     0%    /var/tmp
tank/usr/ports  zfs       136G    2.6G    134G     2%    /usr/ports
/dev            nullfs    1.0K    1.0K      0B   100%    /srv/www/admin.openssl.it/dev
/dev            nullfs    1.0K    1.0K      0B   100%    /srv/www/mail.openssl.it/dev
/dev            nullfs    1.0K    1.0K      0B   100%    /srv/www/nhaima.org/dev
/dev            nullfs    1.0K    1.0K      0B   100%    /srv/www/spadaspa.it/dev
tank/usr/src    zfs       134G    773M    134G     1%    /usr/src
tank/usr/obj    zfs       134G     96K    134G     0%    /usr/obj

Il vero sistemista

Car repair

Il vero sistemista è un po' come il meccanico di una volta, quello che se gli portavi la macchina per rifare la convergenza e quando arrivavi sentiva che il minimo non andava bene, ti faceva la convergenza, e giustamente la pagavi, ma poi ti sistemava anche il minimo e non ti chiedeva nulla, lo faceva perché non sopportava di sentire una macchina che non era a punto come si deve.

Era quello che da ogni minimo e impercettibile rumore indovinava subito qualsiasi problema, anche quello di cui il cliente non si era ancora accorto.

Era quello che dopo cena a casa con la famiglia, tornava in officina, dove potevi vedere le luci accese fino a notte tarda, perché stava lavorando al “suo” gioiello, una qualche macchina semi d'epoca recuperata chissà dove che con passione piano piano sistemava fino a farla tornare nuova.

Ecco, il sistemista è come quel meccanico, e le sue auto sono i server.

Fonte: Veteran Unix Admins

English version

The real sysadmin is a bit like the old-fashioned car mechanic: the one you brought your car to for a wheel alignment, and who, as you pulled into his garage, could already hear that the engine wasn't idling at the right RPM. He did the alignment, and you rightly paid for it, but then he also fixed the idle without asking for anything - he did it because he couldn't stand hearing a car that wasn't tuned the way it should be.

He was the one who, from every tiny, imperceptible noise, could immediately pinpoint any problem, even the ones the customer had not noticed yet.

He was the one who, after dinner at home with his family, went back to the garage, where you could see the lights on until late at night, because he was working on “his” jewel: some semi-vintage car found who knows where, which he was slowly and passionately restoring until it was like new again.

The real sysadmin is like that mechanic, and his cars are servers.