Bring your Arch Linux install everywhere

The first time I installed Arch Linux was in 2007. In that foregone time, the only supported architecture was 32-bit x86, and the ISOs carried dubious release names such as “0.8 Voodoo”.

Despite the fact Arch shipped an installer that looked suspiciously like FreeBSD’s, you still had to configure a great deal of stuff by hand, using a slew of files with BSD-sounding names (rc.conf, anyone?). Xorg was the biggest PITA1, but tools such as xorgconfigure and shoddy patched Xorg servers helped users achieve the “Linux dream”, which at the time mostly consisted of wobbly Beryl windows and spinny desktop cubes. That was the real deal back then, and nothing gave you more street cred than having windows that wobbled like cubes of jelly.

Those days are (somewhat sadly) long gone. Today’s GNU/Linux distros are rather simple to install and set up, with often little to no configuration required (unless you are unlucky, of course). Distros targeted at advanced users, such as Arch, still require you to configure everything to your liking by yourself, but the overall stack (kernel, udev, Xorg, Wayland, …) is now exceptionally good at automatically configuring itself based on the current hardware. UEFI also smooths over a lot of the boot process’s warts.

This, alongside ultra-fast USB drive bays, makes self-configuring portable installs a concrete reality. I have been using Arch Linux installs from SSDs in USB caddies for years now, for work, system recovery and easy access to a ready-to-use environment from any computer. Despite the tradeoffs, it’s remarkably solid and convenient.

In this post, I’ll show step-by-step (with examples) how to install Arch Linux on a USB drive, and how to make it bootable everywhere 2, including virtual machines. I will try to cover as many corner cases as possible, but as always feel free to comment or contact me if you think something may be missing.

With a few adaptations, this guide may also be helpful to install Arch Linux on a non-mobile drive, if you so desire.

What you’ll need

  • An SSD drive in a USB enclosure. In my experience, “pre-made” external USB disks are designed for storage and tend to perform way worse than internal drives in an external caddy. Enclosures also have the extra advantages of improving durability 3 and letting you pop the SSD into a computer if you really need to.

    The SSD can be either SATA or NVMe, with the latter being the obvious better choice. I often run out of space on my machines, so I tend to use whatever SSD I have lying around. I honestly wouldn’t bother with anything smaller than 128GB, unless you really plan to only use it for system recovery or light browsing.

    NEVER use mechanical drives anymore or, at least, avoid them for anything that isn’t cold storage or NAS duty. This is even more true for this project: not only is spinning rust atrociously slow (most if not all 2.5” drives only spin at 5400 RPM), but mechanical drives are also outrageously fragile, and they almost always draw more power than the average USB port can supply.

  • An x86-64 computer. It doesn’t have to run Arch Linux, but an existing Arch install makes the whole process easier (especially if you wish to clone your current system to the USB drive - see below). Another viable option is to boot from the Arch Linux ISO and install the system from there, as long as you have another machine with a working internet connection to Google any issues you might encounter.

    Note: I will only cover x86-64 with UEFI because that’s by far the easiest and most reliable setup. BIOS booting requires tinkering with more complex bootloaders (such as SYSLINUX), while 32-bit x86 is not supported by Arch Linux anymore. 4

  • A working internet connection - for obvious reasons. If you don’t have an internet connection, then you probably have bigger problems to worry about than installing Arch Linux on a USB drive.

Setting up the drive

Given that we are talking about a portable install, disk encryption is nothing short of mandatory. In general, I think that encrypting your system is ALWAYS a good idea, even if you don’t plan to carry it around much 5.

The choices of filesystem and encryption scheme are up to you, but there are basically three options I’ve used and I can recommend:

  1. LUKS with a classic filesystem, such as ext4, F2FS or XFS. This is the simplest option, and it is probably more than enough for most people.

  2. ZFS with native encryption. I must admit, this may be somewhat overkill, but it’s also my favourite, because it’s such a great experience overall. While ZFS probably isn’t the best choice for a removable hard drive, it’s outstandingly solid, and supports compression, snapshots and checksumming - all things I do want from a system that runs from what’s potentially a flimsy USB cable. 6 I have yet to lose any data due to ZFS itself, and I have been using it for the best part of a decade now.

    ZFS is also what I use on all my installs, so I can easily migrate datasets from/to a USB-installed system using the zfs send/zfs receive commands if I need to, or quickly back up the whole system.

    Native ZFS encryption, while not as thoroughly tested and secure as LUKS, is still probably fine for most people, while also ridiculously convenient to set up. If that’s not enough for you, using ZFS on top of LUKS is still an acceptable choice (albeit more complicated to pull off).

  3. LUKS with BTRFS. I have also used this setup in the past, and there’s a lot to like about it, such as the fact that BTRFS supports lots of ZFS’s best features without requiring any out-of-tree kernel modules - a very nice plus indeed.

    Sadly, I have been burnt by BTRFS so many times in the past 12 years that I can’t honestly say I would want to entrust it with my data any time soon. YMMV, so maybe give it a try if you’re curious.

Whatever you choose, I will cover all three options in the next sections.

One important note: I deliberately decided to leave kernel images unencrypted (in UKI form) in the ESP, sticking with full encryption for just the root filesystem. My main concern is protecting the data stored on the drive in case it’s lost or stolen, and I assume nobody will attempt evil maid attacks. 7 Encrypting the kernel is also probably rather pointless without a signed bootloader and kernel - something that’s very hard to set up for a portable USB setup.

I also will not show how to set up UEFI Secure Boot. While having Secure Boot enabled is a good thing in general, it makes setting the system up vastly more complex, for debatable benefits. This setup is in general not meant for security-critical systems, but to provide a convenient way to carry a working environment around between machines you have complete control of.

0. (Optional) Obtaining a viable ZFS setup environment

Unfortunately, ZFS on Linux is an out-of-tree filesystem. This basically means that it’s not bundled with the kernel like every other filesystem, but is instead distributed by an independent project and has to be compiled and installed separately. This is due to a licensing incompatibility between the CDDL license used by OpenZFS and the GPLv2 license used by the Linux kernel, which makes it impossible to ever bundle ZFS and Linux together.

If you intend to use ZFS, you must follow these steps first; if not, just skip to section 1.

This procedure varies depending on the distribution you are using:

0.1. Arch Linux

Arch doesn’t distribute ZFS due to the aforementioned licensing issues, but it’s readily available and actively maintained by the ArchZFS project, both in the form of AUR PKGBUILDs and of the third-party repository archzfs.

The packages you are going to need are zfs-utils and a module compatible with your current kernel; the latter can either come from a kernel-specific package (e.g. zfs-linux-lts) or a DKMS one (e.g. zfs-dkms).

If you opt to install the packages from ArchZFS, add the [archzfs] repository to your pacman.conf (see the Arch Wiki for the correct URL), remembering to import the PGP key using pacman-key -r KEY followed by pacman-key --lsign-key KEY.
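As a sketch, the whole dance boils down to the following; note that the key fingerprint and mirror URL below are from memory, so double-check both against the Arch Wiki’s ZFS page before trusting them:

```shell
# Import and locally sign the ArchZFS signing key
# (verify the fingerprint on the Arch Wiki first!)
pacman-key -r DDF7DB817396A49B2A2723F7403BD972F75D9D76
pacman-key --lsign-key DDF7DB817396A49B2A2723F7403BD972F75D9D76

# Append the repository to pacman.conf (quoted heredoc:
# $repo and $arch must reach the file unexpanded)
cat <<'EOF' >> /etc/pacman.conf

[archzfs]
Server = https://archzfs.com/$repo/$arch
EOF

# Install the userspace tools plus the DKMS module
pacman -Sy zfs-utils zfs-dkms
```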

If you need to boot from an ISO, things are a bit more complicated, and too long to detail here. Take a look at this repo for a quick way to generate one. If you feel adventurous, you can also try to use an ISO with ZFS support from another distribution (such as Ubuntu) and follow the instructions below to set up a working environment.

0.2. Other Linux distributions

If you are starting from another distribution, you will need to visit the OpenZFS on Linux website and follow the instructions for your distribution (if included).

This will generally involve adding a third party repository (except for Ubuntu, which has ZFS in its main repos), and following the instructions.

For instance, on Debian it’s recommended to enable the backports repository, in order to install a more up-to-date version. This also requires modifying APT’s settings by pinning the backports repository to a higher priority for the ZFS packages:

# cat <<'EOF' > /etc/apt/sources.list.d/bookworm-backports.list
deb http://deb.debian.org/debian bookworm-backports main contrib
deb-src http://deb.debian.org/debian bookworm-backports main contrib
EOF
# cat <<'EOF' > /etc/apt/preferences.d/90_zfs
Package: src:zfs-linux
Pin: release n=bookworm-backports
Pin-Priority: 990
EOF
# apt update
# apt install dpkg-dev linux-headers-generic linux-image-generic
# apt install zfs-dkms zfsutils-linux

Regardless of what you are using, you should now have a working ZFS setup. You can verify this by running zpool status; if it prints no pools available instead of complaining about missing kernel modules, you are good to go, and you may start setting up the drive.

1. Partitioning

From either the Arch Linux ISO or your existing system, run a disk partitioning tool. I’m personally partial to gdisk, but parted and fdisk are also fine 8. parted also has a graphical frontend, gparted, which is very easy to use, in case you are afraid to mess up the partitioning and prefer having clear feedback on what you’re doing 9.

The partitioning scheme is generally up to you, with the bare minimum being:

  • A FAT32 EFI System Partition (ESP), ideally with a comfortable size of at least 300 MiB. This is where the UKI (and optionally, a bootloader) will be stored. I do not recommend going for BIOS/MBR, given that x86-64 computers have supported UEFI for more than a decade now.

    The ESP will be mounted at /boot/efi in the final system.

  • A root partition. This is where the system will be installed and all files will be stored. The size is up to you, but I would recommend at least 20 GiB for a very minimal system. While the system itself doesn’t necessarily need a lot of space, with limited storage space you will find yourself often cleaning up the package caches, logs and temporary files that litter /var.

    The root partition will also be our encryption root, and it will be formatted with either LUKS or ZFS.
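As an aside, when space does get tight, pruning pacman’s package cache is the quickest win; the paccache helper from the pacman-contrib package automates it:

```shell
# Keep only the most recent cached version of each package
paccache -rk1

# Additionally drop every cached version of packages that are
# no longer installed
paccache -ruk0
```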

While some guides may suggest also creating a swap partition, I generally don’t recommend using one when booting from USB. Swapping to USB storage will quickly turn into a massive bottleneck and slow the whole system to a crawl. If you really need swap, I would recommend looking into alternatives such as zram or zswap, which are probably a wiser choice.
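For the record, a zram device is only a few lines of configuration away with systemd’s zram-generator (packaged on Arch as zram-generator); the size expression below is the example suggested by its documentation:

```shell
# /etc/systemd/zram-generator.conf - picked up automatically at boot
cat <<'EOF' > /etc/systemd/zram-generator.conf
[zram0]
# Half the RAM, capped at 4 GiB
zram-size = min(ram / 2, 4096)
compression-algorithm = zstd
EOF
```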

Also, it goes without saying, do not hibernate a system that runs from a USB drive, unless you plan on resuming it on the same machine.

1.1 Creating the partition label

Feel free to skip to the next step if you already have a partition label on your drive, with two sufficiently sized partitions for the ESP and the root filesystem, and you don’t want to use the whole drive.

First, identify the device name of your USB drive. In my case, it’s a Samsung 960 EVO 250 GB NVMe drive inside a USB 3.2 enclosure:

$ ls -l1 /dev/disk/by-id/usb*
lrwxrwxrwx 1 root root  9 Aug 10 18:42 /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0 -> ../../sdb
lrwxrwxrwx 1 root root 10 Aug 10 18:42 /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Aug 10 18:42 /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0-part2 -> ../../sdb2

I can’t stress this enough: DO NOT USE RAW DEVICE NAMES WHEN PARTITIONING! Always use /dev/disk when performing destructive operations on block devices - it’s not a matter of if you will lose data, but when. 10 /dev/disk/by-id is by far the best choice due to how it clearly names devices by bus type, which makes it very hard to mix up devices by mistake.

Once you have identified the device name, run gdisk (or whatever you prefer) and create a new GPT label in order to wipe the existing partition table.

# gdisk /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0
GPT fdisk (gdisk) version

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries in memory.

Command (? for help): o
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): Y

Command (? for help): p
Disk /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0: 488397168 sectors, 232.9 GiB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 55F7C0C7-35B3-44C5-A2C4-790FE33014FD
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 488397134
Partitions will be aligned on 2048-sector boundaries
Total free space is 488397101 sectors (232.9 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name

Command (? for help):

1.2. Creating the EFI System Partition (ESP)

With a clean slate, we can now create an EFI partition (GUID type EF00). The ESP will not be encrypted and will contain the Unified Kernel Image the system will boot from; for this reason, I recommend giving it at least 300 MiB of space in order to avoid unpleasant surprises when updating the kernel.

Command (? for help): n               
Partition number (1-128, default 1): 1
First sector (34-488397134, default = 2048) or {+-}size{KMGTP}: 
Last sector (2048-488397134, default = 488396799) or {+-}size{KMGTP}: +300M
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): ef00
Changed type of partition to 'EFI system partition'

Notice how I left the first sector blank, and specified +300M as the last sector. This is because I want gdisk to automatically align the partition to the nearest sector boundary (2048 sectors in this case). gdisk is quite good at automatically deducing the correct alignment, a process that can be finicky with USB enclosures.
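For reference, the default 2048-sector alignment works out to exactly 1 MiB with 512-byte logical sectors, which is why it suits the vast majority of drives:

```shell
# One alignment unit: 2048 sectors x 512 bytes per sector
echo $((2048 * 512))                 # -> 1048576 bytes
echo $((2048 * 512 / 1024 / 1024))   # -> 1 MiB
```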

I also highly recommend giving the partition a GPT name (which will be visible under /dev/disk/by-partlabel):

Command (? for help): c
Using 1
Enter name: ExtESP

Command (? for help):

1.3. Creating the root partition

Finally, we can create the root partition. This partition will be encrypted, and will contain the system and all user data. I recommend giving it at least 20 GiB of space, but feel free to use more if you have some spare room.

For instance, the following command will create a partition using all of the remaining space on the drive:

Command (? for help): n
Partition number (2-128, default 2): 2
First sector (34-488397134, default = 616448) or {+-}size{KMGTP}: 
Last sector (616448-488397134, default = 488396799) or {+-}size{KMGTP}: 
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'

Command (? for help): c
Partition number (1-2): 2
Enter name: ExtRoot

Feel free to leave the default GUID type (8300) for the root partition, as it will be changed when formatting the partition later.

1.4. Writing the partition table

Once you are done, you should have a partition table resembling the following:

Command (? for help): p
Disk /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0: 488397168 sectors, 232.9 GiB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): AABE7D47-3477-4AB6-A7C1-BC66F87CB1C1
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 488397134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2349 sectors (1.1 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          616447   300.0 MiB   EF00  ExtESP
   2          616448       488396799   232.6 GiB   8300  ExtRoot

If everything looks OK, proceed to commit the partition table to disk. Again, ensure that you are writing to the correct device, that it does not contain any important data, and no old partition is mounted:

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0.
The operation has completed successfully.

2. Creating the filesystems

Now that we have a viable partition layout, we can proceed to create filesystems on top of it.

As I’ve mentioned before, there are several potential choices regarding what filesystems and encryption schemes to use. Regardless of what you’ll end up choosing, the ESP must always be formatted as FAT (either FAT32 or FAT16):

# mkfs.fat -F 32 -n EXTESP /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0-part1
mkfs.fat 4.2 (2021-01-31)

After doing this, proceed depending on what filesystem you want to use.

2.1. Straightforward: LUKS with a native filesystem

LUKS with a simple filesystem is by far the simplest solution, and (probably) the “safest” as far as setup complexity goes. LUKS can also be combined with LVM2 for more “advanced” setups, but that goes beyond the scope of this post.

As I’ve mentioned previously, we are going to set up full encryption for system and user data, but not for the kernel, which will reside in UKI form inside the ESP. If you are interested in a more “paranoid” setup, you can find more information in the Arch Wiki.

2.1.1. Creating the LUKS container

First, we need to format the previously created partition as a LUKS container, picking a good passphrase in the process. What makes a good passphrase is a whole topic in itself, and recommendations change frequently following current trends and cracking techniques. Personally, I recommend a passphrase that is easy for you to remember but hard for a computer to guess, such as a (very) long password full of spaces, letters, numbers and special characters.
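To put a rough number on “hard for a computer to guess”: a passphrase built from a 7776-word diceware-style list gains about 12.9 bits of entropy per word, so six or seven truly random words already land well beyond the reach of offline cracking (the word-list size is the classic diceware one; the adequacy threshold is my own rule of thumb):

```shell
# Entropy (in bits) of an N-word passphrase drawn uniformly
# at random from a 7776-word list: N * log2(7776)
awk 'BEGIN {
    for (n = 4; n <= 8; n++)
        printf "%d words: %.1f bits\n", n, n * log(7776) / log(2)
}'
```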

# cryptsetup luksFormat --label ExtLUKS /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0-part2
This will overwrite data on /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0-part2 irrevocably.

Are you sure? (Type 'yes' in capital letters): YES 
Enter passphrase for /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0-part2: 
Verify passphrase: 
Ignoring bogus optimal-io size for data device (33553920 bytes).

Note that I deliberately stuck with the default settings, which are good enough for most use cases.

After creating the container, we need to open it in order to format it with a filesystem:

# cryptsetup open /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0-part2 ExtLUKS
Enter passphrase for /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0-part2:

ExtLUKS is an arbitrary name I chose for the container - feel free to pick whatever name you like. Whatever your choice is, after successfully unlocking the LUKS container it will be available as a block device under /dev/mapper/<name>:

$ ls -l1 /dev/mapper/ExtLUKS
lrwxrwxrwx 1 root root 7 Aug 28 22:45 /dev/mapper/ExtLUKS -> ../dm-0

2.1.2. Formatting the container

Now that we have an unlocked LUKS container, we can format it with a “real” filesystem. Note that, if you wish to use LVM2, this would be the right time to create the LVM volumes.

No matter the filesystem you plan to use over LUKS, ext4, F2FS, XFS and Btrfs are all created via the respective mkfs tool:

# mke2fs -t ext4 -L ExtRoot /dev/mapper/ExtLUKS # for Ext4
# mkfs.f2fs -l ExtRoot /dev/mapper/ExtLUKS # for F2FS
# mkfs.xfs -L ExtRoot /dev/mapper/ExtLUKS # for XFS
# mkfs.btrfs -L ExtRoot /dev/mapper/ExtLUKS # for Btrfs

2.2. Advanced: Btrfs subvolumes

If you picked a “plain” filesystem such as ext4, F2FS or XFS, you can skip this section.

In case you picked Btrfs, it’s a good idea to create subvolumes for / and /home in order to take advantage of Btrfs’s snapshotting capabilities.

Compared to older filesystems, Btrfs and ZFS have the built-in capability to create logical subvolumes (datasets in ZFS parlance) that can be mounted, snapshotted and managed independently. This is somewhat similar to LVM2, but immensely more powerful and flexible; all subvolumes share the same storage pool and can have different properties enabled (such as compression or CoW), or ad-hoc quotas and mount options.

Compared to other filesystems, Btrfs (and ZFS) requires the filesystem to be online and mounted in order to perform operations on it, such as scrubbing (an operation akin to fsck) and subvolume management.

2.2.1. Mounting the root subvolume

Mount the filesystem on a temporary mountpoint:

# mount /dev/mapper/ExtLUKS /path/to/temp/mount
# mount | grep ExtLUKS
/dev/mapper/ExtLUKS on /path/to/temp/mount type btrfs (rw,relatime,ssd,space_cache=v2,subvolid=5,subvol=/)

Notice how mtab includes the options subvolid=5,subvol=/. This means that the default subvolume has been mounted, identified with the ID 5 and named /. This is the subvolume that will be mounted by default, acting as the root parent of all other subvolumes.

2.2.2. Creating the subvolumes

Now we can create the subvolumes for / and /home, called @ and @home respectively:

# btrfs subvolume create /path/to/temp/mount/@     # for /
Created subvolume '/path/to/temp/mount/@'
# btrfs subvolume create /path/to/temp/mount/@home # for /home
Created subvolume '/path/to/temp/mount/@home'

Using a @ prefix for Btrfs subvolume names is a long-established convention. The situation should now look like this:

# btrfs subvolume list -p /path/to/temp/mount
ID 256 gen 8 parent 5 top level 5 path @
ID 257 gen 9 parent 5 top level 5 path @home
# ls -l1 /path/to/temp/mount

Notice how, in Btrfs, subvolumes are also subdirectories of their parent subvolume. This is very useful when mounting the disk as an external drive. Subvolumes can also be mounted directly by passing the subvol or subvolid options to mount.
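For instance, to grab only your home data from another machine, you can mount just that subvolume (the target directory here is illustrative):

```shell
# Mount only @home; -o subvolid=257 would select the same subvolume
mount -o subvol=@home /dev/mapper/ExtLUKS /mnt/exthome
```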

Before moving to the next step, remember to unmount the root subvolume.

2.3. Advanced: ZFS with native encryption

My personal favourite, ZFS is a rock-solid filesystem that’s ubiquitous in data storage, thanks to its impressive stability record and advanced features such as deduplication, built-in RAID, and more.

Albeit arguably less flexible than Btrfs, which was originally designed as a Linux-oriented replacement for the CDDL-encumbered ZFS, in my experience ZFS tends to be vastly more stable and reliable in day to day use. In the last 6 years, I have almost exclusively used ZFS on all my computers, and I have yet to lose any data due to ZFS itself. 11

ZFS is quite different from other filesystems. Instead of filesystems, ZFS works with pools, which consist of collections of one or more block devices (potentially in RAID configurations). Every pool can be divided into a hierarchy of datasets, which are roughly equivalent to subvolumes in Btrfs.

Datasets can be mounted independently, and can each have their own properties, such as compression, quotas, and so on, which may either be set per-dataset or inherited from the parent dataset.

Compared to Btrfs, ZFS manages its own mountpoints as inherent properties of the dataset. This is both incredibly useful and bothersome; on one hand, having mountpoints intrinsically tied to datasets allows for easier management and more clarity than legacy mounting, but on the other hand it may become confusing and inflexible when managing complex setups. In any case, you can opt out of letting ZFS manage the mountpoint of a given dataset by setting its mountpoint to legacy, and mounting it manually as you would any other filesystem.
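Opting out looks like this (the dataset name is illustrative):

```shell
# Stop ZFS from managing this dataset's mountpoint...
zfs set mountpoint=legacy extzfs/somedataset

# ...and from now on mount it manually, like any other filesystem
mount -t zfs extzfs/somedataset /mnt/somewhere
```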

2.3.1. Creating the ZFS pool

Our case is quite simple, given that we only have a single drive.

Create a new pool called extzfs (or whatever you prefer), being careful to specify an altroot via -R; otherwise, the new mountpoints will override your system ones as soon as you set up the pool:

# zpool create -m none -R /tmp/mnt extzfs /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0-part2

You may have to specify -f if the partition wasn’t empty before. Note the -m none option, which will set no mountpoint for the root dataset of the pool itself. Compared to Btrfs, ZFS doesn’t expose datasets as subdirectories of their parent pool, so it makes little sense to allow mounting the root dataset.

2.3.2. Creating an encrypted dataset root

As mentioned before, we are going to use native ZFS encryption, which is generally considered safe, but may not be as water-tight and battle-tested as LUKS; this is generally not a problem for anyone but the most paranoid. If you count yourself among their ranks, remember that you can always use ZFS on top of LUKS. It may end up being more complex, but it’s a viable option.

First, we need to create an encrypted dataset; this will act as the encryption root for all the other datasets. We will (arbitrarily) call it extzfs/encr:

# zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt -o mountpoint=none -o compression=lz4 extzfs/encr
Enter new passphrase:
Re-enter new passphrase:

Notice that we are using the passphrase key format alongside the prompt key location. This means that ZFS will expect the encryption key in the form of a passphrase entered by the user. Another option would be to use a key file, which is arguably more secure but also vastly more cumbersome to use for the root device, so I’ll leave how to use one as an exercise to the reader.

Like with LUKS, I recommend picking a safe passphrase that’s easy to remember but hard to guess. See paragraph 2.1.1. for more details.

Also, as in Btrfs’s case, I will enable compression in order to spare some space on my small SSD. This may potentially leak a bit of information about the data contained inside the encrypted container, but it’s generally not a problem for most people.

2.3.3. Creating the system dataset

Now that we have an encryption root, we can create all the datasets we need under it, and they will be encrypted and unlocked automatically along with it.

Keeping in mind that it’s good practice to create a hierarchy that allows for the quick and easy creation of new boot environments, under encr we are going to create:

  1. A root dataset, which will not be mounted, and under which we will place datasets containing system images
  2. A home dataset, which will act as the root for all user-data datasets
  3. A default dataset under root, which will be mounted as / and contain the system we’re going to install
  4. A logs dataset for /var/log under default, which is required to be a separate dataset in order to enable the ACLs required by systemd-journald
  5. users and root datasets under home, which will respectively be mounted as /home and /root.

# zfs create -o mountpoint=none extzfs/encr/root
# zfs create -o mountpoint=none extzfs/encr/home
# zfs create -o mountpoint=/ extzfs/encr/root/default
# zfs create -o mountpoint=/var/log -o acltype=posixacl extzfs/encr/root/logs
# zfs create -o mountpoint=/home extzfs/encr/home/users
# zfs create -o mountpoint=/root extzfs/encr/home/root

After running the commands above, the situation should look like the following:

# zfs list
NAME                        USED   AVAIL     REFER  MOUNTPOINT
extzfs                     1.20M    225G       24K  none
extzfs/encr                 721K    225G       98K  none
extzfs/encr/home            294K    225G       98K  none
extzfs/encr/home/root        98K    225G       98K  /tmp/mnt/root
extzfs/encr/home/users       98K    225G       98K  /tmp/mnt/home
extzfs/encr/root            329K    225G       98K  none
extzfs/encr/root/default    231K    225G      133K  /tmp/mnt
extzfs/encr/root/logs        98K    225G       98K  /tmp/mnt/var/log

Notice how all mountpoints are relative to /tmp/mnt, which is the alternate root the extzfs pool was imported with (in this case, created with) via the -R flag. The prefix will be stripped when importing the pool on the final system, leaving only the real mountpoints. This feature makes mounting systems installed on ZFS incredibly convenient: the entire hierarchy is properly mounted under any directory you choose, allowing you to rapidly chroot into the system and perform emergency maintenance operations.

2.3.4. Setting the bootfs

The pool’s bootfs property can be used to indicate which dataset contains the desired boot environment. This is not strictly necessary, but it helps simplify the kernel command line.

Run the following command to set the bootfs property to extzfs/encr/root/default:

# zpool set bootfs=extzfs/encr/root/default extzfs

For the sake of consistency, now export the pool before moving to the next step. This is not strictly necessary, but it doesn’t hurt to verify that the pool can be correctly imported using the given passphrase.

To export the pool and unmount all datasets, run:

# zpool export extzfs

3. Installing Arch Linux

Installing Arch Linux is not the complex task it once was. Arguably, it still requires a bit of knowledge and experience, but it’s not out of reach for most tech-savvy users.

In general, when installing Arch onto a new drive (in this case, our portable SSD), there are two basic approaches:

  1. Install a fresh system from either an existing Linux install12 or the Arch Linux ISO;
  2. Clone an existing system to the new drive.

I’ll cover both approaches in the next sections, along with a few tips and tricks I’ve learnt over the years.

3.1. Mounting the filesystems

Regardless of the filesystem or approach you’ve picked, you should now mount the root filesystem on a temporary mountpoint. I will use /tmp/mnt, but feel free to use whatever you prefer:

# mkdir /tmp/mnt

If using LUKS:

# cryptsetup open /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0-part2 ExtLUKS 

and then, depending on the filesystem:

# mount /dev/mapper/ExtLUKS /tmp/mnt # for ext4/XFS/F2FS
# mount -o subvol=@,compress=lzo /dev/mapper/ExtLUKS /tmp/mnt # for Btrfs

I’ve also enabled compression for Btrfs, which may or may not be a good idea depending on your use case. Notice that compressing data before encrypting it may hypothetically leak some info about the data contained. Avoid compression if you are concerned about this and/or you have a very large SSD.

If using ZFS, run:

# zpool import -l -d /dev/disk/by-id -R /tmp/mnt extzfs

and it should do the trick.

3.2. Installing from scratch

If in doubt, just follow the official Arch Linux installation guide - it will not cover all the details of advanced installs, but it’s a good starting point.

In general, the steps somewhat resemble the following, regardless of what filesystem you’ve picked:

# mkdir -p /tmp/mnt/boot/efi # we still need to mount the ESP, which is on a separate partition
# mount /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0-part1 /tmp/mnt/boot/efi

3.2.1. Installing from an existing Arch Linux install

If you are running from an existing Arch Linux install or the Arch ISO, installing a base system is as easy as running pacstrap (from the arch-install-scripts package) on the mountpoint of the root filesystem:

# pacstrap -K /tmp/mnt base perl neovim
[lots of output]

I’ve also thrown in neovim because base installs no editor by default, but feel free to use whatever you like. perl is also (implicitly) required by several packages, and not installing it may trigger unpredictable issues later.

Now enter the new system with arch-chroot:

# arch-chroot /tmp/mnt

3.2.2. Installing from a non-Arch system

All the steps above (except for pacstrap) can be performed from basically any Linux distribution. If you are running from a non-Arch system, don’t worry - there are workarounds available for that.

A solution that is always viable is to use the bootstrap tarball from an Arch mirror.
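For reference, fetching and extracting a bootstrap tarball looks roughly like this (the mirror URL and exact filename are assumptions - check your mirror of choice for the current name):

```shell
# Download the bootstrap tarball from an Arch mirror (the filename changes over time)
curl -LO https://geo.mirror.pkgbuild.com/iso/latest/archlinux-bootstrap-x86_64.tar.zst
# Extract it preserving numeric ownership (requires root)
sudo tar xf archlinux-bootstrap-x86_64.tar.zst --numeric-owner -C /tmp
# A minimal Arch chroot is now available under /tmp/root.x86_64
```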

A trickier (but arguably more fun) path is to build pacman from source, and then use it to install the base system. For instance, on Debian:

$ sudo apt install build-essential meson cmake libcurl4-openssl-dev libgpgme-dev libssl-dev libarchive-dev pkgconf
$ wget -O - | tar xvfJ -
$ cd pacman-6.0.2
$ meson setup --default-library static build # avoid linking pacman with the newly built shared libalpm
$ ninja -C build

You should have a working pacman binary in build/pacman. In order to install the base system, you need to create a minimal pacman.conf file:

$ cat <<'EOF' >> build/pacman.conf
SigLevel = Never

[core]
# replace with your favourite Arch mirror
Server = https://your.mirror.example.org/archlinux/$repo/os/$arch

[extra]
Server = https://your.mirror.example.org/archlinux/$repo/os/$arch
EOF

For this time only, I have disabled signature verification because going through the whole ordeal of setting up pacman-key and importing the Arch Linux signing keys for a makeshift pacman install is very troublesome. If you are really concerned about security, use the bootstrap tarball instead.

Create the required database directory for pacman, and install the same packages as above:

$ sudo mkdir -p /tmp/mnt/var/lib/pacman/
$ sudo build/pacman -r /tmp/mnt --config=build/pacman.conf -Sy base perl neovim

This will result in a working Arch Linux chroot, albeit only partially set up.

Chroot into the new system, and properly set up the Arch Linux keyring:

$ sudo mount --make-rslave --rbind /dev /tmp/mnt/dev
$ sudo mount --make-rslave --rbind /sys /tmp/mnt/sys
$ sudo mount --make-rslave --rbind /run /tmp/mnt/run
$ sudo mount -t proc /proc /tmp/mnt/proc
$ sudo cp -L /etc/resolv.conf /tmp/mnt/etc/resolv.conf
$ sudo chroot /tmp/mnt /bin/bash
[root@chroot /]# pacman-key --init
[root@chroot /]# pacman-key --populate archlinux

You can now proceed as if you were installing from an existing Arch Linux system.

3.2.3. Installing a kernel

In order to install packages inside your chroot, you need to enable at least one Pacman mirror first in /etc/pacman.d/mirrorlist. If you used pacstrap from an existing Arch Linux system, this may be unnecessary.
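If you want to enable mirrors without opening an editor, a quick (if blunt) approach is to uncomment every Server line at once:

```shell
# Enable all mirrors in the list by uncommenting their Server lines
sed -i 's/^#Server/Server/' /etc/pacman.d/mirrorlist
```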

After enabling one or more mirrors, you can install a kernel of your choice:

[root@chroot /]# pacman -Sy linux-lts linux-lts-headers linux-firmware

Notice that I’ve chosen to install the LTS kernel, which is in general a good idea when depending on out-of-tree kernel modules such as ZFS or NVIDIA drivers. Feel free to install the latest kernel if you prefer, but remember to be careful when updating the system due to potential module breakage.

The command above will also generate an initrd, which we don’t really need (we will use UKI instead). We will have to delete that later.

3.2.4. Installing the correct helpers for your filesystem

In order for fsck to properly run, or to mount ZFS, you need to install the correct package for your filesystem:

  1. If you’ve installed your system over ZFS, this is a good time to set up the ArchZFS repository in the chroot (see above)
  2. If you’ve installed your system over Btrfs, you need to install btrfs-progs. cryptsetup should already have been pulled in as a dependency of systemd
  3. If you are using another filesystem, install the correct package:

    a. For ext4, e2fsprogs should already have been pulled in by dependencies installed by base - ensure you can run e2fsck from the chroot.

    b. For XFS, install xfsprogs.

    c. For F2FS, install f2fs-tools.

Remember to also always install dosfstools, which is required to fsck the FAT filesystem on the ESP.
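In practice, this step boils down to a single pacman invocation; for instance, for the Btrfs-on-LUKS layout used throughout this guide:

```shell
# btrfs-progs for the root filesystem, dosfstools for the FAT ESP
[root@chroot /]# pacman -S --needed btrfs-progs dosfstools
```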

3.3. Cloning an existing system

Instead of installing the system from scratch, you may clone an existing system. Just remember, after the move, to:

  1. fix /etc/fstab with the new PARTUUIDs;
  2. give the system a unique configuration (i.e., change the hostname, fix the hostid, …) in order to avoid clashes;
  3. avoid transferring the contents of the ESP - if you use UKI and mount it at /boot/efi, you will regenerate its contents later when you reapply the steps from above.
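As a sketch, the new PARTUUIDs for /etc/fstab can be listed with lsblk, and (for ZFS) a fresh hostid can be generated with zgenhostid from inside the new root:

```shell
# List the new partitions' PARTUUIDs, to be pasted into /etc/fstab
lsblk -o NAME,PARTUUID /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0
# ZFS only: drop the cloned hostid and generate a fresh one
rm -f /tmp/mnt/etc/hostid
chroot /tmp/mnt zgenhostid
```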

There are four feasible ways to do this.

3.3.1. Use dd to clone a partition block by block.

This method has a few advantages, and quite a few downsides:

  • PRO: because it literally clones an entire disk, byte by byte, to another, it is the most conservative method of all
  • CON: because it clones an entire disk byte by byte, issues such as fragmentation and data in unallocated sectors are copied over
  • CON: because it clones an entire disk byte by byte, the target partition or disk must be at least as large as the source, or the source must be shrunk beforehand, which is not always possible (like with XFS)

If you opt for this solution, just run dd and copy one or more existing partitions to the LUKS container:

# dd if=/path/to/source/partition of=/dev/mapper/ExtLUKS bs=1M status=progress

3.3.2. Use rsync to clone a filesystem onto a new partition.

This method is the most flexible, because it’s completely agnostic regarding the source and destination filesystems, as long as the destination can fit all the contents of the source. Just mount everything where it’s supposed to go, and run (as root):

# rsync -qaHAXS /{bin,boot,etc,home,lib,opt,root,sbin,srv,usr,var} /tmp/mnt/dest

The root has now been cloned, but it’s missing some base directories.

Given that I assume we are booting from an Arch Linux system, just reinstall filesystem inside the new root:

$  sudo pacman -r /tmp/mnt --config /tmp/mnt/etc/pacman.conf -S filesystem

This will fix up any missing directories and symlinks, such as /dev, /proc, … Notice that only for this time I have used the -r parameter. This changes pacman’s root directory, and should always be used with extreme care.

3.3.3. Use Btrfs snapshotting and replication facilities to clone existing subvolumes.

Btrfs supports incremental snapshotting and sending/receiving them as incremental data streams. This is extremely convenient, because replication ensures that files are transferred perfectly (with the right permissions, metadata, …) without having to copy any unnecessary empty space.

In order to duplicate a system using Btrfs, partition and format the disk as described above, and then snapshot and send the subvolumes to the new disk. Assuming the root subvolume has been mounted under /tmp/src:

# mount -o subvol=/ /path/to/root/dev /tmp/src
# mount -o subvol=/ /dev/mapper/ExtLUKS /tmp/mnt
# btrfs su snapshot -r /tmp/src/@{,-mig}
Create a readonly snapshot of '/tmp/src/@' in '/tmp/src/@-mig'
# btrfs su snapshot -r /tmp/src/@home{,-mig}
Create a readonly snapshot of '/tmp/src/@home' in '/tmp/src/@home-mig'
# btrfs send /tmp/src/@-mig | btrfs receive /tmp/mnt
At subvol /tmp/src/@-mig
At subvol @-mig
# btrfs send /tmp/src/@home-mig | btrfs receive /tmp/mnt
At subvol /tmp/src/@home-mig
At subvol @home-mig

The system has now been correctly transferred. Rename the subvolumes to their original names and delete the now unnecessary snapshots if you want to reclaim the space 13:

# perl-rename -v 's/\-mig//g' /tmp/mnt/@* 
/tmp/mnt/@-mig -> /tmp/mnt/@
/tmp/mnt/@home-mig -> /tmp/mnt/@home
# btrfs su delete /tmp/src/@*-mig
Delete subvolume (no-commit): '/tmp/src/@-mig'
Delete subvolume (no-commit): '/tmp/src/@home-mig'
# umount /tmp/{src,mnt}
# mount -o subvol=@,compress=lzo /dev/mapper/ExtLUKS /tmp/mnt

Unmount the root subvolume and mount the system as you normally would. You are now ready to move to the next step.

3.3.4. Use ZFS snapshotting and replication facilities to clone existing datasets.

With ZFS, the process is very similar to Btrfs, with a few different steps depending on whether your source datasets are already encrypted or not.

After creating a pool, snapshot your root disk recursively. If your system resides on an encrypted dataset, snapshotting the encryption root will also snapshot all the datasets contained within it:

# zfs snapshot -r zroot/encr@migration # otherwise, snapshot all the required datasets

After doing that, you can either:

  1. create a new encrypted dataset and send the unencrypted snapshots to it:
      # zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt -o mountpoint=none -o compression=lz4 extzfs/encr
      # for DATASET in root home ... # note: replace with the actual datasets
      > do zfs send zroot/$DATASET@migration | zfs recv extzfs/encr/$DATASET
      > done

Migrating unencrypted datasets to an encrypted root dataset requires transferring the snapshots one by one. It’s generally easier to just let the newly received snapshots inherit properties from their parents, and then fix mountpoints and other properties later using zfs set. You can also do it directly, if necessary, by setting the properties with the -o flag of zfs recv.
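For example, assuming datasets named root and home as in the loop above (the names are illustrative), fixing up the received datasets may look like:

```shell
# Point the received datasets at their intended mountpoints
zfs set mountpoint=/ extzfs/encr/root
zfs set mountpoint=/home extzfs/encr/home
# Re-apply compression in case it wasn't inherited from the parent
zfs set compression=lz4 extzfs/encr
```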

Ensure that all datasets are correctly mounted before moving to the next step.

  2. clone another encrypted dataset as raw data:
      # zfs send -Rw zroot/encr@migration | zfs recv -F extzfs/encr

This will recursively clone all the datasets under a new encrypted dataset called extzfs/encr/encr. The new encryption root will have the same key as the source dataset, so you will be able to unlock it with the same passphrase. All properties and mountpoints will also be kept.

Given that all properties have been preserved, it may be enough to run

  # zfs mount -la

to unlock and mount all new datasets. If that doesn’t result in correctly mounted datasets, ensure that all properties (including mountpoints) have been correctly preserved.

3.3.5. Migrating filesystems: wrapping up

Regardless of the method you’ve picked, you should now have a working system on the new disk. Chroot into it as described in section 3.2., and then proceed to the next step.

3.4. Configuring the base system

Regardless of whatever path you took, you should now be in a working Arch Linux chroot.

3.4.1. Basic configuration

Most of the pre-boot configuration steps now are basically the same as a normal Arch Linux install:

[root@chroot /]# nvim /etc/pacman.d/mirrorlist # enable your favourite mirrors
[root@chroot /]# nvim /etc/locale.gen          # enable your favourite locales (e.g. en_US.UTF-8) 
[root@chroot /]# locale-gen                    # generate the locales configured above

The next step is to populate the /etc/fstab file with the correct entries for all your partitions. Remember to use PARTUUIDs or plain UUIDs, and never rely on disk and partition names (except for /dev/mapper device files). The contents of /etc/fstab will vary depending on the filesystem you’ve picked. Remember that the initrd will be the one to unlock the LUKS container, so you don’t need to specify it in /etc/crypttab.

  • /etc/fstab for ext4/XFS/F2FS with LUKS:
    # it is not strictly necessary to also include the root partition, but it's a good idea in case of remounts
    /dev/mapper/ExtLUKS                             /            ext4    defaults      0 1
    PARTUUID=4a0eab50-7dfc-4dcb-98a6-ad954d344ad7   /boot/efi    vfat    defaults      0 2
  • /etc/fstab for Btrfs with LUKS:
    /dev/mapper/ExtLUKS                             /            btrfs   defaults,subvol=@,compress=lzo       0 0
    /dev/mapper/ExtLUKS                             /home        btrfs   defaults,subvol=@home,compress=lzo   0 0
    PARTUUID=4a0eab50-7dfc-4dcb-98a6-ad954d344ad7   /boot/efi    vfat    defaults                             0 2
  • With ZFS, all datasets mountpoints are managed via the filesystem itself. /etc/fstab will only contain the ESP (unless you have created legacy mountpoints):
PARTUUID=4a0eab50-7dfc-4dcb-98a6-ad954d344ad7   /boot/efi    vfat    defaults      0 2

Then, set a password for root:

[root@chroot /]# passwd
New password: 
Retype new password: 
passwd: password updated successfully

and create a standard user. Remember to mount /home first if you are using a separate partition or subvolume!

[root@chroot /]# mount /home # if using a separate partition or subvolume, not needed with ZFS
[root@chroot /]# useradd -m marco # this is my name
[root@chroot /]# passwd marco
New password:
Retype new password:
passwd: password updated successfully

Before moving to the next step, install all packages required for connectivity, or you may be unable to connect to the internet after you boot up the system.

For simplicity, I’ll just install NetworkManager:

[root@chroot /]# pacman -S networkmanager

As the last step before moving to the next point, remember to configure the correct console layout in /etc/vconsole.conf, or you will have a hard time typing your password at boot time (the file will be copied to the initrd):

[root@chroot /]# cat > /etc/vconsole.conf <<'EOF'
# replace 'us' with your actual keyboard layout
KEYMAP=us
EOF

3.4.2. Configuring the kernel

Configuring the system for booting on multiple machines is easier than it sounds, thanks to how good Linux and the graphical stack have become at automatically configuring themselves depending on the hardware.

In the chroot, run the following preliminary steps:

  1. (optional) First, install ZFS (if you are using it); if using the LTS kernel, I recommend using zfs-dkms, while for a more up-to-date kernel a “fixed” build such as zfs-linux is probably safer.
  2. In order to support systems with an NVIDIA GPU, install the Nvidia driver (nvidia or nvidia-lts, depending on what you’ve chosen) 14.
  3. Install the microcode for both Intel and AMD CPUs (intel-ucode and amd-ucode respectively). Only the correct one will be loaded at boot time.
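Assuming the LTS kernel from the previous section, the three steps above amount to something like this (zfs-dkms comes from the ArchZFS repository and is only needed for ZFS setups):

```shell
[root@chroot /]# pacman -S zfs-dkms              # optional: ZFS support (ArchZFS repo)
[root@chroot /]# pacman -S nvidia-lts            # NVIDIA driver matching linux-lts
[root@chroot /]# pacman -S intel-ucode amd-ucode # microcode for both vendors
```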

With the kernel and all necessary modules installed, we can now generate a bootable image.

For this step I’ve decided to use UKI, which is a novel approach to initramfs that simplifies the process a lot by merging the kernel and initrd into a single bootable file. This is not strictly necessary, but it allows us to avoid cluttering the ESP with the contents of /boot: only the UKIs and the (optional) bootloader will need to reside on it.

UKIs can be generated with several initramfs-generating tools, such as dracut and mkinitcpio. After a somewhat long stint with dracut, I’ve recently switched to mkinitcpio (Arch’s default) due to how simple it is to configure and customize with custom hooks.

For a portable system, it’s best to always boot using the fallback preset. The default preset generates an initramfs custom tailored to the current hardware, which may not work on any system other than the one that generated it. The fallback preset, on the other hand, generates a generic initramfs that by default contains the modules needed to boot on (almost) any system. The size difference may have been significant in the past, when disk space was scarce and expensive, but nowadays it’s negligible. A UKI image generated with the fallback preset is around 110 MiB in size, which fits comfortably on our 300 MiB ESP.

First, we ought to create a file containing the command line arguments for the kernel.

The kernel command line is a set of arguments passed to the kernel at boot time, which can be used to configure how the kernel, the initramfs or systemd will behave. Under UEFI, these parameters are usually passed by a bootloader as boot arguments to the kernel when invoked from the ESP. UKI differs in this regard by directly embedding the command line in the image itself.

Create a file called /etc/kernel/cmdline with at least the following contents; feel free to add more parameters if you need them.


rw nvidia-drm.modeset=1 cryptdevice=PARTUUID=5c97981e-4e4c-428e-8dcf-a82e2bc1ec0a:ExtLUKS root=/dev/mapper/ExtLUKS rootflags=subvol=@,compress=lzo rootfstype=btrfs

Omit the rootflags and rootfstype parameters if you are not using Btrfs.

For ZFS, try something akin to the following:

rw nvidia-drm.modeset=1 zfs=extzfs

which relies on automatic bootfs detection in order to find the root dataset.

After this, edit /etc/mkinitcpio.conf to add any extra modules and hooks required by the new system.

You probably want to load the NVIDIA KMS modules early, in order to avoid issues when booting on systems with an NVIDIA discrete GPU. Notice that this may sometimes cause problems on buggy laptops with hybrid graphics, so keep this tradeoff in mind in case you run into such issues.

MODULES=(nvidia nvidia_drm nvidia_uvm nvidia_modeset)

The hooks you pick and the order in which they run are crucial for a working system. For instance, if you are using encrypted ZFS, this is a safe starting point:

HOOKS=(base udev autodetect modconf kms block keyboard keymap consolefont zfs filesystems fsck)

For LUKS, use the encrypt hook instead:

HOOKS=(base udev autodetect modconf kms keyboard keymap consolefont block encrypt filesystems fsck)

Notice how the keyboard and keymap hooks have been specified before either the zfs or encrypt hooks. This ensures that the keyboard and keymap are correctly configured before reaching the root encryption password prompt.

Before triggering the generation of our image, we must enable UKI support in the fallback preset (and disable the default one).

Edit /etc/mkinitcpio.d/linux-lts.preset as follows:

# mkinitcpio preset file for the 'linux-lts' package

ALL_kver="/boot/vmlinuz-linux-lts"

PRESETS=('fallback')

#default_config="/etc/mkinitcpio.conf"
#default_image="/boot/initramfs-linux-lts.img"
#default_options="--splash /usr/share/systemd/bootctl/splash-arch.bmp"

fallback_uki="/boot/efi/EFI/BOOT/Bootx64.efi"
fallback_options="-S autodetect"

In the preset above, I have completely disabled the default preset by removing it from PRESETS and commenting out all of its entries. Under fallback, I only kept the uki and options entries, in order to avoid generating an initramfs image we have no use for.

Run mkinitcpio -p linux-lts to finally generate the UKI under /boot/efi/EFI/BOOT/Bootx64.efi, which is the custom path I set fallback_uki to. This is the location conventionally associated with the UEFI fallback bootloader, which makes the external drive bootable on any UEFI system without the need for any configuration or bootloader, as long as booting from USB is allowed (and UEFI Secure Boot is off).

[root@chroot /]# mkdir -p /boot/efi/EFI/BOOT # create the target directory
[root@chroot /]# mkinitcpio -p linux-lts

Optionally, clean up /boot by removing the initramfs images previously generated by pacman when installing the kernel package. These are unnecessary when using UKIs, and will never be generated again with the modifications we made to the kernel preset:

# rm /boot/initramfs*.img

3.4.3. Installing a bootloader (optional)

In principle, the instructions above make having a bootloader at all somewhat redundant. With UEFI, you can also always tinker with command line arguments using the UEFI Shell, which can be either already installed on the machine you are booting on or copied in the ESP under \EFI\Shellx64.efi.

In case you want to install a bootloader, change the fallback_uki argument to a different path (e.g. /boot/efi/EFI/Linux/arch-linux-lts.efi) and then just follow the Arch Wiki’s instructions on how to set up systemd-boot (or rEFInd, or GRUB, or whatever you like).

If you opt for systemd-boot, ensure that bootctl install copies the bootloader to \EFI\BOOT\Bootx64.efi, or it will not get picked up by the UEFI firmware automatically.

3.5. Unmounting the filesystems

Before attempting to boot the system, remember to unmount all filesystems and close the LUKS container. After ensuring you followed all the steps above correctly, exit the chroot, and then:

[root@chroot /]# exit
$ sudo umount -l /tmp/mnt/{dev,sys,proc,run} # the `-l` flag prevents issues with busy mounts

If you used LUKS:

$ sudo umount -R /tmp/mnt
$ sudo cryptsetup close ExtLUKS

If you used ZFS, you also have to remember to export the pool - otherwise, the pool will still be in use next boot, and the initrd scripts won’t be able to import it:

$ sudo zpool export extzfs

This command may sometimes fail with an error message similar to cannot export 'extzfs': pool is busy. This is usually caused by a process still using the pool, such as a shell with its current directory set to a directory inside the pool. If this happens, the fastest way to fix it is to reboot the system, import the pool (without necessarily unlocking any dataset), and then immediately export it. This will ensure that the pool is not in use and untie it from the current system’s hostid.

4. Booting the system

If you’ve followed the instructions above, you should now be able to boot into the new system successfully, without any troubleshooting necessary.

You can either test the new system by booting from native hardware, or inside a virtual machine.

4.1. Setting up a VM

In order to spin up a VM, you need a working hypervisor. If you intend to run the VM on a Linux host, Qemu with KVM is an excellent choice. 15

You can either use Qemu via libvirt and tools such as virt-manager, or use plain QEMU directly. The former tends to be way easier to set up, but more troublesome to troubleshoot; libvirt is unfortunately full of abstractions that make configuring Qemu harder than just invoking it with the right parameters. On the other hand, libvirt automatically handles unpleasant parts such as configuring network bridges and dnsmasq, which you are otherwise required to configure manually.

Regardless of what approach you prefer, you should install UEFI support for guests, which is usually provided in packages called ovmf, edk2-ovmf, or similar.

4.1.1. Using libvirt

If you are using libvirt, you can use virt-manager to create a new VM (or dabble with virsh and XML directly, if that’s more to your liking). If you opt for this approach, remember to:

  1. Select the device, and not partitions or /dev/mapper devices. The disk must be unmounted and no partitions should be in use. Pick “Import an image” and then select /dev/disk/by-id/usb-XXX, without -partN, via the “Browse local” button.

  2. Select “Customize configuration before install”, or you won’t be able to enable UEFI support. In the configuration screen, in the Overview pane, select the “Firmware” tab and pick an x86-64 OVMF_CODE.fd. If you don’t see any, check that you’ve installed all the correct packages.

  3. (optional) If you wish, you may enable VirGL in order to have a smoother experience while using the VM. If you’re interested, toggle the “OpenGL” option under the Display Spice device section. Also remember to disable the SPICE socket, by setting the Listen type for SPICE to None. Check that the adapter model is Virtio, and enable 3D acceleration. 16

4.1.2. Using raw qemu

Using plain Qemu in place of libvirt is undoubtedly less convenient. It definitely requires more tinkering for networking (especially if you don’t want to use SLIRP, which is slow and limited), with the advantage of being more versatile and not requiring setting up libvirt - which tends to be problematic on machines with complex firewall rules and network configurations.

First, make a copy of the default UEFI variables file:

$ cp /usr/share/ovmf/x64/OVMF_VARS.fd ext_vars.fd

Then, temporarily take ownership of the disk device, in order to avoid having to run qemu as root:

$ sudo chown $(id -u):$(id -g) /dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0

Finally, run Qemu with the following command line. In this case, I’ll use SLIRP for simplicity, plus I will enable VirGL for a smoother experience:

$ qemu-system-x86_64 -enable-kvm -cpu host -m 8G -smp sockets=1,cpus=8,threads=1 -drive if=pflash,format=raw,readonly=on,file=/usr/share/ovmf/x64/OVMF_CODE.fd -drive if=pflash,format=raw,file=$PWD/ext_vars.fd -drive if=virtio,format=raw,cache=none,file=/dev/disk/by-id/usb-Samsung_SSD_960_EVO_250G_012938001243-0:0 -nic user,model=virtio-net-pci -device virtio-vga-gl -display sdl,gl=on -device intel-hda -device hda-duplex

4.2. Booting on bare hardware

The disk previously created should be capable of booting on potentially any UEFI-enabled x86-64 system, as long as booting from USB is allowed and Secure Boot is disabled.17

At machine startup, press the “Boot Menu” key for your system (usually F12 or F8, but it may vary considerably depending on the vendor) and select the external SSD. The disk may be referred to as “fallback bootloader” - this is normal, given that we’ve placed the UKI image at the fallback bootloader location.

4.3. First boot

If you did everything right in the last few steps, the boot process should stop at a password prompt from either cryptsetup (LUKS) or zpool (ZFS).

Insert the password and press enter. If everything went well, you should now be greeted by a login prompt.

Login as root, and proceed with the last missing configuration steps:

  1. If you are running on ZFS, you’ll notice that /home and /root are not mounted automatically. In order to fix this, immediately run
    # systemctl enable zfs-mount.service

After doing this, reboot the system and check that the datasets are mounted correctly. You shouldn’t need to enable zfs-import-cache.service or zfs-import-scan.service as they are unnecessary, given that we’re booting from a single pool which is already imported.

  2. Enable and start the network manager you installed previously, such as NetworkManager: # systemctl enable --now NetworkManager

    If you are using a wired connection with DHCP or IPv6 and no special configuration required, you should see the relevant IPs in the output of ip address, and Internet access should be working.

    If you need special configurations, or you must use wireless connectivity, use nmtui to configure the network.
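If you prefer a one-liner over nmtui, nmcli can do the same (the SSID and password below are placeholders):

```shell
# List visible wireless networks, then connect to one
nmcli device wifi list
nmcli device wifi connect "MyHomeWiFi" password "correct horse battery staple"
```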

  3. With a booted instance of systemd, you can now easily set up everything else you are missing, such as:

    • a hostname with hostnamectl set-hostname;
    • a timezone with timedatectl set-timezone (you may need to adjust it depending on where you boot from);
    • if you know for a fact that you are always going to boot from systems with an RTC on localtime, run timedatectl set-local-rtc 1 to avoid having to adjust the time every time you boot. Note that this is arguably one of the most annoying parts of a portable system; I recommend setting every machine you own to UTC and properly configuring Windows to use UTC instead.
    • a different locale (generated via locale-gen), in order to change your system’s language settings.

      As an example:

      • Use localectl set-locale LANG=en_US.UTF-8 to set the default locale to en_US.UTF-8
      • Use localectl set-keymap de to change the keyboard layout to German.
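As an aside, the Windows-side fix mentioned above (making Windows store the RTC in UTC) amounts to a single registry value, set from an elevated Windows prompt:

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f
```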

4.4. Installing a desktop environment

The most useful part about a portable system is being able to carry a workspace around, so you can work on your projects wherever you are.

In order to do this, you need to install some kind of desktop environment, which may range from minimal (dwm, sway, fluxbox) to a full fledged environment like Plasma, GNOME, XFCE, Mate, …

Just remember that you are going to use this system on a variety of machines, so it’s useful to avoid anything that requires an excessive amount of tinkering to function properly. For instance, if one or more of the systems you plan to target involve NVIDIA GPUs, you may find running Wayland more annoying than just sticking with X11.

4.4.1. Example: Installing KDE Plasma

I’m a big fan of KDE Plasma (even though I’ve been using GNOME recently, for a few reasons), so I’ll use it as an example.

In general, all DEs require you to install a metapackage that pulls in all the basic components (like the KF5 frameworks) and an (optional) display manager, plus some or all of the applications that are part of the DE.

If you plan on running X11, install the xorg package group, and then install plasma:

# pacman -S plasma plasma-wayland-session sddm kde-utilities

If you are using a display manager, enable it with systemctl enable --now sddm.

Otherwise, configure your .xinitrc to start Plasma by appending

export DESKTOP_SESSION=plasma
exec startplasma-x11

and run startx.

If you prefer using Wayland, just run startplasma-wayland directly instead.

5. Basic troubleshooting

If you followed all steps listed above, you should have a working portable system. Most troubleshooting steps after the initial booting should be identical to those of a normal Arch Linux system. Below you’ll find a very basic list of a few common issues that may arise when attempting to boot the system on different machines.

5.1. Device not found or No pool to import during boot

If the initrd fails to find the root device (or the ZFS pool), it means that the initrd failed to correctly mount the right drive. This is often due to one of the following three reasons:

  1. The initrd is missing the required drivers. The disk is not appearing under /dev because of this.

    The fallback initrd is supposed to contain all the storage and USB drivers needed to boot on any system, but it’s possible that some may be missing if your USB controller is either particularly exotic or particularly quirky (e.g. Intel Macs).

    First, on the affected system, try to probe what drivers are in use for your USB controller. You can use lspci -k from a Linux system you can mount the external disk from:

    $ lspci -k
    0a:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller
         Subsystem: Gigabyte Technology Co., Ltd Family 17h (Models 00h-0fh) USB 3.0 Host Controller
         Kernel driver in use: xhci_hcd
         Kernel modules: xhci_pci

    Afterwards, add the relevant module(s) to the MODULES array in /etc/mkinitcpio.conf, and regenerate the initrd.
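Continuing the example above, where xhci_pci turned out to be the relevant module, the MODULES array in /etc/mkinitcpio.conf would become (keeping the NVIDIA modules from earlier):

```shell
# /etc/mkinitcpio.conf
MODULES=(xhci_pci nvidia nvidia_drm nvidia_uvm nvidia_modeset)
```

followed by mkinitcpio -p linux-lts to regenerate the UKI.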

  2. The kernel command line is incorrect. The initrd either has the wrong device set, or the kernel is not receiving the correct parameters.

    This happens either due to a bad root or zfs line in /etc/kernel/cmdline, or because a bootloader or firmware are passing spurious arguments to the UKI.

    Double check that the root or zfs line in /etc/kernel/cmdline is correct. Some bootloaders such as rEFInd support automatic discovery of bootable files on ESPs; it may also be that the bootloader is wrongly assuming the UKI is a EFISTUB-capable kernel image and passing incorrect flags instead.

    In any case, ascertain that the kernel is actually receiving the correct parameters by running

    # cat /proc/cmdline

    from the initrd recovery shell.

    If you are using ZFS and you only specified the target pool instead of the root dataset, remember to set bootfs correctly first.
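Setting bootfs is a one-liner; the dataset name below is just an example consistent with the earlier sections:

```shell
# Tell the pool which dataset the initrd should boot from
zpool set bootfs=extzfs/encr/root extzfs
```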

  3. (ZFS only) An incorrect cachefile has been embedded in the initrd. The initrd is trying to use potentially incorrect pool data instead of scanning /dev.

    The zfs hook embeds /etc/zfs/zpool.cache into the initrd during generation. While this is often useful to reduce boot times, especially with large multi-disk pools, it may cause issues if the cachefile is stale or incorrect. Return to the setup system, chroot, remove the cachefile and regenerate the UKI. The initrd should then attempt to discover the root pool via zpool import -d /dev instead of using the cachefile (or any zfs_import_dir you may have set via the kernel command line).

If none of the previous steps work, you may want to try to boot the system from a different machine to ensure there’s not a problem in the setup itself.

5.2. The keyboard doesn’t work properly at the password prompt

  1. If the keyboard doesn’t work when typing the encryption password, it’s probably due to the keyboard hook not being run before the encryption hooks (whatever you are using). Ensure that keyboard is listed before encrypt or zfs in /etc/mkinitcpio.conf.

  2. If the keyboard is working, but the password is not being accepted, it may be due to an incorrectly set keyboard layout. Ensure that /etc/vconsole.conf is set correctly, and that the keymap hook is being run before the encryption hooks.

5.3. The system boots, but the display is not working

This is rarely an issue with Intel or AMD GPUs, but it’s pretty common with NVIDIA GPUs, especially on buggy laptops with Optimus hybrid graphics.

  1. Remember to always enable KMS modules early, in order to avoid any issues when booting on systems with an NVIDIA discrete GPU. Append nvidia-drm.modeset=1 to the kernel command line, and add the kms hook right after modconf in /etc/mkinitcpio.conf. This should force whatever KMS driver you are using to load early in the boot process, which should provide a working display as soon as the initrd is loaded.

    Note that with NVIDIA the framebuffer resolution is often not increased automatically, which may lead to a poor CLI experience. This is a common issue that unfortunately tends only to affect NVIDIA users.

  2. Add nvidia nvidia_modeset nvidia_uvm nvidia_drm to the MODULES array in /etc/mkinitcpio.conf. This will ensure that the NVIDIA driver is always loaded early in the boot process. The modules will be ignored and unloaded if not needed on the system currently in use.

  3. Do not use any legacy kernel option such as video= or vga=. There are lots of old guides still suggesting to use them, but they are not compatible with KMS and should not be used anymore.
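To recap the points above, the relevant bits of configuration for an NVIDIA-ready initrd may look like the following sketch (the HOOKS line only illustrates the placement of kms; keep the rest of your hooks as they are):

```
# /etc/mkinitcpio.conf
MODULES=(nvidia nvidia_modeset nvidia_uvm nvidia_drm)
HOOKS=(base udev autodetect modconf kms keyboard keymap block filesystems fsck)

# /etc/kernel/cmdline - append to the existing parameters
nvidia-drm.modeset=1
```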

5.4. It’s impossible to log in via a display manager, or logging from a tty complains that the user directory is missing

This is an issue almost always caused by /home not being mounted correctly. Either check that /home is correctly configured in /etc/fstab, or that zfs-mount is enabled and running alongside the zfs target.
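On the ZFS side, a quick way to verify this from a root shell is something along these lines (a sketch using the stock OpenZFS unit names):

```
# systemctl enable --now zfs-mount.service zfs.target
# zfs mount -a    # mount any dataset that was left out
# findmnt /home   # verify that /home is actually mounted
```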

6. Conclusion

This post is a very basic guide on how to set up Arch Linux on a portable SSD, which I think feels less like a manual and more like my personal notes.

This is intentional: while nothing in this guide is unique (everything can be found in the Arch Wiki, in forums or in other blogs), I felt it was worth gathering some of my personal experience in a single place, in the hope that it may be useful to someone else besides myself.

I suspect that after installing Linux (and Arch in particular) an infinite number of times, I grew a bit desensitised to how tricky and error-prone the process can be, especially for newcomers and people who are not accustomed to system administration and troubleshooting. Hopefully, the knowledge written in this article will be a good starting point for anyone who wants to try out Arch Linux, and maybe also get a cool portable system out of it.

Thanks a lot for reading, and as always feel free to contact me if you find anything incorrect, imprecise or hard to understand.

  1. and Wi-Fi. Wi-Fi was a PITA too, and don’t get me started on *retches* USB ADSL modems with Windows-only drivers on mini CDs. 

  2. YMMV. Some devices (e.g. Macs) are notoriously picky about booting from USB drives, but that’s not our system’s fault. 

  3. E.g. if you drop it, there’s a non-zero chance the USB connector and/or the logic board will break. USB enclosures are often very cheap compared to SSDs, so using them is the smarter choice in the long run. 

  4. ARM would be interesting too, if it wasn’t for the fact that there’s nothing akin to PC standards for ARM devices, and even today in 2023 it’s still a hodgepodge of ad-hoc systems and clunky firmware. The fact that lots of ARM devices are also severely locked down doesn’t help, either. 

  5. SSDs and HDDs are complex systems and may fail in several ways, which may lead to situations where the data on the disk is still readable using specialised tools, but cannot be accessed, deleted or overwritten using a normal computer (i.e. if the SSD controller fails). Properly encrypted disks are fundamentally random data, and as long as the encryption scheme is secure and the password is strong, you can chuck a broken disk in the trash without losing sleep over it. 

  6. Using ZFS is also a lot of fun IMHO. 

  7. If you suspect you may be a potential target for evil maid attacks, you should probably refrain from using a portable install altogether. 

  8. A small warning: compared to similar tools, parted writes changes to the disk immediately, so always triple-check what you’re doing before hitting enter. I recommend sticking to gdisk due to its better support for automatic alignment of partitions. 

  9. gparted also supports advanced features such as resizing filesystems, which is very handy when you don’t want to use the whole disk for the installation. It is also possible to perform such tasks from the command line, but it is in general more complex and error-prone. 

  10. Linux has no “absolute” naming policy for raw block devices. In particular, USB mass storage devices are enumerated alongside the SCSI and SATA devices, so it’s not uncommon for a USB disk to suddenly become sda after a reboot. 

  11. Once I’ve lost a ZFS pool due to a bug in a Git pre-alpha release of OpenZFS. That day, I learnt that running an OS from a pre-alpha filesystem driver is not a hallmark of good judgement. 

  12. If you compile pacman and/or use an Arch chroot, it’s absolutely doable from any distro, really, as long as its kernel is new enough to run Arch-distributed binaries. See section 3.2.2. to learn how to do this. 

  13. Notice that I’m using perl-rename in place of rename, because I honestly think that plain rename is just outright terrible. perl-rename is a Perl script that can be installed separately (on Arch it’s in the perl-rename package) and it’s just better than util-linux’ rename utility in every measurable way. 

  14. I don’t recommend using nvidia-open or Nouveau as of the time of writing (October ‘23), due to the immature state of the former and the utter incompleteness of the latter. The closed source nvidia driver is still the best choice for NVIDIA GPUs, even if it sucks due to how “third-party” it feels (its non-Mesa userland is particularly annoying). 

  15. On Windows, you can also consider using Hyper-V, which also has the advantage of being already included in Windows and of supporting real device drives as virtual disks. 

  16. This feature is known to be buggy under the closed-source NVIDIA driver, so beware. 

  17. Using Secure Boot with an external disk you plan on carrying around is very troublesome for a variety of reasons - first and foremost that you’d either have to enroll your personal keys on every system you plan on booting from, or plan on using Microsoft’s keys, which means fighting with MokLists, PreLoader.efi, and going through a lot of pain for very dubious benefits. 

Unicode is harder than you think

Reading the excellent article by JeanHeyd Meneide on how broken string encoding in C/C++ is made me realise that Unicode is a topic that is often overlooked by a large number of developers. In my experience, there’s a lot of confusion and wrong expectations on what Unicode is, and what best practices to follow when dealing with strings that may contain characters outside of the ASCII range.

This article attempts to briefly summarise and clarify some of the most common misconceptions I’ve seen people struggle with, and some of the pitfalls that tend to recur in codebases that have to deal with non-ASCII text.

The convenience of ASCII

Text is usually represented and stored as a sequence of numerical values in binary form. Whatever its source is, text must first be decoded from its binary representation, as specified by a given character encoding, before it can be shown to the user in an understandable way.

One such example of this is ASCII, the US-centric standard which has been for decades the de-facto way to represent characters and symbols in C and UNIX. ASCII is a 7-bit encoding, which means that it can represent up to 128 different characters. The first 32 characters are control characters, which are not printable, and the remaining 96 are printable characters (with the sole exception of DEL, a control character at position 127), including the 26 letters of the English alphabet in both cases, the 10 digits, and a few symbols:

Dec Hex    Dec Hex    Dec Hex  Dec Hex  Dec Hex  Dec Hex   Dec Hex   Dec Hex  
  0 00 NUL  16 10 DLE  32 20    48 30 0  64 40 @  80 50 P   96 60 `  112 70 p
  1 01 SOH  17 11 DC1  33 21 !  49 31 1  65 41 A  81 51 Q   97 61 a  113 71 q
  2 02 STX  18 12 DC2  34 22 "  50 32 2  66 42 B  82 52 R   98 62 b  114 72 r
  3 03 ETX  19 13 DC3  35 23 #  51 33 3  67 43 C  83 53 S   99 63 c  115 73 s
  4 04 EOT  20 14 DC4  36 24 $  52 34 4  68 44 D  84 54 T  100 64 d  116 74 t
  5 05 ENQ  21 15 NAK  37 25 %  53 35 5  69 45 E  85 55 U  101 65 e  117 75 u
  6 06 ACK  22 16 SYN  38 26 &  54 36 6  70 46 F  86 56 V  102 66 f  118 76 v
  7 07 BEL  23 17 ETB  39 27 '  55 37 7  71 47 G  87 57 W  103 67 g  119 77 w
  8 08 BS   24 18 CAN  40 28 (  56 38 8  72 48 H  88 58 X  104 68 h  120 78 x
  9 09 HT   25 19 EM   41 29 )  57 39 9  73 49 I  89 59 Y  105 69 i  121 79 y
 10 0A LF   26 1A SUB  42 2A *  58 3A :  74 4A J  90 5A Z  106 6A j  122 7A z
 11 0B VT   27 1B ESC  43 2B +  59 3B ;  75 4B K  91 5B [  107 6B k  123 7B {
 12 0C FF   28 1C FS   44 2C ,  60 3C <  76 4C L  92 5C \  108 6C l  124 7C |
 13 0D CR   29 1D GS   45 2D -  61 3D =  77 4D M  93 5D ]  109 6D m  125 7D }
 14 0E SO   30 1E RS   46 2E .  62 3E >  78 4E N  94 5E ^  110 6E n  126 7E ~
 15 0F SI   31 1F US   47 2F /  63 3F ?  79 4F O  95 5F _  111 6F o  127 7F DEL

This table defines a two-way transformation, in jargon a charset, which maps a certain sequence of bits (representing a number) to a given character, and vice versa. This can be easily seen by dumping some text as binary:

$ echo -n Cat! | xxd
00000000: 4361 7421                                Cat!

The central column shows the binary representation of the input string “Cat!” in hexadecimal form. Each character is mapped into a single byte (represented here as two hexadecimal digits):

  • 43 is the hexadecimal representation of the ASCII character C;
  • 61 is the hexadecimal representation of the ASCII character a;
  • 74 is the hexadecimal representation of the ASCII character t;
  • 21 is the hexadecimal representation of the ASCII character !.

This simple set of characters was for decades considered more than enough by most of the English-speaking world, which was where the vast majority of early computer users and pioneers came from.

An added benefit of ASCII is that it is a fixed-width encoding: each character is always represented univocally by the same number of bits, that in turn always represent the same number.
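The fixed-width property is easy to observe in practice: every character maps to exactly one byte, and that byte is simply the character’s numeric code. A quick sketch in Python:

```python
text = "Cat!"

# encoding to ASCII yields exactly one byte per character
encoded = text.encode("ascii")
assert len(encoded) == len(text)

# each byte is simply the numeric code of the character it represents
assert list(encoded) == [ord(c) for c in text]  # [0x43, 0x61, 0x74, 0x21]
```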

This leads to some very convenient ergonomics when handling strings in C:

#include <ctype.h>
#include <stdio.h>

int main(const int argc, const char *argv[const]) {
    // converts all arguments to uppercase
    for (const char *const *arg = argv + 1; *arg; ++arg) {
        // iterate over each character in the string, and print its uppercase
        for (const char *it = *arg; *it; ++it) {
            putchar(toupper((unsigned char) *it));
        }

        // print a space between arguments
        if (*(arg + 1)) {
            putchar(' ');
        }
    }

    // print a final newline if anything was printed at all
    if (argc > 1) {
        putchar('\n');
    }

    return 0;
}

The example above assumes, like a large amount of code written in the last few decades, that the C basic type char represents a byte-sized ASCII character. This assumption minimises the mental and runtime overhead of handling text, as strings can be treated as arrays of characters belonging to a very minimal set. Because of this, ASCII strings can be iterated on, addressed individually and transformed or inspected using simple, cheap operations such as isalpha or toupper.

The world outside

However, as computers started to spread worldwide it became clear that it was necessary to devise character sets capable of representing all the characters required in a given locale. For instance, Spanish needs the letter ñ, Japanese needs the ¥ symbol and support for Kana and Kanji, and so on.

All of this led to a massive proliferation of different character encodings, usually tied to a given language, area or locale. These varied from 8-bit encodings, which either extended ASCII by using its unused eighth bit (like ISO-8859-1) or completely replaced its character set (like KOI-7), to multi-byte encodings for Asian languages with thousands of characters like Shift-JIS and Big5.

This turned into a huge headache for both developers and users, as it was necessary to know (or deduce via hacky heuristics) which encoding was used for a given piece of text, for instance when receiving a file from the Internet, which was becoming more and more common thanks to email, IRC and the World Wide Web.

Most crucially, multibyte encodings (a necessity for Asian characters) meant that the assumption “one char = one byte” didn’t hold anymore, with the small side effect of breaking all code in existence at the time.

For a while, the most common solution was to use a single encoding for each language, and then hope for the best. This often led to garbled text (who hasn’t seen the infamous � replacement character at least once), so much so that a specific term was coined to describe it - “mojibake”, from the Japanese “文字化け” (“character transformation”).

KOI8-R text mistakenly written on an envelope as ISO-8859-1 text

In general, for a long time using a non-English locale meant that you had to contend with broken third (often first) party software, patchy support for certain characters, and switching encodings on the fly depending on the context. The inconvenience was such that it was common for non-Latin Internet users to converse in their native languages with the Latin alphabet, using impromptu transliterations if necessary. A prime example of this was the Arabic chat alphabet widespread among Arabic-speaking netizens in the 90’s and 00’s 1.


It was clear to most people back then that the situation was untenable, so much so that as early as the late ’80s people started proposing a universal character encoding capable of covering all modern scripts and symbols in use.

This led to the creation of Unicode, whose first version was standardised in 1991 after a few years of joint development led by Xerox and Apple (among others). Unicode’s main design goal was, and still is, to define a universal character set capable of representing all the aforementioned characters, alongside a character encoding capable of uniformly representing them all.

In Unicode, every character, or more properly code point, is represented by a unique number, belonging to a specific Unicode block. Crucially, the first block of Unicode (“Basic Latin”) corresponds point per point to ASCII, so that all ASCII characters correspond to equivalent Unicode codepoints.

Code points are usually represented with the syntax U+XXXX, where XXXX is the hexadecimal representation of the code point. For instance, the code point for the A character is U+0041, while the code point for the ñ character is U+00F1.
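The U+XXXX notation is straightforward to reproduce from a character’s ordinal; a small Python sketch (the helper name is mine):

```python
def code_point(ch: str) -> str:
    """Format a character's code point using the U+XXXX convention."""
    return f"U+{ord(ch):04X}"

assert code_point("A") == "U+0041"
assert code_point("ñ") == "U+00F1"
```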

Unicode 1.0 included 26 scripts and 7,161 characters, covering most of the world’s languages and lots of commonplace symbols and glyphs.

UCS-2, or “how Unicode made everything worse”

Alongside the first Unicode specification, which defined the character set, two2 new character encodings, called UCS-2 and UCS-4 (which came a bit later), were also introduced. UCS-2 was the original Unicode encoding, and it’s an extension of ASCII to 16 bits, representing what Unicode called the Basic Multilingual Plane (“BMP”); UCS-4 is the same but with 32-bit values. Both were fixed-width encodings, using multiple bytes to represent each single character in a string.

In particular, UCS-2’s maximum range of 65,536 possible values was good enough to cover the entire Unicode 1.0 set of characters. The storage savings compared with UCS-4 were also quite enticing - while ’90s machines weren’t as constrained as the ones that came before, representing basic Latin characters with 4 bytes was still seen as an egregious waste.3

Thus, 16 bits quickly became the standard size for the wchar_t type recently added by the C89 standard to support wide characters for encodings like Shift-JIS. Sure, switching from char to wchar_t required developers to rewrite all code to use wide characters and wide functions, but a bit of sed was a small price to pay for the ability to resolve internationalization, right?

The C library had also introduced, alongside the new wide char type, a set of functions and types to handle wchar_t and wide strings, plus (poorly designed) locale support, including support for multibyte encodings. Some vendors, like Microsoft, even devised tricks to make it possible to optionally switch from legacy 8-bit codepages to UCS-2 by using ad-hoc types like TCHAR and LPTSTR in place of specific character types.

All of that said, the code snippet above could be rewritten on Win32 as the following:

#include <ctype.h>
#include <tchar.h>

#if !defined(_UNICODE) && !defined(UNICODE)
#   include <stdio.h>
#endif

int _tmain(const int argc, const TCHAR *argv[const]) {
    // converts all arguments to uppercase
    for (const TCHAR *const *arg = argv + 1; *arg; ++arg) {
        // iterate over each character in the string, and print its uppercase
        for (const TCHAR *it = *arg; *it; ++it) {
            _puttchar(_totupper(*it));
        }

        // print a space between arguments
        if (*(arg + 1)) {
            _puttchar(_T(' '));
        }
    }

    // print a final newline if anything was printed at all
    if (argc > 1) {
        _puttchar(_T('\n'));
    }

    return 0;
}

Neat, right? This was indeed considered so convenient that developers jumped on the UCS-2 bandwagon in droves, finally glad the encoding mess was over.

16-bit Unicode was indeed a huge success, as attested by the number of applications and libraries that adopted it during the ’90s:

  • Windows NT, 2000 and XP used UCS-2 as their internal character encoding, and exposed it to developers via the Win32 API;
  • Apple’s Cocoa, too, used UCS-2 as its internal character encoding for NSString and unichar;
  • Sun’s Java used UCS-2 as its internal character encoding for all strings, even going as far as to define its String type as an array of 16-bit characters;
  • Javascript, too, didn’t want to be left behind, and basically defined its String type the same way Java did;
  • Qt, the popular C++ GUI framework, used UCS-2 as its internal character encoding, and exposed it to developers via the QString class;
  • Unreal Engine just copied the WinAPI approach and used UCS-2 as its internal character encoding 4

and many more. Every once in a while, I still find out that some piece of code I frequently use is still using UCS-2 (or UTF-16, see later) internally. In general, every time you read something along the lines of “Unicode support” without any reference to UTF, there’s an almost 100% chance that it actually means “UCS-2”, or some borked variant of it.

Combining characters

Unicode has supported since its first release the concept of combining characters (later better defined as grapheme clusters), which are characters meant to be combined with other characters by text processing tools, in order to form a single unit.

In Unicode jargon, these are called composite sequences and were designed to allow Unicode to represent scripts like Arabic, which uses a lot of diacritics and other combining characters, without having to define a separate code point for each possible combination.

This could have been in principle a neat idea - grapheme clusters allow Unicode to save a massive amount of code points from being pointlessly wasted for easily combinable characters (just think about South Asian languages or Hangul). The real issue was that the Consortium, anxious to help with the transition to Unicode, did not want to drop support for dedicated codepoints for “preassembled” characters such as è and ñ, which were historically supported by the various extended ASCII encodings.

This led to Unicode supporting precomposed characters, which are codepoints that stand for a glyph that can also be represented using a grapheme cluster. Examples of this are the Extended Latin characters with accents or diacritics, which can all be represented by combining the base Latin character with the corresponding modifier, or by using a single code point.

For instance, let’s try testing out a few things with Python’s unicodedata and two seemingly identical strings, “caña” and “caña” (notice how they look the same):

>>> import unicodedata
>>> a, b = "caña", "caña"
>>> a == b
False

>>> a, b
('caña', 'caña')
>>> len(a), len(b)
(4, 5)

The two strings are visually identical - they are rendered the same by our Unicode-enabled terminal - and yet, they do not evaluate as equal, and the len() function returns different lengths. This is because the ñ in the second string is a grapheme cluster composed of the U+006E LATIN SMALL LETTER N and U+0303 COMBINING TILDE characters, combined by the terminal into a single glyph.

>>> list(a), list(b)
(['c', 'a', 'ñ', 'a'], ['c', 'a', 'n', '̃', 'a'])
>>> [unicodedata.name(c) for c in a]
['LATIN SMALL LETTER C', 'LATIN SMALL LETTER A', 'LATIN SMALL LETTER N WITH TILDE', 'LATIN SMALL LETTER A']
>>> [unicodedata.name(c) for c in b]
['LATIN SMALL LETTER C', 'LATIN SMALL LETTER A', 'LATIN SMALL LETTER N', 'COMBINING TILDE', 'LATIN SMALL LETTER A']

This is obviously a big departure from the “strings are just arrays of characters” model the average developer is used to:

  1. Trivial comparisons like a == b or strcmp(a, b) are no longer trivial. A Unicode-aware algorithm must be implemented, in order to actually compare the strings as they are rendered or printed;
  2. Random access to characters is no longer safe, because a single glyph can span over multiple code points, and thus over multiple array elements;
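A common way to mitigate the comparison problem is to normalize both strings to the same form before comparing them. A sketch with Python’s unicodedata, using the same two spellings of “caña” as above:

```python
import unicodedata

a = "ca\u00F1a"   # precomposed 'ñ' (U+00F1)
b = "can\u0303a"  # 'n' followed by U+0303 COMBINING TILDE

assert a != b  # a naive comparison fails

# NFC composes grapheme clusters into precomposed codepoints where possible
assert unicodedata.normalize("NFC", b) == a
# NFD does the opposite, decomposing 'ñ' into 'n' + U+0303
assert unicodedata.normalize("NFD", a) == b
```

Which form to pick matters less than picking one consistently; more on normalization later.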

“640k 16 bits ought to be enough for everyone”

Anyone with any degree of familiarity with Asian languages will have noticed that 7,161 characters are way too small a number to include the tens of thousands of Chinese characters in existence. This is without counting minor and historical scripts, and the thousands of symbols and glyphs used in mathematics, music, and other fields.

In the years following 1991, the Unicode character set was thus expanded with tens of thousands of new characters, and it quickly became apparent that UCS-2 was soon going to run out of 16-bit code points.5

To circumvent this issue, the Unicode Consortium decided to expand the character set from 16 to 21 bits. This was a huge breaking change that basically meant obsoleting UCS-2 (and thus breaking most software designed in the ’90s), just a few years after its introduction and widespread adoption.

While UCS-2 was still capable of representing anything inside the BMP, it became clear a new encoding was needed to support the growing set of characters in the UCS.

The UTF encodings

The acronym “UTF” stands for “Unicode Transformation Format”, and represents a family of variable-width encodings capable of representing the whole Unicode character set, up to its hypothetical maximum of 2²¹ characters. Compared to UCS, UTF encodings specify how a given stream of bytes can be converted into a sequence of Unicode code points, and vice versa (i.e., “transformed”).

Compared to a fixed-width encoding like UCS-2, a variable-width character encoding can employ a variable number of code units to encode each character. This bypasses the “one code unit per character” limitation of fixed-width encodings, and allows the representation of a much larger number of characters—potentially, an infinite number, depending on how many “lead units” are reserved as markers for multi-unit sequences.
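To make the “lead unit” idea concrete, here’s a Python sketch of how a decoder can tell the length of a multi-unit sequence from its first unit alone, using the byte patterns of UTF-8 (covered below) as an example; the helper name is mine:

```python
def utf8_sequence_length(lead: int) -> int:
    """Return how many bytes a UTF-8 sequence spans, given its lead byte."""
    if lead & 0b1000_0000 == 0b0000_0000:
        return 1  # ASCII-range byte: a sequence on its own
    if lead & 0b1110_0000 == 0b1100_0000:
        return 2
    if lead & 0b1111_0000 == 0b1110_0000:
        return 3
    if lead & 0b1111_1000 == 0b1111_0000:
        return 4
    raise ValueError("not a lead byte (continuation or invalid)")

# lead byte of the UTF-8 encoding of 'a', 'ñ', '€' and '😀', respectively
assert [utf8_sequence_length(s.encode("utf-8")[0]) for s in "añ€😀"] == [1, 2, 3, 4]
```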

Excluding the dead-on-arrival UTF-1, there are 4 UTF encodings in use today:

  • UTF-8, a variable-width encoding that uses 8-bit (1-byte) code units
  • UTF-16, a variable-width encoding that uses 16-bit (2-byte) code units
  • UTF-32, a fixed-width encoding that uses 32-bit (4-byte) code units
  • UTF-EBCDIC, a variable-width encoding that uses 8-bit code units, designed for IBM’s EBCDIC systems (note: I think it’s safe to argue that using EBCDIC in 2023 edges very close to being a felony)

UTF-16

To salvage the considerable investments made to support UCS-2, the Unicode Consortium created UTF-16 as a backward-compatible extension of UCS-2. When a piece of software advertises “support for UNICODE”, it almost always means that it once supported UCS-2 and switched to UTF-16 sometime later. 6

Like UCS-2, UTF-16 can represent the entirety of the BMP using a single 16-bit value. Every codepoint above U+FFFF is represented using a pair of 16-bit values, called surrogate pairs. The first value (the “high surrogate”) is always a value in the range U+D800 to U+DBFF, while the second value (the “low surrogate”) is always a value in the range U+DC00 to U+DFFF.

This, in practice, means that the range reserved for BMP characters never overlaps with surrogates, making it trivial to distinguish between a single 16-bit codepoint and a surrogate pair, which makes UTF-16 self-synchronizing over 16-bit values.
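The mapping from a codepoint above U+FFFF to its surrogate pair is simple arithmetic; a hedged sketch in Python (this follows the algorithm from the standard, but the function name is mine):

```python
def to_surrogate_pair(codepoint: int) -> tuple[int, int]:
    """Split a codepoint beyond the BMP into (high, low) UTF-16 surrogates."""
    assert 0x10000 <= codepoint <= 0x10FFFF
    offset = codepoint - 0x10000      # 20 significant bits remain
    high = 0xD800 + (offset >> 10)    # top 10 bits
    low = 0xDC00 + (offset & 0x3FF)   # bottom 10 bits
    return high, low

assert to_surrogate_pair(0x1F600) == (0xD83D, 0xDE00)
```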

Emojis are an example of characters that lie outside of the BMP; as such, they are always represented using surrogate pairs. For instance, the character U+1F600 (😀) is represented in UTF-16 by the surrogate pair [0xD83D, 0xDE00]:

>>> # pack the surrogate pair into bytes by hand, and then decode it as UTF-16
>>> bys = [b for cp in (0xD83D, 0xDE00) for b in list(cp.to_bytes(2,'little'))]
>>> bys
[61, 216, 0, 222]
>>> bytes(bys).decode('utf-16le')
'😀'

Notice that in the example above I had to specify an endianness for the bytes (little-endian in this case) by writing "utf-16le" instead of just "utf-16". This is due to the fact that UTF-16 is actually two different (incompatible) encodings, UTF-16LE and UTF-16BE, which differ in the endianness of the single codepoints. 7

The standard calls for UTF-16 streams to start with a Byte Order Mark (BOM), represented by the special codepoint U+FEFF. Reading 0xFEFF indicates that the endianness of a text block is the same as the endianness of the decoding system; reading those bytes flipped, as 0xFFFE, indicates opposite endianness instead.

As an example, let’s assume a big-endian system has generated the sequence [0xFE, 0xFF, 0x00, 0x61].
All systems, LE or BE, will read the first two bytes as a single 16-bit code unit, interpreting them according to their native endianness. Then:

  • A big-endian system will decode U+FEFF, which is the BOM, and thus will assume the text is in UTF-16 in its same byte endianness (BE);
  • A little-endian system will instead read U+FFFE, which is the BOM flipped, so it will assume the text is in the opposite endianness (BE, in the case of an LE system).

In both cases, the BOM allows the following character to be correctly parsed as U+0061 (a.k.a. a).

If no BOM is detected, then most decoders will do as they please (despite the standard recommending to assume UTF-16BE), which most of the time means assuming the endianness of the system:

>>> import sys
>>> sys.byteorder
'little'
>>> # BOM read as 0xFEFF and system is LE -> will assume UTF-16LE
>>> bytes([0xFF, 0xFE, 0x61, 0x00, 0x62, 0x00, 0x63, 0x00]).decode('utf-16')
'abc'
>>> # BOM read as 0xFFFE and system is LE -> will assume UTF-16BE
>>> bytes([0xFE, 0xFF, 0x00, 0x61, 0x00, 0x62, 0x00, 0x63]).decode('utf-16')
'abc'
>>> # no BOM, text is BE and system is LE -> will assume UTF-16LE and read garbage
>>> bytes([0x00, 0x61, 0x00, 0x62, 0x00, 0x63]).decode('utf-16')
'愀戀挀'
>>> # no BOM, text is BE and UTF-16BE is explicitly specified -> will read the text correctly
>>> bytes([0x00, 0x61, 0x00, 0x62, 0x00, 0x63]).decode('utf-16be')
'abc'

Some decoders may probe the first few codepoints for zeroes to detect the endianness of the stream, which is in general not an amazing idea. As a rule of thumb, UTF-16 text should never rely on automated endianness detection, and thus either always start with a BOM or assume a fixed endianness value (which in the vast majority of cases is UTF-16LE, which is what Windows does).
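A minimal decoder following these rules might probe the stream like this (a sketch; the no-BOM fallback, here LE, is a policy choice rather than something mandated by the standard):

```python
def detect_utf16_variant(data: bytes) -> str:
    """Pick the UTF-16 flavour to decode `data` with, based on its BOM."""
    if data[:2] == b"\xFF\xFE":
        return "utf-16le"
    if data[:2] == b"\xFE\xFF":
        return "utf-16be"
    return "utf-16le"  # no BOM: fall back to a fixed default, as Windows does

data = b"\xFE\xFF\x00\x61"  # BOM + 'a', big-endian
assert detect_utf16_variant(data) == "utf-16be"
assert data[2:].decode(detect_utf16_variant(data)) == "a"
```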

UTF-32

Just as UTF-16 is an extension of UCS-2, UTF-32 is an evolution of UCS-4. Compared to all other UTF encodings, UTF-32 is by far the simplest, because like its predecessor, it is a fixed-width encoding.

The major difference between UCS-4 and UTF-32 is that the latter has been limited down to 21 bits, from its previous maximum of 31 bits (UCS-4 was signed). This has been done to maintain compatibility with UTF-16, which is constrained by its design to only represent codepoints up to U+10FFFF.
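The U+10FFFF cap follows directly from the surrogate mechanism: surrogate pairs carry 10 + 10 = 20 bits on top of the 0x10000 offset. A quick sanity check in Python:

```python
# the highest codepoint reachable via surrogate pairs
max_from_surrogates = 0x10000 + ((0x3FF << 10) | 0x3FF)
assert max_from_surrogates == 0x10FFFF

# chr() accepts the whole Unicode range, and nothing past it
assert ord(chr(0x10FFFF)) == 0x10FFFF  # OK: the very last codepoint
try:
    chr(0x110000)
except ValueError:
    pass  # out of range, as expected
```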

While UTF-32 seems convenient at first, it is not in practice all that useful, for quite a few reasons:

  1. UTF-32 is outrageously wasteful because all characters, including those belonging to the ASCII plane, are represented using 4 bytes. Given that the vast majority of text uses ASCII characters for markup, content or both, UTF-32 encoded text tends to be mostly comprised of just a few significant bytes scattered in between a sea of zeroes:

     >>> # UTF-32BE encoded text with BOM
     >>> bytes([0x00, 0x00, 0xFE, 0xFF, 0x00, 0x00, 0x00, 0x61, 0x00, 0x00, 0x00, 0x62, 0x00, 0x00, 0x00, 0x63]).decode('utf-32')
     'abc'
     >>> # The same, but in UTF-16BE
     >>> bytes([0xFE, 0xFF, 0x00, 0x61, 0x00, 0x62, 0x00, 0x63]).decode('utf-16')
     'abc'
     >>> # The same, but in ASCII
     >>> bytes([0x61, 0x62, 0x63]).decode('ascii')
     'abc'
  2. No major OS or software uses UTF-32 as its internal encoding, as far as I’m aware. While locales in modern UNIX systems usually define wchar_t as representing UTF-32 codepoints, wide chars are seldom used that way, due to most software in existence assuming that wchar_t is 16 bits wide.

    On Linux, for instance:

     #include <locale.h>
     #include <stdio.h>
     #include <wchar.h>

     int main(void) {
         // one of the bajilion ways to set a Unicode locale - we'll talk UTF-8 later
         setlocale(LC_ALL, "en_US.UTF-8");

         const wchar_t s[] = L"abc";
         printf("sizeof(wchar_t) == %zu\n", sizeof *s); // 4
         printf("wcslen(s) == %zu\n", wcslen(s)); // 3
         printf("bytes in s == %zu\n", sizeof s); // 16 (12 + 4, due to the null terminator)

         return 0;
     }
  3. The fact UTF-32 is a fixed-width encoding is only marginally useful, due to grapheme clusters still being a thing. This means that the equivalence between codepoints and rendered glyphs is still not 1:1, just like in UCS-4:

     // GNU/Linux, x86_64
     #include <locale.h>
     #include <stdio.h>
     #include <wchar.h>
     int main(void) {
         setlocale(LC_ALL, "en_US.UTF-8");
         // "caña", with 'ñ' written as the grapheme cluster "n" + "combining tilde"
         const wchar_t string[] = L"can\u0303a";
         wprintf(L"`%ls`\n", string); // prints "caña" as 4 glyphs
         wprintf(L"`%ls` is %zu codepoints long\n", string, wcslen(string)); // 5 codepoints
         wprintf(L"`%ls` is %zu bytes long\n", string, sizeof string); // 24 bytes (5 UCS-4 codepoints + null)
         // this other string is the same as the previous one, but with the precomposed "ñ" character
         const wchar_t probe[] = L"ca\u00F1a";
         const _Bool different = wcscmp(string, probe);
         // this will always print "different", because the two strings are not the same despite being identical
         wprintf(L"`%ls` and `%ls` are %s\n", string, probe, different ? "different" : "equal");
         return 0;
     }

     $ cc -o widestr_test widestr_test.c -std=c11
     $ ./widestr_test
     `caña`
     `caña` is 5 codepoints long
     `caña` is 24 bytes long
     `caña` and `caña` are different

    This is by far the biggest letdown about UTF-32: it is not the ultimate “extended ASCII” encoding most people wished for, because it is still incorrect to iterate over it character by character, and it still requires normalization (see below) in order to be safely operated on in that fashion.

UTF-8

I left UTF-8 as last because it is by far the best among the crop of Unicode encodings 8. UTF-8 is a variable-width encoding, just like UTF-16, but with the crucial advantage that it uses byte-sized (8-bit) code units, just like ASCII.

This is a major advantage, for a series of reasons:

  1. All ASCII text is valid UTF-8: ASCII is simply UTF-8 limited to the codepoints between U+0000 and U+007F.
    • This also implies that UTF-8 can encode ASCII text with one byte per character, even when mixed up with non-Latin characters;
    • Editors, terminals and other software can just support UTF-8 without having to support a separate ASCII mode;
  2. UTF-8 doesn’t require bothering with endianness, because bytes are just that - bytes. This means that UTF-8 does not require a BOM, even though poorly designed software may still add one (see below);

  3. UTF-8 doesn’t need a wide char type, like wchar_t or char16_t. Old APIs can use classic byte-sized chars, and just disregard characters above U+007F.
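These properties are easy to verify in Python (a small sketch; the byte values are fixed by the UTF-8 specification):

```python
text = "caña"
encoded = text.encode("utf-8")
# ASCII characters encode to themselves: single bytes below 0x80
assert "key:value".encode("utf-8") == b"key:value"
# non-ASCII characters use multi-byte sequences; ASCII stays one byte each
assert len(encoded) == 5            # c, a, ñ (2 bytes), a
# no BOM, no endianness concerns: the byte stream is the same everywhere
assert encoded == b"ca\xc3\xb1a"
```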

The following is an arguably poorly designed C program that parses a basic key-value file format: one `key:value` pair per line, with `\` escaping the character that follows it:

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUFFER_SIZE 1024

int main(const int argc, const char* const argv[]) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return EXIT_FAILURE;
    }

    FILE* const file = fopen(argv[1], "r");

    if (!file) {
        fprintf(stderr, "error: could not open file `%s`\n", argv[1]);
        return EXIT_FAILURE;
    }

    int retval = EXIT_SUCCESS;

    char* line = malloc(BUFFER_SIZE);
    if (!line) {
        fprintf(stderr, "error: could not allocate memory\n");
        retval = EXIT_FAILURE;
        goto end;
    }

    size_t line_size = BUFFER_SIZE;
    ptrdiff_t key_offs = -1, pos = 0;
    _Bool escape = 0;

    for (;;) {
        const int c = fgetc(file);

        switch (c) {
        case EOF:
            goto end;
        case '\\':
            if (!escape) {
                escape = 1;
                continue;
            }
            break;
        case ':':
            if (!escape) {
                if (key_offs >= 0) {
                    fprintf(stderr, "error: extra `:` at position %td\n", pos);
                    retval = EXIT_FAILURE;
                    goto end;
                }
                key_offs = pos;
                continue;
            }
            break;
        case '\n':
            if (escape) {
                break;
            }
            if (key_offs < 0) {
                fprintf(stderr, "error: missing `:`\n");
                retval = EXIT_FAILURE;
                goto end;
            }
            printf("key: `%.*s`, value: `%.*s`\n", (int)key_offs, line, (int)(pos - key_offs), line + key_offs);
            key_offs = -1;
            pos = 0;
            continue;
        }

        if ((size_t) pos >= line_size) {
            line_size = line_size * 3 / 2;
            line = realloc(line, line_size);

            if (!line) {
                fprintf(stderr, "error: could not allocate memory\n");
                retval = EXIT_FAILURE;
                goto end;
            }
        }

        line[pos++] = c;
        escape = 0;
    }

end:
    free(line);
    fclose(file);
    return retval;
}
$ cc -o kv kv.c -std=c11
$ cat kv_test.txt
key1:value1
key2:value2
key\:3:value3
$ ./kv kv_test.txt
key: `key1`, value: `value1`
key: `key2`, value: `value2`
key: `key:3`, value: `value3`

This program operates on files char by char (or rather, int by int—that’s a long story), using whatever the “native” 8-bit (“narrow”) execution character set is to match for basic ASCII characters such as :, \ and \n.

The beauty of UTF-8 is that code that splits, searches, or synchronises using ASCII symbols9 will work fine as-is, with little to no modification, even with Unicode text.

Standard C character literals will still be valid Unicode codepoints, as long as the encoding of the source file is UTF-8. In the file above, ':' and other ASCII literals will fit in a char (int, really) as long as they are encoded as ASCII (: is U+003A).

Like UTF-16, UTF-8 is self-synchronizing: the splitting logic above can never match the middle of a multi-byte sequence, because every byte of such a sequence has its high bit set, while the values between 0x00 and 0x7F are reserved for ASCII. The text can then be returned to a UTF-8 compliant system as it is, and the Unicode text will be correctly rendered.

$ cat kv_test_utf8.txt
tcp:
Affet, affet:Yalvarıyorum
Why? 😒:blåbær
Spla\:too:3u33
$ ./kv kv_test_utf8.txt
key: `tcp`, value: ``
key: `Affet, affet`, value: `Yalvarıyorum`
key: `Why? 😒`, value: `blåbær`
key: `Spla:too`, value: `3u33`
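The same guarantee can be sketched in Python by splitting the UTF-8 bytes of one of the lines above on the ASCII colon:

```python
# splitting UTF-8 bytes on an ASCII separator can never cut a multi-byte
# sequence in half: bytes 0x00-0x7F only ever encode ASCII characters
line = "Why? 😒:blåbær".encode("utf-8")
key, _, value = line.partition(b":")
assert key.decode("utf-8") == "Why? 😒"
assert value.decode("utf-8") == "blåbær"
```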

Unicode Normalization

As I previously mentioned, Unicode codepoints can be modified using combining characters, and the standard supports precomposed forms of some characters which have decomposed forms. The resulting glyphs are visually indistinguishable after being rendered, and there’s no limitation on using both forms alongside each other in the same bit of text:

>>> import unicodedata
>>> s = 'Störfälle'  # typed with decomposed umlauts, i.e. 'Sto\u0308rfa\u0308lle'
>>> len(s)
11
>>> [unicodedata.name(c) for c in s]
['LATIN CAPITAL LETTER S', 'LATIN SMALL LETTER T', 'LATIN SMALL LETTER O', 'COMBINING DIAERESIS', 'LATIN SMALL LETTER R', 'LATIN SMALL LETTER F', 'LATIN SMALL LETTER A', 'COMBINING DIAERESIS', 'LATIN SMALL LETTER L', 'LATIN SMALL LETTER L', 'LATIN SMALL LETTER E']
>>> # getting the last 4 characters actually picks the last 3 glyphs, plus a combining character
>>> # sometimes the combining character may be mistakenly rendered over the `'` Python prints around the string
>>> s[-4:]
'̈lle'
>>> [unicodedata.name(c) for c in s[-4:]]
['COMBINING DIAERESIS', 'LATIN SMALL LETTER L', 'LATIN SMALL LETTER L', 'LATIN SMALL LETTER E']

This is a significant issue, given how character-centric our understanding of text is: users (and by extension, developers) expect to be able to count what they see as “letters”, in a way that is consistent with how they are printed, shown on screen or inputted in a text field.

Another headache is the fact Unicode also may define special forms for the same letter or group of letters, which are visibly different but understood by humans to be derived from the same symbol.

A very common example of this is the ﬁ (U+FB01), ﬂ (U+FB02), ﬀ (U+FB00) and ﬃ (U+FB03) ligatures, which are ubiquitous in Latin text as a “more readable” form of the fi, fl, ff and ffi digraphs. In general, users expect oﬃce, ofﬁce and office to be treated similarly, because they all represent the same identical word, even though they are not necessarily rendered identically. 10

Canonical and Compatibility Equivalence

To solve this issue, Unicode defines two different types of equivalence between codepoints (or sequences thereof):

  • Canonical equivalence, when two combinations of one or more codepoints represent the same “abstract” character, like in the case of “ñ” and “n + combining tilde”;

  • Compatibility equivalence, when two combinations of one or more codepoints more or less represent the same “abstract” character, while being rendered differently or having different semantics, like in the case of the “ﬁ” ligature, or mathematical signs such as “Mathematical Bold Capital A” (𝐀).

Canonical equivalence is generally considered a stronger form of equivalence than compatibility equivalence: it is critical for text processing tools to be able to treat canonically equivalent characters as the same, otherwise users may be unable to search, edit or operate on text properly.11 On the other hand, users are aware of compatibility-equivalent characters due to their different semantic and visual features, so their equivalence becomes relevant only in specific circumstances (like textual search, for instance, or when the user tries to copy “fancy” characters from Word to a text box that only accepts plain text).
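Both kinds of equivalence can be probed with Python's `unicodedata` module (a minimal sketch; normalization itself is covered in the next section):

```python
import unicodedata

# canonical equivalence: precomposed "ñ" vs "n" + combining tilde
assert "\u00F1" != "n\u0303"                               # raw codepoints differ...
assert unicodedata.normalize("NFC", "n\u0303") == "\u00F1" # ...but NFC unifies them

# compatibility equivalence: the "ﬁ" ligature vs plain "fi"
assert unicodedata.normalize("NFC", "\uFB01") == "\uFB01"  # canonical forms keep the ligature
assert unicodedata.normalize("NFKC", "\uFB01") == "fi"     # compatibility forms flatten it
```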

Normalization Forms

Unicode defines four distinct normalization forms, which are specific forms a Unicode text can be in, and which allow for safe comparisons between strings. The standard describes how text can be transformed into any form, following a specific normalization algorithm based on per-codepoint mappings.

The four normalization forms are:

  • NFD, or Normalization Form D, which applies a single canonical decomposition to all characters of a string. In general, this can be assumed to mean that every character that has a canonically-equivalent decomposed form is in it, with all of its modifiers sorted into a canonical order.

    For instance,

      >>> "e\u0302\u0323"
      'ệ'
      >>> [unicodedata.name(c) for c in "e\u0302\u0323"]
      ['LATIN SMALL LETTER E', 'COMBINING CIRCUMFLEX ACCENT', 'COMBINING DOT BELOW']
      >>> normalized = unicodedata.normalize('NFD', "e\u0302\u0323")
      >>> normalized
      'ệ'
      >>> [unicodedata.name(c) for c in normalized]
      ['LATIN SMALL LETTER E', 'COMBINING DOT BELOW', 'COMBINING CIRCUMFLEX ACCENT']

    Notice how the circumflex and the dot below were in a noncanonical order and were swapped by the normalization algorithm.

  • NFC, or Normalization Form C, which first applies a canonical decomposition, followed by a canonical composition. In NFC, all characters are composed into a precomposed character, if possible:

      >>> precomposed = unicodedata.normalize('NFC', "e\u0302\u0323")
      >>> precomposed
      'ệ'
      >>> [unicodedata.name(c) for c in precomposed]
      ['LATIN SMALL LETTER E WITH CIRCUMFLEX AND DOT BELOW']

    Notice that normalizing to NFC is not enough to “count” glyphs, given that some may not be representable with a single codepoint. An example of this is ẹ̄, which has no associated precomposed character:

      >>> [unicodedata.name(c) for c in unicodedata.normalize('NFC', "ẹ̄")]
      ['LATIN SMALL LETTER E WITH DOT BELOW', 'COMBINING MACRON']

    A particularly nice property of NFC is that, by definition, all ASCII text is already in NFC, which means that compilers and other tools do not necessarily have to bother with normalization when dealing with source code or scripts. 12

  • NFKD, or Normalization Form KD, which applies a compatibility decomposition to all characters of a string. Alongside canonical equivalence, Unicode also defines compatibility-equivalent decompositions for certain characters, like the previously mentioned ﬁ ligature, which is decomposed into f and i.

      >>> fi = "ﬁ"
      >>> unicodedata.normalize('NFD', fi) # doesn't do anything, `ﬁ` has no canonical decomposition
      'ﬁ'
      >>> decomposed = unicodedata.normalize('NFKD', "ﬁ")
      >>> decomposed
      'fi'
      >>> [unicodedata.name(c) for c in decomposed]
      ['LATIN SMALL LETTER F', 'LATIN SMALL LETTER I']

    Characters that don’t have a compatibility decomposition are canonically decomposed instead:

      >>> "\u1EC7"
      'ệ'
      >>> [unicodedata.name(c) for c in unicodedata.normalize('NFKD', "\u1EC7")]
      ['LATIN SMALL LETTER E', 'COMBINING DOT BELOW', 'COMBINING CIRCUMFLEX ACCENT']
  • NFKC, or Normalization Form KC, which first applies a compatibility decomposition, followed by a canonical composition. In NFKC, all characters are composed into a precomposed character, if possible:

      >>> precomposed = unicodedata.normalize('NFKC', "ﬁ") # this is U+FB01, "LATIN SMALL LIGATURE FI"
      >>> precomposed
      'fi'
      >>> [unicodedata.name(c) for c in precomposed]
      ['LATIN SMALL LETTER F', 'LATIN SMALL LETTER I']

    Notice how the composition performed is canonical: there’s no such thing as “compatibility composition” as far as my understanding goes. This means that NFKC never recombines characters into compatibility-equivalent forms, which are thus permanently lost:

      >>> s = "Souﬄ\u0065\u0301" # notice the `ﬄ` ligature and the decomposed accent
      >>> s
      'Souﬄé'
      >>> norm = unicodedata.normalize('NFKC', s)
      >>> norm
      'Soufflé'
      >>> # the ligature is gone, but the accent is still there

All in all, normalization is a fairly complex topic, and it’s especially tricky to implement right due to the sheer amount of special cases, so it’s always best to rely on libraries in order to get it right.
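As a minimal illustration of that advice, a comparison helper (a sketch; `canonically_equal` is a hypothetical name) can lean on the library for the hard part and normalize both sides before comparing:

```python
import unicodedata

# a minimal comparison helper: normalize both sides to NFC before comparing,
# so canonically-equivalent strings compare as equal
def canonically_equal(a: str, b: str) -> bool:
    return unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b)

assert canonically_equal("caña", "can\u0303a")  # precomposed vs decomposed ñ
assert not canonically_equal("ﬁle", "file")     # compatibility differences survive NFC
```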

Unicode in the wild: caveats

Unicode is really the only relevant character set in existence, with UTF-8 holding the status of “best encoding”.

Unfortunately, internationalization support introduces a great deal of complexity into text handling, something that developers are often unaware of:

  1. First and foremost, there’s still a massive amount of software that doesn’t default to (or outright does not support) UTF-8, because it was either designed to work with legacy 8-bit encodings (like ISO-8859-1) or because it was designed in the ’90s to use UCS-2 and it’s permanently stuck with it or with faux “UTF-16”. Software libraries and frameworks like Qt, Java, Unreal Engine and the Win32 API are constantly converting text from UTF-8 (which is the sole Internet standard) to their internal UTF-16 representation. This is a massive waste of CPU cycles, which while more abundant than in the past, are still a finite resource.

     // Linux x86_64, Qt 6.5.1. Encoding is `en_US.UTF-8`.
     #include <iostream>
     #include <QCoreApplication>
     #include <QDebug>
     int main(int argc, char *argv[]) {
         QCoreApplication app{argc, argv};
         // converts UTF-8 (the source file's encoding) to the internal QString representation
         const QString s{"caña"}; 
         // prints `"caña"`, using Qt's debugging facilities. This will convert back to UTF-8 in order
         // to print the string to the console
         qDebug() << s;
         // prints `caña`, using C++'s IOStreams. This will force Qt to convert the string to
         // a UTF-8 encoded std::string, which will then be printed to the console
         std::cout << s.toStdString() << '\n';
         return 0;
     }
  2. Case insensitivity in Unicode is a massive headache. First and foremost, the concept itself of “ignoring case” is deeply European-centric due to it being chiefly limited to bicameral scripts such as Latin, Cyrillic or Greek. What is considered the opposite case of a letter may vary as well, depending on the system’s locale:

     public class Up {
         public static void main(final String[] args) {
             final var uc = "CIAO";
             final var lc = "ciao";
             System.out.printf("uc(\"%s\") == \"%s\": %b\n", lc, uc, lc.toUpperCase().equals(uc));
         }
     }
     $ echo $LANG
     en_US.UTF-8
     $ java Up
     uc("ciao") == "CIAO": true

    This seems to work fine until the runtime locale is switched to Turkish:

     $ env LANG='tr_TR.UTF-8' java Up
     uc("ciao") == "CIAO": false

    In Turkish, the uppercase of i is İ, and the lowercase of I is ı, which breaks the ASCII-centric assumption the Java snippet above is built on. 13 There is a multitude of such examples of “naive” implementations of case insensitivity in Unicode that inevitably end up being incorrect under unforeseen circumstances.
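For comparison, Python's built-in case conversions are locale-independent: they sidestep the Turkish trap, but still expose folding subtleties that Unicode itself mandates (a quick sketch):

```python
# locale-independent casing, per the Unicode default case mappings
assert "CIAO".lower() == "ciao"
assert "İ".lower() == "i\u0307"   # İ lowercases to "i" + combining dot above
assert "ß".lower() == "ß"         # lower() keeps the German sharp s...
assert "ß".casefold() == "ss"     # ...while full case folding expands it,
# which is why folded comparison is the right tool for case-insensitive matching:
assert "STRASSE".casefold() == "Straße".casefold()
```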

    Taking all edge cases related to Unicode case folding into account is a lot of work, especially since it’s very hard to properly test all possible locales. This is the reason why Unicode handling is always best left to a library. For C/C++ and Java, the Unicode Consortium itself provides a reference implementation of the Unicode algorithms, called ICU, which is used by a large number of frameworks and shipped by almost every major OS.

    While quite tricky to get right at times and at times more UTF-16 centric than I’d like, using ICU is still way saner than any self-written alternative:

     #include <stdint.h>
     #include <stdio.h>
     #include <stdlib.h>
     #include <string.h>
     #include <unicode/ucasemap.h>
     #include <unicode/utypes.h>
     int main(const int argc, const char *const argv[]) {
         // Support custom locales
         const char* const locale = argc > 1 ? argv[1] : "en_US";
         UErrorCode status = U_ZERO_ERROR;
         // Create a UCaseMap object for case folding
         UCaseMap* const caseMap = ucasemap_open(locale, 0, &status);
         if (U_FAILURE(status)) {
             printf("Error creating UCaseMap: %s\n", u_errorName(status));
             return EXIT_FAILURE;
         }
         // Case fold the input string using the default settings
         const char input[] = "CIAO";
         char lc[100];
         const int32_t lcLength = ucasemap_utf8ToLower(caseMap, lc, sizeof lc, input, sizeof input - 1, &status);
         if (U_FAILURE(status)) {
             printf("Error performing case folding: %s\n", u_errorName(status));
             return EXIT_FAILURE;
         }
         // Print the lower case string
         printf("lc(\"%s\") == %.*s\n", input, lcLength, lc);
         // Clean up resources
         ucasemap_close(caseMap);
         return EXIT_SUCCESS;
     }
     $ cc -o casefold casefold.c -std=c11 $(icu-config --ldflags)
     $ ./casefold
     lc("CIAO") == ciao
     $ ./casefold tr_TR
     lc("CIAO") == cıao

    Unicode generalises “case insensitivity” into the broader concept of character folding, which boils down to a set of rules that define how characters can be transformed into other characters, in order to make them comparable.

  3. Similarly to folding, sorting text in a well-defined order (for instance alphabetical), an operation better known as collation, is also not trivial with Unicode.

    Different languages (and thus locales) may have different sorting rules, even within the Latin script.

    If, perchance, someone wanted to sort the list of words [ "tuck", "löwe", "luck", "zebra"]:

    • In German, ‘Ö’ is placed between ‘O’ and ‘P’, and the rest of the alphabet follows the same order as in English. The correct sorting for that list is thus [ "löwe", "luck", "tuck", "zebra"];
    • In Estonian, ‘Z’ is placed between ‘S’ and ‘T’, and ‘Ö’ is the penultimate letter of the alphabet. The list is then sorted as [ "luck", "löwe", "zebra", "tuck"];
    • In Swedish, ‘Ö’ is the last letter of the alphabet, with the classical Latin letters in their usual order. The list is thus [ "luck", "löwe", "tuck", "zebra"].
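A naive codepoint-order sort, sketched here in Python with a hypothetical German word list, matches none of those locale rules: every non-ASCII letter simply sorts after “z”:

```python
# codepoint-order sorting: "ö" (U+00F6) compares greater than "z" (U+007A),
# so it lands last regardless of what any locale's alphabet says
words = ["öl", "zebra", "apfel"]
assert sorted(words) == ["apfel", "zebra", "öl"]
```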

    Unicode defines a complex set of rules for collation and provides a reference implementation in ICU through the ucol API (and its relative C++ and Java equivalents).

     #define _GNU_SOURCE // for qsort_r
     #include <stddef.h>
     #include <stdint.h>
     #include <stdio.h>
     #include <stdlib.h>
     #include <string.h>
     #include <unicode/ustring.h>
     #include <unicode/ucol.h>
     #include <unicode/uloc.h>
     int strcmp_helper(const void *const a, const void *const b, void *const ctx) {
         const char *const str1 = *(const char**) a, *const str2 = *(const char**) b;
         UErrorCode status = U_ZERO_ERROR;
         const UCollationResult cres = ucol_strcollUTF8(ctx, str1, strlen(str1), str2, strlen(str2), &status);
         return (cres == UCOL_GREATER) - (cres == UCOL_LESS);
     }
     void sort_strings(UCollator *const collator, const char **const strings, const ptrdiff_t n) {
         qsort_r(strings, n, sizeof *strings, strcmp_helper, collator);
     }
     int main(const int argc, const char *argv[]) {
         // Support custom locales
         const char* locale = getenv("ICU_LOCALE");
         if (!locale) {
             locale = "en_US";
         }
         UErrorCode status = U_ZERO_ERROR;
         // Create a UCollator object for the requested locale
         UCollator *const coll = ucol_open(locale, &status);
         if (U_FAILURE(status)) {
             printf("Error creating UCollator: %s\n", u_errorName(status));
             return EXIT_FAILURE;
         }
         sort_strings(coll, ++argv, argc - 1);
         // Print the sorted strings
         while (*argv) {
             puts(*argv++);
         }
         // Clean up resources
         ucol_close(coll);
         return EXIT_SUCCESS;
     }
     $ env ICU_LOCALE=de_DE ./coll "tuck" "löwe" "luck" "zebra" # German
     löwe
     luck
     tuck
     zebra
     $ env ICU_LOCALE=et_EE ./coll "tuck" "löwe" "luck" "zebra" # Estonian
     luck
     löwe
     zebra
     tuck
     $ env ICU_LOCALE=sv_SE ./coll "tuck" "löwe" "luck" "zebra" # Swedish
     luck
     löwe
     tuck
     zebra
     $ # more complex case: sorting Japanese Kana using the Japanese locale's gojūon order
     $ env ICU_LOCALE=ja ./coll "パンダ" "ありがとう" "パソコン" "さよなら" "カード"
     ありがとう
     カード
     さよなら
     パソコン
     パンダ
  4. To facilitate UTF-8 detection when other encodings may be in use, some platforms annoyingly add a UTF-8 BOM (EF BB BF) at the beginning of text files. Microsoft’s Visual Studio is historically a major offender in this regard:

     $  file OldProject.sln
     OldProject.sln: Unicode text, UTF-8 (with BOM) text, with CRLF line terminators
     $ xxd OldProject.sln | head -n 1
     00000000: efbb bf0d 0a4d 6963 726f 736f 6674 2056  .....Microsoft V

    The sequence is simply U+FEFF, just like in UTF-16 and 32, but encoded in UTF-8. While it’s not forbidden by the standard per se, it has no real utility besides signaling that the file is in UTF-8 (it makes no sense talking about endianness with single bytes). Programs that need to parse or operate on UTF-8 encoded files should always be aware that a BOM may be present, and probe for it to avoid exposing users to unnecessary complexity they probably don’t care about.
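In Python, for instance, the stdlib's `utf-8-sig` codec does exactly this kind of probing:

```python
BOM = "\uFEFF"
raw = (BOM + "key:value\n").encode("utf-8")
assert raw.startswith(b"\xef\xbb\xbf")          # the UTF-8 encoded BOM
# the "utf-8-sig" codec probes for the BOM and strips it if present
assert raw.decode("utf-8-sig") == "key:value\n"
# plain "utf-8" leaves it in, ready to confuse the first key of a naive parser
assert raw.decode("utf-8") == "\ufeffkey:value\n"
```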

  5. Because of all of the reasons listed above, random, array-like access to Unicode strings is almost always broken—this is true even with UTF-32, due to grapheme clusters. It also follows that operations such as string slicing are not trivial to implement correctly, and the way languages such as Python and JavaScript do it (codepoint by codepoint) is, in my opinion, problematic.

    A good example of a modern language that attempts to mitigate this issue is Rust, which has UTF-8 strings that disallow indexed access and only support slicing at byte indices, with UTF-8 validation at runtime:

     fn main() {
         let s = "caña";
         // error[E0277]: the type `str` cannot be indexed by `{integer}`
         // let c = s[1];
         // char-by-char access requires iterators
         println!("{}", s.chars().nth(2).unwrap()); // OK: ñ
         // this will crash the program at runtime:
         // "byte index 3 is not a char boundary; it is inside 'ñ' (bytes 2..4) of `caña`"
         // let slice = &s[1..3];
         // the user needs to check UTF-8 character bounds beforehand
         println!("{}", &s[1..4]); // OK: "añ"

    The stabilisation of the .chars() method took quite a long time, reflecting the fact that deducing what is or is not a character in Unicode is complex and quite controversial. The method itself ended up implementing iteration over Rust’s chars (aka, Unicode scalar codepoints) instead of grapheme clusters, which is rarely what the user wants. The fact it returns an iterator does at least effectively express that character-by-character access in Unicode is not, indeed, the “simple” operation developers have been so long accustomed to.
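For instance, Python's codepoint-by-codepoint slicing never produces invalid scalar values, but it can still cut a grapheme cluster in half:

```python
s = "caña"             # precomposed ñ: 4 codepoints
assert len(s) == 4
assert s[1:3] == "añ"  # slices land on codepoint boundaries, never mid-codepoint

t = "can\u0303a"       # decomposed ñ: 5 codepoints, same rendered text
assert len(t) == 5
assert t[:3] == "can"  # ...but slicing can still split the grapheme "ñ"
```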

Wrapping up

Unicode is a massive standard, and it’s constantly adding new characters 14, so for everybody’s safety it’s always best to rely on libraries to provide Unicode support, and if necessary ship fonts that support all the characters you may need (such as Noto Fonts). As previously introduced, C and C++ do not provide great support for Unicode, so it’s always best to just use ICU, which is widely supported and shipped by every major OS (including Windows).

When handling text that may contain non-English characters, it’s always best to stick to UTF-8 when possible and use Unicode-aware libraries for text processing. While writing custom text processing code may seem doable, it’s easy to miss a few corner cases and confuse end users in the process.

This is especially important because the main users of localized text and applications tend to often be the least technically savvy—those who may lack the ability to understand why the piece of software they are using is misbehaving, and can’t search for help in a language they don’t understand.

I hope this article may have been useful to shed some light on what is, in my opinion, an often overlooked topic in software development, especially among C++ users. If I had to be honest, I was striving for a shorter article, but I guess I had to make up for all those years I didn’t post a thing :)

As always, feel free to comment underneath or send me a message if anything does not look right, and hopefully, the next post will come before 2025…

  1. This wacky yet ingenious system made it possible to write in Arabic on ASCII-only channels, by using a mixture of Latin script and Western numerals with a passing resemblance to letters not present in English (e.g., 3 in place of ع, …). 

  2. Three actually: there was also UTF-1, a variable-width encoding that used byte-sized code units. It was pretty borked, so it never really saw much use. 

  3. 32-bit Unicode was initially resisted by both the Unicode consortium and the industry, due to its wastefulness while representing Latin text and everybody’s heavy investment in 16-bit Unicode. 

  4. And they still do it as of today. They do claim UTF-16 support, but it’s a bald-faced lie given that they don’t support anything outside of the BMP. 

  5. It was basically IPv4 all over again. I guess we’ll never learn. 

  6. A good example of this is Unreal Engine, which pretends to support UTF-16 even though it is actually UCS-2 

  7. UCS-2 also had the same issue, and so it was also in practice two different encodings, UCS-2LE and UCS-2BE. My opinions on this matter can thankfully be represented using Unicode itself with codepoint U+1F92E

  8. Or rather, it is the one Unicode encoding people want to use, as opposed to UTF-16, which is a scourge we’ll (probably) never get rid of. 

  9. I’ve specified “ASCII symbols” because letters may potentially be part of a grapheme cluster, so splitting on an e may, for instance, split an “è” (e + combining grave) in two. 

  10. For instance, you most definitely expect that searching for “office” in a PDF also matches the words containing the ligature “fi”—string search is another tricky topic by itself

  11. And not only that: just think of how hard would it be to find a file, or to check a password or username, if there weren’t ways to verify the canonical equivalence between characters. 

  12. While most programming languages are somewhat standardizing around UTF-8 encoded source code, C and C++ still don’t have a standard encoding. Modern languages like Rust, Swift and Go also support Unicode in identifiers, which introduces some interesting challenges - see the relevant Unicode specification for identifiers and parsing for more details. 

  13. I’ve used Java as an example here because it hits the right spot as a poster child of all the wrong assumptions of the ’90s: it’s old enough to easily provide naive, Western-centric built-in concepts such as “toUpperCase” and “toLowerCase”, while also attempting to implement them in a “Unicode” way. Unicode support in C and C++ is too barebones to really work as an example (despite C and C++ locales being outstandingly broken), and modern ones such as Rust or Go are usually locale agnostic; they also tend to implement case folding in a “saner” way (for instance, Rust only supports it on ASCII in its standard library). 

  14. A prime example of this is emojis, which have been ballooning in number since they were first introduced in 2010. 

Cross compiling made easy, using Clang and LLVM

Anyone who ever tried to cross-compile a C/C++ program knows how big a PITA the whole process could be. The main reasons for this sorry state of things are generally how byzantine build systems tend to be when configuring for cross-compilation, and how messy it is to set-up your cross toolchain in the first place.

One of the main culprits in my experience has been the GNU toolchain, the decades-old behemoth upon which the POSIXish world has been built for years. Like many compilers of yore, GCC and its binutils brethren were never designed with the intent to support multiple targets within a single setup, with the only supported approach being installing a full cross build for each triple you wish to target on any given host.

For instance, assuming you wish to build something for FreeBSD on your Linux machine using GCC, you need:

  • A GCC + binutils install for your host triplet (i.e., x86_64-pc-linux-gnu or similar);
  • A GCC + binutils complete install for your target triplet (i.e. x86_64-unknown-freebsd12.2-gcc, as, nm, etc)
  • A sysroot containing the necessary libraries and headers, which you can either build yourself or promptly steal from a running installation of FreeBSD.

This process is sometimes made simpler by Linux distributions or hardware vendors offering a selection of prepackaged compilers, but this will never suffice due to the sheer amount of possible host-target combinations. This sometimes means you have to build the whole toolchain yourself, something that, unless you rock a quite beefy CPU, tends to be a massive waste of time and power.

Clang as a cross compiler

This annoying limitation is one of the reasons why I got interested in LLVM (and thus Clang), which is by-design a full-fledged cross compiler toolchain and is mostly compatible with GNU. A single install can output and compile code for every supported target, as long as a complete sysroot is available at build time.

I found this to be a game-changer, and, while it still can’t compete in convenience with modern language toolchains (such as Go’s gc and GOARCH/GOOS), it’s night and day better than the rigmarole of setting up GNU toolchains. You can now just fetch whatever your favorite package management system has available in its repositories (as long as it’s not extremely old), and avoid messing around with multiple installs of GCC.

Until a few years ago, the whole process wasn’t as smooth as it could be. Due to LLVM not having a full toolchain yet available, you were still supposed to provide a binutils build specific for your target. While this is generally much more tolerable than building the whole compiler (binutils is relatively fast to build), it was still somewhat of a nuisance, and I’m glad that llvm-mc (LLVM’s integrated assembler) and lld (universal linker) are finally stable and as flexible as the rest of LLVM.

With the toolchain now set, the next step becomes to obtain a sysroot in order to provide the needed headers and libraries to compile and link for your target.

Obtaining a sysroot

A super fast way to find a working system directory for a given OS is to rip it straight out of an existing system (a Docker container image will often also do). For instance, this is how I used tar through ssh as a quick way to extract a working sysroot from a FreeBSD 13-CURRENT AArch64 VM 1:

$ mkdir ~/farm_tree
$ ssh FARM64 'tar cf - /lib /usr/include /usr/lib /usr/local/lib /usr/local/include' | bsdtar xvf - -C $HOME/farm_tree/

Invoking the cross compiler

With everything set, it’s now only a matter of invoking Clang with the right arguments:

$  clang++ --target=aarch64-pc-freebsd --sysroot=$HOME/farm_tree -fuse-ld=lld -stdlib=libc++ -o zpipe -lz --verbose
clang version 11.0.1
Target: aarch64-pc-freebsd
Thread model: posix
InstalledDir: /usr/bin
 "/usr/bin/clang-11" -cc1 -triple aarch64-pc-freebsd -emit-obj -mrelax-all -disable-free -disable-llvm-verifier -discard-value-names -main-file-name -mrelocation-model static -mframe-pointer=non-leaf -fno-rounding-math -mconstructor-aliases -munwind-tables -fno-use-init-array -target-cpu generic -target-feature +neon -target-abi aapcs -fallow-half-arguments-and-returns -fno-split-dwarf-inlining -debugger-tuning=gdb -v -resource-dir /usr/lib/clang/11.0.1 -isysroot /home/marco/farm_tree -internal-isystem /home/marco/farm_tree/usr/include/c++/v1 -fdeprecated-macro -fdebug-compilation-dir /home/marco/dummies/cxx -ferror-limit 19 -fno-signed-char -fgnuc-version=4.2.1 -fcxx-exceptions -fexceptions -faddrsig -o /tmp/zpipe-54f1b1.o -x c++
clang -cc1 version 11.0.1 based upon LLVM 11.0.1 default target x86_64-pc-linux-gnu
#include "..." search starts here:
#include <...> search starts here:
End of search list.
 "/usr/bin/ld.lld" --sysroot=/home/marco/farm_tree --eh-frame-hdr -dynamic-linker /libexec/ --enable-new-dtags -o zpipe /home/marco/farm_tree/usr/lib/crt1.o /home/marco/farm_tree/usr/lib/crti.o /home/marco/farm_tree/usr/lib/crtbegin.o -L/home/marco/farm_tree/usr/lib /tmp/zpipe-54f1b1.o -lz -lc++ -lm -lgcc --as-needed -lgcc_s --no-as-needed -lc -lgcc --as-needed -lgcc_s --no-as-needed /home/marco/farm_tree/usr/lib/crtend.o /home/marco/farm_tree/usr/lib/crtn.o
$ file zpipe
zpipe: ELF 64-bit LSB executable, ARM aarch64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/, for FreeBSD 13.0 (1300136), FreeBSD-style, with debug_info, not stripped

In the snippet above, I have managed to compile and link a C++ file into an executable for AArch64 FreeBSD, all while using just the clang and lld I had already installed on my GNU/Linux system.

More in detail:

  1. --target switches the LLVM default target (x86_64-pc-linux-gnu) to aarch64-pc-freebsd, thus enabling cross-compilation.
  2. --sysroot forces Clang to assume the specified path as root when searching headers and libraries, instead of the usual paths. Note that sometimes this setting might not be enough, especially if the target uses GCC and Clang somehow fails to detect its install path. This can be easily fixed by specifying --gcc-toolchain, which clarifies where to search for GCC installations.
  3. -fuse-ld=lld tells Clang to use lld instead of whatever default the platform uses. As I will explain below, it’s highly unlikely that the system linker understands foreign targets, while LLD can natively support almost every binary format and OS 2.
  4. -stdlib=libc++ is needed here due to Clang failing to detect that FreeBSD on AArch64 uses LLVM’s libc++ instead of GCC’s libstdc++.
  5. -lz is also specified to show how Clang can also resolve other libraries inside the sysroot without issues, in this case, zlib.

The final test is now to copy the binary to our target system (i.e. the VM we ripped the sysroot from before) and check if it works as expected:

$ rsync zpipe FARM64:"~"
$ ssh FARM64
FreeBSD-ARM64-VM $ chmod +x zpipe
FreeBSD-ARM64-VM $ ldd zpipe
zpipe: => /lib/ (0x4029e000) => /usr/lib/ (0x402e4000) => /lib/ (0x403da000) => /lib/ (0x40426000) => /lib/ (0x40491000) => /lib/ (0x408aa000)
FreeBSD-ARM64-VM $ ./zpipe -h
zpipe usage: zpipe [-d] < source > dest

Success! It’s now possible to use this cross toolchain to build larger programs, and below I’ll give a quick example of how to use it to build real projects.

Optional: creating an LLVM toolchain directory

LLVM provides a mostly compatible counterpart for almost every tool shipped by binutils (with the notable exception of as 3), prefixed with llvm-.

The most critical of these is LLD, a drop-in replacement for a platform’s system linker, capable of replacing both GNU ld.bfd and gold on GNU/Linux or BSD, and Microsoft’s LINK.EXE when targeting MSVC. It supports linking for (almost) every platform supported by LLVM, thus removing the nuisance of having multiple target-specific linkers installed.

Both GCC and Clang support using ld.lld instead of the system linker (which may well be lld, like on FreeBSD) via the command line switch -fuse-ld=lld.

In my experience, I found that Clang’s driver might get confused when picking the right linker on some uncommon platforms, especially before version 11.0. For some reason, clang sometimes decided to outright ignore the -fuse-ld=lld switch and picked the system linker (ld.bfd in my case), which does not support AArch64.

A fast solution to this is to create a toolchain directory containing symlinks that rename the LLVM utilities to the standard binutils programs:

$  ls -la ~/.llvm/bin/
Permissions Size User  Group Date Modified Name
lrwxrwxrwx    16 marco marco  3 Aug  2020  ar -> /usr/bin/llvm-ar
lrwxrwxrwx    12 marco marco  6 Aug  2020  ld -> /usr/bin/lld
lrwxrwxrwx    21 marco marco  3 Aug  2020  objcopy -> /usr/bin/llvm-objcopy
lrwxrwxrwx    21 marco marco  3 Aug  2020  objdump -> /usr/bin/llvm-objdump
lrwxrwxrwx    20 marco marco  3 Aug  2020  ranlib -> /usr/bin/llvm-ranlib
lrwxrwxrwx    21 marco marco  3 Aug  2020  strings -> /usr/bin/llvm-strings
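
Recreating the directory above takes only a handful of ln invocations; the sketch below assumes the LLVM tools live in /usr/bin, as in the listing:

```shell
# map the classic binutils names to their LLVM counterparts
mkdir -p "$HOME/.llvm/bin"

for tool in ar objcopy objdump ranlib strings; do
    ln -sf "/usr/bin/llvm-$tool" "$HOME/.llvm/bin/$tool"
done

# ld points to LLD itself, which is not prefixed with llvm-
ln -sf /usr/bin/lld "$HOME/.llvm/bin/ld"
```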

The -B switch can then be used to force Clang (or GCC) to search for the required tools in this directory, stopping the issue from ever occurring:

$  clang++ -B$HOME/.llvm/bin -stdlib=libc++ --target=aarch64-pc-freebsd --sysroot=$HOME/farm_tree -std=c++17 -o mvd-farm64
$ file mvd-farm64
mvd-farm64: ELF 64-bit LSB executable, ARM aarch64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/, for FreeBSD 13.0, FreeBSD-style, with debug_info, not stripped

Optional: creating Clang wrappers to simplify cross-compilation

I have noticed that certain build systems (and by “certain” I mean some poorly written Makefiles and, sometimes, Autotools) tend to misbehave when $CC, $CXX or $LD contain spaces or multiple parameters. This can become a recurring issue when clang has to be invoked with several arguments. 4

Given also how unwieldy it is to remember to write all of the parameters correctly everywhere, I usually write quick wrappers for clang and clang++ in order to simplify building for a certain target:

$ cat ~/.local/bin/aarch64-pc-freebsd-clang
#!/usr/bin/env sh

exec /usr/bin/clang -B$HOME/.llvm/bin --target=aarch64-pc-freebsd --sysroot=$HOME/farm_tree "$@"
$ cat ~/.local/bin/aarch64-pc-freebsd-clang++
#!/usr/bin/env sh

exec /usr/bin/clang++ -B$HOME/.llvm/bin -stdlib=libc++ --target=aarch64-pc-freebsd --sysroot=$HOME/farm_tree "$@"	

If created in a directory inside $PATH, these scripts can be used anywhere as standalone commands:

$ aarch64-pc-freebsd-clang++ -o tst -static
$ file tst
tst: ELF 64-bit LSB executable, ARM aarch64, version 1 (FreeBSD), statically linked, for FreeBSD 13.0 (1300136), FreeBSD-style, with debug_info, not stripped

Cross-building with Autotools, CMake and Meson

Autotools, CMake, and Meson are arguably the most popular build systems for C and C++ open source projects (sorry, SCons). All three support cross-compiling out of the box, albeit with some caveats.


Autotools

Over the years, Autotools has become famous for being horrendously clunky and easy to break. While this reputation is definitely well earned, it’s still widely used by most large GNU projects. Given it’s been around for decades, it’s quite easy to find support online when something goes awry (sadly, the same cannot be said about writing .ac files). Compared to its more modern brethren, it doesn’t require any toolchain file or extra configuration when cross-compiling, being driven solely by command line options.

A ./configure script (either generated by autoconf or shipped by a tarball alongside source code) usually supports the --host flag, allowing the user to specify the triple of the host on which the final artifacts are meant to be run.

This flag activates cross-compilation, and causes the “auto-something” array of tools to try to detect the correct compiler for the target, which it generally assumes to be called some-triple-gcc or some-triple-g++.

For instance, let’s try to configure binutils version 2.35.1 for aarch64-pc-freebsd, using the Clang wrapper introduced above:

$ tar xvf binutils-2.35.1.tar.xz
$ mkdir binutils-2.35.1/build # always create a build directory to avoid messing up the source tree
$ cd binutils-2.35.1/build
$ env CC='aarch64-pc-freebsd-clang' CXX='aarch64-pc-freebsd-clang++' AR=llvm-ar ../configure --build=x86_64-pc-linux-gnu --host=aarch64-pc-freebsd --enable-gold=yes
checking build system type... x86_64-pc-linux-gnu
checking host system type... aarch64-pc-freebsd
checking target system type... aarch64-pc-freebsd
checking for a BSD-compatible install... /usr/bin/install -c
checking whether ln works... yes
checking whether ln -s works... yes
checking for a sed that does not truncate output... /usr/bin/sed
checking for gawk... gawk
checking for aarch64-pc-freebsd-gcc... aarch64-pc-freebsd-clang
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... yes
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether aarch64-pc-freebsd-clang accepts -g... yes
checking for aarch64-pc-freebsd-clang option to accept ISO C89... none needed
checking whether we are using the GNU C++ compiler... yes
checking whether aarch64-pc-freebsd-clang++ accepts -g... yes

The invocation of ./configure above specifies that I want autotools to:

  1. Configure for building on an x86_64-pc-linux-gnu host (which I specified using --build);
  2. Build binaries that will run on aarch64-pc-freebsd, using the --host switch;
  3. Use the Clang wrappers made above as C and C++ compilers;
  4. Use llvm-ar as the target ar.

I also specified to build the Gold linker, which is written in C++ and is a good test of how well our improvised toolchain handles compiling C++.

If the configuration step doesn’t fail for some reason (it shouldn’t), it’s now time to run GNU Make to build binutils:

$ make -j16 # because I have 16 threads on my system
[ lots of output]
$ mkdir dest
$ make DESTDIR=$PWD/dest install # install into a fake tree

There should now be executable files and libraries inside of the fake tree generated by make install. A quick test using file confirms they have been correctly built for aarch64-pc-freebsd:

$ file dest/usr/local/bin/
dest/usr/local/bin/ ELF 64-bit LSB executable, ARM aarch64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/, for FreeBSD 13.0 (1300136), FreeBSD-style, with debug_info, not stripped


CMake

The simplest way to make CMake configure for an arbitrary target is to write a toolchain file. These usually consist of a list of declarations that instruct CMake on how it is supposed to use a given toolchain, specifying parameters like the target operating system, the CPU architecture, the name of the C++ compiler, and so on.

A reasonable toolchain file for the aarch64-pc-freebsd triple may be written as follows:


set(CMAKE_SYSTEM_NAME FreeBSD)
set(CMAKE_SYSTEM_PROCESSOR aarch64)

set(CMAKE_SYSROOT $ENV{HOME}/farm_tree)

set(CMAKE_C_COMPILER aarch64-pc-freebsd-clang)
set(CMAKE_CXX_COMPILER aarch64-pc-freebsd-clang++)
set(CMAKE_AR llvm-ar)

# these variables tell CMake to avoid using any binary it finds in
# the sysroot, while picking headers and libraries exclusively from it
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)

In this file, I specified the wrappers created above as the cross compilers for C and C++ for the target. It should also be possible to use plain Clang with the right arguments, but that is much less straightforward and potentially more error-prone.

In any case, it is very important to set the CMAKE_SYSROOT and CMAKE_FIND_ROOT_PATH_MODE_* variables, otherwise CMake may wrongly pick up packages from the host, with disastrous results.

It is now only a matter of setting CMAKE_TOOLCHAIN_FILE to the path of the toolchain file when configuring a project. To better illustrate this, I will now also build {fmt} (which is an amazing C++ library you should definitely use) for aarch64-pc-freebsd:

$  git clone
Cloning into 'fmt'...
remote: Enumerating objects: 45, done.
remote: Counting objects: 100% (45/45), done.
remote: Compressing objects: 100% (33/33), done.
remote: Total 24446 (delta 17), reused 12 (delta 7), pack-reused 24401
Receiving objects: 100% (24446/24446), 12.08 MiB | 2.00 MiB/s, done.
Resolving deltas: 100% (16551/16551), done.
$ cd fmt
$ cmake -B build -G Ninja -DCMAKE_TOOLCHAIN_FILE=$HOME/toolchain-aarch64-freebsd.cmake -DBUILD_SHARED_LIBS=ON -DFMT_TEST=OFF .
-- CMake version: 3.19.4
-- The CXX compiler identification is Clang 11.0.1
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /home/marco/.local/bin/aarch64-pc-freebsd-clang++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Version: 7.1.3
-- Build type: Release
-- Performing Test has_std_11_flag
-- Performing Test has_std_11_flag - Success
-- Performing Test has_std_0x_flag
-- Performing Test has_std_0x_flag - Success
-- Performing Test FMT_HAS_VARIANT
-- Performing Test FMT_HAS_VARIANT - Success
-- Required features: cxx_variadic_templates
-- Performing Test HAS_NULLPTR_WARNING
-- Performing Test HAS_NULLPTR_WARNING - Success
-- Looking for strtod_l
-- Looking for strtod_l - not found
-- Configuring done
-- Generating done
-- Build files have been written to: /home/marco/fmt/build

Compared with Autotools, the command line passed to cmake is very simple and doesn’t need much explanation. After the configuration step finishes, it’s only a matter of compiling the project and getting ninja or make to install the resulting artifacts somewhere.

$ cmake --build build
[4/4] Creating library symlink
$ mkdir dest
$ env DESTDIR=$PWD/dest cmake --build build -- install
[0/1] Install the project...
-- Install configuration: "Release"
-- Installing: /home/marco/fmt/dest/usr/local/lib/
-- Installing: /home/marco/fmt/dest/usr/local/lib/
-- Installing: /home/marco/fmt/dest/usr/local/lib/
-- Installing: /home/marco/fmt/dest/usr/local/lib/cmake/fmt/fmt-config.cmake
-- Installing: /home/marco/fmt/dest/usr/local/lib/cmake/fmt/fmt-config-version.cmake
-- Installing: /home/marco/fmt/dest/usr/local/lib/cmake/fmt/fmt-targets.cmake
-- Installing: /home/marco/fmt/dest/usr/local/lib/cmake/fmt/fmt-targets-release.cmake
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/args.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/chrono.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/color.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/compile.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/core.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/format.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/format-inl.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/locale.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/os.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/ostream.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/posix.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/printf.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/ranges.h
-- Installing: /home/marco/fmt/dest/usr/local/lib/pkgconfig/fmt.pc
$  file dest/usr/local/lib/
dest/usr/local/lib/ ELF 64-bit LSB shared object, ARM aarch64, version 1 (FreeBSD), dynamically linked, for FreeBSD 13.0 (1300136), with debug_info, not stripped


Meson

Like CMake, Meson relies on toolchain files (here called “cross files”) to specify which tools should be used when building for a given target. Thanks to being written in an INI-like format, they are very straightforward:

$ cat meson_aarch64_fbsd_cross.txt
[binaries]
c = '/home/marco/.local/bin/aarch64-pc-freebsd-clang'
cpp = '/home/marco/.local/bin/aarch64-pc-freebsd-clang++'
ld = '/usr/bin/ld.lld'
ar = '/usr/bin/llvm-ar'
objcopy = '/usr/bin/llvm-objcopy'
strip = '/usr/bin/llvm-strip'

[properties]
ld_args = ['--sysroot=/home/marco/farm_tree']

[host_machine]
system = 'freebsd'
cpu_family = 'aarch64'
cpu = 'aarch64'
endian = 'little'

This cross-file can then be specified to meson setup using the --cross-file option 5, with everything else remaining the same as with every other Meson build.

And, well, this is basically it: like with CMake, the whole process is relatively painless and foolproof. For the sake of completeness, this is how to build dav1d, VideoLAN’s AV1 decoder, for aarch64-pc-freebsd:

$ git clone
Cloning into 'dav1d'...
warning: redirecting to
remote: Enumerating objects: 164, done.
remote: Counting objects: 100% (164/164), done.
remote: Compressing objects: 100% (91/91), done.
remote: Total 9377 (delta 97), reused 118 (delta 71), pack-reused 9213
Receiving objects: 100% (9377/9377), 3.42 MiB | 54.00 KiB/s, done.
Resolving deltas: 100% (7068/7068), done.
$ meson setup build --cross-file ../meson_aarch64_fbsd_cross.txt --buildtype release
The Meson build system
Version: 0.56.2
Source dir: /home/marco/dav1d
Build dir: /home/marco/dav1d/build
Build type: cross build
Project name: dav1d
Project version: 0.8.1
C compiler for the host machine: /home/marco/.local/bin/aarch64-pc-freebsd-clang (clang 11.0.1 "clang version 11.0.1")
C linker for the host machine: /home/marco/.local/bin/aarch64-pc-freebsd-clang ld.lld 11.0.1
[ output cut ]
$ meson compile -C build
Found runner: ['/usr/bin/ninja']
ninja: Entering directory `build'
[129/129] Linking target tests/seek_stress
$ mkdir dest
$ env DESTDIR=$PWD/dest meson install -C build
ninja: Entering directory `build'
[1/11] Generating vcs_version.h with a custom command
Installing src/ to /home/marco/dav1d/dest/usr/local/lib
Installing tools/dav1d to /home/marco/dav1d/dest/usr/local/bin
Installing /home/marco/dav1d/include/dav1d/common.h to /home/marco/dav1d/dest/usr/local/include/dav1d
Installing /home/marco/dav1d/include/dav1d/data.h to /home/marco/dav1d/dest/usr/local/include/dav1d
Installing /home/marco/dav1d/include/dav1d/dav1d.h to /home/marco/dav1d/dest/usr/local/include/dav1d
Installing /home/marco/dav1d/include/dav1d/headers.h to /home/marco/dav1d/dest/usr/local/include/dav1d
Installing /home/marco/dav1d/include/dav1d/picture.h to /home/marco/dav1d/dest/usr/local/include/dav1d
Installing /home/marco/dav1d/build/include/dav1d/version.h to /home/marco/dav1d/dest/usr/local/include/dav1d
Installing /home/marco/dav1d/build/meson-private/dav1d.pc to /home/marco/dav1d/dest/usr/local/lib/pkgconfig
$ file dest/usr/local/bin/dav1d
dest/usr/local/bin/dav1d: ELF 64-bit LSB executable, ARM aarch64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/, for FreeBSD 13.0 (1300136), FreeBSD-style, with debug_info, not stripped

Bonus: static linking with musl and Alpine Linux

Statically linking a C or C++ program can sometimes save you a lot of library compatibility headaches, especially when you can’t control what’s going to be installed on whatever you plan to target. Building static binaries is however quite complex on GNU/Linux, due to Glibc actively discouraging people from linking it statically. 6

Musl is a lean, highly compatible libc implementation for Linux that plays much nicer with static linking, and it is now shipped by most major distributions. The musl packages they provide often suffice to build your code statically, at least as long as you plan to stick with plain C.

The situation gets much more complicated if you plan to use C++, or if you need additional components. Any library shipped by a GNU/Linux system (like libstdc++, libz, libffi and so on) is usually only built for Glibc, meaning that any library you wish to use must be rebuilt to target Musl. This also applies to libstdc++, which inevitably means either recompiling GCC or building a copy of LLVM’s libc++.

Thankfully, there are several distributions out there that target “Musl-plus-Linux”, everyone’s favorite being Alpine Linux. It is thus possible to apply the same strategy we used above to obtain an x86_64-pc-linux-musl sysroot, complete with libraries and packages built against Musl, which Clang can then use to generate 100% static executables.

Setting up an Alpine container

A good starting point is the minirootfs tarball provided by Alpine, which is meant for containers and tends to be very small:

$ wget -qO - | gunzip | sudo tar xfp - -C ~/alpine_tree

It is now possible to chroot inside the image in ~/alpine_tree and set it up, installing all the packages you may need. I generally prefer systemd-nspawn in lieu of chroot, as it is vastly better and less error-prone. 7

$ sudo systemd-nspawn -D alpine_tree
Spawning container alpinetree on /home/marco/alpine_tree.
Press ^] three times within 1s to kill container.

We can now (optionally) switch to the edge branch of Alpine for newer packages by editing /etc/apk/repositories, and then install the required packages containing any static libraries required by the code we want to build:

alpinetree:~# cat /etc/apk/repositories
alpinetree:~# apk update
v3.13.0-1030-gbabf0a1684 []
v3.13.0-1035-ga3ac7373fd []
OK: 14029 distinct packages available
alpinetree:~# apk upgrade
OK: 6 MiB in 14 packages
alpinetree:~# apk add g++ libc-dev
(1/14) Installing libgcc (10.2.1_pre1-r3)
(2/14) Installing libstdc++ (10.2.1_pre1-r3)
(3/14) Installing binutils (2.35.1-r1)
(4/14) Installing libgomp (10.2.1_pre1-r3)
(5/14) Installing libatomic (10.2.1_pre1-r3)
(6/14) Installing libgphobos (10.2.1_pre1-r3)
(7/14) Installing gmp (6.2.1-r0)
(8/14) Installing isl22 (0.22-r0)
(9/14) Installing mpfr4 (4.1.0-r0)
(10/14) Installing mpc1 (1.2.1-r0)
(11/14) Installing gcc (10.2.1_pre1-r3)
(12/14) Installing musl-dev (1.2.2-r1)
(13/14) Installing libc-dev (0.7.2-r3)
(14/14) Installing g++ (10.2.1_pre1-r3)
Executing busybox-1.33.0-r1.trigger
OK: 188 MiB in 28 packages
alpinetree:~# apk add zlib-dev zlib-static
(1/3) Installing pkgconf (1.7.3-r0)
(2/3) Installing zlib-dev (1.2.11-r3)
(3/3) Installing zlib-static (1.2.11-r3)
Executing busybox-1.33.0-r1.trigger
OK: 189 MiB in 31 packages

In this case I installed g++ and libc-dev in order to get a static copy of libstdc++, a static libc.a (Musl) and their respective headers. I also installed zlib-dev and zlib-static to get zlib’s headers and libz.a, respectively. As a general rule, Alpine ships static libraries inside -static packages, and headers inside somepackage-dev packages. 8

Also, remember to run apk upgrade inside the sysroot every once in a while, in order to keep the local Alpine install up to date.

Compiling static C++ programs

With everything now set, it’s only a matter of running clang++ with the right --target and --sysroot:

$ clang++ -B$HOME/.llvm/bin --gcc-toolchain=$HOME/alpine_tree/usr --target=x86_64-alpine-linux-musl --sysroot=$HOME/alpine_tree -L$HOME/alpine_tree/lib -std=c++17 -o zpipe -lz -static
$ file zpipe
zpipe: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, with debug_info, not stripped

The extra --gcc-toolchain is optional, but may help solve issues where compilation fails due to Clang not detecting where GCC and the various crt*.o files reside in the sysroot. The extra -L for /lib is required because Alpine splits its libraries between /usr/lib and /lib, and the latter is not automatically picked up by clang, which usually expects libraries to be located in $SYSROOT/usr/lib.

Writing a wrapper for static linking with Musl and Clang

Musl packages usually come with the upstream-provided shims musl-gcc and musl-clang, which wrap the system compilers in order to build and link with the alternative libc. In order to provide a similar level of convenience, I quickly whipped up the following Perl script:

#!/usr/bin/env perl

use strict;
use utf8;
use warnings;
use v5.30;

use List::Util 'any';

my $ALPINE_DIR = $ENV{ALPINE_DIR} // "$ENV{HOME}/alpine_tree";
my $TOOLS_DIR = $ENV{TOOLS_DIR} // "$ENV{HOME}/.llvm/bin";

my $CMD_NAME = $0 =~ /\+\+/ ? 'clang++' : 'clang';
my $STATIC = $0 =~ /static/;

sub clang {
	exec $CMD_NAME, @_ or return 0;
}

sub main {
	my $compile = any { /^\s*(?:-c|-S)\s*$/ } @ARGV;

	my @args = (
		"-B$TOOLS_DIR",
		"--gcc-toolchain=$ALPINE_DIR/usr",
		"--sysroot=$ALPINE_DIR",
		"-L$ALPINE_DIR/lib",
		'--target=x86_64-alpine-linux-musl',
		@ARGV,
	);

	unshift @args, '-static' if $STATIC and not $compile;

	exit 1 unless clang @args;
}

main;


This wrapper is more refined than the FreeBSD AArch64 wrappers above: it infers C++ when invoked as clang++, and always forces -static when called through a symlink containing “static” in its name:

$ ls -la $(which musl-clang++)
lrwxrwxrwx    10 marco marco 26 Jan 21:49  /home/marco/.local/bin/musl-clang++ -> musl-clang
$ ls -la $(which musl-clang++-static)
lrwxrwxrwx    10 marco marco 26 Jan 22:03  /home/marco/.local/bin/musl-clang++-static -> musl-clang
$ musl-clang++-static -std=c++17 -o zpipe -lz # automatically infers C++ and -static
$ file zpipe
zpipe: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, with debug_info, not stripped
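
The symlinks themselves are trivial to set up; the sketch below assumes the Perl wrapper above was saved as ~/.local/bin/musl-clang:

```shell
mkdir -p "$HOME/.local/bin"
cd "$HOME/.local/bin"

# every variant points at the same wrapper, which inspects $0 to pick
# clang vs clang++ and to decide whether to force -static
ln -sf musl-clang musl-clang++
ln -sf musl-clang musl-clang-static
ln -sf musl-clang musl-clang++-static
```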

It is thus possible to force Clang to always link -static by setting $CC to musl-clang-static, which can be useful with build systems that don’t play nicely with static linking. In my experience, the worst offenders in this regard are Autotools (sometimes) and poorly written Makefiles.


Cross-compiling C and C++ is, and will probably always be, an annoying task, but it has become much better since LLVM became production-ready and widely available. Clang’s -target option has saved me countless hours that I would have otherwise wasted building and re-building GCC and Binutils over and over again.

Alas, all that glitters is not gold, as is often the case. There is still code around that only builds with GCC due to nasty GNUisms (I’m looking at you, Glibc). Cross-compiling for Windows/MSVC is also borderline unfeasible due to how messy the whole Visual Studio toolchain is.

Furthermore, while targeting arbitrary triples with Clang is now definitely simpler than it was, it still pales in comparison to how trivial cross-compiling with Rust or Go is.

One special mention among these new languages should go to Zig, and its goal to also make C and C++ easy to build for other platforms.

The zig cc and zig c++ commands have the potential to become an amazing swiss-army knife tool for cross compiling, thanks to Zig shipping a copy of clang and large chunks of projects such as Glibc, Musl, libc++ and MinGW. Any required library is then built on-the-fly when required:

$ zig c++ --target=x86_64-windows-gnu -o str.exe
$ file str.exe
str.exe: PE32+ executable (console) x86-64, for MS Windows

While I think this is not yet perfect, it already feels almost like magic. I dare say this might really become a killer selling point for Zig, making it attractive even to those who are not interested in using the language itself.

  1. If the transfer is happening across a network and not locally, it’s a good idea to compress the output tarball. 

  2. Sadly, macOS is not supported anymore by LLD due to Mach-O support being largely unmaintained and left to rot over the last years. This leaves ld64 (or a cross-build thereof, if you manage to build it) as the only way to link Mach-O executables (unless ld.bfd from binutils still supports it). 

  3. llvm-mc can be used as a (very cumbersome) assembler but it’s poorly documented. Like gcc, the clang frontend can act as an assembler, making as often redundant. 

  4. This is without talking about those criminals who hardcode gcc in their build scripts, but this is a rant better left for another day. 

  5. In the same fashion, it is also possible to tune the native toolchain for the current machine using a native file and the --native-file toggle. 

  6. Glibc’s builtin name resolution system (NSS) is one of the main culprits, due to its heavy reliance on plugins loaded at runtime via dlopen()/dlsym(); this design is meant to provide support for extra third-party resolvers, such as mDNS. 

  7. systemd-nspawn can also double as a lighter alternative to VMs, using the --boot option to spawn an init process inside the container. See this very helpful gist to learn how to make bootable containers for distributions based on OpenRC, like Alpine. 

  8. Sadly, for reasons unknown to me, Alpine does not ship the static version of certain libraries (such as libfmt). Given that embedding a local copy of third-party dependencies is common practice in C++ nowadays, this is not too problematic. 

NAT66: The good, the bad, the ugly

NAT (and NAPT) is one of those technologies everyone has a strong opinion about. For years it has been the necessary evil and the invaluable (yet massive) hack that kept IPv4 from falling apart in the face of its abysmally small 32-bit address space - which was, to be honest, a perfectly defensible choice for the time the protocol was designed, when computers cost a small fortune and were as big as lorries.

The Internet Protocol, version 4, has been abused for far too long now. We made it into the fundamental building block of the modern Internet, a network of a scale it was never designed for. It is well past time to put it to rest and replace it with its controversial, yet problem-solving 128-bit grandchild, IPv6.

So, what should be the place for NAT in the new Internet, which makes the return to the end-to-end principle one of its main tenets?

NAT66 misses the point

Well, none, according to the IETF, which has for years tried to dissuade everyone from dabbling with NAT66 (the name by which NAT is known on IPv6); and this is not without good reason. For too long, the supposedly stateless, connectionless layer-3 IP protocol has been bent by NAT gateways into an impromptu “stateful”, connection-oriented protocol, just to meet the demands of an infinite number of devices trying to connect to the Internet.

This is without considering the false sense of security that address masquerading provides; I cannot recall how many times I’ve heard people say that (gasp!) NAT is a fundamental piece in the security of their internal networks (it’s not).

Given that the immensity of the IPv6 address space allows providers to give out full /64s to customers, I had always failed to see the point of NAT66: it always felt to me like a feature fundamentally dead in the water, a solution in search of a problem, ready to be misused.

Well, this was before discovering how cheap some hosting services could be.

Being cheap: the root of all evils

I was quite glad to see a while ago that my VPS provider had announced IPv6 support; thanks to this, I would finally be able to provide IPv6 access to the guests of the VPNs I host on that VPS, without incurring the latency penalties caused by tunneling the traffic over good old services such as Hurricane Electric and SixXS 1. Hooray!

My excitement was unfortunately not going to last for long: it was barbarically butchered when I discovered that, despite having been granted a full /32 (2^96 addresses), my provider had decided to give its VPS customers just a single /128 address.


Oh. God. Why.

Given that IPv6 connectivity was something I really wished for my OpenVPN setup, this was quite a setback. I was left with fundamentally only two reasonable choices:

  1. Get a free /64 from a Hurricane Electric tunnel, and allocate IPv6s for VPN guests from there;
  2. Be a very bad person, set up NAT66, and feel ashamed.

Hurricane Electric is, without doubt, the most orthodox option between the two; it’s free of charge, it gives out /64s, and it’s quite easy to set up.

The main showstopper here is definitely the increased network latency added by two layers of tunneling (VPN -> 6in4 -> IPv6 internet); given that native IPv6 source addresses are by default preferred to IPv4, it would have been bad if having a public v6 address slowed down connections that previously had tolerable latencies. Especially if there was a way to get decent RTTs for both IPv6 and IPv4…

And so, with a pang of guilt, I shamefully committed the worst crime.

How to get away with NAT66

Setting up NAT usually relies on picking a specially reserved, privately-routable IP range, to avoid our internal network structure conflicting with the outer network’s routing rules (conflicts may still happen, though, under multiple misconfigured levels of masquerading).

The IPv6 equivalent of,, and was defined in 2005 by the IETF, not without a whole deal of confusion first, with the Unique Local Addresses (ULA) specification, RFC 4193. This RFC defines the not publicly routable fc00::/7 block, which is supposed to be used to define local subnets, without the uniqueness guarantees of 2000::/3 (the range from which Global Unicast Addresses (GUA) - i.e. the Internet - are allocated for the time being). Of it, fd00::/8 is the only block actually defined so far, and it’s meant to contain all of the /48s your private network may ever need.
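
As an aside, RFC 4193 recommends generating the 40 bits that follow the fd prefix pseudo-randomly, to keep the chance of collisions low should two private networks ever be merged. A quick sketch of how such a /48 prefix may be generated from /dev/urandom:

```shell
# pick 5 random bytes (40 bits) and format them as fdxx:xxxx:xxxx::/48
bytes=$(od -An -N5 -tx1 /dev/urandom)
ula=$(printf 'fd%s:%s%s:%s%s::/48' $bytes)  # word splitting feeds the 5 bytes to printf
echo "$ula"
```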

The next step was to configure my OpenVPN instances to hand out ULAs from subnets of my choice to clients, by adding the following lines at the end of my config:

server-ipv6 fd00::1:8:0/112
push "route-ipv6 2000::/3"

I resorted to picking fd00::1:8:0/112 for the UDP server and fd00::1:9:0/112 for the TCP one, due to a limitation in OpenVPN only accepting masks from /64 to /112.

Given that I also want traffic towards the Internet to be forwarded via my NAT, it is also necessary to instruct the server to push a default route to its clients at connection time.

$ ping fd00::1:8:1
PING fd00::1:8:1(fd00::1:8:1) 56 data bytes
64 bytes from fd00::1:8:1: icmp_seq=1 ttl=64 time=40.7 ms

The clients and servers were now able to ping each other through their local addresses without any issue, but the outer network was still unreachable.

I continued the creation of this abomination by configuring the kernel to forward IPv6 packets; this is achieved by setting net.ipv6.conf.all.forwarding = 1, either via sysctl or in sysctl.conf (from here on, the rest of this article assumes that you are on Linux).

# cat /etc/sysctl.d/30-ipforward.conf 
net.ipv6.conf.all.forwarding = 1
# sysctl -p /etc/sysctl.d/30-ipforward.conf
net.ipv6.conf.all.forwarding = 1

Afterwards, the only step left was to set up NAT66, which can be easily done by configuring the stateful firewall provided by Linux’ packet filter.
I personally prefer (and use) the newer nftables over the {ip,ip6,arp,eb}tables mess it is supposed to supersede, because I find it quite a bit less moronic and clearer to understand (despite the relatively scarce documentation available online, which is sometimes a pain; I wish Linux had OpenBSD’s excellent pf…).
Feel free to use ip6tables, if that’s what you are already using, and you don’t really feel the need to migrate your ruleset to nft.
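
For reference, the core NAT66 rules from my nftables setup translate roughly to ip6tables as follows (an untested sketch; MY_EXTERNAL_IPV6 is a placeholder, as in the nft ruleset):

```shell
ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
ip6tables -A FORWARD -s fd00::1:0:0/96 -i tun+ -o eth0 -j ACCEPT
ip6tables -t nat -A POSTROUTING -s fd00::1:0:0/96 -o eth0 -j SNAT --to-source MY_EXTERNAL_IPV6
```

Note that NAT support in ip6tables requires a reasonably recent kernel (3.7+).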

This is a shortened, summarised snippet of the rules I had to put into my nftables.conf to make NAT66 work; I’ve left the IPv4 rules in for the sake of completeness.

PS: Remember to change MY_EXTERNAL_IPVx to your actual public IPv4/IPv6 addresses!

table inet filter {
  chain forward {
    type filter hook forward priority 0;

    # allow established/related connections
    ct state {established, related} accept
    # early drop of invalid connections
    ct state invalid drop

    # Allow packets to be forwarded from the VPNs to the outer world
    ip saddr iifname "tun*" oifname eth0 accept
    # Using fd00::1:0:0/96 allows matching
    # every fd00::1:xxxx:0/112 I set up
    ip6 saddr fd00::1:0:0/96 iifname "tun*" oifname eth0 accept
  }
}

# IPv4 NAT table
table ip nat {
  chain prerouting {
    type nat hook prerouting priority 0; policy accept;
  }

  chain postrouting {
    type nat hook postrouting priority 100; policy accept;

    ip saddr oif "eth0" snat to MY_EXTERNAL_IPV4
  }
}

# IPv6 NAT table
table ip6 nat {
  chain prerouting {
    type nat hook prerouting priority 0; policy accept;
  }

  chain postrouting {
    type nat hook postrouting priority 100; policy accept;

    # Creates a SNAT (source NAT) rule that changes the source
    # address of the outbound IPs with the external IP of eth0
    ip6 saddr fd00::1:0:0/96 oif "eth0" snat to MY_EXTERNAL_IPV6
  }
}

The ip6 nat table and the forward chain in the inet filter table are the most important things to notice here, given that they respectively configure the packet filter to perform NAT66 and to forward packets from the tun* interfaces to the outer world.

After applying the new ruleset with the nft -f <path/to/ruleset> command, I was ready to witness the birth of my little sinful setup. The only thing left was to ping a known IPv6 address from one of the clients, to ensure that forwarding and NAT were working fine. One of the Google DNS servers would suffice:
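As a side note, nft can also parse a ruleset without applying it, which is handy to catch syntax errors before replacing a live firewall configuration (the path below is just an example, use wherever you keep your ruleset):

```shell
# Dry run: parse and validate the ruleset without committing it
nft -c -f /etc/nftables.conf

# Apply the ruleset (nft applies a file atomically)
nft -f /etc/nftables.conf
```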

$ ping 2001:4860:4860::8888
PING 2001:4860:4860::8888(2001:4860:4860::8888) 56 data bytes
64 bytes from 2001:4860:4860::8888: icmp_seq=1 ttl=54 time=48.7 ms
64 bytes from 2001:4860:4860::8888: icmp_seq=2 ttl=54 time=47.5 ms
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=49.1 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=50.8 ms

Perfect! NAT66 was working, in its full evil glory, and the client was able to reach the outer IPv6 Internet with round-trip times as fast as IPv4's. What was left now was to check if the clients were able to resolve AAAA records; given that I was already using Google's DNS in /etc/resolv.conf, it should have worked straight away:

$ ping facebook.com
PING facebook.com (<IPv4 address>) 56(84) bytes of data.
$ ping -6 facebook.com
PING facebook.com(2a03:2880:f129:83:face:b00c:0:25de) 56 data bytes

What? Why is ping trying to reach Facebook on its IPv4 address by default instead of trying IPv6 first?

One workaround always leads to another

Well, it turned out that Glibc’s getaddrinfo() function, which is generally used to perform DNS resolution, uses a precedence system to correctly prioritise source-destination address pairs.

I started to suspect that the default behaviour of getaddrinfo() could be to consider local addresses (including ULAs) as a separate case from global IPv6 ones; so, I went to check gai.conf, the configuration file for getaddrinfo().

label ::1/128       0  # Local IPv6 address
label ::/0          1  # Every IPv6
label 2002::/16     2  # 6to4 IPv6
label ::/96         3  # Deprecated IPv4-compatible IPv6 address prefix
label ::ffff:0:0/96 4  # Every IPv4 address
label fec0::/10     5  # Deprecated
label fc00::/7      6  # ULA
label 2001:0::/32   7  # Teredo addresses

What is shown in the snippet above is the default label table used by getaddrinfo().
As I suspected, a ULA address is labeled differently (6) than a global unicast one (1), and, because the default behaviour specified by RFC 3484 is to prefer pairs of source-destination addresses with the same label, the IPv4 address is picked over the IPv6 ULA every time.
Damn, I was so close to committing the perfect crime.

To make this mess finally functional, I had to resort to yet another ugly hack (as if NAT66 using ULAs wasn't enough), by setting a new label table in gai.conf that doesn't treat ULAs as a special case.

label ::1/128       0  # Local IPv6 address
label ::/0          1  # Every IPv6
label 2002::/16     2  # 6to4 IPv6
label ::/96         3  # Deprecated IPv4-compatible IPv6 address prefix
label ::ffff:0:0/96 4  # Every IPv4 address
label fec0::/10     5  # Deprecated
label 2001:0::/32   7  # Teredo addresses

By omitting the label for fc00::/7, ULAs are now grouped together with global unicast addresses, and NATed IPv6 connectivity is used by default.

$ ping google.com
PING google.com(2a00:1450:4007:80f::200e) 56 data bytes

In conclusion

So, yes, NAT66 can be done and it works, but that doesn’t make it any less than the messy, dirty hack it is. For the sake of getting IPv6 connectivity behind a provider too cheap to give its customers a /64, I had to forgo end-to-end connectivity, hacking Unique Local Addresses to achieve something they weren’t really devised for.

Was it worth it? Perhaps. My ping under the VPN is now as good on IPv6 as it is on IPv4, and everything works fine, but this came at the cost of an overcomplicated network configuration. All of this could have been much simpler if everybody understood how IPv6 differs from IPv4, and that handing out a single address is simply not the right way to allocate addresses to your subscribers anymore.

The NATs we use today are relics of a past where the address space was so small that we had to break the Internet in order to save it. They were a mistake made to fix an even bigger one, a blunder whose effects we now have the chance to undo. We should start taking the ongoing transition period as seriously as it deserves, to avoid falling into the same wrong assumptions yet again.

  1. Ironically, SixXS closed last June because “many ISPs offer IPv6 now”. 

First post!

Welcome, internet stranger, into my humble blog!

I hope I’ll be able to find the time to post, at least once a month, a new story or tutorial about Linux, FreeBSD, system administration or similar CS-related topics; more often than not, it will be a full report on something I’ve been tinkering with during my research activity (or just because I liked it).
Everything I publish is written without any pretence of being in any way relevant, correct or even interesting; the only thing I hope for is for this blog to be at least in some way useful to myself, to avoid forgetting what I’ve learned and the mistakes I have already made.


From the very first moment I turned on a PC in the ’90s, I’ve been hooked on computers and anything revolving around them. Exploring and better understanding how these machines work has been an immense source of entertainment and learning for me, leading to countless hours spent trying every piece of software, gadget or device I was able to lay my hands on.
I cannot state for certain how many times I found myself delving heart and soul into some convoluted install of fundamentally every Linux and BSD distribution I could find, sometimes even resorting to compiling some of them from scratch, just for the sake of better understanding how these complex yet fascinating software packages tie together to create a fully-fledged, functional operating system.

Being as passionate as I was (and still am) about software made the choice of enrolling in Computer Engineering extremely simple. During my university years, I had the time and opportunity to further improve my coding skills, focusing especially on mastering C and C++, Go and, more recently, Rust. I have a passion for compiler technology, and I’ve dabbled in programming language design for a while, implementing a functioning self-hosting compiler, which I hope will be the topic of a future, fully dedicated blog post.

What do you do?

After working for two years at the University of Bologna as both a researcher on distributed ledgers and a system administrator, I decided to change my professional path and become an embedded developer; I now work mostly on the ESP32 platform.

My other hobbies include languages (the ones spoken by people, at least for now!), cooking, writing, astronomy, biology, and science in general.

You wrote something wrong!

If you notice something amiss with either my writing or the contents of this blog, do not hesitate to contact me (in any way you prefer). I plan to add Disqus support directly on blog posts, but in the meantime don’t be shy to simply fork and PR me on GitHub, if you wish.