Cross compiling made easy, using Clang and LLVM

Anyone who has ever tried to cross-compile a C/C++ program knows how big a PITA the whole process can be. The main reasons for this sorry state of things are generally how byzantine build systems tend to be when configuring for cross-compilation, and how messy it is to set up your cross toolchain in the first place.

One of the main culprits in my experience has been the GNU toolchain, the decades-old behemoth upon which the POSIXish world has been built for years. Like many compilers of yore, GCC and its binutils brethren were never designed to support multiple targets within a single setup, the only supported approach being to install a full cross build for each triple you wish to target on any given host.

For instance, assuming you wish to build something for FreeBSD on your Linux machine using GCC, you need:

  • A GCC + binutils install for your host triplet (e.g. x86_64-pc-linux-gnu or similar);
  • A complete GCC + binutils install for your target triplet (e.g. x86_64-unknown-freebsd12.2-gcc, as, nm, etc.);
  • A sysroot containing the necessary libraries and headers, which you can either build yourself or promptly steal from a running installation of FreeBSD.

This process is sometimes made simpler by Linux distributions or hardware vendors offering a selection of prepackaged compilers, but this will never suffice due to the sheer number of possible host-target combinations. It often means you have to build the whole toolchain yourself, something that, unless you rock quite a beefy CPU, tends to be a massive waste of time and power.

Clang as a cross compiler

This annoying limitation is one of the reasons why I got interested in LLVM (and thus Clang), which is by design a full-fledged cross toolchain, mostly compatible with GNU. A single install can emit and compile code for every supported target, as long as a complete sysroot is available at build time.
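
Recent Clang versions can quickly show which backends a given build supports via the -print-targets flag; stock distribution builds usually enable all of them (output abridged here):

$ clang -print-targets
  Registered Targets:
    aarch64    - AArch64 (little endian)
    [...]
    x86-64     - 64-bit X86: EM64T and AMD64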

I found this to be a game-changer and, while it still can’t compete in convenience with modern language toolchains (such as Go’s gc and GOARCH/GOOS), it’s night and day better than the rigmarole of setting up GNU toolchains. You can now just fetch whatever your favorite package manager has available in its repositories (as long as it’s not extremely old), and avoid messing around with multiple installs of GCC.

Until a few years ago, though, the whole process wasn’t as smooth as it could be. Due to LLVM not yet having a complete toolchain, you were still supposed to provide a binutils build specific to your target. While this is generally much more tolerable than building the whole compiler (binutils is relatively fast to build), it was still somewhat of a nuisance, and I’m glad that llvm-mc (LLVM’s integrated assembler) and lld (the universal linker) are finally stable and as flexible as the rest of LLVM.

With the toolchain now set, the next step is to obtain a sysroot that provides the headers and libraries needed to compile and link for your target.

Obtaining a sysroot

A super fast way to get a working sysroot for a given OS is to rip it straight out of an existing system (a Docker container image will often do, too). For instance, this is how I used tar over ssh as a quick way to extract a working sysroot from a FreeBSD 13-CURRENT AArch64 VM 1:

$ mkdir ~/farm_tree
$ ssh FARM64 'tar cf - /lib /usr/include /usr/lib /usr/local/lib /usr/local/include' | bsdtar xvf - -C $HOME/farm_tree/

Invoking the cross compiler

With everything set, it’s now only a matter of invoking Clang with the right arguments:

$  clang++ --target=aarch64-pc-freebsd --sysroot=$HOME/farm_tree -fuse-ld=lld -stdlib=libc++ -o zpipe zpipe.cc -lz --verbose
clang version 11.0.1
Target: aarch64-pc-freebsd
Thread model: posix
InstalledDir: /usr/bin
 "/usr/bin/clang-11" -cc1 -triple aarch64-pc-freebsd -emit-obj -mrelax-all -disable-free -disable-llvm-verifier -discard-value-names -main-file-name -mrelocation-model static -mframe-pointer=non-leaf -fno-rounding-math -mconstructor-aliases -munwind-tables -fno-use-init-array -target-cpu generic -target-feature +neon -target-abi aapcs -fallow-half-arguments-and-returns -fno-split-dwarf-inlining -debugger-tuning=gdb -v -resource-dir /usr/lib/clang/11.0.1 -isysroot /home/marco/farm_tree -internal-isystem /home/marco/farm_tree/usr/include/c++/v1 -fdeprecated-macro -fdebug-compilation-dir /home/marco/dummies/cxx -ferror-limit 19 -fno-signed-char -fgnuc-version=4.2.1 -fcxx-exceptions -fexceptions -faddrsig -o /tmp/zpipe-54f1b1.o -x c++
clang -cc1 version 11.0.1 based upon LLVM 11.0.1 default target x86_64-pc-linux-gnu
#include "..." search starts here:
#include <...> search starts here:
End of search list.
 "/usr/bin/ld.lld" --sysroot=/home/marco/farm_tree --eh-frame-hdr -dynamic-linker /libexec/ --enable-new-dtags -o zpipe /home/marco/farm_tree/usr/lib/crt1.o /home/marco/farm_tree/usr/lib/crti.o /home/marco/farm_tree/usr/lib/crtbegin.o -L/home/marco/farm_tree/usr/lib /tmp/zpipe-54f1b1.o -lz -lc++ -lm -lgcc --as-needed -lgcc_s --no-as-needed -lc -lgcc --as-needed -lgcc_s --no-as-needed /home/marco/farm_tree/usr/lib/crtend.o /home/marco/farm_tree/usr/lib/crtn.o
$ file zpipe
zpipe: ELF 64-bit LSB executable, ARM aarch64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, for FreeBSD 13.0 (1300136), FreeBSD-style, with debug_info, not stripped

In the snippet above, I managed to compile and link a C++ file into an executable for AArch64 FreeBSD, all while using just the clang and lld I already had installed on my GNU/Linux system.

More in detail:

  1. --target switches the LLVM default target (x86_64-pc-linux-gnu) to aarch64-pc-freebsd, thus enabling cross-compilation.
  2. --sysroot forces Clang to assume the specified path as root when searching headers and libraries, instead of the usual paths. Note that sometimes this setting might not be enough, especially if the target uses GCC and Clang somehow fails to detect its install path. This can be easily fixed by specifying --gcc-toolchain, which clarifies where to search for GCC installations.
  3. -fuse-ld=lld tells Clang to use lld instead of whatever default linker the platform uses. As I will explain below, it’s highly unlikely that the system linker understands foreign targets, while LLD can natively support almost every binary format and OS 2.
  4. -stdlib=libc++ is needed here due to Clang failing to detect that FreeBSD on AArch64 uses LLVM’s libc++ instead of GCC’s libstdc++.
  5. -lz is specified to show how Clang can also resolve other libraries inside the sysroot without issues, in this case zlib.

The final test is now to copy the binary to our target system (i.e. the VM we ripped the sysroot from before) and check if it works as expected:

$ rsync zpipe FARM64:"~"
$ ssh FARM64
FreeBSD-ARM64-VM $ chmod +x zpipe
FreeBSD-ARM64-VM $ ldd zpipe
zpipe:
	libz.so.6 => /lib/libz.so.6 (0x4029e000)
	libc++.so.1 => /usr/lib/libc++.so.1 (0x402e4000)
	libcxxrt.so.1 => /lib/libcxxrt.so.1 (0x403da000)
	libm.so.5 => /lib/libm.so.5 (0x40426000)
	libc.so.7 => /lib/libc.so.7 (0x40491000)
	libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x408aa000)
FreeBSD-ARM64-VM $ ./zpipe -h
zpipe usage: zpipe [-d] < source > dest

Success! It’s now possible to use this cross toolchain to build larger programs, and below I’ll give a quick example of how to use it to build real projects.

Optional: creating an LLVM toolchain directory

LLVM provides a mostly compatible counterpart for almost every tool shipped by binutils (with the notable exception of as 3), prefixed with llvm-.

The most critical of these is LLD, a drop-in replacement for a platform’s system linker, capable of replacing both GNU ld.bfd and gold on GNU/Linux or BSD, and Microsoft’s LINK.EXE when targeting MSVC. It supports linking on (almost) every platform supported by LLVM, thus removing the nuisance of having multiple target-specific linkers installed.

Both GCC and Clang support using ld.lld instead of the system linker (which may well be lld, like on FreeBSD) via the command line switch -fuse-ld=lld.

In my experience, Clang’s driver may get confused when picking the right linker on some uncommon platforms, especially before version 11.0. For some reason, clang sometimes decided to outright ignore the -fuse-ld=lld switch and picked the system linker (ld.bfd in my case), which does not support AArch64.

A fast solution to this is to create a toolchain directory containing symlinks that rename the LLVM utilities to the standard binutils programs:

$  ls -la ~/.llvm/bin/
Permissions Size User  Group Date Modified Name
lrwxrwxrwx    16 marco marco  3 Aug  2020  ar -> /usr/bin/llvm-ar
lrwxrwxrwx    12 marco marco  6 Aug  2020  ld -> /usr/bin/lld
lrwxrwxrwx    21 marco marco  3 Aug  2020  objcopy -> /usr/bin/llvm-objcopy
lrwxrwxrwx    21 marco marco  3 Aug  2020  objdump -> /usr/bin/llvm-objdump
lrwxrwxrwx    20 marco marco  3 Aug  2020  ranlib -> /usr/bin/llvm-ranlib
lrwxrwxrwx    21 marco marco  3 Aug  2020  strings -> /usr/bin/llvm-strings
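
A handful of shell commands is all it takes to populate such a directory (the paths below match the listing above):

$ mkdir -p ~/.llvm/bin
$ ln -s /usr/bin/lld ~/.llvm/bin/ld
$ for tool in ar objcopy objdump ranlib strings; do ln -s /usr/bin/llvm-$tool ~/.llvm/bin/$tool; done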

The -B switch can then be used to force Clang (or GCC) to search for the required tools in this directory, preventing the issue from ever occurring:

$  clang++ -B$HOME/.llvm/bin -stdlib=libc++ --target=aarch64-pc-freebsd --sysroot=$HOME/farm_tree -std=c++17 -o mvd-farm64 mvd.cc
$ file mvd-farm64
mvd-farm64: ELF 64-bit LSB executable, ARM aarch64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, for FreeBSD 13.0, FreeBSD-style, with debug_info, not stripped

Optional: creating Clang wrappers to simplify cross-compilation

I happened to notice that certain build systems (by which I mean some poorly written Makefiles and, sometimes, Autotools) tend to misbehave when $CC, $CXX or $LD contain spaces or multiple parameters. This might become a recurring issue if we need to invoke clang with several arguments. 4

Given also how unwieldy it is to remember to write all of the parameters correctly everywhere, I usually write quick wrappers for clang and clang++ in order to simplify building for a certain target:

$ cat ~/.local/bin/aarch64-pc-freebsd-clang
#!/usr/bin/env sh

exec /usr/bin/clang -B$HOME/.llvm/bin --target=aarch64-pc-freebsd --sysroot=$HOME/farm_tree "$@"
$ cat ~/.local/bin/aarch64-pc-freebsd-clang++
#!/usr/bin/env sh

exec /usr/bin/clang++ -B$HOME/.llvm/bin -stdlib=libc++ --target=aarch64-pc-freebsd --sysroot=$HOME/farm_tree "$@"	

If created in a directory inside $PATH, these scripts can be used anywhere as standalone commands:

$ aarch64-pc-freebsd-clang++ -o tst tst.cc -static
$ file tst
tst: ELF 64-bit LSB executable, ARM aarch64, version 1 (FreeBSD), statically linked, for FreeBSD 13.0 (1300136), FreeBSD-style, with debug_info, not stripped

Cross-building with Autotools, CMake and Meson

Autotools, CMake, and Meson are arguably the most popular build systems for C and C++ open source projects (sorry, SCons). All three support cross-compilation out of the box, albeit with some caveats.


Autotools

Over the years, Autotools has been famous for being horrendously clunky and breaking easily. While this reputation is definitely well earned, it’s still widely used by most large GNU projects. Given it’s been around for decades, it’s quite easy to find support online when something goes awry (sadly, the same can’t be said about writing .ac files). Compared to its more modern brethren, it doesn’t require any toolchain file or extra configuration when cross compiling, being driven only by command line options.

A ./configure script (either generated by autoconf or shipped in a tarball alongside the source code) usually supports the --host flag, which lets the user specify the triple of the host on which the final artifacts are meant to be run.

This flag activates cross compilation, causing the “auto-something” tools to try to detect the correct compiler for the target, which they generally assume to be called some-triple-gcc or some-triple-g++.

For instance, let’s try to configure binutils version 2.35.1 for aarch64-pc-freebsd, using the Clang wrapper introduced above:

$ tar xvf binutils-2.35.1.tar.xz
$ mkdir binutils-2.35.1/build # always create a build directory to avoid messing up the source tree
$ cd binutils-2.35.1/build
$ env CC='aarch64-pc-freebsd-clang' CXX='aarch64-pc-freebsd-clang++' AR=llvm-ar ../configure --build=x86_64-pc-linux-gnu --host=aarch64-pc-freebsd --enable-gold=yes
checking build system type... x86_64-pc-linux-gnu
checking host system type... aarch64-pc-freebsd
checking target system type... aarch64-pc-freebsd
checking for a BSD-compatible install... /usr/bin/install -c
checking whether ln works... yes
checking whether ln -s works... yes
checking for a sed that does not truncate output... /usr/bin/sed
checking for gawk... gawk
checking for aarch64-pc-freebsd-gcc... aarch64-pc-freebsd-clang
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... yes
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether aarch64-pc-freebsd-clang accepts -g... yes
checking for aarch64-pc-freebsd-clang option to accept ISO C89... none needed
checking whether we are using the GNU C++ compiler... yes
checking whether aarch64-pc-freebsd-clang++ accepts -g... yes

The invocation of ./configure above specifies that I want autotools to:

  1. Configure for building on an x86_64-pc-linux-gnu host (which I specified using --build);
  2. Build binaries that will run on aarch64-pc-freebsd, using the --host switch;
  3. Use the Clang wrappers made above as C and C++ compilers;
  4. Use llvm-ar as the target ar.

I also asked for the Gold linker to be built; it’s written in C++, which makes it a good test of how well our improvised toolchain handles compiling C++.

If the configuration step doesn’t fail for some reason (it shouldn’t), it’s now time to run GNU Make to build binutils:

$ make -j16 # because I have 16 threads on my system
[lots of output]
$ mkdir dest
$ make DESTDIR=$PWD/dest install # install into a fake tree

There should now be executable files and libraries inside the fake tree generated by make install. A quick test using file confirms they have been correctly built for aarch64-pc-freebsd:

$ file dest/usr/local/bin/ld.gold
dest/usr/local/bin/ld.gold: ELF 64-bit LSB executable, ARM aarch64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, for FreeBSD 13.0 (1300136), FreeBSD-style, with debug_info, not stripped


CMake

The simplest way to configure CMake for an arbitrary target is to write a toolchain file. These usually consist of a list of declarations that instruct CMake on how a given toolchain should be used, specifying parameters like the target operating system, the CPU architecture, the name of the C++ compiler, and so on.

One reasonable toolchain file for the aarch64-pc-freebsd triple can be written as follows:

set(CMAKE_SYSTEM_NAME FreeBSD)
set(CMAKE_SYSTEM_PROCESSOR aarch64)

set(CMAKE_SYSROOT $ENV{HOME}/farm_tree)

set(CMAKE_C_COMPILER aarch64-pc-freebsd-clang)
set(CMAKE_CXX_COMPILER aarch64-pc-freebsd-clang++)
set(CMAKE_AR llvm-ar)

# these variables tell CMake to avoid using any binary it finds in
# the sysroot, while picking headers and libraries exclusively from it
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)

In this file, I specified the wrappers created above as the C and C++ cross compilers for the target. It should also be possible to use plain Clang with the right arguments, but that’s much less straightforward and potentially more error-prone.
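
For reference, a rough sketch of what that might look like (untested; CMAKE_<LANG>_COMPILER_TARGET is honored by compilers, like Clang, that understand --target):

set(CMAKE_C_COMPILER clang)
set(CMAKE_C_COMPILER_TARGET aarch64-pc-freebsd)
set(CMAKE_CXX_COMPILER clang++)
set(CMAKE_CXX_COMPILER_TARGET aarch64-pc-freebsd)
set(CMAKE_CXX_FLAGS_INIT "-stdlib=libc++")
set(CMAKE_EXE_LINKER_FLAGS_INIT "-fuse-ld=lld")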

In any case, it is very important to set the CMAKE_SYSROOT and CMAKE_FIND_ROOT_PATH_MODE_* variables, otherwise CMake may wrongly pick up packages from the host, with disastrous results.

It is now only a matter of setting CMAKE_TOOLCHAIN_FILE with the path to the toolchain file when configuring a project. To better illustrate this, I will now also build {fmt} (which is an amazing C++ library you should definitely use) for aarch64-pc-freebsd:

$  git clone https://github.com/fmtlib/fmt
Cloning into 'fmt'...
remote: Enumerating objects: 45, done.
remote: Counting objects: 100% (45/45), done.
remote: Compressing objects: 100% (33/33), done.
remote: Total 24446 (delta 17), reused 12 (delta 7), pack-reused 24401
Receiving objects: 100% (24446/24446), 12.08 MiB | 2.00 MiB/s, done.
Resolving deltas: 100% (16551/16551), done.
$ cd fmt
$ cmake -B build -G Ninja -DCMAKE_TOOLCHAIN_FILE=$HOME/toolchain-aarch64-freebsd.cmake -DBUILD_SHARED_LIBS=ON -DFMT_TEST=OFF .
-- CMake version: 3.19.4
-- The CXX compiler identification is Clang 11.0.1
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /home/marco/.local/bin/aarch64-pc-freebsd-clang++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Version: 7.1.3
-- Build type: Release
-- Performing Test has_std_11_flag
-- Performing Test has_std_11_flag - Success
-- Performing Test has_std_0x_flag
-- Performing Test has_std_0x_flag - Success
-- Performing Test FMT_HAS_VARIANT
-- Performing Test FMT_HAS_VARIANT - Success
-- Required features: cxx_variadic_templates
-- Performing Test HAS_NULLPTR_WARNING
-- Performing Test HAS_NULLPTR_WARNING - Success
-- Looking for strtod_l
-- Looking for strtod_l - not found
-- Configuring done
-- Generating done
-- Build files have been written to: /home/marco/fmt/build

Compared with Autotools, the command line passed to cmake is very simple and doesn’t need much explanation. After the configuration step has finished, it’s only a matter of compiling the project and getting ninja or make to install the resulting artifacts somewhere.

$ cmake --build build
[4/4] Creating library symlink
$ mkdir dest
$ env DESTDIR=$PWD/dest cmake --build build -- install
[0/1] Install the project...
-- Install configuration: "Release"
-- Installing: /home/marco/fmt/dest/usr/local/lib/libfmt.so.7.1.3
-- Installing: /home/marco/fmt/dest/usr/local/lib/libfmt.so.7
-- Installing: /home/marco/fmt/dest/usr/local/lib/libfmt.so
-- Installing: /home/marco/fmt/dest/usr/local/lib/cmake/fmt/fmt-config.cmake
-- Installing: /home/marco/fmt/dest/usr/local/lib/cmake/fmt/fmt-config-version.cmake
-- Installing: /home/marco/fmt/dest/usr/local/lib/cmake/fmt/fmt-targets.cmake
-- Installing: /home/marco/fmt/dest/usr/local/lib/cmake/fmt/fmt-targets-release.cmake
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/args.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/chrono.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/color.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/compile.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/core.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/format.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/format-inl.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/locale.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/os.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/ostream.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/posix.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/printf.h
-- Installing: /home/marco/fmt/dest/usr/local/include/fmt/ranges.h
-- Installing: /home/marco/fmt/dest/usr/local/lib/pkgconfig/fmt.pc
$  file dest/usr/local/lib/libfmt.so.7.1.3
dest/usr/local/lib/libfmt.so.7.1.3: ELF 64-bit LSB shared object, ARM aarch64, version 1 (FreeBSD), dynamically linked, for FreeBSD 13.0 (1300136), with debug_info, not stripped


Meson

Like CMake, Meson relies on toolchain files (here called “cross files”) to specify which tools should be used when building for a given target. Thanks to being written in a TOML-like language, they are very straightforward:

$ cat meson_aarch64_fbsd_cross.txt
[binaries]
c = '/home/marco/.local/bin/aarch64-pc-freebsd-clang'
cpp = '/home/marco/.local/bin/aarch64-pc-freebsd-clang++'
ld = '/usr/bin/ld.lld'
ar = '/usr/bin/llvm-ar'
objcopy = '/usr/bin/llvm-objcopy'
strip = '/usr/bin/llvm-strip'

[properties]
c_link_args = ['--sysroot=/home/marco/farm_tree']
cpp_link_args = ['--sysroot=/home/marco/farm_tree']

[host_machine]
system = 'freebsd'
cpu_family = 'aarch64'
cpu = 'aarch64'
endian = 'little'

This cross file can then be passed to meson setup using the --cross-file option 5, with everything else remaining the same as in any other Meson build.

And, well, this is basically it: like with CMake, the whole process is relatively painless and foolproof. For the sake of completeness, this is how to build dav1d, VideoLAN’s AV1 decoder, for aarch64-pc-freebsd:

$ git clone https://code.videolan.org/videolan/dav1d
Cloning into 'dav1d'...
warning: redirecting to https://code.videolan.org/videolan/dav1d.git/
remote: Enumerating objects: 164, done.
remote: Counting objects: 100% (164/164), done.
remote: Compressing objects: 100% (91/91), done.
remote: Total 9377 (delta 97), reused 118 (delta 71), pack-reused 9213
Receiving objects: 100% (9377/9377), 3.42 MiB | 54.00 KiB/s, done.
Resolving deltas: 100% (7068/7068), done.
$ cd dav1d
$ meson setup build --cross-file ../meson_aarch64_fbsd_cross.txt --buildtype release
The Meson build system
Version: 0.56.2
Source dir: /home/marco/dav1d
Build dir: /home/marco/dav1d/build
Build type: cross build
Project name: dav1d
Project version: 0.8.1
C compiler for the host machine: /home/marco/.local/bin/aarch64-pc-freebsd-clang (clang 11.0.1 "clang version 11.0.1")
C linker for the host machine: /home/marco/.local/bin/aarch64-pc-freebsd-clang ld.lld 11.0.1
[ output cut ]
$ meson compile -C build
Found runner: ['/usr/bin/ninja']
ninja: Entering directory `build'
[129/129] Linking target tests/seek_stress
$ mkdir dest
$ env DESTDIR=$PWD/dest meson install -C build
ninja: Entering directory `build'
[1/11] Generating vcs_version.h with a custom command
Installing src/ to /home/marco/dav1d/dest/usr/local/lib
Installing tools/dav1d to /home/marco/dav1d/dest/usr/local/bin
Installing /home/marco/dav1d/include/dav1d/common.h to /home/marco/dav1d/dest/usr/local/include/dav1d
Installing /home/marco/dav1d/include/dav1d/data.h to /home/marco/dav1d/dest/usr/local/include/dav1d
Installing /home/marco/dav1d/include/dav1d/dav1d.h to /home/marco/dav1d/dest/usr/local/include/dav1d
Installing /home/marco/dav1d/include/dav1d/headers.h to /home/marco/dav1d/dest/usr/local/include/dav1d
Installing /home/marco/dav1d/include/dav1d/picture.h to /home/marco/dav1d/dest/usr/local/include/dav1d
Installing /home/marco/dav1d/build/include/dav1d/version.h to /home/marco/dav1d/dest/usr/local/include/dav1d
Installing /home/marco/dav1d/build/meson-private/dav1d.pc to /home/marco/dav1d/dest/usr/local/lib/pkgconfig
$ file dest/usr/local/bin/dav1d
dest/usr/local/bin/dav1d: ELF 64-bit LSB executable, ARM aarch64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, for FreeBSD 13.0 (1300136), FreeBSD-style, with debug_info, not stripped

Bonus: static linking with musl and Alpine Linux

Statically linking a C or C++ program can sometimes save you a lot of library compatibility headaches, especially when you can’t control what’s installed on the systems you plan to target. Building static binaries is, however, quite complex on GNU/Linux, due to Glibc actively discouraging static linking. 6

Musl is a very compatible libc implementation for Linux that plays much nicer with static linking, and it is now shipped by most major distributions. These packages often suffice to build your code statically, at least as long as you plan to stick to plain C.

The situation gets much more complicated if you plan to use C++, or if you need additional components. Any library shipped by a GNU/Linux system (like libstdc++, libz, libffi and so on) is usually only built for Glibc, meaning that any library you wish to use must be rebuilt to target Musl. This also applies to libstdc++, which inevitably means either recompiling GCC or building a copy of LLVM’s libc++.

Thankfully, there are several distributions out there built on “Musl plus Linux”, everyone’s favorite being Alpine Linux. It is thus possible to apply the same strategy used above to obtain an x86_64-pc-linux-musl sysroot, complete with libraries and packages built for Musl, which Clang can then use to generate 100% static executables.

Setting up an Alpine container

A good starting point is the minirootfs tarball provided by Alpine, which is meant for containers and tends to be very small:

$ wget -qO - https://dl-cdn.alpinelinux.org/alpine/v3.13/releases/x86_64/alpine-minirootfs-3.13.1-x86_64.tar.gz | gunzip | sudo tar xfp - -C ~/alpine_tree

It is now possible to chroot inside the image in ~/alpine_tree and set it up, installing all the packages you may need. I generally prefer using systemd-nspawn in lieu of chroot, due to it being vastly better and less error-prone. 7

$ sudo systemd-nspawn -D alpine_tree
Spawning container alpinetree on /home/marco/alpine_tree.
Press ^] three times within 1s to kill container.

We can now (optionally) switch to the edge branch of Alpine for newer packages by editing /etc/apk/repositories, and then install the packages containing any static libraries needed by the code we want to build:

alpinetree:~# cat /etc/apk/repositories
https://dl-cdn.alpinelinux.org/alpine/edge/main
https://dl-cdn.alpinelinux.org/alpine/edge/community
alpinetree:~# apk update
v3.13.0-1030-gbabf0a1684 [https://dl-cdn.alpinelinux.org/alpine/edge/main]
v3.13.0-1035-ga3ac7373fd [https://dl-cdn.alpinelinux.org/alpine/edge/community]
OK: 14029 distinct packages available
alpinetree:~# apk upgrade
OK: 6 MiB in 14 packages
alpinetree:~# apk add g++ libc-dev
(1/14) Installing libgcc (10.2.1_pre1-r3)
(2/14) Installing libstdc++ (10.2.1_pre1-r3)
(3/14) Installing binutils (2.35.1-r1)
(4/14) Installing libgomp (10.2.1_pre1-r3)
(5/14) Installing libatomic (10.2.1_pre1-r3)
(6/14) Installing libgphobos (10.2.1_pre1-r3)
(7/14) Installing gmp (6.2.1-r0)
(8/14) Installing isl22 (0.22-r0)
(9/14) Installing mpfr4 (4.1.0-r0)
(10/14) Installing mpc1 (1.2.1-r0)
(11/14) Installing gcc (10.2.1_pre1-r3)
(12/14) Installing musl-dev (1.2.2-r1)
(13/14) Installing libc-dev (0.7.2-r3)
(14/14) Installing g++ (10.2.1_pre1-r3)
Executing busybox-1.33.0-r1.trigger
OK: 188 MiB in 28 packages
alpinetree:~# apk add zlib-dev zlib-static
(1/3) Installing pkgconf (1.7.3-r0)
(2/3) Installing zlib-dev (1.2.11-r3)
(3/3) Installing zlib-static (1.2.11-r3)
Executing busybox-1.33.0-r1.trigger
OK: 189 MiB in 31 packages

In this case I installed g++ and libc-dev in order to get a static copy of libstdc++, a static libc.a (Musl) and their respective headers. I also installed zlib-dev and zlib-static to get zlib’s headers and libz.a, respectively. As a general rule, Alpine ships static libraries inside -static packages, and headers inside somepackage-dev packages. 8
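
For instance, this is how apk can be queried for the zlib-related packages available (output abridged):

alpinetree:~# apk search zlib
zlib-dev-1.2.11-r3
zlib-static-1.2.11-r3
[...]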

Also, remember to run apk upgrade inside the sysroot every once in a while, to keep the local Alpine install up to date.

Compiling static C++ programs

With everything now set, it’s only a matter of running clang++ with the right --target and --sysroot:

$ clang++ -B$HOME/.llvm/bin --gcc-toolchain=$HOME/alpine_tree/usr --target=x86_64-alpine-linux-musl --sysroot=$HOME/alpine_tree -L$HOME/alpine_tree/lib -std=c++17 -o zpipe zpipe.cc -lz -static
$ file zpipe
zpipe: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, with debug_info, not stripped

The extra --gcc-toolchain is optional, but may help solve issues where compilation fails due to Clang not detecting where GCC and the various crt*.o files reside in the sysroot. The extra -L for /lib is required because Alpine splits its libraries between /usr/lib and /lib, and the latter is not automatically picked up by clang, which usually expects libraries to be located in $SYSROOT/usr/lib.

Writing a wrapper for static linking with Musl and Clang

Musl packages usually come with the upstream-provided shims musl-gcc and musl-clang, which wrap the system compilers in order to build and link with the alternative libc. In order to provide a similar level of convenience, I quickly whipped up the following Perl script:

#!/usr/bin/env perl

use strict;
use utf8;
use warnings;
use v5.30;

use List::Util 'any';

my $ALPINE_DIR = $ENV{ALPINE_DIR} // "$ENV{HOME}/alpine_tree";
my $TOOLS_DIR = $ENV{TOOLS_DIR} // "$ENV{HOME}/.llvm/bin";

my $CMD_NAME = $0 =~ /\+\+/ ? 'clang++' : 'clang';
my $STATIC = $0 =~ /static/;

sub clang {
	exec $CMD_NAME, @_ or return 0;
}

sub main {
	my $compile = any { /^\s*-c|-S\s*$/ } @ARGV;

	# flags mirroring the manual invocation above
	my @args = (
		"-B$TOOLS_DIR",
		"--gcc-toolchain=$ALPINE_DIR/usr",
		'--target=x86_64-alpine-linux-musl',
		"--sysroot=$ALPINE_DIR",
		"-L$ALPINE_DIR/lib",
		@ARGV,
	);

	unshift @args, '-static' if $STATIC and not $compile;

	exit 1 unless clang @args;
}

main;


This wrapper is more refined than the FreeBSD AArch64 one above. For instance, it infers C++ mode if invoked as clang++, and always forces -static if called through a symlink whose name contains static:

$ ls -la $(which musl-clang++)
lrwxrwxrwx    10 marco marco 26 Jan 21:49  /home/marco/.local/bin/musl-clang++ -> musl-clang
$ ls -la $(which musl-clang++-static)
lrwxrwxrwx    10 marco marco 26 Jan 22:03  /home/marco/.local/bin/musl-clang++-static -> musl-clang
$ musl-clang++-static -std=c++17 -o zpipe zpipe.cc -lz # automatically infers C++ and -static
$ file zpipe
zpipe: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, with debug_info, not stripped

It is thus possible to force Clang to always link -static by setting $CC to musl-clang-static, which can be useful with build systems that don’t play nicely with static linking. From my experience, the worst offenders in this regard are Autotools (sometimes) and poorly written Makefiles.
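
For instance, a hypothetical Autotools project could be coerced into a fully static build like this:

$ env CC=musl-clang-static CXX=musl-clang++-static ./configure
$ make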


Conclusions

Cross-compiling C and C++ is and will probably always be an annoying task, but it has gotten much better since LLVM became production-ready and widely available. Clang’s -target option has saved me countless hours that I would have otherwise wasted building and re-building GCC and Binutils over and over again.

Alas, not all that glitters is gold, as is often the case. There is still code around that only builds with GCC due to nasty GNUisms (I’m looking at you, Glibc). Cross compiling for Windows/MSVC is also borderline unfeasible due to how messy the whole Visual Studio toolchain is.

Furthermore, while targeting arbitrary triples with Clang is now definitely simpler than it used to be, it still pales in comparison to how trivial cross compiling is with Rust or Go.

One special mention among these newer languages should go to Zig, and its goal of also making C and C++ easy to build for other platforms.

The zig cc and zig c++ commands have the potential to become an amazing Swiss Army knife for cross compiling, thanks to Zig shipping a copy of clang and large chunks of projects such as Glibc, Musl, libc++ and MinGW. Any required library is then built on the fly when needed:

$ zig c++ --target=x86_64-windows-gnu -o str.exe str.cc
$ file str.exe
str.exe: PE32+ executable (console) x86-64, for MS Windows

While I think this is not yet perfect, it already feels almost like magic. I dare say this might really become a killer selling point for Zig, making it attractive even to those who are not interested in using the language itself.

  1. If the transfer is happening across a network and not locally, it’s a good idea to compress the output tarball. 

  2. Sadly, macOS is not supported anymore by LLD due to Mach-O support being largely unmaintained and left to rot over the last years. This leaves ld64 (or a cross-build thereof, if you manage to build it) as the only way to link Mach-O executables (unless ld.bfd from binutils still supports it). 

  3. llvm-mc can be used as a (very cumbersome) assembler but it’s poorly documented. Like gcc, the clang frontend can act as an assembler, making as often redundant. 

  4. This is without talking about those criminals who hardcode gcc in their build scripts, but this is a rant better left for another day. 

  5. In the same fashion, it is also possible to tune the native toolchain for the current machine using a native file and the --native-file toggle. 

  6. Glibc’s built-in name resolution system (NSS) is one of the main culprits: it relies heavily on dlopen()/dlsym(), due to its plugin-based design meant to support extra third-party resolvers such as mDNS. 

  7. systemd-nspawn can also double as a lighter alternative to VMs, using the --boot option to spawn an init process inside the container. See this very helpful gist to learn how to make bootable containers for distributions based on OpenRC, like Alpine. 

  8. Sadly, Alpine for reasons unknown to me, does not ship the static version of certain libraries (like libfmt). Given that embedding a local copy of third party dependencies is common practice nowadays for C++, this is not too problematic. 

NAT66: The good, the bad, the ugly

NAT (and NAPT) is one of those technologies anyone has a strong opinion about. It has been for years the necessary evil and invaluable (yet massive) hack that kept IPv4 from falling apart in the face of its abysmally small 32-bit address space - which was, to be honest, a perfectly understandable choice for the time the protocol was designed, when computers cost a small fortune and were as big as lorries.

The Internet Protocol, version 4, has now been abused for far too long. We made it the fundamental building block of the modern Internet, a network of a scale it was never designed for. It is well past time to put it to rest and replace it with its controversial, yet problem-solving, 128-bit grandchild, IPv6.

So, what should be the place for NAT in the new Internet, which makes the return to the end-to-end principle one of its main tenets?

NAT66 misses the point

Well, none, according to the IETF, which has for years tried to dissuade everyone from dabbling with NAT66 (the name by which NAT is known on IPv6); this is not without good reason, though. For too long, the supposedly stateless, connectionless, level-3 IP protocol has been turned by NAT gateways into an impromptu “stateful”, connection-oriented protocol, just to meet the demands of an infinite number of devices trying to connect to the Internet.

This is without considering the false sense of security that address masquerading provides; I cannot recall how many times I’ve heard people claim that (gasp!) NAT is a fundamental piece of the security of their internal networks (it’s not).

Given that the immensity of the IPv6 address space allows providers to give out full /64s to customers, I had always failed to see the point of NAT66: it always felt to me like a feature fundamentally dead in the water, a solution seeking a problem, ready to be misused.

Well, this was before discovering how cheap some hosting services could be.

Being cheap: the root of all evils

I was quite glad to see a while ago that my VPS provider had announced IPv6 support; thanks to this, I would finally be able to provide IPv6 access to the guests of the VPNs I host on that VPS, without incurring the latency penalties caused by tunneling the traffic through good old services such as Hurricane Electric and SixXS 1. Hooray!

My excitement was unfortunately not going to last for long, and it was indeed barbarically butchered when I discovered that, despite having been granted a full /32 (2^96 IPs), my provider had decided to give its VPS customers just a single /128 address.


Oh. God. Why.

Given that IPv6 connectivity was something I really wished for my OpenVPN setup, this was quite a setback. I was left with fundamentally only two reasonable choices:

  1. Get a free /64 from a Hurricane Electric tunnel, and allocate IPv6s for VPN guests from there;
  2. Be a very bad person, set up NAT66, and feel ashamed.

Hurricane Electric is, without doubt, the most orthodox of the two options; it’s free of charge, it hands out /64s, and it’s quite easy to set up.

The main showstopper here is definitely the increased network latency added by two layers of tunneling (VPN -> 6in4 -> IPv6 internet), and, given that native IPv6 source addresses are preferred over IPv4 by default, it would have been bad if having a public v6 address caused a slowdown of connections with otherwise tolerable latencies. Especially if there was a way to get decent RTTs on both IPv6 and IPv4…

And so, with a pang of guilt, I shamefully committed the worst crime.

How to get away with NAT66

The process of setting up NAT usually relies on picking a specially reserved, privately-routable IP range, to keep our internal network structure from conflicting with the outer network’s routing rules (conflicts may still happen, though, under multiple misconfigured levels of masquerading).

The IPv6 equivalent of 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 was defined in 2005 by the IETF, not without a whole deal of confusion first, with the Unique Local Addresses (ULA) specification (RFC 4193). This RFC defines the not-publicly-routable fc00::/7 block, which is supposed to be used to define local subnets, without the uniqueness guarantees of 2000::/3 (the range from which Global Unicast Addresses (GUA) - i.e. the Internet - are allocated for the time being). Of it, fd00::/8 is the only block actually defined so far, and it’s meant to provide all of the /48s your private network may ever need.
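
Note that RFC 4193 recommends picking the 40 bits that follow the fd00::/8 prefix (the “Global ID”) at random, to minimize the risk of collisions when private networks are interconnected. A quick way to generate one (the output below is just an example, yours will differ):

$ od -An -N5 -tx1 /dev/urandom
 8b 12 47 fa 9c

The five bytes above would yield the prefix fd8b:1247:fa9c::/48.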

The next step was to configure my OpenVPN instances to give out ULAs from subnets of my choice to clients, by adding the following lines at the end of my config:

server-ipv6 fd00::1:8:0/112
push "route-ipv6 2000::/3"

I resorted to picking fd00::1:8:0/112 for the UDP server and fd00::1:9:0/112 for the TCP one, due to a limitation in OpenVPN, which only accepts prefixes between /64 and /112.

Given that I also want traffic towards the Internet to be forwarded via my NAT, it is also necessary to instruct the server to push a default route to its clients at connection time.

$ ping fd00::1:8:1
PING fd00::1:8:1(fd00::1:8:1) 56 data bytes
64 bytes from fd00::1:8:1: icmp_seq=1 ttl=64 time=40.7 ms

The clients and servers were now able to ping each other through their local addresses without any issue, but the outer network was still unreachable.

I continued the creation of this abomination by configuring the kernel to forward IPv6 packets; this is achieved by setting net.ipv6.conf.all.forwarding = 1 with sysctl or in sysctl.conf (from now on, the rest of this article assumes that you are on Linux).

# cat /etc/sysctl.d/30-ipforward.conf
net.ipv6.conf.all.forwarding = 1
# sysctl -p /etc/sysctl.d/30-ipforward.conf
net.ipv6.conf.all.forwarding = 1

Afterwards, the only step left was to set up NAT66, which can be easily done by configuring the stateful firewall provided by Linux’s packet filter.
I personally prefer (and use) the newer nftables over the {ip,ip6,arp,eth}tables mess it is supposed to supersede, because I find it quite less moronic and clearer to understand (despite the relatively scarce documentation available online, which is sometimes a pain; I wish Linux had OpenBSD’s excellent pf…).
Feel free to use ip6tables, if that’s what you are already using and you don’t feel the need to migrate your ruleset to nft.

This is a shortened, summarised snippet of the rules that I’ve had to put into my nftables.conf to make NAT66 work; I’ve also left the IPv4 rules in for the sake of completeness.

PS: Remember to change MY_EXTERNAL_IPVx with your IPv4/6!

table inet filter {
  chain forward {
    type filter hook forward priority 0;

    # allow established/related connections
    ct state {established, related} accept

    # early drop of invalid connections
    ct state invalid drop

    # Allow packets to be forwarded from the VPNs to the outer world
    # (10.0.0.0/8 below is a stand-in for the VPNs' IPv4 range)
    ip saddr 10.0.0.0/8 iifname "tun*" oifname eth0 accept

    # Using fd00::1:0:0/96 allows to match for
    # every fd00::1:xxxx:0/112 I set up
    ip6 saddr fd00::1:0:0/96 iifname "tun*" oifname eth0 accept
  }
}

# IPv4 NAT table
table ip nat {
  chain prerouting {
    type nat hook prerouting priority 0; policy accept;
  }

  chain postrouting {
    type nat hook postrouting priority 100; policy accept;

    ip saddr 10.0.0.0/8 oif "eth0" snat to MY_EXTERNAL_IPV4
  }
}

# IPv6 NAT table
table ip6 nat {
  chain prerouting {
    type nat hook prerouting priority 0; policy accept;
  }

  chain postrouting {
    type nat hook postrouting priority 100; policy accept;

    # Creates a SNAT (source NAT) rule that changes the source
    # address of the outbound IPs with the external IP of eth0
    ip6 saddr fd00::1:0:0/96 oif "eth0" snat to MY_EXTERNAL_IPV6
  }
}

The table ip6 nat table and the chain forward in table inet filter are the most important bits here, given that they respectively configure the packet filter to perform NAT66 and to forward packets from the tun* interfaces to the outer world.

After applying the new ruleset with the nft -f <path/to/ruleset> command, I was ready to witness the birth of my little sinful setup. The only thing left was to ping a known IPv6 from one of the clients, to ensure that forwarding and NAT were working fine. One of the Google DNS servers would suffice:

$ ping 2001:4860:4860::8888
PING 2001:4860:4860::8888(2001:4860:4860::8888) 56 data bytes
64 bytes from 2001:4860:4860::8888: icmp_seq=1 ttl=54 time=48.7 ms
64 bytes from 2001:4860:4860::8888: icmp_seq=2 ttl=54 time=47.5 ms
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=49.1 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=50.8 ms

Perfect! NAT66 was working, in all its evil glory, and the client was able to reach the outer IPv6 Internet with round-trip times as fast as IPv4’s. All that was left was to check whether the clients could resolve AAAA records; given that I was already using Google’s DNS in /etc/resolv.conf, it should have worked straight away:

$ ping facebook.com
PING facebook.com (...) 56(84) bytes of data.
$ ping -6 facebook.com
PING facebook.com(2a03:2880:f129:83:face:b00c:0:25de) 56 data bytes

What? Why is ping trying to reach Facebook on its IPv4 address by default instead of trying IPv6 first?

One workaround always leads to another

Well, it turned out that Glibc’s getaddrinfo() function, which is generally used to perform DNS resolution, uses a precedence system to correctly prioritise source-destination address pairs.
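
A quick way to observe this sorting in action is to dump the addresses getaddrinfo() returns, in order (a minimal sketch, with error handling kept to a bare minimum):

#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s HOST\n", argv[0]);
        return 1;
    }

    struct addrinfo hints = { .ai_family = AF_UNSPEC, .ai_socktype = SOCK_STREAM };
    struct addrinfo *res;

    if (getaddrinfo(argv[1], NULL, &hints, &res)) {
        return 1;
    }

    /* getaddrinfo() returns the list already sorted according to the
       precedence rules of RFC 3484 (and /etc/gai.conf, on Glibc) */
    for (struct addrinfo *ai = res; ai; ai = ai->ai_next) {
        char buf[INET6_ADDRSTRLEN];
        const void *addr = ai->ai_family == AF_INET6
            ? (const void *) &((struct sockaddr_in6 *) ai->ai_addr)->sin6_addr
            : (const void *) &((struct sockaddr_in *) ai->ai_addr)->sin_addr;

        puts(inet_ntop(ai->ai_family, addr, buf, sizeof buf));
    }

    freeaddrinfo(res);
}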

I started to suspect that the default behaviour of getaddrinfo() could be to consider local addresses (including ULAs) as a separate case from global IPv6 ones; so, I checked gai.conf, the configuration file for the IPv6 DNS resolver.

label ::1/128       0  # Local IPv6 address
label ::/0          1  # Every IPv6
label 2002::/16     2  # 6to4 IPv6
label ::/96         3  # Deprecated IPv4-compatible IPv6 address prefix
label ::ffff:0:0/96 4  # Every IPv4 address
label fec0::/10     5  # Deprecated
label fc00::/7      6  # ULA
label 2001:0::/32   7  # Teredo addresses

What is shown in the snippet above is the default label table used by getaddrinfo().
As I suspected, ULA addresses are labeled differently (6) from global unicast ones (1) and, because the default behaviour specified by RFC 3484 is to prefer pairs of source and destination addresses with the same label, IPv4 gets picked over the IPv6 ULA every time.
Damn, I was so close to committing the perfect crime.

To make this mess finally functional, I had to resort to yet another ugly hack (as if NAT66 with ULAs wasn’t enough), by setting a new label table in gai.conf that makes no distinction between ULAs and GUAs.

label ::1/128       0  # Local IPv6 address
label ::/0          1  # Every IPv6
label 2002::/16     2  # 6to4 IPv6
label ::/96         3  # Deprecated IPv4-compatible IPv6 address prefix
label ::ffff:0:0/96 4  # Every IPv4 address
label fec0::/10     5  # Deprecated
label 2001:0::/32   7  # Teredo addresses

By omitting the label for fc00::/7, ULAs are now grouped together with GUAs, and NATted IPv6 connectivity is used by default.

$ ping google.com
PING google.com(2a00:1450:4007:80f::200e) 56 data bytes

In conclusion

So, yes, NAT66 can be done and it works, but that doesn’t make it any less of the messy, dirty hack it is. For the sake of getting IPv6 connectivity behind a provider too cheap to give its customers a /64, I had to forgo end-to-end connectivity, bending Unique Local Addresses to achieve something they were never really devised for.

Was it worth it? Perhaps. My ping through the VPN is now as good on IPv6 as it is on IPv4, and everything works fine, but this came at the cost of an overcomplicated network configuration. All of this could have been much simpler if everybody just understood how IPv6 differs from IPv4, and that giving out a single address is simply not the right way to allocate addresses to subscribers anymore.

The NATs we use today are relics of a past where the address space was so small that we had to break the Internet in order to save it. They were a mistake made to fix an even bigger one, a blunder whose effects we now have the chance to undo. We should just start taking the ongoing transition period as seriously as it deserves, to avoid falling into the same wrong assumptions yet again.

  1. Ironically, SixXS closed last June because “many ISPs offer IPv6 now”. 

First post!

Welcome, internet stranger, into my humble blog!

I hope I’ll be able to find the time to post a new story or tutorial at least once a month, about Linux, FreeBSD, system administration or similar CS-related topics; more often than not, these will be full reports on something I’ve been tinkering with during my research activity (or just because I liked it).
Everything I publish is written without any pretension of being relevant, correct or even interesting; the only thing I hope is for this blog to be at least somewhat useful to myself, to avoid forgetting what I’ve learned and which mistakes I’ve already made.


Who are you?

From the very first moment I turned on a PC in the ’90s, I’ve been hooked on computers and anything revolving around them. Exploring and better understanding how these machines work has been an immense source of entertainment and learning for me, leading to countless hours spent trying out every piece of software, gadget or device I could lay my hands on.
I cannot state for certain how many times I’ve found myself delving heart and soul into some convoluted install of fundamentally every Linux and BSD distribution I could find, sometimes even resorting to compiling some of them from scratch, just for the sake of better understanding how these complex yet fascinating software packages tie together to create a fully-fledged, functional operating system.

Being as passionate as I was (and still am) about software made the choice of enrolling in Computer Engineering extremely simple. During my university years, I had the time and opportunity to further improve my coding skills, striving in particular to master C and C++, Go and, more recently, Rust. I have a passion for compiler technology, and I’ve dabbled in programming language design for a while, implementing a functioning self-hosting compiler, which I hope will be the topic of a future, fully dedicated blog post.

What do you do?

After working for two years at the University of Bologna as both a researcher on distributed ledgers and a system administrator, I decided to change my professional path: I now work as an embedded developer, mostly on the ESP32 platform.

My other hobbies include languages (the ones spoken by people, at least for now!), cooking, writing, astronomy, biology, and science in general.

You wrote something wrong!

If you notice something amiss with either my writing or the contents of this blog, do not hesitate to contact me (in any way you prefer). I plan to add Disqus support directly on blog posts, but in the meantime don’t be shy to simply fork and PR me on GitHub, if you wish.