The cycle of development we’re most familiar with is: write code, compile your code, then run this code on the same machine you were writing it on. On most desktop OSes, you pick up a compiler by downloading one from your package manager. Xcode and Visual Studio are toolchains (actually IDEs) that leverage being platform-specific, each including tools tailored around the platform your code will run on and heavily showcasing the parent OS’s design language.

Yet you can also write code that runs on platforms you aren’t simultaneously coding on. Nearly every modern computer architecture has a C compiler you can download and run on your PC, usually as a binary plus utilities for a gcc or LLVM backend. In practice, using these tools means setting several non-obvious environment variables like CC and searching the internet for magic command line arguments (it takes a lot of work to convince a Makefile not to default to running gcc). Installing a compiler for another machine is easy, but getting a usable result takes trial and error.
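
To make that concrete, here’s a rough sketch of the kind of incantation a plain C project needs, assuming you’ve installed a hypothetical aarch64 cross toolchain (the triple and sysroot path are illustrative):

# Point the build at the cross compiler instead of the host gcc
export CC=aarch64-linux-gnu-gcc
make CC="$CC" CFLAGS="--sysroot=$HOME/sysroots/aarch64"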

If you’ve picked up Rust and are learning systems programming, you might ask: Does Rust, a language whose design addresses C’s inadequacies for developing secure software, also address its shortcomings in generating code for other platforms?

Let’s start with the tiered platform support system Rust maintains to track which platforms it supports and how complete that support is (from “it actually might not build at all” to “we test it each release”). On its own, this is a useful reference for the target identifiers of popular consumer OSes, embedded platforms, and some experimental ones (like Redox!). Most of these platforms can’t actually run rustc or cargo themselves, though.
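
As an aside, if you just want the raw list of target identifiers your installed compiler knows about, rustc will print it for you:

# Dump every target triple this build of rustc can generate code for
rustc --print target-list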

Rust makes up for this by advertising a strong cross-compilation story. Quoting from the announcement post for rustup target:

In practice, there are a lot of ducks you have to get in a row to make [cross-compilation] work: the appropriate Rust standard library, a cross-compiling C toolchain including linker, headers and binaries for C libraries, and so on. This typically involves pouring over various blog posts and package installers to get everything “just so”. And the exact set of tools can be different for every pair of host and target platforms.

The Rust community has been hard at work toward the goal of “push-button cross-compilation”. We want to provide a complete setup for a given host/target pair with the run of a single command.

This is an excellent goal given the infrastructure and design challenges ahead. And I wanted to learn more about this part of the article:

Cross-compilation is an imposing term for a common kind of desire:

  • You want to write, test and build code on your Mac, but deploy it to your Linux server.

This is exactly the scenario I’m in! But… understandably, the article doesn’t actually include an example of how to do this, because cross-compiling for another OS requires making several assumptions about the target platform that may not apply to everyone. Here is a recap of how I made it work for my project.

Deploying via git and building from git

In my case, the need to build binaries for Linux came up while working on my project edit-text, a collaborative text editor for the web written in Rust. I regularly test changes out in a sandbox environment, since you can’t rely on testing code locally to catch behavior that might only appear in production. Yet the issue I kept running into was how long it was taking to deploy to my $10 DigitalOcean server. I spent a long time rereading the same compiler logs before it actually dawned on me: I was compiling on my web server and not my laptop. And that’s really slow.

If you have a githook that takes new source code pushed via git and loads it into a Docker container, deploying via git just sends up your source directory and points it at a rustc compiler on the server. On each new deploy, your server rebuilds your Dockerfile from scratch, and unless you configure it to support caching, this throws away the benefit of quickly iterating on your code. If you want faster builds by having cargo incrementally cache compiled files between runs, you’ll find it’s complex to get right in your Dockerfile configuration but entirely natural to manage in your local development environment.
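
For context, the Dockerfile such a git-push deploy rebuilds on every push looks roughly like this; a sketch, not edit-text’s actual file (image tag and binary path are illustrative):

FROM rustlang/rust:nightly
ADD . /app
WORKDIR /app
RUN cargo build --release
CMD ./target/release/edit-server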

The approach I have the most experience with is to take the compilation environment and just run it locally on my machine. With Docker, we have an easy way to run Linux environments (even on a Mac) and to pin them to the same development environment I have on my server. Since Docker on my machine runs Linux in a hypervisor, local performance should still beat what I can do on my server, even with the overhead of not being the host OS.

Did you know Rust has a first-party story for cross-compiling for Linux using Docker? The rustlang/rust:nightly image can help you generate binaries for a Linux target compiled with nightly Rust, and can be invoked on demand from the command line using docker run. I developed this set of command line arguments to get cross-compilation with caching working:

# The mounted volumes cache cargo's git checkouts, the crates.io registry,
# and the installed rustup toolchains between builds.
docker run --rm \
  -v $DIR_GIT:/usr/local/cargo/git \
  -v $DIR_REGISTRY:/usr/local/cargo/registry \
  -v $DIR_RUSTUP:/usr/local/rustup/toolchains \
  -v $DIR_SELF:/app \
  -w /app/edit-server \
  -t edit-build-server \
  cargo build --release --target=x86_64-unknown-linux-gnu --bin edit-server --features 'standalone'
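
The $DIR_* variables aren’t anything special; they just point at host directories that persist between runs so the container can reuse them. A minimal sketch of how they might be defined (the paths are illustrative):

# Host-side cache directories mounted into the build container
export DIR_GIT="$HOME/.cache/edit-build/cargo-git"
export DIR_REGISTRY="$HOME/.cache/edit-build/cargo-registry"
export DIR_RUSTUP="$HOME/.cache/edit-build/rustup-toolchains"
export DIR_SELF="$(pwd)"
mkdir -p "$DIR_GIT" "$DIR_REGISTRY" "$DIR_RUSTUP"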

The binaries this produced, amazingly, worked when I copied them to my Linux server and ran them there. Compiling locally was marginally faster, too. But there were drawbacks to this approach to cross-compilation:

  • I had to manage and run Docker locally, which on macOS means keeping a Docker daemon running in the background.
  • I had to cache each cargo and rustup directory individually; managed separately from the rustup installation on my machine, these caches seemingly never got garbage collected. I accrued huge directories of cached files just for compiling for Linux.
  • Intermediate artifacts from successive builds seemed less likely to be cached between runs, meaning builds took longer than they should have on my machine.

I like that Docker gives a reproducible environment to build in (building Debian binaries in a Debian userland keeps things simple), but Rust’s own cross-compilation support might let me manage all the compilation artifacts that make modern Rust compile times tolerable from my normal local toolchain.

“rustup target add”

So far I’ve only mentioned the Rust compiler’s support for cross-compiling. There are actually a handful of components you need to make cross-compiling work:

  1. A compiler that supports your target
  2. Library headers to link your program against (if any)
  3. Shared library files to link against (if any)
  4. A “linker” for the target platform

Let’s start with compiler support. Passing --target when running cargo build or cargo run changes the machine code the compiler emits and the object file format it’s bundled into, so the output is supported by that OS. But first we have to install the pre-built standard library for that target, which is done with the command rustup target add. Installing support for another “target” is something you do once on your machine, and from then on cargo can use it via the --target argument.

To install a Linux target on my Mac, I ran:

rustup target add x86_64-unknown-linux-musl

This installs the new compilation target based on its target triplet, which means:

  • We’re compiling for the x86-64 instruction set (AMD64)
  • from an unknown (generic) vendor, targeting the Linux OS
  • compiled with the musl toolchain.
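
Once that finishes, you can confirm the target’s standard library is installed and start handing the triple to cargo (output will vary by machine):

# List the targets rustup has installed for the active toolchain
rustup target list | grep installed

# From now on, cargo accepts the triple directly
cargo build --target=x86_64-unknown-linux-musl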

A note on musl: the alternative to musl is GNU, as in GNU libc, which almost every Linux environment has installed anyway. So why choose musl? musl is designed to be statically linked into a binary rather than dynamically linked; this means I could compile a single binary and deploy it to any server without any requirements as to what libraries were installed on that OS. This might even mean we can skip steps 2) and 3) above. And I could switch back to GNU if it didn’t work out.
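
If a musl build succeeds, a quick sanity check is to run file on the result; a statically linked ELF binary reports itself as such (a sketch; the exact wording varies by version of file):

file target/x86_64-unknown-linux-musl/release/edit-server
# expect something like: ELF 64-bit LSB executable, x86-64, ..., statically linked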

Finally, we need a program that links our compiled objects together. On macOS, you can use brew to download a community-provided binary for Linux + musl cross-compilation. Just run this to install the toolchain including the command “x86_64-linux-musl-gcc”:

brew install FiloSottile/musl-cross/musl-cross

At this point we need to tell Rust about the linker. The official way to do this is to add a new file named .cargo/config in the root of your project and set its content to something similar:

[target.x86_64-unknown-linux-musl]
linker = "x86_64-linux-musl-gcc"

This should instruct Rust, whenever the target is set to --target=x86_64-unknown-linux-musl, to use the executable “x86_64-linux-musl-gcc” to link the compiled objects. But it seems to be the case that if you have any C code compiled by a Rust build script, you also have to set environment variables like TARGET_CC to get it working. So when my code started throwing linking errors, I just ran the following in my shell:

export TARGET_CC=x86_64-linux-musl-gcc

Thankfully, this made the compilation steps with linker errors work consistently.
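
TARGET_CC applies to whatever target you happen to be building. The cc crate (which many -sys build scripts use to compile their C code) also reads target-scoped variables if you’d rather pin the override to one triple; here is a sketch of the equivalent settings, assuming the FiloSottile musl-cross toolchain provides the matching archiver:

# Target-specific overrides consulted by the cc crate
export CC_x86_64_unknown_linux_musl=x86_64-linux-musl-gcc
export AR_x86_64_unknown_linux_musl=x86_64-linux-musl-ar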

Libraries and Linking

Rust’s musl target doesn’t link against shared libraries, that is, libraries that are independently installed and versioned on your system via a package manager. Shared libraries are ones your program links to at runtime (once the program starts), rather than literally embedding them in its binary at compile time (static libraries).

Sometimes you can work around the constraint of requiring static libraries without leaving the cargo ecosystem: some crates support a “bundled” feature, as in libsqlite3-sys, which compiles a static copy of the library during your build step and links it into your project. For example, the SQLite driver I was relying on had no problem being compiled with musl once I enabled the “bundled” feature; I didn’t have to apt-get install libsqlite3 on the remote platform, nor did I have to find headers that matched it. An app with only this requirement would be a solid use case for deploying binaries compiled with musl.
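
For reference, opting in is a one-line change in Cargo.toml; a sketch with an illustrative version number (in practice you enable the feature on whichever driver crate re-exports it):

[dependencies]
libsqlite3-sys = { version = "0.9", features = ["bundled"] }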

If your project depends on openssl, though, you’ll see this type of error midway through cargo build:

error: failed to run custom build command for `openssl-sys v0.9.33`
process didn't exit successfully: `/Users/timryan/edit-text/target/debug/build/openssl-sys-89bb414c25e8c29b/build-script-main` (exit code: 101)
--- stdout
[...]
--- stderr
thread 'main' panicked at '

Could not find directory of OpenSSL installation, and this `-sys` crate cannot
proceed without this knowledge. If OpenSSL is installed and this crate had
trouble finding it,  you can set the `OPENSSL_DIR` environment variable for the
compilation process.

Make sure you also have the development packages of openssl installed.
For example, `libssl-dev` on Ubuntu or `openssl-devel` on Fedora.

No need to check the exit code; this clearly isn’t building correctly. The error says “Could not find directory of OpenSSL installation”. This doesn’t mean I didn’t have OpenSSL installed on my computer (I did); it means the build script can’t find the headers and libraries it needs to compile against. Compiling a static binary with musl is now much more complicated if I need to download and compile an arbitrary OpenSSL dependency myself.

Why I need OpenSSL as a dependency to begin with: one of the libraries in edit-text’s software stack, reqwest, relies on native-tls to support encrypted downloads over HTTPS. reqwest is a programmatic HTTP client for Rust, and it uses native-tls to link against the OS’s native SSL implementation and expose an agnostic interface to it in order to support HTTPS. I can imagine a future reqwest feature that substitutes a rustls backend for native-tls, allowing me to compile all my crypto code without needing to touch gcc. But for now, since I don’t want the heavy lift of compiling OpenSSL myself, dynamic linking looks like the only way forward.

Debian Packaging

New plan: compile against the GNU toolchain and use dynamic linking. If we don’t want to cross-compile libraries ourselves, then we have to find a source of pre-compiled libraries and headers (which are sometimes distinct things). Since we’re moving away from musl, we’ll even need to bring our own copy of libc!

Luckily, we just have to recreate the environment a compiler on Linux would see. This turns out to be straightforward. When I’m compiling code on Linux and need to, say, link against OpenSSL, I can run the following:

sudo apt-get install libssl-dev

Now I can compile any binary that relies on OpenSSL headers, because they were installed to my system. Where are these files? One way to find out is to run dpkg-query -L libssl-dev to list which files the package manager installed. In this case, most of the header files land in /usr/include and the libraries in /usr/lib. If we have the .deb file itself, we can confirm this by dumping its archive contents:

$ ar p libssl-dev_1.1.0f-3+deb9u2_amd64.deb data.tar.xz | tar tvf -
drwxr-xr-x  0 root   root        0 Mar 29 06:51 ./
drwxr-xr-x  0 root   root        0 Mar 29 06:51 ./usr/
drwxr-xr-x  0 root   root        0 Mar 29 06:51 ./usr/include/
drwxr-xr-x  0 root   root        0 Mar 29 06:51 ./usr/include/openssl/
-rw-r--r--  0 root   root     3349 Mar 29 06:51 ./usr/include/openssl/aes.h
...

Here aes.h is one of the headers we might need to compile against.

We can essentially reuse these packages on other platforms. Package managers extract files to specific locations on your machine. If we extract these same archives locally, we can tell the compiler to look in those folders for headers and libraries instead of the OS defaults.

Let’s describe which archives we want. First, my choice of a broadly accessible Linux distribution with good tooling is Debian, of which Ubuntu is a fork, and which has a straightforward packaging system built around apt and its .deb package format. Second, we need to pick a sufficiently old version of Debian that its libraries stay ABI-compatible with the deployment target. I chose jessie, the version of Debian immediately prior to its current stable release, stretch.

We can’t just fetch a .deb archive via apt-get install on a Mac though. Downloading library headers directly means navigating a maze of hyperlinks across computer architectures and CDN mirrors. I poked around for a while to see if there was an obvious way to compute the URL of any Debian package, but it looks like you basically need to reimplement much of apt (Debian’s package manager) just to resolve a package name to its URL. Because there were no brew formulae for libapt, and no standalone Rust bindings either, I assumed any solution would be more complicated than just referencing the direct URL. As such, my build script fetches each package URL in sequence and extracts it into a local folder:

export URL=http://security.debian.org/debian-security/pool/updates/main/o/openssl/libssl-dev_1.1.0f-3+deb9u2_amd64.deb
curl -O $URL
ar p $(basename $URL) data.tar.xz | tar xvf -

export URL=http://security.debian.org/debian-security/pool/updates/main/o/openssl/libssl1.1_1.1.0f-3+deb9u2_amd64.deb
curl -O $URL
ar p $(basename $URL) data.tar.xz | tar xvf -

export URL=http://ftp.us.debian.org/debian/pool/main/g/glibc/libc6_2.24-11+deb9u3_amd64.deb
curl -O $URL
ar p $(basename $URL) data.tar.xz | tar xvf -

You can see the dependencies my build script relies on. Note that we only install the packages we need to build against: backtrace-rs requires the libc headers, and the openssl-sys crate requires not only the headers in libssl-dev but also the shared library in libssl1.1. Other than that, these are all the packages I needed when cross-compiling.

Building Linux binaries on macOS

We again need to install a linker, this time one that targets GNU/Linux. This is also made easy with brew, thanks to another community contribution; tap it and install the toolchain formula:

brew tap SergioBenitez/osxct
brew install x86_64-unknown-linux-gnu

Now the executable “x86_64-unknown-linux-gnu-gcc” is available on our PATH.

We next make a series of environment variable updates:

# Linker for the target platform
# (cc can also be updated using .cargo/config)
export TARGET_CC="x86_64-unknown-linux-gnu-gcc"

# Library headers to link against
export TARGET_CFLAGS="-I $(pwd)/usr/include/x86_64-linux-gnu -isystem $(pwd)/usr/include"
# Libraries (shared objects) to link against
export LD_LIBRARY_PATH="$(pwd)/usr/lib/x86_64-linux-gnu:$(pwd)/lib/x86_64-linux-gnu"

# openssl-sys specific build flags
export OPENSSL_DIR="$(pwd)/usr/"
export OPENSSL_LIB_DIR="$(pwd)/usr/lib/x86_64-linux-gnu/"

This specifies the linker, the header and shared library locations, and some OpenSSL-specific flags required by openssl-sys. Take note of -isystem, which changes where gcc looks for system headers. Because we are using only Debian packages, the OpenSSL-specific build flags point at the same folders as our other system libraries.

Now we can run the Cargo build command to cross-compile for Linux:

cargo build --target=x86_64-unknown-linux-gnu --features 'standalone'

The “standalone” feature is part of the project and enables everything that can be built without relying on system libraries (for example, SQLite).

Now I can look in my project’s ./target/x86_64-unknown-linux-gnu/ folder and run file on the edit-server binary:

$ file target/x86_64-unknown-linux-gnu/release/edit-server
target/x86_64-unknown-linux-gnu/release/edit-server: ELF 64-bit \
  LSB shared object, x86-64, version 1 (SYSV), dynamically linked, \
  interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, \
  with debug_info, not stripped

It says it’s a shared object (in this case, an executable) and mentions we compiled it for GNU/Linux. Next, I created an example Dockerfile based on Debian that, when the binary is placed in the same directory, just launches it:

FROM nginx

RUN apt-get update; apt-get install sqlite3 -y

ADD . /app
WORKDIR /app
EXPOSE 80

CMD RUST_BACKTRACE=1 ./edit-server

I tried it out with docker run on my machine, and saw the server successfully boot up:

$ docker run -it $(docker build -q .)
[ ok ] Restarting nginx: nginx.
client proxy: false
Listening on http://0.0.0.0:8000/
Graphql served on http://0.0.0.0:8003
sync_socket_server is listening for ws connections on 0.0.0.0:8001

This is Debian, running locally on my machine, successfully running the binary we compiled on my Mac. Since this is the same Dockerfile we send to the server, this means the server will be able to deploy it too!

There is one more step here: the binary now has to be sent to the server along with each deploy. Checking a large binary into Git just so Dokku could receive it via git push performs very poorly, and really isn’t what Git is built for. What worked for me: I switched to creating an archive of my Dockerfile’s directory and piping it over ssh to dokku tar:in - on the remote server. This Dokku command loads a tarball from stdin (in this case) and deploys it, making it possible to push new code to the server without checking anything into git each time.
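
The plumbing itself is short. Roughly, and mirroring the command above (the host is a placeholder, and the exact tar:in arguments depend on your Dokku version and app setup):

# Stream an archive of the deploy directory straight to the Dokku host
tar cf - . | ssh root@example.com dokku tar:in -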

Rust advantages in webdev

And the result: it is now much faster to update code running on a server. Compilation speed improved immensely between compiling remotely, where each compile felt as slow as a full rebuild, and compiling locally, where cargo’s incremental cache makes builds feel as fast as targeting your default OS. It’s fast enough that I can deploy new code to a remote test server when it’s too annoying to set up a local one. Yet Rust’s cross-compilation story can’t by itself eliminate the clumsy ritual of setting arbitrary environment variables just to get compilation to succeed.

If rustup target is a blueprint for the future, I imagine an ecosystem of cross-compilation tools will inevitably spring up that makes bundling for other OSes straightforward and configurable. Even though Rust isn’t an interpreted language, if deploying code no longer means compiling code on your server, and local recompilation is fast, it makes deployment in Rust feel much more like modern web development. Cross-compilation support is an undersold factor in Rust’s webdev story.