
Table of Contents


  1. Why Cross-Compilation

  2. The Universal Build Architecture

  3. Host Machine Setup

  4. Docker Build Container (The Core)

  5. Cargo Configuration

  6. Build Script

  7. Multi-Target Support Matrix

  8. Adding a New Target Device

  9. CI/CD Template

  10. Deployment Package Structure

  11. Validation & Testing

  12. Common Pitfalls & Fixes


1. Why Cross-Compilation

Compiling directly on an ARM SBC like the Tinkerboard takes ~18 minutes per build. Cross-compiling on an x86_64 host takes ~2–4 minutes. At production scale, you never build on the device — you build once on a fast machine and push the binary. The goal of this guide is to set up a single build system that can target any ARM Linux device running any Tauri app, so you never have to reconfigure when you change your app or switch hardware.


2. The Universal Build Architecture

YOUR CODE (any Tauri app)
            │
            ▼
DOCKER BUILD CONTAINER (x86_64 host kernel)
├── Rust toolchain
│   ├── native x86_64 compiler
│   └── cross targets:
│       ├── armv7-unknown-linux-gnueabihf  (Tinkerboard, RPi 2/3/4 32-bit)
│       └── aarch64-unknown-linux-gnu      (RPi 4/5 64-bit, Jetson)
├── ARM sysroots
│   ├── armhf libs (WebKitGTK, GTK3)
│   └── arm64 libs (WebKitGTK, GTK3)
├── Cross linkers
│   ├── arm-linux-gnueabihf-gcc
│   └── aarch64-linux-gnu-gcc
├── Node.js (frontend build)
└── Tauri CLI

INPUT:  /app (your project, mounted)
OUTPUT: .deb package for target arch
            │
            ▼
.deb / binary ready for any ARM Linux device

The key insight: the Docker container IS the build system. Your app code doesn't care about cross-compilation. The container handles everything — toolchains, sysroots, linkers, environment variables. You mount your project in, you get a .deb out.


3. Host Machine Setup

Your development machine (macOS, Linux, or Windows with WSL2) only needs two things.

3.1 Install Docker


# Linux
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER

# macOS — install Docker Desktop

# Windows — install Docker Desktop with WSL2 backend

3.2 Install Rust (for local dev/testing only)

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
source $HOME/.cargo/env

You do NOT need cross-compilation tools on the host. Everything runs inside Docker.


4. Docker Build Container (The Core)

This is the heart of the entire system. One Dockerfile that supports both ARMv7 and ARM64 targets.

4.1 Dockerfile


# ============================================================
# Dockerfile.cross
# Universal Tauri cross-compilation container
# Supports: armv7 (armhf) and aarch64 (arm64)
# Base: Debian Bookworm (matches Debian 12 target devices)
# ============================================================
FROM debian:bookworm-slim

ARG DEBIAN_FRONTEND=noninteractive

# ── 1. Host build essentials ──
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        gnupg2 \
        ca-certificates \
        build-essential \
        pkg-config \
        curl \
        wget \
        file \
        python3 && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# ── 2. ARM cross-compiler toolchains ──
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        gcc-arm-linux-gnueabihf \
        g++-arm-linux-gnueabihf \
        libc6-dev-armhf-cross \
        gcc-aarch64-linux-gnu \
        g++-aarch64-linux-gnu \
        libc6-dev-arm64-cross && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# ── 3. Rust with both ARM targets ──
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | \
    sh -s -- -y \
        --target armv7-unknown-linux-gnueabihf \
        --target aarch64-unknown-linux-gnu
ENV PATH="/root/.cargo/bin:${PATH}"

# ── 4. Node.js (for Tauri frontend builds) ──
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
    apt-get update && \
    apt-get install -y --no-install-recommends nodejs && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# ── 5. ARM sysroot libraries (the critical part) ──
# Enable both ARM architectures in dpkg
RUN dpkg --add-architecture armhf && \
    dpkg --add-architecture arm64

# Install Tauri runtime dependencies for both architectures
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        # ── armhf (ARMv7: Tinkerboard, RPi 32-bit) ──
        libwebkit2gtk-4.1-dev:armhf \
        libssl-dev:armhf \
        libgtk-3-dev:armhf \
        libayatana-appindicator3-dev:armhf \
        librsvg2-dev:armhf \
        # ── arm64 (ARMv8: RPi 64-bit, Jetson, etc.) ──
        libwebkit2gtk-4.1-dev:arm64 \
        libssl-dev:arm64 \
        libgtk-3-dev:arm64 \
        libayatana-appindicator3-dev:arm64 \
        librsvg2-dev:arm64 && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# ── 6. Tauri CLI ──
RUN cargo install tauri-cli

# ── 7. Cargo cross-compilation config ──
RUN mkdir -p /root/.cargo && \
    printf '%s\n' \
        '[target.armv7-unknown-linux-gnueabihf]' \
        'linker = "arm-linux-gnueabihf-gcc"' \
        '' \
        '[target.aarch64-unknown-linux-gnu]' \
        'linker = "aarch64-linux-gnu-gcc"' \
        > /root/.cargo/config.toml

WORKDIR /app

# ── 8. Default build entrypoint ──
COPY build.sh /usr/local/bin/build.sh
RUN chmod +x /usr/local/bin/build.sh
ENTRYPOINT ["/usr/local/bin/build.sh"]

4.2 Note on Debian vs Ubuntu Base

Using debian:bookworm-slim as the base is deliberate:

  • Matches Debian 12 on the Tinkerboard (same glibc version)

  • The dpkg --add-architecture command works without modifying sources.list on Debian

  • If you use an Ubuntu base image instead, you MUST add ARM package sources to /etc/apt/sources.list manually and restrict native sources with [arch=amd64] tags — this is a common source of build failures
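For reference, a pinned Ubuntu sources layout looks roughly like this (a sketch — the release name `noble` and the component list are placeholders for whatever your base image actually uses):

```text
# /etc/apt/sources.list — every native line pinned to amd64,
# ARM architectures pointed at ports.ubuntu.com
deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble main universe
deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble-updates main universe
deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports/ noble main universe
deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports/ noble-updates main universe
```

Every single native line needs the `[arch=amd64]` tag; one unpinned line is enough to reintroduce the 404 errors described in Section 12.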


5. Cargo Configuration

This file lives in your project at <project-root>/.cargo/config.toml. It's also baked into the Docker image, but having it in the project makes local development easier.


# .cargo/config.toml
# Cross-compilation linker mapping

[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"

[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"

# Uncomment to set a default target (optional)
# [build]
# target = "armv7-unknown-linux-gnueabihf"


6. Build Script

This is the entrypoint script that the Docker container runs. It handles environment setup per-target automatically.

6.1 build.sh

#!/bin/bash
set -euo pipefail

# ============================================================
# build.sh — Universal Tauri cross-compilation build script
#
# Usage: ./build.sh <target> [extra cargo args]
#
# Targets:
#   armv7  → armv7-unknown-linux-gnueabihf
#   arm64  → aarch64-unknown-linux-gnu
#   native → host architecture (for testing)
# ============================================================

TARGET="${1:-armv7}"
shift 2>/dev/null || true

case "$TARGET" in
  armv7|armhf)
    RUST_TARGET="armv7-unknown-linux-gnueabihf"
    export PKG_CONFIG_SYSROOT_DIR="/usr/arm-linux-gnueabihf/"
    export PKG_CONFIG_PATH="/usr/lib/arm-linux-gnueabihf/pkgconfig:/usr/share/pkgconfig"
    ;;
  arm64|aarch64)
    RUST_TARGET="aarch64-unknown-linux-gnu"
    export PKG_CONFIG_SYSROOT_DIR="/usr/aarch64-linux-gnu/"
    export PKG_CONFIG_PATH="/usr/lib/aarch64-linux-gnu/pkgconfig:/usr/share/pkgconfig"
    ;;
  native|host)
    RUST_TARGET=""
    ;;
  *)
    echo "ERROR: Unknown target '$TARGET'"
    echo "Usage: build.sh <armv7|arm64|native> [cargo args]"
    exit 1
    ;;
esac

export PKG_CONFIG_ALLOW_CROSS=1

# Install frontend dependencies if package.json exists
if [ -f "package.json" ]; then
  echo "==> Installing frontend dependencies..."
  npm install --prefer-offline 2>/dev/null || npm install
fi

# Build
if [ -n "$RUST_TARGET" ]; then
  echo "==> Cross-compiling for $RUST_TARGET..."
  cargo tauri build --target "$RUST_TARGET" "$@"

  echo ""
  echo "==> Build complete. Output:"
  find "target/$RUST_TARGET/release/bundle" -name "*.deb" 2>/dev/null || \
    find "target/$RUST_TARGET/release" -maxdepth 1 -type f -executable
else
  echo "==> Building for native host..."
  cargo tauri build "$@"
fi

6.2 Usage


# Build the Docker image (one-time)
docker build -f Dockerfile.cross -t tauri-cross .

# Build for Tinkerboard (ARMv7)
docker run --rm -v "$(pwd):/app" -v cargo-cache:/root/.cargo/registry \
    tauri-cross armv7

# Build for 64-bit ARM (RPi 4/5, Jetson)
docker run --rm -v "$(pwd):/app" -v cargo-cache:/root/.cargo/registry \
    tauri-cross arm64

That's it. Any Tauri project. Mount it in, pick a target, get a .deb out.


7. Multi-Target Support Matrix

The Docker container supports these targets out of the box:

Short Name | Rust Target Triple            | Devices                                     | Debian Arch
-----------|-------------------------------|---------------------------------------------|------------
armv7      | armv7-unknown-linux-gnueabihf | Tinkerboard, RPi 2/3/4 (32-bit), BeagleBone | armhf
arm64      | aarch64-unknown-linux-gnu     | RPi 4/5 (64-bit), Jetson Nano/Orin, Rock Pi | arm64
native     | (host)                        | Your dev machine                            | amd64

Adding More Targets

To add a new architecture (e.g., RISC-V), you extend the Dockerfile with three things:

  1. The cross-compiler toolchain (gcc-riscv64-linux-gnu)

  2. The Rust target (rustup target add riscv64gc-unknown-linux-gnu)

  3. The sysroot libraries (libwebkit2gtk-4.1-dev:riscv64)

Then add a case to build.sh. That's all.
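The three additions sketched as Dockerfile fragments for a hypothetical riscv64 target — package names are assumptions; verify that your Debian release actually ships a riscv64 cross toolchain and WebKitGTK sysroot packages before relying on this:

```dockerfile
# Sketch only — extends Dockerfile.cross for riscv64
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        gcc-riscv64-linux-gnu \
        g++-riscv64-linux-gnu \
        libc6-dev-riscv64-cross && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

RUN rustup target add riscv64gc-unknown-linux-gnu

RUN dpkg --add-architecture riscv64 && \
    apt-get update && \
    apt-get install -y --no-install-recommends \
        libwebkit2gtk-4.1-dev:riscv64 \
        libgtk-3-dev:riscv64 && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
```

Plus a `[target.riscv64gc-unknown-linux-gnu]` linker entry in `.cargo/config.toml` and a matching `riscv64)` case in build.sh.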


8. Adding a New Target Device

When you get a new SBC or embedded board, follow this checklist:

Step 1 — Identify the Architecture

SSH into the device and run:

uname -m            # Shows: armv7l, aarch64, x86_64, riscv64, etc.
cat /etc/os-release # Shows: Debian version, Ubuntu version, etc.
ldd --version       # Shows: glibc version (critical for compatibility)

Step 2 — Map to Rust Target

uname -m Output | Rust Target                   | Debian Arch
----------------|-------------------------------|------------
armv7l          | armv7-unknown-linux-gnueabihf | armhf
aarch64         | aarch64-unknown-linux-gnu     | arm64
x86_64          | x86_64-unknown-linux-gnu      | amd64
riscv64         | riscv64gc-unknown-linux-gnu   | riscv64

Step 3 — Verify glibc Compatibility

Your Docker build container's glibc version must be less than or equal to the target device's glibc version. Using debian:bookworm-slim (glibc 2.36) is safe for any Debian 12+ device.


# On the target device
ldd --version | head -1
# Output: ldd (Debian GLIBC 2.36-9+deb12u9) 2.36

# In the Docker container
ldd --version | head -1
# Must show the same or an older version

If the device runs an older OS (e.g., Debian 11), use debian:bullseye-slim as your Docker base.

Step 4 — Install Runtime Dependencies on Device


# On the target device — install what the Tauri app needs to run
sudo apt install \
    libwebkit2gtk-4.1-0 \
    libgtk-3-0 \
    libayatana-appindicator3-1 \
    librsvg2-2

Step 5 — Build and Deploy


# On your host machine
docker run --rm -v "$(pwd):/app" tauri-cross armv7

# Copy to device
scp target/armv7-unknown-linux-gnueabihf/release/bundle/deb/*.deb user@device:~/

# Install on device
ssh user@device 'sudo dpkg -i ~/your-app*.deb'


9. CI/CD Template

GitHub Actions — Build for All Targets


# .github/workflows/build.yml
name: Cross-Compile

on:
  push:
    branches: [main]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        target: [armv7, arm64]

    steps:
      - uses: actions/checkout@v4

      - name: Cache Docker image
        uses: actions/cache@v4
        with:
          path: /tmp/docker-image.tar
          key: tauri-cross-${{ hashFiles('Dockerfile.cross') }}

      - name: Build or load Docker image
        run: |
          if [ -f /tmp/docker-image.tar ]; then
            docker load < /tmp/docker-image.tar
          else
            docker build -f Dockerfile.cross -t tauri-cross .
            docker save tauri-cross > /tmp/docker-image.tar
          fi

      - name: Cache Cargo registry
        uses: actions/cache@v4
        with:
          path: cargo-cache
          key: cargo-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}

      - name: Cross-compile
        run: |
          docker run --rm \
            -v "${{ github.workspace }}:/app" \
            -v "$(pwd)/cargo-cache:/root/.cargo/registry" \
            tauri-cross ${{ matrix.target }}

      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: deb-${{ matrix.target }}
          path: target/*/release/bundle/deb/*.deb


10. Deployment Package Structure

What the build outputs and what goes where on the device:

Build Output (.deb package)

your-app.deb
├── /usr/bin/your-app                              # The Tauri binary
├── /usr/share/applications/your-app.desktop       # Desktop entry
├── /usr/share/icons/.../your-app.png              # App icons (multiple sizes)
└── (optional, via tauri.conf.json > bundle > linux > deb > files)
    ├── /etc/your-app/config.toml                  # Default config
    └── /usr/lib/systemd/system/your-app.service   # Auto-start service
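If you ship the optional auto-start service, a minimal unit might look like this (a sketch — the app name, user, and display environment are assumptions; kiosk-style devices usually need the X11/Wayland session variables set explicitly):

```ini
# /usr/lib/systemd/system/your-app.service — minimal sketch
[Unit]
Description=Your Tauri app
After=graphical.target

[Service]
User=pi
Environment=DISPLAY=:0
ExecStart=/usr/bin/your-app
Restart=on-failure

[Install]
WantedBy=graphical.target
```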

tauri.conf.json Bundle Config

{
  "bundle": {
    "identifier": "com.yourcompany.your-app",
    "targets": ["deb"],
    "linux": {
      "deb": {
        "depends": [
          "libwebkit2gtk-4.1-0",
          "libgtk-3-0",
          "libayatana-appindicator3-1",
          "librsvg2-2"
        ],
        "files": {
          "/etc/your-app/config.toml": "./config/default.toml"
        }
      }
    }
  }
}


11. Validation & Testing

11.1 Verify the Binary

After building, always check the output binary:


# Check it's the right architecture
file target/armv7-unknown-linux-gnueabihf/release/your-app
# Expected: ELF 32-bit LSB pie executable, ARM, EABI5, ... dynamically linked

file target/aarch64-unknown-linux-gnu/release/your-app
# Expected: ELF 64-bit LSB pie executable, ARM aarch64, ... dynamically linked

11.2 Check Dynamic Dependencies


# From within the Docker container (uses the cross-toolchain binutils)
# For armv7:
arm-linux-gnueabihf-readelf -d target/armv7-unknown-linux-gnueabihf/release/your-app | grep NEEDED
# Expected output: lists libwebkit2gtk, libgtk, libc, etc.
# These must all exist on the target device
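When the cross binutils aren't at hand (e.g., checking a binary on the host), the ELF header itself identifies the architecture. A minimal sketch in portable shell — it reads the low byte of the `e_machine` field at offset 18 of the ELF header (40 = ARM, 183 = AArch64, 62 = x86_64):

```shell
# elf_arch: report an ELF binary's architecture without `file` or readelf.
# Sketch only — reads the low byte of e_machine (ELF header offset 18).
elf_arch() {
    m=$(od -An -t u1 -j 18 -N 1 "$1" | tr -d ' ')
    case "$m" in
        40)  echo "ARM (32-bit, armhf)" ;;
        183) echo "AArch64 (arm64)" ;;
        62)  echo "x86_64 (host build -- wrong target?)" ;;
        *)   echo "unknown e_machine byte: $m" ;;
    esac
}

# Example: elf_arch target/armv7-unknown-linux-gnueabihf/release/your-app
```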

11.3 Quick Smoke Test on Device

scp target/armv7-unknown-linux-gnueabihf/release/your-app user@device:~/
ssh user@device 'ldd ~/your-app'
# If all libraries resolve → it will run
# If any show "not found" → install the missing package on the device


12. Common Pitfalls & Fixes

pkg-config can't find libraries

error: failed to run custom build command for `webkit2gtk-sys`

--- stderr
`pkg-config` could not find `webkit2gtk-4.1`

Cause: Environment variables not set for cross-compilation.

Fix: Make sure these are exported before building:

export PKG_CONFIG_SYSROOT_DIR=/usr/arm-linux-gnueabihf/
export PKG_CONFIG_PATH=/usr/lib/arm-linux-gnueabihf/pkgconfig:/usr/share/pkgconfig
export PKG_CONFIG_ALLOW_CROSS=1

The build.sh script handles this automatically.


Linker not found

error: linker `arm-linux-gnueabihf-gcc` not found

Cause: Cross-compiler not installed, or .cargo/config.toml missing.

Fix: Install gcc-arm-linux-gnueabihf and verify .cargo/config.toml has the [target.armv7-unknown-linux-gnueabihf] section with the correct linker value.


OpenSSL headers not found

Failed to find OpenSSL development headers

Fix (Option A — system):

sudo apt install libssl-dev:armhf

Fix (Option B — vendored, no sysroot dependency):
Add to Cargo.toml:

[dependencies]
openssl-sys = { version = "0.9", features = ["vendored"] }


Binary won't run on device: "No such file or directory"

./your-app
-bash: ./your-app: No such file or directory

Cause: The dynamic linker path baked into the binary doesn't exist on the device. Usually means wrong target triple or missing armhf runtime.

Fix: Run file ./your-app to confirm it's the correct architecture. Then run ldd ./your-app to see which libraries are missing.
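To see which loader path the binary actually requests (the missing file behind the misleading error), `readelf` prints the `PT_INTERP` entry — a sketch, with `your-app` as a placeholder:

```shell
# show_interp: print the dynamic linker (PT_INTERP) path an ELF binary
# requests at exec time; needs binutils' readelf (any host architecture works).
show_interp() {
    readelf -l "$1" | grep 'Requesting program interpreter'
}

# Example: show_interp ./your-app
# armv7 builds request /lib/ld-linux-armhf.so.3; arm64 builds request
# /lib/ld-linux-aarch64.so.1. If that file is missing on the device, execve
# fails with ENOENT and the shell blames "./your-app" instead of the loader.
```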


Ubuntu sources.list conflicts with armhf

E: Failed to fetch http://archive.ubuntu.com/.../armhf/Packages 404 Not Found

Cause: Ubuntu serves ARM packages from ports.ubuntu.com, not archive.ubuntu.com. When you dpkg --add-architecture armhf, apt tries to fetch armhf from all sources.

Fix: Use Debian as the Docker base (avoids this entirely), or pin architectures in every source line:

deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ ...
deb [arch=armhf] http://ports.ubuntu.com/ubuntu-ports/ ...

This is why the Dockerfile uses debian:bookworm-slim.


Build is slow (10+ minutes)

Fix — Cache the Cargo registry:

docker run --rm \
    -v "$(pwd):/app" \
    -v cargo-registry:/root/.cargo/registry \
    -v cargo-git:/root/.cargo/git \
    tauri-cross armv7

Fix — Cache the target directory (careful, can get large):

-v target-armv7:/app/target

Fix — Use release profile optimized for compile speed during dev:


# Cargo.toml — fast dev builds, optimized release builds
[profile.dev]
opt-level = 0
debug = false

[profile.release]
lto = true
codegen-units = 1
opt-level = "s"
strip = true


File Checklist

To set up cross-compilation for any new Tauri project, you need these files:

your-project/
├── .cargo/
│   └── config.toml          # Linker mapping (Section 5)
├── Dockerfile.cross         # Build container (Section 4)
├── build.sh                 # Build entrypoint (Section 6)
└── src-tauri/
    ├── Cargo.toml           # Add openssl-sys vendored if needed
    └── tauri.conf.json      # Bundle config with deb depends

That's 4 files. Copy them into any Tauri project and run:

docker build -f Dockerfile.cross -t tauri-cross .
docker run --rm -v "$(pwd):/app" tauri-cross armv7

Done.


Document Version: 1.0
Last Updated: April 2026
Scope: Universal — works for any Tauri app on any ARM Linux target
