zoryn/maintainer-assistant

Build farm

A practical walkthrough for setting up and scaling build machines (builders) for zoryn. If you only build on your own workstation, you can skip this — zoryn build uses the built-in local builder with no configuration. Read on once you want:

  • Builds for architectures your workstation doesn't have (ARM64, i586, RISC-V, e2k).
  • Parallel builds — several hashers on one machine building different packages/branches simultaneously.
  • Per-branch defaults — sisyphus on one node, p11 on another.

See zoryn builder for the full command reference.

How builders are organised

  • Builder: a named hasher setup. Lives as a .conf file in ~/.config/zoryn/builders.d/.
  • Local: runs hsh on your machine. The built-in local builder always exists; you can add more with different hasher numbers / architectures.
  • Remote: runs hsh on another host over SSH. The tarball is uploaded; the result RPMs come back via rsync.
  • Hasher number: hasher_number = N lets multiple parallel hashers coexist on one host. Requires hasher-useradd --number=N on that host.
  • Farm: a set of builders covering the architectures / branches you care about.

You pick builders with -b/--builder patterns — see Builder patterns. @all runs on every configured builder.
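For orientation, a builder .conf is a small key = value file. A sketch of what a remote builder's file might contain (the field names are illustrative, inferred from the options in this guide, not a complete schema):

```
# ~/.config/zoryn/builders.d/arm64.conf (illustrative sketch, not the full schema)
name = arm64
type = remote
host = arm.internal
arch = aarch64
hasher_number = 1
hasher_dir = ~/hasher_{hasher_number}
repo = /srv/repo/sisyphus
```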

Prerequisites

On your local machine

apt-get install hasher bubblewrap rsync openssh-clients

hasher-priv needs your user in the hashman group:

sudo hasher-useradd          # create satellite users for hasher_number=1

For parallel local hashers you will need additional hasher-useradd --number=N invocations — more on that below.

On each remote host

# On the remote host (once, as root or via sudo)
apt-get install hasher rsync openssh-server
hasher-useradd               # satellite users for hasher_number=1

Set up SSH key-based login from your local machine to the remote so that ssh remote.host true works without a password prompt. The remote's user also needs hashman membership.
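A matching ~/.ssh/config entry makes this painless; the ControlMaster lines below set up the connection multiplexing that the rsync transfers reuse (host alias, user, and key path are example values):

```
# ~/.ssh/config (example values)
Host arm.internal
    User builder
    IdentityFile ~/.ssh/id_ed25519
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```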

Make sure /etc/hasher-priv/user.d/<user> on the remote has allowed_mountpoints and allowed_devices matching whatever your build command uses (e.g. /proc,/dev/pts,/dev/kvm). zoryn checks this before every build and will tell you exactly what to add if something is missing.
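As an example, a user.d file granting the mountpoints listed above might look like this (hasher-priv config is plain key=value; trim the lists to what your builds actually need):

```
# /etc/hasher-priv/user.d/builder (example)
allowed_mountpoints=/proc,/dev/pts,/dev/kvm
allowed_devices=/dev/kvm
```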

Step 1 — Add one remote hasher

This is the "common case" — you have one extra machine (often ARM64 on your home network or a corporate server) that you want to build on.

Interactive

zoryn builder add

zoryn will ask for the name, host, architecture, hasher directory, and repo for apt config. Architecture is auto-detected from the host. Hit TAB on the repo prompt — paths complete both locally and via SSH on the remote.

Non-interactive (for scripts or dotfiles)

zoryn builder add \
  --name arm64 \
  --host arm.internal \
  --arch aarch64 \
  --repo /srv/repo/sisyphus \
  -y

What happens:

  1. zoryn writes ~/.config/zoryn/builders.d/arm64.conf.
  2. Runs sudo hasher-useradd on arm.internal if the satellite users don't exist (pass --no-create-hasher-users to skip).
  3. Detects the branch from the repo's release file.
  4. Generates apt.conf/sources.list/preferences locally in ~/.config/zoryn/builders.d/arm64/apt/ — synced to the remote before every build.
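The generated sources.list contains standard APT-RPM repository lines; for the /srv/repo/sisyphus repo on an aarch64 builder they would look roughly like this (the exact base-path/component split depends on your repo layout):

```
# ~/.config/zoryn/builders.d/arm64/apt/sources.list (sketch)
rpm file:/srv/repo sisyphus/aarch64 classic
rpm file:/srv/repo sisyphus/noarch classic
```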

Verify

zoryn builder list
zoryn builder status -b arm64
zoryn builder shell -b arm64   # sanity: drop into the remote hasher chroot

First build

Inside a package directory:

zoryn build -b arm64

The tarball is created locally with gear --commit, uploaded via rsync, and built in the remote hasher; the result RPMs are copied back into {git_root}/hasher_out/.

Step 2 — Add extra local hashers for parallel builds

One hasher works on one package at a time. To run two or more builds in parallel on the same machine, you need additional hasher user sets, each with a distinct hasher_number.

Create satellite users

On the machine (once per desired parallel slot):

sudo hasher-useradd --number=2
sudo hasher-useradd --number=3
sudo hasher-useradd --number=4

Register each slot with zoryn

zoryn builder add --name local2 --type local --number 2 -y
zoryn builder add --name local3 --type local --number 3 -y
zoryn builder add --name local4 --type local --number 4 -y

Each writes a separate .conf file with its own hasher_dir template expanded from {hasher_number} — so chroots don't collide.
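The expansion is plain placeholder substitution; a minimal Python model of it (expand_hasher_dir is illustrative, not zoryn's code):

```python
def expand_hasher_dir(template: str, number: int) -> str:
    """Fill {hasher_number} so every slot gets its own, non-colliding chroot dir."""
    return template.format(hasher_number=number)

# Slots 2-4 from the commands above each resolve to a distinct directory:
dirs = [expand_hasher_dir("~/hasher_{hasher_number}", n) for n in (2, 3, 4)]
print(dirs)  # ['~/hasher_2', '~/hasher_3', '~/hasher_4']
```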

Alternatively — one shot via mass creation:

zoryn builder add --name local --type local --multi-add 3 --start-number 2 \
  --repo /srv/repo/sisyphus -y
# creates local2, local3, local4

Use them

zoryn build -b local2             # specific slot
zoryn build -b 'local[2-4]' -p    # all three, in parallel
zoryn build -b @all -p --top      # all builders, parallel, htop-like TUI

With --parallel (or parallel = "on" in ~/.zoryn's [builders]), zoryn dispatches the build to every matched builder concurrently.
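Conceptually the dispatch is a simple fan-out: one build job per matched builder, all started at once. A toy Python model (build_on stands in for the real per-builder work; none of this is zoryn's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

def build_on(builder: str) -> str:
    # Stand-in for the real work: sync sources, run hsh, fetch RPMs.
    return f"{builder}: ok"

def dispatch(builders: list[str]) -> list[str]:
    """Start one build per matched builder concurrently; keep results in order."""
    with ThreadPoolExecutor(max_workers=len(builders)) as pool:
        return list(pool.map(build_on, builders))

print(dispatch(["local2", "local3", "local4"]))
# ['local2: ok', 'local3: ok', 'local4: ok']
```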

Step 3 — Scale to a build farm

When you move past "one machine + a remote" into a real multi-arch farm, two extra knobs matter: per-branch defaults, and mass builder creation with --multi-add, which also gives each remote its parallel slots.

Per-branch defaults

Different builders for different branches:

# ~/.zoryn
[builders]
default = "local, arm64"
default_arch = ["x86_64", "aarch64"]

[builders.p11]
default = "p11-x86, p11-arm"
default_arch = ["x86_64", "aarch64"]

[builders.p10]
default = "p10-x86"
default_arch = "x86_64"

Now:

  • zoryn build → sisyphus on local + arm64, parallel.
  • zoryn build -B p11 → p11-x86 + p11-arm.
  • zoryn build -B p10 → only p10-x86.

Mass remote creation

To stand up 5 parallel hasher slots for both sisyphus and p11 on a beefy build server:

zoryn builder add --name farm --host buildnode.internal --multi-add 5 \
  --repo /srv/repo/sisyphus --repo /srv/repo/p11 -y
# creates 10 builders: farm1..farm5 (sisyphus) + farm6..farm10 (p11),
# branches auto-detected from each --repo

zoryn will also sudo hasher-useradd --number=N on the remote for every slot that doesn't exist. Preview without creating — add --dry-run.

Example farms

These are starting points. Tune hasher numbers to your RAM (~1-2 GB per parallel slot is sane for most packages).

3 architectures — the common maintainer setup

Your workstation (x86_64) + one ARM64 + one i586 — covers the three architectures every ALT Linux package is expected to build on.

# local — the built-in builder, x86_64 from your workstation
zoryn builder add --name arm64 --host arm.internal \
  --repo /srv/repo/sisyphus -y
zoryn builder add --name i586 --host i586.internal --arch i586 \
  --repo /srv/repo/sisyphus -y
# ~/.zoryn
[builders]
default = "local, arm64, i586"
default_arch = ["x86_64", "aarch64", "i586"]
parallel = "on"

zoryn build builds on all three simultaneously. zoryn up -b @all -p --top gives you a live TUI for all three.

4 architectures — add RISC-V

Add a RISC-V builder (typically a remote VM or a real board):

zoryn builder add --name riscv64 --host riscv.internal --arch riscv64 \
  --repo /srv/repo/sisyphus -y
[builders]
default = "local, arm64, i586, riscv64"
default_arch = ["x86_64", "aarch64", "i586", "riscv64"]
parallel = "on"

5 architectures — full modern ALT coverage

Add e2k (Elbrus) — assumes the host has a working Elbrus hasher chroot:

zoryn builder add --name e2k --host elbrus.internal --arch e2k \
  --repo /srv/repo/sisyphus -y
[builders]
default = "local, arm64, i586, riscv64, e2k"
default_arch = ["x86_64", "aarch64", "i586", "riscv64", "e2k"]
parallel = "on"

[builders.p11]
default = "p11-x86, p11-arm64, p11-i586, p11-riscv"
default_arch = ["x86_64", "aarch64", "i586", "riscv64"]

zoryn build now runs sisyphus builds on all five arches in parallel; zoryn build -B p11 runs p11 on four (no e2k in that lane).

5 architectures with dense parallelism

If you have one fat build server (say 64 GB RAM, 32 cores) and want both multi-arch and parallel-package builds inside each arch:

zoryn builder add --name node --host bignode.internal \
  --multi-add 4 --repo /srv/repo/sisyphus --repo /srv/repo/p11 -y
# creates node1..node4 (sisyphus) + node5..node8 (p11)

zoryn builder add --name arm --host bignode-arm.internal \
  --multi-add 4 --repo /srv/repo/sisyphus -y
# creates arm1..arm4 (sisyphus)

# … repeat for i586, riscv, e2k

With parallel = "on" and default pointing to @host:bignode,@host:bignode-arm,…, a single zoryn task batch php-8.4 can saturate dozens of hasher slots across the farm.

Useful patterns

  • zoryn build -b @all,^local — everyone except your workstation (offload to the farm).
  • zoryn build -b @host:bignode[2-4] — slots 2, 3, 4 on a specific host.
  • zoryn build -b @all --arch=aarch64 — every ARM builder, regardless of host.
  • zoryn builder status — see which builders are free/busy before queueing.
  • zoryn builder clean --all-remote --dry-run — preview cleaning remote chroots.

See Builder patterns for the full DSL.
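To make the semantics concrete, here is a rough re-implementation of the subset of the pattern language used above: @all selects everything, ^name subtracts, name[2-4] expands a numeric range. This is an illustrative model only; the real matcher also understands @host: and --arch filters, which are omitted here:

```python
import re

def match_builders(pattern: str, configured: list[str]) -> list[str]:
    """Resolve a comma-separated builder pattern against configured builder names."""
    selected: list[str] = []
    for term in (t.strip() for t in pattern.split(",")):
        exclude = term.startswith("^")
        name = term.lstrip("^")
        if name == "@all":
            matched = list(configured)
        else:
            # Expand name[2-4] into name2..name4; otherwise match the name literally.
            m = re.fullmatch(r"(.+)\[(\d+)-(\d+)\]", name)
            if m:
                base, lo, hi = m.group(1), int(m.group(2)), int(m.group(3))
                matched = [f"{base}{n}" for n in range(lo, hi + 1) if f"{base}{n}" in configured]
            else:
                matched = [b for b in configured if b == name]
        if exclude:
            selected = [b for b in selected if b not in matched]
        else:
            selected += [b for b in matched if b not in selected]
    return selected

farm = ["local", "local2", "local3", "local4", "arm64"]
print(match_builders("@all,^local", farm))  # ['local2', 'local3', 'local4', 'arm64']
print(match_builders("local[2-4]", farm))   # ['local2', 'local3', 'local4']
```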

Troubleshooting

  • "Builder mountpoint not allowed." Add the missing entries to /etc/hasher-priv/user.d/<user> on the host; zoryn tells you the exact strings.
  • Hasher users missing on remote. Re-run builder add (idempotent) or sudo hasher-useradd --number=N manually.
  • First build hangs on rsync. Check SSH ControlMaster sockets — ls $XDG_RUNTIME_DIR/zoryn/. Kill stale sockets; ssh multiplexing is on by default.
  • "Parallel = on" but builds go sequential. Your matched set has one builder. Confirm with zoryn builder list + the pattern you used.
  • Slots conflict on local multi-hasher. Different hasher_number values require different hasher_dir paths — use the template ~/hasher_{hasher_number} (default for --multi-add).