
Boot server

Netsoc servers run Alpine Linux as their base OS. This is loaded over the network from the boot server and runs from RAM. Packages are installed from the internet. Configuration is downloaded over HTTP from the boot server and overlaid on the base system.

Main components

These instructions assume a working Arch Linux installation (and should be run as root unless otherwise specified).

Make sure packages are up to date with pacman -Syu (reboot if kernel was upgraded). Once all of the sections below are completed, reboot.

To get started, clone the infrastructure repo into /var/lib/infrastructure, ensuring it's owned by the unprivileged user (assumed to be netsoc) and is world-readable. This can be done by running (as netsoc):

sudo install -dm 755 -o netsoc -g netsoc /var/lib/infrastructure
git clone <infrastructure repo URL> /var/lib/infrastructure

Any time a step says to symlink a configuration file out of this repo, the inline configuration shown matches 1:1 what is actually deployed on the current boot server!


When making changes to infrastructure repo-based config files, be sure to commit and push them! Make sure to pull new external changes that are made too. Failing to keep in sync with the upstream repo will cause the backup script to fail!


Set up dnsmasq, the DNS and DHCP server

  1. Install dnsmasq
  2. Replace /etc/dnsmasq.conf with a symlink to config/dnsmasq.conf (i.e. ln -sf /var/lib/infrastructure/boot/config/dnsmasq.conf /etc/dnsmasq.conf). Current live configuration:

    # Interface for DHCP and DNS
    # Bind only to the LAN interface
    # Port for DNS server
    # Append full domain to hosts from /etc/hosts
    # Upstream DNS servers
    # Machines
    # BMCs
    # VMs
    # If a client is using BIOS, send them the BIOS variant of iPXE
    # When a client is using iPXE (detected by DHCP option 175), we want to give
    # them the iPXE script

    This configuration sets up:

    • A forwarding DNS server
    • DHCP server (with static leases, add a new dhcp-host line for each new server that should get the same IP)
    • DNS resolution for clients by hostname (*.netsoc.internal)
    • TFTP server for loading iPXE over PXE (and then chain loading to the boot script over HTTP)
  3. Create the TFTP directory /srv/tftp

  4. Replace /etc/hosts with a symlink to boot/config/hosts. Current live configuration:

    shoe.netsoc.internal shoe
    nintendo.netsoc.internal
  5. Enable dnsmasq (systemctl enable dnsmasq)
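
For reference, the kinds of directives behind these features look roughly like the following (the MAC, hostname and IP here are hypothetical; the real values live in boot/config/dnsmasq.conf):

```
# Static DHCP lease (hypothetical MAC/hostname/IP)
dhcp-host=aa:bb:cc:dd:ee:ff,example-host,10.0.0.10

# BIOS clients (client-arch 0) get the BIOS build of iPXE over TFTP
dhcp-match=set:bios,option:client-arch,0
dhcp-boot=tag:bios,ipxe.kpxe

# Clients already running iPXE set DHCP option 175; chain them straight
# to the boot script over HTTP instead of re-sending the iPXE binary
dhcp-match=set:ipxe,175
dhcp-boot=tag:ipxe,http://shoe.netsoc.internal/boot.ipxe
```

The option-175 check is what prevents an infinite loop of iPXE re-downloading itself over TFTP.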

Network interfaces

  1. Install netctl
  2. Remove any existing network configuration

  3. Create a symlink to boot/config/netctl/mgmt at /etc/netctl/mgmt. Current live configuration:

    Description='Netsoc management VLAN'

    This sets up the mgmt interface with a static IP address. Make sure to replace eth0 with the name of the ethernet interface!

  4. Enable the mgmt config (netctl enable mgmt)


    If the configuration ever changes, be sure to netctl re-enable it!

  5. Create a symlink to boot/config/netctl/lan at /etc/netctl/lan. Current live configuration:

    Description='VLAN 69 Netsoc LAN'

    This sets up the lan interface with a static IP address. Make sure to replace eth0 with the name of the ethernet interface!

  6. Enable the lan config (netctl enable lan)

  7. Create a symlink to boot/config/netctl/wan at /etc/netctl/wan. Current live configuration:

    Description='VLAN 420 public TCD network'

    This sets up the wan interface with a static IP address. Make sure to replace eth0 with the name of the ethernet interface and use the desired public IP!

  8. Enable the wan config (netctl enable wan)

  9. Ensure systemd-resolved is stopped and disabled (systemctl disable --now systemd-resolved)
  10. Replace /etc/resolv.conf with a symlink to boot/config/resolv.conf. Current live configuration:

    domain netsoc.internal
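
The netctl profiles above show only their Description lines; a complete static VLAN profile generally looks like this (the interface name, VLAN ID and address here are hypothetical):

```
Description='VLAN 69 Netsoc LAN'
Interface=lan
Connection=vlan
BindsToInterfaces=eth0
VLANID=69
IP=static
Address=('10.69.0.1/24')
```

Connection=vlan creates the named Interface on top of the physical device in BindsToInterfaces; the mgmt profile is the same shape but with Connection=ethernet on the raw interface.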

nginx

  1. Install nginx
  2. Replace /etc/nginx/nginx.conf with a symlink to boot/config/nginx.conf. Current live configuration:

    user http;
    worker_processes 1;

    events {
      worker_connections 1024;
    }

    http {
      include mime.types;
      default_type application/octet-stream;
      sendfile on;

      server {
        listen 80 default_server;
        server_name _;

        location / {
          root /srv/http;
          index index.html;
          autoindex on;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
          root /usr/share/nginx/html;
        }
      }
    }
  3. Enable nginx (systemctl enable nginx)

  4. Create the apk overlay directory /srv/http/apkovl

iPXE

iPXE is an advanced bootloader designed for network booting. It is used here to boot Alpine over the network. The version used on Netsoc is the current revision of the submodule in boot/ipxe (built from source).

To update and build iPXE:

  1. Clone this repo and then iPXE: git submodule update --init
  2. Update to the latest version:

    git -C boot/ipxe pull
    git commit -am "Update iPXE version"
  3. Build the latest binaries: make -C boot/ipxe/src -j$(nproc) bin-x86_64-efi/ipxe.efi bin/undionly.kpxe

  4. Copy boot/ipxe/src/bin-x86_64-efi/ipxe.efi (for UEFI boot) and boot/ipxe/src/bin/undionly.kpxe (for BIOS) to the boot server (/srv/tftp/ipxe.efi, /srv/tftp/ipxe.kpxe)
  5. Create a symlink to boot/config/boot.ipxe at /srv/http/boot.ipxe (the boot script). Current live configuration:

    # Based on
    set mirror
    set branch v3.14
    set version 3.14.0
    set flavor lts
    set arch x86_64
    set console tty0
    set cmdline modules=loop,squashfs BOOTIF=01-${net0/mac:hexhyp} ip=dhcp apkovl=http://shoe.netsoc.internal/apkovl/{MAC}.tar.gz ssh_key=http://shoe.netsoc.internal/
    set default_cmdline default
    set title Netsoc network boot
    iseq ${manufacturer} QEMU && set flavor virt && set console ttyS0 ||
    # gandalf's remote console seems to be very slow without `noapic`
    iseq ${net0/mac} 40:a8:f0:30:3a:d4 && set cmdline ${cmdline} noapic ||
    set space:hex 20:20
    set space ${space:string}

    :menu
    menu ${title}
    item --gap Boot options
    item flavor ${space} Kernel flavor [ ${flavor} ]
    item console ${space} Set console [ ${console} ]
    item cmdline ${space} Linux cmdline [ ${default_cmdline} ]
    item --gap Booting
    item --default boot ${space} Boot with above settings
    item --gap Utilities
    item shell ${space} iPXE Shell
    item exit ${space} Exit iPXE
    item reboot ${space} Reboot system
    item poweroff ${space} Shut down system
    choose --timeout 5000 item
    goto ${item}

    :flavor
    menu ${title}
    item lts Linux lts
    item virt Linux virt
    choose flavor || goto shell
    goto menu

    :console
    menu ${title}
    item tty0 Console on tty0
    item ttyS0 Console on ttyS0
    item ttyS1 Console on ttyS1
    item ttyAMA0 Console on ttyAMA0
    item custom Enter custom console
    choose console || goto menu
    iseq ${console} custom && goto custom_console ||
    goto menu

    :custom_console
    clear console
    echo -n Enter console:${space} && read console
    goto menu

    :cmdline
    echo -n Enter extra cmdline options:${space} && read cmdline
    set default_cmdline modified
    goto menu

    :boot
    isset ${console} && set console console=${console} ||
    set img-url ${mirror}/${branch}/releases/${arch}/netboot-${version}
    set repo-url ${mirror}/${branch}/main
    set modloop-url ${img-url}/modloop-${flavor}
    kernel ${img-url}/vmlinuz-${flavor} initrd=/initramfs-${flavor} ${cmdline} alpine_repo=${repo-url} modloop=${modloop-url} ${console}
    initrd ${img-url}/initramfs-${flavor}
    boot
    goto exit

    :shell
    echo Type "exit" to return to menu.
    shell
    goto menu

    :reboot
    reboot

    :poweroff
    poweroff

    :exit
    clear menu
    exit 0
  6. Copy an SSH public key to /srv/http/
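
As an aside, the BOOTIF=01-${net0/mac:hexhyp} parameter in the cmdline above passes the boot interface's MAC address to Alpine's initramfs: hexhyp renders the MAC with hyphens instead of colons, and the 01- prefix is the ARP hardware type for Ethernet. The equivalent transformation in shell, using the MAC from the gandalf rule above:

```shell
# MAC address taken from the boot script's gandalf rule
mac="40:a8:f0:30:3a:d4"
# iPXE's ${net0/mac:hexhyp} swaps colons for hyphens; 01- marks Ethernet
bootif="01-$(echo "$mac" | tr ':' '-')"
echo "$bootif"   # 01-40-a8-f0-30-3a-d4
```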

NFS

NFS allows the booted systems to update their apkovl archives.

  1. Install nfs-utils
  2. Add an export line for /srv/http/apkovl to /etc/exports, exporting it to the LAN subnet with the options sync,no_subtree_check,no_root_squash,fsid=0 (any machine on the LAN will have access as root)
  3. Enable nfs-server (systemctl enable nfs-server)
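
Put together, the export line looks something like this (the subnet here is hypothetical; use the actual LAN network):

```
# /etc/exports: export the overlay directory to the LAN
/srv/http/apkovl 10.69.0.0/24(rw,sync,no_subtree_check,no_root_squash,fsid=0)
```

After editing, exportfs -ra applies the change without restarting nfs-server.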

Firewall (nftables)

  1. Install nftables
  2. Replace /etc/nftables.conf with a symlink to boot/config/nftables.conf. Current live configuration:

    #!/usr/bin/nft -f
    flush ruleset
    define lan_net =
    define vpn_net =
    define firewall =
    define firewall_public =
    define mail_host =
    define ns1 =
    define wireguard = 51820
    define iperf3 = 5201
    define mail_ports = { smtp, submissions, submission, imap, imaps, pop3, pop3s, sieve }
    table inet filter {
      chain wan-tcp {
        tcp dport ssh accept
        tcp dport $iperf3 accept
      chain wan-udp {
        udp dport $wireguard accept
        udp dport $iperf3 accept
      chain wan {
        # ICMP & IGMP
        ip6 nexthdr icmpv6 icmpv6 type {
        } accept
        ip protocol icmp icmp type {
        } accept
        ip protocol igmp accept
        # separate chains for TCP / UDP
        ip protocol tcp tcp flags & (fin|syn|rst|ack) == syn ct state new jump wan-tcp
        ip protocol udp ct state new jump wan-udp
        ip protocol esp accept
      chain filter-port-forwards {
        ip daddr $mail_host tcp dport $mail_ports accept
        ip daddr $ns1 udp dport domain accept
        ip daddr $ns1 tcp dport domain accept
      chain input {
        type filter hook input priority 0; policy drop;
        # established/related connections
        ct state established,related accept
        # invalid connections
        ct state invalid drop
        # allow all from loopback / lan
        iif lo accept
        iifname { eth0, lan, vpn } accept
        iifname wan jump wan
      chain forward {
        type filter hook forward priority 0; policy drop;
        # see comment on same rule in output chain
        oifname wan tcp flags { syn, rst } tcp option maxseg size set 1000
        # lan can go anywhere
        iifname { eth0, lan, vpn } accept
        iifname wan oifname { lan, wan } ct state related,established accept
        iifname wan oifname { lan, wan } jump filter-port-forwards
      chain output {
        type filter hook output priority 0; policy accept;
        # something is weird with downstream networking in maths, clamping the mss
        # greatly reduces loss and improves tcp bandwidth
        oifname wan tcp flags { syn, rst } tcp option maxseg size set 1000
    table nat {
      chain port-forward {
        tcp dport $mail_ports dnat $mail_host
        # Hack for a "second nameserver"
        ip daddr $firewall_public udp dport domain dnat $ns1
        ip daddr $firewall_public tcp dport domain dnat $ns1
      chain prerouting {
        type nat hook prerouting priority 0;
        iifname wan jump port-forward
        iifname lan ip daddr { $firewall, $firewall_public } jump port-forward
      chain lan-port-forwarding {
        ip daddr $mail_host tcp dport $mail_ports snat $firewall_public
      chain postrouting {
        type nat hook postrouting priority 100;
        oifname wan counter masquerade
        oifname lan ip saddr $lan_net jump lan-port-forwarding
        oifname lan ip saddr $vpn_net snat $firewall
    # vim:set ts=2 sw=2 et:
  3. Enable nftables (systemctl enable nftables)

  4. Write net.ipv4.ip_forward=1 into /etc/sysctl.d/forwarding.conf
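
The ICMP/ICMPv6 type sets in the wan chain above are empty in this copy; a typical allow list for a router looks like the following (illustrative only, not necessarily the deployed sets):

```
ip6 nexthdr icmpv6 icmpv6 type {
  destination-unreachable, packet-too-big, time-exceeded, parameter-problem,
  echo-request, echo-reply, nd-router-solicit, nd-router-advert,
  nd-neighbor-solicit, nd-neighbor-advert
} accept
ip protocol icmp icmp type {
  destination-unreachable, time-exceeded, parameter-problem,
  echo-request, echo-reply
} accept
```

nft -c -f /etc/nftables.conf checks the file for syntax errors without loading it.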

WireGuard

  1. Install wireguard-tools and wireguard-dkms (you'll also need the kernel headers, e.g. linux-headers for regular Arch, linux-raspberrypi4-headers for a Raspberry Pi 4)
  2. Generate private and public key (as root): wg genkey | tee /etc/wireguard/privkey | wg pubkey > /etc/wireguard/pubkey
  3. Change private key permissions chmod 600 /etc/wireguard/privkey
  4. Create /etc/wireguard/vpn.conf:

    [Interface]
    PrivateKey = theprivatekeyhere
    Address =
    ListenPort = 51820

    [Peer]
    PublicKey = theirpublickeyhere
    AllowedIPs =

    Replace the private key with the contents of /etc/wireguard/privkey! For each user, create a [Peer] section with their public key and a new IP.

  5. Create a client configuration file:

    [Interface]
    PrivateKey = theprivatekeyhere
    Address =
    DNS =, netsoc.internal

    [Peer]
    PublicKey = serverpublickeyhere
    AllowedIPs =,,
    Endpoint =

    A private key for the client can be generated with wg genkey as before.

  6. Enable and start the WireGuard service: systemctl enable --now wg-quick@vpn
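
For example, adding a user to /etc/wireguard/vpn.conf means appending another [Peer] section (the key and address below are hypothetical):

```
[Peer]
# alice
PublicKey = 5cp9oYfDLmJ0rtuoGhXCmSs4cQZ7vQw1cYkcYTmVEHc=
AllowedIPs = 10.69.2.2/32
```

Then restart with systemctl restart wg-quick@vpn, or apply without dropping existing peers: wg syncconf vpn <(wg-quick strip vpn)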

APKOVL backup

  1. Import the Netsoc PGP secret key. To back up Alpine configurations stored on the boot server, they first must be encrypted. You can transfer the PGP key from a machine which already has it by running the following:

    gpg --export-secret-keys --armor DB2E28B13D53C8DD62FE560B408F6E592A12DF74 | ssh netsoc@my.boot.server -- gpg --import
  2. Mark the key as trusted. Run gpg --edit-key DB2E28B13D53C8DD62FE560B408F6E592A12DF74. Type trust, set the level to 5 ("I trust ultimately") and accept, before quitting gpg.

  3. Install the backup service by symlinking boot/scripts/backup-apkovl.service into /etc/systemd/system/backup-apkovl.service. Current live service:

    Description=APKOVL backup
  4. Install the backup timer by symlinking boot/scripts/backup-apkovl.timer into /etc/systemd/system/backup-apkovl.timer. Current live timer:

    Description=Backup APKOVL's weekly
    OnCalendar=Wed *-*-* 07:00
  5. Enable and start the timer (systemctl enable --now backup-apkovl.timer)

Pi-KVM

Pi-KVM is a neat software solution that turns a Raspberry Pi 4 into a sort of software BMC.

Disable auditing

Add audit=0 to /boot/cmdline.txt.

pikvm pacman repo

Pi-KVM provides pre-built packages for the Raspberry Pi via their own repo.

  1. Import the Pi-KVM PGP key (run pacman-key -r 912C773ABBD1B584 && pacman-key --lsign-key 912C773ABBD1B584)
  2. Add the following to /etc/pacman.conf:

    [pikvm]
    Server =
    SigLevel = Required DatabaseOptional

Watchdog

The Linux watchdog will attempt to reset the machine if the system locks up.

  1. Install watchdog
  2. Replace /etc/watchdog.conf with:

    min-memory       = 1280
    max-load-1       = 24
    max-load-5       = 18
    max-load-15      = 12
    watchdog-device  = /dev/watchdog
    watchdog-timeout = 15
    interval         = 1
    realtime         = yes
    priority         = 1
  3. Enable watchdog (systemctl enable watchdog)

kvmd

kvmd is the main Pi-KVM component.

  1. Add a USB drive (or additional SD card partition) for storing virtual media images. Format the partition as ext4 and add the following to /etc/fstab:

    /dev/sda1 /var/lib/kvmd/msd ext4 nodev,nosuid,noexec,ro,errors=remount-ro,data=journal,X-kvmd.otgmsd-root=/var/lib/kvmd/msd,X-kvmd.otgmsd-user=kvmd  0 0

    Be sure to replace /dev/sda1 with the actual device name!

  2. Install kvmd-platform-v2-rpi4 and kvmd-webterm


    nginx may be replaced by nginx-mainline (a dependency of kvmd). If this is the case, /etc/nginx/nginx.conf will be backed up to /etc/nginx/nginx.conf.pacsave. Be sure to move this file back to /etc/nginx/nginx.conf once the install is complete.

  3. Disable kvmd's nginx on port 80 (in /etc/kvmd/nginx/nginx.conf)

  4. Enable kvmd, kvmd-nginx, kvmd-webterm and kvmd-otg.
  5. Add the following to /boot/config.txt:

  6. Check the USB port for the capture card. Once it's plugged in, kvmd uses a udev rule to create a symlink /dev/kvmd-video -> /dev/video0. This only happens if /dev/video0 is connected to a hardcoded USB port, however; the script /usr/bin/kvmd-udev-hdmiusb-check performs this check. Edit the script and replace the rpi4 port with the output of the following command: sudo udevadm info -q path -n /dev/video0 | sed 's|/| |g' | awk '{ print $11 }'
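
To see what that pipeline extracts, here it is run against a sample sysfs path (the path below is hypothetical; the real one comes from udevadm on the Pi):

```shell
# Hypothetical sysfs path of the capture card, as printed by
# `udevadm info -q path -n /dev/video0` on a Raspberry Pi 4
path="/devices/platform/scb/fd500000.pcie/pci0000:00/0000:00:00.0/0000:01:00.0/usb1/1-1/1-1.4/1-1.4:1.0/video4linux/video0"
# Replacing '/' with spaces turns the path into awk fields; for a path of
# this depth, field 11 is the USB port/interface component
port="$(echo "$path" | sed 's|/| |g' | awk '{ print $11 }')"
echo "$port"   # 1-1.4:1.0
```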

k3s Kubernetes API load balancer

When deploying k3s in HA mode, clients should access the Kubernetes API via a load balancer in case a node goes offline. There are a number of ways to achieve this.

HAProxy

HAProxy is a highly configurable proxy, with more proxying features than nginx.

  1. Install haproxy
  2. Replace /etc/haproxy/haproxy.cfg with a symlink to boot/config/haproxy.cfg. Current live configuration:

    global
        maxconn 20000
        log local0
        user haproxy
        pidfile /run/
        ssl-load-extra-files key
    resolvers self
        nameserver dnsmasq
        hold valid 30s
        hold nx 10s
    backend k3s_servers
        mode tcp
        balance roundrobin
        timeout connect 5s
        timeout server 30m
        option httpchk GET /readyz
        http-check expect rstatus 2[0-9][0-9]
        default-server resolvers self check inter 3s check-ssl verify required ca-file /etc/haproxy/
        # For some reason the client cert can't be set via `default-server`???
        server cube cube:6443 crt /etc/haproxy/k3s-client.crt
        server napalm napalm:6443 crt /etc/haproxy/k3s-client.crt
        server saruman saruman:6443 crt /etc/haproxy/k3s-client.crt
    frontend k3s
        bind :6443
        mode tcp
        timeout client 30m
        default_backend k3s_servers

    This sets up each Kubernetes server node as a backend to the frontend on port 6443. For each new k3s server, a server line should be added to the backend.

  3. Copy the k3s CA, admin client certificate and key to /etc/haproxy (these can be found in /var/lib/rancher/k3s/server/tls on any k3s server node):

    • server-ca.crt ->
    • client-admin.crt -> k3s-client.crt
    • client-admin.key -> k3s-client.crt.key
  4. Enable and start haproxy
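
For example, a (hypothetical) fourth k3s server node gimli would get one more line in backend k3s_servers:

```
server gimli gimli:6443 crt /etc/haproxy/k3s-client.crt
```

haproxy -c -f /etc/haproxy/haproxy.cfg validates the configuration before restarting the service.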

Keepalived (IPVS)


Due to issues with IPVS NAT for clients on the same LAN as the load balancer, this method is currently not viable.

IPVS provides in-kernel layer 4 load balancing, which can be configured in a manner similar to iptables. However, on its own IPVS does not perform any health checks. Keepalived can be set up to program IPVS based on a configuration file which features high-level health checking capabilities.

  1. Install keepalived
  2. Set KEEPALIVED_OPTIONS="-D -C" in /etc/sysconfig/keepalived. This disables the failover functionality provided by Keepalived, which is unneeded here as there will only be one load balancer.
  3. Replace /etc/keepalived/keepalived.conf with a symlink to boot/config/keepalived.conf. Current live configuration:

    global_defs {
      smtp_alert false
    }

    virtual_server 6443 {
      lvs_sched rr
      protocol TCP
      delay_loop 5
      retry 3

      # cube
      real_server 6443 {
        SSL_GET {
          url {
            path /readyz
          }
        }
      }

      # napalm
      real_server 6443 {
        SSL_GET {
          url {
            path /readyz
          }
        }
      }

      # saruman
      real_server 6443 {
        SSL_GET {
          url {
            path /readyz
          }
        }
      }
    }

    SSL {
      ca /etc/keepalived/
      certificate /etc/keepalived/k3s.crt
      key /etc/keepalived/k3s.key
    }

    This sets up each Kubernetes server node as a backend to the service. For each new Kubernetes server, a real_server section should be added.

  4. Copy the k3s CA, admin client certificate and key to /etc/keepalived (these can be found in /var/lib/rancher/k3s/server/tls on any k3s server node):

    • server-ca.crt ->
    • client-admin.crt -> k3s.crt
    • client-admin.key -> k3s.key
  5. Enable and start keepalived
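
A new (hypothetical) server node would get another real_server block inside the virtual_server, for example:

```
# gimli
real_server 10.69.0.14 6443 {
  SSL_GET {
    url {
      path /readyz
      status_code 200
    }
  }
}
```

keepalived --config-test can be used to sanity-check the file before restarting the service.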

Last update: 2021-08-29