
DarkForge — Proxmox Test Environment

Automated testing of DarkForge Linux on a Proxmox VE server. Creates an Arch Linux VM, clones the project, and runs the full test suite including toolchain compilation and QEMU boot tests.

What gets tested

Without the target hardware (9950X3D / RTX 5090), we can still test:

  • dpack compilation and unit tests
  • All 154 package definitions parse correctly
  • Toolchain scripts have valid bash syntax
  • Kernel config has all required options
  • Init system scripts have valid syntax
  • Source downloads and SHA256 checksum verification (network test)
  • Toolchain bootstrap (cross-compiler build — full LFS Ch.5-7)
  • Kernel compilation (generic x86_64, not znver5-optimized)
  • ISO generation
  • UEFI boot in nested QEMU (kernel boots, reaches userspace)
  • Installer runs in QEMU without errors
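
The nested UEFI boot test depends on locating OVMF firmware, which distributions package differently: Arch's edk2-ovmf ships split OVMF_CODE.fd + OVMF_VARS.fd files, while other distros ship a single OVMF.fd. A minimal detection sketch (function name and search root are illustrative, not the repo's actual code):

```shell
#!/bin/sh
# Emit the QEMU firmware flags for whichever OVMF layout is installed.
# Split CODE/VARS files take priority; a single OVMF.fd falls back to -bios.
ovmf_qemu_args() {
    root="${1:-/usr/share}"   # search root; /usr/share covers most distros
    code=$(find "$root" -name 'OVMF_CODE*.fd' 2>/dev/null | head -n 1)
    vars=$(find "$root" -name 'OVMF_VARS*.fd' 2>/dev/null | head -n 1)
    if [ -n "$code" ] && [ -n "$vars" ]; then
        # Split layout: CODE is read-only firmware, VARS holds NVRAM state
        printf '%s' "-drive if=pflash,format=raw,readonly=on,file=$code -drive if=pflash,format=raw,file=$vars"
        return 0
    fi
    bios=$(find "$root" -name 'OVMF.fd' 2>/dev/null | head -n 1)
    if [ -n "$bios" ]; then
        printf '%s' "-bios $bios"
        return 0
    fi
    echo "no OVMF firmware found under $root" >&2
    return 1
}
```

The result can be spliced into a qemu-system-x86_64 invocation, e.g. `qemu-system-x86_64 $(ovmf_qemu_args) -cdrom darkforge.iso`.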

What cannot be tested without target hardware

  • znver5 CPU optimization (requires Zen 5 CPU)
  • NVIDIA RTX 5090 driver (requires the GPU)
  • Realtek RTL8125BN network driver (requires the NIC)
  • Full gaming stack (Steam, Wine — requires GPU)
  • dwl compositor (requires Wayland + GPU)
  • Real UEFI firmware boot (QEMU OVMF is close but not identical)

Requirements

  • Proxmox VE 8.x or 9.x
  • At least 8 CPU cores and 16GB RAM available for the test VM
  • ~100GB storage on a Proxmox storage pool
  • Internet access from the VM
  • SSH access to the Proxmox host

Usage

1. Create the test VM (run on Proxmox host)

# Copy the script to Proxmox and run it
scp tests/proxmox/create-vm.sh root@your-proxmox:/root/
ssh root@your-proxmox bash /root/create-vm.sh

This creates the VM; cloud-init then installs all packages and clones the repo. Allow ~5 minutes for provisioning to complete.
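
Rather than waiting a fixed time, provisioning can be polled until cloud-init reports completion. A sketch, assuming VM ID 900 (as used below), a running QEMU guest agent in the VM, and root SSH access to the Proxmox host:

```shell
#!/bin/sh
# Poll the guest until cloud-init inside the VM reports "status: done".
wait_for_provisioning() {
    host="$1"; vmid="${2:-900}"; tries="${3:-60}"
    while [ "$tries" -gt 0 ]; do
        # qm guest exec relays the command through the guest agent
        if ssh "root@$host" "qm guest exec $vmid -- cloud-init status" 2>/dev/null \
                | grep -q 'status: done'; then
            return 0
        fi
        tries=$((tries - 1))
        if [ "$tries" -gt 0 ]; then sleep 10; fi
    done
    return 1
}
```

Usage: `wait_for_provisioning your-proxmox && echo "VM ready"`.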

2. SSH in and run tests

# Get the VM IP from Proxmox
ssh root@your-proxmox "qm guest cmd 900 network-get-interfaces" | grep ip-address

# SSH into the VM
ssh darkforge@<VM_IP>    # password: darkforge
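
The grep above prints every ip-address line, loopback included. If jq is available locally, the IPv4 address can be extracted directly from the guest agent's standard JSON schema (function name is illustrative):

```shell
#!/bin/sh
# First non-loopback IPv4 address reported by the guest agent.
# $1 = Proxmox host, $2 = VM ID (defaults to 900 as above).
vm_ip() {
    ssh "root@$1" "qm guest cmd ${2:-900} network-get-interfaces" |
        jq -r '.[] | select(.name != "lo")
               | ."ip-addresses"[]?
               | select(."ip-address-type" == "ipv4")
               | ."ip-address"' | head -n 1
}
```

Usage: `ssh darkforge@"$(vm_ip your-proxmox)"`.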

Then start the tests. They run inside a tmux session so you can disconnect and reconnect without interrupting them:

# Full test suite (~2-6 hours) — runs in tmux, safe to disconnect
darkforge-test

# Fast mode (~30 min) — skips toolchain/kernel/ISO builds
darkforge-test --quick

# Medium mode (~1 hour) — skips only toolchain bootstrap
darkforge-test --no-build

tmux controls:

  • Ctrl+B then D — detach (tests keep running in background)
  • tmux attach -t darkforge — reattach to see progress
  • tmux ls — list running sessions

3. Collect the report

Once tests finish (check with tmux attach -t darkforge):

# From your local machine
scp darkforge@<VM_IP>:~/darkforge/tests/report.json ./
scp darkforge@<VM_IP>:~/darkforge/tests/report.txt ./

The report.txt file is a human-readable summary. The report.json file is machine-readable and can be fed back into the development workflow for automated debugging.
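
The report schema is not documented here, so any parsing is an assumption. As a sketch: if report.json exposes a results array whose entries carry name and status fields, the failures could be listed with jq (adjust the keys to the actual schema):

```shell
#!/bin/sh
# Print the names of non-passing tests from a report.json file.
# The keys "results", "name", and "status" are assumed, not confirmed.
failed_tests() {
    jq -r '.results[] | select(.status != "pass") | .name' "$1"
}
```

Usage: `failed_tests report.json`.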

4. Re-run after code changes

ssh darkforge@<VM_IP>
cd ~/darkforge
git pull --recurse-submodules
darkforge-test --quick    # re-run tests

Files

  • create-vm.sh — runs on the Proxmox host, creates and configures the VM
  • run-in-vm.sh — runs inside the VM, executes all test suites, generates reports