harmony/examples/iot_vm_setup/README.md
Jean-Gabriel Gill-Couture 1577348dbb
refactor(linux): ansible ad-hoc mode + self-installing venv
Rewrites AnsibleHostConfigurator to avoid the two coupling points that
last year's Kubespray investigation warned us off: YAML playbook
generation and Ansible inventory files.

- **No more YAML, no more inventory files.** Every primitive is now one
  or two `ansible all -i '<ip>,' -m <module> -a '<json>'` ad-hoc
  invocations. JSON args go straight through Ansible's own module
  interface; the tmpfile-playbook-and-inventory dance is gone entirely.
  Harmony owns 100% of orchestration, Ansible owns only per-host
  idempotent module execution. `ensure_systemd_unit` collapses to two
  ad-hoc calls (copy + systemd) rather than a multi-task playbook.
  `ensure_linger` signals changed state via a sentinel in the shell
  module's stdout, since ad-hoc runs have no `changed_when`.

- **Self-installing venv.** New `modules::linux::ansible_venv`:
  `ensure_ansible_venv()` creates `$HARMONY_DATA_DIR/ansible-venv/` via
  `python3 -m venv` + `pip install ansible-core==2.17.*` on first use,
  cached via `tokio::sync::OnceCell`. No more "install ansible before
  running Harmony" step — python3 + venv is the only host requirement,
  and we print the exact package names for Arch/Debian/Fedora when
  python is missing.

- **smoke-a3.sh**: drop `ansible-playbook` from preflight, add
  `python3`. Example gains `--bootstrap-ansible-only` for warming the
  venv ahead of the real run (turning a ~60 s first-run smoke into a
  deterministic sub-second one after bootstrap).
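The two ad-hoc calls `ensure_systemd_unit` collapses to can be sketched as argv builders for `ansible all -i '<ip>,' -m <module> -a '<json>'`. This is an illustrative sketch only; the helper name and JSON payloads below are assumptions, not Harmony's actual API:

```rust
// Hypothetical sketch: builds the argument vector for one
// `ansible all -i '<ip>,' -m <module> -a '<json>'` ad-hoc invocation.
fn adhoc_args(ip: &str, module: &str, json_args: &str) -> Vec<String> {
    vec![
        "all".into(),
        "-i".into(),
        format!("{ip},"), // trailing comma = inline inventory, no file
        "-m".into(),
        module.into(),
        "-a".into(),
        json_args.into(),
    ]
}

fn main() {
    let ip = "192.168.122.50"; // placeholder VM IP
    // 1. copy: place the unit file on the host.
    let copy = adhoc_args(
        ip,
        "ansible.builtin.copy",
        r#"{"dest": "/etc/systemd/system/iot-agent.service", "content": "..."}"#,
    );
    // 2. systemd: daemon-reload, enable, start.
    let systemd = adhoc_args(
        ip,
        "ansible.builtin.systemd",
        r#"{"name": "iot-agent", "enabled": true, "state": "started", "daemon_reload": true}"#,
    );
    println!("ansible {}", copy.join(" "));
    println!("ansible {}", systemd.join(" "));
}
```

The trailing comma after the IP is what lets Ansible treat the string as an inline host list rather than an inventory file path.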

Output parsing uses the `oneline` callback (`host | VERB => {json}`),
which splits cleanly without regex and surfaces FAILED!/UNREACHABLE!
as errors. SSH control sockets are pinned under
`$HARMONY_DATA_DIR/ansible-cp` so multiple Harmony processes don't
race in /tmp.
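The splitting described above can be sketched with two fixed-separator splits; the enum and function names here are illustrative, not Harmony's actual types:

```rust
// Sketch of parsing Ansible `oneline` callback output: "host | VERB => {json}".
#[derive(Debug, PartialEq)]
enum AdhocOutcome {
    Ok(String),          // payload JSON from a SUCCESS/CHANGED line
    Failed(String),      // payload JSON from a FAILED! line
    Unreachable(String), // payload JSON from an UNREACHABLE! line
}

fn parse_oneline(line: &str) -> Option<AdhocOutcome> {
    // Two fixed separators, no regex needed.
    let (_host, rest) = line.split_once(" | ")?;
    let (verb, payload) = match rest.split_once(" => ") {
        Some((v, p)) => (v, p),
        None => (rest, ""),
    };
    if verb.starts_with("FAILED!") {
        Some(AdhocOutcome::Failed(payload.to_string()))
    } else if verb.starts_with("UNREACHABLE!") {
        Some(AdhocOutcome::Unreachable(payload.to_string()))
    } else {
        Some(AdhocOutcome::Ok(payload.to_string()))
    }
}

fn main() {
    let ok = r#"192.168.122.50 | CHANGED => {"changed": true, "rc": 0}"#;
    let down = r#"192.168.122.50 | UNREACHABLE! => {"changed": false, "msg": "timed out"}"#;
    println!("{:?}", parse_oneline(ok));
    println!("{:?}", parse_oneline(down));
}
```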

Verified: `ensure_ansible_venv()` first call installs ansible-core
2.17.14 into the managed venv (~12s, network-bound); second call is
cache-fast (<50ms). Clippy + fmt clean, aarch64 cross-compile green.
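The first-call/cached-call shape can be sketched with std's `OnceLock` standing in for the `tokio::sync::OnceCell` the commit mentions (the real bootstrap is async and shells out to `python3 -m venv` + pip; a string stands in for the venv path here):

```rust
use std::sync::OnceLock;

// Illustrative sketch only: the path and bootstrap body are placeholders.
static ANSIBLE_VENV: OnceLock<String> = OnceLock::new();

fn ensure_ansible_venv() -> &'static str {
    ANSIBLE_VENV.get_or_init(|| {
        // Expensive path: create the venv and pip-install ansible-core.
        // Runs at most once per process; later calls return the cached value.
        println!("bootstrapping venv...");
        "/var/lib/harmony/ansible-venv".to_string()
    })
}

fn main() {
    let first = ensure_ansible_venv();  // slow path, bootstraps once
    let second = ensure_ansible_venv(); // cache hit, no bootstrap
    assert_eq!(first, second);
}
```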
2026-04-20 08:49:15 -04:00


example_iot_vm_setup

End-to-end driver for the IoT walking-skeleton VM-as-device flow. Runs two Harmony Scores in sequence:

  1. KvmVmScore — provision a libvirt VM from an Ubuntu 24.04 cloud image with a cloud-init seed ISO that authorizes one SSH key. Returns the booted VM's IP.
  2. IotDeviceSetupScore — SSH into the VM (via the Ansible-backed HostConfigurationProvider) and install podman + the iot-agent binary, drop the TOML config, bring up the systemd unit.

After a successful run, the VM is a fleet member reporting to NATS under the --device-id you chose, carrying the --group label you passed.
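The two-Score sequence can be sketched as below. `Score` and the two structs are stand-ins for Harmony's actual types; only the ordering (provision, then configure over SSH) reflects this README:

```rust
// Hypothetical sketch of the driver's shape, not Harmony's real API.
trait Score {
    type Output;
    fn run(&self) -> Self::Output;
}

struct KvmVmScore; // provisions the libvirt VM from the cloud image
impl Score for KvmVmScore {
    type Output = String;
    fn run(&self) -> String {
        // Placeholder for the booted VM's IP returned by step 1.
        "192.168.122.50".to_string()
    }
}

struct IotDeviceSetupScore {
    ip: String, // consumed from step 1's output
}
impl Score for IotDeviceSetupScore {
    type Output = ();
    fn run(&self) {
        println!("configuring {} over SSH", self.ip);
    }
}

fn main() {
    let ip = KvmVmScore.run();        // step 1: VM up, IP known
    IotDeviceSetupScore { ip }.run(); // step 2: install agent + config + unit
}
```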

One-time setup

WORK=/var/tmp/harmony-iot-smoke
mkdir -p "$WORK/ssh"

# 1. Ubuntu 24.04 cloud image (~700 MB) — cached between runs.
[ -f "$WORK/ubuntu-24.04-server-cloudimg-amd64.img" ] || \
  curl -fLo "$WORK/ubuntu-24.04-server-cloudimg-amd64.img" \
       https://cloud-images.ubuntu.com/releases/24.04/release/ubuntu-24.04-server-cloudimg-amd64.img

# 2. SSH keypair the VM will trust.
ssh-keygen -t ed25519 -N '' -f "$WORK/ssh/id_ed25519"

# 3. Runtime deps — Harmony self-installs Ansible into a managed venv
#    under $HARMONY_DATA_DIR/ansible-venv on first run, so you only need
#    python3 + venv on the runner. No system-wide `ansible` needed.
# On Arch:
#   sudo pacman -S libvirt qemu-full xorriso python
# On Debian/Ubuntu:
#   sudo apt install libvirt-daemon-system qemu-kvm xorriso python3 python3-venv

# 4. libvirt default network.
sudo virsh net-start default
sudo virsh net-autostart default

Run

cargo build -p iot-agent-v0

cargo run -p example_iot_vm_setup -- \
  --base-image /var/tmp/harmony-iot-smoke/ubuntu-24.04-server-cloudimg-amd64.img \
  --ssh-pubkey /var/tmp/harmony-iot-smoke/ssh/id_ed25519.pub \
  --ssh-privkey /var/tmp/harmony-iot-smoke/ssh/id_ed25519 \
  --work-dir /var/tmp/harmony-iot-smoke \
  --agent-binary target/debug/iot-agent-v0 \
  --nats-url nats://192.168.122.1:4222

Changing groups

Re-running with a different --group rewrites /etc/iot-agent/config.toml on the VM and restarts the agent; the VM itself is not reprovisioned.

cargo run -p example_iot_vm_setup -- ... --group group-b

Full end-to-end via smoke test

See iot/scripts/smoke-a3.sh — stands up NATS in a podman container, runs this example, asserts the agent's status lands in NATS.