ADR-022: Agent Desired-State — Alternatives and Recommendation
Status: Proposed
Date: 2026-04-09
Supersedes (candidate): ADR-021 shell-executor proposal
Context
ADR-021 drafted a first-pass "desired-state convergence" mechanism for the Harmony Agent (ADR-016) in the form of a shell-command executor. On review, that shape raised serious concerns (see ADR-021 §"Open Questions and Concerns"): it is incoherent with Harmony's Score-Topology-Interpret pattern, it is not idempotent, it has no resource model, no typed status, no lifecycle, and it weakens the agent's security posture.
Separately, the team has been converging on a "mini-kubelet" framing for the IoT agent:
- The agent owns a small, fixed set of reconcilers, one per resource type it can manage (systemd unit, container, file, network interface, overlay config...).
- The desired state is a typed manifest — a bag of resources with identities, generations, and typed status.
- The agent runs reconcile loops similar to kubelet's Pod Lifecycle Event Generator (PLEG): for each managed resource, observe actual, compare to desired, apply the minimum delta, update typed status.
- Failure and drift are first-class. "I tried, it failed, here is why" is a valid steady state.
ADR-017-3 already borrows Kubernetes vocabulary (staleness, fencing, promotion) on purpose. Doubling down on the kubelet metaphor at the desired-state layer is the natural continuation, not a tangent.
This ADR enumerates the candidate designs, argues their tradeoffs honestly, and recommends a path.
Alternatives
Alternative A — Shell Command Executor (ADR-021 as-is)
Shape: `DesiredState { command: String, generation: u64 }`; the agent runs `sh -c $command` and pipes stdout/stderr/exit code into `ActualState`.
Pros:
- Trivial to implement. ~200 LOC, already on this branch.
- Works for any task that can be expressed as a shell pipeline — maximum flexibility at v1.
- Zero new abstractions: reuses existing NATS KV watch + CAS patterns.
- End-to-end demo-able in an afternoon.
Cons:
- Wrong abstraction level. Harmony's entire thesis is "no more stringly-typed YAML/shell mud pits". This design ships that mud pit to the edge.
- Not idempotent. The burden of idempotency falls on whoever writes the command string.
  `systemctl start foo` run twice is fine; `apt install foo && echo "done" >> /etc/state` run twice is broken. We cannot enforce correctness.
- No resource model. No concept of "this manifest owns X". No diffing, no GC, no drift detection, no "what does this agent currently run?".
- No typed status. stdout/stderr/exit_code does not tell a fleet dashboard "container nginx is running, restarted 3 times, last healthy 2s ago". It tells it "this bash ran and exited 0 once, three minutes ago".
- No lifecycle. Fire-and-forget; post-exit the agent has no notion of whether the resource is still healthy.
- Security. Even with NATS ACLs, the API's shape invites abuse. Any bug in the control plane that lets a user influence a desired-state write equals RCE on every Pi.
- Incoherent with ADR-017-3 and Score-Topology-Interpret. Introduces a parallel concept that has nothing to do with the rest of Harmony.
Verdict: Acceptable only as a named escape hatch inside a richer design (a ShellJob resource variant, explicitly labeled as such and audited). Not acceptable as the whole design.
Alternative B — Mini-Kubelet with Typed Resource Manifests
Shape: The agent owns a fixed set of Resource variants and one reconciler per variant.
```rust
/// The unit of desired state shipped to an agent.
/// Serialized to JSON, pushed via NATS KV to `desired-state.<agent-id>`.
struct AgentManifest {
    generation: u64, // monotonic, control-plane assigned
    resources: Vec<ManagedResource>,
}

struct ManagedResource {
    /// Stable, manifest-unique identity. Used for diffing across generations.
    id: ResourceId,
    spec: ResourceSpec,
}

enum ResourceSpec {
    SystemdUnit(SystemdUnitSpec),     // ensure unit exists, enabled, active
    Container(ContainerSpec),         // podman/docker run with image, env, volumes
    File(FileSpec),                   // path, mode, owner, content (hash or inline)
    NetworkConfig(NetworkConfigSpec), // interface, addresses, routes
    ShellJob(ShellJobSpec),           // explicit escape hatch, audited separately
    // ...extend carefully
}

/// What the agent reports back.
struct AgentStatus {
    manifest_generation: u64, // which desired-state gen this reflects
    observed_generation: u64, // highest gen the agent has *processed*
    resources: HashMap<ResourceId, ResourceStatus>,
    conditions: Vec<AgentCondition>, // Ready, Degraded, Reconciling, ...
}

enum ResourceStatus {
    Pending,
    Reconciling { since: Timestamp },
    Ready { since: Timestamp, details: ResourceReadyDetails },
    Failed { since: Timestamp, error: String, retry_after: Option<Timestamp> },
}
```
Each reconciler implements a small trait:
```rust
trait Reconciler {
    type Spec;
    type Status;

    async fn observe(&self, id: &ResourceId) -> Result<Self::Status>;
    async fn reconcile(&self, id: &ResourceId, spec: &Self::Spec) -> Result<Self::Status>;
    async fn delete(&self, id: &ResourceId) -> Result<()>;
}
```
The agent loop becomes:
- Watch `desired-state.<agent-id>` for the latest `AgentManifest`.
- On change, compute the diff vs. the observed set: additions, updates, deletions.
- Dispatch each resource to its reconciler. Reconcilers are idempotent by contract.
- Aggregate per-resource status into `AgentStatus`, write to `actual-state.<agent-id>` via CAS.
- Re-run periodically to detect drift even when desired state has not changed (PLEG-equivalent).
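The diff step of the loop above can be sketched as a pure function over the observed and desired resource sets. This uses simplified stand-in types, with `ResourceId` as a plain string and specs as opaque strings, purely for illustration:

```rust
use std::collections::BTreeMap;

#[derive(Clone, Debug)]
struct ManagedResource {
    id: String,   // stand-in for ResourceId
    spec: String, // stand-in for ResourceSpec
}

#[derive(Debug)]
struct Diff {
    added: Vec<String>,
    updated: Vec<String>,
    removed: Vec<String>,
}

/// Compute the three action sets the agent loop dispatches on:
/// resources new in `desired`, changed between the two, or gone from `desired`.
fn diff(observed: &[ManagedResource], desired: &[ManagedResource]) -> Diff {
    let old: BTreeMap<String, String> =
        observed.iter().map(|r| (r.id.clone(), r.spec.clone())).collect();
    let new: BTreeMap<String, String> =
        desired.iter().map(|r| (r.id.clone(), r.spec.clone())).collect();
    Diff {
        added: new.keys().filter(|id| !old.contains_key(*id)).cloned().collect(),
        updated: new
            .iter()
            .filter(|(id, spec)| old.get(*id).map_or(false, |o| o != *spec))
            .map(|(id, _)| id.clone())
            .collect(),
        removed: old.keys().filter(|id| !new.contains_key(*id)).cloned().collect(),
    }
}
```

Because the function is pure, the diff logic can be unit-tested without NATS or a real reconciler in the loop.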
Pros:
- Declarative and idempotent by construction. Reconcilers are required to be idempotent; the contract is enforced in Rust traits, not in docs.
- Typed status. Dashboards, alerts, and the control plane get structured data.
- Drift detection. Periodic re-observation catches "someone SSH'd in and stopped the service".
- Lifecycle. Each resource has a clear state machine; health is a first-class concept.
- Coherent with ADR-017-3. The kubelet framing becomes literal, not metaphorical.
- Narrow attack surface. The agent only knows how to do a handful of well-audited things. Adding a new capability is an explicit code change, not a new shell string.
- Composable with Harmony's existing philosophy. `ManagedResource` is to the agent what a Score is to a Topology, at a smaller scale.
Cons:
- More upfront design. Each reconciler needs to be written and tested.
- Requires us to commit to a resource type set and its status schema. Adding a new kind is a versioned change to the wire format.
- Duplicates, at the edge, some of the vocabulary already present in Harmony's Score layer (e.g., `FileDeployment`, container deployments). Risk of two parallel abstractions evolving in tension.
- Harder to demo in a single afternoon.
Verdict: Strong candidate. Matches the team's mini-kubelet intuition directly.
Alternative C — Embedded Score Interpreter on the Agent
Shape: The desired state is a Harmony Score (or a set of Scores), serialized and pushed via NATS. The agent hosts a local `PiTopology` that exposes a small, carefully chosen set of capabilities (`SystemdHost`, `ContainerRuntime`, `FileSystemHost`, `NetworkConfigurator`, ...). The agent runs the Score's `interpret` against that local topology.
```rust
// On the control plane:
let score = SystemdServiceScore { ... };
let wire: SerializedScore = score.to_wire()?;
nats.put(format!("desired-state.{agent_id}"), wire).await?;

// On the agent:
let score = SerializedScore::decode(payload)?;
let topology = PiTopology::new();
let outcome = score.interpret(&inventory, &topology).await?;
// Outcome is already a typed Harmony result (SUCCESS/NOOP/FAILURE/RUNNING/...).
```
Pros:
- Zero new abstractions. The agent becomes "a Harmony executor that happens to run on a Pi". Everything we already know how to do in Harmony works, for free.
- Maximum coherence. There is exactly one way to describe desired state in the whole system: a Score. The type system enforces that a score requesting `K8sclient` cannot be shipped to a Pi topology that does not offer it — at compile time on the control plane, at deserialization time on the agent.
- Composability. Higher-order topologies (ADR-015) work unchanged: `FailoverTopology<PiTopology>` gets you HA at the edge for free.
- Single mental model for the whole team. "Write a Score" is already the Harmony primitive; no one needs to learn a second one.
Cons:
- Serializability. This is the hard one. Harmony Scores today hold trait objects, references to live topology state, and embedded closures in places. Making them uniformly serde-serializable is a non-trivial refactor that touches dozens of modules. We would be gating the IoT MVP on a cross-cutting refactor.
- Agent binary size. If "the agent can run any Score", it links every module. On a Pi Zero 2 W, that matters. We can mitigate with feature flags, but then we are back to "which scores does this agent support?" — i.e., we have reinvented resource-type registration, just spelled differently.
- Capability scoping is subtle. We have to be extremely careful about which capabilities `PiTopology` exposes. "A Pi can run containers" is true; "a Pi can run arbitrary k8s clusters" is not. Getting that boundary wrong opens the same attack surface as Alternative A, just hidden behind a Score.
- Control-plane UX. The central platform now needs to instantiate Scores for specific Pis, handle their inventories, and ship them. That is heavier than "push a JSON blob".
Verdict: The principled end state, almost certainly where we want to be in 18 months. Not shippable for the IoT MVP.
Alternative D — Hybrid: Typed Manifests Now, Scores Later
Shape: Ship Alternative B (typed `AgentManifest` with a fixed set of reconcilers). Keep the Score ambition (Alternative C) as an explicit roadmap item. When Scores become uniformly wire-serializable and `PiTopology` is mature, migrate by adding a `ResourceSpec::Score(SerializedScore)` variant. Eventually that variant may subsume the others.
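The migration seam can be made concrete with stub types (all hypothetical here): the Score path arrives as one more `ResourceSpec` variant, and Rust's exhaustive `match` forces every existing dispatch site to handle it explicitly when it lands:

```rust
// Stub payload types, standing in for the real specs.
struct ContainerSpec;
struct SystemdUnitSpec;
struct FileSpec;
struct SerializedScore;

enum ResourceSpec {
    Container(ContainerSpec),
    SystemdUnit(SystemdUnitSpec),
    File(FileSpec),
    // Added once Scores are uniformly wire-serializable (Alternative C).
    // Adding this variant is a compile error at every non-exhaustive match,
    // which is exactly the migration pressure we want.
    Score(SerializedScore),
}

/// Example dispatch site: the compiler rejects this match if it forgets
/// any variant, including the late-arriving Score one.
fn kind(spec: &ResourceSpec) -> &'static str {
    match spec {
        ResourceSpec::Container(_) => "container",
        ResourceSpec::SystemdUnit(_) => "systemd-unit",
        ResourceSpec::File(_) => "file",
        ResourceSpec::Score(_) => "score",
    }
}
```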
Pros:
- Shippable soon. Alternative B is the implementable core; we can have a fleet demo in weeks, not months.
- On a path to the ideal. We do not dead-end. The `ResourceSpec` enum becomes the migration seam.
- De-risks the Score serialization refactor. We learn what resource types we actually need on the edge before we refactor the Score layer.
- Lets us delete Alternative A cleanly. The shell executor either disappears or survives as a narrow, explicitly-audited `ResourceSpec::ShellJob` variant that documents itself as an escape hatch.
Cons:
- Temporarily maintains two vocabularies (`ResourceSpec` at the edge, `Score` in the core). There is a risk they drift before they reconverge.
- Requires team discipline to actually do the C migration and not leave B as the permanent design.
Verdict: Recommended.
Recommendation
Adopt Alternative D (Hybrid: typed manifests now, Scores later).
Reasoning:
- Speed to IoT MVP is real. Alternative C is a 3-6 month refactor of the Score layer before we can deploy anything; Alternative B can ship within the current iteration.
- Long-term coherence with Harmony's design philosophy is preserved because D has an explicit migration seam to C. We do not paint ourselves into a corner.
- The mini-kubelet framing is directly satisfied by B. Typed resources, reconciler loops, observed-generation pattern, PLEG-style drift detection. This is exactly what the team has been describing.
- Capability-trait discipline carries over cleanly. `Reconciler` is the agent-side analog of a capability trait (`DnsServer`, `K8sclient`, etc.). The rule "capabilities are industry concepts, not tools" applies to `ResourceSpec` too: we name it `Container`, not `Podman`; `SystemdUnit`, not `Systemctl`.
- The shell executor is not wasted work. It proved the NATS KV watch + typed CAS write pattern that Alternative B will also need. It becomes either `ResourceSpec::ShellJob` (audited escape hatch) or gets deleted.
- Security posture improves immediately. A fixed resource-type allowlist is dramatically tighter than "run any shell", even before we add signing or sandboxing.
- The IoT product use case actually is "deploy simple workloads to Pi fleets". Containers, systemd services, config files, network config. That is a short list, and it maps to four or five resource types. We do not need the full expressive power of a Score layer to hit the product milestone.
Specific Findings on the Current Implementation
`harmony_agent/src/desired_state.rs` (≈250 lines, implemented on this branch):
- Keep as scaffolding, do not wire into user tooling.
- The NATS KV watch loop, the `ActualState` CAS write, and the generation-tracking skeleton are all reusable by Alternative B. They are the only parts worth keeping.
- The `execute_command` function (shelling out via `Command::new("sh").arg("-c")`) is the part that bakes in the wrong abstraction. It should be:
  - Moved behind a `ResourceSpec::ShellJob` reconciler if we decide to keep shell as an explicit, audited escape hatch, or
  - Deleted when the first two real reconcilers (Container, SystemdUnit) land.
- The `DesiredStateConfig`/`ActualState` types in `harmony_agent/src/agent/config.rs` are too narrow. They should be replaced by `AgentManifest`/`AgentStatus` as sketched above. `generation: u64` at the manifest level stays; per-resource status is added.
- The existing tests (`executes_command_and_reports_result`, `reports_failure_for_bad_command`) test the shell executor specifically; they will be deleted or repurposed when the resource model lands.
Open Questions (to resolve before implementing B)
- What is the minimum viable resource type set for the IoT MVP? Proposal: `Container`, `SystemdUnit`, `File`. Defer `NetworkConfig`, `ShellJob` until a concrete use case appears.
- Where does `AgentManifest` live in the crate graph? It is consumed by both the control plane and the agent. Likely `harmony_agent_types` (new) or an existing shared types crate.
- How are images, files, and secrets referenced? By content hash + asset store URL (ADR: `harmony_assets`)? By inline payload under a size cap?
- What is the reconcile cadence? On NATS KV change + a periodic drift check every N seconds? What is N on a Pi?
- How does `AgentStatus` interact with the heartbeat loop? Is the status written on every reconcile, or aggregated into the heartbeat payload? The heartbeat cares about liveness; the status cares about workload health. They are probably separate KV keys, coupled by generation.
- How do we handle partial failures and retry? Exponential backoff per resource? Global pause on repeated failures? Surface to the control plane via `conditions`?
- Can the agent refuse a manifest it does not understand? (Forward compatibility: a new `ResourceSpec` variant rolled out before the agent upgrade.) Proposal: fail loudly and report a typed `UnknownResource` status so the control plane can detect version skew.
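The last question can be sketched without committing to a serialization library: if the wire format carries a `kind` tag per resource, unknown tags map to a typed value instead of a hard error. The `decode_spec` helper and the `DecodedSpec` enum below are hypothetical, with payload parsing elided:

```rust
#[derive(PartialEq, Debug)]
enum DecodedSpec {
    Container,   // payload parsing elided for the sketch
    SystemdUnit,
    File,
    /// Forward compatibility: report the unrecognized kind upward
    /// so the control plane can detect version skew; never crash.
    Unknown { kind: String },
}

/// Map a wire-level `kind` tag to a typed spec, routing anything
/// this agent version does not understand into `Unknown`.
fn decode_spec(kind: &str) -> DecodedSpec {
    match kind {
        "Container" => DecodedSpec::Container,
        "SystemdUnit" => DecodedSpec::SystemdUnit,
        "File" => DecodedSpec::File,
        other => DecodedSpec::Unknown { kind: other.to_string() },
    }
}
```

An agent that decodes an `Unknown` resource can report it in `AgentStatus` as a per-resource failure condition rather than rejecting the whole manifest.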
Decision
None yet. This ADR is explicitly a proposal to adopt Alternative D, pending team review. If approved, a follow-up ADR-023 will specify the concrete AgentManifest / AgentStatus schema and the initial reconciler set.