# Architecture Decision Record: Global Orchestration Mesh & The Harmony Agent
**Status:** Proposed
**Date:** 2025-12-19
## Context
Harmony is designed to enable a truly decentralized infrastructure where independent clusters—owned by different organizations or running on diverse hardware—can collaborate reliably. This vision combines the decentralization of Web3 with the performance and capabilities of Web2.
Currently, Harmony operates as a stateless CLI tool, invoked manually or via CI runners. While effective for deployment, this model presents a critical limitation: **a CLI cannot react to real-time events.**
To achieve automated failover and dynamic workload management, we need a system that is "always on." Relying on manual intervention or scheduled CI jobs to recover from a cluster failure creates unacceptable latency and prevents us from scaling to thousands of nodes.
Furthermore, we face a challenge in serving diverse workloads:
* **Financial workloads** require absolute consistency (CP - Consistency/Partition Tolerance).
* **AI/Inference workloads** require maximum availability (AP - Availability/Partition Tolerance).
Many other use cases exist, but these are the two extremes.
We need a unified architecture that automates cluster coordination and supports both consistency models without requiring a complete re-architecture in the future.
## Decision
We propose a fundamental architectural evolution, one that has been anticipated since the start of the project: transitioning Harmony from a purely ephemeral CLI tool to a system that includes a persistent **Harmony Agent**. This Agent will connect to a **Global Orchestration Mesh** based on a strongly consistent protocol.
The proposal consists of four key pillars:
### 1. The Harmony Agent (New Component)
We will develop a long-running process (Daemon/Agent) to be deployed alongside workloads.
* **Shift from CLI:** Unlike the CLI, which applies configuration and exits, the Agent maintains a persistent connection to the mesh.
* **Responsibility:** It actively monitors cluster health, participates in consensus, and executes lifecycle commands (start/stop/fence) instantly when the mesh dictates a state change. A sketch of this event loop follows.
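To make the contrast with the CLI concrete, here is a minimal sketch of what the Agent's event loop could look like, assuming a `tokio` runtime. `MeshEvent` and its variants are hypothetical names invented for this illustration, not the actual Harmony API.
```rust
// Hypothetical sketch of the Agent's event loop; names are illustrative.
// Requires the `tokio` crate with the "full" feature set.
use tokio::sync::mpsc;

enum MeshEvent {
    LeaderChanged { cluster: String },
    PartitionDetected { on_minority_side: bool },
    Shutdown,
}

async fn run_agent(mut events: mpsc::Receiver<MeshEvent>) {
    // Unlike the CLI, which applies configuration and exits, this loop
    // stays connected and reacts to mesh events as they arrive.
    while let Some(event) = events.recv().await {
        match event {
            MeshEvent::LeaderChanged { cluster } => {
                println!("leadership moved to {cluster}; re-evaluating placement");
            }
            MeshEvent::PartitionDetected { on_minority_side } => {
                // Whether to fence or keep running is decided by the
                // configured FailoverStrategy (see section 4).
                println!("partition detected (minority side: {on_minority_side})");
            }
            MeshEvent::Shutdown => break,
        }
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(16);
    tx.send(MeshEvent::LeaderChanged { cluster: "cluster-a".into() })
        .await
        .unwrap();
    tx.send(MeshEvent::Shutdown).await.unwrap();
    run_agent(rx).await;
}
```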
### 2. The Technology: NATS JetStream
We will utilize **NATS JetStream** as the underlying transport and consensus layer for the Agent and the Mesh.
* **Why not raw Raft?** Building directly on a Raft library would require us to implement and maintain the transport layer, log compaction, snapshotting, and peer discovery ourselves. NATS JetStream provides a battle-tested distributed log and Key-Value store (built on Raft) out of the box, along with a high-performance pub/sub system for event propagation.
* **Role:** It will act as the "source of truth" for the cluster state, as sketched below.
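As a rough illustration of that role, the sketch below keeps cluster state in a replicated JetStream Key-Value bucket via the `async-nats` crate. The bucket name, key layout, and replica count are placeholders, not Harmony's actual schema.
```rust
// Hypothetical sketch using the `async-nats` crate (with tokio).
// Bucket and key names are placeholders, not Harmony's schema.
use async_nats::jetstream;

#[tokio::main]
async fn main() -> Result<(), async_nats::Error> {
    // Connect to a NATS server and open a JetStream context.
    let client = async_nats::connect("nats://localhost:4222").await?;
    let js = jetstream::new(client);

    // A replicated Key-Value bucket, backed by JetStream's Raft-based
    // streams, acts as the source of truth for cluster state.
    let state = js
        .create_key_value(jetstream::kv::Config {
            bucket: "harmony-cluster-state".to_string(),
            num_replicas: 3, // consensus across three mesh peers
            ..Default::default()
        })
        .await?;

    // An acknowledged write implies acceptance by the bucket's Raft
    // group, not just a single node.
    state
        .put("clusters.cluster-a.status", "healthy".into())
        .await?;

    if let Some(value) = state.get("clusters.cluster-a.status").await? {
        println!("cluster-a status: {}", String::from_utf8_lossy(&value));
    }
    Ok(())
}
```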
### 3. Strong Consistency at the Mesh Layer
The mesh will operate with **Strong Consistency** by default.
* All critical cluster state changes (topology updates, lease acquisitions, leadership elections) will require consensus among the Agents.
* This ensures that in the event of a network partition, we have a mathematical guarantee of which side holds the valid state, preventing data corruption (see the lease sketch below).
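For example, a leadership lease can be built on the Key-Value store from the previous sketch: `create` succeeds for exactly one contender because every write is serialized through consensus. The key name is a placeholder, and a production lease would also need expiry (e.g. via the bucket's `max_age`) and renewal.
```rust
// Hypothetical lease sketch; `state` is the jetstream::kv::Store from
// the previous example and the key name is a placeholder.
async fn try_acquire_leadership(
    state: &async_nats::jetstream::kv::Store,
    agent_id: &str,
) -> bool {
    // `create` succeeds only if the key does not already exist. Since
    // writes are serialized through the Raft-backed stream, at most one
    // agent can win, even under concurrent attempts.
    state
        .create("leases.workload-leader", agent_id.to_string().into())
        .await
        .is_ok()
}
```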
### 4. Public UX: The `FailoverStrategy` Abstraction
To keep the user experience stable and simple, we will hide the complexity of the mesh behind a high-level configuration API, tentatively called `FailoverStrategy`.
The user defines the *intent* in their config, and the Harmony Agent automates the *execution* (see the sketch after this list):
* **`FailoverStrategy::AbsoluteConsistency`**:
* *Use Case:* Banking, Transactional DBs.
* *Behavior:* If the mesh detects a partition, the Agent on the minority side immediately halts workloads. No split-brain is ever allowed.
* **`FailoverStrategy::SplitBrainAllowed`**:
* *Use Case:* LLM Inference, Stateless Web Servers.
* *Behavior:* If a partition occurs, the Agent keeps workloads running to maximize uptime. State is reconciled when connectivity returns.
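A minimal sketch of what this API could look like; the final shape may differ, and `should_halt` is an invented helper showing how the Agent might consume the strategy during a partition.
```rust
/// Proposed public API (sketch); variants mirror the two extremes above.
pub enum FailoverStrategy {
    /// Banking, transactional DBs: halt the minority side of a
    /// partition so that no split-brain can ever occur.
    AbsoluteConsistency,
    /// LLM inference, stateless web servers: keep workloads running
    /// through a partition and reconcile state afterwards.
    SplitBrainAllowed,
}

impl FailoverStrategy {
    /// Called by the Agent when the mesh reports a partition.
    pub fn should_halt(&self, on_minority_side: bool) -> bool {
        match self {
            FailoverStrategy::AbsoluteConsistency => on_minority_side,
            FailoverStrategy::SplitBrainAllowed => false,
        }
    }
}
```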
## Rationale
**The Necessity of an Agent**
You cannot automate what you do not monitor. Moving to an Agent-based model is the only way to achieve sub-second reaction times to infrastructure failures. It transforms Harmony from a deployment tool into a self-healing platform.
**Scaling & Decentralization**
To allow independent clusters to collaborate, they need a shared language. A strongly consistent mesh allows Cluster A (Organization X) and Cluster B (Organization Y) to agree on workload placement without a central authority.
**Why Strong Consistency First?**
It is technically feasible to relax a strongly consistent system to allow for "Split Brain" behavior (AP) when the user requests it. However, it is nearly impossible to take an eventually consistent system and force it to be strongly consistent (CP) later. By starting with strict constraints, we cover the hardest use cases (Finance) immediately.
**Future Topologies**
While our immediate need is `FailoverTopology` (multi-site failover), this architecture supports any future topology logic:
* **`CostTopology`**: Agents negotiate to route workloads to the cluster with the cheapest spot instances.
* **`HorizontalTopology`**: Spreading a single workload across 100 clusters for massive scale.
* **`GeoTopology`**: Ensuring data stays within specific legal jurisdictions.
The mesh provides the *capability* (consensus and messaging); the topology provides the *logic*.
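To make that split concrete, here is a hypothetical sketch of how a topology could be modeled; the trait name, fields, and signatures are invented for this illustration.
```rust
/// Facts the mesh provides about each cluster (illustrative fields).
pub struct ClusterInfo {
    pub name: String,
    pub healthy: bool,
    pub spot_price_per_hour: f64,
    pub jurisdiction: String,
}

/// A topology turns mesh-provided facts into placement decisions.
pub trait Topology {
    fn place(&self, workload: &str, clusters: &[ClusterInfo]) -> Vec<String>;
}

/// Example: route everything to the cheapest healthy cluster.
pub struct CostTopology;

impl Topology for CostTopology {
    fn place(&self, _workload: &str, clusters: &[ClusterInfo]) -> Vec<String> {
        clusters
            .iter()
            .filter(|c| c.healthy)
            .min_by(|a, b| {
                a.spot_price_per_hour
                    .partial_cmp(&b.spot_price_per_hour)
                    .unwrap_or(std::cmp::Ordering::Equal)
            })
            .map(|c| vec![c.name.clone()])
            .unwrap_or_default()
    }
}
```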
## Consequences
**Positive**
* **Automation:** Eliminates manual failover, enabling massive scale.
* **Reliability:** Guarantees data safety for critical workloads by default.
* **Flexibility:** A single codebase serves both high-frequency trading and AI inference.
* **Stability:** The public API remains abstract, allowing us to optimize the mesh internals without breaking user code.
**Negative**
* **Deployment Complexity:** Users must now deploy and maintain a running service (the Agent) rather than simply invoking a CLI binary on demand.
* **Engineering Complexity:** Integrating NATS JetStream and handling distributed state machines is significantly more complex than the current CLI logic.
## Implementation Plan (Short Term)
1. **Agent Bootstrap:** Create the initial scaffold for the Harmony Agent (daemon).
2. **Mesh Integration:** Prototype NATS JetStream embedding within the Agent.
3. **Strategy Implementation:** Add `FailoverStrategy` to the configuration schema and implement the logic in the Agent to read and act on it.
4. **Migration:** Transition the current manual failover scripts into event-driven logic handled by the Agent.