Architecture Decision Record: Higher-Order Topologies

Initial Author: Jean-Gabriel Gill-Couture
Initial Date: 2025-12-08
Last Updated Date: 2025-12-08

Status

Implemented

Context

Harmony models infrastructure as Topologies (deployment targets like K8sAnywhereTopology, LinuxHostTopology) implementing Capabilities (tech traits like PostgreSQL, Docker).

Higher-Order Topologies (e.g., FailoverTopology<T>) compose and orchestrate capabilities across multiple underlying topologies. A FailoverTopology<T>, for example, deploys a primary and a replica T, connects them, and can promote the replica to primary and later revert to the original state.
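
For orientation, a minimal sketch of this shape, using the names from this ADR (the exact signatures in Harmony may differ):

use async_trait::async_trait;

/// Illustrative placeholder; the real PostgreSQLConfig carries deployment settings.
pub struct PostgreSQLConfig;

/// A Capability: a technology trait that a topology can provide.
#[async_trait]
pub trait PostgreSQL {
    async fn deploy(&self, config: &PostgreSQLConfig) -> Result<String, String>;
}

/// A concrete deployment target providing that capability.
pub struct K8sAnywhereTopology;

#[async_trait]
impl PostgreSQL for K8sAnywhereTopology {
    async fn deploy(&self, _config: &PostgreSQLConfig) -> Result<String, String> {
        // Provision PostgreSQL on the Kubernetes target (details elided).
        todo!()
    }
}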

A naive design requires a manual impl Capability for HigherOrderTopology<T> for every T and every capability, causing:

  • Impl explosion: N topologies × M capabilities = N×M boilerplate.
  • ISP violation: Topologies forced to impl unrelated capabilities.
  • Maintenance hell: A new topology needs impls for all orchestrated capabilities; a new capability needs impls for every topology and higher-order type.
  • Barrier to extension: Users can't easily add topologies without todos/panics.

This makes scaling Harmony impractical as the ecosystem grows.
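
Concretely, the naive approach hand-writes one impl per (T, capability) pair; a sketch, reusing the trait sketched above and the FailoverTopology<T> struct defined in the Decision below:

// Naive design: one hand-written impl per topology per capability.
#[async_trait]
impl PostgreSQL for FailoverTopology<K8sAnywhereTopology> {
    async fn deploy(&self, _config: &PostgreSQLConfig) -> Result<String, String> {
        todo!() // orchestration logic, duplicated here
    }
}

#[async_trait]
impl PostgreSQL for FailoverTopology<LinuxHostTopology> {
    async fn deploy(&self, _config: &PostgreSQLConfig) -> Result<String, String> {
        todo!() // near-identical copy of the impl above
    }
}

// ...repeated again for Docker, and for every future topology or capability.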

Decision

Use blanket trait impls on higher-order topologies to automatically derive orchestration:

use async_trait::async_trait;

/// Higher-Order Topology: Orchestrates capabilities across sub-topologies.
pub struct FailoverTopology<T> {
    /// Primary sub-topology.
    primary: T,
    /// Replica sub-topology.
    replica: T,
}

/// Automatically provides PostgreSQL failover for *any* `T: PostgreSQL`.
/// Delegates to primary for queries; orchestrates deploy across both.
#[async_trait]
impl<T: PostgreSQL> PostgreSQL for FailoverTopology<T> {
    async fn deploy(&self, config: &PostgreSQLConfig) -> Result<String, String> {
        // Deploy the primary; extract its certs/endpoint; then deploy the
        // replica with pg_basebackup + TLS passthrough.
        // (Full implementation elided here; each step is logged.)
        todo!()
    }

    // Delegate queries to primary.
    async fn get_replication_certs(&self, cluster_name: &str) -> Result<ReplicationCerts, String> {
        self.primary.get_replication_certs(cluster_name).await
    }
    // ...
}

/// Similarly for other capabilities.
#[async_trait]
impl<T: Docker> Docker for FailoverTopology<T> {
    // Failover Docker orchestration (methods elided).
}

Key properties:

  • Auto-derivation: Failover<K8sAnywhere> gets PostgreSQL iff K8sAnywhere: PostgreSQL.
  • No boilerplate: One blanket impl per capability per higher-order type.

Rationale

  • Composition via generics: Rust trait solver auto-selects impls; zero runtime cost.
  • Compile-time safety: Missing T: Capability → compile error (no panics).
  • Scalable: O(capabilities) impls per higher-order; new T auto-works.
  • ISP-respecting: Capabilities only surface if sub-topology provides.
  • Centralized logic: Orchestration (e.g., cert propagation) in one place.

Example usage:

// ✅ Works: K8sAnywhere: PostgreSQL → Failover provides failover PG
let pg_failover: FailoverTopology<K8sAnywhereTopology> = ...;
pg_failover.deploy(&config).await;

// ✅ Works: LinuxHost: Docker → Failover provides failover Docker
let docker_failover: FailoverTopology<LinuxHostTopology> = ...;
docker_failover.deploy_docker(...).await;

// ❌ Compile fail: K8sAnywhere !: Docker
let invalid: FailoverTopology<K8sAnywhereTopology>;
invalid.deploy_docker(...); // `T: Docker` bound unsatisfied

Consequences

Pros:

  • Extensible: New topology AWSTopology: PostgreSQL → instant Failover<AWSTopology>: PostgreSQL.
  • Lean: No useless impls (e.g., no K8sAnywhere: Docker).
  • Observable: Logs trace every step.

Cons:

  • Monomorphization: Generics generate code per T (mitigated: few Ts).
  • Delegation opacity: Relies on rustdoc/logs for internals.

Alternatives considered

  • Manual per-T impls (impl PG for Failover<K8s> {..}, impl PG for Failover<Linux> {..}). Pros: explicit control. Cons: N×M explosion; violates ISP; hard to extend.
  • Dynamic trait objects (Box<dyn AnyCapability>). Pros: runtime flexibility. Cons: performance hit; type erasure; error-prone dispatch.
  • Mega-topology trait (all-in-one OrchestratedTopology). Pros: simple wiring. Cons: monolithic; poor composition.
  • Registry dispatch (runtime capability lookup). Pros: decoupled. Cons: complex; no compile-time safety; perf/debug overhead.
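
For contrast, a minimal sketch of the rejected trait-object approach; the AnyCapability trait and its as_postgres helper are hypothetical:

/// Hypothetical erased-capability trait for runtime dispatch.
pub trait AnyCapability: Send + Sync {
    /// Some(..) only if this topology actually provides PostgreSQL.
    fn as_postgres(&self) -> Option<&dyn PostgreSQL>;
}

pub struct DynFailoverTopology {
    primary: Box<dyn AnyCapability>,
    replica: Box<dyn AnyCapability>,
}

impl DynFailoverTopology {
    pub async fn deploy_pg(&self, config: &PostgreSQLConfig) -> Result<String, String> {
        // Whether `primary` speaks PostgreSQL is only discovered at runtime:
        // a mismatch becomes an Err, not a compile error, and every call
        // pays for dynamic dispatch.
        self.primary
            .as_postgres()
            .ok_or_else(|| "primary does not provide PostgreSQL".to_string())?
            .deploy(config)
            .await
    }
}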

Selected: Blanket impls leverage Rust generics for safe, zero-cost composition.

Additional Notes

  • Applies to MultisiteTopology<T>, ShardedTopology<T>, etc.
  • FailoverTopology in failover.rs is the first implementation.
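
As a sketch of how the same blanket-impl pattern would extend to ShardedTopology<T> (shape and fan-out strategy are illustrative, not the final design):

/// Same pattern: one blanket impl per capability, for any T that provides it.
pub struct ShardedTopology<T> {
    shards: Vec<T>,
}

#[async_trait]
impl<T: PostgreSQL + Send + Sync> PostgreSQL for ShardedTopology<T> {
    async fn deploy(&self, config: &PostgreSQLConfig) -> Result<String, String> {
        // Fan out: deploy one PostgreSQL instance per shard, collecting endpoints.
        let mut endpoints = Vec::with_capacity(self.shards.len());
        for shard in &self.shards {
            endpoints.push(shard.deploy(config).await?);
        }
        Ok(endpoints.join(","))
    }
}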