Compare commits


1 Commit

Author SHA1 Message Date
adb0e7014d adr: Staless based failover mechanism ADR proposed
2026-01-08 23:58:30 -05:00
15 changed files with 95 additions and 752 deletions

View File

@@ -0,0 +1,95 @@
# Architecture Decision Record: Staleness-Based Failover Mechanism & Observability
**Status:** Proposed
**Date:** 2026-01-09
**Precedes:** [016-Harmony-Agent-And-Global-Mesh-For-Decentralized-Workload-Management.md](https://git.nationtech.io/NationTech/harmony/raw/branch/master/adr/016-Harmony-Agent-And-Global-Mesh-For-Decentralized-Workload-Management.md)
## Context
In ADR 016, we established the **Harmony Agent** and the **Global Orchestration Mesh** (powered by NATS JetStream) as the foundation for our decentralized infrastructure. We defined the high-level need for a `FailoverStrategy` that can support both financial consistency (CP) and AI availability (AP).
However, a specific implementation challenge remains: **How do we reliably detect node failure without losing the ability to debug the event later?**
Standard distributed systems often use "Key Expiration" (TTL) for heartbeats. If a key disappears, the node is presumed dead. While simple, this approach is catastrophic for post-mortem analysis. When the key expires, the evidence of *when* and *how* the failure occurred evaporates.
For NationTech's vision of **Humane Computing**—where micro datacenters might be heating a family home or running a local business—reliability and diagnosability are paramount. If a cluster fails over, we owe it to the user to provide a clear, historical log of exactly what happened. We cannot build a "wonderful future for computers" on ephemeral, untraceable errors.
## Decision
We will implement a **Staleness Detection** mechanism rather than a Key Expiration mechanism. We will leverage NATS JetStream Key-Value (KV) stores with **History Enabled** to create an immutable audit trail of cluster health.
### 1. The "Black Box" Flight Recorder (NATS Configuration)
We will utilize a persistent NATS KV bucket named `harmony_failover`.
* **Storage:** File (Persistent).
* **History:** Set to `64` (or higher). This allows us to query the last 64 heartbeat entries to visualize the exact degradation of the primary node before failure.
* **TTL:** None. Data never disappears; it only becomes "stale."
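As a provisioning sketch, the bucket described above can be created with the standard `nats` CLI (bucket and key names from this ADR; flag spellings are the CLI's usual options):

```shell
# Create the persistent "black box" bucket: file storage, 64 revisions
# per key, and no --ttl flag, so entries never expire (they only go stale).
nats kv add harmony_failover --history=64 --storage=file

# After an incident, replay the degradation timeline of the Primary:
nats kv history harmony_failover primary_heartbeat
```

These are one-time setup and post-mortem commands run against a live NATS deployment; the Agent itself would perform the equivalent calls through its NATS client.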
### 2. Data Structures
We will define two primary schemas to manage the state.
**A. The Rules of Engagement (`cluster_config`)**
This persistent key defines the behavior of the mesh. It allows us to tune failover sensitivity dynamically without redeploying the Agent binary.
```json
{
"primary_site_id": "site-a-basement",
"replica_site_id": "site-b-cloud",
"failover_timeout_ms": 5000, // Time before Replica takes over
"heartbeat_interval_ms": 1000 // Frequency of Primary updates
}
```
> **Note:** The location for this configuration data structure is TBD. See https://git.nationtech.io/NationTech/harmony/issues/206
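A minimal std-only sketch of how the Agent might hold this configuration in memory (field names mirror the JSON above; the real implementation would likely derive serde deserialization, which is omitted here to keep the sketch self-contained):

```rust
use std::time::Duration;

/// In-memory mirror of the `cluster_config` key (illustrative).
#[derive(Debug, Clone, PartialEq)]
pub struct ClusterConfig {
    pub primary_site_id: String,
    pub replica_site_id: String,
    pub failover_timeout_ms: u64,
    pub heartbeat_interval_ms: u64,
}

impl ClusterConfig {
    pub fn failover_timeout(&self) -> Duration {
        Duration::from_millis(self.failover_timeout_ms)
    }
    pub fn heartbeat_interval(&self) -> Duration {
        Duration::from_millis(self.heartbeat_interval_ms)
    }
}

fn main() {
    let cfg = ClusterConfig {
        primary_site_id: "site-a-basement".into(),
        replica_site_id: "site-b-cloud".into(),
        failover_timeout_ms: 5000,
        heartbeat_interval_ms: 1000,
    };
    // Sanity rule implied by the schema: the Replica should miss several
    // heartbeats before the timeout fires, not just one.
    assert!(cfg.failover_timeout() >= 3 * cfg.heartbeat_interval());
    println!("{cfg:?}");
}
```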
**B. The Heartbeat (`primary_heartbeat`)**
The Primary writes this; the Replica watches it.
```json
{
"site_id": "site-a-basement",
"status": "HEALTHY",
"counter": 10452,
"timestamp": 1704661549000
}
```
### 3. The Failover Algorithm
**The Primary (Site A) Logic:**
The Primary's ability to write to the mesh is its "License to Operate."
1. **Write Loop:** Attempts to write `primary_heartbeat` every `heartbeat_interval_ms`.
2. **Self-Preservation (Fencing):** If the write fails (NATS Ack timeout or NATS unreachable), the Primary **immediately self-demotes**. It assumes it is network-isolated. This prevents Split Brain scenarios where a partitioned Primary continues to accept writes while the Replica promotes itself.
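The fencing rule in step 2 can be reduced to a pure decision function. A sketch, assuming a hypothetical `WriteResult` enum that the Agent would map NATS ack outcomes onto:

```rust
/// Outcome of one heartbeat write attempt, as seen by the Primary.
/// (Hypothetical enum for illustration only.)
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum WriteResult {
    Acked,
    AckTimeout,
    Unreachable,
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Role {
    Primary,
    Demoted,
}

/// Fencing: any failure to get the write acknowledged means the Primary may
/// be network-isolated, so it must immediately stop accepting writes. There
/// is deliberately no retry-before-demote branch here.
pub fn next_role_after_write(result: WriteResult) -> Role {
    match result {
        WriteResult::Acked => Role::Primary,
        WriteResult::AckTimeout | WriteResult::Unreachable => Role::Demoted,
    }
}

fn main() {
    assert_eq!(next_role_after_write(WriteResult::Acked), Role::Primary);
    assert_eq!(next_role_after_write(WriteResult::AckTimeout), Role::Demoted);
    assert_eq!(next_role_after_write(WriteResult::Unreachable), Role::Demoted);
}
```

Keeping this decision side-effect-free makes the Split Brain guarantee easy to unit-test in isolation from NATS itself.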
**The Replica (Site B) Logic:**
The Replica acts as the watchdog.
1. **Watch:** Subscribes to updates on `primary_heartbeat`.
2. **Staleness Check:** Maintains a local timer. Every time a heartbeat arrives, the timer resets.
3. **Promotion:** If the timer exceeds `failover_timeout_ms`, the Replica declares the Primary dead and promotes itself to Leader.
4. **Yielding:** If the Replica is Leader, but suddenly receives a valid, new heartbeat from the configured `primary_site_id` (indicating the Primary has recovered), the Replica will voluntarily **demote** itself to restore the preferred topology.
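The Replica's watch/promote/yield cycle above can be sketched as a small std-only state machine (names are hypothetical; the real Agent would drive this from a NATS KV watch):

```rust
use std::time::{Duration, Instant};

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum ReplicaRole {
    Standby,
    Leader,
}

/// Watchdog state for the Replica (illustrative sketch).
pub struct Watchdog {
    pub role: ReplicaRole,
    last_heartbeat: Instant,
    failover_timeout: Duration,
    primary_site_id: String,
}

impl Watchdog {
    pub fn new(primary_site_id: &str, failover_timeout: Duration) -> Self {
        Self {
            role: ReplicaRole::Standby,
            last_heartbeat: Instant::now(),
            failover_timeout,
            primary_site_id: primary_site_id.to_string(),
        }
    }

    /// Steps 2 and 4: a heartbeat from the configured Primary resets the
    /// timer, and if we had promoted ourselves, we yield back to Standby.
    pub fn on_heartbeat(&mut self, site_id: &str, now: Instant) {
        if site_id == self.primary_site_id {
            self.last_heartbeat = now;
            self.role = ReplicaRole::Standby;
        }
    }

    /// Step 3: called periodically; promote once the heartbeat goes stale.
    pub fn on_tick(&mut self, now: Instant) {
        if now.duration_since(self.last_heartbeat) > self.failover_timeout {
            self.role = ReplicaRole::Leader;
        }
    }
}

fn main() {
    let t0 = Instant::now();
    let mut wd = Watchdog::new("site-a-basement", Duration::from_millis(5000));
    wd.on_heartbeat("site-a-basement", t0);
    wd.on_tick(t0 + Duration::from_millis(3000)); // fresh: stay Standby
    assert_eq!(wd.role, ReplicaRole::Standby);
    wd.on_tick(t0 + Duration::from_millis(6000)); // stale: promote
    assert_eq!(wd.role, ReplicaRole::Leader);
    // Primary recovers: Replica yields to restore the preferred topology.
    wd.on_heartbeat("site-a-basement", t0 + Duration::from_millis(7000));
    assert_eq!(wd.role, ReplicaRole::Standby);
}
```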
## Rationale
**Observability as a First-Class Citizen**
By keeping the last 64 heartbeats, we can run `nats kv history` to see the exact timeline. Did the Primary stop suddenly (a crash)? Or did the heartbeats become erratic and slow before stopping (network congestion)? This data is critical for optimizing the "Micro Data Centers" described in our vision, where internet connections in residential areas may vary in quality.
**Energy Efficiency & Resource Optimization**
NationTech aims to "maximize the value of our energy." A "flapping" cluster (constantly failing over and back) wastes immense energy in data re-synchronization and startup costs. By making the `failover_timeout_ms` configurable via `cluster_config`, we can tune a cluster heating a greenhouse to be less sensitive (slower failover is fine) compared to a cluster running a payment gateway.
**Decentralized Trust**
This architecture relies on NATS as the consensus engine. If the Primary is part of the NATS majority, it lives. If it isn't, it dies. This removes ambiguity and allows us to scale to thousands of independent sites without a central "God mode" controller managing every single failover.
## Consequences
**Positive**
* **Auditability:** Every failover event leaves a permanent trace in the KV history.
* **Safety:** The "Write Ack" check on the Primary provides a strong guarantee against Split Brain in `AbsoluteConsistency` mode.
* **Dynamic Tuning:** We can adjust timeouts for specific environments (e.g., high-latency satellite links) by updating a JSON key, requiring no downtime.
**Negative**
* **Storage Overhead:** Keeping history requires marginally more disk space on the NATS servers, though for 64 small JSON payloads, this is negligible.
* **Clock Skew:** While we rely on NATS server-side timestamps for ordering, extreme clock skew on the client side could confuse the debug logs (though not the failover logic itself).
## Alignment with Vision
This architecture supports the NationTech goal of a **"Beautifully Integrated Design."** It takes the complex, high-stakes problem of distributed consensus and wraps it in a mechanism that is robust enough for enterprise banking yet flexible enough to manage a basement server heating a swimming pool. It bridges the gap between the reliability of Web2 clouds and the decentralized nature of Web3 infrastructure.

View File

@@ -1,19 +0,0 @@
[package]
name = "cert_manager"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }
assert_cmd = "2.0.16"

View File

@@ -1,32 +0,0 @@
use harmony::{
inventory::Inventory,
modules::{
cert_manager::{
capability::CertificateManagementConfig, score_k8s::CertificateManagementScore,
},
postgresql::{PostgreSQLScore, capability::PostgreSQLConfig},
},
topology::K8sAnywhereTopology,
};
#[tokio::main]
async fn main() {
let cert_manager = CertificateManagementScore {
config: CertificateManagementConfig {
name: todo!(),
namespace: todo!(),
acme_issuer: todo!(),
ca_issuer: todo!(),
self_signed: todo!(),
},
};
harmony_cli::run(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
vec![Box::new(cert_manager)],
None,
)
.await
.unwrap();
}

View File

@@ -17,11 +17,6 @@ use crate::{
interpret::InterpretStatus,
inventory::Inventory,
modules::{
cert_manager::{
capability::{CertificateManagement, CertificateManagementConfig},
crd::{score_certificate::CertificateScore, score_issuer::IssuerScore},
operator::CertManagerOperatorScore,
},
k3d::K3DInstallationScore,
k8s::ingress::{K8sIngressScore, PathType},
monitoring::{
@@ -364,81 +359,6 @@ impl Serialize for K8sAnywhereTopology {
}
}
#[async_trait]
impl CertificateManagement for K8sAnywhereTopology {
async fn install(
&self,
config: &CertificateManagementConfig,
) -> Result<PreparationOutcome, PreparationError> {
let cert_management_operator = CertManagerOperatorScore::default();
cert_management_operator
.interpret(&Inventory::empty(), self)
.await
.map_err(|e| PreparationError { msg: e.to_string() })?;
Ok(PreparationOutcome::Success {
details: format!(
"Installed cert-manager into ns: {}",
cert_management_operator.namespace
),
})
}
async fn ensure_ready(
&self,
config: &CertificateManagementConfig,
) -> Result<PreparationOutcome, PreparationError> {
todo!()
}
async fn create_issuer(
&self,
issuer_name: String,
config: &CertificateManagementConfig,
) -> Result<PreparationOutcome, PreparationError> {
let issuer_score = IssuerScore {
config: config.clone(),
};
issuer_score
.interpret(&Inventory::empty(), self)
.await
.map_err(|e| PreparationError { msg: e.to_string() })?;
Ok(PreparationOutcome::Success {
details: format!("issuer of kind {} is ready", issuer_name),
})
}
async fn create_certificate(
&self,
cert_name: String,
issuer_name: String,
config: &CertificateManagementConfig,
) -> Result<PreparationOutcome, PreparationError> {
self.certificate_issuer_ready(
issuer_name.clone(),
self.k8s_client().await.unwrap(),
config,
)
.await?;
let cert = CertificateScore {
cert_name: cert_name,
config: config.clone(),
issuer_name,
};
cert.interpret(&Inventory::empty(), self)
.await
.map_err(|e| PreparationError { msg: e.to_string() })?;
Ok(PreparationOutcome::Success {
details: format!("Created cert into ns: {:#?}", config.namespace.clone()),
})
}
}
impl K8sAnywhereTopology {
pub fn from_env() -> Self {
Self {
@@ -458,35 +378,6 @@ impl K8sAnywhereTopology {
}
}
pub async fn certificate_issuer_ready(
&self,
issuer_name: String,
k8s_client: Arc<K8sClient>,
config: &CertificateManagementConfig,
) -> Result<PreparationOutcome, PreparationError> {
let ns = config.namespace.clone().ok_or_else(|| PreparationError {
msg: "namespace is required".to_string(),
})?;
let gvk = GroupVersionKind {
group: "cert-manager.io".to_string(),
version: "v1".to_string(),
kind: "Issuer".to_string(),
};
match k8s_client
.get_resource_json_value(&issuer_name, Some(&ns), &gvk)
.await
{
Ok(_cert_issuer) => Ok(PreparationOutcome::Success {
details: format!("issuer of kind {} is ready", issuer_name),
}),
Err(e) => Err(PreparationError {
msg: format!("{} issuer {} not present", e.to_string(), issuer_name),
}),
}
}
pub async fn get_k8s_distribution(&self) -> Result<&KubernetesDistribution, PreparationError> {
self.k8s_distribution
.get_or_try_init(async || {

View File

@@ -1,43 +0,0 @@
use async_trait::async_trait;
use serde::Serialize;
use crate::{
modules::cert_manager::crd::{AcmeIssuer, CaIssuer},
topology::{PreparationError, PreparationOutcome},
};
///TODO rust doc explaining issuer, certificate etc
#[async_trait]
pub trait CertificateManagement: Send + Sync {
async fn install(
&self,
config: &CertificateManagementConfig,
) -> Result<PreparationOutcome, PreparationError>;
async fn ensure_ready(
&self,
config: &CertificateManagementConfig,
) -> Result<PreparationOutcome, PreparationError>;
async fn create_issuer(
&self,
issuer_name: String,
config: &CertificateManagementConfig,
) -> Result<PreparationOutcome, PreparationError>;
async fn create_certificate(
&self,
cert_name: String,
issuer_name: String,
config: &CertificateManagementConfig,
) -> Result<PreparationOutcome, PreparationError>;
}
#[derive(Debug, Clone, Serialize)]
pub struct CertificateManagementConfig {
pub name: String,
pub namespace: Option<String>,
pub acme_issuer: Option<AcmeIssuer>,
pub ca_issuer: Option<CaIssuer>,
pub self_signed: bool,
}

View File

@@ -1,112 +0,0 @@
use kube::{CustomResource, api::ObjectMeta};
use serde::{Deserialize, Serialize};
#[derive(CustomResource, Deserialize, Serialize, Clone, Debug)]
#[kube(
group = "cert-manager.io",
version = "v1",
kind = "Certificate",
plural = "certificates",
namespaced = true,
schema = "disabled"
)]
#[serde(rename_all = "camelCase")]
pub struct CertificateSpec {
/// Name of the Secret where the certificate will be stored
pub secret_name: String,
/// Common Name (optional but often discouraged in favor of SANs)
#[serde(skip_serializing_if = "Option::is_none")]
pub common_name: Option<String>,
/// DNS Subject Alternative Names
#[serde(skip_serializing_if = "Option::is_none")]
pub dns_names: Option<Vec<String>>,
/// IP Subject Alternative Names
#[serde(skip_serializing_if = "Option::is_none")]
pub ip_addresses: Option<Vec<String>>,
/// Certificate duration (e.g. "2160h")
#[serde(skip_serializing_if = "Option::is_none")]
pub duration: Option<String>,
/// How long before expiry cert-manager should renew
#[serde(skip_serializing_if = "Option::is_none")]
pub renew_before: Option<String>,
/// Reference to the Issuer or ClusterIssuer
pub issuer_ref: IssuerRef,
/// Is this a CA certificate
#[serde(skip_serializing_if = "Option::is_none")]
pub is_ca: Option<bool>,
/// Private key configuration
#[serde(skip_serializing_if = "Option::is_none")]
pub private_key: Option<PrivateKey>,
}
impl Default for Certificate {
fn default() -> Self {
Certificate {
metadata: ObjectMeta::default(),
spec: CertificateSpec::default(),
}
}
}
impl Default for CertificateSpec {
fn default() -> Self {
Self {
secret_name: String::new(),
common_name: None,
dns_names: None,
ip_addresses: None,
duration: None,
renew_before: None,
issuer_ref: IssuerRef::default(),
is_ca: None,
private_key: None,
}
}
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct IssuerRef {
pub name: String,
/// Either "Issuer" or "ClusterIssuer"
#[serde(skip_serializing_if = "Option::is_none")]
pub kind: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub group: Option<String>,
}
impl Default for IssuerRef {
fn default() -> Self {
Self {
name: String::new(),
kind: None,
group: None,
}
}
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct PrivateKey {
/// RSA or ECDSA
#[serde(skip_serializing_if = "Option::is_none")]
pub algorithm: Option<String>,
/// Key size (e.g. 2048, 4096)
#[serde(skip_serializing_if = "Option::is_none")]
pub size: Option<u32>,
/// Rotation policy: "Never" or "Always"
#[serde(skip_serializing_if = "Option::is_none")]
pub rotation_policy: Option<String>,
}

View File

@@ -1,44 +0,0 @@
use kube::{CustomResource, api::ObjectMeta};
use serde::{Deserialize, Serialize};
use crate::modules::cert_manager::crd::{AcmeIssuer, CaIssuer, SelfSignedIssuer};
#[derive(CustomResource, Deserialize, Serialize, Clone, Debug)]
#[kube(
group = "cert-manager.io",
version = "v1",
kind = "ClusterIssuer",
plural = "clusterissuers",
namespaced = false,
schema = "disabled"
)]
#[serde(rename_all = "camelCase")]
pub struct ClusterIssuerSpec {
#[serde(skip_serializing_if = "Option::is_none")]
pub self_signed: Option<SelfSignedIssuer>,
#[serde(skip_serializing_if = "Option::is_none")]
pub ca: Option<CaIssuer>,
#[serde(skip_serializing_if = "Option::is_none")]
pub acme: Option<AcmeIssuer>,
}
impl Default for ClusterIssuer {
fn default() -> Self {
ClusterIssuer {
metadata: ObjectMeta::default(),
spec: ClusterIssuerSpec::default(),
}
}
}
impl Default for ClusterIssuerSpec {
fn default() -> Self {
Self {
self_signed: None,
ca: None,
acme: None,
}
}
}

View File

@@ -1,44 +0,0 @@
use kube::{CustomResource, api::ObjectMeta};
use serde::{Deserialize, Serialize};
use crate::modules::cert_manager::crd::{AcmeIssuer, CaIssuer, SelfSignedIssuer};
#[derive(CustomResource, Deserialize, Serialize, Clone, Debug)]
#[kube(
group = "cert-manager.io",
version = "v1",
kind = "Issuer",
plural = "issuers",
namespaced = true,
schema = "disabled"
)]
#[serde(rename_all = "camelCase")]
pub struct IssuerSpec {
#[serde(skip_serializing_if = "Option::is_none")]
pub self_signed: Option<SelfSignedIssuer>,
#[serde(skip_serializing_if = "Option::is_none")]
pub ca: Option<CaIssuer>,
#[serde(skip_serializing_if = "Option::is_none")]
pub acme: Option<AcmeIssuer>,
}
impl Default for Issuer {
fn default() -> Self {
Issuer {
metadata: ObjectMeta::default(),
spec: IssuerSpec::default(),
}
}
}
impl Default for IssuerSpec {
fn default() -> Self {
Self {
self_signed: None,
ca: None,
acme: None,
}
}
}

View File

@@ -1,65 +0,0 @@
use serde::{Deserialize, Serialize};
pub mod certificate;
pub mod cluster_issuer;
pub mod issuer;
//pub mod score_cluster_issuer;
pub mod score_certificate;
pub mod score_issuer;
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct CaIssuer {
/// Secret containing `tls.crt` and `tls.key`
pub secret_name: String,
}
#[derive(Deserialize, Serialize, Clone, Debug, Default)]
#[serde(rename_all = "camelCase")]
pub struct SelfSignedIssuer {}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct AcmeIssuer {
pub server: String,
pub email: String,
/// Secret used to store the ACME account private key
pub private_key_secret_ref: SecretKeySelector,
pub solvers: Vec<AcmeSolver>,
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct SecretKeySelector {
pub name: String,
pub key: String,
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct AcmeSolver {
#[serde(skip_serializing_if = "Option::is_none")]
pub http01: Option<Http01Solver>,
#[serde(skip_serializing_if = "Option::is_none")]
pub dns01: Option<Dns01Solver>,
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct Dns01Solver {}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct Http01Solver {
pub ingress: IngressSolver,
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct IngressSolver {
#[serde(skip_serializing_if = "Option::is_none")]
pub class: Option<String>,
}

View File

@@ -1,48 +0,0 @@
use kube::api::ObjectMeta;
use serde::Serialize;
use crate::{
interpret::Interpret,
modules::{
cert_manager::{
capability::CertificateManagementConfig,
crd::certificate::{Certificate, CertificateSpec, IssuerRef},
},
k8s::resource::K8sResourceScore,
},
score::Score,
topology::{K8sclient, Topology},
};
#[derive(Debug, Clone, Serialize)]
pub struct CertificateScore {
pub cert_name: String,
pub issuer_name: String,
pub config: CertificateManagementConfig,
}
impl<T: Topology + K8sclient> Score<T> for CertificateScore {
fn name(&self) -> String {
"CertificateScore".to_string()
}
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
let cert = Certificate {
metadata: ObjectMeta {
name: Some(self.cert_name.clone()),
namespace: self.config.namespace.clone(),
..Default::default()
},
spec: CertificateSpec {
secret_name: format!("{}-tls", self.cert_name.clone()),
issuer_ref: IssuerRef {
name: self.issuer_name.clone(),
kind: Some("Issuer".into()),
group: Some("cert-manager.io".into()),
},
..Default::default()
},
};
K8sResourceScore::single(cert, self.config.namespace.clone()).create_interpret()
}
}

View File

@@ -1,51 +0,0 @@
use kube::api::ObjectMeta;
use serde::Serialize;
use crate::{
interpret::Interpret,
modules::{
cert_manager::crd::{
AcmeIssuer, CaIssuer, SelfSignedIssuer,
cluster_issuer::{ClusterIssuer, ClusterIssuerSpec},
},
k8s::resource::K8sResourceScore,
},
score::Score,
topology::{K8sclient, Topology},
};
#[derive(Debug, Clone, Serialize)]
pub struct ClusterIssuerScore {
name: String,
acme_issuer: Option<AcmeIssuer>,
ca_issuer: Option<CaIssuer>,
self_signed: bool,
}
impl<T: Topology + K8sclient> Score<T> for ClusterIssuerScore {
fn name(&self) -> String {
"ClusterIssuerScore".to_string()
}
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
let metadata = ObjectMeta {
name: Some(self.name.clone()),
namespace: None,
..ObjectMeta::default()
};
let spec = ClusterIssuerSpec {
acme: self.acme_issuer.clone(),
ca: self.ca_issuer.clone(),
self_signed: if self.self_signed {
Some(SelfSignedIssuer::default())
} else {
None
},
};
let cluster_issuer = ClusterIssuer { metadata, spec };
K8sResourceScore::single(cluster_issuer, None).create_interpret()
}
}

View File

@@ -1,51 +0,0 @@
use kube::api::ObjectMeta;
use serde::Serialize;
use crate::{
interpret::Interpret,
modules::{
cert_manager::{
capability::CertificateManagementConfig,
crd::{
SelfSignedIssuer,
issuer::{Issuer, IssuerSpec},
},
},
k8s::resource::K8sResourceScore,
},
score::Score,
topology::{K8sclient, Topology},
};
#[derive(Debug, Clone, Serialize)]
pub struct IssuerScore {
pub config: CertificateManagementConfig,
}
impl<T: Topology + K8sclient> Score<T> for IssuerScore {
fn name(&self) -> String {
"IssuerScore".to_string()
}
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
let metadata = ObjectMeta {
name: Some(format!("{}-issuer", self.config.namespace.clone().unwrap())),
namespace: self.config.namespace.clone(),
..ObjectMeta::default()
};
let spec = IssuerSpec {
acme: self.config.acme_issuer.clone(),
ca: self.config.ca_issuer.clone(),
self_signed: if self.config.self_signed {
Some(SelfSignedIssuer::default())
} else {
None
},
};
let issuer = Issuer { metadata, spec };
K8sResourceScore::single(issuer, self.config.namespace.clone()).create_interpret()
}
}

View File

@@ -1,7 +1,3 @@
pub mod capability;
pub mod cluster_issuer;
pub mod crd;
mod helm;
pub mod operator;
pub mod score_k8s;
pub use helm::*;

View File

@@ -1,64 +0,0 @@
use kube::api::ObjectMeta;
use serde::Serialize;
use crate::{
interpret::Interpret,
modules::k8s::{
apps::crd::{Subscription, SubscriptionSpec},
resource::K8sResourceScore,
},
score::Score,
topology::{K8sclient, Topology, k8s::K8sClient},
};
/// Install the Cert-Manager Operator via RedHat Community Operators registry.redhat.io/redhat/community-operator-index:v4.19
/// This Score creates a Subscription CR in the specified namespace
#[derive(Debug, Clone, Serialize)]
pub struct CertManagerOperatorScore {
pub namespace: String,
pub channel: String,
pub install_plan_approval: String,
pub source: String,
pub source_namespace: String,
}
impl Default for CertManagerOperatorScore {
fn default() -> Self {
Self {
namespace: "openshift-operators".to_string(),
channel: "stable".to_string(),
install_plan_approval: "Automatic".to_string(),
source: "community-operators".to_string(),
source_namespace: "openshift-marketplace".to_string(),
}
}
}
impl<T: Topology + K8sclient> Score<T> for CertManagerOperatorScore {
fn name(&self) -> String {
"CertManagerOperatorScore".to_string()
}
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
let metadata = ObjectMeta {
name: Some("cert-manager".to_string()),
namespace: Some(self.namespace.clone()),
..ObjectMeta::default()
};
let spec = SubscriptionSpec {
channel: Some(self.channel.clone()),
config: None,
install_plan_approval: Some(self.install_plan_approval.clone()),
name: "cert-manager".to_string(),
source: self.source.clone(),
source_namespace: self.source_namespace.clone(),
starting_csv: None,
};
let subscription = Subscription { metadata, spec };
K8sResourceScore::single(subscription, Some(self.namespace.clone())).create_interpret()
}
}

View File

@@ -1,66 +0,0 @@
use async_trait::async_trait;
use harmony_types::id::Id;
use serde::Serialize;
use crate::{
data::Version,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
modules::cert_manager::capability::{CertificateManagement, CertificateManagementConfig},
score::Score,
topology::Topology,
};
#[derive(Debug, Clone, Serialize)]
pub struct CertificateManagementScore {
pub config: CertificateManagementConfig,
}
impl<T: Topology + CertificateManagement> Score<T> for CertificateManagementScore {
fn name(&self) -> String {
"CertificateManagementScore".to_string()
}
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(CertificateManagementInterpret {
config: self.config.clone(),
})
}
}
#[derive(Debug)]
struct CertificateManagementInterpret {
config: CertificateManagementConfig,
}
#[async_trait]
impl<T: Topology + CertificateManagement> Interpret<T> for CertificateManagementInterpret {
async fn execute(
&self,
inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
let cert_management = topology
.install(&self.config)
.await
.map_err(|e| InterpretError::new(e.to_string()))?;
Ok(Outcome::success(format!("Installed CertificateManagement")))
}
fn get_name(&self) -> InterpretName {
InterpretName::Custom("CertificateManagementInterpret")
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}