Compare commits

...

5 Commits

Author SHA1 Message Date
1837623394 adr: 17-1 nats clusters interconnection using islands of trust. mTLS via shared ca-bundle with each cluster distributing its own CA.
Some checks failed
Run Check Script / check (pull_request) Failing after 51s
2026-01-13 10:40:30 -05:00
270b6b87df wip nats supercluster
Some checks failed
Run Check Script / check (pull_request) Failing after 51s
2026-01-09 17:30:51 -05:00
6933280575 feat(helm): refactor helm execution to use topology-specific commands
Some checks failed
Run Check Script / check (pull_request) Failing after 8s
Refactors the `HelmChartInterpret` to move away from the `helm-wrapper-rs` crate in favor of a custom command builder pattern. This allows the `HelmCommand` trait to provide topology-specific configurations, such as `kubeconfig` and `kube-context`, directly to the `helm` CLI.

- Implements `get_helm_command` for `K8sAnywhereTopology` to inject configuration flags.
- Replaces `DefaultHelmExecutor` with a manual `Command` construction in `run_helm_command`.
- Updates `HelmChartInterpret` to pass the topology through to repository and installation logic.
- Cleans up unused imports and removes the temporary `HelmCommand` implementation for `LocalhostTopology`.
2026-01-08 23:42:54 -05:00
77583a1ad1 wip: nats multi cluster, fixing helm command to follow multiple k8s config by providing the helm command from the topology itself, fix cli_logger that can now be initialized multiple times, some more stuff 2026-01-08 16:03:15 -05:00
f7404bed36 wip: initial setup for installing nats helm chart score 2026-01-07 16:14:58 -05:00
12 changed files with 599 additions and 143 deletions

View File

@@ -0,0 +1,189 @@
### 1. ADR 017-1: NATS Cluster Interconnection & Trust Topology
# Architecture Decision Record: NATS Cluster Interconnection & Trust Topology
**Status:** Proposed
**Date:** 2026-01-12
**Precedes:** [017-Staleness-Detection-for-Failover.md]
## Context
In ADR 017, we defined the failover mechanisms for the Harmony mesh. However, for a Primary (Site A) and a Replica (Site B) to communicate securely—or for the Global Mesh to function across disparate locations—we must establish a robust Transport Layer Security (TLS) strategy.
Our primary deployment platform is OKD (Kubernetes). While OKD provides an internal `service-ca`, it is designed primarily for intra-cluster service-to-service communication. It lacks the flexibility required for:
1. **Public/External Gateway Identities:** NATS Gateways need to identify themselves via public DNS names or external IPs, not just internal `.svc` cluster domains.
2. **Cross-Cluster Trust:** We need a mechanism to allow Cluster A to trust Cluster B without sharing a single private root key.
## Decision
We will implement an **"Islands of Trust"** topology using **cert-manager** on OKD.
### 1. Per-Cluster Certificate Authorities (CA)
* We explicitly **reject** the use of a single "Supercluster CA" shared across all sites.
* Instead, every Harmony Cluster (Site A, Site B, etc.) will generate its own unique Self-Signed Root CA managed by `cert-manager` inside that cluster.
* **Lifecycle:** Root CAs will have a long duration (e.g., 10 years) to minimize rotation friction, while Leaf Certificates (NATS servers) will remain short-lived (e.g., 90 days) and rotate automatically.
> Note: It is not yet decided whether each Harmony deployment will use a single CA for all the workloads it manages, or a separate CA for each service that requires interconnection. This ADR leans towards one CA per service, which allows maximum flexibility, but the direction may still change. The alternative, giving each cluster/Harmony deployment a single identity, could make mTLS between tenants very simple.
### 2. Trust Federation via Bundle Exchange
To enable secure communication (mTLS) between clusters (e.g., for NATS Gateways or Leaf Nodes):
* **No Private Keys are shared.**
* We will aggregate the **Public CA Certificates** of all trusted clusters into a shared `ca-bundle.pem`.
* This bundle is distributed to the NATS configuration of every node.
* **Verification Logic:** When Site A connects to Site B, Site A verifies Site B's certificate against the bundle. Since Site B's CA public key is in the bundle, the connection is accepted.
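The verification logic above can be reproduced locally with nothing but `openssl`. This is an illustrative sketch, not part of the Harmony tooling; the site and file names are made up:

```shell
set -e
cd "$(mktemp -d)"

# Each site generates its own self-signed root CA; private keys never leave the site.
for site in site-a site-b; do
  openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
    -keyout "$site-ca.key" -out "$site-ca.crt" \
    -subj "/CN=harmony-$site-ca" -days 3650
done

# Site B signs a short-lived leaf certificate for its NATS gateway.
openssl req -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
  -keyout leaf.key -out leaf.csr -subj "/CN=nats-gateway.site-b.example"
openssl x509 -req -in leaf.csr -CA site-b-ca.crt -CAkey site-b-ca.key \
  -CAcreateserial -out leaf.crt -days 90

# The shared bundle contains only the PUBLIC CA certificates of both sites.
cat site-a-ca.crt site-b-ca.crt > ca-bundle.pem

# Site A verifies Site B's leaf against the bundle: the chain resolves, so it is accepted.
openssl verify -CAfile ca-bundle.pem leaf.crt
```

Note that only `ca-bundle.pem` ever crosses cluster boundaries; the `.key` files stay on the site that generated them.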
### 3. Tooling
* We will use **cert-manager** (deployed via Operator on OKD) rather than OKD's built-in `service-ca`. This provides us with standard CRDs (`Issuer`, `Certificate`) to manage the lifecycle, rotation, and complex SANs (Subject Alternative Names) required for external connectivity.
* Harmony will manage installation, configuration, and bundle creation across all sites.
## Rationale
**Security Blast Radius (The "Key Leak" Scenario)**
If we used a single global CA and the private key for Site A was compromised (e.g., physical theft of a server from a basement), the attacker could impersonate *any* site in the global mesh.
By using Per-Cluster CAs:
* If Site A is compromised, only Site A's identity is stolen.
* We can "evict" Site A from the mesh simply by removing Site A's Public CA from the `ca-bundle.pem` on the remaining healthy clusters and reloading. The attacker can no longer authenticate.
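The eviction mechanics can also be demonstrated locally with `openssl` (illustrative names; self-contained, so it regenerates its own throwaway CAs):

```shell
set -e
cd "$(mktemp -d)"

# Two per-site root CAs, and a leaf issued by the (now compromised) Site A.
for site in site-a site-b; do
  openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
    -keyout "$site-ca.key" -out "$site-ca.crt" -subj "/CN=harmony-$site-ca" -days 3650
done
openssl req -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
  -keyout a.key -out a.csr -subj "/CN=nats-gateway.site-a.example"
openssl x509 -req -in a.csr -CA site-a-ca.crt -CAkey site-a-ca.key \
  -CAcreateserial -out site-a-leaf.crt -days 90

# Eviction: rebuild the bundle WITHOUT Site A's public CA.
cat site-b-ca.crt > ca-bundle.pem

# Site A's certificates no longer chain to anything in the bundle: rejected.
openssl verify -CAfile ca-bundle.pem site-a-leaf.crt || echo "site-a evicted"
```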
**Decentralized Autonomy**
This aligns with the "Humane Computing" vision. A local cluster owns its identity. It does not depend on a central authority to issue its certificates. It can function in isolation (offline) indefinitely without needing to "phone home" to renew credentials.
## Consequences
**Positive**
* **High Security:** Compromise of one node does not compromise the global mesh.
* **Flexibility:** Easier to integrate with third-party clusters or partners by simply adding their public CA to the bundle.
* **Standardization:** `cert-manager` is the industry standard, making the configuration portable to non-OKD K8s clusters if needed.
**Negative**
* **Configuration Complexity:** We must manage a mechanism to distribute the `ca-bundle.pem` containing public keys to all sites. This should be automated (e.g., via a Harmony Agent) to ensure timely updates and revocation.
* **Revocation Latency:** Revoking a compromised cluster requires updating and reloading the bundle on all other clusters. This is slower than OCSP/CRL but acceptable for infrastructure-level trust if automation is in place.
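A hedged sketch of what the automated distribution/revocation step could look like per cluster, reusing the ConfigMap name from the manual process below; the `statefulset/nats` resource name is an assumption about what the NATS Helm chart deploys:

```shell
# Rebuild the bundle from the currently-trusted public CAs only
# (a compromised site is evicted simply by leaving its .crt out of this list).
cat site-a.crt site-b.crt > ca-bundle.crt

# Idempotently create-or-update the trust bundle in this cluster.
oc create configmap nats-trust-bundle --from-file=ca.crt=ca-bundle.crt \
  -n harmony-nats --dry-run=client -o yaml | oc apply -f -

# Pick up the new bundle (assumed resource name; a config reload signal also works).
oc rollout restart statefulset/nats -n harmony-nats
```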
---
# 2. Concrete overview of the process and how it can be implemented manually across multiple OKD clusters
All of this will be automated via Harmony, but to make the process clear, it is outlined in detail here:
## 1. Deploying and Configuring cert-manager on OKD
While OKD has a built-in `service-ca` controller, it is "opinionated" and primarily signs certs for internal services (like `my-svc.my-namespace.svc`). It is **not suitable** for the Harmony Global Mesh because you cannot easily control the Subject Alternative Names (SANs) for external routes (e.g., `nats.site-a.nationtech.io`), nor can you easily export its CA to other clusters.
**The Solution:** Use the **cert-manager Operator for Red Hat OpenShift**.
### Step 1: Install the Operator
1. Log in to the OKD Web Console.
2. Navigate to **Operators** -> **OperatorHub**.
3. Search for **"cert-manager"**.
4. Choose the **"cert-manager Operator for Red Hat OpenShift"** (Red Hat provided) or the community version.
5. Click **Install**. Use the default settings (Namespace: `cert-manager-operator`).
### Step 2: Create the "Island" CA (The Issuer)
Once installed, you define your cluster's unique identity. Apply this YAML to your NATS namespace.
```yaml
# filepath: k8s/01-issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: harmony-selfsigned-issuer
  namespace: harmony-nats
spec:
  selfSigned: {}
---
# This generates the unique Root CA for THIS specific cluster
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: harmony-root-ca
  namespace: harmony-nats
spec:
  isCA: true
  commonName: "harmony-site-a-ca" # CHANGE THIS per cluster (e.g., site-b-ca)
  duration: 87600h # 10 years
  renewBefore: 2160h # 3 months before expiry
  secretName: harmony-root-ca-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: harmony-selfsigned-issuer
    kind: Issuer
    group: cert-manager.io
---
# This Issuer uses the Root CA generated above to sign NATS certs
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: harmony-ca-issuer
  namespace: harmony-nats
spec:
  ca:
    secretName: harmony-root-ca-secret
```
### Step 3: Generate the NATS Server Certificate
This certificate will be used by the NATS server. It includes both internal DNS names (for local clients) and external DNS names (for the global mesh).
```yaml
# filepath: k8s/02-nats-cert.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nats-server-cert
  namespace: harmony-nats
spec:
  secretName: nats-server-tls
  duration: 2160h # 90 days
  renewBefore: 360h # 15 days
  issuerRef:
    name: harmony-ca-issuer
    kind: Issuer
  # CRITICAL: Define all names this server can be reached by
  dnsNames:
    - "nats"
    - "nats.harmony-nats.svc"
    - "nats.harmony-nats.svc.cluster.local"
    - "*.nats.harmony-nats.svc.cluster.local"
    - "nats-gateway.site-a.nationtech.io" # External Route for Mesh
```
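Once applied, a quick sanity check (sketch; requires access to the cluster) confirms that cert-manager issued the certificate and that the SANs made it into the leaf:

```shell
# The Certificate's Ready condition should become True once issuance succeeds.
oc get certificate nats-server-cert -n harmony-nats

# Decode the issued leaf and inspect its subject and SANs.
oc get secret nats-server-tls -n harmony-nats -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -subject -ext subjectAltName
```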
## 2. Implementing the "Islands of Trust" (Trust Bundle)
To make Site A and Site B talk, you need to exchange **public CA certificates** (never private keys).
1. **Extract Public CA from Site A:**
```bash
oc get secret harmony-root-ca-secret -n harmony-nats -o jsonpath='{.data.ca\.crt}' | base64 -d > site-a.crt
```
2. **Extract Public CA from Site B:**
```bash
oc get secret harmony-root-ca-secret -n harmony-nats -o jsonpath='{.data.ca\.crt}' | base64 -d > site-b.crt
```
3. **Create the Bundle:**
Combine them into one file.
```bash
cat site-a.crt site-b.crt > ca-bundle.crt
```
4. **Upload Bundle to Both Clusters:**
Create a ConfigMap or Secret in *both* clusters containing this combined bundle.
```bash
oc create configmap nats-trust-bundle --from-file=ca.crt=ca-bundle.crt -n harmony-nats
```
5. **Configure NATS:**
Mount this ConfigMap and point NATS to it.
```conf
# nats.conf snippet
tls {
  cert_file: "/etc/nats-certs/tls.crt"
  key_file: "/etc/nats-certs/tls.key"
  # Point to the bundle containing BOTH Site A and Site B public CAs
  ca_file: "/etc/nats-trust/ca.crt"
}
```
This setup ensures that Site A can verify Site B's certificate (signed by `harmony-site-b-ca`) because Site B's CA is in Site A's trust store, and vice versa, without ever sharing the private keys that generated them.

View File

@@ -3,15 +3,58 @@ use std::str::FromStr;
use harmony::{ use harmony::{
inventory::Inventory, inventory::Inventory,
modules::helm::chart::{HelmChartScore, HelmRepository, NonBlankString}, modules::helm::chart::{HelmChartScore, HelmRepository, NonBlankString},
topology::K8sAnywhereTopology, topology::{HelmCommand, K8sAnywhereConfig, K8sAnywhereTopology, TlsRouter, Topology},
}; };
use harmony_macros::hurl; use harmony_macros::hurl;
use log::info; use log::info;
#[tokio::main] #[tokio::main]
async fn main() { async fn main() {
// env_logger::init(); let site1_topo = K8sAnywhereTopology::with_config(K8sAnywhereConfig::remote_k8s_from_env_var(
let values_yaml = Some( "HARMONY_NATS_SITE_1",
));
let site2_topo = K8sAnywhereTopology::with_config(K8sAnywhereConfig::remote_k8s_from_env_var(
"HARMONY_NATS_SITE_2",
));
let site1_domain = site1_topo.get_internal_domain().await.unwrap().unwrap();
let site2_domain = site2_topo.get_internal_domain().await.unwrap().unwrap();
let site1_gateway = format!("nats-gateway.{}", site1_domain);
let site2_gateway = format!("nats-gateway.{}", site2_domain);
tokio::join!(
deploy_nats(
site1_topo,
"site-1",
vec![("site-2".to_string(), site2_gateway)]
),
deploy_nats(
site2_topo,
"site-2",
vec![("site-1".to_string(), site1_gateway)]
),
);
}
async fn deploy_nats<T: Topology + HelmCommand + TlsRouter + 'static>(
topology: T,
cluster_name: &str,
remote_gateways: Vec<(String, String)>,
) {
topology.ensure_ready().await.unwrap();
let mut gateway_gateways = String::new();
for (name, url) in remote_gateways {
gateway_gateways.push_str(&format!(
r#"
- name: {name}
urls:
- nats://{url}:7222"#
));
}
let values_yaml = Some(format!(
r#"config: r#"config:
cluster: cluster:
enabled: true enabled: true
@@ -25,16 +68,31 @@ async fn main() {
leafnodes: leafnodes:
enabled: false enabled: false
# port: 7422 # port: 7422
websocket:
enabled: true
ingress:
enabled: true
className: openshift-default
pathType: Prefix
hosts:
- nats-ws.{}
gateway: gateway:
enabled: false enabled: true
# name: my-gateway name: {}
# port: 7522 port: 7222
gateways: {}
service:
ports:
gateway:
enabled: true
natsBox: natsBox:
container: container:
image: image:
tag: nonroot"# tag: nonroot"#,
.to_string(), topology.get_internal_domain().await.unwrap().unwrap(),
); cluster_name,
gateway_gateways,
));
let namespace = "nats"; let namespace = "nats";
let nats = HelmChartScore { let nats = HelmChartScore {
namespace: Some(NonBlankString::from_str(namespace).unwrap()), namespace: Some(NonBlankString::from_str(namespace).unwrap()),
@@ -52,14 +110,9 @@ natsBox:
)), )),
}; };
harmony_cli::run( harmony_cli::run(Inventory::autoload(), topology, vec![Box::new(nats)], None)
Inventory::autoload(), .await
K8sAnywhereTopology::from_env(), .unwrap();
vec![Box::new(nats)],
None,
)
.await
.unwrap();
info!( info!(
"Enjoy! You can test your nats cluster by running : `kubectl exec -n {namespace} -it deployment/nats-box -- nats pub test hi`" "Enjoy! You can test your nats cluster by running : `kubectl exec -n {namespace} -it deployment/nats-box -- nats pub test hi`"

View File

@@ -1 +1,5 @@
pub trait HelmCommand {} use std::process::Command;
pub trait HelmCommand {
fn get_helm_command(&self) -> Command;
}

View File

@@ -35,6 +35,7 @@ use crate::{
service_monitor::ServiceMonitor, service_monitor::ServiceMonitor,
}, },
}, },
okd::crd::ingresses_config::Ingress as IngressResource,
okd::route::OKDTlsPassthroughScore, okd::route::OKDTlsPassthroughScore,
prometheus::{ prometheus::{
k8s_prometheus_alerting_score::K8sPrometheusCRDAlertingScore, k8s_prometheus_alerting_score::K8sPrometheusCRDAlertingScore,
@@ -107,8 +108,32 @@ impl K8sclient for K8sAnywhereTopology {
#[async_trait] #[async_trait]
impl TlsRouter for K8sAnywhereTopology { impl TlsRouter for K8sAnywhereTopology {
async fn get_wildcard_domain(&self) -> Result<Option<String>, String> { async fn get_internal_domain(&self) -> Result<Option<String>, String> {
todo!() match self.get_k8s_distribution().await.map_err(|e| {
format!(
"Could not get internal domain, error getting k8s distribution : {}",
e.to_string()
)
})? {
KubernetesDistribution::OpenshiftFamily => {
let client = self.k8s_client().await?;
if let Some(ingress_config) = client
.get_resource::<IngressResource>("cluster", None)
.await
.map_err(|e| {
format!("Error attempting to get ingress config : {}", e.to_string())
})?
{
debug!("Found ingress config {:?}", ingress_config.spec);
Ok(ingress_config.spec.domain.clone())
} else {
warn!("Could not find a domain configured in this cluster");
Ok(None)
}
}
KubernetesDistribution::K3sFamily => todo!(),
KubernetesDistribution::Default => todo!(),
}
} }
/// Returns the port that this router exposes externally. /// Returns the port that this router exposes externally.
@@ -1087,7 +1112,21 @@ impl MultiTargetTopology for K8sAnywhereTopology {
} }
} }
impl HelmCommand for K8sAnywhereTopology {} impl HelmCommand for K8sAnywhereTopology {
fn get_helm_command(&self) -> Command {
let mut cmd = Command::new("helm");
if let Some(k) = &self.config.kubeconfig {
cmd.args(["--kubeconfig", k]);
}
if let Some(c) = &self.config.k8s_context {
cmd.args(["--kube-context", c]);
}
info!("Using helm command {cmd:?}");
cmd
}
}
#[async_trait] #[async_trait]
impl TenantManager for K8sAnywhereTopology { impl TenantManager for K8sAnywhereTopology {
@@ -1108,7 +1147,7 @@ impl TenantManager for K8sAnywhereTopology {
#[async_trait] #[async_trait]
impl Ingress for K8sAnywhereTopology { impl Ingress for K8sAnywhereTopology {
async fn get_domain(&self, service: &str) -> Result<String, PreparationError> { async fn get_domain(&self, service: &str) -> Result<String, PreparationError> {
use log::{debug, trace, warn}; use log::{trace, warn};
let client = self.k8s_client().await?; let client = self.k8s_client().await?;

View File

@@ -2,7 +2,7 @@ use async_trait::async_trait;
use derive_new::new; use derive_new::new;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use super::{HelmCommand, PreparationError, PreparationOutcome, Topology}; use super::{PreparationError, PreparationOutcome, Topology};
#[derive(new, Clone, Debug, Serialize, Deserialize)] #[derive(new, Clone, Debug, Serialize, Deserialize)]
pub struct LocalhostTopology; pub struct LocalhostTopology;
@@ -19,6 +19,3 @@ impl Topology for LocalhostTopology {
}) })
} }
} }
// TODO: Delete this, temp for test
impl HelmCommand for LocalhostTopology {}

View File

@@ -112,12 +112,13 @@ pub trait TlsRouter: Send + Sync {
/// HAProxy frontend→backend \"postgres-upstream\". /// HAProxy frontend→backend \"postgres-upstream\".
async fn install_route(&self, config: TlsRoute) -> Result<(), String>; async fn install_route(&self, config: TlsRoute) -> Result<(), String>;
/// Gets the base domain that can be used to deploy applications that will be automatically /// Gets the base domain of this cluster. On openshift family clusters, this is the domain
/// routed to this cluster. /// used by default for all components, including the default ingress controller that
/// transforms ingress to routes.
/// ///
/// For example, if we have *.apps.nationtech.io pointing to a public load balancer, then this /// For example, get_internal_domain on a cluster that has `console-openshift-console.apps.mycluster.something`
/// function would install route apps.nationtech.io /// will return `apps.mycluster.something`
async fn get_wildcard_domain(&self) -> Result<Option<String>, String>; async fn get_internal_domain(&self) -> Result<Option<String>, String>;
/// Returns the port that this router exposes externally. /// Returns the port that this router exposes externally.
async fn get_router_port(&self) -> u16; async fn get_router_port(&self) -> u16;

View File

@@ -6,15 +6,11 @@ use crate::topology::{HelmCommand, Topology};
use async_trait::async_trait; use async_trait::async_trait;
use harmony_types::id::Id; use harmony_types::id::Id;
use harmony_types::net::Url; use harmony_types::net::Url;
use helm_wrapper_rs;
use helm_wrapper_rs::blocking::{DefaultHelmExecutor, HelmExecutor};
use log::{debug, info, warn}; use log::{debug, info, warn};
pub use non_blank_string_rs::NonBlankString; pub use non_blank_string_rs::NonBlankString;
use serde::Serialize; use serde::Serialize;
use std::collections::HashMap; use std::collections::HashMap;
use std::path::Path; use std::process::{Output, Stdio};
use std::process::{Command, Output, Stdio};
use std::str::FromStr;
use temp_file::TempFile; use temp_file::TempFile;
#[derive(Debug, Clone, Serialize)] #[derive(Debug, Clone, Serialize)]
@@ -65,7 +61,7 @@ pub struct HelmChartInterpret {
pub score: HelmChartScore, pub score: HelmChartScore,
} }
impl HelmChartInterpret { impl HelmChartInterpret {
fn add_repo(&self) -> Result<(), InterpretError> { fn add_repo<T: HelmCommand>(&self, topology: &T) -> Result<(), InterpretError> {
let repo = match &self.score.repository { let repo = match &self.score.repository {
Some(repo) => repo, Some(repo) => repo,
None => { None => {
@@ -84,7 +80,7 @@ impl HelmChartInterpret {
add_args.push("--force-update"); add_args.push("--force-update");
} }
let add_output = run_helm_command(&add_args)?; let add_output = run_helm_command(topology, &add_args)?;
let full_output = format!( let full_output = format!(
"{}\n{}", "{}\n{}",
String::from_utf8_lossy(&add_output.stdout), String::from_utf8_lossy(&add_output.stdout),
@@ -100,23 +96,19 @@ impl HelmChartInterpret {
} }
} }
fn run_helm_command(args: &[&str]) -> Result<Output, InterpretError> { fn run_helm_command<T: HelmCommand>(topology: &T, args: &[&str]) -> Result<Output, InterpretError> {
let command_str = format!("helm {}", args.join(" ")); let mut helm_cmd = topology.get_helm_command();
debug!( helm_cmd.args(args);
"Got KUBECONFIG: `{}`",
std::env::var("KUBECONFIG").unwrap_or("".to_string())
);
debug!("Running Helm command: `{}`", command_str);
let output = Command::new("helm") debug!("Running Helm command: `{:?}`", helm_cmd);
.args(args)
let output = helm_cmd
.stdout(Stdio::piped()) .stdout(Stdio::piped())
.stderr(Stdio::piped()) .stderr(Stdio::piped())
.output() .output()
.map_err(|e| { .map_err(|e| {
InterpretError::new(format!( InterpretError::new(format!(
"Failed to execute helm command '{}': {}. Is helm installed and in PATH?", "Failed to execute helm command '{helm_cmd:?}': {e}. Is helm installed and in PATH?",
command_str, e
)) ))
})?; })?;
@@ -124,13 +116,13 @@ fn run_helm_command(args: &[&str]) -> Result<Output, InterpretError> {
let stdout = String::from_utf8_lossy(&output.stdout); let stdout = String::from_utf8_lossy(&output.stdout);
let stderr = String::from_utf8_lossy(&output.stderr); let stderr = String::from_utf8_lossy(&output.stderr);
warn!( warn!(
"Helm command `{}` failed with status: {}\nStdout:\n{}\nStderr:\n{}", "Helm command `{helm_cmd:?}` failed with status: {}\nStdout:\n{stdout}\nStderr:\n{stderr}",
command_str, output.status, stdout, stderr output.status
); );
} else { } else {
debug!( debug!(
"Helm command `{}` finished successfully. Status: {}", "Helm command `{helm_cmd:?}` finished successfully. Status: {}",
command_str, output.status output.status
); );
} }
@@ -142,7 +134,7 @@ impl<T: Topology + HelmCommand> Interpret<T> for HelmChartInterpret {
async fn execute( async fn execute(
&self, &self,
_inventory: &Inventory, _inventory: &Inventory,
_topology: &T, topology: &T,
) -> Result<Outcome, InterpretError> { ) -> Result<Outcome, InterpretError> {
let ns = self let ns = self
.score .score
@@ -150,98 +142,62 @@ impl<T: Topology + HelmCommand> Interpret<T> for HelmChartInterpret {
.as_ref() .as_ref()
.unwrap_or_else(|| todo!("Get namespace from active kubernetes cluster")); .unwrap_or_else(|| todo!("Get namespace from active kubernetes cluster"));
let tf: TempFile; self.add_repo(topology)?;
let yaml_path: Option<&Path> = match self.score.values_yaml.as_ref() {
Some(yaml_str) => { let mut args = if self.score.install_only {
tf = temp_file::with_contents(yaml_str.as_bytes()); vec!["install"]
debug!( } else {
"values yaml string for chart {} :\n {yaml_str}", vec!["upgrade", "--install"]
self.score.chart_name
);
Some(tf.path())
}
None => None,
}; };
self.add_repo()?; args.extend(vec![
let helm_executor = DefaultHelmExecutor::new_with_opts(
&NonBlankString::from_str("helm").unwrap(),
None,
900,
false,
false,
);
let mut helm_options = Vec::new();
if self.score.create_namespace {
helm_options.push(NonBlankString::from_str("--create-namespace").unwrap());
}
if self.score.install_only {
let chart_list = match helm_executor.list(Some(ns)) {
Ok(charts) => charts,
Err(e) => {
return Err(InterpretError::new(format!(
"Failed to list scores in namespace {:?} because of error : {}",
self.score.namespace, e
)));
}
};
if chart_list
.iter()
.any(|item| item.name == self.score.release_name.to_string())
{
info!(
"Release '{}' already exists in namespace '{}'. Skipping installation as install_only is true.",
self.score.release_name, ns
);
return Ok(Outcome::success(format!(
"Helm Chart '{}' already installed to namespace {ns} and install_only=true",
self.score.release_name
)));
} else {
info!(
"Release '{}' not found in namespace '{}'. Proceeding with installation.",
self.score.release_name, ns
);
}
}
let res = helm_executor.install_or_upgrade(
ns,
&self.score.release_name, &self.score.release_name,
&self.score.chart_name, &self.score.chart_name,
self.score.chart_version.as_ref(), "--namespace",
self.score.values_overrides.as_ref(), &ns,
yaml_path, ]);
Some(&helm_options),
);
let status = match res { if self.score.create_namespace {
Ok(status) => status, args.push("--create-namespace");
Err(err) => return Err(InterpretError::new(err.to_string())), }
};
match status { if let Some(version) = &self.score.chart_version {
helm_wrapper_rs::HelmDeployStatus::Deployed => Ok(Outcome::success(format!( args.push("--version");
args.push(&version);
}
let tf: TempFile;
if let Some(yaml_str) = &self.score.values_yaml {
tf = temp_file::with_contents(yaml_str.as_bytes());
args.push("--values");
args.push(tf.path().to_str().unwrap());
}
let overrides_strings: Vec<String>;
if let Some(overrides) = &self.score.values_overrides {
overrides_strings = overrides
.iter()
.map(|(key, value)| format!("{key}={value}"))
.collect();
for o in overrides_strings.iter() {
args.push("--set");
args.push(&o);
}
}
let output = run_helm_command(topology, &args)?;
if output.status.success() {
Ok(Outcome::success(format!(
"Helm Chart {} deployed", "Helm Chart {} deployed",
self.score.release_name self.score.release_name
))), )))
helm_wrapper_rs::HelmDeployStatus::PendingInstall => Ok(Outcome::running(format!( } else {
"Helm Chart {} pending install...", Err(InterpretError::new(format!(
self.score.release_name "Helm Chart {} installation failed: {}",
))), self.score.release_name,
helm_wrapper_rs::HelmDeployStatus::PendingUpgrade => Ok(Outcome::running(format!( String::from_utf8_lossy(&output.stderr)
"Helm Chart {} pending upgrade...", )))
self.score.release_name
))),
helm_wrapper_rs::HelmDeployStatus::Failed => Err(InterpretError::new(format!(
"Helm Chart {} installation failed",
self.score.release_name
))),
} }
} }

View File

@@ -5,7 +5,7 @@ use crate::topology::{FailoverTopology, TlsRoute, TlsRouter};
#[async_trait] #[async_trait]
impl<T: TlsRouter> TlsRouter for FailoverTopology<T> { impl<T: TlsRouter> TlsRouter for FailoverTopology<T> {
async fn get_wildcard_domain(&self) -> Result<Option<String>, String> { async fn get_internal_domain(&self) -> Result<Option<String>, String> {
todo!() todo!()
} }

View File

@@ -0,0 +1,214 @@
use k8s_openapi::apimachinery::pkg::apis::meta::v1::{ListMeta, ObjectMeta};
use k8s_openapi::{ClusterResourceScope, Resource};
use serde::{Deserialize, Serialize};
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct Ingress {
#[serde(skip_serializing_if = "Option::is_none")]
pub api_version: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub kind: Option<String>,
pub metadata: ObjectMeta,
pub spec: IngressSpec,
#[serde(skip_serializing_if = "Option::is_none")]
pub status: Option<IngressStatus>,
}
impl Resource for Ingress {
const API_VERSION: &'static str = "config.openshift.io/v1";
const GROUP: &'static str = "config.openshift.io";
const VERSION: &'static str = "v1";
const KIND: &'static str = "Ingress";
const URL_PATH_SEGMENT: &'static str = "ingresses";
type Scope = ClusterResourceScope;
}
impl k8s_openapi::Metadata for Ingress {
type Ty = ObjectMeta;
fn metadata(&self) -> &Self::Ty {
&self.metadata
}
fn metadata_mut(&mut self) -> &mut Self::Ty {
&mut self.metadata
}
}
impl Default for Ingress {
fn default() -> Self {
Ingress {
api_version: Some("config.openshift.io/v1".to_string()),
kind: Some("Ingress".to_string()),
metadata: ObjectMeta::default(),
spec: IngressSpec::default(),
status: None,
}
}
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct IngressList {
pub metadata: ListMeta,
pub items: Vec<Ingress>,
}
impl Default for IngressList {
fn default() -> Self {
Self {
metadata: ListMeta::default(),
items: Vec::new(),
}
}
}
impl Resource for IngressList {
const API_VERSION: &'static str = "config.openshift.io/v1";
const GROUP: &'static str = "config.openshift.io";
const VERSION: &'static str = "v1";
const KIND: &'static str = "IngressList";
const URL_PATH_SEGMENT: &'static str = "ingresses";
type Scope = ClusterResourceScope;
}
impl k8s_openapi::Metadata for IngressList {
type Ty = ListMeta;
fn metadata(&self) -> &Self::Ty {
&self.metadata
}
fn metadata_mut(&mut self) -> &mut Self::Ty {
&mut self.metadata
}
}
#[derive(Deserialize, Serialize, Clone, Debug, Default)]
#[serde(rename_all = "camelCase")]
pub struct IngressSpec {
#[serde(skip_serializing_if = "Option::is_none")]
pub apps_domain: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub component_routes: Option<Vec<ComponentRouteSpec>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub domain: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub load_balancer: Option<LoadBalancer>,
#[serde(skip_serializing_if = "Option::is_none")]
pub required_hsts_policies: Option<Vec<RequiredHSTSPolicy>>,
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct ComponentRouteSpec {
pub hostname: String,
pub name: String,
pub namespace: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub serving_cert_key_pair_secret: Option<SecretNameReference>,
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct SecretNameReference {
pub name: String,
}
#[derive(Deserialize, Serialize, Clone, Debug, Default)]
#[serde(rename_all = "camelCase")]
pub struct LoadBalancer {
#[serde(skip_serializing_if = "Option::is_none")]
pub platform: Option<IngressPlatform>,
}
#[derive(Deserialize, Serialize, Clone, Debug, Default)]
#[serde(rename_all = "camelCase")]
pub struct IngressPlatform {
#[serde(skip_serializing_if = "Option::is_none")]
pub aws: Option<AWSPlatformLoadBalancer>,
#[serde(skip_serializing_if = "Option::is_none")]
pub r#type: Option<String>,
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct AWSPlatformLoadBalancer {
pub r#type: String,
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct RequiredHSTSPolicy {
pub domain_patterns: Vec<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub include_sub_domains_policy: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub max_age: Option<MaxAgePolicy>,
#[serde(skip_serializing_if = "Option::is_none")]
pub namespace_selector: Option<k8s_openapi::apimachinery::pkg::apis::meta::v1::LabelSelector>,
#[serde(skip_serializing_if = "Option::is_none")]
pub preload_policy: Option<String>,
}
#[derive(Deserialize, Serialize, Clone, Debug, Default)]
#[serde(rename_all = "camelCase")]
pub struct MaxAgePolicy {
#[serde(skip_serializing_if = "Option::is_none")]
pub largest_max_age: Option<i32>,
#[serde(skip_serializing_if = "Option::is_none")]
pub smallest_max_age: Option<i32>,
}
#[derive(Deserialize, Serialize, Clone, Debug, Default)]
#[serde(rename_all = "camelCase")]
pub struct IngressStatus {
#[serde(skip_serializing_if = "Option::is_none")]
pub component_routes: Option<Vec<ComponentRouteStatus>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub default_placement: Option<String>,
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct ComponentRouteStatus {
#[serde(skip_serializing_if = "Option::is_none")]
pub conditions: Option<Vec<k8s_openapi::apimachinery::pkg::apis::meta::v1::Condition>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub consuming_users: Option<Vec<String>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub current_hostnames: Option<Vec<String>>,
pub default_hostname: String,
pub name: String,
pub namespace: String,
pub related_objects: Vec<ObjectReference>,
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct ObjectReference {
pub group: String,
pub name: String,
pub namespace: String,
pub resource: String,
}

View File

@@ -1,2 +1,3 @@
pub mod nmstate; pub mod nmstate;
pub mod route; pub mod route;
pub mod ingresses_config;

View File

@@ -1,5 +1,4 @@
use k8s_openapi::apimachinery::pkg::apis::meta::v1::{ListMeta, ObjectMeta, Time}; use k8s_openapi::apimachinery::pkg::apis::meta::v1::{ListMeta, ObjectMeta, Time};
use k8s_openapi::apimachinery::pkg::util::intstr::IntOrString;
use k8s_openapi::{NamespaceResourceScope, Resource}; use k8s_openapi::{NamespaceResourceScope, Resource};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};

View File

@@ -7,11 +7,14 @@ use harmony::{
}; };
use log::{error, info, log_enabled}; use log::{error, info, log_enabled};
use std::io::Write; use std::io::Write;
use std::sync::Mutex; use std::sync::{Mutex, OnceLock};
pub fn init() { pub fn init() {
configure_logger(); static INITIALIZED: OnceLock<()> = OnceLock::new();
handle_events(); INITIALIZED.get_or_init(|| {
configure_logger();
handle_events();
});
} }
fn configure_logger() { fn configure_logger() {