Compare commits


68 Commits

Author SHA1 Message Date
07bc59d414 Merge pull request 'feat/cluster_monitoring' (#179) from feat/cluster_monitoring into master
All checks were successful
Run Check Script / check (push) Successful in 1m4s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m16s
Reviewed-on: #179
2026-01-06 20:47:06 +00:00
d5137d5ebc Merge remote-tracking branch 'origin/master' into feat/cluster_monitoring
Some checks failed
Run Check Script / check (pull_request) Failing after 10m33s
2026-01-06 15:43:34 -05:00
f2ca97b3bf Merge pull request 'feat(application): Webapp feature with production dns' (#167) from feat/webappdns into master
All checks were successful
Run Check Script / check (push) Successful in 54s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m31s
Reviewed-on: #167
2026-01-06 20:15:28 +00:00
dbfae8539f Merge remote-tracking branch 'origin/master' into feat/webappdns
All checks were successful
Run Check Script / check (pull_request) Successful in 58s
2026-01-06 15:14:19 -05:00
9359d43fe1 chore: Fix pr comments, documentation, slight refactor for better apis
All checks were successful
Run Check Script / check (pull_request) Successful in 49s
2026-01-06 15:09:17 -05:00
e026ad4d69 Merge pull request 'adr: draft ADR proposing harmony agent and nats-jetstream for decentralized workload management' (#202) from adr/decentralized-workload-management into master
All checks were successful
Run Check Script / check (push) Successful in 58s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 6m41s
Reviewed-on: #202
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-06 19:45:54 +00:00
98f098ffa4 Merge pull request 'feat: implementation for opnsense os-node_exporter' (#173) from feat/install_opnsense_node_exporter into master
All checks were successful
Run Check Script / check (push) Successful in 59s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 6m58s
Reviewed-on: #173
2026-01-06 19:19:34 +00:00
fdf1dfaa30 fix: leave implementers to define their Debug, so removed impl Debug for dyn NodeExporter
All checks were successful
Run Check Script / check (pull_request) Successful in 55s
2026-01-06 14:17:04 -05:00
4f8cd0c1cb Merge remote-tracking branch 'origin/master' into feat/install_opnsense_node_exporter
All checks were successful
Run Check Script / check (pull_request) Successful in 55s
2026-01-06 13:56:48 -05:00
004b35f08e Merge pull request 'feat/brocade_snmp' (#193) from feat/brocade_snmp into master
Some checks failed
Run Check Script / check (push) Failing after 2s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 27s
Reviewed-on: #193
2026-01-06 16:22:25 +00:00
2b19d8c3e8 fix: changed name to switch_ips for more clarity
All checks were successful
Run Check Script / check (pull_request) Successful in 54s
2026-01-06 10:51:53 -05:00
745479c667 Merge pull request 'doc for removing worker flag from cp on UPI' (#165) from doc/worker-flag into master
All checks were successful
Run Check Script / check (push) Successful in 53s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 9m24s
Reviewed-on: #165
2026-01-06 15:46:13 +00:00
2d89e08877 Merge pull request 'doc to clone and transfer a coreos disk' (#166) from doc/clone into master
Some checks failed
Run Check Script / check (push) Successful in 54s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #166
2026-01-06 15:42:56 +00:00
e5bd866c09 Merge pull request 'feat: cnpg operator score' (#199) from feat/cnpgOperator into master
Some checks failed
Run Check Script / check (push) Has been cancelled
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #199
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-06 15:41:55 +00:00
0973f76701 Merge pull request 'feat: Introducing FailoverTopology and OperatorHub Catalog Subscription with example' (#196) from feat/multisitePostgreSQL into master
Some checks failed
Run Check Script / check (push) Has been cancelled
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #196
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-06 15:41:12 +00:00
fd69a2d101 Merge pull request 'feat/rebuild_inventory' (#201) from feat/rebuild_inventory into master
Reviewed-on: #201
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-05 20:30:33 +00:00
5cce9f8e74 adr: draft ADR proposing harmony agent and nats-jetstream for decentralized workload management
All checks were successful
Run Check Script / check (pull_request) Successful in 1m31s
2025-12-19 10:12:44 -05:00
07e610c54a fix git merge conflict
All checks were successful
Run Check Script / check (pull_request) Successful in 1m24s
2025-12-17 17:09:32 -05:00
03e98a51e3 Merge pull request 'fix: added fields missing for haproxy after most recent update' (#191) from fix/opnsense_update into master
Some checks failed
Run Check Script / check (push) Failing after 12m40s
Reviewed-on: #191
2025-12-17 20:03:49 +00:00
22875fe8f3 fix: updated test xml structures to match with new fields added to opnsense
All checks were successful
Run Check Script / check (pull_request) Successful in 1m32s
2025-12-17 15:00:48 -05:00
c6f859f973 fix(OPNSense): update fields for haproxy and opnsense following most recent update and upgrade to opnsense
Some checks failed
Run Check Script / check (pull_request) Failing after 1m25s
2025-12-16 15:31:35 -05:00
bbf28a1a28 Merge branch 'master' into fix/opnsense_update
Some checks failed
Run Check Script / check (pull_request) Failing after 1m21s
2025-12-16 20:00:54 +00:00
f242aafebb feat: Subscription for cnpg-operator fixed default values, tested and added to operatorhub example.
All checks were successful
Run Check Script / check (pull_request) Successful in 1m31s
2025-12-11 12:18:28 -05:00
3e14ebd62c feat: cnpg operator score
All checks were successful
Run Check Script / check (pull_request) Successful in 1m36s
2025-12-10 22:55:08 -05:00
1b19638df4 wip(failover): Started implementation of the FailoverTopology with PostgreSQL capability
All checks were successful
Run Check Script / check (pull_request) Successful in 1m32s
This is our first Higher Order Topology (see ADR-015)
2025-12-10 21:15:51 -05:00
d39b1957cd feat(k8s_app): OperatorhubCatalogSourceScore can now install the operatorhub catalogsource on a cluster that already has operator lifecycle manager installed 2025-12-10 16:58:58 -05:00
bfdb11b217 Merge pull request 'feat(OKDInstallation): Implemented bootstrap of okd worker node, added features to allow both control plane and worker node to use the same bootstrap_okd_node score' (#198) from feat/okd-nodes into master
Some checks failed
Run Check Script / check (push) Successful in 1m57s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m59s
Reviewed-on: #198
Reviewed-by: johnride <jg@nationtech.io>
2025-12-10 19:27:51 +00:00
d5fadf4f44 fix: deleted storage node role, fixed erroneous comment, modified score name to be in line with clean code naming conventions, fixed how the OKDNodeInstallationScore is called via OKDSetup03ControlPlaneScore and OKDSetup04WorkersScore
All checks were successful
Run Check Script / check (pull_request) Successful in 1m45s
2025-12-10 14:20:24 -05:00
357ca93d90 wip: FailoverTopology implementation for PostgreSQL on the way! 2025-12-10 13:12:53 -05:00
8103932f23 doc: Initial documentation for the MultisitePostgreSQL module 2025-12-10 13:12:53 -05:00
50bd5c5bba feat(OKDInstallation): Implemented bootstrap of okd worker node, added features to allow both control plane and worker node to use the same bootstrap_okd_node score
All checks were successful
Run Check Script / check (pull_request) Successful in 1m46s
2025-12-10 12:15:07 -05:00
9fbdc72cd0 fix: git ignore
All checks were successful
Run Check Script / check (pull_request) Successful in 1m29s
2025-11-18 08:41:09 -05:00
78e595e696 feat: added alert manager routes to openshift cluster monitoring
All checks were successful
Run Check Script / check (pull_request) Successful in 1m37s
2025-11-17 15:22:43 -05:00
90b89224d8 fix: added K8sName type for strict naming of Kubernetes resources 2025-11-17 15:20:51 -05:00
43a17811cc fix formatting
Some checks failed
Run Check Script / check (pull_request) Failing after 1m49s
2025-11-14 12:53:43 -05:00
93ac89157a feat: added score to enable snmp_server on brocade switch and a working example
All checks were successful
Run Check Script / check (pull_request) Successful in 2m4s
2025-11-14 12:49:00 -05:00
29c82db70d fix: added fields missing for haproxy after most recent update
Some checks failed
Run Check Script / check (pull_request) Failing after 49s
2025-11-12 13:21:55 -05:00
8ee3f8a4ad chore: Update harmony-inventory-agent binary as some fixes were introduced: the port is now 25000 and nbd devices won't make the inventory crash
Some checks failed
Run Check Script / check (pull_request) Failing after 40s
2025-11-11 11:32:42 -05:00
d3634a6313 fix(types): Switch port location failed on port channel interfaces 2025-11-11 09:53:59 -05:00
a0a8d5277c fix: opnsense definitions more accurate for various resources such as ProxyGeneral, System, StaticMap, Job, etc. Also fixed brocade crate export and some warnings 2025-11-11 09:06:36 -05:00
43b04edbae feat(brocade): Add feature and example to remove port channel and configure switchport 2025-11-10 22:59:37 -05:00
755a4b7749 feat(inventory-agent): Discover algorithm by scanning a subnet of ips, slower than mdns but more reliable and versatile 2025-11-10 22:15:31 -05:00
5953bc58f4 feat: added function to enable snmp-server for brocade switches 2025-11-10 14:57:22 -05:00
51a5afbb6d fix: added some extra details
All checks were successful
Run Check Script / check (pull_request) Successful in 1m4s
2025-11-07 09:04:27 -05:00
759a9287d3 Merge remote-tracking branch 'origin/master' into feat/cluster_monitoring
Some checks failed
Run Check Script / check (pull_request) Failing after 19s
2025-11-05 17:02:10 -05:00
24922321b1 fix: webhook name must be k8s field compliant, add a FIXME note 2025-11-05 16:59:48 -05:00
cf84f2cce8 wip: cluster_monitoring almost there, a kink to fix in the yaml handling
All checks were successful
Run Check Script / check (pull_request) Successful in 1m15s
2025-10-29 23:12:34 -04:00
a12d12aa4f feat: example OpenshiftClusterAlertScore
All checks were successful
Run Check Script / check (pull_request) Successful in 1m17s
2025-10-29 17:29:28 -04:00
cefb65933a wip: cluster monitoring score coming along, this simply edits OKD builtin alertmanager instance and adds a receiver 2025-10-29 17:26:21 -04:00
c2fa4f1869 fix: cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m21s
2025-10-29 13:53:58 -04:00
ee278ac817 Merge remote-tracking branch 'origin/master' into feat/install_opnsense_node_exporter
Some checks failed
Run Check Script / check (pull_request) Failing after 25s
2025-10-29 13:49:56 -04:00
09a06f136e Merge remote-tracking branch 'origin/master' into feat/install_opnsense_node_exporter
All checks were successful
Run Check Script / check (pull_request) Successful in 1m21s
2025-10-29 13:42:12 -04:00
5f147fa672 fix: opnsense-config reload_config() returns live config.xml rather than dropping it, allows function is_package_installed() to read live state after package installation rather than old config before installation
All checks were successful
Run Check Script / check (pull_request) Successful in 1m17s
2025-10-29 13:25:37 -04:00
9ba939bde1 wip: cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m16s
2025-10-28 15:45:02 -04:00
44bf21718c wip: example score with impl topology for opnsense topology 2025-10-28 14:41:15 -04:00
5e1580e5c1 Merge branch 'master' into doc/clone 2025-10-23 19:32:26 +00:00
1802b10ddf fix: translated documentation notes into English 2025-10-23 15:31:45 -04:00
008b03f979 fix: changed documentation language to english 2025-10-23 14:56:07 -04:00
9f7b90d182 feat(argocd): Can now detect argocd instance when already installed and write crd accordingly. One major caveat though is that crd versions are not managed properly yet
Some checks failed
Run Check Script / check (pull_request) Failing after 39s
2025-10-23 13:12:38 -04:00
dc70266b5a wip: install argocd app depending on how argocd is already installed in the cluster 2025-10-23 13:11:39 -04:00
8fb755cda1 wip: argocd discovery 2025-10-23 13:10:35 -04:00
cb7a64b160 feat: Support tls enabled by default on rust web app 2025-10-23 13:10:35 -04:00
afdd511a6d feat(application): Webapp feature with production dns 2025-10-23 13:10:35 -04:00
5ab58f0253 fix: added impl node exporter for hacluster topology and dummy infra
All checks were successful
Run Check Script / check (pull_request) Successful in 1m26s
2025-10-22 14:39:12 -04:00
5af13800b7 fix: removed unimplemented macro and returned Err instead
Some checks failed
Run Check Script / check (pull_request) Failing after 29s
some formatting error
2025-10-22 11:51:22 -04:00
8126b233d8 feat: implementation for opnsense os-node_exporter
Some checks failed
Run Check Script / check (pull_request) Failing after 41s
2025-10-22 11:27:28 -04:00
e5eb7fde9f doc to clone and transfer a coreos disk
All checks were successful
Run Check Script / check (pull_request) Successful in 1m11s
2025-10-09 15:29:09 -04:00
dd3f07e5b7 doc for removing worker flag from cp on UPI
All checks were successful
Run Check Script / check (pull_request) Successful in 1m13s
2025-10-09 15:28:42 -04:00
137 changed files with 4978 additions and 784 deletions

102
Cargo.lock generated
View File

@@ -690,6 +690,41 @@ dependencies = [
"tokio",
]
[[package]]
name = "brocade-snmp-server"
version = "0.1.0"
dependencies = [
"base64 0.22.1",
"brocade",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_secret",
"harmony_types",
"log",
"serde",
"tokio",
"url",
]
[[package]]
name = "brocade-switch"
version = "0.1.0"
dependencies = [
"async-trait",
"brocade",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_types",
"log",
"serde",
"tokio",
"url",
]
[[package]]
name = "brotli"
version = "8.0.2"
@@ -1804,6 +1839,25 @@ dependencies = [
"url",
]
[[package]]
name = "example-okd-cluster-alerts"
version = "0.1.0"
dependencies = [
"brocade",
"cidr",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_secret",
"harmony_secret_derive",
"harmony_types",
"log",
"serde",
"tokio",
"url",
]
[[package]]
name = "example-okd-install"
version = "0.1.0"
@@ -1835,6 +1889,21 @@ dependencies = [
"url",
]
[[package]]
name = "example-operatorhub-catalogsource"
version = "0.1.0"
dependencies = [
"cidr",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_types",
"log",
"tokio",
"url",
]
[[package]]
name = "example-opnsense"
version = "0.1.0"
@@ -1853,6 +1922,25 @@ dependencies = [
"url",
]
[[package]]
name = "example-opnsense-node-exporter"
version = "0.1.0"
dependencies = [
"async-trait",
"cidr",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_secret",
"harmony_secret_derive",
"harmony_types",
"log",
"serde",
"tokio",
"url",
]
[[package]]
name = "example-pxe"
version = "0.1.0"
@@ -2479,6 +2567,19 @@ dependencies = [
"tokio",
]
[[package]]
name = "harmony_inventory_builder"
version = "0.1.0"
dependencies = [
"cidr",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_types",
"tokio",
"url",
]
[[package]]
name = "harmony_macros"
version = "0.1.0"
@@ -2544,6 +2645,7 @@ dependencies = [
name = "harmony_types"
version = "0.1.0"
dependencies = [
"log",
"rand 0.9.2",
"serde",
"url",

View File

@@ -0,0 +1,90 @@
# Architecture Decision Record: Global Orchestration Mesh & The Harmony Agent
**Status:** Proposed
**Date:** 2025-12-19
## Context
Harmony is designed to enable a truly decentralized infrastructure where independent clusters—owned by different organizations or running on diverse hardware—can collaborate reliably. This vision combines the decentralization of Web3 with the performance and capabilities of Web2.
Currently, Harmony operates as a stateless CLI tool, invoked manually or via CI runners. While effective for deployment, this model presents a critical limitation: **a CLI cannot react to real-time events.**
To achieve automated failover and dynamic workload management, we need a system that is "always on." Relying on manual intervention or scheduled CI jobs to recover from a cluster failure creates unacceptable latency and prevents us from scaling to thousands of nodes.
Furthermore, we face a challenge in serving diverse workloads:
* **Financial workloads** require absolute consistency (CP - Consistency/Partition Tolerance).
* **AI/Inference workloads** require maximum availability (AP - Availability/Partition Tolerance).
There are many more use cases, but those are the two extremes.
We need a unified architecture that automates cluster coordination and supports both consistency models without requiring a complete re-architecture in the future.
## Decision
We propose a fundamental architectural evolution. It has been clear since the start of the project that Harmony would eventually need to grow beyond a purely ephemeral CLI tool into a system that includes a persistent **Harmony Agent**. This Agent will connect to a **Global Orchestration Mesh** built on a strongly consistent protocol.
The proposal consists of four key pillars:
### 1. The Harmony Agent (New Component)
We will develop a long-running process (Daemon/Agent) to be deployed alongside workloads.
* **Shift from CLI:** Unlike the CLI, which applies configuration and exits, the Agent maintains a persistent connection to the mesh.
* **Responsibility:** It actively monitors cluster health, participates in consensus, and executes lifecycle commands (start/stop/fence) instantly when the mesh dictates a state change.
### 2. The Technology: NATS JetStream
We will utilize **NATS JetStream** as the underlying transport and consensus layer for the Agent and the Mesh.
* **Why not raw Raft?** Implementing a raw Raft library requires building and maintaining the transport layer, log compaction, snapshotting, and peer discovery manually. NATS JetStream provides a battle-tested, distributed log and Key-Value store (based on Raft) out of the box, along with a high-performance pub/sub system for event propagation.
* **Role:** It will act as the "source of truth" for the cluster state.
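To make this concrete, here is a minimal sketch of how an Agent could use JetStream as its source of truth, assuming the `async-nats` crate; the server address, bucket name, and key names are illustrative, not part of this proposal:
```rust
use async_nats::jetstream::{self, kv};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to the NATS server backing the mesh (address is illustrative).
    let client = async_nats::connect("nats://mesh.internal:4222").await?;
    let js = jetstream::new(client);

    // A replicated KV bucket (Raft underneath) acts as the source of truth.
    let state = js
        .create_key_value(kv::Config {
            bucket: "cluster-state".to_string(),
            num_replicas: 3, // consensus across three JetStream nodes (clustered deployment)
            ..Default::default()
        })
        .await?;

    // Publish this agent's view of its cluster health.
    state.put("clusters.site-a.health", "healthy".into()).await?;

    // A lease claim is an atomic create: it succeeds on exactly one agent,
    // and only once consensus has committed the write.
    let _ = state.create("leases.finance-db.primary", "site-a".into()).await;

    // React to state changes pushed by the mesh instead of polling.
    let mut changes = state.watch("clusters.>").await?;
    while let Some(entry) = changes.next().await {
        let entry = entry?;
        println!("state change: {} -> {:?}", entry.key, entry.value);
    }
    Ok(())
}
```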
### 3. Strong Consistency at the Mesh Layer
The mesh will operate with **Strong Consistency** by default.
* All critical cluster state changes (topology updates, lease acquisitions, leadership elections) will require consensus among the Agents.
* This ensures that in the event of a network partition, we have a mathematical guarantee of which side holds the valid state, preventing data corruption.
### 4. Public UX: The `FailoverStrategy` Abstraction
To keep the user experience stable and simple, we will expose the complexity of the mesh through a high-level configuration API, tentatively called `FailoverStrategy`.
The user defines the *intent* in their config, and the Harmony Agent automates the *execution*:
* **`FailoverStrategy::AbsoluteConsistency`**:
* *Use Case:* Banking, Transactional DBs.
* *Behavior:* If the mesh detects a partition, the Agent on the minority side immediately halts workloads. No split-brain is ever allowed.
* **`FailoverStrategy::SplitBrainAllowed`**:
* *Use Case:* LLM Inference, Stateless Web Servers.
* *Behavior:* If a partition occurs, the Agent keeps workloads running to maximize uptime. State is reconciled when connectivity returns.
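A sketch of what this intent could look like in Rust is shown below; the two variants come straight from this ADR, while the surrounding function and field names are purely illustrative:
```rust
use serde::{Deserialize, Serialize};

/// User-facing failover intent, as proposed in this ADR.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum FailoverStrategy {
    /// CP: halt workloads on the minority side of a partition.
    AbsoluteConsistency,
    /// AP: keep workloads running everywhere and reconcile later.
    SplitBrainAllowed,
}

/// How an Agent might act on the intent when the mesh reports a partition
/// (illustrative; the real Agent does not exist yet).
fn on_partition(strategy: &FailoverStrategy, in_majority: bool) {
    match strategy {
        FailoverStrategy::AbsoluteConsistency if !in_majority => {
            // Fence immediately: no split-brain is ever allowed.
            halt_local_workloads();
        }
        _ => {
            // AP workloads (or the majority side) keep running.
        }
    }
}

fn halt_local_workloads() { /* stop/fence logic would live in the Agent */ }
```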
## Rationale
**The Necessity of an Agent**
You cannot automate what you do not monitor. Moving to an Agent-based model is the only way to achieve sub-second reaction times to infrastructure failures. It transforms Harmony from a deployment tool into a self-healing platform.
**Scaling & Decentralization**
To allow independent clusters to collaborate, they need a shared language. A strongly consistent mesh allows Cluster A (Organization X) and Cluster B (Organization Y) to agree on workload placement without a central authority.
**Why Strong Consistency First?**
It is technically feasible to relax a strongly consistent system to allow for "Split Brain" behavior (AP) when the user requests it. However, it is nearly impossible to take an eventually consistent system and force it to be strongly consistent (CP) later. By starting with strict constraints, we cover the hardest use cases (Finance) immediately.
**Future Topologies**
While our immediate need is `FailoverTopology` (Multi-site), this architecture supports any future topology logic:
* **`CostTopology`**: Agents negotiate to route workloads to the cluster with the cheapest spot instances.
* **`HorizontalTopology`**: Spreading a single workload across 100 clusters for massive scale.
* **`GeoTopology`**: Ensuring data stays within specific legal jurisdictions.
The mesh provides the *capability* (consensus and messaging); the topology provides the *logic*.
## Consequences
**Positive**
* **Automation:** Eliminates manual failover, enabling massive scale.
* **Reliability:** Guarantees data safety for critical workloads by default.
* **Flexibility:** A single codebase serves both high-frequency trading and AI inference.
* **Stability:** The public API remains abstract, allowing us to optimize the mesh internals without breaking user code.
**Negative**
* **Deployment Complexity:** Users must now deploy and maintain a running service (the Agent) rather than just downloading a binary.
* **Engineering Complexity:** Integrating NATS JetStream and handling distributed state machines is significantly more complex than the current CLI logic.
## Implementation Plan (Short Term)
1. **Agent Bootstrap:** Create the initial scaffold for the Harmony Agent (daemon).
2. **Mesh Integration:** Prototype NATS JetStream embedding within the Agent.
3. **Strategy Implementation:** Add `FailoverStrategy` to the configuration schema and implement the logic in the Agent to read and act on it.
4. **Migration:** Transition the current manual failover scripts into event-driven logic handled by the Agent.

View File

@@ -1,6 +1,6 @@
use std::net::{IpAddr, Ipv4Addr};
use brocade::BrocadeOptions;
use brocade::{BrocadeOptions, ssh};
use harmony_secret::{Secret, SecretManager};
use harmony_types::switch::PortLocation;
use serde::{Deserialize, Serialize};
@@ -16,23 +16,28 @@ async fn main() {
env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).init();
// let ip = IpAddr::V4(Ipv4Addr::new(10, 0, 0, 250)); // old brocade @ ianlet
let ip = IpAddr::V4(Ipv4Addr::new(192, 168, 55, 101)); // brocade @ sto1
let ip = IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)); // brocade @ sto1
// let ip = IpAddr::V4(Ipv4Addr::new(192, 168, 4, 11)); // brocade @ st
let switch_addresses = vec![ip];
let config = SecretManager::get_or_prompt::<BrocadeSwitchAuth>()
.await
.unwrap();
// let config = SecretManager::get_or_prompt::<BrocadeSwitchAuth>()
// .await
// .unwrap();
let brocade = brocade::init(
&switch_addresses,
22,
&config.username,
&config.password,
Some(BrocadeOptions {
// &config.username,
// &config.password,
"admin",
"password",
BrocadeOptions {
dry_run: true,
ssh: ssh::SshOptions {
port: 2222,
..Default::default()
},
..Default::default()
}),
},
)
.await
.expect("Brocade client failed to connect");
@@ -54,6 +59,7 @@ async fn main() {
}
println!("--------------");
todo!();
let channel_name = "1";
brocade.clear_port_channel(channel_name).await.unwrap();

View File

@@ -1,7 +1,8 @@
use super::BrocadeClient;
use crate::{
BrocadeInfo, Error, ExecutionMode, InterSwitchLink, InterfaceInfo, MacAddressEntry,
PortChannelId, PortOperatingMode, parse_brocade_mac_address, shell::BrocadeShell,
PortChannelId, PortOperatingMode, SecurityLevel, parse_brocade_mac_address,
shell::BrocadeShell,
};
use async_trait::async_trait;
@@ -140,7 +141,7 @@ impl BrocadeClient for FastIronClient {
async fn configure_interfaces(
&self,
_interfaces: Vec<(String, PortOperatingMode)>,
_interfaces: &Vec<(String, PortOperatingMode)>,
) -> Result<(), Error> {
todo!()
}
@@ -209,4 +210,20 @@ impl BrocadeClient for FastIronClient {
info!("[Brocade] Port-channel '{channel_name}' cleared.");
Ok(())
}
async fn enable_snmp(&self, user_name: &str, auth: &str, des: &str) -> Result<(), Error> {
let commands = vec![
"configure terminal".into(),
"snmp-server view ALL 1 included".into(),
"snmp-server group public v3 priv read ALL".into(),
format!(
"snmp-server user {user_name} groupname public auth md5 auth-password {auth} priv des priv-password {des}"
),
"exit".into(),
];
self.shell
.run_commands(commands, ExecutionMode::Regular)
.await?;
Ok(())
}
}

View File

@@ -14,11 +14,12 @@ use async_trait::async_trait;
use harmony_types::net::MacAddress;
use harmony_types::switch::{PortDeclaration, PortLocation};
use regex::Regex;
use serde::Serialize;
mod fast_iron;
mod network_operating_system;
mod shell;
mod ssh;
pub mod ssh;
#[derive(Default, Clone, Debug)]
pub struct BrocadeOptions {
@@ -118,7 +119,7 @@ impl fmt::Display for InterfaceType {
}
/// Defines the primary configuration mode of a switch interface, representing mutually exclusive roles.
#[derive(Debug, PartialEq, Eq, Clone)]
#[derive(Debug, PartialEq, Eq, Clone, Serialize)]
pub enum PortOperatingMode {
/// The interface is explicitly configured for Brocade fabric roles (ISL or Trunk enabled).
Fabric,
@@ -141,12 +142,11 @@ pub enum InterfaceStatus {
pub async fn init(
ip_addresses: &[IpAddr],
port: u16,
username: &str,
password: &str,
options: Option<BrocadeOptions>,
options: BrocadeOptions,
) -> Result<Box<dyn BrocadeClient + Send + Sync>, Error> {
let shell = BrocadeShell::init(ip_addresses, port, username, password, options).await?;
let shell = BrocadeShell::init(ip_addresses, username, password, options).await?;
let version_info = shell
.with_session(ExecutionMode::Regular, |session| {
@@ -208,7 +208,7 @@ pub trait BrocadeClient: std::fmt::Debug {
/// Configures a set of interfaces to be operated with a specified mode (access ports, ISL, etc.).
async fn configure_interfaces(
&self,
interfaces: Vec<(String, PortOperatingMode)>,
interfaces: &Vec<(String, PortOperatingMode)>,
) -> Result<(), Error>;
/// Scans the existing configuration to find the next available (unused)
@@ -237,6 +237,15 @@ pub trait BrocadeClient: std::fmt::Debug {
ports: &[PortLocation],
) -> Result<(), Error>;
/// Enables Simple Network Management Protocol (SNMP) server for switch
///
/// # Parameters
///
/// * `user_name`: The user name for the snmp server
/// * `auth`: The password for authentication process for verifying the identity of a device
/// * `des`: The Data Encryption Standard algorithm key
async fn enable_snmp(&self, user_name: &str, auth: &str, des: &str) -> Result<(), Error>;
/// Removes all configuration associated with the specified Port-Channel name.
///
/// This operation should be idempotent; attempting to clear a non-existent
@@ -300,6 +309,11 @@ fn parse_brocade_mac_address(value: &str) -> Result<MacAddress, String> {
Ok(MacAddress(bytes))
}
#[derive(Debug)]
pub enum SecurityLevel {
AuthPriv(String),
}
#[derive(Debug)]
pub enum Error {
NetworkError(String),

View File

@@ -8,7 +8,7 @@ use regex::Regex;
use crate::{
BrocadeClient, BrocadeInfo, Error, ExecutionMode, InterSwitchLink, InterfaceInfo,
InterfaceStatus, InterfaceType, MacAddressEntry, PortChannelId, PortOperatingMode,
parse_brocade_mac_address, shell::BrocadeShell,
SecurityLevel, parse_brocade_mac_address, shell::BrocadeShell,
};
#[derive(Debug)]
@@ -187,7 +187,7 @@ impl BrocadeClient for NetworkOperatingSystemClient {
async fn configure_interfaces(
&self,
interfaces: Vec<(String, PortOperatingMode)>,
interfaces: &Vec<(String, PortOperatingMode)>,
) -> Result<(), Error> {
info!("[Brocade] Configuring {} interface(s)...", interfaces.len());
@@ -204,9 +204,12 @@ impl BrocadeClient for NetworkOperatingSystemClient {
PortOperatingMode::Trunk => {
commands.push("switchport".into());
commands.push("switchport mode trunk".into());
commands.push("no spanning-tree shutdown".into());
commands.push("switchport trunk allowed vlan all".into());
commands.push("no switchport trunk tag native-vlan".into());
commands.push("spanning-tree shutdown".into());
commands.push("no fabric isl enable".into());
commands.push("no fabric trunk enable".into());
commands.push("no shutdown".into());
}
PortOperatingMode::Access => {
commands.push("switchport".into());
@@ -330,4 +333,20 @@ impl BrocadeClient for NetworkOperatingSystemClient {
info!("[Brocade] Port-channel '{channel_name}' cleared.");
Ok(())
}
async fn enable_snmp(&self, user_name: &str, auth: &str, des: &str) -> Result<(), Error> {
let commands = vec![
"configure terminal".into(),
"snmp-server view ALL 1 included".into(),
"snmp-server group public v3 priv read ALL".into(),
format!(
"snmp-server user {user_name} groupname public auth md5 auth-password {auth} priv des priv-password {des}"
),
"exit".into(),
];
self.shell
.run_commands(commands, ExecutionMode::Regular)
.await?;
Ok(())
}
}

View File

@@ -16,7 +16,6 @@ use tokio::time::timeout;
#[derive(Debug)]
pub struct BrocadeShell {
ip: IpAddr,
port: u16,
username: String,
password: String,
options: BrocadeOptions,
@@ -27,33 +26,31 @@ pub struct BrocadeShell {
impl BrocadeShell {
pub async fn init(
ip_addresses: &[IpAddr],
port: u16,
username: &str,
password: &str,
options: Option<BrocadeOptions>,
options: BrocadeOptions,
) -> Result<Self, Error> {
let ip = ip_addresses
.first()
.ok_or_else(|| Error::ConfigurationError("No IP addresses provided".to_string()))?;
let base_options = options.unwrap_or_default();
let options = ssh::try_init_client(username, password, ip, base_options).await?;
let brocade_ssh_client_options =
ssh::try_init_client(username, password, ip, options).await?;
Ok(Self {
ip: *ip,
port,
username: username.to_string(),
password: password.to_string(),
before_all_commands: vec![],
after_all_commands: vec![],
options,
options: brocade_ssh_client_options,
})
}
pub async fn open_session(&self, mode: ExecutionMode) -> Result<BrocadeSession, Error> {
BrocadeSession::open(
self.ip,
self.port,
self.options.ssh.port,
&self.username,
&self.password,
self.options.clone(),

View File

@@ -2,6 +2,7 @@ use std::borrow::Cow;
use std::sync::Arc;
use async_trait::async_trait;
use log::debug;
use russh::client::Handler;
use russh::kex::DH_G1_SHA1;
use russh::kex::ECDH_SHA2_NISTP256;
@@ -10,29 +11,43 @@ use russh_keys::key::SSH_RSA;
use super::BrocadeOptions;
use super::Error;
#[derive(Default, Clone, Debug)]
#[derive(Clone, Debug)]
pub struct SshOptions {
pub preferred_algorithms: russh::Preferred,
pub port: u16,
}
impl Default for SshOptions {
fn default() -> Self {
Self {
preferred_algorithms: Default::default(),
port: 22,
}
}
}
impl SshOptions {
fn ecdhsa_sha2_nistp256() -> Self {
fn ecdhsa_sha2_nistp256(port: u16) -> Self {
Self {
preferred_algorithms: russh::Preferred {
kex: Cow::Borrowed(&[ECDH_SHA2_NISTP256]),
key: Cow::Borrowed(&[SSH_RSA]),
..Default::default()
},
port,
..Default::default()
}
}
fn legacy() -> Self {
fn legacy(port: u16) -> Self {
Self {
preferred_algorithms: russh::Preferred {
kex: Cow::Borrowed(&[DH_G1_SHA1]),
key: Cow::Borrowed(&[SSH_RSA]),
..Default::default()
},
port,
..Default::default()
}
}
}
@@ -57,18 +72,21 @@ pub async fn try_init_client(
ip: &std::net::IpAddr,
base_options: BrocadeOptions,
) -> Result<BrocadeOptions, Error> {
let mut default = SshOptions::default();
default.port = base_options.ssh.port;
let ssh_options = vec![
SshOptions::default(),
SshOptions::ecdhsa_sha2_nistp256(),
SshOptions::legacy(),
default,
SshOptions::ecdhsa_sha2_nistp256(base_options.ssh.port),
SshOptions::legacy(base_options.ssh.port),
];
for ssh in ssh_options {
let opts = BrocadeOptions {
ssh,
ssh: ssh.clone(),
..base_options.clone()
};
let client = create_client(*ip, 22, username, password, &opts).await;
debug!("Creating client {ip}:{} {username}", ssh.port);
let client = create_client(*ip, ssh.port, username, password, &opts).await;
match client {
Ok(_) => {

Binary file not shown.

View File

@@ -0,0 +1,133 @@
## Working procedure to clone and restore a CoreOS disk from an OKD cluster
### **Step 1 - take a backup**
```
sudo dd if=/dev/old of=/dev/backup status=progress
```
### **Step 2 - clone beginning of old disk to new**
```
sudo dd if=/dev/old of=/dev/new status=progress count=1000 bs=1M
```
### **Step 3 - verify and modify disk partitions**
list disk partitions
```
sgdisk -p /dev/new
```
if the new disk is smaller than the old disk and there is free space on the old disk's xfs partition, modify the partitions of the new disk
```
gdisk /dev/new
```
inside gdisk, use these commands
```
v -> verify table
p -> print table
d -> delete a partition
n -> recreate the partition with the same partition number as the deleted one
```
For end sector, either specify the new end or just press Enter for maximum available
When asked about partition type, enter the same type code (it will show the old one)
```
p -> verify
w -> write
```
make an xfs file system on the new partition <new4>
```
sudo mkfs.xfs -f /dev/new4
```
### **Step 4 - copy old PARTUUID**
**careful here**
get old PARTUUID:
```
sgdisk -i <partition_number> /dev/old_disk # Note the "Partition unique GUID"
```
get labels
```
sgdisk -p /dev/old_disk # Shows partition names in the table
blkid /dev/old_disk* # Shows PARTUUIDs and labels for all partitions
```
set it on new disk
```
sgdisk -u <partition_number>:<old_partuuid> /dev/sdc
```
partition name:
```
sgdisk -c <partition_number>:"<old_name>" /dev/sdc
```
verify all:
```
lsblk -o NAME,SIZE,PARTUUID,PARTLABEL /dev/old_disk
```
### **Step 5 - Mount disks and copy files from old to new disk**
mount files before copy:
```
mkdir -p /mnt/new
mkdir -p /mnt/old
mount /dev/old4 /mnt/old
mount /dev/new4 /mnt/new
```
copy:
with the -n flag, rsync runs as a dry run
```
rsync -aAXHvn --numeric-ids /source/ /destination/
```
```
rsync -aAXHv --numeric-ids /source/ /destination/
```
### **Step 6 - Set correct UUID for new partition 4**
to set the uuid with xfs_admin, the file system must be unmounted first
unmount the devices
```
umount /mnt/new
umount /mnt/old
```
to set the correct uuid for partition 4, first read it from the old disk
```
blkid /dev/old4
```
```
xfs_admin -U <old_uuid> /dev/new_partition
```
to set labels
get it
```
sgdisk -i 4 /dev/sda | grep "Partition name"
```
set it
```
sgdisk -c 4:"<label_name>" /dev/sdc
```
or set the xfs filesystem label instead (check the existing one with xfs_admin -l /dev/old_partition)
```
xfs_admin -L <label> /dev/new_partition
```
### **Step 7 - Verify**
verify everything:
```
sgdisk -p /dev/sda # Old disk
sgdisk -p /dev/sdc # New disk
```
```
lsblk -o NAME,SIZE,PARTUUID,PARTLABEL /dev/sda
lsblk -o NAME,SIZE,PARTUUID,PARTLABEL /dev/sdc
```
```
blkid /dev/sda* | grep UUID=
blkid /dev/sdc* | grep UUID=
```

View File

@@ -0,0 +1,56 @@
## **Remove Worker flag from OKD Control Planes**
### **Context**
On OKD user-provisioned infrastructure (UPI), the control plane nodes can carry the node-role.kubernetes.io/worker label, which allows non-critical workloads to be scheduled on the control planes
### **Observed Symptoms**
- After adding servers to the HAProxy backend, each backend appears down
- Traffic is redirected to the control planes instead of the workers
- The router-default pods are incorrectly scheduled on the control planes rather than on the workers
- Pods are being scheduled on the control planes, causing cluster instability
```
ss -tlnp | grep 80
```
- shows the haproxy process listening at 0.0.0.0:80 on the control planes
- same problem for port 443
- In the rook-ceph namespace, certain pods are deployed on the control planes rather than on worker nodes
### **Cause**
- when installing with UPI, the roles (master, worker) are not managed by the Machine Config Operator and the control planes are made schedulable by default.
### **Diagnostic**
check node labels:
```
oc get nodes --show-labels | grep control-plane
```
Inspect the kubelet configuration:
```
cat /etc/systemd/system/kubelet.service
```
find the line:
```
--node-labels=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master,node-role.kubernetes.io/worker
```
→ the presence of the worker label confirms the problem.
Verify the flag doesn't come from the MCO:
```
oc get machineconfig | grep rendered-master
```
**Solution:**
To make the control planes non-schedulable, patch the cluster scheduler resource:
```
oc patch scheduler cluster --type merge -p '{"spec":{"mastersSchedulable":false}}'
```
after the patch is applied, the workloads can be moved by draining the nodes
```
oc adm cordon <cp-node>
oc adm drain <cp-node> --ignore-daemonsets --delete-emptydir-data
```

View File

@@ -0,0 +1,105 @@
# Design Document: Harmony PostgreSQL Module
**Status:** Draft
**Last Updated:** 2025-12-01
**Context:** Multi-site Data Replication & Orchestration
## 1. Overview
The Harmony PostgreSQL Module provides a high-level abstraction for deploying and managing high-availability PostgreSQL clusters across geographically distributed Kubernetes/OKD sites.
Instead of manually configuring complex replication slots, firewalls, and operator settings on each cluster, users define a single intent (a **Score**), and Harmony orchestrates the underlying infrastructure (the **Arrangement**) to establish a Primary-Replica architecture.
Currently, the implementation relies on the **CloudNativePG (CNPG)** operator as the backing engine.
## 2. Architecture
### 2.1 The Abstraction Model
Following **ADR 003 (Infrastructure Abstraction)**, Harmony separates the *intent* from the *implementation*.
1. **The Score (Intent):** The user defines a `MultisitePostgreSQL` resource. This describes *what* is needed (e.g., "A Postgres 15 cluster with 10GB storage, Primary on Site A, Replica on Site B").
2. **The Interpret (Action):** Harmony's MultisitePostgreSQLInterpret processes this Score and orchestrates the deployment on both sites to reach the state it defines.
3. **The Capability (Implementation):** The PostgreSQL capability is implemented by the K8sTopology; the interpret can deploy it, configure it, and fetch information about it. The concrete implementation relies on the mature CloudNativePG operator to manage all the required Kubernetes resources.
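As a rough sketch of how this maps onto Harmony's `Score`/`Interpret` traits (the shape is borrowed from other scores in this changeset; the struct and field names here are illustrative, not the shipped API):
```rust
use harmony::{interpret::Interpret, score::Score, topology::Topology};
use serde::Serialize;

/// Illustrative intent: "a Postgres 15 cluster, primary on Site A, replica on Site B".
#[derive(Clone, Debug, Serialize)]
struct MultisitePostgreSQLScore {
    version: String,            // e.g. "15"
    storage: String,            // e.g. "10Gi"
    primary_site: String,       // e.g. "site-paris"
    replica_sites: Vec<String>, // e.g. ["site-newyork"]
}

impl<T: Topology> Score<T> for MultisitePostgreSQLScore {
    fn name(&self) -> String {
        "MultisitePostgreSQLScore".to_string()
    }

    fn create_interpret(&self) -> Box<dyn Interpret<T>> {
        // The interpret drives both sites toward the declared state,
        // delegating the Kubernetes details to the CNPG operator.
        todo!("build a MultisitePostgreSQLInterpret from this score")
    }
}
```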
### 2.2 Network Connectivity (TLS Passthrough)
One of the critical challenges in multi-site orchestration is secure connectivity between clusters that may have dynamic IPs or strict firewalls.
To solve this, we utilize **OKD/OpenShift Routes with TLS Passthrough**.
* **Mechanism:** The Primary site exposes a `Route` configured for `termination: passthrough`.
* **Routing:** The OpenShift HAProxy router inspects the **SNI (Server Name Indication)** header of the incoming TCP connection to route traffic to the correct PostgreSQL Pod.
* **Security:** SSL is **not** terminated at the ingress router. The encrypted stream is passed directly to the PostgreSQL instance. Mutual TLS (mTLS) authentication is handled natively by CNPG between the Primary and Replica instances.
* **Dynamic IPs:** Because connections are established via DNS hostnames (the Route URL), this architecture is resilient to dynamic IP changes at the Primary site.
#### Traffic Flow Diagram
```text
[ Site B: Replica ] [ Site A: Primary ]
| |
(CNPG Instance) --[Encrypted TCP]--> (OKD HAProxy Router)
| (Port 443) |
| |
| [SNI Inspection]
| |
| v
| (PostgreSQL Primary Pod)
| (Port 5432)
```
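For reference, the Route on the Primary site is an ordinary OpenShift passthrough Route; a minimal example (host and service names are illustrative) might look like:
```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: postgres-finance-db
  namespace: tenant-a
spec:
  host: postgres-finance-db.apps.site-paris.example.com
  tls:
    termination: passthrough   # SSL is NOT terminated at the router
  to:
    kind: Service
    name: finance-db-rw        # CNPG read-write service (illustrative)
  port:
    targetPort: 5432
```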
## 3. Design Decisions
### Why CloudNativePG?
We selected CloudNativePG because it relies exclusively on standard Kubernetes primitives and uses the native PostgreSQL replication protocol (WAL shipping/Streaming). This aligns with Harmony's goal of being "K8s Native."
### Why TLS Passthrough instead of VPN/NodePort?
* **NodePort:** Requires static IPs and opening non-standard ports on the firewall, which violates our security constraints.
* **VPN (e.g., Wireguard/Tailscale):** While secure, it introduces significant complexity (sidecars, key management) and external dependencies.
* **TLS Passthrough:** Leverages the existing Ingress/Router infrastructure already present in OKD. It requires zero additional software and respects multi-tenancy (Routes are namespaced).
### Configuration Philosophy (YAGNI)
The current design exposes a **generic configuration surface**. Users can configure standard parameters (Storage size, CPU/Memory requests, Postgres version).
**We explicitly do not expose advanced CNPG or PostgreSQL configurations at this stage.**
* **Reasoning:** We aim to keep the API surface small and manageable.
* **Future Path:** We plan to implement a "pass-through" mechanism to allow sending raw config maps or custom parameters to the underlying engine (CNPG) *only when a concrete use case arises*. Until then, we adhere to the **YAGNI (You Ain't Gonna Need It)** principle to avoid premature optimization and API bloat.
## 4. Usage Guide
To deploy a multi-site cluster, apply the `MultisitePostgreSQL` resource to the Harmony Control Plane.
### Example Manifest
```yaml
apiVersion: harmony.io/v1alpha1
kind: MultisitePostgreSQL
metadata:
name: finance-db
namespace: tenant-a
spec:
version: "15"
storage: "10Gi"
resources:
requests:
cpu: "500m"
memory: "1Gi"
# Topology Definition
topology:
primary:
site: "site-paris" # The name of the cluster in Harmony
replicas:
- site: "site-newyork"
```
### What happens next?
1. Harmony detects the CR.
2. **On Site Paris:** It deploys a CNPG Cluster (Primary) and creates a Passthrough Route `postgres-finance-db.apps.site-paris.example.com`.
3. **On Site New York:** It deploys a CNPG Cluster (Replica) configured with `externalClusters` pointing to the Paris Route.
4. Data begins replicating immediately over the encrypted channel.
## 5. Troubleshooting
* **Connection Refused:** Ensure the Primary site's Route is successfully admitted by the Ingress Controller.
* **Certificate Errors:** CNPG manages mTLS automatically. If errors persist, ensure the CA secrets were correctly propagated by Harmony from Primary to Replica namespaces.

BIN
empty_database.sqlite Normal file

Binary file not shown.

View File

@@ -27,6 +27,7 @@ async fn main() {
};
let application = Arc::new(RustWebapp {
name: "example-monitoring".to_string(),
dns: "example-monitoring.harmony.mcd".to_string(),
project_root: PathBuf::from("./examples/rust/webapp"),
framework: Some(RustWebFramework::Leptos),
service_port: 3000,

View File

@@ -0,0 +1,20 @@
[package]
name = "brocade-snmp-server"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { path = "../../harmony" }
brocade = { path = "../../brocade" }
harmony_secret = { path = "../../harmony_secret" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
harmony_macros = { path = "../../harmony_macros" }
tokio = { workspace = true }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }
base64.workspace = true
serde.workspace = true

View File

@@ -0,0 +1,22 @@
use std::net::{IpAddr, Ipv4Addr};
use harmony::{
inventory::Inventory, modules::brocade::BrocadeEnableSnmpScore, topology::K8sAnywhereTopology,
};
#[tokio::main]
async fn main() {
let brocade_snmp_server = BrocadeEnableSnmpScore {
switch_ips: vec![IpAddr::V4(Ipv4Addr::new(192, 168, 1, 111))],
dry_run: true,
};
harmony_cli::run(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
vec![Box::new(brocade_snmp_server)],
None,
)
.await
.unwrap();
}

View File

@@ -0,0 +1,19 @@
[package]
name = "brocade-switch"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_macros = { path = "../../harmony_macros" }
harmony_types = { path = "../../harmony_types" }
tokio.workspace = true
url.workspace = true
async-trait.workspace = true
serde.workspace = true
log.workspace = true
env_logger.workspace = true
brocade = { path = "../../brocade" }

View File

@@ -0,0 +1,157 @@
use std::str::FromStr;
use async_trait::async_trait;
use brocade::{BrocadeOptions, PortOperatingMode};
use harmony::{
data::Version,
infra::brocade::BrocadeSwitchClient,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
score::Score,
topology::{
HostNetworkConfig, PortConfig, PreparationError, PreparationOutcome, Switch, SwitchClient,
SwitchError, Topology,
},
};
use harmony_macros::ip;
use harmony_types::{id::Id, net::MacAddress, switch::PortLocation};
use log::{debug, info};
use serde::Serialize;
#[tokio::main]
async fn main() {
let switch_score = BrocadeSwitchScore {
port_channels_to_clear: vec![
Id::from_str("17").unwrap(),
Id::from_str("19").unwrap(),
Id::from_str("18").unwrap(),
],
ports_to_configure: vec![
(PortLocation(2, 0, 17), PortOperatingMode::Trunk),
(PortLocation(2, 0, 19), PortOperatingMode::Trunk),
(PortLocation(1, 0, 18), PortOperatingMode::Trunk),
],
};
harmony_cli::run(
Inventory::autoload(),
SwitchTopology::new().await,
vec![Box::new(switch_score)],
None,
)
.await
.unwrap();
}
#[derive(Clone, Debug, Serialize)]
struct BrocadeSwitchScore {
port_channels_to_clear: Vec<Id>,
ports_to_configure: Vec<PortConfig>,
}
impl<T: Topology + Switch> Score<T> for BrocadeSwitchScore {
fn name(&self) -> String {
"BrocadeSwitchScore".to_string()
}
#[doc(hidden)]
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(BrocadeSwitchInterpret {
score: self.clone(),
})
}
}
#[derive(Debug)]
struct BrocadeSwitchInterpret {
score: BrocadeSwitchScore,
}
#[async_trait]
impl<T: Topology + Switch> Interpret<T> for BrocadeSwitchInterpret {
async fn execute(
&self,
_inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
info!("Applying switch configuration {:?}", self.score);
debug!(
"Clearing port channel {:?}",
self.score.port_channels_to_clear
);
topology
.clear_port_channel(&self.score.port_channels_to_clear)
.await
.map_err(|e| InterpretError::new(e.to_string()))?;
debug!("Configuring interfaces {:?}", self.score.ports_to_configure);
topology
.configure_interface(&self.score.ports_to_configure)
.await
.map_err(|e| InterpretError::new(e.to_string()))?;
Ok(Outcome::success("switch configured".to_string()))
}
fn get_name(&self) -> InterpretName {
InterpretName::Custom("BrocadeSwitchInterpret")
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
struct SwitchTopology {
client: Box<dyn SwitchClient>,
}
#[async_trait]
impl Topology for SwitchTopology {
fn name(&self) -> &str {
"SwitchTopology"
}
async fn ensure_ready(&self) -> Result<PreparationOutcome, PreparationError> {
Ok(PreparationOutcome::Noop)
}
}
impl SwitchTopology {
async fn new() -> Self {
let mut options = BrocadeOptions::default();
options.ssh.port = 2222;
let client =
BrocadeSwitchClient::init(&vec![ip!("127.0.0.1")], &"admin", &"password", options)
.await
.expect("Failed to connect to switch");
let client = Box::new(client);
Self { client }
}
}
#[async_trait]
impl Switch for SwitchTopology {
async fn setup_switch(&self) -> Result<(), SwitchError> {
todo!()
}
async fn get_port_for_mac_address(
&self,
_mac_address: &MacAddress,
) -> Result<Option<PortLocation>, SwitchError> {
todo!()
}
async fn configure_port_channel(&self, _config: &HostNetworkConfig) -> Result<(), SwitchError> {
todo!()
}
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError> {
self.client.clear_port_channel(ids).await
}
async fn configure_interface(&self, ports: &Vec<PortConfig>) -> Result<(), SwitchError> {
self.client.configure_interface(ports).await
}
}

View File

@@ -2,7 +2,7 @@ use harmony::{
inventory::Inventory,
modules::{
dummy::{ErrorScore, PanicScore, SuccessScore},
inventory::LaunchDiscoverInventoryAgentScore,
inventory::{HarmonyDiscoveryStrategy, LaunchDiscoverInventoryAgentScore},
},
topology::LocalhostTopology,
};
@@ -18,6 +18,7 @@ async fn main() {
Box::new(PanicScore {}),
Box::new(LaunchDiscoverInventoryAgentScore {
discovery_timeout: Some(10),
discovery_strategy: HarmonyDiscoveryStrategy::MDNS,
}),
],
None,

View File

@@ -0,0 +1,15 @@
[package]
name = "harmony_inventory_builder"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_macros = { path = "../../harmony_macros" }
harmony_types = { path = "../../harmony_types" }
tokio.workspace = true
url.workspace = true
cidr.workspace = true

View File

@@ -0,0 +1,11 @@
cargo build -p harmony_inventory_builder --release --target x86_64-unknown-linux-musl
SCRIPT_DIR="$(dirname ${0})"
cd "${SCRIPT_DIR}/docker/"
cp ../../../target/x86_64-unknown-linux-musl/release/harmony_inventory_builder .
docker build . -t hub.nationtech.io/harmony/harmony_inventory_builder
docker push hub.nationtech.io/harmony/harmony_inventory_builder

View File

@@ -0,0 +1,10 @@
FROM debian:12-slim
RUN mkdir /app
WORKDIR /app/
COPY harmony_inventory_builder /app/
ENV RUST_LOG=info
CMD ["sleep", "infinity"]

View File

@@ -0,0 +1,36 @@
use harmony::{
inventory::{HostRole, Inventory},
modules::inventory::{DiscoverHostForRoleScore, HarmonyDiscoveryStrategy},
topology::LocalhostTopology,
};
use harmony_macros::cidrv4;
#[tokio::main]
async fn main() {
let discover_worker = DiscoverHostForRoleScore {
role: HostRole::Worker,
number_desired_hosts: 3,
discovery_strategy: HarmonyDiscoveryStrategy::SUBNET {
cidr: cidrv4!("192.168.0.1/25"),
port: 25000,
},
};
let discover_control_plane = DiscoverHostForRoleScore {
role: HostRole::ControlPlane,
number_desired_hosts: 3,
discovery_strategy: HarmonyDiscoveryStrategy::SUBNET {
cidr: cidrv4!("192.168.0.1/25"),
port: 25000,
},
};
harmony_cli::run(
Inventory::autoload(),
LocalhostTopology::new(),
vec![Box::new(discover_worker), Box::new(discover_control_plane)],
None,
)
.await
.unwrap();
}

View File

@@ -24,13 +24,14 @@ use harmony::{
},
topology::K8sAnywhereTopology,
};
use harmony_types::net::Url;
use harmony_types::{k8s_name::K8sName, net::Url};
#[tokio::main]
async fn main() {
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
name: K8sName("test-discord".to_string()),
url: Url::Url(url::Url::parse("https://discord.doesnt.exist.com").unwrap()),
selectors: vec![],
};
let high_pvc_fill_rate_over_two_days_alert = high_pvc_fill_rate_over_two_days();

View File

@@ -22,8 +22,8 @@ use harmony::{
tenant::{ResourceLimits, TenantConfig, TenantNetworkPolicy},
},
};
use harmony_types::id::Id;
use harmony_types::net::Url;
use harmony_types::{id::Id, k8s_name::K8sName};
#[tokio::main]
async fn main() {
@@ -43,8 +43,9 @@ async fn main() {
};
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
name: K8sName("test-discord".to_string()),
url: Url::Url(url::Url::parse("https://discord.doesnt.exist.com").unwrap()),
selectors: vec![],
};
let high_pvc_fill_rate_over_two_days_alert = high_pvc_fill_rate_over_two_days();

View File

@@ -39,10 +39,10 @@ async fn main() {
.expect("Failed to get credentials");
let switches: Vec<IpAddr> = vec![ip!("192.168.33.101")];
let brocade_options = Some(BrocadeOptions {
let brocade_options = BrocadeOptions {
dry_run: *harmony::config::DRY_RUN,
..Default::default()
});
};
let switch_client = BrocadeSwitchClient::init(
&switches,
&switch_auth.username,
@@ -106,6 +106,7 @@ async fn main() {
name: "wk2".to_string(),
},
],
node_exporter: opnsense.clone(),
switch_client: switch_client.clone(),
network_manager: OnceLock::new(),
};

View File

@@ -0,0 +1,22 @@
[package]
name = "example-okd-cluster-alerts"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
harmony_secret = { path = "../../harmony_secret" }
harmony_secret_derive = { path = "../../harmony_secret_derive" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }
serde.workspace = true
brocade = { path = "../../brocade" }

View File

@@ -0,0 +1,38 @@
use std::collections::HashMap;
use harmony::{
inventory::Inventory,
modules::monitoring::{
alert_channel::discord_alert_channel::DiscordWebhook,
okd::cluster_monitoring::OpenshiftClusterAlertScore,
},
topology::K8sAnywhereTopology,
};
use harmony_macros::hurl;
use harmony_types::k8s_name::K8sName;
#[tokio::main]
async fn main() {
let mut sel = HashMap::new();
sel.insert(
"openshift_io_alert_source".to_string(),
"platform".to_string(),
);
let mut sel2 = HashMap::new();
sel2.insert("openshift_io_alert_source".to_string(), "".to_string());
let selectors = vec![sel, sel2];
harmony_cli::run(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
vec![Box::new(OpenshiftClusterAlertScore {
receivers: vec![Box::new(DiscordWebhook {
name: K8sName("wills-discord-webhook-example".to_string()),
url: hurl!("https://something.io"),
selectors: selectors,
})],
})],
None,
)
.await
.unwrap();
}

View File

@@ -4,7 +4,10 @@ use crate::topology::{get_inventory, get_topology};
use harmony::{
config::secret::SshKeyPair,
data::{FileContent, FilePath},
modules::okd::{installation::OKDInstallationPipeline, ipxe::OKDIpxeScore},
modules::{
inventory::HarmonyDiscoveryStrategy,
okd::{installation::OKDInstallationPipeline, ipxe::OKDIpxeScore},
},
score::Score,
topology::HAClusterTopology,
};
@@ -26,7 +29,8 @@ async fn main() {
},
})];
scores.append(&mut OKDInstallationPipeline::get_all_scores().await);
scores
.append(&mut OKDInstallationPipeline::get_all_scores(HarmonyDiscoveryStrategy::MDNS).await);
harmony_cli::run(inventory, topology, scores, None)
.await

View File

@@ -31,10 +31,10 @@ pub async fn get_topology() -> HAClusterTopology {
.expect("Failed to get credentials");
let switches: Vec<IpAddr> = vec![ip!("192.168.1.101")]; // TODO: Adjust me
let brocade_options = Some(BrocadeOptions {
let brocade_options = BrocadeOptions {
dry_run: *harmony::config::DRY_RUN,
..Default::default()
});
};
let switch_client = BrocadeSwitchClient::init(
&switches,
&switch_auth.username,
@@ -83,6 +83,7 @@ pub async fn get_topology() -> HAClusterTopology {
name: "bootstrap".to_string(),
},
workers: vec![],
node_exporter: opnsense.clone(),
switch_client: switch_client.clone(),
network_manager: OnceLock::new(),
}

View File

@@ -26,10 +26,10 @@ pub async fn get_topology() -> HAClusterTopology {
.expect("Failed to get credentials");
let switches: Vec<IpAddr> = vec![ip!("192.168.1.101")]; // TODO: Adjust me
let brocade_options = Some(BrocadeOptions {
let brocade_options = BrocadeOptions {
dry_run: *harmony::config::DRY_RUN,
..Default::default()
});
};
let switch_client = BrocadeSwitchClient::init(
&switches,
&switch_auth.username,
@@ -78,6 +78,7 @@ pub async fn get_topology() -> HAClusterTopology {
name: "cp0".to_string(),
},
workers: vec![],
node_exporter: opnsense.clone(),
switch_client: switch_client.clone(),
network_manager: OnceLock::new(),
}

View File

@@ -0,0 +1,18 @@
[package]
name = "example-operatorhub-catalogsource"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }

View File

@@ -0,0 +1,22 @@
use std::str::FromStr;
use harmony::{
inventory::Inventory,
modules::{k8s::apps::OperatorHubCatalogSourceScore, postgresql::CloudNativePgOperatorScore},
topology::K8sAnywhereTopology,
};
#[tokio::main]
async fn main() {
let operatorhub_catalog = OperatorHubCatalogSourceScore::default();
let cnpg_operator = CloudNativePgOperatorScore::default();
harmony_cli::run(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
vec![Box::new(operatorhub_catalog), Box::new(cnpg_operator)],
None,
)
.await
.unwrap();
}

View File

@@ -35,10 +35,10 @@ async fn main() {
.expect("Failed to get credentials");
let switches: Vec<IpAddr> = vec![ip!("192.168.5.101")]; // TODO: Adjust me
let brocade_options = Some(BrocadeOptions {
let brocade_options = BrocadeOptions {
dry_run: *harmony::config::DRY_RUN,
..Default::default()
});
};
let switch_client = BrocadeSwitchClient::init(
&switches,
&switch_auth.username,
@@ -78,6 +78,7 @@ async fn main() {
name: "cp0".to_string(),
},
workers: vec![],
node_exporter: opnsense.clone(),
switch_client: switch_client.clone(),
network_manager: OnceLock::new(),
};

View File

@@ -0,0 +1,21 @@
[package]
name = "example-opnsense-node-exporter"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
harmony_secret = { path = "../../harmony_secret" }
harmony_secret_derive = { path = "../../harmony_secret_derive" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }
serde.workspace = true
async-trait.workspace = true

View File

@@ -0,0 +1,80 @@
use std::{
net::{IpAddr, Ipv4Addr},
sync::Arc,
};
use async_trait::async_trait;
use cidr::Ipv4Cidr;
use harmony::{
executors::ExecutorError,
hardware::{HostCategory, Location, PhysicalHost, SwitchGroup},
infra::opnsense::OPNSenseManagementInterface,
inventory::Inventory,
modules::opnsense::node_exporter::NodeExporterScore,
topology::{
HAClusterTopology, LogicalHost, PreparationError, PreparationOutcome, Topology,
UnmanagedRouter, node_exporter::NodeExporter,
},
};
use harmony_macros::{ip, ipv4, mac_address};
#[derive(Debug)]
struct OpnSenseTopology {
node_exporter: Arc<dyn NodeExporter>,
}
#[async_trait]
impl Topology for OpnSenseTopology {
async fn ensure_ready(&self) -> Result<PreparationOutcome, PreparationError> {
Ok(PreparationOutcome::Success {
details: "Success".to_string(),
})
}
fn name(&self) -> &str {
"OpnsenseTopology"
}
}
#[async_trait]
impl NodeExporter for OpnSenseTopology {
async fn ensure_initialized(&self) -> Result<(), ExecutorError> {
self.node_exporter.ensure_initialized().await
}
async fn commit_config(&self) -> Result<(), ExecutorError> {
self.node_exporter.commit_config().await
}
async fn reload_restart(&self) -> Result<(), ExecutorError> {
self.node_exporter.reload_restart().await
}
}
#[tokio::main]
async fn main() {
let firewall = harmony::topology::LogicalHost {
ip: ip!("192.168.1.1"),
name: String::from("fw0"),
};
let opnsense = Arc::new(
harmony::infra::opnsense::OPNSenseFirewall::new(firewall, None, "root", "opnsense").await,
);
let topology = OpnSenseTopology {
node_exporter: opnsense.clone(),
};
let inventory = Inventory::empty();
let node_exporter_score = NodeExporterScore {};
harmony_cli::run(
inventory,
topology,
vec![Box::new(node_exporter_score)],
None,
)
.await
.unwrap();
}

View File

@@ -1,4 +1,4 @@
use std::{path::PathBuf, sync::Arc};
use std::{collections::HashMap, path::PathBuf, sync::Arc};
use harmony::{
inventory::Inventory,
@@ -10,20 +10,22 @@ use harmony::{
},
topology::K8sAnywhereTopology,
};
use harmony_types::net::Url;
use harmony_types::{k8s_name::K8sName, net::Url};
#[tokio::main]
async fn main() {
let application = Arc::new(RustWebapp {
name: "test-rhob-monitoring".to_string(),
dns: "test-rhob-monitoring.harmony.mcd".to_string(),
project_root: PathBuf::from("./webapp"), // Relative from 'harmony-path' param
framework: Some(RustWebFramework::Leptos),
service_port: 3000,
});
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
name: K8sName("test-discord".to_string()),
url: Url::Url(url::Url::parse("https://discord.doesnt.exist.com").unwrap()),
selectors: vec![],
};
let app = ApplicationScore {

View File

@@ -1,3 +1,4 @@
Dockerfile.harmony
.harmony_generated
harmony
webapp

View File

@@ -1,4 +1,4 @@
use std::{path::PathBuf, sync::Arc};
use std::{collections::HashMap, path::PathBuf, sync::Arc};
use harmony::{
inventory::Inventory,
@@ -14,19 +14,22 @@ use harmony::{
topology::K8sAnywhereTopology,
};
use harmony_macros::hurl;
use harmony_types::k8s_name::K8sName;
#[tokio::main]
async fn main() {
let application = Arc::new(RustWebapp {
name: "harmony-example-rust-webapp".to_string(),
dns: "harmony-example-rust-webapp.harmony.mcd".to_string(),
project_root: PathBuf::from("./webapp"),
framework: Some(RustWebFramework::Leptos),
service_port: 3000,
});
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
name: K8sName("test-discord".to_string()),
url: hurl!("https://discord.doesnt.exist.com"),
selectors: vec![],
};
let webhook_receiver = WebhookReceiver {

View File

@@ -0,0 +1,7 @@
apiVersion: v2
name: harmony-example-rust-webapp-chart
description: A Helm chart for the harmony-example-rust-webapp web application.
type: application
version: 0.1.0
appVersion: "latest"

View File

@@ -0,0 +1,16 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "chart.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}

View File

@@ -0,0 +1,23 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "chart.fullname" . }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ include "chart.name" . }}
template:
metadata:
labels:
app: {{ include "chart.name" . }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 3000
protocol: TCP

View File

@@ -0,0 +1,35 @@
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "chart.fullname" . }}
annotations:
{{- toYaml .Values.ingress.annotations | nindent 4 }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
pathType: {{ .pathType }}
backend:
service:
name: {{ include "chart.fullname" $ }}
port:
number: 3000
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "chart.fullname" . }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: 3000
protocol: TCP
name: http
selector:
app: {{ include "chart.name" . }}

View File

@@ -0,0 +1,34 @@
# Default values for harmony-example-rust-webapp-chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: hub.nationtech.io/harmony/harmony-example-rust-webapp
pullPolicy: IfNotPresent
# Overridden by the chart's appVersion
tag: "latest"
service:
type: ClusterIP
port: 3000
ingress:
enabled: true
# Annotations for cert-manager to handle SSL.
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
# Add other annotations like nginx ingress class if needed
# kubernetes.io/ingress.class: nginx
hosts:
- host: chart-example.local
paths:
- path: /
pathType: ImplementationSpecific
tls:
- secretName: harmony-example-rust-webapp-tls
hosts:
- chart-example.local

View File

@@ -2,12 +2,11 @@ use harmony::{
inventory::Inventory,
modules::{
application::{
ApplicationScore, RustWebFramework, RustWebapp,
features::{PackagingDeployment, rhob_monitoring::Monitoring},
features::{rhob_monitoring::Monitoring, PackagingDeployment}, ApplicationScore, RustWebFramework, RustWebapp
},
monitoring::alert_channel::discord_alert_channel::DiscordWebhook,
},
topology::K8sAnywhereTopology,
topology::{K8sAnywhereTopology, LocalhostTopology},
};
use harmony_macros::hurl;
use std::{path::PathBuf, sync::Arc};
@@ -22,8 +21,8 @@ async fn main() {
});
let discord_webhook = DiscordWebhook {
name: "harmony_demo".to_string(),
url: hurl!("http://not_a_url.com"),
name: "harmony-demo".to_string(),
url: hurl!("https://discord.com/api/webhooks/1415391405681021050/V6KzV41vQ7yvbn7BchejRu9C8OANxy0i2ESZOz2nvCxG8xAY3-2i3s5MS38k568JKTzH"),
};
let app = ApplicationScore {

View File

@@ -10,12 +10,14 @@ use harmony::{
topology::K8sAnywhereTopology,
};
use harmony_macros::hurl;
use harmony_types::k8s_name::K8sName;
use std::{path::PathBuf, sync::Arc};
#[tokio::main]
async fn main() {
let application = Arc::new(RustWebapp {
name: "harmony-example-tryrust".to_string(),
dns: "tryrust.example.harmony.mcd".to_string(),
project_root: PathBuf::from("./tryrust.org"), // <== Project root, in this case it is a
// submodule
framework: Some(RustWebFramework::Leptos),
@@ -31,8 +33,9 @@ async fn main() {
Box::new(Monitoring {
application: application.clone(),
alert_receiver: vec![Box::new(DiscordWebhook {
name: "test-discord".to_string(),
name: K8sName("test-discord".to_string()),
url: hurl!("https://discord.doesnt.exist.com"),
selectors: vec![],
})],
}),
],

View File

@@ -152,10 +152,10 @@ impl PhysicalHost {
pub fn parts_list(&self) -> String {
let PhysicalHost {
id,
category,
category: _,
network,
storage,
labels,
labels: _,
memory_modules,
cpus,
} = self;
@@ -226,8 +226,8 @@ impl PhysicalHost {
speed_mhz,
manufacturer,
part_number,
serial_number,
rank,
serial_number: _,
rank: _,
} = mem;
parts_list.push_str(&format!(
"\n{}Gb, {}Mhz, Manufacturer ({}), Part Number ({})",

View File

@@ -4,6 +4,8 @@ use std::error::Error;
use async_trait::async_trait;
use derive_new::new;
use crate::inventory::HostRole;
use super::{
data::Version, executors::ExecutorError, inventory::Inventory, topology::PreparationError,
};

View File

@@ -1,4 +1,6 @@
mod repository;
use std::fmt;
pub use repository::*;
#[derive(Debug, new, Clone)]
@@ -69,5 +71,14 @@ pub enum HostRole {
Bootstrap,
ControlPlane,
Worker,
Storage,
}
impl fmt::Display for HostRole {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
HostRole::Bootstrap => write!(f, "Bootstrap"),
HostRole::ControlPlane => write!(f, "ControlPlane"),
HostRole::Worker => write!(f, "Worker"),
HostRole::Storage => write!(f, "Storage"),
}
}
}

View File

@@ -0,0 +1,19 @@
use async_trait::async_trait;
use crate::topology::{PreparationError, PreparationOutcome, Topology};
pub struct FailoverTopology<T> {
pub primary: T,
pub replica: T,
}
#[async_trait]
impl<T: Send + Sync> Topology for FailoverTopology<T> {
fn name(&self) -> &str {
"FailoverTopology"
}
async fn ensure_ready(&self) -> Result<PreparationOutcome, PreparationError> {
todo!()
}
}
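For illustration, a hedged sketch of how ensure_ready could delegate once the bound is tightened to T: Topology (hypothetical; the code above deliberately leaves it as todo!()):

#[async_trait]
impl<T: Topology> Topology for FailoverTopology<T> {
    fn name(&self) -> &str {
        "FailoverTopology"
    }
    async fn ensure_ready(&self) -> Result<PreparationOutcome, PreparationError> {
        // Try the primary first; fall back to the replica if it fails.
        match self.primary.ensure_ready().await {
            Ok(outcome) => Ok(outcome),
            Err(_) => self.replica.ensure_ready().await,
        }
    }
}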

View File

@@ -1,4 +1,5 @@
use async_trait::async_trait;
use brocade::PortOperatingMode;
use harmony_macros::ip;
use harmony_types::{
id::Id,
@@ -8,9 +9,9 @@ use harmony_types::{
use log::debug;
use log::info;
use crate::infra::network_manager::OpenShiftNmStateNetworkManager;
use crate::topology::PxeOptions;
use crate::{data::FileContent, executors::ExecutorError};
use crate::{data::FileContent, executors::ExecutorError, topology::node_exporter::NodeExporter};
use crate::{infra::network_manager::OpenShiftNmStateNetworkManager, topology::PortConfig};
use crate::{modules::inventory::HarmonyDiscoveryStrategy, topology::PxeOptions};
use super::{
DHCPStaticEntry, DhcpServer, DnsRecord, DnsRecordType, DnsServer, Firewall, HostNetworkConfig,
@@ -18,7 +19,6 @@ use super::{
NetworkManager, PreparationError, PreparationOutcome, Router, Switch, SwitchClient,
SwitchError, TftpServer, Topology, k8s::K8sClient,
};
use std::sync::{Arc, OnceLock};
#[derive(Debug, Clone)]
@@ -31,6 +31,7 @@ pub struct HAClusterTopology {
pub tftp_server: Arc<dyn TftpServer>,
pub http_server: Arc<dyn HttpServer>,
pub dns_server: Arc<dyn DnsServer>,
pub node_exporter: Arc<dyn NodeExporter>,
pub switch_client: Arc<dyn SwitchClient>,
pub bootstrap_host: LogicalHost,
pub control_plane: Vec<LogicalHost>,
@@ -115,6 +116,7 @@ impl HAClusterTopology {
tftp_server: dummy_infra.clone(),
http_server: dummy_infra.clone(),
dns_server: dummy_infra.clone(),
node_exporter: dummy_infra.clone(),
switch_client: dummy_infra.clone(),
bootstrap_host: dummy_host,
control_plane: vec![],
@@ -298,6 +300,13 @@ impl Switch for HAClusterTopology {
Ok(())
}
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError> {
todo!()
}
async fn configure_interface(&self, ports: &Vec<PortConfig>) -> Result<(), SwitchError> {
todo!()
}
}
#[async_trait]
@@ -312,6 +321,23 @@ impl NetworkManager for HAClusterTopology {
async fn configure_bond(&self, config: &HostNetworkConfig) -> Result<(), NetworkError> {
self.network_manager().await.configure_bond(config).await
}
//TODO add snmp here
}
#[async_trait]
impl NodeExporter for HAClusterTopology {
async fn ensure_initialized(&self) -> Result<(), ExecutorError> {
self.node_exporter.ensure_initialized().await
}
async fn commit_config(&self) -> Result<(), ExecutorError> {
self.node_exporter.commit_config().await
}
async fn reload_restart(&self) -> Result<(), ExecutorError> {
self.node_exporter.reload_restart().await
}
}
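With this delegation in place, a score can drive the exporter through any topology that implements NodeExporter; a minimal hedged sketch (the driver function is hypothetical):

// Hypothetical driver over the NodeExporter trait as declared in this changeset.
async fn setup_node_exporter(topology: &impl NodeExporter) -> Result<(), ExecutorError> {
    topology.ensure_initialized().await?; // install + enable if missing
    topology.commit_config().await?;      // persist the config change
    topology.reload_restart().await       // pick up the new config
}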
#[derive(Debug)]
@@ -501,6 +527,21 @@ impl DnsServer for DummyInfra {
}
}
#[async_trait]
impl NodeExporter for DummyInfra {
async fn ensure_initialized(&self) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn commit_config(&self) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn reload_restart(&self) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
}
#[async_trait]
impl SwitchClient for DummyInfra {
async fn setup(&self) -> Result<(), SwitchError> {
@@ -521,4 +562,10 @@ impl SwitchClient for DummyInfra {
) -> Result<u8, SwitchError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError> {
todo!()
}
async fn configure_interface(&self, ports: &Vec<PortConfig>) -> Result<(), SwitchError> {
todo!()
}
}

View File

@@ -1,4 +1,4 @@
use std::time::Duration;
use std::{collections::HashMap, time::Duration};
use derive_new::new;
use k8s_openapi::{
@@ -7,6 +7,7 @@ use k8s_openapi::{
apps::v1::Deployment,
core::v1::{Node, Pod, ServiceAccount},
},
apiextensions_apiserver::pkg::apis::apiextensions::v1::CustomResourceDefinition,
apimachinery::pkg::version::Info,
};
use kube::{
@@ -15,7 +16,7 @@ use kube::{
Api, AttachParams, DeleteParams, ListParams, ObjectList, Patch, PatchParams, ResourceExt,
},
config::{KubeConfigOptions, Kubeconfig},
core::ErrorResponse,
core::{DynamicResourceScope, ErrorResponse},
discovery::{ApiCapabilities, Scope},
error::DiscoveryError,
runtime::reflector::Lookup,
@@ -27,7 +28,7 @@ use kube::{
};
use log::{debug, error, trace, warn};
use serde::{Serialize, de::DeserializeOwned};
use serde_json::json;
use serde_json::{Value, json};
use similar::TextDiff;
use tokio::{io::AsyncReadExt, time::sleep};
use url::Url;
@@ -64,6 +65,149 @@ impl K8sClient {
})
}
/// Returns true if any deployment in the given namespace matching the label selector
/// has status.availableReplicas > 0 (or condition Available=True).
pub async fn has_healthy_deployment_with_label(
&self,
namespace: &str,
label_selector: &str,
) -> Result<bool, Error> {
let api: Api<Deployment> = Api::namespaced(self.client.clone(), namespace);
let lp = ListParams::default().labels(label_selector);
let list = api.list(&lp).await?;
for d in list.items {
// Check AvailableReplicas > 0 or Available condition
let available = d
.status
.as_ref()
.and_then(|s| s.available_replicas)
.unwrap_or(0);
if available > 0 {
return Ok(true);
}
// Fallback: scan conditions
if let Some(conds) = d.status.as_ref().and_then(|s| s.conditions.as_ref()) {
if conds
.iter()
.any(|c| c.type_ == "Available" && c.status == "True")
{
return Ok(true);
}
}
}
Ok(false)
}
/// Cluster-wide: returns namespaces that have at least one healthy deployment
/// matching the label selector (equivalent to kubectl -A -l ...).
pub async fn list_namespaces_with_healthy_deployments(
&self,
label_selector: &str,
) -> Result<Vec<String>, Error> {
let api: Api<Deployment> = Api::all(self.client.clone());
let lp = ListParams::default().labels(label_selector);
let list = api.list(&lp).await?;
let mut healthy_ns: HashMap<String, bool> = HashMap::new();
for d in list.items {
let ns = match d.metadata.namespace.clone() {
Some(n) => n,
None => continue,
};
let available = d
.status
.as_ref()
.and_then(|s| s.available_replicas)
.unwrap_or(0);
let is_healthy = if available > 0 {
true
} else {
d.status
.as_ref()
.and_then(|s| s.conditions.as_ref())
.map(|conds| {
conds
.iter()
.any(|c| c.type_ == "Available" && c.status == "True")
})
.unwrap_or(false)
};
if is_healthy {
healthy_ns.insert(ns, true);
}
}
Ok(healthy_ns.into_keys().collect())
}
/// Get the application-controller ServiceAccount name (fallback to default)
pub async fn get_controller_service_account_name(
&self,
ns: &str,
) -> Result<Option<String>, Error> {
let api: Api<Deployment> = Api::namespaced(self.client.clone(), ns);
let lp = ListParams::default().labels("app.kubernetes.io/component=controller");
let list = api.list(&lp).await?;
if let Some(dep) = list.items.get(0) {
if let Some(sa) = dep
.spec
.as_ref()
.and_then(|ds| ds.template.spec.as_ref())
.and_then(|ps| ps.service_account_name.clone())
{
return Ok(Some(sa));
}
}
Ok(None)
}
// List ClusterRoleBindings dynamically and return as JSON values
pub async fn list_clusterrolebindings_json(&self) -> Result<Vec<Value>, Error> {
let gvk = kube::api::GroupVersionKind::gvk(
"rbac.authorization.k8s.io",
"v1",
"ClusterRoleBinding",
);
let ar = kube::api::ApiResource::from_gvk(&gvk);
let api: Api<kube::api::DynamicObject> = Api::all_with(self.client.clone(), &ar);
let crbs = api.list(&ListParams::default()).await?;
let mut out = Vec::new();
for o in crbs {
let v = serde_json::to_value(&o).unwrap_or(Value::Null);
out.push(v);
}
Ok(out)
}
/// Determine if Argo controller in ns has cluster-wide permissions via CRBs
// TODO This does not belong in the generic k8s client, should be refactored at some point
pub async fn is_service_account_cluster_wide(&self, sa: &str, ns: &str) -> Result<bool, Error> {
let crbs = self.list_clusterrolebindings_json().await?;
let sa_user = format!("system:serviceaccount:{}:{}", ns, sa);
for crb in crbs {
if let Some(subjects) = crb.get("subjects").and_then(|s| s.as_array()) {
for subj in subjects {
let kind = subj.get("kind").and_then(|v| v.as_str()).unwrap_or("");
let name = subj.get("name").and_then(|v| v.as_str()).unwrap_or("");
let subj_ns = subj.get("namespace").and_then(|v| v.as_str()).unwrap_or("");
if (kind == "ServiceAccount" && name == sa && subj_ns == ns)
|| (kind == "User" && name == sa_user)
{
return Ok(true);
}
}
}
}
Ok(false)
}
pub async fn has_crd(&self, name: &str) -> Result<bool, Error> {
let api: Api<CustomResourceDefinition> = Api::all(self.client.clone());
let lp = ListParams::default().fields(&format!("metadata.name={}", name));
let crds = api.list(&lp).await?;
Ok(!crds.items.is_empty())
}
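Taken together, these helpers back the Argo discovery flow added later in this changeset; a hedged usage sketch (the selector and CRD name mirror that flow):

// Hypothetical: find namespaces running a healthy Argo CD and check its CRD.
async fn discovery_sketch(client: &K8sClient) -> Result<(), kube::Error> {
    let namespaces = client
        .list_namespaces_with_healthy_deployments("app.kubernetes.io/part-of=argocd")
        .await?;
    let has_app_crd = client.has_crd("applications.argoproj.io").await?;
    println!("argocd namespaces: {namespaces:?}, CRD present: {has_app_crd}");
    Ok(())
}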
pub async fn service_account_api(&self, namespace: &str) -> Api<ServiceAccount> {
let api: Api<ServiceAccount> = Api::namespaced(self.client.clone(), namespace);
api
@@ -96,6 +240,23 @@ impl K8sClient {
resource.get(name).await
}
pub async fn get_secret_json_value(
&self,
name: &str,
namespace: Option<&str>,
) -> Result<DynamicObject, Error> {
self.get_resource_json_value(
name,
namespace,
&GroupVersionKind {
group: "".to_string(),
version: "v1".to_string(),
kind: "Secret".to_string(),
},
)
.await
}
pub async fn get_deployment(
&self,
name: &str,
@@ -339,6 +500,169 @@ impl K8sClient {
}
}
fn get_api_for_dynamic_object(
&self,
object: &DynamicObject,
ns: Option<&str>,
) -> Result<Api<DynamicObject>, Error> {
let api_resource = object
.types
.as_ref()
.and_then(|t| {
let parts: Vec<&str> = t.api_version.split('/').collect();
match parts.as_slice() {
[version] => Some(ApiResource::from_gvk(&GroupVersionKind::gvk(
"", version, &t.kind,
))),
[group, version] => Some(ApiResource::from_gvk(&GroupVersionKind::gvk(
group, version, &t.kind,
))),
_ => None,
}
})
.ok_or_else(|| {
Error::BuildRequest(kube::core::request::Error::Validation(format!(
"Invalid apiVersion in DynamicObject {object:#?}"
)))
})?;
match ns {
Some(ns) => Ok(Api::namespaced_with(self.client.clone(), ns, &api_resource)),
None => Ok(Api::default_namespaced_with(
self.client.clone(),
&api_resource,
)),
}
}
pub async fn apply_dynamic_many(
&self,
resource: &[DynamicObject],
namespace: Option<&str>,
force_conflicts: bool,
) -> Result<Vec<DynamicObject>, Error> {
let mut result = Vec::new();
for r in resource.iter() {
result.push(self.apply_dynamic(r, namespace, force_conflicts).await?);
}
Ok(result)
}
/// Apply DynamicObject resource to the cluster
pub async fn apply_dynamic(
&self,
resource: &DynamicObject,
namespace: Option<&str>,
force_conflicts: bool,
) -> Result<DynamicObject, Error> {
// Build API for this dynamic object
let api = self.get_api_for_dynamic_object(resource, namespace)?;
let name = resource
.metadata
.name
.as_ref()
.ok_or_else(|| {
Error::BuildRequest(kube::core::request::Error::Validation(
"DynamicObject must have metadata.name".to_string(),
))
})?
.as_str();
debug!(
"Applying dynamic resource kind={:?} apiVersion={:?} name='{}' ns={:?}",
resource.types.as_ref().map(|t| &t.kind),
resource.types.as_ref().map(|t| &t.api_version),
name,
namespace
);
trace!(
"Dynamic resource payload:\n{:#}",
serde_json::to_value(resource).unwrap_or(serde_json::Value::Null)
);
// Using same field manager as in apply()
let mut patch_params = PatchParams::apply("harmony");
patch_params.force = force_conflicts;
if *crate::config::DRY_RUN {
// Dry-run path: fetch current, show diff, and return appropriate object
match api.get(name).await {
Ok(current) => {
trace!("Received current dynamic value {current:#?}");
println!("\nPerforming dry-run for resource: '{}'", name);
// Serialize current and new, and strip status from current if present
let mut current_yaml =
serde_yaml::to_value(&current).unwrap_or_else(|_| serde_yaml::Value::Null);
if let Some(map) = current_yaml.as_mapping_mut() {
if map.contains_key(&serde_yaml::Value::String("status".to_string())) {
let removed =
map.remove(&serde_yaml::Value::String("status".to_string()));
trace!("Removed status from current dynamic object: {:?}", removed);
} else {
trace!(
"Did not find status entry for current dynamic object {}/{}",
current.metadata.namespace.as_deref().unwrap_or(""),
current.metadata.name.as_deref().unwrap_or("")
);
}
}
let current_yaml = serde_yaml::to_string(&current_yaml)
.unwrap_or_else(|_| "Failed to serialize current resource".to_string());
let new_yaml = serde_yaml::to_string(resource)
.unwrap_or_else(|_| "Failed to serialize new resource".to_string());
if current_yaml == new_yaml {
println!("No changes detected.");
return Ok(current);
}
println!("Changes detected:");
let diff = TextDiff::from_lines(&current_yaml, &new_yaml);
for change in diff.iter_all_changes() {
let sign = match change.tag() {
similar::ChangeTag::Delete => "-",
similar::ChangeTag::Insert => "+",
similar::ChangeTag::Equal => " ",
};
print!("{}{}", sign, change);
}
// Return the incoming resource as the would-be applied state
Ok(resource.clone())
}
Err(Error::Api(ErrorResponse { code: 404, .. })) => {
println!("\nPerforming dry-run for new resource: '{}'", name);
println!(
"Resource does not exist. It would be created with the following content:"
);
let new_yaml = serde_yaml::to_string(resource)
.unwrap_or_else(|_| "Failed to serialize new resource".to_string());
for line in new_yaml.lines() {
println!("+{}", line);
}
Ok(resource.clone())
}
Err(e) => {
error!("Failed to get dynamic resource '{}': {}", name, e);
Err(e)
}
}
} else {
// Real apply via server-side apply
debug!("Patching (server-side apply) dynamic resource '{}'", name);
api.patch(name, &patch_params, &Patch::Apply(resource))
.await
.map_err(|e| {
error!("Failed to apply dynamic resource '{}': {}", name, e);
e
})
}
}
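A hedged usage sketch for the new dynamic apply path (the manifest content and namespace are placeholders):

// Hypothetical: parse a manifest into a DynamicObject and server-side apply it.
// With DRY_RUN set, apply_dynamic prints a diff instead of patching.
async fn apply_sketch(client: &K8sClient) -> Result<(), Box<dyn std::error::Error>> {
    let manifest = r#"
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  key: value
"#;
    let obj: kube::api::DynamicObject = serde_yaml::from_str(manifest)?;
    client.apply_dynamic(&obj, Some("demo"), false).await?;
    Ok(())
}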
/// Apply a resource in namespace
///
/// See `kubectl apply` for more information on the expected behavior of this function

View File

@@ -7,7 +7,7 @@ use k8s_openapi::api::{
rbac::v1::{ClusterRoleBinding, RoleRef, Subject},
};
use kube::api::{DynamicObject, GroupVersionKind, ObjectMeta};
use log::{debug, info, warn};
use log::{debug, info, trace, warn};
use serde::Serialize;
use tokio::sync::OnceCell;
@@ -88,6 +88,7 @@ pub struct K8sAnywhereTopology {
#[async_trait]
impl K8sclient for K8sAnywhereTopology {
async fn k8s_client(&self) -> Result<Arc<K8sClient>, String> {
trace!("getting k8s client");
let state = match self.k8s_state.get() {
Some(state) => state,
None => return Err("K8s state not initialized yet".to_string()),
@@ -975,36 +976,68 @@ impl TenantManager for K8sAnywhereTopology {
#[async_trait]
impl Ingress for K8sAnywhereTopology {
//TODO this is specifically for openshift/okd which violates the k8sanywhere idea
async fn get_domain(&self, service: &str) -> Result<String, PreparationError> {
use log::{debug, trace, warn};
let client = self.k8s_client().await?;
if let Some(Some(k8s_state)) = self.k8s_state.get() {
match k8s_state.source {
K8sSource::LocalK3d => Ok(format!("{service}.local.k3d")),
K8sSource::LocalK3d => {
// Local developer UX
return Ok(format!("{service}.local.k3d"));
}
K8sSource::Kubeconfig => {
self.openshift_ingress_operator_available().await?;
trace!("K8sSource is kubeconfig; attempting to detect domain");
let gvk = GroupVersionKind {
group: "operator.openshift.io".into(),
version: "v1".into(),
kind: "IngressController".into(),
};
let ic = client
.get_resource_json_value(
"default",
Some("openshift-ingress-operator"),
&gvk,
)
.await
.map_err(|_| {
PreparationError::new("Failed to fetch IngressController".to_string())
})?;
// 1) Try OpenShift IngressController domain (backward compatible)
if self.openshift_ingress_operator_available().await.is_ok() {
trace!("OpenShift ingress operator detected; using IngressController");
let gvk = GroupVersionKind {
group: "operator.openshift.io".into(),
version: "v1".into(),
kind: "IngressController".into(),
};
let ic = client
.get_resource_json_value(
"default",
Some("openshift-ingress-operator"),
&gvk,
)
.await
.map_err(|_| {
PreparationError::new(
"Failed to fetch IngressController".to_string(),
)
})?;
match ic.data["status"]["domain"].as_str() {
Some(domain) => Ok(format!("{service}.{domain}")),
None => Err(PreparationError::new("Could not find domain".to_string())),
if let Some(domain) = ic.data["status"]["domain"].as_str() {
return Ok(format!("{service}.{domain}"));
} else {
warn!("OpenShift IngressController present but no status.domain set");
}
} else {
trace!(
"OpenShift ingress operator not detected; trying generic Kubernetes"
);
}
// 2) Try NGINX Ingress Controller common setups
// 2.a) Well-known namespace/name for the controller Service
// - upstream default: namespace "ingress-nginx", service "ingress-nginx-controller"
// - some distros: "ingress-nginx-controller" svc in "ingress-nginx" ns
// If found with LoadBalancer ingress hostname, use its base domain.
if let Some(domain) = try_nginx_lb_domain(&client).await? {
return Ok(format!("{service}.{domain}"));
}
// 3) Fallback: internal cluster DNS suffix (service.namespace.svc.cluster.local)
// We don't have the tenant namespace here, so we fall back to 'default' with a warning.
warn!(
"Could not determine external ingress domain; falling back to internal-only DNS"
);
let internal = format!("{service}.default.svc.cluster.local");
Ok(internal)
}
}
} else {
@@ -1014,3 +1047,63 @@ impl Ingress for K8sAnywhereTopology {
}
}
}
async fn try_nginx_lb_domain(client: &K8sClient) -> Result<Option<String>, PreparationError> {
use log::{debug, trace};
// Try common service path: svc/ingress-nginx-controller in ns/ingress-nginx
let svc_gvk = GroupVersionKind {
group: "".into(), // core
version: "v1".into(),
kind: "Service".into(),
};
let candidates = [
("ingress-nginx", "ingress-nginx-controller"),
("ingress-nginx", "ingress-nginx-controller-internal"),
("ingress-nginx", "ingress-nginx"), // some charts name the svc like this
("kube-system", "ingress-nginx-controller"), // less common but seen
];
for (ns, name) in candidates {
trace!("Checking NGINX Service {ns}/{name} for LoadBalancer hostname");
if let Ok(svc) = client
.get_resource_json_value(ns, Some(name), &svc_gvk)
.await
{
let lb_hosts = svc.data["status"]["loadBalancer"]["ingress"]
.as_array()
.cloned()
.unwrap_or_default();
for entry in lb_hosts {
if let Some(host) = entry.get("hostname").and_then(|v| v.as_str()) {
debug!("Found NGINX LB hostname: {host}");
if let Some(domain) = extract_base_domain(host) {
return Ok(Some(domain.to_string()));
} else {
return Ok(Some(host.to_string())); // already a domain
}
}
if let Some(ip) = entry.get("ip").and_then(|v| v.as_str()) {
// If only an IP is exposed, we can't create a hostname; return None to keep searching
debug!("NGINX LB exposes IP {ip} (no hostname); skipping");
}
}
}
}
Ok(None)
}
fn extract_base_domain(host: &str) -> Option<String> {
// For a host like a1b2c3d4e5f6abcdef.elb.amazonaws.com -> amazonaws.com (last two labels)
// For a managed DNS name like xyz.example.com -> example.com
// Heuristic: keep the last 2 labels by default; special-case known multi-label TLDs if needed.
let parts: Vec<&str> = host.split('.').collect();
if parts.len() >= 2 {
// Very conservative: last 2 labels
Some(parts[parts.len() - 2..].join("."))
} else {
None
}
}
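A hedged test sketch pinning down the heuristic (expected values follow the last-two-labels rule, so the ELB case collapses to amazonaws.com):

#[cfg(test)]
mod extract_base_domain_tests {
    use super::extract_base_domain;

    #[test]
    fn keeps_last_two_labels() {
        assert_eq!(
            extract_base_domain("a1b2c3.elb.amazonaws.com").as_deref(),
            Some("amazonaws.com")
        );
        assert_eq!(
            extract_base_domain("xyz.example.com").as_deref(),
            Some("example.com")
        );
        // Single-label hosts have no extractable base domain.
        assert_eq!(extract_base_domain("localhost"), None);
    }
}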

View File

@@ -1,5 +1,8 @@
mod failover;
mod ha_cluster;
pub mod ingress;
pub mod node_exporter;
pub use failover::*;
use harmony_types::net::IpAddress;
mod host_binding;
mod http;
@@ -186,7 +189,7 @@ impl TopologyState {
}
}
#[derive(Debug)]
#[derive(Debug, PartialEq)]
pub enum DeploymentTarget {
LocalDev,
Staging,

View File

@@ -7,6 +7,7 @@ use std::{
};
use async_trait::async_trait;
use brocade::PortOperatingMode;
use derive_new::new;
use harmony_types::{
id::Id,
@@ -214,6 +215,8 @@ impl From<String> for NetworkError {
}
}
pub type PortConfig = (PortLocation, PortOperatingMode);
#[async_trait]
pub trait Switch: Send + Sync {
async fn setup_switch(&self) -> Result<(), SwitchError>;
@@ -224,6 +227,8 @@ pub trait Switch: Send + Sync {
) -> Result<Option<PortLocation>, SwitchError>;
async fn configure_port_channel(&self, config: &HostNetworkConfig) -> Result<(), SwitchError>;
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError>;
async fn configure_interface(&self, ports: &Vec<PortConfig>) -> Result<(), SwitchError>;
}
#[derive(Clone, Debug, PartialEq)]
@@ -283,6 +288,9 @@ pub trait SwitchClient: Debug + Send + Sync {
channel_name: &str,
switch_ports: Vec<PortLocation>,
) -> Result<u8, SwitchError>;
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError>;
async fn configure_interface(&self, ports: &Vec<PortConfig>) -> Result<(), SwitchError>;
}
#[cfg(test)]

View File

@@ -0,0 +1,17 @@
use async_trait::async_trait;
use crate::executors::ExecutorError;
#[async_trait]
pub trait NodeExporter: Send + Sync + std::fmt::Debug {
async fn ensure_initialized(&self) -> Result<(), ExecutorError>;
async fn commit_config(&self) -> Result<(), ExecutorError>;
async fn reload_restart(&self) -> Result<(), ExecutorError>;
}
// //TODO complete this impl
// impl std::fmt::Debug for dyn NodeExporter {
// fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
// f.write_fmt(format_args!("NodeExporter ",))
// }
// }
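Since the blanket Debug impl was dropped, implementers now derive their own; a minimal hedged implementer sketch (the no-op struct is hypothetical):

#[derive(Debug)] // implementers bring their own Debug
struct NoopNodeExporter;

#[async_trait]
impl NodeExporter for NoopNodeExporter {
    async fn ensure_initialized(&self) -> Result<(), ExecutorError> {
        Ok(()) // nothing to install
    }
    async fn commit_config(&self) -> Result<(), ExecutorError> {
        Ok(())
    }
    async fn reload_restart(&self) -> Result<(), ExecutorError> {
        Ok(())
    }
}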

View File

@@ -1,6 +1,7 @@
use std::any::Any;
use std::{any::Any, collections::HashMap};
use async_trait::async_trait;
use kube::api::DynamicObject;
use log::debug;
use crate::{
@@ -76,6 +77,15 @@ pub trait AlertReceiver<S: AlertSender>: std::fmt::Debug + Send + Sync {
fn name(&self) -> String;
fn clone_box(&self) -> Box<dyn AlertReceiver<S>>;
fn as_any(&self) -> &dyn Any;
fn as_alertmanager_receiver(&self) -> Result<AlertManagerReceiver, String>;
}
#[derive(Debug)]
pub struct AlertManagerReceiver {
pub receiver_config: serde_json::Value,
// FIXME we should not leak k8s here. DynamicObject is k8s specific
pub additional_ressources: Vec<DynamicObject>,
pub route_config: serde_json::Value,
}
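A hedged sketch of assembling the new struct (the JSON shapes are illustrative only; Alertmanager's actual receiver/route schema may differ):

fn example_receiver(name: &str) -> AlertManagerReceiver {
    AlertManagerReceiver {
        receiver_config: serde_json::json!({ "name": name }),
        additional_ressources: vec![], // field name as declared above
        route_config: serde_json::json!({ "receiver": name }),
    }
}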
#[async_trait]

View File

@@ -14,7 +14,7 @@ use k8s_openapi::{
},
apimachinery::pkg::util::intstr::IntOrString,
};
use kube::Resource;
use kube::{Resource, api::DynamicObject};
use log::debug;
use serde::de::DeserializeOwned;
use serde_json::json;

View File

@@ -1,12 +1,13 @@
use async_trait::async_trait;
use brocade::{BrocadeClient, BrocadeOptions, InterSwitchLink, InterfaceStatus, PortOperatingMode};
use harmony_types::{
id::Id,
net::{IpAddress, MacAddress},
switch::{PortDeclaration, PortLocation},
};
use option_ext::OptionExt;
use crate::topology::{SwitchClient, SwitchError};
use crate::topology::{PortConfig, SwitchClient, SwitchError};
#[derive(Debug)]
pub struct BrocadeSwitchClient {
@@ -18,9 +19,9 @@ impl BrocadeSwitchClient {
ip_addresses: &[IpAddress],
username: &str,
password: &str,
options: Option<BrocadeOptions>,
options: BrocadeOptions,
) -> Result<Self, brocade::Error> {
let brocade = brocade::init(ip_addresses, 22, username, password, options).await?;
let brocade = brocade::init(ip_addresses, username, password, options).await?;
Ok(Self { brocade })
}
}
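A hedged call-site sketch for the new signature: options is now taken by value and the SSH port argument to brocade::init is gone (the address and credentials are placeholders; ip! is used as in the examples above):

async fn connect_sketch() -> Result<BrocadeSwitchClient, brocade::Error> {
    let options = BrocadeOptions {
        dry_run: true,
        ..Default::default()
    };
    BrocadeSwitchClient::init(
        &[ip!("192.168.1.101")], // placeholder switch address
        "admin",                 // placeholder credentials
        "password",
        options,
    )
    .await
}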
@@ -59,7 +60,7 @@ impl SwitchClient for BrocadeSwitchClient {
}
self.brocade
.configure_interfaces(interfaces)
.configure_interfaces(&interfaces)
.await
.map_err(|e| SwitchError::new(e.to_string()))?;
@@ -111,6 +112,27 @@ impl SwitchClient for BrocadeSwitchClient {
Ok(channel_id)
}
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError> {
for i in ids {
self.brocade
.clear_port_channel(&i.to_string())
.await
.map_err(|e| SwitchError::new(e.to_string()))?;
}
Ok(())
}
async fn configure_interface(&self, ports: &Vec<PortConfig>) -> Result<(), SwitchError> {
// FIXME hardcoded TenGigabitEthernet = bad
let ports = ports
.iter()
.map(|p| (format!("TenGigabitEthernet {}", p.0), p.1.clone()))
.collect();
self.brocade
.configure_interfaces(&ports)
.await
.map_err(|e| SwitchError::new(e.to_string()))?;
Ok(())
}
}
#[cfg(test)]
@@ -121,7 +143,7 @@ mod tests {
use async_trait::async_trait;
use brocade::{
BrocadeClient, BrocadeInfo, Error, InterSwitchLink, InterfaceInfo, InterfaceStatus,
InterfaceType, MacAddressEntry, PortChannelId, PortOperatingMode,
InterfaceType, MacAddressEntry, PortChannelId, PortOperatingMode, SecurityLevel,
};
use harmony_types::switch::PortLocation;
@@ -145,6 +167,7 @@ mod tests {
client.setup().await.unwrap();
//TODO not sure about this
let configured_interfaces = brocade.configured_interfaces.lock().unwrap();
assert_that!(*configured_interfaces).contains_exactly(vec![
(first_interface.name.clone(), PortOperatingMode::Access),
@@ -255,10 +278,10 @@ mod tests {
async fn configure_interfaces(
&self,
interfaces: Vec<(String, PortOperatingMode)>,
interfaces: &Vec<(String, PortOperatingMode)>,
) -> Result<(), Error> {
let mut configured_interfaces = self.configured_interfaces.lock().unwrap();
*configured_interfaces = interfaces;
*configured_interfaces = interfaces.clone();
Ok(())
}
@@ -279,6 +302,10 @@ mod tests {
async fn clear_port_channel(&self, _channel_name: &str) -> Result<(), Error> {
todo!()
}
async fn enable_snmp(&self, user_name: &str, auth: &str, des: &str) -> Result<(), Error> {
todo!()
}
}
impl FakeBrocadeClient {

View File

@@ -121,7 +121,7 @@ mod test {
#[test]
fn deployment_to_dynamic_roundtrip() {
// Create a sample Deployment with nested structures
let mut deployment = Deployment {
let deployment = Deployment {
metadata: ObjectMeta {
name: Some("my-deployment".to_string()),
labels: Some({

View File

@@ -10,7 +10,7 @@ use super::OPNSenseFirewall;
#[async_trait]
impl DnsServer for OPNSenseFirewall {
async fn register_hosts(&self, hosts: Vec<DnsRecord>) -> Result<(), ExecutorError> {
async fn register_hosts(&self, _hosts: Vec<DnsRecord>) -> Result<(), ExecutorError> {
todo!("Refactor this to use dnsmasq")
// let mut writable_opnsense = self.opnsense_config.write().await;
// let mut dns = writable_opnsense.dns();
@@ -68,7 +68,7 @@ impl DnsServer for OPNSenseFirewall {
self.host.clone()
}
async fn register_dhcp_leases(&self, register: bool) -> Result<(), ExecutorError> {
async fn register_dhcp_leases(&self, _register: bool) -> Result<(), ExecutorError> {
todo!("Refactor this to use dnsmasq")
// let mut writable_opnsense = self.opnsense_config.write().await;
// let mut dns = writable_opnsense.dns();

View File

@@ -4,11 +4,11 @@ mod firewall;
mod http;
mod load_balancer;
mod management;
pub mod node_exporter;
mod tftp;
use std::sync::Arc;
pub use management::*;
use opnsense_config_xml::Host;
use tokio::sync::RwLock;
use crate::{executors::ExecutorError, topology::LogicalHost};

View File

@@ -0,0 +1,47 @@
use async_trait::async_trait;
use log::debug;
use crate::{
executors::ExecutorError, infra::opnsense::OPNSenseFirewall,
topology::node_exporter::NodeExporter,
};
#[async_trait]
impl NodeExporter for OPNSenseFirewall {
async fn ensure_initialized(&self) -> Result<(), ExecutorError> {
let mut config = self.opnsense_config.write().await;
let node_exporter = config.node_exporter();
if let Some(config) = node_exporter.get_full_config() {
debug!(
"Node exporter available in opnsense config, assuming it is already installed. {config:?}"
);
} else {
config
.install_package("os-node_exporter")
.await
.map_err(|e| {
ExecutorError::UnexpectedError(format!(
"Executor failed when trying to install os-node_exporter package with error {e:?}"
))
})?;
}
config
.node_exporter()
.enable(true)
.map_err(|e| ExecutorError::UnexpectedError(e.to_string()))?;
Ok(())
}
async fn commit_config(&self) -> Result<(), ExecutorError> {
OPNSenseFirewall::commit_config(self).await
}
async fn reload_restart(&self) -> Result<(), ExecutorError> {
self.opnsense_config
.write()
.await
.node_exporter()
.reload_restart()
.await
.map_err(|e| ExecutorError::UnexpectedError(e.to_string()))
}
}

View File

@@ -181,13 +181,11 @@ impl From<CDApplicationConfig> for ArgoApplication {
}
impl ArgoApplication {
pub fn to_yaml(&self) -> serde_yaml::Value {
pub fn to_yaml(&self, target_namespace: Option<&str>) -> serde_yaml::Value {
let name = &self.name;
let namespace = if let Some(ns) = self.namespace.as_ref() {
ns
} else {
"argocd"
};
let default_ns = "argocd".to_string();
let namespace: &str =
target_namespace.unwrap_or(self.namespace.as_ref().unwrap_or(&default_ns));
let project = &self.project;
let yaml_str = format!(
@@ -345,7 +343,7 @@ spec:
assert_eq!(
expected_yaml_output.trim(),
serde_yaml::to_string(&app.clone().to_yaml())
serde_yaml::to_string(&app.clone().to_yaml(None))
.unwrap()
.trim()
);

View File

@@ -1,22 +1,21 @@
use async_trait::async_trait;
use harmony_macros::hurl;
use kube::{Api, api::GroupVersionKind};
use log::{debug, warn};
use log::{debug, info, trace, warn};
use non_blank_string_rs::NonBlankString;
use serde::Serialize;
use serde::de::DeserializeOwned;
use std::{process::Command, str::FromStr, sync::Arc};
use std::{str::FromStr, sync::Arc};
use crate::{
data::Version,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
modules::helm::chart::{HelmChartScore, HelmRepository},
score::Score,
topology::{
HelmCommand, K8sclient, PreparationError, PreparationOutcome, Topology, ingress::Ingress,
k8s::K8sClient,
modules::{
argocd::{ArgoDeploymentType, detect_argo_deployment_type},
helm::chart::{HelmChartScore, HelmRepository},
},
score::Score,
topology::{HelmCommand, K8sclient, Topology, ingress::Ingress, k8s::K8sClient},
};
use harmony_types::id::Id;
@@ -25,6 +24,7 @@ use super::ArgoApplication;
#[derive(Debug, Serialize, Clone)]
pub struct ArgoHelmScore {
pub namespace: String,
// TODO: remove and rely on topology (it now knows the flavor)
pub openshift: bool,
pub argo_apps: Vec<ArgoApplication>,
}
@@ -55,29 +55,98 @@ impl<T: Topology + K8sclient + HelmCommand + Ingress> Interpret<T> for ArgoInter
inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
let k8s_client = topology.k8s_client().await?;
let svc = format!("argo-{}", self.score.namespace.clone());
trace!("Starting ArgoInterpret execution {self:?}");
let k8s_client: Arc<K8sClient> = topology.k8s_client().await?;
trace!("Got k8s client");
let desired_ns = self.score.namespace.clone();
debug!("ArgoInterpret detecting cluster configuration");
let svc = format!("argo-{}", desired_ns);
let domain = topology.get_domain(&svc).await?;
let helm_score =
argo_helm_chart_score(&self.score.namespace, self.score.openshift, &domain);
debug!("Resolved Argo service domain for '{}': {}", svc, domain);
helm_score.interpret(inventory, topology).await?;
// Detect current Argo deployment type
let current = detect_argo_deployment_type(&k8s_client, &desired_ns).await?;
info!("Detected Argo deployment type: {:?}", current);
// Decide control namespace and whether we must install
let (control_ns, must_install) = match current.clone() {
ArgoDeploymentType::NotInstalled => {
info!(
"Argo CD not installed. Will install via Helm into namespace '{}'.",
desired_ns
);
(desired_ns.clone(), true)
}
ArgoDeploymentType::AvailableInDesiredNamespace(ns) => {
info!(
"Argo CD already installed by Harmony in '{}'. Skipping install.",
ns
);
(ns, false)
}
ArgoDeploymentType::InstalledClusterWide(ns) => {
info!("Argo CD installed cluster-wide in namespace '{}'.", ns);
(ns, false)
}
ArgoDeploymentType::InstalledNamespaceScoped(ns) => {
// TODO we could support this use case by installing a new Argo instance. But that
// means handling a few cases that are out of scope for now:
// - Whether the Argo operator is installed
// - Managing CRD version compatibility
// - Potentially handling the various k8s flavors and setups we might encounter
//
// The Helm chart may already handle most or even all of these use cases, but they are out of scope for now.
let msg = format!(
"Argo CD found in '{}' but it is namespace-scoped and not supported for attachment yet.",
ns
);
warn!("{}", msg);
return Err(InterpretError::new(msg));
}
};
info!("ArgoCD will be installed : {must_install} . Current argocd status : {current:?} ");
if must_install {
let helm_score = argo_helm_chart_score(&desired_ns, self.score.openshift, &domain);
info!(
"Installing Argo CD via Helm into namespace '{}' ...",
desired_ns
);
helm_score.interpret(inventory, topology).await?;
info!("Argo CD install complete in '{}'.", desired_ns);
}
let yamls: Vec<serde_yaml::Value> = self
.argo_apps
.iter()
.map(|a| a.to_yaml(Some(&control_ns)))
.collect();
info!(
"Applying {} Argo application object(s) into control namespace '{}'.",
yamls.len(),
control_ns
);
k8s_client
.apply_yaml_many(&self.argo_apps.iter().map(|a| a.to_yaml()).collect(), None)
.apply_yaml_many(&yamls, Some(control_ns.as_str()))
.await
.unwrap();
.map_err(|e| InterpretError::new(format!("Failed applying Argo CRs: {e}")))?;
Ok(Outcome::success_with_details(
format!(
"ArgoCD {} {}",
self.argo_apps.len(),
match self.argo_apps.len() {
1 => "application",
_ => "applications",
if self.argo_apps.len() == 1 {
"application"
} else {
"applications"
}
),
vec![format!("argo application: http://{}", domain)],
vec![
format!("control_namespace={}", control_ns),
format!("argo ui: http://{}", domain),
],
))
}
@@ -86,7 +155,7 @@ impl<T: Topology + K8sclient + HelmCommand + Ingress> Interpret<T> for ArgoInter
}
fn get_version(&self) -> Version {
todo!()
Version::from("0.1.0").unwrap()
}
fn get_status(&self) -> InterpretStatus {
@@ -94,39 +163,7 @@ impl<T: Topology + K8sclient + HelmCommand + Ingress> Interpret<T> for ArgoInter
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
impl ArgoInterpret {
pub async fn get_host_domain(
&self,
client: Arc<K8sClient>,
openshift: bool,
) -> Result<String, InterpretError> {
//This should be the job of the topology to determine if we are in
//openshift, potentially we need on openshift topology the same way we create a
//localhosttopology
match openshift {
true => {
let gvk = GroupVersionKind {
group: "operator.openshift.io".into(),
version: "v1".into(),
kind: "IngressController".into(),
};
let ic = client
.get_resource_json_value("default", Some("openshift-ingress-operator"), &gvk)
.await?;
match ic.data["status"]["domain"].as_str() {
Some(domain) => return Ok(domain.to_string()),
None => return Err(InterpretError::new("Could not find domain".to_string())),
}
}
false => {
todo!()
}
};
vec![]
}
}

View File

@@ -12,6 +12,7 @@ use crate::{
modules::application::{
ApplicationFeature, HelmPackage, InstallationError, InstallationOutcome, OCICompliant,
features::{ArgoApplication, ArgoHelmScore},
webapp::Webapp,
},
score::Score,
topology::{
@@ -47,11 +48,11 @@ use crate::{
/// - ArgoCD to install/upgrade/rollback/inspect k8s resources
/// - Kubernetes for runtime orchestration
#[derive(Debug, Default, Clone)]
pub struct PackagingDeployment<A: OCICompliant + HelmPackage> {
pub struct PackagingDeployment<A: OCICompliant + HelmPackage + Webapp> {
pub application: Arc<A>,
}
impl<A: OCICompliant + HelmPackage> PackagingDeployment<A> {
impl<A: OCICompliant + HelmPackage + Webapp> PackagingDeployment<A> {
async fn deploy_to_local_k3d(
&self,
app_name: String,
@@ -137,7 +138,7 @@ impl<A: OCICompliant + HelmPackage> PackagingDeployment<A> {
#[async_trait]
impl<
A: OCICompliant + HelmPackage + Clone + 'static,
A: OCICompliant + HelmPackage + Webapp + Clone + 'static,
T: Topology + HelmCommand + MultiTargetTopology + K8sclient + Ingress + 'static,
> ApplicationFeature<T> for PackagingDeployment<A>
{
@@ -146,10 +147,15 @@ impl<
topology: &T,
) -> Result<InstallationOutcome, InstallationError> {
let image = self.application.image_name();
let domain = topology
.get_domain(&self.application.name())
.await
.map_err(|e| e.to_string())?;
let domain = if topology.current_target() == DeploymentTarget::Production {
self.application.dns()
} else {
topology
.get_domain(&self.application.name())
.await
.map_err(|e| e.to_string())?
};
// TODO Write CI/CD workflow files
// we can autodetect the CI type using the remote URL (default to GitHub Actions for GitHub
@@ -193,8 +199,7 @@ impl<
namespace: format!("{}", self.application.name()),
openshift: true,
argo_apps: vec![ArgoApplication::from(CDApplicationConfig {
// helm pull oci://hub.nationtech.io/harmony/harmony-example-rust-webapp-chart --version 0.1.0
version: Version::from("0.1.0").unwrap(),
version: Version::from("0.2.1").unwrap(),
helm_chart_repo_url: "hub.nationtech.io/harmony".to_string(),
helm_chart_name: format!("{}-chart", self.application.name()),
values_overrides: None,

View File

@@ -3,7 +3,6 @@ use std::sync::Arc;
use crate::modules::application::{
Application, ApplicationFeature, InstallationError, InstallationOutcome,
};
use crate::modules::monitoring::application_monitoring::application_monitoring_score::ApplicationMonitoringScore;
use crate::modules::monitoring::application_monitoring::rhobs_application_monitoring_score::ApplicationRHOBMonitoringScore;
use crate::modules::monitoring::kube_prometheus::crd::rhob_alertmanager_config::RHOBObservability;

View File

@@ -2,6 +2,7 @@ mod feature;
pub mod features;
pub mod oci;
mod rust;
mod webapp;
use std::sync::Arc;
pub use feature::*;

View File

@@ -16,6 +16,7 @@ use tar::{Builder, Header};
use walkdir::WalkDir;
use crate::config::{REGISTRY_PROJECT, REGISTRY_URL};
use crate::modules::application::webapp::Webapp;
use crate::{score::Score, topology::Topology};
use super::{Application, ApplicationFeature, ApplicationInterpret, HelmPackage, OCICompliant};
@@ -60,6 +61,10 @@ pub struct RustWebapp {
pub project_root: PathBuf,
pub service_port: u32,
pub framework: Option<RustWebFramework>,
/// Host name that will be used in production environment.
///
/// This is the place to put the public host name if this is a public facing webapp.
pub dns: String,
}
impl Application for RustWebapp {
@@ -68,6 +73,12 @@ impl Application for RustWebapp {
}
}
impl Webapp for RustWebapp {
fn dns(&self) -> String {
self.dns.clone()
}
}
#[async_trait]
impl HelmPackage for RustWebapp {
async fn build_push_helm_package(
@@ -194,10 +205,10 @@ impl RustWebapp {
Some(body_full(tar_data.into())),
);
while let Some(mut msg) = image_build_stream.next().await {
while let Some(msg) = image_build_stream.next().await {
trace!("Got bollard msg {msg:?}");
match msg {
Ok(mut msg) => {
Ok(msg) => {
if let Some(progress) = msg.progress_detail {
info!(
"Build progress {}/{}",
@@ -257,7 +268,6 @@ impl RustWebapp {
".harmony_generated",
"harmony",
"node_modules",
"Dockerfile.harmony",
];
let mut entries: Vec<_> = WalkDir::new(project_root)
.into_iter()
@@ -461,52 +471,53 @@ impl RustWebapp {
let (image_repo, image_tag) = image_url.rsplit_once(':').unwrap_or((image_url, "latest"));
let app_name = &self.name;
let service_port = self.service_port;
// Create Chart.yaml
let chart_yaml = format!(
r#"
apiVersion: v2
name: {}
description: A Helm chart for the {} web application.
name: {chart_name}
description: A Helm chart for the {app_name} web application.
type: application
version: 0.1.0
appVersion: "{}"
version: 0.2.1
appVersion: "{image_tag}"
"#,
chart_name, self.name, image_tag
);
fs::write(chart_dir.join("Chart.yaml"), chart_yaml)?;
// Create values.yaml
let values_yaml = format!(
r#"
# Default values for {}.
# Default values for {chart_name}.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: {}
repository: {image_repo}
pullPolicy: IfNotPresent
# Overridden by the chart's appVersion
tag: "{}"
tag: "{image_tag}"
service:
type: ClusterIP
port: {}
port: {service_port}
ingress:
enabled: true
tls: true
# Annotations for cert-manager to handle SSL.
annotations:
# Add other annotations like nginx ingress class if needed
# kubernetes.io/ingress.class: nginx
hosts:
- host: {}
- host: {domain}
paths:
- path: /
pathType: ImplementationSpecific
"#,
chart_name, image_repo, image_tag, self.service_port, domain,
);
fs::write(chart_dir.join("values.yaml"), values_yaml)?;
@@ -583,7 +594,11 @@ spec:
);
fs::write(templates_dir.join("deployment.yaml"), deployment_yaml)?;
let service_port = self.service_port;
// Create templates/ingress.yaml
// TODO get issuer name and tls config from topology as it may be different from one
// cluster to another, also from one version to another
let ingress_yaml = format!(
r#"
{{{{- if $.Values.ingress.enabled -}}}}
@@ -596,13 +611,11 @@ metadata:
spec:
{{{{- if $.Values.ingress.tls }}}}
tls:
{{{{- range $.Values.ingress.tls }}}}
- hosts:
{{{{- range .hosts }}}}
- {{{{ . | quote }}}}
- secretName: {{{{ include "chart.fullname" . }}}}-tls
hosts:
{{{{- range $.Values.ingress.hosts }}}}
- {{{{ .host | quote }}}}
{{{{- end }}}}
secretName: {{{{ .secretName }}}}
{{{{- end }}}}
{{{{- end }}}}
rules:
{{{{- range $.Values.ingress.hosts }}}}
@@ -616,12 +629,11 @@ spec:
service:
name: {{{{ include "chart.fullname" $ }}}}
port:
number: {{{{ $.Values.service.port | default {} }}}}
number: {{{{ $.Values.service.port | default {service_port} }}}}
{{{{- end }}}}
{{{{- end }}}}
{{{{- end }}}}
"#,
self.service_port
);
fs::write(templates_dir.join("ingress.yaml"), ingress_yaml)?;

View File

@@ -0,0 +1,7 @@
use super::Application;
use async_trait::async_trait;
#[async_trait]
pub trait Webapp: Application {
fn dns(&self) -> String;
}

View File

@@ -0,0 +1,208 @@
use std::sync::Arc;
use log::{debug, info};
use crate::{interpret::InterpretError, topology::k8s::K8sClient};
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum ArgoScope {
ClusterWide(String),
NamespaceScoped(String),
}
#[derive(Clone, Debug)]
pub struct DiscoveredArgo {
pub control_namespace: String,
pub scope: ArgoScope,
pub has_crds: bool,
pub has_applicationset: bool,
}
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum ArgoDeploymentType {
NotInstalled,
AvailableInDesiredNamespace(String),
InstalledClusterWide(String),
InstalledNamespaceScoped(String),
}
pub async fn discover_argo_all(
k8s: &Arc<K8sClient>,
) -> Result<Vec<DiscoveredArgo>, InterpretError> {
use log::{debug, info, trace, warn};
trace!("Starting Argo discovery");
// CRDs
let mut has_crds = true;
let required_crds = vec!["applications.argoproj.io", "appprojects.argoproj.io"];
trace!("Checking required Argo CRDs: {:?}", required_crds);
for crd in required_crds {
trace!("Verifying CRD presence: {crd}");
let crd_exists = k8s.has_crd(crd).await.map_err(|e| {
InterpretError::new(format!("Failed to verify existence of CRD {crd}: {e}"))
})?;
debug!("CRD {crd} exists: {crd_exists}");
if !crd_exists {
info!(
"Missing Argo CRD {crd}, looks like Argo CD is not installed (or partially installed)"
);
has_crds = false;
break;
}
}
trace!(
"Listing namespaces with healthy Argo CD deployments using selector app.kubernetes.io/part-of=argocd"
);
let mut candidate_namespaces = k8s
.list_namespaces_with_healthy_deployments("app.kubernetes.io/part-of=argocd")
.await
.map_err(|e| InterpretError::new(format!("List healthy argocd deployments: {e}")))?;
trace!(
"Listing namespaces with healthy Argo CD deployments using selector app.kubernetes.io/name=argo-cd"
);
candidate_namespaces.append(
&mut k8s
.list_namespaces_with_healthy_deployments("app.kubernetes.io/name=argo-cd")
.await
.map_err(|e| InterpretError::new(format!("List healthy argocd deployments: {e}")))?,
);
debug!(
"Discovered {} candidate namespace(s) for Argo CD: {:?}",
candidate_namespaces.len(),
candidate_namespaces
);
let mut found = Vec::new();
for ns in candidate_namespaces {
trace!("Evaluating namespace '{ns}' for Argo CD instance");
// Require the application-controller to be healthy (sanity check)
trace!(
"Checking healthy deployment with label app.kubernetes.io/name=argocd-application-controller in namespace '{ns}'"
);
let controller_ok = k8s
.has_healthy_deployment_with_label(
&ns,
"app.kubernetes.io/name=argocd-application-controller",
)
.await
.unwrap_or_else(|e| {
warn!(
"Error while checking application-controller health in namespace '{ns}': {e}"
);
false
}) || k8s
.has_healthy_deployment_with_label(
&ns,
"app.kubernetes.io/component=controller",
)
.await
.unwrap_or_else(|e| {
warn!(
"Error while checking application-controller health in namespace '{ns}': {e}"
);
false
});
debug!("Namespace '{ns}': application-controller healthy = {controller_ok}");
if !controller_ok {
trace!("Skipping namespace '{ns}' because application-controller is not healthy");
continue;
}
trace!("Determining Argo CD scope for namespace '{ns}' (cluster-wide vs namespace-scoped)");
let sa = k8s
.get_controller_service_account_name(&ns)
.await?
.unwrap_or("argocd-application-controller".to_string());
let scope = match k8s.is_service_account_cluster_wide(&sa, &ns).await {
Ok(true) => {
debug!("Namespace '{ns}' identified as cluster-wide Argo CD control plane");
ArgoScope::ClusterWide(ns.to_string())
}
Ok(false) => {
debug!("Namespace '{ns}' identified as namespace-scoped Argo CD control plane");
ArgoScope::NamespaceScoped(ns.to_string())
}
Err(e) => {
warn!(
"Failed to determine Argo CD scope for namespace '{ns}': {e}. Assuming namespace-scoped."
);
ArgoScope::NamespaceScoped(ns.to_string())
}
};
trace!("Checking optional ApplicationSet CRD (applicationsets.argoproj.io)");
let has_applicationset = match k8s.has_crd("applicationsets.argoproj.io").await {
Ok(v) => {
debug!("applicationsets.argoproj.io present: {v}");
v
}
Err(e) => {
warn!("Failed to check applicationsets.argoproj.io CRD: {e}. Assuming absent.");
false
}
};
let argo = DiscoveredArgo {
control_namespace: ns.clone(),
scope,
has_crds,
has_applicationset,
};
debug!("Discovered Argo instance in '{ns}': {argo:?}");
found.push(argo);
}
if found.is_empty() {
info!("No Argo CD installations discovered");
} else {
info!(
"Argo CD discovery complete: {} instance(s) found",
found.len()
);
}
Ok(found)
}
pub async fn detect_argo_deployment_type(
k8s: &Arc<K8sClient>,
desired_namespace: &str,
) -> Result<ArgoDeploymentType, InterpretError> {
let discovered = discover_argo_all(k8s).await?;
debug!("Discovered argo instances {discovered:?}");
if discovered.is_empty() {
return Ok(ArgoDeploymentType::NotInstalled);
}
if let Some(d) = discovered
.iter()
.find(|d| d.control_namespace == desired_namespace)
{
return Ok(ArgoDeploymentType::AvailableInDesiredNamespace(
d.control_namespace.clone(),
));
}
if let Some(d) = discovered
.iter()
.find(|d| matches!(d.scope, ArgoScope::ClusterWide(_)))
{
return Ok(ArgoDeploymentType::InstalledClusterWide(
d.control_namespace.clone(),
));
}
Ok(ArgoDeploymentType::InstalledNamespaceScoped(
discovered[0].control_namespace.clone(),
))
}
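A hedged call-site sketch mirroring the branching in ArgoInterpret above (the namespace is a placeholder):

// Hypothetical: decide the control namespace and whether an install is needed.
async fn decide_sketch(k8s_client: &Arc<K8sClient>) -> Result<(String, bool), InterpretError> {
    let deployment_type = detect_argo_deployment_type(k8s_client, "argocd").await?;
    Ok(match deployment_type {
        ArgoDeploymentType::NotInstalled => ("argocd".to_string(), true),
        ArgoDeploymentType::AvailableInDesiredNamespace(ns)
        | ArgoDeploymentType::InstalledClusterWide(ns) => (ns, false),
        ArgoDeploymentType::InstalledNamespaceScoped(ns) => {
            return Err(InterpretError::new(format!(
                "Argo CD in '{ns}' is namespace-scoped; attachment not supported yet"
            )));
        }
    })
}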

View File

@@ -0,0 +1,116 @@
use std::net::{IpAddr, Ipv4Addr};
use async_trait::async_trait;
use brocade::BrocadeOptions;
use harmony_secret::{Secret, SecretManager};
use harmony_types::id::Id;
use serde::{Deserialize, Serialize};
use crate::{
data::Version,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
score::Score,
topology::Topology,
};
#[derive(Debug, Clone, Serialize)]
pub struct BrocadeEnableSnmpScore {
pub switch_ips: Vec<IpAddr>,
pub dry_run: bool,
}
impl<T: Topology> Score<T> for BrocadeEnableSnmpScore {
fn name(&self) -> String {
"BrocadeEnableSnmpScore".to_string()
}
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(BrocadeEnableSnmpInterpret {
score: self.clone(),
})
}
}
#[derive(Debug, Clone, Serialize)]
pub struct BrocadeEnableSnmpInterpret {
score: BrocadeEnableSnmpScore,
}
#[derive(Secret, Clone, Debug, Serialize, Deserialize)]
struct BrocadeSwitchAuth {
username: String,
password: String,
}
#[derive(Secret, Clone, Debug, Serialize, Deserialize)]
struct BrocadeSnmpAuth {
username: String,
auth_password: String,
des_password: String,
}
#[async_trait]
impl<T: Topology> Interpret<T> for BrocadeEnableSnmpInterpret {
async fn execute(
&self,
_inventory: &Inventory,
_topology: &T,
) -> Result<Outcome, InterpretError> {
let switch_addresses = &self.score.switch_ips;
let snmp_auth = SecretManager::get_or_prompt::<BrocadeSnmpAuth>()
.await
.map_err(|e| InterpretError::new(format!("Could not get Brocade SNMP credentials: {e:?}")))?;
let config = SecretManager::get_or_prompt::<BrocadeSwitchAuth>()
.await
.map_err(|e| InterpretError::new(format!("Could not get Brocade switch credentials: {e:?}")))?;
let brocade = brocade::init(
&switch_addresses,
&config.username,
&config.password,
BrocadeOptions {
dry_run: self.score.dry_run,
..Default::default()
},
)
.await
.map_err(|e| InterpretError::new(format!("Brocade client failed to connect: {e:?}")))?;
brocade
.enable_snmp(
&snmp_auth.username,
&snmp_auth.auth_password,
&snmp_auth.des_password,
)
.await
.map_err(|e| InterpretError::new(e.to_string()))?;
Ok(Outcome::success(format!(
"Activated snmp server for Brocade at {}",
switch_addresses
.iter()
.map(|s| s.to_string())
.collect::<Vec<_>>()
.join(", ")
)))
}
fn get_name(&self) -> InterpretName {
InterpretName::Custom("BrocadeEnableSnmpInterpret")
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}

View File

@@ -19,8 +19,11 @@ pub struct DhcpScore {
pub host_binding: Vec<HostBinding>,
pub next_server: Option<IpAddress>,
pub boot_filename: Option<String>,
/// Boot filename to be provided to PXE clients identifying as BIOS
pub filename: Option<String>,
/// Boot filename to be provided to PXE clients identifying as UEFI but NOT iPXE
pub filename64: Option<String>,
/// Boot filename to be provided to PXE clients identifying as iPXE
pub filenameipxe: Option<String>,
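// Typical values for the three boot files (illustrative, not taken from this
// repository): filename = "undionly.kpxe" (BIOS), filename64 = "ipxe.efi"
// (UEFI), filenameipxe = "http://<server>/boot.ipxe" (iPXE chainload).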
pub dhcp_range: (IpAddress, IpAddress),
pub domain: Option<String>,

View File

@@ -5,11 +5,10 @@ use serde::{Deserialize, Serialize};
use crate::{
data::Version,
hardware::PhysicalHost,
infra::inventory::InventoryRepositoryFactory,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::{HostRole, Inventory},
modules::inventory::LaunchDiscoverInventoryAgentScore,
modules::inventory::{HarmonyDiscoveryStrategy, LaunchDiscoverInventoryAgentScore},
score::Score,
topology::Topology,
};
@@ -17,11 +16,13 @@ use crate::{
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DiscoverHostForRoleScore {
pub role: HostRole,
pub number_desired_hosts: i16,
pub discovery_strategy: HarmonyDiscoveryStrategy,
}
impl<T: Topology> Score<T> for DiscoverHostForRoleScore {
fn name(&self) -> String {
"DiscoverInventoryAgentScore".to_string()
format!("DiscoverHostForRoleScore({:?})", self.role)
}
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
@@ -48,13 +49,15 @@ impl<T: Topology> Interpret<T> for DiscoverHostForRoleInterpret {
);
LaunchDiscoverInventoryAgentScore {
discovery_timeout: None,
discovery_strategy: self.score.discovery_strategy.clone(),
}
.interpret(inventory, topology)
.await?;
let host: PhysicalHost;
let mut chosen_hosts = vec![];
let host_repo = InventoryRepositoryFactory::build().await?;
let mut assigned_hosts = 0;
loop {
let all_hosts = host_repo.get_all_hosts().await?;
@@ -75,15 +78,24 @@ impl<T: Topology> Interpret<T> for DiscoverHostForRoleInterpret {
match ans {
Ok(choice) => {
info!(
"Selected {} as the {:?} node.",
choice.summary(),
self.score.role
"Assigned role {:?} for node {}",
self.score.role,
choice.summary()
);
host_repo
.save_role_mapping(&self.score.role, &choice)
.await?;
host = choice;
break;
chosen_hosts.push(choice);
assigned_hosts += 1;
info!(
"Found {assigned_hosts} hosts for role {:?}",
self.score.role
);
if assigned_hosts == self.score.number_desired_hosts {
break;
}
}
Err(inquire::InquireError::OperationCanceled) => {
info!("Refresh requested. Fetching list of discovered hosts again...");
@@ -100,8 +112,13 @@ impl<T: Topology> Interpret<T> for DiscoverHostForRoleInterpret {
}
Ok(Outcome::success(format!(
"Successfully discovered host {} for role {:?}",
host.summary(),
"Successfully discovered {} hosts {} for role {:?}",
self.score.number_desired_hosts,
chosen_hosts
.iter()
.map(|h| h.summary())
.collect::<Vec<String>>()
.join(", "),
self.score.role
)))
}

View File

@@ -1,6 +1,10 @@
mod discovery;
pub mod inspect;
use std::net::Ipv4Addr;
use cidr::{Ipv4Cidr, Ipv4Inet};
pub use discovery::*;
use tokio::time::{Duration, timeout};
use async_trait::async_trait;
use harmony_inventory_agent::local_presence::DiscoveryEvent;
@@ -24,6 +28,7 @@ use harmony_types::id::Id;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LaunchDiscoverInventoryAgentScore {
pub discovery_timeout: Option<u64>,
pub discovery_strategy: HarmonyDiscoveryStrategy,
}
impl<T: Topology> Score<T> for LaunchDiscoverInventoryAgentScore {
@@ -43,6 +48,12 @@ struct DiscoverInventoryAgentInterpret {
score: LaunchDiscoverInventoryAgentScore,
}
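/// How `LaunchDiscoverInventoryAgentScore` locates inventory agents: `MDNS`
/// passively listens for agents advertising themselves over multicast DNS,
/// while `SUBNET` actively probes every address of `cidr` on `port`.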
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum HarmonyDiscoveryStrategy {
MDNS,
SUBNET { cidr: cidr::Ipv4Cidr, port: u16 },
}
#[async_trait]
impl<T: Topology> Interpret<T> for DiscoverInventoryAgentInterpret {
async fn execute(
@@ -57,6 +68,37 @@ impl<T: Topology> Interpret<T> for DiscoverInventoryAgentInterpret {
),
};
match self.score.discovery_strategy {
HarmonyDiscoveryStrategy::MDNS => self.launch_mdns_discovery().await,
HarmonyDiscoveryStrategy::SUBNET { cidr, port } => {
self.launch_cidr_discovery(&cidr, port).await
}
};
Ok(Outcome::success(
"Discovery process completed successfully".to_string(),
))
}
fn get_name(&self) -> InterpretName {
InterpretName::DiscoverInventoryAgent
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
impl DiscoverInventoryAgentInterpret {
async fn launch_mdns_discovery(&self) {
harmony_inventory_agent::local_presence::discover_agents(
self.score.discovery_timeout,
|event: DiscoveryEvent| -> Result<(), String> {
@@ -88,6 +130,103 @@ impl<T: Topology> Interpret<T> for DiscoverInventoryAgentInterpret {
trace!("Found host information {host:?}");
// TODO: it's wasteful to keep two distinct host types, but merging them
// requires more refactoring than is worth doing now
let harmony_inventory_agent::hwinfo::PhysicalHost {
storage_drives,
storage_controller: _,
memory_modules,
cpus,
chipset: _,
network_interfaces,
management_interface: _,
host_uuid,
} = host;
let host = PhysicalHost {
id: Id::from(host_uuid),
category: HostCategory::Server,
network: network_interfaces,
storage: storage_drives,
labels: vec![Label {
name: "discovered-by".to_string(),
value: "harmony-inventory-agent".to_string(),
}],
memory_modules,
cpus,
};
// FIXME only save the host when it is new or something changed in it.
// we currently are saving the host every time it is discovered.
let repo = InventoryRepositoryFactory::build()
.await
.map_err(|e| format!("Could not build repository : {e}"))
.unwrap();
repo.save(&host)
.await
.map_err(|e| format!("Could not save host : {e}"))
.unwrap();
info!(
"Saved new host id {}, summary : {}",
host.id,
host.summary()
);
});
}
_ => debug!("Unhandled event {event:?}"),
};
Ok(())
},
)
.await
}
/// Probes every address in `cidr` on `port`, in batches of 20 concurrent
/// requests, looking for a harmony inventory agent
/// (`harmony_inventory_agent::client::get_host_inventory`):
/// - a successful response is logged at info and the host is saved to the inventory
/// - an error response from the agent is logged at info
/// - no response before the per-request timeout is logged at debug
async fn launch_cidr_discovery(&self, cidr: &Ipv4Cidr, port: u16) {
let addrs: Vec<Ipv4Inet> = cidr.iter().collect();
let total = addrs.len();
info!(
"Starting CIDR discovery for {} candidate addresses on {} (port {})",
total, cidr, port
);
let batch_size: usize = 20;
let timeout_secs = 5;
let request_timeout = Duration::from_secs(timeout_secs);
let mut current_batch = 0;
let num_batches = addrs.len().div_ceil(batch_size);
for batch in addrs.chunks(batch_size) {
current_batch += 1;
info!("Starting query batch {current_batch} of {num_batches}, timeout {timeout_secs}s");
let mut tasks = Vec::with_capacity(batch.len());
for addr in batch {
let addr = addr.address().to_string();
let port = port;
let task = tokio::spawn(async move {
match timeout(
request_timeout,
harmony_inventory_agent::client::get_host_inventory(&addr, port),
)
.await
{
Ok(Ok(host)) => {
info!("Found and response is 2xx for {addr}:{port}");
// Reuse the same conversion to PhysicalHost as MDNS flow
let harmony_inventory_agent::hwinfo::PhysicalHost {
storage_drives,
storage_controller,
@@ -112,45 +251,36 @@ impl<T: Topology> Interpret<T> for DiscoverInventoryAgentInterpret {
cpus,
};
// Save host to inventory
let repo = InventoryRepositoryFactory::build()
.await
.map_err(|e| format!("Could not build repository : {e}"))
.unwrap();
repo.save(&host)
.await
.map_err(|e| format!("Could not save host : {e}"))
.unwrap();
info!(
"Saved new host id {}, summary : {}",
host.id,
host.summary()
);
});
if let Err(e) = repo.save(&host).await {
log::debug!("Failed to save host {}: {e}", host.id);
} else {
info!("Saved host id {}, summary : {}", host.id, host.summary());
}
}
Ok(Err(e)) => {
log::info!("Error querying inventory agent on {addr}:{port} : {e}");
}
Err(_) => {
// Timeout for this host
log::debug!("No response (timeout) for {addr}:{port}");
}
}
_ => debug!("Unhandled event {event:?}"),
};
Ok(())
},
)
.await;
Ok(Outcome::success(
"Discovery process completed successfully".to_string(),
))
}
});
fn get_name(&self) -> InterpretName {
InterpretName::DiscoverInventoryAgent
}
tasks.push(task);
}
fn get_version(&self) -> Version {
todo!()
}
// Wait for this batch to complete
for t in tasks {
let _ = t.await;
}
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
info!("CIDR discovery completed");
}
}

View File

@@ -0,0 +1,157 @@
use std::collections::BTreeMap;
use k8s_openapi::{
api::core::v1::{Affinity, Toleration},
apimachinery::pkg::apis::meta::v1::ObjectMeta,
};
use kube::CustomResource;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use serde_json::Value;
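/// Hand-written mapping of the OLM `CatalogSource` CRD
/// (operators.coreos.com/v1alpha1). Schema generation is disabled, so this
/// struct only has to serialize into the shape the API server expects.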
#[derive(CustomResource, Deserialize, Serialize, Clone, Debug)]
#[kube(
group = "operators.coreos.com",
version = "v1alpha1",
kind = "CatalogSource",
plural = "catalogsources",
namespaced = true,
schema = "disabled"
)]
#[serde(rename_all = "camelCase")]
pub struct CatalogSourceSpec {
#[serde(skip_serializing_if = "Option::is_none")]
pub address: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub config_map: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub description: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub display_name: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub grpc_pod_config: Option<GrpcPodConfig>,
#[serde(skip_serializing_if = "Option::is_none")]
pub icon: Option<Icon>,
#[serde(skip_serializing_if = "Option::is_none")]
pub image: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub priority: Option<i64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub publisher: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub run_as_root: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub secrets: Option<Vec<String>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub source_type: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub update_strategy: Option<UpdateStrategy>,
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct GrpcPodConfig {
#[serde(skip_serializing_if = "Option::is_none")]
pub affinity: Option<Affinity>,
#[serde(skip_serializing_if = "Option::is_none")]
pub extract_content: Option<ExtractContent>,
#[serde(skip_serializing_if = "Option::is_none")]
pub memory_target: Option<Value>,
#[serde(skip_serializing_if = "Option::is_none")]
pub node_selector: Option<BTreeMap<String, String>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub priority_class_name: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub security_context_config: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub tolerations: Option<Vec<Toleration>>,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct ExtractContent {
pub cache_dir: String,
pub catalog_dir: String,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct Icon {
pub base64data: String,
pub mediatype: String,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct UpdateStrategy {
#[serde(skip_serializing_if = "Option::is_none")]
pub registry_poll: Option<RegistryPoll>,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct RegistryPoll {
#[serde(skip_serializing_if = "Option::is_none")]
pub interval: Option<String>,
}
impl Default for CatalogSource {
fn default() -> Self {
Self {
metadata: ObjectMeta::default(),
spec: CatalogSourceSpec::default(),
}
}
}
impl Default for CatalogSourceSpec {
fn default() -> Self {
Self {
address: None,
config_map: None,
description: None,
display_name: None,
grpc_pod_config: None,
icon: None,
image: None,
priority: None,
publisher: None,
run_as_root: None,
secrets: None,
source_type: None,
update_strategy: None,
}
}
}

View File

@@ -0,0 +1,4 @@
mod catalogsources_operators_coreos_com;
pub use catalogsources_operators_coreos_com::*;
mod subscriptions_operators_coreos_com;
pub use subscriptions_operators_coreos_com::*;

View File

@@ -0,0 +1,68 @@
use k8s_openapi::apimachinery::pkg::apis::meta::v1::ObjectMeta;
use kube::CustomResource;
use serde::{Deserialize, Serialize};
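/// Hand-written mapping of the OLM `Subscription` CRD
/// (operators.coreos.com/v1alpha1). A minimal sketch of subscribing to an
/// operator (field values are illustrative, not taken from this repository):
///
/// ```rust
/// use harmony::modules::k8s::apps::crd::{Subscription, SubscriptionSpec};
///
/// let sub = Subscription {
/// spec: SubscriptionSpec {
/// name: "argocd-operator".to_string(),
/// source: "operatorhubio-catalog".to_string(),
/// source_namespace: "openshift-marketplace".to_string(),
/// channel: Some("alpha".to_string()),
/// ..SubscriptionSpec::default()
/// },
/// ..Subscription::default()
/// };
/// ```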
#[derive(CustomResource, Deserialize, Serialize, Clone, Debug)]
#[kube(
group = "operators.coreos.com",
version = "v1alpha1",
kind = "Subscription",
plural = "subscriptions",
namespaced = true,
schema = "disabled"
)]
#[serde(rename_all = "camelCase")]
pub struct SubscriptionSpec {
#[serde(skip_serializing_if = "Option::is_none")]
pub channel: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub config: Option<SubscriptionConfig>,
#[serde(skip_serializing_if = "Option::is_none")]
pub install_plan_approval: Option<String>,
pub name: String,
pub source: String,
pub source_namespace: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub starting_csv: Option<String>,
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct SubscriptionConfig {
#[serde(skip_serializing_if = "Option::is_none")]
pub env: Option<Vec<k8s_openapi::api::core::v1::EnvVar>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub node_selector: Option<std::collections::BTreeMap<String, String>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub tolerations: Option<Vec<k8s_openapi::api::core::v1::Toleration>>,
}
impl Default for Subscription {
fn default() -> Self {
Subscription {
metadata: ObjectMeta::default(),
spec: SubscriptionSpec::default(),
}
}
}
impl Default for SubscriptionSpec {
fn default() -> SubscriptionSpec {
SubscriptionSpec {
name: String::new(),
source: String::new(),
source_namespace: String::new(),
channel: None,
config: None,
install_plan_approval: None,
starting_csv: None,
}
}
}

View File

@@ -0,0 +1,3 @@
mod operatorhub;
pub use operatorhub::*;
pub mod crd;

View File

@@ -0,0 +1,107 @@
// OperatorHub catalog score.
// For now this only supports OKD with the default catalog and OperatorHub
// setup; it does not verify OLM state or anything else. Very opinionated and
// bare-bones to start.
use k8s_openapi::apimachinery::pkg::apis::meta::v1::ObjectMeta;
use serde::Serialize;
use crate::interpret::Interpret;
use crate::modules::k8s::apps::crd::{
CatalogSource, CatalogSourceSpec, RegistryPoll, UpdateStrategy,
};
use crate::modules::k8s::resource::K8sResourceScore;
use crate::score::Score;
use crate::topology::{K8sclient, Topology};
/// Installs the CatalogSource in a cluster which already has the required services and CRDs installed.
///
/// ```rust
/// use harmony::modules::k8s::apps::OperatorHubCatalogSourceScore;
///
/// let score = OperatorHubCatalogSourceScore::default();
/// ```
///
/// Required services:
/// - catalog-operator
/// - olm-operator
///
/// They are installed by default with OKD/Openshift
///
/// **Warning** : this initial implementation does not manage the dependencies. They must already
/// exist in the cluster.
#[derive(Debug, Clone, Serialize)]
pub struct OperatorHubCatalogSourceScore {
pub name: String,
pub namespace: String,
pub image: String,
}
impl OperatorHubCatalogSourceScore {
pub fn new(name: &str, namespace: &str, image: &str) -> Self {
Self {
name: name.to_string(),
namespace: namespace.to_string(),
image: image.to_string(),
}
}
}
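// A custom catalog pointing at a private index image (illustrative values):
//
// let score = OperatorHubCatalogSourceScore::new(
// "my-catalog",
// "openshift-marketplace",
// "registry.example.com/my/catalog:latest",
// );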
impl Default for OperatorHubCatalogSourceScore {
/// This default implementation will create this k8s resource :
///
/// ```yaml
/// apiVersion: operators.coreos.com/v1alpha1
/// kind: CatalogSource
/// metadata:
/// name: operatorhubio-catalog
/// namespace: openshift-marketplace
/// spec:
/// sourceType: grpc
/// image: quay.io/operatorhubio/catalog:latest
/// displayName: Operatorhub Operators
/// publisher: OperatorHub.io
/// updateStrategy:
/// registryPoll:
/// interval: 60m
/// ```
fn default() -> Self {
OperatorHubCatalogSourceScore {
name: "operatorhubio-catalog".to_string(),
namespace: "openshift-marketplace".to_string(),
image: "quay.io/operatorhubio/catalog:latest".to_string(),
}
}
}
impl<T: Topology + K8sclient> Score<T> for OperatorHubCatalogSourceScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
let metadata = ObjectMeta {
name: Some(self.name.clone()),
namespace: Some(self.namespace.clone()),
..ObjectMeta::default()
};
let spec = CatalogSourceSpec {
source_type: Some("grpc".to_string()),
image: Some(self.image.clone()),
display_name: Some("Operatorhub Operators".to_string()),
publisher: Some("OperatorHub.io".to_string()),
update_strategy: Some(UpdateStrategy {
registry_poll: Some(RegistryPoll {
interval: Some("60m".to_string()),
}),
}),
..CatalogSourceSpec::default()
};
let catalog_source = CatalogSource {
metadata,
spec,
};
K8sResourceScore::single(catalog_source, Some(self.namespace.clone())).create_interpret()
}
fn name(&self) -> String {
format!("OperatorHubCatalogSourceScore({})", self.name)
}
}

View File

@@ -1,3 +1,4 @@
pub mod apps;
pub mod deployment;
pub mod ingress;
pub mod namespace;

View File

@@ -1,4 +1,6 @@
pub mod application;
pub mod argocd;
pub mod brocade;
pub mod cert_manager;
pub mod dhcp;
pub mod dns;
@@ -13,6 +15,7 @@ pub mod load_balancer;
pub mod monitoring;
pub mod okd;
pub mod opnsense;
pub mod postgresql;
pub mod prometheus;
pub mod storage;
pub mod tenant;

View File

@@ -1,18 +1,23 @@
use std::any::Any;
use std::collections::BTreeMap;
use std::collections::{BTreeMap, HashMap};
use async_trait::async_trait;
use harmony_types::k8s_name::K8sName;
use k8s_openapi::api::core::v1::Secret;
use kube::api::ObjectMeta;
use log::debug;
use kube::Resource;
use kube::api::{DynamicObject, ObjectMeta};
use log::{debug, trace};
use serde::Serialize;
use serde_json::json;
use serde_yaml::{Mapping, Value};
use crate::infra::kube::kube_resource_to_dynamic;
use crate::modules::monitoring::kube_prometheus::crd::crd_alertmanager_config::{
AlertmanagerConfig, AlertmanagerConfigSpec, CRDPrometheus,
};
use crate::modules::monitoring::kube_prometheus::crd::rhob_alertmanager_config::RHOBObservability;
use crate::modules::monitoring::okd::OpenshiftClusterAlertSender;
use crate::topology::oberservability::monitoring::AlertManagerReceiver;
use crate::{
interpret::{InterpretError, Outcome},
modules::monitoring::{
@@ -28,14 +33,13 @@ use harmony_types::net::Url;
#[derive(Debug, Clone, Serialize)]
pub struct DiscordWebhook {
pub name: String,
pub name: K8sName,
pub url: Url,
pub selectors: Vec<HashMap<String, String>>,
}
#[async_trait]
impl AlertReceiver<RHOBObservability> for DiscordWebhook {
async fn install(&self, sender: &RHOBObservability) -> Result<Outcome, InterpretError> {
let ns = sender.namespace.clone();
impl DiscordWebhook {
fn get_receiver_config(&self) -> Result<AlertManagerReceiver, String> {
let secret_name = format!("{}-secret", self.name.clone());
let webhook_key = self.url.to_string();
@@ -52,33 +56,91 @@ impl AlertReceiver<RHOBObservability> for DiscordWebhook {
..Default::default()
};
let _ = sender.client.apply(&secret, Some(&ns)).await;
let mut matchers: Vec<String> = Vec::new();
for selector in &self.selectors {
trace!("selector: {:#?}", selector);
for (k, v) in selector {
matchers.push(format!("{} = {}", k, v));
}
}
Ok(AlertManagerReceiver {
additional_ressources: vec![kube_resource_to_dynamic(&secret)?],
receiver_config: json!({
"name": self.name,
"discord_configs": [
{
"webhook_url": self.url.clone(),
"title": "{{ template \"discord.default.title\" . }}",
"message": "{{ template \"discord.default.message\" . }}"
}
]
}),
route_config: json!({
"receiver": self.name,
"matchers": matchers,
}),
})
}
}
#[async_trait]
impl AlertReceiver<OpenshiftClusterAlertSender> for DiscordWebhook {
async fn install(
&self,
sender: &OpenshiftClusterAlertSender,
) -> Result<Outcome, InterpretError> {
todo!()
}
fn name(&self) -> String {
self.name.clone().to_string()
}
fn clone_box(&self) -> Box<dyn AlertReceiver<OpenshiftClusterAlertSender>> {
Box::new(self.clone())
}
fn as_any(&self) -> &dyn Any {
todo!()
}
fn as_alertmanager_receiver(&self) -> Result<AlertManagerReceiver, String> {
self.get_receiver_config()
}
}
#[async_trait]
impl AlertReceiver<RHOBObservability> for DiscordWebhook {
fn as_alertmanager_receiver(&self) -> Result<AlertManagerReceiver, String> {
todo!()
}
async fn install(&self, sender: &RHOBObservability) -> Result<Outcome, InterpretError> {
let ns = sender.namespace.clone();
let config = self.get_receiver_config()?;
for resource in config.additional_ressources.iter() {
todo!("can I apply a dynamicresource");
// sender.client.apply(resource, Some(&ns)).await;
}
let spec = crate::modules::monitoring::kube_prometheus::crd::rhob_alertmanager_config::AlertmanagerConfigSpec {
data: json!({
"route": {
"receiver": self.name,
},
"receivers": [
{
"name": self.name,
"discordConfigs": [
{
"apiURL": {
"name": secret_name,
"key": "webhook-url",
},
"title": "{{ template \"discord.default.title\" . }}",
"message": "{{ template \"discord.default.message\" . }}"
}
]
}
config.receiver_config
]
}),
};
let alertmanager_configs = crate::modules::monitoring::kube_prometheus::crd::rhob_alertmanager_config::AlertmanagerConfig {
metadata: ObjectMeta {
name: Some(self.name.clone()),
name: Some(self.name.clone().to_string()),
labels: Some(std::collections::BTreeMap::from([(
"alertmanagerConfig".to_string(),
"enabled".to_string(),
@@ -122,6 +184,9 @@ impl AlertReceiver<RHOBObservability> for DiscordWebhook {
#[async_trait]
impl AlertReceiver<CRDPrometheus> for DiscordWebhook {
fn as_alertmanager_receiver(&self) -> Result<AlertManagerReceiver, String> {
todo!()
}
async fn install(&self, sender: &CRDPrometheus) -> Result<Outcome, InterpretError> {
let ns = sender.namespace.clone();
let secret_name = format!("{}-secret", self.name.clone());
@@ -167,7 +232,7 @@ impl AlertReceiver<CRDPrometheus> for DiscordWebhook {
let alertmanager_configs = AlertmanagerConfig {
metadata: ObjectMeta {
name: Some(self.name.clone()),
name: Some(self.name.clone().to_string()),
labels: Some(std::collections::BTreeMap::from([(
"alertmanagerConfig".to_string(),
"enabled".to_string(),
@@ -200,6 +265,9 @@ impl AlertReceiver<CRDPrometheus> for DiscordWebhook {
#[async_trait]
impl AlertReceiver<Prometheus> for DiscordWebhook {
fn as_alertmanager_receiver(&self) -> Result<AlertManagerReceiver, String> {
todo!()
}
async fn install(&self, sender: &Prometheus) -> Result<Outcome, InterpretError> {
sender.install_receiver(self).await
}
@@ -217,7 +285,7 @@ impl AlertReceiver<Prometheus> for DiscordWebhook {
#[async_trait]
impl PrometheusReceiver for DiscordWebhook {
fn name(&self) -> String {
self.name.clone()
self.name.clone().to_string()
}
async fn configure_receiver(&self) -> AlertManagerChannelConfig {
self.get_config().await
@@ -226,6 +294,9 @@ impl PrometheusReceiver for DiscordWebhook {
#[async_trait]
impl AlertReceiver<KubePrometheus> for DiscordWebhook {
fn as_alertmanager_receiver(&self) -> Result<AlertManagerReceiver, String> {
todo!()
}
async fn install(&self, sender: &KubePrometheus) -> Result<Outcome, InterpretError> {
sender.install_receiver(self).await
}
@@ -243,7 +314,7 @@ impl AlertReceiver<KubePrometheus> for DiscordWebhook {
#[async_trait]
impl KubePrometheusReceiver for DiscordWebhook {
fn name(&self) -> String {
self.name.clone()
self.name.clone().to_string()
}
async fn configure_receiver(&self) -> AlertManagerChannelConfig {
self.get_config().await
@@ -270,7 +341,7 @@ impl DiscordWebhook {
let mut route = Mapping::new();
route.insert(
Value::String("receiver".to_string()),
Value::String(self.name.clone()),
Value::String(self.name.clone().to_string()),
);
route.insert(
Value::String("matchers".to_string()),
@@ -284,7 +355,7 @@ impl DiscordWebhook {
let mut receiver = Mapping::new();
receiver.insert(
Value::String("name".to_string()),
Value::String(self.name.clone()),
Value::String(self.name.clone().to_string()),
);
let mut discord_config = Mapping::new();
@@ -309,8 +380,9 @@ mod tests {
#[tokio::test]
async fn discord_serialize_should_match() {
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
name: K8sName("test-discord".to_string()),
url: Url::Url(url::Url::parse("https://discord.i.dont.exist.com").unwrap()),
selectors: vec![],
};
let discord_receiver_receiver =

View File

@@ -19,7 +19,7 @@ use crate::{
},
prometheus::prometheus::{Prometheus, PrometheusReceiver},
},
topology::oberservability::monitoring::AlertReceiver,
topology::oberservability::monitoring::{AlertManagerReceiver, AlertReceiver},
};
use harmony_types::net::Url;
@@ -31,6 +31,9 @@ pub struct WebhookReceiver {
#[async_trait]
impl AlertReceiver<RHOBObservability> for WebhookReceiver {
fn as_alertmanager_receiver(&self) -> Result<AlertManagerReceiver, String> {
todo!()
}
async fn install(&self, sender: &RHOBObservability) -> Result<Outcome, InterpretError> {
let spec = crate::modules::monitoring::kube_prometheus::crd::rhob_alertmanager_config::AlertmanagerConfigSpec {
data: json!({
@@ -97,6 +100,9 @@ impl AlertReceiver<RHOBObservability> for WebhookReceiver {
#[async_trait]
impl AlertReceiver<CRDPrometheus> for WebhookReceiver {
fn as_alertmanager_receiver(&self) -> Result<AlertManagerReceiver, String> {
todo!()
}
async fn install(&self, sender: &CRDPrometheus) -> Result<Outcome, InterpretError> {
let spec = crate::modules::monitoring::kube_prometheus::crd::crd_alertmanager_config::AlertmanagerConfigSpec {
data: json!({
@@ -158,6 +164,9 @@ impl AlertReceiver<CRDPrometheus> for WebhookReceiver {
#[async_trait]
impl AlertReceiver<Prometheus> for WebhookReceiver {
fn as_alertmanager_receiver(&self) -> Result<AlertManagerReceiver, String> {
todo!()
}
async fn install(&self, sender: &Prometheus) -> Result<Outcome, InterpretError> {
sender.install_receiver(self).await
}
@@ -184,6 +193,9 @@ impl PrometheusReceiver for WebhookReceiver {
#[async_trait]
impl AlertReceiver<KubePrometheus> for WebhookReceiver {
fn as_alertmanager_receiver(&self) -> Result<AlertManagerReceiver, String> {
todo!()
}
async fn install(&self, sender: &KubePrometheus) -> Result<Outcome, InterpretError> {
sender.install_receiver(self).await
}

View File

@@ -1,12 +1,8 @@
use std::collections::BTreeMap;
use kube::CustomResource;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use crate::modules::monitoring::kube_prometheus::crd::rhob_prometheuses::{
LabelSelector, PrometheusSpec,
};
use crate::modules::monitoring::kube_prometheus::crd::rhob_prometheuses::LabelSelector;
/// MonitoringStack CRD for monitoring.rhobs/v1alpha1
#[derive(CustomResource, Serialize, Deserialize, Debug, Clone, JsonSchema)]

View File

@@ -0,0 +1,270 @@
use base64::prelude::*;
use async_trait::async_trait;
use harmony_types::id::Id;
use kube::api::DynamicObject;
use log::{debug, info, trace};
use serde::Serialize;
use crate::{
data::Version,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
modules::monitoring::okd::OpenshiftClusterAlertSender,
score::Score,
topology::{K8sclient, Topology, oberservability::monitoring::AlertReceiver},
};
impl Clone for Box<dyn AlertReceiver<OpenshiftClusterAlertSender>> {
fn clone(&self) -> Self {
self.clone_box()
}
}
impl Serialize for Box<dyn AlertReceiver<OpenshiftClusterAlertSender>> {
fn serialize<S>(&self, _serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
todo!()
}
}
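/// Installs the given alert receivers into the cluster-wide Alertmanager that
/// ships with OpenShift/OKD by merging their receiver and route configs into
/// the `alertmanager-main` secret of the `openshift-monitoring` namespace.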
#[derive(Debug, Clone, Serialize)]
pub struct OpenshiftClusterAlertScore {
pub receivers: Vec<Box<dyn AlertReceiver<OpenshiftClusterAlertSender>>>,
}
impl<T: Topology + K8sclient> Score<T> for OpenshiftClusterAlertScore {
fn name(&self) -> String {
"ClusterAlertScore".to_string()
}
#[doc(hidden)]
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(OpenshiftClusterAlertInterpret {
receivers: self.receivers.clone(),
})
}
}
#[derive(Debug)]
pub struct OpenshiftClusterAlertInterpret {
receivers: Vec<Box<dyn AlertReceiver<OpenshiftClusterAlertSender>>>,
}
#[async_trait]
impl<T: Topology + K8sclient> Interpret<T> for OpenshiftClusterAlertInterpret {
async fn execute(
&self,
_inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
let client = topology.k8s_client().await?;
let openshift_monitoring_namespace = "openshift-monitoring";
let mut alertmanager_main_secret: DynamicObject = client
.get_secret_json_value("alertmanager-main", Some(openshift_monitoring_namespace))
.await?;
trace!("Got secret {alertmanager_main_secret:#?}");
let data: &mut serde_json::Value = &mut alertmanager_main_secret.data;
trace!("Alertmanager-main secret data {data:#?}");
let data_obj = data
.get_mut("data")
.ok_or(InterpretError::new(
"Missing 'data' field in alertmanager-main secret.".to_string(),
))?
.as_object_mut()
.ok_or(InterpretError::new(
"'data' field in alertmanager-main secret is expected to be an object ."
.to_string(),
))?;
let config_b64 = data_obj
.get("alertmanager.yaml")
.ok_or(InterpretError::new(
"Missing 'alertmanager.yaml' in alertmanager-main secret data".to_string(),
))?
.as_str()
.unwrap_or("");
trace!("Config base64 {config_b64}");
let config_bytes = BASE64_STANDARD.decode(config_b64).unwrap_or_default();
let mut am_config: serde_yaml::Value =
serde_yaml::from_str(&String::from_utf8(config_bytes).unwrap_or_default())
.unwrap_or_default();
debug!("Current alertmanager config {am_config:#?}");
// Ensure the top-level `receivers` key exists before taking a mutable
// reference to it; a freshly created default sequence would otherwise be
// detached from `am_config` and the merged receivers silently lost.
if am_config.get("receivers").is_none() {
if let Some(map) = am_config.as_mapping_mut() {
map.insert(
serde_yaml::Value::String("receivers".to_string()),
serde_yaml::Value::Sequence(Default::default()),
);
}
}
let existing_receivers_sequence = match am_config
.get_mut("receivers")
.and_then(|r| r.as_sequence_mut())
{
Some(seq) => seq,
None => {
return Err(InterpretError::new(
"Expected alertmanager config receivers to be a sequence".to_string(),
));
}
};
let mut additional_resources = vec![];
for custom_receiver in &self.receivers {
let name = custom_receiver.name();
let alertmanager_receiver = custom_receiver.as_alertmanager_receiver()?;
let receiver_json_value = alertmanager_receiver.receiver_config;
let receiver_yaml_string =
serde_json::to_string(&receiver_json_value).map_err(|e| {
InterpretError::new(format!("Failed to serialize receiver config: {}", e))
})?;
let receiver_yaml_value: serde_yaml::Value =
serde_yaml::from_str(&receiver_yaml_string).map_err(|e| {
InterpretError::new(format!("Failed to parse receiver config as YAML: {}", e))
})?;
if let Some(idx) = existing_receivers_sequence.iter().position(|r| {
r.get("name")
.and_then(|n| n.as_str())
.map_or(false, |n| n == name)
}) {
info!("Replacing existing AlertManager receiver: {}", name);
existing_receivers_sequence[idx] = receiver_yaml_value;
} else {
debug!("Adding new AlertManager receiver: {}", name);
existing_receivers_sequence.push(receiver_yaml_value);
}
additional_resources.push(alertmanager_receiver.additional_ressources);
}
// Ensure the top-level `route` mapping and its `routes` list exist before
// taking mutable references to them; freshly created defaults would otherwise
// be detached from `am_config` and the merged routes silently lost.
if am_config.get("route").is_none() {
if let Some(map) = am_config.as_mapping_mut() {
map.insert(
serde_yaml::Value::String("route".to_string()),
serde_yaml::Value::Mapping(Default::default()),
);
}
}
let existing_route_mapping = match am_config.get_mut("route").and_then(|r| r.as_mapping_mut()) {
Some(map) => map,
None => {
return Err(InterpretError::new(
"Expected alertmanager config route to be a mapping".to_string(),
));
}
};
if existing_route_mapping.get("routes").is_none() {
existing_route_mapping.insert(
serde_yaml::Value::String("routes".to_string()),
serde_yaml::Value::Sequence(Default::default()),
);
}
let existing_route_sequence = match existing_route_mapping
.get_mut("routes")
.and_then(|r| r.as_sequence_mut())
{
Some(seq) => seq,
None => {
return Err(InterpretError::new(
"Expected alertmanager config routes to be a sequence".to_string(),
));
}
};
for custom_receiver in &self.receivers {
let name = custom_receiver.name();
let alertmanager_receiver = custom_receiver.as_alertmanager_receiver()?;
let route_json_value = alertmanager_receiver.route_config;
let route_yaml_string = serde_json::to_string(&route_json_value).map_err(|e| {
InterpretError::new(format!("Failed to serialize route config: {}", e))
})?;
let route_yaml_value: serde_yaml::Value = serde_yaml::from_str(&route_yaml_string)
.map_err(|e| {
InterpretError::new(format!("Failed to parse route config as YAML: {}", e))
})?;
if let Some(idy) = existing_route_sequence.iter().position(|r| {
r.get("receiver")
.and_then(|n| n.as_str())
.map_or(false, |n| n == name)
}) {
info!("Replacing existing AlertManager receiver: {}", name);
existing_route_sequence[idy] = route_yaml_value;
} else {
debug!("Adding new AlertManager receiver: {}", name);
existing_route_sequence.push(route_yaml_value);
}
}
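// After both merge passes the config has roughly this shape (illustrative
// receiver names; the real entries come from `self.receivers`):
//
// receivers:
//   - name: Watchdog
//   - name: my-discord        # upserted by receiver name
//     discord_configs: [...]
// route:
//   receiver: Watchdog
//   routes:
//     - receiver: my-discord  # upserted by the route's receiver name
//       matchers: ["team = platform"]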
debug!("Current alertmanager config {am_config:#?}");
// TODO
// - save new version of alertmanager config
// - write additional ressources to the cluster
let am_config = serde_yaml::to_string(&am_config).map_err(|e| {
InterpretError::new(format!(
"Failed to serialize new alertmanager config to string : {e}"
))
})?;
let mut am_config_b64 = String::new();
BASE64_STANDARD.encode_string(am_config, &mut am_config_b64);
// Write the re-encoded config back into the secret data.
data_obj.insert(
"alertmanager.yaml".to_string(),
serde_json::Value::String(am_config_b64),
);
// https://kubernetes.io/docs/reference/using-api/server-side-apply/#field-management
alertmanager_main_secret.metadata.managed_fields = None;
trace!("Applying new alertmanager_main_secret {alertmanager_main_secret:#?}");
client
.apply_dynamic(
&alertmanager_main_secret,
Some(openshift_monitoring_namespace),
true,
)
.await?;
let additional_resources = additional_resources.concat();
trace!("Applying additional ressources for alert receivers {additional_resources:#?}");
client
.apply_dynamic_many(
&additional_resources,
Some(openshift_monitoring_namespace),
true,
)
.await?;
Ok(Outcome::success(format!(
"Successfully configured {} cluster alert receivers: {}",
self.receivers.len(),
self.receivers
.iter()
.map(|r| r.name())
.collect::<Vec<_>>()
.join(", ")
)))
}
fn get_name(&self) -> InterpretName {
InterpretName::Custom("OpenshiftClusterAlertInterpret")
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}

View File

@@ -0,0 +1,90 @@
use std::{collections::BTreeMap, sync::Arc};
use crate::{
interpret::{InterpretError, Outcome},
topology::k8s::K8sClient,
};
use k8s_openapi::api::core::v1::ConfigMap;
use kube::api::ObjectMeta;
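/// Helpers that enable user-workload monitoring on OpenShift/OKD by writing
/// the well-known `cluster-monitoring-config` and
/// `user-workload-monitoring-config` ConfigMaps and waiting for the resulting
/// pods to become ready.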
pub(crate) struct Config;
impl Config {
pub async fn create_cluster_monitoring_config_cm(
client: &Arc<K8sClient>,
) -> Result<Outcome, InterpretError> {
let mut data = BTreeMap::new();
data.insert(
"config.yaml".to_string(),
r#"
enableUserWorkload: true
alertmanagerMain:
enableUserAlertmanagerConfig: true
"#
.to_string(),
);
let cm = ConfigMap {
metadata: ObjectMeta {
name: Some("cluster-monitoring-config".to_string()),
namespace: Some("openshift-monitoring".to_string()),
..Default::default()
},
data: Some(data),
..Default::default()
};
client.apply(&cm, Some("openshift-monitoring")).await?;
Ok(Outcome::success(
"updated cluster-monitoring-config-map".to_string(),
))
}
pub async fn create_user_workload_monitoring_config_cm(
client: &Arc<K8sClient>,
) -> Result<Outcome, InterpretError> {
let mut data = BTreeMap::new();
data.insert(
"config.yaml".to_string(),
r#"
alertmanager:
enabled: true
enableAlertmanagerConfig: true
"#
.to_string(),
);
let cm = ConfigMap {
metadata: ObjectMeta {
name: Some("user-workload-monitoring-config".to_string()),
namespace: Some("openshift-user-workload-monitoring".to_string()),
..Default::default()
},
data: Some(data),
..Default::default()
};
client
.apply(&cm, Some("openshift-user-workload-monitoring"))
.await?;
Ok(Outcome::success(
"updated openshift-user-monitoring-config-map".to_string(),
))
}
pub async fn verify_user_workload(client: &Arc<K8sClient>) -> Result<Outcome, InterpretError> {
let namespace = "openshift-user-workload-monitoring";
let alertmanager_name = "alertmanager-user-workload-0";
let prometheus_name = "prometheus-user-workload-0";
client
.wait_for_pod_ready(alertmanager_name, Some(namespace))
.await?;
client
.wait_for_pod_ready(prometheus_name, Some(namespace))
.await?;
Ok(Outcome::success(format!(
"pods: {}, {} ready in ns: {}",
alertmanager_name, prometheus_name, namespace
)))
}
}

View File

@@ -1,16 +1,13 @@
use std::{collections::BTreeMap, sync::Arc};
use crate::{
data::Version,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
modules::monitoring::okd::config::Config,
score::Score,
topology::{K8sclient, Topology, k8s::K8sClient},
topology::{K8sclient, Topology},
};
use async_trait::async_trait;
use harmony_types::id::Id;
use k8s_openapi::api::core::v1::ConfigMap;
use kube::api::ObjectMeta;
use serde::Serialize;
#[derive(Clone, Debug, Serialize)]
@@ -37,10 +34,9 @@ impl<T: Topology + K8sclient> Interpret<T> for OpenshiftUserWorkloadMonitoringIn
topology: &T,
) -> Result<Outcome, InterpretError> {
let client = topology.k8s_client().await?;
self.update_cluster_monitoring_config_cm(&client).await?;
self.update_user_workload_monitoring_config_cm(&client)
.await?;
self.verify_user_workload(&client).await?;
Config::create_cluster_monitoring_config_cm(&client).await?;
Config::create_user_workload_monitoring_config_cm(&client).await?;
Config::verify_user_workload(&client).await?;
Ok(Outcome::success(
"successfully enabled user-workload-monitoring".to_string(),
))
@@ -62,88 +58,3 @@ impl<T: Topology + K8sclient> Interpret<T> for OpenshiftUserWorkloadMonitoringIn
todo!()
}
}
impl OpenshiftUserWorkloadMonitoringInterpret {
pub async fn update_cluster_monitoring_config_cm(
&self,
client: &Arc<K8sClient>,
) -> Result<Outcome, InterpretError> {
let mut data = BTreeMap::new();
data.insert(
"config.yaml".to_string(),
r#"
enableUserWorkload: true
alertmanagerMain:
enableUserAlertmanagerConfig: true
"#
.to_string(),
);
let cm = ConfigMap {
metadata: ObjectMeta {
name: Some("cluster-monitoring-config".to_string()),
namespace: Some("openshift-monitoring".to_string()),
..Default::default()
},
data: Some(data),
..Default::default()
};
client.apply(&cm, Some("openshift-monitoring")).await?;
Ok(Outcome::success(
"updated cluster-monitoring-config-map".to_string(),
))
}
pub async fn update_user_workload_monitoring_config_cm(
&self,
client: &Arc<K8sClient>,
) -> Result<Outcome, InterpretError> {
let mut data = BTreeMap::new();
data.insert(
"config.yaml".to_string(),
r#"
alertmanager:
enabled: true
enableAlertmanagerConfig: true
"#
.to_string(),
);
let cm = ConfigMap {
metadata: ObjectMeta {
name: Some("user-workload-monitoring-config".to_string()),
namespace: Some("openshift-user-workload-monitoring".to_string()),
..Default::default()
},
data: Some(data),
..Default::default()
};
client
.apply(&cm, Some("openshift-user-workload-monitoring"))
.await?;
Ok(Outcome::success(
"updated openshift-user-monitoring-config-map".to_string(),
))
}
pub async fn verify_user_workload(
&self,
client: &Arc<K8sClient>,
) -> Result<Outcome, InterpretError> {
let namespace = "openshift-user-workload-monitoring";
let alertmanager_name = "alertmanager-user-workload-0";
let prometheus_name = "prometheus-user-workload-0";
client
.wait_for_pod_ready(alertmanager_name, Some(namespace))
.await?;
client
.wait_for_pod_ready(prometheus_name, Some(namespace))
.await?;
Ok(Outcome::success(format!(
"pods: {}, {} ready in ns: {}",
alertmanager_name, prometheus_name, namespace
)))
}
}

View File

@@ -1 +1,14 @@
use crate::topology::oberservability::monitoring::AlertSender;
pub mod cluster_monitoring;
pub(crate) mod config;
pub mod enable_user_workload;
#[derive(Debug)]
pub struct OpenshiftClusterAlertSender;
impl AlertSender for OpenshiftClusterAlertSender {
fn name(&self) -> String {
"OpenshiftClusterAlertSender".to_string()
}
}

View File

@@ -4,7 +4,7 @@ use crate::{
infra::inventory::InventoryRepositoryFactory,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::{HostRole, Inventory},
modules::inventory::DiscoverHostForRoleScore,
modules::inventory::{DiscoverHostForRoleScore, HarmonyDiscoveryStrategy},
score::Score,
topology::HAClusterTopology,
};
@@ -104,6 +104,8 @@ When you can dig them, confirm to continue.
bootstrap_host = hosts.into_iter().next().to_owned();
DiscoverHostForRoleScore {
role: HostRole::Bootstrap,
number_desired_hosts: 1,
discovery_strategy: HarmonyDiscoveryStrategy::MDNS,
}
.interpret(inventory, topology)
.await?;

View File

@@ -1,20 +1,10 @@
use crate::{
data::Version,
hardware::PhysicalHost,
infra::inventory::InventoryRepositoryFactory,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::{HostRole, Inventory},
modules::{
dhcp::DhcpHostBindingScore, http::IPxeMacBootFileScore,
inventory::DiscoverHostForRoleScore, okd::templates::BootstrapIpxeTpl,
},
interpret::Interpret,
inventory::HostRole,
modules::{inventory::HarmonyDiscoveryStrategy, okd::bootstrap_okd_node::OKDNodeInterpret},
score::Score,
topology::{HAClusterTopology, HostBinding},
topology::HAClusterTopology,
};
use async_trait::async_trait;
use derive_new::new;
use harmony_types::id::Id;
use log::{debug, info};
use serde::Serialize;
// -------------------------------------------------------------------------------------------------
@@ -23,231 +13,23 @@ use serde::Serialize;
// - Persist bonding via MachineConfigs (or NNCP) once SCOS is active.
// -------------------------------------------------------------------------------------------------
#[derive(Debug, Clone, Serialize, new)]
pub struct OKDSetup03ControlPlaneScore {}
#[derive(Debug, Clone, Serialize)]
pub struct OKDSetup03ControlPlaneScore {
pub discovery_strategy: HarmonyDiscoveryStrategy,
}
impl Score<HAClusterTopology> for OKDSetup03ControlPlaneScore {
fn create_interpret(&self) -> Box<dyn Interpret<HAClusterTopology>> {
Box::new(OKDSetup03ControlPlaneInterpret::new())
// TODO: Implement a step to wait for the control plane nodes to join the cluster
// and for the cluster operators to become available. This would be similar to
// the `wait-for bootstrap-complete` command.
Box::new(OKDNodeInterpret::new(
HostRole::ControlPlane,
self.discovery_strategy.clone(),
))
}
fn name(&self) -> String {
"OKDSetup03ControlPlaneScore".to_string()
}
}
#[derive(Debug, Clone)]
pub struct OKDSetup03ControlPlaneInterpret {
version: Version,
status: InterpretStatus,
}
impl OKDSetup03ControlPlaneInterpret {
pub fn new() -> Self {
let version = Version::from("1.0.0").unwrap();
Self {
version,
status: InterpretStatus::QUEUED,
}
}
/// Ensures that three physical hosts are discovered and available for the ControlPlane role.
/// It will trigger discovery if not enough hosts are found.
async fn get_nodes(
&self,
inventory: &Inventory,
topology: &HAClusterTopology,
) -> Result<Vec<PhysicalHost>, InterpretError> {
const REQUIRED_HOSTS: usize = 3;
let repo = InventoryRepositoryFactory::build().await?;
let mut control_plane_hosts = repo.get_host_for_role(&HostRole::ControlPlane).await?;
while control_plane_hosts.len() < REQUIRED_HOSTS {
info!(
"Discovery of {} control plane hosts in progress, current number {}",
REQUIRED_HOSTS,
control_plane_hosts.len()
);
// This score triggers the discovery agent for a specific role.
DiscoverHostForRoleScore {
role: HostRole::ControlPlane,
}
.interpret(inventory, topology)
.await?;
control_plane_hosts = repo.get_host_for_role(&HostRole::ControlPlane).await?;
}
if control_plane_hosts.len() < REQUIRED_HOSTS {
Err(InterpretError::new(format!(
"OKD Requires at least {} control plane hosts, but only found {}. Cannot proceed.",
REQUIRED_HOSTS,
control_plane_hosts.len()
)))
} else {
// Take exactly the number of required hosts to ensure consistency.
Ok(control_plane_hosts
.into_iter()
.take(REQUIRED_HOSTS)
.collect())
}
}
/// Configures DHCP host bindings for all control plane nodes.
async fn configure_host_binding(
&self,
inventory: &Inventory,
topology: &HAClusterTopology,
nodes: &Vec<PhysicalHost>,
) -> Result<(), InterpretError> {
info!("[ControlPlane] Configuring host bindings for control plane nodes.");
// Ensure the topology definition matches the number of physical nodes found.
if topology.control_plane.len() != nodes.len() {
return Err(InterpretError::new(format!(
"Mismatch between logical control plane hosts defined in topology ({}) and physical nodes found ({}).",
topology.control_plane.len(),
nodes.len()
)));
}
// Create a binding for each physical host to its corresponding logical host.
let bindings: Vec<HostBinding> = topology
.control_plane
.iter()
.zip(nodes.iter())
.map(|(logical_host, physical_host)| {
info!(
"Creating binding: Logical Host '{}' -> Physical Host ID '{}'",
logical_host.name, physical_host.id
);
HostBinding {
logical_host: logical_host.clone(),
physical_host: physical_host.clone(),
}
})
.collect();
DhcpHostBindingScore {
host_binding: bindings,
domain: Some(topology.domain_name.clone()),
}
.interpret(inventory, topology)
.await?;
Ok(())
}
/// Renders and deploys a per-MAC iPXE boot file for each control plane node.
async fn configure_ipxe(
&self,
inventory: &Inventory,
topology: &HAClusterTopology,
nodes: &Vec<PhysicalHost>,
) -> Result<(), InterpretError> {
info!("[ControlPlane] Rendering per-MAC iPXE configurations.");
// The iPXE script content is the same for all control plane nodes,
// pointing to the 'master.ign' ignition file.
let content = BootstrapIpxeTpl {
http_ip: &topology.http_server.get_ip().to_string(),
scos_path: "scos",
ignition_http_path: "okd_ignition_files",
installation_device: "/dev/sda", // This might need to be configurable per-host in the future
ignition_file_name: "master.ign", // Control plane nodes use the master ignition file
}
.to_string();
debug!("[ControlPlane] iPXE content template:\n{content}");
// Create and apply an iPXE boot file for each node.
for node in nodes {
let mac_address = node.get_mac_address();
if mac_address.is_empty() {
return Err(InterpretError::new(format!(
"Physical host with ID '{}' has no MAC addresses defined.",
node.id
)));
}
info!(
"[ControlPlane] Applying iPXE config for node ID '{}' with MACs: {:?}",
node.id, mac_address
);
IPxeMacBootFileScore {
mac_address,
content: content.clone(),
}
.interpret(inventory, topology)
.await?;
}
Ok(())
}
/// Prompts the user to reboot the target control plane nodes.
async fn reboot_targets(&self, nodes: &Vec<PhysicalHost>) -> Result<(), InterpretError> {
let node_ids: Vec<String> = nodes.iter().map(|n| n.id.to_string()).collect();
info!("[ControlPlane] Requesting reboot for control plane nodes: {node_ids:?}",);
let confirmation = inquire::Confirm::new(
&format!("Please reboot the {} control plane nodes ({}) to apply their PXE configuration. Press enter when ready.", nodes.len(), node_ids.join(", ")),
)
.prompt()
.map_err(|e| InterpretError::new(format!("User prompt failed: {e}")))?;
if !confirmation {
return Err(InterpretError::new(
"User aborted the operation.".to_string(),
));
}
Ok(())
}
}
#[async_trait]
impl Interpret<HAClusterTopology> for OKDSetup03ControlPlaneInterpret {
fn get_name(&self) -> InterpretName {
InterpretName::Custom("OKDSetup03ControlPlane")
}
fn get_version(&self) -> Version {
self.version.clone()
}
fn get_status(&self) -> InterpretStatus {
self.status.clone()
}
fn get_children(&self) -> Vec<Id> {
vec![]
}
async fn execute(
&self,
inventory: &Inventory,
topology: &HAClusterTopology,
) -> Result<Outcome, InterpretError> {
// 1. Ensure we have 3 physical hosts for the control plane.
let nodes = self.get_nodes(inventory, topology).await?;
// 2. Create DHCP reservations for the control plane nodes.
self.configure_host_binding(inventory, topology, &nodes)
.await?;
// 3. Create iPXE files for each control plane node to boot from the master ignition.
self.configure_ipxe(inventory, topology, &nodes).await?;
// 4. Reboot the nodes to start the OS installation.
self.reboot_targets(&nodes).await?;
// TODO: Implement a step to wait for the control plane nodes to join the cluster
// and for the cluster operators to become available. This would be similar to
// the `wait-for bootstrap-complete` command.
info!("[ControlPlane] Provisioning initiated. Monitor the cluster convergence manually.");
Ok(Outcome::success(
"Control plane provisioning has been successfully initiated.".into(),
))
}
}

View File

@@ -1,13 +1,9 @@
use async_trait::async_trait;
use derive_new::new;
use harmony_types::id::Id;
use log::info;
use serde::Serialize;
use crate::{
data::Version,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
interpret::Interpret,
inventory::HostRole,
modules::{inventory::HarmonyDiscoveryStrategy, okd::bootstrap_okd_node::OKDNodeInterpret},
score::Score,
topology::HAClusterTopology,
};
@@ -18,66 +14,20 @@ use crate::{
// - Persist bonding via MC/NNCP as required (same approach as masters).
// -------------------------------------------------------------------------------------------------
#[derive(Debug, Clone, Serialize, new)]
pub struct OKDSetup04WorkersScore {}
#[derive(Debug, Clone, Serialize)]
pub struct OKDSetup04WorkersScore {
pub discovery_strategy: HarmonyDiscoveryStrategy,
}
impl Score<HAClusterTopology> for OKDSetup04WorkersScore {
fn create_interpret(&self) -> Box<dyn Interpret<HAClusterTopology>> {
Box::new(OKDSetup04WorkersInterpret::new(self.clone()))
Box::new(OKDNodeInterpret::new(
HostRole::Worker,
self.discovery_strategy.clone(),
))
}
fn name(&self) -> String {
"OKDSetup04WorkersScore".to_string()
}
}
#[derive(Debug, Clone)]
pub struct OKDSetup04WorkersInterpret {
score: OKDSetup04WorkersScore,
version: Version,
status: InterpretStatus,
}
impl OKDSetup04WorkersInterpret {
pub fn new(score: OKDSetup04WorkersScore) -> Self {
let version = Version::from("1.0.0").unwrap();
Self {
version,
score,
status: InterpretStatus::QUEUED,
}
}
async fn render_and_reboot(&self) -> Result<(), InterpretError> {
info!("[Workers] Rendering per-MAC PXE for workers and rebooting");
Ok(())
}
}
#[async_trait]
impl Interpret<HAClusterTopology> for OKDSetup04WorkersInterpret {
fn get_name(&self) -> InterpretName {
InterpretName::Custom("OKDSetup04Workers")
}
fn get_version(&self) -> Version {
self.version.clone()
}
fn get_status(&self) -> InterpretStatus {
self.status.clone()
}
fn get_children(&self) -> Vec<Id> {
vec![]
}
async fn execute(
&self,
_inventory: &Inventory,
_topology: &HAClusterTopology,
) -> Result<Outcome, InterpretError> {
self.render_and_reboot().await?;
Ok(Outcome::success("Workers provisioned".into()))
}
}

View File

@@ -0,0 +1,313 @@
use async_trait::async_trait;
use derive_new::new;
use harmony_types::id::Id;
use log::{debug, info};
use serde::Serialize;
use crate::{
data::Version,
hardware::PhysicalHost,
infra::inventory::InventoryRepositoryFactory,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::{HostRole, Inventory},
modules::{
dhcp::DhcpHostBindingScore,
http::IPxeMacBootFileScore,
inventory::{DiscoverHostForRoleScore, HarmonyDiscoveryStrategy},
okd::{
okd_node::{BootstrapRole, ControlPlaneRole, OKDRoleProperties, WorkerRole},
templates::BootstrapIpxeTpl,
},
},
score::Score,
topology::{HAClusterTopology, HostBinding, LogicalHost},
};
#[derive(Debug, Clone, Serialize, new)]
pub struct OKDNodeInstallationScore {
host_role: HostRole,
discovery_strategy: HarmonyDiscoveryStrategy,
}
impl Score<HAClusterTopology> for OKDNodeInstallationScore {
fn name(&self) -> String {
"OKDNodeScore".to_string()
}
fn create_interpret(&self) -> Box<dyn Interpret<HAClusterTopology>> {
Box::new(OKDNodeInterpret::new(
self.host_role.clone(),
self.discovery_strategy.clone(),
))
}
}
#[derive(Debug, Clone)]
pub struct OKDNodeInterpret {
host_role: HostRole,
discovery_strategy: HarmonyDiscoveryStrategy,
}
impl OKDNodeInterpret {
pub fn new(host_role: HostRole, discovery_strategy: HarmonyDiscoveryStrategy) -> Self {
Self {
host_role,
discovery_strategy,
}
}
fn okd_role_properties(&self, role: &HostRole) -> &'static dyn OKDRoleProperties {
match role {
HostRole::Bootstrap => &BootstrapRole,
HostRole::ControlPlane => &ControlPlaneRole,
HostRole::Worker => &WorkerRole,
}
}
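// `OKDRoleProperties` is what makes this interpret role-generic: each role
// supplies its required host count (`required_hosts()`), its ignition file
// name (`ignition_file()`, e.g. "master.ign" for control plane nodes) and the
// logical hosts declared for it in the topology (`logical_hosts(topology)`).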
async fn get_nodes(
&self,
inventory: &Inventory,
topology: &HAClusterTopology,
) -> Result<Vec<PhysicalHost>, InterpretError> {
let repo = InventoryRepositoryFactory::build().await?;
let mut hosts = repo.get_host_for_role(&self.host_role).await?;
let okd_host_properties = self.okd_role_properties(&self.host_role);
let required_hosts: i16 = okd_host_properties.required_hosts();
info!(
"Discovery of {} {} hosts in progress, current number {}",
required_hosts,
self.host_role,
hosts.len()
);
// This score triggers the discovery agent for a specific role.
DiscoverHostForRoleScore {
role: self.host_role.clone(),
number_desired_hosts: required_hosts,
discovery_strategy: self.discovery_strategy.clone(),
}
.interpret(inventory, topology)
.await?;
hosts = repo.get_host_for_role(&self.host_role).await?;
if hosts.len() < required_hosts.try_into().unwrap_or(0) {
Err(InterpretError::new(format!(
"OKD Requires at least {} {} hosts, but only found {}. Cannot proceed.",
required_hosts,
self.host_role,
hosts.len()
)))
} else {
// Take exactly the number of required hosts to ensure consistency.
Ok(hosts
.into_iter()
.take(required_hosts.try_into().unwrap())
.collect())
}
}
/// Configures DHCP host bindings for all nodes.
async fn configure_host_binding(
&self,
inventory: &Inventory,
topology: &HAClusterTopology,
nodes: &Vec<PhysicalHost>,
) -> Result<(), InterpretError> {
info!("[{}] Configuring host bindings.", self.host_role);
let host_properties = self.okd_role_properties(&self.host_role);
self.validate_host_node_match(nodes, host_properties.logical_hosts(topology))?;
let bindings: Vec<HostBinding> =
self.host_bindings(nodes, host_properties.logical_hosts(topology));
DhcpHostBindingScore {
host_binding: bindings,
domain: Some(topology.domain_name.clone()),
}
.interpret(inventory, topology)
.await?;
Ok(())
}
// Ensure the topology definition matches the number of physical nodes found.
fn validate_host_node_match(
&self,
nodes: &Vec<PhysicalHost>,
hosts: &Vec<LogicalHost>,
) -> Result<(), InterpretError> {
if hosts.len() != nodes.len() {
return Err(InterpretError::new(format!(
"Mismatch between logical hosts defined in topology ({}) and physical nodes found ({}).",
hosts.len(),
nodes.len()
)));
}
Ok(())
}
// Create a binding for each physical host to its corresponding logical host.
fn host_bindings(
&self,
nodes: &Vec<PhysicalHost>,
hosts: &Vec<LogicalHost>,
) -> Vec<HostBinding> {
hosts
.iter()
.zip(nodes.iter())
.map(|(logical_host, physical_host)| {
info!(
"Creating binding: Logical Host '{}' -> Physical Host ID '{}'",
logical_host.name, physical_host.id
);
HostBinding {
logical_host: logical_host.clone(),
physical_host: physical_host.clone(),
}
})
.collect()
}
/// Renders and deploys a per-MAC iPXE boot file for each node.
async fn configure_ipxe(
&self,
inventory: &Inventory,
topology: &HAClusterTopology,
nodes: &Vec<PhysicalHost>,
) -> Result<(), InterpretError> {
info!(
"[{}] Rendering per-MAC iPXE configurations.",
self.host_role
);
let okd_role_properties = self.okd_role_properties(&self.host_role);
// The iPXE script content is identical for every node of this role,
// pointing at the role's ignition file (e.g. 'master.ign' for control plane).
let content = BootstrapIpxeTpl {
http_ip: &topology.http_server.get_ip().to_string(),
scos_path: "scos",
ignition_http_path: "okd_ignition_files",
// TODO: refactor so the installation device is not hard-coded; it may need
// to be configurable per host.
installation_device: "/dev/sda",
ignition_file_name: okd_role_properties.ignition_file(),
}
.to_string();
debug!("[{}] iPXE content template:\n{content}", self.host_role);
// Create and apply an iPXE boot file for each node.
for node in nodes {
let mac_address = node.get_mac_address();
if mac_address.is_empty() {
return Err(InterpretError::new(format!(
"Physical host with ID '{}' has no MAC addresses defined.",
node.id
)));
}
info!(
"[{}] Applying iPXE config for node ID '{}' with MACs: {:?}",
self.host_role, node.id, mac_address
);
IPxeMacBootFileScore {
mac_address,
content: content.clone(),
}
.interpret(inventory, topology)
.await?;
}
Ok(())
}
/// Prompts the user to reboot the target nodes for this role.
async fn reboot_targets(&self, nodes: &Vec<PhysicalHost>) -> Result<(), InterpretError> {
let node_ids: Vec<String> = nodes.iter().map(|n| n.id.to_string()).collect();
info!(
"[{}] Requesting reboot for control plane nodes: {node_ids:?}",
self.host_role
);
let confirmation = inquire::Confirm::new(
&format!("Please reboot the {} {} nodes ({}) to apply their PXE configuration. Press enter when ready.", nodes.len(), self.host_role, node_ids.join(", ")),
)
.prompt()
.map_err(|e| InterpretError::new(format!("User prompt failed: {e}")))?;
if !confirmation {
return Err(InterpretError::new(
"User aborted the operation.".to_string(),
));
}
Ok(())
}
}
#[async_trait]
impl Interpret<HAClusterTopology> for OKDNodeInterpret {
async fn execute(
&self,
inventory: &Inventory,
topology: &HAClusterTopology,
) -> Result<Outcome, InterpretError> {
// 1. Ensure we have the specified number of physical hosts.
let nodes = self.get_nodes(inventory, topology).await?;
// 2. Create DHCP reservations for the nodes.
self.configure_host_binding(inventory, topology, &nodes)
.await?;
// 3. Create iPXE files for each node to boot from the ignition.
self.configure_ipxe(inventory, topology, &nodes).await?;
// 4. Reboot the nodes to start the OS installation.
self.reboot_targets(&nodes).await?;
// TODO: Implement a step to validate that the installation of the nodes is
// complete and that the cluster operators have become available.
//
// The OpenShift installer only provides two wait commands which currently need to be
// run manually:
// - `openshift-install wait-for bootstrap-complete`
// - `openshift-install wait-for install-complete`
//
// There is no installer command that waits specifically for worker node
// provisioning. Worker nodes join asynchronously (via ignition + CSR approval),
// and the cluster becomes fully functional only once all nodes are Ready and the
// cluster operators report Available=True.
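//
// A hedged sketch of what such a validation step could look like
// (shelling out to the installer binary; illustration only, not part
// of this change):
//
//     let status = std::process::Command::new("openshift-install")
//         .args(["wait-for", "install-complete"])
//         .status()
//         .map_err(|e| InterpretError::new(format!("wait-for failed: {e}")))?;
//     if !status.success() {
//         return Err(InterpretError::new("install-complete did not succeed".to_string()));
//     }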
info!(
"[{}] Provisioning initiated. Monitor the cluster convergence manually.",
self.host_role
);
Ok(Outcome::success(format!(
"{} provisioning has been successfully initiated.",
self.host_role
)))
}
fn get_name(&self) -> InterpretName {
InterpretName::Custom("OKDNodeSetup".into())
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
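For orientation, a minimal sketch of driving this interpret directly for the control plane role; `discovery_strategy`, `inventory`, and `topology` are assumed to come from the surrounding application setup and do not appear in this diff:

    // Hedged usage sketch, not part of the change above.
    let interpret = OKDNodeInterpret::new(HostRole::ControlPlane, discovery_strategy);
    let _outcome = interpret.execute(&inventory, &topology).await?;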


@@ -251,14 +251,15 @@ impl<T: Topology + NetworkManager + Switch> Interpret<T> for HostNetworkConfigur
#[cfg(test)]
mod tests {
use assertor::*;
use brocade::PortOperatingMode;
use harmony_types::{net::MacAddress, switch::PortLocation};
use lazy_static::lazy_static;
use crate::{
hardware::HostCategory,
topology::{
- HostNetworkConfig, NetworkError, PreparationError, PreparationOutcome, SwitchError,
- SwitchPort,
+ HostNetworkConfig, NetworkError, PortConfig, PreparationError, PreparationOutcome,
+ SwitchError, SwitchPort,
},
};
use std::{
@@ -692,5 +693,14 @@ mod tests {
Ok(())
}
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError> {
todo!()
}
async fn configure_interface(
&self,
port_config: &Vec<PortConfig>,
) -> Result<(), SwitchError> {
todo!()
}
}
}


@@ -48,10 +48,13 @@
//! - internal_domain: Internal cluster domain (e.g., cluster.local or harmony.mcd).
use crate::{
- modules::okd::{
- OKDSetup01InventoryScore, OKDSetup02BootstrapScore, OKDSetup03ControlPlaneScore,
- OKDSetup04WorkersScore, OKDSetup05SanityCheckScore, OKDSetupPersistNetworkBondScore,
- bootstrap_06_installation_report::OKDSetup06InstallationReportScore,
+ modules::{
+ inventory::HarmonyDiscoveryStrategy,
+ okd::{
+ OKDSetup01InventoryScore, OKDSetup02BootstrapScore, OKDSetup03ControlPlaneScore,
+ OKDSetup04WorkersScore, OKDSetup05SanityCheckScore, OKDSetupPersistNetworkBondScore,
+ bootstrap_06_installation_report::OKDSetup06InstallationReportScore,
+ },
},
score::Score,
topology::HAClusterTopology,
@@ -60,13 +63,19 @@ use crate::{
pub struct OKDInstallationPipeline;
impl OKDInstallationPipeline {
- pub async fn get_all_scores() -> Vec<Box<dyn Score<HAClusterTopology>>> {
+ pub async fn get_all_scores(
+ discovery_strategy: HarmonyDiscoveryStrategy,
+ ) -> Vec<Box<dyn Score<HAClusterTopology>>> {
vec![
Box::new(OKDSetup01InventoryScore::new()),
Box::new(OKDSetup02BootstrapScore::new()),
- Box::new(OKDSetup03ControlPlaneScore::new()),
+ Box::new(OKDSetup03ControlPlaneScore {
+ discovery_strategy: discovery_strategy.clone(),
+ }),
Box::new(OKDSetupPersistNetworkBondScore::new()),
- Box::new(OKDSetup04WorkersScore::new()),
+ Box::new(OKDSetup04WorkersScore {
+ discovery_strategy: discovery_strategy.clone(),
+ }),
Box::new(OKDSetup05SanityCheckScore::new()),
Box::new(OKDSetup06InstallationReportScore::new()),
]
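
Callers now pass the discovery strategy explicitly. A hedged sketch of the new call shape (the concrete strategy value is an assumption, as its variants are not shown in this diff):

    // All seven scores, in installation order.
    let scores = OKDInstallationPipeline::get_all_scores(discovery_strategy).await;
    assert_eq!(scores.len(), 7);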


@@ -6,12 +6,14 @@ mod bootstrap_05_sanity_check;
mod bootstrap_06_installation_report;
pub mod bootstrap_dhcp;
pub mod bootstrap_load_balancer;
pub mod bootstrap_okd_node;
mod bootstrap_persist_network_bond;
pub mod dhcp;
pub mod dns;
pub mod installation;
pub mod ipxe;
pub mod load_balancer;
pub mod okd_node;
pub mod templates;
pub mod upgrade;
pub use bootstrap_01_prepare::*;


@@ -0,0 +1,54 @@
use crate::topology::{HAClusterTopology, LogicalHost};
pub trait OKDRoleProperties {
fn ignition_file(&self) -> &'static str;
fn required_hosts(&self) -> i16;
fn logical_hosts<'a>(&self, t: &'a HAClusterTopology) -> &'a Vec<LogicalHost>;
}
pub struct BootstrapRole;
pub struct ControlPlaneRole;
pub struct WorkerRole;
pub struct StorageRole;
impl OKDRoleProperties for BootstrapRole {
fn ignition_file(&self) -> &'static str {
"bootstrap.ign"
}
fn required_hosts(&self) -> i16 {
1
}
fn logical_hosts<'a>(&self, t: &'a HAClusterTopology) -> &'a Vec<LogicalHost> {
todo!()
}
}
impl OKDRoleProperties for ControlPlaneRole {
fn ignition_file(&self) -> &'static str {
"master.ign"
}
fn required_hosts(&self) -> i16 {
3
}
fn logical_hosts<'a>(&self, t: &'a HAClusterTopology) -> &'a Vec<LogicalHost> {
&t.control_plane
}
}
impl OKDRoleProperties for WorkerRole {
fn ignition_file(&self) -> &'static str {
"worker.ign"
}
fn required_hosts(&self) -> i16 {
2
}
fn logical_hosts<'a>(&self, t: &'a HAClusterTopology) -> &'a Vec<LogicalHost> {
&t.workers
}
}
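
The diff declares `StorageRole` but shows no `OKDRoleProperties` impl for it. A hedged sketch of what one might look like; the ignition file name, host count, and a `storage` field on `HAClusterTopology` are all assumptions:

    impl OKDRoleProperties for StorageRole {
        fn ignition_file(&self) -> &'static str {
            "worker.ign" // assumption: storage nodes boot with the worker ignition
        }
        fn required_hosts(&self) -> i16 {
            3 // assumption: a typical replicated-storage minimum
        }
        fn logical_hosts<'a>(&self, t: &'a HAClusterTopology) -> &'a Vec<LogicalHost> {
            &t.storage // hypothetical field, not present in this diff
        }
    }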


@@ -1,3 +1,4 @@
pub mod node_exporter;
mod shell;
mod upgrade;
pub use shell::*;

Some files were not shown because too many files have changed in this diff.