Compare commits

...

105 Commits

Author SHA1 Message Date
9a1aad62c9 Merge pull request 'fix: kubeconfig falls back to .kube if KUBECONFIG env variable is not set' (#205) from fix/kubeconfig into master
All checks were successful
Run Check Script / check (push) Successful in 1m9s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m49s
Reviewed-on: #205
2026-01-07 21:05:49 +00:00
0f9a53c8f6 cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m4s
2026-01-07 15:57:56 -05:00
b21829470d Merge pull request 'fix: modified nats box to use image tag non root for use in openshift environment' (#204) from fix/nats_non_root into master
All checks were successful
Run Check Script / check (push) Successful in 1m5s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m36s
Reviewed-on: #204
2026-01-07 20:52:42 +00:00
15f5e14c70 fix: modified nats box to use image tag non root for use in openshift environment
All checks were successful
Run Check Script / check (pull_request) Successful in 1m12s
2026-01-07 15:48:37 -05:00
4dcaf55dc5 fix: kubeconfig falls back to .kube if KUBECONFIG env variable is not set
Some checks failed
Run Check Script / check (pull_request) Failing after 2s
2026-01-07 15:47:08 -05:00
05c6398875 Merge pull request 'fix: added missing functions to impl SwitchClient for unmanagedSwitch' (#203) from fix/brocade_unmaged_switch into master
All checks were successful
Run Check Script / check (push) Successful in 1m3s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m17s
Reviewed-on: #203
2026-01-07 19:22:23 +00:00
ad5abe1748 fix(test): Use a mutex to prevent race conditions between tests relying on KUBECONFIG env var
All checks were successful
Run Check Script / check (pull_request) Successful in 1m0s
2026-01-07 14:21:29 -05:00
4e5a24b07a fix: added missing functions to impl SwitchClient for unmanagedSwitch
Some checks failed
Run Check Script / check (pull_request) Failing after 53s
2026-01-07 13:22:41 -05:00
7f0b77969c Merge pull request 'feat: PostgreSQLScore happy path using cnpg operator' (#200) from feat/postgresqlScore into master
Some checks failed
Run Check Script / check (push) Failing after 4s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 26s
Reviewed-on: #200
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-06 23:58:17 +00:00
166af498a0 Merge pull request 'Unmanaged switch client' (#187) from jvtrudel/harmony:feat/unmanaged-switch into master
Some checks failed
Run Check Script / check (push) Failing after 0s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 28s
Reviewed-on: #187
2026-01-06 21:43:18 +00:00
4144633098 Merge pull request 'feat: OPNSense Topology useful to interact with only an opnsense instance.' (#184) from feat/opnsenseTopology into master
Some checks failed
Run Check Script / check (push) Successful in 56s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #184
Reviewed-by: Ian Letourneau <ian@noma.to>
2026-01-06 21:37:33 +00:00
59253a65da Merge remote-tracking branch 'origin/master' into feat/opnsenseTopology
All checks were successful
Run Check Script / check (pull_request) Successful in 56s
2026-01-06 16:37:11 -05:00
16f65efe4f Merge remote-tracking branch 'origin/master' into feat/postgresqlScore
All checks were successful
Run Check Script / check (pull_request) Successful in 56s
2026-01-06 15:54:34 -05:00
07bc59d414 Merge pull request 'feat/cluster_monitoring' (#179) from feat/cluster_monitoring into master
All checks were successful
Run Check Script / check (push) Successful in 1m4s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m16s
Reviewed-on: #179
2026-01-06 20:47:06 +00:00
d5137d5ebc Merge remote-tracking branch 'origin/master' into feat/cluster_monitoring
Some checks failed
Run Check Script / check (pull_request) Failing after 10m33s
2026-01-06 15:43:34 -05:00
f2ca97b3bf Merge pull request 'feat(application): Webapp feature with production dns' (#167) from feat/webappdns into master
All checks were successful
Run Check Script / check (push) Successful in 54s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m31s
Reviewed-on: #167
2026-01-06 20:15:28 +00:00
dbfae8539f Merge remote-tracking branch 'origin/master' into feat/webappdns
All checks were successful
Run Check Script / check (pull_request) Successful in 58s
2026-01-06 15:14:19 -05:00
ed61ed1d93 Merge remote-tracking branch 'origin/master' into feat/postgresqlScore
All checks were successful
Run Check Script / check (pull_request) Successful in 55s
2026-01-06 15:10:48 -05:00
9359d43fe1 chore: Fix pr comments, documentation, slight refactor for better apis
All checks were successful
Run Check Script / check (pull_request) Successful in 49s
2026-01-06 15:09:17 -05:00
5935d66407 removed serial_test crate to keep tests running in parallel
All checks were successful
Run Check Script / check (pull_request) Successful in 58s
2026-01-06 15:02:30 -05:00
e026ad4d69 Merge pull request 'adr: draft ADR proposing harmony agent and nats-jetstream for decentralized workload management' (#202) from adr/decentralized-workload-management into master
All checks were successful
Run Check Script / check (push) Successful in 58s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 6m41s
Reviewed-on: #202
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-06 19:45:54 +00:00
98f098ffa4 Merge pull request 'feat: implementation for opnsense os-node_exporter' (#173) from feat/install_opnsense_node_exporter into master
All checks were successful
Run Check Script / check (push) Successful in 59s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 6m58s
Reviewed-on: #173
2026-01-06 19:19:34 +00:00
fdf1dfaa30 fix: leave implementers to define their Debug, so removed impl Debug for dyn NodeExporter
All checks were successful
Run Check Script / check (pull_request) Successful in 55s
2026-01-06 14:17:04 -05:00
4f8cd0c1cb Merge remote-tracking branch 'origin/master' into feat/install_opnsense_node_exporter
All checks were successful
Run Check Script / check (pull_request) Successful in 55s
2026-01-06 13:56:48 -05:00
028161000e Merge remote-tracking branch 'origin/master' into feat/postgresqlScore
Some checks failed
Run Check Script / check (pull_request) Failing after 1s
2026-01-06 13:44:50 -05:00
457d3d4546 fix tests, cargo fmt, introduced crate serial_test to allow sequential testing of env-sensitive tests
All checks were successful
Run Check Script / check (pull_request) Successful in 56s
2026-01-06 13:06:59 -05:00
004b35f08e Merge pull request 'feat/brocade_snmp' (#193) from feat/brocade_snmp into master
Some checks failed
Run Check Script / check (push) Failing after 2s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 27s
Reviewed-on: #193
2026-01-06 16:22:25 +00:00
2b19d8c3e8 fix: changed name to switch_ips for more clarity
All checks were successful
Run Check Script / check (pull_request) Successful in 54s
2026-01-06 10:51:53 -05:00
745479c667 Merge pull request 'doc for removing worker flag from cp on UPI' (#165) from doc/worker-flag into master
All checks were successful
Run Check Script / check (push) Successful in 53s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 9m24s
Reviewed-on: #165
2026-01-06 15:46:13 +00:00
2d89e08877 Merge pull request 'doc to clone and transfer a coreos disk' (#166) from doc/clone into master
Some checks failed
Run Check Script / check (push) Successful in 54s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #166
2026-01-06 15:42:56 +00:00
e5bd866c09 Merge pull request 'feat: cnpg operator score' (#199) from feat/cnpgOperator into master
Some checks failed
Run Check Script / check (push) Has been cancelled
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #199
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-06 15:41:55 +00:00
0973f76701 Merge pull request 'feat: Introducing FailoverTopology and OperatorHub Catalog Subscription with example' (#196) from feat/multisitePostgreSQL into master
Some checks failed
Run Check Script / check (push) Has been cancelled
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #196
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-06 15:41:12 +00:00
fd69a2d101 Merge pull request 'feat/rebuild_inventory' (#201) from feat/rebuild_inventory into master
Reviewed-on: #201
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-05 20:30:33 +00:00
4d535e192d feat: Add new nats example
Some checks failed
Run Check Script / check (pull_request) Failing after -5s
2025-12-22 16:02:10 -05:00
ef307081d8 chore: slight refactor of postgres public score
Some checks failed
Run Check Script / check (pull_request) Has been cancelled
2025-12-22 09:54:27 -05:00
5cce9f8e74 adr: draft ADR proposing harmony agent and nats-jetstream for decentralized workload management
All checks were successful
Run Check Script / check (pull_request) Successful in 1m31s
2025-12-19 10:12:44 -05:00
07e610c54a fix git merge conflict
All checks were successful
Run Check Script / check (pull_request) Successful in 1m24s
2025-12-17 17:09:32 -05:00
204795a74f feat(failoverPostgres): It's alive! We can now deploy a multisite postgres instance. The public hostname is still hardcoded; we will have to fix that, but the rest is good enough
Some checks failed
Run Check Script / check (pull_request) Failing after 36s
2025-12-17 16:43:37 -05:00
03e98a51e3 Merge pull request 'fix: added fields missing for haproxy after most recent update' (#191) from fix/opnsense_update into master
Some checks failed
Run Check Script / check (push) Failing after 12m40s
Reviewed-on: #191
2025-12-17 20:03:49 +00:00
22875fe8f3 fix: updated test xml structures to match with new fields added to opnsense
All checks were successful
Run Check Script / check (pull_request) Successful in 1m32s
2025-12-17 15:00:48 -05:00
66a9a76a6b feat(postgres): Failover postgres example maybe working!? Added FailoverTopology implementations for required capabilities, documented a bit, some more tests, and quite a few utility functions
Some checks failed
Run Check Script / check (pull_request) Failing after 1m49s
2025-12-17 14:35:10 -05:00
440e684b35 feat: Postgresql score based on the postgres capability now. true infrastructure abstraction!
Some checks failed
Run Check Script / check (pull_request) Failing after 33s
2025-12-16 23:35:52 -05:00
b0383454f0 feat(types): Add utility initialization functions for StorageSize such as StorageSize::kb(324)
Some checks failed
Run Check Script / check (pull_request) Failing after 41s
2025-12-16 16:24:53 -05:00
9e8f3ce52f feat(postgres): Postgres Connection Test score now has a script that provides more insight. Not quite working properly but easy to improve at this point.
Some checks failed
Run Check Script / check (pull_request) Failing after 43s
2025-12-16 15:53:54 -05:00
c6f859f973 fix(OPNSense): update fields for haproxy and opnsense following most recent update and upgrade to opnsense
Some checks failed
Run Check Script / check (pull_request) Failing after 1m25s
2025-12-16 15:31:35 -05:00
bbf28a1a28 Merge branch 'master' into fix/opnsense_update
Some checks failed
Run Check Script / check (pull_request) Failing after 1m21s
2025-12-16 20:00:54 +00:00
c3ec7070ec feat: PostgreSQL public and Connection test score, also moved k8s_anywhere in a folder
Some checks failed
Run Check Script / check (pull_request) Failing after 40s
2025-12-16 14:57:02 -05:00
29821d5e9f feat: TlsPassthroughScore works, improved logging, fixed CRD
Some checks failed
Run Check Script / check (pull_request) Failing after 35s
2025-12-15 19:09:10 -05:00
446e079595 wip: public postgres many fixes and refactoring to have a more cohesive routing management
Some checks failed
Run Check Script / check (pull_request) Failing after 41s
2025-12-15 17:04:30 -05:00
e0da5764fb feat(types): Added Rfc1123 String type, useful for k8s names
Some checks failed
Run Check Script / check (pull_request) Failing after 38s
2025-12-15 12:57:52 -05:00
e9cab92585 feat: Impl TlsRoute for K8sAnywhereTopology 2025-12-14 22:22:09 -05:00
d06bd4dac6 feat: OKD route CRD and OKD specific route score
All checks were successful
Run Check Script / check (pull_request) Successful in 1m30s
2025-12-14 17:05:26 -05:00
142300802d wip: TlsRoute score first version
Some checks failed
Run Check Script / check (pull_request) Failing after 1m11s
2025-12-14 06:19:33 -05:00
2254641f3d fix: Tests, doctests, formatting
All checks were successful
Run Check Script / check (pull_request) Successful in 1m38s
2025-12-13 17:56:53 -05:00
b61e4f9a96 wip: Expose postgres publicly. Created tlsroute capability and postgres implementations
Some checks failed
Run Check Script / check (pull_request) Failing after 41s
2025-12-13 09:47:59 -05:00
2e367d88d4 feat: PostgreSQL score works, added postgresql example, tested on OKD 4.19, added note about incompatible default namespace settings
Some checks failed
Run Check Script / check (pull_request) Failing after 2m37s
2025-12-11 22:54:57 -05:00
9edc42a665 feat: PostgreSQLScore happy path using cnpg operator
Some checks failed
Run Check Script / check (pull_request) Failing after 37s
2025-12-11 14:36:39 -05:00
f242aafebb feat: Subscription for cnpg-operator fixed default values, tested and added to operatorhub example.
All checks were successful
Run Check Script / check (pull_request) Successful in 1m31s
2025-12-11 12:18:28 -05:00
3e14ebd62c feat: cnpg operator score
All checks were successful
Run Check Script / check (pull_request) Successful in 1m36s
2025-12-10 22:55:08 -05:00
1b19638df4 wip(failover): Started implementation of the FailoverTopology with PostgreSQL capability
All checks were successful
Run Check Script / check (pull_request) Successful in 1m32s
This is our first Higher Order Topology (see ADR-015)
2025-12-10 21:15:51 -05:00
d39b1957cd feat(k8s_app): OperatorhubCatalogSourceScore can now install the operatorhub catalogsource on a cluster that already has operator lifecycle manager installed 2025-12-10 16:58:58 -05:00
bfdb11b217 Merge pull request 'feat(OKDInstallation): Implemented bootstrap of okd worker node, added features to allow both control plane and worker node to use the same bootstrap_okd_node score' (#198) from feat/okd-nodes into master
Some checks failed
Run Check Script / check (push) Successful in 1m57s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m59s
Reviewed-on: #198
Reviewed-by: johnride <jg@nationtech.io>
2025-12-10 19:27:51 +00:00
d5fadf4f44 fix: deleted storage node role, fixed erroneous comment, modified score name to be in line with clean code naming conventions, fixed how the OKDNodeInstallationScore is called via OKDSetup03ControlPlaneScore and OKDSetup04WorkersScore
All checks were successful
Run Check Script / check (pull_request) Successful in 1m45s
2025-12-10 14:20:24 -05:00
357ca93d90 wip: FailoverTopology implementation for PostgreSQL on the way! 2025-12-10 13:12:53 -05:00
8103932f23 doc: Initial documentation for the MultisitePostgreSQL module 2025-12-10 13:12:53 -05:00
50bd5c5bba feat(OKDInstallation): Implemented bootstrap of okd worker node, added features to allow both control plane and worker node to use the same bootstrap_okd_node score
All checks were successful
Run Check Script / check (pull_request) Successful in 1m46s
2025-12-10 12:15:07 -05:00
9fbdc72cd0 fix: git ignore
All checks were successful
Run Check Script / check (pull_request) Successful in 1m29s
2025-11-18 08:41:09 -05:00
78e595e696 feat: added alert manager routes to openshift cluster monitoring
All checks were successful
Run Check Script / check (pull_request) Successful in 1m37s
2025-11-17 15:22:43 -05:00
90b89224d8 fix: added K8sName type for strict naming of Kubernetes resources 2025-11-17 15:20:51 -05:00
43a17811cc fix formatting
Some checks failed
Run Check Script / check (pull_request) Failing after 1m49s
2025-11-14 12:53:43 -05:00
93ac89157a feat: added score to enable snmp_server on brocade switch and a working example
All checks were successful
Run Check Script / check (pull_request) Successful in 2m4s
2025-11-14 12:49:00 -05:00
29c82db70d fix: added fields missing for haproxy after most recent update
Some checks failed
Run Check Script / check (pull_request) Failing after 49s
2025-11-12 13:21:55 -05:00
734c9704ab feat: provide an unmanaged switch
Some checks failed
Run Check Script / check (pull_request) Failing after 40s
2025-11-11 13:30:03 -05:00
8ee3f8a4ad chore: Update harmony-inventory-agent binary as some fixes were introduced: port is 25000 now and nbd devices won't make the inventory crash
Some checks failed
Run Check Script / check (pull_request) Failing after 40s
2025-11-11 11:32:42 -05:00
d3634a6313 fix(types): Switch port location failed on port channel interfaces 2025-11-11 09:53:59 -05:00
a0a8d5277c fix: opnsense definitions more accurate for various resources such as ProxyGeneral, System, StaticMap, Job, etc. Also fixed brocade crate export and some warnings 2025-11-11 09:06:36 -05:00
43b04edbae feat(brocade): Add feature and example to remove port channel and configure switchport 2025-11-10 22:59:37 -05:00
755a4b7749 feat(inventory-agent): Discover algorithm by scanning a subnet of ips, slower than mdns but more reliable and versatile 2025-11-10 22:15:31 -05:00
5953bc58f4 feat: added function to enable snmp-server for brocade switches 2025-11-10 14:57:22 -05:00
51a5afbb6d fix: added some extra details
All checks were successful
Run Check Script / check (pull_request) Successful in 1m4s
2025-11-07 09:04:27 -05:00
759a9287d3 Merge remote-tracking branch 'origin/master' into feat/cluster_monitoring
Some checks failed
Run Check Script / check (pull_request) Failing after 19s
2025-11-05 17:02:10 -05:00
24922321b1 fix: webhook name must be k8s field compliant, add a FIXME note 2025-11-05 16:59:48 -05:00
7b542c9865 feat: OPNSense Topology useful to interact with only an opnsense instance.
All checks were successful
Run Check Script / check (pull_request) Successful in 1m11s
With this work, no need to initialize a full HAClusterTopology to run
opnsense scores.

Also added an example showing how to use it and perform basic
operations.

Made a video out of it, might publish it at some point!
2025-11-05 10:02:45 -05:00
cf84f2cce8 wip: cluster_monitoring almost there, a kink to fix in the yaml handling
All checks were successful
Run Check Script / check (pull_request) Successful in 1m15s
2025-10-29 23:12:34 -04:00
a12d12aa4f feat: example OpenshiftClusterAlertScore
All checks were successful
Run Check Script / check (pull_request) Successful in 1m17s
2025-10-29 17:29:28 -04:00
cefb65933a wip: cluster monitoring score coming along, this simply edits OKD builtin alertmanager instance and adds a receiver 2025-10-29 17:26:21 -04:00
c2fa4f1869 fix: cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m21s
2025-10-29 13:53:58 -04:00
ee278ac817 Merge remote-tracking branch 'origin/master' into feat/install_opnsense_node_exporter
Some checks failed
Run Check Script / check (pull_request) Failing after 25s
2025-10-29 13:49:56 -04:00
09a06f136e Merge remote-tracking branch 'origin/master' into feat/install_opnsense_node_exporter
All checks were successful
Run Check Script / check (pull_request) Successful in 1m21s
2025-10-29 13:42:12 -04:00
5f147fa672 fix: opnsense-config reload_config() returns live config.xml rather than dropping it, allows function is_package_installed() to read live state after package installation rather than old config before installation
All checks were successful
Run Check Script / check (pull_request) Successful in 1m17s
2025-10-29 13:25:37 -04:00
9ba939bde1 wip: cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m16s
2025-10-28 15:45:02 -04:00
44bf21718c wip: example score with impl topology for opnsense topology 2025-10-28 14:41:15 -04:00
5e1580e5c1 Merge branch 'master' into doc/clone 2025-10-23 19:32:26 +00:00
1802b10ddf fix: translated documentation notes into English 2025-10-23 15:31:45 -04:00
008b03f979 fix: changed documentation language to english 2025-10-23 14:56:07 -04:00
9f7b90d182 feat(argocd): Can now detect argocd instance when already installed and write crd accordingly. One major caveat though is that crd versions are not managed properly yet
Some checks failed
Run Check Script / check (pull_request) Failing after 39s
2025-10-23 13:12:38 -04:00
dc70266b5a wip: install argocd app depending on how argocd is already installed in the cluster 2025-10-23 13:11:39 -04:00
8fb755cda1 wip: argocd discovery 2025-10-23 13:10:35 -04:00
cb7a64b160 feat: Support tls enabled by default on rust web app 2025-10-23 13:10:35 -04:00
afdd511a6d feat(application): Webapp feature with production dns 2025-10-23 13:10:35 -04:00
5ab58f0253 fix: added impl node exporter for hacluster topology and dummy infra
All checks were successful
Run Check Script / check (pull_request) Successful in 1m26s
2025-10-22 14:39:12 -04:00
5af13800b7 fix: removed unimplemented macro and returned Err instead
Some checks failed
Run Check Script / check (pull_request) Failing after 29s
some formatting error
2025-10-22 11:51:22 -04:00
8126b233d8 feat: implementation for opnsense os-node_exporter
Some checks failed
Run Check Script / check (pull_request) Failing after 41s
2025-10-22 11:27:28 -04:00
e5eb7fde9f doc to clone and transfer a coreos disk
All checks were successful
Run Check Script / check (pull_request) Successful in 1m11s
2025-10-09 15:29:09 -04:00
dd3f07e5b7 doc for removing worker flag from cp on UPI
All checks were successful
Run Check Script / check (pull_request) Successful in 1m13s
2025-10-09 15:28:42 -04:00
172 changed files with 7890 additions and 915 deletions

Cargo.lock (generated, 163 changes)

@@ -690,6 +690,41 @@ dependencies = [
"tokio",
]
[[package]]
name = "brocade-snmp-server"
version = "0.1.0"
dependencies = [
"base64 0.22.1",
"brocade",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_secret",
"harmony_types",
"log",
"serde",
"tokio",
"url",
]
[[package]]
name = "brocade-switch"
version = "0.1.0"
dependencies = [
"async-trait",
"brocade",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_types",
"log",
"serde",
"tokio",
"url",
]
[[package]]
name = "brotli"
version = "8.0.2"
@@ -1776,6 +1811,21 @@ dependencies = [
"url",
]
[[package]]
name = "example-multisite-postgres"
version = "0.1.0"
dependencies = [
"cidr",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_types",
"log",
"tokio",
"url",
]
[[package]]
name = "example-nanodc"
version = "0.1.0"
@@ -1794,6 +1844,21 @@ dependencies = [
"url",
]
[[package]]
name = "example-nats"
version = "0.1.0"
dependencies = [
"cidr",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_types",
"log",
"tokio",
"url",
]
[[package]]
name = "example-ntfy"
version = "0.1.0"
@@ -1804,6 +1869,25 @@ dependencies = [
"url",
]
[[package]]
name = "example-okd-cluster-alerts"
version = "0.1.0"
dependencies = [
"brocade",
"cidr",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_secret",
"harmony_secret_derive",
"harmony_types",
"log",
"serde",
"tokio",
"url",
]
[[package]]
name = "example-okd-install"
version = "0.1.0"
@@ -1835,6 +1919,21 @@ dependencies = [
"url",
]
[[package]]
name = "example-operatorhub-catalogsource"
version = "0.1.0"
dependencies = [
"cidr",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_types",
"log",
"tokio",
"url",
]
[[package]]
name = "example-opnsense"
version = "0.1.0"
@@ -1853,6 +1952,55 @@ dependencies = [
"url",
]
[[package]]
name = "example-postgresql"
version = "0.1.0"
dependencies = [
"cidr",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_types",
"log",
"tokio",
"url",
]
[[package]]
name = "example-public-postgres"
version = "0.1.0"
dependencies = [
"cidr",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_types",
"log",
"tokio",
"url",
]
[[package]]
name = "example-opnsense-node-exporter"
version = "0.1.0"
dependencies = [
"async-trait",
"cidr",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_secret",
"harmony_secret_derive",
"harmony_types",
"log",
"serde",
"tokio",
"url",
]
[[package]]
name = "example-pxe"
version = "0.1.0"
@@ -2479,6 +2627,19 @@ dependencies = [
"tokio",
]
[[package]]
name = "harmony_inventory_builder"
version = "0.1.0"
dependencies = [
"cidr",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_types",
"tokio",
"url",
]
[[package]]
name = "harmony_macros"
version = "0.1.0"
@@ -2544,8 +2705,10 @@ dependencies = [
name = "harmony_types"
version = "0.1.0"
dependencies = [
"log",
"rand 0.9.2",
"serde",
"serde_json",
"url",
]


@@ -0,0 +1,90 @@
# Architecture Decision Record: Global Orchestration Mesh & The Harmony Agent
**Status:** Proposed
**Date:** 2025-12-19
## Context
Harmony is designed to enable a truly decentralized infrastructure where independent clusters—owned by different organizations or running on diverse hardware—can collaborate reliably. This vision combines the decentralization of Web3 with the performance and capabilities of Web2.
Currently, Harmony operates as a stateless CLI tool, invoked manually or via CI runners. While effective for deployment, this model presents a critical limitation: **a CLI cannot react to real-time events.**
To achieve automated failover and dynamic workload management, we need a system that is "always on." Relying on manual intervention or scheduled CI jobs to recover from a cluster failure creates unacceptable latency and prevents us from scaling to thousands of nodes.
Furthermore, we face a challenge in serving diverse workloads:
* **Financial workloads** require absolute consistency (CP - Consistency/Partition Tolerance).
* **AI/Inference workloads** require maximum availability (AP - Availability/Partition Tolerance).
There are many more use cases, but those are the two extremes.
We need a unified architecture that automates cluster coordination and supports both consistency models without requiring a complete re-architecture in the future.
## Decision
We propose a fundamental architectural evolution, one anticipated since Harmony's inception: transitioning Harmony from a purely ephemeral CLI tool to a system that includes a persistent **Harmony Agent**. This Agent will connect to a **Global Orchestration Mesh** built on a strongly consistent protocol.
The proposal consists of four key pillars:
### 1. The Harmony Agent (New Component)
We will develop a long-running process (Daemon/Agent) to be deployed alongside workloads.
* **Shift from CLI:** Unlike the CLI, which applies configuration and exits, the Agent maintains a persistent connection to the mesh.
* **Responsibility:** It actively monitors cluster health, participates in consensus, and executes lifecycle commands (start/stop/fence) instantly when the mesh dictates a state change.
### 2. The Technology: NATS JetStream
We will utilize **NATS JetStream** as the underlying transport and consensus layer for the Agent and the Mesh.
* **Why not raw Raft?** Implementing a raw Raft library requires building and maintaining the transport layer, log compaction, snapshotting, and peer discovery manually. NATS JetStream provides a battle-tested, distributed log and Key-Value store (based on Raft) out of the box, along with a high-performance pub/sub system for event propagation.
* **Role:** It will act as the "source of truth" for the cluster state.
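As a rough illustration of that "source of truth" role, here is a minimal sketch of an Agent reading and writing shared state through a JetStream Key-Value bucket. It assumes the `async-nats`, `futures`, and `tokio` crates and invented key names; the actual Agent integration is still to be designed (see the implementation plan below).
```rust
// Minimal sketch, assuming the `async-nats`, `futures`, and `tokio` crates;
// the real Agent integration does not exist yet. A JetStream Key-Value
// bucket (Raft-backed) plays the "source of truth" role described above.
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = async_nats::connect("nats://localhost:4222").await?;
    let jetstream = async_nats::jetstream::new(client);

    // Create (or open) a replicated bucket holding shared cluster state.
    let kv = jetstream
        .create_key_value(async_nats::jetstream::kv::Config {
            bucket: "cluster-state".to_string(),
            ..Default::default()
        })
        .await?;

    // A critical state change is a consistent write to the bucket.
    kv.put("topology.primary", "site-a".into()).await?;

    // Every Agent can watch the key and react to changes in real time.
    let mut watch = kv.watch("topology.primary").await?;
    while let Some(entry) = watch.next().await {
        println!("state changed: {:?}", entry?);
    }
    Ok(())
}
```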
### 3. Strong Consistency at the Mesh Layer
The mesh will operate with **Strong Consistency** by default.
* All critical cluster state changes (topology updates, lease acquisitions, leadership elections) will require consensus among the Agents.
* This ensures that in the event of a network partition, we have a mathematical guarantee of which side holds the valid state, preventing data corruption.
### 4. Public UX: The `FailoverStrategy` Abstraction
To keep the user experience stable and simple, we will expose the complexity of the mesh through a high-level configuration API, tentatively called `FailoverStrategy`.
The user defines the *intent* in their config, and the Harmony Agent automates the *execution*:
* **`FailoverStrategy::AbsoluteConsistency`**:
* *Use Case:* Banking, Transactional DBs.
* *Behavior:* If the mesh detects a partition, the Agent on the minority side immediately halts workloads. No split-brain is ever allowed.
* **`FailoverStrategy::SplitBrainAllowed`**:
* *Use Case:* LLM Inference, Stateless Web Servers.
* *Behavior:* If a partition occurs, the Agent keeps workloads running to maximize uptime. State is reconciled when connectivity returns.
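To make the intent-versus-execution split concrete, a minimal Rust sketch of what this configuration surface could look like follows. The type and field names are illustrative assumptions, not a committed API:
```rust
// Illustrative sketch only; the ADR proposes the abstraction, not this exact API.
#[derive(Debug, Clone)]
pub enum FailoverStrategy {
    /// CP: halt workloads on the minority side of a partition; no split-brain.
    AbsoluteConsistency,
    /// AP: keep workloads running through a partition; reconcile afterwards.
    SplitBrainAllowed,
}

pub struct WorkloadSpec {
    pub name: String,
    pub failover: FailoverStrategy,
}

/// What an Agent might do when the mesh reports a network partition.
fn on_partition(spec: &WorkloadSpec, on_minority_side: bool) {
    match spec.failover {
        FailoverStrategy::AbsoluteConsistency if on_minority_side => {
            // Fence immediately: better unavailable than inconsistent.
            println!("halting workload '{}'", spec.name);
        }
        _ => {
            // Keep serving; state is reconciled when connectivity returns.
            println!("keeping workload '{}' running", spec.name);
        }
    }
}

fn main() {
    let db = WorkloadSpec {
        name: "finance-db".into(),
        failover: FailoverStrategy::AbsoluteConsistency,
    };
    on_partition(&db, true); // minority side of a partition: halt
}
```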
## Rationale
**The Necessity of an Agent**
You cannot automate what you do not monitor. Moving to an Agent-based model is the only way to achieve sub-second reaction times to infrastructure failures. It transforms Harmony from a deployment tool into a self-healing platform.
**Scaling & Decentralization**
To allow independent clusters to collaborate, they need a shared language. A strongly consistent mesh allows Cluster A (Organization X) and Cluster B (Organization Y) to agree on workload placement without a central authority.
**Why Strong Consistency First?**
It is technically feasible to relax a strongly consistent system to allow for "Split Brain" behavior (AP) when the user requests it. However, it is nearly impossible to take an eventually consistent system and force it to be strongly consistent (CP) later. By starting with strict constraints, we cover the hardest use cases (Finance) immediately.
**Future Topologies**
While our immediate need is `FailoverTopology` (Multi-site), this architecture supports any future topology logic:
* **`CostTopology`**: Agents negotiate to route workloads to the cluster with the cheapest spot instances.
* **`HorizontalTopology`**: Spreading a single workload across 100 clusters for massive scale.
* **`GeoTopology`**: Ensuring data stays within specific legal jurisdictions.
The mesh provides the *capability* (consensus and messaging); the topology provides the *logic*.
## Consequences
**Positive**
* **Automation:** Eliminates manual failover, enabling massive scale.
* **Reliability:** Guarantees data safety for critical workloads by default.
* **Flexibility:** A single codebase serves both high-frequency trading and AI inference.
* **Stability:** The public API remains abstract, allowing us to optimize the mesh internals without breaking user code.
**Negative**
* **Deployment Complexity:** Users must now deploy and maintain a running service (the Agent) rather than just downloading a binary.
* **Engineering Complexity:** Integrating NATS JetStream and handling distributed state machines is significantly more complex than the current CLI logic.
## Implementation Plan (Short Term)
1. **Agent Bootstrap:** Create the initial scaffold for the Harmony Agent (daemon).
2. **Mesh Integration:** Prototype NATS JetStream embedding within the Agent.
3. **Strategy Implementation:** Add `FailoverStrategy` to the configuration schema and implement the logic in the Agent to read and act on it.
4. **Migration:** Transition the current manual failover scripts into event-driven logic handled by the Agent.


@@ -1,6 +1,6 @@
use std::net::{IpAddr, Ipv4Addr};
use brocade::BrocadeOptions;
use brocade::{BrocadeOptions, ssh};
use harmony_secret::{Secret, SecretManager};
use harmony_types::switch::PortLocation;
use serde::{Deserialize, Serialize};
@@ -16,23 +16,28 @@ async fn main() {
env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).init();
// let ip = IpAddr::V4(Ipv4Addr::new(10, 0, 0, 250)); // old brocade @ ianlet
let ip = IpAddr::V4(Ipv4Addr::new(192, 168, 55, 101)); // brocade @ sto1
let ip = IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)); // brocade @ sto1
// let ip = IpAddr::V4(Ipv4Addr::new(192, 168, 4, 11)); // brocade @ st
let switch_addresses = vec![ip];
let config = SecretManager::get_or_prompt::<BrocadeSwitchAuth>()
.await
.unwrap();
// let config = SecretManager::get_or_prompt::<BrocadeSwitchAuth>()
// .await
// .unwrap();
let brocade = brocade::init(
&switch_addresses,
22,
&config.username,
&config.password,
Some(BrocadeOptions {
// &config.username,
// &config.password,
"admin",
"password",
BrocadeOptions {
dry_run: true,
ssh: ssh::SshOptions {
port: 2222,
..Default::default()
},
..Default::default()
}),
},
)
.await
.expect("Brocade client failed to connect");
@@ -54,6 +59,7 @@ async fn main() {
}
println!("--------------");
todo!();
let channel_name = "1";
brocade.clear_port_channel(channel_name).await.unwrap();


@@ -1,7 +1,8 @@
use super::BrocadeClient;
use crate::{
BrocadeInfo, Error, ExecutionMode, InterSwitchLink, InterfaceInfo, MacAddressEntry,
PortChannelId, PortOperatingMode, parse_brocade_mac_address, shell::BrocadeShell,
PortChannelId, PortOperatingMode, SecurityLevel, parse_brocade_mac_address,
shell::BrocadeShell,
};
use async_trait::async_trait;
@@ -140,7 +141,7 @@ impl BrocadeClient for FastIronClient {
async fn configure_interfaces(
&self,
_interfaces: Vec<(String, PortOperatingMode)>,
_interfaces: &Vec<(String, PortOperatingMode)>,
) -> Result<(), Error> {
todo!()
}
@@ -209,4 +210,20 @@ impl BrocadeClient for FastIronClient {
info!("[Brocade] Port-channel '{channel_name}' cleared.");
Ok(())
}
async fn enable_snmp(&self, user_name: &str, auth: &str, des: &str) -> Result<(), Error> {
let commands = vec![
"configure terminal".into(),
"snmp-server view ALL 1 included".into(),
"snmp-server group public v3 priv read ALL".into(),
format!(
"snmp-server user {user_name} groupname public auth md5 auth-password {auth} priv des priv-password {des}"
),
"exit".into(),
];
self.shell
.run_commands(commands, ExecutionMode::Regular)
.await?;
Ok(())
}
}


@@ -14,11 +14,12 @@ use async_trait::async_trait;
use harmony_types::net::MacAddress;
use harmony_types::switch::{PortDeclaration, PortLocation};
use regex::Regex;
use serde::Serialize;
mod fast_iron;
mod network_operating_system;
mod shell;
mod ssh;
pub mod ssh;
#[derive(Default, Clone, Debug)]
pub struct BrocadeOptions {
@@ -56,7 +57,7 @@ enum ExecutionMode {
#[derive(Clone, Debug)]
pub struct BrocadeInfo {
os: BrocadeOs,
version: String,
_version: String,
}
#[derive(Clone, Debug)]
@@ -118,7 +119,7 @@ impl fmt::Display for InterfaceType {
}
/// Defines the primary configuration mode of a switch interface, representing mutually exclusive roles.
#[derive(Debug, PartialEq, Eq, Clone)]
#[derive(Debug, PartialEq, Eq, Clone, Serialize)]
pub enum PortOperatingMode {
/// The interface is explicitly configured for Brocade fabric roles (ISL or Trunk enabled).
Fabric,
@@ -141,12 +142,11 @@ pub enum InterfaceStatus {
pub async fn init(
ip_addresses: &[IpAddr],
port: u16,
username: &str,
password: &str,
options: Option<BrocadeOptions>,
options: BrocadeOptions,
) -> Result<Box<dyn BrocadeClient + Send + Sync>, Error> {
let shell = BrocadeShell::init(ip_addresses, port, username, password, options).await?;
let shell = BrocadeShell::init(ip_addresses, username, password, options).await?;
let version_info = shell
.with_session(ExecutionMode::Regular, |session| {
@@ -208,7 +208,7 @@ pub trait BrocadeClient: std::fmt::Debug {
/// Configures a set of interfaces to be operated with a specified mode (access ports, ISL, etc.).
async fn configure_interfaces(
&self,
interfaces: Vec<(String, PortOperatingMode)>,
interfaces: &Vec<(String, PortOperatingMode)>,
) -> Result<(), Error>;
/// Scans the existing configuration to find the next available (unused)
@@ -237,6 +237,15 @@ pub trait BrocadeClient: std::fmt::Debug {
ports: &[PortLocation],
) -> Result<(), Error>;
/// Enables Simple Network Management Protocol (SNMP) server for switch
///
/// # Parameters
///
/// * `user_name`: The user name for the snmp server
/// * `auth`: The password for authentication process for verifying the identity of a device
/// * `des`: The Data Encryption Standard algorithm key
async fn enable_snmp(&self, user_name: &str, auth: &str, des: &str) -> Result<(), Error>;
/// Removes all configuration associated with the specified Port-Channel name.
///
/// This operation should be idempotent; attempting to clear a non-existent
@@ -263,7 +272,7 @@ async fn get_brocade_info(session: &mut BrocadeSession) -> Result<BrocadeInfo, E
return Ok(BrocadeInfo {
os: BrocadeOs::NetworkOperatingSystem,
version,
_version: version,
});
} else if output.contains("ICX") {
let re = Regex::new(r"(?m)^\s*SW: Version\s*(?P<version>[a-zA-Z0-9.\-]+)")
@@ -276,7 +285,7 @@ async fn get_brocade_info(session: &mut BrocadeSession) -> Result<BrocadeInfo, E
return Ok(BrocadeInfo {
os: BrocadeOs::FastIron,
version,
_version: version,
});
}
@@ -300,6 +309,11 @@ fn parse_brocade_mac_address(value: &str) -> Result<MacAddress, String> {
Ok(MacAddress(bytes))
}
#[derive(Debug)]
pub enum SecurityLevel {
AuthPriv(String),
}
#[derive(Debug)]
pub enum Error {
NetworkError(String),


@@ -8,7 +8,7 @@ use regex::Regex;
use crate::{
BrocadeClient, BrocadeInfo, Error, ExecutionMode, InterSwitchLink, InterfaceInfo,
InterfaceStatus, InterfaceType, MacAddressEntry, PortChannelId, PortOperatingMode,
parse_brocade_mac_address, shell::BrocadeShell,
SecurityLevel, parse_brocade_mac_address, shell::BrocadeShell,
};
#[derive(Debug)]
@@ -187,7 +187,7 @@ impl BrocadeClient for NetworkOperatingSystemClient {
async fn configure_interfaces(
&self,
interfaces: Vec<(String, PortOperatingMode)>,
interfaces: &Vec<(String, PortOperatingMode)>,
) -> Result<(), Error> {
info!("[Brocade] Configuring {} interface(s)...", interfaces.len());
@@ -204,9 +204,12 @@ impl BrocadeClient for NetworkOperatingSystemClient {
PortOperatingMode::Trunk => {
commands.push("switchport".into());
commands.push("switchport mode trunk".into());
commands.push("no spanning-tree shutdown".into());
commands.push("switchport trunk allowed vlan all".into());
commands.push("no switchport trunk tag native-vlan".into());
commands.push("spanning-tree shutdown".into());
commands.push("no fabric isl enable".into());
commands.push("no fabric trunk enable".into());
commands.push("no shutdown".into());
}
PortOperatingMode::Access => {
commands.push("switchport".into());
@@ -330,4 +333,20 @@ impl BrocadeClient for NetworkOperatingSystemClient {
info!("[Brocade] Port-channel '{channel_name}' cleared.");
Ok(())
}
async fn enable_snmp(&self, user_name: &str, auth: &str, des: &str) -> Result<(), Error> {
let commands = vec![
"configure terminal".into(),
"snmp-server view ALL 1 included".into(),
"snmp-server group public v3 priv read ALL".into(),
format!(
"snmp-server user {user_name} groupname public auth md5 auth-password {auth} priv des priv-password {des}"
),
"exit".into(),
];
self.shell
.run_commands(commands, ExecutionMode::Regular)
.await?;
Ok(())
}
}


@@ -16,7 +16,6 @@ use tokio::time::timeout;
#[derive(Debug)]
pub struct BrocadeShell {
ip: IpAddr,
port: u16,
username: String,
password: String,
options: BrocadeOptions,
@@ -27,33 +26,31 @@ pub struct BrocadeShell {
impl BrocadeShell {
pub async fn init(
ip_addresses: &[IpAddr],
port: u16,
username: &str,
password: &str,
options: Option<BrocadeOptions>,
options: BrocadeOptions,
) -> Result<Self, Error> {
let ip = ip_addresses
.first()
.ok_or_else(|| Error::ConfigurationError("No IP addresses provided".to_string()))?;
let base_options = options.unwrap_or_default();
let options = ssh::try_init_client(username, password, ip, base_options).await?;
let brocade_ssh_client_options =
ssh::try_init_client(username, password, ip, options).await?;
Ok(Self {
ip: *ip,
port,
username: username.to_string(),
password: password.to_string(),
before_all_commands: vec![],
after_all_commands: vec![],
options,
options: brocade_ssh_client_options,
})
}
pub async fn open_session(&self, mode: ExecutionMode) -> Result<BrocadeSession, Error> {
BrocadeSession::open(
self.ip,
self.port,
self.options.ssh.port,
&self.username,
&self.password,
self.options.clone(),


@@ -2,6 +2,7 @@ use std::borrow::Cow;
use std::sync::Arc;
use async_trait::async_trait;
use log::debug;
use russh::client::Handler;
use russh::kex::DH_G1_SHA1;
use russh::kex::ECDH_SHA2_NISTP256;
@@ -10,29 +11,43 @@ use russh_keys::key::SSH_RSA;
use super::BrocadeOptions;
use super::Error;
#[derive(Default, Clone, Debug)]
#[derive(Clone, Debug)]
pub struct SshOptions {
pub preferred_algorithms: russh::Preferred,
pub port: u16,
}
impl Default for SshOptions {
fn default() -> Self {
Self {
preferred_algorithms: Default::default(),
port: 22,
}
}
}
impl SshOptions {
fn ecdhsa_sha2_nistp256() -> Self {
fn ecdhsa_sha2_nistp256(port: u16) -> Self {
Self {
preferred_algorithms: russh::Preferred {
kex: Cow::Borrowed(&[ECDH_SHA2_NISTP256]),
key: Cow::Borrowed(&[SSH_RSA]),
..Default::default()
},
port,
..Default::default()
}
}
fn legacy() -> Self {
fn legacy(port: u16) -> Self {
Self {
preferred_algorithms: russh::Preferred {
kex: Cow::Borrowed(&[DH_G1_SHA1]),
key: Cow::Borrowed(&[SSH_RSA]),
..Default::default()
},
port,
..Default::default()
}
}
}
@@ -57,18 +72,21 @@ pub async fn try_init_client(
ip: &std::net::IpAddr,
base_options: BrocadeOptions,
) -> Result<BrocadeOptions, Error> {
let mut default = SshOptions::default();
default.port = base_options.ssh.port;
let ssh_options = vec![
SshOptions::default(),
SshOptions::ecdhsa_sha2_nistp256(),
SshOptions::legacy(),
default,
SshOptions::ecdhsa_sha2_nistp256(base_options.ssh.port),
SshOptions::legacy(base_options.ssh.port),
];
for ssh in ssh_options {
let opts = BrocadeOptions {
ssh,
ssh: ssh.clone(),
..base_options.clone()
};
let client = create_client(*ip, 22, username, password, &opts).await;
debug!("Creating client {ip}:{} {username}", ssh.port);
let client = create_client(*ip, ssh.port, username, password, &opts).await;
match client {
Ok(_) => {

Binary file not shown.


@@ -0,0 +1,133 @@
## Working procedure to clone and restore a CoreOS disk from an OKD cluster
### **Step 1 - take a backup**
```
sudo dd if=/dev/old of=/dev/backup status=progress
```
### **Step 2 - clone the beginning of the old disk to the new disk**
```
sudo dd if=/dev/old of=/dev/new status=progress count=1000 bs=1M
```
### **Step 3 - verify and modify disk partitions**
List the disk partitions:
```
sgdisk -p /dev/new
```
If the new disk is smaller than the old one but there is unused space on the old disk's XFS partition, modify the partitions of the new disk:
```
gdisk /dev/new
```
Inside gdisk, use these commands:
```
v -> verify table
p -> print table
d -> delete a partition
n -> recreate the partition with the same partition number as the deleted one
```
For the end sector, either specify the new end or press Enter for the maximum available.
When asked for the partition type, enter the same type code (the old one is shown).
```
p -> verify
w -> write
```
Create an XFS filesystem on the new partition (<new4>):
```
sudo mkfs.xfs -f /dev/new4
```
### **Step 4 - copy the old PARTUUID**
**Careful here.**
Get the old PARTUUID:
```
sgdisk -i <partition_number> /dev/old_disk # Note the "Partition unique GUID"
```
Get the labels:
```
sgdisk -p /dev/old_disk # Shows partition names in the table
blkid /dev/old_disk* # Shows PARTUUIDs and labels for all partitions
```
Set it on the new disk:
```
sgdisk -u <partition_number>:<old_partuuid> /dev/sdc
```
Set the partition name:
```
sgdisk -c <partition_number>:"<old_name>" /dev/sdc
```
Verify everything:
```
lsblk -o NAME,SIZE,PARTUUID,PARTLABEL /dev/old_disk
```
### **Step 5 - Mount disks and copy files from old to new disk**
Mount the filesystems before copying:
```
mkdir -p /mnt/new
mkdir -p /mnt/old
mount /dev/old4 /mnt/old
mount /dev/new4 /mnt/new
```
Copy. With the -n flag, rsync performs a dry run:
```
rsync -aAXHvn --numeric-ids /source/ /destination/
```
```
rsync -aAXHv --numeric-ids /source/ /destination/
```
### **Step 6 - Set correct UUID for new partition 4**
To set the UUID with xfs_admin, the devices must be unmounted first:
```
umount /mnt/new
umount /mnt/old
```
To set the correct UUID for partition 4, read it from the old partition:
```
blkid /dev/old4
```
```
xfs_admin -U <old_uuid> /dev/new_partition
```
To set the labels, get the existing partition name:
```
sgdisk -i 4 /dev/sda | grep "Partition name"
```
Set it:
```
sgdisk -c 4:"<label_name>" /dev/sdc
```
Or set the filesystem label instead (check the existing one with `xfs_admin -l /dev/old_partition`):
```
xfs_admin -L <label> /dev/new_partition
```
### **Step 7 - Verify**
verify everything:
```
sgdisk -p /dev/sda # Old disk
sgdisk -p /dev/sdc # New disk
```
```
lsblk -o NAME,SIZE,PARTUUID,PARTLABEL /dev/sda
lsblk -o NAME,SIZE,PARTUUID,PARTLABEL /dev/sdc
```
```
blkid /dev/sda* | grep UUID=
blkid /dev/sdc* | grep UUID=
```


@@ -0,0 +1,56 @@
## **Remove Worker flag from OKD Control Planes**
### **Context**
On OKD user-provisioned infrastructure (UPI), the control plane nodes can carry the node-role.kubernetes.io/worker label, which allows non-critical workloads to be scheduled on the control planes.
### **Observed Symptoms**
- After adding servers to the HAProxy backend, each backend appears down
- Traffic is redirected to the control planes instead of the workers
- The router-default pods are incorrectly placed on the control planes rather than on the workers
- Pods are being scheduled on the control planes, causing cluster instability
```
ss -tlnp | grep 80
```
- shows the haproxy process listening at 0.0.0.0:80 on the control planes
- the same problem occurs on port 443
- in the rook-ceph namespace, certain pods are deployed on the control planes rather than on the worker nodes
### **Cause**
- When installing via UPI, the node roles (master, worker) are not managed by the Machine Config Operator, and the control planes are schedulable by default.
### **Diagnostic**
check node labels:
```
oc get nodes --show-labels | grep control-plane
```
Inspect the kubelet configuration:
```
cat /etc/systemd/system/kubelet.service
```
find the line:
```
--node-labels=node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master,node-role.kubernetes.io/worker
```
The presence of the worker label confirms the problem.
Verify the label doesn't come from the MCO:
```
oc get machineconfig | grep rendered-master
```
### **Solution**
To make the control planes non-schedulable, patch the cluster Scheduler resource:
```
oc patch scheduler cluster --type merge -p '{"spec":{"mastersSchedulable":false}}'
```
After the patch is applied, the workloads can be moved off by draining the nodes:
```
oc adm cordon <cp-node>
oc adm drain <cp-node> --ignore-daemonsets --delete-emptydir-data
```


@@ -0,0 +1,105 @@
# Design Document: Harmony PostgreSQL Module
**Status:** Draft
**Last Updated:** 2025-12-01
**Context:** Multi-site Data Replication & Orchestration
## 1. Overview
The Harmony PostgreSQL Module provides a high-level abstraction for deploying and managing high-availability PostgreSQL clusters across geographically distributed Kubernetes/OKD sites.
Instead of manually configuring complex replication slots, firewalls, and operator settings on each cluster, users define a single intent (a **Score**), and Harmony orchestrates the underlying infrastructure (the **Arrangement**) to establish a Primary-Replica architecture.
Currently, the implementation relies on the **CloudNativePG (CNPG)** operator as the backing engine.
## 2. Architecture
### 2.1 The Abstraction Model
Following **ADR 003 (Infrastructure Abstraction)**, Harmony separates the *intent* from the *implementation*.
1. **The Score (Intent):** The user defines a `MultisitePostgreSQL` resource. This describes *what* is needed (e.g., "A Postgres 15 cluster with 10GB storage, Primary on Site A, Replica on Site B").
2. **The Interpret (Action):** Harmony's `MultisitePostgreSQLInterpret` processes this Score and orchestrates the deployment on both sites to reach the state it defines.
3. **The Capability (Implementation):** The PostgreSQL Capability is implemented by the K8sTopology, and the Interpret can deploy it, configure it, and fetch information about it. The concrete implementation relies on the mature CloudNativePG operator to manage all the required Kubernetes resources.
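A minimal Rust sketch of the intent side, modeled on the `Score` pattern used elsewhere in this repository; the struct and field names here are illustrative assumptions, not the shipped API:
```rust
// Illustrative sketch of a Score (intent); names are assumptions, not the real API.
#[derive(Clone, Debug)]
struct MultisitePostgreSQLScore {
    version: String,            // e.g. "15"
    storage: String,            // e.g. "10Gi"
    primary_site: String,       // e.g. "site-paris"
    replica_sites: Vec<String>, // e.g. ["site-newyork"]
}

fn main() {
    // The user declares only *what* they want; the Interpret decides *how*:
    // CNPG clusters, a passthrough Route on the primary, replica wiring.
    let score = MultisitePostgreSQLScore {
        version: "15".into(),
        storage: "10Gi".into(),
        primary_site: "site-paris".into(),
        replica_sites: vec!["site-newyork".into()],
    };
    println!("deploying intent: {score:?}");
}
```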
### 2.2 Network Connectivity (TLS Passthrough)
One of the critical challenges in multi-site orchestration is secure connectivity between clusters that may have dynamic IPs or strict firewalls.
To solve this, we utilize **OKD/OpenShift Routes with TLS Passthrough**.
* **Mechanism:** The Primary site exposes a `Route` configured for `termination: passthrough`.
* **Routing:** The OpenShift HAProxy router inspects the **SNI (Server Name Indication)** header of the incoming TCP connection to route traffic to the correct PostgreSQL Pod.
* **Security:** SSL is **not** terminated at the ingress router. The encrypted stream is passed directly to the PostgreSQL instance. Mutual TLS (mTLS) authentication is handled natively by CNPG between the Primary and Replica instances.
* **Dynamic IPs:** Because connections are established via DNS hostnames (the Route URL), this architecture is resilient to dynamic IP changes at the Primary site.
#### Traffic Flow Diagram
```text
[ Site B: Replica ] [ Site A: Primary ]
| |
(CNPG Instance) --[Encrypted TCP]--> (OKD HAProxy Router)
| (Port 443) |
| |
| [SNI Inspection]
| |
| v
| (PostgreSQL Primary Pod)
| (Port 5432)
```
## 3. Design Decisions
### Why CloudNativePG?
We selected CloudNativePG because it relies exclusively on standard Kubernetes primitives and uses the native PostgreSQL replication protocol (WAL shipping/Streaming). This aligns with Harmony's goal of being "K8s Native."
### Why TLS Passthrough instead of VPN/NodePort?
* **NodePort:** Requires static IPs and opening non-standard ports on the firewall, which violates our security constraints.
* **VPN (e.g., Wireguard/Tailscale):** While secure, it introduces significant complexity (sidecars, key management) and external dependencies.
* **TLS Passthrough:** Leverages the existing Ingress/Router infrastructure already present in OKD. It requires zero additional software and respects multi-tenancy (Routes are namespaced).
### Configuration Philosophy (YAGNI)
The current design exposes a **generic configuration surface**. Users can configure standard parameters (Storage size, CPU/Memory requests, Postgres version).
**We explicitly do not expose advanced CNPG or PostgreSQL configurations at this stage.**
* **Reasoning:** We aim to keep the API surface small and manageable.
* **Future Path:** We plan to implement a "pass-through" mechanism to allow sending raw config maps or custom parameters to the underlying engine (CNPG) *only when a concrete use case arises*. Until then, we adhere to the **YAGNI (You Ain't Gonna Need It)** principle to avoid premature optimization and API bloat.
## 4. Usage Guide
To deploy a multi-site cluster, apply the `MultisitePostgreSQL` resource to the Harmony Control Plane.
### Example Manifest
```yaml
apiVersion: harmony.io/v1alpha1
kind: MultisitePostgreSQL
metadata:
name: finance-db
namespace: tenant-a
spec:
version: "15"
storage: "10Gi"
resources:
requests:
cpu: "500m"
memory: "1Gi"
# Topology Definition
topology:
primary:
site: "site-paris" # The name of the cluster in Harmony
replicas:
- site: "site-newyork"
```
### What happens next?
1. Harmony detects the CR.
2. **On Site Paris:** It deploys a CNPG Cluster (Primary) and creates a Passthrough Route `postgres-finance-db.apps.site-paris.example.com`.
3. **On Site New York:** It deploys a CNPG Cluster (Replica) configured with `externalClusters` pointing to the Paris Route.
4. Data begins replicating immediately over the encrypted channel.
## 5. Troubleshooting
* **Connection Refused:** Ensure the Primary site's Route is successfully admitted by the Ingress Controller.
* **Certificate Errors:** CNPG manages mTLS automatically. If errors persist, ensure the CA secrets were correctly propagated by Harmony from Primary to Replica namespaces.

empty_database.sqlite (new binary file, not shown)


@@ -27,6 +27,7 @@ async fn main() {
};
let application = Arc::new(RustWebapp {
name: "example-monitoring".to_string(),
dns: "example-monitoring.harmony.mcd".to_string(),
project_root: PathBuf::from("./examples/rust/webapp"),
framework: Some(RustWebFramework::Leptos),
service_port: 3000,


@@ -0,0 +1,20 @@
[package]
name = "brocade-snmp-server"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { path = "../../harmony" }
brocade = { path = "../../brocade" }
harmony_secret = { path = "../../harmony_secret" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
harmony_macros = { path = "../../harmony_macros" }
tokio = { workspace = true }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }
base64.workspace = true
serde.workspace = true


@@ -0,0 +1,22 @@
use std::net::{IpAddr, Ipv4Addr};
use harmony::{
inventory::Inventory, modules::brocade::BrocadeEnableSnmpScore, topology::K8sAnywhereTopology,
};
#[tokio::main]
async fn main() {
let brocade_snmp_server = BrocadeEnableSnmpScore {
switch_ips: vec![IpAddr::V4(Ipv4Addr::new(192, 168, 1, 111))],
dry_run: true,
};
harmony_cli::run(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
vec![Box::new(brocade_snmp_server)],
None,
)
.await
.unwrap();
}


@@ -0,0 +1,19 @@
[package]
name = "brocade-switch"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_macros = { path = "../../harmony_macros" }
harmony_types = { path = "../../harmony_types" }
tokio.workspace = true
url.workspace = true
async-trait.workspace = true
serde.workspace = true
log.workspace = true
env_logger.workspace = true
brocade = { path = "../../brocade" }


@@ -0,0 +1,157 @@
use std::str::FromStr;
use async_trait::async_trait;
use brocade::{BrocadeOptions, PortOperatingMode};
use harmony::{
data::Version,
infra::brocade::BrocadeSwitchClient,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
score::Score,
topology::{
HostNetworkConfig, PortConfig, PreparationError, PreparationOutcome, Switch, SwitchClient,
SwitchError, Topology,
},
};
use harmony_macros::ip;
use harmony_types::{id::Id, net::MacAddress, switch::PortLocation};
use log::{debug, info};
use serde::Serialize;
#[tokio::main]
async fn main() {
let switch_score = BrocadeSwitchScore {
port_channels_to_clear: vec![
Id::from_str("17").unwrap(),
Id::from_str("19").unwrap(),
Id::from_str("18").unwrap(),
],
ports_to_configure: vec![
(PortLocation(2, 0, 17), PortOperatingMode::Trunk),
(PortLocation(2, 0, 19), PortOperatingMode::Trunk),
(PortLocation(1, 0, 18), PortOperatingMode::Trunk),
],
};
harmony_cli::run(
Inventory::autoload(),
SwitchTopology::new().await,
vec![Box::new(switch_score)],
None,
)
.await
.unwrap();
}
#[derive(Clone, Debug, Serialize)]
struct BrocadeSwitchScore {
port_channels_to_clear: Vec<Id>,
ports_to_configure: Vec<PortConfig>,
}
impl<T: Topology + Switch> Score<T> for BrocadeSwitchScore {
fn name(&self) -> String {
"BrocadeSwitchScore".to_string()
}
#[doc(hidden)]
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(BrocadeSwitchInterpret {
score: self.clone(),
})
}
}
#[derive(Debug)]
struct BrocadeSwitchInterpret {
score: BrocadeSwitchScore,
}
#[async_trait]
impl<T: Topology + Switch> Interpret<T> for BrocadeSwitchInterpret {
async fn execute(
&self,
_inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
info!("Applying switch configuration {:?}", self.score);
debug!(
"Clearing port channel {:?}",
self.score.port_channels_to_clear
);
topology
.clear_port_channel(&self.score.port_channels_to_clear)
.await
.map_err(|e| InterpretError::new(e.to_string()))?;
debug!("Configuring interfaces {:?}", self.score.ports_to_configure);
topology
.configure_interface(&self.score.ports_to_configure)
.await
.map_err(|e| InterpretError::new(e.to_string()))?;
Ok(Outcome::success("switch configured".to_string()))
}
fn get_name(&self) -> InterpretName {
InterpretName::Custom("BrocadeSwitchInterpret")
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
struct SwitchTopology {
client: Box<dyn SwitchClient>,
}
#[async_trait]
impl Topology for SwitchTopology {
fn name(&self) -> &str {
"SwitchTopology"
}
async fn ensure_ready(&self) -> Result<PreparationOutcome, PreparationError> {
Ok(PreparationOutcome::Noop)
}
}
impl SwitchTopology {
async fn new() -> Self {
let mut options = BrocadeOptions::default();
options.ssh.port = 2222;
let client =
BrocadeSwitchClient::init(&vec![ip!("127.0.0.1")], &"admin", &"password", options)
.await
.expect("Failed to connect to switch");
let client = Box::new(client);
Self { client }
}
}
#[async_trait]
impl Switch for SwitchTopology {
async fn setup_switch(&self) -> Result<(), SwitchError> {
todo!()
}
async fn get_port_for_mac_address(
&self,
_mac_address: &MacAddress,
) -> Result<Option<PortLocation>, SwitchError> {
todo!()
}
async fn configure_port_channel(&self, _config: &HostNetworkConfig) -> Result<(), SwitchError> {
todo!()
}
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError> {
self.client.clear_port_channel(ids).await
}
async fn configure_interface(&self, ports: &Vec<PortConfig>) -> Result<(), SwitchError> {
self.client.configure_interface(ports).await
}
}

View File

@@ -2,7 +2,7 @@ use harmony::{
inventory::Inventory,
modules::{
dummy::{ErrorScore, PanicScore, SuccessScore},
-inventory::LaunchDiscoverInventoryAgentScore,
+inventory::{HarmonyDiscoveryStrategy, LaunchDiscoverInventoryAgentScore},
},
topology::LocalhostTopology,
};
@@ -18,6 +18,7 @@ async fn main() {
Box::new(PanicScore {}),
Box::new(LaunchDiscoverInventoryAgentScore {
discovery_timeout: Some(10),
discovery_strategy: HarmonyDiscoveryStrategy::MDNS,
}),
],
None,

View File

@@ -0,0 +1,21 @@
[package]
name = "example-ha-cluster"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_tui = { path = "../../harmony_tui" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }
harmony_secret = { path = "../../harmony_secret" }
brocade = { path = "../../brocade" }
serde = { workspace = true }

View File

@@ -0,0 +1,15 @@
## OPNsense demo
1. Download the VirtualBox snapshot from {{TODO URL}}.
2. Start the VirtualBox image. It is configured to use a bridge on the host's physical interface; make sure the bridge is up and the virtual machine can reach the internet.
3. Log in with the default OPNsense credentials (root/opnsense).
4. Run the project, passing the firewall's IP address on the command line:
```bash
cargo run -p example-opnsense -- 192.168.5.229
```
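Before launching, it can help to confirm the VM is reachable from the host (use whatever address the OPNsense console shows; 192.168.5.229 is just the example above):
```bash
ping -c 3 192.168.5.229
```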

View File

@@ -0,0 +1,143 @@
use std::{
net::{IpAddr, Ipv4Addr},
sync::{Arc, OnceLock},
};
use brocade::BrocadeOptions;
use cidr::Ipv4Cidr;
use harmony::{
hardware::{HostCategory, Location, PhysicalHost, SwitchGroup},
infra::{brocade::BrocadeSwitchClient, opnsense::OPNSenseManagementInterface},
inventory::Inventory,
modules::{
dummy::{ErrorScore, PanicScore, SuccessScore},
http::StaticFilesHttpScore,
okd::{dhcp::OKDDhcpScore, dns::OKDDnsScore, load_balancer::OKDLoadBalancerScore},
opnsense::OPNsenseShellCommandScore,
tftp::TftpScore,
},
topology::{LogicalHost, UnmanagedRouter},
};
use harmony_macros::{ip, mac_address};
use harmony_secret::{Secret, SecretManager};
use harmony_types::net::Url;
use serde::{Deserialize, Serialize};
#[tokio::main]
async fn main() {
let firewall = harmony::topology::LogicalHost {
ip: ip!("192.168.5.229"),
name: String::from("opnsense-1"),
};
let switch_auth = SecretManager::get_or_prompt::<BrocadeSwitchAuth>()
.await
.expect("Failed to get credentials");
let switches: Vec<IpAddr> = vec![ip!("192.168.5.101")]; // TODO: Adjust me
let brocade_options = BrocadeOptions {
dry_run: *harmony::config::DRY_RUN,
..Default::default()
};
let switch_client = BrocadeSwitchClient::init(
&switches,
&switch_auth.username,
&switch_auth.password,
brocade_options,
)
.await
.expect("Failed to connect to switch");
let switch_client = Arc::new(switch_client);
let opnsense = Arc::new(
harmony::infra::opnsense::OPNSenseFirewall::new(firewall, None, "root", "opnsense").await,
);
let lan_subnet = Ipv4Addr::new(10, 100, 8, 0);
let gateway_ipv4 = Ipv4Addr::new(10, 100, 8, 1);
let gateway_ip = IpAddr::V4(gateway_ipv4);
let topology = harmony::topology::HAClusterTopology {
kubeconfig: None,
domain_name: "demo.harmony.mcd".to_string(),
router: Arc::new(UnmanagedRouter::new(
gateway_ip,
Ipv4Cidr::new(lan_subnet, 24).unwrap(),
)),
load_balancer: opnsense.clone(),
firewall: opnsense.clone(),
tftp_server: opnsense.clone(),
http_server: opnsense.clone(),
dhcp_server: opnsense.clone(),
dns_server: opnsense.clone(),
control_plane: vec![LogicalHost {
ip: ip!("10.100.8.20"),
name: "cp0".to_string(),
}],
bootstrap_host: LogicalHost {
ip: ip!("10.100.8.20"),
name: "cp0".to_string(),
},
workers: vec![],
switch_client: switch_client.clone(),
node_exporter: opnsense.clone(),
network_manager: OnceLock::new(),
};
let inventory = Inventory {
location: Location::new(
"232 des Éperviers, Wendake, Qc, G0A 4V0".to_string(),
"wk".to_string(),
),
switch: SwitchGroup::from([]),
firewall_mgmt: Box::new(OPNSenseManagementInterface::new()),
storage_host: vec![],
worker_host: vec![],
control_plane_host: vec![
PhysicalHost::empty(HostCategory::Server)
.mac_address(mac_address!("08:00:27:62:EC:C3")),
],
};
// TODO regroup smaller scores in a larger one such as this
// let okd_boostrap_preparation();
let dhcp_score = OKDDhcpScore::new(&topology, &inventory);
let dns_score = OKDDnsScore::new(&topology);
let load_balancer_score = OKDLoadBalancerScore::new(&topology);
let tftp_score = TftpScore::new(Url::LocalFolder("./data/watchguard/tftpboot".to_string()));
let http_score = StaticFilesHttpScore {
folder_to_serve: Some(Url::LocalFolder(
"./data/watchguard/pxe-http-files".to_string(),
)),
files: vec![],
remote_path: None,
};
harmony_tui::run(
inventory,
topology,
vec![
Box::new(dns_score),
Box::new(dhcp_score),
Box::new(load_balancer_score),
Box::new(tftp_score),
Box::new(http_score),
Box::new(OPNsenseShellCommandScore {
opnsense: opnsense.get_opnsense_config(),
command: "touch /tmp/helloharmonytouching".to_string(),
}),
Box::new(SuccessScore {}),
Box::new(ErrorScore {}),
Box::new(PanicScore {}),
],
)
.await
.unwrap();
}
#[derive(Secret, Serialize, Deserialize, Debug)]
pub struct BrocadeSwitchAuth {
pub username: String,
pub password: String,
}

View File

@@ -0,0 +1,15 @@
[package]
name = "harmony_inventory_builder"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_macros = { path = "../../harmony_macros" }
harmony_types = { path = "../../harmony_types" }
tokio.workspace = true
url.workspace = true
cidr.workspace = true

View File

@@ -0,0 +1,11 @@
cargo build -p harmony_inventory_builder --release --target x86_64-unknown-linux-musl
SCRIPT_DIR="$(dirname "${0}")"
cd "${SCRIPT_DIR}/docker/"
cp ../../../target/x86_64-unknown-linux-musl/release/harmony_inventory_builder .
docker build . -t hub.nationtech.io/harmony/harmony_inventory_builder
docker push hub.nationtech.io/harmony/harmony_inventory_builder
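A sketch of the one-time prerequisites this script assumes (the musl target for the static build, and push access to the registry; credentials are site-specific):
```bash
rustup target add x86_64-unknown-linux-musl
docker login hub.nationtech.io  # push access to this registry is assumed
```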

View File

@@ -0,0 +1,10 @@
FROM debian:12-slim
RUN mkdir /app
WORKDIR /app/
COPY harmony_inventory_builder /app/
ENV RUST_LOG=info
CMD ["sleep", "infinity"]

View File

@@ -0,0 +1,36 @@
use harmony::{
inventory::{HostRole, Inventory},
modules::inventory::{DiscoverHostForRoleScore, HarmonyDiscoveryStrategy},
topology::LocalhostTopology,
};
use harmony_macros::cidrv4;
#[tokio::main]
async fn main() {
let discover_worker = DiscoverHostForRoleScore {
role: HostRole::Worker,
number_desired_hosts: 3,
discovery_strategy: HarmonyDiscoveryStrategy::SUBNET {
cidr: cidrv4!("192.168.0.1/25"),
port: 25000,
},
};
let discover_control_plane = DiscoverHostForRoleScore {
role: HostRole::ControlPlane,
number_desired_hosts: 3,
discovery_strategy: HarmonyDiscoveryStrategy::SUBNET {
cidr: cidrv4!("192.168.0.1/25"),
port: 25000,
},
};
harmony_cli::run(
Inventory::autoload(),
LocalhostTopology::new(),
vec![Box::new(discover_worker), Box::new(discover_control_plane)],
None,
)
.await
.unwrap();
}
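To sanity-check that a host in the scanned range actually exposes the discovery port before running these scores (the IP below is a placeholder inside 192.168.0.1/25):
```bash
nc -zv 192.168.0.42 25000
```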

View File

@@ -24,13 +24,14 @@ use harmony::{
},
topology::K8sAnywhereTopology,
};
-use harmony_types::net::Url;
+use harmony_types::{k8s_name::K8sName, net::Url};
#[tokio::main]
async fn main() {
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
name: K8sName("test-discord".to_string()),
url: Url::Url(url::Url::parse("https://discord.doesnt.exist.com").unwrap()),
selectors: vec![],
};
let high_pvc_fill_rate_over_two_days_alert = high_pvc_fill_rate_over_two_days();

View File

@@ -22,8 +22,8 @@ use harmony::{
tenant::{ResourceLimits, TenantConfig, TenantNetworkPolicy},
},
};
-use harmony_types::id::Id;
use harmony_types::net::Url;
+use harmony_types::{id::Id, k8s_name::K8sName};
#[tokio::main]
async fn main() {
@@ -43,8 +43,9 @@ async fn main() {
};
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
name: K8sName("test-discord".to_string()),
url: Url::Url(url::Url::parse("https://discord.doesnt.exist.com").unwrap()),
selectors: vec![],
};
let high_pvc_fill_rate_over_two_days_alert = high_pvc_fill_rate_over_two_days();

View File

@@ -0,0 +1,18 @@
[package]
name = "example-multisite-postgres"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }

View File

@@ -0,0 +1,3 @@
export HARMONY_FAILOVER_TOPOLOGY_K8S_PRIMARY="context=default/api-your-openshift-cluster:6443/kube:admin"
export HARMONY_FAILOVER_TOPOLOGY_K8S_REPLICA="context=someuser/somecluster"
export RUST_LOG="harmony=debug"
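The same variables also accept a `kubeconfig` key for clusters that are not in the current kubeconfig; both keys are parsed by `K8sAnywhereConfig::remote_k8s_from_env_var`, shown later in this changeset (paths below are hypothetical):
```bash
export HARMONY_FAILOVER_TOPOLOGY_K8S_PRIMARY="kubeconfig=/etc/harmony/primary.kubeconfig,context=primary"
export HARMONY_FAILOVER_TOPOLOGY_K8S_REPLICA="kubeconfig=/etc/harmony/replica.kubeconfig,context=replica"
```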

View File

@@ -0,0 +1,28 @@
use harmony::{
inventory::Inventory,
modules::postgresql::{PublicPostgreSQLScore, capability::PostgreSQLConfig},
topology::{FailoverTopology, K8sAnywhereTopology},
};
#[tokio::main]
async fn main() {
// env_logger::init();
let postgres = PublicPostgreSQLScore {
config: PostgreSQLConfig {
cluster_name: "harmony-postgres-example".to_string(), // Override default name
namespace: "harmony-public-postgres".to_string(),
..Default::default() // Use Harmony defaults; they follow CNPG's default values:
// "default" namespace, 1 instance, 1Gi storage
},
hostname: "postgrestest.sto1.nationtech.io".to_string(),
};
harmony_cli::run(
Inventory::autoload(),
FailoverTopology::<K8sAnywhereTopology>::from_env(),
vec![Box::new(postgres)],
None,
)
.await
.unwrap();
}
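A minimal way to run this example, assuming the env.sh shown above has been adjusted for your clusters (the path is hypothetical):
```bash
source examples/multisite_postgres/env.sh
cargo run -p example-multisite-postgres
```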

View File

@@ -39,10 +39,10 @@ async fn main() {
.expect("Failed to get credentials");
let switches: Vec<IpAddr> = vec![ip!("192.168.33.101")];
-let brocade_options = Some(BrocadeOptions {
+let brocade_options = BrocadeOptions {
dry_run: *harmony::config::DRY_RUN,
..Default::default()
-});
+};
let switch_client = BrocadeSwitchClient::init(
&switches,
&switch_auth.username,
@@ -106,6 +106,7 @@ async fn main() {
name: "wk2".to_string(),
},
],
node_exporter: opnsense.clone(),
switch_client: switch_client.clone(),
network_manager: OnceLock::new(),
};

examples/nats/Cargo.toml (new file, 18 lines)
View File

@@ -0,0 +1,18 @@
[package]
name = "example-nats"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }

examples/nats/src/main.rs (new file, 67 lines)
View File

@@ -0,0 +1,67 @@
use std::str::FromStr;
use harmony::{
inventory::Inventory,
modules::helm::chart::{HelmChartScore, HelmRepository, NonBlankString},
topology::K8sAnywhereTopology,
};
use harmony_macros::hurl;
use log::info;
#[tokio::main]
async fn main() {
// env_logger::init();
let values_yaml = Some(
r#"config:
cluster:
enabled: true
replicas: 3
jetstream:
enabled: true
fileStorage:
enabled: true
size: 10Gi
storageDirectory: /data/jetstream
leafnodes:
enabled: false
# port: 7422
gateway:
enabled: false
# name: my-gateway
# port: 7522
natsBox:
container:
image:
tag: nonroot"#
.to_string(),
);
let namespace = "nats";
let nats = HelmChartScore {
namespace: Some(NonBlankString::from_str(namespace).unwrap()),
release_name: NonBlankString::from_str("nats").unwrap(),
chart_name: NonBlankString::from_str("nats/nats").unwrap(),
chart_version: None,
values_overrides: None,
values_yaml,
create_namespace: true,
install_only: false,
repository: Some(HelmRepository::new(
"nats".to_string(),
hurl!("https://nats-io.github.io/k8s/helm/charts/"),
true,
)),
};
harmony_cli::run(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
vec![Box::new(nats)],
None,
)
.await
.unwrap();
info!(
"Enjoy! You can test your nats cluster by running : `kubectl exec -n {namespace} -it deployment/nats-box -- nats pub test hi`"
);
}
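To actually watch a message flow through the cluster, a subscriber can be attached before publishing (a sketch using the nats-box deployment installed by the chart):
```bash
# Start a subscriber in the background, then publish to the same subject
kubectl exec -n nats deployment/nats-box -- nats sub test &
sleep 2
kubectl exec -n nats deployment/nats-box -- nats pub test hi
```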

View File

@@ -0,0 +1,22 @@
[package]
name = "example-okd-cluster-alerts"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
harmony_secret = { path = "../../harmony_secret" }
harmony_secret_derive = { path = "../../harmony_secret_derive" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }
serde.workspace = true
brocade = { path = "../../brocade" }

View File

@@ -0,0 +1,38 @@
use std::collections::HashMap;
use harmony::{
inventory::Inventory,
modules::monitoring::{
alert_channel::discord_alert_channel::DiscordWebhook,
okd::cluster_monitoring::OpenshiftClusterAlertScore,
},
topology::K8sAnywhereTopology,
};
use harmony_macros::hurl;
use harmony_types::k8s_name::K8sName;
#[tokio::main]
async fn main() {
let mut sel = HashMap::new();
sel.insert(
"openshift_io_alert_source".to_string(),
"platform".to_string(),
);
let mut sel2 = HashMap::new();
sel2.insert("openshift_io_alert_source".to_string(), "".to_string());
let selectors = vec![sel, sel2];
harmony_cli::run(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
vec![Box::new(OpenshiftClusterAlertScore {
receivers: vec![Box::new(DiscordWebhook {
name: K8sName("wills-discord-webhook-example".to_string()),
url: hurl!("https://something.io"),
selectors,
})],
})],
None,
)
.await
.unwrap();
}

View File

@@ -4,7 +4,10 @@ use crate::topology::{get_inventory, get_topology};
use harmony::{
config::secret::SshKeyPair,
data::{FileContent, FilePath},
-modules::okd::{installation::OKDInstallationPipeline, ipxe::OKDIpxeScore},
+modules::{
+inventory::HarmonyDiscoveryStrategy,
+okd::{installation::OKDInstallationPipeline, ipxe::OKDIpxeScore},
+},
score::Score,
topology::HAClusterTopology,
};
@@ -26,7 +29,8 @@ async fn main() {
},
})];
-scores.append(&mut OKDInstallationPipeline::get_all_scores().await);
+scores
+.append(&mut OKDInstallationPipeline::get_all_scores(HarmonyDiscoveryStrategy::MDNS).await);
harmony_cli::run(inventory, topology, scores, None)
.await

View File

@@ -31,10 +31,10 @@ pub async fn get_topology() -> HAClusterTopology {
.expect("Failed to get credentials");
let switches: Vec<IpAddr> = vec![ip!("192.168.1.101")]; // TODO: Adjust me
-let brocade_options = Some(BrocadeOptions {
+let brocade_options = BrocadeOptions {
dry_run: *harmony::config::DRY_RUN,
..Default::default()
-});
+};
let switch_client = BrocadeSwitchClient::init(
&switches,
&switch_auth.username,
@@ -83,6 +83,7 @@ pub async fn get_topology() -> HAClusterTopology {
name: "bootstrap".to_string(),
},
workers: vec![],
node_exporter: opnsense.clone(),
switch_client: switch_client.clone(),
network_manager: OnceLock::new(),
}

View File

@@ -26,10 +26,10 @@ pub async fn get_topology() -> HAClusterTopology {
.expect("Failed to get credentials");
let switches: Vec<IpAddr> = vec![ip!("192.168.1.101")]; // TODO: Adjust me
-let brocade_options = Some(BrocadeOptions {
+let brocade_options = BrocadeOptions {
dry_run: *harmony::config::DRY_RUN,
..Default::default()
-});
+};
let switch_client = BrocadeSwitchClient::init(
&switches,
&switch_auth.username,
@@ -78,6 +78,7 @@ pub async fn get_topology() -> HAClusterTopology {
name: "cp0".to_string(),
},
workers: vec![],
node_exporter: opnsense.clone(),
switch_client: switch_client.clone(),
network_manager: OnceLock::new(),
}

View File

@@ -1,4 +1,4 @@
-use std::{collections::HashMap, str::FromStr};
+use std::str::FromStr;
use harmony::{
inventory::Inventory,

View File

@@ -0,0 +1,18 @@
[package]
name = "example-operatorhub-catalogsource"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }

View File

@@ -0,0 +1,22 @@
use std::str::FromStr;
use harmony::{
inventory::Inventory,
modules::{k8s::apps::OperatorHubCatalogSourceScore, postgresql::CloudNativePgOperatorScore},
topology::K8sAnywhereTopology,
};
#[tokio::main]
async fn main() {
let operatorhub_catalog = OperatorHubCatalogSourceScore::default();
let cnpg_operator = CloudNativePgOperatorScore::default();
harmony_cli::run(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
vec![Box::new(operatorhub_catalog), Box::new(cnpg_operator)],
None,
)
.await
.unwrap();
}
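Once both scores have converged, the installed pieces can be inspected (a sketch; `clusters.postgresql.cnpg.io` is CNPG's CRD name, while catalog source placement depends on the scores' defaults):
```bash
kubectl get catalogsources --all-namespaces
kubectl get crd clusters.postgresql.cnpg.io
```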

View File

@@ -8,7 +8,7 @@ publish = false
[dependencies]
harmony = { path = "../../harmony" }
-harmony_tui = { path = "../../harmony_tui" }
+harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }

examples/opnsense/env.sh (new file, 3 lines)
View File

@@ -0,0 +1,3 @@
export HARMONY_SECRET_NAMESPACE=example-opnsense
export HARMONY_SECRET_STORE=file
export RUST_LOG=info

View File

@@ -1,135 +1,70 @@
use std::{
net::{IpAddr, Ipv4Addr},
sync::{Arc, OnceLock},
};
use brocade::BrocadeOptions;
use cidr::Ipv4Cidr;
use harmony::{
-hardware::{HostCategory, Location, PhysicalHost, SwitchGroup},
-infra::{brocade::BrocadeSwitchClient, opnsense::OPNSenseManagementInterface},
+config::secret::OPNSenseFirewallCredentials,
+infra::opnsense::OPNSenseFirewall,
inventory::Inventory,
-modules::{
-dummy::{ErrorScore, PanicScore, SuccessScore},
-http::StaticFilesHttpScore,
-okd::{dhcp::OKDDhcpScore, dns::OKDDnsScore, load_balancer::OKDLoadBalancerScore},
-opnsense::OPNsenseShellCommandScore,
-tftp::TftpScore,
-},
-topology::{LogicalHost, UnmanagedRouter},
+modules::{dhcp::DhcpScore, opnsense::OPNsenseShellCommandScore},
+topology::LogicalHost,
};
-use harmony_macros::{ip, mac_address};
+use harmony_macros::{ip, ipv4};
use harmony_secret::{Secret, SecretManager};
use harmony_types::net::Url;
use serde::{Deserialize, Serialize};
#[tokio::main]
async fn main() {
-let firewall = harmony::topology::LogicalHost {
-ip: ip!("192.168.5.229"),
+let firewall = LogicalHost {
+ip: ip!("192.168.55.1"),
name: String::from("opnsense-1"),
};
-let switch_auth = SecretManager::get_or_prompt::<BrocadeSwitchAuth>()
+let opnsense_auth = SecretManager::get_or_prompt::<OPNSenseFirewallCredentials>()
.await
.expect("Failed to get credentials");
let switches: Vec<IpAddr> = vec![ip!("192.168.5.101")]; // TODO: Adjust me
let brocade_options = Some(BrocadeOptions {
dry_run: *harmony::config::DRY_RUN,
..Default::default()
});
let switch_client = BrocadeSwitchClient::init(
&switches,
&switch_auth.username,
&switch_auth.password,
brocade_options,
let opnsense = OPNSenseFirewall::new(
firewall,
None,
&opnsense_auth.username,
&opnsense_auth.password,
)
.await
.expect("Failed to connect to switch");
.await;
let switch_client = Arc::new(switch_client);
let opnsense = Arc::new(
harmony::infra::opnsense::OPNSenseFirewall::new(firewall, None, "root", "opnsense").await,
);
let lan_subnet = Ipv4Addr::new(10, 100, 8, 0);
let gateway_ipv4 = Ipv4Addr::new(10, 100, 8, 1);
let gateway_ip = IpAddr::V4(gateway_ipv4);
let topology = harmony::topology::HAClusterTopology {
kubeconfig: None,
domain_name: "demo.harmony.mcd".to_string(),
router: Arc::new(UnmanagedRouter::new(
gateway_ip,
Ipv4Cidr::new(lan_subnet, 24).unwrap(),
)),
load_balancer: opnsense.clone(),
firewall: opnsense.clone(),
tftp_server: opnsense.clone(),
http_server: opnsense.clone(),
dhcp_server: opnsense.clone(),
dns_server: opnsense.clone(),
control_plane: vec![LogicalHost {
ip: ip!("10.100.8.20"),
name: "cp0".to_string(),
}],
bootstrap_host: LogicalHost {
ip: ip!("10.100.8.20"),
name: "cp0".to_string(),
},
workers: vec![],
switch_client: switch_client.clone(),
network_manager: OnceLock::new(),
};
let inventory = Inventory {
location: Location::new(
"232 des Éperviers, Wendake, Qc, G0A 4V0".to_string(),
"wk".to_string(),
let dhcp_score = DhcpScore {
dhcp_range: (
ipv4!("192.168.55.100").into(),
ipv4!("192.168.55.150").into(),
),
switch: SwitchGroup::from([]),
firewall_mgmt: Box::new(OPNSenseManagementInterface::new()),
storage_host: vec![],
worker_host: vec![],
control_plane_host: vec![
PhysicalHost::empty(HostCategory::Server)
.mac_address(mac_address!("08:00:27:62:EC:C3")),
],
host_binding: vec![],
next_server: None,
boot_filename: None,
filename: None,
filename64: None,
filenameipxe: Some("filename.ipxe".to_string()),
domain: None,
};
// let dns_score = OKDDnsScore::new(&topology);
// let load_balancer_score = OKDLoadBalancerScore::new(&topology);
//
// let tftp_score = TftpScore::new(Url::LocalFolder("./data/watchguard/tftpboot".to_string()));
// let http_score = StaticFilesHttpScore {
// folder_to_serve: Some(Url::LocalFolder(
// "./data/watchguard/pxe-http-files".to_string(),
// )),
// files: vec![],
// remote_path: None,
// };
let opnsense_config = opnsense.get_opnsense_config();
// TODO regroup smaller scores in a larger one such as this
// let okd_boostrap_preparation();
let dhcp_score = OKDDhcpScore::new(&topology, &inventory);
let dns_score = OKDDnsScore::new(&topology);
let load_balancer_score = OKDLoadBalancerScore::new(&topology);
let tftp_score = TftpScore::new(Url::LocalFolder("./data/watchguard/tftpboot".to_string()));
let http_score = StaticFilesHttpScore {
folder_to_serve: Some(Url::LocalFolder(
"./data/watchguard/pxe-http-files".to_string(),
)),
files: vec![],
remote_path: None,
};
-harmony_tui::run(
-inventory,
-topology,
+harmony_cli::run(
+Inventory::autoload(),
+opnsense,
vec![
Box::new(dns_score),
Box::new(dhcp_score),
Box::new(load_balancer_score),
Box::new(tftp_score),
Box::new(http_score),
Box::new(OPNsenseShellCommandScore {
opnsense: opnsense.get_opnsense_config(),
command: "touch /tmp/helloharmonytouching".to_string(),
opnsense: opnsense_config,
command: "touch /tmp/helloharmonytouching_2".to_string(),
}),
Box::new(SuccessScore {}),
Box::new(ErrorScore {}),
Box::new(PanicScore {}),
],
None,
)
.await
.unwrap();

View File

@@ -0,0 +1,21 @@
[package]
name = "example-opnsense-node-exporter"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
harmony_secret = { path = "../../harmony_secret" }
harmony_secret_derive = { path = "../../harmony_secret_derive" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }
serde.workspace = true
async-trait.workspace = true

View File

@@ -0,0 +1,80 @@
use std::{
net::{IpAddr, Ipv4Addr},
sync::Arc,
};
use async_trait::async_trait;
use cidr::Ipv4Cidr;
use harmony::{
executors::ExecutorError,
hardware::{HostCategory, Location, PhysicalHost, SwitchGroup},
infra::opnsense::OPNSenseManagementInterface,
inventory::Inventory,
modules::opnsense::node_exporter::NodeExporterScore,
topology::{
HAClusterTopology, LogicalHost, PreparationError, PreparationOutcome, Topology,
UnmanagedRouter, node_exporter::NodeExporter,
},
};
use harmony_macros::{ip, ipv4, mac_address};
#[derive(Debug)]
struct OpnSenseTopology {
node_exporter: Arc<dyn NodeExporter>,
}
#[async_trait]
impl Topology for OpnSenseTopology {
async fn ensure_ready(&self) -> Result<PreparationOutcome, PreparationError> {
Ok(PreparationOutcome::Success {
details: "Success".to_string(),
})
}
fn name(&self) -> &str {
"OpnsenseTopology"
}
}
#[async_trait]
impl NodeExporter for OpnSenseTopology {
async fn ensure_initialized(&self) -> Result<(), ExecutorError> {
self.node_exporter.ensure_initialized().await
}
async fn commit_config(&self) -> Result<(), ExecutorError> {
self.node_exporter.commit_config().await
}
async fn reload_restart(&self) -> Result<(), ExecutorError> {
self.node_exporter.reload_restart().await
}
}
#[tokio::main]
async fn main() {
let firewall = harmony::topology::LogicalHost {
ip: ip!("192.168.1.1"),
name: String::from("fw0"),
};
let opnsense = Arc::new(
harmony::infra::opnsense::OPNSenseFirewall::new(firewall, None, "root", "opnsense").await,
);
let topology = OpnSenseTopology {
node_exporter: opnsense.clone(),
};
let inventory = Inventory::empty();
let node_exporter_score = NodeExporterScore {};
harmony_cli::run(
inventory,
topology,
vec![Box::new(node_exporter_score)],
None,
)
.await
.unwrap();
}
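If the score succeeds, metrics should be scrapable from the firewall (a sketch; 9100 is node_exporter's conventional port, which this example does not override):
```bash
curl -s http://192.168.1.1:9100/metrics | head -n 5
```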

View File

@@ -0,0 +1,18 @@
[package]
name = "example-postgresql"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }

View File

@@ -0,0 +1,26 @@
use harmony::{
inventory::Inventory,
modules::postgresql::{PostgreSQLScore, capability::PostgreSQLConfig},
topology::K8sAnywhereTopology,
};
#[tokio::main]
async fn main() {
let postgresql = PostgreSQLScore {
config: PostgreSQLConfig {
cluster_name: "harmony-postgres-example".to_string(), // Override default name
namespace: "harmony-postgres-example".to_string(),
..Default::default() // Use Harmony defaults; they follow CNPG's default values:
// "default" namespace, 1 instance, 1Gi storage
},
};
harmony_cli::run(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
vec![Box::new(postgresql)],
None,
)
.await
.unwrap();
}
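After the run, CNPG's conventions make the result easy to inspect (a sketch: CNPG exposes a `<cluster_name>-rw` read-write service and stores app credentials in a `<cluster_name>-app` secret):
```bash
kubectl -n harmony-postgres-example get clusters.postgresql.cnpg.io
kubectl -n harmony-postgres-example get svc harmony-postgres-example-rw
```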

View File

@@ -0,0 +1,18 @@
[package]
name = "example-public-postgres"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }

View File

@@ -0,0 +1,38 @@
use harmony::{
inventory::Inventory,
modules::postgresql::{
K8sPostgreSQLScore, PostgreSQLConnectionScore, PublicPostgreSQLScore,
capability::PostgreSQLConfig,
},
topology::K8sAnywhereTopology,
};
#[tokio::main]
async fn main() {
let postgres = PublicPostgreSQLScore {
config: PostgreSQLConfig {
cluster_name: "harmony-postgres-example".to_string(), // Override default name
namespace: "harmony-public-postgres".to_string(),
..Default::default() // Use Harmony defaults; they follow CNPG's default values:
// 1 instance, 1Gi storage
},
hostname: "postgrestest.sto1.nationtech.io".to_string(),
};
let test_connection = PostgreSQLConnectionScore {
name: "harmony-postgres-example".to_string(),
namespace: "harmony-public-postgres".to_string(),
cluster_name: "harmony-postgres-example".to_string(),
hostname: Some("postgrestest.sto1.nationtech.io".to_string()),
port_override: Some(443),
};
harmony_cli::run(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
vec![Box::new(postgres), Box::new(test_connection)],
None,
)
.await
.unwrap();
}

View File

@@ -1,4 +1,4 @@
-use std::{path::PathBuf, sync::Arc};
+use std::{collections::HashMap, path::PathBuf, sync::Arc};
use harmony::{
inventory::Inventory,
@@ -10,20 +10,22 @@ use harmony::{
},
topology::K8sAnywhereTopology,
};
-use harmony_types::net::Url;
+use harmony_types::{k8s_name::K8sName, net::Url};
#[tokio::main]
async fn main() {
let application = Arc::new(RustWebapp {
name: "test-rhob-monitoring".to_string(),
dns: "test-rhob-monitoring.harmony.mcd".to_string(),
project_root: PathBuf::from("./webapp"), // Relative from 'harmony-path' param
framework: Some(RustWebFramework::Leptos),
service_port: 3000,
});
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
name: K8sName("test-discord".to_string()),
url: Url::Url(url::Url::parse("https://discord.doesnt.exist.com").unwrap()),
selectors: vec![],
};
let app = ApplicationScore {

View File

@@ -1,3 +1,4 @@
Dockerfile.harmony
.harmony_generated
harmony
webapp

View File

@@ -1,4 +1,4 @@
use std::{path::PathBuf, sync::Arc};
use std::{collections::HashMap, path::PathBuf, sync::Arc};
use harmony::{
inventory::Inventory,
@@ -14,19 +14,22 @@ use harmony::{
topology::K8sAnywhereTopology,
};
use harmony_macros::hurl;
use harmony_types::k8s_name::K8sName;
#[tokio::main]
async fn main() {
let application = Arc::new(RustWebapp {
name: "harmony-example-rust-webapp".to_string(),
dns: "harmony-example-rust-webapp.harmony.mcd".to_string(),
project_root: PathBuf::from("./webapp"),
framework: Some(RustWebFramework::Leptos),
service_port: 3000,
});
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
name: K8sName("test-discord".to_string()),
url: hurl!("https://discord.doesnt.exist.com"),
selectors: vec![],
};
let webhook_receiver = WebhookReceiver {

View File

@@ -0,0 +1,7 @@
apiVersion: v2
name: harmony-example-rust-webapp-chart
description: A Helm chart for the harmony-example-rust-webapp web application.
type: application
version: 0.1.0
appVersion: "latest"

View File

@@ -0,0 +1,16 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "chart.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}

View File

@@ -0,0 +1,23 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "chart.fullname" . }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ include "chart.name" . }}
template:
metadata:
labels:
app: {{ include "chart.name" . }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 3000
protocol: TCP

View File

@@ -0,0 +1,35 @@
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "chart.fullname" . }}
annotations:
{{- toYaml .Values.ingress.annotations | nindent 4 }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
pathType: {{ .pathType }}
backend:
service:
name: {{ include "chart.fullname" $ }}
port:
number: 3000
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "chart.fullname" . }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: 3000
protocol: TCP
name: http
selector:
app: {{ include "chart.name" . }}

View File

@@ -0,0 +1,34 @@
# Default values for harmony-example-rust-webapp-chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: hub.nationtech.io/harmony/harmony-example-rust-webapp
pullPolicy: IfNotPresent
# Overridden by the chart's appVersion
tag: "latest"
service:
type: ClusterIP
port: 3000
ingress:
enabled: true
# Annotations for cert-manager to handle SSL.
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
# Add other annotations like nginx ingress class if needed
# kubernetes.io/ingress.class: nginx
hosts:
- host: chart-example.local
paths:
- path: /
pathType: ImplementationSpecific
tls:
- secretName: harmony-example-rust-webapp-tls
hosts:
- chart-example.local

View File

@@ -2,12 +2,11 @@ use harmony::{
inventory::Inventory,
modules::{
application::{
-ApplicationScore, RustWebFramework, RustWebapp,
-features::{PackagingDeployment, rhob_monitoring::Monitoring},
+features::{rhob_monitoring::Monitoring, PackagingDeployment}, ApplicationScore, RustWebFramework, RustWebapp
},
monitoring::alert_channel::discord_alert_channel::DiscordWebhook,
},
-topology::K8sAnywhereTopology,
+topology::{K8sAnywhereTopology, LocalhostTopology},
};
use harmony_macros::hurl;
use std::{path::PathBuf, sync::Arc};
@@ -22,8 +21,8 @@ async fn main() {
});
let discord_webhook = DiscordWebhook {
name: "harmony_demo".to_string(),
url: hurl!("http://not_a_url.com"),
name: "harmony-demo".to_string(),
url: hurl!("https://discord.com/api/webhooks/1415391405681021050/V6KzV41vQ7yvbn7BchejRu9C8OANxy0i2ESZOz2nvCxG8xAY3-2i3s5MS38k568JKTzH"),
};
let app = ApplicationScore {

View File

@@ -10,12 +10,14 @@ use harmony::{
topology::K8sAnywhereTopology,
};
use harmony_macros::hurl;
use harmony_types::k8s_name::K8sName;
use std::{path::PathBuf, sync::Arc};
#[tokio::main]
async fn main() {
let application = Arc::new(RustWebapp {
name: "harmony-example-tryrust".to_string(),
dns: "tryrust.example.harmony.mcd".to_string(),
project_root: PathBuf::from("./tryrust.org"), // <== Project root, in this case it is a
// submodule
framework: Some(RustWebFramework::Leptos),
@@ -31,8 +33,9 @@ async fn main() {
Box::new(Monitoring {
application: application.clone(),
alert_receiver: vec![Box::new(DiscordWebhook {
name: "test-discord".to_string(),
name: K8sName("test-discord".to_string()),
url: hurl!("https://discord.doesnt.exist.com"),
selectors: vec![],
})],
}),
],

View File

@@ -152,10 +152,10 @@ impl PhysicalHost {
pub fn parts_list(&self) -> String {
let PhysicalHost {
id,
-category,
+category: _,
network,
storage,
-labels,
+labels: _,
memory_modules,
cpus,
} = self;
@@ -226,8 +226,8 @@ impl PhysicalHost {
speed_mhz,
manufacturer,
part_number,
-serial_number,
-rank,
+serial_number: _,
+rank: _,
} = mem;
parts_list.push_str(&format!(
"\n{}Gb, {}Mhz, Manufacturer ({}), Part Number ({})",

View File

@@ -4,6 +4,8 @@ use std::error::Error;
use async_trait::async_trait;
use derive_new::new;
use crate::inventory::HostRole;
use super::{
data::Version, executors::ExecutorError, inventory::Inventory, topology::PreparationError,
};
@@ -152,6 +154,12 @@ pub struct InterpretError {
msg: String,
}
impl From<InterpretError> for String {
fn from(e: InterpretError) -> String {
format!("InterpretError : {}", e.msg)
}
}
impl std::fmt::Display for InterpretError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(&self.msg)

View File

@@ -1,4 +1,6 @@
mod repository;
use std::fmt;
pub use repository::*;
#[derive(Debug, new, Clone)]
@@ -69,5 +71,14 @@ pub enum HostRole {
Bootstrap,
ControlPlane,
Worker,
Storage,
}
impl fmt::Display for HostRole {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
HostRole::Bootstrap => write!(f, "Bootstrap"),
HostRole::ControlPlane => write!(f, "ControlPlane"),
HostRole::Worker => write!(f, "Worker"),
HostRole::Storage => write!(f, "Storage"),
}
}
}
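A quick sketch of what the Display impl above yields (illustrative only):
```rust
assert_eq!(HostRole::Worker.to_string(), "Worker");
assert_eq!(HostRole::Storage.to_string(), "Storage");
```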

View File

@@ -67,16 +67,16 @@ impl<T: Topology> Maestro<T> {
}
}
-pub fn register_all(&mut self, mut scores: ScoreVec<T>) {
-let mut score_mut = self.scores.write().expect("Should acquire lock");
-score_mut.append(&mut scores);
-}
fn is_topology_initialized(&self) -> bool {
self.topology_state.status == TopologyStatus::Success
|| self.topology_state.status == TopologyStatus::Noop
}
+pub fn register_all(&mut self, mut scores: ScoreVec<T>) {
+let mut score_mut = self.scores.write().expect("Should acquire lock");
+score_mut.append(&mut scores);
+}
pub async fn interpret(&self, score: Box<dyn Score<T>>) -> Result<Outcome, InterpretError> {
if !self.is_topology_initialized() {
warn!(

View File

@@ -0,0 +1,64 @@
use async_trait::async_trait;
use crate::topology::k8s_anywhere::K8sAnywhereConfig;
use crate::topology::{K8sAnywhereTopology, PreparationError, PreparationOutcome, Topology};
pub struct FailoverTopology<T> {
pub primary: T,
pub replica: T,
}
#[async_trait]
impl<T: Topology + Send + Sync> Topology for FailoverTopology<T> {
fn name(&self) -> &str {
"FailoverTopology"
}
async fn ensure_ready(&self) -> Result<PreparationOutcome, PreparationError> {
let primary_outcome = self.primary.ensure_ready().await?;
let replica_outcome = self.replica.ensure_ready().await?;
match (primary_outcome, replica_outcome) {
(PreparationOutcome::Noop, PreparationOutcome::Noop) => Ok(PreparationOutcome::Noop),
(p, r) => {
let mut details = Vec::new();
if let PreparationOutcome::Success { details: d } = p {
details.push(format!("Primary: {}", d));
}
if let PreparationOutcome::Success { details: d } = r {
details.push(format!("Replica: {}", d));
}
Ok(PreparationOutcome::Success {
details: details.join(", "),
})
}
}
}
}
impl FailoverTopology<K8sAnywhereTopology> {
/// Creates a new `FailoverTopology` from environment variables.
///
/// Expects two environment variables:
/// - `HARMONY_FAILOVER_TOPOLOGY_K8S_PRIMARY`: Comma-separated `key=value` pairs, e.g.,
/// `kubeconfig=/path/to/primary.kubeconfig,context=primary-ctx`
/// - `HARMONY_FAILOVER_TOPOLOGY_K8S_REPLICA`: Same format for the replica.
///
/// Parses `kubeconfig` (path to a kubeconfig file) and `context` (Kubernetes context name),
/// and constructs `K8sAnywhereConfig` with local installs disabled (`use_local_k3d=false`,
/// `autoinstall=false`); `use_system_kubeconfig` is enabled only when no kubeconfig path is given.
/// `harmony_profile` is read from `HARMONY_PROFILE` env or defaults to `"dev"`.
///
/// Panics if required env vars are missing or malformed.
pub fn from_env() -> Self {
let primary_config =
K8sAnywhereConfig::remote_k8s_from_env_var("HARMONY_FAILOVER_TOPOLOGY_K8S_PRIMARY");
let replica_config =
K8sAnywhereConfig::remote_k8s_from_env_var("HARMONY_FAILOVER_TOPOLOGY_K8S_REPLICA");
let primary = K8sAnywhereTopology::with_config(primary_config);
let replica = K8sAnywhereTopology::with_config(replica_config);
Self { primary, replica }
}
}
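A minimal usage sketch, assuming the two env vars documented above are exported (values are placeholders):
```rust
// e.g. HARMONY_FAILOVER_TOPOLOGY_K8S_PRIMARY="context=primary-ctx"
//      HARMONY_FAILOVER_TOPOLOGY_K8S_REPLICA="context=replica-ctx"
// (the Topology trait must be in scope for .name())
let failover = FailoverTopology::<K8sAnywhereTopology>::from_env();
assert_eq!(failover.name(), "FailoverTopology");
```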

View File

@@ -1,4 +1,5 @@
use async_trait::async_trait;
use brocade::PortOperatingMode;
use harmony_macros::ip;
use harmony_types::{
id::Id,
@@ -8,9 +9,9 @@ use harmony_types::{
use log::debug;
use log::info;
-use crate::infra::network_manager::OpenShiftNmStateNetworkManager;
-use crate::topology::PxeOptions;
-use crate::{data::FileContent, executors::ExecutorError};
+use crate::{data::FileContent, executors::ExecutorError, topology::node_exporter::NodeExporter};
+use crate::{infra::network_manager::OpenShiftNmStateNetworkManager, topology::PortConfig};
+use crate::{modules::inventory::HarmonyDiscoveryStrategy, topology::PxeOptions};
use super::{
DHCPStaticEntry, DhcpServer, DnsRecord, DnsRecordType, DnsServer, Firewall, HostNetworkConfig,
@@ -18,7 +19,6 @@ use super::{
NetworkManager, PreparationError, PreparationOutcome, Router, Switch, SwitchClient,
SwitchError, TftpServer, Topology, k8s::K8sClient,
};
use std::sync::{Arc, OnceLock};
#[derive(Debug, Clone)]
@@ -31,6 +31,7 @@ pub struct HAClusterTopology {
pub tftp_server: Arc<dyn TftpServer>,
pub http_server: Arc<dyn HttpServer>,
pub dns_server: Arc<dyn DnsServer>,
pub node_exporter: Arc<dyn NodeExporter>,
pub switch_client: Arc<dyn SwitchClient>,
pub bootstrap_host: LogicalHost,
pub control_plane: Vec<LogicalHost>,
@@ -115,6 +116,7 @@ impl HAClusterTopology {
tftp_server: dummy_infra.clone(),
http_server: dummy_infra.clone(),
dns_server: dummy_infra.clone(),
node_exporter: dummy_infra.clone(),
switch_client: dummy_infra.clone(),
bootstrap_host: dummy_host,
control_plane: vec![],
@@ -298,6 +300,13 @@ impl Switch for HAClusterTopology {
Ok(())
}
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError> {
todo!()
}
async fn configure_interface(&self, ports: &Vec<PortConfig>) -> Result<(), SwitchError> {
todo!()
}
}
#[async_trait]
@@ -312,6 +321,23 @@ impl NetworkManager for HAClusterTopology {
async fn configure_bond(&self, config: &HostNetworkConfig) -> Result<(), NetworkError> {
self.network_manager().await.configure_bond(config).await
}
//TODO add snmp here
}
#[async_trait]
impl NodeExporter for HAClusterTopology {
async fn ensure_initialized(&self) -> Result<(), ExecutorError> {
self.node_exporter.ensure_initialized().await
}
async fn commit_config(&self) -> Result<(), ExecutorError> {
self.node_exporter.commit_config().await
}
async fn reload_restart(&self) -> Result<(), ExecutorError> {
self.node_exporter.reload_restart().await
}
}
#[derive(Debug)]
@@ -501,6 +527,21 @@ impl DnsServer for DummyInfra {
}
}
#[async_trait]
impl NodeExporter for DummyInfra {
async fn ensure_initialized(&self) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn commit_config(&self) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn reload_restart(&self) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
}
#[async_trait]
impl SwitchClient for DummyInfra {
async fn setup(&self) -> Result<(), SwitchError> {
@@ -521,4 +562,10 @@ impl SwitchClient for DummyInfra {
) -> Result<u8, SwitchError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError> {
todo!()
}
async fn configure_interface(&self, ports: &Vec<PortConfig>) -> Result<(), SwitchError> {
todo!()
}
}

View File

@@ -1,4 +1,4 @@
-use std::time::Duration;
+use std::{collections::HashMap, time::Duration};
use derive_new::new;
use k8s_openapi::{
@@ -7,6 +7,7 @@ use k8s_openapi::{
apps::v1::Deployment,
core::v1::{Node, Pod, ServiceAccount},
},
apiextensions_apiserver::pkg::apis::apiextensions::v1::CustomResourceDefinition,
apimachinery::pkg::version::Info,
};
use kube::{
@@ -15,7 +16,7 @@ use kube::{
Api, AttachParams, DeleteParams, ListParams, ObjectList, Patch, PatchParams, ResourceExt,
},
config::{KubeConfigOptions, Kubeconfig},
-core::ErrorResponse,
+core::{DynamicResourceScope, ErrorResponse},
discovery::{ApiCapabilities, Scope},
error::DiscoveryError,
runtime::reflector::Lookup,
@@ -27,7 +28,7 @@ use kube::{
};
use log::{debug, error, trace, warn};
use serde::{Serialize, de::DeserializeOwned};
-use serde_json::json;
+use serde_json::{Value, json};
use similar::TextDiff;
use tokio::{io::AsyncReadExt, time::sleep};
use url::Url;
@@ -64,6 +65,149 @@ impl K8sClient {
})
}
/// Returns true if any deployment in the given namespace matching the label selector
/// has status.availableReplicas > 0 (or condition Available=True).
pub async fn has_healthy_deployment_with_label(
&self,
namespace: &str,
label_selector: &str,
) -> Result<bool, Error> {
let api: Api<Deployment> = Api::namespaced(self.client.clone(), namespace);
let lp = ListParams::default().labels(label_selector);
let list = api.list(&lp).await?;
for d in list.items {
// Check AvailableReplicas > 0 or Available condition
let available = d
.status
.as_ref()
.and_then(|s| s.available_replicas)
.unwrap_or(0);
if available > 0 {
return Ok(true);
}
// Fallback: scan conditions
if let Some(conds) = d.status.as_ref().and_then(|s| s.conditions.as_ref()) {
if conds
.iter()
.any(|c| c.type_ == "Available" && c.status == "True")
{
return Ok(true);
}
}
}
Ok(false)
}
/// Cluster-wide: returns namespaces that have at least one healthy deployment
/// matching the label selector (equivalent to kubectl -A -l ...).
pub async fn list_namespaces_with_healthy_deployments(
&self,
label_selector: &str,
) -> Result<Vec<String>, Error> {
let api: Api<Deployment> = Api::all(self.client.clone());
let lp = ListParams::default().labels(label_selector);
let list = api.list(&lp).await?;
let mut healthy_ns: HashMap<String, bool> = HashMap::new();
for d in list.items {
let ns = match d.metadata.namespace.clone() {
Some(n) => n,
None => continue,
};
let available = d
.status
.as_ref()
.and_then(|s| s.available_replicas)
.unwrap_or(0);
let is_healthy = if available > 0 {
true
} else {
d.status
.as_ref()
.and_then(|s| s.conditions.as_ref())
.map(|conds| {
conds
.iter()
.any(|c| c.type_ == "Available" && c.status == "True")
})
.unwrap_or(false)
};
if is_healthy {
healthy_ns.insert(ns, true);
}
}
Ok(healthy_ns.into_keys().collect())
}
/// Get the application-controller ServiceAccount name (fallback to default)
pub async fn get_controller_service_account_name(
&self,
ns: &str,
) -> Result<Option<String>, Error> {
let api: Api<Deployment> = Api::namespaced(self.client.clone(), ns);
let lp = ListParams::default().labels("app.kubernetes.io/component=controller");
let list = api.list(&lp).await?;
if let Some(dep) = list.items.get(0) {
if let Some(sa) = dep
.spec
.as_ref()
.and_then(|ds| ds.template.spec.as_ref())
.and_then(|ps| ps.service_account_name.clone())
{
return Ok(Some(sa));
}
}
Ok(None)
}
// List ClusterRoleBindings dynamically and return as JSON values
pub async fn list_clusterrolebindings_json(&self) -> Result<Vec<Value>, Error> {
let gvk = kube::api::GroupVersionKind::gvk(
"rbac.authorization.k8s.io",
"v1",
"ClusterRoleBinding",
);
let ar = kube::api::ApiResource::from_gvk(&gvk);
let api: Api<kube::api::DynamicObject> = Api::all_with(self.client.clone(), &ar);
let crbs = api.list(&ListParams::default()).await?;
let mut out = Vec::new();
for o in crbs {
let v = serde_json::to_value(&o).unwrap_or(Value::Null);
out.push(v);
}
Ok(out)
}
/// Determine if Argo controller in ns has cluster-wide permissions via CRBs
// TODO This does not belong in the generic k8s client, should be refactored at some point
pub async fn is_service_account_cluster_wide(&self, sa: &str, ns: &str) -> Result<bool, Error> {
let crbs = self.list_clusterrolebindings_json().await?;
let sa_user = format!("system:serviceaccount:{}:{}", ns, sa);
for crb in crbs {
if let Some(subjects) = crb.get("subjects").and_then(|s| s.as_array()) {
for subj in subjects {
let kind = subj.get("kind").and_then(|v| v.as_str()).unwrap_or("");
let name = subj.get("name").and_then(|v| v.as_str()).unwrap_or("");
let subj_ns = subj.get("namespace").and_then(|v| v.as_str()).unwrap_or("");
if (kind == "ServiceAccount" && name == sa && subj_ns == ns)
|| (kind == "User" && name == sa_user)
{
return Ok(true);
}
}
}
}
Ok(false)
}
pub async fn has_crd(&self, name: &str) -> Result<bool, Error> {
let api: Api<CustomResourceDefinition> = Api::all(self.client.clone());
let lp = ListParams::default().fields(&format!("metadata.name={}", name));
let crds = api.list(&lp).await?;
Ok(!crds.items.is_empty())
}
pub async fn service_account_api(&self, namespace: &str) -> Api<ServiceAccount> {
let api: Api<ServiceAccount> = Api::namespaced(self.client.clone(), namespace);
api
@@ -96,6 +240,23 @@ impl K8sClient {
resource.get(name).await
}
pub async fn get_secret_json_value(
&self,
name: &str,
namespace: Option<&str>,
) -> Result<DynamicObject, Error> {
self.get_resource_json_value(
name,
namespace,
&GroupVersionKind {
group: "".to_string(),
version: "v1".to_string(),
kind: "Secret".to_string(),
},
)
.await
}
pub async fn get_deployment(
&self,
name: &str,
@@ -339,6 +500,169 @@ impl K8sClient {
}
}
fn get_api_for_dynamic_object(
&self,
object: &DynamicObject,
ns: Option<&str>,
) -> Result<Api<DynamicObject>, Error> {
let api_resource = object
.types
.as_ref()
.and_then(|t| {
let parts: Vec<&str> = t.api_version.split('/').collect();
match parts.as_slice() {
[version] => Some(ApiResource::from_gvk(&GroupVersionKind::gvk(
"", version, &t.kind,
))),
[group, version] => Some(ApiResource::from_gvk(&GroupVersionKind::gvk(
group, version, &t.kind,
))),
_ => None,
}
})
.ok_or_else(|| {
Error::BuildRequest(kube::core::request::Error::Validation(
"Invalid apiVersion in DynamicObject {object:#?}".to_string(),
))
})?;
match ns {
Some(ns) => Ok(Api::namespaced_with(self.client.clone(), ns, &api_resource)),
None => Ok(Api::default_namespaced_with(
self.client.clone(),
&api_resource,
)),
}
}
pub async fn apply_dynamic_many(
&self,
resource: &[DynamicObject],
namespace: Option<&str>,
force_conflicts: bool,
) -> Result<Vec<DynamicObject>, Error> {
let mut result = Vec::new();
for r in resource.iter() {
result.push(self.apply_dynamic(r, namespace, force_conflicts).await?);
}
Ok(result)
}
/// Apply DynamicObject resource to the cluster
pub async fn apply_dynamic(
&self,
resource: &DynamicObject,
namespace: Option<&str>,
force_conflicts: bool,
) -> Result<DynamicObject, Error> {
// Build API for this dynamic object
let api = self.get_api_for_dynamic_object(resource, namespace)?;
let name = resource
.metadata
.name
.as_ref()
.ok_or_else(|| {
Error::BuildRequest(kube::core::request::Error::Validation(
"DynamicObject must have metadata.name".to_string(),
))
})?
.as_str();
debug!(
"Applying dynamic resource kind={:?} apiVersion={:?} name='{}' ns={:?}",
resource.types.as_ref().map(|t| &t.kind),
resource.types.as_ref().map(|t| &t.api_version),
name,
namespace
);
trace!(
"Dynamic resource payload:\n{:#}",
serde_json::to_value(resource).unwrap_or(serde_json::Value::Null)
);
// Using same field manager as in apply()
let mut patch_params = PatchParams::apply("harmony");
patch_params.force = force_conflicts;
if *crate::config::DRY_RUN {
// Dry-run path: fetch current, show diff, and return appropriate object
match api.get(name).await {
Ok(current) => {
trace!("Received current dynamic value {current:#?}");
println!("\nPerforming dry-run for resource: '{}'", name);
// Serialize current and new, and strip status from current if present
let mut current_yaml =
serde_yaml::to_value(&current).unwrap_or_else(|_| serde_yaml::Value::Null);
if let Some(map) = current_yaml.as_mapping_mut() {
if map.contains_key(&serde_yaml::Value::String("status".to_string())) {
let removed =
map.remove(&serde_yaml::Value::String("status".to_string()));
trace!("Removed status from current dynamic object: {:?}", removed);
} else {
trace!(
"Did not find status entry for current dynamic object {}/{}",
current.metadata.namespace.as_deref().unwrap_or(""),
current.metadata.name.as_deref().unwrap_or("")
);
}
}
let current_yaml = serde_yaml::to_string(&current_yaml)
.unwrap_or_else(|_| "Failed to serialize current resource".to_string());
let new_yaml = serde_yaml::to_string(resource)
.unwrap_or_else(|_| "Failed to serialize new resource".to_string());
if current_yaml == new_yaml {
println!("No changes detected.");
return Ok(current);
}
println!("Changes detected:");
let diff = TextDiff::from_lines(&current_yaml, &new_yaml);
for change in diff.iter_all_changes() {
let sign = match change.tag() {
similar::ChangeTag::Delete => "-",
similar::ChangeTag::Insert => "+",
similar::ChangeTag::Equal => " ",
};
print!("{}{}", sign, change);
}
// Return the incoming resource as the would-be applied state
Ok(resource.clone())
}
Err(Error::Api(ErrorResponse { code: 404, .. })) => {
println!("\nPerforming dry-run for new resource: '{}'", name);
println!(
"Resource does not exist. It would be created with the following content:"
);
let new_yaml = serde_yaml::to_string(resource)
.unwrap_or_else(|_| "Failed to serialize new resource".to_string());
for line in new_yaml.lines() {
println!("+{}", line);
}
Ok(resource.clone())
}
Err(e) => {
error!("Failed to get dynamic resource '{}': {}", name, e);
Err(e)
}
}
} else {
// Real apply via server-side apply
debug!("Patching (server-side apply) dynamic resource '{}'", name);
api.patch(name, &patch_params, &Patch::Apply(resource))
.await
.map_err(|e| {
error!("Failed to apply dynamic resource '{}': {}", name, e);
e
})
}
}
/// Apply a resource in namespace
///
/// See `kubectl apply` for more information on the expected behavior of this function
@@ -451,7 +775,20 @@ impl K8sClient {
{
let mut result = Vec::new();
for r in resource.iter() {
-result.push(self.apply(r, ns).await?);
+let apply_result = self.apply(r, ns).await;
+if apply_result.is_err() {
+// NOTE: Be careful with this one, it may leak sensitive information in logs.
+// Reducing it to debug might be enough, since debug logs are already known to be
+// unsafe, but keeping it at warn makes it much easier to understand what is going on.
+// So be it for now.
+warn!(
+"Failed to apply k8s resource: {}",
+serde_json::to_string_pretty(r).map_err(|e| Error::SerdeError(e))?
+);
+}
+result.push(apply_result?);
}
Ok(result)
@@ -618,6 +955,23 @@ impl K8sClient {
}
pub async fn from_kubeconfig(path: &str) -> Option<K8sClient> {
Self::from_kubeconfig_with_opts(path, &KubeConfigOptions::default()).await
}
pub async fn from_kubeconfig_with_context(
path: &str,
context: Option<String>,
) -> Option<K8sClient> {
let mut opts = KubeConfigOptions::default();
opts.context = context;
Self::from_kubeconfig_with_opts(path, &opts).await
}
pub async fn from_kubeconfig_with_opts(
path: &str,
opts: &KubeConfigOptions,
) -> Option<K8sClient> {
let k = match Kubeconfig::read_from(path) {
Ok(k) => k,
Err(e) => {
@@ -625,13 +979,9 @@ impl K8sClient {
return None;
}
};
Some(K8sClient::new(
-Client::try_from(
-Config::from_custom_kubeconfig(k, &KubeConfigOptions::default())
-.await
-.unwrap(),
-)
-.unwrap(),
+Client::try_from(Config::from_custom_kubeconfig(k, &opts).await.unwrap()).unwrap(),
))
}
}
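A sketch of the new context-pinned constructor (path and context name are placeholders; `from_kubeconfig_with_context` returns an Option):
```rust
// Inside an async fn:
let client = K8sClient::from_kubeconfig_with_context(
    "/home/user/.kube/config",
    Some("staging-admin".to_string()),
)
.await
.expect("could not load kubeconfig");
```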

View File

@@ -2,12 +2,13 @@ use std::{collections::BTreeMap, process::Command, sync::Arc, time::Duration};
use async_trait::async_trait;
use base64::{Engine, engine::general_purpose};
use harmony_types::rfc1123::Rfc1123Name;
use k8s_openapi::api::{
core::v1::Secret,
rbac::v1::{ClusterRoleBinding, RoleRef, Subject},
};
use kube::api::{DynamicObject, GroupVersionKind, ObjectMeta};
-use log::{debug, info, warn};
+use log::{debug, info, trace, warn};
use serde::Serialize;
use tokio::sync::OnceCell;
@@ -34,16 +35,17 @@ use crate::{
service_monitor::ServiceMonitor,
},
},
okd::route::OKDTlsPassthroughScore,
prometheus::{
k8s_prometheus_alerting_score::K8sPrometheusCRDAlertingScore,
prometheus::PrometheusMonitoring, rhob_alerting_score::RHOBAlertingScore,
},
},
score::Score,
-topology::ingress::Ingress,
+topology::{TlsRoute, TlsRouter, ingress::Ingress},
};
-use super::{
+use super::super::{
DeploymentTarget, HelmCommand, K8sclient, MultiTargetTopology, PreparationError,
PreparationOutcome, Topology,
k8s::K8sClient,
@@ -88,6 +90,7 @@ pub struct K8sAnywhereTopology {
#[async_trait]
impl K8sclient for K8sAnywhereTopology {
async fn k8s_client(&self) -> Result<Arc<K8sClient>, String> {
trace!("getting k8s client");
let state = match self.k8s_state.get() {
Some(state) => state,
None => return Err("K8s state not initialized yet".to_string()),
@@ -102,6 +105,41 @@ impl K8sclient for K8sAnywhereTopology {
}
}
#[async_trait]
impl TlsRouter for K8sAnywhereTopology {
async fn get_wildcard_domain(&self) -> Result<Option<String>, String> {
todo!()
}
/// Returns the port that this router exposes externally.
async fn get_router_port(&self) -> u16 {
// TODO un-hardcode this :)
443
}
async fn install_route(&self, route: TlsRoute) -> Result<(), String> {
let distro = self
.get_k8s_distribution()
.await
.map_err(|e| format!("Could not get k8s distribution {e}"))?;
match distro {
KubernetesDistribution::OpenshiftFamily => {
OKDTlsPassthroughScore {
name: Rfc1123Name::try_from(route.backend_info_string().as_str())?,
route,
}
.interpret(&Inventory::empty(), self)
.await?;
Ok(())
}
KubernetesDistribution::K3sFamily | KubernetesDistribution::Default => Err(format!(
"Distribution not supported yet for Tlsrouter {distro:?}"
)),
}
}
}
#[async_trait]
impl Grafana for K8sAnywhereTopology {
async fn ensure_grafana_operator(
@@ -343,6 +381,7 @@ impl K8sAnywhereTopology {
pub async fn get_k8s_distribution(&self) -> Result<&KubernetesDistribution, PreparationError> {
self.k8s_distribution
.get_or_try_init(async || {
debug!("Trying to detect k8s distribution");
let client = self.k8s_client().await.unwrap();
let discovery = client.discovery().await.map_err(|e| {
@@ -358,14 +397,17 @@ impl K8sAnywhereTopology {
.groups()
.any(|g| g.name() == "project.openshift.io")
{
info!("Found KubernetesDistribution OpenshiftFamily");
return Ok(KubernetesDistribution::OpenshiftFamily);
}
// K3d / K3s
if version.git_version.contains("k3s") {
info!("Found KubernetesDistribution K3sFamily");
return Ok(KubernetesDistribution::K3sFamily);
}
info!("Could not identify KubernetesDistribution, using Default");
return Ok(KubernetesDistribution::Default);
})
.await
@@ -613,7 +655,7 @@ impl K8sAnywhereTopology {
}
async fn try_load_kubeconfig(&self, path: &str) -> Option<K8sClient> {
K8sClient::from_kubeconfig(path).await
K8sClient::from_kubeconfig_with_context(path, self.config.k8s_context.clone()).await
}
fn get_k3d_installation_score(&self) -> K3DInstallationScore {
@@ -651,7 +693,14 @@ impl K8sAnywhereTopology {
return Ok(Some(K8sState {
client: Arc::new(client),
source: K8sSource::Kubeconfig,
message: format!("Loaded k8s client from kubeconfig {kubeconfig}"),
message: format!(
"Loaded k8s client from kubeconfig {kubeconfig} using context {}",
                        self.config
                            .k8s_context
                            .as_deref()
                            .unwrap_or("(current)")
),
}));
}
None => {
@@ -891,12 +940,94 @@ pub struct K8sAnywhereConfig {
/// default: true
pub use_local_k3d: bool,
pub harmony_profile: String,
/// Name of the kubeconfig context to use.
///
/// If None, it will use the current context.
///
/// If the context name is not found, it will fail to initialize.
pub k8s_context: Option<String>,
}
impl K8sAnywhereConfig {
/// Reads an environment variable `env_var` and parses its content :
/// Comma-separated `key=value` pairs, e.g.,
/// `kubeconfig=/path/to/primary.kubeconfig,context=primary-ctx`
///
    /// Then creates a K8sAnywhereConfig from it with local installs disabled (`use_local_k3d=false`,
    /// `autoinstall=false`, `use_system_kubeconfig=false`).
/// `harmony_profile` is read from `HARMONY_PROFILE` env or defaults to `"dev"`.
///
    /// If no kubeconfig path is provided, it falls back to the system kubeconfig.
///
/// Panics if `env_var` is missing or malformed.
pub fn remote_k8s_from_env_var(env_var: &str) -> Self {
Self::remote_k8s_from_env_var_with_profile(env_var, "HARMONY_PROFILE")
}
pub fn remote_k8s_from_env_var_with_profile(env_var: &str, profile_env_var: &str) -> Self {
debug!("Looking for env var named : {env_var}");
let env_var_value = std::env::var(env_var)
.map_err(|e| format!("Missing required env var {env_var} : {e}"))
.unwrap();
info!("Initializing remote k8s from env var value : {env_var_value}");
let mut kubeconfig: Option<String> = None;
let mut k8s_context: Option<String> = None;
for part in env_var_value.split(',') {
let kv: Vec<&str> = part.splitn(2, '=').collect();
if kv.len() == 2 {
match kv[0].trim() {
"kubeconfig" => kubeconfig = Some(kv[1].trim().to_string()),
"context" => k8s_context = Some(kv[1].trim().to_string()),
_ => {}
}
}
}
debug!("Found in {env_var} : kubeconfig {kubeconfig:?} and context {k8s_context:?}");
let use_system_kubeconfig = kubeconfig.is_none();
        if let Ok(kubeconfig_value) = std::env::var("KUBECONFIG") {
kubeconfig.get_or_insert(kubeconfig_value);
}
info!("Loading k8s environment with kubeconfig {kubeconfig:?} and context {k8s_context:?}");
K8sAnywhereConfig {
kubeconfig,
k8s_context,
use_system_kubeconfig,
autoinstall: false,
use_local_k3d: false,
harmony_profile: std::env::var(profile_env_var).unwrap_or_else(|_| "dev".to_string()),
}
}
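A hedged usage sketch for the parser above; the env var name `HARMONY_REMOTE_K8S` and its value are illustrative:

    // Illustrative: point Harmony at a remote cluster via a single env var.
    // export HARMONY_REMOTE_K8S="kubeconfig=/path/to/primary.kubeconfig,context=primary-ctx"
    let cfg = K8sAnywhereConfig::remote_k8s_from_env_var("HARMONY_REMOTE_K8S");
    assert_eq!(cfg.kubeconfig.as_deref(), Some("/path/to/primary.kubeconfig"));
    assert_eq!(cfg.k8s_context.as_deref(), Some("primary-ctx"));
    assert!(!cfg.use_local_k3d && !cfg.autoinstall);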
fn from_env() -> Self {
fn get_kube_config_path() -> Option<std::path::PathBuf> {
// 1. Check for the KUBECONFIG environment variable first (standard practice)
if let Ok(val) = std::env::var("KUBECONFIG") {
if !val.is_empty() {
return Some(std::path::PathBuf::from(val));
}
}
// 2. Use the standard library to find the home directory
// As of recent Rust versions, this is the preferred cross-platform method.
let mut path = std::env::home_dir()?;
// 3. Construct the path to .kube/config
// .push() handles OS-specific separators (\ for Windows, / for Unix)
path.push(".kube");
path.push("config");
Some(path)
}
Self {
kubeconfig: std::env::var("KUBECONFIG").ok().map(|v| v.to_string()),
kubeconfig: get_kube_config_path().map(|s| s.to_string_lossy().into_owned()),
use_system_kubeconfig: std::env::var("HARMONY_USE_SYSTEM_KUBECONFIG")
.map_or_else(|_| false, |v| v.parse().ok().unwrap_or(false)),
autoinstall: std::env::var("HARMONY_AUTOINSTALL")
@@ -908,6 +1039,7 @@ impl K8sAnywhereConfig {
),
use_local_k3d: std::env::var("HARMONY_USE_LOCAL_K3D")
.map_or_else(|_| true, |v| v.parse().ok().unwrap_or(true)),
k8s_context: std::env::var("HARMONY_K8S_CONTEXT").ok(),
}
}
}
@@ -975,36 +1107,68 @@ impl TenantManager for K8sAnywhereTopology {
#[async_trait]
impl Ingress for K8sAnywhereTopology {
//TODO this is specifically for openshift/okd which violates the k8sanywhere idea
async fn get_domain(&self, service: &str) -> Result<String, PreparationError> {
use log::{debug, trace, warn};
let client = self.k8s_client().await?;
if let Some(Some(k8s_state)) = self.k8s_state.get() {
match k8s_state.source {
K8sSource::LocalK3d => Ok(format!("{service}.local.k3d")),
K8sSource::LocalK3d => {
// Local developer UX
return Ok(format!("{service}.local.k3d"));
}
K8sSource::Kubeconfig => {
self.openshift_ingress_operator_available().await?;
trace!("K8sSource is kubeconfig; attempting to detect domain");
let gvk = GroupVersionKind {
group: "operator.openshift.io".into(),
version: "v1".into(),
kind: "IngressController".into(),
};
let ic = client
.get_resource_json_value(
"default",
Some("openshift-ingress-operator"),
&gvk,
)
.await
.map_err(|_| {
PreparationError::new("Failed to fetch IngressController".to_string())
})?;
// 1) Try OpenShift IngressController domain (backward compatible)
if self.openshift_ingress_operator_available().await.is_ok() {
trace!("OpenShift ingress operator detected; using IngressController");
let gvk = GroupVersionKind {
group: "operator.openshift.io".into(),
version: "v1".into(),
kind: "IngressController".into(),
};
let ic = client
.get_resource_json_value(
"default",
Some("openshift-ingress-operator"),
&gvk,
)
.await
.map_err(|_| {
PreparationError::new(
"Failed to fetch IngressController".to_string(),
)
})?;
match ic.data["status"]["domain"].as_str() {
Some(domain) => Ok(format!("{service}.{domain}")),
None => Err(PreparationError::new("Could not find domain".to_string())),
if let Some(domain) = ic.data["status"]["domain"].as_str() {
return Ok(format!("{service}.{domain}"));
} else {
warn!("OpenShift IngressController present but no status.domain set");
}
} else {
trace!(
"OpenShift ingress operator not detected; trying generic Kubernetes"
);
}
// 2) Try NGINX Ingress Controller common setups
// 2.a) Well-known namespace/name for the controller Service
// - upstream default: namespace "ingress-nginx", service "ingress-nginx-controller"
// - some distros: "ingress-nginx-controller" svc in "ingress-nginx" ns
// If found with LoadBalancer ingress hostname, use its base domain.
if let Some(domain) = try_nginx_lb_domain(&client).await? {
return Ok(format!("{service}.{domain}"));
}
// 3) Fallback: internal cluster DNS suffix (service.namespace.svc.cluster.local)
// We don't have tenant namespace here, so we fallback to 'default' with a warning.
warn!(
"Could not determine external ingress domain; falling back to internal-only DNS"
);
let internal = format!("{service}.default.svc.cluster.local");
Ok(internal)
}
}
} else {
@@ -1014,3 +1178,248 @@ impl Ingress for K8sAnywhereTopology {
}
}
}
async fn try_nginx_lb_domain(client: &K8sClient) -> Result<Option<String>, PreparationError> {
use log::{debug, trace};
// Try common service path: svc/ingress-nginx-controller in ns/ingress-nginx
let svc_gvk = GroupVersionKind {
group: "".into(), // core
version: "v1".into(),
kind: "Service".into(),
};
let candidates = [
("ingress-nginx", "ingress-nginx-controller"),
("ingress-nginx", "ingress-nginx-controller-internal"),
("ingress-nginx", "ingress-nginx"), // some charts name the svc like this
("kube-system", "ingress-nginx-controller"), // less common but seen
];
for (ns, name) in candidates {
trace!("Checking NGINX Service {ns}/{name} for LoadBalancer hostname");
if let Ok(svc) = client
.get_resource_json_value(ns, Some(name), &svc_gvk)
.await
{
let lb_hosts = svc.data["status"]["loadBalancer"]["ingress"]
.as_array()
.cloned()
.unwrap_or_default();
for entry in lb_hosts {
if let Some(host) = entry.get("hostname").and_then(|v| v.as_str()) {
debug!("Found NGINX LB hostname: {host}");
if let Some(domain) = extract_base_domain(host) {
return Ok(Some(domain.to_string()));
} else {
return Ok(Some(host.to_string())); // already a domain
}
}
if let Some(ip) = entry.get("ip").and_then(|v| v.as_str()) {
// If only an IP is exposed, we can't create a hostname; return None to keep searching
debug!("NGINX LB exposes IP {ip} (no hostname); skipping");
}
}
}
}
Ok(None)
}
fn extract_base_domain(host: &str) -> Option<String> {
    // Heuristic: keep the last 2 labels of the host name.
    // For a managed DNS name like xyz.example.com -> base domain example.com.
    // Note: for an ELB host like a1b2c3d4e5f6abcdef.elb.amazonaws.com this yields
    // "amazonaws.com"; recovering "elb.amazonaws.com" would require special-casing
    // known multi-label TLDs/suffixes.
let parts: Vec<&str> = host.split('.').collect();
if parts.len() >= 2 {
// Very conservative: last 2 labels
Some(parts[parts.len() - 2..].join("."))
} else {
None
}
}
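Spot-checks of the heuristic above (hypothetical assertions, not part of the shipped test suite):

    // With the conservative last-2-labels rule:
    assert_eq!(
        extract_base_domain("xyz.example.com"),
        Some("example.com".to_string())
    );
    assert_eq!(
        extract_base_domain("a1b2c3.elb.amazonaws.com"),
        Some("amazonaws.com".to_string()) // not "elb.amazonaws.com" without a suffix table
    );
    assert_eq!(extract_base_domain("localhost"), None);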
#[cfg(test)]
mod tests {
use super::*;
use std::sync::Mutex;
use std::sync::atomic::{AtomicUsize, Ordering};
static TEST_COUNTER: AtomicUsize = AtomicUsize::new(0);
static ENV_LOCK: Mutex<()> = Mutex::new(());
/// Sets environment variables with unique names to avoid concurrency issues between tests.
/// Returns the names of the (config_var, profile_var) used.
fn setup_env_vars(config_value: Option<&str>, profile_value: Option<&str>) -> (String, String) {
let id = TEST_COUNTER.fetch_add(1, Ordering::SeqCst);
let config_var = format!("TEST_VAR_{}", id);
let profile_var = format!("TEST_PROFILE_{}", id);
unsafe {
if let Some(v) = config_value {
std::env::set_var(&config_var, v);
} else {
std::env::remove_var(&config_var);
}
if let Some(v) = profile_value {
std::env::set_var(&profile_var, v);
} else {
std::env::remove_var(&profile_var);
}
}
(config_var, profile_var)
}
/// Runs a test in a separate thread to avoid polluting the process environment.
fn run_in_isolated_env<F>(f: F)
where
F: FnOnce() + Send + 'static,
{
let handle = std::thread::spawn(f);
handle.join().expect("Test thread panicked");
}
#[test]
fn test_remote_k8s_from_env_var_full() {
let (config_var, profile_var) =
setup_env_vars(Some("kubeconfig=/foo.kc,context=bar"), Some("testprof"));
let cfg =
K8sAnywhereConfig::remote_k8s_from_env_var_with_profile(&config_var, &profile_var);
assert_eq!(cfg.kubeconfig.as_deref(), Some("/foo.kc"));
assert_eq!(cfg.k8s_context.as_deref(), Some("bar"));
assert_eq!(cfg.harmony_profile, "testprof");
assert!(!cfg.use_local_k3d);
assert!(!cfg.autoinstall);
assert!(!cfg.use_system_kubeconfig);
}
#[test]
fn test_remote_k8s_from_env_var_only_kubeconfig() {
let (config_var, profile_var) = setup_env_vars(Some("kubeconfig=/foo.kc"), None);
let cfg =
K8sAnywhereConfig::remote_k8s_from_env_var_with_profile(&config_var, &profile_var);
assert_eq!(cfg.kubeconfig.as_deref(), Some("/foo.kc"));
assert_eq!(cfg.k8s_context, None);
assert_eq!(cfg.harmony_profile, "dev");
}
#[test]
fn test_remote_k8s_from_env_var_only_context() {
let _lock = ENV_LOCK.lock().unwrap_or_else(|e| e.into_inner());
run_in_isolated_env(|| {
unsafe {
std::env::remove_var("KUBECONFIG");
}
let (config_var, profile_var) = setup_env_vars(Some("context=bar"), None);
let cfg =
K8sAnywhereConfig::remote_k8s_from_env_var_with_profile(&config_var, &profile_var);
assert_eq!(cfg.kubeconfig, None);
assert_eq!(cfg.k8s_context.as_deref(), Some("bar"));
});
}
#[test]
fn test_remote_k8s_from_env_var_unknown_key_trim() {
let _lock = ENV_LOCK.lock().unwrap_or_else(|e| e.into_inner());
run_in_isolated_env(|| {
unsafe {
std::env::remove_var("KUBECONFIG");
}
let (config_var, profile_var) = setup_env_vars(
Some(" unknown=bla , kubeconfig= /foo.kc ,context= bar "),
None,
);
let cfg =
K8sAnywhereConfig::remote_k8s_from_env_var_with_profile(&config_var, &profile_var);
assert_eq!(cfg.kubeconfig.as_deref(), Some("/foo.kc"));
assert_eq!(cfg.k8s_context.as_deref(), Some("bar"));
});
}
#[test]
fn test_remote_k8s_from_env_var_empty_malformed() {
let _lock = ENV_LOCK.lock().unwrap_or_else(|e| e.into_inner());
run_in_isolated_env(|| {
unsafe {
std::env::remove_var("KUBECONFIG");
}
let (config_var, profile_var) = setup_env_vars(Some("malformed,no=,equal"), None);
let cfg =
K8sAnywhereConfig::remote_k8s_from_env_var_with_profile(&config_var, &profile_var);
// Unknown/malformed ignored, defaults to None
assert_eq!(cfg.kubeconfig, None);
assert_eq!(cfg.k8s_context, None);
});
}
#[test]
fn test_remote_k8s_from_env_var_kubeconfig_fallback() {
let _lock = ENV_LOCK.lock().unwrap_or_else(|e| e.into_inner());
run_in_isolated_env(|| {
unsafe {
std::env::set_var("KUBECONFIG", "/fallback/path");
}
let (config_var, profile_var) = setup_env_vars(Some("context=bar"), None);
let cfg =
K8sAnywhereConfig::remote_k8s_from_env_var_with_profile(&config_var, &profile_var);
assert_eq!(cfg.kubeconfig.as_deref(), Some("/fallback/path"));
assert_eq!(cfg.k8s_context.as_deref(), Some("bar"));
});
}
#[test]
fn test_remote_k8s_from_env_var_kubeconfig_no_fallback_if_provided() {
let _lock = ENV_LOCK.lock().unwrap_or_else(|e| e.into_inner());
run_in_isolated_env(|| {
unsafe {
std::env::set_var("KUBECONFIG", "/fallback/path");
}
let (config_var, profile_var) =
setup_env_vars(Some("kubeconfig=/primary/path,context=bar"), None);
let cfg =
K8sAnywhereConfig::remote_k8s_from_env_var_with_profile(&config_var, &profile_var);
// Primary path should take precedence
assert_eq!(cfg.kubeconfig.as_deref(), Some("/primary/path"));
assert_eq!(cfg.k8s_context.as_deref(), Some("bar"));
});
}
#[test]
#[should_panic(expected = "Missing required env var")]
fn test_remote_k8s_from_env_var_missing() {
let (config_var, profile_var) = setup_env_vars(None, None);
K8sAnywhereConfig::remote_k8s_from_env_var_with_profile(&config_var, &profile_var);
}
#[test]
fn test_remote_k8s_from_env_var_context_key() {
let (config_var, profile_var) = setup_env_vars(
Some("context=default/api-sto1-harmony-mcd:6443/kube:admin"),
None,
);
let cfg =
K8sAnywhereConfig::remote_k8s_from_env_var_with_profile(&config_var, &profile_var);
assert_eq!(
cfg.k8s_context.as_deref(),
Some("default/api-sto1-harmony-mcd:6443/kube:admin")
);
}
}

View File

@@ -0,0 +1,3 @@
mod k8s_anywhere;
mod postgres;
pub use k8s_anywhere::*;

View File

@@ -0,0 +1,128 @@
use async_trait::async_trait;
use crate::{
interpret::Outcome,
inventory::Inventory,
modules::postgresql::{
K8sPostgreSQLScore,
capability::{PostgreSQL, PostgreSQLConfig, PostgreSQLEndpoint, ReplicationCerts},
},
score::Score,
topology::{K8sAnywhereTopology, K8sclient},
};
use k8s_openapi::api::core::v1::{Secret, Service};
use log::info;
#[async_trait]
impl PostgreSQL for K8sAnywhereTopology {
async fn deploy(&self, config: &PostgreSQLConfig) -> Result<String, String> {
K8sPostgreSQLScore {
config: config.clone(),
}
.interpret(&Inventory::empty(), self)
.await
.map_err(|e| format!("Failed to deploy k8s postgresql : {e}"))?;
Ok(config.cluster_name.clone())
}
/// Extracts PostgreSQL-specific replication certs (PEM format) from a deployed primary cluster.
/// Abstracts away storage/retrieval details (e.g., secrets, files).
async fn get_replication_certs(
&self,
config: &PostgreSQLConfig,
) -> Result<ReplicationCerts, String> {
let cluster_name = &config.cluster_name;
let namespace = &config.namespace;
let k8s_client = self.k8s_client().await.map_err(|e| e.to_string())?;
let replication_secret_name = format!("{cluster_name}-replication");
let replication_secret = k8s_client
.get_resource::<Secret>(&replication_secret_name, Some(namespace))
.await
.map_err(|e| format!("Failed to get {replication_secret_name}: {e}"))?
.ok_or_else(|| format!("Replication secret '{replication_secret_name}' not found"))?;
let ca_secret_name = format!("{cluster_name}-ca");
let ca_secret = k8s_client
.get_resource::<Secret>(&ca_secret_name, Some(namespace))
.await
.map_err(|e| format!("Failed to get {ca_secret_name}: {e}"))?
.ok_or_else(|| format!("CA secret '{ca_secret_name}' not found"))?;
let replication_data = replication_secret
.data
.as_ref()
.ok_or("Replication secret has no data".to_string())?;
let ca_data = ca_secret
.data
.as_ref()
.ok_or("CA secret has no data".to_string())?;
let tls_key_bs = replication_data
.get("tls.key")
.ok_or("missing tls.key in replication secret".to_string())?;
let tls_crt_bs = replication_data
.get("tls.crt")
.ok_or("missing tls.crt in replication secret".to_string())?;
let ca_crt_bs = ca_data
.get("ca.crt")
.ok_or("missing ca.crt in CA secret".to_string())?;
let streaming_replica_key_pem = String::from_utf8_lossy(&tls_key_bs.0).to_string();
let streaming_replica_cert_pem = String::from_utf8_lossy(&tls_crt_bs.0).to_string();
let ca_cert_pem = String::from_utf8_lossy(&ca_crt_bs.0).to_string();
info!("Successfully extracted replication certs for cluster '{cluster_name}'");
Ok(ReplicationCerts {
ca_cert_pem,
streaming_replica_cert_pem,
streaming_replica_key_pem,
})
}
/// Gets the internal/private endpoint (e.g., k8s service FQDN:5432) for the cluster.
async fn get_endpoint(&self, config: &PostgreSQLConfig) -> Result<PostgreSQLEndpoint, String> {
let cluster_name = &config.cluster_name;
let namespace = &config.namespace;
let k8s_client = self.k8s_client().await.map_err(|e| e.to_string())?;
let service_name = format!("{cluster_name}-rw");
let service = k8s_client
.get_resource::<Service>(&service_name, Some(namespace))
.await
.map_err(|e| format!("Failed to get service '{service_name}': {e}"))?
.ok_or_else(|| {
format!("Service '{service_name}' not found for cluster '{cluster_name}")
})?;
let ns = service
.metadata
.namespace
.as_deref()
.unwrap_or("default")
.to_string();
let host = format!("{service_name}.{ns}.svc.cluster.local");
info!("Internal endpoint for '{cluster_name}': {host}:5432");
Ok(PostgreSQLEndpoint { host, port: 5432 })
}
// /// Gets the public/externally routable endpoint if configured (e.g., OKD Route:443 for TLS passthrough).
// /// Returns None if no public endpoint (internal-only cluster).
// /// UNSTABLE: This is opinionated for initial multisite use cases. Networking abstraction is complex
// /// (cf. k8s Ingress -> Gateway API evolution); may move to higher-order Networking/PostgreSQLNetworking trait.
// async fn get_public_endpoint(
// &self,
// cluster_name: &str,
// ) -> Result<Option<PostgreSQLEndpoint>, String> {
// // TODO: Implement OpenShift Route lookup targeting '{cluster_name}-rw' service on port 5432 with TLS passthrough
// // For now, return None assuming internal-only access or manual route configuration
// info!("Public endpoint lookup not implemented for '{cluster_name}', returning None");
// Ok(None)
// }
}

View File

@@ -1,5 +1,9 @@
mod failover;
mod ha_cluster;
pub mod ingress;
pub mod node_exporter;
pub mod opnsense;
pub use failover::*;
use harmony_types::net::IpAddress;
mod host_binding;
mod http;
@@ -13,7 +17,7 @@ pub use k8s_anywhere::*;
pub use localhost::*;
pub mod k8s;
mod load_balancer;
mod router;
pub mod router;
mod tftp;
use async_trait::async_trait;
pub use ha_cluster::*;
@@ -186,7 +190,7 @@ impl TopologyState {
}
}
#[derive(Debug)]
#[derive(Debug, PartialEq)]
pub enum DeploymentTarget {
LocalDev,
Staging,

View File

@@ -7,6 +7,7 @@ use std::{
};
use async_trait::async_trait;
use brocade::PortOperatingMode;
use derive_new::new;
use harmony_types::{
id::Id,
@@ -214,6 +215,8 @@ impl From<String> for NetworkError {
}
}
pub type PortConfig = (PortLocation, PortOperatingMode);
#[async_trait]
pub trait Switch: Send + Sync {
async fn setup_switch(&self) -> Result<(), SwitchError>;
@@ -224,6 +227,8 @@ pub trait Switch: Send + Sync {
) -> Result<Option<PortLocation>, SwitchError>;
async fn configure_port_channel(&self, config: &HostNetworkConfig) -> Result<(), SwitchError>;
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError>;
async fn configure_interface(&self, ports: &Vec<PortConfig>) -> Result<(), SwitchError>;
}
#[derive(Clone, Debug, PartialEq)]
@@ -283,6 +288,9 @@ pub trait SwitchClient: Debug + Send + Sync {
channel_name: &str,
switch_ports: Vec<PortLocation>,
) -> Result<u8, SwitchError>;
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError>;
async fn configure_interface(&self, ports: &Vec<PortConfig>) -> Result<(), SwitchError>;
}
#[cfg(test)]

View File

@@ -0,0 +1,17 @@
use async_trait::async_trait;
use crate::executors::ExecutorError;
#[async_trait]
pub trait NodeExporter: Send + Sync + std::fmt::Debug {
async fn ensure_initialized(&self) -> Result<(), ExecutorError>;
async fn commit_config(&self) -> Result<(), ExecutorError>;
async fn reload_restart(&self) -> Result<(), ExecutorError>;
}
// //TODO complete this impl
// impl std::fmt::Debug for dyn NodeExporter {
// fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
// f.write_fmt(format_args!("NodeExporter ",))
// }
// }
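A minimal lifecycle sketch for the trait above, assuming the intended order is initialize, persist, then restart; `rollout` is a hypothetical helper, not part of the change:

    // Hypothetical helper showing the intended call order for any NodeExporter impl.
    async fn rollout(exporter: &dyn NodeExporter) -> Result<(), ExecutorError> {
        exporter.ensure_initialized().await?; // install + enable the package if missing
        exporter.commit_config().await?;      // persist the configuration
        exporter.reload_restart().await       // apply it by reloading/restarting the service
    }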

View File

@@ -1,6 +1,7 @@
use std::any::Any;
use std::{any::Any, collections::HashMap};
use async_trait::async_trait;
use kube::api::DynamicObject;
use log::debug;
use crate::{
@@ -76,6 +77,15 @@ pub trait AlertReceiver<S: AlertSender>: std::fmt::Debug + Send + Sync {
fn name(&self) -> String;
fn clone_box(&self) -> Box<dyn AlertReceiver<S>>;
fn as_any(&self) -> &dyn Any;
fn as_alertmanager_receiver(&self) -> Result<AlertManagerReceiver, String>;
}
#[derive(Debug)]
pub struct AlertManagerReceiver {
pub receiver_config: serde_json::Value,
// FIXME we should not leak k8s here. DynamicObject is k8s specific
pub additional_ressources: Vec<DynamicObject>,
pub route_config: serde_json::Value,
}
#[async_trait]

View File

@@ -0,0 +1,23 @@
use async_trait::async_trait;
use log::info;
use crate::{
infra::opnsense::OPNSenseFirewall,
topology::{PreparationError, PreparationOutcome, Topology},
};
#[async_trait]
impl Topology for OPNSenseFirewall {
async fn ensure_ready(&self) -> Result<PreparationOutcome, PreparationError> {
// FIXME we should be initializing the opnsense config here instead of
// OPNSenseFirewall::new as this causes the config to be loaded too early in
// harmony initialization process
let details = "OPNSenseFirewall topology is ready".to_string();
info!("{}", details);
Ok(PreparationOutcome::Success { details })
}
fn name(&self) -> &str {
"OPNSenseFirewall"
}
}

View File

@@ -1,11 +1,20 @@
use async_trait::async_trait;
use cidr::Ipv4Cidr;
use derive_new::new;
use serde::Serialize;
use super::{IpAddress, LogicalHost};
/// Basic network router abstraction (L3 IP routing/gateway).
/// Distinguished from TlsRouter (L4 TLS passthrough).
pub trait Router: Send + Sync {
/// Gateway IP address for this subnet/router.
fn get_gateway(&self) -> IpAddress;
/// CIDR block managed by this router.
fn get_cidr(&self) -> Ipv4Cidr;
/// Logical host associated with this router.
fn get_host(&self) -> LogicalHost;
}
@@ -38,3 +47,78 @@ impl Router for UnmanagedRouter {
todo!()
}
}
/// Desired state config for a TLS passthrough route.
/// Forwards external TLS (port 443) → backend service:target_port (no termination at router).
/// Inspired by CNPG multisite: exposes `-rw`/`-ro` services publicly via OKD Route/HAProxy/K8s
/// Gateway etc.
///
/// # Example
/// ```
/// use harmony::topology::router::TlsRoute;
/// let postgres_rw = TlsRoute {
/// hostname: "postgres-cluster-example.public.domain.io".to_string(),
/// backend: "postgres-cluster-example-rw".to_string(), // k8s Service or HAProxy upstream
/// target_port: 5432,
/// namespace: "sample-namespace".to_string(),
/// };
/// ```
#[derive(Clone, Debug, Serialize)]
pub struct TlsRoute {
/// Public hostname clients connect to (TLS SNI, port 443 implicit).
/// Router matches this for passthrough forwarding.
pub hostname: String,
/// Backend/host identifier (k8s Service, HAProxy upstream, IP/FQDN, etc.).
pub backend: String,
/// Backend TCP port (Postgres: 5432).
pub target_port: u16,
/// The environment in which it lives.
/// TODO clarify how we handle this in higher level abstractions. The namespace name is a
/// direct mapping to k8s but that could be misleading for other implementations.
pub namespace: String,
}
impl TlsRoute {
pub fn to_string_short(&self) -> String {
format!("{}-{}:{}", self.hostname, self.backend, self.target_port)
}
pub fn backend_info_string(&self) -> String {
format!("{}:{}", self.backend, self.target_port)
}
}
/// Installs and queries TLS passthrough routes (L4 TCP/SNI forwarding, no TLS termination).
/// Agnostic to impl: OKD Route, AWS NLB+HAProxy, k3s Envoy Gateway, Apache ProxyPass.
/// Used by PostgreSQL capability to expose CNPG clusters multisite (site1 → site2 replication).
///
/// # Usage
/// ```ignore
/// use harmony::topology::router::TlsRoute;
/// // After CNPG deploy, expose RW endpoint
/// async fn route() {
/// let topology = okd_topology();
/// let route = TlsRoute { /* ... */ };
/// topology.install_route(route).await; // OKD Route, HAProxy reload, etc.
/// }
/// ```
#[async_trait]
pub trait TlsRouter: Send + Sync {
/// Provisions the route (idempotent where possible).
    /// Example: OKD Route{ host, to: backend:target_port, tls: {passthrough} };
    /// HAProxy frontend→backend "postgres-upstream".
async fn install_route(&self, config: TlsRoute) -> Result<(), String>;
/// Gets the base domain that can be used to deploy applications that will be automatically
/// routed to this cluster.
///
    /// For example, if we have *.apps.nationtech.io pointing to a public load balancer, then
    /// this function would return apps.nationtech.io.
async fn get_wildcard_domain(&self) -> Result<Option<String>, String>;
/// Returns the port that this router exposes externally.
async fn get_router_port(&self) -> u16;
}

View File

@@ -14,7 +14,7 @@ use k8s_openapi::{
},
apimachinery::pkg::util::intstr::IntOrString,
};
use kube::Resource;
use kube::{Resource, api::DynamicObject};
use log::debug;
use serde::de::DeserializeOwned;
use serde_json::json;

View File

@@ -1,12 +1,13 @@
use async_trait::async_trait;
use brocade::{BrocadeClient, BrocadeOptions, InterSwitchLink, InterfaceStatus, PortOperatingMode};
use harmony_types::{
id::Id,
net::{IpAddress, MacAddress},
switch::{PortDeclaration, PortLocation},
};
use option_ext::OptionExt;
use crate::topology::{SwitchClient, SwitchError};
use crate::topology::{PortConfig, SwitchClient, SwitchError};
#[derive(Debug)]
pub struct BrocadeSwitchClient {
@@ -18,9 +19,9 @@ impl BrocadeSwitchClient {
ip_addresses: &[IpAddress],
username: &str,
password: &str,
options: Option<BrocadeOptions>,
options: BrocadeOptions,
) -> Result<Self, brocade::Error> {
let brocade = brocade::init(ip_addresses, 22, username, password, options).await?;
let brocade = brocade::init(ip_addresses, username, password, options).await?;
Ok(Self { brocade })
}
}
@@ -59,7 +60,7 @@ impl SwitchClient for BrocadeSwitchClient {
}
self.brocade
.configure_interfaces(interfaces)
.configure_interfaces(&interfaces)
.await
.map_err(|e| SwitchError::new(e.to_string()))?;
@@ -111,6 +112,65 @@ impl SwitchClient for BrocadeSwitchClient {
Ok(channel_id)
}
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError> {
for i in ids {
self.brocade
.clear_port_channel(&i.to_string())
.await
.map_err(|e| SwitchError::new(e.to_string()))?;
}
Ok(())
}
async fn configure_interface(&self, ports: &Vec<PortConfig>) -> Result<(), SwitchError> {
// FIXME hardcoded TenGigabitEthernet = bad
let ports = ports
.iter()
.map(|p| (format!("TenGigabitEthernet {}", p.0), p.1.clone()))
.collect();
self.brocade
.configure_interfaces(&ports)
.await
.map_err(|e| SwitchError::new(e.to_string()))?;
Ok(())
}
}
#[derive(Debug)]
pub struct UnmanagedSwitch;
impl UnmanagedSwitch {
pub async fn init() -> Result<Self, ()> {
Ok(Self)
}
}
#[async_trait]
impl SwitchClient for UnmanagedSwitch {
async fn setup(&self) -> Result<(), SwitchError> {
todo!("unmanaged switch. Nothing to do.")
}
async fn find_port(
&self,
mac_address: &MacAddress,
) -> Result<Option<PortLocation>, SwitchError> {
todo!("unmanaged switch. Nothing to do.")
}
async fn configure_port_channel(
&self,
channel_name: &str,
switch_ports: Vec<PortLocation>,
) -> Result<u8, SwitchError> {
todo!("unmanaged switch. Nothing to do.")
}
async fn clear_port_channel(&self, ids: &Vec<Id>) -> Result<(), SwitchError> {
todo!("unmanged switch. Nothing to do.")
}
async fn configure_interface(&self, ports: &Vec<PortConfig>) -> Result<(), SwitchError> {
todo!("unmanged switch. Nothing to do.")
}
}
#[cfg(test)]
@@ -121,7 +181,7 @@ mod tests {
use async_trait::async_trait;
use brocade::{
BrocadeClient, BrocadeInfo, Error, InterSwitchLink, InterfaceInfo, InterfaceStatus,
InterfaceType, MacAddressEntry, PortChannelId, PortOperatingMode,
InterfaceType, MacAddressEntry, PortChannelId, PortOperatingMode, SecurityLevel,
};
use harmony_types::switch::PortLocation;
@@ -145,6 +205,7 @@ mod tests {
client.setup().await.unwrap();
//TODO not sure about this
let configured_interfaces = brocade.configured_interfaces.lock().unwrap();
assert_that!(*configured_interfaces).contains_exactly(vec![
(first_interface.name.clone(), PortOperatingMode::Access),
@@ -255,10 +316,10 @@ mod tests {
async fn configure_interfaces(
&self,
interfaces: Vec<(String, PortOperatingMode)>,
interfaces: &Vec<(String, PortOperatingMode)>,
) -> Result<(), Error> {
let mut configured_interfaces = self.configured_interfaces.lock().unwrap();
*configured_interfaces = interfaces;
*configured_interfaces = interfaces.clone();
Ok(())
}
@@ -279,6 +340,10 @@ mod tests {
async fn clear_port_channel(&self, _channel_name: &str) -> Result<(), Error> {
todo!()
}
async fn enable_snmp(&self, user_name: &str, auth: &str, des: &str) -> Result<(), Error> {
todo!()
}
}
impl FakeBrocadeClient {

View File

@@ -121,7 +121,7 @@ mod test {
#[test]
fn deployment_to_dynamic_roundtrip() {
// Create a sample Deployment with nested structures
let mut deployment = Deployment {
let deployment = Deployment {
metadata: ObjectMeta {
name: Some("my-deployment".to_string()),
labels: Some({

View File

@@ -10,7 +10,7 @@ use super::OPNSenseFirewall;
#[async_trait]
impl DnsServer for OPNSenseFirewall {
async fn register_hosts(&self, hosts: Vec<DnsRecord>) -> Result<(), ExecutorError> {
async fn register_hosts(&self, _hosts: Vec<DnsRecord>) -> Result<(), ExecutorError> {
todo!("Refactor this to use dnsmasq")
// let mut writable_opnsense = self.opnsense_config.write().await;
// let mut dns = writable_opnsense.dns();
@@ -68,7 +68,7 @@ impl DnsServer for OPNSenseFirewall {
self.host.clone()
}
async fn register_dhcp_leases(&self, register: bool) -> Result<(), ExecutorError> {
async fn register_dhcp_leases(&self, _register: bool) -> Result<(), ExecutorError> {
todo!("Refactor this to use dnsmasq")
// let mut writable_opnsense = self.opnsense_config.write().await;
// let mut dns = writable_opnsense.dns();

View File

@@ -4,11 +4,11 @@ mod firewall;
mod http;
mod load_balancer;
mod management;
pub mod node_exporter;
mod tftp;
use std::sync::Arc;
pub use management::*;
use opnsense_config_xml::Host;
use tokio::sync::RwLock;
use crate::{executors::ExecutorError, topology::LogicalHost};
@@ -25,6 +25,8 @@ impl OPNSenseFirewall {
self.host.ip
}
    /// Panics if the OPNSense config file cannot be loaded by the underlying
    /// opnsense_config crate.
pub async fn new(host: LogicalHost, port: Option<u16>, username: &str, password: &str) -> Self {
Self {
opnsense_config: Arc::new(RwLock::new(

View File

@@ -0,0 +1,47 @@
use async_trait::async_trait;
use log::debug;
use crate::{
executors::ExecutorError, infra::opnsense::OPNSenseFirewall,
topology::node_exporter::NodeExporter,
};
#[async_trait]
impl NodeExporter for OPNSenseFirewall {
async fn ensure_initialized(&self) -> Result<(), ExecutorError> {
let mut config = self.opnsense_config.write().await;
let node_exporter = config.node_exporter();
if let Some(config) = node_exporter.get_full_config() {
debug!(
"Node exporter available in opnsense config, assuming it is already installed. {config:?}"
);
} else {
config
.install_package("os-node_exporter")
.await
.map_err(|e| {
                    ExecutorError::UnexpectedError(format!(
                        "Executor failed when trying to install os-node_exporter package with error {e:?}"
                    ))
})?;
}
config
.node_exporter()
.enable(true)
.map_err(|e| ExecutorError::UnexpectedError(e.to_string()))?;
Ok(())
}
async fn commit_config(&self) -> Result<(), ExecutorError> {
OPNSenseFirewall::commit_config(self).await
}
async fn reload_restart(&self) -> Result<(), ExecutorError> {
self.opnsense_config
.write()
.await
.node_exporter()
.reload_restart()
.await
.map_err(|e| ExecutorError::UnexpectedError(e.to_string()))
}
}

View File

@@ -181,13 +181,11 @@ impl From<CDApplicationConfig> for ArgoApplication {
}
impl ArgoApplication {
pub fn to_yaml(&self) -> serde_yaml::Value {
pub fn to_yaml(&self, target_namespace: Option<&str>) -> serde_yaml::Value {
let name = &self.name;
let namespace = if let Some(ns) = self.namespace.as_ref() {
ns
} else {
"argocd"
};
let default_ns = "argocd".to_string();
let namespace: &str =
target_namespace.unwrap_or(self.namespace.as_ref().unwrap_or(&default_ns));
let project = &self.project;
let yaml_str = format!(
@@ -345,7 +343,7 @@ spec:
assert_eq!(
expected_yaml_output.trim(),
serde_yaml::to_string(&app.clone().to_yaml())
serde_yaml::to_string(&app.clone().to_yaml(None))
.unwrap()
.trim()
);

View File

@@ -1,22 +1,20 @@
use async_trait::async_trait;
use harmony_macros::hurl;
use kube::{Api, api::GroupVersionKind};
use log::{debug, warn};
use log::{debug, info, trace, warn};
use non_blank_string_rs::NonBlankString;
use serde::Serialize;
use serde::de::DeserializeOwned;
use std::{process::Command, str::FromStr, sync::Arc};
use std::{str::FromStr, sync::Arc};
use crate::{
data::Version,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
modules::helm::chart::{HelmChartScore, HelmRepository},
score::Score,
topology::{
HelmCommand, K8sclient, PreparationError, PreparationOutcome, Topology, ingress::Ingress,
k8s::K8sClient,
modules::{
argocd::{ArgoDeploymentType, detect_argo_deployment_type},
helm::chart::{HelmChartScore, HelmRepository},
},
score::Score,
topology::{HelmCommand, K8sclient, Topology, ingress::Ingress, k8s::K8sClient},
};
use harmony_types::id::Id;
@@ -25,6 +23,7 @@ use super::ArgoApplication;
#[derive(Debug, Serialize, Clone)]
pub struct ArgoHelmScore {
pub namespace: String,
// TODO: remove and rely on topology (it now knows the flavor)
pub openshift: bool,
pub argo_apps: Vec<ArgoApplication>,
}
@@ -55,29 +54,98 @@ impl<T: Topology + K8sclient + HelmCommand + Ingress> Interpret<T> for ArgoInter
inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
let k8s_client = topology.k8s_client().await?;
let svc = format!("argo-{}", self.score.namespace.clone());
trace!("Starting ArgoInterpret execution {self:?}");
let k8s_client: Arc<K8sClient> = topology.k8s_client().await?;
trace!("Got k8s client");
let desired_ns = self.score.namespace.clone();
debug!("ArgoInterpret detecting cluster configuration");
let svc = format!("argo-{}", desired_ns);
let domain = topology.get_domain(&svc).await?;
let helm_score =
argo_helm_chart_score(&self.score.namespace, self.score.openshift, &domain);
debug!("Resolved Argo service domain for '{}': {}", svc, domain);
helm_score.interpret(inventory, topology).await?;
// Detect current Argo deployment type
let current = detect_argo_deployment_type(&k8s_client, &desired_ns).await?;
info!("Detected Argo deployment type: {:?}", current);
// Decide control namespace and whether we must install
let (control_ns, must_install) = match current.clone() {
ArgoDeploymentType::NotInstalled => {
info!(
"Argo CD not installed. Will install via Helm into namespace '{}'.",
desired_ns
);
(desired_ns.clone(), true)
}
ArgoDeploymentType::AvailableInDesiredNamespace(ns) => {
info!(
"Argo CD already installed by Harmony in '{}'. Skipping install.",
ns
);
(ns, false)
}
ArgoDeploymentType::InstalledClusterWide(ns) => {
info!("Argo CD installed cluster-wide in namespace '{}'.", ns);
(ns, false)
}
ArgoDeploymentType::InstalledNamespaceScoped(ns) => {
                // TODO we could support this use case by installing a new argo instance. But that
                // means handling a few cases that are out of scope for now:
                // - Whether the argo operator is installed
                // - Managing CRD versions compatibility
                // - Potentially handling the various k8s flavors and setups we might encounter
                //
                // There is a possibility that the helm chart already handles most or even all of
                // these use cases, but they are out of scope for now.
let msg = format!(
"Argo CD found in '{}' but it is namespace-scoped and not supported for attachment yet.",
ns
);
warn!("{}", msg);
return Err(InterpretError::new(msg));
}
};
info!("ArgoCD will be installed : {must_install} . Current argocd status : {current:?} ");
if must_install {
let helm_score = argo_helm_chart_score(&desired_ns, self.score.openshift, &domain);
info!(
"Installing Argo CD via Helm into namespace '{}' ...",
desired_ns
);
helm_score.interpret(inventory, topology).await?;
info!("Argo CD install complete in '{}'.", desired_ns);
}
let yamls: Vec<serde_yaml::Value> = self
.argo_apps
.iter()
.map(|a| a.to_yaml(Some(&control_ns)))
.collect();
info!(
"Applying {} Argo application object(s) into control namespace '{}'.",
yamls.len(),
control_ns
);
k8s_client
.apply_yaml_many(&self.argo_apps.iter().map(|a| a.to_yaml()).collect(), None)
.apply_yaml_many(&yamls, Some(control_ns.as_str()))
.await
.unwrap();
.map_err(|e| InterpretError::new(format!("Failed applying Argo CRs: {e}")))?;
Ok(Outcome::success_with_details(
format!(
"ArgoCD {} {}",
self.argo_apps.len(),
match self.argo_apps.len() {
1 => "application",
_ => "applications",
if self.argo_apps.len() == 1 {
"application"
} else {
"applications"
}
),
vec![format!("argo application: http://{}", domain)],
vec![
format!("control_namespace={}", control_ns),
format!("argo ui: http://{}", domain),
],
))
}
@@ -86,7 +154,7 @@ impl<T: Topology + K8sclient + HelmCommand + Ingress> Interpret<T> for ArgoInter
}
fn get_version(&self) -> Version {
todo!()
Version::from("0.1.0").unwrap()
}
fn get_status(&self) -> InterpretStatus {
@@ -94,39 +162,7 @@ impl<T: Topology + K8sclient + HelmCommand + Ingress> Interpret<T> for ArgoInter
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
impl ArgoInterpret {
pub async fn get_host_domain(
&self,
client: Arc<K8sClient>,
openshift: bool,
) -> Result<String, InterpretError> {
//This should be the job of the topology to determine if we are in
//openshift, potentially we need on openshift topology the same way we create a
//localhosttopology
match openshift {
true => {
let gvk = GroupVersionKind {
group: "operator.openshift.io".into(),
version: "v1".into(),
kind: "IngressController".into(),
};
let ic = client
.get_resource_json_value("default", Some("openshift-ingress-operator"), &gvk)
.await?;
match ic.data["status"]["domain"].as_str() {
Some(domain) => return Ok(domain.to_string()),
None => return Err(InterpretError::new("Could not find domain".to_string())),
}
}
false => {
todo!()
}
};
vec![]
}
}

View File

@@ -12,6 +12,7 @@ use crate::{
modules::application::{
ApplicationFeature, HelmPackage, InstallationError, InstallationOutcome, OCICompliant,
features::{ArgoApplication, ArgoHelmScore},
webapp::Webapp,
},
score::Score,
topology::{
@@ -47,11 +48,11 @@ use crate::{
/// - ArgoCD to install/upgrade/rollback/inspect k8s resources
/// - Kubernetes for runtime orchestration
#[derive(Debug, Default, Clone)]
pub struct PackagingDeployment<A: OCICompliant + HelmPackage> {
pub struct PackagingDeployment<A: OCICompliant + HelmPackage + Webapp> {
pub application: Arc<A>,
}
impl<A: OCICompliant + HelmPackage> PackagingDeployment<A> {
impl<A: OCICompliant + HelmPackage + Webapp> PackagingDeployment<A> {
async fn deploy_to_local_k3d(
&self,
app_name: String,
@@ -137,7 +138,7 @@ impl<A: OCICompliant + HelmPackage> PackagingDeployment<A> {
#[async_trait]
impl<
A: OCICompliant + HelmPackage + Clone + 'static,
A: OCICompliant + HelmPackage + Webapp + Clone + 'static,
T: Topology + HelmCommand + MultiTargetTopology + K8sclient + Ingress + 'static,
> ApplicationFeature<T> for PackagingDeployment<A>
{
@@ -146,10 +147,15 @@ impl<
topology: &T,
) -> Result<InstallationOutcome, InstallationError> {
let image = self.application.image_name();
let domain = topology
.get_domain(&self.application.name())
.await
.map_err(|e| e.to_string())?;
let domain = if topology.current_target() == DeploymentTarget::Production {
self.application.dns()
} else {
topology
.get_domain(&self.application.name())
.await
.map_err(|e| e.to_string())?
};
// TODO Write CI/CD workflow files
        // we can auto-detect the CI type using the remote url (default to GitHub Actions for GitHub)
@@ -193,8 +199,7 @@ impl<
namespace: format!("{}", self.application.name()),
openshift: true,
argo_apps: vec![ArgoApplication::from(CDApplicationConfig {
// helm pull oci://hub.nationtech.io/harmony/harmony-example-rust-webapp-chart --version 0.1.0
version: Version::from("0.1.0").unwrap(),
version: Version::from("0.2.1").unwrap(),
helm_chart_repo_url: "hub.nationtech.io/harmony".to_string(),
helm_chart_name: format!("{}-chart", self.application.name()),
values_overrides: None,

View File

@@ -3,7 +3,6 @@ use std::sync::Arc;
use crate::modules::application::{
Application, ApplicationFeature, InstallationError, InstallationOutcome,
};
use crate::modules::monitoring::application_monitoring::application_monitoring_score::ApplicationMonitoringScore;
use crate::modules::monitoring::application_monitoring::rhobs_application_monitoring_score::ApplicationRHOBMonitoringScore;
use crate::modules::monitoring::kube_prometheus::crd::rhob_alertmanager_config::RHOBObservability;

View File

@@ -2,6 +2,7 @@ mod feature;
pub mod features;
pub mod oci;
mod rust;
mod webapp;
use std::sync::Arc;
pub use feature::*;

View File

@@ -16,6 +16,7 @@ use tar::{Builder, Header};
use walkdir::WalkDir;
use crate::config::{REGISTRY_PROJECT, REGISTRY_URL};
use crate::modules::application::webapp::Webapp;
use crate::{score::Score, topology::Topology};
use super::{Application, ApplicationFeature, ApplicationInterpret, HelmPackage, OCICompliant};
@@ -60,6 +61,10 @@ pub struct RustWebapp {
pub project_root: PathBuf,
pub service_port: u32,
pub framework: Option<RustWebFramework>,
    /// Host name that will be used in the production environment.
///
/// This is the place to put the public host name if this is a public facing webapp.
pub dns: String,
}
impl Application for RustWebapp {
@@ -68,6 +73,12 @@ impl Application for RustWebapp {
}
}
impl Webapp for RustWebapp {
fn dns(&self) -> String {
self.dns.clone()
}
}
#[async_trait]
impl HelmPackage for RustWebapp {
async fn build_push_helm_package(
@@ -194,10 +205,10 @@ impl RustWebapp {
Some(body_full(tar_data.into())),
);
while let Some(mut msg) = image_build_stream.next().await {
while let Some(msg) = image_build_stream.next().await {
trace!("Got bollard msg {msg:?}");
match msg {
Ok(mut msg) => {
Ok(msg) => {
if let Some(progress) = msg.progress_detail {
info!(
"Build progress {}/{}",
@@ -257,7 +268,6 @@ impl RustWebapp {
".harmony_generated",
"harmony",
"node_modules",
"Dockerfile.harmony",
];
let mut entries: Vec<_> = WalkDir::new(project_root)
.into_iter()
@@ -461,52 +471,53 @@ impl RustWebapp {
let (image_repo, image_tag) = image_url.rsplit_once(':').unwrap_or((image_url, "latest"));
let app_name = &self.name;
let service_port = self.service_port;
// Create Chart.yaml
let chart_yaml = format!(
r#"
apiVersion: v2
name: {}
description: A Helm chart for the {} web application.
name: {chart_name}
description: A Helm chart for the {app_name} web application.
type: application
version: 0.1.0
appVersion: "{}"
version: 0.2.1
appVersion: "{image_tag}"
"#,
chart_name, self.name, image_tag
);
fs::write(chart_dir.join("Chart.yaml"), chart_yaml)?;
// Create values.yaml
let values_yaml = format!(
r#"
# Default values for {}.
# Default values for {chart_name}.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: {}
repository: {image_repo}
pullPolicy: IfNotPresent
# Overridden by the chart's appVersion
tag: "{}"
tag: "{image_tag}"
service:
type: ClusterIP
port: {}
port: {service_port}
ingress:
enabled: true
tls: true
# Annotations for cert-manager to handle SSL.
annotations:
# Add other annotations like nginx ingress class if needed
# kubernetes.io/ingress.class: nginx
hosts:
- host: {}
- host: {domain}
paths:
- path: /
pathType: ImplementationSpecific
"#,
chart_name, image_repo, image_tag, self.service_port, domain,
);
fs::write(chart_dir.join("values.yaml"), values_yaml)?;
@@ -583,7 +594,11 @@ spec:
);
fs::write(templates_dir.join("deployment.yaml"), deployment_yaml)?;
let service_port = self.service_port;
// Create templates/ingress.yaml
// TODO get issuer name and tls config from topology as it may be different from one
// cluster to another, also from one version to another
let ingress_yaml = format!(
r#"
{{{{- if $.Values.ingress.enabled -}}}}
@@ -596,13 +611,11 @@ metadata:
spec:
{{{{- if $.Values.ingress.tls }}}}
tls:
{{{{- range $.Values.ingress.tls }}}}
- hosts:
{{{{- range .hosts }}}}
- {{{{ . | quote }}}}
- secretName: {{{{ include "chart.fullname" . }}}}-tls
hosts:
{{{{- range $.Values.ingress.hosts }}}}
- {{{{ .host | quote }}}}
{{{{- end }}}}
secretName: {{{{ .secretName }}}}
{{{{- end }}}}
{{{{- end }}}}
rules:
{{{{- range $.Values.ingress.hosts }}}}
@@ -616,12 +629,11 @@ spec:
service:
name: {{{{ include "chart.fullname" $ }}}}
port:
number: {{{{ $.Values.service.port | default {} }}}}
number: {{{{ $.Values.service.port | default {service_port} }}}}
{{{{- end }}}}
{{{{- end }}}}
{{{{- end }}}}
"#,
self.service_port
);
fs::write(templates_dir.join("ingress.yaml"), ingress_yaml)?;

View File

@@ -0,0 +1,7 @@
use super::Application;
use async_trait::async_trait;
#[async_trait]
pub trait Webapp: Application {
fn dns(&self) -> String;
}

View File

@@ -0,0 +1,208 @@
use std::sync::Arc;
use log::{debug, info};
use crate::{interpret::InterpretError, topology::k8s::K8sClient};
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum ArgoScope {
ClusterWide(String),
NamespaceScoped(String),
}
#[derive(Clone, Debug)]
pub struct DiscoveredArgo {
pub control_namespace: String,
pub scope: ArgoScope,
pub has_crds: bool,
pub has_applicationset: bool,
}
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum ArgoDeploymentType {
NotInstalled,
AvailableInDesiredNamespace(String),
InstalledClusterWide(String),
InstalledNamespaceScoped(String),
}
pub async fn discover_argo_all(
k8s: &Arc<K8sClient>,
) -> Result<Vec<DiscoveredArgo>, InterpretError> {
use log::{debug, info, trace, warn};
trace!("Starting Argo discovery");
// CRDs
let mut has_crds = true;
let required_crds = vec!["applications.argoproj.io", "appprojects.argoproj.io"];
trace!("Checking required Argo CRDs: {:?}", required_crds);
for crd in required_crds {
trace!("Verifying CRD presence: {crd}");
let crd_exists = k8s.has_crd(crd).await.map_err(|e| {
InterpretError::new(format!("Failed to verify existence of CRD {crd}: {e}"))
})?;
debug!("CRD {crd} exists: {crd_exists}");
if !crd_exists {
info!(
"Missing Argo CRD {crd}, looks like Argo CD is not installed (or partially installed)"
);
has_crds = false;
break;
}
}
trace!(
"Listing namespaces with healthy Argo CD deployments using selector app.kubernetes.io/part-of=argocd"
);
let mut candidate_namespaces = k8s
.list_namespaces_with_healthy_deployments("app.kubernetes.io/part-of=argocd")
.await
.map_err(|e| InterpretError::new(format!("List healthy argocd deployments: {e}")))?;
trace!(
"Listing namespaces with healthy Argo CD deployments using selector app.kubernetes.io/name=argo-cd"
);
candidate_namespaces.append(
&mut k8s
.list_namespaces_with_healthy_deployments("app.kubernetes.io/name=argo-cd")
.await
.map_err(|e| InterpretError::new(format!("List healthy argocd deployments: {e}")))?,
);
debug!(
"Discovered {} candidate namespace(s) for Argo CD: {:?}",
candidate_namespaces.len(),
candidate_namespaces
);
let mut found = Vec::new();
for ns in candidate_namespaces {
trace!("Evaluating namespace '{ns}' for Argo CD instance");
// Require the application-controller to be healthy (sanity check)
trace!(
"Checking healthy deployment with label app.kubernetes.io/name=argocd-application-controller in namespace '{ns}'"
);
let controller_ok = k8s
.has_healthy_deployment_with_label(
&ns,
"app.kubernetes.io/name=argocd-application-controller",
)
.await
.unwrap_or_else(|e| {
warn!(
"Error while checking application-controller health in namespace '{ns}': {e}"
);
false
}) || k8s
.has_healthy_deployment_with_label(
&ns,
"app.kubernetes.io/component=controller",
)
.await
.unwrap_or_else(|e| {
warn!(
"Error while checking application-controller health in namespace '{ns}': {e}"
);
false
});
debug!("Namespace '{ns}': application-controller healthy = {controller_ok}");
if !controller_ok {
trace!("Skipping namespace '{ns}' because application-controller is not healthy");
continue;
}
trace!("Determining Argo CD scope for namespace '{ns}' (cluster-wide vs namespace-scoped)");
let sa = k8s
.get_controller_service_account_name(&ns)
.await?
.unwrap_or("argocd-application-controller".to_string());
let scope = match k8s.is_service_account_cluster_wide(&sa, &ns).await {
Ok(true) => {
debug!("Namespace '{ns}' identified as cluster-wide Argo CD control plane");
ArgoScope::ClusterWide(ns.to_string())
}
Ok(false) => {
debug!("Namespace '{ns}' identified as namespace-scoped Argo CD control plane");
ArgoScope::NamespaceScoped(ns.to_string())
}
Err(e) => {
warn!(
"Failed to determine Argo CD scope for namespace '{ns}': {e}. Assuming namespace-scoped."
);
ArgoScope::NamespaceScoped(ns.to_string())
}
};
trace!("Checking optional ApplicationSet CRD (applicationsets.argoproj.io)");
let has_applicationset = match k8s.has_crd("applicationsets.argoproj.io").await {
Ok(v) => {
debug!("applicationsets.argoproj.io present: {v}");
v
}
Err(e) => {
warn!("Failed to check applicationsets.argoproj.io CRD: {e}. Assuming absent.");
false
}
};
let argo = DiscoveredArgo {
control_namespace: ns.clone(),
scope,
has_crds,
has_applicationset,
};
debug!("Discovered Argo instance in '{ns}': {argo:?}");
found.push(argo);
}
if found.is_empty() {
info!("No Argo CD installations discovered");
} else {
info!(
"Argo CD discovery complete: {} instance(s) found",
found.len()
);
}
Ok(found)
}
pub async fn detect_argo_deployment_type(
k8s: &Arc<K8sClient>,
desired_namespace: &str,
) -> Result<ArgoDeploymentType, InterpretError> {
let discovered = discover_argo_all(k8s).await?;
debug!("Discovered argo instances {discovered:?}");
if discovered.is_empty() {
return Ok(ArgoDeploymentType::NotInstalled);
}
if let Some(d) = discovered
.iter()
.find(|d| d.control_namespace == desired_namespace)
{
return Ok(ArgoDeploymentType::AvailableInDesiredNamespace(
d.control_namespace.clone(),
));
}
if let Some(d) = discovered
.iter()
.find(|d| matches!(d.scope, ArgoScope::ClusterWide(_)))
{
return Ok(ArgoDeploymentType::InstalledClusterWide(
d.control_namespace.clone(),
));
}
Ok(ArgoDeploymentType::InstalledNamespaceScoped(
discovered[0].control_namespace.clone(),
))
}

View File

@@ -0,0 +1,116 @@
use std::net::{IpAddr, Ipv4Addr};
use async_trait::async_trait;
use brocade::BrocadeOptions;
use harmony_secret::{Secret, SecretManager};
use harmony_types::id::Id;
use serde::{Deserialize, Serialize};
use crate::{
data::Version,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
score::Score,
topology::Topology,
};
#[derive(Debug, Clone, Serialize)]
pub struct BrocadeEnableSnmpScore {
pub switch_ips: Vec<IpAddr>,
pub dry_run: bool,
}
impl<T: Topology> Score<T> for BrocadeEnableSnmpScore {
fn name(&self) -> String {
"BrocadeEnableSnmpScore".to_string()
}
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(BrocadeEnableSnmpInterpret {
score: self.clone(),
})
}
}
#[derive(Debug, Clone, Serialize)]
pub struct BrocadeEnableSnmpInterpret {
score: BrocadeEnableSnmpScore,
}
#[derive(Secret, Clone, Debug, Serialize, Deserialize)]
struct BrocadeSwitchAuth {
username: String,
password: String,
}
#[derive(Secret, Clone, Debug, Serialize, Deserialize)]
struct BrocadeSnmpAuth {
username: String,
auth_password: String,
des_password: String,
}
#[async_trait]
impl<T: Topology> Interpret<T> for BrocadeEnableSnmpInterpret {
async fn execute(
&self,
_inventory: &Inventory,
_topology: &T,
) -> Result<Outcome, InterpretError> {
let switch_addresses = &self.score.switch_ips;
let snmp_auth = SecretManager::get_or_prompt::<BrocadeSnmpAuth>()
.await
.unwrap();
let config = SecretManager::get_or_prompt::<BrocadeSwitchAuth>()
.await
.unwrap();
let brocade = brocade::init(
&switch_addresses,
&config.username,
&config.password,
BrocadeOptions {
dry_run: self.score.dry_run,
..Default::default()
},
)
.await
.expect("Brocade client failed to connect");
brocade
.enable_snmp(
&snmp_auth.username,
&snmp_auth.auth_password,
&snmp_auth.des_password,
)
.await
.map_err(|e| InterpretError::new(e.to_string()))?;
Ok(Outcome::success(format!(
"Activated snmp server for Brocade at {}",
switch_addresses
.iter()
.map(|s| s.to_string())
.collect::<Vec<_>>()
.join(", ")
)))
}
fn get_name(&self) -> InterpretName {
InterpretName::Custom("BrocadeEnableSnmpInterpret")
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}

View File

@@ -19,8 +19,11 @@ pub struct DhcpScore {
pub host_binding: Vec<HostBinding>,
pub next_server: Option<IpAddress>,
pub boot_filename: Option<String>,
/// Boot filename to be provided to PXE clients identifying as BIOS
pub filename: Option<String>,
/// Boot filename to be provided to PXE clients identifying as uefi but NOT iPXE
pub filename64: Option<String>,
/// Boot filename to be provided to PXE clients identifying as iPXE
pub filenameipxe: Option<String>,
pub dhcp_range: (IpAddress, IpAddress),
pub domain: Option<String>,
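A hedged example of how these three fields typically split an iPXE chainload; the file names below are common PXE/iPXE conventions, not values mandated by Harmony:

    // Conventional iPXE chainloading (illustrative values only):
    // filename:     Some("undionly.kpxe".to_string())  // legacy BIOS clients
    // filename64:   Some("ipxe.efi".to_string())       // UEFI clients not yet running iPXE
    // filenameipxe: Some("http://boot.example/boot.ipxe".to_string()) // iPXE clients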

View File

@@ -5,11 +5,10 @@ use serde::{Deserialize, Serialize};
use crate::{
data::Version,
hardware::PhysicalHost,
infra::inventory::InventoryRepositoryFactory,
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::{HostRole, Inventory},
modules::inventory::LaunchDiscoverInventoryAgentScore,
modules::inventory::{HarmonyDiscoveryStrategy, LaunchDiscoverInventoryAgentScore},
score::Score,
topology::Topology,
};
@@ -17,11 +16,13 @@ use crate::{
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DiscoverHostForRoleScore {
pub role: HostRole,
pub number_desired_hosts: i16,
pub discovery_strategy: HarmonyDiscoveryStrategy,
}
impl<T: Topology> Score<T> for DiscoverHostForRoleScore {
fn name(&self) -> String {
"DiscoverInventoryAgentScore".to_string()
format!("DiscoverHostForRoleScore({:?})", self.role)
}
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
@@ -48,13 +49,15 @@ impl<T: Topology> Interpret<T> for DiscoverHostForRoleInterpret {
);
LaunchDiscoverInventoryAgentScore {
discovery_timeout: None,
discovery_strategy: self.score.discovery_strategy.clone(),
}
.interpret(inventory, topology)
.await?;
let host: PhysicalHost;
let mut chosen_hosts = vec![];
let host_repo = InventoryRepositoryFactory::build().await?;
let mut assigned_hosts = 0;
loop {
let all_hosts = host_repo.get_all_hosts().await?;
@@ -75,15 +78,24 @@ impl<T: Topology> Interpret<T> for DiscoverHostForRoleInterpret {
match ans {
Ok(choice) => {
info!(
"Selected {} as the {:?} node.",
choice.summary(),
self.score.role
"Assigned role {:?} for node {}",
self.score.role,
choice.summary()
);
host_repo
.save_role_mapping(&self.score.role, &choice)
.await?;
host = choice;
break;
chosen_hosts.push(choice);
assigned_hosts += 1;
info!(
"Found {assigned_hosts} hosts for role {:?}",
self.score.role
);
if assigned_hosts == self.score.number_desired_hosts {
break;
}
}
Err(inquire::InquireError::OperationCanceled) => {
info!("Refresh requested. Fetching list of discovered hosts again...");
@@ -100,8 +112,13 @@ impl<T: Topology> Interpret<T> for DiscoverHostForRoleInterpret {
}
Ok(Outcome::success(format!(
"Successfully discovered host {} for role {:?}",
host.summary(),
"Successfully discovered {} hosts {} for role {:?}",
self.score.number_desired_hosts,
chosen_hosts
.iter()
.map(|h| h.summary())
.collect::<Vec<String>>()
.join(", "),
self.score.role
)))
}

View File

@@ -1,6 +1,10 @@
mod discovery;
pub mod inspect;
use std::net::Ipv4Addr;
use cidr::{Ipv4Cidr, Ipv4Inet};
pub use discovery::*;
use tokio::time::{Duration, timeout};
use async_trait::async_trait;
use harmony_inventory_agent::local_presence::DiscoveryEvent;
@@ -24,6 +28,7 @@ use harmony_types::id::Id;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LaunchDiscoverInventoryAgentScore {
pub discovery_timeout: Option<u64>,
pub discovery_strategy: HarmonyDiscoveryStrategy,
}
impl<T: Topology> Score<T> for LaunchDiscoverInventoryAgentScore {
@@ -43,6 +48,12 @@ struct DiscoverInventoryAgentInterpret {
score: LaunchDiscoverInventoryAgentScore,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum HarmonyDiscoveryStrategy {
MDNS,
SUBNET { cidr: cidr::Ipv4Cidr, port: u16 },
}
#[async_trait]
impl<T: Topology> Interpret<T> for DiscoverInventoryAgentInterpret {
async fn execute(
@@ -57,6 +68,37 @@ impl<T: Topology> Interpret<T> for DiscoverInventoryAgentInterpret {
),
};
match self.score.discovery_strategy {
HarmonyDiscoveryStrategy::MDNS => self.launch_mdns_discovery().await,
HarmonyDiscoveryStrategy::SUBNET { cidr, port } => {
self.launch_cidr_discovery(&cidr, port).await
}
};
Ok(Outcome::success(
"Discovery process completed successfully".to_string(),
))
}
fn get_name(&self) -> InterpretName {
InterpretName::DiscoverInventoryAgent
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
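Since HarmonyDiscoveryStrategy derives Serialize/Deserialize with serde's default externally tagged enum representation, its on-wire shape would look roughly as below; this sketch assumes the cidr crate's serde feature, which renders Ipv4Cidr as a string:

```rust
use std::str::FromStr;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let subnet = HarmonyDiscoveryStrategy::SUBNET {
        cidr: cidr::Ipv4Cidr::from_str("192.168.1.0/24")?, // hypothetical subnet
        port: 8080,                                        // hypothetical agent port
    };
    // Expected (assumption): {"SUBNET":{"cidr":"192.168.1.0/24","port":8080}}
    println!("{}", serde_json::to_string(&subnet)?);
    // Expected (assumption): "MDNS"
    println!("{}", serde_json::to_string(&HarmonyDiscoveryStrategy::MDNS)?);
    Ok(())
}
```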
impl DiscoverInventoryAgentInterpret {
async fn launch_mdns_discovery(&self) {
harmony_inventory_agent::local_presence::discover_agents(
self.score.discovery_timeout,
|event: DiscoveryEvent| -> Result<(), String> {
@@ -88,6 +130,103 @@ impl<T: Topology> Interpret<T> for DiscoverInventoryAgentInterpret {
trace!("Found host information {host:?}");
// TODO: it's pointless to have two distinct host types, but merging them
// requires a bit too much refactoring to do right now
let harmony_inventory_agent::hwinfo::PhysicalHost {
storage_drives,
storage_controller: _,
memory_modules,
cpus,
chipset: _,
network_interfaces,
management_interface: _,
host_uuid,
} = host;
let host = PhysicalHost {
id: Id::from(host_uuid),
category: HostCategory::Server,
network: network_interfaces,
storage: storage_drives,
labels: vec![Label {
name: "discovered-by".to_string(),
value: "harmony-inventory-agent".to_string(),
}],
memory_modules,
cpus,
};
// FIXME: only save the host when it is new or something changed in it;
// currently we save the host every time it is discovered.
let repo = InventoryRepositoryFactory::build()
.await
.map_err(|e| format!("Could not build repository : {e}"))
.unwrap();
repo.save(&host)
.await
.map_err(|e| format!("Could not save host : {e}"))
.unwrap();
info!(
"Saved new host id {}, summary : {}",
host.id,
host.summary()
);
});
}
_ => debug!("Unhandled event {event:?}"),
};
Ok(())
},
)
.await
}
/// Iterate over every address in `cidr` and try to reach a Harmony inventory
/// agent at `<addr>:<port>` via
/// `harmony_inventory_agent::client::get_host_inventory`, querying in
/// fixed-size batches with a per-host timeout:
/// - info when a host responds with its inventory (2xx)
/// - info when the agent request returns an error
/// - debug when a host does not answer before the timeout
async fn launch_cidr_discovery(&self, cidr: &Ipv4Cidr, port: u16) {
let addrs: Vec<Ipv4Inet> = cidr.iter().collect();
let total = addrs.len();
info!(
"Starting CIDR discovery for {} hosts on {}/{} (port {})",
total,
cidr.network_length(),
cidr,
port
);
let batch_size: usize = 20;
let timeout_secs = 5;
let request_timeout = Duration::from_secs(timeout_secs);
let mut current_batch = 0;
let num_batches = addrs.len() / batch_size;
for batch in addrs.chunks(batch_size) {
current_batch += 1;
info!("Starting query batch {current_batch} of {num_batches}, timeout {timeout_secs}");
let mut tasks = Vec::with_capacity(batch.len());
for addr in batch {
let addr = addr.address().to_string();
let port = port; // u16 is Copy; rebind so the spawned task owns its own copy
let task = tokio::spawn(async move {
match timeout(
request_timeout,
harmony_inventory_agent::client::get_host_inventory(&addr, port),
)
.await
{
Ok(Ok(host)) => {
info!("Found and response is 2xx for {addr}:{port}");
// Reuse the same conversion to PhysicalHost as MDNS flow
let harmony_inventory_agent::hwinfo::PhysicalHost {
storage_drives,
storage_controller,
@@ -112,45 +251,36 @@ impl<T: Topology> Interpret<T> for DiscoverInventoryAgentInterpret {
cpus,
};
// Save host to inventory
let repo = InventoryRepositoryFactory::build()
.await
.map_err(|e| format!("Could not build repository : {e}"))
.unwrap();
repo.save(&host)
.await
.map_err(|e| format!("Could not save host : {e}"))
.unwrap();
info!(
"Saved new host id {}, summary : {}",
host.id,
host.summary()
);
});
}
_ => debug!("Unhandled event {event:?}"),
};
Ok(())
},
)
.await;
Ok(Outcome::success(
"Discovery process completed successfully".to_string(),
))
}
fn get_name(&self) -> InterpretName {
InterpretName::DiscoverInventoryAgent
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
if let Err(e) = repo.save(&host).await {
log::debug!("Failed to save host {}: {e}", host.id);
} else {
info!("Saved host id {}, summary : {}", host.id, host.summary());
}
}
Ok(Err(e)) => {
log::info!("Error querying inventory agent on {addr}:{port} : {e}");
}
Err(_) => {
// Timeout for this host
log::debug!("No response (timeout) for {addr}:{port}");
}
}
});
tasks.push(task);
}
// Wait for this batch to complete
for t in tasks {
let _ = t.await;
}
}
info!("CIDR discovery completed");
}
}
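Distilled from launch_cidr_discovery, the scanning skeleton is: chunk the address list, spawn one task per address guarded by tokio::time::timeout, and join the whole chunk before starting the next. A self-contained sketch of just that pattern, with a stand-in probe() instead of get_host_inventory:

```rust
use tokio::time::{Duration, timeout};

// Stand-in for harmony_inventory_agent::client::get_host_inventory.
async fn probe(_addr: String) -> Result<(), String> {
    Ok(())
}

async fn scan(addrs: Vec<String>, batch_size: usize, per_host: Duration) {
    // Round up so a final partial chunk still counts as a batch.
    let num_batches = addrs.len().div_ceil(batch_size);
    for (i, batch) in addrs.chunks(batch_size).enumerate() {
        println!("batch {} of {num_batches}", i + 1);
        let tasks: Vec<_> = batch
            .iter()
            .cloned()
            .map(|addr| {
                tokio::spawn(async move {
                    match timeout(per_host, probe(addr)).await {
                        Ok(Ok(())) => { /* host answered with 2xx */ }
                        Ok(Err(_e)) => { /* agent reachable but errored */ }
                        Err(_) => { /* no response before the timeout */ }
                    }
                })
            })
            .collect();
        // Join the whole batch before moving on, bounding concurrency
        // at batch_size in-flight requests.
        for t in tasks {
            let _ = t.await;
        }
    }
}
```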

View File

@@ -0,0 +1,157 @@
use std::collections::BTreeMap;
use k8s_openapi::{
api::core::v1::{Affinity, Toleration},
apimachinery::pkg::apis::meta::v1::ObjectMeta,
};
use kube::CustomResource;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use serde_json::Value;
#[derive(CustomResource, Deserialize, Serialize, Clone, Debug)]
#[kube(
group = "operators.coreos.com",
version = "v1alpha1",
kind = "CatalogSource",
plural = "catalogsources",
namespaced = true,
schema = "disabled"
)]
#[serde(rename_all = "camelCase")]
pub struct CatalogSourceSpec {
#[serde(skip_serializing_if = "Option::is_none")]
pub address: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub config_map: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub description: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub display_name: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub grpc_pod_config: Option<GrpcPodConfig>,
#[serde(skip_serializing_if = "Option::is_none")]
pub icon: Option<Icon>,
#[serde(skip_serializing_if = "Option::is_none")]
pub image: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub priority: Option<i64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub publisher: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub run_as_root: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub secrets: Option<Vec<String>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub source_type: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub update_strategy: Option<UpdateStrategy>,
}
#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct GrpcPodConfig {
#[serde(skip_serializing_if = "Option::is_none")]
pub affinity: Option<Affinity>,
#[serde(skip_serializing_if = "Option::is_none")]
pub extract_content: Option<ExtractContent>,
#[serde(skip_serializing_if = "Option::is_none")]
pub memory_target: Option<Value>,
#[serde(skip_serializing_if = "Option::is_none")]
pub node_selector: Option<BTreeMap<String, String>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub priority_class_name: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub security_context_config: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub tolerations: Option<Vec<Toleration>>,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct ExtractContent {
pub cache_dir: String,
pub catalog_dir: String,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct Icon {
pub base64data: String,
pub mediatype: String,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct UpdateStrategy {
#[serde(skip_serializing_if = "Option::is_none")]
pub registry_poll: Option<RegistryPoll>,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct RegistryPoll {
#[serde(skip_serializing_if = "Option::is_none")]
pub interval: Option<String>,
}
impl Default for CatalogSource {
fn default() -> Self {
Self {
metadata: ObjectMeta::default(),
spec: CatalogSourceSpec {
address: None,
config_map: None,
description: None,
display_name: None,
grpc_pod_config: None,
icon: None,
image: None,
priority: None,
publisher: None,
run_as_root: None,
secrets: None,
source_type: None,
update_strategy: None,
},
}
}
}
impl Default for CatalogSourceSpec {
fn default() -> Self {
Self {
address: None,
config_map: None,
description: None,
display_name: None,
grpc_pod_config: None,
icon: None,
image: None,
priority: None,
publisher: None,
run_as_root: None,
secrets: None,
source_type: None,
update_strategy: None,
}
}
}
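A hedged sketch of using this wrapper: start from the Default impls above, fill in the common grpc-catalog fields, and render the resource to YAML. The name, namespace, and image are placeholders, and serde_yaml is an assumed dependency:

```rust
use k8s_openapi::apimachinery::pkg::apis::meta::v1::ObjectMeta;

// Hypothetical usage of the CatalogSource type defined above.
fn example_catalog_source() -> CatalogSource {
    let mut cs = CatalogSource::default();
    cs.metadata = ObjectMeta {
        name: Some("harmony-catalog".into()),            // hypothetical
        namespace: Some("openshift-marketplace".into()), // hypothetical
        ..ObjectMeta::default()
    };
    cs.spec = CatalogSourceSpec {
        display_name: Some("Harmony Operators".into()),
        image: Some("quay.io/example/catalog:latest".into()), // hypothetical
        publisher: Some("harmony".into()),
        source_type: Some("grpc".into()),
        update_strategy: Some(UpdateStrategy {
            registry_poll: Some(RegistryPoll {
                interval: Some("30m".into()),
            }),
        }),
        ..CatalogSourceSpec::default()
    };
    cs
}

// e.g. println!("{}", serde_yaml::to_string(&example_catalog_source()).unwrap());
```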

View File

@@ -0,0 +1,4 @@
mod catalogsources_operators_coreos_com;
pub use catalogsources_operators_coreos_com::*;
mod subscriptions_operators_coreos_com;
pub use subscriptions_operators_coreos_com::*;

Some files were not shown because too many files have changed in this diff.