Compare commits

..

398 Commits

Author SHA1 Message Date
7cb5237fdd fix(secret): openbao implementation used wrong fs write call on Windows
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m40s
2026-03-23 09:20:43 -04:00
9a67bcc96f Merge pull request 'fix/cnpgInstallation' (#251) from fix/cnpgInstallation into master
Some checks failed
Run Check Script / check (push) Successful in 1m45s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m15s
Reviewed-on: #251
2026-03-20 21:02:53 +00:00
a377fc1404 Merge branch 'master' into fix/cnpgInstallation
All checks were successful
Run Check Script / check (pull_request) Successful in 1m44s
2026-03-20 20:56:30 +00:00
c9977fee12 fix: CI file moved
All checks were successful
Run Check Script / check (pull_request) Successful in 2m5s
2026-03-20 16:48:38 -04:00
64bf585e07 fix: remove check.sh with broken path handling and fix formatting
Some checks failed
Run Check Script / check (pull_request) Failing after 12s
2026-03-20 16:41:30 -04:00
44e2c45435 fix: flaky tests due to bad environment variables handling in harmony_config crate 2026-03-20 16:40:08 -04:00
cdccbc8939 fix: formatting and minor stuff 2026-03-20 16:34:48 -04:00
9830971d05 feat: Create namespace in k8s client and wait for namespace ready utility functions 2026-03-20 16:15:51 -04:00
e1183ef6de feat: K8s postgresql score now ensures cnpg is installed 2026-03-20 07:02:26 -04:00
444fea81b8 docs: Fix examples cli in docs
Some checks failed
Run Check Script / check (pull_request) Failing after 12s
2026-03-19 22:52:05 -04:00
907ae04195 chore: Add book.sh script and ci.sh, moved check.sh to build/ folder
Some checks failed
Run Check Script / check (pull_request) Failing after 9s
2026-03-19 22:43:32 -04:00
64582caa64 docs: Major overhaul of documentation
Some checks failed
Run Check Script / check (pull_request) Failing after 10s
2026-03-19 22:38:55 -04:00
f5736fcc37 wip: Config and secret management merge planning and high level documentation
Some checks failed
Run Check Script / check (pull_request) Failing after 43s
2026-03-19 17:02:17 -04:00
7a1e84fb68 doc: ADR 020 on interactive harmony configuration for great UX 2026-03-18 10:40:19 -04:00
8499f4d1b7 Merge pull request 'fix: small details were preventing re-saving frontends, backends and healthchecks in opnsense UI' (#248) from fix/load-balancer-xml into master
Some checks failed
Run Check Script / check (push) Has been cancelled
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #248
2026-03-17 14:38:35 +00:00
231d9b878e debt: Ignore interactive tests with inquire prompts
All checks were successful
Run Check Script / check (pull_request) Successful in 1m21s
2026-03-15 11:37:31 -04:00
ee2dade0be Merge remote-tracking branch 'origin/master' into feat/brocade_assisted_setup
Some checks failed
Run Check Script / check (pull_request) Failing after 1m28s
2026-03-15 10:12:22 -04:00
aa07f4c8ad Merge pull request 'fix/dynamically_get_public_domain' (#234) from fix/dynamically_get_public_domain into master
Some checks failed
Compile and package harmony_composer / package_harmony_composer (push) Failing after 1m48s
Run Check Script / check (push) Failing after 11m1s
Reviewed-on: #234
Reviewed-by: johnride <jg@nationtech.io>
2026-03-15 14:07:25 +00:00
77bb138497 Merge remote-tracking branch 'origin/master' into fix/dynamically_get_public_domain
All checks were successful
Run Check Script / check (pull_request) Successful in 1m20s
2026-03-15 09:54:36 -04:00
a16879b1b6 Merge pull request 'fix: readded tokio retry to get ca cert for a nats cluster which was accidentally removed during a refactor' (#229) from fix/nats-ca-cert-retry into master
Some checks failed
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m1s
Run Check Script / check (push) Failing after 12m23s
Reviewed-on: #229
2026-03-15 12:36:05 +00:00
f57e6f5957 Merge pull request 'feat: add priorityClass to node_health daemonset' (#249) from feat/health_endpoint_priority_class into master
Some checks failed
Run Check Script / check (push) Successful in 1m28s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 1m59s
Reviewed-on: #249
2026-03-14 18:53:30 +00:00
7605d05de3 fix: opnsense fixes for st-mcd (cb1)
All checks were successful
Run Check Script / check (pull_request) Successful in 1m29s
2026-03-13 13:13:37 -04:00
b244127843 feat: add priorityClass to node_health daemonset
All checks were successful
Run Check Script / check (pull_request) Successful in 1m27s
2026-03-13 11:18:18 -04:00
67c3265286 fix: small details were preventing re-saving frontends, backends and healthchecks in opnsense UI
All checks were successful
Run Check Script / check (pull_request) Successful in 2m12s
2026-03-13 10:31:17 -04:00
d10598d01e Merge pull request 'okd load balancer using port 1936 http healthcheck' (#240) from feat/okd_loadbalancer_betterhealthcheck into master
Some checks failed
Run Check Script / check (push) Successful in 1m26s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 1m56s
Reviewed-on: #240
2026-03-10 17:45:51 +00:00
61ba7257d0 fix: remove broken test
All checks were successful
Run Check Script / check (pull_request) Successful in 1m22s
2026-03-10 13:40:24 -04:00
b0e9594d92 Merge branch 'master' into feat/okd_loadbalancer_betterhealthcheck
Some checks failed
Run Check Script / check (pull_request) Failing after 47s
2026-03-07 23:06:50 +00:00
2a7fa466cc Merge pull request 'reafactor/k8sclient' (#243) from reafactor/k8sclient into master
Some checks failed
Run Check Script / check (push) Successful in 2m57s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m2s
Reviewed-on: #243
2026-03-07 23:05:09 +00:00
f463cd1e94 Fix merge conflict between master and refactor/k8sclient
All checks were successful
Run Check Script / check (pull_request) Successful in 1m28s
2026-03-07 17:56:26 -05:00
e1da7949ec Merge pull request 'okd: add worker nodes to load balancer backend pool' (#246) from feat/okd-load-balancer-include-workers into master
Some checks failed
Run Check Script / check (push) Successful in 1m30s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 1m53s
Reviewed-on: #246
2026-03-07 22:42:14 +00:00
d0a1a73710 doc: fix example code to use ignore instead of no_run
All checks were successful
Run Check Script / check (pull_request) Successful in 1m43s
- `no_run` fails because it cannot be used at module level
- Use `ignore` to skip doc compilation while keeping example visible
2026-03-07 17:30:24 -05:00
bc2b328296 okd: include workers in load balancer backend pool + add tests and docs
Some checks failed
Run Check Script / check (pull_request) Failing after 24s
- Add nodes_to_backend_server() function to include both control plane and worker nodes
- Update public services (ports 80, 443) to use worker-inclusive backend pool
- Add comprehensive tests covering all backend configurations
- Add documentation with OKD reference link and usage examples
2026-03-07 17:15:24 -05:00
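The entry above describes a helper that builds a worker-inclusive backend pool for the public ports. A minimal sketch of that idea, assuming simplified stand-in types — the name `nodes_to_backend_server` comes from the commit message, but the real signatures and fields in the repository may differ:

```rust
// Hypothetical, simplified stand-ins for the real topology types.
#[derive(Clone)]
struct Node {
    name: String,
    ip: String,
    is_control_plane: bool,
}

struct BackendServer {
    name: String,
    address: String,
}

/// Backend pool for public services (ports 80/443): include both
/// control plane and worker nodes, as the commit describes.
fn nodes_to_backend_server(nodes: &[Node]) -> Vec<BackendServer> {
    nodes
        .iter()
        .map(|n| BackendServer { name: n.name.clone(), address: n.ip.clone() })
        .collect()
}

/// Backend pool for control-plane-only services: filter first.
fn control_plane_backend(nodes: &[Node]) -> Vec<BackendServer> {
    nodes
        .iter()
        .filter(|n| n.is_control_plane)
        .map(|n| BackendServer { name: n.name.clone(), address: n.ip.clone() })
        .collect()
}

fn main() {
    let nodes = vec![
        Node { name: "cp0".into(), ip: "10.0.0.1".into(), is_control_plane: true },
        Node { name: "worker0".into(), ip: "10.0.0.2".into(), is_control_plane: false },
    ];
    // Public pool sees every node; control-plane pool filters workers out.
    assert_eq!(nodes_to_backend_server(&nodes).len(), 2);
    assert_eq!(control_plane_backend(&nodes).len(), 1);
}
```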
a93896707f okd: add worker nodes to load balancer backend pool
All checks were successful
Run Check Script / check (pull_request) Successful in 1m29s
Include both control plane and worker nodes in ports 80 and 443 backend pools
2026-03-07 16:46:47 -05:00
0e9b23a320 Merge branch 'feat/change-node-readiness-strategy'
Some checks failed
Run Check Script / check (push) Successful in 1m26s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m11s
2026-03-07 16:35:14 -05:00
f532ba2b40 doc: Update node readiness readme and deployed port to 25001
All checks were successful
Run Check Script / check (pull_request) Successful in 1m27s
2026-03-07 16:33:28 -05:00
fafca31798 fix: formatting and check script
All checks were successful
Run Check Script / check (pull_request) Successful in 1m28s
2026-03-07 16:08:52 -05:00
5412c34957 Merge pull request 'fix: change vlan definition from MaybeString to RawXml' (#245) from feat/opnsense-config-xml-support-vlan into master
Some checks failed
Run Check Script / check (push) Successful in 1m47s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m7s
Reviewed-on: #245
2026-03-07 20:59:28 +00:00
787cc8feab Fix doc tests for harmony-k8s crate refactoring
All checks were successful
Run Check Script / check (pull_request) Successful in 2m6s
- Updated harmony-k8s doc tests to import from harmony_k8s instead of harmony
- Changed CloudNativePgOperatorScore::default() to default_openshift()

This ensures doc tests work correctly after moving K8sClient to the harmony-k8s crate.
2026-03-07 15:50:39 -05:00
ce041f495b fix(zitadel): include admin@zitadel.{host} username, secure password with symbol/number, and cert-manager TLS configuration
Some checks failed
Run Check Script / check (pull_request) Failing after 26s
Update Zitadel deployment to use correct username format (admin@zitadel.{host}), generate secure passwords with required complexity (uppercase, lowercase, digit, symbol), configure edge TLS termination for OpenShift, and add cert-manager annotations. Also refactor password generation to ensure all complexity requirements are met.
2026-03-07 15:29:26 -05:00
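The Zitadel commit above mentions refactoring password generation so every complexity requirement (uppercase, lowercase, digit, symbol) is always met. A minimal sketch of that guarantee, assuming that exact policy; the real code presumably draws characters from a CSPRNG, while this deterministic "check and repair" version keeps the example self-contained:

```rust
/// True if the password satisfies the assumed policy: at least one
/// uppercase letter, lowercase letter, digit, and symbol.
fn meets_complexity(pw: &str) -> bool {
    pw.chars().any(|c| c.is_ascii_uppercase())
        && pw.chars().any(|c| c.is_ascii_lowercase())
        && pw.chars().any(|c| c.is_ascii_digit())
        && pw.chars().any(|c| !c.is_ascii_alphanumeric())
}

/// Append one character from each missing class so the result always
/// satisfies the policy, whatever the initial candidate looked like.
/// (Illustrative only: real code would pick these characters randomly.)
fn ensure_complexity(candidate: &str) -> String {
    let mut pw = candidate.to_string();
    if !pw.chars().any(|c| c.is_ascii_uppercase()) { pw.push('A'); }
    if !pw.chars().any(|c| c.is_ascii_lowercase()) { pw.push('a'); }
    if !pw.chars().any(|c| c.is_ascii_digit()) { pw.push('7'); }
    if !pw.chars().any(|c| !c.is_ascii_alphanumeric()) { pw.push('!'); }
    pw
}

fn main() {
    assert!(!meets_complexity("weakpassword"));
    assert!(meets_complexity("Str0ng!pw"));
    // Whatever goes in, the repaired password passes the policy.
    assert!(meets_complexity(&ensure_complexity("weakpassword")));
}
```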
bfb86f63ce fix: xml field for vlan
All checks were successful
Run Check Script / check (pull_request) Successful in 1m31s
2026-03-07 11:29:44 -05:00
55de206523 fix: change vlan definition from MaybeString to RawXml
All checks were successful
Run Check Script / check (pull_request) Successful in 1m29s
2026-03-07 10:03:03 -05:00
64893a84f5 fix(node health endpoint): Setup sane timeouts for usage as a load balancer health check. The default k8s client timeout of 30 seconds caused the haproxy health check to fail even though we still returned 200 OK after 30 seconds
Some checks failed
Run Check Script / check (pull_request) Failing after 25s
2026-03-06 16:28:13 -05:00
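The fix above bounds the upstream status call so the health endpoint answers inside the load balancer's check window instead of inheriting a 30-second client default. A std-only sketch of that pattern — the "kube API round trip" is simulated on a worker thread and bounded with `recv_timeout`, which is an assumption standing in for whatever timeout mechanism the real client uses:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Run a (simulated) upstream status check, but never wait longer than
/// `deadline`: if the upstream is slow, report unhealthy promptly rather
/// than keeping haproxy waiting past its own check timeout.
fn check_with_deadline(deadline: Duration, upstream_latency: Duration) -> bool {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        thread::sleep(upstream_latency); // stand-in for the kube API call
        let _ = tx.send(true);
    });
    rx.recv_timeout(deadline).unwrap_or(false)
}

fn main() {
    // Fast upstream: answer arrives within the deadline.
    assert!(check_with_deadline(Duration::from_millis(200), Duration::from_millis(10)));
    // Slow upstream: fail fast instead of blocking for the full latency.
    assert!(!check_with_deadline(Duration::from_millis(50), Duration::from_millis(500)));
}
```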
f941672662 fix: Node readiness always fails open when kube api call fails on node status check
Some checks failed
Run Check Script / check (pull_request) Failing after 1m54s
2026-03-06 15:45:38 -05:00
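"Fails open" in the commit above means an error talking to the kube API is not treated as the node being unhealthy. A tiny sketch of that decision, with simplified stand-in types:

```rust
/// Map a kube API result to a readiness verdict. An Err from the API
/// says nothing about the node itself, so fail open and report ready
/// rather than letting an infrastructure hiccup eject a healthy node.
fn node_ready(api_result: Result<bool, String>) -> bool {
    match api_result {
        Ok(ready) => ready,
        Err(_) => true, // fail open on API errors
    }
}

fn main() {
    assert!(node_ready(Ok(true)));
    assert!(!node_ready(Ok(false)));
    // API failure: fail open.
    assert!(node_ready(Err("kube api timeout".to_string())));
}
```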
a98113dd40 wip: zitadel ingress https not working yet
Some checks failed
Run Check Script / check (pull_request) Failing after 28s
2026-03-06 15:28:21 -05:00
5db1a31d33 ... 2026-03-06 15:24:33 -05:00
f5aac67af8 feat: k8s client works fine, added version config in zitadel and fix master key secret existence handling
Some checks failed
Run Check Script / check (pull_request) Failing after 32s
2026-03-06 15:15:35 -05:00
d7e5bf11d5 removing bad stuff I did this morning and trying to make it simple, and adding a couple tests 2026-03-06 14:41:08 -05:00
2e1f1b8447 feat: Refactor K8sClient into separate, publishable crate, and add zitadel example 2026-03-06 14:21:15 -05:00
2b157ad7fd feat: add a background loop checking the node status every X seconds. If NotReady for Y seconds, kill the router pod if there's one 2026-03-06 11:57:39 -05:00
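The background loop above only acts once the node has been NotReady for a continuous Y seconds. A sketch of that debounce, with time injected as a parameter so the logic is testable without sleeping; the type and threshold are illustrative, not the repository's actual names:

```rust
use std::time::Duration;

/// Tracks how long a node has been continuously NotReady across polls.
struct NotReadyTracker {
    since: Option<Duration>, // poll timestamp when NotReady was first seen
    threshold: Duration,     // "Y seconds" from the commit message
}

impl NotReadyTracker {
    fn new(threshold: Duration) -> Self {
        Self { since: None, threshold }
    }

    /// Called on every poll ("every X seconds"); returns true when the
    /// router pod should be killed.
    fn observe(&mut self, now: Duration, ready: bool) -> bool {
        if ready {
            self.since = None; // any Ready observation resets the streak
            return false;
        }
        let start = *self.since.get_or_insert(now);
        now - start >= self.threshold
    }
}

fn main() {
    let mut t = NotReadyTracker::new(Duration::from_secs(30));
    assert!(!t.observe(Duration::from_secs(0), false));  // streak starts
    assert!(!t.observe(Duration::from_secs(10), false)); // not long enough yet
    assert!(t.observe(Duration::from_secs(30), false));  // threshold reached
    assert!(!t.observe(Duration::from_secs(40), true));  // Ready resets it
    assert!(!t.observe(Duration::from_secs(50), false)); // new streak begins
}
```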
a0c0905c3b wip: zitadel deployment 2026-03-06 10:56:48 -05:00
d920de34cf fix: configure health_check: None for public_services
All checks were successful
Run Check Script / check (pull_request) Successful in 1m35s
2026-03-05 14:55:00 -05:00
4276b9137b fix: put the hc on private_services, not public_services
All checks were successful
Run Check Script / check (pull_request) Successful in 1m32s
2026-03-05 14:35:33 -05:00
6ab88ab8d9 Merge branch 'master' into feat/okd_loadbalancer_betterhealthcheck 2026-03-04 10:46:57 -05:00
fe52f69473 Merge pull request 'feat/openbao_secret_manager' (#239) from feat/openbao_secret_manager into master
Some checks failed
Run Check Script / check (push) Successful in 1m35s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m36s
Reviewed-on: #239
Reviewed-by: stremblay <stremblay@nationtech.io>
2026-03-04 15:06:15 +00:00
d8338ad12c wip(sso): Openbao deploys fine, not fully tested yet, zitadel wip
All checks were successful
Run Check Script / check (pull_request) Successful in 1m40s
2026-03-04 09:53:33 -05:00
ac9fedf853 wip(secret store): Fix openbao, refactor with rust client 2026-03-04 09:33:21 -05:00
fd3705e382 wip(secret store): openbao/vault store implementation 2026-03-04 09:33:21 -05:00
4840c7fdc2 Merge pull request 'feat/node-health-score' (#242) from feat/node-health-score into master
Some checks failed
Run Check Script / check (push) Successful in 1m51s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 3m16s
Reviewed-on: #242
Reviewed-by: johnride <jg@nationtech.io>
2026-03-04 14:31:44 +00:00
20172a7801 removing another useless commented line
All checks were successful
Run Check Script / check (pull_request) Successful in 2m17s
2026-03-04 09:31:02 -05:00
6bb33c5845 remove useless comment
All checks were successful
Run Check Script / check (pull_request) Successful in 1m43s
2026-03-04 09:29:49 -05:00
d9357adad3 format code, fix interpret name
All checks were successful
Run Check Script / check (pull_request) Successful in 1m33s
2026-03-04 09:28:32 -05:00
a25ca86bdf wip: happy path is working
Some checks failed
Run Check Script / check (pull_request) Failing after 29s
2026-03-04 08:21:08 -05:00
646c5e723e feat: implementing node_health 2026-03-04 07:16:25 -05:00
69c382e8c6 Merge pull request 'feat(k8s): Can now apply resources of any scope. Kind of a hack leveraging the dynamic type under the hood but this is due to a limitation of kube-rs' (#241) from feat/k8s_apply_any_scope into master
Some checks failed
Run Check Script / check (push) Successful in 2m42s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 4m15s
Reviewed-on: #241
Reviewed-by: stremblay <stremblay@nationtech.io>
2026-03-03 20:06:03 +00:00
dca764395d feat(k8s): Can now apply resources of any scope. Kind of a hack leveraging the dynamic type under the hood but this is due to a limitation of kube-rs
Some checks failed
Run Check Script / check (pull_request) Failing after 38s
2026-03-03 14:37:52 -05:00
53d0704a35 wip: okd load balancer using port 1936 http healthcheck
Some checks failed
Run Check Script / check (pull_request) Failing after 31s
2026-03-02 20:47:41 -05:00
2738985edb Merge pull request 'feat: New harmony node readiness mini project that exposes health of a node on port 25001' (#237) from feat/harmony-node-health-endpoint into master
Some checks failed
Run Check Script / check (push) Successful in 1m36s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 3m35s
Reviewed-on: #237
2026-03-02 19:56:39 +00:00
d9a21bf94b feat: node readiness now supports a check query param with node_ready and okd_router_1936 options
All checks were successful
Run Check Script / check (pull_request) Successful in 1m51s
2026-03-02 14:55:28 -05:00
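The commit above routes a `check` query parameter to one of two probes. A sketch of that dispatch, assuming `node_ready` is the default when the parameter is absent — the parameter names come from the commit message, the default and error handling are assumptions:

```rust
/// The two probe options named in the commit message.
#[derive(Debug, PartialEq)]
enum Check {
    NodeReady,
    OkdRouter1936,
}

/// Parse the optional ?check= query parameter into a probe selection.
fn parse_check(param: Option<&str>) -> Result<Check, String> {
    match param {
        None | Some("node_ready") => Ok(Check::NodeReady), // assumed default
        Some("okd_router_1936") => Ok(Check::OkdRouter1936),
        Some(other) => Err(format!("unknown check: {other}")),
    }
}

fn main() {
    assert_eq!(parse_check(None), Ok(Check::NodeReady));
    assert_eq!(parse_check(Some("node_ready")), Ok(Check::NodeReady));
    assert_eq!(parse_check(Some("okd_router_1936")), Ok(Check::OkdRouter1936));
    assert!(parse_check(Some("bogus")).is_err());
}
```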
8f8bd34168 feat: Deployment is now happening in harmony-node-healthcheck namespace
All checks were successful
Run Check Script / check (pull_request) Successful in 1m42s
2026-02-26 16:39:26 -05:00
b5e971b3b6 feat: adding yaml to deploy k8s resources
All checks were successful
Run Check Script / check (pull_request) Successful in 1m37s
2026-02-26 16:36:16 -05:00
a1c0e0e246 fix: build docker default value
All checks were successful
Run Check Script / check (pull_request) Successful in 1m38s
2026-02-26 16:35:38 -05:00
d084cee8d5 doc(node-readiness): Fix README
All checks were successful
Run Check Script / check (pull_request) Successful in 1m38s
2026-02-26 16:33:12 -05:00
63ef1c0ea7 feat: New harmony node readiness mini project that exposes health of a node on port 25001
All checks were successful
Run Check Script / check (pull_request) Successful in 1m36s
2026-02-26 16:23:27 -05:00
de49e9ebcc feat: Brocade switch setup now asks questions for missing links instead of failing
Some checks failed
Run Check Script / check (pull_request) Failing after 53s
2026-02-19 10:31:47 -05:00
d8ab9d52a4 fix: broken test
All checks were successful
Run Check Script / check (pull_request) Successful in 1m0s
2026-02-17 15:34:42 -05:00
2cb7aeefc0 fix: deploys replicated postgresql with site 2 as standby
Some checks failed
Run Check Script / check (pull_request) Failing after 1m8s
2026-02-17 15:02:00 -05:00
ff7d2fb89e fix: Complete brocade switch config and auth refactoring
Some checks failed
Run Check Script / check (push) Successful in 1m12s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m57s
2026-02-17 10:31:30 -05:00
9bb38b930a Merge pull request 'refactor: brocade switch slight improvements' (#233) from fix/brocade into master
Some checks failed
Run Check Script / check (push) Failing after -10s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m54s
Reviewed-on: #233
2026-02-17 15:16:42 +00:00
c677487a5e Merge pull request 'feat/drain_k8s_node' (#232) from feat/drain_k8s_node into master
Some checks failed
Run Check Script / check (push) Successful in 1m0s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 3m11s
Reviewed-on: #232
Reviewed-by: stremblay <stremblay@nationtech.io>
2026-02-17 15:01:08 +00:00
c1d46612ac fix: dnsmasq now replaces mac address
All checks were successful
Run Check Script / check (pull_request) Successful in 1m2s
2026-02-17 10:00:11 -05:00
4fba01338d feat: Reboot k8s node works, good logs and tests
All checks were successful
Run Check Script / check (pull_request) Successful in 1m3s
2026-02-17 09:30:04 -05:00
913ed17453 Merge remote-tracking branch 'origin/master' into feat/drain_k8s_node 2026-02-17 09:15:06 -05:00
9e185cbbd5 chore: cleanup comments 2026-02-17 09:15:03 -05:00
752526f831 fix: reboot node now works with correct command
Some checks failed
Run Check Script / check (pull_request) Failing after 54s
2026-02-16 23:04:18 -05:00
f9bd6ad260 refactor: brocade switch slight improvements
Some checks failed
Run Check Script / check (pull_request) Failing after -10s
2026-02-16 21:08:56 -05:00
111181c300 wip
Some checks failed
Run Check Script / check (pull_request) Failing after 54s
2026-02-16 20:54:46 -05:00
16016febcf wip: adding impl details for deploying connected replica cluster 2026-02-16 16:22:30 -05:00
3257cd9569 wip: Reboot node cleanly via k8s api, copy files on node, run remote command with output, orchestrate network configuration, and some more
All checks were successful
Run Check Script / check (pull_request) Successful in 1m13s
2026-02-15 22:17:43 -05:00
4b1915c594 Merge pull request 'feat: improve output related to storage in the discovery process' (#231) from feat/improve-disk-device-display into master
All checks were successful
Run Check Script / check (push) Successful in 1m6s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m17s
Reviewed-on: #231
2026-02-15 18:22:54 +00:00
cf3050ce87 feat(k8s client): K8sClient module now holds the responsibility for the k8s distribution detection, add resource bundle useful for easy create and delete of a bunch of related resources.
First use case is creating a privileged pod allowing writing to nodes on
openshift family clusters. This requires creating the clusterrolebinding
and pod and other resources.
2026-02-15 09:14:49 -05:00
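The resource bundle described above exists so a group of related resources (clusterrolebinding, privileged pod, etc.) can be created and later torn down as a unit. A minimal sketch of that bookkeeping, assuming a simplified string-based API — the real bundle would call the k8s client, while this version just records what it would create and delete:

```rust
/// Records every resource as it is created so the whole set can be
/// deleted afterwards without the caller tracking each one.
struct ResourceBundle {
    created: Vec<String>, // e.g. "clusterrolebinding/privileged-writer"
}

impl ResourceBundle {
    fn new() -> Self {
        Self { created: Vec::new() }
    }

    fn create(&mut self, resource: &str) {
        // ... the real code would apply the resource via the cluster API ...
        self.created.push(resource.to_string());
    }

    /// Tear down everything the bundle created, newest first, so
    /// dependents are removed before the things they depend on.
    /// Returns the deletion order for illustration.
    fn delete_all(&mut self) -> Vec<String> {
        let mut order: Vec<String> = self.created.drain(..).collect();
        order.reverse();
        order
    }
}

fn main() {
    let mut b = ResourceBundle::new();
    b.create("clusterrolebinding/privileged-writer");
    b.create("pod/privileged-writer");
    let order = b.delete_all();
    // Pod first, then the clusterrolebinding it needed.
    assert_eq!(order[0], "pod/privileged-writer");
    assert_eq!(order[1], "clusterrolebinding/privileged-writer");
}
```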
c3e27c60be feat: improve output related to storage in the discovery process
All checks were successful
Run Check Script / check (pull_request) Successful in 1m55s
2026-02-14 15:32:01 -05:00
2d26790c82 wip: K8s copy file on node refactoring to extract helpers and add tests 2026-02-14 10:22:48 -05:00
2e89308b82 wip: Copy files on k8s node via ephemeral pod and configmap 2026-02-14 08:07:03 -05:00
e709de531d fix: added route building to failover topology 2026-02-13 16:08:05 -05:00
d8936a8307 feat(okd/network_manager): Add get_node_name_for_id and refactor 2026-02-13 15:49:24 -05:00
6ab0f3a6ab wip 2026-02-13 15:48:24 -05:00
e2fa12508f feat: Add k8s client drain node functionality with tests and example 2026-02-13 15:19:58 -05:00
724ab0b888 wip: removed hardcoding and added fn to trait tlsrouter 2026-02-13 15:18:23 -05:00
bea2a75882 doc(opnsense): Add note that dnsmasq mac addresses will be dropped when setting static host
2026-02-13 15:18:20 -05:00
a1528665d0 Merge remote-tracking branch 'origin/doc-and-braindump'
All checks were successful
Run Check Script / check (push) Successful in 1m18s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 8m47s
2026-02-12 10:52:38 -05:00
613225a00b chore: push misc formatting and details
Some checks failed
Run Check Script / check (push) Has been cancelled
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
2026-02-12 10:50:16 -05:00
dd1c088f0d example: Nats supercluster enable jetstream 2026-02-12 10:42:56 -05:00
b4ef009804 chore: Add note in cargo.toml to replace with serde-saphyr as serde_yaml is deprecated
Some checks failed
Run Check Script / check (push) Successful in 2m17s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
2026-02-12 10:42:16 -05:00
191e92048b feat: git ignore all ignore folders in the project 2026-02-12 10:42:16 -05:00
f4a70d8978 Merge pull request 'feat: integrate-brocade' (#230) from feat/integrate-brocade into master
Some checks failed
Run Check Script / check (push) Has been cancelled
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #230
2026-02-12 15:41:15 +00:00
2ddc9c0579 fix: format
All checks were successful
Run Check Script / check (pull_request) Successful in 1m12s
2026-02-12 10:31:06 -05:00
fececc2efd Creating a BrocadeSwitchConfig struct
Some checks failed
Run Check Script / check (pull_request) Failing after 0s
2026-02-09 11:24:29 -05:00
8afcacbd24 feat: integrate brocade 2026-02-09 09:53:16 -05:00
8b6ce8d069 fix: readded tokio retry to get ca cert for a nats cluster which was accidentally removed during a refactor
All checks were successful
Run Check Script / check (pull_request) Successful in 1m9s
2026-02-06 09:09:01 -05:00
84992b7ada Merge pull request 'fix: use installation_device from host_config in bootstrap_okd_node' (#228) from fix/installation-device-for-nodes-in-byMAC into master
All checks were successful
Run Check Script / check (push) Successful in 1m7s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 8m20s
Reviewed-on: #228
2026-02-05 22:38:00 +00:00
7cd3c8b93d fix: shorthand
All checks were successful
Run Check Script / check (pull_request) Successful in 1m10s
2026-02-05 17:29:10 -05:00
83459eb2a6 fix: satisfy my ocd :)
All checks were successful
Run Check Script / check (pull_request) Successful in 1m8s
2026-02-05 17:04:53 -05:00
d6ddbfa51a fix: use installation_device from host_config in bootstrap_okd_node
All checks were successful
Run Check Script / check (pull_request) Successful in 1m11s
2026-02-05 16:57:14 -05:00
c6ca3d38d1 Merge pull request 'feat/harmony_agent' (#220) from feat/harmony_agent into master
Some checks failed
Run Check Script / check (push) Successful in 1m7s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 9m50s
Reviewed-on: #220
2026-02-04 21:05:34 +00:00
4c79a7628d Merge branch 'master' into feat/harmony_agent
All checks were successful
Run Check Script / check (pull_request) Successful in 1m8s
2026-02-04 16:03:22 -05:00
7ca1a64038 feat: completed harmony_agent implementation for primary and replica agents, fixed a test
All checks were successful
Run Check Script / check (pull_request) Successful in 1m7s
2026-02-04 15:56:40 -05:00
333884a81a Merge pull request 'feat: created decentralized topology, capability nats and nats super cluster' (#221) from feat/nats_capability into master
All checks were successful
Run Check Script / check (push) Successful in 1m8s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 9m48s
Reviewed-on: #221
Reviewed-by: johnride <jg@nationtech.io>
2026-02-04 19:05:50 +00:00
74e6da1a16 Merge branch 'master' into feat/nats_capability
All checks were successful
Run Check Script / check (pull_request) Successful in 1m5s
2026-02-04 14:03:47 -05:00
0372cc3f31 Merge pull request 'fix/nats-isp' (#226) from fix/nats-isp into feat/nats_capability
All checks were successful
Run Check Script / check (pull_request) Successful in 1m5s
Reviewed-on: #226
2026-02-04 18:55:12 +00:00
de14ba6b97 fix(agent): fetch from store returns metadata to allow rebuilding states properly
Some checks failed
Run Check Script / check (pull_request) Failing after 5s
2026-02-04 12:10:33 -05:00
a08c3fb03b wip: save new cluster info state
Some checks failed
Run Check Script / check (pull_request) Failing after 5s
2026-02-04 11:47:11 -05:00
17b3b3b351 test(agent): Wrote first few tests for Primary workflow use cases: initializing to healthy, healthy to failed
Some checks failed
Run Check Script / check (pull_request) Failing after 6s
2026-02-04 09:26:10 -05:00
01a775a01f wip(agent): workflows now return new cluster state when they decide to alter it; primary taking control of current_primary case handled but using wrong ID
Some checks failed
Run Check Script / check (pull_request) Failing after 7s
2026-02-04 07:01:13 -05:00
9c551a0eba fix: Agent can now reload heartbeat info from store
Some checks failed
Run Check Script / check (pull_request) Failing after 10s
2026-02-03 22:12:44 -05:00
a88d67627a chore: Add a note and delete old code
Some checks failed
Run Check Script / check (pull_request) Failing after 6s
2026-02-03 20:46:18 -05:00
5b04cc96d7 wip: we want to initialize to the right seq number after a restart
Some checks failed
Run Check Script / check (pull_request) Failing after 6s
2026-02-03 14:50:03 -05:00
73cda3425f Merge branch 'feat/harmony_agent' of https://git.nationtech.io/NationTech/harmony into feat/harmony_agent 2026-02-03 11:53:27 -05:00
7065e90475 feat: use the role of the agent to define its name 2026-02-03 11:45:03 -05:00
a20919bbda wip: write cluster state to jetstream kv
Some checks failed
Run Check Script / check (pull_request) Failing after 8s
2026-02-03 11:43:22 -05:00
948334b89e wip: cleaning up llm code, pretty close to something comprehensible and robust
Some checks failed
Run Check Script / check (pull_request) Failing after 13s
2026-02-03 06:39:56 -05:00
2a1d489b78 Merge pull request 'feat: support use-swap-file opnsense xml field' (#227) from feat/support-use-swap-file into master
All checks were successful
Run Check Script / check (push) Successful in 1m11s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 9m39s
Reviewed-on: #227
2026-02-02 20:06:33 +00:00
4507504c47 feat: support use-swap-file opnsense xml field
All checks were successful
Run Check Script / check (pull_request) Successful in 1m11s
2026-02-02 14:56:26 -05:00
50aa545bd9 wip(harmony_agent): It compiles, contains most if not all of the required skeleton, now time to review it carefully, complete a few details and battle test it
Some checks failed
Run Check Script / check (pull_request) Failing after 11s
2026-02-01 20:54:11 -05:00
8b200cfe91 chore: removed commented code
All checks were successful
Run Check Script / check (pull_request) Successful in 1m9s
2026-01-30 14:02:52 -05:00
f6ff78a573 Merge branch 'feat/nats_capability' into fix/nats-isp
All checks were successful
Run Check Script / check (pull_request) Successful in 1m8s
2026-01-30 13:50:07 -05:00
329d5d8473 fix: format
All checks were successful
Run Check Script / check (pull_request) Successful in 1m9s
2026-01-30 13:42:01 -05:00
cd81d6584c fix: removed nats implementation details from k8sanywhere, added secret prompt for nats cluster using harmony secret 2026-01-30 13:41:38 -05:00
a0f32bb565 wip: working on separation of concerns 2026-01-30 10:57:22 -05:00
0cff1e0f66 feat: Harmony agent new algorithm based on heartbeat counters basics. Old code will need to be refactored completely
Some checks failed
Run Check Script / check (pull_request) Failing after 11s
2026-01-30 06:58:03 -05:00
29d2d620d1 Merge pull request 'feat: introduced crate tokio-retry to allow multiple attempts to get secret from k8s' (#225) from fix/nats-capability-retry into feat/nats_capability
All checks were successful
Run Check Script / check (pull_request) Successful in 1m14s
Reviewed-on: #225
2026-01-29 20:42:32 +00:00
7df8429181 feat: introduced crate tokio-retry to allow multiple attempts to get secret from k8s
All checks were successful
Run Check Script / check (pull_request) Successful in 1m9s
2026-01-29 15:03:33 -05:00
0358ea5959 Merge pull request 'fix: support DiscoveryStrategy in OKDSetup01InventoryScore' (#224) from fix/support-discovery-strategy-OKDSetup01InventoryScore into master
Some checks failed
Run Check Script / check (push) Successful in 1m14s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m59s
Reviewed-on: #224
2026-01-28 20:48:28 +00:00
eebda0f4aa chore: format
All checks were successful
Run Check Script / check (pull_request) Successful in 1m14s
2026-01-28 15:20:33 -05:00
666a3c0071 fix: modified nats trait and nats supercluster trait to better respect interface segregation
All checks were successful
Run Check Script / check (pull_request) Successful in 1m12s
2026-01-28 15:16:46 -05:00
a8217887f4 Merge branch 'master' into fix/support-discovery-strategy-OKDSetup01InventoryScore
Some checks failed
Run Check Script / check (pull_request) Failing after 13s
2026-01-28 20:16:29 +00:00
edf94554b8 fix: formatting and example env variables
All checks were successful
Run Check Script / check (pull_request) Successful in 1m13s
2026-01-28 14:43:50 -05:00
4ea3d7f69c fix: support DiscoveryStrategy in OKDSetup01InventoryScore
Some checks failed
Run Check Script / check (pull_request) Failing after 10s
2026-01-28 14:41:49 -05:00
00d4b9de73 fix: added fullnameOverride to helmchart score values so that the helm deployed cluster name and service name match the cluster name defined in the NatsK8sScore; without this field helm appends -nats to svc and cluster name, breaking the tls chain 2026-01-28 14:25:24 -05:00
cb90788129 Merge pull request 'fix(deps): updating fqdn version as the one currently in use have been yanked' (#223) from fix/update-fqdn-version into master
Some checks failed
Run Check Script / check (push) Successful in 1m35s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 10m47s
Reviewed-on: #223
2026-01-28 19:21:25 +00:00
5ee9643a6c fix(deps): updating fqdn version as the one currently in use have been yanked
All checks were successful
Run Check Script / check (pull_request) Successful in 1m14s
2026-01-28 10:27:10 -05:00
92d4e3488a refactor: modified struct to accept N sites and N clusters 2026-01-28 09:43:43 -05:00
e557270960 Merge pull request 'feat/ask-for-main-disk' (#222) from feat/ask-for-main-disk into master
Some checks failed
Run Check Script / check (push) Successful in 1m59s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 9m32s
Reviewed-on: #222
2026-01-27 17:13:56 +00:00
54320e2ebe chore: refactor PhysicalHost into a tuple with HostConfig; it contains only installation device for now but it saves some mishaps
All checks were successful
Run Check Script / check (pull_request) Successful in 1m13s
2026-01-27 12:11:49 -05:00
fdd5d1b47c in the middle of HostConfig refactor 2026-01-27 12:03:23 -05:00
6ff43f4775 Remove nanodc example 2026-01-27 11:47:25 -05:00
f6b0f321b4 refactor: get_host_for_role -> get_hosts_for_role 2026-01-27 09:46:49 -05:00
619ac99b44 feat: query for installation disk during discovery, store it in host_role_mapping 2026-01-27 09:22:24 -05:00
922dd794d9 feat: run sqlx migrations automatically 2026-01-27 07:47:14 -05:00
8959719375 wip: created decentralized topology, capability nats and nats super cluster
Some checks failed
Run Check Script / check (pull_request) Failing after 17s
2026-01-26 16:21:26 -05:00
9e1095fb9b Merge pull request 'feat/nats' (#207) from feat/nats into master
All checks were successful
Run Check Script / check (push) Successful in 1m16s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m58s
Reviewed-on: #207
2026-01-26 16:32:05 +00:00
4758465b28 Merge remote-tracking branch 'origin/master' into feat/nats
All checks were successful
Run Check Script / check (pull_request) Successful in 1m13s
2026-01-26 11:28:46 -05:00
8ae38399b7 Merge pull request 'feat: use interactive_parse lib to query for secrets attributes values' (#219) from feat/json-attributes-prompt into master
All checks were successful
Run Check Script / check (push) Successful in 1m13s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m33s
Reviewed-on: #219
2026-01-26 15:54:04 +00:00
565bb4afa1 chore: formatting
All checks were successful
Run Check Script / check (pull_request) Successful in 1m15s
2026-01-26 10:50:16 -05:00
25d5aff158 chore: remove useless comments
Some checks failed
Run Check Script / check (pull_request) Failing after 15s
2026-01-26 10:45:48 -05:00
95f860809e chore: remove useless comments
All checks were successful
Run Check Script / check (pull_request) Successful in 1m14s
2026-01-26 10:43:02 -05:00
ce53ae0e04 Merge branch 'master' into feat/json-attributes-prompt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m17s
2026-01-26 15:38:30 +00:00
deca67fd55 feat(backend_app): Deployment now pretty much works to package and deploy an app with an existing Docker image and type-safe helm chart on local k3d, not tested for remote k8s with Argo yet
Some checks failed
Run Check Script / check (pull_request) Failing after 17s
2026-01-25 22:54:14 -05:00
0cc5f505f8 feat(harmony_execution): New crate to contain execution utilities such as command-line helpers
2026-01-25 22:52:29 -05:00
ab68e7309d feat: Use k8s openapi structs as helm chart resources following ADR 018
Some checks failed
Run Check Script / check (pull_request) Failing after 21s
2026-01-23 23:31:37 -05:00
093e0d54c0 Merge pull request 'adr/nats-islands-of-trust' (#209) from adr/nats-islands-of-trust into feat/nats
All checks were successful
Run Check Script / check (pull_request) Successful in 1m16s
Reviewed-on: #209
2026-01-23 20:05:58 +00:00
8657261342 fix: extracted variables, removed uncool side effect
All checks were successful
Run Check Script / check (pull_request) Successful in 1m17s
2026-01-23 15:03:03 -05:00
c20db5b361 doc(adr): New ADR Template hydration for strongly typed workload deployment
Some checks failed
Run Check Script / check (pull_request) Failing after 19s
2026-01-23 11:49:32 -05:00
33476e899e Merge pull request 'fix: modified cert-manager ensure ready to check for existence of pods with labels matching cert-manager in kubernetes env. replaced deprecated olm subscription based install of cert-manager for supported helm-chart' (#218) from fix/cert-manager into master
All checks were successful
Run Check Script / check (push) Successful in 1m17s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m31s
Reviewed-on: #218
2026-01-23 16:28:34 +00:00
d0cd21c322 fix: improved logging and function names for clarity
All checks were successful
Run Check Script / check (pull_request) Successful in 1m17s
2026-01-23 11:26:35 -05:00
b2f0773795 wip: Working on backend app deployment 2026-01-23 09:34:58 -05:00
9c6b780634 feat: use interactive_parse lib to query for secrets attributes values
All checks were successful
Run Check Script / check (pull_request) Successful in 1m20s
2026-01-23 08:55:07 -05:00
3682a0cb5f feat: First draft of harmony_agent project that will synchronize multiple clusters using nats supercluster to communicate 2026-01-23 08:54:40 -05:00
53e1711aef Merge pull request 'feat: Create st-test example, fix a couple new missing xml fields for opnsense, fix bad HostRole' (#217) from feat/st_test into master
All checks were successful
Run Check Script / check (push) Successful in 1m20s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m50s
Reviewed-on: #217
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-22 20:59:23 +00:00
f37a8e373a fix: modified cert-manager ensure ready to check for existence of pods with labels matching cert-manager in kubernetes env. replaced deprecated olm subscription based install of cert-manager for supported helm-chart
All checks were successful
Run Check Script / check (pull_request) Successful in 1m20s
2026-01-22 15:56:17 -05:00
c631b3aef9 fix more opnsense stuff, remove installation notes
All checks were successful
Run Check Script / check (pull_request) Successful in 1m19s
2026-01-22 15:54:19 -05:00
3e2d94cff0 adding Cargo.lock
All checks were successful
Run Check Script / check (pull_request) Successful in 1m19s
2026-01-22 14:54:39 -05:00
c9e39d11ad fix: fix opnsense stuff for opnsense 25.1 test file
Some checks failed
Run Check Script / check (pull_request) Failing after 12s
2026-01-22 14:52:29 -05:00
740b5500f2 feat: added poc for deploying nats supercluster with certificates, issuers, and okd routes
All checks were successful
Run Check Script / check (pull_request) Successful in 1m20s
2026-01-22 11:37:10 -05:00
52bff9b6be fix: mod.rs 2026-01-22 11:37:10 -05:00
bc962be31f fix: moved cert management ensure ready to k8sanywhere 2026-01-22 11:37:10 -05:00
f6a20832cf lint: Remove useless variable assignment 2026-01-22 11:37:10 -05:00
a4515d34ae fix: modified score names for better clarity 2026-01-22 11:37:10 -05:00
2b324d7962 fix: modified trait to use other return types, modified trait function name to be ensure ready, use rust CRD definitions rather than constructing gvk for certificateManagement trait function in k8sanywhere 2026-01-22 11:37:10 -05:00
779444699f fix: modified k8sanywhere implementation of get_ca_cert to use the kubernetes certificate name to find its respective secret and ca.crt 2026-01-22 11:37:10 -05:00
865dab2fc1 feat: added fn get_ca_cert to trait certificateManagement 2026-01-22 11:37:10 -05:00
502e544cd3 cargo fmt 2026-01-22 11:37:10 -05:00
4f2a7050f5 feat: added working examples to add self signed issuer and self signed certificate. modified get_resource_json_value to be able to get cluster scoped operators 2026-01-22 11:37:10 -05:00
26256d9945 fix: added create_issuer fn to trait and its implementation in k8sanywhere 2026-01-22 11:37:10 -05:00
947733b240 wip: added scores and basic implementation to create certs and issuers 2026-01-22 11:37:10 -05:00
d3a8171e3c feat(cert-manager): added crds for cert-manager 2026-01-22 11:37:10 -05:00
043cd561e9 feat: added cert manager capability as well as scores to install openshift subscription to community cert-manager operator 2026-01-22 11:37:10 -05:00
5ed14b75ed chore: fix formatting 2026-01-22 08:47:56 -05:00
25a45096f8 doc: adding installation notes file 2026-01-21 16:22:59 -05:00
74252ded5c chore: remove useless brocade stuff
Some checks failed
Run Check Script / check (pull_request) Failing after 24s
2026-01-21 15:12:23 -05:00
0ecadbfb97 chore: remove unused import
Some checks failed
Run Check Script / check (pull_request) Failing after 23s
2026-01-21 15:07:35 -05:00
eb492f3ca9 fix: remove double definition of RUST_LOG in env.sh
Some checks failed
Run Check Script / check (pull_request) Failing after 19m39s
2026-01-21 14:02:58 -05:00
de3c8e9a41 adding data symlink
Some checks failed
Run Check Script / check (pull_request) Failing after 31s
2026-01-21 13:59:22 -05:00
2ef2d9f064 Fix HostRole (ControlPlane -> Worker) in workers score, fix main, add topology.rs 2026-01-21 13:56:28 -05:00
d2d18205e9 fix deps in Cargo.toml, create env.sh file 2026-01-21 13:09:26 -05:00
0b55a6fb53 fix: add new xml fields after updating opnsense 2026-01-20 14:27:28 -05:00
2dc65531c3 Merge pull request 'feat/cert_manager_crds' (#211) from feat/cert_manager_crds into master
Some checks failed
Compile and package harmony_composer / package_harmony_composer (push) Failing after 11m21s
Run Check Script / check (push) Failing after 11m50s
Reviewed-on: #211
2026-01-20 18:46:20 +00:00
1e98100ed4 fix: mod.rs
All checks were successful
Run Check Script / check (pull_request) Successful in 1m21s
2026-01-20 13:43:52 -05:00
ab33aba776 fix: moved cert management ensure ready to k8sanywhere
Some checks failed
Run Check Script / check (pull_request) Failing after 26s
2026-01-20 13:21:24 -05:00
c3ac0bafad lint: Remove useless variable assignment
All checks were successful
Run Check Script / check (pull_request) Successful in 1m27s
2026-01-20 12:11:48 -05:00
54a53fa982 fix: modified score names for better clarity
All checks were successful
Run Check Script / check (pull_request) Successful in 1m29s
2026-01-19 12:48:47 -05:00
731d59c8b0 fix: modified trait to use other return types, modified trait function name to be ensure ready, use rust CRD definitions rather than constructing gvk for certificateManagement trait function in k8sanywhere 2026-01-19 11:37:47 -05:00
001dd5269c add (now commented) line to init env_logger 2026-01-18 10:07:28 -05:00
9978acf16d feat: change staticroutes->route to Option<RawXml> instead of MaybeString 2026-01-18 10:06:15 -05:00
c6642db6fb fix: modified k8sanywhere implementation of get_ca_cert to use the kubernetes certificate name to find its respective secret and ca.crt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m46s
2026-01-16 13:39:10 -05:00
8f111bcb8b feat: added fn get_ca_cert to trait certificateManagement
All checks were successful
Run Check Script / check (pull_request) Successful in 1m40s
2026-01-16 13:16:06 -05:00
ced371ca43 feat: Nats supercluster example working
Some checks failed
Run Check Script / check (pull_request) Failing after 42s
2026-01-16 09:45:59 -05:00
f319f74edf cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m46s
2026-01-14 16:19:56 -05:00
f576effeca feat: added working examples to add self signed issuer and self signed certificate. modified get_resource_json_value to be able to get cluster scoped operators 2026-01-14 16:18:59 -05:00
25c5cd84fe fix: added create_issuer fn to trait and its implementation in k8sanywhere
All checks were successful
Run Check Script / check (pull_request) Successful in 1m43s
2026-01-14 14:39:05 -05:00
dc421fa099 wip: added scores and basic implementation to create certs and issuers
Some checks failed
Run Check Script / check (pull_request) Failing after 50s
2026-01-13 15:43:58 -05:00
2153edc68c feat(cert-manager): added crds for cert-manager 2026-01-13 14:05:10 -05:00
949c9a40be feat: added cert manager capability as well as scores to install openshift subscription to community cert-manager operator
All checks were successful
Run Check Script / check (pull_request) Successful in 1m41s
2026-01-13 12:09:56 -05:00
1837623394 adr: 17-1 nats clusters interconnection using islands of trust. mTLS via shared ca-bundle with each cluster distributing its own CA.
Some checks failed
Run Check Script / check (pull_request) Failing after 51s
2026-01-13 10:40:30 -05:00
270b6b87df wip nats supercluster
Some checks failed
Run Check Script / check (pull_request) Failing after 51s
2026-01-09 17:30:51 -05:00
6933280575 feat(helm): refactor helm execution to use topology-specific commands
Some checks failed
Run Check Script / check (pull_request) Failing after 8s
Refactors the `HelmChartInterpret` to move away from the `helm-wrapper-rs` crate in favor of a custom command builder pattern. This allows the `HelmCommand` trait to provide topology-specific configurations, such as `kubeconfig` and `kube-context`, directly to the `helm` CLI.

- Implements `get_helm_command` for `K8sAnywhereTopology` to inject configuration flags.
- Replaces `DefaultHelmExecutor` with a manual `Command` construction in `run_helm_command`.
- Updates `HelmChartInterpret` to pass the topology through to repository and installation logic.
- Cleans up unused imports and removes the temporary `HelmCommand` implementation for `LocalhostTopology`.
2026-01-08 23:42:54 -05:00
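The refactor above can be sketched as a small command-builder pattern. This is a hedged illustration only: the trait and topology names follow the commit text, but the fields, method body, and flag handling are assumptions, not the actual Harmony implementation.

```rust
use std::process::Command;

// Illustrative sketch: a topology provides its own pre-configured `helm`
// command so kubeconfig/context flags are injected per topology rather
// than relying on ambient environment state.
trait HelmCommand {
    fn get_helm_command(&self) -> Command;
}

struct K8sAnywhereTopology {
    kubeconfig: String,
    kube_context: String,
}

impl HelmCommand for K8sAnywhereTopology {
    fn get_helm_command(&self) -> Command {
        let mut cmd = Command::new("helm");
        // Inject topology-specific configuration directly as CLI flags.
        cmd.arg("--kubeconfig").arg(&self.kubeconfig);
        cmd.arg("--kube-context").arg(&self.kube_context);
        cmd
    }
}

fn main() {
    let topo = K8sAnywhereTopology {
        kubeconfig: "/tmp/site-a.kubeconfig".into(),
        kube_context: "site-a".into(),
    };
    // Inspect the args that would be passed to `helm`; no process is spawned.
    let args: Vec<String> = topo
        .get_helm_command()
        .get_args()
        .map(|a| a.to_string_lossy().into_owned())
        .collect();
    println!("{}", args.join(" "));
}
```

An interpret can then append its own subcommand (`install`, `repo add`, …) to whatever `Command` the topology returns, which is what makes multi-cluster setups workable.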
77583a1ad1 wip: nats multi cluster, fixing helm command to follow multiple k8s config by providing the helm command from the topology itself, fix cli_logger that can now be initialized multiple times, some more stuff 2026-01-08 16:03:15 -05:00
f7404bed36 wip: initial setup for installing nats helm chart score 2026-01-07 16:14:58 -05:00
9a1aad62c9 Merge pull request 'fix: kubeconfig falls back to .kube if KUBECONFIG env variable is not set' (#205) from fix/kubeconfig into master
All checks were successful
Run Check Script / check (push) Successful in 1m9s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m49s
Reviewed-on: #205
2026-01-07 21:05:49 +00:00
0f9a53c8f6 cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m4s
2026-01-07 15:57:56 -05:00
b21829470d Merge pull request 'fix: modified nats box to use image tag non root for use in openshift environment' (#204) from fix/nats_non_root into master
All checks were successful
Run Check Script / check (push) Successful in 1m5s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m36s
Reviewed-on: #204
2026-01-07 20:52:42 +00:00
15f5e14c70 fix: modified nats box to use image tag non root for use in openshift environment
All checks were successful
Run Check Script / check (pull_request) Successful in 1m12s
2026-01-07 15:48:37 -05:00
4dcaf55dc5 fix: kubeconfig falls back to .kube if KUBECONFIG env variable is not set
Some checks failed
Run Check Script / check (pull_request) Failing after 2s
2026-01-07 15:47:08 -05:00
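The kubeconfig fallback described in the commit above can be sketched as a pure function. Taking the env value and home directory as parameters keeps it testable; the real code presumably reads `std::env` directly, and the function name is an assumption.

```rust
use std::path::PathBuf;

// Illustrative sketch: prefer the KUBECONFIG environment variable when it
// is set and non-empty, otherwise fall back to ~/.kube/config.
fn resolve_kubeconfig(kubeconfig_env: Option<&str>, home: &str) -> PathBuf {
    match kubeconfig_env {
        Some(path) if !path.is_empty() => PathBuf::from(path),
        _ => PathBuf::from(home).join(".kube").join("config"),
    }
}

fn main() {
    // Unset variable: fall back to the default location.
    assert_eq!(
        resolve_kubeconfig(None, "/home/demo"),
        PathBuf::from("/home/demo/.kube/config")
    );
    // Explicit variable wins.
    assert_eq!(
        resolve_kubeconfig(Some("/tmp/custom.kubeconfig"), "/home/demo"),
        PathBuf::from("/tmp/custom.kubeconfig")
    );
    println!("kubeconfig fallback ok");
}
```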
05c6398875 Merge pull request 'fix: added missing functions to impl SwitchClient for unmanagedSwitch' (#203) from fix/brocade_unmaged_switch into master
All checks were successful
Run Check Script / check (push) Successful in 1m3s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m17s
Reviewed-on: #203
2026-01-07 19:22:23 +00:00
ad5abe1748 fix(test): Use a mutex to prevent race conditions between tests relying on KUBECONFIG env var
All checks were successful
Run Check Script / check (pull_request) Successful in 1m0s
2026-01-07 14:21:29 -05:00
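The test-serialization fix above can be sketched with a shared mutex around any test that mutates `KUBECONFIG`. The static name and helper below are illustrative, not the actual Harmony test code.

```rust
use std::env;
use std::sync::Mutex;

// Illustrative sketch: tests that touch KUBECONFIG take this lock so they
// cannot interleave, which prevents one test from observing or clobbering
// another test's temporary value.
static KUBECONFIG_LOCK: Mutex<()> = Mutex::new(());

fn with_kubeconfig<F: FnOnce()>(value: &str, body: F) {
    let _guard = KUBECONFIG_LOCK.lock().unwrap();
    let previous = env::var("KUBECONFIG").ok();
    env::set_var("KUBECONFIG", value);
    body();
    // Restore whatever was there before so the test leaves no trace.
    match previous {
        Some(p) => env::set_var("KUBECONFIG", p),
        None => env::remove_var("KUBECONFIG"),
    }
}

fn main() {
    with_kubeconfig("/tmp/a.kubeconfig", || {
        assert_eq!(env::var("KUBECONFIG").unwrap(), "/tmp/a.kubeconfig");
    });
    println!("serialized env access ok");
}
```

The `serial_test` crate mentioned elsewhere in this log achieves the same effect declaratively; it was later dropped in favor of keeping unrelated tests parallel.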
4e5a24b07a fix: added missing functions to impl SwitchClient for unmanagedSwitch
Some checks failed
Run Check Script / check (pull_request) Failing after 53s
2026-01-07 13:22:41 -05:00
7f0b77969c Merge pull request 'feat: PostgreSQLScore happy path using cnpg operator' (#200) from feat/postgresqlScore into master
Some checks failed
Run Check Script / check (push) Failing after 4s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 26s
Reviewed-on: #200
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-06 23:58:17 +00:00
166af498a0 Merge pull request 'Unmanaged switch client' (#187) from jvtrudel/harmony:feat/unmanaged-switch into master
Some checks failed
Run Check Script / check (push) Failing after 0s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 28s
Reviewed-on: #187
2026-01-06 21:43:18 +00:00
4144633098 Merge pull request 'feat: OPNSense Topology useful to interact with only an opnsense instance.' (#184) from feat/opnsenseTopology into master
Some checks failed
Run Check Script / check (push) Successful in 56s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #184
Reviewed-by: Ian Letourneau <ian@noma.to>
2026-01-06 21:37:33 +00:00
59253a65da Merge remote-tracking branch 'origin/master' into feat/opnsenseTopology
All checks were successful
Run Check Script / check (pull_request) Successful in 56s
2026-01-06 16:37:11 -05:00
16f65efe4f Merge remote-tracking branch 'origin/master' into feat/postgresqlScore
All checks were successful
Run Check Script / check (pull_request) Successful in 56s
2026-01-06 15:54:34 -05:00
07bc59d414 Merge pull request 'feat/cluster_monitoring' (#179) from feat/cluster_monitoring into master
All checks were successful
Run Check Script / check (push) Successful in 1m4s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m16s
Reviewed-on: #179
2026-01-06 20:47:06 +00:00
d5137d5ebc Merge remote-tracking branch 'origin/master' into feat/cluster_monitoring
Some checks failed
Run Check Script / check (pull_request) Failing after 10m33s
2026-01-06 15:43:34 -05:00
f2ca97b3bf Merge pull request 'feat(application): Webapp feature with production dns' (#167) from feat/webappdns into master
All checks were successful
Run Check Script / check (push) Successful in 54s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m31s
Reviewed-on: #167
2026-01-06 20:15:28 +00:00
dbfae8539f Merge remote-tracking branch 'origin/master' into feat/webappdns
All checks were successful
Run Check Script / check (pull_request) Successful in 58s
2026-01-06 15:14:19 -05:00
ed61ed1d93 Merge remote-tracking branch 'origin/master' into feat/postgresqlScore
All checks were successful
Run Check Script / check (pull_request) Successful in 55s
2026-01-06 15:10:48 -05:00
9359d43fe1 chore: Fix pr comments, documentation, slight refactor for better apis
All checks were successful
Run Check Script / check (pull_request) Successful in 49s
2026-01-06 15:09:17 -05:00
5935d66407 removed serial_test crate to keep tests running in parallel
All checks were successful
Run Check Script / check (pull_request) Successful in 58s
2026-01-06 15:02:30 -05:00
e026ad4d69 Merge pull request 'adr: draft ADR proposing harmony agent and nats-jetstream for decentralized workload management' (#202) from adr/decentralized-workload-management into master
All checks were successful
Run Check Script / check (push) Successful in 58s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 6m41s
Reviewed-on: #202
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-06 19:45:54 +00:00
98f098ffa4 Merge pull request 'feat: implementation for opnsense os-node_exporter' (#173) from feat/install_opnsense_node_exporter into master
All checks were successful
Run Check Script / check (push) Successful in 59s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 6m58s
Reviewed-on: #173
2026-01-06 19:19:34 +00:00
fdf1dfaa30 fix: leave implementers to define their Debug, so removed impl Debug for dyn NodeExporter
All checks were successful
Run Check Script / check (pull_request) Successful in 55s
2026-01-06 14:17:04 -05:00
4f8cd0c1cb Merge remote-tracking branch 'origin/master' into feat/install_opnsense_node_exporter
All checks were successful
Run Check Script / check (pull_request) Successful in 55s
2026-01-06 13:56:48 -05:00
028161000e Merge remote-tracking branch 'origin/master' into feat/postgresqlScore
Some checks failed
Run Check Script / check (pull_request) Failing after 1s
2026-01-06 13:44:50 -05:00
457d3d4546 fix tests, cargo fmt, introduced crate serial_test to allow sequential execution of env-sensitive tests
All checks were successful
Run Check Script / check (pull_request) Successful in 56s
2026-01-06 13:06:59 -05:00
004b35f08e Merge pull request 'feat/brocade_snmp' (#193) from feat/brocade_snmp into master
Some checks failed
Run Check Script / check (push) Failing after 2s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 27s
Reviewed-on: #193
2026-01-06 16:22:25 +00:00
2b19d8c3e8 fix: changed name to switch_ips for more clarity
All checks were successful
Run Check Script / check (pull_request) Successful in 54s
2026-01-06 10:51:53 -05:00
745479c667 Merge pull request 'doc for removing worker flag from cp on UPI' (#165) from doc/worker-flag into master
All checks were successful
Run Check Script / check (push) Successful in 53s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 9m24s
Reviewed-on: #165
2026-01-06 15:46:13 +00:00
2d89e08877 Merge pull request 'doc to clone and transfer a coreos disk' (#166) from doc/clone into master
Some checks failed
Run Check Script / check (push) Successful in 54s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #166
2026-01-06 15:42:56 +00:00
e5bd866c09 Merge pull request 'feat: cnpg operator score' (#199) from feat/cnpgOperator into master
Some checks failed
Run Check Script / check (push) Has been cancelled
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #199
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-06 15:41:55 +00:00
0973f76701 Merge pull request 'feat: Introducing FailoverTopology and OperatorHub Catalog Subscription with example' (#196) from feat/multisitePostgreSQL into master
Some checks failed
Run Check Script / check (push) Has been cancelled
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #196
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-06 15:41:12 +00:00
fd69a2d101 Merge pull request 'feat/rebuild_inventory' (#201) from feat/rebuild_inventory into master
Reviewed-on: #201
Reviewed-by: wjro <wrolleman@nationtech.io>
2026-01-05 20:30:33 +00:00
4d535e192d feat: Add new nats example
Some checks failed
Run Check Script / check (pull_request) Failing after -5s
2025-12-22 16:02:10 -05:00
ef307081d8 chore: slight refactor of postgres public score
Some checks failed
Run Check Script / check (pull_request) Has been cancelled
2025-12-22 09:54:27 -05:00
5cce9f8e74 adr: draft ADR proposing harmony agent and nats-jetstream for decentralized workload management
All checks were successful
Run Check Script / check (pull_request) Successful in 1m31s
2025-12-19 10:12:44 -05:00
07e610c54a fix git merge conflict
All checks were successful
Run Check Script / check (pull_request) Successful in 1m24s
2025-12-17 17:09:32 -05:00
204795a74f feat(failoverPostgres): It's alive! We can now deploy a multisite postgres instance. The public hostname is still hardcoded; we will have to fix that, but the rest is good enough 2025-12-17 16:43:37 -05:00
Some checks failed
Run Check Script / check (pull_request) Failing after 36s
2025-12-17 16:43:37 -05:00
03e98a51e3 Merge pull request 'fix: added fields missing for haproxy after most recent update' (#191) from fix/opnsense_update into master
Some checks failed
Run Check Script / check (push) Failing after 12m40s
Reviewed-on: #191
2025-12-17 20:03:49 +00:00
22875fe8f3 fix: updated test xml structures to match with new fields added to opnsense
All checks were successful
Run Check Script / check (pull_request) Successful in 1m32s
2025-12-17 15:00:48 -05:00
66a9a76a6b feat(postgres): Failover postgres example maybe working!? Added FailoverTopology implementations for required capabilities, documented a bit, some more tests, and quite a few utility functions
Some checks failed
Run Check Script / check (pull_request) Failing after 1m49s
2025-12-17 14:35:10 -05:00
440e684b35 feat: Postgresql score based on the postgres capability now. true infrastructure abstraction!
Some checks failed
Run Check Script / check (pull_request) Failing after 33s
2025-12-16 23:35:52 -05:00
b0383454f0 feat(types): Add utility initialization functions for StorageSize such as StorageSize::kb(324)
Some checks failed
Run Check Script / check (pull_request) Failing after 41s
2025-12-16 16:24:53 -05:00
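The `StorageSize` constructors mentioned above (e.g. `StorageSize::kb(324)`) can be sketched as follows. The byte-based internal representation and binary (1024-based) units are assumptions, not the actual Harmony type.

```rust
// Illustrative sketch of convenience constructors for a storage size type.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct StorageSize {
    bytes: u64,
}

impl StorageSize {
    fn bytes(n: u64) -> Self { Self { bytes: n } }
    // Binary units assumed (1 kb = 1024 bytes); the real type may differ.
    fn kb(n: u64) -> Self { Self::bytes(n * 1024) }
    fn mb(n: u64) -> Self { Self::kb(n * 1024) }
    fn gb(n: u64) -> Self { Self::mb(n * 1024) }
}

fn main() {
    assert_eq!(StorageSize::kb(324).bytes, 331776);
    // The constructors compose consistently across units.
    assert_eq!(StorageSize::gb(1), StorageSize::mb(1024));
    println!("{}", StorageSize::kb(324).bytes); // 331776
}
```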
9e8f3ce52f feat(postgres): Postgres Connection Test score now has a script that provides more insight. Not quite working properly but easy to improve at this point.
Some checks failed
Run Check Script / check (pull_request) Failing after 43s
2025-12-16 15:53:54 -05:00
c6f859f973 fix(OPNSense): update fields for haproxy and opnsense following the most recent opnsense update and upgrade
Some checks failed
Run Check Script / check (pull_request) Failing after 1m25s
2025-12-16 15:31:35 -05:00
bbf28a1a28 Merge branch 'master' into fix/opnsense_update
Some checks failed
Run Check Script / check (pull_request) Failing after 1m21s
2025-12-16 20:00:54 +00:00
c3ec7070ec feat: PostgreSQL public and Connection test score, also moved k8s_anywhere in a folder
Some checks failed
Run Check Script / check (pull_request) Failing after 40s
2025-12-16 14:57:02 -05:00
29821d5e9f feat: TlsPassthroughScore works, improved logging, fixed CRD
Some checks failed
Run Check Script / check (pull_request) Failing after 35s
2025-12-15 19:09:10 -05:00
446e079595 wip: public postgres many fixes and refactoring to have a more cohesive routing management
Some checks failed
Run Check Script / check (pull_request) Failing after 41s
2025-12-15 17:04:30 -05:00
e0da5764fb feat(types): Added Rfc1123 String type, useful for k8s names
Some checks failed
Run Check Script / check (pull_request) Failing after 38s
2025-12-15 12:57:52 -05:00
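A newtype like the `Rfc1123` string above can be sketched with label validation: lowercase alphanumerics and `-`, non-hyphen edges, at most 63 characters. The constructor name and error type are illustrative assumptions.

```rust
// Illustrative sketch of an RFC 1123 label newtype, useful for k8s names.
#[derive(Debug, PartialEq)]
struct Rfc1123(String);

impl Rfc1123 {
    fn new(s: &str) -> Result<Self, String> {
        let len_ok = !s.is_empty() && s.len() <= 63;
        let chars_ok = s
            .chars()
            .all(|c| c.is_ascii_lowercase() || c.is_ascii_digit() || c == '-');
        // Labels must start and end with an alphanumeric character.
        let edges_ok = !s.starts_with('-') && !s.ends_with('-');
        if len_ok && chars_ok && edges_ok {
            Ok(Rfc1123(s.to_string()))
        } else {
            Err(format!("'{s}' is not a valid RFC 1123 label"))
        }
    }
}

fn main() {
    assert!(Rfc1123::new("my-postgres-1").is_ok());
    assert!(Rfc1123::new("My_Postgres").is_err()); // uppercase and '_' rejected
    assert!(Rfc1123::new("-leading-dash").is_err());
    println!("rfc1123 validation ok");
}
```

Validating at construction means every downstream score can accept `Rfc1123` and skip re-checking names.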
e9cab92585 feat: Impl TlsRoute for K8sAnywhereTopology 2025-12-14 22:22:09 -05:00
d06bd4dac6 feat: OKD route CRD and OKD specific route score
All checks were successful
Run Check Script / check (pull_request) Successful in 1m30s
2025-12-14 17:05:26 -05:00
142300802d wip: TlsRoute score first version
Some checks failed
Run Check Script / check (pull_request) Failing after 1m11s
2025-12-14 06:19:33 -05:00
2254641f3d fix: Tests, doctests, formatting
All checks were successful
Run Check Script / check (pull_request) Successful in 1m38s
2025-12-13 17:56:53 -05:00
b61e4f9a96 wip: Expose postgres publicly. Created tlsroute capability and postgres implementations
Some checks failed
Run Check Script / check (pull_request) Failing after 41s
2025-12-13 09:47:59 -05:00
2e367d88d4 feat: PostgreSQL score works, added postgresql example, tested on OKD 4.19, added note about incompatible default namespace settings
Some checks failed
Run Check Script / check (pull_request) Failing after 2m37s
2025-12-11 22:54:57 -05:00
9edc42a665 feat: PostgreSQLScore happy path using cnpg operator
Some checks failed
Run Check Script / check (pull_request) Failing after 37s
2025-12-11 14:36:39 -05:00
f242aafebb feat: Subscription for cnpg-operator fixed default values, tested and added to operatorhub example.
All checks were successful
Run Check Script / check (pull_request) Successful in 1m31s
2025-12-11 12:18:28 -05:00
3e14ebd62c feat: cnpg operator score
All checks were successful
Run Check Script / check (pull_request) Successful in 1m36s
2025-12-10 22:55:08 -05:00
1b19638df4 wip(failover): Started implementation of the FailoverTopology with PostgreSQL capability
All checks were successful
Run Check Script / check (pull_request) Successful in 1m32s
This is our first Higher Order Topology (see ADR-015)
2025-12-10 21:15:51 -05:00
d39b1957cd feat(k8s_app): OperatorhubCatalogSourceScore can now install the operatorhub catalogsource on a cluster that already has operator lifecycle manager installed 2025-12-10 16:58:58 -05:00
bfdb11b217 Merge pull request 'feat(OKDInstallation): Implemented bootstrap of okd worker node, added features to allow both control plane and worker node to use the same bootstrap_okd_node score' (#198) from feat/okd-nodes into master
Some checks failed
Run Check Script / check (push) Successful in 1m57s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m59s
Reviewed-on: #198
Reviewed-by: johnride <jg@nationtech.io>
2025-12-10 19:27:51 +00:00
d5fadf4f44 fix: deleted storage node role, fixed erroneous comment, modified score name to be in line with clean code naming conventions, fixed how the OKDNodeInstallationScore is called via OKDSetup03ControlPlaneScore and OKDSetup04WorkersScore
All checks were successful
Run Check Script / check (pull_request) Successful in 1m45s
2025-12-10 14:20:24 -05:00
357ca93d90 wip: FailoverTopology implementation for PostgreSQL on the way! 2025-12-10 13:12:53 -05:00
8103932f23 doc: Initial documentation for the MultisitePostgreSQL module 2025-12-10 13:12:53 -05:00
9617e1cfde Merge pull request 'adr: Higher order topologies' (#197) from adr/015-higher-order-topologies into master
Some checks failed
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m54s
Run Check Script / check (push) Successful in 1m47s
Reviewed-on: #197
2025-12-10 18:12:23 +00:00
50bd5c5bba feat(OKDInstallation): Implemented bootstrap of okd worker node, added features to allow both control plane and worker node to use the same bootstrap_okd_node score
All checks were successful
Run Check Script / check (pull_request) Successful in 1m46s
2025-12-10 12:15:07 -05:00
a953284386 doc: Add note about counter-intuitive behavior of nmstate
Some checks failed
Run Check Script / check (push) Successful in 1m33s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m28s
2025-12-09 23:04:15 -05:00
bfde5f58ed adr: Higher order topologies
All checks were successful
Run Check Script / check (pull_request) Successful in 1m33s
These types of Topologies will orchestrate behavior in regular Topologies.

For example, a FailoverTopology is a Higher Order Topology: it orchestrates its capabilities between a primary and a replica topology. A great use case for this is a database deployment. The FailoverTopology will deploy both instances, connect them, and be able to execute the appropriate actions to promote the replica to primary and revert back to the original state.

Other use cases are ShardedTopology, DecentralizedTopology, etc.
2025-12-09 11:23:30 -05:00
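The higher-order-topology idea from ADR-015 above can be sketched as a wrapper over two inner topologies. All names and the `Database` trait below are illustrative assumptions, not the ADR's actual API.

```rust
// Illustrative sketch: a higher order topology delegates to its inner
// topologies instead of talking to infrastructure itself.
trait Database {
    fn deploy(&self) -> String;
}

struct SiteTopology {
    name: &'static str,
}

impl Database for SiteTopology {
    fn deploy(&self) -> String {
        format!("deployed postgres on {}", self.name)
    }
}

struct FailoverTopology<T: Database> {
    primary: T,
    replica: T,
}

impl<T: Database> FailoverTopology<T> {
    // Deploy on both sites.
    fn deploy_all(&self) -> Vec<String> {
        vec![self.primary.deploy(), self.replica.deploy()]
    }

    // Promote the replica: swap roles so subsequent operations target the
    // former replica as primary; swapping back reverts to the original state.
    fn promote_replica(&mut self) {
        std::mem::swap(&mut self.primary, &mut self.replica);
    }
}

fn main() {
    let mut failover = FailoverTopology {
        primary: SiteTopology { name: "site-a" },
        replica: SiteTopology { name: "site-b" },
    };
    assert_eq!(failover.deploy_all().len(), 2);
    failover.promote_replica();
    println!("{}", failover.primary.name); // site-b
}
```

A ShardedTopology or DecentralizedTopology would follow the same shape, holding N inner topologies instead of two.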
9fbdc72cd0 fix: git ignore
All checks were successful
Run Check Script / check (pull_request) Successful in 1m29s
2025-11-18 08:41:09 -05:00
78e595e696 feat: added alert manager routes to openshift cluster monitoring
All checks were successful
Run Check Script / check (pull_request) Successful in 1m37s
2025-11-17 15:22:43 -05:00
90b89224d8 fix: added K8sName type for strict naming of Kubernetes resources 2025-11-17 15:20:51 -05:00
43a17811cc fix formatting
Some checks failed
Run Check Script / check (pull_request) Failing after 1m49s
2025-11-14 12:53:43 -05:00
93ac89157a feat: added score to enable snmp_server on brocade switch and a working example
All checks were successful
Run Check Script / check (pull_request) Successful in 2m4s
2025-11-14 12:49:00 -05:00
b885c35706 Merge branch 'master' into doc-and-braindump 2025-11-13 23:46:39 +00:00
Ian Letourneau
bb6b4b7f88 docs: New docs structure & rustdoc for HostNetworkConfigScore
All checks were successful
Run Check Script / check (pull_request) Successful in 1m43s
2025-11-13 18:42:26 -05:00
29c82db70d fix: added fields missing for haproxy after most recent update
Some checks failed
Run Check Script / check (pull_request) Failing after 49s
2025-11-12 13:21:55 -05:00
734c9704ab feat: provide an unmanaged switch
Some checks failed
Run Check Script / check (pull_request) Failing after 40s
2025-11-11 13:30:03 -05:00
8ee3f8a4ad chore: Update harmony-inventory-agent binary as some fixes were introduced: the port is 25000 now and nbd devices won't make the inventory crash
Some checks failed
Run Check Script / check (pull_request) Failing after 40s
2025-11-11 11:32:42 -05:00
d3634a6313 fix(types): Switch port location failed on port channel interfaces 2025-11-11 09:53:59 -05:00
83c1cc82b6 fix(host_network): remove extra fields from bond config to prevent clashes (#186)
All checks were successful
Run Check Script / check (push) Successful in 1m36s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 8m16s
Also alias `port` to support both `port` and `ports` as per the nmstate spec.

Reviewed-on: #186
2025-11-11 14:12:56 +00:00
a0a8d5277c fix: opnsense definitions more accurate for various resources such as ProxyGeneral, System, StaticMap, Job, etc. Also fixed brocade crate export and some warnings 2025-11-11 09:06:36 -05:00
43b04edbae feat(brocade): Add feature and example to remove port channel and configure switchport 2025-11-10 22:59:37 -05:00
755a4b7749 feat(inventory-agent): Discover algorithm by scanning a subnet of ips, slower than mdns but more reliable and versatile 2025-11-10 22:15:31 -05:00
5953bc58f4 feat: added function to enable snmp-server for brocade switches 2025-11-10 14:57:22 -05:00
51a5afbb6d fix: added some extra details
All checks were successful
Run Check Script / check (pull_request) Successful in 1m4s
2025-11-07 09:04:27 -05:00
66d346a10c fix(host_network): skip configuration for host with only 1 interface/port (#185)
All checks were successful
Run Check Script / check (push) Successful in 1m11s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 8m11s
Reviewed-on: #185
Reviewed-by: johnride <jg@nationtech.io>
2025-11-06 00:07:20 +00:00
06a004a65d refactor(host_network): extract NetworkManager as a reusable component (#183)
Some checks failed
Run Check Script / check (push) Successful in 1m12s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
The NetworkManager logic was implemented directly in the `HaClusterTopology`, which wasn't directly its concern and prevented us from reusing that NetworkManager implementation in the future for a different Topology.

* Extract a `NetworkManager` trait
* Implement a `OpenShiftNmStateNetworkManager` for `NetworkManager`
* Dynamically instantiate the NetworkManager in the Topology to delegate calls to it

Reviewed-on: #183
Reviewed-by: johnride <jg@nationtech.io>
2025-11-06 00:02:52 +00:00
9d4e6acac0 fix(host_network): retrieve proper hostname and next available bond id (#182)
Some checks failed
Run Check Script / check (push) Successful in 1m9s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m24s
In order to query the current network state `NodeNetworkState` and to apply a `NodeNetworkConfigurationPolicy` for a given node, we first needed to find its hostname, as all we had was the node's UUID.

We had different options available (e.g. updating the Harmony Inventory Agent to retrieve it, store it in the OKD installation pipeline on assignation, etc.). But for the sake of simplicity and for better flexibility (e.g. being able to run this score on a cluster that wasn't set up with Harmony), the `hostname` was retrieved directly in the cluster by running the equivalent of `kubectl get nodes -o yaml` and matching the nodes with the system UUID.

### Other changes
* Find the next available bond id for a node
* Apply a network config policy for a node (configuring a bond in our case)
* Adjust the CRDs for NMState

Note: to see a quick demo, watch the recording in #183
Reviewed-on: #182
Reviewed-by: johnride <jg@nationtech.io>
2025-11-05 23:38:24 +00:00
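The UUID-matching step described in #182 can be sketched in a few lines. This is illustrative only (the `NodeInfo` struct and function name are hypothetical, not Harmony's actual API): given node data as returned by the equivalent of `kubectl get nodes -o yaml`, find the hostname whose `systemUUID` matches the host's UUID.

```rust
/// Hypothetical shape of one node's identifying info, as would be read
/// from `status.nodeInfo.systemUUID` and `metadata.name` in the cluster.
#[derive(Debug, Clone)]
struct NodeInfo {
    hostname: String,
    system_uuid: String,
}

/// Match a host's system UUID against the cluster's nodes.
/// UUID casing varies between firmware and Kubernetes, so compare
/// case-insensitively.
fn hostname_for_uuid(nodes: &[NodeInfo], target_uuid: &str) -> Option<String> {
    nodes
        .iter()
        .find(|n| n.system_uuid.eq_ignore_ascii_case(target_uuid))
        .map(|n| n.hostname.clone())
}

fn main() {
    let nodes = vec![
        NodeInfo { hostname: "worker-0".into(), system_uuid: "4C4C4544-0042".into() },
        NodeInfo { hostname: "worker-1".into(), system_uuid: "4C4C4544-0077".into() },
    ];
    // Case difference between the reported and stored UUID still matches.
    assert_eq!(
        hostname_for_uuid(&nodes, "4c4c4544-0077"),
        Some("worker-1".to_string())
    );
    assert_eq!(hostname_for_uuid(&nodes, "deadbeef"), None);
}
```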
759a9287d3 Merge remote-tracking branch 'origin/master' into feat/cluster_monitoring
Some checks failed
Run Check Script / check (pull_request) Failing after 19s
2025-11-05 17:02:10 -05:00
24922321b1 fix: webhook name must be k8s field compliant, add a FIXME note 2025-11-05 16:59:48 -05:00
4ff57062ae Merge pull request 'feat(kube): Convert kube_openapi Resource to DynamicObject' (#180) from feat/kube_convert_dynamic_resource into master
Some checks failed
Run Check Script / check (push) Successful in 1m19s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m23s
Reviewed-on: #180
Reviewed-by: Ian Letourneau <ian@noma.to>
2025-11-05 21:48:32 +00:00
50ce54ea66 Merge pull request 'fix(opnsense-config): mark Interface::enable as optional' (#181) from fix-opnsense-config into master
Some checks failed
Run Check Script / check (push) Successful in 1m12s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m27s
Reviewed-on: #181
2025-11-05 17:13:29 +00:00
7b542c9865 feat: OPNSense Topology useful to interact with only an opnsense instance.
All checks were successful
Run Check Script / check (pull_request) Successful in 1m11s
With this work, no need to initialize a full HAClusterTopology to run
opnsense scores.

Also added an example showing how to use it and perform basic
operations.

Made a video out of it, might publish it at some point!
2025-11-05 10:02:45 -05:00
Ian Letourneau
827a49e56b fix(opnsense-config): mark Interface::enable as optional
All checks were successful
Run Check Script / check (pull_request) Successful in 1m11s
2025-11-04 17:25:30 -05:00
cf84f2cce8 wip: cluster_monitoring almost there, a kink to fix in the yaml handling
All checks were successful
Run Check Script / check (pull_request) Successful in 1m15s
2025-10-29 23:12:34 -04:00
a12d12aa4f feat: example OpenshiftClusterAlertScore
All checks were successful
Run Check Script / check (pull_request) Successful in 1m17s
2025-10-29 17:29:28 -04:00
cefb65933a wip: cluster monitoring score coming along, this simply edits OKD builtin alertmanager instance and adds a receiver 2025-10-29 17:26:21 -04:00
95cfc03518 feat(kube): Utility function to convert kube_openapi Resource to DynamicObject. This allows initializing strongly typed resources and then bundling various types into a list of DynamicObject
All checks were successful
Run Check Script / check (pull_request) Successful in 1m18s
2025-10-29 17:24:35 -04:00
c2fa4f1869 fix: cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m21s
2025-10-29 13:53:58 -04:00
ee278ac817 Merge remote-tracking branch 'origin/master' into feat/install_opnsense_node_exporter
Some checks failed
Run Check Script / check (pull_request) Failing after 25s
2025-10-29 13:49:56 -04:00
09a06f136e Merge remote-tracking branch 'origin/master' into feat/install_opnsense_node_exporter
All checks were successful
Run Check Script / check (pull_request) Successful in 1m21s
2025-10-29 13:42:12 -04:00
5f147fa672 fix: opnsense-config reload_config() returns live config.xml rather than dropping it, which allows is_package_installed() to read live state after package installation rather than the old config from before installation
All checks were successful
Run Check Script / check (pull_request) Successful in 1m17s
2025-10-29 13:25:37 -04:00
c80ede706b fix(host_network): adjust bond & port-channel configuration (partial) (#175)
Some checks failed
Run Check Script / check (push) Successful in 1m20s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m21s
## Description
* Replace the CatalogSource approach to install the OperatorHub.io catalog with a simpler, more straightforward way to install NMState
* Improve logging
* Add report summarizing the host network configuration that was applied (which host, bonds, port-channels)
* Fix command to find next available port channel id

## Extra info
Using the `apply_url` approach to install the NMState operator isn't the best approach: it's harder to maintain and upgrade. But it helps us achieve what we wanted for now: install the NMState Operator to configure bonds on a host.

The preferred approach, installing an operator from the OperatorHub.io catalog, didn't work for now. We had a timeout error with DeadlineExceeded probably caused by an insufficient CPU/Memory allocation to query such a big catalog, even though we tweaked the RAM allocation (we couldn't find a way to do it for CPU).

Spent too much time on this so we stopped these efforts for now. It would be good to get back to it when we need to install something else from a custom catalog.

Reviewed-on: #175
2025-10-29 17:09:16 +00:00
9ba939bde1 wip: cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m16s
2025-10-28 15:45:02 -04:00
44bf21718c wip: example score with impl topology for opnsense topology 2025-10-28 14:41:15 -04:00
b2825ec1ef Merge pull request 'feat/impl_installable_crd_prometheus' (#170) from feat/impl_installable_crd_prometheus into master
Some checks failed
Run Check Script / check (push) Successful in 1m25s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m20s
Reviewed-on: #170
2025-10-24 16:42:54 +00:00
609d7acb5d feat: impl clone_box for ScrapeTarget<CRDPrometheus>
All checks were successful
Run Check Script / check (pull_request) Successful in 1m25s
2025-10-24 12:05:54 -04:00
de761cf538 Merge branch 'master' into feat/impl_installable_crd_prometheus 2025-10-24 11:23:56 -04:00
5e1580e5c1 Merge branch 'master' into doc/clone 2025-10-23 19:32:26 +00:00
1802b10ddf fix: translated documentation notes into English 2025-10-23 15:31:45 -04:00
008b03f979 fix: changed documentation language to English 2025-10-23 14:56:07 -04:00
9f7b90d182 feat(argocd): Can now detect argocd instance when already installed and write crd accordingly. One major caveat though is that crd versions are not managed properly yet
Some checks failed
Run Check Script / check (pull_request) Failing after 39s
2025-10-23 13:12:38 -04:00
dc70266b5a wip: install argocd app depending on how argocd is already installed in the cluster 2025-10-23 13:11:39 -04:00
8fb755cda1 wip: argocd discovery 2025-10-23 13:10:35 -04:00
cb7a64b160 feat: Support tls enabled by default on rust web app 2025-10-23 13:10:35 -04:00
afdd511a6d feat(application): Webapp feature with production dns 2025-10-23 13:10:35 -04:00
c069207f12 Merge pull request 'refactor(ha_cluster): inject switch client for better testability' (#174) from switch-client into master
Some checks failed
Run Check Script / check (push) Successful in 1m44s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 2m43s
Reviewed-on: #174
2025-10-23 15:05:17 +00:00
Ian Letourneau
7368184917 fix(ha_cluster): inject switch client for better testability
All checks were successful
Run Check Script / check (pull_request) Successful in 1m30s
2025-10-22 15:12:53 -04:00
5ab58f0253 fix: added impl node exporter for hacluster topology and dummy infra
All checks were successful
Run Check Script / check (pull_request) Successful in 1m26s
2025-10-22 14:39:12 -04:00
5af13800b7 fix: removed unimplemented macro and returned Err instead
Some checks failed
Run Check Script / check (pull_request) Failing after 29s
some formatting error
2025-10-22 11:51:22 -04:00
05205f4ac1 Merge pull request 'feat: scrape targets to be able to get snmp alerts from machines to prometheus' (#171) from feat/scrape_target into master
Some checks are pending
Run Check Script / check (push) Waiting to run
Compile and package harmony_composer / package_harmony_composer (push) Waiting to run
Reviewed-on: #171
2025-10-22 15:33:24 +00:00
3174645c97 Merge branch 'master' into feat/scrape_target
All checks were successful
Run Check Script / check (pull_request) Successful in 1m32s
2025-10-22 15:33:01 +00:00
8126b233d8 feat: implementation for opnsense os-node_exporter
Some checks failed
Run Check Script / check (pull_request) Failing after 41s
2025-10-22 11:27:28 -04:00
7536f4ec4b Merge pull request 'fix: fixed merge error that somehow got missed' (#172) from fix/merge_error into master
Some checks are pending
Run Check Script / check (push) Waiting to run
Compile and package harmony_composer / package_harmony_composer (push) Waiting to run
Reviewed-on: #172
2025-10-21 16:02:39 +00:00
464347d3e5 fix: fixed merge error that somehow got missed
Some checks failed
Run Check Script / check (pull_request) Has been cancelled
2025-10-21 12:01:31 -04:00
7f415f5b98 Merge pull request 'feat: K8sFlavour' (#161) from feat/detect_k8s_flavour into master
Some checks are pending
Run Check Script / check (push) Waiting to run
Compile and package harmony_composer / package_harmony_composer (push) Waiting to run
Reviewed-on: #161
2025-10-21 15:56:47 +00:00
2a520a1d7c Merge branch 'master' into feat/detect_k8s_flavour
Some checks failed
Run Check Script / check (pull_request) Has been cancelled
2025-10-21 15:56:18 +00:00
987f195e2f feat(cert-manager): add cluster issuer to okd cluster score (#157)
Some checks are pending
Run Check Script / check (push) Waiting to run
Compile and package harmony_composer / package_harmony_composer (push) Waiting to run
added score to install okd cluster issuer

Reviewed-on: #157
2025-10-21 15:55:55 +00:00
14d1823d15 fix: remove ceph osd deletes and purges osd from ceph osd tree (#120)
Some checks are pending
Run Check Script / check (push) Waiting to run
Compile and package harmony_composer / package_harmony_composer (push) Waiting to run
k8s returns None rather than zero when checking deployment for replicas
exec_app requires commands 's' and '-c' to run correctly

Reviewed-on: #120
Co-authored-by: Willem <wrolleman@nationtech.io>
Co-committed-by: Willem <wrolleman@nationtech.io>
2025-10-21 15:54:51 +00:00
2a48d51479 fix: naming of k8s distribution
Some checks failed
Run Check Script / check (pull_request) Has been cancelled
2025-10-21 11:09:45 -04:00
20a227bb41 Merge branch 'master' into feat/detect_k8s_flavour
Some checks failed
Run Check Script / check (pull_request) Has been cancelled
2025-10-21 15:02:15 +00:00
ce91ee0168 fix: removed dead code, mapped error from grafana operator to preparation error rather than ignoring it, modified k8sprometheus score to unwrap_or_default() service monitors
Some checks failed
Run Check Script / check (pull_request) Has been cancelled
2025-10-20 15:31:06 -04:00
ed7f81aa1f fix(opnsense-config): ensure load balancer service configuration is idempotent (#129)
Some checks are pending
Run Check Script / check (push) Waiting to run
Compile and package harmony_composer / package_harmony_composer (push) Waiting to run
The previous implementation blindly added HAProxy components without checking for existing configurations on the same port, which caused duplicate entries and errors when a service was updated.

This commit refactors the logic to a robust "remove-then-add" strategy. The configure_service method now finds and removes any existing frontend and its dependent components (backend, servers, health check) before adding the new, complete service definition.

This change makes the process fully idempotent, preventing configuration drift and ensuring a predictable state.

Co-authored-by: Ian Letourneau <letourneau.ian@gmail.com>
Reviewed-on: #129
2025-10-20 19:18:49 +00:00
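The "remove-then-add" strategy from #129 can be illustrated with a minimal sketch. This uses a plain `Vec` in place of the real HAProxy/OPNsense config model (the `Frontend` struct and function name are illustrative, not the actual opnsense-config types):

```rust
/// Stand-in for an HAProxy frontend plus its dependent components.
#[derive(Debug, Clone, PartialEq)]
struct Frontend {
    port: u16,
    backend: String,
}

/// Idempotent configuration: first remove any existing frontend bound to
/// the same port (and, in the real code, its backend, servers, and health
/// check), then add the new, complete service definition.
fn configure_service(frontends: &mut Vec<Frontend>, new: Frontend) {
    frontends.retain(|f| f.port != new.port);
    frontends.push(new);
}

fn main() {
    let mut cfg = Vec::new();
    configure_service(&mut cfg, Frontend { port: 443, backend: "web-v1".into() });
    // Re-applying an updated service on the same port never duplicates entries.
    configure_service(&mut cfg, Frontend { port: 443, backend: "web-v2".into() });
    assert_eq!(cfg.len(), 1);
    assert_eq!(cfg[0].backend, "web-v2");
}
```

Because every apply converges to the same final state, repeated runs cannot drift or accumulate duplicate entries.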
cb66b7592e fix: made targets plural and changed scrape targets to option in AlertingInterpret
Some checks failed
Run Check Script / check (pull_request) Has been cancelled
2025-10-20 14:44:37 -04:00
a815f6ac9c feat: scrape targets to be able to get snmp alerts from machines to prometheus
Some checks failed
Run Check Script / check (pull_request) Has been cancelled
2025-10-20 11:44:11 -04:00
2d891e4463 Merge pull request 'feat(host_network): configure bonds and port channels' (#169) from config-host-network into master
Some checks failed
Run Check Script / check (push) Has been cancelled
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #169
2025-10-16 18:24:58 +00:00
f66e58b9ca Merge branch 'master' into config-host-network
Some checks failed
Run Check Script / check (pull_request) Has been cancelled
2025-10-16 18:24:34 +00:00
ea39d93aa7 feat(host_network): configure bonds on the host and switch port channels
Some checks failed
Run Check Script / check (pull_request) Has been cancelled
2025-10-16 14:23:41 -04:00
6989d208cf Merge pull request 'feat(switch/brocade): Implement client to interact with Brocade switches' (#168) from brocade-switch-client into master
Some checks are pending
Run Check Script / check (push) Waiting to run
Compile and package harmony_composer / package_harmony_composer (push) Waiting to run
Reviewed-on: #168
2025-10-16 18:23:01 +00:00
c0d54a4466 Merge remote-tracking branch 'origin/master' into feat/impl_installable_crd_prometheus
Some checks failed
Run Check Script / check (pull_request) Has been cancelled
2025-10-16 14:17:32 -04:00
fc384599a1 feat: implementation of Installable for CRDPrometheus. Introduction of the Grafana trait and its impl for K8sAnywhere allows CRDPrometheus to be installed via AlertingInterpret, which standardizes the installation of alert receivers, alerting rules, and alert senders 2025-10-16 14:07:23 -04:00
c0bd8007c7 feat(switch/brocade): Implement client to interact with Brocade Switch
All checks were successful
Run Check Script / check (pull_request) Successful in 1m5s
* Expose a high-level `brocade::init()` function to connect to a Brocade switch and automatically pick the best implementation based on its OS and version
* Implement a client for Brocade switches running on Network Operating System (NOS)
* Implement a client for older Brocade switches running on FastIron (partial implementation)

The architecture for the library is based on 3 layers:
1. The `BrocadeClient` trait to describe the available capabilities to
   interact with a Brocade switch. It is partly opinionated in order to
   offer higher level features to group multiple commands into a single
   function (e.g. create a port channel). Its implementations are
   basically just the commands to run on the switch and the functions to
   parse the output.
2. The `BrocadeShell` struct to make it easier to authenticate, send commands, and interact with the switch.
3. The `ssh` module to actually connect to the switch over SSH and execute the commands.

With time, we will add support for more Brocade switches and their various OS/versions. If needed, shared behavior could be extracted into a separate module to make it easier to add new implementations.
2025-10-15 15:28:24 -04:00
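The three-layer design described above can be sketched as follows. The `BrocadeClient` trait name and `init()` entry point follow the commit message, but the method signatures and detection logic here are assumptions for illustration only:

```rust
/// Layer 1: high-level capabilities, grouping multiple switch commands
/// into single functions (e.g. creating a port channel).
trait BrocadeClient {
    fn create_port_channel(&self, id: u16, ports: &[&str]) -> Result<String, String>;
}

/// Implementation for switches running Network Operating System (NOS).
struct NosClient;

impl BrocadeClient for NosClient {
    fn create_port_channel(&self, id: u16, ports: &[&str]) -> Result<String, String> {
        // The real client sends commands through a shell layer over SSH
        // and parses the switch output; here we only describe the intent.
        Ok(format!("port-channel {} with {} member ports", id, ports.len()))
    }
}

/// Partial implementation for older switches running FastIron.
struct FastIronClient;

impl BrocadeClient for FastIronClient {
    fn create_port_channel(&self, _id: u16, _ports: &[&str]) -> Result<String, String> {
        Err("not supported by the FastIron implementation yet".to_string())
    }
}

/// Mirrors the idea of `brocade::init()`: pick the best implementation
/// based on the detected OS and version (detection simplified here).
fn init(detected_os: &str) -> Box<dyn BrocadeClient> {
    if detected_os.contains("NOS") {
        Box::new(NosClient)
    } else {
        Box::new(FastIronClient)
    }
}

fn main() {
    let client = init("Brocade NOS 7.2.0");
    assert!(client.create_port_channel(10, &["eth1/1", "eth1/2"]).is_ok());
}
```

Keeping the trait opinionated at the capability level (rather than exposing raw commands) is what lets callers stay independent of which switch OS they are talking to.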
7dff70edcf wip: fixed token expiration and configured grafana dashboard 2025-10-15 15:26:36 -04:00
06a0c44c3c wip: connected the thanos-datasource to grafana, need to complete connecting the openshift-userworkload-monitoring as well 2025-10-14 15:53:42 -04:00
85bec66e58 wip: fixing grafana datasource for openshift which requires creating a token, sa, secret and inserting them into the grafanadatasource 2025-10-10 12:09:26 -04:00
e5eb7fde9f doc to clone and transfer a coreos disk
All checks were successful
Run Check Script / check (pull_request) Successful in 1m11s
2025-10-09 15:29:09 -04:00
dd3f07e5b7 doc for removing worker flag from cp on UPI
All checks were successful
Run Check Script / check (pull_request) Successful in 1m13s
2025-10-09 15:28:42 -04:00
1f3796f503 refactor(prometheus): modified crd prometheus to impl the installable trait 2025-10-09 12:26:05 -04:00
cf576192a8 Merge pull request 'feat: Add openbao example, open-source fork of vault' (#162) from feat/openbao into master
All checks were successful
Run Check Script / check (push) Successful in 1m33s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 6m51s
Reviewed-on: #162
2025-10-03 00:28:50 +00:00
5f78300d78 Merge branch 'master' into feat/detect_k8s_flavour
All checks were successful
Run Check Script / check (pull_request) Successful in 1m20s
2025-10-02 17:14:30 -04:00
f7e9669009 Merge branch 'master' into feat/openbao
All checks were successful
Run Check Script / check (pull_request) Successful in 1m23s
2025-10-02 21:11:44 +00:00
2d3c32469c chore: Simplify k8s flavour detection algorithm and do not unwrap when it cannot be detected, just return Err 2025-09-30 22:59:50 -04:00
f65e16df7b feat: Remove unused helm command, refactor url to use hurl in some more places 2025-09-30 11:18:08 -04:00
1cec398d4d fix: modified naming scheme to OpenshiftFamily, K3sFamily, and Default; switched discovery of OpenshiftFamily to look for project.openshift.io 2025-09-29 11:29:34 -04:00
58b6268989 wip: moving the install steps for grafana and prometheus into the trait installable<T> 2025-09-29 10:46:29 -04:00
cbbaae2ac8 okd_enable_user_workload_monitoring (#160)
Reviewed-on: #160
Co-authored-by: Willem <wrolleman@nationtech.io>
Co-committed-by: Willem <wrolleman@nationtech.io>
2025-09-29 14:32:38 +00:00
4a500e4eb7 feat: Add openbao example, open-source fork of vault
Some checks failed
Run Check Script / check (pull_request) Failing after 5s
2025-09-24 21:54:32 -04:00
f073b7e5fb feat: added k8s flavour to k8s_anywhere topology to be able to get the type of cluster
All checks were successful
Run Check Script / check (pull_request) Successful in 33s
2025-09-24 13:28:46 -04:00
c84b2413ed Merge pull request 'fix: added securityContext.runAsUser:null to argo-cd helm chart so that in okd user group will be randomly assigned within the uid range for the designated namespace' (#156) from fix/argo-cd-redis into master
All checks were successful
Run Check Script / check (push) Successful in 57s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 6m35s
Reviewed-on: #156
2025-09-12 13:54:02 +00:00
f83fd09f11 fix(monitoring): returned namespaced kube metrics
All checks were successful
Run Check Script / check (pull_request) Successful in 55s
2025-09-12 09:49:20 -04:00
c15bd53331 fix: added securityContext.runAsUser:null to argo-cd helm chart so that in okd user group will be randomly assigned within the uid range for the designated namespace
All checks were successful
Run Check Script / check (pull_request) Successful in 59s
2025-09-12 09:29:27 -04:00
6e6f57e38c Merge pull request 'fix: added routes to domain name for prometheus, grafana, alertmanager; added Argo CD to the reporting after successful build' (#155) from fix/add_routes_to_domain into master
All checks were successful
Run Check Script / check (push) Successful in 59s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 6m27s
Reviewed-on: #155
2025-09-10 19:44:53 +00:00
6f55f79281 feat: Update readme with newer UX/DX Rust Leptos app, update slides and misc stuff
All checks were successful
Run Check Script / check (pull_request) Successful in 58s
2025-09-10 15:40:32 -04:00
19f87fdaf7 fix: added routes to domain name for prometheus, grafana, alertmanager; added Argo CD to the reporting after successful build
All checks were successful
Run Check Script / check (pull_request) Successful in 1m1s
2025-09-10 15:08:13 -04:00
49370af176 Merge pull request 'doc: Slides demo 10 sept' (#153) from feat/slides_demo_10sept into master
All checks were successful
Run Check Script / check (push) Successful in 1m4s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 6m55s
Reviewed-on: #153
2025-09-10 17:14:48 +00:00
cf0b8326dc Merge pull request 'fix: properly configured discord alert receiver corrected domain and topic name for ntfy' (#154) from fix/alertreceivers into master
Some checks failed
Compile and package harmony_composer / package_harmony_composer (push) Waiting to run
Run Check Script / check (push) Has been cancelled
Reviewed-on: #154
2025-09-10 17:13:31 +00:00
1e2563f7d1 fix: added reporting to output ntfy topic
All checks were successful
Run Check Script / check (pull_request) Successful in 1m2s
2025-09-10 13:10:06 -04:00
7f50c36f11 Merge pull request 'fix: Various demo fixe and rename : RHOBMonitoring -> Monitoring, ContinuousDelivery -> PackagingDeployment, Fix bollard logs' (#152) from fix/demo into master
All checks were successful
Run Check Script / check (push) Successful in 1m1s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 6m50s
Reviewed-on: #152
2025-09-10 17:01:15 +00:00
49dad343ad fix: properly configured discord alert receiver; corrected domain and topic name for ntfy
All checks were successful
Run Check Script / check (pull_request) Successful in 1m2s
2025-09-10 12:53:43 -04:00
9961e8b79d doc: Slides demo 10 sept 2025-09-10 12:38:25 -04:00
467 changed files with 36211 additions and 5643 deletions


@@ -1,2 +1,6 @@
target/
Dockerfile
Dockerfile
.git
data
target
demos


@@ -15,4 +15,4 @@ jobs:
uses: actions/checkout@v4
- name: Run check script
run: bash check.sh
run: bash build/check.sh

8
.gitignore vendored

@@ -24,3 +24,11 @@ Cargo.lock
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb
.harmony_generated
# Useful to create ignore folders for temp files and notes
ignore
# Generated book
book


@@ -0,0 +1,26 @@
{
"db_name": "SQLite",
"query": "SELECT host_id, installation_device FROM host_role_mapping WHERE role = ?",
"describe": {
"columns": [
{
"name": "host_id",
"ordinal": 0,
"type_info": "Text"
},
{
"name": "installation_device",
"ordinal": 1,
"type_info": "Text"
}
],
"parameters": {
"Right": 1
},
"nullable": [
false,
true
]
},
"hash": "24f719d57144ecf4daa55f0aa5836c165872d70164401c0388e8d625f1b72d7b"
}


@@ -1,20 +0,0 @@
{
"db_name": "SQLite",
"query": "SELECT host_id FROM host_role_mapping WHERE role = ?",
"describe": {
"columns": [
{
"name": "host_id",
"ordinal": 0,
"type_info": "Text"
}
],
"parameters": {
"Right": 1
},
"nullable": [
false
]
},
"hash": "2ea29df2326f7c84bd4100ad510a3fd4878dc2e217dc83f9bf45a402dfd62a91"
}


@@ -1,12 +1,12 @@
{
"db_name": "SQLite",
"query": "\n INSERT INTO host_role_mapping (host_id, role)\n VALUES (?, ?)\n ",
"query": "\n INSERT INTO host_role_mapping (host_id, role, installation_device)\n VALUES (?, ?, ?)\n ",
"describe": {
"columns": [],
"parameters": {
"Right": 2
"Right": 3
},
"nullable": []
},
"hash": "df7a7c9cfdd0972e2e0ce7ea444ba8bc9d708a4fb89d5593a0be2bbebde62aff"
"hash": "6fcc29cfdbdf3b2cee94a4844e227f09b245dd8f079832a9a7b774151cb03af6"
}

2850
Cargo.lock generated

File diff suppressed because it is too large


@@ -1,12 +1,13 @@
[workspace]
resolver = "2"
members = [
"private_repos/*",
"examples/*",
"private_repos/*",
"harmony",
"harmony_types",
"harmony_macros",
"harmony_tui",
"harmony_execution",
"opnsense-config",
"opnsense-config-xml",
"harmony_cli",
@@ -14,7 +15,14 @@ members = [
"harmony_composer",
"harmony_inventory_agent",
"harmony_secret_derive",
"harmony_secret", "adr/agent_discovery/mdns",
"harmony_secret",
"harmony_config_derive",
"harmony_config",
"brocade",
"harmony_agent",
"harmony_agent/deploy",
"harmony_node_readiness",
"harmony-k8s",
]
[workspace.package]
@@ -33,6 +41,8 @@ tokio = { version = "1.40", features = [
"macros",
"rt-multi-thread",
] }
tokio-retry = "0.3.0"
tokio-util = "0.7.15"
cidr = { features = ["serde"], version = "0.2" }
russh = "0.45"
russh-keys = "0.45"
@@ -47,6 +57,7 @@ kube = { version = "1.1.0", features = [
"jsonpatch",
] }
k8s-openapi = { version = "0.25", features = ["v1_30"] }
# TODO replace with https://github.com/bourumir-wyngs/serde-saphyr as serde_yaml is deprecated https://github.com/sebastienrousseau/serde_yml
serde_yaml = "0.9"
serde-value = "0.7"
http = "1.2"
@@ -66,5 +77,12 @@ thiserror = "2.0.14"
serde = { version = "1.0.209", features = ["derive", "rc"] }
serde_json = "1.0.127"
askama = "0.14"
sqlx = { version = "0.8", features = ["runtime-tokio", "sqlite" ] }
reqwest = { version = "0.12", features = ["blocking", "stream", "rustls-tls", "http2", "json"], default-features = false }
sqlx = { version = "0.8", features = ["runtime-tokio", "sqlite"] }
reqwest = { version = "0.12", features = [
"blocking",
"stream",
"rustls-tls",
"http2",
"json",
], default-features = false }
assertor = "0.0.4"

290
README.md

@@ -1,150 +1,250 @@
# Harmony : Open-source infrastructure orchestration that treats your platform like first-class code
# Harmony
**Infrastructure orchestration that treats your platform like first-class code.**
Harmony is an open-source framework that brings the rigor of software engineering to infrastructure management. Write Rust code to define what you want, and Harmony handles the rest — from local development to production clusters.
_By [NationTech](https://nationtech.io)_
[![Build](https://git.nationtech.io/NationTech/harmony/actions/workflows/check.yml/badge.svg)](https://git.nationtech.io/nationtech/harmony)
[![Build](https://git.nationtech.io/NationTech/harmony/actions/workflows/check.yml/badge.svg)](https://git.nationtech.io/NationTech/harmony)
[![License](https://img.shields.io/badge/license-AGPLv3-blue?style=flat-square)](LICENSE)
### Unify
---
- **Project Scaffolding**
- **Infrastructure Provisioning**
- **Application Deployment**
- **Day-2 operations**
## The Problem Harmony Solves
All in **one strongly-typed Rust codebase**.
Modern infrastructure is messy. Your Kubernetes cluster needs monitoring. Your bare-metal servers need provisioning. Your applications need deployments. Each comes with its own tooling, its own configuration format, and its own failure modes.
### Deploy anywhere
**What if you could describe your entire platform in one consistent language?**
From a **developer laptop** to a **global production cluster**, a single **source of truth** drives the **full software lifecycle.**
That's Harmony. It unifies project scaffolding, infrastructure provisioning, application deployment, and day-2 operations into a single strongly-typed Rust codebase.
---
## 1 · The Harmony Philosophy
## Three Principles That Make the Difference
Infrastructure is essential, but it shouldn't be your core business. Harmony is built on three guiding principles that make modern platforms reliable, repeatable, and easy to reason about.
| Principle | What it means for you |
| -------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Infrastructure as Resilient Code** | Replace sprawling YAML and bash scripts with type-safe Rust. Test, refactor, and version your platform just like application code. |
| **Prove It Works — Before You Deploy** | Harmony uses the compiler to verify that your application's needs match the target environment's capabilities at **compile-time**, eliminating an entire class of runtime outages. |
| **One Unified Model** | Software and infrastructure are a single system. Harmony models them together, enabling deep automation—from bare-metal servers to Kubernetes workloads—with zero context switching. |
These principles surface as simple, ergonomic Rust APIs that let teams focus on their product while trusting the platform underneath.
| Principle | What It Means |
|-----------|---------------|
| **Infrastructure as Resilient Code** | Stop fighting with YAML and bash. Write type-safe Rust that you can test, version, and refactor like any other code. |
| **Prove It Works Before You Deploy** | Harmony verifies at _compile time_ that your application can actually run on your target infrastructure. No more "the config looks right but it doesn't work" surprises. |
| **One Unified Model** | Software and infrastructure are one system. Deploy from laptop to production cluster without switching contexts or tools. |
---
## 2 · Quick Start
## How It Works: The Core Concepts
The snippet below spins up a complete **production-grade LAMP stack** with monitoring. Swap it for your own scores to deploy anything from microservices to machine-learning pipelines.
Harmony is built around three concepts that work together:
### Score — "What You Want"
A `Score` is a declarative description of desired state. Think of it as a "recipe" that says _what_ you want without specifying _how_ to get there.
```rust
// "I want a PostgreSQL cluster running with default settings"
let postgres = PostgreSQLScore {
config: PostgreSQLConfig {
cluster_name: "harmony-postgres-example".to_string(),
namespace: "harmony-postgres-example".to_string(),
..Default::default()
},
};
```
### Topology — "Where It Goes"
A `Topology` represents your infrastructure environment and its capabilities. It answers the question: "What can this environment actually do?"
```rust
// Deploy to a local K3D cluster, or any Kubernetes cluster via environment variables
K8sAnywhereTopology::from_env()
```
### Interpret — "How It Happens"
An `Interpret` is the execution logic that connects your `Score` to your `Topology`. It translates "what you want" into "what the infrastructure does."
**The Compile-Time Check:** Before your code ever runs, Harmony verifies that your `Score` is compatible with your `Topology`. If your application needs a feature your infrastructure doesn't provide, you get a compile error — not a runtime failure.
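To make the shape concrete, here is a minimal illustrative sketch of an Interpret. The trait name matches the concept above, but the signature and types are assumptions for illustration, not Harmony's actual API:

```rust
/// Stand-in for a concrete Topology (illustrative only).
struct LocalTopology;

/// An Interpret turns a Score's desired state into actions
/// against a concrete Topology.
trait Interpret<T> {
    fn execute(&self, topology: &T) -> Result<String, String>;
}

/// Hypothetical Interpret for the PostgreSQL Score shown earlier.
struct PostgresInterpret {
    cluster_name: String,
}

impl Interpret<LocalTopology> for PostgresInterpret {
    fn execute(&self, _topology: &LocalTopology) -> Result<String, String> {
        // The real implementation would create the cluster resources;
        // here we only report the intended action.
        Ok(format!("would create cluster `{}`", self.cluster_name))
    }
}

fn main() {
    let interpret = PostgresInterpret { cluster_name: "harmony-postgres-example".into() };
    assert!(interpret.execute(&LocalTopology).is_ok());
}
```

Because `Interpret` is generic over the Topology type, a Score that needs a capability the Topology doesn't implement simply fails to compile.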
---
## What You Can Deploy
Harmony ships with ready-made Scores for:
**Data Services**
- PostgreSQL clusters (via CloudNativePG operator)
- Multi-site PostgreSQL with failover
**Kubernetes**
- Namespaces, Deployments, Ingress
- Helm charts
- cert-manager for TLS
- Monitoring (Prometheus, alerting, ntfy)
**Bare Metal / Infrastructure**
- OKD clusters from scratch
- OPNsense firewalls
- Network services (DNS, DHCP, TFTP)
- Brocade switch configuration
**And more:** application deployment, tenant management, load balancing, and other building blocks.
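All of these are ordinary Scores, so they compose into a single run. The sketch below uses a deliberately simplified stand-in `Score` trait (not the real harmony API) just to show the composition pattern:

```rust
// Illustrative only: a simplified stand-in for the Score trait,
// showing how independent deployables compose into one run.
trait Score {
    fn name(&self) -> &'static str;
}

struct PostgresScore;
struct MonitoringScore;

impl Score for PostgresScore {
    fn name(&self) -> &'static str { "postgresql" }
}
impl Score for MonitoringScore {
    fn name(&self) -> &'static str { "monitoring" }
}

// A run applies every score against the same topology, in order.
fn plan(scores: &[Box<dyn Score>]) -> Vec<&'static str> {
    scores.iter().map(|s| s.name()).collect()
}

fn main() {
    let scores: Vec<Box<dyn Score>> =
        vec![Box::new(PostgresScore), Box::new(MonitoringScore)];
    assert_eq!(plan(&scores), vec!["postgresql", "monitoring"]);
}
```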
---
## Quick Start: Deploy a PostgreSQL Cluster
This example provisions a local Kubernetes cluster (K3D) and deploys a PostgreSQL cluster on it — no external infrastructure required.
```rust
use harmony::{
    inventory::Inventory,
    modules::postgresql::{PostgreSQLScore, capability::PostgreSQLConfig},
    topology::K8sAnywhereTopology,
};

#[tokio::main]
async fn main() {
    // 1. Describe what you want
    let postgres = PostgreSQLScore {
        config: PostgreSQLConfig {
            cluster_name: "harmony-postgres-example".to_string(),
            namespace: "harmony-postgres-example".to_string(),
            ..Default::default()
        },
    };

    // 2. Run the score on the desired topology & inventory
    harmony_cli::run(
        Inventory::autoload(),           // auto-detect hardware / kube-config
        K8sAnywhereTopology::from_env(), // local k3d, CI, staging, prod…
        vec![Box::new(postgres)],
        None,
    )
    .await
    .unwrap();
}
}
```
### What this actually does
When you compile and run this program:
1. **Compiles** the Harmony Score into an executable
2. **Connects** to `K8sAnywhereTopology` — which auto-provisions a local K3D cluster if none exists
3. **Installs** the CloudNativePG operator into the cluster (one-time setup)
4. **Creates** a PostgreSQL cluster with 1 instance and 1 GiB of storage
5. **Exposes** the PostgreSQL instance as a Kubernetes Service
### Prerequisites
- [Rust](https://rust-lang.org/tools/install) (edition 2024)
- [Docker](https://docs.docker.com/get-docker/) (for the local K3D cluster)
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) (optional, for inspecting the cluster)
### Run it
```bash
# Clone the repository
git clone https://git.nationtech.io/nationtech/harmony
cd harmony
# Build the project
cargo build --release
# Run the example
cargo run -p example-postgresql
```
Harmony will print its progress as it sets up the cluster and deploys PostgreSQL. When complete, you can inspect the deployment:
```bash
kubectl get pods -n harmony-postgres-example
kubectl get secret -n harmony-postgres-example harmony-postgres-example-db-user -o jsonpath='{.data.password}' | base64 -d
```
To connect to the database, forward the port:
```bash
kubectl port-forward -n harmony-postgres-example svc/harmony-postgres-example-rw 5432:5432
psql -h localhost -p 5432 -U postgres
```
To clean up, delete the K3D cluster:
```bash
k3d cluster delete harmony-postgres-example
```
---
## Environment Variables
`K8sAnywhereTopology::from_env()` reads the following environment variables to determine where and how to connect:
| Variable | Default | Description |
|----------|---------|-------------|
| `KUBECONFIG` | `~/.kube/config` | Path to your kubeconfig file |
| `HARMONY_AUTOINSTALL` | `true` | Auto-provision a local K3D cluster if none found |
| `HARMONY_USE_LOCAL_K3D` | `true` | Always prefer local K3D over remote clusters |
| `HARMONY_PROFILE` | `dev` | Deployment profile: `dev`, `staging`, or `prod` |
| `HARMONY_K8S_CONTEXT` | _none_ | Use a specific kubeconfig context |
| `HARMONY_PUBLIC_DOMAIN` | _none_ | Public domain for ingress endpoints |
To connect to an existing Kubernetes cluster instead of provisioning K3D:
```bash
# Point to your kubeconfig
export KUBECONFIG=/path/to/your/kubeconfig
export HARMONY_USE_LOCAL_K3D=false
export HARMONY_AUTOINSTALL=false
# Then run
cargo run -p example-postgresql
```
---
## Documentation
| I want to... | Start here |
|--------------|------------|
| Understand the core concepts | [Core Concepts](./docs/concepts.md) |
| Deploy my first application | [Getting Started Guide](./docs/guides/getting-started.md) |
| Explore available components | [Scores Catalog](./docs/catalogs/scores.md) · [Topologies Catalog](./docs/catalogs/topologies.md) |
| See a complete bare-metal deployment | [OKD on Bare Metal](./docs/use-cases/okd-on-bare-metal.md) |
| Build my own Score or Topology | [Developer Guide](./docs/guides/developer-guide.md) |
---
## Why Rust?
We chose Rust for the same reason you might: **reliability through type safety**.
Infrastructure code runs in production. It needs to be correct. Rust's ownership model and type system let us build a framework where:
- Invalid configurations fail at compile time, not at 3 AM
- Refactoring infrastructure is as safe as refactoring application code
- The compiler verifies that your platform can actually fulfill your requirements
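One common way to get "fails at compile time" is the typestate pattern. The sketch below is illustrative (the `Plan`, `Unverified`, and `Verified` names are not Harmony's API): a plan can only be applied after it has been verified, and the compiler enforces it.

```rust
// Sketch of the typestate pattern behind "invalid configurations fail
// at compile time" (illustrative names, not Harmony's real API).
use std::marker::PhantomData;

struct Unverified;
struct Verified;

struct Plan<State> {
    steps: Vec<String>,
    _state: PhantomData<State>,
}

impl Plan<Unverified> {
    fn new(steps: Vec<String>) -> Self {
        Plan { steps, _state: PhantomData }
    }

    // Verification consumes the unverified plan and returns a verified one.
    fn verify(self) -> Plan<Verified> {
        Plan { steps: self.steps, _state: PhantomData }
    }
}

// `apply` only accepts a verified plan; passing an unverified one is a
// type error caught by the compiler, not a runtime failure.
fn apply(plan: Plan<Verified>) -> usize {
    plan.steps.len()
}

fn main() {
    let plan = Plan::new(vec!["create namespace".into(), "install operator".into()]);
    // apply(plan); // ← would not compile: expected `Plan<Verified>`
    assert_eq!(apply(plan.verify()), 2);
}
```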
See [ADR-001 · Why Rust](./adr/001-rust.md) for our full rationale.
---
## Architecture Decisions
Harmony's design is documented through Architecture Decision Records (ADRs):
- [ADR-001 · Why Rust](./adr/001-rust.md)
- [ADR-003 · Infrastructure Abstractions](./adr/003-infrastructure-abstractions.md)
- [ADR-006 · Secret Management](./adr/006-secret-management.md)
- [ADR-011 · Multi-Tenant Cluster](./adr/011-multi-tenant-cluster.md)
---
## License
Harmony is released under the **GNU AGPL v3**.
> We choose a strong copyleft license to ensure the project—and every improvement to it—remains open and benefits the entire community. Fork it, enhance it, even out-innovate us; just keep it open.
See [LICENSE](LICENSE) for the full text.
---
_Made with ❤️ & 🦀 by NationTech and the Harmony community_
book.toml Normal file
@@ -0,0 +1,9 @@
[book]
title = "Harmony"
description = "Infrastructure orchestration that treats your platform like first-class code"
src = "docs"
build-dir = "book"
authors = ["NationTech"]
[output.html]
mathjax-support = false
brocade/Cargo.toml Normal file
@@ -0,0 +1,19 @@
[package]
name = "brocade"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
async-trait.workspace = true
harmony_types = { path = "../harmony_types" }
russh.workspace = true
russh-keys.workspace = true
tokio.workspace = true
log.workspace = true
env_logger.workspace = true
regex = "1.11.3"
harmony_secret = { path = "../harmony_secret" }
serde.workspace = true
schemars = "0.8"
brocade/examples/main.rs Normal file
@@ -0,0 +1,75 @@
use std::net::{IpAddr, Ipv4Addr};
use brocade::{BrocadeOptions, ssh};
use harmony_secret::{Secret, SecretManager};
use harmony_types::switch::PortLocation;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
#[derive(Secret, Clone, Debug, JsonSchema, Serialize, Deserialize)]
struct BrocadeSwitchAuth {
username: String,
password: String,
}
#[tokio::main]
async fn main() {
env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).init();
// let ip = IpAddr::V4(Ipv4Addr::new(10, 0, 0, 250)); // old brocade @ ianlet
let ip = IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)); // brocade @ sto1
// let ip = IpAddr::V4(Ipv4Addr::new(192, 168, 4, 11)); // brocade @ st
let switch_addresses = vec![ip];
let config = SecretManager::get_or_prompt::<BrocadeSwitchAuth>()
.await
.unwrap();
let brocade = brocade::init(
&switch_addresses,
&config.username,
&config.password,
&BrocadeOptions {
dry_run: true,
ssh: ssh::SshOptions {
port: 2222,
..Default::default()
},
..Default::default()
},
)
.await
.expect("Brocade client failed to connect");
let entries = brocade.get_stack_topology().await.unwrap();
println!("Stack topology: {entries:#?}");
let entries = brocade.get_interfaces().await.unwrap();
println!("Interfaces: {entries:#?}");
let version = brocade.version().await.unwrap();
println!("Version: {version:?}");
println!("--------------");
let mac_addresses = brocade.get_mac_address_table().await.unwrap();
println!("VLAN\tMAC\t\t\tPORT");
for mac in mac_addresses {
println!("{}\t{}\t{}", mac.vlan, mac.mac_address, mac.port);
}
println!("--------------");
// Stop here by default: the calls below modify switch configuration.
todo!();
let channel_name = "1";
brocade.clear_port_channel(channel_name).await.unwrap();
println!("--------------");
let channel_id = brocade.find_available_channel_id().await.unwrap();
println!("--------------");
let channel_name = "HARMONY_LAG";
let ports = [PortLocation(2, 0, 35)];
brocade
.create_port_channel(channel_id, channel_name, &ports)
.await
.unwrap();
}
brocade/src/fast_iron.rs Normal file
@@ -0,0 +1,228 @@
use super::BrocadeClient;
use crate::{
BrocadeInfo, Error, ExecutionMode, InterSwitchLink, InterfaceInfo, MacAddressEntry,
PortChannelId, PortOperatingMode, parse_brocade_mac_address, shell::BrocadeShell,
};
use async_trait::async_trait;
use harmony_types::switch::{PortDeclaration, PortLocation};
use log::{debug, info};
use regex::Regex;
use std::{collections::HashSet, str::FromStr};
#[derive(Debug)]
pub struct FastIronClient {
shell: BrocadeShell,
version: BrocadeInfo,
}
impl FastIronClient {
pub fn init(mut shell: BrocadeShell, version_info: BrocadeInfo) -> Self {
shell.before_all(vec!["skip-page-display".into()]);
shell.after_all(vec!["page".into()]);
Self {
shell,
version: version_info,
}
}
fn parse_mac_entry(&self, line: &str) -> Option<Result<MacAddressEntry, Error>> {
debug!("[Brocade] Parsing mac address entry: {line}");
let parts: Vec<&str> = line.split_whitespace().collect();
if parts.len() < 3 {
return None;
}
let (vlan, mac_address, port) = match parts.len() {
3 => (
u16::from_str(parts[0]).ok()?,
parse_brocade_mac_address(parts[1]).ok()?,
parts[2].to_string(),
),
_ => (
1,
parse_brocade_mac_address(parts[0]).ok()?,
parts[1].to_string(),
),
};
let port =
PortDeclaration::parse(&port).map_err(|e| Error::UnexpectedError(format!("{e}")));
match port {
Ok(p) => Some(Ok(MacAddressEntry {
vlan,
mac_address,
port: p,
})),
Err(e) => Some(Err(e)),
}
}
fn parse_stack_port_entry(&self, line: &str) -> Option<Result<InterSwitchLink, Error>> {
debug!("[Brocade] Parsing stack port entry: {line}");
let parts: Vec<&str> = line.split_whitespace().collect();
if parts.len() < 10 {
return None;
}
let local_port = PortLocation::from_str(parts[0]).ok()?;
Some(Ok(InterSwitchLink {
local_port,
remote_port: None,
}))
}
fn build_port_channel_commands(
&self,
channel_id: PortChannelId,
channel_name: &str,
ports: &[PortLocation],
) -> Vec<String> {
let mut commands = vec![
"configure terminal".to_string(),
format!("lag {channel_name} static id {channel_id}"),
];
for port in ports {
commands.push(format!("ports ethernet {port}"));
}
commands.push(format!("primary-port {}", ports[0]));
commands.push("deploy".into());
commands.push("exit".into());
commands.push("write memory".into());
commands.push("exit".into());
commands
}
}
#[async_trait]
impl BrocadeClient for FastIronClient {
async fn version(&self) -> Result<BrocadeInfo, Error> {
Ok(self.version.clone())
}
async fn get_mac_address_table(&self) -> Result<Vec<MacAddressEntry>, Error> {
info!("[Brocade] Showing MAC address table...");
let output = self
.shell
.run_command("show mac-address", ExecutionMode::Regular)
.await?;
output
.lines()
.skip(2)
.filter_map(|line| self.parse_mac_entry(line))
.collect()
}
async fn get_stack_topology(&self) -> Result<Vec<InterSwitchLink>, Error> {
let output = self
.shell
.run_command("show interface stack-ports", crate::ExecutionMode::Regular)
.await?;
output
.lines()
.skip(1)
.filter_map(|line| self.parse_stack_port_entry(line))
.collect()
}
async fn get_interfaces(&self) -> Result<Vec<InterfaceInfo>, Error> {
todo!()
}
async fn configure_interfaces(
&self,
_interfaces: &Vec<(String, PortOperatingMode)>,
) -> Result<(), Error> {
todo!()
}
async fn find_available_channel_id(&self) -> Result<PortChannelId, Error> {
info!("[Brocade] Finding next available channel id...");
let output = self
.shell
.run_command("show lag", ExecutionMode::Regular)
.await?;
let re = Regex::new(r"=== LAG .* ID\s+(\d+)").expect("Invalid regex");
let used_ids: HashSet<u8> = output
.lines()
.filter_map(|line| {
re.captures(line)
.and_then(|c| c.get(1))
.and_then(|id_match| id_match.as_str().parse().ok())
})
.collect();
let mut next_id: u8 = 1;
loop {
if !used_ids.contains(&next_id) {
break;
}
next_id += 1;
}
info!("[Brocade] Found channel id: {next_id}");
Ok(next_id)
}
async fn create_port_channel(
&self,
channel_id: PortChannelId,
channel_name: &str,
ports: &[PortLocation],
) -> Result<(), Error> {
info!(
"[Brocade] Configuring port-channel '{channel_name} {channel_id}' with ports: {ports:?}"
);
let commands = self.build_port_channel_commands(channel_id, channel_name, ports);
self.shell
.run_commands(commands, ExecutionMode::Privileged)
.await?;
info!("[Brocade] Port-channel '{channel_name}' configured.");
Ok(())
}
async fn clear_port_channel(&self, channel_name: &str) -> Result<(), Error> {
info!("[Brocade] Clearing port-channel: {channel_name}");
let commands = vec![
"configure terminal".to_string(),
format!("no lag {channel_name}"),
"write memory".to_string(),
];
self.shell
.run_commands(commands, ExecutionMode::Privileged)
.await?;
info!("[Brocade] Port-channel '{channel_name}' cleared.");
Ok(())
}
async fn enable_snmp(&self, user_name: &str, auth: &str, des: &str) -> Result<(), Error> {
let commands = vec![
"configure terminal".into(),
"snmp-server view ALL 1 included".into(),
"snmp-server group public v3 priv read ALL".into(),
format!(
"snmp-server user {user_name} groupname public auth md5 auth-password {auth} priv des priv-password {des}"
),
"exit".into(),
];
self.shell
.run_commands(commands, ExecutionMode::Regular)
.await?;
Ok(())
}
}
brocade/src/lib.rs Normal file
@@ -0,0 +1,352 @@
use std::net::IpAddr;
use std::{
fmt::{self, Display},
time::Duration,
};
use crate::network_operating_system::NetworkOperatingSystemClient;
use crate::{
fast_iron::FastIronClient,
shell::{BrocadeSession, BrocadeShell},
};
use async_trait::async_trait;
use harmony_types::net::MacAddress;
use harmony_types::switch::{PortDeclaration, PortLocation};
use regex::Regex;
use serde::Serialize;
mod fast_iron;
mod network_operating_system;
mod shell;
pub mod ssh;
#[derive(Default, Clone, Debug)]
pub struct BrocadeOptions {
pub dry_run: bool,
pub ssh: ssh::SshOptions,
pub timeouts: TimeoutConfig,
}
#[derive(Clone, Debug)]
pub struct TimeoutConfig {
pub shell_ready: Duration,
pub command_execution: Duration,
pub command_output: Duration,
pub cleanup: Duration,
pub message_wait: Duration,
}
impl Default for TimeoutConfig {
fn default() -> Self {
Self {
shell_ready: Duration::from_secs(10),
command_execution: Duration::from_secs(60), // Commands like `deploy` (for a LAG) can take a while
command_output: Duration::from_secs(5), // Delay to start logging "waiting for command output"
cleanup: Duration::from_secs(10),
message_wait: Duration::from_millis(500),
}
}
}
enum ExecutionMode {
Regular,
Privileged,
}
#[derive(Clone, Debug)]
pub struct BrocadeInfo {
os: BrocadeOs,
_version: String,
}
#[derive(Clone, Debug)]
pub enum BrocadeOs {
NetworkOperatingSystem,
FastIron,
Unknown,
}
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone)]
pub struct MacAddressEntry {
pub vlan: u16,
pub mac_address: MacAddress,
pub port: PortDeclaration,
}
pub type PortChannelId = u8;
/// Represents a single physical or logical link connecting two switches within a stack or fabric.
///
/// This structure provides a standardized view of the topology regardless of the
/// underlying Brocade OS configuration (stacking vs. fabric).
#[derive(Debug, PartialEq, Eq, Clone)]
pub struct InterSwitchLink {
/// The local port on the switch where the topology command was run.
pub local_port: PortLocation,
/// The port on the directly connected neighboring switch.
pub remote_port: Option<PortLocation>,
}
/// Represents the key running configuration status of a single switch interface.
#[derive(Debug, PartialEq, Eq, Clone)]
pub struct InterfaceInfo {
/// The full configuration name (e.g., "TenGigabitEthernet 1/0/1", "FortyGigabitEthernet 2/0/2").
pub name: String,
/// The physical location of the interface.
pub port_location: PortLocation,
/// The parsed type and name prefix of the interface.
pub interface_type: InterfaceType,
/// The primary configuration mode defining the interface's behavior (L2, L3, Fabric).
pub operating_mode: Option<PortOperatingMode>,
/// Indicates the current state of the interface.
pub status: InterfaceStatus,
}
/// Categorizes the functional type of a switch interface.
#[derive(Debug, PartialEq, Eq, Clone)]
pub enum InterfaceType {
/// Physical or virtual Ethernet interface (e.g., TenGigabitEthernet, FortyGigabitEthernet).
Ethernet(String),
}
impl fmt::Display for InterfaceType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
InterfaceType::Ethernet(name) => write!(f, "{name}"),
}
}
}
/// Defines the primary configuration mode of a switch interface, representing mutually exclusive roles.
#[derive(Debug, PartialEq, Eq, Clone, Serialize)]
pub enum PortOperatingMode {
/// The interface is explicitly configured for Brocade fabric roles (ISL or Trunk enabled).
Fabric,
/// The interface is configured for standard Layer 2 switching as Trunk port (`switchport mode trunk`).
Trunk,
/// The interface is configured for standard Layer 2 switching as Access port (`switchport` without trunk mode).
Access,
}
/// Defines the possible status of an interface.
#[derive(Debug, PartialEq, Eq, Clone)]
pub enum InterfaceStatus {
/// The interface is connected.
Connected,
/// The interface is not connected and is not expected to be.
NotConnected,
/// The interface has no SFP transceiver installed.
SfpAbsent,
}
pub async fn init(
ip_addresses: &[IpAddr],
username: &str,
password: &str,
options: &BrocadeOptions,
) -> Result<Box<dyn BrocadeClient + Send + Sync>, Error> {
let shell = BrocadeShell::init(ip_addresses, username, password, options).await?;
let version_info = shell
.with_session(ExecutionMode::Regular, |session| {
Box::pin(get_brocade_info(session))
})
.await?;
Ok(match version_info.os {
BrocadeOs::FastIron => Box::new(FastIronClient::init(shell, version_info)),
BrocadeOs::NetworkOperatingSystem => {
Box::new(NetworkOperatingSystemClient::init(shell, version_info))
}
BrocadeOs::Unknown => todo!(),
})
}
#[async_trait]
pub trait BrocadeClient: std::fmt::Debug {
/// Retrieves the operating system and version details from the connected Brocade switch.
///
/// This is typically the first call made after establishing a connection to determine
/// the switch OS family (e.g., FastIron, NOS) for feature compatibility.
///
/// # Returns
///
/// A `BrocadeInfo` structure containing parsed OS type and version string.
async fn version(&self) -> Result<BrocadeInfo, Error>;
/// Retrieves the dynamically learned MAC address table from the switch.
///
/// This is crucial for discovering where specific network endpoints (MAC addresses)
/// are currently located on the physical ports.
///
/// # Returns
///
/// A vector of `MacAddressEntry`, where each entry typically contains VLAN, MAC address,
/// and the associated port name/index.
async fn get_mac_address_table(&self) -> Result<Vec<MacAddressEntry>, Error>;
/// Derives the physical connections used to link multiple switches together
/// to form a single logical entity (stack, fabric, etc.).
///
/// This abstracts the underlying configuration (e.g., stack ports, fabric ports)
/// to return a standardized view of the topology.
///
/// # Returns
///
/// A vector of `InterSwitchLink` structs detailing which ports are used for stacking/fabric.
/// If the switch is not stacked, returns an empty vector.
async fn get_stack_topology(&self) -> Result<Vec<InterSwitchLink>, Error>;
/// Retrieves the status for all interfaces
///
/// # Returns
///
/// A vector of `InterfaceInfo` structures.
async fn get_interfaces(&self) -> Result<Vec<InterfaceInfo>, Error>;
/// Configures a set of interfaces to be operated with a specified mode (access ports, ISL, etc.).
async fn configure_interfaces(
&self,
interfaces: &Vec<(String, PortOperatingMode)>,
) -> Result<(), Error>;
/// Scans the existing configuration to find the next available (unused)
/// Port-Channel ID (`lag` or `trunk`) for assignment.
///
/// # Returns
///
/// The smallest, unassigned `PortChannelId` within the supported range.
async fn find_available_channel_id(&self) -> Result<PortChannelId, Error>;
/// Creates and configures a new Port-Channel (Link Aggregation Group or LAG)
/// using the specified channel ID and ports.
///
/// The resulting configuration must be persistent (saved to startup-config).
/// Assumes a static LAG configuration mode unless specified otherwise by the implementation.
///
/// # Parameters
///
/// * `channel_id`: The ID (e.g., 1-128) for the logical port channel.
/// * `channel_name`: A descriptive name for the LAG (used in configuration context).
/// * `ports`: A slice of `PortLocation` structs defining the physical member ports.
async fn create_port_channel(
&self,
channel_id: PortChannelId,
channel_name: &str,
ports: &[PortLocation],
) -> Result<(), Error>;
/// Enables Simple Network Management Protocol (SNMP) server for switch
///
/// # Parameters
///
/// * `user_name`: The user name for the snmp server
/// * `auth`: The password for authentication process for verifying the identity of a device
/// * `des`: The Data Encryption Standard algorithm key
async fn enable_snmp(&self, user_name: &str, auth: &str, des: &str) -> Result<(), Error>;
/// Removes all configuration associated with the specified Port-Channel name.
///
/// This operation should be idempotent; attempting to clear a non-existent
/// channel should succeed (or return a benign error).
///
/// # Parameters
///
/// * `channel_name`: The name of the Port-Channel (LAG) to delete.
///
async fn clear_port_channel(&self, channel_name: &str) -> Result<(), Error>;
}
async fn get_brocade_info(session: &mut BrocadeSession) -> Result<BrocadeInfo, Error> {
let output = session.run_command("show version").await?;
if output.contains("Network Operating System") {
let re = Regex::new(r"Network Operating System Version:\s*(?P<version>[a-zA-Z0-9.\-]+)")
.expect("Invalid regex");
let version = re
.captures(&output)
.and_then(|cap| cap.name("version"))
.map(|m| m.as_str().to_string())
.unwrap_or_default();
return Ok(BrocadeInfo {
os: BrocadeOs::NetworkOperatingSystem,
_version: version,
});
} else if output.contains("ICX") {
let re = Regex::new(r"(?m)^\s*SW: Version\s*(?P<version>[a-zA-Z0-9.\-]+)")
.expect("Invalid regex");
let version = re
.captures(&output)
.and_then(|cap| cap.name("version"))
.map(|m| m.as_str().to_string())
.unwrap_or_default();
return Ok(BrocadeInfo {
os: BrocadeOs::FastIron,
_version: version,
});
}
Err(Error::UnexpectedError("Unknown Brocade OS version".into()))
}
fn parse_brocade_mac_address(value: &str) -> Result<MacAddress, String> {
let cleaned_mac = value.replace('.', "");
if cleaned_mac.len() != 12 {
return Err(format!("Invalid MAC address: {value}"));
}
let mut bytes = [0u8; 6];
for (i, pair) in cleaned_mac.as_bytes().chunks(2).enumerate() {
let byte_str = std::str::from_utf8(pair).map_err(|_| "Invalid UTF-8")?;
bytes[i] =
u8::from_str_radix(byte_str, 16).map_err(|_| format!("Invalid hex in MAC: {value}"))?;
}
Ok(MacAddress(bytes))
}
#[derive(Debug)]
pub enum SecurityLevel {
AuthPriv(String),
}
#[derive(Debug)]
pub enum Error {
NetworkError(String),
AuthenticationError(String),
ConfigurationError(String),
TimeoutError(String),
UnexpectedError(String),
CommandError(String),
}
impl Display for Error {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Error::NetworkError(msg) => write!(f, "Network error: {msg}"),
Error::AuthenticationError(msg) => write!(f, "Authentication error: {msg}"),
Error::ConfigurationError(msg) => write!(f, "Configuration error: {msg}"),
Error::TimeoutError(msg) => write!(f, "Timeout error: {msg}"),
Error::UnexpectedError(msg) => write!(f, "Unexpected error: {msg}"),
Error::CommandError(msg) => write!(f, "{msg}"),
}
}
}
impl From<Error> for String {
fn from(val: Error) -> Self {
format!("{val}")
}
}
impl std::error::Error for Error {}
impl From<russh::Error> for Error {
fn from(value: russh::Error) -> Self {
Error::NetworkError(format!("Russh client error: {value}"))
}
}
brocade/src/network_operating_system.rs Normal file
@@ -0,0 +1,352 @@
use std::str::FromStr;
use async_trait::async_trait;
use harmony_types::switch::{PortDeclaration, PortLocation};
use log::{debug, info};
use regex::Regex;
use crate::{
BrocadeClient, BrocadeInfo, Error, ExecutionMode, InterSwitchLink, InterfaceInfo,
InterfaceStatus, InterfaceType, MacAddressEntry, PortChannelId, PortOperatingMode,
parse_brocade_mac_address, shell::BrocadeShell,
};
#[derive(Debug)]
pub struct NetworkOperatingSystemClient {
shell: BrocadeShell,
version: BrocadeInfo,
}
impl NetworkOperatingSystemClient {
pub fn init(mut shell: BrocadeShell, version_info: BrocadeInfo) -> Self {
shell.before_all(vec!["terminal length 0".into()]);
Self {
shell,
version: version_info,
}
}
fn parse_mac_entry(&self, line: &str) -> Option<Result<MacAddressEntry, Error>> {
debug!("[Brocade] Parsing mac address entry: {line}");
let parts: Vec<&str> = line.split_whitespace().collect();
if parts.len() < 5 {
return None;
}
let (vlan, mac_address, port) = match parts.len() {
5 => (
u16::from_str(parts[0]).ok()?,
parse_brocade_mac_address(parts[1]).ok()?,
parts[4].to_string(),
),
_ => (
u16::from_str(parts[0]).ok()?,
parse_brocade_mac_address(parts[1]).ok()?,
parts[5].to_string(),
),
};
let port =
PortDeclaration::parse(&port).map_err(|e| Error::UnexpectedError(format!("{e}")));
match port {
Ok(p) => Some(Ok(MacAddressEntry {
vlan,
mac_address,
port: p,
})),
Err(e) => Some(Err(e)),
}
}
fn parse_inter_switch_link_entry(&self, line: &str) -> Option<Result<InterSwitchLink, Error>> {
debug!("[Brocade] Parsing inter switch link entry: {line}");
let parts: Vec<&str> = line.split_whitespace().collect();
if parts.len() < 10 {
return None;
}
let local_port = PortLocation::from_str(parts[2]).ok()?;
let remote_port = PortLocation::from_str(parts[5]).ok()?;
Some(Ok(InterSwitchLink {
local_port,
remote_port: Some(remote_port),
}))
}
fn parse_interface_status_entry(&self, line: &str) -> Option<Result<InterfaceInfo, Error>> {
debug!("[Brocade] Parsing interface status entry: {line}");
let parts: Vec<&str> = line.split_whitespace().collect();
if parts.len() < 6 {
return None;
}
let interface_type = match parts[0] {
"Fo" => InterfaceType::Ethernet("FortyGigabitEthernet".to_string()),
"Te" => InterfaceType::Ethernet("TenGigabitEthernet".to_string()),
_ => return None,
};
let port_location = PortLocation::from_str(parts[1]).ok()?;
let status = match parts[2] {
"connected" => InterfaceStatus::Connected,
"notconnected" => InterfaceStatus::NotConnected,
"sfpAbsent" => InterfaceStatus::SfpAbsent,
_ => return None,
};
let operating_mode = match parts[3] {
"ISL" => Some(PortOperatingMode::Fabric),
"Trunk" => Some(PortOperatingMode::Trunk),
"Access" => Some(PortOperatingMode::Access),
"--" => None,
_ => return None,
};
Some(Ok(InterfaceInfo {
name: format!("{interface_type} {port_location}"),
port_location,
interface_type,
operating_mode,
status,
}))
}
fn map_configure_interfaces_error(&self, err: Error) -> Error {
debug!("[Brocade] {err}");
if let Error::CommandError(message) = &err {
if message.contains("switchport")
&& message.contains("Cannot configure aggregator member")
{
let re = Regex::new(r"\(conf-if-([a-zA-Z]+)-([\d/]+)\)#").unwrap();
if let Some(caps) = re.captures(message) {
let interface_type = &caps[1];
let port_location = &caps[2];
let interface = format!("{interface_type} {port_location}");
return Error::CommandError(format!(
"Cannot configure interface '{interface}', it is a member of a port-channel (LAG)"
));
}
}
}
err
}
}
#[async_trait]
impl BrocadeClient for NetworkOperatingSystemClient {
async fn version(&self) -> Result<BrocadeInfo, Error> {
Ok(self.version.clone())
}
async fn get_mac_address_table(&self) -> Result<Vec<MacAddressEntry>, Error> {
let output = self
.shell
.run_command("show mac-address-table", ExecutionMode::Regular)
.await?;
output
.lines()
.skip(1)
.filter_map(|line| self.parse_mac_entry(line))
.collect()
}
async fn get_stack_topology(&self) -> Result<Vec<InterSwitchLink>, Error> {
let output = self
.shell
.run_command("show fabric isl", ExecutionMode::Regular)
.await?;
output
.lines()
.skip(6)
.filter_map(|line| self.parse_inter_switch_link_entry(line))
.collect()
}
async fn get_interfaces(&self) -> Result<Vec<InterfaceInfo>, Error> {
let output = self
.shell
.run_command(
"show interface status rbridge-id all",
ExecutionMode::Regular,
)
.await?;
output
.lines()
.skip(2)
.filter_map(|line| self.parse_interface_status_entry(line))
.collect()
}
async fn configure_interfaces(
&self,
interfaces: &Vec<(String, PortOperatingMode)>,
) -> Result<(), Error> {
info!("[Brocade] Configuring {} interface(s)...", interfaces.len());
let mut commands = vec!["configure terminal".to_string()];
for interface in interfaces {
commands.push(format!("interface {}", interface.0));
match interface.1 {
PortOperatingMode::Fabric => {
commands.push("fabric isl enable".into());
commands.push("fabric trunk enable".into());
}
PortOperatingMode::Trunk => {
commands.push("switchport".into());
commands.push("switchport mode trunk".into());
commands.push("switchport trunk allowed vlan all".into());
commands.push("no switchport trunk tag native-vlan".into());
commands.push("spanning-tree shutdown".into());
commands.push("no fabric isl enable".into());
commands.push("no fabric trunk enable".into());
commands.push("no shutdown".into());
}
PortOperatingMode::Access => {
commands.push("switchport".into());
commands.push("switchport mode access".into());
commands.push("switchport access vlan 1".into());
commands.push("no spanning-tree shutdown".into());
commands.push("no fabric isl enable".into());
commands.push("no fabric trunk enable".into());
}
}
commands.push("no shutdown".into());
commands.push("exit".into());
}
self.shell
.run_commands(commands, ExecutionMode::Regular)
.await
.map_err(|err| self.map_configure_interfaces_error(err))?;
info!("[Brocade] Interfaces configured.");
Ok(())
}
async fn find_available_channel_id(&self) -> Result<PortChannelId, Error> {
info!("[Brocade] Finding next available channel id...");
let output = self
.shell
.run_command("show port-channel summary", ExecutionMode::Regular)
.await?;
let used_ids: Vec<u8> = output
.lines()
.skip(6)
.filter_map(|line| {
let parts: Vec<&str> = line.split_whitespace().collect();
if parts.len() < 8 {
return None;
}
u8::from_str(parts[0]).ok()
})
.collect();
let next_id = (1..=u8::MAX)
.find(|id| !used_ids.contains(id))
.ok_or_else(|| Error::ConfigurationError("No available port-channel id".to_string()))?;
info!("[Brocade] Found channel id: {next_id}");
Ok(next_id)
}
async fn create_port_channel(
&self,
channel_id: PortChannelId,
channel_name: &str,
ports: &[PortLocation],
) -> Result<(), Error> {
info!(
"[Brocade] Configuring port-channel '{channel_id} {channel_name}' with ports: {}",
ports
.iter()
.map(|p| format!("{p}"))
.collect::<Vec<String>>()
.join(", ")
);
let interfaces = self.get_interfaces().await?;
let mut commands = vec![
"configure terminal".into(),
format!("interface port-channel {}", channel_id),
"no shutdown".into(),
"exit".into(),
];
for port in ports {
let interface = interfaces.iter().find(|i| i.port_location == *port);
let Some(interface) = interface else {
continue;
};
commands.push(format!("interface {}", interface.name));
commands.push("no switchport".into());
commands.push("no ip address".into());
commands.push("no fabric isl enable".into());
commands.push("no fabric trunk enable".into());
commands.push(format!("channel-group {channel_id} mode active"));
commands.push("no shutdown".into());
commands.push("exit".into());
}
self.shell
.run_commands(commands, ExecutionMode::Regular)
.await?;
info!("[Brocade] Port-channel '{channel_name}' configured.");
Ok(())
}
async fn clear_port_channel(&self, channel_name: &str) -> Result<(), Error> {
info!("[Brocade] Clearing port-channel: {channel_name}");
let commands = vec![
"configure terminal".into(),
format!("no interface port-channel {}", channel_name),
"exit".into(),
];
self.shell
.run_commands(commands, ExecutionMode::Regular)
.await?;
info!("[Brocade] Port-channel '{channel_name}' cleared.");
Ok(())
}
async fn enable_snmp(&self, user_name: &str, auth: &str, des: &str) -> Result<(), Error> {
let commands = vec![
"configure terminal".into(),
"snmp-server view ALL 1 included".into(),
"snmp-server group public v3 priv read ALL".into(),
format!(
"snmp-server user {user_name} groupname public auth md5 auth-password {auth} priv des priv-password {des}"
),
"exit".into(),
];
self.shell
.run_commands(commands, ExecutionMode::Regular)
.await?;
Ok(())
}
}

brocade/src/shell.rs Normal file

@@ -0,0 +1,367 @@
use std::net::IpAddr;
use std::time::Duration;
use std::time::Instant;
use crate::BrocadeOptions;
use crate::Error;
use crate::ExecutionMode;
use crate::TimeoutConfig;
use crate::ssh;
use log::debug;
use log::info;
use russh::ChannelMsg;
use tokio::time::timeout;
#[derive(Debug)]
pub struct BrocadeShell {
ip: IpAddr,
username: String,
password: String,
options: BrocadeOptions,
before_all_commands: Vec<String>,
after_all_commands: Vec<String>,
}
impl BrocadeShell {
pub async fn init(
ip_addresses: &[IpAddr],
username: &str,
password: &str,
options: &BrocadeOptions,
) -> Result<Self, Error> {
let ip = ip_addresses
.first()
.ok_or_else(|| Error::ConfigurationError("No IP addresses provided".to_string()))?;
let brocade_ssh_client_options =
ssh::try_init_client(username, password, ip, options).await?;
Ok(Self {
ip: *ip,
username: username.to_string(),
password: password.to_string(),
before_all_commands: vec![],
after_all_commands: vec![],
options: brocade_ssh_client_options,
})
}
pub async fn open_session(&self, mode: ExecutionMode) -> Result<BrocadeSession, Error> {
BrocadeSession::open(
self.ip,
self.options.ssh.port,
&self.username,
&self.password,
self.options.clone(),
mode,
)
.await
}
pub async fn with_session<F, R>(&self, mode: ExecutionMode, callback: F) -> Result<R, Error>
where
F: FnOnce(
&mut BrocadeSession,
) -> std::pin::Pin<
Box<dyn std::future::Future<Output = Result<R, Error>> + Send + '_>,
>,
{
let mut session = self.open_session(mode).await?;
let _ = session.run_commands(self.before_all_commands.clone()).await;
let result = callback(&mut session).await;
let _ = session.run_commands(self.after_all_commands.clone()).await;
session.close().await?;
result
}
pub async fn run_command(&self, command: &str, mode: ExecutionMode) -> Result<String, Error> {
let mut session = self.open_session(mode).await?;
let _ = session.run_commands(self.before_all_commands.clone()).await;
let result = session.run_command(command).await;
let _ = session.run_commands(self.after_all_commands.clone()).await;
session.close().await?;
result
}
pub async fn run_commands(
&self,
commands: Vec<String>,
mode: ExecutionMode,
) -> Result<(), Error> {
let mut session = self.open_session(mode).await?;
let _ = session.run_commands(self.before_all_commands.clone()).await;
let result = session.run_commands(commands).await;
let _ = session.run_commands(self.after_all_commands.clone()).await;
session.close().await?;
result
}
pub fn before_all(&mut self, commands: Vec<String>) {
self.before_all_commands = commands;
}
pub fn after_all(&mut self, commands: Vec<String>) {
self.after_all_commands = commands;
}
}
pub struct BrocadeSession {
pub channel: russh::Channel<russh::client::Msg>,
pub mode: ExecutionMode,
pub options: BrocadeOptions,
}
impl BrocadeSession {
pub async fn open(
ip: IpAddr,
port: u16,
username: &str,
password: &str,
options: BrocadeOptions,
mode: ExecutionMode,
) -> Result<Self, Error> {
let client = ssh::create_client(ip, port, username, password, &options).await?;
let mut channel = client.channel_open_session().await?;
channel
.request_pty(false, "vt100", 80, 24, 0, 0, &[])
.await?;
channel.request_shell(false).await?;
wait_for_shell_ready(&mut channel, &options.timeouts).await?;
if let ExecutionMode::Privileged = mode {
try_elevate_session(&mut channel, username, password, &options.timeouts).await?;
}
Ok(Self {
channel,
mode,
options,
})
}
pub async fn close(&mut self) -> Result<(), Error> {
debug!("[Brocade] Closing session...");
self.channel.data(&b"exit\n"[..]).await?;
if let ExecutionMode::Privileged = self.mode {
self.channel.data(&b"exit\n"[..]).await?;
}
let start = Instant::now();
while start.elapsed() < self.options.timeouts.cleanup {
match timeout(self.options.timeouts.message_wait, self.channel.wait()).await {
Ok(Some(ChannelMsg::Close)) => break,
Ok(Some(_)) => continue,
Ok(None) | Err(_) => break,
}
}
debug!("[Brocade] Session closed.");
Ok(())
}
pub async fn run_command(&mut self, command: &str) -> Result<String, Error> {
if self.should_skip_command(command) {
return Ok(String::new());
}
debug!("[Brocade] Running command: '{command}'...");
self.channel
.data(format!("{}\n", command).as_bytes())
.await?;
tokio::time::sleep(Duration::from_millis(100)).await;
let output = self.collect_command_output().await?;
let output = String::from_utf8(output)
.map_err(|_| Error::UnexpectedError("Invalid UTF-8 in command output".to_string()))?;
self.check_for_command_errors(&output, command)?;
Ok(output)
}
pub async fn run_commands(&mut self, commands: Vec<String>) -> Result<(), Error> {
for command in commands {
self.run_command(&command).await?;
}
Ok(())
}
fn should_skip_command(&self, command: &str) -> bool {
if (command.starts_with("write") || command.starts_with("deploy")) && self.options.dry_run {
info!("[Brocade] Dry-run mode enabled, skipping command: {command}");
return true;
}
false
}
async fn collect_command_output(&mut self) -> Result<Vec<u8>, Error> {
let mut output = Vec::new();
let start = Instant::now();
let read_timeout = Duration::from_millis(500);
let log_interval = Duration::from_secs(5);
let mut last_log = Instant::now();
loop {
if start.elapsed() > self.options.timeouts.command_execution {
return Err(Error::TimeoutError(
"Timeout waiting for command completion.".into(),
));
}
if start.elapsed() > self.options.timeouts.command_output
&& last_log.elapsed() > log_interval
{
info!("[Brocade] Waiting for command output...");
last_log = Instant::now();
}
match timeout(read_timeout, self.channel.wait()).await {
Ok(Some(ChannelMsg::Data { data } | ChannelMsg::ExtendedData { data, .. })) => {
output.extend_from_slice(&data);
let current_output = String::from_utf8_lossy(&output);
if current_output.contains('>') || current_output.contains('#') {
return Ok(output);
}
}
Ok(Some(ChannelMsg::Eof | ChannelMsg::Close)) => return Ok(output),
Ok(Some(ChannelMsg::ExitStatus { exit_status })) => {
debug!("[Brocade] Command exit status: {exit_status}");
}
Ok(Some(_)) => continue,
Ok(None) | Err(_) => {
if output.is_empty() {
if let Ok(None) = timeout(read_timeout, self.channel.wait()).await {
break;
}
continue;
}
tokio::time::sleep(Duration::from_millis(100)).await;
let current_output = String::from_utf8_lossy(&output);
if current_output.contains('>') || current_output.contains('#') {
return Ok(output);
}
}
}
}
Ok(output)
}
fn check_for_command_errors(&self, output: &str, command: &str) -> Result<(), Error> {
const ERROR_PATTERNS: &[&str] = &[
"invalid input",
"syntax error",
"command not found",
"unknown command",
"permission denied",
"access denied",
"authentication failed",
"configuration error",
"failed to",
"error:",
];
let output_lower = output.to_lowercase();
if ERROR_PATTERNS.iter().any(|&p| output_lower.contains(p)) {
return Err(Error::CommandError(format!(
"Command error: {}",
output.trim()
)));
}
if !command.starts_with("show") && output.trim().is_empty() {
return Err(Error::CommandError(format!(
"Command '{command}' produced no output"
)));
}
Ok(())
}
}
async fn wait_for_shell_ready(
channel: &mut russh::Channel<russh::client::Msg>,
timeouts: &TimeoutConfig,
) -> Result<(), Error> {
let mut buffer = Vec::new();
let start = Instant::now();
while start.elapsed() < timeouts.shell_ready {
match timeout(timeouts.message_wait, channel.wait()).await {
Ok(Some(ChannelMsg::Data { data })) => {
buffer.extend_from_slice(&data);
let output = String::from_utf8_lossy(&buffer);
let output = output.trim();
if output.ends_with('>') || output.ends_with('#') {
debug!("[Brocade] Shell ready");
return Ok(());
}
}
Ok(Some(_)) => continue,
Ok(None) => break,
Err(_) => continue,
}
}
Ok(())
}
async fn try_elevate_session(
channel: &mut russh::Channel<russh::client::Msg>,
username: &str,
password: &str,
timeouts: &TimeoutConfig,
) -> Result<(), Error> {
channel.data(&b"enable\n"[..]).await?;
let start = Instant::now();
let mut buffer = Vec::new();
while start.elapsed() < timeouts.shell_ready {
match timeout(timeouts.message_wait, channel.wait()).await {
Ok(Some(ChannelMsg::Data { data })) => {
buffer.extend_from_slice(&data);
let output = String::from_utf8_lossy(&buffer);
if output.ends_with('#') {
debug!("[Brocade] Privileged mode established");
return Ok(());
}
if output.contains("User Name:") {
channel.data(format!("{}\n", username).as_bytes()).await?;
buffer.clear();
} else if output.contains("Password:") {
channel.data(format!("{}\n", password).as_bytes()).await?;
buffer.clear();
} else if output.contains('>') {
return Err(Error::AuthenticationError(
"Enable authentication failed".into(),
));
}
}
Ok(Some(_)) => continue,
Ok(None) => break,
Err(_) => continue,
}
}
let output = String::from_utf8_lossy(&buffer);
if output.ends_with('#') {
debug!("[Brocade] Privileged mode established");
Ok(())
} else {
Err(Error::AuthenticationError(format!(
"Enable failed. Output:\n{output}"
)))
}
}

brocade/src/ssh.rs Normal file

@@ -0,0 +1,131 @@
use std::borrow::Cow;
use std::sync::Arc;
use async_trait::async_trait;
use log::debug;
use russh::client::Handler;
use russh::kex::DH_G1_SHA1;
use russh::kex::ECDH_SHA2_NISTP256;
use russh_keys::key::SSH_RSA;
use super::BrocadeOptions;
use super::Error;
#[derive(Clone, Debug)]
pub struct SshOptions {
pub preferred_algorithms: russh::Preferred,
pub port: u16,
}
impl Default for SshOptions {
fn default() -> Self {
Self {
preferred_algorithms: Default::default(),
port: 22,
}
}
}
impl SshOptions {
fn ecdhsa_sha2_nistp256(port: u16) -> Self {
Self {
preferred_algorithms: russh::Preferred {
kex: Cow::Borrowed(&[ECDH_SHA2_NISTP256]),
key: Cow::Borrowed(&[SSH_RSA]),
..Default::default()
},
port,
..Default::default()
}
}
fn legacy(port: u16) -> Self {
Self {
preferred_algorithms: russh::Preferred {
kex: Cow::Borrowed(&[DH_G1_SHA1]),
key: Cow::Borrowed(&[SSH_RSA]),
..Default::default()
},
port,
..Default::default()
}
}
}
pub struct Client;
#[async_trait]
impl Handler for Client {
type Error = Error;
// NOTE: accepts any server host key without verification.
async fn check_server_key(
&mut self,
_server_public_key: &russh_keys::key::PublicKey,
) -> Result<bool, Self::Error> {
Ok(true)
}
}
pub async fn try_init_client(
username: &str,
password: &str,
ip: &std::net::IpAddr,
base_options: &BrocadeOptions,
) -> Result<BrocadeOptions, Error> {
let default = SshOptions {
port: base_options.ssh.port,
..Default::default()
};
let ssh_options = vec![
default,
SshOptions::ecdhsa_sha2_nistp256(base_options.ssh.port),
SshOptions::legacy(base_options.ssh.port),
];
for ssh in ssh_options {
let opts = BrocadeOptions {
ssh: ssh.clone(),
..base_options.clone()
};
debug!("Creating client {ip}:{} {username}", ssh.port);
let client = create_client(*ip, ssh.port, username, password, &opts).await;
match client {
Ok(_) => {
return Ok(opts);
}
Err(e) => match e {
Error::NetworkError(e) => {
if e.contains("No common key exchange algorithm") {
continue;
} else {
return Err(Error::NetworkError(e));
}
}
_ => return Err(e),
},
}
}
Err(Error::NetworkError(
"Could not establish ssh connection: no common key exchange algorithm".to_string(),
))
}
pub async fn create_client(
ip: std::net::IpAddr,
port: u16,
username: &str,
password: &str,
options: &BrocadeOptions,
) -> Result<russh::client::Handle<Client>, Error> {
let config = russh::client::Config {
preferred: options.ssh.preferred_algorithms.clone(),
..Default::default()
};
let mut client = russh::client::connect(Arc::new(config), (ip, port), Client {}).await?;
if !client.authenticate_password(username, password).await? {
return Err(Error::AuthenticationError(
"ssh authentication failed".to_string(),
));
}
Ok(client)
}

build/book.sh Executable file

@@ -0,0 +1,11 @@
#!/bin/sh
set -e
cd "$(dirname "$0")/.."
cargo install mdbook --locked
mdbook build
test -f book/index.html || (echo "ERROR: book/index.html not found" && exit 1)
test -f book/concepts.html || (echo "ERROR: book/concepts.html not found" && exit 1)
test -f book/guides/getting-started.html || (echo "ERROR: book/guides/getting-started.html not found" && exit 1)


@@ -1,6 +1,8 @@
#!/bin/sh
set -e
cd "$(dirname "$0")/.."
rustc --version
cargo check --all-targets --all-features --keep-going
cargo fmt --check

build/ci.sh Executable file

@@ -0,0 +1,16 @@
#!/bin/sh
set -e
cd "$(dirname "$0")/.."
BRANCH="${1:-main}"
echo "=== Running CI for branch: $BRANCH ==="
echo "--- Checking code ---"
./build/check.sh
echo "--- Building book ---"
./build/book.sh
echo "=== CI passed ==="


@@ -0,0 +1,3 @@
.terraform
*.tfstate
venv


@@ -0,0 +1,5 @@
To build:
```bash
npx @marp-team/marp-cli@latest -w slides.md
```


@@ -0,0 +1,9 @@
To run this:
```bash
virtualenv venv
source venv/bin/activate
pip install ansible ansible-dev-tools
ansible-lint download.yml
ansible-playbook -i localhost, download.yml
```

View File

@@ -0,0 +1,8 @@
- name: Test Ansible URL Validation
hosts: localhost
tasks:
- name: Download a file
ansible.builtin.get_url:
url: "http:/wikipedia.org/"
dest: "/tmp/ansible-test/wikipedia.html"
mode: '0900'


File diff suppressed because one or more lines are too long


@@ -0,0 +1,241 @@
---
theme: uncover
---
# Here is the story of Petit Poisson
---
<img src="./Happy_swimmer.jpg" width="600"/>
---
<img src="./happy_landscape_swimmer.jpg" width="1000"/>
---
<img src="./Happy_swimmer.jpg" width="200"/>
<img src="./tryrust.org.png" width="600"/>
[https://tryrust.org](https://tryrust.org)
---
<img src="./texto_deploy_prod_1.png" width="600"/>
---
<img src="./texto_deploy_prod_2.png" width="600"/>
---
<img src="./texto_deploy_prod_3.png" width="600"/>
---
<img src="./texto_deploy_prod_4.png" width="600"/>
---
## Demo time
---
<img src="./Happy_swimmer_sunglasses.jpg" width="1000"/>
---
<img src="./texto_download_wikipedia.png" width="600"/>
---
<img src="./ansible.jpg" width="200"/>
## Ansible❓
---
<img src="./Happy_swimmer.jpg" width="200"/>
```yaml
- name: Download wikipedia
hosts: localhost
tasks:
- name: Download a file
ansible.builtin.get_url:
url: "https:/wikipedia.org/"
dest: "/tmp/ansible-test/wikipedia.html"
mode: '0900'
```
---
<img src="./Happy_swimmer.jpg" width="200"/>
```
ansible-lint download.yml
Passed: 0 failure(s), 0 warning(s) on 1 files. Last profile that met the validation criteria was 'production'.
```
---
```
git push
```
---
<img src="./75_years_later.jpg" width="1100"/>
---
<img src="./texto_download_wikipedia_fail.png" width="600"/>
---
<img src="./Happy_swimmer_reversed.jpg" width="600"/>
---
<img src="./ansible_output_fail.jpg" width="1100"/>
---
<img src="./Happy_swimmer_reversed_1hit.jpg" width="600"/>
---
<img src="./ansible_crossed_out.jpg" width="400"/>
---
<img src="./terraform.jpg" width="400"/>
## Terraform❓❗
---
<img src="./Happy_swimmer_reversed_1hit.jpg" width="200"/>
<img src="./terraform.jpg" width="200"/>
```tf
provider "docker" {}
resource "docker_network" "invalid_network" {
name = "my-invalid-network"
ipam_config {
subnet = "172.17.0.0/33"
}
}
```
---
<img src="./Happy_swimmer_reversed_1hit.jpg" width="100"/>
<img src="./terraform.jpg" width="200"/>
```
terraform plan
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# docker_network.invalid_network will be created
+ resource "docker_network" "invalid_network" {
+ driver = (known after apply)
+ id = (known after apply)
+ internal = (known after apply)
+ ipam_driver = "default"
+ name = "my-invalid-network"
+ options = (known after apply)
+ scope = (known after apply)
+ ipam_config {
+ subnet = "172.17.0.0/33"
# (2 unchanged attributes hidden)
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
```
---
```
terraform apply
```
---
```
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
```
---
```
docker_network.invalid_network: Creating...
│ Error: Unable to create network: Error response from daemon: invalid network config:
│ invalid subnet 172.17.0.0/33: invalid CIDR block notation
│ with docker_network.invalid_network,
│ on main.tf line 11, in resource "docker_network" "invalid_network":
│ 11: resource "docker_network" "invalid_network" {
```
---
<img src="./Happy_swimmer_reversed_fullhit.jpg" width="1100"/>
---
<img src="./ansible_crossed_out.jpg" width="300"/>
<img src="./terraform_crossed_out.jpg" width="400"/>
<img src="./Happy_swimmer_reversed_fullhit.jpg" width="300"/>
---
## Harmony❓❗
---
Demo time
---
<img src="./Happy_swimmer.jpg" width="300"/>
---
# 🎼
Harmony: [https://git.nationtech.io/nationtech/harmony](https://git.nationtech.io/nationtech/harmony)
<img src="./qrcode_gitea_nationtech.png" width="120"/>
LinkedIn: [https://www.linkedin.com/in/jean-gabriel-gill-couture/](https://www.linkedin.com/in/jean-gabriel-gill-couture/)
Email: [jg@nationtech.io](mailto:jg@nationtech.io)



@@ -0,0 +1,40 @@
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/http" {
version = "3.5.0"
hashes = [
"h1:8bUoPwS4hahOvzCBj6b04ObLVFXCEmEN8T/5eOHmWOM=",
"zh:047c5b4920751b13425efe0d011b3a23a3be97d02d9c0e3c60985521c9c456b7",
"zh:157866f700470207561f6d032d344916b82268ecd0cf8174fb11c0674c8d0736",
"zh:1973eb9383b0d83dd4fd5e662f0f16de837d072b64a6b7cd703410d730499476",
"zh:212f833a4e6d020840672f6f88273d62a564f44acb0c857b5961cdb3bbc14c90",
"zh:2c8034bc039fffaa1d4965ca02a8c6d57301e5fa9fff4773e684b46e3f78e76a",
"zh:5df353fc5b2dd31577def9cc1a4ebf0c9a9c2699d223c6b02087a3089c74a1c6",
"zh:672083810d4185076c81b16ad13d1224b9e6ea7f4850951d2ab8d30fa6e41f08",
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3",
"zh:7b4200f18abdbe39904b03537e1a78f21ebafe60f1c861a44387d314fda69da6",
"zh:843feacacd86baed820f81a6c9f7bd32cf302db3d7a0f39e87976ebc7a7cc2ee",
"zh:a9ea5096ab91aab260b22e4251c05f08dad2ed77e43e5e4fadcdfd87f2c78926",
"zh:d02b288922811739059e90184c7f76d45d07d3a77cc48d0b15fd3db14e928623",
]
}
provider "registry.terraform.io/hashicorp/local" {
version = "2.5.3"
hashes = [
"h1:1Nkh16jQJMp0EuDmvP/96f5Unnir0z12WyDuoR6HjMo=",
"zh:284d4b5b572eacd456e605e94372f740f6de27b71b4e1fd49b63745d8ecd4927",
"zh:40d9dfc9c549e406b5aab73c023aa485633c1b6b730c933d7bcc2fa67fd1ae6e",
"zh:6243509bb208656eb9dc17d3c525c89acdd27f08def427a0dce22d5db90a4c8b",
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3",
"zh:885d85869f927853b6fe330e235cd03c337ac3b933b0d9ae827ec32fa1fdcdbf",
"zh:bab66af51039bdfcccf85b25fe562cbba2f54f6b3812202f4873ade834ec201d",
"zh:c505ff1bf9442a889ac7dca3ac05a8ee6f852e0118dd9a61796a2f6ff4837f09",
"zh:d36c0b5770841ddb6eaf0499ba3de48e5d4fc99f4829b6ab66b0fab59b1aaf4f",
"zh:ddb6a407c7f3ec63efb4dad5f948b54f7f4434ee1a2607a49680d494b1776fe1",
"zh:e0dafdd4500bec23d3ff221e3a9b60621c5273e5df867bc59ef6b7e41f5c91f6",
"zh:ece8742fd2882a8fc9d6efd20e2590010d43db386b920b2a9c220cfecc18de47",
"zh:f4c6b3eb8f39105004cf720e202f04f57e3578441cfb76ca27611139bc116a82",
]
}


@@ -0,0 +1,10 @@
provider "http" {}
data "http" "remote_file" {
url = "http:/example.com/file.txt"
}
resource "local_file" "downloaded_file" {
content = data.http.remote_file.body
filename = "${path.module}/downloaded_file.txt"
}


@@ -0,0 +1,24 @@
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
provider "registry.terraform.io/kreuzwerker/docker" {
version = "3.0.2"
constraints = "~> 3.0.1"
hashes = [
"h1:cT2ccWOtlfKYBUE60/v2/4Q6Stk1KYTNnhxSck+VPlU=",
"zh:15b0a2b2b563d8d40f62f83057d91acb02cd0096f207488d8b4298a59203d64f",
"zh:23d919de139f7cd5ebfd2ff1b94e6d9913f0977fcfc2ca02e1573be53e269f95",
"zh:38081b3fe317c7e9555b2aaad325ad3fa516a886d2dfa8605ae6a809c1072138",
"zh:4a9c5065b178082f79ad8160243369c185214d874ff5048556d48d3edd03c4da",
"zh:5438ef6afe057945f28bce43d76c4401254073de01a774760169ac1058830ac2",
"zh:60b7fadc287166e5c9873dfe53a7976d98244979e0ab66428ea0dea1ebf33e06",
"zh:61c5ec1cb94e4c4a4fb1e4a24576d5f39a955f09afb17dab982de62b70a9bdd1",
"zh:a38fe9016ace5f911ab00c88e64b156ebbbbfb72a51a44da3c13d442cd214710",
"zh:c2c4d2b1fd9ebb291c57f524b3bf9d0994ff3e815c0cd9c9bcb87166dc687005",
"zh:d567bb8ce483ab2cf0602e07eae57027a1a53994aba470fa76095912a505533d",
"zh:e83bf05ab6a19dd8c43547ce9a8a511f8c331a124d11ac64687c764ab9d5a792",
"zh:e90c934b5cd65516fbcc454c89a150bfa726e7cf1fe749790c7480bbeb19d387",
"zh:f05f167d2eaf913045d8e7b88c13757e3cf595dd5cd333057fdafc7c4b7fed62",
"zh:fcc9c1cea5ce85e8bcb593862e699a881bd36dffd29e2e367f82d15368659c3d",
]
}


@@ -0,0 +1,17 @@
terraform {
required_providers {
docker = {
source = "kreuzwerker/docker"
version = "~> 3.0.1" # Adjust version as needed
}
}
}
provider "docker" {}
resource "docker_network" "invalid_network" {
name = "my-invalid-network"
ipam_config {
subnet = "172.17.0.0/33"
}
}



@@ -1 +1,37 @@
Not much here yet, see the `adr` folder for now. More to come in time!
# Harmony Documentation Hub
Welcome to the Harmony documentation. This is the main entry point for learning everything from core concepts to building your own Score, Topologies, and Capabilities.
## 1. Getting Started
If you're new to Harmony, start here:
- [**Getting Started Guide**](./guides/getting-started.md): A step-by-step tutorial that takes you from an empty project to deploying your first application.
- [**Core Concepts**](./concepts.md): A high-level overview of the key concepts in Harmony: `Score`, `Topology`, `Capability`, `Inventory`, `Interpret`, ...
## 2. Use Cases & Examples
See how to use Harmony to solve real-world problems.
- [**PostgreSQL on Local K3D**](./use-cases/postgresql-on-local-k3d.md): Deploy a production-grade PostgreSQL cluster on a local K3D cluster. The fastest way to get started.
- [**OKD on Bare Metal**](./use-cases/okd-on-bare-metal.md): A detailed walkthrough of bootstrapping a high-availability OKD cluster from physical hardware.
## 3. Component Catalogs
Discover existing, reusable components you can use in your Harmony projects.
- [**Scores Catalog**](./catalogs/scores.md): A categorized list of all available `Scores` (the "what").
- [**Topologies Catalog**](./catalogs/topologies.md): A list of all available `Topologies` (the "where").
- [**Capabilities Catalog**](./catalogs/capabilities.md): A list of all available `Capabilities` (the "how").
## 4. Developer Guides
Ready to build your own components? These guides show you how.
- [**Writing a Score**](./guides/writing-a-score.md): Learn how to create your own `Score` and `Interpret` logic to define a new desired state.
- [**Writing a Topology**](./guides/writing-a-topology.md): Learn how to model a new environment (like AWS, GCP, or custom hardware) as a `Topology`.
- [**Adding Capabilities**](./guides/adding-capabilities.md): See how to add a `Capability` to your custom `Topology`.
## 5. Architecture Decision Records
Harmony's design is documented through Architecture Decision Records (ADRs). See the [ADR Overview](./adr/README.md) for a complete index of all decisions.

docs/SUMMARY.md Normal file

@@ -0,0 +1,53 @@
# Summary
[Harmony Documentation](./README.md)
- [Core Concepts](./concepts.md)
- [Getting Started Guide](./guides/getting-started.md)
## Use Cases
- [PostgreSQL on Local K3D](./use-cases/postgresql-on-local-k3d.md)
- [OKD on Bare Metal](./use-cases/okd-on-bare-metal.md)
## Component Catalogs
- [Scores Catalog](./catalogs/scores.md)
- [Topologies Catalog](./catalogs/topologies.md)
- [Capabilities Catalog](./catalogs/capabilities.md)
## Developer Guides
- [Developer Guide](./guides/developer-guide.md)
- [Writing a Score](./guides/writing-a-score.md)
- [Writing a Topology](./guides/writing-a-topology.md)
- [Adding Capabilities](./guides/adding-capabilities.md)
## Configuration
- [Configuration](./concepts/configuration.md)
## Architecture Decision Records
- [ADR Overview](./adr/README.md)
- [000 · ADR Template](./adr/000-ADR-Template.md)
- [001 · Why Rust](./adr/001-rust.md)
- [002 · Hexagonal Architecture](./adr/002-hexagonal-architecture.md)
- [003 · Infrastructure Abstractions](./adr/003-infrastructure-abstractions.md)
- [004 · iPXE](./adr/004-ipxe.md)
- [005 · Interactive Project](./adr/005-interactive-project.md)
- [006 · Secret Management](./adr/006-secret-management.md)
- [007 · Default Runtime](./adr/007-default-runtime.md)
- [008 · Score Display Formatting](./adr/008-score-display-formatting.md)
- [009 · Helm and Kustomize Handling](./adr/009-helm-and-kustomize-handling.md)
- [010 · Monitoring and Alerting](./adr/010-monitoring-and-alerting.md)
- [011 · Multi-Tenant Cluster](./adr/011-multi-tenant-cluster.md)
- [012 · Project Delivery Automation](./adr/012-project-delivery-automation.md)
- [013 · Monitoring Notifications](./adr/013-monitoring-notifications.md)
- [015 · Higher Order Topologies](./adr/015-higher-order-topologies.md)
- [016 · Harmony Agent and Global Mesh](./adr/016-Harmony-Agent-And-Global-Mesh-For-Decentralized-Workload-Management.md)
- [017-1 · NATS Clusters Interconnection](./adr/017-1-Nats-Clusters-Interconnection-Topology.md)
- [018 · Template Hydration for Workload Deployment](./adr/018-Template-Hydration-For-Workload-Deployment.md)
- [019 · Network Bond Setup](./adr/019-Network-bond-setup.md)
- [020 · Interactive Configuration Crate](./adr/020-interactive-configuration-crate.md)
- [020-1 · Zitadel + OpenBao Secure Config Store](./adr/020-1-zitadel-openbao-secure-config-store.md)


@@ -2,7 +2,7 @@
## Status
Proposed
Rejected: see [ADR 020](./020-interactive-configuration-crate.md)
### TODO [#3](https://git.nationtech.io/NationTech/harmony/issues/3):


@@ -0,0 +1,114 @@
# Architecture Decision Record: Higher-Order Topologies
**Initial Author:** Jean-Gabriel Gill-Couture
**Initial Date:** 2025-12-08
**Last Updated Date:** 2025-12-08
## Status
Implemented
## Context
Harmony models infrastructure as **Topologies** (deployment targets like `K8sAnywhereTopology`, `LinuxHostTopology`) implementing **Capabilities** (tech traits like `PostgreSQL`, `Docker`).
**Higher-Order Topologies** (e.g., `FailoverTopology<T>`) compose/orchestrate capabilities *across* multiple underlying topologies (e.g., primary+replica `T`).
Naive design requires manual `impl Capability for HigherOrderTopology<T>` *per T per capability*, causing:
- **Impl explosion**: N topologies × M capabilities = N×M boilerplate.
- **ISP violation**: Topologies forced to impl unrelated capabilities.
- **Maintenance hell**: New topology needs impls for *all* orchestrated capabilities; new capability needs impls for *all* topologies/higher-order.
- **Barrier to extension**: Users can't easily add topologies without todos/panics.
This makes scaling Harmony impractical as ecosystem grows.
## Decision
Use **blanket trait impls** on higher-order topologies to *automatically* derive orchestration:
````rust
/// Higher-Order Topology: Orchestrates capabilities across sub-topologies.
pub struct FailoverTopology<T> {
/// Primary sub-topology.
primary: T,
/// Replica sub-topology.
replica: T,
}
/// Automatically provides PostgreSQL failover for *any* `T: PostgreSQL`.
/// Delegates to primary for queries; orchestrates deploy across both.
#[async_trait]
impl<T: PostgreSQL + Send + Sync> PostgreSQL for FailoverTopology<T> {
    async fn deploy(&self, config: &PostgreSQLConfig) -> Result<String, String> {
        // Deploy primary; extract certs/endpoint;
        // deploy replica with pg_basebackup + TLS passthrough.
        // (Full implementation elided here.)
        todo!()
    }

    // Delegate queries to primary.
    async fn get_replication_certs(&self, cluster_name: &str) -> Result<ReplicationCerts, String> {
        self.primary.get_replication_certs(cluster_name).await
    }
    // ...
}
/// Similarly for other capabilities.
#[async_trait]
impl<T: Docker + Send + Sync> Docker for FailoverTopology<T> {
    // Failover Docker orchestration.
}
````
**Key properties:**
- **Auto-derivation**: `Failover<K8sAnywhere>` gets `PostgreSQL` iff `K8sAnywhere: PostgreSQL`.
- **No boilerplate**: One blanket impl per capability *per higher-order type*.
## Rationale
- **Composition via generics**: Rust trait solver auto-selects impls; zero runtime cost.
- **Compile-time safety**: Missing `T: Capability` → compile error (no panics).
- **Scalable**: O(capabilities) impls per higher-order; new `T` auto-works.
- **ISP-respecting**: Capabilities only surface if sub-topology provides.
- **Centralized logic**: Orchestration (e.g., cert propagation) in one place.
**Example usage:**
````rust
// ✅ Works: K8sAnywhere: PostgreSQL → Failover provides failover PG
let pg_failover: FailoverTopology<K8sAnywhereTopology> = ...;
pg_failover.deploy(&config).await;
// ✅ Works: LinuxHost: Docker → Failover provides failover Docker
let docker_failover: FailoverTopology<LinuxHostTopology> = ...;
docker_failover.deploy_docker().await;
// ❌ Compile fail: K8sAnywhere !: Docker
let invalid: FailoverTopology<K8sAnywhereTopology>;
invalid.deploy_docker(); // `T: Docker` bound unsatisfied
````
## Consequences
**Pros:**
- **Extensible**: New topology `AWSTopology: PostgreSQL` → instant `Failover<AWSTopology>: PostgreSQL`.
- **Lean**: No useless impls (e.g., no `K8sAnywhere: Docker`).
- **Observable**: Logs trace every step.
**Cons:**
- **Monomorphization**: Generics generate code per T (mitigated: few Ts).
- **Delegation opacity**: Relies on rustdoc/logs for internals.
## Alternatives considered
| Approach | Pros | Cons |
|----------|------|------|
| **Manual per-T impls**<br>`impl PG for Failover<K8s> {..}`<br>`impl PG for Failover<Linux> {..}` | Explicit control | N×M explosion; violates ISP; hard to extend. |
| **Dynamic trait objects**<br>`Box<dyn AnyCapability>` | Runtime flex | Perf hit; type erasure; error-prone dispatch. |
| **Mega-topology trait**<br>All-in-one `OrchestratedTopology` | Simple wiring | Monolithic; poor composition. |
| **Registry dispatch**<br>Runtime capability lookup | Decoupled | Complex; no compile safety; perf/debug overhead. |
**Selected**: Blanket impls leverage Rust generics for safe, zero-cost composition.
## Additional Notes
- Applies to `MultisiteTopology<T>`, `ShardedTopology<T>`, etc.
- `FailoverTopology` in `failover.rs` is the first implementation.


@@ -0,0 +1,153 @@
//! Example of Higher-Order Topologies in Harmony.
//! Demonstrates how `FailoverTopology<T>` automatically provides failover for *any* capability
//! supported by a sub-topology `T` via blanket trait impls.
//!
//! Key insight: No manual impls per T or capability -- scales effortlessly.
//! Users can:
//! - Write new `Topology` (impl capabilities on a struct).
//! - Compose with `FailoverTopology` (gets capabilities if T has them).
//! - Compile fails if capability missing (safety).
use async_trait::async_trait;
/// Capability trait: Deploy and manage PostgreSQL.
#[async_trait]
pub trait PostgreSQL {
async fn deploy(&self, config: &PostgreSQLConfig) -> Result<String, String>;
async fn get_replication_certs(&self, cluster_name: &str) -> Result<ReplicationCerts, String>;
}
/// Capability trait: Deploy Docker.
#[async_trait]
pub trait Docker {
async fn deploy_docker(&self) -> Result<String, String>;
}
/// Configuration for PostgreSQL deployments.
#[derive(Clone)]
pub struct PostgreSQLConfig;
/// Replication certificates.
#[derive(Clone)]
pub struct ReplicationCerts;
/// Concrete topology: Kubernetes Anywhere (supports PostgreSQL).
#[derive(Clone)]
pub struct K8sAnywhereTopology;
#[async_trait]
impl PostgreSQL for K8sAnywhereTopology {
async fn deploy(&self, _config: &PostgreSQLConfig) -> Result<String, String> {
// Real impl: Use k8s helm chart, operator, etc.
Ok("K8sAnywhere PostgreSQL deployed".to_string())
}
async fn get_replication_certs(&self, _cluster_name: &str) -> Result<ReplicationCerts, String> {
Ok(ReplicationCerts)
}
}
/// Concrete topology: Linux Host (supports Docker).
#[derive(Clone)]
pub struct LinuxHostTopology;
#[async_trait]
impl Docker for LinuxHostTopology {
async fn deploy_docker(&self) -> Result<String, String> {
// Real impl: Install/configure Docker on host.
Ok("LinuxHost Docker deployed".to_string())
}
}
/// Higher-Order Topology: Composes multiple sub-topologies (primary + replica).
/// Automatically derives *all* capabilities of `T` with failover orchestration.
///
/// - If `T: PostgreSQL`, then `FailoverTopology<T>: PostgreSQL` (blanket impl).
/// - Same for `Docker`, etc. No boilerplate!
/// - Compile-time safe: Missing `T: Capability` → error.
#[derive(Clone)]
pub struct FailoverTopology<T> {
/// Primary sub-topology.
pub primary: T,
/// Replica sub-topology.
pub replica: T,
}
/// Blanket impl: Failover PostgreSQL if T provides PostgreSQL.
/// Delegates reads to primary; deploys to both.
#[async_trait]
impl<T: PostgreSQL + Send + Sync + Clone> PostgreSQL for FailoverTopology<T> {
async fn deploy(&self, config: &PostgreSQLConfig) -> Result<String, String> {
// Orchestrate: Deploy primary first, then replica (e.g., via pg_basebackup).
let primary_result = self.primary.deploy(config).await?;
let replica_result = self.replica.deploy(config).await?;
Ok(format!("Failover PG deployed: {} | {}", primary_result, replica_result))
}
async fn get_replication_certs(&self, cluster_name: &str) -> Result<ReplicationCerts, String> {
// Delegate to primary (replica follows).
self.primary.get_replication_certs(cluster_name).await
}
}
/// Blanket impl: Failover Docker if T provides Docker.
#[async_trait]
impl<T: Docker + Send + Sync + Clone> Docker for FailoverTopology<T> {
async fn deploy_docker(&self) -> Result<String, String> {
// Orchestrate across primary + replica.
let primary_result = self.primary.deploy_docker().await?;
let replica_result = self.replica.deploy_docker().await?;
Ok(format!("Failover Docker deployed: {} | {}", primary_result, replica_result))
}
}
#[tokio::main]
async fn main() {
let config = PostgreSQLConfig;
println!("=== ✅ PostgreSQL Failover (K8sAnywhere supports PG) ===");
let pg_failover = FailoverTopology {
primary: K8sAnywhereTopology,
replica: K8sAnywhereTopology,
};
let result = pg_failover.deploy(&config).await.unwrap();
println!("Result: {}", result);
println!("\n=== ✅ Docker Failover (LinuxHost supports Docker) ===");
let docker_failover = FailoverTopology {
primary: LinuxHostTopology,
replica: LinuxHostTopology,
};
let result = docker_failover.deploy_docker().await.unwrap();
println!("Result: {}", result);
println!("\n=== ❌ Would fail to compile (K8sAnywhere !: Docker) ===");
// let invalid = FailoverTopology {
// primary: K8sAnywhereTopology,
// replica: K8sAnywhereTopology,
// };
// invalid.deploy_docker().await.unwrap(); // Error: `K8sAnywhereTopology: Docker` not satisfied!
// The compiler produces a very clear error message:
// error[E0599]: the method `deploy_docker` exists for struct `FailoverTopology<K8sAnywhereTopology>`, but its trait bounds were not satisfied
// --> src/main.rs:90:9
// |
// 4 | pub struct FailoverTopology<T> {
// | ------------------------------ method `deploy_docker` not found for this struct because it doesn't satisfy `FailoverTopology<K8sAnywhereTopology>: Docker`
// ...
// 37 | struct K8sAnywhereTopology;
// | -------------------------- doesn't satisfy `K8sAnywhereTopology: Docker`
// ...
// 90 | invalid.deploy_docker(); // `T: Docker` bound unsatisfied
// | ^^^^^^^^^^^^^ method cannot be called on `FailoverTopology<K8sAnywhereTopology>` due to unsatisfied trait bounds
// |
// note: trait bound `K8sAnywhereTopology: Docker` was not satisfied
// --> src/main.rs:61:9
// |
// 61 | impl<T: Docker + Send + Sync> Docker for FailoverTopology<T> {
// | ^^^^^^ ------ -------------------
// | |
// | unsatisfied trait bound introduced here
// note: the trait `Docker` must be implemented
}


@@ -0,0 +1,90 @@
# Architecture Decision Record: Global Orchestration Mesh & The Harmony Agent
**Status:** Proposed
**Date:** 2025-12-19
## Context
Harmony is designed to enable a truly decentralized infrastructure where independent clusters—owned by different organizations or running on diverse hardware—can collaborate reliably. This vision combines the decentralization of Web3 with the performance and capabilities of Web2.
Currently, Harmony operates as a stateless CLI tool, invoked manually or via CI runners. While effective for deployment, this model presents a critical limitation: **a CLI cannot react to real-time events.**
To achieve automated failover and dynamic workload management, we need a system that is "always on." Relying on manual intervention or scheduled CI jobs to recover from a cluster failure creates unacceptable latency and prevents us from scaling to thousands of nodes.
Furthermore, we face a challenge in serving diverse workloads:
* **Financial workloads** require absolute consistency (CP - Consistency/Partition Tolerance).
* **AI/Inference workloads** require maximum availability (AP - Availability/Partition Tolerance).
There are many more use cases, but those are the two extremes.
We need a unified architecture that automates cluster coordination and supports both consistency models without requiring a complete re-architecture in the future.
## Decision
We propose a fundamental architectural evolution: transitioning Harmony from a purely ephemeral CLI tool to a system that includes a persistent **Harmony Agent**. It has been clear since the start of the project that this step would eventually be necessary. The Agent will connect to a **Global Orchestration Mesh** based on a strongly consistent protocol.
The proposal consists of four key pillars:
### 1. The Harmony Agent (New Component)
We will develop a long-running process (Daemon/Agent) to be deployed alongside workloads.
* **Shift from CLI:** Unlike the CLI, which applies configuration and exits, the Agent maintains a persistent connection to the mesh.
* **Responsibility:** It actively monitors cluster health, participates in consensus, and executes lifecycle commands (start/stop/fence) instantly when the mesh dictates a state change.
### 2. The Technology: NATS JetStream
We will utilize **NATS JetStream** as the underlying transport and consensus layer for the Agent and the Mesh.
* **Why not raw Raft?** Implementing a raw Raft library requires building and maintaining the transport layer, log compaction, snapshotting, and peer discovery manually. NATS JetStream provides a battle-tested, distributed log and Key-Value store (based on Raft) out of the box, along with a high-performance pub/sub system for event propagation.
* **Role:** It will act as the "source of truth" for the cluster state.
### 3. Strong Consistency at the Mesh Layer
The mesh will operate with **Strong Consistency** by default.
* All critical cluster state changes (topology updates, lease acquisitions, leadership elections) will require consensus among the Agents.
* This ensures that in the event of a network partition, we have a mathematical guarantee of which side holds the valid state, preventing data corruption.
### 4. Public UX: The `FailoverStrategy` Abstraction
To keep the user experience stable and simple, we will expose the complexity of the mesh through a high-level configuration API, tentatively called `FailoverStrategy`.
The user defines the *intent* in their config, and the Harmony Agent automates the *execution*:
* **`FailoverStrategy::AbsoluteConsistency`**:
* *Use Case:* Banking, Transactional DBs.
* *Behavior:* If the mesh detects a partition, the Agent on the minority side immediately halts workloads. No split-brain is ever allowed.
* **`FailoverStrategy::SplitBrainAllowed`**:
* *Use Case:* LLM Inference, Stateless Web Servers.
* *Behavior:* If a partition occurs, the Agent keeps workloads running to maximize uptime. State is reconciled when connectivity returns.
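The intent-based API described above might look roughly like this in Rust. This is a sketch: the `FailoverStrategy` name and its variants come from this ADR, while the handler function and its return values are purely illustrative.

```rust
/// Proposed user-facing strategy enum (variant names from this ADR).
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum FailoverStrategy {
    /// CP: never allow split-brain; halt on the minority side of a partition.
    AbsoluteConsistency,
    /// AP: maximize uptime; keep running through a partition, reconcile later.
    SplitBrainAllowed,
}

/// Illustrative Agent reaction when the mesh reports that this node is on
/// the minority side of a network partition.
pub fn on_minority_partition(strategy: &FailoverStrategy) -> &'static str {
    match strategy {
        FailoverStrategy::AbsoluteConsistency => "halt workloads",
        FailoverStrategy::SplitBrainAllowed => "keep workloads running",
    }
}

fn main() {
    // A banking workload halts; an inference workload keeps serving.
    let strategy = FailoverStrategy::AbsoluteConsistency;
    println!("On partition (minority side): {}", on_minority_partition(&strategy));
}
```

The user declares the intent once in configuration; the Agent holds the reaction logic, so mesh internals can evolve without touching user code.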
## Rationale
**The Necessity of an Agent**
You cannot automate what you do not monitor. Moving to an Agent-based model is the only way to achieve sub-second reaction times to infrastructure failures. It transforms Harmony from a deployment tool into a self-healing platform.
**Scaling & Decentralization**
To allow independent clusters to collaborate, they need a shared language. A strongly consistent mesh allows Cluster A (Organization X) and Cluster B (Organization Y) to agree on workload placement without a central authority.
**Why Strong Consistency First?**
It is technically feasible to relax a strongly consistent system to allow for "Split Brain" behavior (AP) when the user requests it. However, it is nearly impossible to take an eventually consistent system and force it to be strongly consistent (CP) later. By starting with strict constraints, we cover the hardest use cases (Finance) immediately.
**Future Topologies**
While our immediate need is `FailoverTopology` (Multi-site), this architecture supports any future topology logic:
* **`CostTopology`**: Agents negotiate to route workloads to the cluster with the cheapest spot instances.
* **`HorizontalTopology`**: Spreading a single workload across 100 clusters for massive scale.
* **`GeoTopology`**: Ensuring data stays within specific legal jurisdictions.
The mesh provides the *capability* (consensus and messaging); the topology provides the *logic*.
## Consequences
**Positive**
* **Automation:** Eliminates manual failover, enabling massive scale.
* **Reliability:** Guarantees data safety for critical workloads by default.
* **Flexibility:** A single codebase serves both high-frequency trading and AI inference.
* **Stability:** The public API remains abstract, allowing us to optimize the mesh internals without breaking user code.
**Negative**
* **Deployment Complexity:** Users must now deploy and maintain a running service (the Agent) rather than just downloading a binary.
* **Engineering Complexity:** Integrating NATS JetStream and handling distributed state machines is significantly more complex than the current CLI logic.
## Implementation Plan (Short Term)
1. **Agent Bootstrap:** Create the initial scaffold for the Harmony Agent (daemon).
2. **Mesh Integration:** Prototype NATS JetStream embedding within the Agent.
3. **Strategy Implementation:** Add `FailoverStrategy` to the configuration schema and implement the logic in the Agent to read and act on it.
4. **Migration:** Transition the current manual failover scripts into event-driven logic handled by the Agent.


@@ -0,0 +1,189 @@
# 1. ADR 017-1: NATS Cluster Interconnection & Trust Topology
**Status:** Proposed
**Date:** 2026-01-12
**Precedes:** [017-Staleness-Detection-for-Failover](./017-Staleness-Detection-for-Failover.md)
## Context
In ADR 017, we defined the failover mechanisms for the Harmony mesh. However, for a Primary (Site A) and a Replica (Site B) to communicate securely—or for the Global Mesh to function across disparate locations—we must establish a robust Transport Layer Security (TLS) strategy.
Our primary deployment platform is OKD (Kubernetes). While OKD provides an internal `service-ca`, it is designed primarily for intra-cluster service-to-service communication. It lacks the flexibility required for:
1. **Public/External Gateway Identities:** NATS Gateways need to identify themselves via public DNS names or external IPs, not just internal `.svc` cluster domains.
2. **Cross-Cluster Trust:** We need a mechanism to allow Cluster A to trust Cluster B without sharing a single private root key.
## Decision
We will implement an **"Islands of Trust"** topology using **cert-manager** on OKD.
### 1. Per-Cluster Certificate Authorities (CA)
* We explicitly **reject** the use of a single "Supercluster CA" shared across all sites.
* Instead, every Harmony Cluster (Site A, Site B, etc.) will generate its own unique Self-Signed Root CA managed by `cert-manager` inside that cluster.
* **Lifecycle:** Root CAs will have a long duration (e.g., 10 years) to minimize rotation friction, while Leaf Certificates (NATS servers) will remain short-lived (e.g., 90 days) and rotate automatically.
> Note: We have not yet decided whether each Harmony deployment should have a single CA shared by all workloads it manages, or one CA per service that requires interconnection. This ADR leans towards one CA per service, which allows maximum flexibility, but the direction may still change. The alternative, giving each cluster/Harmony deployment a single identity, could make mTLS between tenants very simple.
### 2. Trust Federation via Bundle Exchange
To enable secure communication (mTLS) between clusters (e.g., for NATS Gateways or Leaf Nodes):
* **No Private Keys are shared.**
* We will aggregate the **Public CA Certificates** of all trusted clusters into a shared `ca-bundle.pem`.
* This bundle is distributed to the NATS configuration of every node.
* **Verification Logic:** When Site A connects to Site B, Site A verifies Site B's certificate against the bundle. Since Site B's CA public key is in the bundle, the connection is accepted.
### 3. Tooling
* We will use **cert-manager** (deployed via Operator on OKD) rather than OKD's built-in `service-ca`. This provides us with standard CRDs (`Issuer`, `Certificate`) to manage the lifecycle, rotation, and complex SANs (Subject Alternative Names) required for external connectivity.
* Harmony will manage installation, configuration, and bundle creation across all sites.
## Rationale
**Security Blast Radius (The "Key Leak" Scenario)**
If we used a single global CA and the private key for Site A was compromised (e.g., physical theft of a server from a basement), the attacker could impersonate *any* site in the global mesh.
By using Per-Cluster CAs:
* If Site A is compromised, only Site A's identity is stolen.
* We can "evict" Site A from the mesh simply by removing Site A's Public CA from the `ca-bundle.pem` on the remaining healthy clusters and reloading. The attacker can no longer authenticate.
**Decentralized Autonomy**
This aligns with the "Humane Computing" vision. A local cluster owns its identity. It does not depend on a central authority to issue its certificates. It can function in isolation (offline) indefinitely without needing to "phone home" to renew credentials.
## Consequences
**Positive**
* **High Security:** Compromise of one node does not compromise the global mesh.
* **Flexibility:** Easier to integrate with third-party clusters or partners by simply adding their public CA to the bundle.
* **Standardization:** `cert-manager` is the industry standard, making the configuration portable to non-OKD K8s clusters if needed.
**Negative**
* **Configuration Complexity:** We must manage a mechanism to distribute the `ca-bundle.pem` containing public keys to all sites. This should be automated (e.g., via a Harmony Agent) to ensure timely updates and revocation.
* **Revocation Latency:** Revoking a compromised cluster requires updating and reloading the bundle on all other clusters. This is slower than OCSP/CRL but acceptable for infrastructure-level trust if automation is in place.
---
# 2. Concrete overview: how the process can be implemented manually across multiple OKD clusters
All of this will be automated via Harmony, but to make the process clear, it is outlined in detail here:
## 1. Deploying and Configuring cert-manager on OKD
While OKD has a built-in `service-ca` controller, it is "opinionated" and primarily signs certs for internal services (like `my-svc.my-namespace.svc`). It is **not suitable** for the Harmony Global Mesh because you cannot easily control the Subject Alternative Names (SANs) for external routes (e.g., `nats.site-a.nationtech.io`), nor can you easily export its CA to other clusters.
**The Solution:** Use the **cert-manager Operator for Red Hat OpenShift**.
### Step 1: Install the Operator
1. Log in to the OKD Web Console.
2. Navigate to **Operators** -> **OperatorHub**.
3. Search for **"cert-manager"**.
4. Choose the **"cert-manager Operator for Red Hat OpenShift"** (Red Hat provided) or the community version.
5. Click **Install**. Use the default settings (Namespace: `cert-manager-operator`).
### Step 2: Create the "Island" CA (The Issuer)
Once installed, you define your cluster's unique identity. Apply this YAML to your NATS namespace.
```yaml
# filepath: k8s/01-issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: harmony-selfsigned-issuer
namespace: harmony-nats
spec:
selfSigned: {}
---
# This generates the unique Root CA for THIS specific cluster
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: harmony-root-ca
namespace: harmony-nats
spec:
isCA: true
commonName: "harmony-site-a-ca" # CHANGE THIS per cluster (e.g., site-b-ca)
duration: 87600h # 10 years
renewBefore: 2160h # 3 months before expiry
secretName: harmony-root-ca-secret
privateKey:
algorithm: ECDSA
size: 256
issuerRef:
name: harmony-selfsigned-issuer
kind: Issuer
group: cert-manager.io
---
# This Issuer uses the Root CA generated above to sign NATS certs
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: harmony-ca-issuer
namespace: harmony-nats
spec:
ca:
secretName: harmony-root-ca-secret
```
### Step 3: Generate the NATS Server Certificate
This certificate will be used by the NATS server. It includes both internal DNS names (for local clients) and external DNS names (for the global mesh).
```yaml
# filepath: k8s/02-nats-cert.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: nats-server-cert
namespace: harmony-nats
spec:
secretName: nats-server-tls
duration: 2160h # 90 days
renewBefore: 360h # 15 days
issuerRef:
name: harmony-ca-issuer
kind: Issuer
# CRITICAL: Define all names this server can be reached by
dnsNames:
- "nats"
- "nats.harmony-nats.svc"
- "nats.harmony-nats.svc.cluster.local"
- "*.nats.harmony-nats.svc.cluster.local"
- "nats-gateway.site-a.nationtech.io" # External Route for Mesh
```
## 2. Implementing the "Islands of Trust" (Trust Bundle)
To make Site A and Site B talk, you need to exchange **Public Keys**.
1. **Extract Public CA from Site A:**
```bash
oc get secret harmony-root-ca-secret -n harmony-nats -o jsonpath='{.data.ca\.crt}' | base64 -d > site-a.crt
```
2. **Extract Public CA from Site B:**
```bash
oc get secret harmony-root-ca-secret -n harmony-nats -o jsonpath='{.data.ca\.crt}' | base64 -d > site-b.crt
```
3. **Create the Bundle:**
Combine them into one file.
```bash
cat site-a.crt site-b.crt > ca-bundle.crt
```
4. **Upload Bundle to Both Clusters:**
Create a ConfigMap or Secret in *both* clusters containing this combined bundle.
```bash
oc create configmap nats-trust-bundle --from-file=ca.crt=ca-bundle.crt -n harmony-nats
```
5. **Configure NATS:**
Mount this ConfigMap and point NATS to it.
```conf
# nats.conf snippet
tls {
cert_file: "/etc/nats-certs/tls.crt"
key_file: "/etc/nats-certs/tls.key"
# Point to the bundle containing BOTH Site A and Site B public CAs
ca_file: "/etc/nats-trust/ca.crt"
}
```
This setup ensures that Site A can verify Site B's certificate (signed by `harmony-site-b-ca`) because Site B's CA is in Site A's trust store, and vice versa, without ever sharing the private keys that generated them.
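The verification claim can be sanity-checked locally with throwaway keys. The filenames are hypothetical, and RSA is used for brevity here even though the manifests above use ECDSA:

```shell
# Two independent "island" CAs, as in the bundle exchange above.
openssl req -x509 -newkey rsa:2048 -nodes -keyout a.key -out site-a.crt \
  -subj "/CN=harmony-site-a-ca" -days 365
openssl req -x509 -newkey rsa:2048 -nodes -keyout b.key -out site-b.crt \
  -subj "/CN=harmony-site-b-ca" -days 365
# A NATS server certificate issued by Site B's CA only.
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=nats-gateway.site-b.example"
openssl x509 -req -in srv.csr -CA site-b.crt -CAkey b.key -CAcreateserial \
  -out nats-server.crt -days 90
# The bundle contains both public CAs, so Site B's cert verifies.
cat site-a.crt site-b.crt > ca-bundle.crt
openssl verify -CAfile ca-bundle.crt nats-server.crt
```

Verifying the same certificate against `site-a.crt` alone fails, which is exactly the eviction mechanism: drop a CA from the bundle and its certificates stop being accepted.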


@@ -0,0 +1,141 @@
# Architecture Decision Record: Template Hydration for Kubernetes Manifest Generation
Initial Author: Jean-Gabriel Gill-Couture & Sylvain Tremblay
Initial Date: 2025-01-23
Last Updated Date: 2025-01-23
## Status
Implemented
## Context
Harmony's philosophy is built on three guiding principles: Infrastructure as Resilient Code, Prove It Works — Before You Deploy, and One Unified Model. Our goal is to shift validation and verification as far left as possible, ideally to compile time, rather than discovering errors at deploy time.
After investigating approaches such as compile-checked Askama templates for generating Kubernetes manifests for Helm charts, we found that this approach suffered from several fundamental limitations:
* **Late Validation:** Typos in template syntax or field names are only discovered at deployment time, not during compilation. A mistyped `metadata.name` won't surface until Helm attempts to render the template.
* **Brittle Maintenance:** Templates are string-based with limited IDE support. Refactoring requires grep-and-replace across YAML-like template files, risking subtle breakage.
* **Hard-to-Test Logic:** Testing template output requires mocking the template engine and comparing serialized strings rather than asserting against typed data structures.
* **No Type Safety:** There is no guarantee that the generated YAML will be valid Kubernetes resources without runtime validation.
We also faced a strategic choice around Helm: use it as both *templating engine* and *packaging mechanism*, or decouple these concerns. While Helm's ecosystem integration (Harbor, ArgoCD, OCI registry support) is valuable, the Jinja-like templating is at odds with Harmony's "code-first" ethos.
## Decision
We will adopt the **Template Hydration Pattern**—constructing Kubernetes manifests programmatically using strongly-typed `kube-rs` objects, then serializing them to YAML files for packaging into Helm charts.
Specifically:
* **Write strongly typed `k8s_openapi` Structs:** All Kubernetes resources (Deployment, Service, ConfigMap, etc.) will be constructed using the typed structs generated by `k8s_openapi`.
* **Direct Serialization to YAML:** Rather than rendering templates, we use `serde_yaml::to_string()` to serialize typed objects directly into YAML manifests. This way, YAML is used only as a data-transfer format, not as a templating/programming language, a role it was never designed for.
* **Helm as Packaging-Only:** Helm's role is reduced to packaging pre-rendered templates into a tarball and pushing to OCI registries. No template rendering logic resides within Helm.
* **Ecosystem Preservation:** The generated Helm charts remain fully compatible with Harbor, ArgoCD, and any Helm-compatible tool—the only difference is that the `templates/` directory contains static YAML files.
The implementation in `backend_app.rs` demonstrates this pattern:
```rust
let deployment = Deployment {
metadata: ObjectMeta {
name: Some(self.name.clone()),
labels: Some([("app.kubernetes.io/name".to_string(), self.name.clone())].into()),
..Default::default()
},
spec: Some(DeploymentSpec { /* ... */ }),
..Default::default()
};
let deployment_yaml = serde_yaml::to_string(&deployment)?;
fs::write(templates_dir.join("deployment.yaml"), deployment_yaml)?;
```
## Rationale
**Aligns with "Infrastructure as Resilient Code"**
Harmony's first principle states that infrastructure should be treated like application code. By expressing Kubernetes manifests as Rust structs, we gain:
* **Refactorability:** Rename a label and the compiler catches all usages.
* **IDE Support:** Autocomplete for all Kubernetes API fields; documentation inline.
* **Code Navigation:** Jump to definition shows exactly where a value comes from.
**Achieves "Prove It Works — Before You Deploy"**
The compiler now validates that:
* All required fields are populated (Rust's `Option` type prevents missing fields).
* Field types match expectations (ports are integers, not strings).
* Enums contain valid values (e.g., `ServiceType::ClusterIP`).
This moves what was runtime validation into compile-time checks, fulfilling the "shift left" promise.
**Enables True Unit Testing**
Developers can now write unit tests that assert directly against typed objects:
```rust
let deployment = create_deployment(&app);
assert_eq!(deployment.spec.unwrap().replicas.unwrap(), 3);
assert_eq!(deployment.metadata.name.unwrap(), "my-app");
```
No string parsing, no YAML serialization, no fragile assertions against rendered output.
**Preserves Ecosystem Benefits**
By generating standard Helm chart structures, Harmony retains compatibility with:
* **OCI Registries (Harbor, GHCR):** `helm push` works exactly as before.
* **ArgoCD:** Syncs and manages releases using the generated charts.
* **Existing Workflows:** Teams already consuming Helm charts see no change.
The Helm tarball becomes a "dumb pipe" for transport, which is arguably its ideal role.
## Consequences
### Positive
* **Compile-Time Safety:** A broad class of errors (typos, missing fields, type mismatches) is now caught at build time.
* **Better Developer Experience:** IDE autocomplete, inline documentation, and refactor support significantly reduce the learning curve for Kubernetes manifests.
* **Testability:** Unit tests can validate manifest structure without integration or runtime checks.
* **Auditability:** The source-of-truth for manifests is now pure Rust—easier to review in pull requests than template logic scattered across files.
* **Future-Extensibility:** CustomResources (CRDs) can be supported via `kopium`-generated Rust types, maintaining the same strong typing.
### Negative
* **API Schema Drift:** Kubernetes API changes require regenerating `k8s_openapi` types and updating code. A change in a struct field will cause the build to fail—intentionally, but still requiring the pipeline to be updated.
* **Verbosity:** Typed construction is more verbose than the equivalent template. Builder patterns or helper functions will be needed to keep code readable.
* **Learning Curve:** Contributors must understand both the Kubernetes resource spec *and* the Rust type system, rather than just YAML.
* **Debugging Shift:** When debugging generated YAML, you now trace through Rust code rather than template files—more precise but different mental model.
## Alternatives Considered
### 1. Enhance Askama with Compile-Time Validation
*Pros:* Stay within familiar templating paradigm; minimal code changes.
*Cons:* Rust's type system cannot fully express Kubernetes schema validation without significant macro boilerplate. Errors would still surface at template evaluation time, not compilation.
### 2. Use Helm SDK Programmatically (Go)
*Pros:* Direct access to Helm's template engine; no YAML serialization step.
*Cons:* Would introduce a second language (Go) into a Rust codebase, increasing cognitive load and compilation complexity. No improvement in compile-time safety.
### 3. Raw YAML String Templating (Manual)
*Pros:* Maximum control; no external dependencies.
*Cons:* Even more error-prone than Askama; no structure validation; string concatenation errors abound.
### 4. Use Kustomize for All Manifests
*Pros:* Declarative overlays; standard tool.
*Cons:* Kustomize is itself a layer over YAML templates with its own DSL. It does not provide compile-time type safety and would require externalizing manifest management outside Harmony's codebase.
__Note that this template hydration architecture still allows overriding templates with tools like Kustomize when required.__
## Additional Notes
**Scalability to Future Topologies**
The Template Hydration pattern enables future Harmony architectures to generate manifests dynamically based on topology context. For example, a `CostTopology` might adjust resource requests based on cluster pricing, manipulating the typed `Deployment::spec` directly before serialization.
**Implementation Status**
As of this writing, the pattern is implemented for `BackendApp` deployments (`backend_app.rs`). The next phase is to extend this pattern across all application modules (`webapp.rs`, etc.) and to standardize on this approach for any new implementations.


@@ -0,0 +1,65 @@
# Architecture Decision Record: Network Bonding Configuration via External Automation
Initial Author: Jean-Gabriel Gill-Couture & Sylvain Tremblay
Initial Date: 2026-02-13
Last Updated Date: 2026-02-13
## Status
Accepted
## Context
We need to configure LACP bonds on 10GbE interfaces across all worker nodes in the OpenShift cluster. A significant challenge is that interface names (e.g., `enp1s0f0` vs `ens1f0`) vary across different hardware nodes.
The standard OpenShift mechanism (MachineConfig) applies identical configurations to all nodes in a MachineConfigPool. Since the interface names differ, a single static MachineConfig cannot target specific physical devices across the entire cluster without complex workarounds.
## Decision
We will use the existing "Harmony" automation tool to generate and apply host-specific NetworkManager configuration files directly to the nodes.
1. Harmony will generate the specific `.nmconnection` files for the bond and slaves based on its inventory of interface names.
2. Files will be pushed to `/etc/NetworkManager/system-connections/` on each node.
3. Configuration will be applied via `nmcli` reload or a node reboot.
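As a sketch of step 1, a per-host keyfile for a bond slave could be rendered like this. The section and key names follow NetworkManager's standard keyfile format; the function itself and its signature are hypothetical:

```rust
/// Sketch: render a NetworkManager keyfile for a bond slave, given the
/// per-host interface name from Harmony's inventory.
fn bond_slave_connection(iface: &str, bond: &str) -> String {
    format!(
        "[connection]\n\
         id={iface}-slave\n\
         type=ethernet\n\
         interface-name={iface}\n\
         master={bond}\n\
         slave-type=bond\n"
    )
}

fn main() {
    // Interface names differ per host (e.g. enp1s0f0 vs ens1f0);
    // the inventory supplies the right one for each node.
    let conf = bond_slave_connection("enp1s0f0", "bond0");
    assert!(conf.contains("interface-name=enp1s0f0"));
    print!("{conf}");
}
```

The rendered file would then be pushed to `/etc/NetworkManager/system-connections/` as described above.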
## Rationale
* **Inventory Awareness:** Harmony already possesses the specific interface mapping data for each host.
* **Persistence:** Fedora CoreOS/SCOS allows writing to `/etc`, and these files persist across reboots and OS upgrades (rpm-ostree updates).
* **Avoids Complexity:** This approach avoids the operational overhead of creating unique MachineConfigPools for every single host or hardware variant.
* **Safety:** Unlike wildcard matching, this ensures explicit interface selection, preventing accidental bonding of reserved interfaces (e.g., future separation of Ceph storage traffic).
## Consequences
**Pros:**
* Precise, per-host configuration without polluting the Kubernetes API with hundreds of MachineConfigs.
* Standard Linux networking behavior; easy to debug locally.
* Prevents accidental interface capture (unlike wildcards).
**Cons:**
* **Loss of Declarative K8s State:** The network config is not managed by the Machine Config Operator (MCO).
* **Node Replacement Friction:** Newly provisioned nodes (replacements) will boot with default config. Harmony must be run against new nodes manually or via a hook before they can fully join the cluster workload.
## Alternatives considered
1. **Wildcard Matching in NetworkManager (e.g., `interface-name=enp*`):**
* *Pros:* Single MachineConfig for the whole cluster.
* *Cons:* Rejected because it is too broad. It risks capturing interfaces intended for other purposes (e.g., splitting storage and cluster networks later).
2. **"Kitchen Sink" Configuration:**
* *Pros:* Single file listing every possible interface name as a slave.
* *Cons:* "Dirty" configuration; results in many inactive connections on every host; brittle if new naming schemes appear.
3. **Per-Host MachineConfig:**
* *Pros:* Fully declarative within OpenShift.
* *Cons:* Requires a unique `MachineConfigPool` per host, which is an anti-pattern and unmaintainable at scale.
4. **On-boot Generation Script:**
* *Pros:* Dynamic detection.
* *Cons:* Increases boot complexity; harder to debug if the script fails during startup.
## Additional Notes
While `/etc` is writable and persistent on CoreOS, this configuration falls outside the "Day 1" Ignition process. Operational runbooks must be updated to ensure Harmony runs on any node replacement events.


@@ -0,0 +1,233 @@
# ADR 020-1: Zitadel OIDC and OpenBao Integration for the Config Store
Author: Jean-Gabriel Gill-Couture
Date: 2026-03-18
## Status
Proposed
## Context
ADR 020 defines a unified `harmony_config` crate with a `ConfigStore` trait. The default team-oriented backend is OpenBao, which provides encrypted storage, versioned KV, audit logging, and fine-grained access control.
OpenBao requires authentication. The question is how developers authenticate without introducing new credentials to manage.
The goals are:
- **Zero new credentials.** Developers log in with their existing corporate identity (Google Workspace, GitHub, or Microsoft Entra ID / Azure AD).
- **Headless compatibility.** The flow must work over SSH, inside containers, and in CI — environments with no browser or localhost listener.
- **Minimal friction.** After a one-time login, authentication should be invisible for weeks of active use.
- **Centralized offboarding.** Revoking a user in the identity provider must immediately revoke their access to the config store.
## Decision
Developers authenticate to OpenBao through a two-step process: first, they obtain an OIDC token from Zitadel (`sso.nationtech.io`) using the OAuth 2.0 Device Authorization Grant (RFC 8628); then, they exchange that token for a short-lived OpenBao client token via OpenBao's JWT auth method.
### The authentication flow
#### Step 1: Trigger
The `ConfigManager` attempts to resolve a value via the `StoreSource`. The `StoreSource` checks for a cached OpenBao token in `~/.local/share/harmony/session.json`. If the token is missing or expired, authentication begins.
#### Step 2: Device Authorization Request
Harmony sends a `POST` to Zitadel's device authorization endpoint:
```
POST https://sso.nationtech.io/oauth/v2/device_authorization
Content-Type: application/x-www-form-urlencoded
client_id=<harmony_client_id>&scope=openid email profile offline_access
```
Zitadel responds with:
```json
{
"device_code": "dOcbPeysDhT26ZatRh9n7Q",
"user_code": "GQWC-FWFK",
"verification_uri": "https://sso.nationtech.io/device",
"verification_uri_complete": "https://sso.nationtech.io/device?user_code=GQWC-FWFK",
"expires_in": 300,
"interval": 5
}
```
#### Step 3: User prompt
Harmony prints the code and URL to the terminal:
```
[Harmony] To authenticate, open your browser to:
https://sso.nationtech.io/device
and enter code: GQWC-FWFK
Or visit: https://sso.nationtech.io/device?user_code=GQWC-FWFK
```
If a desktop environment is detected, Harmony also calls `open` / `xdg-open` to launch the browser automatically. The `verification_uri_complete` URL pre-fills the code, so the user only needs to click "Confirm" after logging in.
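The desktop detection and command selection might be sketched as follows. Checking `DISPLAY`/`WAYLAND_DISPLAY` is an assumed simplification of whatever detection logic Harmony actually uses:

```rust
/// Sketch: choose the browser-open helper for the detected environment.
/// The env-var check is an illustrative heuristic, not the real detection.
fn open_command(display: Option<&str>, wayland_display: Option<&str>) -> Option<&'static str> {
    if cfg!(target_os = "macos") {
        Some("open")
    } else if display.is_some() || wayland_display.is_some() {
        Some("xdg-open") // Linux desktop session detected
    } else {
        None // headless (SSH, container): user follows the printed URL
    }
}

fn main() {
    if cfg!(target_os = "linux") {
        assert_eq!(open_command(None, None), None);
        assert_eq!(open_command(Some(":0"), None), Some("xdg-open"));
    }
}
```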
There is no localhost HTTP listener. The CLI does not need to bind a port or receive a callback. This is what makes the device flow work over SSH, in containers, and through corporate firewalls — unlike the `oc login` approach, which spins up a temporary web server to catch a redirect.
#### Step 4: User login
The developer logs in through Zitadel's web UI using one of the configured identity providers:
- **Google Workspace** — for teams using Google as their corporate identity.
- **GitHub** — for open-source or GitHub-centric teams.
- **Microsoft Entra ID (Azure AD)** — for enterprise clients, particularly common in Quebec and the broader Canadian public sector.
Zitadel federates the login to the chosen provider. The developer authenticates with their existing corporate credentials. No new password is created.
#### Step 5: Polling
While the user is authenticating in the browser, Harmony polls Zitadel's token endpoint at the interval specified in the device authorization response (typically 5 seconds):
```
POST https://sso.nationtech.io/oauth/v2/token
Content-Type: application/x-www-form-urlencoded
grant_type=urn:ietf:params:oauth:grant-type:device_code
&device_code=dOcbPeysDhT26ZatRh9n7Q
&client_id=<harmony_client_id>
```
Before the user completes login, Zitadel responds with `authorization_pending`. Once the user consents, Zitadel returns:
```json
{
"access_token": "...",
"token_type": "Bearer",
"expires_in": 3600,
"refresh_token": "...",
"id_token": "eyJhbGciOiJSUzI1NiIs..."
}
```
The `scope=offline_access` in the initial request is what causes Zitadel to issue a `refresh_token`.
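The polling loop in Steps 2–5 can be sketched as follows. The HTTP call is abstracted behind a closure; the `slow_down` back-off rule (add 5 seconds to the interval) comes from RFC 8628:

```rust
use std::time::Duration;

/// Possible outcomes of one poll of the token endpoint (RFC 8628 §3.5).
enum PollResult {
    Pending,
    SlowDown,
    Token(String),
}

/// Sketch of the device-flow polling loop. `poll_once` stands in for the
/// real HTTP POST to Zitadel's /oauth/v2/token endpoint.
fn poll_for_token(
    mut poll_once: impl FnMut() -> PollResult,
    mut interval: Duration,
    max_attempts: u32,
) -> Option<String> {
    for _ in 0..max_attempts {
        match poll_once() {
            PollResult::Token(t) => return Some(t),
            // RFC 8628: on slow_down, increase the interval by 5 seconds.
            PollResult::SlowDown => interval += Duration::from_secs(5),
            PollResult::Pending => {}
        }
        std::thread::sleep(interval);
    }
    None
}

fn main() {
    let mut calls = 0;
    let token = poll_for_token(
        || {
            calls += 1;
            if calls < 3 { PollResult::Pending } else { PollResult::Token("id_token".into()) }
        },
        Duration::from_millis(1), // real flow uses the server-provided interval (~5 s)
        10,
    );
    assert_eq!(token.as_deref(), Some("id_token"));
}
```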
#### Step 6: OpenBao JWT exchange
Harmony sends the `id_token` (a JWT signed by Zitadel) to OpenBao's JWT auth method:
```
POST https://secrets.nationtech.io/v1/auth/jwt/login
Content-Type: application/json
{
"role": "harmony-developer",
"jwt": "eyJhbGciOiJSUzI1NiIs..."
}
```
OpenBao validates the JWT:
1. It fetches Zitadel's public keys from `https://sso.nationtech.io/oauth/v2/keys` (the JWKS endpoint).
2. It verifies the JWT signature.
3. It reads the claims (`email`, `groups`, and any custom claims mapped from the upstream identity provider, such as Azure AD tenant or Google Workspace org).
4. It evaluates the claims against the `bound_claims` and `bound_audiences` configured on the `harmony-developer` role.
5. If validation passes, OpenBao returns a client token:
```json
{
"auth": {
"client_token": "hvs.CAES...",
"policies": ["harmony-dev"],
"metadata": { "role": "harmony-developer" },
"lease_duration": 14400,
"renewable": true
}
}
```
Harmony caches the OpenBao token, the OIDC refresh token, and the token expiry timestamps to `~/.local/share/harmony/session.json` with `0600` file permissions.
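Writing the cache with owner-only permissions might look like this minimal sketch (Unix-specific; the JSON field names are illustrative, since the real `session.json` layout is an implementation detail):

```rust
use std::fs;
use std::io::Write;
use std::os::unix::fs::PermissionsExt;

/// Sketch: persist the session cache readable/writable by the owner only.
fn write_session(path: &std::path::Path, json: &str) -> std::io::Result<()> {
    let mut f = fs::File::create(path)?;
    f.write_all(json.as_bytes())?;
    // 0600: owner read/write, no group or world access.
    fs::set_permissions(path, fs::Permissions::from_mode(0o600))?;
    Ok(())
}

fn main() {
    let path = std::env::temp_dir().join("harmony-session-demo.json");
    // Field names here are hypothetical.
    write_session(&path, r#"{"openbao_token":"hvs...","expires_at":0}"#).unwrap();
    let mode = fs::metadata(&path).unwrap().permissions().mode() & 0o777;
    assert_eq!(mode, 0o600);
    fs::remove_file(&path).unwrap();
}
```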
### OpenBao storage structure
All configuration and secret state is stored in an OpenBao Versioned KV v2 engine.
Path taxonomy:
```
harmony/<organization>/<project>/<environment>/<key>
```
Examples:
```
harmony/nationtech/my-app/staging/PostgresConfig
harmony/nationtech/my-app/production/PostgresConfig
harmony/nationtech/my-app/local-shared/PostgresConfig
```
The `ConfigClass` (Standard vs. Secret) can influence OpenBao policy structure — for example, `Secret`-class paths could require stricter ACLs or additional audit backends — but the path taxonomy itself does not change. This is an operational concern configured in OpenBao policies, not a structural one enforced by path naming.
### Token lifecycle and silent refresh
The system manages three tokens with different lifetimes:
| Token | TTL | Max TTL | Purpose |
|---|---|---|---|
| OpenBao client token | 4 hours | 24 hours | Read/write config store |
| OIDC ID token | 1 hour | — | Exchange for OpenBao token |
| OIDC refresh token | 90 days absolute, 30 days inactivity | — | Obtain new ID tokens silently |
The refresh flow, from the developer's perspective:
1. **Same session (< 4 hours since last use).** The cached OpenBao token is still valid. No network call to Zitadel. Fastest path.
2. **Next day (OpenBao token expired, refresh token valid).** Harmony uses the OIDC `refresh_token` to request a new `id_token` from Zitadel's token endpoint (`grant_type=refresh_token`). It then exchanges the new `id_token` for a fresh OpenBao token. This happens silently. The developer sees no prompt.
3. **OpenBao token near max TTL (approaching 24 hours of cumulative renewals).** Instead of renewing, Harmony re-authenticates using the refresh token to get a completely fresh OpenBao token. Transparent to the user.
4. **After 30 days of inactivity.** The OIDC refresh token expires. Harmony falls back to the device flow (Step 2 above) and prompts the user to re-authenticate in the browser. This is the only scenario where a returning developer sees a login prompt.
5. **User offboarded.** An administrator revokes the user's account or group membership in Zitadel. The next time the refresh token is used, Zitadel rejects it. The device flow also fails because the user can no longer authenticate. Access is terminated without any action needed on the OpenBao side.
OpenBao token renewal uses the `/auth/token/renew-self` endpoint with the `X-Vault-Token` header. Harmony renews proactively at ~75% of the TTL to avoid race conditions.
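The proactive-renewal timing is simple to express; the 75% factor is taken from the description above, and the helper name is hypothetical:

```rust
use std::time::{Duration, SystemTime};

/// Sketch: compute when to proactively renew an OpenBao token.
/// Renewing at ~75% of the lease avoids racing the expiry.
fn renewal_deadline(issued_at: SystemTime, lease: Duration) -> SystemTime {
    issued_at + lease.mul_f64(0.75)
}

fn main() {
    let issued = SystemTime::UNIX_EPOCH;
    let lease = Duration::from_secs(14_400); // 4 h, as in lease_duration above
    let deadline = renewal_deadline(issued, lease);
    assert_eq!(
        deadline.duration_since(issued).unwrap(),
        Duration::from_secs(10_800) // renew after 3 h
    );
}
```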
### OpenBao role configuration
The OpenBao JWT auth role for Harmony developers:
```bash
bao write auth/jwt/config \
oidc_discovery_url="https://sso.nationtech.io" \
bound_issuer="https://sso.nationtech.io"
bao write auth/jwt/role/harmony-developer \
role_type="jwt" \
bound_audiences="<harmony_client_id>" \
user_claim="email" \
groups_claim="urn:zitadel:iam:org:project:roles" \
policies="harmony-dev" \
ttl="4h" \
max_ttl="24h" \
token_type="service"
```
The `bound_audiences` claim ties the role to the specific Harmony Zitadel application. The `groups_claim` allows mapping Zitadel project roles to OpenBao policies for per-team or per-project access control.
### Self-hosted deployments
For organizations running their own infrastructure, the same architecture applies. The operator deploys Zitadel and OpenBao using Harmony's existing `ZitadelScore` and `OpenbaoScore`. The only configuration needed is three environment variables (or their equivalents in the bootstrap config):
- `HARMONY_SSO_URL` — the Zitadel instance URL.
- `HARMONY_SECRETS_URL` — the OpenBao instance URL.
- `HARMONY_SSO_CLIENT_ID` — the Zitadel application client ID.
None of these are secrets. They can be committed to an infrastructure repository or distributed via any convenient channel.
## Consequences
### Positive
- Developers authenticate with existing corporate credentials. No new passwords, no static tokens to distribute.
- The device flow works in every environment: local terminal, SSH, containers, CI runners, corporate VPNs.
- Silent token refresh keeps developers authenticated for weeks without any manual intervention.
- User offboarding is a single action in Zitadel. No OpenBao token rotation or manual revocation required.
- Azure AD / Microsoft Entra ID support addresses the enterprise and public sector market.
### Negative
- The OAuth state machine (device code polling, token refresh, error handling) adds implementation complexity compared to a static token approach.
- Developers must have network access to `sso.nationtech.io` and `secrets.nationtech.io` to pull or push configuration state. True offline work falls back to the local file store, which does not sync with the team.
- The first login per machine requires a browser interaction. Fully headless first-run scenarios (e.g., a fresh CI runner with no pre-seeded tokens) must use `EnvSource` overrides or a service account JWT.


@@ -0,0 +1,177 @@
# ADR 020: Unified Configuration and Secret Management
Author: Jean-Gabriel Gill-Couture
Date: 2026-03-18
## Status
Proposed
## Context
Harmony's orchestration logic depends on runtime data that falls into two categories:
1. **Secrets** — credentials, tokens, private keys.
2. **Operational configuration** — deployment targets, host selections, port assignments, reboot decisions, and similar contextual choices.
Both categories share the same fundamental lifecycle: a value must be acquired before execution can proceed, it may come from several backends (environment variable, remote store, interactive prompt), and it must be shareable across a team without polluting the Git repository.
Treating these categories as separate subsystems forces developers to choose between a "config API" and a "secret API" at every call site. The only meaningful difference between the two is how the storage backend handles the data (plaintext vs. encrypted, audited vs. unaudited) and how the CLI displays it (visible vs. masked). That difference belongs in the backend, not in the application code.
Three concrete problems drive this change:
- **Async terminal corruption.** `inquire` prompts assume exclusive terminal ownership. Background tokio tasks emitting log output during a prompt corrupt the terminal state. This is inherent to Harmony's concurrent orchestration model.
- **Untestable code paths.** Any function containing an inline `inquire` call requires a real TTY to execute. Unit testing is impossible without ignoring the test entirely.
- **No backend integration.** Inline prompts cannot be answered from a remote store, an environment variable, or a CI pipeline. Every automated deployment that passes through a prompting code path requires a human operator at a terminal.
## Decision
A single workspace crate, `harmony_config`, provides all configuration and secret acquisition for Harmony. It replaces both `harmony_secret` and all inline `inquire` usage.
### Schema in Git, state in the store
The Rust type system serves as the configuration schema. Developers declare what configuration is needed by defining structs:
```rust
#[derive(Config, Serialize, Deserialize, JsonSchema, InteractiveParse)]
struct PostgresConfig {
pub host: String,
pub port: u16,
#[config(secret)]
pub password: String,
}
```
These structs live in Git and evolve with the code. When a branch introduces a new field, Git tracks that schema change. The actual values live in an external store — OpenBao by default. No `.env` files, no JSON config files, no YAML in the repository.
### Data classification
```rust
/// Tells the storage backend how to handle the data.
pub enum ConfigClass {
/// Plaintext storage is acceptable.
Standard,
/// Must be encrypted at rest, masked in UI, subject to audit logging.
Secret,
}
```
Classification is determined at the struct level. A struct with no `#[config(secret)]` fields has `ConfigClass::Standard`. A struct with one or more `#[config(secret)]` fields is elevated to `ConfigClass::Secret`. The struct is always stored as a single cohesive JSON blob; field-level splitting across backends is not a concern of the trait.
The `#[config(secret)]` attribute also instructs the `PromptSource` to mask terminal input for that field during interactive prompting.
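The struct-level elevation rule can be sketched as a simple predicate. In reality the decision is made by the derive macro at compile time from the field attributes; this runtime version only illustrates the rule:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum ConfigClass {
    Standard,
    Secret,
}

/// Sketch of the rule the derive macro applies: a struct is Secret-class
/// as soon as any field carries #[config(secret)].
fn classify(field_is_secret: &[bool]) -> ConfigClass {
    if field_is_secret.iter().any(|&s| s) {
        ConfigClass::Secret
    } else {
        ConfigClass::Standard
    }
}

fn main() {
    // PostgresConfig above: host, port, #[config(secret)] password
    assert_eq!(classify(&[false, false, true]), ConfigClass::Secret);
    assert_eq!(classify(&[false, false]), ConfigClass::Standard);
}
```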
### The Config trait
```rust
pub trait Config: Serialize + DeserializeOwned + JsonSchema + InteractiveParseObj + Sized {
/// Stable lookup key. By default, the struct name.
const KEY: &'static str;
/// How the backend should treat this data.
const CLASS: ConfigClass;
}
```
A `#[derive(Config)]` proc macro generates the implementation. The macro inspects field attributes to determine `CLASS`.
### The ConfigStore trait
```rust
#[async_trait]
pub trait ConfigStore: Send + Sync {
async fn get(
&self,
class: ConfigClass,
namespace: &str,
key: &str,
) -> Result<Option<serde_json::Value>, ConfigError>;
async fn set(
&self,
class: ConfigClass,
namespace: &str,
key: &str,
value: &serde_json::Value,
) -> Result<(), ConfigError>;
}
```
The `class` parameter is a hint. The store implementation decides what to do with it. An OpenBao store may route `Secret` data to a different path prefix or apply stricter ACLs. A future store could split fields across backends — that is an implementation concern, not a trait concern.
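A store acting on the hint might route paths like this sketch; the `config`/`secret` prefixes are illustrative, not Harmony's actual OpenBao layout:

```rust
enum ConfigClass {
    Standard,
    Secret,
}

/// Sketch: a store implementation may use the ConfigClass hint to route
/// Secret-class entries under a prefix with stricter ACLs.
fn store_path(class: &ConfigClass, namespace: &str, key: &str) -> String {
    let prefix = match class {
        ConfigClass::Standard => "config",
        ConfigClass::Secret => "secret", // stricter policies / extra audit backends
    };
    format!("{prefix}/{namespace}/{key}")
}

fn main() {
    assert_eq!(
        store_path(&ConfigClass::Secret, "my-app/staging", "PostgresConfig"),
        "secret/my-app/staging/PostgresConfig"
    );
}
```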
### Resolution chain
The `ConfigManager` tries sources in priority order:
1. **`EnvSource`** — reads `HARMONY_CONFIG_{KEY}` as a JSON string. Override hatch for CI/CD pipelines and containerized environments.
2. **`StoreSource`** — wraps a `ConfigStore` implementation. For teams, this is the OpenBao backend authenticated via Zitadel OIDC (see ADR 020-1).
3. **`PromptSource`** — presents an `interactive-parse` prompt on the terminal. Acquires a process-wide async mutex before rendering to prevent log output corruption.
When `PromptSource` obtains a value, the `ConfigManager` persists it back to the `StoreSource` so that subsequent runs — by the same developer or any teammate — resolve without prompting.
Callers that do not include `PromptSource` in their source list never block on a TTY. Test code passes empty source lists and constructs config structs directly.
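The chain's behavior, including the prompt write-back, can be sketched synchronously with string values (the real crate is async and works over `serde_json::Value`; all names here are illustrative):

```rust
use std::collections::HashMap;

/// Sketch of the resolution chain: Env → Store → Prompt. First hit wins;
/// a prompted value is persisted back to the store.
struct Chain {
    env: Option<String>,                  // EnvSource: HARMONY_CONFIG_{KEY}
    store: HashMap<String, String>,       // StoreSource (e.g. OpenBao)
    prompt: Option<Box<dyn Fn() -> String>>, // PromptSource, opt-in
}

impl Chain {
    fn resolve(&mut self, key: &str) -> Option<String> {
        if let Some(v) = &self.env {
            return Some(v.clone());
        }
        if let Some(v) = self.store.get(key) {
            return Some(v.clone());
        }
        // No PromptSource configured => no TTY is ever touched.
        let prompt = self.prompt.as_ref()?;
        let v = prompt();
        // Persist so subsequent runs resolve without prompting.
        self.store.insert(key.to_string(), v.clone());
        Some(v)
    }
}

fn main() {
    let mut chain = Chain {
        env: None,
        store: HashMap::new(),
        prompt: Some(Box::new(|| "hunter2".to_string())),
    };
    // First resolution falls through to the prompt, then writes back.
    assert_eq!(chain.resolve("PostgresConfig").as_deref(), Some("hunter2"));
    assert_eq!(chain.store.get("PostgresConfig").map(String::as_str), Some("hunter2"));
}
```

Note how a chain built without a `prompt` simply returns `None` on a miss, which is exactly the non-blocking behavior test code relies on.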
### Schema versioning
The Rust struct is the schema. When a developer renames a field, removes a field, or changes a type on a branch, the store may still contain data shaped for a previous version of the struct. If another team member who does not yet have that commit runs the code, `serde_json::from_value` will fail on the stale entry.
In the initial implementation, the resolution chain handles this gracefully: a deserialization failure is treated as a cache miss, and the `PromptSource` fires. The prompted value overwrites the stale entry in the store.
This is sufficient for small teams working on short-lived branches. It is not sufficient at scale, where silent re-prompting could mask real configuration drift.
A future iteration will introduce a compile-time schema migration mechanism, similar to how `sqlx` verifies queries against a live database at compile time. The mechanism will:
- Detect schema drift between the Rust struct and the stored JSON.
- Apply named, ordered migration functions to transform stored data forward.
- Reject ambiguous migrations at compile time rather than silently corrupting state.
Until that mechanism exists, teams should treat store entries as soft caches: the struct definition is always authoritative, and the store is best-effort.
## Rationale
**Why merge secrets and config into one crate?** Separate crates with nearly identical trait shapes (`Secret` vs `Config`, `SecretStore` vs `ConfigStore`) force developers to make a classification decision at every call site. A unified crate with a `ConfigClass` discriminator moves that decision to the struct definition, where it belongs.
**Why OpenBao as the default backend?** OpenBao is a fully open-source Vault fork under the Linux Foundation. It runs on-premises with no phone-home requirement — a hard constraint for private cloud and regulated environments. Harmony already deploys OpenBao for clients (`OpenbaoScore`), so no new infrastructure is introduced.
**Why not store values in Git (e.g., encrypted YAML)?** Git-tracked config files create merge conflicts, require re-encryption on team membership changes, and leak metadata (file names, key names) even when values are encrypted. Storing state in OpenBao avoids all of these issues and provides audit logging, access control, and versioned KV out of the box.
**Why keep `PromptSource`?** Removing interactive prompts entirely would break the zero-infrastructure bootstrapping path and eliminate human-confirmation safety gates for destructive operations (interface reconfiguration, node reboot). The problem was never that prompts exist — it is that they were unavoidable and untestable. Making `PromptSource` an explicit, opt-in entry in the source list restores control.
## Consequences
### Positive
- A single API surface for all runtime data acquisition.
- All currently-ignored tests become runnable without TTY access.
- Async terminal corruption is eliminated by the process-wide prompt mutex.
- The bootstrapping path requires no infrastructure for a first run; `PromptSource` alone is sufficient.
- The team path (OpenBao + Zitadel) reuses infrastructure Harmony already deploys.
- User offboarding is a single Zitadel action.
### Negative
- Migrating all inline `inquire` and `harmony_secret` call sites is a significant refactoring effort.
- Until the schema migration mechanism is built, store entries for renamed or removed fields become stale and must be re-prompted.
- The Zitadel device flow introduces a browser step on first login per machine.
## Implementation Plan
### Phase 1: Trait design and crate restructure
Refactor `harmony_config` to define the final `Config`, `ConfigClass`, and `ConfigStore` traits. Update the derive macro to support `#[config(secret)]` and generate the correct `CLASS` constant. Implement `EnvSource` and `PromptSource` against the new traits. Write comprehensive unit tests using mock stores.
### Phase 2: Absorb `harmony_secret`
Migrate the `OpenbaoSecretStore`, `InfisicalSecretStore`, and `LocalFileSecretStore` implementations from `harmony_secret` into `harmony_config` as `ConfigStore` backends. Update all call sites that use `SecretManager::get`, `SecretManager::get_or_prompt`, or `SecretManager::set` to use `harmony_config` equivalents.
### Phase 3: Migrate inline prompts
Replace all inline `inquire` call sites in the `harmony` crate (`infra/brocade.rs`, `infra/network_manager.rs`, `modules/okd/host_network.rs`, and others) with `harmony_config` structs and `get_or_prompt` calls. Un-ignore the affected tests.
### Phase 4: Zitadel and OpenBao integration
Implement the authentication flow described in ADR 020-1. Wire `StoreSource` to use Zitadel OIDC tokens for OpenBao access. Implement token caching and silent refresh.
### Phase 5: Remove `harmony_secret`
Delete the `harmony_secret` and `harmony_secret_derive` crates from the workspace. All functionality now lives in `harmony_config`.

docs/adr/README.md Normal file

@@ -0,0 +1,63 @@
# Architecture Decision Records
An Architecture Decision Record (ADR) documents a significant architectural decision made during the development of Harmony — along with its context, rationale, and consequences.
## Why We Use ADRs
As a platform engineering framework used by a team, Harmony accumulates technical decisions over time. ADRs help us:
- **Track rationale** — understand _why_ a decision was made, not just _what_ was decided
- **Onboard new contributors** — the "why" is preserved even when team membership changes
- **Avoid repeating past mistakes** — previous decisions and their context are searchable
- **Manage technical debt** — ADRs make it easier to revisit and revise past choices
An ADR captures a decision at a point in time. It is not a specification — it is a record of reasoning.
## ADR Format
Every ADR follows this structure:
| Section | Purpose |
|---------|---------|
| **Status** | Proposed / Pending / Accepted / Implemented / Deprecated |
| **Context** | The problem or background — the "why" behind this decision |
| **Decision** | The chosen solution or direction |
| **Rationale** | Reasoning behind the decision |
| **Consequences** | Both positive and negative outcomes |
| **Alternatives considered** | Other options that were evaluated |
| **Additional Notes** | Supplementary context, links, or open questions |
## ADR Index
| Number | Title | Status |
|--------|-------|--------|
| [000](./000-ADR-Template.md) | ADR Template | Reference |
| [001](./001-rust.md) | Why Rust | Accepted |
| [002](./002-hexagonal-architecture.md) | Hexagonal Architecture | Accepted |
| [003](./003-infrastructure-abstractions.md) | Infrastructure Abstractions | Accepted |
| [004](./004-ipxe.md) | iPXE | Accepted |
| [005](./005-interactive-project.md) | Interactive Project | Proposed |
| [006](./006-secret-management.md) | Secret Management | Accepted |
| [007](./007-default-runtime.md) | Default Runtime | Accepted |
| [008](./008-score-display-formatting.md) | Score Display Formatting | Proposed |
| [009](./009-helm-and-kustomize-handling.md) | Helm and Kustomize Handling | Accepted |
| [010](./010-monitoring-and-alerting.md) | Monitoring and Alerting | Accepted |
| [011](./011-multi-tenant-cluster.md) | Multi-Tenant Cluster | Accepted |
| [012](./012-project-delivery-automation.md) | Project Delivery Automation | Proposed |
| [013](./013-monitoring-notifications.md) | Monitoring Notifications | Accepted |
| [015](./015-higher-order-topologies.md) | Higher Order Topologies | Proposed |
| [016](./016-Harmony-Agent-And-Global-Mesh-For-Decentralized-Workload-Management.md) | Harmony Agent and Global Mesh | Proposed |
| [017-1](./017-1-Nats-Clusters-Interconnection-Topology.md) | NATS Clusters Interconnection Topology | Proposed |
| [018](./018-Template-Hydration-For-Workload-Deployment.md) | Template Hydration for Workload Deployment | Proposed |
| [019](./019-Network-bond-setup.md) | Network Bond Setup | Proposed |
| [020](./020-interactive-configuration-crate.md) | Interactive Configuration Crate | Proposed |
| [020-1](./020-1-zitadel-openbao-secure-config-store.md) | Zitadel + OpenBao Secure Config Store | Proposed |
## Contributing
When making a significant technical change:
1. **Check existing ADRs** — the decision may already be documented
2. **Create a new ADR** using the [template](./000-ADR-Template.md) if the change warrants architectural discussion
3. **Set status to Proposed** and open it for team review
4. Once accepted and implemented, update the status accordingly

Some files were not shown because too many files have changed in this diff.