Compare commits


86 Commits

Author SHA1 Message Date
5c628b37b7 wip: added alertreceiver and alert rules which are built and added to the yaml before deploying prometheus, added a few dashboards to grafana. Trying to fix prometheus-server clusterrole/role/serviceaccount so that it can discover targets and kubernetes in a namespaced release where it does not have access to clusterrole 2025-07-09 15:09:38 -04:00
31661aaaf1 fix: prometheus deploys as namespaced resource without prometheus-server clusterrole and clusterrolebinding 2025-07-07 14:33:09 -04:00
2c208df143 fix: deploys by default in the application name namespace 2025-07-07 13:24:21 -04:00
1a6d72dc17 fix: uncommented example
All checks were successful
Run Check Script / check (pull_request) Successful in 1m37s
2025-07-04 16:30:13 -04:00
df9e21807e fix: git conflict
All checks were successful
Run Check Script / check (pull_request) Successful in -6s
2025-07-04 16:22:39 -04:00
b1bf4fd4d5 fix: cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m40s
2025-07-04 16:14:47 -04:00
f702ecd8c9 fix: deploys a lighter weight prometheus and grafana which is limited to their respective namespaces 2025-07-04 16:13:41 -04:00
a19b52e690 fix: properly append YAML in correct places in argoapplication (#80)
All checks were successful
Run Check Script / check (push) Successful in -7s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m56s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #80
2025-07-04 15:32:02 +00:00
b73f2e76d0 Merge pull request 'refact: Make RustWebappScore generic, it is now Application score and takes an application and list of features to attach to the application' (#81) from refact/application into master
All checks were successful
Run Check Script / check (push) Successful in -1s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m38s
Reviewed-on: #81
Reviewed-by: wjro <wrolleman@nationtech.io>
2025-07-04 14:31:38 +00:00
b4534c6ee0 refact: Make RustWebappScore generic, it is now Application score and takes an application and list of features to attach to the application
All checks were successful
Run Check Script / check (pull_request) Successful in -8s
2025-07-04 10:27:16 -04:00
6149249a6c feat: create Argo interpret and kube client apply_yaml to install Argo Applications. Very messy implementation though, must be refactored/improved
All checks were successful
Run Check Script / check (push) Successful in -5s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 4m13s
2025-07-04 09:49:43 -04:00
d9935e20cb Merge pull request 'feat: harmony now defaults to using local k3d cluster. Also created OCICompliant: Application trait to make building images cleaner' (#76) from feat/oci into master
All checks were successful
Run Check Script / check (push) Successful in -9s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 4m4s
Reviewed-on: #76
2025-07-03 19:37:46 +00:00
7b0f3b79b1 Merge remote-tracking branch 'origin/master' into feat/oci
All checks were successful
Run Check Script / check (pull_request) Successful in -8s
2025-07-03 15:36:52 -04:00
e6612245a5 Merge pull request 'feat/cd/localdeploymentdemo' (#79) from feat/cd/localdeploymentdemo into feat/oci
All checks were successful
Run Check Script / check (pull_request) Successful in -9s
Reviewed-on: #79
2025-07-03 19:31:45 +00:00
b4f5b91a57 feat: WIP argocd_score (#78)
Some checks are pending
Compile and package harmony_composer / package_harmony_composer (push) Waiting to run
Run Check Script / check (push) Successful in -8s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #78
Reviewed-by: johnride <jg@nationtech.io>
Co-authored-by: Taha Hawa <taha@taha.dev>
Co-committed-by: Taha Hawa <taha@taha.dev>
2025-07-03 19:30:00 +00:00
d317c0ba76 fix: Continuous delivery now works with rust example to deploy on local k3d, ingress and everything
All checks were successful
Run Check Script / check (pull_request) Successful in -3s
2025-07-03 15:25:43 -04:00
539b8299ae feat(continuousdelivery): Local deployment implementation for demo purposes. Needs a lot of refactoring but it works (or almost works) 2025-07-03 11:55:10 -04:00
5a89495c61 feat: implement helm chart generation and publishing
All checks were successful
Run Check Script / check (pull_request) Successful in -4s
- Added functionality to generate a Helm chart for the application.
- Implemented chart packaging and pushing to an OCI registry.
- Utilized `helm package` and `helm push` commands.
- Included configurable registry URL and project name.
- Added tests to verify chart generation and packaging.
- Improved error handling and logging.
2025-07-03 07:19:37 -04:00
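A minimal sketch of the flow this commit describes, shelling out to `helm package` and `helm push` toward an OCI registry; the chart path, package file name, and registry URL are illustrative placeholders, not values from the commit:

```rust
use std::process::Command;

/// Package a chart directory and push the resulting .tgz to an OCI registry.
fn package_and_push_chart(chart_dir: &str, registry_url: &str) -> Result<(), String> {
    // `helm package <dir>` writes a versioned .tgz into the working directory.
    let package = Command::new("helm")
        .args(["package", chart_dir])
        .output()
        .map_err(|e| format!("failed to run helm package: {e}"))?;
    if !package.status.success() {
        return Err(String::from_utf8_lossy(&package.stderr).into_owned());
    }
    // `helm push <chart.tgz> oci://<registry>/<project>` publishes the package.
    let push = Command::new("helm")
        .args(["push", "chart-0.1.0.tgz", registry_url]) // e.g. "oci://registry.example.com/project"
        .output()
        .map_err(|e| format!("failed to run helm push: {e}"))?;
    if !push.status.success() {
        return Err(String::from_utf8_lossy(&push.stderr).into_owned());
    }
    Ok(())
}
```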
fb7849c010 feat: Add sample leptos webapp as example 2025-07-02 23:13:08 -04:00
6371009c6f breaking: Rename Maestro::new to Maestro::new_without_initialization. This improves UX as it makes it more obvious to users that this method should rarely be used
All checks were successful
Run Check Script / check (pull_request) Successful in -5s
2025-07-02 17:47:23 -04:00
a4aa685a4f feat: harmony now defaults to using local k3d cluster. Also created OCICompliant: Application trait to make building images cleaner
Some checks failed
Run Check Script / check (pull_request) Failing after -33s
2025-07-02 17:42:29 -04:00
6bf10b093c Merge pull request 'refactor/ns' (#74) from refactor/ns into master
All checks were successful
Run Check Script / check (push) Successful in 0s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 4m7s
Reviewed-on: #74
Reviewed-by: taha <taha@noreply.git.nationtech.io>
2025-07-02 19:54:28 +00:00
3eecc2f590 fix: K8sTenantManager is responsible for concrete implementation. K8sAnywhere should delegate
All checks were successful
Run Check Script / check (pull_request) Successful in 4s
2025-07-02 15:51:30 -04:00
3959c07261 Merge remote-tracking branch 'origin/master' into refactor/ns 2025-07-02 15:13:13 -04:00
e50c01c0b3 fix: Forgotten file 🙈
Some checks failed
Run Check Script / check (push) Failing after -28s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m15s
2025-07-02 15:11:03 -04:00
286460d59e Merge pull request 'feat: added default resource limit and request to k8s tenant' (#75) from feat/tenant_limit_range into master
Some checks failed
Run Check Script / check (push) Failing after 38s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m0s
Reviewed-on: #75
Reviewed-by: taha <taha@noreply.git.nationtech.io>
2025-07-02 18:55:04 +00:00
4baa3ae707 feat: added default resource limit and request to k8s tenant
Some checks failed
Run Check Script / check (pull_request) Failing after 35s
2025-07-02 14:06:08 -04:00
82119076cf fix: merge conflict
Some checks failed
Run Check Script / check (pull_request) Failing after 41s
2025-07-02 13:46:26 -04:00
f2a350fae6 fix: comments from pr
All checks were successful
Run Check Script / check (pull_request) Successful in 5s
2025-07-02 13:35:20 -04:00
197770a603 feat: Add ntfy score (#69)
Some checks failed
Run Check Script / check (push) Failing after 42s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 4m4s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #69
2025-07-02 16:19:35 +00:00
ab69a2c264 feat: add service monitors support to prom (#66)
Some checks failed
Run Check Script / check (push) Failing after 45s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m30s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #66
Co-authored-by: taha <taha@noreply.git.nationtech.io>
Co-committed-by: taha <taha@noreply.git.nationtech.io>
2025-07-02 15:29:16 +00:00
e857efa92f fix merge conflict
All checks were successful
Run Check Script / check (pull_request) Successful in 1m50s
2025-07-02 11:26:27 -04:00
2ff3f4afa9 Merge pull request 'feat: Introduce Application trait, not too sure how it will evolve but it makes sense, at the very least to identify the Application, also some minor refactoring' (#73) from feat/applicationTrait into master
Some checks failed
Run Check Script / check (push) Failing after 50s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #73
2025-07-02 15:25:26 +00:00
2f6a11ead7 Merge pull request 'feat: Application Interpret still WIP but now call ensure_installed on features, also introduced a rust app example, completed work on clone_box behavior' (#72) from feat/rust_cd into master
Some checks failed
Run Check Script / check (push) Successful in 2m4s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #72
2025-07-02 15:20:24 +00:00
7de9860dcf refactor: monitoring takes namespace from tenant
All checks were successful
Run Check Script / check (pull_request) Successful in -6s
2025-07-02 11:14:24 -04:00
6e884cff3a feat: Start default implementation to ArgoCD for ContinuousDelivery feature
Some checks failed
Run Check Script / check (pull_request) Failing after -34s
2025-07-02 11:14:24 -04:00
c74c51090a feat: Introduce Application trait, not too sure how it will evolve but it makes sense, at the very least to identify the Application, also some minor refactoring
Some checks failed
Run Check Script / check (pull_request) Failing after -38s
2025-07-02 09:48:26 -04:00
8ae0d6b548 feat: Application Interpret still WIP but now call ensure_installed on features, also introduced a rust app example, completed work on clone_box behavior
All checks were successful
Run Check Script / check (pull_request) Successful in -6s
2025-07-01 22:44:44 -04:00
ee02906ce9 fix(composer): spawn commands to allow interaction (#71)
All checks were successful
Run Check Script / check (push) Successful in 1m39s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m5s
Using `Command::output()` executes the command and waits for it to finish before returning the output. In some cases, though, the user might need to interact with the CLI before it can continue, which leaves the command execution hanging.

Instead, using `Command::spawn()` allows forwarding stdin/stdout to the parent process.

Reviewed-on: #71
Reviewed-by: johnride <jg@nationtech.io>
2025-07-01 21:08:19 +00:00
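A minimal sketch of the pattern the commit above describes, with stdio inherited so the user can interact with the child process; the function name is illustrative:

```rust
use std::process::{Command, Stdio};

/// Run a command while forwarding the parent's stdin/stdout/stderr,
/// so the user can interact with the child (prompts, TUIs, etc.).
fn run_interactive(program: &str, args: &[&str]) -> std::io::Result<std::process::ExitStatus> {
    Command::new(program)
        .args(args)
        .stdin(Stdio::inherit())
        .stdout(Stdio::inherit())
        .stderr(Stdio::inherit())
        .spawn()? // unlike `output()`, this does not buffer output or block interaction
        .wait() // wait for the child only after stdio is wired up
}
```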
284cc6afd7 feat: Application module architecture and placeholder features (#70)
Some checks failed
Run Check Script / check (push) Successful in 1m34s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 11m22s
With this architecture, we have an extensible application module for which we can easily define new features and add them to application scores.

All this is driven by the ApplicationInterpret, which understands features and makes sure they are "installed".

The drawback of this design is that we now have three different places to launch scores within Harmony : Maestro, Topology and Interpret. This is an architectural smell and I am not sure how to deal with it at the moment.

However, all these places where execution is performed make sense semantically: an ApplicationInterpret must understand ApplicationFeatures and can very well be responsible for them. The same goes for a Topology that provides features itself by composition (e.g. K8sAnywhereTopology implements TenantManager), so it is natural for this very implementation to know how to install itself.

Co-authored-by: Ian Letourneau <ian@noma.to>
Reviewed-on: #70
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-07-01 19:40:30 +00:00
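A rough sketch of the contract the commit above describes; the names `ApplicationFeature`, `ensure_installed`, and `ApplicationInterpret` come from the text, but the signatures and fields are assumptions, not the actual Harmony API:

```rust
use async_trait::async_trait;

#[async_trait]
pub trait ApplicationFeature: Send + Sync {
    /// Bring this feature to its installed state, idempotently.
    async fn ensure_installed(&self) -> Result<(), String>;
}

pub struct ApplicationInterpret {
    features: Vec<Box<dyn ApplicationFeature>>,
}

impl ApplicationInterpret {
    /// Walk every feature attached to the application and make sure it is installed.
    pub async fn execute(&self) -> Result<(), String> {
        for feature in &self.features {
            feature.ensure_installed().await?;
        }
        Ok(())
    }
}
```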
9bf6aac82e doc: Fix curl command for environments without ~/.local/bin/ folder
All checks were successful
Run Check Script / check (push) Successful in 1m21s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m1s
2025-07-01 11:32:24 -04:00
460c8b59e1 wip: helm chart deploys to namespace with resource limits and requests, trying to fix connection refused to api error 2025-06-27 14:47:28 -04:00
8e857bc72a wip: using the name from tenant config as deployment namespace for kubeprometheus deployment or defaulting to monitoring if no tenant config exists 2025-06-26 16:24:19 -04:00
e8d55d27e4 Merge pull request 'feat: added webhook receiver to alertchannels' (#68) from feat/webhook_receiver into master
All checks were successful
Run Check Script / check (push) Successful in 1m34s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m8s
Reviewed-on: #68
Reviewed-by: taha <taha@noreply.git.nationtech.io>
2025-06-26 16:43:25 +00:00
fea7e9ddb9 doc: Improve harmony_composer README single command usage
Some checks failed
Run Check Script / check (push) Successful in 1m34s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
2025-06-26 12:40:39 -04:00
7ec89cdac5 fix: cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m58s
2025-06-26 11:26:07 -04:00
55143dcad4 Merge pull request 'feat: add dry-run functionality and similar dependency' (#62) from feat/dryRun into master
All checks were successful
Run Check Script / check (push) Successful in 1m42s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 9m8s
Reviewed-on: #62
Reviewed-by: wjro <wrolleman@nationtech.io>
2025-06-26 15:14:25 +00:00
17ad92402d feat: added webhook receiver to alertchannels
Some checks failed
Run Check Script / check (pull_request) Failing after 47s
2025-06-26 10:12:18 -04:00
29e74a2712 Merge pull request 'feat: added alert rule and impl for prometheus as well as a few preconfigured bmc alerts for dell server that are used in the monitoring example' (#67) from feat/alert_rules into master
All checks were successful
Run Check Script / check (push) Successful in -1s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 13m10s
Reviewed-on: #67
2025-06-26 13:16:38 +00:00
e16f8fa82e fix: modified directory names to be in line with alert functions and deployment environments
All checks were successful
Run Check Script / check (pull_request) Successful in 1m43s
2025-06-25 16:10:45 -04:00
c21f3084dc feat: added alert rule and impl for prometheus as well as a few preconfigured bmc alerts for dell server that are used in the monitoring example
All checks were successful
Run Check Script / check (pull_request) Successful in 1m35s
2025-06-25 15:10:16 -04:00
2c706225a1 feat: Publishing a release of harmony composer binary as latest-snapshot (#65)
All checks were successful
Run Check Script / check (push) Successful in 1m42s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m44s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #65
Reviewed-by: taha <taha@noreply.git.nationtech.io>
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-06-25 15:14:45 +00:00
acfb93f1a2 feat: add dry-run functionality and similar dependency
All checks were successful
Run Check Script / check (pull_request) Successful in 1m45s
- Implemented a dry-run mode for K8s resource patching, displaying diffs before applying changes.
- Added the `similar` dependency for calculating and displaying text diffs.
- Enhanced K8s resource application to handle various port specifications in NetworkPolicy ingress rules.
- Added support for port ranges and lists of ports in NetworkPolicy rules.
- Updated K8s client to utilize the dry-run configuration setting.
- Added configuration option `HARMONY_DRY_RUN` to enable or disable dry-run mode.
2025-06-24 14:54:22 -04:00
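A minimal sketch of the diff display described above, using the `similar` crate this commit adds; the function name and YAML inputs are illustrative:

```rust
use similar::{ChangeTag, TextDiff};

/// Print a unified-style diff between the live resource and the patched one,
/// the kind of output a dry-run mode would show before applying changes.
fn print_dry_run_diff(current_yaml: &str, desired_yaml: &str) {
    let diff = TextDiff::from_lines(current_yaml, desired_yaml);
    for change in diff.iter_all_changes() {
        let sign = match change.tag() {
            ChangeTag::Delete => "-",
            ChangeTag::Insert => "+",
            ChangeTag::Equal => " ",
        };
        print!("{sign}{change}");
    }
}
```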
f437c40428 impl_monitoring_alerting_kube_prometheus (#64)
All checks were successful
Run Check Script / check (push) Successful in 1m29s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 2m59s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #64
Co-authored-by: Willem <wrolleman@nationtech.io>
Co-committed-by: Willem <wrolleman@nationtech.io>
2025-06-24 18:54:15 +00:00
e06548ac44 feat: Alerting module architecture to make it easy to use and extensible by external crates
All checks were successful
Run Check Script / check (push) Successful in 1m34s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m26s
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Reviewed-on: #61
Reviewed-by: johnride <jg@nationtech.io>
Co-authored-by: Willem <wrolleman@nationtech.io>
Co-committed-by: Willem <wrolleman@nationtech.io>
2025-06-19 14:37:16 +00:00
155e9bac28 feat: create harmony_composer initial version + rework CI (#58)
All checks were successful
Run Check Script / check (push) Successful in 1m38s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 4m15s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #58
Reviewed-by: johnride <jg@nationtech.io>
Co-authored-by: Taha Hawa <taha@taha.dev>
Co-committed-by: Taha Hawa <taha@taha.dev>
2025-06-18 19:52:37 +00:00
7bebc58615 feat: add tenant credential management (#63)
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Adds the foundation for managing tenant credentials, including:

- `TenantCredentialScore` for scoring credential-related operations.
- `TenantCredentialManager` trait for creating users.
- `CredentialMetadata` struct to store credential information.
- `CredentialData` enum to hold credential content.
- `TenantCredentialBundle` struct to encapsulate metadata and content.

This provides a starting point for implementing credential creation, storage, and retrieval within the harmony system.

Reviewed-on: #63
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-06-17 18:28:04 +00:00
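A sketch of the shapes this commit lists; the type names come from the description above, while the fields and variants are assumptions:

```rust
/// Descriptive information about a stored credential.
pub struct CredentialMetadata {
    pub id: String,
    pub tenant: String,
    pub created_at: String,
}

/// The credential content itself; variants are illustrative.
pub enum CredentialData {
    Password(String),
    SshKey { public: String, private: String },
}

/// Bundles metadata and content together, as the commit describes.
pub struct TenantCredentialBundle {
    pub metadata: CredentialMetadata,
    pub data: CredentialData,
}
```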
246d6718c3 docs: Introduce project delivery automation ADR. This is still WIP (#51)
All checks were successful
Run Check Script / check (push) Successful in 1m52s
Reviewed-on: #51
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-06-12 20:00:22 +00:00
d776042e20 docs: Improve README formatting
All checks were successful
Run Check Script / check (push) Successful in 1m49s
Signed-off-by: johnride <jg@nationtech.io>
2025-06-12 18:23:17 +00:00
86c681be70 docs: New README, two options to choose from right now (#59)
All checks were successful
Run Check Script / check (push) Successful in 1m52s
Reviewed-on: #59
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-06-12 18:16:43 +00:00
b94dd1e595 feat: add support for custom CIDR ingress/egress rules (#60)
All checks were successful
Run Check Script / check (push) Successful in 1m53s
- Added `additional_allowed_cidr_ingress` and `additional_allowed_cidr_egress` fields to `TenantNetworkPolicy` to allow specifying custom CIDR blocks for network access.
- Updated K8sTenantManager to parse and apply these CIDR rules to NetworkPolicy ingress and egress rules.
- Added `cidr` dependency to `harmony_macros` and a custom proc macro `cidrv4` to easily parse CIDR strings.
- Updated TenantConfig to default inter-tenant and internet egress to deny-all, and added default empty vectors for CIDR ingress and egress.
- Updated ResourceLimits to implement `Default`.

Reviewed-on: #60
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-06-12 15:24:03 +00:00
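A sketch of how the new fields might look and be populated; the field names and `cidr` crate come from the description above, everything else is assumed (runtime parsing is shown instead of the commit's `cidrv4!` macro to keep the sketch self-contained):

```rust
use cidr::Ipv4Cidr;
use std::str::FromStr;

struct TenantNetworkPolicy {
    additional_allowed_cidr_ingress: Vec<Ipv4Cidr>,
    additional_allowed_cidr_egress: Vec<Ipv4Cidr>,
}

fn example_policy() -> TenantNetworkPolicy {
    TenantNetworkPolicy {
        // Allow one extra ingress range and unrestricted egress, purely as an example.
        additional_allowed_cidr_ingress: vec![Ipv4Cidr::from_str("10.1.0.0/16").unwrap()],
        additional_allowed_cidr_egress: vec![Ipv4Cidr::from_str("0.0.0.0/0").unwrap()],
    }
}
```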
ef5ec4a131 Merge pull request 'feat: Pass configuration when initializing K8sAnywhereTopology' (#57) from feat/configK8sAnywhere into master
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Reviewed-on: #57
2025-06-10 13:01:50 +00:00
a8eb06f686 feat: Pass configuration when initializing K8sAnywhereTopology
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Run Check Script / check (pull_request) Successful in 1m47s
2025-06-10 09:00:38 -04:00
d1678b529e Merge pull request 'feat: K8s Tenant looks good, basic isolation working now' (#56) from feat/k8sTenant into master
All checks were successful
Run Check Script / check (push) Successful in 1m57s
Reviewed-on: #56
2025-06-10 12:59:13 +00:00
1451260d4d feat: K8s Tenant looks good, basic isolation working now
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m48s
2025-06-09 20:39:15 -04:00
415488ba39 feat: K8s apply function now correctly emulates kubectl apply behavior by either creating or updating resources (#55)
Some checks failed
Run Check Script / check (push) Has been cancelled
Reviewed-on: #55
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-06-09 20:19:54 +00:00
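One common way to get `kubectl apply` semantics with kube-rs is server-side apply, where a single patch call creates the resource if absent and updates it otherwise; whether this commit uses that mechanism or a get-then-create/update sequence is not stated, so this is a sketch only:

```rust
use k8s_openapi::api::core::v1::ConfigMap;
use kube::{
    api::{Api, Patch, PatchParams},
    Client,
};

/// Create-or-update a ConfigMap the way `kubectl apply` would.
/// "harmony" is the server-side apply field manager name (illustrative).
async fn apply_configmap(client: Client, cm: &ConfigMap) -> Result<(), kube::Error> {
    let api: Api<ConfigMap> = Api::namespaced(client, "default");
    let name = cm.metadata.name.as_deref().unwrap_or_default();
    api.patch(name, &PatchParams::apply("harmony"), &Patch::Apply(cm))
        .await?;
    Ok(())
}
```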
bf7a6d590c Merge pull request 'TenantManager_impl_k8s_anywhere' (#47) from TenantManager_impl_k8s_anywhere into master
All checks were successful
Run Check Script / check (push) Successful in 1m56s
Reviewed-on: #47
2025-06-09 18:07:32 +00:00
8d8120bbfd fix: K8s ingress module was completely broken, fixed resource definition structure and types
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Run Check Script / check (pull_request) Successful in 1m48s
2025-06-09 14:02:06 -04:00
6cf61ae67c feat: Tenant manager k8s implementation progress : ResourceQuota, NetworkPolicy and Namespace look good. Still WIP 2025-06-09 13:59:49 -04:00
8c65aef127 feat: Can now apply any k8s resource type, both namespaced or cluster scoped 2025-06-09 13:58:40 -04:00
00e71b97f6 chore: Move ADR helper files into folders with their corresponding ADR number
All checks were successful
Run Check Script / check (push) Successful in 1m49s
Run Check Script / check (pull_request) Successful in 1m48s
2025-06-09 13:54:23 -04:00
ee2bba5623 Merge pull request 'feat: Add Default implementation for Harmony Id along with documentation.' (#53) from feat/id_default into TenantManager_impl_k8s_anywhere
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Run Check Script / check (pull_request) Successful in 1m47s
Reviewed-on: #53
2025-06-09 17:47:33 +00:00
118d34db55 Merge pull request 'feat: Initialize k8s tenant properly' (#54) from feat/init_k8s_tenant into feat/id_default
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m47s
Reviewed-on: #54
2025-06-09 17:42:10 +00:00
24e466fadd fix: formatting
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m46s
2025-06-08 23:51:11 -04:00
14fc4345c1 feat: Initialize k8s tenant properly
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m49s
2025-06-08 23:49:08 -04:00
8e472e4c65 feat: Add Default implementation for Harmony Id along with documentation.
Some checks failed
Run Check Script / check (push) Failing after 47s
Run Check Script / check (pull_request) Failing after 45s
This Id implementation is optimized for ease of use. Ids are prefixed with the unix epoch and suffixed with 7 alphanumeric characters, but an Id can also contain any String the user wants to pass in.
2025-06-08 21:23:29 -04:00
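A minimal sketch of the Id scheme described above (epoch prefix, 7-character alphanumeric suffix), assuming the `rand` crate already in the workspace; this is not the actual Harmony implementation:

```rust
use rand::{distributions::Alphanumeric, Rng};
use std::time::{SystemTime, UNIX_EPOCH};

/// Generate an id like "1751500000-a3Xk9Qz".
fn new_id() -> String {
    let epoch = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before unix epoch")
        .as_secs();
    // Seven random alphanumeric characters, as the commit describes.
    let suffix: String = rand::thread_rng()
        .sample_iter(&Alphanumeric)
        .take(7)
        .map(char::from)
        .collect();
    format!("{epoch}-{suffix}")
}
```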
ec17ccc246 feat: Add example-tenant (WIP)
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m53s
2025-06-06 13:59:48 -04:00
5127f44ab3 docs: Add note about pod privilege escalation in ADR 011 Tenant
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Run Check Script / check (pull_request) Successful in 1m46s
2025-06-06 13:56:40 -04:00
2ff70db0b1 wip: Tenant example project
All checks were successful
Run Check Script / check (push) Successful in 1m49s
Run Check Script / check (pull_request) Successful in 1m48s
2025-06-06 13:52:40 -04:00
e17ac1af83 Merge remote-tracking branch 'origin/master' into TenantManager_impl_k8s_anywhere
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m47s
2025-06-04 16:14:21 -04:00
045954f8d3 start network policy
All checks were successful
Run Check Script / check (push) Successful in 1m50s
Run Check Script / check (pull_request) Successful in 1m46s
2025-05-29 18:06:16 -04:00
7c809bf18a Make k8stenantmanager a oncecell
All checks were successful
Run Check Script / check (push) Successful in 1m49s
Run Check Script / check (pull_request) Successful in 1m46s
2025-05-29 16:03:58 -04:00
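A minimal sketch of the once-initialized singleton the commit above refers to, using `std::sync::OnceLock`; the type name is taken from the message, the rest is assumed:

```rust
use std::sync::OnceLock;

struct K8sTenantManager;

// Initialized exactly once, then shared for the lifetime of the process.
static K8S_TENANT_MANAGER: OnceLock<K8sTenantManager> = OnceLock::new();

fn tenant_manager() -> &'static K8sTenantManager {
    K8S_TENANT_MANAGER.get_or_init(|| K8sTenantManager)
}
```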
6490e5e82a Hardcode some limits to protect the overall cluster
Some checks failed
Run Check Script / check (push) Failing after 42s
Run Check Script / check (pull_request) Failing after 47s
2025-05-29 15:49:46 -04:00
5e51f7490c Update request quota
Some checks failed
Run Check Script / check (push) Failing after 46s
2025-05-29 15:41:57 -04:00
97fba07f4e feat: adding kubernetes implementation of tenant manager
Some checks failed
Run Check Script / check (push) Failing after 43s
2025-05-29 14:35:58 -04:00
624e4330bb boilerplate
All checks were successful
Run Check Script / check (push) Successful in 1m47s
2025-05-29 13:36:30 -04:00
141 changed files with 9339 additions and 1184 deletions

.dockerignore (new file, 2 lines)

@@ -0,0 +1,2 @@
target/
Dockerfile

(file name not shown)

@@ -1,11 +1,15 @@
 name: Run Check Script
 on:
   push:
     branches:
       - master
   pull_request:
 jobs:
   check:
-    runs-on: rust-cargo
+    runs-on: docker
+    container:
+      image: hub.nationtech.io/harmony/harmony_composer:latest@sha256:eb0406fcb95c63df9b7c4b19bc50ad7914dd8232ce98e9c9abef628e07c69386
     steps:
       - name: Checkout code
         uses: actions/checkout@v4

(file name not shown)

@@ -0,0 +1,95 @@
name: Compile and package harmony_composer
on:
  push:
    branches:
      - master
jobs:
  package_harmony_composer:
    container:
      image: hub.nationtech.io/harmony/harmony_composer:latest@sha256:eb0406fcb95c63df9b7c4b19bc50ad7914dd8232ce98e9c9abef628e07c69386
    runs-on: dind
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Build for Linux x86_64
        run: cargo build --release --bin harmony_composer --target x86_64-unknown-linux-gnu
      - name: Build for Windows x86_64 GNU
        run: cargo build --release --bin harmony_composer --target x86_64-pc-windows-gnu
      - name: Setup log into hub.nationtech.io
        uses: docker/login-action@v3
        with:
          registry: hub.nationtech.io
          username: ${{ secrets.HUB_BOT_USER }}
          password: ${{ secrets.HUB_BOT_PASSWORD }}
      # TODO: build ARM images and MacOS binaries (or other targets) too
      - name: Update snapshot-latest tag
        run: |
          git config user.name "Gitea CI"
          git config user.email "ci@nationtech.io"
          git tag -f snapshot-latest
          git push origin snapshot-latest --force
      - name: Install jq
        run: apt install -y jq # The current image includes apt lists so we don't have to apt update and rm /var/lib/apt... every time. But if the image is optimized it won't work anymore
      - name: Create or update release
        run: |
          # First, check if release exists and delete it if it does
          RELEASE_ID=$(curl -s -X GET \
            -H "Authorization: token ${{ secrets.GITEATOKEN }}" \
            "https://git.nationtech.io/api/v1/repos/nationtech/harmony/releases/tags/snapshot-latest" \
            | jq -r '.id // empty')
          if [ -n "$RELEASE_ID" ]; then
            # Delete existing release
            curl -X DELETE \
              -H "Authorization: token ${{ secrets.GITEATOKEN }}" \
              "https://git.nationtech.io/api/v1/repos/nationtech/harmony/releases/$RELEASE_ID"
          fi
          # Create new release
          RESPONSE=$(curl -X POST \
            -H "Authorization: token ${{ secrets.GITEATOKEN }}" \
            -H "Content-Type: application/json" \
            -d '{
              "tag_name": "snapshot-latest",
              "name": "Latest Snapshot",
              "body": "Automated snapshot build from master branch",
              "draft": false,
              "prerelease": true
            }' \
            "https://git.nationtech.io/api/v1/repos/nationtech/harmony/releases")
          echo "RELEASE_ID=$(echo $RESPONSE | jq -r '.id')" >> $GITHUB_ENV
      - name: Upload Linux binary
        run: |
          curl -X POST \
            -H "Authorization: token ${{ secrets.GITEATOKEN }}" \
            -H "Content-Type: application/octet-stream" \
            --data-binary "@target/x86_64-unknown-linux-gnu/release/harmony_composer" \
            "https://git.nationtech.io/api/v1/repos/nationtech/harmony/releases/${{ env.RELEASE_ID }}/assets?name=harmony_composer"
      - name: Upload Windows binary
        run: |
          curl -X POST \
            -H "Authorization: token ${{ secrets.GITEATOKEN }}" \
            -H "Content-Type: application/octet-stream" \
            --data-binary "@target/x86_64-pc-windows-gnu/release/harmony_composer.exe" \
            "https://git.nationtech.io/api/v1/repos/nationtech/harmony/releases/${{ env.RELEASE_ID }}/assets?name=harmony_composer.exe"
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: hub.nationtech.io/harmony/harmony_composer:latest

.gitignore (vendored, 1 line changed)

@@ -1,3 +1,4 @@
target
private_repos
log/
*.tgz

Cargo.lock (generated, 1122 lines changed; diff suppressed because it is too large)

Cargo.toml (workspace root)

@@ -11,6 +11,7 @@ members = [
"opnsense-config-xml",
"harmony_cli",
"k3d",
"harmony_composer",
]
[workspace.package]
@@ -19,28 +20,35 @@ readme = "README.md"
license = "GNU AGPL v3"
[workspace.dependencies]
log = "0.4.22"
env_logger = "0.11.5"
derive-new = "0.7.0"
async-trait = "0.1.82"
tokio = { version = "1.40.0", features = ["io-std", "fs", "macros", "rt-multi-thread"] }
cidr = "0.2.3"
russh = "0.45.0"
russh-keys = "0.45.0"
rand = "0.8.5"
url = "2.5.4"
kube = "0.98.0"
k8s-openapi = { version = "0.24.0", features = ["v1_30"] }
serde_yaml = "0.9.34"
serde-value = "0.7.0"
http = "1.2.0"
inquire = "0.7.5"
convert_case = "0.8.0"
[workspace.dependencies.uuid]
version = "1.11.0"
features = [
"v4", # Lets you generate random UUIDs
"fast-rng", # Use a faster (but still sufficiently random) RNG
"macro-diagnostics", # Enable better diagnostics for compile-time UUIDs
]
log = "0.4"
env_logger = "0.11"
derive-new = "0.7"
async-trait = "0.1"
tokio = { version = "1.40", features = [
"io-std",
"fs",
"macros",
"rt-multi-thread",
] }
cidr = { features = ["serde"], version = "0.2" }
russh = "0.45"
russh-keys = "0.45"
rand = "0.8"
url = "2.5"
kube = { version = "1.1.0", features = [
"config",
"client",
"runtime",
"rustls-tls",
"ws",
"jsonpatch",
] }
k8s-openapi = { version = "0.25", features = ["v1_30"] }
serde_yaml = "0.9"
serde-value = "0.7"
http = "1.2"
inquire = "0.7"
convert_case = "0.8"
chrono = "0.4"
similar = "2"
uuid = { version = "1.11", features = ["v4", "fast-rng", "macro-diagnostics"] }

Dockerfile (new file, 25 lines)

@@ -0,0 +1,25 @@
FROM docker.io/rust:1.87.0 AS build
WORKDIR /app
COPY . .
RUN cargo build --release --bin harmony_composer
FROM docker.io/rust:1.87.0
WORKDIR /app
RUN rustup target add x86_64-pc-windows-gnu
RUN rustup target add x86_64-unknown-linux-gnu
RUN rustup component add rustfmt
RUN apt update
# TODO: Consider adding more supported targets
# nodejs for checkout action, docker for building containers, mingw for cross-compiling for windows
RUN apt install -y nodejs docker.io mingw-w64
COPY --from=build /app/target/release/harmony_composer .
ENTRYPOINT ["/app/harmony_composer"]

README.md (282 lines changed)

@@ -1,171 +1,151 @@
# Harmony : Open Infrastructure Orchestration
# Harmony : Open-source infrastructure orchestration that treats your platform like first-class code.
*By [NationTech](https://nationtech.io)*
## Quick demo
[![Build](https://git.nationtech.io/NationTech/harmony/actions/workflows/check.yml/badge.svg)](https://git.nationtech.io/nationtech/harmony)
[![License](https://img.shields.io/badge/license-AGPLv3-blue?style=flat-square)](LICENSE)
`cargo run -p example-tui`
### Unify
This will launch Harmony's minimalist terminal ui which embeds a few demo scores.
- **Project Scaffolding**
- **Infrastructure Provisioning**
- **Application Deployment**
- **Day-2 operations**
Usage instructions will be displayed at the bottom of the TUI.
All in **one strongly-typed Rust codebase**.
`cargo run --bin example-cli -- --help`
### Deploy anywhere
This is the harmony CLI, a minimal implementation
From a **developer laptop** to a **global production cluster**, a single **source of truth** drives the **full software lifecycle.**
The current help text:
---
````
Usage: example-cli [OPTIONS]
## 1 · The Harmony Philosophy
Options:
-y, --yes Run score(s) or not
-f, --filter <FILTER> Filter query
-i, --interactive Run interactive TUI or not
-a, --all Run all or nth, defaults to all
-n, --number <NUMBER> Run nth matching, zero indexed [default: 0]
-l, --list list scores, will also be affected by run filter
-h, --help Print help
-V, --version Print version
```
Infrastructure is essential, but it shouldn't be your core business. Harmony is built on three guiding principles that make modern platforms reliable, repeatable, and easy to reason about.
## Core architecture
| Principle | What it means for you |
|-----------|-----------------------|
| **Infrastructure as Resilient Code** | Replace sprawling YAML and bash scripts with type-safe Rust. Test, refactor, and version your platform just like application code. |
| **Prove It Works — Before You Deploy** | Harmony uses the compiler to verify that your application's needs match the target environment's capabilities at **compile-time**, eliminating an entire class of runtime outages. |
| **One Unified Model** | Software and infrastructure are a single system. Harmony models them together, enabling deep automation—from bare-metal servers to Kubernetes workloads—with zero context switching. |
![Harmony Core Architecture](docs/diagrams/Harmony_Core_Architecture.drawio.svg)
````
## Supporting a new field in OPNSense `config.xml`
These principles surface as simple, ergonomic Rust APIs that let teams focus on their product while trusting the platform underneath.
Two steps:
- Supporting the field in `opnsense-config-xml`
- Enabling Harmony to control the field
---
We'll use the `filename` field in the `dhcpcd` section of the file as an example.
## 2 · Quick Start
### Supporting the field
The snippet below spins up a complete **production-grade LAMP stack** with monitoring. Swap it for your own scores to deploy anything from microservices to machine-learning pipelines.
As type checking is enforced, every field from `config.xml` must be known to the code. Each subsection of `config.xml` has its own `.rs` file. For the `dhcpcd` section, we'll modify `opnsense-config-xml/src/data/dhcpd.rs`.
When a new field appears in the XML file, an error like this will be thrown and Harmony will panic:
```
Running `/home/stremblay/nt/dir/harmony/target/debug/example-nanodc`
Found unauthorized element filename
thread 'main' panicked at opnsense-config-xml/src/data/opnsense.rs:54:14:
OPNSense received invalid string, should be full XML: ()
```
Define the missing field (`filename`) in the `DhcpInterface` struct of `opnsense-config-xml/src/data/dhcpd.rs`:
```
pub struct DhcpInterface {
...
pub filename: Option<String>,
```
Harmony should now be fixed; build and run.
### Controlling the field
Define the `xml field setter` in `opnsense-config/src/modules/dhcpd.rs`.
```
impl<'a> DhcpConfig<'a> {
...
pub fn set_filename(&mut self, filename: &str) {
self.enable_netboot();
self.get_lan_dhcpd().filename = Some(filename.to_string());
}
...
```
Define the `value setter` in the `DhcpServer trait` in `domain/topology/network.rs`
```
#[async_trait]
pub trait DhcpServer: Send + Sync {
...
async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError>;
...
```
Implement the `value setter` in each `DhcpServer` implementation.
`infra/opnsense/dhcp.rs`:
```
#[async_trait]
impl DhcpServer for OPNSenseFirewall {
...
async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError> {
{
let mut writable_opnsense = self.opnsense_config.write().await;
writable_opnsense.dhcp().set_filename(filename);
debug!("OPNsense dhcp server set filename {filename}");
}
Ok(())
}
...
```
`domain/topology/ha_cluster.rs`
```
#[async_trait]
impl DhcpServer for DummyInfra {
...
async fn set_filename(&self, _filename: &str) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
...
```
Add the new field to the DhcpScore in `modules/dhcp.rs`
```
pub struct DhcpScore {
...
pub filename: Option<String>,
```
Define it in its implementation in `modules/okd/dhcp.rs`
```
impl OKDDhcpScore {
...
Self {
dhcp_score: DhcpScore {
...
filename: Some("undionly.kpxe".to_string()),
```
Define it in its implementation in `modules/okd/bootstrap_dhcp.rs`
```
impl OKDDhcpScore {
...
Self {
dhcp_score: DhcpScore::new(
...
Some("undionly.kpxe".to_string()),
```
Update the interpret (function called by the `execute` fn of the interpret) so it now updates the `filename` field value in `modules/dhcp.rs`
```
impl DhcpInterpret {
...
let filename_outcome = match &self.score.filename {
Some(filename) => {
let dhcp_server = Arc::new(topology.dhcp_server.clone());
dhcp_server.set_filename(&filename).await?;
Outcome::new(
InterpretStatus::SUCCESS,
format!("Dhcp Interpret Set filename to {filename}"),
)
}
None => Outcome::noop(),
```rust
use harmony::{
data::Version,
inventory::Inventory,
maestro::Maestro,
modules::{
lamp::{LAMPConfig, LAMPScore},
monitoring::monitoring_alerting::MonitoringAlertingStackScore,
},
topology::{K8sAnywhereTopology, Url},
};
if next_server_outcome.status == InterpretStatus::NOOP
&& boot_filename_outcome.status == InterpretStatus::NOOP
&& filename_outcome.status == InterpretStatus::NOOP
#[tokio::main]
async fn main() {
// 1. Describe what you want
let lamp_stack = LAMPScore {
name: "harmony-lamp-demo".into(),
domain: Url::Url(url::Url::parse("https://lampdemo.example.com").unwrap()),
php_version: Version::from("8.3.0").unwrap(),
config: LAMPConfig {
project_root: "./php".into(),
database_size: "4Gi".into(),
..Default::default()
},
};
...
Ok(Outcome::new(
InterpretStatus::SUCCESS,
format!(
"Dhcp Interpret Set next boot to [{:?}], boot_filename to [{:?}], filename to [{:?}]",
self.score.boot_filename, self.score.boot_filename, self.score.filename
// 2. Pick where it should run
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(), // auto-detect hardware / kube-config
K8sAnywhereTopology::from_env(), // local k3d, CI, staging, prod…
)
...
.await
.unwrap();
// 3. Enhance with extra scores (monitoring, CI/CD, …)
let mut monitoring = MonitoringAlertingStackScore::new();
monitoring.namespace = Some(lamp_stack.config.namespace.clone());
maestro.register_all(vec![Box::new(lamp_stack), Box::new(monitoring)]);
// 4. Launch an interactive CLI / TUI
harmony_cli::init(maestro, None).await.unwrap();
}
```
Run it:
```bash
cargo run
```
Harmony analyses the code, shows an execution plan in a TUI, and applies it once you confirm. Same code, same binary—every environment.
---
## 3 · Core Concepts
| Term | One-liner |
|------|-----------|
| **Score<T>** | Declarative description of the desired state (e.g., `LAMPScore`). |
| **Interpret<T>** | Imperative logic that realises a `Score` on a specific environment. |
| **Topology** | An environment (local k3d, AWS, bare-metal) exposing verified *Capabilities* (Kubernetes, DNS, …). |
| **Maestro** | Orchestrator that compiles Scores + Topology, ensuring all capabilities line up **at compile-time**. |
| **Inventory** | Optional catalogue of physical assets for bare-metal and edge deployments. |
A visual overview is in the diagram below.
[Harmony Core Architecture](docs/diagrams/Harmony_Core_Architecture.drawio.svg)
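As a rough code sketch of how these concepts relate (names taken from the table above; the exact signatures are assumptions, not the real Harmony API):

```rust
trait Topology {}

trait Score<T: Topology> {
    fn name(&self) -> String;
    // A Score knows how to produce the Interpret that realises it.
    fn create_interpret(&self) -> Box<dyn Interpret<T>>;
}

trait Interpret<T: Topology> {
    // Imperative logic that brings the topology to the desired state.
    fn execute(&self, topology: &T) -> Result<(), String>;
}
```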
---
## 4 · Install
Prerequisites:
* Rust
* Docker (if you deploy locally)
* `kubectl` / `helm` for Kubernetes-based topologies
```bash
git clone https://git.nationtech.io/nationtech/harmony
cd harmony
cargo build --release # builds the CLI, TUI and libraries
```
---
## 5 · Learning More
* **Architectural Decision Records** dive into the rationale
- [ADR-001 · Why Rust](adr/001-rust.md)
- [ADR-003 · Infrastructure Abstractions](adr/003-infrastructure-abstractions.md)
- [ADR-006 · Secret Management](adr/006-secret-management.md)
- [ADR-011 · Multi-Tenant Cluster](adr/011-multi-tenant-cluster.md)
* **Extending Harmony**: write new Scores / Interprets, add hardware like OPNsense firewalls, or embed Harmony in your own tooling (`/docs`).
* **Community** discussions and roadmap live in [GitLab issues](https://git.nationtech.io/nationtech/harmony/-/issues). PRs, ideas, and feedback are welcome!
---
## 6 · License
Harmony is released under the **GNU AGPL v3**.
> We choose a strong copyleft license to ensure the project—and every improvement to it—remains open and benefits the entire community. Fork it, enhance it, even out-innovate us; just keep it open.
See [LICENSE](LICENSE) for the full text.
---
*Made with ❤️ & 🦀 by the NationTech and the Harmony community*

(file name not shown)

@@ -0,0 +1,73 @@
pub trait MonitoringSystem {}

// 1. Modified AlertReceiver trait:
//    - Removed the problematic `clone` method.
//    - Added `box_clone` which returns a Box<dyn AlertReceiver>.
pub trait AlertReceiver {
    type M: MonitoringSystem;
    fn install(&self, sender: &Self::M) -> Result<(), String>;
    // This method allows concrete types to clone themselves into a Box<dyn AlertReceiver>
    fn box_clone(&self) -> Box<dyn AlertReceiver<M = Self::M>>;
}

#[derive(Clone)]
struct Prometheus {}
impl MonitoringSystem for Prometheus {}

#[derive(Clone)] // Keep derive(Clone) for DiscordWebhook itself
struct DiscordWebhook {}

impl AlertReceiver for DiscordWebhook {
    type M = Prometheus;
    fn install(&self, sender: &Self::M) -> Result<(), String> {
        // Placeholder for actual installation logic
        println!("DiscordWebhook installed for Prometheus monitoring.");
        Ok(())
    }
    // 2. Implement `box_clone` for DiscordWebhook:
    //    This uses the derived `Clone` for DiscordWebhook to create a new boxed instance.
    fn box_clone(&self) -> Box<dyn AlertReceiver<M = Self::M>> {
        Box::new(self.clone())
    }
}

// 3. Implement `std::clone::Clone` for `Box<dyn AlertReceiver<M = M>>`:
//    This allows `Box<dyn AlertReceiver>` to be cloned.
//    The `+ 'static` lifetime bound is often necessary for trait objects stored in collections,
//    ensuring they live long enough.
impl<M: MonitoringSystem + 'static> Clone for Box<dyn AlertReceiver<M = M>> {
    fn clone(&self) -> Self {
        self.box_clone() // Call the custom `box_clone` method
    }
}

// MonitoringConfig can now derive Clone because its `receivers` field
// (Vec<Box<dyn AlertReceiver<M = M>>>) is now cloneable.
#[derive(Clone)]
struct MonitoringConfig<M: MonitoringSystem + 'static> {
    receivers: Vec<Box<dyn AlertReceiver<M = M>>>,
}

// Example usage to demonstrate compilation and functionality
fn main() {
    let prometheus_instance = Prometheus {};
    let discord_webhook_instance = DiscordWebhook {};

    let mut config = MonitoringConfig {
        receivers: Vec::new(),
    };

    // Create a boxed alert receiver
    let boxed_receiver: Box<dyn AlertReceiver<M = Prometheus>> = Box::new(discord_webhook_instance);
    config.receivers.push(boxed_receiver);

    // Clone the config, which will now correctly clone the boxed receiver
    let cloned_config = config.clone();

    println!("Original config has {} receivers.", config.receivers.len());
    println!("Cloned config has {} receivers.", cloned_config.receivers.len());

    // Example of using the installed receiver
    if let Some(receiver) = config.receivers.get(0) {
        let _ = receiver.install(&prometheus_instance);
    }
}

(file name not shown)

@@ -137,8 +137,9 @@ Our approach addresses both customer and team multi-tenancy requirements:
### Implementation Roadmap
1. **Phase 1**: Implement VPN access and manual tenant provisioning
2. **Phase 2**: Deploy TenantScore automation for namespace, RBAC, and NetworkPolicy management
3. **Phase 3**: Integrate Keycloak for centralized identity management
4. **Phase 4**: Add advanced monitoring and per-tenant observability
4. **Phase 3**: Work on privilege escalation from pods, audit for weaknesses, enforce security policies on pod runtimes
3. **Phase 4**: Integrate Keycloak for centralized identity management
4. **Phase 5**: Add advanced monitoring and per-tenant observability
### TenantScore Structure Preview
```rust

(file name not shown)

@@ -0,0 +1,41 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-isolation-policy
  namespace: testtenant
spec:
  podSelector: {} # Selects all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {} # Allow from all pods in the same namespace
  egress:
    - to:
        - podSelector: {} # Allow to all pods in the same namespace
    - to:
        - podSelector: {}
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-dns # Target the openshift-dns namespace
      # Note, only opening port 53 is not enough, will have to dig deeper into this one eventually
      # ports:
      #   - protocol: UDP
      #     port: 53
      #   - protocol: TCP
      #     port: 53
    # Allow egress to public internet only
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8 # RFC1918
              - 172.16.0.0/12 # RFC1918
              - 192.168.0.0/16 # RFC1918
              - 169.254.0.0/16 # Link-local
              - 127.0.0.0/8 # Loopback
              - 224.0.0.0/4 # Multicast
              - 240.0.0.0/4 # Reserved
              - 100.64.0.0/10 # Carrier-grade NAT
              - 0.0.0.0/8 # Reserved

(file name not shown)

@@ -0,0 +1,95 @@
apiVersion: v1
kind: Namespace
metadata:
  name: testtenant
---
apiVersion: v1
kind: Namespace
metadata:
  name: testtenant2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-web
  namespace: testtenant
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-web
  template:
    metadata:
      labels:
        app: test-web
    spec:
      containers:
        - name: nginx
          image: nginxinc/nginx-unprivileged
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-web
  namespace: testtenant
spec:
  selector:
    app: test-web
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-client
  namespace: testtenant
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-client
  template:
    metadata:
      labels:
        app: test-client
    spec:
      containers:
        - name: curl
          image: curlimages/curl:latest
          command: ["/bin/sh", "-c", "sleep 3600"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-web
  namespace: testtenant2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-web
  template:
    metadata:
      labels:
        app: test-web
    spec:
      containers:
        - name: nginx
          image: nginxinc/nginx-unprivileged
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-web
  namespace: testtenant2
spec:
  selector:
    app: test-web
  ports:
    - port: 80
      targetPort: 8080

(file name not shown)

@@ -0,0 +1,63 @@
# Architecture Decision Record: \<Title\>
Initial Author: Jean-Gabriel Gill-Couture
Initial Date: 2025-06-04
Last Updated Date: 2025-06-04
## Status
Proposed
## Context
As Harmony's goal is to make software delivery easier, we must provide an easy way for developers to express their app's semantics and dependencies with great abstractions, in a similar fashion to what the score.dev project is doing.
Thus, we started working on ways to package common types of applications such as LAMP, beginning with `LAMPScore`.
Now it is time for the next step: we want to pave the way towards complete lifecycle automation. To do this, we will start with a way to execute Harmony's modules easily from anywhere, starting locally and in CI environments.
## Decision
To achieve easy, portable execution of Harmony, we will follow this architecture :
- Host a basic harmony release that is compiled with the CLI by our gitea/github server
- This binary will do the following : check if there is a `harmony` folder in the current path
- If yes
- Check if cargo is available locally and compile the harmony binary, or compile it using a Rust docker container; if neither cargo nor a container runtime is available, output a message explaining the situation
- Run the newly compiled binary. (Ideally using PID handoff the way exec does, but some research is needed; handing off the process helps with OS interaction such as terminal apps, signals, exit codes, and process handling, but there might be side effects.)
- If not
- Suggest initializing a project by auto detecting what the project looks like
- When the project type cannot be auto detected, provide links to Harmony's documentation on how to set up a project and a link to the examples folder, and ask the user if they want to initialize an empty Harmony project in the current folder
- harmony/Cargo.toml with dependencies set
- harmony/src/main.rs with an example LAMPScore setup and ready to run
- This same binary can be used in a CI environment to run the target project's Harmony module. By default, we provide these opinionated steps :
1. **An empty check step.** The purpose of this step is to run all tests and checks against the codebase. For complex projects this could involve a very complex pipeline of test environments setup and execution but this is out of scope for now. This is not handled by harmony. For projects with automatic setup, we can fill this step with something like `cargo fmt --check; cargo test; cargo build` but Harmony is not directly involved in the execution of this step.
2. **Package and publish.** Once all checks have passed, the production ready container is built and pushed to a registry. This is done by Harmony.
3. **Deploy to staging automatically.**
4. **Run a sanity check on staging.** As Harmony is responsible for deploying, Harmony should have all the knowledge of how to perform a sanity check on the staging environment. This will, most of the time, be a simple verification of the kubernetes health of all deployed components, and a poke on the public endpoint when there is one.
5. **Deploy to production automatically.** Many projects will require manual approval here, this can be easily set up in the CI afterwards, but our opinion is that
6. **Run a sanity check on production.** Same check as staging, but on production.
*Note on providing a base pipeline:* Having a complete pipeline set up automatically will encourage development teams to build upon it by adding tests where they belong. The goal here is to provide an opinionated solution that works for most small and large projects. Of course, many organizations will need to add steps such as deploying to sandbox environments, requiring more advanced approvals, or more complex publication and coordination with other projects. But this encompasses the basics required to build and deploy software reliably at any scale.
### Environment setup
TBD : For now, environments (tenants) will be set up and configured manually. Harmony will rely on the kubeconfig provided in the environment where it is running to deploy in the namespace.
For the CD tool such as Argo or Flux they will be activated by default by Harmony when using application level Scores such as LAMPScore in a similar way that the container is automatically built. Then, CI deployment steps will be notifying the CD tool using its API of the new release to deploy.
## Rationale
Reasoning behind the decision
## Consequences
Pros/Cons of chosen solution
## Alternatives considered
Pros/Cons of various proposed solutions considered
## Additional Notes

(file name not shown)

@@ -0,0 +1,78 @@
# Architecture Decision Record: Monitoring Notifications
Initial Author: Taha Hawa
Initial Date: 2025-06-26
Last Updated Date: 2025-06-26
## Status
Proposed
## Context
We need to send notifications (typically from AlertManager/Prometheus) and we need to receive said notifications on mobile devices for sure in some way, whether it's push messages, SMS, phone call, email, etc or all of the above.
## Decision
We should go with https://ntfy.sh, except hosted ourselves.
`ntfy` is an open source solution written in Go that has the features we need.
## Rationale
`ntfy` has pretty much everything we need (push notifications, email forwarding, receives via webhook), and nothing/not much we don't. Good fit, lightweight.
## Consequences
Pros:
- topics, with ACLs
- lightweight
- reliable
- easy to configure
- mobile app
- the mobile app can listen via websocket, poll, or receive via Firebase/GCM on Android, or similar on iOS.
- Forward to email
- Text-to-Speech phone call messages using Twilio integration
- Operates based on simple HTTP requests/webhooks, easily usable via AlertManager (see the sketch after the Cons list below)
Cons:
- No SMS pushes
- SQLite DB makes it harder to HA/scale
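For illustration, publishing to ntfy is a plain HTTP POST of the message body to a topic URL. This sketch uses `reqwest`, which is an assumption (it is not a stated dependency), and a placeholder topic URL:

```rust
/// Publish a message to a (self-hosted) ntfy topic.
async fn notify(topic_url: &str, message: &str) -> Result<(), reqwest::Error> {
    reqwest::Client::new()
        .post(topic_url) // e.g. "https://ntfy.example.com/alerts"
        .body(message.to_owned())
        .send()
        .await?
        .error_for_status()?; // treat 4xx/5xx as errors
    Ok(())
}
```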
## Alternatives considered
[AWS SNS](https://aws.amazon.com/sns/):
Pros:
- highly reliable
- no hosting needed
Cons:
- no control, not self hosted
- costs (per usage)
[Apprise](https://github.com/caronc/apprise):
Pros:
- Way more ways of sending notifications
- Can use ntfy as one of the backends/ways of sending
Cons:
- Way too overkill for what we need in terms of features
[Gotify](https://github.com/gotify/server):
Pros:
- simple, lightweight, golang, etc
Cons:
- Push topics are per-user
## Additional Notes

docs/README.md (new file, 1 line)

@@ -0,0 +1 @@
Not much here yet, see the `adr` folder for now. More to come in time!

docs/cyborg-metaphor.md (new file, 13 lines)

@@ -0,0 +1,13 @@
## Conceptual metaphor : The Cyborg and the Central Nervous System
At the heart of Harmony lies a core belief: in modern, decentralized systems, **software and infrastructure are not separate entities.** They are a single, symbiotic organism—a cyborg.
The software is the electronics, the "mind"; the infrastructure is the biological host, the "body". They live or die, thrive or sink together.
Traditional approaches attempt to manage this complex organism with fragmented tools: static YAML for configuration, brittle scripts for automation, and separate Infrastructure as Code (IaC) for provisioning. This creates a disjointed system that struggles to scale or heal itself, making it inadequate for the demands of fully automated, enterprise-grade clusters.
Harmony's goal is to provide the **central nervous system for this cyborg**. We aim to achieve the full automation of complex, decentralized clouds by managing this integrated entity holistically.
To achieve this, a tool must be both robust and powerful. It must manage the entire lifecycle—deployment, upgrades, failure recovery, and decommissioning—with precision. This requires full control over application packaging and a deep, intrinsic integration between the software and the infrastructure it inhabits.
This is why Harmony uses a powerful, living language like Rust. It replaces static, lifeless configuration files with a dynamic, breathing codebase. It allows us to express the complex relationships and behaviors of a modern distributed system, enabling the creation of truly automated, resilient, and powerful platforms that can thrive.

(file name not shown)

@@ -14,8 +14,8 @@ harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }
kube = "0.98.0"
k8s-openapi = { version = "0.24.0", features = [ "v1_30" ] }
kube = "1.1.0"
k8s-openapi = { version = "0.25.0", features = ["v1_30"] }
http = "1.2.0"
serde_yaml = "0.9.34"
inquire.workspace = true

(file name not shown)

@@ -2,10 +2,7 @@ use harmony::{
data::Version,
inventory::Inventory,
maestro::Maestro,
modules::{
lamp::{LAMPConfig, LAMPScore},
monitoring::monitoring_alerting::{AlertChannel, MonitoringAlertingStackScore},
},
modules::lamp::{LAMPConfig, LAMPScore},
topology::{K8sAnywhereTopology, Url},
};
@@ -32,24 +29,28 @@ async fn main() {
},
};
//let monitoring = MonitoringAlertingScore {
// alert_receivers: vec![Box::new(DiscordWebhook {
// url: Url::Url(url::Url::parse("https://discord.idonotexist.com").unwrap()),
// // TODO write url macro
// // url: url!("https://discord.idonotexist.com"),
// })],
// alert_rules: vec![],
// scrape_targets: vec![],
//};
// You can choose the type of Topology you want, we suggest starting with the
// K8sAnywhereTopology as it is the most automatic one that enables you to easily deploy
// locally, to development environment from a CI, to staging, and to production with settings
// that automatically adapt to each environment grade.
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),
K8sAnywhereTopology::new(),
K8sAnywhereTopology::from_env(),
)
.await
.unwrap();
let url = url::Url::parse("https://discord.com/api/webhooks/dummy_channel/dummy_token")
.expect("invalid URL");
let mut monitoring_stack_score = MonitoringAlertingStackScore::new();
monitoring_stack_score.namespace = Some(lamp_stack.config.namespace.clone());
maestro.register_all(vec![Box::new(lamp_stack), Box::new(monitoring_stack_score)]);
maestro.register_all(vec![Box::new(lamp_stack)]);
// Here we bootstrap the CLI, this gives some nice features if you need them
harmony_cli::init(maestro, None).await.unwrap();
}

(file name not shown)

@@ -0,0 +1,13 @@
[package]
name = "example-monitoring"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { version = "0.1.0", path = "../../harmony" }
harmony_cli = { version = "0.1.0", path = "../../harmony_cli" }
harmony_macros = { version = "0.1.0", path = "../../harmony_macros" }
tokio.workspace = true
url.workspace = true

View File

@@ -0,0 +1,86 @@
use std::collections::HashMap;
use harmony::{
inventory::Inventory,
maestro::Maestro,
modules::{
monitoring::{
alert_channel::discord_alert_channel::DiscordWebhook,
alert_rule::prometheus_alert_rule::AlertManagerRuleGroup,
kube_prometheus::{
helm_prometheus_alert_score::HelmPrometheusAlertingScore,
types::{
HTTPScheme, MatchExpression, Operator, Selector, ServiceMonitor,
ServiceMonitorEndpoint,
},
},
},
prometheus::alerts::{
infra::dell_server::{
alert_global_storage_status_critical, alert_global_storage_status_non_recoverable,
global_storage_status_degraded_non_critical,
},
k8s::pvc::high_pvc_fill_rate_over_two_days,
},
},
topology::{K8sAnywhereTopology, Url},
};
#[tokio::main]
async fn main() {
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
url: Url::Url(url::Url::parse("https://discord.doesnt.exist.com").unwrap()),
};
let high_pvc_fill_rate_over_two_days_alert = high_pvc_fill_rate_over_two_days();
let dell_system_storage_degraded = global_storage_status_degraded_non_critical();
let alert_global_storage_status_critical = alert_global_storage_status_critical();
let alert_global_storage_status_non_recoverable = alert_global_storage_status_non_recoverable();
let additional_rules =
AlertManagerRuleGroup::new("pvc-alerts", vec![high_pvc_fill_rate_over_two_days_alert]);
let additional_rules2 = AlertManagerRuleGroup::new(
"dell-server-alerts",
vec![
dell_system_storage_degraded,
alert_global_storage_status_critical,
alert_global_storage_status_non_recoverable,
],
);
let service_monitor_endpoint = ServiceMonitorEndpoint {
port: Some("80".to_string()),
path: "/metrics".to_string(),
scheme: HTTPScheme::HTTP,
..Default::default()
};
let service_monitor = ServiceMonitor {
name: "test-service-monitor".to_string(),
selector: Selector {
match_labels: HashMap::new(),
match_expressions: vec![MatchExpression {
key: "test".to_string(),
operator: Operator::In,
values: vec!["test-service".to_string()],
}],
},
endpoints: vec![service_monitor_endpoint],
..Default::default()
};
let alerting_score = HelmPrometheusAlertingScore {
receivers: vec![Box::new(discord_receiver)],
rules: vec![Box::new(additional_rules), Box::new(additional_rules2)],
service_monitors: vec![service_monitor],
};
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
)
.await
.unwrap();
maestro.register_all(vec![Box::new(alerting_score)]);
harmony_cli::init(maestro, None).await.unwrap();
}

View File

@@ -0,0 +1,13 @@
[package]
name = "example-monitoring-with-tenant"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
cidr.workspace = true
harmony = { version = "0.1.0", path = "../../harmony" }
harmony_cli = { version = "0.1.0", path = "../../harmony_cli" }
tokio.workspace = true
url.workspace = true

View File

@@ -0,0 +1,90 @@
use std::collections::HashMap;
use harmony::{
data::Id,
inventory::Inventory,
maestro::Maestro,
modules::{
monitoring::{
alert_channel::discord_alert_channel::DiscordWebhook,
alert_rule::prometheus_alert_rule::AlertManagerRuleGroup,
kube_prometheus::{
helm_prometheus_alert_score::HelmPrometheusAlertingScore,
types::{
HTTPScheme, MatchExpression, Operator, Selector, ServiceMonitor,
ServiceMonitorEndpoint,
},
},
},
prometheus::alerts::k8s::pvc::high_pvc_fill_rate_over_two_days,
tenant::TenantScore,
},
topology::{
K8sAnywhereTopology, Url,
tenant::{ResourceLimits, TenantConfig, TenantNetworkPolicy},
},
};
#[tokio::main]
async fn main() {
let tenant = TenantScore {
config: TenantConfig {
id: Id::from_string("1234".to_string()),
name: "test-tenant".to_string(),
resource_limits: ResourceLimits {
cpu_request_cores: 6.0,
cpu_limit_cores: 4.0,
memory_request_gb: 4.0,
memory_limit_gb: 4.0,
storage_total_gb: 10.0,
},
network_policy: TenantNetworkPolicy::default(),
},
};
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
url: Url::Url(url::Url::parse("https://discord.doesnt.exist.com").unwrap()),
};
let high_pvc_fill_rate_over_two_days_alert = high_pvc_fill_rate_over_two_days();
let additional_rules =
AlertManagerRuleGroup::new("pvc-alerts", vec![high_pvc_fill_rate_over_two_days_alert]);
let service_monitor_endpoint = ServiceMonitorEndpoint {
port: Some("80".to_string()),
path: "/metrics".to_string(),
scheme: HTTPScheme::HTTP,
..Default::default()
};
let service_monitor = ServiceMonitor {
name: "test-service-monitor".to_string(),
selector: Selector {
match_labels: HashMap::new(),
match_expressions: vec![MatchExpression {
key: "test".to_string(),
operator: Operator::In,
values: vec!["test-service".to_string()],
}],
},
endpoints: vec![service_monitor_endpoint],
..Default::default()
};
let alerting_score = HelmPrometheusAlertingScore {
receivers: vec![Box::new(discord_receiver)],
rules: vec![Box::new(additional_rules)],
service_monitors: vec![service_monitor],
};
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
)
.await
.unwrap();
maestro.register_all(vec![Box::new(tenant), Box::new(alerting_score)]);
harmony_cli::init(maestro, None).await.unwrap();
}

examples/ntfy/Cargo.toml Normal file
View File

@@ -0,0 +1,12 @@
[package]
name = "example-ntfy"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { version = "0.1.0", path = "../../harmony" }
harmony_cli = { version = "0.1.0", path = "../../harmony_cli" }
tokio.workspace = true
url.workspace = true

examples/ntfy/src/main.rs Normal file
View File

@@ -0,0 +1,19 @@
use harmony::{
inventory::Inventory, maestro::Maestro, modules::monitoring::ntfy::ntfy::NtfyScore,
topology::K8sAnywhereTopology,
};
#[tokio::main]
async fn main() {
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
)
.await
.unwrap();
maestro.register_all(vec![Box::new(NtfyScore {
namespace: "monitoring".to_string(),
})]);
harmony_cli::init(maestro, None).await.unwrap();
}

examples/rust/Cargo.toml Normal file
View File

@@ -0,0 +1,14 @@
[package]
name = "example-rust"
version = "0.1.0"
edition = "2024"
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
harmony_macros = { path = "../../harmony_macros" }
tokio = { workspace = true }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }

examples/rust/src/main.rs Normal file
View File

@@ -0,0 +1,62 @@
use std::{path::PathBuf, sync::Arc};
use harmony::{
inventory::Inventory,
maestro::Maestro,
modules::{
application::{
ApplicationScore, RustWebFramework, RustWebapp,
features::{ContinuousDelivery, PrometheusMonitoring},
},
monitoring::{
alert_channel::discord_alert_channel::DiscordWebhook,
alert_rule::prometheus_alert_rule::AlertManagerRuleGroup,
},
prometheus::alerts::k8s::{
pod::pod_in_failed_state, pvc::high_pvc_fill_rate_over_two_days,
},
},
topology::{K8sAnywhereTopology, Url},
};
#[tokio::main]
async fn main() {
env_logger::init();
let application = Arc::new(RustWebapp {
name: "harmony-example-rust-webapp".to_string(),
domain: Url::Url(url::Url::parse("https://rustapp.harmony.example.com").unwrap()),
project_root: PathBuf::from("./examples/rust/webapp"),
framework: Some(RustWebFramework::Leptos),
});
let pod_failed = pod_in_failed_state();
let pod_failed_2 = pod_in_failed_state();
let pod_failed_3 = pod_in_failed_state();
let additional_rules = AlertManagerRuleGroup::new("pod-alerts", vec![pod_failed]);
let additional_rules_2 = AlertManagerRuleGroup::new("pod-alerts-2", vec![pod_failed_2, pod_failed_3]);
let app = ApplicationScore {
features: vec![
//Box::new(ContinuousDelivery {
// application: application.clone(),
//}),
Box::new(PrometheusMonitoring {
application: application.clone(),
alert_receivers: vec![Box::new(DiscordWebhook {
name: "dummy-discord".to_string(),
url: Url::Url(url::Url::parse("https://discord.doesnt.exist.com").unwrap()),
})],
alert_rules: vec![Box::new(additional_rules), Box::new(additional_rules_2)],
}),
// TODO add monitoring, backups, multisite ha, etc
],
application,
};
let topology = K8sAnywhereTopology::from_env();
let mut maestro = Maestro::initialize(Inventory::autoload(), topology)
.await
.unwrap();
maestro.register_all(vec![Box::new(app)]);
harmony_cli::init(maestro, None).await.unwrap();
}

examples/rust/webapp/.gitignore vendored Normal file
View File

@@ -0,0 +1,14 @@
# Generated by Cargo
# will have compiled files and executables
debug/
target/
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
Cargo.lock
# These are backup files generated by rustfmt
**/*.rs.bk
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb

View File

@@ -0,0 +1,93 @@
[package]
name = "harmony-example-rust-webapp"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib", "rlib"]
[workspace]
[dependencies]
actix-files = { version = "0.6", optional = true }
actix-web = { version = "4", optional = true, features = ["macros"] }
console_error_panic_hook = "0.1"
http = { version = "1.0.0", optional = true }
leptos = { version = "0.7.0" }
leptos_meta = { version = "0.7.0" }
leptos_actix = { version = "0.7.0", optional = true }
leptos_router = { version = "0.7.0" }
wasm-bindgen = "=0.2.100"
[features]
csr = ["leptos/csr"]
hydrate = ["leptos/hydrate"]
ssr = [
"dep:actix-files",
"dep:actix-web",
"dep:leptos_actix",
"leptos/ssr",
"leptos_meta/ssr",
"leptos_router/ssr",
]
# Defines a size-optimized profile for the WASM bundle in release mode
[profile.wasm-release]
inherits = "release"
opt-level = 'z'
lto = true
codegen-units = 1
panic = "abort"
[package.metadata.leptos]
# The name used by wasm-bindgen/cargo-leptos for the JS/WASM bundle. Defaults to the crate name
output-name = "harmony-example-rust-webapp"
# The site root folder is where cargo-leptos generate all output. WARNING: all content of this folder will be erased on a rebuild. Use it in your server setup.
site-root = "target/site"
# The site-root relative folder where all compiled output (JS, WASM and CSS) is written
# Defaults to pkg
site-pkg-dir = "pkg"
# [Optional] The source CSS file. If it ends with .sass or .scss then it will be compiled by dart-sass into CSS. The CSS is optimized by Lightning CSS before being written to <site-root>/<site-pkg>/app.css
style-file = "style/main.scss"
# Assets source dir. All files found here will be copied and synchronized to site-root.
# The assets-dir cannot have a sub directory with the same name/path as site-pkg-dir.
#
# Optional. Env: LEPTOS_ASSETS_DIR.
assets-dir = "assets"
# The IP and port (ex: 127.0.0.1:3000) where the server serves the content. Use it in your server setup.
site-addr = "0.0.0.0:3000"
# The port to use for automatic reload monitoring
reload-port = 3001
# [Optional] Command to use when running end2end tests. It will run in the end2end dir.
# [Windows] for non-WSL use "npx.cmd playwright test"
# This binary name can be checked in Powershell with Get-Command npx
end2end-cmd = "npx playwright test"
end2end-dir = "end2end"
# The browserlist query used for optimizing the CSS.
browserquery = "defaults"
# The environment Leptos will run in, usually either "DEV" or "PROD"
env = "DEV"
# The features to use when compiling the bin target
#
# Optional. Can be over-ridden with the command line parameter --bin-features
bin-features = ["ssr"]
# If the --no-default-features flag should be used when compiling the bin target
#
# Optional. Defaults to false.
bin-default-features = false
# The features to use when compiling the lib target
#
# Optional. Can be over-ridden with the command line parameter --lib-features
lib-features = ["hydrate"]
# If the --no-default-features flag should be used when compiling the lib target
#
# Optional. Defaults to false.
lib-default-features = false
# The profile to use for the lib target when compiling for release
#
# Optional. Defaults to "release".
lib-profile-release = "wasm-release"

View File

@@ -0,0 +1,16 @@
FROM rust:bookworm as builder
RUN apt-get update && apt-get install -y --no-install-recommends clang wget && wget https://github.com/cargo-bins/cargo-binstall/releases/latest/download/cargo-binstall-x86_64-unknown-linux-musl.tgz && tar -xvf cargo-binstall-x86_64-unknown-linux-musl.tgz && cp cargo-binstall /usr/local/cargo/bin && rm cargo-binstall-x86_64-unknown-linux-musl.tgz cargo-binstall && apt-get clean && rm -rf /var/lib/apt/lists/*
RUN cargo binstall cargo-leptos -y
RUN rustup target add wasm32-unknown-unknown
WORKDIR /app
COPY . .
RUN cargo leptos build --release -vv
FROM debian:bookworm-slim
RUN groupadd -r appgroup && useradd -r -s /bin/false -g appgroup appuser
ENV LEPTOS_SITE_ADDR=0.0.0.0:3000
EXPOSE 3000/tcp
WORKDIR /home/appuser
COPY --from=builder /app/target/site/pkg /home/appuser/pkg
COPY --from=builder /app/target/release/harmony-example-rust-webapp /home/appuser/harmony-example-rust-webapp
USER appuser
CMD /home/appuser/harmony-example-rust-webapp

View File

@@ -0,0 +1,24 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <https://unlicense.org>

View File

@@ -0,0 +1,72 @@
<picture>
<source srcset="https://raw.githubusercontent.com/leptos-rs/leptos/main/docs/logos/Leptos_logo_Solid_White.svg" media="(prefers-color-scheme: dark)">
<img src="https://raw.githubusercontent.com/leptos-rs/leptos/main/docs/logos/Leptos_logo_RGB.svg" alt="Leptos Logo">
</picture>
# Leptos Starter Template
This is a template for use with the [Leptos](https://github.com/leptos-rs/leptos) web framework and the [cargo-leptos](https://github.com/akesson/cargo-leptos) tool.
## Creating your template repo
If you don't have `cargo-leptos` installed you can install it with
`cargo install cargo-leptos --locked`
Then run
`cargo leptos new --git leptos-rs/start-actix`
to generate a new project template (you will be prompted to enter a project name).
`cd {projectname}`
to go to your newly created project.
Of course, you should explore around the project structure, but the best place to start with your application code is in `src/app.rs`.
## Running your project
`cargo leptos watch`
By default, you can access your local project at `http://localhost:3000`
## Installing Additional Tools
By default, `cargo-leptos` uses `nightly` Rust, `cargo-generate`, and `sass`. If you run into any trouble, you may need to install one or more of these tools.
1. `rustup toolchain install nightly --allow-downgrade` - make sure you have Rust nightly
2. `rustup target add wasm32-unknown-unknown` - add the ability to compile Rust to WebAssembly
3. `cargo install cargo-generate` - install `cargo-generate` binary (should be installed automatically in future)
4. `npm install -g sass` - install `dart-sass` (should be optional in future)
## Executing a Server on a Remote Machine Without the Toolchain
After running a `cargo leptos build --release` the minimum files needed are:
1. The server binary located in `target/server/release`
2. The `site` directory and all files within located in `target/site`
Copy these files to your remote server. The directory structure should be:
```text
leptos_start
site/
```
Set the following environment variables (updating for your project as needed):
```sh
export LEPTOS_OUTPUT_NAME="leptos_start"
export LEPTOS_SITE_ROOT="site"
export LEPTOS_SITE_PKG_DIR="pkg"
export LEPTOS_SITE_ADDR="127.0.0.1:3000"
export LEPTOS_RELOAD_PORT="3001"
```
Finally, run the server binary.
## Notes about CSR and Trunk:
Although it is not recommended, you can also run your project without server integration using the feature `csr` and `trunk serve`:
`trunk serve --open --features csr`
This may be useful for integrating external tools which require a static site, e.g. `tauri`.
## Licensing
This template itself is released under the Unlicense. You should replace the LICENSE for your own application with an appropriate license if you plan to release it publicly.

Binary file not shown (new image asset, 15 KiB).

View File

@@ -0,0 +1,112 @@
{
"name": "end2end",
"version": "1.0.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "end2end",
"version": "1.0.0",
"license": "ISC",
"devDependencies": {
"@playwright/test": "^1.44.1",
"@types/node": "^20.12.12",
"typescript": "^5.4.5"
}
},
"node_modules/@playwright/test": {
"version": "1.44.1",
"resolved": "https://registry.npmjs.org/@playwright/test/-/test-1.44.1.tgz",
"integrity": "sha512-1hZ4TNvD5z9VuhNJ/walIjvMVvYkZKf71axoF/uiAqpntQJXpG64dlXhoDXE3OczPuTuvjf/M5KWFg5VAVUS3Q==",
"dev": true,
"license": "Apache-2.0",
"dependencies": {
"playwright": "1.44.1"
},
"bin": {
"playwright": "cli.js"
},
"engines": {
"node": ">=16"
}
},
"node_modules/@types/node": {
"version": "20.12.12",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.12.12.tgz",
"integrity": "sha512-eWLDGF/FOSPtAvEqeRAQ4C8LSA7M1I7i0ky1I8U7kD1J5ITyW3AsRhQrKVoWf5pFKZ2kILsEGJhsI9r93PYnOw==",
"dev": true,
"license": "MIT",
"dependencies": {
"undici-types": "~5.26.4"
}
},
"node_modules/fsevents": {
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz",
"integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==",
"dev": true,
"hasInstallScript": true,
"license": "MIT",
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": "^8.16.0 || ^10.6.0 || >=11.0.0"
}
},
"node_modules/playwright": {
"version": "1.44.1",
"resolved": "https://registry.npmjs.org/playwright/-/playwright-1.44.1.tgz",
"integrity": "sha512-qr/0UJ5CFAtloI3avF95Y0L1xQo6r3LQArLIg/z/PoGJ6xa+EwzrwO5lpNr/09STxdHuUoP2mvuELJS+hLdtgg==",
"dev": true,
"license": "Apache-2.0",
"dependencies": {
"playwright-core": "1.44.1"
},
"bin": {
"playwright": "cli.js"
},
"engines": {
"node": ">=16"
},
"optionalDependencies": {
"fsevents": "2.3.2"
}
},
"node_modules/playwright-core": {
"version": "1.44.1",
"resolved": "https://registry.npmjs.org/playwright-core/-/playwright-core-1.44.1.tgz",
"integrity": "sha512-wh0JWtYTrhv1+OSsLPgFzGzt67Y7BE/ZS3jEqgGBlp2ppp1ZDj8c+9IARNW4dwf1poq5MgHreEM2KV/GuR4cFA==",
"dev": true,
"license": "Apache-2.0",
"bin": {
"playwright-core": "cli.js"
},
"engines": {
"node": ">=16"
}
},
"node_modules/typescript": {
"version": "5.4.5",
"resolved": "https://registry.npmjs.org/typescript/-/typescript-5.4.5.tgz",
"integrity": "sha512-vcI4UpRgg81oIRUFwR0WSIHKt11nJ7SAVlYNIu+QpqeyXP+gpQJy/Z4+F0aGxSE4MqwjyXvW/TzgkLAx2AGHwQ==",
"dev": true,
"license": "Apache-2.0",
"bin": {
"tsc": "bin/tsc",
"tsserver": "bin/tsserver"
},
"engines": {
"node": ">=14.17"
}
},
"node_modules/undici-types": {
"version": "5.26.5",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-5.26.5.tgz",
"integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA==",
"dev": true,
"license": "MIT"
}
}
}

View File

@@ -0,0 +1,15 @@
{
"name": "end2end",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {},
"keywords": [],
"author": "",
"license": "ISC",
"devDependencies": {
"@playwright/test": "^1.44.1",
"@types/node": "^20.12.12",
"typescript": "^5.4.5"
}
}

View File

@@ -0,0 +1,104 @@
import { devices, defineConfig } from "@playwright/test";
/**
* Read environment variables from file.
* https://github.com/motdotla/dotenv
*/
// require('dotenv').config();
/**
* See https://playwright.dev/docs/test-configuration.
*/
export default defineConfig({
testDir: "./tests",
/* Maximum time one test can run for. */
timeout: 30 * 1000,
expect: {
/**
* Maximum time expect() should wait for the condition to be met.
* For example in `await expect(locator).toHaveText();`
*/
timeout: 5000,
},
/* Run tests in files in parallel */
fullyParallel: true,
/* Fail the build on CI if you accidentally left test.only in the source code. */
forbidOnly: !!process.env.CI,
/* Retry on CI only */
retries: process.env.CI ? 2 : 0,
/* Opt out of parallel tests on CI. */
workers: process.env.CI ? 1 : undefined,
/* Reporter to use. See https://playwright.dev/docs/test-reporters */
reporter: "html",
/* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
use: {
/* Maximum time each action such as `click()` can take. Defaults to 0 (no limit). */
actionTimeout: 0,
/* Base URL to use in actions like `await page.goto('/')`. */
// baseURL: 'http://localhost:3000',
/* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
trace: "on-first-retry",
},
/* Configure projects for major browsers */
projects: [
{
name: "chromium",
use: {
...devices["Desktop Chrome"],
},
},
{
name: "firefox",
use: {
...devices["Desktop Firefox"],
},
},
{
name: "webkit",
use: {
...devices["Desktop Safari"],
},
},
/* Test against mobile viewports. */
// {
// name: 'Mobile Chrome',
// use: {
// ...devices['Pixel 5'],
// },
// },
// {
// name: 'Mobile Safari',
// use: {
// ...devices['iPhone 12'],
// },
// },
/* Test against branded browsers. */
// {
// name: 'Microsoft Edge',
// use: {
// channel: 'msedge',
// },
// },
// {
// name: 'Google Chrome',
// use: {
// channel: 'chrome',
// },
// },
],
/* Folder for test artifacts such as screenshots, videos, traces, etc. */
// outputDir: 'test-results/',
/* Run your local dev server before starting the tests */
// webServer: {
// command: 'npm run start',
// port: 3000,
// },
});

View File

@@ -0,0 +1,9 @@
import { test, expect } from "@playwright/test";
test("homepage has title and links to intro page", async ({ page }) => {
await page.goto("http://localhost:3000/");
await expect(page).toHaveTitle("Welcome to Leptos");
await expect(page.locator("h1")).toHaveText("Welcome to Leptos!");
});

View File

@@ -0,0 +1,109 @@
{
"compilerOptions": {
/* Visit https://aka.ms/tsconfig to read more about this file */
/* Projects */
// "incremental": true, /* Save .tsbuildinfo files to allow for incremental compilation of projects. */
// "composite": true, /* Enable constraints that allow a TypeScript project to be used with project references. */
// "tsBuildInfoFile": "./.tsbuildinfo", /* Specify the path to .tsbuildinfo incremental compilation file. */
// "disableSourceOfProjectReferenceRedirect": true, /* Disable preferring source files instead of declaration files when referencing composite projects. */
// "disableSolutionSearching": true, /* Opt a project out of multi-project reference checking when editing. */
// "disableReferencedProjectLoad": true, /* Reduce the number of projects loaded automatically by TypeScript. */
/* Language and Environment */
"target": "es2016", /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */
// "lib": [], /* Specify a set of bundled library declaration files that describe the target runtime environment. */
// "jsx": "preserve", /* Specify what JSX code is generated. */
// "experimentalDecorators": true, /* Enable experimental support for legacy experimental decorators. */
// "emitDecoratorMetadata": true, /* Emit design-type metadata for decorated declarations in source files. */
// "jsxFactory": "", /* Specify the JSX factory function used when targeting React JSX emit, e.g. 'React.createElement' or 'h'. */
// "jsxFragmentFactory": "", /* Specify the JSX Fragment reference used for fragments when targeting React JSX emit e.g. 'React.Fragment' or 'Fragment'. */
// "jsxImportSource": "", /* Specify module specifier used to import the JSX factory functions when using 'jsx: react-jsx*'. */
// "reactNamespace": "", /* Specify the object invoked for 'createElement'. This only applies when targeting 'react' JSX emit. */
// "noLib": true, /* Disable including any library files, including the default lib.d.ts. */
// "useDefineForClassFields": true, /* Emit ECMAScript-standard-compliant class fields. */
// "moduleDetection": "auto", /* Control what method is used to detect module-format JS files. */
/* Modules */
"module": "commonjs", /* Specify what module code is generated. */
// "rootDir": "./", /* Specify the root folder within your source files. */
// "moduleResolution": "node10", /* Specify how TypeScript looks up a file from a given module specifier. */
// "baseUrl": "./", /* Specify the base directory to resolve non-relative module names. */
// "paths": {}, /* Specify a set of entries that re-map imports to additional lookup locations. */
// "rootDirs": [], /* Allow multiple folders to be treated as one when resolving modules. */
// "typeRoots": [], /* Specify multiple folders that act like './node_modules/@types'. */
// "types": [], /* Specify type package names to be included without being referenced in a source file. */
// "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
// "moduleSuffixes": [], /* List of file name suffixes to search when resolving a module. */
// "allowImportingTsExtensions": true, /* Allow imports to include TypeScript file extensions. Requires '--moduleResolution bundler' and either '--noEmit' or '--emitDeclarationOnly' to be set. */
// "resolvePackageJsonExports": true, /* Use the package.json 'exports' field when resolving package imports. */
// "resolvePackageJsonImports": true, /* Use the package.json 'imports' field when resolving imports. */
// "customConditions": [], /* Conditions to set in addition to the resolver-specific defaults when resolving imports. */
// "resolveJsonModule": true, /* Enable importing .json files. */
// "allowArbitraryExtensions": true, /* Enable importing files with any extension, provided a declaration file is present. */
// "noResolve": true, /* Disallow 'import's, 'require's or '<reference>'s from expanding the number of files TypeScript should add to a project. */
/* JavaScript Support */
// "allowJs": true, /* Allow JavaScript files to be a part of your program. Use the 'checkJS' option to get errors from these files. */
// "checkJs": true, /* Enable error reporting in type-checked JavaScript files. */
// "maxNodeModuleJsDepth": 1, /* Specify the maximum folder depth used for checking JavaScript files from 'node_modules'. Only applicable with 'allowJs'. */
/* Emit */
// "declaration": true, /* Generate .d.ts files from TypeScript and JavaScript files in your project. */
// "declarationMap": true, /* Create sourcemaps for d.ts files. */
// "emitDeclarationOnly": true, /* Only output d.ts files and not JavaScript files. */
// "sourceMap": true, /* Create source map files for emitted JavaScript files. */
// "inlineSourceMap": true, /* Include sourcemap files inside the emitted JavaScript. */
// "outFile": "./", /* Specify a file that bundles all outputs into one JavaScript file. If 'declaration' is true, also designates a file that bundles all .d.ts output. */
// "outDir": "./", /* Specify an output folder for all emitted files. */
// "removeComments": true, /* Disable emitting comments. */
// "noEmit": true, /* Disable emitting files from a compilation. */
// "importHelpers": true, /* Allow importing helper functions from tslib once per project, instead of including them per-file. */
// "importsNotUsedAsValues": "remove", /* Specify emit/checking behavior for imports that are only used for types. */
// "downlevelIteration": true, /* Emit more compliant, but verbose and less performant JavaScript for iteration. */
// "sourceRoot": "", /* Specify the root path for debuggers to find the reference source code. */
// "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
// "inlineSources": true, /* Include source code in the sourcemaps inside the emitted JavaScript. */
// "emitBOM": true, /* Emit a UTF-8 Byte Order Mark (BOM) in the beginning of output files. */
// "newLine": "crlf", /* Set the newline character for emitting files. */
// "stripInternal": true, /* Disable emitting declarations that have '@internal' in their JSDoc comments. */
// "noEmitHelpers": true, /* Disable generating custom helper functions like '__extends' in compiled output. */
// "noEmitOnError": true, /* Disable emitting files if any type checking errors are reported. */
// "preserveConstEnums": true, /* Disable erasing 'const enum' declarations in generated code. */
// "declarationDir": "./", /* Specify the output directory for generated declaration files. */
// "preserveValueImports": true, /* Preserve unused imported values in the JavaScript output that would otherwise be removed. */
/* Interop Constraints */
// "isolatedModules": true, /* Ensure that each file can be safely transpiled without relying on other imports. */
// "verbatimModuleSyntax": true, /* Do not transform or elide any imports or exports not marked as type-only, ensuring they are written in the output file's format based on the 'module' setting. */
// "allowSyntheticDefaultImports": true, /* Allow 'import x from y' when a module doesn't have a default export. */
"esModuleInterop": true, /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables 'allowSyntheticDefaultImports' for type compatibility. */
// "preserveSymlinks": true, /* Disable resolving symlinks to their realpath. This correlates to the same flag in node. */
"forceConsistentCasingInFileNames": true, /* Ensure that casing is correct in imports. */
/* Type Checking */
"strict": true, /* Enable all strict type-checking options. */
// "noImplicitAny": true, /* Enable error reporting for expressions and declarations with an implied 'any' type. */
// "strictNullChecks": true, /* When type checking, take into account 'null' and 'undefined'. */
// "strictFunctionTypes": true, /* When assigning functions, check to ensure parameters and the return values are subtype-compatible. */
// "strictBindCallApply": true, /* Check that the arguments for 'bind', 'call', and 'apply' methods match the original function. */
// "strictPropertyInitialization": true, /* Check for class properties that are declared but not set in the constructor. */
// "noImplicitThis": true, /* Enable error reporting when 'this' is given the type 'any'. */
// "useUnknownInCatchVariables": true, /* Default catch clause variables as 'unknown' instead of 'any'. */
// "alwaysStrict": true, /* Ensure 'use strict' is always emitted. */
// "noUnusedLocals": true, /* Enable error reporting when local variables aren't read. */
// "noUnusedParameters": true, /* Raise an error when a function parameter isn't read. */
// "exactOptionalPropertyTypes": true, /* Interpret optional property types as written, rather than adding 'undefined'. */
// "noImplicitReturns": true, /* Enable error reporting for codepaths that do not explicitly return in a function. */
// "noFallthroughCasesInSwitch": true, /* Enable error reporting for fallthrough cases in switch statements. */
// "noUncheckedIndexedAccess": true, /* Add 'undefined' to a type when accessed using an index. */
// "noImplicitOverride": true, /* Ensure overriding members in derived classes are marked with an override modifier. */
// "noPropertyAccessFromIndexSignature": true, /* Enforces using indexed accessors for keys declared using an indexed type. */
// "allowUnusedLabels": true, /* Disable error reporting for unused labels. */
// "allowUnreachableCode": true, /* Disable error reporting for unreachable code. */
/* Completeness */
// "skipDefaultLibCheck": true, /* Skip type checking .d.ts files that are included with TypeScript. */
"skipLibCheck": true /* Skip type checking all .d.ts files. */
}
}

View File

@@ -0,0 +1,66 @@
use leptos::prelude::*;
use leptos_meta::{provide_meta_context, Stylesheet, Title};
use leptos_router::{
components::{Route, Router, Routes},
StaticSegment, WildcardSegment,
};
#[component]
pub fn App() -> impl IntoView {
// Provides context that manages stylesheets, titles, meta tags, etc.
provide_meta_context();
view! {
// injects a stylesheet into the document <head>
// id=leptos means cargo-leptos will hot-reload this stylesheet
<Stylesheet id="leptos" href="/pkg/harmony-example-rust-webapp.css"/>
// sets the document title
<Title text="Welcome to Leptos"/>
// content for this welcome page
<Router>
<main>
<Routes fallback=move || "Not found.">
<Route path=StaticSegment("") view=HomePage/>
<Route path=WildcardSegment("any") view=NotFound/>
</Routes>
</main>
</Router>
}
}
/// Renders the home page of your application.
#[component]
fn HomePage() -> impl IntoView {
// Creates a reactive value to update the button
let count = RwSignal::new(0);
let on_click = move |_| *count.write() += 1;
view! {
<h1>"Welcome to Leptos!"</h1>
<button on:click=on_click>"Click Me: " {count}</button>
}
}
/// 404 - Not Found
#[component]
fn NotFound() -> impl IntoView {
// set an HTTP status code 404
// this is feature gated because it can only be done during
// initial server-side rendering
// if you navigate to the 404 page subsequently, the status
// code will not be set because there is not a new HTTP request
// to the server
#[cfg(feature = "ssr")]
{
// this can be done inline because it's synchronous
// if it were async, we'd use a server function
let resp = expect_context::<leptos_actix::ResponseOptions>();
resp.set_status(actix_web::http::StatusCode::NOT_FOUND);
}
view! {
<h1>"Not Found"</h1>
}
}

View File

@@ -0,0 +1,9 @@
pub mod app;
#[cfg(feature = "hydrate")]
#[wasm_bindgen::prelude::wasm_bindgen]
pub fn hydrate() {
use app::*;
console_error_panic_hook::set_once();
leptos::mount::hydrate_body(App);
}

View File

@@ -0,0 +1,88 @@
#[cfg(feature = "ssr")]
#[actix_web::main]
async fn main() -> std::io::Result<()> {
use actix_files::Files;
use actix_web::*;
use leptos::prelude::*;
use leptos::config::get_configuration;
use leptos_meta::MetaTags;
use leptos_actix::{generate_route_list, LeptosRoutes};
use harmony_example_rust_webapp::app::*;
let conf = get_configuration(None).unwrap();
let addr = conf.leptos_options.site_addr;
HttpServer::new(move || {
// Generate the list of routes in your Leptos App
let routes = generate_route_list(App);
let leptos_options = &conf.leptos_options;
let site_root = leptos_options.site_root.clone().to_string();
println!("listening on http://{}", &addr);
App::new()
// serve JS/WASM/CSS from `pkg`
.service(Files::new("/pkg", format!("{site_root}/pkg")))
// serve other assets from the `assets` directory
.service(Files::new("/assets", &site_root))
// serve the favicon from /favicon.ico
.service(favicon)
.leptos_routes(routes, {
let leptos_options = leptos_options.clone();
move || {
view! {
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<AutoReload options=leptos_options.clone() />
<HydrationScripts options=leptos_options.clone()/>
<MetaTags/>
</head>
<body>
<App/>
</body>
</html>
}
}
})
.app_data(web::Data::new(leptos_options.to_owned()))
//.wrap(middleware::Compress::default())
})
.bind(&addr)?
.run()
.await
}
#[cfg(feature = "ssr")]
#[actix_web::get("favicon.ico")]
async fn favicon(
leptos_options: actix_web::web::Data<leptos::config::LeptosOptions>,
) -> actix_web::Result<actix_files::NamedFile> {
let leptos_options = leptos_options.into_inner();
let site_root = &leptos_options.site_root;
Ok(actix_files::NamedFile::open(format!(
"{site_root}/favicon.ico"
))?)
}
#[cfg(not(any(feature = "ssr", feature = "csr")))]
pub fn main() {
// no client-side main function
// unless we want this to work with e.g., Trunk for pure client-side testing
// see lib.rs for hydration function instead
// see optional feature `csr` instead
}
#[cfg(all(not(feature = "ssr"), feature = "csr"))]
pub fn main() {
// a client-side main function is required for using `trunk serve`
// prefer using `cargo leptos serve` instead
// to run: `trunk serve --open --features csr`
use harmony_example_rust_webapp::app::*;
console_error_panic_hook::set_once();
leptos::mount_to_body(App);
}

View File

@@ -0,0 +1,4 @@
body {
font-family: sans-serif;
text-align: center;
}

View File

@@ -0,0 +1,18 @@
[package]
name = "example-tenant"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }

View File

@@ -0,0 +1,41 @@
use harmony::{
data::Id,
inventory::Inventory,
maestro::Maestro,
modules::tenant::TenantScore,
topology::{K8sAnywhereTopology, tenant::TenantConfig},
};
#[tokio::main]
async fn main() {
let tenant = TenantScore {
config: TenantConfig {
id: Id::from_str("test-tenant-id"),
name: "testtenant".to_string(),
..Default::default()
},
};
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
)
.await
.unwrap();
maestro.register_all(vec![Box::new(tenant)]);
harmony_cli::init(maestro, None).await.unwrap();
}
// TODO write tests
// - Create Tenant with default config mostly, make sure namespace is created
// - deploy sample client/server app with nginx unprivileged and a service
// - exec in the client pod and validate the following
// - can reach internet
// - can reach server pod
// - can resolve dns queries to internet
// - can resolve dns queries to services
// - cannot reach services and pods in other namespaces
// - Create Tenant with specific cpu/ram/storage requests / limits and make sure they are enforced by trying to
// deploy a pod with lower requests/limits (accepted) and higher requests/limits (rejected)
// - Create TenantCredentials and make sure they give only access to the correct tenant

View File

@@ -10,9 +10,9 @@ publish = false
harmony = { path = "../../harmony" }
harmony_tui = { path = "../../harmony_tui" }
harmony_types = { path = "../../harmony_types" }
harmony_macros = { path = "../../harmony_macros" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }

View File

@@ -6,12 +6,14 @@ readme.workspace = true
license.workspace = true
[dependencies]
rand = "0.9"
hex = "0.4"
libredfish = "0.1.1"
reqwest = { version = "0.11", features = ["blocking", "json"] }
russh = "0.45.0"
rust-ipmi = "0.1.1"
semver = "1.0.23"
serde = { version = "1.0.209", features = ["derive"] }
serde = { version = "1.0.209", features = ["derive", "rc"] }
serde_json = "1.0.127"
tokio.workspace = true
derive-new.workspace = true
@@ -40,6 +42,7 @@ dockerfile_builder = "0.1.5"
temp-file = "0.1.9"
convert_case.workspace = true
email_address = "0.2.9"
chrono.workspace = true
fqdn = { version = "0.4.6", features = [
"domain-label-cannot-start-or-end-with-hyphen",
"domain-label-length-limited-to-63",
@@ -50,3 +53,8 @@ fqdn = { version = "0.4.6", features = [
] }
temp-dir = "0.1.14"
dyn-clone = "1.0.19"
similar.workspace = true
futures-util = "0.3.31"
tokio-util = "0.7.15"
strum = { version = "0.27.1", features = ["derive"] }
tempfile = "3.20.0"

View File

@@ -2,7 +2,7 @@ use lazy_static::lazy_static;
use std::path::PathBuf;
lazy_static! {
pub static ref HARMONY_CONFIG_DIR: PathBuf = directories::BaseDirs::new()
pub static ref HARMONY_DATA_DIR: PathBuf = directories::BaseDirs::new()
.unwrap()
.data_dir()
.join("harmony");
@@ -10,4 +10,6 @@ lazy_static! {
std::env::var("HARMONY_REGISTRY_URL").unwrap_or_else(|_| "hub.nationtech.io".to_string());
pub static ref REGISTRY_PROJECT: String =
std::env::var("HARMONY_REGISTRY_PROJECT").unwrap_or_else(|_| "harmony".to_string());
pub static ref DRY_RUN: bool =
std::env::var("HARMONY_DRY_RUN").map_or(true, |value| value.parse().unwrap_or(true));
}
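Note the semantics of the new `DRY_RUN` flag above: it defaults to `true` when `HARMONY_DRY_RUN` is unset, and it also falls back to `true` when the variable is set but unparsable, so mutating applies only run when the variable is explicitly `false`. A standalone sketch of that parsing rule (the `dry_run` helper is hypothetical, written only to mirror the `lazy_static` above):

```rust
fn dry_run(raw: Result<String, std::env::VarError>) -> bool {
    // Mirrors the lazy_static: unset => true, unparsable => true.
    raw.map_or(true, |value| value.parse().unwrap_or(true))
}

fn main() {
    assert!(dry_run(Err(std::env::VarError::NotPresent))); // unset: dry run
    assert!(!dry_run(Ok("false".to_string()))); // explicit opt-out
    assert!(dry_run(Ok("yes".to_string()))); // not a bool: stays safe
}
```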

View File

@@ -1,5 +1,23 @@
use rand::distr::Alphanumeric;
use rand::distr::SampleString;
use std::time::SystemTime;
use std::time::UNIX_EPOCH;
use serde::{Deserialize, Serialize};
/// A unique identifier designed for ease of use.
///
/// You can pass it any String to use as an Id, or you can use the default format with `Id::default()`.
///
/// The default format looks like this
///
/// `462d4c_g2COgai`
///
/// The first part is the unix timestamp in hexadecimal, which makes Ids easy to sort by creation time.
/// The second part is a series of 7 random characters.
///
/// **It is not meant to be cryptographically secure or globally unique**; it is suitable for generating up to 10 000 items per
/// second with a reasonable collision rate of 0.000014 % as calculated by this calculator: https://kevingal.com/apps/collision.html
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct Id {
value: String,
@@ -9,6 +27,10 @@ impl Id {
pub fn from_string(value: String) -> Self {
Self { value }
}
pub fn from_str(value: &str) -> Self {
Self::from_string(value.to_string())
}
}
impl std::fmt::Display for Id {
@@ -16,3 +38,20 @@ impl std::fmt::Display for Id {
f.write_str(&self.value)
}
}
impl Default for Id {
fn default() -> Self {
let start = SystemTime::now();
let since_the_epoch = start
.duration_since(UNIX_EPOCH)
.expect("Time went backwards");
let timestamp = since_the_epoch.as_secs();
let hex_timestamp = format!("{:x}", timestamp & 0xffffff);
let random_part: String = Alphanumeric.sample_string(&mut rand::rng(), 7);
let value = format!("{}_{}", hex_timestamp, random_part);
Self { value }
}
}
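A short usage sketch of the new `Id` API, assuming the `harmony::data::Id` path used by the tenant examples in this changeset:

```rust
use harmony::data::Id;

fn main() {
    // Explicit ids from strings, via the new from_str helper.
    let fixed = Id::from_str("test-tenant-id");
    assert_eq!(fixed.to_string(), "test-tenant-id");

    // Generated ids look like `462d4c_g2COgai`: a hex timestamp, an
    // underscore, then 7 random alphanumeric characters, giving
    // 62^7 (roughly 3.5 * 10^12) possible suffixes per second.
    println!("{}", Id::default());
}
```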

View File

@@ -21,6 +21,8 @@ pub enum InterpretName {
OPNSense,
K3dInstallation,
TenantInterpret,
Application,
ArgoCD,
}
impl std::fmt::Display for InterpretName {
@@ -37,6 +39,8 @@ impl std::fmt::Display for InterpretName {
InterpretName::OPNSense => f.write_str("OPNSense"),
InterpretName::K3dInstallation => f.write_str("K3dInstallation"),
InterpretName::TenantInterpret => f.write_str("Tenant"),
InterpretName::Application => f.write_str("Application"),
InterpretName::ArgoCD => f.write_str("ArgoCD"),
}
}
}
@@ -124,3 +128,11 @@ impl From<kube::Error> for InterpretError {
}
}
}
impl From<String> for InterpretError {
fn from(value: String) -> Self {
Self {
msg: format!("InterpretError : {value}"),
}
}
}

View File

@@ -34,6 +34,17 @@ pub struct Inventory {
}
impl Inventory {
pub fn empty() -> Self {
Self {
location: Location::new("Empty".to_string(), "location".to_string()),
switch: vec![],
firewall: vec![],
worker_host: vec![],
storage_host: vec![],
control_plane_host: vec![],
}
}
pub fn autoload() -> Self {
Self {
location: Location::test_building(),

View File

@@ -19,7 +19,10 @@ pub struct Maestro<T: Topology> {
}
impl<T: Topology> Maestro<T> {
pub fn new(inventory: Inventory, topology: T) -> Self {
/// Creates a bare maestro without initialization.
///
/// This should rarely be used. Most of the time Maestro::initialize should be used instead.
pub fn new_without_initialization(inventory: Inventory, topology: T) -> Self {
Self {
inventory,
topology,
@@ -29,7 +32,7 @@ impl<T: Topology> Maestro<T> {
}
pub async fn initialize(inventory: Inventory, topology: T) -> Result<Self, InterpretError> {
let instance = Self::new(inventory, topology);
let instance = Self::new_without_initialization(inventory, topology);
instance.prepare_topology().await?;
Ok(instance)
}

View File

@@ -0,0 +1,59 @@
////////////////////
/// Working idea
///
///
trait ScoreWithDep<T> {
fn create_interpret(&self) -> Box<dyn Interpret<T>>;
fn name(&self) -> String;
fn get_dependencies(&self) -> Vec<TypeId>; // Force T to impl Installer<TypeId> or something
// like that
}
struct PrometheusAlertScore;
impl <T> ScoreWithDep<T> for PrometheusAlertScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
todo!()
}
fn name(&self) -> String {
todo!()
}
fn get_dependencies(&self) -> Vec<TypeId> {
// We have to find a way to add a constraint here so that at compile time we are only allowed to return
// TypeId for types which can be installed by T
//
// This means, for example that T must implement HelmCommand if the impl <T: HelmCommand> Installable<T> for
// KubePrometheus calls for HelmCommand.
vec![TypeId::of::<KubePrometheus>()]
}
}
trait Installable{}
struct KubePrometheus;
impl Installable for KubePrometheus {}
struct Maestro<T> {
topology: T
}
impl <T>Maestro<T> {
fn execute_store(&self, score: &dyn ScoreWithDep<T>) {
score.get_dependencies().iter().for_each(|dep| {
self.topology.ensure_dependency_ready(dep);
});
}
}
struct TopologyWithDep {
}
impl TopologyWithDep {
fn ensure_dependency_ready(&self, type_id: TypeId) -> Result<(), String> {
self.installer
}
}

View File

@@ -46,7 +46,7 @@ pub struct HAClusterTopology {
#[async_trait]
impl Topology for HAClusterTopology {
fn name(&self) -> &str {
todo!()
"HAClusterTopology"
}
async fn ensure_ready(&self) -> Result<Outcome, InterpretError> {
todo!(

View File

@@ -0,0 +1,14 @@
use async_trait::async_trait;
use crate::{interpret::InterpretError, inventory::Inventory};
#[async_trait]
pub trait Installable<T>: Send + Sync {
async fn configure(&self, inventory: &Inventory, topology: &T) -> Result<(), InterpretError>;
async fn ensure_installed(
&self,
inventory: &Inventory,
topology: &T,
) -> Result<(), InterpretError>;
}

View File

@@ -1,18 +1,41 @@
use derive_new::new;
use k8s_openapi::NamespaceResourceScope;
use kube::{
Api, Client, Config, Error, Resource,
api::PostParams,
config::{KubeConfigOptions, Kubeconfig},
use futures_util::StreamExt;
use k8s_openapi::{
ClusterResourceScope, NamespaceResourceScope,
api::{apps::v1::Deployment, core::v1::Pod},
};
use log::error;
use kube::{
Client, Config, Error, Resource,
api::{Api, AttachParams, ListParams, Patch, PatchParams, ResourceExt},
config::{KubeConfigOptions, Kubeconfig},
core::ErrorResponse,
runtime::reflector::Lookup,
};
use kube::{api::DynamicObject, runtime::conditions};
use kube::{
api::{ApiResource, GroupVersionKind},
runtime::wait::await_condition,
};
use log::{debug, error, trace};
use serde::de::DeserializeOwned;
use similar::{DiffableStr, TextDiff};
#[derive(new)]
#[derive(new, Clone)]
pub struct K8sClient {
client: Client,
}
impl std::fmt::Debug for K8sClient {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
// This is a poor man's debug implementation for now as kube::Client does not provide much
// useful information
f.write_fmt(format_args!(
"K8sClient {{ kube client using default namespace {} }}",
self.client.default_namespace()
))
}
}
impl K8sClient {
pub async fn try_default() -> Result<Self, Error> {
Ok(Self {
@@ -20,52 +43,251 @@ impl K8sClient {
})
}
pub async fn apply_all<
K: Resource<Scope = NamespaceResourceScope>
+ std::fmt::Debug
+ Sync
+ DeserializeOwned
+ Default
+ serde::Serialize
+ Clone,
>(
pub async fn wait_until_deployment_ready(
&self,
resource: &Vec<K>,
) -> Result<Vec<K>, kube::Error>
name: String,
namespace: Option<&str>,
timeout: Option<u64>,
) -> Result<(), String> {
let api: Api<Deployment>;
if let Some(ns) = namespace {
api = Api::namespaced(self.client.clone(), ns);
} else {
api = Api::default_namespaced(self.client.clone());
}
let establish = await_condition(api, name.as_str(), conditions::is_deployment_completed());
let t = if let Some(t) = timeout { t } else { 300 };
let res = tokio::time::timeout(std::time::Duration::from_secs(t), establish).await;
if res.is_ok() {
return Ok(());
} else {
return Err("timed out while waiting for deployment".to_string());
}
}
/// Will execute a command in the first pod found that matches the label `app.kubernetes.io/name={name}`
pub async fn exec_app(
&self,
name: String,
namespace: Option<&str>,
command: Vec<&str>,
) -> Result<(), String> {
let api: Api<Pod>;
if let Some(ns) = namespace {
api = Api::namespaced(self.client.clone(), ns);
} else {
api = Api::default_namespaced(self.client.clone());
}
let pod_list = api
.list(&ListParams::default().labels(format!("app.kubernetes.io/name={name}").as_str()))
.await
.expect("couldn't get list of pods");
let res = api
.exec(
pod_list
.items
.first()
.expect("couldn't get pod")
.name()
.expect("couldn't get pod name")
.into_owned()
.as_str(),
command,
&AttachParams::default(),
)
.await;
match res {
Err(e) => return Err(e.to_string()),
Ok(mut process) => {
let status = process
.take_status()
.expect("Couldn't get status")
.await
.expect("Couldn't unwrap status");
if let Some(s) = status.status {
debug!("Status: {}", s);
if s == "Success" {
return Ok(());
} else {
return Err(s);
}
} else {
return Err("Couldn't get inner status of pod exec".to_string());
}
}
}
}
/// Apply a resource in namespace
///
/// See `kubectl apply` for more information on the expected behavior of this function
pub async fn apply<K>(&self, resource: &K, namespace: Option<&str>) -> Result<K, Error>
where
K: Resource + Clone + std::fmt::Debug + DeserializeOwned + serde::Serialize,
<K as Resource>::Scope: ApplyStrategy<K>,
<K as kube::Resource>::DynamicType: Default,
{
let mut result = vec![];
for r in resource.iter() {
let api: Api<K> = Api::all(self.client.clone());
result.push(api.create(&PostParams::default(), &r).await?);
debug!(
"Applying resource {:?} with ns {:?}",
resource.meta().name,
namespace
);
trace!(
"{:#}",
serde_json::to_value(resource).unwrap_or(serde_json::Value::Null)
);
let api: Api<K> =
<<K as Resource>::Scope as ApplyStrategy<K>>::get_api(&self.client, namespace);
// api.create(&PostParams::default(), &resource).await
let patch_params = PatchParams::apply("harmony");
let name = resource
.meta()
.name
.as_ref()
.expect("K8s Resource should have a name");
if *crate::config::DRY_RUN {
match api.get(name).await {
Ok(current) => {
trace!("Received current value {current:#?}");
// The resource exists, so we calculate and display a diff.
println!("\nPerforming dry-run for resource: '{}'", name);
let mut current_yaml = serde_yaml::to_value(&current)
.expect(&format!("Could not serialize current value : {current:#?}"));
if current_yaml.is_mapping() && current_yaml.get("status").is_some() {
let map = current_yaml.as_mapping_mut().unwrap();
let removed = map.remove_entry("status");
trace!("Removed status {:?}", removed);
} else {
trace!(
"Did not find status entry for current object {}/{}",
current.meta().namespace.as_ref().unwrap_or(&"".to_string()),
current.meta().name.as_ref().unwrap_or(&"".to_string())
);
}
let current_yaml = serde_yaml::to_string(&current_yaml)
.unwrap_or_else(|_| "Failed to serialize current resource".to_string());
let new_yaml = serde_yaml::to_string(resource)
.unwrap_or_else(|_| "Failed to serialize new resource".to_string());
if current_yaml == new_yaml {
println!("No changes detected.");
// Return the current resource state as there are no changes.
return Ok(current);
}
println!("Changes detected:");
let diff = TextDiff::from_lines(&current_yaml, &new_yaml);
// Iterate over the changes and print them in a git-like diff format.
for change in diff.iter_all_changes() {
let sign = match change.tag() {
similar::ChangeTag::Delete => "-",
similar::ChangeTag::Insert => "+",
similar::ChangeTag::Equal => " ",
};
print!("{}{}", sign, change);
}
// In a dry run, we return the new resource state that would have been applied.
Ok(resource.clone())
}
Err(Error::Api(ErrorResponse { code: 404, .. })) => {
// The resource does not exist, so the "diff" is the entire new resource.
println!("\nPerforming dry-run for new resource: '{}'", name);
println!(
"Resource does not exist. It would be created with the following content:"
);
let new_yaml = serde_yaml::to_string(resource)
.unwrap_or_else(|_| "Failed to serialize new resource".to_string());
// Print each line of the new resource with a '+' prefix.
for line in new_yaml.lines() {
println!("+{}", line);
}
// In a dry run, we return the new resource state that would have been created.
Ok(resource.clone())
}
Err(e) => {
// Another API error occurred.
error!("Failed to get resource '{}': {}", name, e);
Err(e)
}
}
} else {
return api
.patch(name, &patch_params, &Patch::Apply(resource))
.await;
}
}
pub async fn apply_many<K>(&self, resource: &Vec<K>, ns: Option<&str>) -> Result<Vec<K>, Error>
where
K: Resource + Clone + std::fmt::Debug + DeserializeOwned + serde::Serialize,
<K as Resource>::Scope: ApplyStrategy<K>,
<K as kube::Resource>::DynamicType: Default,
{
let mut result = Vec::new();
for r in resource.iter() {
result.push(self.apply(r, ns).await?);
}
Ok(result)
}
pub async fn apply_namespaced<K>(
pub async fn apply_yaml_many(
&self,
resource: &Vec<K>,
yaml: &Vec<serde_yaml::Value>,
ns: Option<&str>,
) -> Result<Vec<K>, Error>
where
K: Resource<Scope = NamespaceResourceScope>
+ Clone
+ std::fmt::Debug
+ DeserializeOwned
+ serde::Serialize
+ Default,
<K as kube::Resource>::DynamicType: Default,
{
let mut resources = Vec::new();
for r in resource.iter() {
let api: Api<K> = match ns {
Some(ns) => Api::namespaced(self.client.clone(), ns),
None => Api::default_namespaced(self.client.clone()),
};
resources.push(api.create(&PostParams::default(), &r).await?);
) -> Result<(), Error> {
for y in yaml.iter() {
self.apply_yaml(y, ns).await?;
}
Ok(resources)
Ok(())
}
pub async fn apply_yaml(
&self,
yaml: &serde_yaml::Value,
ns: Option<&str>,
) -> Result<(), Error> {
let obj: DynamicObject = serde_yaml::from_value(yaml.clone()).expect("TODO do not unwrap");
let name = obj.metadata.name.as_ref().expect("YAML must have a name");
let namespace = obj
.metadata
.namespace
.as_ref()
.expect("YAML must have a namespace");
// 4. Define the API resource type using the GVK from the object.
// The plural name 'applications' is taken from your CRD definition.
error!("This only supports argocd application harcoded, very rrrong");
let gvk = GroupVersionKind::gvk("argoproj.io", "v1alpha1", "Application");
let api_resource = ApiResource::from_gvk_with_plural(&gvk, "applications");
// 5. Create a dynamic API client for this resource type.
let api: Api<DynamicObject> =
Api::namespaced_with(self.client.clone(), namespace, &api_resource);
// 6. Apply the object to the cluster using Server-Side Apply.
// This will create the resource if it doesn't exist, or update it if it does.
println!(
"Applying Argo Application '{}' in namespace '{}'...",
name, namespace
);
let patch_params = PatchParams::apply("harmony"); // Use a unique field manager name
let result = api.patch(name, &patch_params, &Patch::Apply(&obj)).await?;
println!("Successfully applied '{}'.", result.name_any());
Ok(())
}
pub(crate) async fn from_kubeconfig(path: &str) -> Option<K8sClient> {
@@ -86,3 +308,35 @@ impl K8sClient {
))
}
}
pub trait ApplyStrategy<K: Resource> {
fn get_api(client: &Client, ns: Option<&str>) -> Api<K>;
}
/// Implementation for all resources that are cluster-scoped.
/// It will always use `Api::all` and ignore the namespace parameter.
impl<K> ApplyStrategy<K> for ClusterResourceScope
where
K: Resource<Scope = ClusterResourceScope>,
<K as kube::Resource>::DynamicType: Default,
{
fn get_api(client: &Client, _ns: Option<&str>) -> Api<K> {
Api::all(client.clone())
}
}
/// Implementation for all resources that are namespace-scoped.
/// It will use `Api::namespaced` if a namespace is provided, otherwise
/// it falls back to the default namespace configured in your kubeconfig.
impl<K> ApplyStrategy<K> for NamespaceResourceScope
where
K: Resource<Scope = NamespaceResourceScope>,
<K as kube::Resource>::DynamicType: Default,
{
fn get_api(client: &Client, ns: Option<&str>) -> Api<K> {
match ns {
Some(ns) => Api::namespaced(client.clone(), ns),
None => Api::default_namespaced(client.clone()),
}
}
}
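The two impls above are what let the generic `apply`/`apply_many` compile against either scope. A minimal sketch of the dispatch, assuming the `K8sClient` and `Error` alias from this file; the resource names are illustrative:
use k8s_openapi::api::core::v1::{ConfigMap, Namespace};

async fn demo(client: &K8sClient) -> Result<(), Error> {
    // Cluster-scoped: Namespace's ApplyStrategy ignores the ns argument
    // and resolves to Api::all.
    let ns: Namespace = serde_json::from_value(serde_json::json!({
        "apiVersion": "v1", "kind": "Namespace",
        "metadata": { "name": "demo" }
    }))
    .unwrap();
    client.apply(&ns, Some("ignored")).await?;

    // Namespace-scoped: ConfigMap's ApplyStrategy resolves to
    // Api::namespaced(client, "demo").
    let cm: ConfigMap = serde_json::from_value(serde_json::json!({
        "apiVersion": "v1", "kind": "ConfigMap",
        "metadata": { "name": "demo-cm" },
        "data": { "key": "value" }
    }))
    .unwrap();
    client.apply(&cm, Some("demo")).await?;
    Ok(())
}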

View File

@@ -2,7 +2,8 @@ use std::{process::Command, sync::Arc};
use async_trait::async_trait;
use inquire::Confirm;
use log::{info, warn};
use log::{debug, info, warn};
use serde::Serialize;
use tokio::sync::OnceCell;
use crate::{
@@ -15,28 +16,29 @@ use crate::{
};
use super::{
HelmCommand, K8sclient, Topology,
DeploymentTarget, HelmCommand, K8sclient, MultiTargetTopology, Topology,
k8s::K8sClient,
tenant::{
ResourceLimits, TenantConfig, TenantManager, TenantNetworkPolicy, k8s::K8sTenantManager,
},
tenant::{TenantConfig, TenantManager, k8s::K8sTenantManager},
};
#[derive(Clone, Debug)]
struct K8sState {
client: Arc<K8sClient>,
source: K8sSource,
_source: K8sSource,
message: String,
}
#[derive(Debug)]
#[derive(Debug, Clone)]
enum K8sSource {
LocalK3d,
Kubeconfig,
}
#[derive(Clone, Debug)]
pub struct K8sAnywhereTopology {
k8s_state: OnceCell<Option<K8sState>>,
tenant_manager: OnceCell<K8sTenantManager>,
k8s_state: Arc<OnceCell<Option<K8sState>>>,
tenant_manager: Arc<OnceCell<K8sTenantManager>>,
config: Arc<K8sAnywhereConfig>,
}
#[async_trait]
@@ -56,11 +58,29 @@ impl K8sclient for K8sAnywhereTopology {
}
}
impl Serialize for K8sAnywhereTopology {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
todo!()
}
}
impl K8sAnywhereTopology {
pub fn new() -> Self {
pub fn from_env() -> Self {
Self {
k8s_state: OnceCell::new(),
tenant_manager: OnceCell::new(),
k8s_state: Arc::new(OnceCell::new()),
tenant_manager: Arc::new(OnceCell::new()),
config: Arc::new(K8sAnywhereConfig::from_env()),
}
}
pub fn with_config(config: K8sAnywhereConfig) -> Self {
Self {
k8s_state: Arc::new(OnceCell::new()),
tenant_manager: Arc::new(OnceCell::new()),
config: Arc::new(config),
}
}
@@ -101,27 +121,20 @@ impl K8sAnywhereTopology {
}
async fn try_get_or_install_k8s_client(&self) -> Result<Option<K8sState>, InterpretError> {
let k8s_anywhere_config = K8sAnywhereConfig {
kubeconfig: std::env::var("KUBECONFIG").ok().map(|v| v.to_string()),
use_system_kubeconfig: std::env::var("HARMONY_USE_SYSTEM_KUBECONFIG")
.map_or_else(|_| false, |v| v.parse().ok().unwrap_or(false)),
autoinstall: std::env::var("HARMONY_AUTOINSTALL")
.map_or_else(|_| false, |v| v.parse().ok().unwrap_or(false)),
};
let k8s_anywhere_config = &self.config;
if k8s_anywhere_config.use_system_kubeconfig {
match self.try_load_system_kubeconfig().await {
Some(_client) => todo!(),
None => todo!(),
}
}
if let Some(kubeconfig) = k8s_anywhere_config.kubeconfig {
// TODO this deserves some refactoring, it is becoming a bit hard to figure out
// be careful when making modifications here
if k8s_anywhere_config.use_local_k3d {
info!("Using local k3d cluster because of use_local_k3d set to true");
} else {
if let Some(kubeconfig) = &k8s_anywhere_config.kubeconfig {
debug!("Loading kubeconfig {kubeconfig}");
match self.try_load_kubeconfig(&kubeconfig).await {
Some(client) => {
return Ok(Some(K8sState {
client: Arc::new(client),
source: K8sSource::Kubeconfig,
_source: K8sSource::Kubeconfig,
message: format!("Loaded k8s client from kubeconfig {kubeconfig}"),
}));
}
@@ -133,13 +146,24 @@ impl K8sAnywhereTopology {
}
}
if k8s_anywhere_config.use_system_kubeconfig {
debug!("Loading system kubeconfig");
match self.try_load_system_kubeconfig().await {
Some(_client) => todo!(),
None => todo!(),
}
}
info!("No kubernetes configuration found");
}
if !k8s_anywhere_config.autoinstall {
debug!("Autoinstall confirmation prompt");
let confirmation = Confirm::new("Harmony autoinstallation is not activated, do you wish to launch autoinstallation?")
.with_default(false)
.prompt()
.expect("Unexpected prompt error");
debug!("Autoinstall confirmation {confirmation}");
if !confirmation {
warn!(
@@ -161,7 +185,7 @@ impl K8sAnywhereTopology {
let state = match k3d.get_client().await {
Ok(client) => K8sState {
client: Arc::new(K8sClient::new(client)),
source: K8sSource::LocalK3d,
_source: K8sSource::LocalK3d,
message: "Successfully installed K3D cluster and acquired client".to_string(),
},
Err(_) => todo!(),
@@ -170,6 +194,21 @@ impl K8sAnywhereTopology {
Ok(Some(state))
}
async fn ensure_k8s_tenant_manager(&self) -> Result<(), String> {
if let Some(_) = self.tenant_manager.get() {
return Ok(());
}
self.tenant_manager
.get_or_try_init(async || -> Result<K8sTenantManager, String> {
let k8s_client = self.k8s_client().await?;
Ok(K8sTenantManager::new(k8s_client))
})
.await?;
Ok(())
}
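`get_or_try_init` makes this lazy initialization race-free: even with concurrent callers, the init future runs at most once and later callers reuse the stored value. A standalone sketch of the pattern (stable-Rust closure syntax, illustrative types):
use tokio::sync::OnceCell;

async fn get_value(cell: &OnceCell<u32>) -> Result<&u32, String> {
    cell.get_or_try_init(|| async {
        // Expensive, fallible setup runs at most once.
        Ok::<u32, String>(42)
    })
    .await
}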
fn get_k8s_tenant_manager(&self) -> Result<&K8sTenantManager, ExecutorError> {
match self.tenant_manager.get() {
Some(t) => Ok(t),
@@ -180,25 +219,53 @@ impl K8sAnywhereTopology {
}
}
struct K8sAnywhereConfig {
#[derive(Clone, Debug)]
pub struct K8sAnywhereConfig {
/// The path of the KUBECONFIG file that Harmony should use to interact with the Kubernetes
/// cluster
///
/// Default : None
kubeconfig: Option<String>,
pub kubeconfig: Option<String>,
/// Whether to use the system KUBECONFIG, either the environment variable or the file in the
/// default or configured location
///
/// Default : false
use_system_kubeconfig: bool,
pub use_system_kubeconfig: bool,
/// Whether to install automatically a kubernetes cluster
///
/// When enabled, autoinstall will set up a K3D cluster on the localhost. https://k3d.io/stable/
///
/// Default: true
autoinstall: bool,
/// Default: false
pub autoinstall: bool,
/// Whether to use local k3d cluster.
///
/// Takes precedence over other options, useful to avoid messing up a remote cluster by mistake
///
/// Default: true
pub use_local_k3d: bool,
pub harmony_profile: String,
}
impl K8sAnywhereConfig {
fn from_env() -> Self {
Self {
kubeconfig: std::env::var("KUBECONFIG").ok().map(|v| v.to_string()),
use_system_kubeconfig: std::env::var("HARMONY_USE_SYSTEM_KUBECONFIG")
.map_or_else(|_| false, |v| v.parse().ok().unwrap_or(false)),
autoinstall: std::env::var("HARMONY_AUTOINSTALL")
.map_or_else(|_| false, |v| v.parse().ok().unwrap_or(false)),
// TODO harmony_profile should be managed at a more core level than this
harmony_profile: std::env::var("HARMONY_PROFILE").map_or_else(
|_| "dev".to_string(),
|v| v.parse().ok().unwrap_or("dev".to_string()),
),
use_local_k3d: std::env::var("HARMONY_USE_LOCAL_K3D")
.map_or_else(|_| true, |v| v.parse().ok().unwrap_or(true)),
}
}
}
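Both construction paths end up with the same struct; a hedged sketch with illustrative values (note that `use_local_k3d` defaults to true and takes precedence, so it must be disabled to reach a remote cluster):
// Environment-driven: reads KUBECONFIG, HARMONY_USE_SYSTEM_KUBECONFIG,
// HARMONY_AUTOINSTALL, HARMONY_PROFILE and HARMONY_USE_LOCAL_K3D.
let topology = K8sAnywhereTopology::from_env();

// Explicit configuration, e.g. for tests:
let topology = K8sAnywhereTopology::with_config(K8sAnywhereConfig {
    kubeconfig: Some("/tmp/kubeconfig".to_string()),
    use_system_kubeconfig: false,
    autoinstall: false,
    use_local_k3d: false, // otherwise it takes precedence
    harmony_profile: "staging".to_string(),
});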
#[async_trait]
@@ -217,6 +284,10 @@ impl Topology for K8sAnywhereTopology {
"No K8s client could be found or installed".to_string(),
))?;
self.ensure_k8s_tenant_manager()
.await
.map_err(|e| InterpretError::new(e))?;
match self.is_helm_available() {
Ok(()) => Ok(Outcome::success(format!(
"{} + helm available",
@@ -227,6 +298,20 @@ impl Topology for K8sAnywhereTopology {
}
}
impl MultiTargetTopology for K8sAnywhereTopology {
fn current_target(&self) -> DeploymentTarget {
if self.config.use_local_k3d {
return DeploymentTarget::LocalDev;
}
match self.config.harmony_profile.to_lowercase().as_str() {
"staging" => DeploymentTarget::Staging,
"production" => DeploymentTarget::Production,
_ => todo!("HARMONY_PROFILE must be set when use_local_k3d is not set"),
}
}
}
impl HelmCommand for K8sAnywhereTopology {}
#[async_trait]
@@ -237,29 +322,10 @@ impl TenantManager for K8sAnywhereTopology {
.await
}
async fn update_tenant_resource_limits(
&self,
tenant_name: &str,
new_limits: &ResourceLimits,
) -> Result<(), ExecutorError> {
self.get_k8s_tenant_manager()?
.update_tenant_resource_limits(tenant_name, new_limits)
.await
}
async fn update_tenant_network_policy(
&self,
tenant_name: &str,
new_policy: &TenantNetworkPolicy,
) -> Result<(), ExecutorError> {
self.get_k8s_tenant_manager()?
.update_tenant_network_policy(tenant_name, new_policy)
.await
}
async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError> {
self.get_k8s_tenant_manager()?
.deprovision_tenant(tenant_name)
async fn get_tenant_config(&self) -> Option<TenantConfig> {
self.get_k8s_tenant_manager()
.ok()?
.get_tenant_config()
.await
}
}

View File

@@ -1,6 +1,7 @@
mod ha_cluster;
mod host_binding;
mod http;
pub mod installable;
mod k8s_anywhere;
mod localhost;
pub mod oberservability;
@@ -61,6 +62,17 @@ pub trait Topology: Send + Sync {
async fn ensure_ready(&self) -> Result<Outcome, InterpretError>;
}
#[derive(Debug)]
pub enum DeploymentTarget {
LocalDev,
Staging,
Production,
}
pub trait MultiTargetTopology: Topology {
fn current_target(&self) -> DeploymentTarget;
}
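A small sketch of how a consumer is expected to use this capability; the function name is illustrative:
fn describe<T: MultiTargetTopology>(topology: &T) -> &'static str {
    match topology.current_target() {
        DeploymentTarget::LocalDev => "local k3d cluster, safe to experiment",
        DeploymentTarget::Staging => "staging cluster",
        DeploymentTarget::Production => "production cluster, proceed carefully",
    }
}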
pub type IpAddress = IpAddr;
#[derive(Debug, Clone)]

View File

@@ -1,31 +1,79 @@
use std::any::Any;
use async_trait::async_trait;
use log::debug;
use std::fmt::Debug;
use url::Url;
use crate::{
data::{Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
topology::{Topology, installable::Installable},
};
use crate::interpret::InterpretError;
use crate::{interpret::Outcome, topology::Topology};
/// Represents an entity responsible for collecting and organizing observability data
/// from various telemetry sources
/// A `Monitor` abstracts the logic required to scrape, aggregate, and structure
/// monitoring data, enabling consistent processing regardless of the underlying data source.
#[async_trait]
pub trait Monitor<T: Topology>: Debug + Send + Sync {
async fn deploy_monitor(
pub trait AlertSender: Any + Send + Sync + std::fmt::Debug {
fn name(&self) -> String;
}
#[derive(Debug)]
pub struct AlertingInterpret<S: AlertSender> {
pub sender: S,
pub receivers: Vec<Box<dyn AlertReceiver<S>>>,
pub rules: Vec<Box<dyn AlertRule<S>>>,
}
#[async_trait]
impl<S: AlertSender + Installable<T>, T: Topology> Interpret<T> for AlertingInterpret<S> {
async fn execute(
&self,
inventory: &Inventory,
topology: &T,
alert_receivers: Vec<AlertReceiver>,
) -> Result<Outcome, InterpretError>;
async fn delete_monitor(
&self,
topolgy: &T,
alert_receivers: Vec<AlertReceiver>,
) -> Result<Outcome, InterpretError>;
) -> Result<Outcome, InterpretError> {
self.sender.configure(inventory, topology).await?;
for receiver in self.receivers.iter() {
receiver.install(&self.sender).await?;
}
for rule in self.rules.iter() {
debug!("installing rule: {:#?}", rule);
rule.install(&self.sender).await?;
}
self.sender.ensure_installed(inventory, topology).await?;
Ok(Outcome::success(format!(
"successfully installed alert sender {}",
self.sender.name()
)))
}
pub struct AlertReceiver {
pub receiver_id: String,
fn get_name(&self) -> InterpretName {
todo!()
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
#[async_trait]
pub trait AlertReceiver<S: AlertSender>: std::fmt::Debug + Send + Sync {
async fn install(&self, sender: &S) -> Result<Outcome, InterpretError>;
fn clone_box(&self) -> Box<dyn AlertReceiver<S>>;
}
#[async_trait]
pub trait AlertRule<S: AlertSender>: std::fmt::Debug + Send + Sync {
async fn install(&self, sender: &S) -> Result<Outcome, InterpretError>;
fn clone_box(&self) -> Box<dyn AlertRule<S>>;
}
#[async_trait]
pub trait ScrapeTarget<S: AlertSender> {
async fn install(&self, sender: &S) -> Result<(), InterpretError>;
}
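To make the contract concrete, here is a minimal, hypothetical receiver. `Prometheus` is the `AlertSender` implementation used elsewhere in this changeset; the webhook field and install behaviour are assumptions for the sketch:
#[derive(Debug, Clone)]
struct WebhookReceiver {
    url: String,
}

#[async_trait]
impl AlertReceiver<Prometheus> for WebhookReceiver {
    async fn install(&self, _sender: &Prometheus) -> Result<Outcome, InterpretError> {
        // A real implementation would register the webhook in the
        // sender's alertmanager configuration here.
        Ok(Outcome::success(format!(
            "webhook receiver {} installed",
            self.url
        )))
    }

    fn clone_box(&self) -> Box<dyn AlertReceiver<Prometheus>> {
        Box::new(self.clone())
    }
}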

View File

@@ -1,53 +1,117 @@
use std::sync::Arc;
use crate::{executors::ExecutorError, topology::k8s::K8sClient};
use crate::{
executors::ExecutorError,
topology::k8s::{ApplyStrategy, K8sClient},
};
use async_trait::async_trait;
use derive_new::new;
use k8s_openapi::api::core::v1::Namespace;
use k8s_openapi::{
api::{
core::v1::{LimitRange, Namespace, ResourceQuota},
networking::v1::{
NetworkPolicy, NetworkPolicyEgressRule, NetworkPolicyIngressRule, NetworkPolicyPort,
},
},
apimachinery::pkg::util::intstr::IntOrString,
};
use kube::Resource;
use log::{debug, info, warn};
use serde::de::DeserializeOwned;
use serde_json::json;
use tokio::sync::OnceCell;
use super::{ResourceLimits, TenantConfig, TenantManager, TenantNetworkPolicy};
use super::{TenantConfig, TenantManager};
#[derive(new)]
#[derive(Clone, Debug)]
pub struct K8sTenantManager {
k8s_client: Arc<K8sClient>,
k8s_tenant_config: Arc<OnceCell<TenantConfig>>,
}
#[async_trait]
impl TenantManager for K8sTenantManager {
async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError> {
impl K8sTenantManager {
pub fn new(client: Arc<K8sClient>) -> Self {
Self {
k8s_client: client,
k8s_tenant_config: Arc::new(OnceCell::new()),
}
}
}
impl K8sTenantManager {
fn get_namespace_name(&self, config: &TenantConfig) -> String {
config.name.clone()
}
fn ensure_constraints(&self, _namespace: &Namespace) -> Result<(), ExecutorError> {
warn!("Validate that when tenant already exists (by id) that name has not changed");
warn!("Make sure other Tenant constraints are respected by this k8s implementation");
Ok(())
}
async fn apply_resource<
K: Resource + std::fmt::Debug + Sync + DeserializeOwned + Default + serde::Serialize + Clone,
>(
&self,
mut resource: K,
config: &TenantConfig,
) -> Result<K, ExecutorError>
where
<K as kube::Resource>::DynamicType: Default,
<K as kube::Resource>::Scope: ApplyStrategy<K>,
{
self.apply_labels(&mut resource, config);
self.k8s_client
.apply(&resource, Some(&self.get_namespace_name(config)))
.await
.map_err(|e| {
ExecutorError::UnexpectedError(format!("Could not create Tenant resource : {e}"))
})
}
fn apply_labels<K: Resource>(&self, resource: &mut K, config: &TenantConfig) {
let labels = resource.meta_mut().labels.get_or_insert_default();
labels.insert(
"app.kubernetes.io/managed-by".to_string(),
"harmony".to_string(),
);
labels.insert("harmony/tenant-id".to_string(), config.id.to_string());
labels.insert("harmony/tenant-name".to_string(), config.name.clone());
}
fn build_namespace(&self, config: &TenantConfig) -> Result<Namespace, ExecutorError> {
let namespace = json!(
{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"labels": {
"harmony.nationtech.io/tenant.id": config.id,
"harmony.nationtech.io/tenant.id": config.id.to_string(),
"harmony.nationtech.io/tenant.name": config.name,
},
"name": config.name,
"name": self.get_namespace_name(config),
},
}
);
todo!("Validate that when tenant already exists (by id) that name has not changed");
let namespace: Namespace = serde_json::from_value(namespace).unwrap();
serde_json::from_value(namespace).map_err(|e| {
ExecutorError::ConfigurationError(format!(
"Could not build TenantManager Namespace. {}",
e
))
})
}
fn build_resource_quota(&self, config: &TenantConfig) -> Result<ResourceQuota, ExecutorError> {
let resource_quota = json!(
{
"apiVersion": "v1",
"kind": "List",
"items": [
{
"apiVersion": "v1",
"kind": "ResourceQuota",
"metadata": {
"name": config.name,
"name": format!("{}-quota", config.name),
"labels": {
"harmony.nationtech.io/tenant.id": config.id,
"harmony.nationtech.io/tenant.id": config.id.to_string(),
"harmony.nationtech.io/tenant.name": config.name,
},
"namespace": config.name,
"namespace": self.get_namespace_name(config),
},
"spec": {
"hard": {
@@ -55,41 +119,323 @@ impl TenantManager for K8sTenantManager {
"limits.memory": format!("{:.3}Gi", config.resource_limits.memory_limit_gb),
"requests.cpu": format!("{:.0}",config.resource_limits.cpu_request_cores),
"requests.memory": format!("{:.3}Gi", config.resource_limits.memory_request_gb),
"requests.storage": format!("{:.3}", config.resource_limits.storage_total_gb),
"requests.storage": format!("{:.3}Gi", config.resource_limits.storage_total_gb),
"pods": "20",
"services": "10",
"configmaps": "30",
"secrets": "30",
"configmaps": "60",
"secrets": "60",
"persistentvolumeclaims": "15",
"services.loadbalancers": "2",
"services.nodeports": "5",
"limits.ephemeral-storage": "10Gi",
}
}
}
);
serde_json::from_value(resource_quota).map_err(|e| {
ExecutorError::ConfigurationError(format!(
"Could not build TenantManager ResourceQuota. {}",
e
))
})
}
fn build_limit_range(&self, config: &TenantConfig) -> Result<LimitRange, ExecutorError> {
let limit_range = json!({
"apiVersion": "v1",
"kind": "LimitRange",
"metadata": {
"name": format!("{}-defaults", config.name),
"labels": {
"harmony.nationtech.io/tenant.id": config.id.to_string(),
"harmony.nationtech.io/tenant.name": config.name,
},
"namespace": self.get_namespace_name(config),
},
"spec": {
"limits": [
{
"type": "Container",
"default": {
"cpu": "500m",
"memory": "500Mi"
},
"defaultRequest": {
"cpu": "100m",
"memory": "100Mi"
},
}
]
}
});
serde_json::from_value(limit_range).map_err(|e| {
ExecutorError::ConfigurationError(format!(
"Could not build TenantManager LimitRange. {}",
e
))
})
}
fn build_network_policy(&self, config: &TenantConfig) -> Result<NetworkPolicy, ExecutorError> {
let network_policy = json!({
"apiVersion": "networking.k8s.io/v1",
"kind": "NetworkPolicy",
"metadata": {
"name": format!("{}-network-policy", config.name),
"namespace": self.get_namespace_name(config),
},
"spec": {
"podSelector": {},
"egress": [
{
"to": [
{ "podSelector": {} }
]
},
{
"to": [
{
"podSelector": {},
"namespaceSelector": {
"matchLabels": {
"kubernetes.io/metadata.name": "kube-system"
}
}
}
]
},
{
"to": [
{
"podSelector": {},
"namespaceSelector": {
"matchLabels": {
"kubernetes.io/metadata.name": "openshift-dns"
}
}
}
]
},
{
"to": [
{
"ipBlock": {
"cidr": "10.43.0.1/32",
}
}
]
},
{
"to": [
{
"ipBlock": {
"cidr": "172.23.0.0/16",
}
}
]
},
{
"to": [
{
"ipBlock": {
"cidr": "0.0.0.0/0",
"except": [
"10.0.0.0/8",
"172.16.0.0/12",
"192.168.0.0/16",
"192.0.0.0/24",
"192.0.2.0/24",
"192.88.99.0/24",
"192.18.0.0/15",
"198.51.100.0/24",
"169.254.0.0/16",
"203.0.113.0/24",
"127.0.0.0/8",
"224.0.0.0/4",
"240.0.0.0/4",
"100.64.0.0/10",
"233.252.0.0/24",
"0.0.0.0/8"
]
}
}
]
}
],
"ingress": [
{
"from": [
{ "podSelector": {} }
]
}
],
"policyTypes": [
"Ingress",
"Egress"
]
}
});
let mut network_policy: NetworkPolicy =
serde_json::from_value(network_policy).map_err(|e| {
ExecutorError::ConfigurationError(format!(
"Could not build TenantManager NetworkPolicy. {}",
e
))
})?;
config
.network_policy
.additional_allowed_cidr_ingress
.iter()
.try_for_each(|c| -> Result<(), ExecutorError> {
let cidr_list: Vec<serde_json::Value> =
c.0.iter()
.map(|ci| {
json!({
"ipBlock": {
"cidr": ci.to_string(),
}
})
})
.collect();
let ports: Option<Vec<NetworkPolicyPort>> =
c.1.as_ref().map(|spec| match &spec.data {
super::PortSpecData::SinglePort(port) => vec![NetworkPolicyPort {
port: Some(IntOrString::Int(port.clone().into())),
..Default::default()
}],
super::PortSpecData::PortRange(start, end) => vec![NetworkPolicyPort {
port: Some(IntOrString::Int(start.clone().into())),
end_port: Some(end.clone().into()),
protocol: None, // Not currently supported by Harmony
}],
super::PortSpecData::ListOfPorts(items) => items
.iter()
.map(|i| NetworkPolicyPort {
port: Some(IntOrString::Int(i.clone().into())),
..Default::default()
})
.collect(),
});
let rule = serde_json::from_value::<NetworkPolicyIngressRule>(json!({
"from": cidr_list,
"ports": ports,
}))
.map_err(|e| {
ExecutorError::ConfigurationError(format!(
"Could not build TenantManager NetworkPolicyIngressRule. {}",
e
))
})?;
network_policy
.spec
.as_mut()
.unwrap()
.ingress
.as_mut()
.unwrap()
.push(rule);
Ok(())
})?;
config
.network_policy
.additional_allowed_cidr_egress
.iter()
.try_for_each(|c| -> Result<(), ExecutorError> {
let cidr_list: Vec<serde_json::Value> =
c.0.iter()
.map(|ci| {
json!({
"ipBlock": {
"cidr": ci.to_string(),
}
})
})
.collect();
let ports: Option<Vec<NetworkPolicyPort>> =
c.1.as_ref().map(|spec| match &spec.data {
super::PortSpecData::SinglePort(port) => vec![NetworkPolicyPort {
port: Some(IntOrString::Int(port.clone().into())),
..Default::default()
}],
super::PortSpecData::PortRange(start, end) => vec![NetworkPolicyPort {
port: Some(IntOrString::Int(start.clone().into())),
end_port: Some(end.clone().into()),
protocol: None, // Not currently supported by Harmony
}],
super::PortSpecData::ListOfPorts(items) => items
.iter()
.map(|i| NetworkPolicyPort {
port: Some(IntOrString::Int(i.clone().into())),
..Default::default()
})
.collect(),
});
let rule = serde_json::from_value::<NetworkPolicyEgressRule>(json!({
"to": cidr_list,
"ports": ports,
}))
.map_err(|e| {
ExecutorError::ConfigurationError(format!(
"Could not build TenantManager NetworkPolicyEgressRule. {}",
e
))
})?;
network_policy
.spec
.as_mut()
.unwrap()
.egress
.as_mut()
.unwrap()
.push(rule);
Ok(())
})?;
Ok(network_policy)
}
fn store_config(&self, config: &TenantConfig) {
let _ = self.k8s_tenant_config.set(config.clone());
}
}
#[async_trait]
impl TenantManager for K8sTenantManager {
async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError> {
let namespace = self.build_namespace(config)?;
let resource_quota = self.build_resource_quota(config)?;
let network_policy = self.build_network_policy(config)?;
let resource_limit_range = self.build_limit_range(config)?;
self.ensure_constraints(&namespace)?;
debug!("Creating namespace for tenant {}", config.name);
self.apply_resource(namespace, config).await?;
debug!("Creating resource_quota for tenant {}", config.name);
self.apply_resource(resource_quota, config).await?;
debug!("Creating limit_range for tenant {}", config.name);
self.apply_resource(resource_limit_range, config).await?;
debug!("Creating network_policy for tenant {}", config.name);
self.apply_resource(network_policy, config).await?;
info!(
"Success provisionning K8s tenant id {} name {}",
config.id, config.name
);
self.store_config(config);
Ok(())
}
async fn update_tenant_resource_limits(
&self,
tenant_name: &str,
new_limits: &ResourceLimits,
) -> Result<(), ExecutorError> {
todo!()
}
async fn update_tenant_network_policy(
&self,
tenant_name: &str,
new_policy: &TenantNetworkPolicy,
) -> Result<(), ExecutorError> {
todo!()
}
async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError> {
todo!()
async fn get_tenant_config(&self) -> Option<TenantConfig> {
self.k8s_tenant_config.get().cloned()
}
}
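End to end, provisioning now creates the namespace, quota, limit range and network policy, then caches the config. A hedged usage sketch, assuming an existing `Arc<K8sClient>` named `k8s_client`:
let manager = K8sTenantManager::new(k8s_client);
let config = TenantConfig {
    name: "team-a".to_string(),
    ..TenantConfig::default()
};
manager.provision_tenant(&config).await?;

// The last provisioned config is cached in the OnceCell.
assert_eq!(
    manager.get_tenant_config().await.map(|c| c.name),
    Some("team-a".to_string())
);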

View File

@@ -5,9 +5,10 @@ use crate::executors::ExecutorError;
#[async_trait]
pub trait TenantManager {
/// Provisions a new tenant based on the provided configuration.
/// This operation should be idempotent; if a tenant with the same `config.name`
/// Creates or updates a tenant based on the provided configuration.
/// This operation should be idempotent; if a tenant with the same `config.id`
/// already exists and matches the config, it will succeed without changes.
///
/// If it exists but differs, it will be updated, or an error will be returned if the
/// update action is not supported
///
@@ -15,32 +16,5 @@ pub trait TenantManager {
/// * `config`: The desired configuration for the new tenant.
async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError>;
/// Updates the resource limits for an existing tenant.
///
/// # Arguments
/// * `tenant_name`: The logical name of the tenant to update.
/// * `new_limits`: The new set of resource limits to apply.
async fn update_tenant_resource_limits(
&self,
tenant_name: &str,
new_limits: &ResourceLimits,
) -> Result<(), ExecutorError>;
/// Updates the high-level network isolation policy for an existing tenant.
///
/// # Arguments
/// * `tenant_name`: The logical name of the tenant to update.
/// * `new_policy`: The new network policy to apply.
async fn update_tenant_network_policy(
&self,
tenant_name: &str,
new_policy: &TenantNetworkPolicy,
) -> Result<(), ExecutorError>;
/// Decommissions an existing tenant, removing its isolated context and associated resources.
/// This operation should be idempotent.
///
/// # Arguments
/// * `tenant_name`: The logical name of the tenant to deprovision.
async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError>;
async fn get_tenant_config(&self) -> Option<TenantConfig>;
}

View File

@@ -1,10 +1,10 @@
pub mod k8s;
mod manager;
use std::str::FromStr;
pub use manager::*;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use crate::data::Id;
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)] // Assuming serde for Scores
@@ -21,13 +21,26 @@ pub struct TenantConfig {
/// High-level network isolation policies for the tenant.
pub network_policy: TenantNetworkPolicy,
/// Key-value pairs for provider-specific tagging, labeling, or metadata.
/// Useful for billing, organization, or filtering within the provider's console.
pub labels_or_tags: HashMap<String, String>,
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Default)]
impl Default for TenantConfig {
fn default() -> Self {
let id = Id::default();
Self {
name: format!("tenant_{id}"),
id,
resource_limits: ResourceLimits::default(),
network_policy: TenantNetworkPolicy {
default_inter_tenant_ingress: InterTenantIngressPolicy::DenyAll,
default_internet_egress: InternetEgressPolicy::AllowAll,
additional_allowed_cidr_ingress: vec![],
additional_allowed_cidr_egress: vec![],
},
labels_or_tags: HashMap::new(),
}
}
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct ResourceLimits {
/// Requested/guaranteed CPU cores (e.g., 2.0).
pub cpu_request_cores: f32,
@@ -43,6 +56,18 @@ pub struct ResourceLimits {
pub storage_total_gb: f32,
}
impl Default for ResourceLimits {
fn default() -> Self {
Self {
cpu_request_cores: 4.0,
cpu_limit_cores: 4.0,
memory_request_gb: 4.0,
memory_limit_gb: 4.0,
storage_total_gb: 20.0,
}
}
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct TenantNetworkPolicy {
/// Policy for ingress traffic originating from other tenants within the same Harmony-managed environment.
@@ -50,6 +75,20 @@ pub struct TenantNetworkPolicy {
/// Policy for egress traffic destined for the public internet.
pub default_internet_egress: InternetEgressPolicy,
pub additional_allowed_cidr_ingress: Vec<(Vec<cidr::Ipv4Cidr>, Option<PortSpec>)>,
pub additional_allowed_cidr_egress: Vec<(Vec<cidr::Ipv4Cidr>, Option<PortSpec>)>,
}
impl Default for TenantNetworkPolicy {
fn default() -> Self {
TenantNetworkPolicy {
default_inter_tenant_ingress: InterTenantIngressPolicy::DenyAll,
default_internet_egress: InternetEgressPolicy::DenyAll,
additional_allowed_cidr_ingress: vec![],
additional_allowed_cidr_egress: vec![],
}
}
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
@@ -65,3 +104,121 @@ pub enum InternetEgressPolicy {
/// Deny all outbound traffic to the internet by default.
DenyAll,
}
/// Represents a port specification that can be either a single port, a comma-separated list of ports,
/// or a range separated by a dash.
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct PortSpec {
/// The parsed representation of the ports, kept as structured data for serialization/deserialization purposes.
pub data: PortSpecData,
}
impl PortSpec {
/// Parses a port specification from a string.
///
/// Supported formats: a single port (`"2144"`), a dash-separated range
/// (`"80-90"`), or a comma-separated list (`"2144,3424"`).
fn parse_from_str(spec: &str) -> Result<PortSpec, String> {
// Check for single port
if let Ok(port) = spec.parse::<u16>() {
let spec = PortSpecData::SinglePort(port);
return Ok(Self { data: spec });
}
if let Some(range) = spec.find('-') {
let start_str = &spec[..range];
let end_str = &spec[(range + 1)..];
if let (Ok(start), Ok(end)) = (start_str.parse::<u16>(), end_str.parse::<u16>()) {
let spec = PortSpecData::PortRange(start, end);
return Ok(Self { data: spec });
}
}
let ports: Vec<&str> = spec.split(',').collect();
if !ports.is_empty() && ports.iter().all(|p| p.parse::<u16>().is_ok()) {
let maybe_ports = ports.iter().try_fold(vec![], |mut list, &p| {
if let Ok(p) = p.parse::<u16>() {
list.push(p);
return Ok(list);
}
Err(())
});
if let Ok(ports) = maybe_ports {
let spec = PortSpecData::ListOfPorts(ports);
return Ok(Self { data: spec });
}
}
Err(format!("Invalid port spec format {spec}"))
}
}
impl FromStr for PortSpec {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Self::parse_from_str(s)
}
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub enum PortSpecData {
SinglePort(u16),
PortRange(u16, u16),
ListOfPorts(Vec<u16>),
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_single_port() {
let port_spec = "2144".parse::<PortSpec>().unwrap();
match port_spec.data {
PortSpecData::SinglePort(port) => assert_eq!(port, 2144),
_ => panic!("Expected SinglePort"),
}
}
#[test]
fn test_port_range() {
let port_spec = "80-90".parse::<PortSpec>().unwrap();
match port_spec.data {
PortSpecData::PortRange(start, end) => {
assert_eq!(start, 80);
assert_eq!(end, 90);
}
_ => panic!("Expected PortRange"),
}
}
#[test]
fn test_list_of_ports() {
let port_spec = "2144,3424".parse::<PortSpec>().unwrap();
match port_spec.data {
PortSpecData::ListOfPorts(ports) => {
assert_eq!(ports[0], 2144);
assert_eq!(ports[1], 3424);
}
_ => panic!("Expected ListOfPorts"),
}
}
#[test]
fn test_invalid_port_spec() {
let result = "invalid".parse::<PortSpec>();
assert!(result.is_err());
}
#[test]
fn test_empty_input() {
let result = "".parse::<PortSpec>();
assert!(result.is_err());
}
#[test]
fn test_only_coma() {
let result = ",".parse::<PortSpec>();
assert!(result.is_err());
}
}

View File

@@ -0,0 +1,42 @@
use async_trait::async_trait;
use serde::Serialize;
use crate::topology::Topology;
/// An ApplicationFeature provided by harmony, such as Backups, Monitoring, MultisiteAvailability,
/// ContinuousIntegration, ContinuousDelivery
#[async_trait]
pub trait ApplicationFeature<T: Topology>:
std::fmt::Debug + Send + Sync + ApplicationFeatureClone<T>
{
async fn ensure_installed(&self, topology: &T) -> Result<(), String>;
fn name(&self) -> String;
}
trait ApplicationFeatureClone<T: Topology> {
fn clone_box(&self) -> Box<dyn ApplicationFeature<T>>;
}
impl<A, T: Topology> ApplicationFeatureClone<T> for A
where
A: ApplicationFeature<T> + Clone + 'static,
{
fn clone_box(&self) -> Box<dyn ApplicationFeature<T>> {
Box::new(self.clone())
}
}
impl<T: Topology> Serialize for Box<dyn ApplicationFeature<T>> {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
todo!()
}
}
impl<T: Topology> Clone for Box<dyn ApplicationFeature<T>> {
fn clone(&self) -> Self {
self.clone_box()
}
}
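The blanket `ApplicationFeatureClone` impl is what makes boxed features cloneable: `Box<dyn ApplicationFeature<T>>` cannot derive `Clone`, so `clone()` forwards to the concrete type's `Clone` via `clone_box`. A hypothetical no-op feature to illustrate:
#[derive(Debug, Clone)]
struct Noop;

#[async_trait]
impl<T: Topology> ApplicationFeature<T> for Noop {
    async fn ensure_installed(&self, _topology: &T) -> Result<(), String> {
        Ok(())
    }
    fn name(&self) -> String {
        "Noop".to_string()
    }
}

// Cloning the vector works because Clone for the boxed trait object
// delegates to clone_box.
fn duplicate<T: Topology>(
    features: &[Box<dyn ApplicationFeature<T>>],
) -> Vec<Box<dyn ApplicationFeature<T>>> {
    features.to_vec()
}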

View File

@@ -0,0 +1,357 @@
use std::{backtrace, collections::HashMap};
use k8s_openapi::{Metadata, NamespaceResourceScope, Resource};
use log::debug;
use serde::Serialize;
use serde_yaml::Value;
use url::Url;
use crate::modules::application::features::CDApplicationConfig;
#[derive(Clone, Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct Helm {
pub pass_credentials: Option<bool>,
pub parameters: Vec<Value>,
pub file_parameters: Vec<Value>,
pub release_name: Option<String>,
pub value_files: Vec<String>,
pub ignore_missing_value_files: Option<bool>,
pub values: Option<String>,
pub values_object: Option<Value>,
pub skip_crds: Option<bool>,
pub skip_schema_validation: Option<bool>,
pub version: Option<String>,
pub kube_version: Option<String>,
pub api_versions: Vec<String>,
pub namespace: Option<String>,
}
#[derive(Clone, Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct Source {
pub repo_url: Url,
pub target_revision: Option<String>,
pub chart: String,
pub helm: Helm,
}
#[derive(Clone, Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct Automated {
pub prune: bool,
pub self_heal: bool,
pub allow_empty: bool,
}
#[derive(Clone, Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct Backoff {
pub duration: String,
pub factor: u32,
pub max_duration: String,
}
#[derive(Clone, Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct Retry {
pub limit: u32,
pub backoff: Backoff,
}
#[derive(Clone, Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct SyncPolicy {
pub automated: Automated,
pub sync_options: Vec<String>,
pub retry: Retry,
}
#[derive(Clone, Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct ArgoApplication {
pub name: String,
pub namespace: Option<String>,
pub project: String,
pub source: Source,
pub sync_policy: SyncPolicy,
pub revision_history_limit: u32,
}
impl Default for ArgoApplication {
fn default() -> Self {
Self {
name: Default::default(),
namespace: Default::default(),
project: Default::default(),
source: Source {
repo_url: Url::parse("http://asdf").expect("Couldn't parse to URL"),
target_revision: None,
chart: "".to_string(),
helm: Helm {
pass_credentials: None,
parameters: vec![],
file_parameters: vec![],
release_name: None,
value_files: vec![],
ignore_missing_value_files: None,
values: None,
values_object: None,
skip_crds: None,
skip_schema_validation: None,
version: None,
kube_version: None,
api_versions: vec![],
namespace: None,
},
},
sync_policy: SyncPolicy {
automated: Automated {
prune: false,
self_heal: false,
allow_empty: false,
},
sync_options: vec![],
retry: Retry {
limit: 5,
backoff: Backoff {
duration: "5s".to_string(),
factor: 2,
max_duration: "3m".to_string(),
},
},
},
revision_history_limit: 10,
}
}
}
impl From<CDApplicationConfig> for ArgoApplication {
fn from(value: CDApplicationConfig) -> Self {
Self {
name: value.name,
namespace: Some(value.namespace),
project: "default".to_string(),
source: Source {
repo_url: Url::parse(value.helm_chart_repo_url.to_string().as_str())
.expect("couldn't convert to URL"),
target_revision: None,
chart: value.helm_chart_name,
helm: Helm {
pass_credentials: None,
parameters: vec![],
file_parameters: vec![],
release_name: None,
value_files: vec![],
ignore_missing_value_files: None,
values: None,
values_object: Some(value.values_overrides),
skip_crds: None,
skip_schema_validation: None,
version: None,
kube_version: None,
api_versions: vec![],
namespace: None,
},
},
sync_policy: SyncPolicy {
automated: Automated {
prune: false,
self_heal: false,
allow_empty: true,
},
sync_options: vec![],
retry: Retry {
limit: 5,
backoff: Backoff {
duration: "5s".to_string(),
factor: 2,
max_duration: "3m".to_string(),
},
},
},
..Self::default()
}
}
}
impl ArgoApplication {
pub fn to_yaml(&self) -> serde_yaml::Value {
let name = &self.name;
let namespace = if let Some(ns) = self.namespace.as_ref() {
&ns
} else {
"argocd"
};
let project = &self.project;
let source = &self.source;
let yaml_str = format!(
r#"
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: {name}
# You'll usually want to add your resources to the argocd namespace.
namespace: {namespace}
spec:
# The project the application belongs to.
project: {project}
# Destination cluster and namespace to deploy the application
destination:
# cluster API URL
server: https://kubernetes.default.svc
# or cluster name
# name: in-cluster
# The namespace will only be set for namespace-scoped resources that have not set a value for .metadata.namespace
namespace: {namespace}
"#
);
let mut yaml_value: Value =
serde_yaml::from_str(yaml_str.as_str()).expect("couldn't parse string to YAML");
let mut spec = yaml_value
.get_mut("spec")
.expect("couldn't get spec from yaml")
.as_mapping_mut()
.expect("couldn't unwrap spec as mutable mapping");
let source =
serde_yaml::to_value(&self.source).expect("couldn't serialize source to value");
let sync_policy = serde_yaml::to_value(&self.sync_policy)
.expect("couldn't serialize sync_policy to value");
let revision_history_limit = serde_yaml::to_value(&self.revision_history_limit)
.expect("couldn't serialize revision_history_limit to value");
spec.insert(
serde_yaml::to_value("source").expect("string to value failed"),
source,
);
spec.insert(
serde_yaml::to_value("syncPolicy").expect("string to value failed"),
sync_policy,
);
spec.insert(
serde_yaml::to_value("revisionHistoryLimit")
.expect("couldn't convert str to yaml value"),
revision_history_limit,
);
debug!("spec: {}", serde_yaml::to_string(spec).unwrap());
debug!(
"entire yaml_value: {}",
serde_yaml::to_string(&yaml_value).unwrap()
);
yaml_value
}
}
#[cfg(test)]
mod tests {
use url::Url;
use crate::modules::application::features::{
ArgoApplication, Automated, Backoff, Helm, Retry, Source, SyncPolicy,
};
#[test]
fn test_argo_application_to_yaml_happy_path() {
let app = ArgoApplication {
name: "test".to_string(),
namespace: Some("test-ns".to_string()),
project: "test-project".to_string(),
source: Source {
repo_url: Url::parse("http://test").unwrap(),
target_revision: None,
chart: "test-chart".to_string(),
helm: Helm {
pass_credentials: None,
parameters: vec![],
file_parameters: vec![],
release_name: Some("test-release-neame".to_string()),
value_files: vec![],
ignore_missing_value_files: None,
values: None,
values_object: None,
skip_crds: None,
skip_schema_validation: None,
version: None,
kube_version: None,
api_versions: vec![],
namespace: None,
},
},
sync_policy: SyncPolicy {
automated: Automated {
prune: false,
self_heal: false,
allow_empty: false,
},
sync_options: vec![],
retry: Retry {
limit: 5,
backoff: Backoff {
duration: "5s".to_string(),
factor: 2,
max_duration: "3m".to_string(),
},
},
},
revision_history_limit: 10,
};
let expected_yaml_output = r#"apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: test
namespace: test-ns
spec:
project: test-project
destination:
server: https://kubernetes.default.svc
namespace: test-ns
source:
repoUrl: http://test/
targetRevision: null
chart: test-chart
helm:
passCredentials: null
parameters: []
fileParameters: []
releaseName: test-release-name
valueFiles: []
ignoreMissingValueFiles: null
values: null
valuesObject: null
skipCrds: null
skipSchemaValidation: null
version: null
kubeVersion: null
apiVersions: []
namespace: null
syncPolicy:
automated:
prune: false
selfHeal: false
allowEmpty: false
syncOptions: []
retry:
limit: 5
backoff:
duration: 5s
factor: 2
maxDuration: 3m
revisionHistoryLimit: 10"#;
assert_eq!(
expected_yaml_output.trim(),
serde_yaml::to_string(&app.clone().to_yaml())
.unwrap()
.trim()
);
}
}

View File

@@ -0,0 +1,236 @@
use std::{io::Write, process::Command, sync::Arc};
use async_trait::async_trait;
use log::{error, info};
use serde_yaml::Value;
use tempfile::NamedTempFile;
use crate::{
config::HARMONY_DATA_DIR,
data::Version,
inventory::Inventory,
modules::{
application::{
Application, ApplicationFeature, HelmPackage, OCICompliant,
features::{ArgoApplication, ArgoHelmScore},
},
helm::chart::HelmChartScore,
},
score::Score,
topology::{DeploymentTarget, HelmCommand, K8sclient, MultiTargetTopology, Topology, Url},
};
/// ContinuousDelivery in Harmony provides this functionality :
///
/// - **Package** the application
/// - **Push** to an artifact registry
/// - **Deploy** to a testing environment
/// - **Deploy** to a production environment
///
/// It is intended to be used as an application feature passed down to an ApplicationInterpret. For
/// example :
///
/// ```rust,ignore
/// let app = RustApplicationScore {
/// name: "My Rust App".to_string(),
/// features: vec![ContinuousDelivery::default()],
/// };
/// ```
///
/// *Note :*
///
/// By default, the Harmony Opinionated Pipeline is built using these technologies :
///
/// - Gitea Action (executes pipeline steps)
/// - Docker to build an OCI container image
/// - Helm chart to package Kubernetes resources
/// - Harbor as artifact registry
/// - ArgoCD to install/upgrade/rollback/inspect k8s resources
/// - Kubernetes for runtime orchestration
#[derive(Debug, Default, Clone)]
pub struct ContinuousDelivery<A: OCICompliant + HelmPackage> {
pub application: Arc<A>,
}
impl<A: OCICompliant + HelmPackage> ContinuousDelivery<A> {
async fn deploy_to_local_k3d(
&self,
app_name: String,
chart_url: String,
image_name: String,
) -> Result<(), String> {
error!(
"FIXME This works only with local k3d installations, which is fine only for current demo purposes. We assume usage of K8sAnywhereTopology"
);
error!("TODO hardcoded k3d bin path is wrong");
let k3d_bin_path = (*HARMONY_DATA_DIR).join("k3d").join("k3d");
// --- 1. Import the container image into the k3d cluster ---
info!(
"Importing image '{}' into k3d cluster 'harmony'",
image_name
);
let import_output = Command::new(&k3d_bin_path)
.args(["image", "import", &image_name, "--cluster", "harmony"])
.output()
.map_err(|e| format!("Failed to execute k3d image import: {}", e))?;
if !import_output.status.success() {
return Err(format!(
"Failed to import image to k3d: {}",
String::from_utf8_lossy(&import_output.stderr)
));
}
// --- 2. Get the kubeconfig for the k3d cluster and write it to a temp file ---
info!("Retrieving kubeconfig for k3d cluster 'harmony'");
let kubeconfig_output = Command::new(&k3d_bin_path)
.args(["kubeconfig", "get", "harmony"])
.output()
.map_err(|e| format!("Failed to execute k3d kubeconfig get: {}", e))?;
if !kubeconfig_output.status.success() {
return Err(format!(
"Failed to get kubeconfig from k3d: {}",
String::from_utf8_lossy(&kubeconfig_output.stderr)
));
}
let mut temp_kubeconfig = NamedTempFile::new()
.map_err(|e| format!("Failed to create temp file for kubeconfig: {}", e))?;
temp_kubeconfig
.write_all(&kubeconfig_output.stdout)
.map_err(|e| format!("Failed to write to temp kubeconfig file: {}", e))?;
let kubeconfig_path = temp_kubeconfig.path().to_str().unwrap();
// --- 3. Install or upgrade the Helm chart in the cluster ---
info!(
"Deploying Helm chart '{}' to namespace '{}'",
chart_url, app_name
);
let release_name = app_name.to_lowercase(); // Helm release names are often lowercase
let helm_output = Command::new("helm")
.args([
"upgrade",
"--install",
&release_name,
&chart_url,
"--namespace",
&app_name,
"--create-namespace",
"--wait", // Wait for the deployment to be ready
"--kubeconfig",
kubeconfig_path,
])
.spawn()
.map_err(|e| format!("Failed to execute helm upgrade: {}", e))?
.wait_with_output()
.map_err(|e| format!("Failed to execute helm upgrade: {}", e))?;
if !helm_output.status.success() {
return Err(format!(
"Failed to deploy Helm chart: {}",
String::from_utf8_lossy(&helm_output.stderr)
));
}
info!("Successfully deployed '{}' to local k3d cluster.", app_name);
Ok(())
}
}
#[async_trait]
impl<
A: OCICompliant + HelmPackage + Clone + 'static,
T: Topology + HelmCommand + MultiTargetTopology + K8sclient + 'static,
> ApplicationFeature<T> for ContinuousDelivery<A>
{
async fn ensure_installed(&self, topology: &T) -> Result<(), String> {
let image = self.application.image_name();
// TODO
error!(
"TODO reverse helm chart packaging and docker image build. I put helm package first for faster iterations"
);
// TODO Write CI/CD workflow files
// we can autotedect the CI type using the remote url (default to github action for github
// url, etc..)
// Or ask for it when unknown
let helm_chart = self.application.build_push_helm_package(&image).await?;
info!("Pushed new helm chart {helm_chart}");
let image = self.application.build_push_oci_image().await?;
info!("Pushed new docker image {image}");
info!("Installing ContinuousDelivery feature");
// TODO this is a temporary hack for demo purposes, the deployment target should be driven
// by the topology only and we should not have to know how to perform tasks like this for
// which the topology should be responsible.
//
// That said, this will require some careful architectural decisions, since the concept of
// deployment targets / profiles is probably a layer of complexity that we won't be
// completely able to avoid
//
// I'll try something for now that must be thought through after: add a deployment_profile
// function to the topology trait that returns a profile, then anybody who needs it can
// access it. This forces every Topology to understand the concept of targets though... So
// instead I'll create a new Capability which is MultiTargetTopology and we'll see how it
// goes. It still does not feel right though.
match topology.current_target() {
DeploymentTarget::LocalDev => {
self.deploy_to_local_k3d(self.application.name(), helm_chart, image)
.await?;
}
target => {
info!("Deploying to target {target:?}");
let score = ArgoHelmScore {
namespace: "harmonydemo-staging".to_string(),
openshift: true,
domain: "argo.harmonydemo.apps.st.mcd".to_string(),
argo_apps: vec![ArgoApplication::from(CDApplicationConfig {
// helm pull oci://hub.nationtech.io/harmony/harmony-example-rust-webapp-chart/harmony-example-rust-webapp-chart --version 0.1.0
version: Version::from("0.1.0").unwrap(),
helm_chart_repo_url: Url::Url(url::Url::parse("oci://hub.nationtech.io/harmony/harmony-example-rust-webapp-chart/harmony-example-rust-webapp-chart").unwrap()),
helm_chart_name: "harmony-example-rust-webapp-chart".to_string(),
values_overrides: Value::Null,
name: "harmony-demo-rust-webapp".to_string(),
namespace: "harmonydemo-staging".to_string(),
})],
};
score
.create_interpret()
.execute(&Inventory::empty(), topology)
.await
.unwrap();
}
};
todo!("1. Create ArgoCD score that installs argo using helm chart, see if Taha's already done it
- [X] Package app (docker image, helm chart)
- [X] Push to registry
- [X] Push only if staging or prod
- [X] Deploy to local k3d when target is local
- [ ] Poke Argo
- [ ] Ensure app is up")
}
fn name(&self) -> String {
"ContinuousDelivery".to_string()
}
}
/// For now this is entirely bound to K8s / ArgoCD, will have to be revisited when we support
/// more CD systems
pub struct CDApplicationConfig {
pub version: Version,
pub helm_chart_repo_url: Url,
pub helm_chart_name: String,
pub values_overrides: Value,
pub name: String,
pub namespace: String,
}
pub trait ContinuousDeliveryApplication {
fn get_config(&self) -> CDApplicationConfig;
}
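A hedged sketch of building a config and converting it to an `ArgoApplication`; the registry URL and names are illustrative, mirroring the hardcoded demo values above:
let config = CDApplicationConfig {
    version: Version::from("0.1.0").unwrap(),
    helm_chart_repo_url: Url::Url(
        url::Url::parse("oci://registry.example.com/charts/my-app").unwrap(),
    ),
    helm_chart_name: "my-app".to_string(),
    values_overrides: Value::Null,
    name: "my-app".to_string(),
    namespace: "my-app-staging".to_string(),
};
let argo_app = ArgoApplication::from(config);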

View File

@@ -0,0 +1,42 @@
use async_trait::async_trait;
use log::info;
use crate::{
modules::application::ApplicationFeature,
topology::{K8sclient, Topology},
};
#[derive(Debug, Clone)]
pub struct PublicEndpoint {
application_port: u16,
}
/// Use port 3000 as default port. Harmony wants to provide "sane defaults" in general, and in this
/// particular context, using port 80 goes against our philosophy to provide production grade
/// defaults out of the box. Using an unprivileged port is a good security practice and will allow
/// for unprivileged containers to work with this out of the box.
///
/// Now, why 3000 specifically? Many popular web/network frameworks use it by default, there is no
/// perfect answer for this but many Rust and Python libraries tend to use 3000.
impl Default for PublicEndpoint {
fn default() -> Self {
Self {
application_port: 3000,
}
}
}
/// For now we only support K8s ingress, but we will support more stuff at some point
#[async_trait]
impl<T: Topology + K8sclient + 'static> ApplicationFeature<T> for PublicEndpoint {
async fn ensure_installed(&self, _topology: &T) -> Result<(), String> {
info!(
"Making sure public endpoint is installed for port {}",
self.application_port
);
todo!()
}
fn name(&self) -> String {
"PublicEndpoint".to_string()
}
}
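Since `application_port` is private, `Default` is effectively the only constructor outside this module for now; a `with_port`-style constructor would be the natural extension. Usage sketch:
let endpoint = PublicEndpoint::default(); // listens on port 3000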

File diff suppressed because it is too large

View File

@@ -0,0 +1,14 @@
mod endpoint;
pub use endpoint::*;
mod monitoring;
pub use monitoring::*;
mod continuous_delivery;
pub use continuous_delivery::*;
mod helm_argocd_score;
pub use helm_argocd_score::*;
mod argo_types;
pub use argo_types::*;

View File

@@ -0,0 +1,55 @@
use std::sync::Arc;
use async_trait::async_trait;
use log::info;
use crate::{
inventory::Inventory,
modules::{
application::{Application, ApplicationFeature},
monitoring::{
application_monitoring::k8s_application_monitoring_score::ApplicationPrometheusMonitoringScore,
kube_prometheus::types::{NamespaceSelector, ServiceMonitor}, prometheus::prometheus::Prometheus,
},
},
score::Score,
topology::{oberservability::monitoring::{AlertReceiver, AlertRule, AlertSender}, tenant::TenantManager, HelmCommand, K8sclient, Topology},
};
#[derive(Debug, Clone)]
pub struct PrometheusMonitoring {
pub application: Arc<dyn Application>,
pub alert_receivers: Vec<Box<dyn AlertReceiver<Prometheus>>>,
pub alert_rules: Vec<Box<dyn AlertRule<Prometheus>>>,
}
#[async_trait]
impl<T: Topology + HelmCommand + 'static + TenantManager> ApplicationFeature<T> for PrometheusMonitoring {
async fn ensure_installed(&self, topology: &T) -> Result<(), String> {
info!("Ensuring monitoring is available for application");
let ns = self.application.name();
let mut service_monitor = ServiceMonitor::default();
service_monitor.name = ns.clone();
service_monitor.namespace = ns.clone();
service_monitor.namespace_selector = Some(NamespaceSelector {
any: true,
match_names: vec![ns.clone()],
});
let alerting_score = ApplicationPrometheusMonitoringScore {
namespace: ns,
receivers: self.alert_receivers.clone(),
rules: self.alert_rules.clone(),
service_monitors: vec![service_monitor],
};
alerting_score
.create_interpret()
.execute(&Inventory::empty(), topology)
.await
.unwrap();
Ok(())
}
fn name(&self) -> String {
"Monitoring".to_string()
}
}

View File

@@ -0,0 +1,82 @@
mod feature;
pub mod features;
pub mod oci;
mod rust;
use std::sync::Arc;
pub use feature::*;
use log::info;
pub use oci::*;
pub use rust::*;
use async_trait::async_trait;
use crate::{
data::{Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
topology::Topology,
};
pub trait Application: std::fmt::Debug + Send + Sync {
fn name(&self) -> String;
}
#[derive(Debug)]
pub struct ApplicationInterpret<A: Application, T: Topology + std::fmt::Debug> {
features: Vec<Box<dyn ApplicationFeature<T>>>,
application: Arc<A>,
}
#[async_trait]
impl<A: Application, T: Topology + std::fmt::Debug> Interpret<T> for ApplicationInterpret<A, T> {
async fn execute(
&self,
_inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
let app_name = self.application.name();
info!(
"Preparing {} features [{}] for application {app_name}",
self.features.len(),
self.features
.iter()
.map(|f| f.name())
.collect::<Vec<String>>()
.join(", ")
);
for feature in self.features.iter() {
info!(
"Installing feature {} for application {app_name}",
feature.name()
);
let _ = match feature.ensure_installed(topology).await {
Ok(()) => (),
Err(msg) => {
return Err(InterpretError::new(format!(
"Application Interpret failed to install feature : {msg}"
)));
}
};
}
todo!(
"Do I need to do anything more than this here?? I feel like the Application trait itself should expose something like ensure_ready but its becoming redundant. We'll see as this evolves."
)
}
fn get_name(&self) -> InterpretName {
InterpretName::Application
}
fn get_version(&self) -> Version {
Version::from("1.0.0").unwrap()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}

View File

@@ -0,0 +1,21 @@
use async_trait::async_trait;
use super::Application;
#[async_trait]
pub trait OCICompliant: Application {
async fn build_push_oci_image(&self) -> Result<String, String>; // TODO consider using oci-spec and friends crates here
fn image_name(&self) -> String;
fn local_image_name(&self) -> String;
}
#[async_trait]
pub trait HelmPackage: Application {
/// Generates, packages, and pushes a Helm chart for the web application to an OCI registry.
///
/// # Arguments
/// * `image_url` - The full URL of the OCI container image to be used in the Deployment.
async fn build_push_helm_package(&self, image_url: &str) -> Result<String, String>;
}
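The intended call order (which the TODO in `ContinuousDelivery` notes should eventually be restored there) is image first, then chart, so the image URL can be baked into the chart's Deployment. A hedged sketch over both traits; names and registry values are illustrative:
async fn package<A: OCICompliant + HelmPackage>(app: &A) -> Result<(), String> {
    // e.g. "hub.example.com/project/my-app" -- see image_name() below.
    let image_url = app.build_push_oci_image().await?;
    let chart_url = app.build_push_helm_package(&image_url).await?;
    println!("chart {chart_url} references image {image_url}");
    Ok(())
}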

View File

@@ -0,0 +1,581 @@
use std::fs;
use std::path::PathBuf;
use std::process;
use std::sync::Arc;
use async_trait::async_trait;
use dockerfile_builder::Dockerfile;
use dockerfile_builder::instruction::{CMD, COPY, ENV, EXPOSE, FROM, RUN, USER, WORKDIR};
use dockerfile_builder::instruction_builder::CopyBuilder;
use log::{debug, error, info};
use serde::Serialize;
use crate::config::{REGISTRY_PROJECT, REGISTRY_URL};
use crate::{
score::Score,
topology::{Topology, Url},
};
use super::{Application, ApplicationFeature, ApplicationInterpret, HelmPackage, OCICompliant};
#[derive(Debug, Serialize, Clone)]
pub struct ApplicationScore<A: Application + Serialize, T: Topology + Clone + Serialize>
where
Arc<A>: Serialize + Clone,
{
pub features: Vec<Box<dyn ApplicationFeature<T>>>,
pub application: Arc<A>,
}
impl<
A: Application + Serialize + Clone + 'static,
T: Topology + std::fmt::Debug + Clone + Serialize + 'static,
> Score<T> for ApplicationScore<A, T>
where
Arc<A>: Serialize,
{
fn create_interpret(&self) -> Box<dyn crate::interpret::Interpret<T>> {
Box::new(ApplicationInterpret {
features: self.features.clone(),
application: self.application.clone(),
})
}
fn name(&self) -> String {
format!("Application: {}", self.application.name())
}
}
#[derive(Debug, Clone, Serialize)]
pub enum RustWebFramework {
Leptos,
}
#[derive(Debug, Clone, Serialize)]
pub struct RustWebapp {
pub name: String,
pub domain: Url,
/// The path to the root of the Rust project to be containerized.
pub project_root: PathBuf,
pub framework: Option<RustWebFramework>,
}
impl Application for RustWebapp {
fn name(&self) -> String {
self.name.clone()
}
}
#[async_trait]
impl HelmPackage for RustWebapp {
async fn build_push_helm_package(&self, image_url: &str) -> Result<String, String> {
info!("Starting Helm chart build and push for '{}'", self.name);
// 1. Create the Helm chart files on disk.
let chart_dir = self
.create_helm_chart_files(image_url)
.map_err(|e| format!("Failed to create Helm chart files: {}", e))?;
info!("Successfully created Helm chart files in {:?}", chart_dir);
// 2. Package the chart into a .tgz archive.
let packaged_chart_path = self
.package_helm_chart(&chart_dir)
.map_err(|e| format!("Failed to package Helm chart: {}", e))?;
info!(
"Successfully packaged Helm chart: {}",
packaged_chart_path.to_string_lossy()
);
// 3. Push the packaged chart to the OCI registry.
let oci_chart_url = self
.push_helm_chart(&packaged_chart_path)
.map_err(|e| format!("Failed to push Helm chart: {}", e))?;
info!("Successfully pushed Helm chart to: {}", oci_chart_url);
Ok(oci_chart_url)
}
}
#[async_trait]
impl OCICompliant for RustWebapp {
/// Builds a Docker image for the Rust web application using a multi-stage build,
/// pushes it to the configured OCI registry, and returns the full image tag.
async fn build_push_oci_image(&self) -> Result<String, String> {
// This function orchestrates the build and push process.
// It's async to match the trait definition, though the underlying docker commands are blocking.
info!("Starting OCI image build and push for '{}'", self.name);
// 1. Build the local image by calling the synchronous helper function.
let local_image_name = self.local_image_name();
self.build_docker_image(&local_image_name)
.map_err(|e| format!("Failed to build Docker image: {}", e))?;
info!(
"Successfully built local Docker image: {}",
local_image_name
);
let remote_image_name = self.image_name();
// 2. Push the image to the registry.
self.push_docker_image(&local_image_name, &remote_image_name)
.map_err(|e| format!("Failed to push Docker image: {}", e))?;
info!("Successfully pushed Docker image to: {}", remote_image_name);
Ok(remote_image_name)
}
fn local_image_name(&self) -> String {
self.name.clone()
}
fn image_name(&self) -> String {
format!(
"{}/{}/{}",
*REGISTRY_URL,
*REGISTRY_PROJECT,
&self.local_image_name()
)
}
}
/// Implementation of helper methods for building and pushing the Docker image.
impl RustWebapp {
/// Generates a multi-stage Dockerfile for a Rust application.
fn build_dockerfile(&self) -> Result<PathBuf, Box<dyn std::error::Error>> {
let mut dockerfile = Dockerfile::new();
self.build_builder_image(&mut dockerfile);
// Save the Dockerfile to a uniquely named file in the project root to avoid conflicts.
let dockerfile_path = self.project_root.join("Dockerfile.harmony");
fs::write(&dockerfile_path, dockerfile.to_string())?;
Ok(dockerfile_path)
}
/// Builds the Docker image using the generated Dockerfile.
pub fn build_docker_image(
&self,
image_name: &str,
) -> Result<String, Box<dyn std::error::Error>> {
info!("Generating Dockerfile for '{}'", self.name);
let dockerfile_path = self.build_dockerfile()?;
info!(
"Building Docker image with file {} from root {}",
dockerfile_path.to_string_lossy(),
self.project_root.to_string_lossy()
);
let output = process::Command::new("docker")
.args([
"build",
"--file",
dockerfile_path.to_str().unwrap(),
"-t",
&image_name,
self.project_root.to_str().unwrap(),
])
.spawn()?
.wait_with_output()?;
self.check_output(&output, "Failed to build Docker image")?;
Ok(image_name.to_string())
}
/// Tags and pushes a Docker image to the configured remote registry.
fn push_docker_image(
&self,
image_name: &str,
full_tag: &str,
) -> Result<String, Box<dyn std::error::Error>> {
info!("Pushing docker image {full_tag}");
// Tag the image for the remote registry.
let output = process::Command::new("docker")
.args(["tag", image_name, &full_tag])
.spawn()?
.wait_with_output()?;
self.check_output(&output, "Tagging docker image failed")?;
debug!(
"docker tag output: stdout: {}, stderr: {}",
String::from_utf8_lossy(&output.stdout),
String::from_utf8_lossy(&output.stderr)
);
// Push the image.
let output = process::Command::new("docker")
.args(["push", &full_tag])
.spawn()?
.wait_with_output()?;
self.check_output(&output, "Pushing docker image failed")?;
debug!(
"docker push output: stdout: {}, stderr: {}",
String::from_utf8_lossy(&output.stdout),
String::from_utf8_lossy(&output.stderr)
);
Ok(full_tag.to_string())
}
/// Checks the output of a process command for success.
fn check_output(
&self,
output: &process::Output,
msg: &str,
) -> Result<(), Box<dyn std::error::Error>> {
if !output.status.success() {
let error_message = format!("{}: {}", msg, String::from_utf8_lossy(&output.stderr));
return Err(error_message.into());
}
Ok(())
}
fn build_builder_image(&self, dockerfile: &mut Dockerfile) {
match self.framework {
Some(RustWebFramework::Leptos) => {
// --- Stage 1: Builder for Leptos ---
dockerfile.push(FROM::from("rust:bookworm as builder"));
// Install dependencies, cargo-binstall, and clean up in one layer
dockerfile.push(RUN::from(
"apt-get update && \
apt-get install -y --no-install-recommends clang wget && \
wget https://github.com/cargo-bins/cargo-binstall/releases/latest/download/cargo-binstall-x86_64-unknown-linux-musl.tgz && \
tar -xvf cargo-binstall-x86_64-unknown-linux-musl.tgz && \
cp cargo-binstall /usr/local/cargo/bin && \
rm cargo-binstall-x86_64-unknown-linux-musl.tgz cargo-binstall && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*"
));
// Install cargo-leptos
dockerfile.push(RUN::from("cargo binstall cargo-leptos -y"));
// Add the WASM target
dockerfile.push(RUN::from("rustup target add wasm32-unknown-unknown"));
// Set up workdir, copy source, and build
dockerfile.push(WORKDIR::from("/app"));
dockerfile.push(COPY::from(". ."));
dockerfile.push(RUN::from("cargo leptos build --release -vv"));
// --- Stage 2: Final Image ---
dockerfile.push(FROM::from("debian:bookworm-slim"));
// Create a non-root user for security.
dockerfile.push(RUN::from(
"groupadd -r appgroup && useradd -r -s /bin/false -g appgroup appuser",
));
dockerfile.push(ENV::from("LEPTOS_SITE_ADDR=0.0.0.0:3000"));
dockerfile.push(EXPOSE::from("3000/tcp"));
dockerfile.push(WORKDIR::from("/home/appuser"));
// Copy static files
dockerfile.push(
CopyBuilder::builder()
.from("builder")
.src("/app/target/site/pkg")
.dest("/home/appuser/pkg")
.build()
.unwrap(),
);
// Copy the compiled binary from the builder stage.
error!(
"FIXME Should not be using score name here, instead should use name from Cargo.toml"
);
let binary_path_in_builder = format!("/app/target/release/{}", self.name);
let binary_path_in_final = format!("/home/appuser/{}", self.name);
dockerfile.push(
CopyBuilder::builder()
.from("builder")
.src(binary_path_in_builder)
.dest(&binary_path_in_final)
.build()
.unwrap(),
);
// Run as the non-root user.
dockerfile.push(USER::from("appuser"));
// Set the command to run the application.
dockerfile.push(CMD::from(binary_path_in_final));
}
None => {
// --- Stage 1: Builder for a generic Rust app ---
dockerfile.push(FROM::from("rust:latest as builder"));
// Install the wasm32 target as required.
dockerfile.push(RUN::from("rustup target add wasm32-unknown-unknown"));
dockerfile.push(WORKDIR::from("/app"));
// Copy the source code and build the application.
dockerfile.push(COPY::from(". ."));
dockerfile.push(RUN::from("cargo build --release --locked"));
// --- Stage 2: Final Image ---
dockerfile.push(FROM::from("debian:bookworm-slim"));
// Create a non-root user for security.
dockerfile.push(RUN::from(
"groupadd -r appgroup && useradd -r -s /bin/false -g appgroup appuser",
));
// Copy only the compiled binary from the builder stage.
error!(
"FIXME Should not be using score name here, instead should use name from Cargo.toml"
);
let binary_path_in_builder = format!("/app/target/release/{}", self.name);
let binary_path_in_final = format!("/usr/local/bin/{}", self.name);
dockerfile.push(
CopyBuilder::builder()
.from("builder")
.src(binary_path_in_builder)
.dest(&binary_path_in_final)
.build()
.unwrap(),
);
// Run as the non-root user.
dockerfile.push(USER::from("appuser"));
// Set the command to run the application.
dockerfile.push(CMD::from(binary_path_in_final));
}
}
}
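// For reference, the generic (`None`) branch above emits a Dockerfile along
// these lines (a sketch reconstructed from the pushed instructions; <name> is
// the score name used for the binary path):
//
//   FROM rust:latest as builder
//   RUN rustup target add wasm32-unknown-unknown
//   WORKDIR /app
//   COPY . .
//   RUN cargo build --release --locked
//
//   FROM debian:bookworm-slim
//   RUN groupadd -r appgroup && useradd -r -s /bin/false -g appgroup appuser
//   COPY --from=builder /app/target/release/<name> /usr/local/bin/<name>
//   USER appuser
//   CMD /usr/local/bin/<name>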
/// Creates all necessary files for a basic Helm chart.
fn create_helm_chart_files(
&self,
image_url: &str,
) -> Result<PathBuf, Box<dyn std::error::Error>> {
let chart_name = format!("{}-chart", self.name);
let chart_dir = self.project_root.join("helm").join(&chart_name);
let templates_dir = chart_dir.join("templates");
fs::create_dir_all(&templates_dir)?;
let (image_repo, image_tag) = image_url.rsplit_once(':').unwrap_or((image_url, "latest"));
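// NOTE: `rsplit_once(':')` above also matches a registry port when the URL
// carries no tag (e.g. "registry:5000/app" yields tag "5000/app"), so tagless
// image URLs that include a port would be split incorrectly here.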
// Create Chart.yaml
let chart_yaml = format!(
r#"
apiVersion: v2
name: {}
description: A Helm chart for the {} web application.
type: application
version: 0.1.0
appVersion: "{}"
"#,
chart_name, self.name, image_tag
);
fs::write(chart_dir.join("Chart.yaml"), chart_yaml)?;
// Create values.yaml
let values_yaml = format!(
r#"
# Default values for {}.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1

image:
  repository: {}
  pullPolicy: IfNotPresent
  # Defaults to the chart's appVersion when unset
  tag: "{}"

service:
  type: ClusterIP
  port: 3000

ingress:
  enabled: true
  # Annotations for cert-manager to handle SSL.
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    # Add other annotations like nginx ingress class if needed
    # kubernetes.io/ingress.class: nginx
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - secretName: {}-tls
      hosts:
        - chart-example.local
"#,
chart_name, image_repo, image_tag, self.name
);
fs::write(chart_dir.join("values.yaml"), values_yaml)?;
// Create templates/_helpers.tpl
let helpers_tpl = r#"
{{/*
Expand the name of the chart.
*/}}
{{- define "chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "chart.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
"#;
fs::write(templates_dir.join("_helpers.tpl"), helpers_tpl)?;
// Create templates/service.yaml
let service_yaml = r#"
apiVersion: v1
kind: Service
metadata:
  name: {{ include "chart.fullname" . }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: 3000
      protocol: TCP
      name: http
  selector:
    app: {{ include "chart.name" . }}
"#;
fs::write(templates_dir.join("service.yaml"), service_yaml)?;
// Create templates/deployment.yaml
let deployment_yaml = r#"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chart.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ include "chart.name" . }}
  template:
    metadata:
      labels:
        app: {{ include "chart.name" . }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
"#;
fs::write(templates_dir.join("deployment.yaml"), deployment_yaml)?;
// Create templates/ingress.yaml
let ingress_yaml = r#"
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "chart.fullname" . }}
  annotations:
    {{- toYaml .Values.ingress.annotations | nindent 4 }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            pathType: {{ .pathType }}
            backend:
              service:
                name: {{ include "chart.fullname" $ }}
                port:
                  number: 3000
          {{- end }}
    {{- end }}
{{- end }}
"#;
fs::write(templates_dir.join("ingress.yaml"), ingress_yaml)?;
Ok(chart_dir)
}
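// On success the chart is laid out on disk as:
//
//   <project_root>/helm/<name>-chart/
//   ├── Chart.yaml
//   ├── values.yaml
//   └── templates/
//       ├── _helpers.tpl
//       ├── service.yaml
//       ├── deployment.yaml
//       └── ingress.yaml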
/// Packages a Helm chart directory into a .tgz file.
fn package_helm_chart(
&self,
chart_dir: &PathBuf,
) -> Result<PathBuf, Box<dyn std::error::Error>> {
let chart_dirname = chart_dir.file_name().expect("Should find a chart dirname");
info!(
"Launching `helm package {}` cli with CWD {}",
chart_dirname.to_string_lossy(),
&self.project_root.join("helm").to_string_lossy()
);
let output = process::Command::new("helm")
.args(["package", chart_dirname.to_str().unwrap()])
.current_dir(&self.project_root.join("helm")) // Run package from the parent dir
.output()?;
self.check_output(&output, "Failed to package Helm chart")?;
// Helm prints the path of the created chart to stdout.
let tgz_name = String::from_utf8(output.stdout)?
.trim()
.split_whitespace()
.last()
.unwrap_or_default()
.to_string();
if tgz_name.is_empty() {
return Err("Could not determine packaged chart filename.".into());
}
// The output from helm is relative, so we join it with the execution directory.
Ok(self.project_root.join("helm").join(tgz_name))
}
/// Pushes a packaged Helm chart to an OCI registry.
fn push_helm_chart(
&self,
packaged_chart_path: &PathBuf,
) -> Result<String, Box<dyn std::error::Error>> {
// The chart name is the file stem of the .tgz file
let chart_file_name = packaged_chart_path.file_stem().unwrap().to_str().unwrap();
let oci_push_url = format!("oci://{}/{}", *REGISTRY_URL, *REGISTRY_PROJECT);
let oci_pull_url = format!("{oci_push_url}/{}-chart", self.name);
info!(
"Pushing Helm chart {} to {}",
packaged_chart_path.to_string_lossy(),
oci_push_url
);
let output = process::Command::new("helm")
.args(["push", packaged_chart_path.to_str().unwrap(), &oci_push_url])
.output()?;
self.check_output(&output, "Pushing Helm chart failed")?;
// The final URL includes the version tag, which is part of the file name
let version = chart_file_name.rsplit_once('-').unwrap().1;
debug!("pull url {oci_pull_url}");
debug!("push url {oci_push_url}");
Ok(format!("{}:{}", oci_pull_url, version))
}
}
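// Illustrative sketch (not part of this diff): the private helpers above
// compose into a single publication flow when called from this module.
fn example_publish_chart(
    app: &RustWebapp,
    image_url: &str,
) -> Result<String, Box<dyn std::error::Error>> {
    let chart_dir = app.create_helm_chart_files(image_url)?; // helm/<name>-chart/
    let packaged = app.package_helm_chart(&chart_dir)?; // helm/<name>-chart-0.1.0.tgz
    app.push_helm_chart(&packaged) // oci://<REGISTRY_URL>/<REGISTRY_PROJECT>/<name>-chart:0.1.0
}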

View File

@@ -220,7 +220,6 @@ impl<T: Topology + HelmCommand> Interpret<T> for HelmChartInterpret {
yaml_path,
Some(&helm_options),
);
let status = match res {
Ok(status) => status,
Err(err) => return Err(InterpretError::new(err.to_string())),

View File

@@ -311,7 +311,7 @@ impl<T: Topology + K8sclient + HelmCommand> Interpret<T> for HelmChartInterpretV
_inventory: &Inventory,
_topology: &T,
) -> Result<Outcome, InterpretError> {
- let ns = self
+ let _ns = self
.score
.chart
.namespace
@@ -339,7 +339,7 @@ impl<T: Topology + K8sclient + HelmCommand> Interpret<T> for HelmChartInterpretV
let res = helm_executor.generate();
- let output = match res {
+ let _output = match res {
Ok(output) => output,
Err(err) => return Err(InterpretError::new(err.to_string())),
};

View File

@@ -5,7 +5,7 @@ use log::info;
use serde::Serialize;
use crate::{
- config::HARMONY_CONFIG_DIR,
+ config::HARMONY_DATA_DIR,
data::{Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
@@ -22,7 +22,7 @@ pub struct K3DInstallationScore {
impl Default for K3DInstallationScore {
fn default() -> Self {
Self {
- installation_path: HARMONY_CONFIG_DIR.join("k3d"),
+ installation_path: HARMONY_DATA_DIR.join("k3d"),
cluster_name: "harmony".to_string(),
}
}

View File

@@ -1,5 +1,6 @@
use harmony_macros::ingress_path;
use k8s_openapi::api::networking::v1::Ingress;
+ use log::{debug, trace};
use serde::Serialize;
use serde_json::json;
@@ -56,22 +57,24 @@ impl<T: Topology + K8sclient> Score<T> for K8sIngressScore {
let ingress = json!(
{
"metadata": {
"name": self.name
"name": self.name.to_string(),
},
"spec": {
"rules": [
{ "host": self.host,
{ "host": self.host.to_string(),
"http": {
"paths": [
{
"path": path,
"pathType": path_type.as_str(),
"backend": [
{
"service": self.backend_service,
"port": self.port
"backend": {
"service": {
"name": self.backend_service.to_string(),
"port": {
"number": self.port,
}
}
}
]
}
]
}
@@ -81,13 +84,16 @@ impl<T: Topology + K8sclient> Score<T> for K8sIngressScore {
}
);
trace!("Building ingresss object from Value {ingress:#}");
let ingress: Ingress = serde_json::from_value(ingress).unwrap();
+ debug!(
+     "Successfully built Ingress for host {:?}",
+     ingress.metadata.name
+ );
Box::new(K8sResourceInterpret {
score: K8sResourceScore::single(
ingress.clone(),
- self.namespace
-     .clone()
-     .map(|f| f.as_c_str().to_str().unwrap().to_string()),
+ self.namespace.clone().map(|f| f.to_string()),
),
})
}
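This rewrite matters beyond naming: in the networking.k8s.io/v1 schema a path backend is a single object, so the old array form fails when deserializing into Ingress. A minimal sketch of the corrected shape (hypothetical service name):

let backend = serde_json::json!({
    "service": { "name": "my-service", "port": { "number": 3000 } }
});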

View File

@@ -1,6 +1,7 @@
use async_trait::async_trait;
use k8s_openapi::NamespaceResourceScope;
use kube::Resource;
+ use log::info;
use serde::{Serialize, de::DeserializeOwned};
use crate::{
@@ -75,11 +76,12 @@ where
_inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
info!("Applying {} resources", self.score.resource.len());
topology
.k8s_client()
.await
.expect("Environment should provide enough information to instanciate a client")
.apply_namespaced(&self.score.resource, self.score.namespace.as_deref())
.apply_many(&self.score.resource, self.score.namespace.as_deref())
.await?;
Ok(Outcome::success(

View File

@@ -1,3 +1,4 @@
+ pub mod application;
pub mod cert_manager;
pub mod dhcp;
pub mod dns;
@@ -12,5 +13,6 @@ pub mod load_balancer;
pub mod monitoring;
pub mod okd;
pub mod opnsense;
+ pub mod prometheus;
pub mod tenant;
pub mod tftp;

View File

@@ -0,0 +1,153 @@
use std::any::Any;
use async_trait::async_trait;
use serde::Serialize;
use serde_yaml::{Mapping, Value};
use crate::{
interpret::{InterpretError, Outcome},
modules::monitoring::{
kube_prometheus::{
prometheus::{KubePrometheus, KubePrometheusReceiver},
types::{AlertChannelConfig, AlertManagerChannelConfig},
},
prometheus::prometheus::{Prometheus, PrometheusReceiver},
},
topology::{
Url,
oberservability::monitoring::{AlertReceiver, AlertSender},
},
};
#[derive(Debug, Clone, Serialize)]
pub struct DiscordWebhook {
pub name: String,
pub url: Url,
}
#[async_trait]
impl AlertReceiver<Prometheus> for DiscordWebhook {
async fn install(&self, sender: &Prometheus) -> Result<Outcome, InterpretError> {
sender.install_receiver(self).await
}
fn clone_box(&self) -> Box<dyn AlertReceiver<Prometheus>> {
Box::new(self.clone())
}
}
#[async_trait]
impl PrometheusReceiver for DiscordWebhook {
fn name(&self) -> String {
self.name.clone()
}
async fn configure_receiver(&self) -> AlertManagerChannelConfig {
self.get_config().await
}
}
#[async_trait]
impl AlertReceiver<KubePrometheus> for DiscordWebhook {
async fn install(&self, sender: &KubePrometheus) -> Result<Outcome, InterpretError> {
sender.install_receiver(self).await
}
fn clone_box(&self) -> Box<dyn AlertReceiver<KubePrometheus>> {
Box::new(self.clone())
}
}
#[async_trait]
impl KubePrometheusReceiver for DiscordWebhook {
fn name(&self) -> String {
self.name.clone()
}
async fn configure_receiver(&self) -> AlertManagerChannelConfig {
self.get_config().await
}
}
#[async_trait]
impl AlertChannelConfig for DiscordWebhook {
async fn get_config(&self) -> AlertManagerChannelConfig {
let channel_global_config = None;
let channel_receiver = self.alert_channel_receiver().await;
let channel_route = self.alert_channel_route().await;
AlertManagerChannelConfig {
channel_global_config,
channel_receiver,
channel_route,
}
}
}
impl DiscordWebhook {
async fn alert_channel_route(&self) -> serde_yaml::Value {
let mut route = Mapping::new();
route.insert(
Value::String("receiver".to_string()),
Value::String(self.name.clone()),
);
route.insert(
Value::String("matchers".to_string()),
Value::Sequence(vec![Value::String("alertname!=Watchdog".to_string())]),
);
route.insert(Value::String("continue".to_string()), Value::Bool(true));
Value::Mapping(route)
}
async fn alert_channel_receiver(&self) -> serde_yaml::Value {
let mut receiver = Mapping::new();
receiver.insert(
Value::String("name".to_string()),
Value::String(self.name.clone()),
);
let mut discord_config = Mapping::new();
discord_config.insert(
Value::String("webhook_url".to_string()),
Value::String(self.url.to_string()),
);
receiver.insert(
Value::String("discord_configs".to_string()),
Value::Sequence(vec![Value::Mapping(discord_config)]),
);
Value::Mapping(receiver)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn discord_serialize_should_match() {
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
url: Url::Url(url::Url::parse("https://discord.i.dont.exist.com").unwrap()),
};
let discord_receiver_receiver =
serde_yaml::to_string(&discord_receiver.alert_channel_receiver().await).unwrap();
println!("receiver \n{:#}", discord_receiver_receiver);
let discord_receiver_receiver_yaml = r#"name: test-discord
discord_configs:
- webhook_url: https://discord.i.dont.exist.com/
"#
.to_string();
let discord_receiver_route =
serde_yaml::to_string(&discord_receiver.alert_channel_route().await).unwrap();
println!("route \n{:#}", discord_receiver_route);
let discord_receiver_route_yaml = r#"receiver: test-discord
matchers:
- alertname!=Watchdog
continue: true
"#
.to_string();
assert_eq!(discord_receiver_receiver, discord_receiver_receiver_yaml);
assert_eq!(discord_receiver_route, discord_receiver_route_yaml);
}
}
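A sketch (hypothetical name and URL, not part of this diff) of boxing this receiver for the monitoring scores added later in this diff:

let discord = DiscordWebhook {
    name: "ops-discord".to_string(),
    url: Url::Url(url::Url::parse("https://example.com/discord-webhook").unwrap()),
};
let receivers: Vec<Box<dyn AlertReceiver<Prometheus>>> = vec![Box::new(discord)];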

View File

@@ -0,0 +1,2 @@
pub mod discord_alert_channel;
pub mod webhook_receiver;

View File

@@ -0,0 +1,146 @@
use async_trait::async_trait;
use serde::Serialize;
use serde_yaml::{Mapping, Value};
use crate::{
interpret::{InterpretError, Outcome},
modules::monitoring::{
kube_prometheus::{
prometheus::{KubePrometheus, KubePrometheusReceiver},
types::{AlertChannelConfig, AlertManagerChannelConfig},
},
prometheus::prometheus::{Prometheus, PrometheusReceiver},
},
topology::{Url, oberservability::monitoring::AlertReceiver},
};
#[derive(Debug, Clone, Serialize)]
pub struct WebhookReceiver {
pub name: String,
pub url: Url,
}
#[async_trait]
impl AlertReceiver<Prometheus> for WebhookReceiver {
async fn install(&self, sender: &Prometheus) -> Result<Outcome, InterpretError> {
sender.install_receiver(self).await
}
fn clone_box(&self) -> Box<dyn AlertReceiver<Prometheus>> {
Box::new(self.clone())
}
}
#[async_trait]
impl PrometheusReceiver for WebhookReceiver {
fn name(&self) -> String {
self.name.clone()
}
async fn configure_receiver(&self) -> AlertManagerChannelConfig {
self.get_config().await
}
}
#[async_trait]
impl AlertReceiver<KubePrometheus> for WebhookReceiver {
async fn install(&self, sender: &KubePrometheus) -> Result<Outcome, InterpretError> {
sender.install_receiver(self).await
}
fn clone_box(&self) -> Box<dyn AlertReceiver<KubePrometheus>> {
Box::new(self.clone())
}
}
#[async_trait]
impl KubePrometheusReceiver for WebhookReceiver {
fn name(&self) -> String {
self.name.clone()
}
async fn configure_receiver(&self) -> AlertManagerChannelConfig {
self.get_config().await
}
}
#[async_trait]
impl AlertChannelConfig for WebhookReceiver {
async fn get_config(&self) -> AlertManagerChannelConfig {
let channel_global_config = None;
let channel_receiver = self.alert_channel_receiver().await;
let channel_route = self.alert_channel_route().await;
AlertManagerChannelConfig {
channel_global_config,
channel_receiver,
channel_route,
}
}
}
impl WebhookReceiver {
async fn alert_channel_route(&self) -> serde_yaml::Value {
let mut route = Mapping::new();
route.insert(
Value::String("receiver".to_string()),
Value::String(self.name.clone()),
);
route.insert(
Value::String("matchers".to_string()),
Value::Sequence(vec![Value::String("alertname!=Watchdog".to_string())]),
);
route.insert(Value::String("continue".to_string()), Value::Bool(true));
Value::Mapping(route)
}
async fn alert_channel_receiver(&self) -> serde_yaml::Value {
let mut receiver = Mapping::new();
receiver.insert(
Value::String("name".to_string()),
Value::String(self.name.clone()),
);
let mut webhook_config = Mapping::new();
webhook_config.insert(
Value::String("url".to_string()),
Value::String(self.url.to_string()),
);
receiver.insert(
Value::String("webhook_configs".to_string()),
Value::Sequence(vec![Value::Mapping(webhook_config)]),
);
Value::Mapping(receiver)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn webhook_serialize_should_match() {
let webhook_receiver = WebhookReceiver {
name: "test-webhook".to_string(),
url: Url::Url(url::Url::parse("https://webhook.i.dont.exist.com").unwrap()),
};
let webhook_receiver_receiver =
serde_yaml::to_string(&webhook_receiver.alert_channel_receiver().await).unwrap();
println!("receiver \n{:#}", webhook_receiver_receiver);
let webhook_receiver_receiver_yaml = r#"name: test-webhook
webhook_configs:
- url: https://webhook.i.dont.exist.com/
"#
.to_string();
let webhook_receiver_route =
serde_yaml::to_string(&webhook_receiver.alert_channel_route().await).unwrap();
println!("route \n{:#}", webhook_receiver_route);
let webhook_receiver_route_yaml = r#"receiver: test-webhook
matchers:
- alertname!=Watchdog
continue: true
"#
.to_string();
assert_eq!(webhook_receiver_receiver, webhook_receiver_receiver_yaml);
assert_eq!(webhook_receiver_route, webhook_receiver_route_yaml);
}
}

View File

@@ -0,0 +1 @@
pub mod prometheus_alert_rule;

View File

@@ -0,0 +1,131 @@
use std::collections::{BTreeMap, HashMap};
use async_trait::async_trait;
use serde::Serialize;
use crate::{
interpret::{InterpretError, Outcome},
modules::monitoring::{
kube_prometheus::{
prometheus::{KubePrometheus, KubePrometheusRule},
types::{AlertGroup, AlertManagerAdditionalPromRules},
},
prometheus::prometheus::{Prometheus, PrometheusRule},
},
topology::oberservability::monitoring::AlertRule,
};
#[async_trait]
impl AlertRule<KubePrometheus> for AlertManagerRuleGroup {
async fn install(&self, sender: &KubePrometheus) -> Result<Outcome, InterpretError> {
sender.install_rule(&self).await
}
fn clone_box(&self) -> Box<dyn AlertRule<KubePrometheus>> {
Box::new(self.clone())
}
}
#[async_trait]
impl AlertRule<Prometheus> for AlertManagerRuleGroup {
async fn install(&self, sender: &Prometheus) -> Result<Outcome, InterpretError> {
sender.install_rule(&self).await
}
fn clone_box(&self) -> Box<dyn AlertRule<Prometheus>> {
Box::new(self.clone())
}
}
#[async_trait]
impl PrometheusRule for AlertManagerRuleGroup {
fn name(&self) -> String {
self.name.clone()
}
async fn configure_rule(&self) -> AlertManagerAdditionalPromRules {
let mut additional_prom_rules = BTreeMap::new();
additional_prom_rules.insert(
self.name.clone(),
AlertGroup {
groups: vec![self.clone()],
},
);
AlertManagerAdditionalPromRules {
rules: additional_prom_rules,
}
}
}
#[async_trait]
impl KubePrometheusRule for AlertManagerRuleGroup {
fn name(&self) -> String {
self.name.clone()
}
async fn configure_rule(&self) -> AlertManagerAdditionalPromRules {
let mut additional_prom_rules = BTreeMap::new();
additional_prom_rules.insert(
self.name.clone(),
AlertGroup {
groups: vec![self.clone()],
},
);
AlertManagerAdditionalPromRules {
rules: additional_prom_rules,
}
}
}
impl AlertManagerRuleGroup {
pub fn new(name: &str, rules: Vec<PrometheusAlertRule>) -> AlertManagerRuleGroup {
AlertManagerRuleGroup {
name: name.to_string().to_lowercase(),
rules,
}
}
}
#[derive(Debug, Clone, Serialize)]
/// Logical group of alert rules.
/// Evaluates to:
/// name:
///   groups:
///     - name: name
///       rules: PrometheusAlertRule
pub struct AlertManagerRuleGroup {
pub name: String,
pub rules: Vec<PrometheusAlertRule>,
}
#[derive(Debug, Clone, Serialize)]
pub struct PrometheusAlertRule {
pub alert: String,
pub expr: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub r#for: Option<String>,
pub labels: HashMap<String, String>,
pub annotations: HashMap<String, String>,
}
impl PrometheusAlertRule {
pub fn new(alert_name: &str, expr: &str) -> Self {
Self {
alert: alert_name.into(),
expr: expr.into(),
r#for: Some("1m".into()),
labels: HashMap::new(),
annotations: HashMap::new(),
}
}
pub fn for_duration(mut self, duration: &str) -> Self {
self.r#for = Some(duration.into());
self
}
pub fn label(mut self, key: &str, value: &str) -> Self {
self.labels.insert(key.into(), value.into());
self
}
pub fn annotation(mut self, key: &str, value: &str) -> Self {
self.annotations.insert(key.into(), value.into());
self
}
}
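The fluent helpers keep rule definitions compact; a sketch with a hypothetical PromQL expression:

let high_cpu = PrometheusAlertRule::new(
    "HighCpuUsage",
    "sum(rate(container_cpu_usage_seconds_total[5m])) > 0.9",
)
.for_duration("5m")
.label("severity", "warning")
.annotation("summary", "CPU usage above 90% for 5 minutes");
// `AlertManagerRuleGroup::new` lowercases the name, yielding "app-alerts".
let group = AlertManagerRuleGroup::new("App-Alerts", vec![high_cpu]);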

View File

@@ -0,0 +1,45 @@
use std::sync::{Arc, Mutex};
use log::debug;
use serde::Serialize;
use crate::{
modules::monitoring::{
kube_prometheus::types::ServiceMonitor,
prometheus::{prometheus::Prometheus, prometheus_config::HelmPrometheusConfig},
},
score::Score,
topology::{
oberservability::monitoring::{AlertReceiver, AlertRule, AlertingInterpret},
tenant::TenantManager,
HelmCommand, K8sclient, Topology,
},
};
#[derive(Clone, Debug, Serialize)]
pub struct ApplicationPrometheusMonitoringScore {
pub namespace: String,
pub receivers: Vec<Box<dyn AlertReceiver<Prometheus>>>,
pub rules: Vec<Box<dyn AlertRule<Prometheus>>>,
pub service_monitors: Vec<ServiceMonitor>,
}
impl<T: Topology + HelmCommand + TenantManager> Score<T> for ApplicationPrometheusMonitoringScore {
fn create_interpret(&self) -> Box<dyn crate::interpret::Interpret<T>> {
let config = Arc::new(Mutex::new(HelmPrometheusConfig::new()));
config
.try_lock()
.expect("couldn't lock config")
.additional_service_monitors = self.service_monitors.clone();
let ns = self.namespace.clone();
config.try_lock().expect("couldn't lock config").namespace = Some(ns.clone());
debug!("set namespace to {}", ns);
Box::new(AlertingInterpret {
sender: Prometheus { config },
receivers: self.receivers.clone(),
rules: self.rules.clone(),
})
}
fn name(&self) -> String {
"ApplicationPrometheusMonitoringScore".to_string()
}
}
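Putting the new pieces together, a sketch (hypothetical values) of constructing the score; `create_interpret` then wraps a namespaced `Prometheus` sender in an `AlertingInterpret`:

let score = ApplicationPrometheusMonitoringScore {
    namespace: "my-app".to_string(),
    receivers: vec![Box::new(discord)], // e.g. the DiscordWebhook shown earlier
    rules: vec![Box::new(group)],       // e.g. an AlertManagerRuleGroup
    service_monitors: vec![],
};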

View File

@@ -0,0 +1 @@
pub mod k8s_application_monitoring_score;

View File

@@ -1,35 +0,0 @@
use std::str::FromStr;
use non_blank_string_rs::NonBlankString;
use url::Url;
use crate::modules::helm::chart::HelmChartScore;
pub fn discord_alert_manager_score(
webhook_url: Url,
namespace: String,
name: String,
) -> HelmChartScore {
let values = format!(
r#"
environment:
- name: "DISCORD_WEBHOOK"
value: "{webhook_url}"
"#,
);
HelmChartScore {
namespace: Some(NonBlankString::from_str(&namespace).unwrap()),
release_name: NonBlankString::from_str(&name).unwrap(),
chart_name: NonBlankString::from_str(
"oci://hub.nationtech.io/library/alertmanager-discord",
)
.unwrap(),
chart_version: None,
values_overrides: None,
values_yaml: Some(values.to_string()),
create_namespace: true,
install_only: true,
repository: None,
}
}

View File

@@ -1,55 +0,0 @@
use async_trait::async_trait;
use serde_json::Value;
use url::Url;
use crate::{
interpret::{InterpretError, Outcome},
topology::K8sAnywhereTopology,
};
#[derive(Debug, Clone)]
pub struct DiscordWebhookConfig {
pub webhook_url: Url,
pub name: String,
pub send_resolved_notifications: bool,
}
pub trait DiscordWebhookReceiver {
fn deploy_discord_webhook_receiver(
&self,
_notification_adapter_id: &str,
) -> Result<Outcome, InterpretError>;
fn delete_discord_webhook_receiver(
&self,
_notification_adapter_id: &str,
) -> Result<Outcome, InterpretError>;
}
// trait used to generate alert manager config values impl<T: Topology + AlertManagerConfig> Monitor for KubePrometheus
pub trait AlertManagerConfig<T> {
fn get_alert_manager_config(&self) -> Result<Value, InterpretError>;
}
#[async_trait]
impl<T: DiscordWebhookReceiver> AlertManagerConfig<T> for DiscordWebhookConfig {
fn get_alert_manager_config(&self) -> Result<Value, InterpretError> {
todo!()
}
}
#[async_trait]
impl DiscordWebhookReceiver for K8sAnywhereTopology {
fn deploy_discord_webhook_receiver(
&self,
_notification_adapter_id: &str,
) -> Result<Outcome, InterpretError> {
todo!()
}
fn delete_discord_webhook_receiver(
&self,
_notification_adapter_id: &str,
) -> Result<Outcome, InterpretError> {
todo!()
}
}

Some files were not shown because too many files have changed in this diff.