Compare commits

...

171 Commits

Author SHA1 Message Date
5c628b37b7 wip:added alertreceiver and alert rules which are built and added to the yaml before deploying prometheus, added a few dashboards to grafana. Trying to fix prometheus-server clusterrole/role/serviceaccount so that it can discover targets and kubernetes in a namespaced release where it does not have access to clusterrole 2025-07-09 15:09:38 -04:00
31661aaaf1 fix: prometheus deploys as namespaced resource without prometheus-server clusterrole and clusterrolebinding 2025-07-07 14:33:09 -04:00
2c208df143 fix: deploys by default in the application name namespace 2025-07-07 13:24:21 -04:00
1a6d72dc17 fix: uncommented example
All checks were successful
Run Check Script / check (pull_request) Successful in 1m37s
2025-07-04 16:30:13 -04:00
df9e21807e fix: git conflict
All checks were successful
Run Check Script / check (pull_request) Successful in -6s
2025-07-04 16:22:39 -04:00
b1bf4fd4d5 fix: cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m40s
2025-07-04 16:14:47 -04:00
f702ecd8c9 fix: deploys a lighter weight prometheus and grafana which is limited to their respective namespaces 2025-07-04 16:13:41 -04:00
a19b52e690 fix: properly append YAML in correct places in argoapplication (#80)
All checks were successful
Run Check Script / check (push) Successful in -7s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m56s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #80
2025-07-04 15:32:02 +00:00
b73f2e76d0 Merge pull request 'refact: Make RustWebappScore generic, it is now Application score and takes an application and list of features to attach to the application' (#81) from refact/application into master
All checks were successful
Run Check Script / check (push) Successful in -1s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m38s
Reviewed-on: #81
Reviewed-by: wjro <wrolleman@nationtech.io>
2025-07-04 14:31:38 +00:00
b4534c6ee0 refact: Make RustWebappScore generic, it is now Application score and takes an application and list of features to attach to the application
All checks were successful
Run Check Script / check (pull_request) Successful in -8s
2025-07-04 10:27:16 -04:00
6149249a6c feat: create Argo interpret and kube client apply_yaml to install Argo Applications. Very messy implementation though, must be refactored/improved
All checks were successful
Run Check Script / check (push) Successful in -5s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 4m13s
2025-07-04 09:49:43 -04:00
d9935e20cb Merge pull request 'feat: harmony now defaults to using local k3d cluster. Also created OCICompliant: Application trait to make building images cleaner' (#76) from feat/oci into master
All checks were successful
Run Check Script / check (push) Successful in -9s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 4m4s
Reviewed-on: #76
2025-07-03 19:37:46 +00:00
7b0f3b79b1 Merge remote-tracking branch 'origin/master' into feat/oci
All checks were successful
Run Check Script / check (pull_request) Successful in -8s
2025-07-03 15:36:52 -04:00
e6612245a5 Merge pull request 'feat/cd/localdeploymentdemo' (#79) from feat/cd/localdeploymentdemo into feat/oci
All checks were successful
Run Check Script / check (pull_request) Successful in -9s
Reviewed-on: #79
2025-07-03 19:31:45 +00:00
b4f5b91a57 feat: WIP argocd_score (#78)
Some checks are pending
Compile and package harmony_composer / package_harmony_composer (push) Waiting to run
Run Check Script / check (push) Successful in -8s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #78
Reviewed-by: johnride <jg@nationtech.io>
Co-authored-by: Taha Hawa <taha@taha.dev>
Co-committed-by: Taha Hawa <taha@taha.dev>
2025-07-03 19:30:00 +00:00
d317c0ba76 fix: Continuous delivery now works with rust example to deploy on local k3d, ingress and everything
All checks were successful
Run Check Script / check (pull_request) Successful in -3s
2025-07-03 15:25:43 -04:00
539b8299ae feat(continuousdelivery): Local deployment implementation for demo purposes. Needs a lot of refactoring but it works (or almost works) 2025-07-03 11:55:10 -04:00
5a89495c61 feat: implement helm chart generation and publishing
All checks were successful
Run Check Script / check (pull_request) Successful in -4s
- Added functionality to generate a Helm chart for the application.
- Implemented chart packaging and pushing to an OCI registry.
- Utilized `helm package` and `helm push` commands.
- Included configurable registry URL and project name.
- Added tests to verify chart generation and packaging.
- Improved error handling and logging.
2025-07-03 07:19:37 -04:00
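A rough sketch of the `helm package` / `helm push` flow this commit describes, shelling out to the Helm CLI. The chart path, archive name, and registry URL are placeholders for illustration, not the committed implementation.

```rust
use std::process::Command;

fn package_and_push(chart_dir: &str, registry: &str) -> Result<(), String> {
    // `helm package` writes <name>-<version>.tgz into the current directory.
    let package = Command::new("helm")
        .args(["package", chart_dir])
        .output()
        .map_err(|e| format!("failed to run helm package: {e}"))?;
    if !package.status.success() {
        return Err(String::from_utf8_lossy(&package.stderr).into_owned());
    }

    // Push the generated archive to an OCI registry, e.g. oci://registry.example.com/charts
    let push = Command::new("helm")
        .args(["push", "my-app-0.1.0.tgz", registry]) // archive name is a placeholder
        .output()
        .map_err(|e| format!("failed to run helm push: {e}"))?;
    if !push.status.success() {
        return Err(String::from_utf8_lossy(&push.stderr).into_owned());
    }
    Ok(())
}

fn main() {
    if let Err(e) = package_and_push("./chart", "oci://registry.example.com/charts") {
        eprintln!("chart publishing failed: {e}");
    }
}
```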
fb7849c010 feat: Add sample leptos webapp as example 2025-07-02 23:13:08 -04:00
6371009c6f breaking: Rename Maestro::new to Maestro::new_without_initialization. This improves UX as it makes it more obvious to users that this method should rarely be used
All checks were successful
Run Check Script / check (pull_request) Successful in -5s
2025-07-02 17:47:23 -04:00
a4aa685a4f feat: harmony now defaults to using local k3d cluster. Also created OCICompliant: Application trait to make building images cleaner
Some checks failed
Run Check Script / check (pull_request) Failing after -33s
2025-07-02 17:42:29 -04:00
6bf10b093c Merge pull request 'refactor/ns' (#74) from refactor/ns into master
All checks were successful
Run Check Script / check (push) Successful in 0s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 4m7s
Reviewed-on: #74
Reviewed-by: taha <taha@noreply.git.nationtech.io>
2025-07-02 19:54:28 +00:00
3eecc2f590 fix: K8sTenantManager is responsible for concrete implementation. K8sAnywhere should delegate
All checks were successful
Run Check Script / check (pull_request) Successful in 4s
2025-07-02 15:51:30 -04:00
3959c07261 Merge remote-tracking branch 'origin/master' into refactor/ns 2025-07-02 15:13:13 -04:00
e50c01c0b3 fix: Forgotten file 🙈
Some checks failed
Run Check Script / check (push) Failing after -28s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m15s
2025-07-02 15:11:03 -04:00
286460d59e Merge pull request 'feat: added default resource limit and request to k8s tenant' (#75) from feat/tenant_limit_range into master
Some checks failed
Run Check Script / check (push) Failing after 38s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m0s
Reviewed-on: #75
Reviewed-by: taha <taha@noreply.git.nationtech.io>
2025-07-02 18:55:04 +00:00
4baa3ae707 feat: added default resource limit and request to k8s tenant
Some checks failed
Run Check Script / check (pull_request) Failing after 35s
2025-07-02 14:06:08 -04:00
82119076cf fix: merge conflict
Some checks failed
Run Check Script / check (pull_request) Failing after 41s
2025-07-02 13:46:26 -04:00
f2a350fae6 fix: comments from pr
All checks were successful
Run Check Script / check (pull_request) Successful in 5s
2025-07-02 13:35:20 -04:00
197770a603 feat: Add ntfy score (#69)
Some checks failed
Run Check Script / check (push) Failing after 42s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 4m4s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #69
2025-07-02 16:19:35 +00:00
ab69a2c264 feat: add service monitors support to prom (#66)
Some checks failed
Run Check Script / check (push) Failing after 45s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m30s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #66
Co-authored-by: taha <taha@noreply.git.nationtech.io>
Co-committed-by: taha <taha@noreply.git.nationtech.io>
2025-07-02 15:29:16 +00:00
e857efa92f fix merge conflict
All checks were successful
Run Check Script / check (pull_request) Successful in 1m50s
2025-07-02 11:26:27 -04:00
2ff3f4afa9 Merge pull request 'feat: Introduce Application trait, not too sure how it will evolve but it makes sense, at the very least to identify the Application, also some minor refactoring' (#73) from feat/applicationTrait into master
Some checks failed
Run Check Script / check (push) Failing after 50s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #73
2025-07-02 15:25:26 +00:00
2f6a11ead7 Merge pull request 'feat: Application Interpret still WIP but now call ensure_installed on features, also introduced a rust app example, completed work on clone_box behavior' (#72) from feat/rust_cd into master
Some checks failed
Run Check Script / check (push) Successful in 2m4s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #72
2025-07-02 15:20:24 +00:00
7de9860dcf refactor: monitoring takes namespace from tenant
All checks were successful
Run Check Script / check (pull_request) Successful in -6s
2025-07-02 11:14:24 -04:00
6e884cff3a feat: Start default implementation to ArgoCD for ContinuousDelivery feature
Some checks failed
Run Check Script / check (pull_request) Failing after -34s
2025-07-02 11:14:24 -04:00
c74c51090a feat: Introduce Application trait, not too sure how it will evolve but it makes sense, at the very least to identify the Application, also some minor refactoring
Some checks failed
Run Check Script / check (pull_request) Failing after -38s
2025-07-02 09:48:26 -04:00
8ae0d6b548 feat: Application Interpret still WIP but now call ensure_installed on features, also introduced a rust app example, completed work on clone_box behavior
All checks were successful
Run Check Script / check (pull_request) Successful in -6s
2025-07-01 22:44:44 -04:00
ee02906ce9 fix(composer): spawn commands to allow interaction (#71)
All checks were successful
Run Check Script / check (push) Successful in 1m39s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m5s
Using `Command::output()` executes the command and waits for it to finish before returning the output.
In some cases, though, the user needs to interact with the CLI before it can continue, which leaves the command execution hanging.

Instead, using `Command::spawn()` allows stdin/stdout to be forwarded to the parent process.

Reviewed-on: #71
Reviewed-by: johnride <jg@nationtech.io>
2025-07-01 21:08:19 +00:00
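A minimal sketch of the difference described above, using a hypothetical shell command rather than the composer's actual invocations: `output()` buffers the result and gives the child no usable stdin, while `spawn()` leaves stdin/stdout attached to the parent so the user can answer prompts.

```rust
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Command::output(): runs to completion and captures stdout/stderr.
    // Stdin is not connected, so an interactive prompt in the child would hang or fail.
    let captured = Command::new("echo").arg("non-interactive").output()?;
    println!("captured: {}", String::from_utf8_lossy(&captured.stdout));

    // Command::spawn(): stdin/stdout/stderr are inherited from the parent by default,
    // so the child can prompt the user and read the answer directly.
    let status = Command::new("sh")
        .arg("-c")
        .arg("printf 'continue? ' && read answer && echo \"you said: $answer\"")
        .stdin(Stdio::inherit()) // explicit for clarity; inherit is already the default for spawn()
        .spawn()?
        .wait()?;
    println!("child exited with {status}");
    Ok(())
}
```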
284cc6afd7 feat: Application module architecture and placeholder features (#70)
Some checks failed
Run Check Script / check (push) Successful in 1m34s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 11m22s
With this architecture, we have an extensible application module for which we can easily define new features and add them to application scores.

All this is driven by the ApplicationInterpret, which understands features and makes sure they are "installed".

The drawback of this design is that we now have three different places to launch scores within Harmony: Maestro, Topology and Interpret. This is an architectural smell and I am not sure how to deal with it at the moment.

However, all these places where execution is performed make sense semantically: an ApplicationInterpret must understand ApplicationFeatures and can very well be responsible for them. The same goes for a Topology, which provides features itself by composition (e.g. K8sAnywhereTopology implements TenantManager), so it is natural for this very implementation to know how to install itself.

Co-authored-by: Ian Letourneau <ian@noma.to>
Reviewed-on: #70
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-07-01 19:40:30 +00:00
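The names below (`ApplicationFeature`, `ensure_installed`, `ApplicationInterpret`) follow the terminology in this and the related commits, but the signatures are a hypothetical, synchronous sketch of the relationship, not the actual Harmony API.

```rust
// Hypothetical sketch: an interpret owns a list of features and makes sure each is installed.
pub trait ApplicationFeature {
    fn name(&self) -> String;
    /// Idempotent: installs the feature only if it is not already present.
    fn ensure_installed(&self) -> Result<(), String>;
}

pub struct ApplicationInterpret {
    pub features: Vec<Box<dyn ApplicationFeature>>,
}

impl ApplicationInterpret {
    pub fn execute(&self) -> Result<(), String> {
        for feature in &self.features {
            println!("ensuring feature '{}' is installed", feature.name());
            feature.ensure_installed()?;
        }
        Ok(())
    }
}

// Placeholder feature for the sketch; real features would install ArgoCD, monitoring, etc.
struct ContinuousDelivery;
impl ApplicationFeature for ContinuousDelivery {
    fn name(&self) -> String {
        "continuous-delivery".into()
    }
    fn ensure_installed(&self) -> Result<(), String> {
        Ok(())
    }
}

fn main() {
    let interpret = ApplicationInterpret {
        features: vec![Box::new(ContinuousDelivery)],
    };
    interpret.execute().unwrap();
}
```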
9bf6aac82e doc: Fix curl command for environments without ~/.local/bin/ folder
All checks were successful
Run Check Script / check (push) Successful in 1m21s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m1s
2025-07-01 11:32:24 -04:00
460c8b59e1 wip: helm chart deploys to namespace with resource limits and requests, trying to fix connection refused to api error 2025-06-27 14:47:28 -04:00
8e857bc72a wip: using the name from tenant config as deployment namespace for kubeprometheus deployment or defaulting to monitoring if no tenant config exists 2025-06-26 16:24:19 -04:00
e8d55d27e4 Merge pull request 'feat: added webhook receiver to alertchannels' (#68) from feat/webhook_receiver into master
All checks were successful
Run Check Script / check (push) Successful in 1m34s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m8s
Reviewed-on: #68
Reviewed-by: taha <taha@noreply.git.nationtech.io>
2025-06-26 16:43:25 +00:00
fea7e9ddb9 doc: Improve harmony_composer README single command usage
Some checks failed
Run Check Script / check (push) Successful in 1m34s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
2025-06-26 12:40:39 -04:00
7ec89cdac5 fix: cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m58s
2025-06-26 11:26:07 -04:00
55143dcad4 Merge pull request 'feat: add dry-run functionality and similar dependency' (#62) from feat/dryRun into master
All checks were successful
Run Check Script / check (push) Successful in 1m42s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 9m8s
Reviewed-on: #62
Reviewed-by: wjro <wrolleman@nationtech.io>
2025-06-26 15:14:25 +00:00
17ad92402d feat: added webhook receiver to alertchannels
Some checks failed
Run Check Script / check (pull_request) Failing after 47s
2025-06-26 10:12:18 -04:00
29e74a2712 Merge pull request 'feat: added alert rule and impl for prometheus as well as a few preconfigured bmc alerts for dell server that are used in the monitoring example' (#67) from feat/alert_rules into master
All checks were successful
Run Check Script / check (push) Successful in -1s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 13m10s
Reviewed-on: #67
2025-06-26 13:16:38 +00:00
e16f8fa82e fix: modified directory names to be in line with alert functions and deployment environments
All checks were successful
Run Check Script / check (pull_request) Successful in 1m43s
2025-06-25 16:10:45 -04:00
c21f3084dc feat: added alert rule and impl for prometheus as well as a few preconfigured bmc alerts for dell server that are used in the monitoring example
All checks were successful
Run Check Script / check (pull_request) Successful in 1m35s
2025-06-25 15:10:16 -04:00
2c706225a1 feat: Publishing a release of harmony composer binary as latest-snapshot (#65)
All checks were successful
Run Check Script / check (push) Successful in 1m42s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m44s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #65
Reviewed-by: taha <taha@noreply.git.nationtech.io>
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-06-25 15:14:45 +00:00
acfb93f1a2 feat: add dry-run functionality and similar dependency
All checks were successful
Run Check Script / check (pull_request) Successful in 1m45s
- Implemented a dry-run mode for K8s resource patching, displaying diffs before applying changes.
- Added the `similar` dependency for calculating and displaying text diffs.
- Enhanced K8s resource application to handle various port specifications in NetworkPolicy ingress rules.
- Added support for port ranges and lists of ports in NetworkPolicy rules.
- Updated K8s client to utilize the dry-run configuration setting.
- Added configuration option `HARMONY_DRY_RUN` to enable or disable dry-run mode.
2025-06-24 14:54:22 -04:00
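The `similar` crate mentioned above can render such a line diff in a few lines. This is a generic sketch of the idea; the manifests are placeholders, not Harmony's actual dry-run output.

```rust
use similar::{ChangeTag, TextDiff};

fn main() {
    // Placeholder manifests standing in for "current cluster state" vs "desired state".
    let live = "replicas: 2\nimage: app:1.0\n";
    let desired = "replicas: 3\nimage: app:1.1\n";

    let diff = TextDiff::from_lines(live, desired);
    for change in diff.iter_all_changes() {
        let sign = match change.tag() {
            ChangeTag::Delete => "-",
            ChangeTag::Insert => "+",
            ChangeTag::Equal => " ",
        };
        // In a dry run this diff would be printed instead of patching the resource.
        print!("{sign}{change}");
    }
}
```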
f437c40428 impl_monitoring_alerting_kube_prometheus (#64)
All checks were successful
Run Check Script / check (push) Successful in 1m29s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 2m59s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #64
Co-authored-by: Willem <wrolleman@nationtech.io>
Co-committed-by: Willem <wrolleman@nationtech.io>
2025-06-24 18:54:15 +00:00
e06548ac44 feat: Alerting module architecture to make it easy to use and extensible by external crates
All checks were successful
Run Check Script / check (push) Successful in 1m34s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m26s
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Reviewed-on: #61
Reviewed-by: johnride <jg@nationtech.io>
Co-authored-by: Willem <wrolleman@nationtech.io>
Co-committed-by: Willem <wrolleman@nationtech.io>
2025-06-19 14:37:16 +00:00
155e9bac28 feat: create harmony_composer initial version + rework CI (#58)
All checks were successful
Run Check Script / check (push) Successful in 1m38s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 4m15s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #58
Reviewed-by: johnride <jg@nationtech.io>
Co-authored-by: Taha Hawa <taha@taha.dev>
Co-committed-by: Taha Hawa <taha@taha.dev>
2025-06-18 19:52:37 +00:00
7bebc58615 feat: add tenant credential management (#63)
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Adds the foundation for managing tenant credentials, including:

- `TenantCredentialScore` for scoring credential-related operations.
- `TenantCredentialManager` trait for creating users.
- `CredentialMetadata` struct to store credential information.
- `CredentialData` enum to hold credential content.
- `TenantCredentialBundle` struct to encapsulate metadata and content.

This provides a starting point for implementing credential creation, storage, and retrieval within the harmony system.

Reviewed-on: #63
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-06-17 18:28:04 +00:00
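The types listed above suggest a shape roughly like the following; the field names and trait signature are guesses for illustration, not the committed definitions.

```rust
/// Hypothetical sketch of the credential data model described in the commit.
pub struct CredentialMetadata {
    pub name: String,
    pub tenant: String,
    pub created_at: String,
}

pub enum CredentialData {
    Password(String),
    SshKey { public: String, private: String },
    Kubeconfig(String),
}

pub struct TenantCredentialBundle {
    pub metadata: CredentialMetadata,
    pub data: CredentialData,
}

pub trait TenantCredentialManager {
    /// Creates a user for the tenant and returns the generated credentials.
    fn create_user(&self, tenant: &str, username: &str) -> Result<TenantCredentialBundle, String>;
}
```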
246d6718c3 docs: Introduce project delivery automation ADR. This is still WIP (#51)
All checks were successful
Run Check Script / check (push) Successful in 1m52s
Reviewed-on: #51
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-06-12 20:00:22 +00:00
d776042e20 docs: Improve README formatting
All checks were successful
Run Check Script / check (push) Successful in 1m49s
Signed-off-by: johnride <jg@nationtech.io>
2025-06-12 18:23:17 +00:00
86c681be70 docs: New README, two options to choose from right now (#59)
All checks were successful
Run Check Script / check (push) Successful in 1m52s
Reviewed-on: #59
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-06-12 18:16:43 +00:00
b94dd1e595 feat: add support for custom CIDR ingress/egress rules (#60)
All checks were successful
Run Check Script / check (push) Successful in 1m53s
- Added `additional_allowed_cidr_ingress` and `additional_allowed_cidr_egress` fields to `TenantNetworkPolicy` to allow specifying custom CIDR blocks for network access.
- Updated K8sTenantManager to parse and apply these CIDR rules to NetworkPolicy ingress and egress rules.
- Added `cidr` dependency to `harmony_macros` and a custom proc macro `cidrv4` to easily parse CIDR strings.
- Updated TenantConfig to default inter tenant and internet egress to deny all and added default empty vectors for CIDR ingress and egress.
- Updated ResourceLimits to implement default.

Reviewed-on: #60
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-06-12 15:24:03 +00:00
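For reference, the `cidr` crate parses such blocks directly. The struct below is a simplified stand-in for `TenantNetworkPolicy`, used only to illustrate how the new fields might be populated.

```rust
use cidr::Ipv4Cidr;

// Simplified stand-in for the TenantNetworkPolicy fields described above.
struct NetworkPolicyConfig {
    additional_allowed_cidr_ingress: Vec<Ipv4Cidr>,
    additional_allowed_cidr_egress: Vec<Ipv4Cidr>,
}

fn main() {
    let policy = NetworkPolicyConfig {
        // The cidrv4 proc macro mentioned in the commit would validate these at compile time;
        // plain runtime parsing works the same way.
        additional_allowed_cidr_ingress: vec!["10.10.0.0/24".parse::<Ipv4Cidr>().unwrap()],
        additional_allowed_cidr_egress: vec!["192.168.1.0/24".parse::<Ipv4Cidr>().unwrap()],
    };
    for block in &policy.additional_allowed_cidr_egress {
        println!("egress allowed to {block}");
    }
}
```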
ef5ec4a131 Merge pull request 'feat: Pass configuration when initializing K8sAnywhereTopology' (#57) from feat/configK8sAnywhere into master
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Reviewed-on: #57
2025-06-10 13:01:50 +00:00
a8eb06f686 feat: Pass configuration when initializing K8sAnywhereTopology
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Run Check Script / check (pull_request) Successful in 1m47s
2025-06-10 09:00:38 -04:00
d1678b529e Merge pull request 'feat: K8s Tenant looks good, basic isolation working now' (#56) from feat/k8sTenant into master
All checks were successful
Run Check Script / check (push) Successful in 1m57s
Reviewed-on: #56
2025-06-10 12:59:13 +00:00
1451260d4d feat: K8s Tenant looks good, basic isolation working now
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m48s
2025-06-09 20:39:15 -04:00
415488ba39 feat: K8s apply function now correctly emulates kubectl apply behavior by either creating or updating resources (#55)
Some checks failed
Run Check Script / check (push) Has been cancelled
Reviewed-on: #55
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-06-09 20:19:54 +00:00
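The "create or update" behaviour described in this commit maps to a common pattern with the `kube` crate already in the workspace. A hedged sketch follows: the resource type, field manager name, and namespace are placeholders, and the real implementation works on arbitrary dynamic objects rather than a single typed resource.

```rust
use k8s_openapi::api::core::v1::ConfigMap;
use kube::{
    api::{Api, Patch, PatchParams, PostParams},
    Client, Error,
};

// Roughly emulates `kubectl apply` for one typed resource:
// create it if it does not exist, otherwise patch the existing object.
async fn apply_configmap(client: Client, cm: ConfigMap) -> Result<ConfigMap, Error> {
    let name = cm.metadata.name.clone().expect("resource must have a name");
    let api: Api<ConfigMap> = Api::namespaced(client, "default");
    match api.get(&name).await {
        Ok(_) => {
            // Resource exists: server-side apply keeps the semantics close to kubectl's.
            api.patch(&name, &PatchParams::apply("harmony"), &Patch::Apply(&cm))
                .await
        }
        Err(Error::Api(err)) if err.code == 404 => api.create(&PostParams::default(), &cm).await,
        Err(e) => Err(e),
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::try_default().await?;
    let mut cm = ConfigMap::default();
    cm.metadata.name = Some("harmony-demo".to_string());
    apply_configmap(client, cm).await?;
    Ok(())
}
```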
bf7a6d590c Merge pull request 'TenantManager_impl_k8s_anywhere' (#47) from TenantManager_impl_k8s_anywhere into master
All checks were successful
Run Check Script / check (push) Successful in 1m56s
Reviewed-on: #47
2025-06-09 18:07:32 +00:00
8d8120bbfd fix: K8s ingress module was completely broken, fixed resource definition structure and types
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Run Check Script / check (pull_request) Successful in 1m48s
2025-06-09 14:02:06 -04:00
6cf61ae67c feat: Tenant manager k8s implementation progress : ResourceQuota, NetworkPolicy and Namespace look good. Still WIP 2025-06-09 13:59:49 -04:00
8c65aef127 feat: Can now apply any k8s resource type, both namespaced or cluster scoped 2025-06-09 13:58:40 -04:00
00e71b97f6 chore: Move ADR helper files into folders with their corresponding ADR number
All checks were successful
Run Check Script / check (push) Successful in 1m49s
Run Check Script / check (pull_request) Successful in 1m48s
2025-06-09 13:54:23 -04:00
ee2bba5623 Merge pull request 'feat: Add Default implementation for Harmony Id along with documentation.' (#53) from feat/id_default into TenantManager_impl_k8s_anywhere
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Run Check Script / check (pull_request) Successful in 1m47s
Reviewed-on: #53
2025-06-09 17:47:33 +00:00
118d34db55 Merge pull request 'feat: Initialize k8s tenant properly' (#54) from feat/init_k8s_tenant into feat/id_default
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m47s
Reviewed-on: #54
2025-06-09 17:42:10 +00:00
24e466fadd fix: formatting
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m46s
2025-06-08 23:51:11 -04:00
14fc4345c1 feat: Initialize k8s tenant properly
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m49s
2025-06-08 23:49:08 -04:00
8e472e4c65 feat: Add Default implementation for Harmony Id along with documentation.
Some checks failed
Run Check Script / check (push) Failing after 47s
Run Check Script / check (pull_request) Failing after 45s
This Id implementation is optimized for ease of use. Ids are prefixed with the unix epoch and suffixed with 7 alphanumeric characters, but Ids can also contain any String the user wants to pass in.
2025-06-08 21:23:29 -04:00
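A minimal sketch of the default Id format described above (unix-epoch prefix plus 7 random alphanumeric characters); the function name and separator are assumptions, not the actual Harmony type.

```rust
use rand::{distributions::Alphanumeric, Rng};
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical reconstruction of the default Id format: "<unix epoch>-<7 alphanumerics>".
fn default_id() -> String {
    let epoch = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before unix epoch")
        .as_secs();
    let suffix: String = rand::thread_rng()
        .sample_iter(&Alphanumeric)
        .take(7)
        .map(char::from)
        .collect();
    format!("{epoch}-{suffix}")
}

fn main() {
    println!("{}", default_id()); // e.g. 1717900000-a8Xk2Qz
}
```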
ec17ccc246 feat: Add example-tenant (WIP)
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m53s
2025-06-06 13:59:48 -04:00
5127f44ab3 docs: Add note about pod privilege escalation in ADR 011 Tenant
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Run Check Script / check (pull_request) Successful in 1m46s
2025-06-06 13:56:40 -04:00
2ff70db0b1 wip: Tenant example project
All checks were successful
Run Check Script / check (push) Successful in 1m49s
Run Check Script / check (pull_request) Successful in 1m48s
2025-06-06 13:52:40 -04:00
e17ac1af83 Merge remote-tracking branch 'origin/master' into TenantManager_impl_k8s_anywhere
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m47s
2025-06-04 16:14:21 -04:00
31e59937dc Merge pull request 'feat: Initial setup for monitoring and alerting' (#48) from feat/monitor into master
All checks were successful
Run Check Script / check (push) Successful in 1m50s
Reviewed-on: #48
Reviewed-by: johnride <jg@nationtech.io>
2025-06-03 18:17:13 +00:00
12eb4ae31f fix: cargo fmt
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Run Check Script / check (pull_request) Successful in 1m47s
2025-06-02 16:20:49 -04:00
a2be9457b9 wip: removed AlertReceiverConfig
Some checks failed
Run Check Script / check (push) Failing after 44s
Run Check Script / check (pull_request) Failing after 44s
2025-06-02 16:11:36 -04:00
0d56fbc09d wip: applied comments in pr, changed naming of AlertChannel to AlertReceiver and added rust doc to Monitor for clarity
All checks were successful
Run Check Script / check (push) Successful in 1m49s
Run Check Script / check (pull_request) Successful in 1m47s
2025-06-02 14:44:43 -04:00
56dc1e93c1 fix: modified files in mod
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m46s
2025-06-02 11:47:21 -04:00
691540fe64 wip: modified initial monitoring architecture based on pr review
Some checks failed
Run Check Script / check (push) Failing after 46s
Run Check Script / check (pull_request) Failing after 43s
2025-06-02 11:42:37 -04:00
7e3f1b1830 fix:cargo fmt
All checks were successful
Run Check Script / check (push) Successful in 1m45s
Run Check Script / check (pull_request) Successful in 1m45s
2025-05-30 13:59:29 -04:00
b631e8ccbb feat: Initial setup for monitoring and alerting
Some checks failed
Run Check Script / check (push) Failing after 43s
Run Check Script / check (pull_request) Failing after 45s
2025-05-30 13:21:38 -04:00
60f2f31d6c feat: Add TenantScore and TenantInterpret (#45)
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Reviewed-on: #45
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-05-30 13:13:43 +00:00
045954f8d3 start network policy
All checks were successful
Run Check Script / check (push) Successful in 1m50s
Run Check Script / check (pull_request) Successful in 1m46s
2025-05-29 18:06:16 -04:00
27f1a9dbdd feat: add more to the tenantmanager k8s impl (#46)
All checks were successful
Run Check Script / check (push) Successful in 1m55s
Co-authored-by: Willem <wrolleman@nationtech.io>
Reviewed-on: #46
Co-authored-by: Taha Hawa <taha@taha.dev>
Co-committed-by: Taha Hawa <taha@taha.dev>
2025-05-29 20:15:38 +00:00
7c809bf18a Make k8stenantmanager a oncecell
All checks were successful
Run Check Script / check (push) Successful in 1m49s
Run Check Script / check (pull_request) Successful in 1m46s
2025-05-29 16:03:58 -04:00
6490e5e82a Hardcode some limits to protect the overall cluster
Some checks failed
Run Check Script / check (push) Failing after 42s
Run Check Script / check (pull_request) Failing after 47s
2025-05-29 15:49:46 -04:00
5e51f7490c Update request quota
Some checks failed
Run Check Script / check (push) Failing after 46s
2025-05-29 15:41:57 -04:00
97fba07f4e feat: adding kubernetes implentation of tenant manager
Some checks failed
Run Check Script / check (push) Failing after 43s
2025-05-29 14:35:58 -04:00
624e4330bb boilerplate
All checks were successful
Run Check Script / check (push) Successful in 1m47s
2025-05-29 13:36:30 -04:00
e7917843bc Merge pull request 'feat: Add initial Tenant traits and data structures' (#43) from feat/tenant into master
Some checks failed
Run Check Script / check (push) Has been cancelled
Reviewed-on: #43
2025-05-29 15:51:33 +00:00
7cd541bdd8 chore: Fix pr comments, remove many YAGNI things
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Run Check Script / check (pull_request) Successful in 1m46s
2025-05-29 11:47:25 -04:00
270dd49567 Merge pull request 'docs: Add CONTRIBUTING.md guide' (#44) from doc/contributor into master
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Reviewed-on: #44
2025-05-29 14:48:18 +00:00
0187300473 docs: Add CONTRIBUTING.md guide
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Run Check Script / check (pull_request) Successful in 1m47s
2025-05-29 10:47:38 -04:00
bf16566b4e wip: Clean up some unnecessary bits in the Tenant module and move manager to its own file
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m46s
2025-05-29 07:25:45 -04:00
895fb02f4e feat: Add initial Tenant traits and data structures
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Run Check Script / check (pull_request) Successful in 1m45s
2025-05-28 22:33:46 -04:00
88d6af9815 Merge pull request 'feat/basicCI' (#42) from feat/basicCI into master
All checks were successful
Run Check Script / check (push) Successful in 1m50s
Reviewed-on: #42
Reviewed-by: taha <taha@noreply.git.nationtech.io>
2025-05-28 19:42:19 +00:00
5aa9dc701f fix: Removed forgotten refactoring bits and formatting
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Run Check Script / check (pull_request) Successful in 1m48s
2025-05-28 15:19:39 -04:00
f4ef895d2e feat: Add basic CI configuration
Some checks failed
Run Check Script / check (push) Failing after 51s
2025-05-28 14:40:19 -04:00
6e7148a945 Merge pull request 'adr: Add ADR on multi tenancy using namespace based customer isolation' (#41) from adr/multi-tenancy into master
Reviewed-on: #41
2025-05-26 20:26:36 +00:00
83453273c6 adr: Add ADR on multi tenancy using namespace based customer isolation 2025-05-26 11:56:45 -04:00
76ae5eb747 fix: make HelmRepository public (#39)
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #39
Reviewed-by: johnride <jg@nationtech.io>
2025-05-22 20:07:42 +00:00
9c51040f3b Merge pull request 'feat:added Slack notifications support' (#38) from feat/slack-notifs into master
Reviewed-on: #38
Reviewed-by: johnride <jg@nationtech.io>
2025-05-22 20:04:51 +00:00
e1a8ee1c15 feat: send alerts to multiple alert channels 2025-05-22 14:16:41 -04:00
44b2b092a8 feat:added Slack notifications support 2025-05-21 15:29:14 -04:00
19bd47a545 Merge pull request 'monitoringalerting' (#37) from monitoringalerting into master
Reviewed-on: #37
Reviewed-by: johnride <jg@nationtech.io>
2025-05-21 17:32:26 +00:00
2b6d2e8606 fix:merge confict 2025-05-20 16:05:38 -04:00
7fc2b1ebfe feat: added monitoring stack example to lamp demo 2025-05-20 15:59:01 -04:00
e80752ea3f feat: install discord alert manager helm chart when Discord is the chosen alerting channel 2025-05-20 15:51:03 -04:00
bae7222d64 Our own Helm Command/Resource/Executor (WIP) (#13)
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #13
Co-authored-by: Taha Hawa <taha@taha.dev>
Co-committed-by: Taha Hawa <taha@taha.dev>
2025-05-20 14:01:10 +00:00
f7d3da3ac9 fix merge conflict 2025-05-15 15:31:26 -04:00
eb8a8a2e04 chore: modified build config to be able to pass namespace to the config 2025-05-15 15:19:40 -04:00
b4c6848433 feat: added default monitoringStackScore implementation 2025-05-15 14:52:04 -04:00
0d94c537a0 feat: add ingress score (#32)
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #32
Reviewed-by: wjro <wrolleman@nationtech.io>
2025-05-15 16:11:40 +00:00
861f266c4e Merge pull request 'feat: LAMP stack and Monitoring stack now work on OKD, we just have to manually set a few serviceaccounts to privileged scc until we find a better solution' (#36) from feat/lampOKD into master
Reviewed-on: #36
2025-05-14 15:48:56 +00:00
51724d0e55 feat: LAMP stack and Monitoring stack now work on OKD, we just have to manually set a few serviceaccounts to privileged scc until we find a better solution 2025-05-14 11:47:39 -04:00
c2d1cb9b76 Merge pull request 'upgrade stack size from default 1MB on windows (k3d stack overflow otherwise)' (#34) from windows-stack-size-increase into master
Reviewed-on: #34
Reviewed-by: johnride <jg@nationtech.io>
2025-05-14 14:29:51 +00:00
c84a02c8ec upgrade stack size from default 1MB on windows (k3d stack overflow otherwise) 2025-05-11 22:39:23 -04:00
8d3d167848 fix: Remove todo statements for lamp score and k8s related features that are now complete! 2025-05-06 14:46:57 -04:00
94f6cc6942 fix: kube_prometheus missing new field repo in HelmChartScore 2025-05-06 13:57:58 -04:00
4a9b95acad Merge pull request 'monitoring-alerting' (#30) from monitoring-alerting into master
Reviewed-on: #30
2025-05-06 17:50:56 +00:00
ef9c1cce77 fix:yaml structure 2025-05-06 13:42:59 -04:00
df65ac3439 formatting: Fix format of load_balancer.rs 2025-05-06 13:38:21 -04:00
e5ddd296db Merge pull request 'feat: add cert-manager module and helm repo support' (#31) from feat/awsOKD into master
Reviewed-on: #31
2025-05-06 16:39:19 +00:00
4be008556e feat: add cert-manager module and helm repo support
- Implemented a new `cert-manager` module for deploying cert-manager.
- Added support for specifying a Helm repository in module configurations.
- Introduced `cert_manager` module in `modules/mod.rs`.
- Created `src/modules/cert_manager` directory and its associated code.
- Implemented `add_repo` function in `src/modules/helm.rs` for adding Helm repositories.
- Updated `LAMPInterpret` and `lamp.rs` to integrate the new module.
- Added logging for Helm command execution.
- Updated k8s deployment file to remove unused DeepMerge dependency.
2025-05-06 16:38:57 +00:00
78e9893341 Merge pull request 'feat: started to prepare inventory / topoplogy for NCD' (#1) from feat/settingUpNDC into master
Reviewed-on: #1
2025-05-06 16:38:40 +00:00
d9921b857b fix:installs helm chart 2025-05-06 12:23:03 -04:00
e62ef001ed fix: Fix opnsense test, Host.tll now optional and run cargo fmt 2025-05-06 12:00:56 -04:00
1fb7132c64 Merge branch 'master' into feat/settingUpNDC 2025-05-06 11:58:12 -04:00
2d74c66fc6 wip: trying to get the kube-prometheus score to install 2025-05-06 11:54:10 -04:00
8a199b64f5 feat: Upgrade opnsense-config crates to be compatible with opnsense 25.1_5 2025-05-06 11:45:19 -04:00
b7fe62fcbb feat: ncd0 example complete. Missing files for authentication, ignition etc are accessible upon deman. This is yet another great step towards full UPI automated provisionning 2025-05-06 11:44:40 -04:00
cd8542258c Merge remote-tracking branch 'origin/master' into monitoring-alerting 2025-05-06 10:03:27 -04:00
472a3c1051 fix: correctly pass namespace and monitoring stack to topology so it can be used to init the maestro and exec the score 2025-05-06 10:02:21 -04:00
88270ece61 fix: refactor so that the topology installs the MonitoringAlertingStack depending on if it is already present in the cluster 2025-05-05 16:37:15 -04:00
e7cfbf914a feat: added basic alert for pvc 95% full to kube-prometheus score 2025-05-05 15:38:37 -04:00
fbd466a85c added file 2025-05-05 13:40:32 -04:00
2f8e150f41 feat: added Score and topology to create kube prometheus monitoring and alerting stack 2025-05-05 12:49:28 -04:00
764fd6d451 Merge pull request 'chore: added default mariadb size and pass env variables to php app' (#28) from lamp-env-vars into master
Reviewed-on: #28
Reviewed-by: johnride <jg@nationtech.io>
2025-05-03 00:20:47 +00:00
78fffcd725 fix: specified 2Gi db size from LAMPconfig 2025-05-02 15:07:39 -04:00
e1133ea114 use default database_size None in LampConfig to default to value from helm chart 2025-05-02 15:02:50 -04:00
d8e8a49745 Merge pull request 'feat:php program to fill pvc and report database usage' (#29) from pvc-filler into master
Reviewed-on: #29
2025-05-02 16:06:46 +00:00
a7ba9be486 feat:php program to fill pvc and report database usage 2025-05-02 12:03:18 -04:00
1c3669cb47 chore: added default mariadb size and pass env variables to php app 2025-05-02 11:56:27 -04:00
90b80b24bc Merge pull request 'feat: push docker image to registry and deploy with full tag' (#27) from feat/lampDatabase into master
Reviewed-on: #27
Reviewed-by: wjro <wrolleman@nationtech.io>
2025-05-01 17:39:23 +00:00
c879ca143f feat: Add comments explaining a bit of what harmony does in the lamp demo 2025-04-30 23:36:12 -04:00
bc2bd2f2f4 feat: push docker image to registry and deploy with full tag
- Added functionality to tag and push the built Docker image to a specified registry.
- Modified deployment score to use the full image tag (including registry and project).
- Included error handling and logging for the `docker tag` and `docker push` commands.
- Updated the `K8sDeploymentScore` struct to include a namespace field and environment variables for database credentials.
- Added kebab-case conversion for deployment name and namespace.
- Implemented a check_output function for better error reporting.
2025-04-30 22:33:31 -04:00
28978299c9 Merge pull request 'feat: add mariadb helm deployment to lamp interpreter' (#26) from feat/lampDatabase into master
Reviewed-on: #26
Reviewed-by: wjro <wrolleman@nationtech.io>
2025-04-30 20:03:13 +00:00
87f6afc249 feat: add mariadb helm deployment to lamp interpreter
- Adds a `deploy_database` function to the `LAMPInterpret` struct to deploy a MariaDB database using Helm.
- Integrates `HelmCommand` trait requirement to the `LAMPInterpret` struct.
- Introduces `HelmChartScore` to manage MariaDB deployment.
- Adds namespace configuration for helm deployments.
- Updates trait bounds for `LAMPInterpret` to include `HelmCommand`.
- Implements `get_namespace` function to retrieve the namespace.
2025-04-30 15:40:26 -04:00
a6bcaade46 wip: alerting 2025-04-29 11:28:32 -04:00
6c145f1100 wip: initial layout 2025-04-28 16:31:22 -04:00
40cd765019 WIP: initial layout for MonitoringStackScore 2025-04-28 16:18:44 -04:00
c8547e38f2 feat(ipxe): create empty score shell for ipxe 2025-03-02 12:14:04 -05:00
bfc79abfb6 feat(ipxe): added slitaz image (super small image with gui and tools) 2025-03-02 10:05:50 -05:00
7697a170bd feat(ipxe): setup to have MAC specific bootfiles and fallback to a default if not found 2025-03-02 09:37:11 -05:00
941c9bc0b0 fix: missing protocol for ipxe boot file 2025-03-02 07:59:30 -05:00
51aeea1ec9 feat: support new configurable field in dhcp config: filenameipxe 2025-03-01 10:51:01 -05:00
8118df85ee feat: support new configurable field in dhcp config: filename64 2025-03-01 10:41:41 -05:00
7af83910ef doc: fix 2025-03-01 10:29:22 -05:00
1475f4af0c doc: fix 2025-03-01 10:25:24 -05:00
a3a61c734f doc: update README.md with instructions on how to add a field in opnsense config.xml 2025-03-01 10:17:51 -05:00
3f77bc7aef feat: support new configurable field in dhcp config: filename 2025-03-01 08:56:41 -05:00
d5125dd811 feat(iPXE): adding files for iPXE and memtest86 image 2025-02-23 16:39:57 -05:00
1ca316c085 wip: added new xml fields for Caddy + legacy pxe filename 2025-02-22 14:24:51 -05:00
e390f1edb3 feat: started to prepare inventory / topoplogy for NCD 2025-02-22 11:12:28 -05:00
192 changed files with 13165 additions and 616 deletions

5
.cargo/config.toml Normal file

@@ -0,0 +1,5 @@
[target.x86_64-pc-windows-msvc]
rustflags = ["-C", "link-arg=/STACK:8000000"]
[target.x86_64-pc-windows-gnu]
rustflags = ["-C", "link-arg=-Wl,--stack,8000000"]

2
.dockerignore Normal file

@@ -0,0 +1,2 @@
target/
Dockerfile

View File

@@ -0,0 +1,18 @@
name: Run Check Script
on:
  push:
    branches:
      - master
  pull_request:
jobs:
  check:
    runs-on: docker
    container:
      image: hub.nationtech.io/harmony/harmony_composer:latest@sha256:eb0406fcb95c63df9b7c4b19bc50ad7914dd8232ce98e9c9abef628e07c69386
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Run check script
        run: bash check.sh

View File

@@ -0,0 +1,95 @@
name: Compile and package harmony_composer
on:
  push:
    branches:
      - master
jobs:
  package_harmony_composer:
    container:
      image: hub.nationtech.io/harmony/harmony_composer:latest@sha256:eb0406fcb95c63df9b7c4b19bc50ad7914dd8232ce98e9c9abef628e07c69386
    runs-on: dind
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Build for Linux x86_64
        run: cargo build --release --bin harmony_composer --target x86_64-unknown-linux-gnu
      - name: Build for Windows x86_64 GNU
        run: cargo build --release --bin harmony_composer --target x86_64-pc-windows-gnu
      - name: Setup log into hub.nationtech.io
        uses: docker/login-action@v3
        with:
          registry: hub.nationtech.io
          username: ${{ secrets.HUB_BOT_USER }}
          password: ${{ secrets.HUB_BOT_PASSWORD }}
      # TODO: build ARM images and MacOS binaries (or other targets) too
      - name: Update snapshot-latest tag
        run: |
          git config user.name "Gitea CI"
          git config user.email "ci@nationtech.io"
          git tag -f snapshot-latest
          git push origin snapshot-latest --force
      - name: Install jq
        run: apt install -y jq # The current image includes apt lists so we don't have to apt update and rm /var/lib/apt... every time. But if the image is optimized it won't work anymore
      - name: Create or update release
        run: |
          # First, check if release exists and delete it if it does
          RELEASE_ID=$(curl -s -X GET \
            -H "Authorization: token ${{ secrets.GITEATOKEN }}" \
            "https://git.nationtech.io/api/v1/repos/nationtech/harmony/releases/tags/snapshot-latest" \
            | jq -r '.id // empty')
          if [ -n "$RELEASE_ID" ]; then
            # Delete existing release
            curl -X DELETE \
              -H "Authorization: token ${{ secrets.GITEATOKEN }}" \
              "https://git.nationtech.io/api/v1/repos/nationtech/harmony/releases/$RELEASE_ID"
          fi
          # Create new release
          RESPONSE=$(curl -X POST \
            -H "Authorization: token ${{ secrets.GITEATOKEN }}" \
            -H "Content-Type: application/json" \
            -d '{
              "tag_name": "snapshot-latest",
              "name": "Latest Snapshot",
              "body": "Automated snapshot build from master branch",
              "draft": false,
              "prerelease": true
            }' \
            "https://git.nationtech.io/api/v1/repos/nationtech/harmony/releases")
          echo "RELEASE_ID=$(echo $RESPONSE | jq -r '.id')" >> $GITHUB_ENV
      - name: Upload Linux binary
        run: |
          curl -X POST \
            -H "Authorization: token ${{ secrets.GITEATOKEN }}" \
            -H "Content-Type: application/octet-stream" \
            --data-binary "@target/x86_64-unknown-linux-gnu/release/harmony_composer" \
            "https://git.nationtech.io/api/v1/repos/nationtech/harmony/releases/${{ env.RELEASE_ID }}/assets?name=harmony_composer"
      - name: Upload Windows binary
        run: |
          curl -X POST \
            -H "Authorization: token ${{ secrets.GITEATOKEN }}" \
            -H "Content-Type: application/octet-stream" \
            --data-binary "@target/x86_64-pc-windows-gnu/release/harmony_composer.exe" \
            "https://git.nationtech.io/api/v1/repos/nationtech/harmony/releases/${{ env.RELEASE_ID }}/assets?name=harmony_composer.exe"
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: hub.nationtech.io/harmony/harmony_composer:latest

1
.gitignore vendored

@@ -1,3 +1,4 @@
 target
 private_repos
 log/
+*.tgz

36
CONTRIBUTING.md Normal file

@@ -0,0 +1,36 @@
# Contributing to the Harmony project
## Write small P-R
Aim for the smallest piece of work that is mergeable.
Mergeable means that :
- it does not break the build
- it moves the codebase one step forward
P-Rs can be many things, they do not have to be complete features.
### What a P-R **should** be
- Introduce a new trait : This will be the place to discuss the new trait addition, its design and implementation
- A new implementation of a trait : a new concrete implementation of the LoadBalancer trait
- A new CI check : something that improves quality, robustness, ci performance
- Documentation improvements
- Refactoring
- Bugfix
### What a P-R **should not** be
- Large. Anything over 200 lines (excluding generated lines) should have a very good reason to be this large.
- A mix of refactoring, bug fixes and new features.
- Introducing multiple new features or ideas at once.
- Multiple new implementations of a trait/functionnality at once
The general idea is to keep P-Rs small and single purpose.
## Commit message formatting
We follow conventional commits guidelines.
https://www.conventionalcommits.org/en/v1.0.0/

1454
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -11,6 +11,7 @@ members = [
"opnsense-config-xml", "opnsense-config-xml",
"harmony_cli", "harmony_cli",
"k3d", "k3d",
"harmony_composer",
] ]
[workspace.package] [workspace.package]
@@ -19,27 +20,35 @@ readme = "README.md"
license = "GNU AGPL v3" license = "GNU AGPL v3"
[workspace.dependencies] [workspace.dependencies]
log = "0.4.22" log = "0.4"
env_logger = "0.11.5" env_logger = "0.11"
derive-new = "0.7.0" derive-new = "0.7"
async-trait = "0.1.82" async-trait = "0.1"
tokio = { version = "1.40.0", features = ["io-std", "fs", "macros", "rt-multi-thread"] } tokio = { version = "1.40", features = [
cidr = "0.2.3" "io-std",
russh = "0.45.0" "fs",
russh-keys = "0.45.0" "macros",
rand = "0.8.5" "rt-multi-thread",
url = "2.5.4" ] }
kube = "0.98.0" cidr = { features = ["serde"], version = "0.2" }
k8s-openapi = { version = "0.24.0", features = ["v1_30"] } russh = "0.45"
serde_yaml = "0.9.34" russh-keys = "0.45"
serde-value = "0.7.0" rand = "0.8"
http = "1.2.0" url = "2.5"
inquire = "0.7.5" kube = { version = "1.1.0", features = [
"config",
[workspace.dependencies.uuid] "client",
version = "1.11.0" "runtime",
features = [ "rustls-tls",
"v4", # Lets you generate random UUIDs "ws",
"fast-rng", # Use a faster (but still sufficiently random) RNG "jsonpatch",
"macro-diagnostics", # Enable better diagnostics for compile-time UUIDs ] }
] k8s-openapi = { version = "0.25", features = ["v1_30"] }
serde_yaml = "0.9"
serde-value = "0.7"
http = "1.2"
inquire = "0.7"
convert_case = "0.8"
chrono = "0.4"
similar = "2"
uuid = { version = "1.11", features = ["v4", "fast-rng", "macro-diagnostics"] }

25
Dockerfile Normal file

@@ -0,0 +1,25 @@
FROM docker.io/rust:1.87.0 AS build
WORKDIR /app
COPY . .
RUN cargo build --release --bin harmony_composer
FROM docker.io/rust:1.87.0
WORKDIR /app
RUN rustup target add x86_64-pc-windows-gnu
RUN rustup target add x86_64-unknown-linux-gnu
RUN rustup component add rustfmt
RUN apt update
# TODO: Consider adding more supported targets
# nodejs for checkout action, docker for building containers, mingw for cross-compiling for windows
RUN apt install -y nodejs docker.io mingw-w64
COPY --from=build /app/target/release/harmony_composer .
ENTRYPOINT ["/app/harmony_composer"]

162
README.md

@@ -1,33 +1,151 @@
-# Harmony : Open Infrastructure Orchestration
-*By [NationTech](https://nationtech.io)*
-
-## Quick demo
-
-`cargo run -p example-tui`
-
-This will launch Harmony's minimalist terminal ui which embeds a few demo scores.
-
-Usage instructions will be displayed at the bottom of the TUI.
-
-`cargo run --bin example-cli -- --help`
-
-This is the harmony CLI, a minimal implementation
-
-The current help text:
-````
-Usage: example-cli [OPTIONS]
-
-Options:
-  -y, --yes                Run score(s) or not
-  -f, --filter <FILTER>    Filter query
-  -i, --interactive        Run interactive TUI or not
-  -a, --all                Run all or nth, defaults to all
-  -n, --number <NUMBER>    Run nth matching, zero indexed [default: 0]
-  -l, --list               list scores, will also be affected by run filter
-  -h, --help               Print help
-  -V, --version            Print version```
-
-## Core architecture
-
-![Harmony Core Architecture](docs/diagrams/Harmony_Core_Architecture.drawio.svg)
-````
+# Harmony : Open-source infrastructure orchestration that treats your platform like first-class code.
+
+[![Build](https://git.nationtech.io/NationTech/harmony/actions/workflows/check.yml/badge.svg)](https://git.nationtech.io/nationtech/harmony)
+[![License](https://img.shields.io/badge/license-AGPLv3-blue?style=flat-square)](LICENSE)
+
+### Unify
+- **Project Scaffolding**
+- **Infrastructure Provisioning**
+- **Application Deployment**
+- **Day-2 operations**
+
+All in **one strongly-typed Rust codebase**.
+
+### Deploy anywhere
+From a **developer laptop** to a **global production cluster**, a single **source of truth** drives the **full software lifecycle.**
+
+---
+
+## 1 · The Harmony Philosophy
+
+Infrastructure is essential, but it shouldn't be your core business. Harmony is built on three guiding principles that make modern platforms reliable, repeatable, and easy to reason about.
+
+| Principle | What it means for you |
+|-----------|-----------------------|
+| **Infrastructure as Resilient Code** | Replace sprawling YAML and bash scripts with type-safe Rust. Test, refactor, and version your platform just like application code. |
+| **Prove It Works — Before You Deploy** | Harmony uses the compiler to verify that your application's needs match the target environment's capabilities at **compile-time**, eliminating an entire class of runtime outages. |
+| **One Unified Model** | Software and infrastructure are a single system. Harmony models them together, enabling deep automation—from bare-metal servers to Kubernetes workloads—with zero context switching. |
+
+These principles surface as simple, ergonomic Rust APIs that let teams focus on their product while trusting the platform underneath.
+
+---
+
+## 2 · Quick Start
+
+The snippet below spins up a complete **production-grade LAMP stack** with monitoring. Swap it for your own scores to deploy anything from microservices to machine-learning pipelines.
+
+```rust
+use harmony::{
+    data::Version,
+    inventory::Inventory,
+    maestro::Maestro,
+    modules::{
+        lamp::{LAMPConfig, LAMPScore},
+        monitoring::monitoring_alerting::MonitoringAlertingStackScore,
+    },
+    topology::{K8sAnywhereTopology, Url},
+};
+
+#[tokio::main]
+async fn main() {
+    // 1. Describe what you want
+    let lamp_stack = LAMPScore {
+        name: "harmony-lamp-demo".into(),
+        domain: Url::Url(url::Url::parse("https://lampdemo.example.com").unwrap()),
+        php_version: Version::from("8.3.0").unwrap(),
+        config: LAMPConfig {
+            project_root: "./php".into(),
+            database_size: "4Gi".into(),
+            ..Default::default()
+        },
+    };
+
+    // 2. Pick where it should run
+    let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
+        Inventory::autoload(),           // auto-detect hardware / kube-config
+        K8sAnywhereTopology::from_env(), // local k3d, CI, staging, prod…
+    )
+    .await
+    .unwrap();
+
+    // 3. Enhance with extra scores (monitoring, CI/CD, …)
+    let mut monitoring = MonitoringAlertingStackScore::new();
+    monitoring.namespace = Some(lamp_stack.config.namespace.clone());
+    maestro.register_all(vec![Box::new(lamp_stack), Box::new(monitoring)]);
+
+    // 4. Launch an interactive CLI / TUI
+    harmony_cli::init(maestro, None).await.unwrap();
+}
+```
+
+Run it:
+
+```bash
+cargo run
+```
+
+Harmony analyses the code, shows an execution plan in a TUI, and applies it once you confirm. Same code, same binary—every environment.
+
+---
+
+## 3 · Core Concepts
+
+| Term | One-liner |
+|------|-----------|
+| **Score<T>** | Declarative description of the desired state (e.g., `LAMPScore`). |
+| **Interpret<T>** | Imperative logic that realises a `Score` on a specific environment. |
+| **Topology** | An environment (local k3d, AWS, bare-metal) exposing verified *Capabilities* (Kubernetes, DNS, …). |
+| **Maestro** | Orchestrator that compiles Scores + Topology, ensuring all capabilities line up **at compile-time**. |
+| **Inventory** | Optional catalogue of physical assets for bare-metal and edge deployments. |
+
+A visual overview is in the diagram below.
+
+[Harmony Core Architecture](docs/diagrams/Harmony_Core_Architecture.drawio.svg)
+
+---
+
+## 4 · Install
+
+Prerequisites:
+
+* Rust
+* Docker (if you deploy locally)
+* `kubectl` / `helm` for Kubernetes-based topologies
+
+```bash
+git clone https://git.nationtech.io/nationtech/harmony
+cd harmony
+cargo build --release   # builds the CLI, TUI and libraries
+```
+
+---
+
+## 5 · Learning More
+
+* **Architectural Decision Records** dive into the rationale
+  - [ADR-001 · Why Rust](adr/001-rust.md)
+  - [ADR-003 · Infrastructure Abstractions](adr/003-infrastructure-abstractions.md)
+  - [ADR-006 · Secret Management](adr/006-secret-management.md)
+  - [ADR-011 · Multi-Tenant Cluster](adr/011-multi-tenant-cluster.md)
+* **Extending Harmony** write new Scores / Interprets, add hardware like OPNsense firewalls, or embed Harmony in your own tooling (`/docs`).
+* **Community** discussions and roadmap live in [GitLab issues](https://git.nationtech.io/nationtech/harmony/-/issues). PRs, ideas, and feedback are welcome!
+
+---
+
+## 6 · License
+
+Harmony is released under the **GNU AGPL v3**.
+
+> We choose a strong copyleft license to ensure the project—and every improvement to it—remains open and benefits the entire community. Fork it, enhance it, even out-innovate us; just keep it open.
+
+See [LICENSE](LICENSE) for the full text.
+
+---
+
+*Made with ❤️ & 🦀 by the NationTech and the Harmony community*

View File

@@ -1,6 +1,6 @@
 # Architecture Decision Record: \<Title\>
-Name: \<Name\>
+Initial Author: \<Name\>
 Initial Date: \<Date\>

View File

@@ -1,6 +1,6 @@
 # Architecture Decision Record: Helm and Kustomize Handling
-Name: Taha Hawa
+Initial Author: Taha Hawa
 Initial Date: 2025-04-15

View File

@@ -0,0 +1,73 @@
pub trait MonitoringSystem {}

// 1. Modified AlertReceiver trait:
//    - Removed the problematic `clone` method.
//    - Added `box_clone` which returns a Box<dyn AlertReceiver>.
pub trait AlertReceiver {
    type M: MonitoringSystem;
    fn install(&self, sender: &Self::M) -> Result<(), String>;
    // This method allows concrete types to clone themselves into a Box<dyn AlertReceiver>
    fn box_clone(&self) -> Box<dyn AlertReceiver<M = Self::M>>;
}

#[derive(Clone)]
struct Prometheus {}
impl MonitoringSystem for Prometheus {}

#[derive(Clone)] // Keep derive(Clone) for DiscordWebhook itself
struct DiscordWebhook {}

impl AlertReceiver for DiscordWebhook {
    type M = Prometheus;
    fn install(&self, sender: &Self::M) -> Result<(), String> {
        // Placeholder for actual installation logic
        println!("DiscordWebhook installed for Prometheus monitoring.");
        Ok(())
    }
    // 2. Implement `box_clone` for DiscordWebhook:
    //    This uses the derived `Clone` for DiscordWebhook to create a new boxed instance.
    fn box_clone(&self) -> Box<dyn AlertReceiver<M = Self::M>> {
        Box::new(self.clone())
    }
}

// 3. Implement `std::clone::Clone` for `Box<dyn AlertReceiver<M = M>>`:
//    This allows `Box<dyn AlertReceiver>` to be cloned.
//    The `+ 'static` lifetime bound is often necessary for trait objects stored in collections,
//    ensuring they live long enough.
impl<M: MonitoringSystem + 'static> Clone for Box<dyn AlertReceiver<M = M>> {
    fn clone(&self) -> Self {
        self.box_clone() // Call the custom `box_clone` method
    }
}

// MonitoringConfig can now derive Clone because its `receivers` field
// (Vec<Box<dyn AlertReceiver<M = M>>>) is now cloneable.
#[derive(Clone)]
struct MonitoringConfig<M: MonitoringSystem + 'static> {
    receivers: Vec<Box<dyn AlertReceiver<M = M>>>,
}

// Example usage to demonstrate compilation and functionality
fn main() {
    let prometheus_instance = Prometheus {};
    let discord_webhook_instance = DiscordWebhook {};

    let mut config = MonitoringConfig {
        receivers: Vec::new(),
    };

    // Create a boxed alert receiver
    let boxed_receiver: Box<dyn AlertReceiver<M = Prometheus>> = Box::new(discord_webhook_instance);
    config.receivers.push(boxed_receiver);

    // Clone the config, which will now correctly clone the boxed receiver
    let cloned_config = config.clone();

    println!("Original config has {} receivers.", config.receivers.len());
    println!("Cloned config has {} receivers.", cloned_config.receivers.len());

    // Example of using the installed receiver
    if let Some(receiver) = config.receivers.get(0) {
        let _ = receiver.install(&prometheus_instance);
    }
}

View File

@@ -1,7 +1,7 @@
 # Architecture Decision Record: Monitoring and Alerting
-Proposed by: Willem Rolleman
-Date: April 28 2025
+Initial Author : Willem Rolleman
+Date : April 28 2025
 ## Status

View File

@@ -0,0 +1,161 @@
# Architecture Decision Record: Multi-Tenancy Strategy for Harmony Managed Clusters
Initial Author: Jean-Gabriel Gill-Couture
Initial Date: 2025-05-26
## Status
Proposed
## Context
Harmony manages production OKD/Kubernetes clusters that serve multiple clients with varying trust levels and operational requirements. We need a multi-tenancy strategy that provides:
1. **Strong isolation** between client workloads while maintaining operational simplicity
2. **Controlled API access** allowing clients self-service capabilities within defined boundaries
3. **Security-first approach** protecting both the cluster infrastructure and tenant data
4. **Harmony-native implementation** using our Score/Interpret pattern for automated tenant provisioning
5. **Scalable management** supporting both small trusted clients and larger enterprise customers
The official Kubernetes multi-tenancy documentation identifies two primary models: namespace-based isolation and virtual control planes per tenant. Given Harmony's focus on operational simplicity, provider-agnostic abstractions (ADR-003), and hexagonal architecture (ADR-002), we must choose an approach that balances security, usability, and maintainability.
Our clients represent a hybrid tenancy model:
- **Customer multi-tenancy**: Each client operates independently with no cross-tenant trust
- **Team multi-tenancy**: Individual clients may have multiple team members requiring coordinated access
- **API access requirement**: Unlike pure SaaS scenarios, clients need controlled Kubernetes API access for self-service operations
The official Kubernetes documentation on multi-tenancy heavily inspired this ADR: https://kubernetes.io/docs/concepts/security/multi-tenancy/
## Decision
Implement **namespace-based multi-tenancy** with the following architecture:
### 1. Network Security Model
- **Private cluster access**: Kubernetes API and OpenShift console accessible only via WireGuard VPN
- **No public exposure**: Control plane endpoints remain internal to prevent unauthorized access attempts
- **VPN-based authentication**: Initial access control through WireGuard client certificates
### 2. Tenant Isolation Strategy
- **Dedicated namespace per tenant**: Each client receives an isolated namespace with access limited only to the required resources and operations
- **Complete network isolation**: NetworkPolicies prevent cross-namespace communication while allowing full egress to public internet
- **Resource governance**: ResourceQuotas and LimitRanges enforce CPU, memory, and storage consumption limits
- **Storage access control**: Clients can create PersistentVolumeClaims but cannot directly manipulate PersistentVolumes or access other tenants' storage
### 3. Access Control Framework
- **Principle of Least Privilege**: RBAC grants only necessary permissions within tenant namespace scope
- **Namespace-scoped**: Clients can create/modify/delete resources within their namespace
- **Cluster-level restrictions**: No access to cluster-wide resources, other namespaces, or sensitive cluster operations
- **Whitelisted operations**: Controlled self-service capabilities for ingress, secrets, configmaps, and workload management
### 4. Identity Management Evolution
- **Phase 1**: Manual provisioning of VPN access and Kubernetes ServiceAccounts/Users
- **Phase 2**: Migration to Keycloak-based identity management (aligning with ADR-006) for centralized authentication and lifecycle management
### 5. Harmony Integration
- **TenantScore implementation**: Declarative tenant provisioning using Harmony's Score/Interpret pattern
- **Topology abstraction**: Tenant configuration abstracted from underlying Kubernetes implementation details
- **Automated deployment**: Complete tenant setup automated through Harmony's orchestration capabilities
## Rationale
### Network Security Through VPN Access
- **Defense in depth**: VPN requirement adds critical security layer preventing unauthorized cluster access
- **Simplified firewall rules**: No need for complex public endpoint protections or rate limiting
- **Audit capability**: VPN access provides clear audit trail of cluster connections
- **Aligns with enterprise practices**: Most enterprise customers already use VPN infrastructure
### Namespace Isolation vs Virtual Control Planes
Following Kubernetes official guidance, namespace isolation provides:
- **Lower resource overhead**: Virtual control planes require dedicated etcd, API server, and controller manager per tenant
- **Operational simplicity**: Single control plane to maintain, upgrade, and monitor
- **Cross-tenant service integration**: Enables future controlled cross-tenant communication if required
- **Proven stability**: Namespace-based isolation is well-tested and widely deployed
- **Cost efficiency**: Significantly lower infrastructure costs compared to dedicated control planes
### Hybrid Tenancy Model Suitability
Our approach addresses both customer and team multi-tenancy requirements:
- **Customer isolation**: Strong network and RBAC boundaries prevent cross-tenant interference
- **Team collaboration**: Multiple team members can share namespace access through group-based RBAC
- **Self-service balance**: Controlled API access enables client autonomy without compromising security
### Harmony Architecture Alignment
- **Provider agnostic**: TenantScore abstracts multi-tenancy concepts, enabling future support for other Kubernetes distributions
- **Hexagonal architecture**: Tenant management becomes an infrastructure capability accessed through well-defined ports
- **Declarative automation**: Tenant lifecycle fully managed through Harmony's Score execution model
## Consequences
### Positive Consequences
- **Strong security posture**: VPN + namespace isolation provides robust tenant separation
- **Operational efficiency**: Single cluster management with automated tenant provisioning
- **Client autonomy**: Self-service capabilities reduce operational support burden
- **Scalable architecture**: Can support hundreds of tenants per cluster without architectural changes
- **Future flexibility**: Foundation supports evolution to more sophisticated multi-tenancy models
- **Cost optimization**: Shared infrastructure maximizes resource utilization
### Negative Consequences
- **VPN operational overhead**: Requires VPN infrastructure management
- **Manual provisioning complexity**: Phase 1 manual user management creates administrative burden
- **Network policy dependency**: Requires CNI with NetworkPolicy support (OVN-Kubernetes provides this and is the OKD/Openshift default)
- **Cluster-wide resource limitations**: Some advanced Kubernetes features require cluster-wide access
- **Single point of failure**: Cluster outage affects all tenants simultaneously
### Migration Challenges
- **Legacy client integration**: Existing clients may need VPN client setup and credential migration
- **Monitoring complexity**: Per-tenant observability requires careful metric and log segmentation
- **Backup considerations**: Tenant data backup must respect isolation boundaries
## Alternatives Considered
### Alternative 1: Virtual Control Plane Per Tenant
**Pros**: Complete control plane isolation, full Kubernetes API access per tenant
**Cons**: 3-5x higher resource usage, complex cross-tenant networking, operational complexity scales linearly with tenants
**Rejected**: Resource overhead incompatible with cost-effective multi-tenancy goals
### Alternative 2: Dedicated Clusters Per Tenant
**Pros**: Maximum isolation, independent upgrade cycles, simplified security model
**Cons**: Exponential operational complexity, prohibitive costs, resource waste
**Rejected**: Operational overhead makes this approach unsustainable for multiple clients
### Alternative 3: Public API with Advanced Authentication
**Pros**: No VPN requirement, potentially simpler client access
**Cons**: Larger attack surface, complex rate limiting and DDoS protection, increased security monitoring requirements
**Rejected**: Risk/benefit analysis favors VPN-based access control
### Alternative 4: Service Mesh Based Isolation
**Pros**: Fine-grained traffic control, encryption, advanced observability
**Cons**: Significant operational complexity, performance overhead, steep learning curve
**Rejected**: Complexity overhead outweighs benefits for current requirements; remains option for future enhancement
## Additional Notes
### Implementation Roadmap
1. **Phase 1**: Implement VPN access and manual tenant provisioning
2. **Phase 2**: Deploy TenantScore automation for namespace, RBAC, and NetworkPolicy management
3. **Phase 3**: Harden against privilege escalation from pods, audit for weaknesses, and enforce security policies on pod runtimes
4. **Phase 4**: Integrate Keycloak for centralized identity management
5. **Phase 5**: Add advanced monitoring and per-tenant observability
### TenantScore Structure Preview
```rust
pub struct TenantScore {
pub tenant_config: TenantConfig,
pub resource_quotas: ResourceQuotaConfig,
pub network_isolation: NetworkIsolationPolicy,
pub storage_access: StorageAccessConfig,
pub rbac_config: RBACConfig,
}
```
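For comparison, the example module added in this changeset currently declares a tenant through a single `TenantConfig` rather than the field layout previewed above; the exact shape will be settled in the follow-up Tenant ADR. A trimmed sketch of that usage, with illustrative values:

```rust
// Sketch based on the example-monitoring-with-tenant module in this changeset.
// Field names and values are illustrative and subject to change.
use harmony::{
    data::Id,
    inventory::Inventory,
    maestro::Maestro,
    modules::tenant::TenantScore,
    topology::{
        K8sAnywhereTopology,
        tenant::{ResourceLimits, TenantConfig, TenantNetworkPolicy},
    },
};

#[tokio::main]
async fn main() {
    let tenant = TenantScore {
        config: TenantConfig {
            id: Id::from_string("acme-0001".to_string()),
            name: "acme-tenant".to_string(),
            resource_limits: ResourceLimits {
                cpu_request_cores: 2.0,
                cpu_limit_cores: 4.0,
                memory_request_gb: 4.0,
                memory_limit_gb: 8.0,
                storage_total_gb: 50.0,
            },
            network_policy: TenantNetworkPolicy::default(),
        },
    };

    // Register the tenant like any other Score and let Harmony provision the
    // namespace, RBAC, quotas, and NetworkPolicies described in this ADR.
    let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
        Inventory::autoload(),
        K8sAnywhereTopology::from_env(),
    )
    .await
    .unwrap();
    maestro.register_all(vec![Box::new(tenant)]);
    harmony_cli::init(maestro, None).await.unwrap();
}
```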
### Future Enhancements
- **Cross-tenant service mesh**: For approved inter-tenant communication
- **Advanced monitoring**: Per-tenant Prometheus/Grafana instances
- **Backup automation**: Tenant-scoped backup policies
- **Cost allocation**: Detailed per-tenant resource usage tracking
This ADR establishes the foundation for secure, scalable multi-tenancy in Harmony-managed clusters while maintaining operational simplicity and cost effectiveness. A follow-up ADR will detail the Tenant abstraction and user management mechanisms within the Harmony framework.

View File

@@ -0,0 +1,41 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: tenant-isolation-policy
namespace: testtenant
spec:
podSelector: {} # Selects all pods in the namespace
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector: {} # Allow from all pods in the same namespace
egress:
- to:
- podSelector: {} # Allow to all pods in the same namespace
- to:
- podSelector: {}
namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: openshift-dns # Target the openshift-dns namespace
# Note, only opening port 53 is not enough, will have to dig deeper into this one eventually
# ports:
# - protocol: UDP
# port: 53
# - protocol: TCP
# port: 53
# Allow egress to public internet only
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 10.0.0.0/8 # RFC1918
- 172.16.0.0/12 # RFC1918
- 192.168.0.0/16 # RFC1918
- 169.254.0.0/16 # Link-local
- 127.0.0.0/8 # Loopback
- 224.0.0.0/4 # Multicast
- 240.0.0.0/4 # Reserved
- 100.64.0.0/10 # Carrier-grade NAT
- 0.0.0.0/8 # Reserved

View File

@@ -0,0 +1,95 @@
apiVersion: v1
kind: Namespace
metadata:
name: testtenant
---
apiVersion: v1
kind: Namespace
metadata:
name: testtenant2
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-web
namespace: testtenant
spec:
replicas: 1
selector:
matchLabels:
app: test-web
template:
metadata:
labels:
app: test-web
spec:
containers:
- name: nginx
image: nginxinc/nginx-unprivileged
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: test-web
namespace: testtenant
spec:
selector:
app: test-web
ports:
- port: 80
targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-client
namespace: testtenant
spec:
replicas: 1
selector:
matchLabels:
app: test-client
template:
metadata:
labels:
app: test-client
spec:
containers:
- name: curl
image: curlimages/curl:latest
command: ["/bin/sh", "-c", "sleep 3600"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-web
namespace: testtenant2
spec:
replicas: 1
selector:
matchLabels:
app: test-web
template:
metadata:
labels:
app: test-web
spec:
containers:
- name: nginx
image: nginxinc/nginx-unprivileged
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: test-web
namespace: testtenant2
spec:
selector:
app: test-web
ports:
- port: 80
targetPort: 8080

View File

@@ -0,0 +1,63 @@
# Architecture Decision Record: \<Title\>
Initial Author: Jean-Gabriel Gill-Couture
Initial Date: 2025-06-04
Last Updated Date: 2025-06-04
## Status
Proposed
## Context
As Harmony's goal is to make software delivery easier, we must provide an easy way for developers to express their app's semantics and dependencies with great abstractions, in a similar fashion to what the score.dev project is doing.
Thus, we started working on ways to package common types of applications such as LAMP, beginning with `LAMPScore`.
Now it is time for the next step: we want to pave the way towards complete lifecycle automation. To do this, we will start with a way to execute Harmony's modules easily from anywhere, starting locally and in CI environments.
## Decision
To achieve easy, portable execution of Harmony, we will follow this architecture:
- Host a basic Harmony release binary, compiled with the CLI, on our gitea/github server
- This binary will do the following: check whether there is a `harmony` folder in the current path
  - If yes
    - Check whether cargo is available locally and compile the harmony binary with it, or fall back to compiling it inside a Rust docker container; if neither cargo nor a container runtime is available, output a message explaining the situation (a minimal sketch of this flow follows the pipeline notes below)
    - Run the newly compiled binary (ideally handing off the process the way `exec` does, though some research is needed here; handing off the process mainly helps with OS interaction such as terminal apps, signals, exit codes, and process handling, but there might be side effects)
  - If not
    - Suggest initializing a project by auto-detecting what the project looks like
    - When the project type cannot be auto-detected, provide links to Harmony's documentation on how to set up a project and a link to the examples folder, and ask the user whether they want to initialize an empty Harmony project in the current folder:
      - harmony/Cargo.toml with dependencies set
      - harmony/src/main.rs with an example LAMPScore set up and ready to run
- This same binary can be used in a CI environment to run the target project's Harmony module. By default, we provide these opinionated steps:
1. **An empty check step.** The purpose of this step is to run all tests and checks against the codebase. For complex projects this could involve an elaborate pipeline of test environment setup and execution, but that is out of scope for now and is not handled by Harmony. For projects with automatic setup, we can pre-fill this step with something like `cargo fmt --check; cargo test; cargo build`, but Harmony is not directly involved in executing it.
2. **Package and publish.** Once all checks have passed, the production ready container is built and pushed to a registry. This is done by Harmony.
3. **Deploy to staging automatically.**
4. **Run a sanity check on staging.** As Harmony is responsible for deploying, Harmony should have all the knowledge of how to perform a sanity check on the staging environment. This will, most of the time, be a simple verification of the kubernetes health of all deployed components, and a poke on the public endpoint when there is one.
5. **Deploy to production automatically.** Many projects will require manual approval here, which can easily be added in the CI afterwards, but our opinion is that fully automatic deployment to production should be the default.
6. **Run a sanity check on production.** Same check as staging, but on production.
*Note on providing a base pipeline:* Having a complete pipeline set up automatically will encourage development teams to build upon it by adding tests where they belong. The goal is to provide an opinionated solution that works for most small and large projects. Of course, many organizations will need to add steps such as deploying to sandbox environments, requiring more advanced approvals, or more complex publication and coordination with other projects. But this covers the basics required to build and deploy software reliably at any scale.
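As a rough illustration of the detect-and-compile behaviour described above, here is a minimal sketch (not the shipped implementation; binary names, paths, and messages are assumptions):

```rust
// Minimal sketch of the bootstrap flow: detect a `harmony` folder, build it
// with the local toolchain or a Rust container, then run the result.
// Not the actual implementation; names and messages are placeholders.
use std::path::Path;
use std::process::Command;

fn tool_available(tool: &str) -> bool {
    // `--version` is a cheap probe that most CLIs support.
    Command::new(tool)
        .arg("--version")
        .output()
        .map(|output| output.status.success())
        .unwrap_or(false)
}

fn main() {
    if Path::new("harmony").is_dir() {
        if tool_available("cargo") {
            // Compile the project's harmony module with the host toolchain.
            let built = Command::new("cargo")
                .args(["build", "--release", "--manifest-path", "harmony/Cargo.toml"])
                .status()
                .map(|status| status.success())
                .unwrap_or(false);
            if built {
                // Ideally this would hand the process off exec-style so signals and
                // exit codes pass through; spawning a child is the simple fallback.
                let _ = Command::new("harmony/target/release/harmony").status();
            }
        } else if tool_available("docker") {
            // Fall back to compiling inside a Rust container.
            let _ = Command::new("docker")
                .args([
                    "run", "--rm", "-v", ".:/work", "-w", "/work/harmony",
                    "rust:latest", "cargo", "build", "--release",
                ])
                .status();
        } else {
            eprintln!(
                "Neither cargo nor a container runtime was found; install one to build the harmony module."
            );
        }
    } else {
        println!("No `harmony` folder found here; consider initializing a Harmony project first.");
    }
}
```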
### Environment setup
TBD: For now, environments (tenants) will be set up and configured manually. Harmony will rely on the kubeconfig provided in the environment where it is running to deploy into the namespace.
CD tools such as Argo or Flux will be activated by default by Harmony when using application-level Scores such as LAMPScore, in the same way that the container is automatically built. The CI deployment steps will then notify the CD tool of the new release to deploy through its API.
## Rationale
Reasoning behind the decision
## Consequences
Pros/Cons of chosen solution
## Alternatives considered
Pros/Cons of various proposed solutions considered
## Additional Notes

View File

@@ -0,0 +1,78 @@
# Architecture Decision Record: Monitoring Notifications
Initial Author: Taha Hawa
Initial Date: 2025-06-26
Last Updated Date: 2025-06-26
## Status
Proposed
## Context
We need to send notifications (typically from AlertManager/Prometheus) and to reliably receive them on mobile devices in some form, whether push messages, SMS, phone calls, email, or all of the above.
## Decision
We should go with https://ntfy.sh, but host it ourselves.
`ntfy` is an open source solution written in Go that has the features we need.
## Rationale
`ntfy` has pretty much everything we need (push notifications, email forwarding, receiving via webhook) and little that we don't. It is a good fit and lightweight.
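To give a sense of the integration surface: publishing to a self-hosted ntfy topic is a single HTTP POST. A hedged sketch using the `reqwest` crate (crate choice, host, topic, and headers shown here are illustrative assumptions, not part of this decision):

```rust
// Sketch: publish a notification to a self-hosted ntfy topic over HTTP.
// Host, topic, and message contents are placeholders.
use reqwest::blocking::Client;

fn main() -> Result<(), reqwest::Error> {
    let client = Client::new();
    client
        .post("https://ntfy.example.internal/harmony-alerts")
        .header("Title", "AlertManager: PVC fill rate alert")
        .header("Priority", "high")
        .body("PVC is projected to fill up within two days on cluster ncd0")
        .send()?
        .error_for_status()?;
    Ok(())
}
```

In practice the intent is for AlertManager to deliver alerts to ntfy through its webhook mechanism rather than application code calling ntfy directly.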
## Consequences
Pros:
- topics, with ACLs
- lightweight
- reliable
- easy to configure
- mobile app
- the mobile app can listen via websocket, poll, or receive pushes via Firebase/GCM on Android (or the equivalent on iOS)
- Forward to email
- Text-to-Speech phone call messages using Twilio integration
- Operates based on simple HTTP requests/Webhooks, easily usable via AlertManager
Cons:
- No SMS pushes
- SQLite DB, which makes it harder to run HA or scale out
## Alternatives considered
[AWS SNS](https://aws.amazon.com/sns/):
Pros:
- highly reliable
- no hosting needed
Cons:
- no control, not self hosted
- costs (per usage)
[Apprise](https://github.com/caronc/apprise):
Pros:
- Way more ways of sending notifications
- Can use ntfy as one of the backends/ways of sending
Cons:
- Way too overkill for what we need in terms of features
[Gotify](https://github.com/gotify/server):
Pros:
- simple, lightweight, golang, etc
Cons:
- Push topics are per-user
## Additional Notes

0
check.sh Normal file → Executable file
View File

View File

@@ -0,0 +1 @@
slitaz/* filter=lfs diff=lfs merge=lfs -text

View File

@@ -0,0 +1,6 @@
#!ipxe
set base-url http://192.168.33.1:8080
set hostfile ${base-url}/byMAC/01-${mac:hexhyp}.ipxe
chain ${hostfile} || chain ${base-url}/default.ipxe

View File

@@ -0,0 +1,35 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item okdinstallation Install OKD
item slitaz Boot to Slitaz - old linux for debugging
choose selected
goto ${selected}
:local
exit
#################################
# okdinstallation
#################################
:okdinstallation
set base-url http://192.168.33.1:8080
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
set install-disk /dev/nvme0n1
set ignition-file ncd0/master.ign
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
initrd --name main ${base-url}/${live-initramfs}
boot
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot

View File

@@ -0,0 +1,35 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item okdinstallation Install OKD
item slitaz Boot to Slitaz - old linux for debugging
choose selected
goto ${selected}
:local
exit
#################################
# okdinstallation
#################################
:okdinstallation
set base-url http://192.168.33.1:8080
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
set install-disk /dev/nvme0n1
set ignition-file ncd0/master.ign
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
initrd --name main ${base-url}/${live-initramfs}
boot
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot

View File

@@ -0,0 +1,35 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item okdinstallation Install OKD
item slitaz Slitaz - an old linux image for debugging
choose selected
goto ${selected}
:local
exit
#################################
# okdinstallation
#################################
:okdinstallation
set base-url http://192.168.33.1:8080
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
set install-disk /dev/sda
set ignition-file ncd0/worker.ign
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
initrd --name main ${base-url}/${live-initramfs}
boot
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot

View File

@@ -0,0 +1,35 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item okdinstallation Install OKD
item slitaz Boot to Slitaz - old linux for debugging
choose selected
goto ${selected}
:local
exit
#################################
# okdinstallation
#################################
:okdinstallation
set base-url http://192.168.33.1:8080
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
set install-disk /dev/nvme0n1
set ignition-file ncd0/master.ign
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
initrd --name main ${base-url}/${live-initramfs}
boot
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot

View File

@@ -0,0 +1,35 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item okdinstallation Install OKD
item slitaz Slitaz - an old linux image for debugging
choose selected
goto ${selected}
:local
exit
#################################
# okdinstallation
#################################
:okdinstallation
set base-url http://192.168.33.1:8080
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
set install-disk /dev/sda
set ignition-file ncd0/worker.ign
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
initrd --name main ${base-url}/${live-initramfs}
boot
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot

View File

@@ -0,0 +1,37 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item okdinstallation Install OKD
item slitaz Slitaz - an old linux image for debugging
choose selected
goto ${selected}
:local
exit
# This is the bootstrap node
# it will become wk2
#################################
# okdinstallation
#################################
:okdinstallation
set base-url http://192.168.33.1:8080
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
set install-disk /dev/sda
set ignition-file ncd0/worker.ign
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
initrd --name main ${base-url}/${live-initramfs}
boot
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot

View File

@@ -0,0 +1,71 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item local Boot from Hard Disk
item slitaz Boot slitaz live environment [tux|root:root]
#item ubuntu-server Ubuntu 24.04.1 live server
#item ubuntu-desktop Ubuntu 24.04.1 desktop
#item systemrescue System Rescue 11.03
item memtest memtest
#choose --default local --timeout 5000 selected
choose selected
goto ${selected}
:local
exit
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot
#################################
# Ubuntu Server
#################################
:ubuntu-server
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/ubuntu/live-server-24.04.1
kernel ${base_url}/vmlinuz ip=dhcp url=${base_url}/ubuntu-24.04.1-live-server-amd64.iso autoinstall ds=nocloud
initrd ${base_url}/initrd
boot
#################################
# Ubuntu Desktop
#################################
:ubuntu-desktop
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/ubuntu/desktop-24.04.1
kernel ${base_url}/vmlinuz ip=dhcp url=${base_url}/ubuntu-24.04.1-desktop-amd64.iso autoinstall ds=nocloud
initrd ${base_url}/initrd
boot
#################################
# System Rescue
#################################
:systemrescue
set base-url http://192.168.33.1:8080/systemrescue
kernel ${base-url}/vmlinuz initrd=sysresccd.img boot=systemrescue docache
initrd ${base-url}/sysresccd.img
boot
#################################
# MemTest86 (BIOS/UEFI)
#################################
:memtest
iseq ${platform} efi && goto memtest_efi || goto memtest_bios
:memtest_efi
kernel http://192.168.33.1:8080/memtest/memtest64.efi
boot
:memtest_bios
kernel http://192.168.33.1:8080/memtest/memtest64.bin
boot

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@@ -1 +0,0 @@
hey i am paul

BIN
data/watchguard/pxe-http-files/slitaz/rootfs.gz (Stored with Git LFS) Normal file

Binary file not shown.

BIN
data/watchguard/pxe-http-files/slitaz/vmlinuz-2.6.37-slitaz (Stored with Git LFS) Normal file

Binary file not shown.

Binary file not shown.

Binary file not shown.

1
docs/README.md Normal file
View File

@@ -0,0 +1 @@
Not much here yet, see the `adr` folder for now. More to come in time!

13
docs/cyborg-metaphor.md Normal file
View File

@@ -0,0 +1,13 @@
## Conceptual metaphor : The Cyborg and the Central Nervous System
At the heart of Harmony lies a core belief: in modern, decentralized systems, **software and infrastructure are not separate entities.** They are a single, symbiotic organism—a cyborg.
The software is the electronics, the "mind"; the infrastructure is the biological host, the "body". They live or die, thrive or sink together.
Traditional approaches attempt to manage this complex organism with fragmented tools: static YAML for configuration, brittle scripts for automation, and separate Infrastructure as Code (IaC) for provisioning. This creates a disjointed system that struggles to scale or heal itself, making it inadequate for the demands of fully automated, enterprise-grade clusters.
Harmony's goal is to provide the **central nervous system for this cyborg**. We aim to achieve the full automation of complex, decentralized clouds by managing this integrated entity holistically.
To achieve this, a tool must be both robust and powerful. It must manage the entire lifecycle—deployment, upgrades, failure recovery, and decommissioning—with precision. This requires full control over application packaging and a deep, intrinsic integration between the software and the infrastructure it inhabits.
This is why Harmony uses a powerful, living language like Rust. It replaces static, lifeless configuration files with a dynamic, breathing codebase. It allows us to express the complex relationships and behaviors of a modern distributed system, enabling the creation of truly automated, resilient, and powerful platforms that can thrive.

View File

@@ -14,8 +14,8 @@ harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }
-kube = "0.98.0"
-k8s-openapi = { version = "0.24.0", features = [ "v1_30" ] }
+kube = "1.1.0"
+k8s-openapi = { version = "0.25.0", features = ["v1_30"] }
http = "1.2.0"
serde_yaml = "0.9.34"
inquire.workspace = true

View File

@@ -8,7 +8,7 @@ publish = false
[dependencies]
harmony = { path = "../../harmony" }
-harmony_tui = { path = "../../harmony_tui" }
+harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }

View File

@@ -1,3 +1,85 @@
<?php
print_r("Hello this is from PHP")
ini_set('display_errors', 1);
error_reporting(E_ALL);
$host = getenv('MYSQL_HOST') ?: '';
$user = getenv('MYSQL_USER') ?: 'root';
$pass = getenv('MYSQL_PASSWORD') ?: '';
$db = 'testfill';
$charset = 'utf8mb4';
$dsn = "mysql:host=$host;charset=$charset";
$options = [
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
];
try {
$pdo = new PDO($dsn, $user, $pass, $options);
$pdo->exec("CREATE DATABASE IF NOT EXISTS `$db`");
$pdo->exec("USE `$db`");
$pdo->exec("
CREATE TABLE IF NOT EXISTS filler (
id INT AUTO_INCREMENT PRIMARY KEY,
data LONGBLOB
)
");
} catch (\PDOException $e) {
die("❌ DB connection failed: " . $e->getMessage());
}
function getDbStats($pdo, $db) {
$stmt = $pdo->query("
SELECT
ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_size_gb,
SUM(table_rows) AS total_rows
FROM information_schema.tables
WHERE table_schema = '$db'
");
$result = $stmt->fetch();
$sizeGb = $result['total_size_gb'] ?? '0';
$rows = $result['total_rows'] ?? '0';
$avgMb = ($rows > 0) ? round(($sizeGb * 1024) / $rows, 2) : 0;
return [$sizeGb, $rows, $avgMb];
}
list($dbSize, $rowCount, $avgRowMb) = getDbStats($pdo, $db);
$message = '';
if ($_SERVER['REQUEST_METHOD'] === 'POST' && isset($_POST['fill'])) {
$iterations = 1024;
$data = str_repeat(random_bytes(1024), 1024); // 1MB
$stmt = $pdo->prepare("INSERT INTO filler (data) VALUES (:data)");
for ($i = 0; $i < $iterations; $i++) {
$stmt->execute([':data' => $data]);
}
list($dbSize, $rowCount, $avgRowMb) = getDbStats($pdo, $db);
$message = "<p style='color: green;'>✅ 1GB inserted into MariaDB successfully.</p>";
}
?>
<!DOCTYPE html>
<html>
<head>
<title>MariaDB Filler</title>
</head>
<body>
<h1>MariaDB Storage Filler</h1>
<?= $message ?>
<ul>
<li><strong>📦 MariaDB Used Size:</strong> <?= $dbSize ?> GB</li>
<li><strong>📊 Total Rows:</strong> <?= $rowCount ?></li>
<li><strong>📐 Average Row Size:</strong> <?= $avgRowMb ?> MB</li>
</ul>
<form method="post">
<button name="fill" value="1" type="submit">Insert 1GB into DB</button>
</form>
</body>
</html>

View File

@@ -8,23 +8,50 @@ use harmony::{
#[tokio::main]
async fn main() {
-    // let _ = env_logger::Builder::from_default_env().filter_level(log::LevelFilter::Info).try_init();
+    // This here is the whole configuration to
+    // - setup a local K3D cluster
+    // - Build a docker image with the PHP project builtin and production grade settings
+    // - Deploy a mariadb database using a production grade helm chart
+    // - Deploy the new container using a kubernetes deployment
+    // - Configure networking between the PHP container and the database
+    // - Provision a public route and an SSL certificate automatically on production environments
+    //
+    // Enjoy :)
    let lamp_stack = LAMPScore {
        name: "harmony-lamp-demo".to_string(),
        domain: Url::Url(url::Url::parse("https://lampdemo.harmony.nationtech.io").unwrap()),
        php_version: Version::from("8.4.4").unwrap(),
+        // This config can be extended as needed for more complicated configurations
        config: LAMPConfig {
            project_root: "./php".into(),
+            database_size: format!("4Gi").into(),
            ..Default::default()
        },
    };
+    //let monitoring = MonitoringAlertingScore {
+    //    alert_receivers: vec![Box::new(DiscordWebhook {
+    //        url: Url::Url(url::Url::parse("https://discord.idonotexist.com").unwrap()),
+    //        // TODO write url macro
+    //        // url: url!("https://discord.idonotexist.com"),
+    //    })],
+    //    alert_rules: vec![],
+    //    scrape_targets: vec![],
+    //};
+    // You can choose the type of Topology you want, we suggest starting with the
+    // K8sAnywhereTopology as it is the most automatic one that enables you to easily deploy
+    // locally, to development environment from a CI, to staging, and to production with settings
+    // that automatically adapt to each environment grade.
    let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
        Inventory::autoload(),
-        K8sAnywhereTopology::new(),
+        K8sAnywhereTopology::from_env(),
    )
    .await
    .unwrap();
    maestro.register_all(vec![Box::new(lamp_stack)]);
-    harmony_tui::init(maestro).await.unwrap();
+    // Here we bootstrap the CLI, this gives some nice features if you need them
+    harmony_cli::init(maestro, None).await.unwrap();
}
+// That's it, end of the infra as code.

View File

@@ -0,0 +1,13 @@
[package]
name = "example-monitoring"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { version = "0.1.0", path = "../../harmony" }
harmony_cli = { version = "0.1.0", path = "../../harmony_cli" }
harmony_macros = { version = "0.1.0", path = "../../harmony_macros" }
tokio.workspace = true
url.workspace = true

View File

@@ -0,0 +1,86 @@
use std::collections::HashMap;
use harmony::{
inventory::Inventory,
maestro::Maestro,
modules::{
monitoring::{
alert_channel::discord_alert_channel::DiscordWebhook,
alert_rule::prometheus_alert_rule::AlertManagerRuleGroup,
kube_prometheus::{
helm_prometheus_alert_score::HelmPrometheusAlertingScore,
types::{
HTTPScheme, MatchExpression, Operator, Selector, ServiceMonitor,
ServiceMonitorEndpoint,
},
},
},
prometheus::alerts::{
infra::dell_server::{
alert_global_storage_status_critical, alert_global_storage_status_non_recoverable,
global_storage_status_degraded_non_critical,
},
k8s::pvc::high_pvc_fill_rate_over_two_days,
},
},
topology::{K8sAnywhereTopology, Url},
};
#[tokio::main]
async fn main() {
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
url: Url::Url(url::Url::parse("https://discord.doesnt.exist.com").unwrap()),
};
let high_pvc_fill_rate_over_two_days_alert = high_pvc_fill_rate_over_two_days();
let dell_system_storage_degraded = global_storage_status_degraded_non_critical();
let alert_global_storage_status_critical = alert_global_storage_status_critical();
let alert_global_storage_status_non_recoverable = alert_global_storage_status_non_recoverable();
let additional_rules =
AlertManagerRuleGroup::new("pvc-alerts", vec![high_pvc_fill_rate_over_two_days_alert]);
let additional_rules2 = AlertManagerRuleGroup::new(
"dell-server-alerts",
vec![
dell_system_storage_degraded,
alert_global_storage_status_critical,
alert_global_storage_status_non_recoverable,
],
);
let service_monitor_endpoint = ServiceMonitorEndpoint {
port: Some("80".to_string()),
path: "/metrics".to_string(),
scheme: HTTPScheme::HTTP,
..Default::default()
};
let service_monitor = ServiceMonitor {
name: "test-service-monitor".to_string(),
selector: Selector {
match_labels: HashMap::new(),
match_expressions: vec![MatchExpression {
key: "test".to_string(),
operator: Operator::In,
values: vec!["test-service".to_string()],
}],
},
endpoints: vec![service_monitor_endpoint],
..Default::default()
};
let alerting_score = HelmPrometheusAlertingScore {
receivers: vec![Box::new(discord_receiver)],
rules: vec![Box::new(additional_rules), Box::new(additional_rules2)],
service_monitors: vec![service_monitor],
};
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
)
.await
.unwrap();
maestro.register_all(vec![Box::new(alerting_score)]);
harmony_cli::init(maestro, None).await.unwrap();
}

View File

@@ -0,0 +1,13 @@
[package]
name = "example-monitoring-with-tenant"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
cidr.workspace = true
harmony = { version = "0.1.0", path = "../../harmony" }
harmony_cli = { version = "0.1.0", path = "../../harmony_cli" }
tokio.workspace = true
url.workspace = true

View File

@@ -0,0 +1,90 @@
use std::collections::HashMap;
use harmony::{
data::Id,
inventory::Inventory,
maestro::Maestro,
modules::{
monitoring::{
alert_channel::discord_alert_channel::DiscordWebhook,
alert_rule::prometheus_alert_rule::AlertManagerRuleGroup,
kube_prometheus::{
helm_prometheus_alert_score::HelmPrometheusAlertingScore,
types::{
HTTPScheme, MatchExpression, Operator, Selector, ServiceMonitor,
ServiceMonitorEndpoint,
},
},
},
prometheus::alerts::k8s::pvc::high_pvc_fill_rate_over_two_days,
tenant::TenantScore,
},
topology::{
K8sAnywhereTopology, Url,
tenant::{ResourceLimits, TenantConfig, TenantNetworkPolicy},
},
};
#[tokio::main]
async fn main() {
let tenant = TenantScore {
config: TenantConfig {
id: Id::from_string("1234".to_string()),
name: "test-tenant".to_string(),
resource_limits: ResourceLimits {
cpu_request_cores: 6.0,
cpu_limit_cores: 4.0,
memory_request_gb: 4.0,
memory_limit_gb: 4.0,
storage_total_gb: 10.0,
},
network_policy: TenantNetworkPolicy::default(),
},
};
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
url: Url::Url(url::Url::parse("https://discord.doesnt.exist.com").unwrap()),
};
let high_pvc_fill_rate_over_two_days_alert = high_pvc_fill_rate_over_two_days();
let additional_rules =
AlertManagerRuleGroup::new("pvc-alerts", vec![high_pvc_fill_rate_over_two_days_alert]);
let service_monitor_endpoint = ServiceMonitorEndpoint {
port: Some("80".to_string()),
path: "/metrics".to_string(),
scheme: HTTPScheme::HTTP,
..Default::default()
};
let service_monitor = ServiceMonitor {
name: "test-service-monitor".to_string(),
selector: Selector {
match_labels: HashMap::new(),
match_expressions: vec![MatchExpression {
key: "test".to_string(),
operator: Operator::In,
values: vec!["test-service".to_string()],
}],
},
endpoints: vec![service_monitor_endpoint],
..Default::default()
};
let alerting_score = HelmPrometheusAlertingScore {
receivers: vec![Box::new(discord_receiver)],
rules: vec![Box::new(additional_rules)],
service_monitors: vec![service_monitor],
};
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
)
.await
.unwrap();
maestro.register_all(vec![Box::new(tenant), Box::new(alerting_score)]);
harmony_cli::init(maestro, None).await.unwrap();
}

View File

@@ -0,0 +1,4 @@
#!/bin/bash
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \
--set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values.yaml

View File

@@ -0,0 +1,721 @@
# Default values for a single rook-ceph cluster
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# -- Namespace of the main rook operator
operatorNamespace: rook-ceph
# -- The metadata.name of the CephCluster CR
# @default -- The same as the namespace
clusterName:
# -- Optional override of the target kubernetes version
kubeVersion:
# -- Cluster ceph.conf override
configOverride:
# configOverride: |
# [global]
# mon_allow_pool_delete = true
# osd_pool_default_size = 3
# osd_pool_default_min_size = 2
# Installs a debugging toolbox deployment
toolbox:
# -- Enable Ceph debugging pod deployment. See [toolbox](../Troubleshooting/ceph-toolbox.md)
enabled: true
# -- Toolbox image, defaults to the image used by the Ceph cluster
image: #quay.io/ceph/ceph:v19.2.2
# -- Toolbox tolerations
tolerations: []
# -- Toolbox affinity
affinity: {}
# -- Toolbox container security context
containerSecurityContext:
runAsNonRoot: true
runAsUser: 2016
runAsGroup: 2016
capabilities:
drop: ["ALL"]
# -- Toolbox resources
resources:
limits:
memory: "1Gi"
requests:
cpu: "100m"
memory: "128Mi"
# -- Set the priority class for the toolbox if desired
priorityClassName:
monitoring:
# -- Enable Prometheus integration, will also create necessary RBAC rules to allow Operator to create ServiceMonitors.
# Monitoring requires Prometheus to be pre-installed
enabled: false
# -- Whether to disable the metrics reported by Ceph. If false, the prometheus mgr module and Ceph exporter are enabled
metricsDisabled: false
# -- Whether to create the Prometheus rules for Ceph alerts
createPrometheusRules: false
# -- The namespace in which to create the prometheus rules, if different from the rook cluster namespace.
# If you have multiple rook-ceph clusters in the same k8s cluster, choose the same namespace (ideally, namespace with prometheus
# deployed) to set rulesNamespaceOverride for all the clusters. Otherwise, you will get duplicate alerts with multiple alert definitions.
rulesNamespaceOverride:
# Monitoring settings for external clusters:
# externalMgrEndpoints: <list of endpoints>
# externalMgrPrometheusPort: <port>
# Scrape interval for prometheus
# interval: 10s
# allow adding custom labels and annotations to the prometheus rule
prometheusRule:
# -- Labels applied to PrometheusRule
labels: {}
# -- Annotations applied to PrometheusRule
annotations: {}
# -- Create & use PSP resources. Set this to the same value as the rook-ceph chart.
pspEnable: false
# imagePullSecrets option allow to pull docker images from private docker registry. Option will be passed to all service accounts.
# imagePullSecrets:
# - name: my-registry-secret
# All values below are taken from the CephCluster CRD
# -- Cluster configuration.
# @default -- See [below](#ceph-cluster-spec)
cephClusterSpec:
# This cluster spec example is for a converged cluster where all the Ceph daemons are running locally,
# as in the host-based example (cluster.yaml). For a different configuration such as a
# PVC-based cluster (cluster-on-pvc.yaml), external cluster (cluster-external.yaml),
# or stretch cluster (cluster-stretched.yaml), replace this entire `cephClusterSpec`
# with the specs from those examples.
# For more details, check https://rook.io/docs/rook/v1.10/CRDs/Cluster/ceph-cluster-crd/
cephVersion:
# The container image used to launch the Ceph daemon pods (mon, mgr, osd, mds, rgw).
# v18 is Reef, v19 is Squid
# RECOMMENDATION: In production, use a specific version tag instead of the general v18 flag, which pulls the latest release and could result in different
# versions running within the cluster. See tags available at https://hub.docker.com/r/ceph/ceph/tags/.
# If you want to be more precise, you can always use a timestamp tag such as quay.io/ceph/ceph:v19.2.2-20250409
# This tag might not contain a new Ceph version, just security fixes from the underlying operating system, which will reduce vulnerabilities
image: quay.io/ceph/ceph:v19.2.2
# Whether to allow unsupported versions of Ceph. Currently Reef and Squid are supported.
# Future versions such as Tentacle (v20) would require this to be set to `true`.
# Do not set to true in production.
allowUnsupported: false
# The path on the host where configuration files will be persisted. Must be specified. If there are multiple clusters, the directory must be unique for each cluster.
# Important: if you reinstall the cluster, make sure you delete this directory from each host or else the mons will fail to start on the new cluster.
# In Minikube, the '/data' directory is configured to persist across reboots. Use "/data/rook" in Minikube environment.
dataDirHostPath: /var/lib/rook
# Whether or not upgrade should continue even if a check fails
# This means Ceph's status could be degraded and we don't recommend upgrading but you might decide otherwise
# Use at your OWN risk
# To understand Rook's upgrade process of Ceph, read https://rook.io/docs/rook/v1.10/Upgrade/ceph-upgrade/
skipUpgradeChecks: false
# Whether or not continue if PGs are not clean during an upgrade
continueUpgradeAfterChecksEvenIfNotHealthy: false
# WaitTimeoutForHealthyOSDInMinutes defines the time (in minutes) the operator would wait before an OSD can be stopped for upgrade or restart.
# If the timeout exceeds and OSD is not ok to stop, then the operator would skip upgrade for the current OSD and proceed with the next one
# if `continueUpgradeAfterChecksEvenIfNotHealthy` is `false`. If `continueUpgradeAfterChecksEvenIfNotHealthy` is `true`, then operator would
# continue with the upgrade of an OSD even if its not ok to stop after the timeout. This timeout won't be applied if `skipUpgradeChecks` is `true`.
# The default wait timeout is 10 minutes.
waitTimeoutForHealthyOSDInMinutes: 10
# Whether or not requires PGs are clean before an OSD upgrade. If set to `true` OSD upgrade process won't start until PGs are healthy.
# This configuration will be ignored if `skipUpgradeChecks` is `true`.
# Default is false.
upgradeOSDRequiresHealthyPGs: false
mon:
# Set the number of mons to be started. Generally recommended to be 3.
# For highest availability, an odd number of mons should be specified.
count: 3
# The mons should be on unique nodes. For production, at least 3 nodes are recommended for this reason.
# Mons should only be allowed on the same node for test environments where data loss is acceptable.
allowMultiplePerNode: false
mgr:
# When higher availability of the mgr is needed, increase the count to 2.
# In that case, one mgr will be active and one in standby. When Ceph updates which
# mgr is active, Rook will update the mgr services to match the active mgr.
count: 2
allowMultiplePerNode: false
modules:
# List of modules to optionally enable or disable.
# Note the "dashboard" and "monitoring" modules are already configured by other settings in the cluster CR.
# - name: rook
# enabled: true
# enable the ceph dashboard for viewing cluster status
dashboard:
enabled: true
# serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
# urlPrefix: /ceph-dashboard
# serve the dashboard at the given port.
# port: 8443
# Serve the dashboard using SSL (if using ingress to expose the dashboard and `ssl: true` you need to set
# the corresponding "backend protocol" annotation(s) for your ingress controller of choice)
ssl: true
# Network configuration, see: https://github.com/rook/rook/blob/master/Documentation/CRDs/Cluster/ceph-cluster-crd.md#network-configuration-settings
network:
connections:
# Whether to encrypt the data in transit across the wire to prevent eavesdropping the data on the network.
# The default is false. When encryption is enabled, all communication between clients and Ceph daemons, or between Ceph daemons will be encrypted.
# When encryption is not enabled, clients still establish a strong initial authentication and data integrity is still validated with a crc check.
# IMPORTANT: Encryption requires the 5.11 kernel for the latest nbd and cephfs drivers. Alternatively for testing only,
# you can set the "mounter: rbd-nbd" in the rbd storage class, or "mounter: fuse" in the cephfs storage class.
# The nbd and fuse drivers are *not* recommended in production since restarting the csi driver pod will disconnect the volumes.
encryption:
enabled: false
# Whether to compress the data in transit across the wire. The default is false.
# The kernel requirements above for encryption also apply to compression.
compression:
enabled: false
# Whether to require communication over msgr2. If true, the msgr v1 port (6789) will be disabled
# and clients will be required to connect to the Ceph cluster with the v2 port (3300).
# Requires a kernel that supports msgr v2 (kernel 5.11 or CentOS 8.4 or newer).
requireMsgr2: false
# # enable host networking
# provider: host
# # EXPERIMENTAL: enable the Multus network provider
# provider: multus
# selectors:
# # The selector keys are required to be `public` and `cluster`.
# # Based on the configuration, the operator will do the following:
# # 1. if only the `public` selector key is specified both public_network and cluster_network Ceph settings will listen on that interface
# # 2. if both `public` and `cluster` selector keys are specified the first one will point to 'public_network' flag and the second one to 'cluster_network'
# #
# # In order to work, each selector value must match a NetworkAttachmentDefinition object in Multus
# #
# # public: public-conf --> NetworkAttachmentDefinition object name in Multus
# # cluster: cluster-conf --> NetworkAttachmentDefinition object name in Multus
# # Provide internet protocol version. IPv6, IPv4 or empty string are valid options. Empty string would mean IPv4
# ipFamily: "IPv6"
# # Ceph daemons to listen on both IPv4 and Ipv6 networks
# dualStack: false
# enable the crash collector for ceph daemon crash collection
crashCollector:
disable: false
# Uncomment daysToRetain to prune ceph crash entries older than the
# specified number of days.
# daysToRetain: 30
# enable log collector, daemons will log on files and rotate
logCollector:
enabled: true
periodicity: daily # one of: hourly, daily, weekly, monthly
maxLogSize: 500M # SUFFIX may be 'M' or 'G'. Must be at least 1M.
# automate [data cleanup process](https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/ceph-teardown.md#delete-the-data-on-hosts) in cluster destruction.
cleanupPolicy:
# Since cluster cleanup is destructive to data, confirmation is required.
# To destroy all Rook data on hosts during uninstall, confirmation must be set to "yes-really-destroy-data".
# This value should only be set when the cluster is about to be deleted. After the confirmation is set,
# Rook will immediately stop configuring the cluster and only wait for the delete command.
# If the empty string is set, Rook will not destroy any data on hosts during uninstall.
confirmation: ""
# sanitizeDisks represents settings for sanitizing OSD disks on cluster deletion
sanitizeDisks:
# method indicates if the entire disk should be sanitized or simply ceph's metadata
# in both case, re-install is possible
# possible choices are 'complete' or 'quick' (default)
method: quick
# dataSource indicate where to get random bytes from to write on the disk
# possible choices are 'zero' (default) or 'random'
# using random sources will consume entropy from the system and will take much more time than the zero source
dataSource: zero
# iteration overwrite N times instead of the default (1)
# takes an integer value
iteration: 1
# allowUninstallWithVolumes defines how the uninstall should be performed
# If set to true, cephCluster deletion does not wait for the PVs to be deleted.
allowUninstallWithVolumes: false
# To control where various services will be scheduled by kubernetes, use the placement configuration sections below.
# The example under 'all' would have all services scheduled on kubernetes nodes labeled with 'role=storage-node' and
# tolerate taints with a key of 'storage-node'.
# placement:
# all:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: role
# operator: In
# values:
# - storage-node
# podAffinity:
# podAntiAffinity:
# topologySpreadConstraints:
# tolerations:
# - key: storage-node
# operator: Exists
# # The above placement information can also be specified for mon, osd, and mgr components
# mon:
# # Monitor deployments may contain an anti-affinity rule for avoiding monitor
# # collocation on the same node. This is a required rule when host network is used
# # or when AllowMultiplePerNode is false. Otherwise this anti-affinity rule is a
# # preferred rule with weight: 50.
# osd:
# mgr:
# cleanup:
# annotations:
# all:
# mon:
# osd:
# cleanup:
# prepareosd:
# # If no mgr annotations are set, prometheus scrape annotations will be set by default.
# mgr:
# dashboard:
# labels:
# all:
# mon:
# osd:
# cleanup:
# mgr:
# prepareosd:
# # monitoring is a list of key-value pairs. It is injected into all the monitoring resources created by operator.
# # These labels can be passed as LabelSelector to Prometheus
# monitoring:
# dashboard:
resources:
mgr:
limits:
memory: "1Gi"
requests:
cpu: "500m"
memory: "512Mi"
mon:
limits:
memory: "2Gi"
requests:
cpu: "1000m"
memory: "1Gi"
osd:
limits:
memory: "4Gi"
requests:
cpu: "1000m"
memory: "4Gi"
prepareosd:
# limits: It is not recommended to set limits on the OSD prepare job
# since it's a one-time burst for memory that must be allowed to
# complete without an OOM kill. Note however that if a k8s
# limitRange guardrail is defined external to Rook, the lack of
# a limit here may result in a sync failure, in which case a
# limit should be added. 1200Mi may suffice for up to 15Ti
# OSDs ; for larger devices 2Gi may be required.
# cf. https://github.com/rook/rook/pull/11103
requests:
cpu: "500m"
memory: "50Mi"
mgr-sidecar:
limits:
memory: "100Mi"
requests:
cpu: "100m"
memory: "40Mi"
crashcollector:
limits:
memory: "60Mi"
requests:
cpu: "100m"
memory: "60Mi"
logcollector:
limits:
memory: "1Gi"
requests:
cpu: "100m"
memory: "100Mi"
cleanup:
limits:
memory: "1Gi"
requests:
cpu: "500m"
memory: "100Mi"
exporter:
limits:
memory: "128Mi"
requests:
cpu: "50m"
memory: "50Mi"
# The option to automatically remove OSDs that are out and are safe to destroy.
removeOSDsIfOutAndSafeToRemove: false
# priority classes to apply to ceph resources
priorityClassNames:
mon: system-node-critical
osd: system-node-critical
mgr: system-cluster-critical
storage: # cluster level storage configuration and selection
useAllNodes: true
useAllDevices: true
# deviceFilter:
# config:
# crushRoot: "custom-root" # specify a non-default root label for the CRUSH map
# metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
# databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
# osdsPerDevice: "1" # this value can be overridden at the node or device level
# encryptedDevice: "true" # the default value for this option is "false"
# # Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the named
# # nodes below will be used as storage resources. Each node's 'name' field should match their 'kubernetes.io/hostname' label.
# nodes:
# - name: "172.17.4.201"
# devices: # specific devices to use for storage can be specified for each node
# - name: "sdb"
# - name: "nvme01" # multiple osds can be created on high performance devices
# config:
# osdsPerDevice: "5"
# - name: "/dev/disk/by-id/ata-ST4000DM004-XXXX" # devices can be specified using full udev paths
# config: # configuration can be specified at the node level which overrides the cluster level config
# - name: "172.17.4.301"
# deviceFilter: "^sd."
# The section for configuring management of daemon disruptions during upgrade or fencing.
disruptionManagement:
# If true, the operator will create and manage PodDisruptionBudgets for OSD, Mon, RGW, and MDS daemons. OSD PDBs are managed dynamically
# via the strategy outlined in the [design](https://github.com/rook/rook/blob/master/design/ceph/ceph-managed-disruptionbudgets.md). The operator will
# block eviction of OSDs by default and unblock them safely when drains are detected.
managePodBudgets: true
# A duration in minutes that determines how long an entire failureDomain like `region/zone/host` will be held in `noout` (in addition to the
# default DOWN/OUT interval) when it is draining. This is only relevant when `managePodBudgets` is `true`. The default value is `30` minutes.
osdMaintenanceTimeout: 30
# Configure the healthcheck and liveness probes for ceph pods.
# Valid values for daemons are 'mon', 'osd', 'status'
healthCheck:
daemonHealth:
mon:
disabled: false
interval: 45s
osd:
disabled: false
interval: 60s
status:
disabled: false
interval: 60s
# Change pod liveness probe, it works for all mon, mgr, and osd pods.
livenessProbe:
mon:
disabled: false
mgr:
disabled: false
osd:
disabled: false
ingress:
# -- Enable an ingress for the ceph-dashboard
dashboard:
# {}
# labels:
# external-dns/private: "true"
annotations:
"route.openshift.io/termination": "passthrough"
# external-dns.alpha.kubernetes.io/hostname: dashboard.example.com
# nginx.ingress.kubernetes.io/rewrite-target: /ceph-dashboard/$2
# If the dashboard has ssl: true the following will make sure the NGINX Ingress controller can expose the dashboard correctly
# nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
# nginx.ingress.kubernetes.io/server-snippet: |
# proxy_ssl_verify off;
host:
name: ceph.apps.ncd0.harmony.mcd
path: null # TODO: the chart does not allow removing the path, which makes OpenShift fail to create the Route because a path is not supported with passthrough termination
pathType: ImplementationSpecific
tls:
- {}
# secretName: testsecret-tls
# Note: Only one of ingress class annotation or the `ingressClassName:` can be used at a time
# to set the ingress class
# ingressClassName: openshift-default
# labels:
# external-dns/private: "true"
# annotations:
# external-dns.alpha.kubernetes.io/hostname: dashboard.example.com
# nginx.ingress.kubernetes.io/rewrite-target: /ceph-dashboard/$2
# If the dashboard has ssl: true the following will make sure the NGINX Ingress controller can expose the dashboard correctly
# nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
# nginx.ingress.kubernetes.io/server-snippet: |
# proxy_ssl_verify off;
# host:
# name: dashboard.example.com
# path: "/ceph-dashboard(/|$)(.*)"
# pathType: Prefix
# tls:
# - hosts:
# - dashboard.example.com
# secretName: testsecret-tls
## Note: Only one of ingress class annotation or the `ingressClassName:` can be used at a time
## to set the ingress class
# ingressClassName: nginx
# -- A list of CephBlockPool configurations to deploy
# @default -- See [below](#ceph-block-pools)
cephBlockPools:
- name: ceph-blockpool
# see https://github.com/rook/rook/blob/master/Documentation/CRDs/Block-Storage/ceph-block-pool-crd.md#spec for available configuration
spec:
failureDomain: host
replicated:
size: 3
# Enables collecting RBD per-image IO statistics by enabling dynamic OSD performance counters. Defaults to false.
# For reference: https://docs.ceph.com/docs/latest/mgr/prometheus/#rbd-io-statistics
# enableRBDStats: true
storageClass:
enabled: true
name: ceph-block
annotations: {}
labels: {}
isDefault: true
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: "Immediate"
mountOptions: []
# see https://kubernetes.io/docs/concepts/storage/storage-classes/#allowed-topologies
allowedTopologies: []
# - matchLabelExpressions:
# - key: rook-ceph-role
# values:
# - storage-node
# see https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/Block-Storage-RBD/block-storage.md#provision-storage for available configuration
parameters:
# (optional) mapOptions is a comma-separated list of map options.
# For krbd options refer
# https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options
# For nbd options refer
# https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options
# mapOptions: lock_on_read,queue_depth=1024
# (optional) unmapOptions is a comma-separated list of unmap options.
# For krbd options refer
# https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options
# For nbd options refer
# https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options
# unmapOptions: force
# RBD image format. Defaults to "2".
imageFormat: "2"
# RBD image features, equivalent to OR'd bitfield value: 63
# Available for imageFormat: "2". Older releases of CSI RBD
# support only the `layering` feature. The Linux kernel (KRBD) supports the
# full feature complement as of 5.4
imageFeatures: layering
# These secrets contain Ceph admin credentials.
csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: "{{ .Release.Namespace }}"
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: "{{ .Release.Namespace }}"
csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
csi.storage.k8s.io/node-stage-secret-namespace: "{{ .Release.Namespace }}"
# Specify the filesystem type of the volume. If not specified, csi-provisioner
# will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
# in hyperconverged settings where the volume is mounted on the same node as the osds.
csi.storage.k8s.io/fstype: ext4
# -- A list of CephFileSystem configurations to deploy
# @default -- See [below](#ceph-file-systems)
cephFileSystems:
- name: ceph-filesystem
# see https://github.com/rook/rook/blob/master/Documentation/CRDs/Shared-Filesystem/ceph-filesystem-crd.md#filesystem-settings for available configuration
spec:
metadataPool:
replicated:
size: 3
dataPools:
- failureDomain: host
replicated:
size: 3
# Optional and highly recommended, 'data0' by default, see https://github.com/rook/rook/blob/master/Documentation/CRDs/Shared-Filesystem/ceph-filesystem-crd.md#pools
name: data0
metadataServer:
activeCount: 1
activeStandby: true
resources:
limits:
memory: "4Gi"
requests:
cpu: "1000m"
memory: "4Gi"
priorityClassName: system-cluster-critical
storageClass:
enabled: true
isDefault: false
name: ceph-filesystem
# (Optional) specify a data pool to use, must be the name of one of the data pools above, 'data0' by default
pool: data0
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: "Immediate"
annotations: {}
labels: {}
mountOptions: []
# see https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage.md#provision-storage for available configuration
parameters:
# The secrets contain Ceph admin credentials.
csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: "{{ .Release.Namespace }}"
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: "{{ .Release.Namespace }}"
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
csi.storage.k8s.io/node-stage-secret-namespace: "{{ .Release.Namespace }}"
# Specify the filesystem type of the volume. If not specified, csi-provisioner
# will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
# in hyperconverged settings where the volume is mounted on the same node as the osds.
csi.storage.k8s.io/fstype: ext4
# -- Settings for the filesystem snapshot class
# @default -- See [CephFS Snapshots](../Storage-Configuration/Ceph-CSI/ceph-csi-snapshot.md#cephfs-snapshots)
cephFileSystemVolumeSnapshotClass:
enabled: false
name: ceph-filesystem
isDefault: true
deletionPolicy: Delete
annotations: {}
labels: {}
# see https://rook.io/docs/rook/v1.10/Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#cephfs-snapshots for available configuration
parameters: {}
# -- Settings for the block pool snapshot class
# @default -- See [RBD Snapshots](../Storage-Configuration/Ceph-CSI/ceph-csi-snapshot.md#rbd-snapshots)
cephBlockPoolsVolumeSnapshotClass:
enabled: false
name: ceph-block
isDefault: false
deletionPolicy: Delete
annotations: {}
labels: {}
# see https://rook.io/docs/rook/v1.10/Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#rbd-snapshots for available configuration
parameters: {}
# -- A list of CephObjectStore configurations to deploy
# @default -- See [below](#ceph-object-stores)
cephObjectStores:
- name: ceph-objectstore
# see https://github.com/rook/rook/blob/master/Documentation/CRDs/Object-Storage/ceph-object-store-crd.md#object-store-settings for available configuration
spec:
metadataPool:
failureDomain: host
replicated:
size: 3
dataPool:
failureDomain: host
erasureCoded:
dataChunks: 2
codingChunks: 1
parameters:
bulk: "true"
preservePoolsOnDelete: true
gateway:
port: 80
resources:
limits:
memory: "2Gi"
requests:
cpu: "1000m"
memory: "1Gi"
# securePort: 443
# sslCertificateRef:
instances: 1
priorityClassName: system-cluster-critical
# opsLogSidecar:
# resources:
# limits:
# memory: "100Mi"
# requests:
# cpu: "100m"
# memory: "40Mi"
storageClass:
enabled: true
name: ceph-bucket
reclaimPolicy: Delete
volumeBindingMode: "Immediate"
annotations: {}
labels: {}
# see https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim.md#storageclass for available configuration
parameters:
# note: objectStoreNamespace and objectStoreName are configured by the chart
region: us-east-1
ingress:
# Enable an ingress for the ceph-objectstore
enabled: true
# The ingress port by default will be the object store's "securePort" (if set), or the gateway "port".
# To override those defaults, set this ingress port to the desired port.
# port: 80
# annotations: {}
host:
name: objectstore.apps.ncd0.harmony.mcd
path: /
pathType: Prefix
# tls:
# - hosts:
# - objectstore.example.com
# secretName: ceph-objectstore-tls
# ingressClassName: nginx
## cephECBlockPools are disabled by default, please remove the comments and set desired values to enable it
## For erasure coded a replicated metadata pool is required.
## https://rook.io/docs/rook/latest/CRDs/Shared-Filesystem/ceph-filesystem-crd/#erasure-coded
#cephECBlockPools:
# - name: ec-pool
# spec:
# metadataPool:
# replicated:
# size: 2
# dataPool:
# failureDomain: osd
# erasureCoded:
# dataChunks: 2
# codingChunks: 1
# deviceClass: hdd
#
# parameters:
# # clusterID is the namespace where the rook cluster is running
# # If you change this namespace, also change the namespace below where the secret namespaces are defined
# clusterID: rook-ceph # namespace:cluster
# # (optional) mapOptions is a comma-separated list of map options.
# # For krbd options refer
# # https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options
# # For nbd options refer
# # https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options
# # mapOptions: lock_on_read,queue_depth=1024
#
# # (optional) unmapOptions is a comma-separated list of unmap options.
# # For krbd options refer
# # https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options
# # For nbd options refer
# # https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options
# # unmapOptions: force
#
# # RBD image format. Defaults to "2".
# imageFormat: "2"
#
# # RBD image features, equivalent to OR'd bitfield value: 63
# # Available for imageFormat: "2". Older releases of CSI RBD
# # support only the `layering` feature. The Linux kernel (KRBD) supports the
# # full feature complement as of 5.4
# # imageFeatures: layering,fast-diff,object-map,deep-flatten,exclusive-lock
# imageFeatures: layering
#
# storageClass:
# provisioner: rook-ceph.rbd.csi.ceph.com # csi-provisioner-name
# enabled: true
# name: rook-ceph-block
# isDefault: false
# annotations: { }
# labels: { }
# allowVolumeExpansion: true
# reclaimPolicy: Delete
# -- CSI driver name prefix for cephfs, rbd and nfs.
# @default -- `namespace name where rook-ceph operator is deployed`
csiDriverNamePrefix:
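The `ceph-block`, `ceph-filesystem`, and `ceph-bucket` storage classes declared above can be consumed by ordinary PersistentVolumeClaims and ObjectBucketClaims once the cluster reports healthy. A minimal sketch, assuming the chart is installed with these values (the claim names and namespace below are illustrative, not part of the chart):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-rbd-pvc        # illustrative name
  namespace: default        # illustrative namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-block   # the default block storage class defined above
  resources:
    requests:
      storage: 10Gi
---
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: demo-bucket          # illustrative name
  namespace: default
spec:
  generateBucketName: demo-bucket
  storageClassName: ceph-bucket  # the object storage class defined above
```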

View File

@@ -0,0 +1,3 @@
#!/bin/bash
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph -f values.yaml

View File

@@ -0,0 +1,674 @@
# Default values for rook-ceph-operator
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
# -- Image
repository: docker.io/rook/ceph
# -- Image tag
# @default -- `master`
tag: v1.17.1
# -- Image pull policy
pullPolicy: IfNotPresent
crds:
# -- Whether the helm chart should create and update the CRDs. If false, the CRDs must be
# managed independently with deploy/examples/crds.yaml.
# **WARNING** Only set during first deployment. If later disabled the cluster may be DESTROYED.
# If the CRDs are deleted in this case, see
# [the disaster recovery guide](https://rook.io/docs/rook/latest/Troubleshooting/disaster-recovery/#restoring-crds-after-deletion)
# to restore them.
enabled: true
# -- Pod resource requests & limits
resources:
limits:
memory: 512Mi
requests:
cpu: 200m
memory: 128Mi
# -- Kubernetes [`nodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) to add to the Deployment.
nodeSelector: {}
# Constraint rook-ceph-operator Deployment to nodes with label `disktype: ssd`.
# For more info, see https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# disktype: ssd
# -- List of Kubernetes [`tolerations`](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) to add to the Deployment.
tolerations: []
# -- Delay to use for the `node.kubernetes.io/unreachable` pod failure toleration to override
# the Kubernetes default of 5 minutes
unreachableNodeTolerationSeconds: 5
# -- Whether the operator should watch cluster CRD in its own namespace or not
currentNamespaceOnly: false
# -- Custom pod labels for the operator
operatorPodLabels: {}
# -- Pod annotations
annotations: {}
# -- Global log level for the operator.
# Options: `ERROR`, `WARNING`, `INFO`, `DEBUG`
logLevel: INFO
# -- If true, create & use RBAC resources
rbacEnable: true
rbacAggregate:
# -- If true, create a ClusterRole aggregated to [user facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) for objectbucketclaims
enableOBCs: false
# -- If true, create & use PSP resources
pspEnable: false
# -- Set the priority class for the rook operator deployment if desired
priorityClassName:
# -- Set the container security context for the operator
containerSecurityContext:
runAsNonRoot: true
runAsUser: 2016
runAsGroup: 2016
capabilities:
drop: ["ALL"]
# -- If true, loop devices are allowed to be used for osds in test clusters
allowLoopDevices: false
# Settings for whether to disable the drivers or other daemons if they are not
# needed
csi:
# -- Enable Ceph CSI RBD driver
enableRbdDriver: true
# -- Enable Ceph CSI CephFS driver
enableCephfsDriver: true
# -- Disable the CSI driver.
disableCsiDriver: "false"
# -- Enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary
# in some network configurations where the SDN does not provide access to an external cluster or
# there is significant drop in read/write performance
enableCSIHostNetwork: true
# -- Enable Snapshotter in CephFS provisioner pod
enableCephfsSnapshotter: true
# -- Enable Snapshotter in NFS provisioner pod
enableNFSSnapshotter: true
# -- Enable Snapshotter in RBD provisioner pod
enableRBDSnapshotter: true
# -- Enable Host mount for `/etc/selinux` directory for Ceph CSI nodeplugins
enablePluginSelinuxHostMount: false
# -- Enable Ceph CSI PVC encryption support
enableCSIEncryption: false
# -- Enable volume group snapshot feature. This feature is
# enabled by default as long as the necessary CRDs are available in the cluster.
enableVolumeGroupSnapshot: true
# -- PriorityClassName to be set on csi driver plugin pods
pluginPriorityClassName: system-node-critical
# -- PriorityClassName to be set on csi driver provisioner pods
provisionerPriorityClassName: system-cluster-critical
# -- Policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
rbdFSGroupPolicy: "File"
# -- Policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
cephFSFSGroupPolicy: "File"
# -- Policy for modifying a volume's ownership or permissions when the NFS PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
nfsFSGroupPolicy: "File"
# -- OMAP generator generates the omap mapping between the PV name and the RBD image
# which helps CSI to identify the rbd images for CSI operations.
# `CSI_ENABLE_OMAP_GENERATOR` needs to be enabled when we are using rbd mirroring feature.
# By default OMAP generator is disabled and when enabled, it will be deployed as a
# sidecar with CSI provisioner pod, to enable set it to true.
enableOMAPGenerator: false
# -- Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options.
# Set to "ms_mode=secure" when connections.encrypted is enabled in CephCluster CR
cephFSKernelMountOptions:
# -- Enable adding volume metadata on the CephFS subvolumes and RBD images.
# Not all users might be interested in getting volume/snapshot details as metadata on CephFS subvolume and RBD images.
# Hence enable metadata is false by default
enableMetadata: false
# -- Set replicas for csi provisioner deployment
provisionerReplicas: 2
# -- Cluster name identifier to set as metadata on the CephFS subvolume and RBD images. This will be useful
# in cases like for example, when two container orchestrator clusters (Kubernetes/OCP) are using a single ceph cluster
clusterName:
# -- Set logging level for cephCSI containers maintained by the cephCSI.
# Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity.
logLevel: 0
# -- Set logging level for Kubernetes-csi sidecar containers.
# Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity.
# @default -- `0`
sidecarLogLevel:
# -- CSI driver name prefix for cephfs, rbd and nfs.
# @default -- `namespace name where rook-ceph operator is deployed`
csiDriverNamePrefix:
# -- CSI RBD plugin daemonset update strategy, supported values are OnDelete and RollingUpdate
# @default -- `RollingUpdate`
rbdPluginUpdateStrategy:
# -- A maxUnavailable parameter of CSI RBD plugin daemonset update strategy.
# @default -- `1`
rbdPluginUpdateStrategyMaxUnavailable:
# -- CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate
# @default -- `RollingUpdate`
cephFSPluginUpdateStrategy:
# -- A maxUnavailable parameter of CSI cephFS plugin daemonset update strategy.
# @default -- `1`
cephFSPluginUpdateStrategyMaxUnavailable:
# -- CSI NFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate
# @default -- `RollingUpdate`
nfsPluginUpdateStrategy:
# -- Set GRPC timeout for csi containers (in seconds). It should be >= 120. If this value is not set or is invalid, it defaults to 150
grpcTimeoutInSeconds: 150
# -- Burst to use while communicating with the kubernetes apiserver.
kubeApiBurst:
# -- QPS to use while communicating with the kubernetes apiserver.
kubeApiQPS:
# -- The volume of the CephCSI RBD plugin DaemonSet
csiRBDPluginVolume:
# - name: lib-modules
# hostPath:
# path: /run/booted-system/kernel-modules/lib/modules/
# - name: host-nix
# hostPath:
# path: /nix
# -- The volume mounts of the CephCSI RBD plugin DaemonSet
csiRBDPluginVolumeMount:
# - name: host-nix
# mountPath: /nix
# readOnly: true
# -- The volume of the CephCSI CephFS plugin DaemonSet
csiCephFSPluginVolume:
# - name: lib-modules
# hostPath:
# path: /run/booted-system/kernel-modules/lib/modules/
# - name: host-nix
# hostPath:
# path: /nix
# -- The volume mounts of the CephCSI CephFS plugin DaemonSet
csiCephFSPluginVolumeMount:
# - name: host-nix
# mountPath: /nix
# readOnly: true
# -- CEPH CSI RBD provisioner resource requirement list
# csi-omap-generator resources will be applied only if `enableOMAPGenerator` is set to `true`
# @default -- see values.yaml
csiRBDProvisionerResource: |
- name : csi-provisioner
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-resizer
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-attacher
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-snapshotter
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-rbdplugin
resource:
requests:
memory: 512Mi
limits:
memory: 1Gi
- name : csi-omap-generator
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
- name : liveness-prometheus
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
# -- CEPH CSI RBD plugin resource requirement list
# @default -- see values.yaml
csiRBDPluginResource: |
- name : driver-registrar
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
- name : csi-rbdplugin
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
- name : liveness-prometheus
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
# -- CEPH CSI CephFS provisioner resource requirement list
# @default -- see values.yaml
csiCephFSProvisionerResource: |
- name : csi-provisioner
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-resizer
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-attacher
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-snapshotter
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-cephfsplugin
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
- name : liveness-prometheus
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
# -- CEPH CSI CephFS plugin resource requirement list
# @default -- see values.yaml
csiCephFSPluginResource: |
- name : driver-registrar
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
- name : csi-cephfsplugin
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
- name : liveness-prometheus
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
# -- CEPH CSI NFS provisioner resource requirement list
# @default -- see values.yaml
csiNFSProvisionerResource: |
- name : csi-provisioner
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-nfsplugin
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
- name : csi-attacher
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
# -- CEPH CSI NFS plugin resource requirement list
# @default -- see values.yaml
csiNFSPluginResource: |
- name : driver-registrar
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
- name : csi-nfsplugin
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
# Set provisionerTolerations and provisionerNodeAffinity for provisioner pod.
# The CSI provisioner would be best to start on the same nodes as other ceph daemons.
# -- Array of tolerations in YAML format which will be added to CSI provisioner deployment
provisionerTolerations:
# - key: key
# operator: Exists
# effect: NoSchedule
# -- The node labels for affinity of the CSI provisioner deployment [^1]
provisionerNodeAffinity: #key1=value1,value2; key2=value3
# Set pluginTolerations and pluginNodeAffinity for plugin daemonset pods.
# The CSI plugins need to be started on all the nodes where the clients need to mount the storage.
# -- Array of tolerations in YAML format which will be added to CephCSI plugin DaemonSet
pluginTolerations:
# - key: key
# operator: Exists
# effect: NoSchedule
# -- The node labels for affinity of the CephCSI RBD plugin DaemonSet [^1]
pluginNodeAffinity: # key1=value1,value2; key2=value3
# -- Enable Ceph CSI Liveness sidecar deployment
enableLiveness: false
# -- CSI CephFS driver metrics port
# @default -- `9081`
cephfsLivenessMetricsPort:
# -- CSI Addons server port
# @default -- `9070`
csiAddonsPort:
# -- CSI Addons server port for the RBD provisioner
# @default -- `9070`
csiAddonsRBDProvisionerPort:
# -- CSI Addons server port for the Ceph FS provisioner
# @default -- `9070`
csiAddonsCephFSProvisionerPort:
# -- Enable Ceph Kernel clients on kernel < 4.17. If your kernel does not support quotas for CephFS
# you may want to disable this setting. However, this will cause an issue during upgrades
# with the FUSE client. See the [upgrade guide](https://rook.io/docs/rook/v1.2/ceph-upgrade.html)
forceCephFSKernelClient: true
# -- Ceph CSI RBD driver metrics port
# @default -- `8080`
rbdLivenessMetricsPort:
serviceMonitor:
# -- Enable ServiceMonitor for Ceph CSI drivers
enabled: false
# -- Service monitor scrape interval
interval: 10s
# -- ServiceMonitor additional labels
labels: {}
# -- Use a different namespace for the ServiceMonitor
namespace:
# -- Kubelet root directory path (if the Kubelet uses a different path for the `--root-dir` flag)
# @default -- `/var/lib/kubelet`
kubeletDirPath:
# -- Duration in seconds that non-leader candidates will wait to force acquire leadership.
# @default -- `137s`
csiLeaderElectionLeaseDuration:
# -- Deadline in seconds that the acting leader will retry refreshing leadership before giving up.
# @default -- `107s`
csiLeaderElectionRenewDeadline:
# -- Retry period in seconds the LeaderElector clients should wait between tries of actions.
# @default -- `26s`
csiLeaderElectionRetryPeriod:
cephcsi:
# -- Ceph CSI image repository
repository: quay.io/cephcsi/cephcsi
# -- Ceph CSI image tag
tag: v3.14.0
registrar:
# -- Kubernetes CSI registrar image repository
repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
# -- Registrar image tag
tag: v2.13.0
provisioner:
# -- Kubernetes CSI provisioner image repository
repository: registry.k8s.io/sig-storage/csi-provisioner
# -- Provisioner image tag
tag: v5.1.0
snapshotter:
# -- Kubernetes CSI snapshotter image repository
repository: registry.k8s.io/sig-storage/csi-snapshotter
# -- Snapshotter image tag
tag: v8.2.0
attacher:
# -- Kubernetes CSI Attacher image repository
repository: registry.k8s.io/sig-storage/csi-attacher
# -- Attacher image tag
tag: v4.8.0
resizer:
# -- Kubernetes CSI resizer image repository
repository: registry.k8s.io/sig-storage/csi-resizer
# -- Resizer image tag
tag: v1.13.1
# -- Image pull policy
imagePullPolicy: IfNotPresent
# -- Labels to add to the CSI CephFS Deployments and DaemonSets Pods
cephfsPodLabels: #"key1=value1,key2=value2"
# -- Labels to add to the CSI NFS Deployments and DaemonSets Pods
nfsPodLabels: #"key1=value1,key2=value2"
# -- Labels to add to the CSI RBD Deployments and DaemonSets Pods
rbdPodLabels: #"key1=value1,key2=value2"
csiAddons:
# -- Enable CSIAddons
enabled: false
# -- CSIAddons sidecar image repository
repository: quay.io/csiaddons/k8s-sidecar
# -- CSIAddons sidecar image tag
tag: v0.12.0
nfs:
# -- Enable the nfs csi driver
enabled: false
topology:
# -- Enable topology based provisioning
enabled: false
# NOTE: the value here serves as an example and needs to be
# updated with node labels that define domains of interest
# -- domainLabels define which node labels to use as domains
# for CSI nodeplugins to advertise their domains
domainLabels:
# - kubernetes.io/hostname
# - topology.kubernetes.io/zone
# - topology.rook.io/rack
# -- Whether to skip any attach operation altogether for CephFS PVCs. See more details
# [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
# If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation
# of pods using the CephFS PVC fast. **WARNING** It's highly discouraged to use this for
# CephFS RWO volumes. Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
cephFSAttachRequired: true
# -- Whether to skip any attach operation altogether for RBD PVCs. See more details
# [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
# If set to false it skips the volume attachments and makes the creation of pods using the RBD PVC fast.
# **WARNING** It's highly discouraged to use this for RWO volumes as it can cause data corruption.
# csi-addons operations like Reclaimspace and PVC Keyrotation will also not be supported if set
# to false since we'll have no VolumeAttachments to determine which node the PVC is mounted on.
# Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
rbdAttachRequired: true
# -- Whether to skip any attach operation altogether for NFS PVCs. See more details
# [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
# If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation
# of pods using the NFS PVC fast. **WARNING** It's highly discouraged to use this for
# NFS RWO volumes. Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
nfsAttachRequired: true
# -- Enable discovery daemon
enableDiscoveryDaemon: false
# -- Set the discovery daemon device discovery interval (default to 60m)
discoveryDaemonInterval: 60m
# -- The timeout for ceph commands in seconds
cephCommandsTimeoutSeconds: "15"
# -- If true, run rook operator on the host network
useOperatorHostNetwork:
# -- If true, scale down the rook operator.
# This is useful for administrative actions where the rook operator must be scaled down, while using gitops style tooling
# to deploy your helm charts.
scaleDownOperator: false
## Rook Discover configuration
## toleration: NoSchedule, PreferNoSchedule or NoExecute
## tolerationKey: Set this to the specific key of the taint to tolerate
## tolerations: Array of tolerations in YAML format which will be added to agent deployment
## nodeAffinity: Set to labels of the node to match
discover:
# -- Toleration for the discover pods.
# Options: `NoSchedule`, `PreferNoSchedule` or `NoExecute`
toleration:
# -- The specific key of the taint to tolerate
tolerationKey:
# -- Array of tolerations in YAML format which will be added to discover deployment
tolerations:
# - key: key
# operator: Exists
# effect: NoSchedule
# -- The node labels for affinity of `discover-agent` [^1]
nodeAffinity:
# key1=value1,value2; key2=value3
#
# or
#
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: storage-node
# operator: Exists
# -- Labels to add to the discover pods
podLabels: # "key1=value1,key2=value2"
# -- Add resources to discover daemon pods
resources:
# - limits:
# memory: 512Mi
# - requests:
# cpu: 100m
# memory: 128Mi
# -- Custom label to identify node hostname. If not set `kubernetes.io/hostname` will be used
customHostnameLabel:
# -- Runs Ceph Pods as privileged to be able to write to `hostPaths` in OpenShift with SELinux restrictions.
hostpathRequiresPrivileged: false
# -- Whether to create all Rook pods to run on the host network, for example in environments where a CNI is not enabled
enforceHostNetwork: false
# -- Disable automatic orchestration when new devices are discovered.
disableDeviceHotplug: false
# -- The revision history limit for all pods created by Rook. If blank, the K8s default is 10.
revisionHistoryLimit:
# -- Blacklist certain disks according to the regex provided.
discoverDaemonUdev:
# -- imagePullSecrets option allow to pull docker images from private docker registry. Option will be passed to all service accounts.
imagePullSecrets:
# - name: my-registry-secret
# -- Whether the OBC provisioner should watch on the operator namespace or not, if not the namespace of the cluster will be used
enableOBCWatchOperatorNamespace: true
# -- Specify the prefix for the OBC provisioner in place of the cluster namespace
# @default -- `ceph cluster namespace`
obcProvisionerNamePrefix:
# -- Many OBC additional config fields may be risky for administrators to allow users control over.
# The safe and default-allowed fields are 'maxObjects' and 'maxSize'.
# Other fields should be considered risky. To allow all additional configs, use this value:
# "maxObjects,maxSize,bucketMaxObjects,bucketMaxSize,bucketPolicy,bucketLifecycle,bucketOwner"
# @default -- "maxObjects,maxSize"
obcAllowAdditionalConfigFields: "maxObjects,maxSize"
monitoring:
# -- Enable monitoring. Requires Prometheus to be pre-installed.
# Enabling will also create RBAC rules to allow Operator to create ServiceMonitors
enabled: false
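Monitoring is left off in these operator values. If a Prometheus Operator is already present in the cluster, an override along these lines would be expected to enable the operator metrics RBAC and the Ceph CSI ServiceMonitors; this is a sketch against the keys shown above, not a tested configuration:

```yaml
# illustrative override file, e.g. rook-ceph-operator-monitoring-values.yaml
monitoring:
  enabled: true          # requires the Prometheus Operator CRDs to be installed
csi:
  serviceMonitor:
    enabled: true        # scrape the Ceph CSI drivers
    interval: 10s
```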

View File

@@ -1,26 +1,145 @@
-use harmony::{
-    inventory::Inventory,
-    maestro::Maestro,
-    modules::dummy::{ErrorScore, PanicScore, SuccessScore},
-    topology::HAClusterTopology,
-};
-
-#[tokio::main]
-async fn main() {
-    let inventory = Inventory::autoload();
-    let topology = HAClusterTopology::autoload();
-    let mut maestro = Maestro::initialize(inventory, topology).await.unwrap();
-
-    maestro.register_all(vec![
-        // ADD scores :
-        // 1. OPNSense setup scores
-        // 2. Bootstrap node setup
-        // 3. Control plane setup
-        // 4. Workers setup
-        // 5. Various tools and apps setup
-        Box::new(SuccessScore {}),
-        Box::new(ErrorScore {}),
-        Box::new(PanicScore {}),
-    ]);
-
-    harmony_tui::init(maestro).await.unwrap();
-}
+use std::{
+    net::{IpAddr, Ipv4Addr},
+    sync::Arc,
+};
+
+use cidr::Ipv4Cidr;
+use harmony::{
+    hardware::{FirewallGroup, HostCategory, Location, PhysicalHost, SwitchGroup},
+    infra::opnsense::OPNSenseManagementInterface,
+    inventory::Inventory,
+    maestro::Maestro,
+    modules::{
+        http::HttpScore,
+        ipxe::IpxeScore,
+        okd::{
+            bootstrap_dhcp::OKDBootstrapDhcpScore,
+            bootstrap_load_balancer::OKDBootstrapLoadBalancerScore, dhcp::OKDDhcpScore,
+            dns::OKDDnsScore,
+        },
+        tftp::TftpScore,
+    },
+    topology::{LogicalHost, UnmanagedRouter, Url},
+};
+use harmony_macros::{ip, mac_address};
+
+#[tokio::main]
+async fn main() {
+    let firewall = harmony::topology::LogicalHost {
+        ip: ip!("192.168.33.1"),
+        name: String::from("fw0"),
+    };
+    let opnsense = Arc::new(
+        harmony::infra::opnsense::OPNSenseFirewall::new(firewall, None, "root", "opnsense").await,
+    );
+    let lan_subnet = Ipv4Addr::new(192, 168, 33, 0);
+    let gateway_ipv4 = Ipv4Addr::new(192, 168, 33, 1);
+    let gateway_ip = IpAddr::V4(gateway_ipv4);
+    let topology = harmony::topology::HAClusterTopology {
+        domain_name: "ncd0.harmony.mcd".to_string(), // TODO this must be set manually correctly
+        // when setting up the opnsense firewall
+        router: Arc::new(UnmanagedRouter::new(
+            gateway_ip,
+            Ipv4Cidr::new(lan_subnet, 24).unwrap(),
+        )),
+        load_balancer: opnsense.clone(),
+        firewall: opnsense.clone(),
+        tftp_server: opnsense.clone(),
+        http_server: opnsense.clone(),
+        dhcp_server: opnsense.clone(),
+        dns_server: opnsense.clone(),
+        control_plane: vec![
+            LogicalHost {
+                ip: ip!("192.168.33.20"),
+                name: "cp0".to_string(),
+            },
+            LogicalHost {
+                ip: ip!("192.168.33.21"),
+                name: "cp1".to_string(),
+            },
+            LogicalHost {
+                ip: ip!("192.168.33.22"),
+                name: "cp2".to_string(),
+            },
+        ],
+        bootstrap_host: LogicalHost {
+            ip: ip!("192.168.33.66"),
+            name: "bootstrap".to_string(),
+        },
+        workers: vec![
+            LogicalHost {
+                ip: ip!("192.168.33.30"),
+                name: "wk0".to_string(),
+            },
+            LogicalHost {
+                ip: ip!("192.168.33.31"),
+                name: "wk1".to_string(),
+            },
+            LogicalHost {
+                ip: ip!("192.168.33.32"),
+                name: "wk2".to_string(),
+            },
+        ],
+        switch: vec![],
+    };
+    let inventory = Inventory {
+        location: Location::new("I am mobile".to_string(), "earth".to_string()),
+        switch: SwitchGroup::from([]),
+        firewall: FirewallGroup::from([PhysicalHost::empty(HostCategory::Firewall)
+            .management(Arc::new(OPNSenseManagementInterface::new()))]),
+        storage_host: vec![],
+        worker_host: vec![
+            PhysicalHost::empty(HostCategory::Server)
+                .mac_address(mac_address!("C4:62:37:02:61:0F")),
+            PhysicalHost::empty(HostCategory::Server)
+                .mac_address(mac_address!("C4:62:37:02:61:26")),
+            // thisone
+            // Then create the ipxe file
+            // set the dns static leases
+            // bootstrap nodes
+            // start ceph cluster
+            // try installation of lampscore
+            // bingo?
+            PhysicalHost::empty(HostCategory::Server)
+                .mac_address(mac_address!("C4:62:37:02:61:70")),
+        ],
+        control_plane_host: vec![
+            PhysicalHost::empty(HostCategory::Server)
+                .mac_address(mac_address!("C4:62:37:02:60:FA")),
+            PhysicalHost::empty(HostCategory::Server)
+                .mac_address(mac_address!("C4:62:37:02:61:1A")),
+            PhysicalHost::empty(HostCategory::Server)
+                .mac_address(mac_address!("C4:62:37:01:BC:68")),
+        ],
+    };
+
+    // TODO regroup smaller scores in a larger one such as this
+    // let okd_boostrap_preparation();
+    let bootstrap_dhcp_score = OKDBootstrapDhcpScore::new(&topology, &inventory);
+    let bootstrap_load_balancer_score = OKDBootstrapLoadBalancerScore::new(&topology);
+    let dhcp_score = OKDDhcpScore::new(&topology, &inventory);
+    let dns_score = OKDDnsScore::new(&topology);
+    let load_balancer_score =
+        harmony::modules::okd::load_balancer::OKDLoadBalancerScore::new(&topology);
+    let tftp_score = TftpScore::new(Url::LocalFolder("./data/watchguard/tftpboot".to_string()));
+    let http_score = HttpScore::new(Url::LocalFolder(
+        "./data/watchguard/pxe-http-files".to_string(),
+    ));
+    let ipxe_score = IpxeScore::new();
+
+    let mut maestro = Maestro::initialize(inventory, topology).await.unwrap();
+    maestro.register_all(vec![
+        Box::new(dns_score),
+        Box::new(bootstrap_dhcp_score),
+        Box::new(bootstrap_load_balancer_score),
+        Box::new(load_balancer_score),
+        Box::new(tftp_score),
+        Box::new(http_score),
+        Box::new(ipxe_score),
+        Box::new(dhcp_score),
+    ]);
+    harmony_tui::init(maestro).await.unwrap();
+}

examples/ntfy/Cargo.toml Normal file (12 lines)
View File

@@ -0,0 +1,12 @@
[package]
name = "example-ntfy"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { version = "0.1.0", path = "../../harmony" }
harmony_cli = { version = "0.1.0", path = "../../harmony_cli" }
tokio.workspace = true
url.workspace = true

examples/ntfy/src/main.rs Normal file (19 lines)
View File

@@ -0,0 +1,19 @@
use harmony::{
inventory::Inventory, maestro::Maestro, modules::monitoring::ntfy::ntfy::NtfyScore,
topology::K8sAnywhereTopology,
};
#[tokio::main]
async fn main() {
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
)
.await
.unwrap();
maestro.register_all(vec![Box::new(NtfyScore {
namespace: "monitoring".to_string(),
})]);
harmony_cli::init(maestro, None).await.unwrap();
}

examples/rust/Cargo.toml Normal file (14 lines)
View File

@@ -0,0 +1,14 @@
[package]
name = "example-rust"
version = "0.1.0"
edition = "2024"
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
harmony_macros = { path = "../../harmony_macros" }
tokio = { workspace = true }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }

examples/rust/src/main.rs Normal file (62 lines)
View File

@@ -0,0 +1,62 @@
use std::{path::PathBuf, sync::Arc};
use harmony::{
inventory::Inventory,
maestro::Maestro,
modules::{
application::{
ApplicationScore, RustWebFramework, RustWebapp,
features::{ContinuousDelivery, PrometheusMonitoring},
},
monitoring::{
alert_channel::discord_alert_channel::DiscordWebhook,
alert_rule::prometheus_alert_rule::AlertManagerRuleGroup,
},
prometheus::alerts::k8s::{
pod::pod_in_failed_state, pvc::high_pvc_fill_rate_over_two_days,
},
},
topology::{K8sAnywhereTopology, Url},
};
#[tokio::main]
async fn main() {
env_logger::init();
let application = Arc::new(RustWebapp {
name: "harmony-example-rust-webapp".to_string(),
domain: Url::Url(url::Url::parse("https://rustapp.harmony.example.com").unwrap()),
project_root: PathBuf::from("./examples/rust/webapp"),
framework: Some(RustWebFramework::Leptos),
});
let pod_failed = pod_in_failed_state();
let pod_failed_2 = pod_in_failed_state();
let pod_failed_3 = pod_in_failed_state();
let additional_rules = AlertManagerRuleGroup::new("pod-alerts", vec![pod_failed]);
let additional_rules_2 = AlertManagerRuleGroup::new("pod-alerts-2", vec![pod_failed_2, pod_failed_3]);
let app = ApplicationScore {
features: vec![
//Box::new(ContinuousDelivery {
// application: application.clone(),
//}),
Box::new(PrometheusMonitoring {
application: application.clone(),
alert_receivers: vec![Box::new(DiscordWebhook {
name: "dummy-discord".to_string(),
url: Url::Url(url::Url::parse("https://discord.doesnt.exist.com").unwrap()),
})],
alert_rules: vec![Box::new(additional_rules), Box::new(additional_rules_2)],
}),
// TODO add monitoring, backups, multisite ha, etc
],
application,
};
let topology = K8sAnywhereTopology::from_env();
let mut maestro = Maestro::initialize(Inventory::autoload(), topology)
.await
.unwrap();
maestro.register_all(vec![Box::new(app)]);
harmony_cli::init(maestro, None).await.unwrap();
}
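The `pod_in_failed_state` helper and `AlertManagerRuleGroup` used above are presumably rendered by the `PrometheusMonitoring` feature into standard prometheus-operator rule groups. As a rough illustration only (the group name, alert name, and expression below are assumptions, not what harmony actually emits), such a rule would look something like:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-alerts            # matches the rule group name used in the example
  namespace: monitoring       # illustrative namespace
spec:
  groups:
    - name: pod-alerts
      rules:
        - alert: PodInFailedState                            # assumed alert name
          expr: kube_pod_status_phase{phase="Failed"} > 0    # assumes kube-state-metrics
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Pod {{ $labels.pod }} is in Failed state"
```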

examples/rust/webapp/.gitignore vendored Normal file (14 lines)
View File

@@ -0,0 +1,14 @@
# Generated by Cargo
# will have compiled files and executables
debug/
target/
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
Cargo.lock
# These are backup files generated by rustfmt
**/*.rs.bk
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb

View File

@@ -0,0 +1,93 @@
[package]
name = "harmony-example-rust-webapp"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib", "rlib"]
[workspace]
[dependencies]
actix-files = { version = "0.6", optional = true }
actix-web = { version = "4", optional = true, features = ["macros"] }
console_error_panic_hook = "0.1"
http = { version = "1.0.0", optional = true }
leptos = { version = "0.7.0" }
leptos_meta = { version = "0.7.0" }
leptos_actix = { version = "0.7.0", optional = true }
leptos_router = { version = "0.7.0" }
wasm-bindgen = "=0.2.100"
[features]
csr = ["leptos/csr"]
hydrate = ["leptos/hydrate"]
ssr = [
"dep:actix-files",
"dep:actix-web",
"dep:leptos_actix",
"leptos/ssr",
"leptos_meta/ssr",
"leptos_router/ssr",
]
# Defines a size-optimized profile for the WASM bundle in release mode
[profile.wasm-release]
inherits = "release"
opt-level = 'z'
lto = true
codegen-units = 1
panic = "abort"
[package.metadata.leptos]
# The name used by wasm-bindgen/cargo-leptos for the JS/WASM bundle. Defaults to the crate name
output-name = "harmony-example-rust-webapp"
# The site root folder is where cargo-leptos generate all output. WARNING: all content of this folder will be erased on a rebuild. Use it in your server setup.
site-root = "target/site"
# The site-root relative folder where all compiled output (JS, WASM and CSS) is written
# Defaults to pkg
site-pkg-dir = "pkg"
# [Optional] The source CSS file. If it ends with .sass or .scss then it will be compiled by dart-sass into CSS. The CSS is optimized by Lightning CSS before being written to <site-root>/<site-pkg>/app.css
style-file = "style/main.scss"
# Assets source dir. All files found here will be copied and synchronized to site-root.
# The assets-dir cannot have a sub directory with the same name/path as site-pkg-dir.
#
# Optional. Env: LEPTOS_ASSETS_DIR.
assets-dir = "assets"
# The IP and port (ex: 127.0.0.1:3000) where the server serves the content. Use it in your server setup.
site-addr = "0.0.0.0:3000"
# The port to use for automatic reload monitoring
reload-port = 3001
# [Optional] Command to use when running end2end tests. It will run in the end2end dir.
# [Windows] for non-WSL use "npx.cmd playwright test"
# This binary name can be checked in Powershell with Get-Command npx
end2end-cmd = "npx playwright test"
end2end-dir = "end2end"
# The browserlist query used for optimizing the CSS.
browserquery = "defaults"
# The environment Leptos will run in, usually either "DEV" or "PROD"
env = "DEV"
# The features to use when compiling the bin target
#
# Optional. Can be over-ridden with the command line parameter --bin-features
bin-features = ["ssr"]
# If the --no-default-features flag should be used when compiling the bin target
#
# Optional. Defaults to false.
bin-default-features = false
# The features to use when compiling the lib target
#
# Optional. Can be over-ridden with the command line parameter --lib-features
lib-features = ["hydrate"]
# If the --no-default-features flag should be used when compiling the lib target
#
# Optional. Defaults to false.
lib-default-features = false
# The profile to use for the lib target when compiling for release
#
# Optional. Defaults to "release".
lib-profile-release = "wasm-release"

View File

@@ -0,0 +1,16 @@
FROM rust:bookworm as builder
RUN apt-get update && apt-get install -y --no-install-recommends clang wget \
    && wget https://github.com/cargo-bins/cargo-binstall/releases/latest/download/cargo-binstall-x86_64-unknown-linux-musl.tgz \
    && tar -xvf cargo-binstall-x86_64-unknown-linux-musl.tgz \
    && cp cargo-binstall /usr/local/cargo/bin \
    && rm cargo-binstall-x86_64-unknown-linux-musl.tgz cargo-binstall \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
RUN cargo binstall cargo-leptos -y
RUN rustup target add wasm32-unknown-unknown
WORKDIR /app
COPY . .
RUN cargo leptos build --release -vv
FROM debian:bookworm-slim
RUN groupadd -r appgroup && useradd -r -s /bin/false -g appgroup appuser
ENV LEPTOS_SITE_ADDR=0.0.0.0:3000
EXPOSE 3000/tcp
WORKDIR /home/appuser
COPY --from=builder /app/target/site/pkg /home/appuser/pkg
COPY --from=builder /app/target/release/harmony-example-rust-webapp /home/appuser/harmony-example-rust-webapp
USER appuser
CMD /home/appuser/harmony-example-rust-webapp

View File

@@ -0,0 +1,24 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <https://unlicense.org>

View File

@@ -0,0 +1,72 @@
<picture>
<source srcset="https://raw.githubusercontent.com/leptos-rs/leptos/main/docs/logos/Leptos_logo_Solid_White.svg" media="(prefers-color-scheme: dark)">
<img src="https://raw.githubusercontent.com/leptos-rs/leptos/main/docs/logos/Leptos_logo_RGB.svg" alt="Leptos Logo">
</picture>
# Leptos Starter Template
This is a template for use with the [Leptos](https://github.com/leptos-rs/leptos) web framework and the [cargo-leptos](https://github.com/akesson/cargo-leptos) tool.
## Creating your template repo
If you don't have `cargo-leptos` installed you can install it with
`cargo install cargo-leptos --locked`
Then run
`cargo leptos new --git leptos-rs/start-actix`
to generate a new project template (you will be prompted to enter a project name).
`cd {projectname}`
to go to your newly created project.
Of course, you should explore around the project structure, but the best place to start with your application code is in `src/app.rs`.
## Running your project
`cargo leptos watch`
By default, you can access your local project at `http://localhost:3000`
## Installing Additional Tools
By default, `cargo-leptos` uses `nightly` Rust, `cargo-generate`, and `sass`. If you run into any trouble, you may need to install one or more of these tools.
1. `rustup toolchain install nightly --allow-downgrade` - make sure you have Rust nightly
2. `rustup target add wasm32-unknown-unknown` - add the ability to compile Rust to WebAssembly
3. `cargo install cargo-generate` - install `cargo-generate` binary (should be installed automatically in future)
4. `npm install -g sass` - install `dart-sass` (should be optional in future)
## Executing a Server on a Remote Machine Without the Toolchain
After running a `cargo leptos build --release` the minimum files needed are:
1. The server binary located in `target/server/release`
2. The `site` directory and all files within located in `target/site`
Copy these files to your remote server. The directory structure should be:
```text
leptos_start
site/
```
Set the following environment variables (updating for your project as needed):
```sh
export LEPTOS_OUTPUT_NAME="leptos_start"
export LEPTOS_SITE_ROOT="site"
export LEPTOS_SITE_PKG_DIR="pkg"
export LEPTOS_SITE_ADDR="127.0.0.1:3000"
export LEPTOS_RELOAD_PORT="3001"
```
Finally, run the server binary.
## Notes about CSR and Trunk:
Although it is not recommended, you can also run your project without server integration using the feature `csr` and `trunk serve`:
`trunk serve --open --features csr`
This may be useful for integrating external tools which require a static site, e.g. `tauri`.
## Licensing
This template itself is released under the Unlicense. You should replace the LICENSE for your own application with an appropriate license if you plan to release it publicly.

Binary file not shown.


View File

@@ -0,0 +1,112 @@
{
"name": "end2end",
"version": "1.0.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "end2end",
"version": "1.0.0",
"license": "ISC",
"devDependencies": {
"@playwright/test": "^1.44.1",
"@types/node": "^20.12.12",
"typescript": "^5.4.5"
}
},
"node_modules/@playwright/test": {
"version": "1.44.1",
"resolved": "https://registry.npmjs.org/@playwright/test/-/test-1.44.1.tgz",
"integrity": "sha512-1hZ4TNvD5z9VuhNJ/walIjvMVvYkZKf71axoF/uiAqpntQJXpG64dlXhoDXE3OczPuTuvjf/M5KWFg5VAVUS3Q==",
"dev": true,
"license": "Apache-2.0",
"dependencies": {
"playwright": "1.44.1"
},
"bin": {
"playwright": "cli.js"
},
"engines": {
"node": ">=16"
}
},
"node_modules/@types/node": {
"version": "20.12.12",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.12.12.tgz",
"integrity": "sha512-eWLDGF/FOSPtAvEqeRAQ4C8LSA7M1I7i0ky1I8U7kD1J5ITyW3AsRhQrKVoWf5pFKZ2kILsEGJhsI9r93PYnOw==",
"dev": true,
"license": "MIT",
"dependencies": {
"undici-types": "~5.26.4"
}
},
"node_modules/fsevents": {
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz",
"integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==",
"dev": true,
"hasInstallScript": true,
"license": "MIT",
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": "^8.16.0 || ^10.6.0 || >=11.0.0"
}
},
"node_modules/playwright": {
"version": "1.44.1",
"resolved": "https://registry.npmjs.org/playwright/-/playwright-1.44.1.tgz",
"integrity": "sha512-qr/0UJ5CFAtloI3avF95Y0L1xQo6r3LQArLIg/z/PoGJ6xa+EwzrwO5lpNr/09STxdHuUoP2mvuELJS+hLdtgg==",
"dev": true,
"license": "Apache-2.0",
"dependencies": {
"playwright-core": "1.44.1"
},
"bin": {
"playwright": "cli.js"
},
"engines": {
"node": ">=16"
},
"optionalDependencies": {
"fsevents": "2.3.2"
}
},
"node_modules/playwright-core": {
"version": "1.44.1",
"resolved": "https://registry.npmjs.org/playwright-core/-/playwright-core-1.44.1.tgz",
"integrity": "sha512-wh0JWtYTrhv1+OSsLPgFzGzt67Y7BE/ZS3jEqgGBlp2ppp1ZDj8c+9IARNW4dwf1poq5MgHreEM2KV/GuR4cFA==",
"dev": true,
"license": "Apache-2.0",
"bin": {
"playwright-core": "cli.js"
},
"engines": {
"node": ">=16"
}
},
"node_modules/typescript": {
"version": "5.4.5",
"resolved": "https://registry.npmjs.org/typescript/-/typescript-5.4.5.tgz",
"integrity": "sha512-vcI4UpRgg81oIRUFwR0WSIHKt11nJ7SAVlYNIu+QpqeyXP+gpQJy/Z4+F0aGxSE4MqwjyXvW/TzgkLAx2AGHwQ==",
"dev": true,
"license": "Apache-2.0",
"bin": {
"tsc": "bin/tsc",
"tsserver": "bin/tsserver"
},
"engines": {
"node": ">=14.17"
}
},
"node_modules/undici-types": {
"version": "5.26.5",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-5.26.5.tgz",
"integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA==",
"dev": true,
"license": "MIT"
}
}
}

View File

@@ -0,0 +1,15 @@
{
"name": "end2end",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {},
"keywords": [],
"author": "",
"license": "ISC",
"devDependencies": {
"@playwright/test": "^1.44.1",
"@types/node": "^20.12.12",
"typescript": "^5.4.5"
}
}

View File

@@ -0,0 +1,104 @@
import { devices, defineConfig } from "@playwright/test";
/**
* Read environment variables from file.
* https://github.com/motdotla/dotenv
*/
// require('dotenv').config();
/**
* See https://playwright.dev/docs/test-configuration.
*/
export default defineConfig({
testDir: "./tests",
/* Maximum time one test can run for. */
timeout: 30 * 1000,
expect: {
/**
* Maximum time expect() should wait for the condition to be met.
* For example in `await expect(locator).toHaveText();`
*/
timeout: 5000,
},
/* Run tests in files in parallel */
fullyParallel: true,
/* Fail the build on CI if you accidentally left test.only in the source code. */
forbidOnly: !!process.env.CI,
/* Retry on CI only */
retries: process.env.CI ? 2 : 0,
/* Opt out of parallel tests on CI. */
workers: process.env.CI ? 1 : undefined,
/* Reporter to use. See https://playwright.dev/docs/test-reporters */
reporter: "html",
/* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
use: {
/* Maximum time each action such as `click()` can take. Defaults to 0 (no limit). */
actionTimeout: 0,
/* Base URL to use in actions like `await page.goto('/')`. */
// baseURL: 'http://localhost:3000',
/* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
trace: "on-first-retry",
},
/* Configure projects for major browsers */
projects: [
{
name: "chromium",
use: {
...devices["Desktop Chrome"],
},
},
{
name: "firefox",
use: {
...devices["Desktop Firefox"],
},
},
{
name: "webkit",
use: {
...devices["Desktop Safari"],
},
},
/* Test against mobile viewports. */
// {
// name: 'Mobile Chrome',
// use: {
// ...devices['Pixel 5'],
// },
// },
// {
// name: 'Mobile Safari',
// use: {
// ...devices['iPhone 12'],
// },
// },
/* Test against branded browsers. */
// {
// name: 'Microsoft Edge',
// use: {
// channel: 'msedge',
// },
// },
// {
// name: 'Google Chrome',
// use: {
// channel: 'chrome',
// },
// },
],
/* Folder for test artifacts such as screenshots, videos, traces, etc. */
// outputDir: 'test-results/',
/* Run your local dev server before starting the tests */
// webServer: {
// command: 'npm run start',
// port: 3000,
// },
});

View File

@@ -0,0 +1,9 @@
import { test, expect } from "@playwright/test";
test("homepage has title and links to intro page", async ({ page }) => {
await page.goto("http://localhost:3000/");
await expect(page).toHaveTitle("Welcome to Leptos");
await expect(page.locator("h1")).toHaveText("Welcome to Leptos!");
});

View File

@@ -0,0 +1,109 @@
{
"compilerOptions": {
/* Visit https://aka.ms/tsconfig to read more about this file */
/* Projects */
// "incremental": true, /* Save .tsbuildinfo files to allow for incremental compilation of projects. */
// "composite": true, /* Enable constraints that allow a TypeScript project to be used with project references. */
// "tsBuildInfoFile": "./.tsbuildinfo", /* Specify the path to .tsbuildinfo incremental compilation file. */
// "disableSourceOfProjectReferenceRedirect": true, /* Disable preferring source files instead of declaration files when referencing composite projects. */
// "disableSolutionSearching": true, /* Opt a project out of multi-project reference checking when editing. */
// "disableReferencedProjectLoad": true, /* Reduce the number of projects loaded automatically by TypeScript. */
/* Language and Environment */
"target": "es2016", /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */
// "lib": [], /* Specify a set of bundled library declaration files that describe the target runtime environment. */
// "jsx": "preserve", /* Specify what JSX code is generated. */
// "experimentalDecorators": true, /* Enable experimental support for legacy experimental decorators. */
// "emitDecoratorMetadata": true, /* Emit design-type metadata for decorated declarations in source files. */
// "jsxFactory": "", /* Specify the JSX factory function used when targeting React JSX emit, e.g. 'React.createElement' or 'h'. */
// "jsxFragmentFactory": "", /* Specify the JSX Fragment reference used for fragments when targeting React JSX emit e.g. 'React.Fragment' or 'Fragment'. */
// "jsxImportSource": "", /* Specify module specifier used to import the JSX factory functions when using 'jsx: react-jsx*'. */
// "reactNamespace": "", /* Specify the object invoked for 'createElement'. This only applies when targeting 'react' JSX emit. */
// "noLib": true, /* Disable including any library files, including the default lib.d.ts. */
// "useDefineForClassFields": true, /* Emit ECMAScript-standard-compliant class fields. */
// "moduleDetection": "auto", /* Control what method is used to detect module-format JS files. */
/* Modules */
"module": "commonjs", /* Specify what module code is generated. */
// "rootDir": "./", /* Specify the root folder within your source files. */
// "moduleResolution": "node10", /* Specify how TypeScript looks up a file from a given module specifier. */
// "baseUrl": "./", /* Specify the base directory to resolve non-relative module names. */
// "paths": {}, /* Specify a set of entries that re-map imports to additional lookup locations. */
// "rootDirs": [], /* Allow multiple folders to be treated as one when resolving modules. */
// "typeRoots": [], /* Specify multiple folders that act like './node_modules/@types'. */
// "types": [], /* Specify type package names to be included without being referenced in a source file. */
// "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
// "moduleSuffixes": [], /* List of file name suffixes to search when resolving a module. */
// "allowImportingTsExtensions": true, /* Allow imports to include TypeScript file extensions. Requires '--moduleResolution bundler' and either '--noEmit' or '--emitDeclarationOnly' to be set. */
// "resolvePackageJsonExports": true, /* Use the package.json 'exports' field when resolving package imports. */
// "resolvePackageJsonImports": true, /* Use the package.json 'imports' field when resolving imports. */
// "customConditions": [], /* Conditions to set in addition to the resolver-specific defaults when resolving imports. */
// "resolveJsonModule": true, /* Enable importing .json files. */
// "allowArbitraryExtensions": true, /* Enable importing files with any extension, provided a declaration file is present. */
// "noResolve": true, /* Disallow 'import's, 'require's or '<reference>'s from expanding the number of files TypeScript should add to a project. */
/* JavaScript Support */
// "allowJs": true, /* Allow JavaScript files to be a part of your program. Use the 'checkJS' option to get errors from these files. */
// "checkJs": true, /* Enable error reporting in type-checked JavaScript files. */
// "maxNodeModuleJsDepth": 1, /* Specify the maximum folder depth used for checking JavaScript files from 'node_modules'. Only applicable with 'allowJs'. */
/* Emit */
// "declaration": true, /* Generate .d.ts files from TypeScript and JavaScript files in your project. */
// "declarationMap": true, /* Create sourcemaps for d.ts files. */
// "emitDeclarationOnly": true, /* Only output d.ts files and not JavaScript files. */
// "sourceMap": true, /* Create source map files for emitted JavaScript files. */
// "inlineSourceMap": true, /* Include sourcemap files inside the emitted JavaScript. */
// "outFile": "./", /* Specify a file that bundles all outputs into one JavaScript file. If 'declaration' is true, also designates a file that bundles all .d.ts output. */
// "outDir": "./", /* Specify an output folder for all emitted files. */
// "removeComments": true, /* Disable emitting comments. */
// "noEmit": true, /* Disable emitting files from a compilation. */
// "importHelpers": true, /* Allow importing helper functions from tslib once per project, instead of including them per-file. */
// "importsNotUsedAsValues": "remove", /* Specify emit/checking behavior for imports that are only used for types. */
// "downlevelIteration": true, /* Emit more compliant, but verbose and less performant JavaScript for iteration. */
// "sourceRoot": "", /* Specify the root path for debuggers to find the reference source code. */
// "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
// "inlineSources": true, /* Include source code in the sourcemaps inside the emitted JavaScript. */
// "emitBOM": true, /* Emit a UTF-8 Byte Order Mark (BOM) in the beginning of output files. */
// "newLine": "crlf", /* Set the newline character for emitting files. */
// "stripInternal": true, /* Disable emitting declarations that have '@internal' in their JSDoc comments. */
// "noEmitHelpers": true, /* Disable generating custom helper functions like '__extends' in compiled output. */
// "noEmitOnError": true, /* Disable emitting files if any type checking errors are reported. */
// "preserveConstEnums": true, /* Disable erasing 'const enum' declarations in generated code. */
// "declarationDir": "./", /* Specify the output directory for generated declaration files. */
// "preserveValueImports": true, /* Preserve unused imported values in the JavaScript output that would otherwise be removed. */
/* Interop Constraints */
// "isolatedModules": true, /* Ensure that each file can be safely transpiled without relying on other imports. */
// "verbatimModuleSyntax": true, /* Do not transform or elide any imports or exports not marked as type-only, ensuring they are written in the output file's format based on the 'module' setting. */
// "allowSyntheticDefaultImports": true, /* Allow 'import x from y' when a module doesn't have a default export. */
"esModuleInterop": true, /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables 'allowSyntheticDefaultImports' for type compatibility. */
// "preserveSymlinks": true, /* Disable resolving symlinks to their realpath. This correlates to the same flag in node. */
"forceConsistentCasingInFileNames": true, /* Ensure that casing is correct in imports. */
/* Type Checking */
"strict": true, /* Enable all strict type-checking options. */
// "noImplicitAny": true, /* Enable error reporting for expressions and declarations with an implied 'any' type. */
// "strictNullChecks": true, /* When type checking, take into account 'null' and 'undefined'. */
// "strictFunctionTypes": true, /* When assigning functions, check to ensure parameters and the return values are subtype-compatible. */
// "strictBindCallApply": true, /* Check that the arguments for 'bind', 'call', and 'apply' methods match the original function. */
// "strictPropertyInitialization": true, /* Check for class properties that are declared but not set in the constructor. */
// "noImplicitThis": true, /* Enable error reporting when 'this' is given the type 'any'. */
// "useUnknownInCatchVariables": true, /* Default catch clause variables as 'unknown' instead of 'any'. */
// "alwaysStrict": true, /* Ensure 'use strict' is always emitted. */
// "noUnusedLocals": true, /* Enable error reporting when local variables aren't read. */
// "noUnusedParameters": true, /* Raise an error when a function parameter isn't read. */
// "exactOptionalPropertyTypes": true, /* Interpret optional property types as written, rather than adding 'undefined'. */
// "noImplicitReturns": true, /* Enable error reporting for codepaths that do not explicitly return in a function. */
// "noFallthroughCasesInSwitch": true, /* Enable error reporting for fallthrough cases in switch statements. */
// "noUncheckedIndexedAccess": true, /* Add 'undefined' to a type when accessed using an index. */
// "noImplicitOverride": true, /* Ensure overriding members in derived classes are marked with an override modifier. */
// "noPropertyAccessFromIndexSignature": true, /* Enforces using indexed accessors for keys declared using an indexed type. */
// "allowUnusedLabels": true, /* Disable error reporting for unused labels. */
// "allowUnreachableCode": true, /* Disable error reporting for unreachable code. */
/* Completeness */
// "skipDefaultLibCheck": true, /* Skip type checking .d.ts files that are included with TypeScript. */
"skipLibCheck": true /* Skip type checking all .d.ts files. */
}
}

View File

@@ -0,0 +1,66 @@
use leptos::prelude::*;
use leptos_meta::{provide_meta_context, Stylesheet, Title};
use leptos_router::{
components::{Route, Router, Routes},
StaticSegment, WildcardSegment,
};
#[component]
pub fn App() -> impl IntoView {
// Provides context that manages stylesheets, titles, meta tags, etc.
provide_meta_context();
view! {
// injects a stylesheet into the document <head>
// id=leptos means cargo-leptos will hot-reload this stylesheet
<Stylesheet id="leptos" href="/pkg/harmony-example-rust-webapp.css"/>
// sets the document title
<Title text="Welcome to Leptos"/>
// content for this welcome page
<Router>
<main>
<Routes fallback=move || "Not found.">
<Route path=StaticSegment("") view=HomePage/>
<Route path=WildcardSegment("any") view=NotFound/>
</Routes>
</main>
</Router>
}
}
/// Renders the home page of your application.
#[component]
fn HomePage() -> impl IntoView {
// Creates a reactive value to update the button
let count = RwSignal::new(0);
let on_click = move |_| *count.write() += 1;
view! {
<h1>"Welcome to Leptos!"</h1>
<button on:click=on_click>"Click Me: " {count}</button>
}
}
/// 404 - Not Found
#[component]
fn NotFound() -> impl IntoView {
// set an HTTP status code 404
// this is feature gated because it can only be done during
// initial server-side rendering
// if you navigate to the 404 page subsequently, the status
// code will not be set because there is not a new HTTP request
// to the server
#[cfg(feature = "ssr")]
{
// this can be done inline because it's synchronous
// if it were async, we'd use a server function
let resp = expect_context::<leptos_actix::ResponseOptions>();
resp.set_status(actix_web::http::StatusCode::NOT_FOUND);
}
view! {
<h1>"Not Found"</h1>
}
}

View File

@@ -0,0 +1,9 @@
pub mod app;
#[cfg(feature = "hydrate")]
#[wasm_bindgen::prelude::wasm_bindgen]
pub fn hydrate() {
use app::*;
console_error_panic_hook::set_once();
leptos::mount::hydrate_body(App);
}

View File

@@ -0,0 +1,88 @@
#[cfg(feature = "ssr")]
#[actix_web::main]
async fn main() -> std::io::Result<()> {
use actix_files::Files;
use actix_web::*;
use leptos::prelude::*;
use leptos::config::get_configuration;
use leptos_meta::MetaTags;
use leptos_actix::{generate_route_list, LeptosRoutes};
use harmony_example_rust_webapp::app::*;
let conf = get_configuration(None).unwrap();
let addr = conf.leptos_options.site_addr;
HttpServer::new(move || {
// Generate the list of routes in your Leptos App
let routes = generate_route_list(App);
let leptos_options = &conf.leptos_options;
let site_root = leptos_options.site_root.clone().to_string();
println!("listening on http://{}", &addr);
App::new()
// serve JS/WASM/CSS from `pkg`
.service(Files::new("/pkg", format!("{site_root}/pkg")))
// serve other assets from the `assets` directory
.service(Files::new("/assets", &site_root))
// serve the favicon from /favicon.ico
.service(favicon)
.leptos_routes(routes, {
let leptos_options = leptos_options.clone();
move || {
view! {
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<AutoReload options=leptos_options.clone() />
<HydrationScripts options=leptos_options.clone()/>
<MetaTags/>
</head>
<body>
<App/>
</body>
</html>
}
}
})
.app_data(web::Data::new(leptos_options.to_owned()))
//.wrap(middleware::Compress::default())
})
.bind(&addr)?
.run()
.await
}
#[cfg(feature = "ssr")]
#[actix_web::get("favicon.ico")]
async fn favicon(
leptos_options: actix_web::web::Data<leptos::config::LeptosOptions>,
) -> actix_web::Result<actix_files::NamedFile> {
let leptos_options = leptos_options.into_inner();
let site_root = &leptos_options.site_root;
Ok(actix_files::NamedFile::open(format!(
"{site_root}/favicon.ico"
))?)
}
#[cfg(not(any(feature = "ssr", feature = "csr")))]
pub fn main() {
// no client-side main function
// unless we want this to work with e.g., Trunk for pure client-side testing
// see lib.rs for hydration function instead
// see optional feature `csr` instead
}
#[cfg(all(not(feature = "ssr"), feature = "csr"))]
pub fn main() {
// a client-side main function is required for using `trunk serve`
// prefer using `cargo leptos serve` instead
// to run: `trunk serve --open --features csr`
use harmony_example_rust_webapp::app::*;
console_error_panic_hook::set_once();
leptos::mount_to_body(App);
}

View File

@@ -0,0 +1,4 @@
body {
font-family: sans-serif;
text-align: center;
}

View File

@@ -0,0 +1,18 @@
[package]
name = "example-tenant"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }

View File

@@ -0,0 +1,41 @@
use harmony::{
data::Id,
inventory::Inventory,
maestro::Maestro,
modules::tenant::TenantScore,
topology::{K8sAnywhereTopology, tenant::TenantConfig},
};
#[tokio::main]
async fn main() {
let tenant = TenantScore {
config: TenantConfig {
id: Id::from_str("test-tenant-id"),
name: "testtenant".to_string(),
..Default::default()
},
};
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
)
.await
.unwrap();
maestro.register_all(vec![Box::new(tenant)]);
harmony_cli::init(maestro, None).await.unwrap();
}
// TODO write tests
// - Create Tenant with default config mostly, make sure namespace is created
// - deploy sample client/server app with nginx unprivileged and a service
// - exec in the client pod and validate the following
// - can reach internet
// - can reach server pod
// - can resolve dns queries to internet
// - can resolve dns queries to services
// - cannot reach services and pods in other namespaces
// - Create Tenant with specific cpu/ram/storage requests / limits and make sure they are enforced by trying to
// deploy a pod with lower requests/limits (accepted) and higher requests/limits (rejected)
// - Create TenantCredentials and make sure they give only access to the correct tenant
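
A minimal sketch of the first TODO above, assuming the tenant interpret creates a namespace named after TenantConfig.name ("testtenant") and reusing the kube and k8s-openapi workspace dependencies; the test name and the #[ignore] marker are illustrative, not part of this change.

#[cfg(test)]
mod tests {
    use k8s_openapi::api::core::v1::Namespace;
    use kube::{Api, Client};

    // Requires a reachable cluster (e.g. the default local k3d cluster) with the
    // tenant score already applied, hence the #[ignore].
    #[tokio::test]
    #[ignore]
    async fn tenant_namespace_is_created() {
        let client = Client::try_default()
            .await
            .expect("kubeconfig should be reachable");
        let namespaces: Api<Namespace> = Api::all(client);
        let ns = namespaces
            .get("testtenant")
            .await
            .expect("namespace should exist after TenantScore is interpreted");
        assert_eq!(ns.metadata.name.as_deref(), Some("testtenant"));
    }
}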

View File

@@ -10,9 +10,9 @@ publish = false
 harmony = { path = "../../harmony" }
 harmony_tui = { path = "../../harmony_tui" }
 harmony_types = { path = "../../harmony_types" }
+harmony_macros = { path = "../../harmony_macros" }
 cidr = { workspace = true }
 tokio = { workspace = true }
-harmony_macros = { path = "../../harmony_macros" }
 log = { workspace = true }
 env_logger = { workspace = true }
 url = { workspace = true }

View File

@@ -6,30 +6,32 @@ readme.workspace = true
 license.workspace = true
 [dependencies]
+rand = "0.9"
+hex = "0.4"
 libredfish = "0.1.1"
 reqwest = { version = "0.11", features = ["blocking", "json"] }
 russh = "0.45.0"
 rust-ipmi = "0.1.1"
 semver = "1.0.23"
-serde = { version = "1.0.209", features = ["derive"] }
+serde = { version = "1.0.209", features = ["derive", "rc"] }
 serde_json = "1.0.127"
-tokio = { workspace = true }
+tokio.workspace = true
-derive-new = { workspace = true }
+derive-new.workspace = true
-log = { workspace = true }
+log.workspace = true
-env_logger = { workspace = true }
+env_logger.workspace = true
-async-trait = { workspace = true }
+async-trait.workspace = true
-cidr = { workspace = true }
+cidr.workspace = true
 opnsense-config = { path = "../opnsense-config" }
 opnsense-config-xml = { path = "../opnsense-config-xml" }
 harmony_macros = { path = "../harmony_macros" }
 harmony_types = { path = "../harmony_types" }
-uuid = { workspace = true }
+uuid.workspace = true
-url = { workspace = true }
+url.workspace = true
-kube = { workspace = true }
+kube.workspace = true
-k8s-openapi = { workspace = true }
+k8s-openapi.workspace = true
-serde_yaml = { workspace = true }
+serde_yaml.workspace = true
-http = { workspace = true }
+http.workspace = true
-serde-value = { workspace = true }
+serde-value.workspace = true
 inquire.workspace = true
 helm-wrapper-rs = "0.4.0"
 non-blank-string-rs = "1.0.4"
@@ -38,3 +40,21 @@ directories = "6.0.0"
 lazy_static = "1.5.0"
 dockerfile_builder = "0.1.5"
 temp-file = "0.1.9"
+convert_case.workspace = true
+email_address = "0.2.9"
+chrono.workspace = true
+fqdn = { version = "0.4.6", features = [
+    "domain-label-cannot-start-or-end-with-hyphen",
+    "domain-label-length-limited-to-63",
+    "domain-name-without-special-chars",
+    "domain-name-length-limited-to-255",
+    "punycode",
+    "serde",
+] }
+temp-dir = "0.1.14"
+dyn-clone = "1.0.19"
+similar.workspace = true
+futures-util = "0.3.31"
+tokio-util = "0.7.15"
+strum = { version = "0.27.1", features = ["derive"] }
+tempfile = "3.20.0"

View File

@@ -2,8 +2,14 @@ use lazy_static::lazy_static;
 use std::path::PathBuf;
 lazy_static! {
-    pub static ref HARMONY_CONFIG_DIR: PathBuf = directories::BaseDirs::new()
+    pub static ref HARMONY_DATA_DIR: PathBuf = directories::BaseDirs::new()
         .unwrap()
         .data_dir()
         .join("harmony");
+    pub static ref REGISTRY_URL: String =
+        std::env::var("HARMONY_REGISTRY_URL").unwrap_or_else(|_| "hub.nationtech.io".to_string());
+    pub static ref REGISTRY_PROJECT: String =
+        std::env::var("HARMONY_REGISTRY_PROJECT").unwrap_or_else(|_| "harmony".to_string());
+    pub static ref DRY_RUN: bool =
+        std::env::var("HARMONY_DRY_RUN").map_or(true, |value| value.parse().unwrap_or(true));
 }
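
A standalone sketch of the HARMONY_DRY_RUN behaviour above; the helper name is hypothetical, the point is that dry-run stays enabled unless the variable is present and parses to a boolean false.

// Mirrors the DRY_RUN initializer shown in the diff above.
fn dry_run_from_env() -> bool {
    std::env::var("HARMONY_DRY_RUN").map_or(true, |value| value.parse().unwrap_or(true))
}

fn main() {
    // Unset => true, "garbage" => true, "true" => true, "false" => false.
    println!("dry run: {}", dry_run_from_env());
}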

View File

@@ -1,6 +1,24 @@
+use rand::distr::Alphanumeric;
+use rand::distr::SampleString;
+use std::time::SystemTime;
+use std::time::UNIX_EPOCH;
 use serde::{Deserialize, Serialize};
-#[derive(Debug, Clone, Serialize, Deserialize)]
+/// A unique identifier designed for ease of use.
+///
+/// You can pass it any String to use as an Id, or you can use the default format with `Id::default()`.
+///
+/// The default format looks like this:
+///
+/// `462d4c_g2COgai`
+///
+/// The first part is the unix timestamp in hexadecimal, which makes Ids easy to sort by creation time.
+/// The second part is a series of 7 random characters.
+///
+/// **It is not meant to be very secure or unique**; it is suitable for generating up to 10 000 items per
+/// second with a reasonable collision rate of 0.000014% as calculated by this calculator: https://kevingal.com/apps/collision.html
+#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
 pub struct Id {
     value: String,
 }
@@ -9,4 +27,31 @@ impl Id {
     pub fn from_string(value: String) -> Self {
         Self { value }
     }
+    pub fn from_str(value: &str) -> Self {
+        Self::from_string(value.to_string())
+    }
+}
+impl std::fmt::Display for Id {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.write_str(&self.value)
+    }
+}
+impl Default for Id {
+    fn default() -> Self {
+        let start = SystemTime::now();
+        let since_the_epoch = start
+            .duration_since(UNIX_EPOCH)
+            .expect("Time went backwards");
+        let timestamp = since_the_epoch.as_secs();
+        let hex_timestamp = format!("{:x}", timestamp & 0xffffff);
+        let random_part: String = Alphanumeric.sample_string(&mut rand::rng(), 7);
+        let value = format!("{}_{}", hex_timestamp, random_part);
+        Self { value }
+    }
 }
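
A small usage sketch of the Id additions, assuming the harmony crate exposing data::Id (as imported by the example-tenant binary) is available as a dependency.

use harmony::data::Id;

fn main() {
    // Fixed ids for things named by the operator...
    let tenant_id = Id::from_str("test-tenant-id");
    // ...and sortable, timestamp-prefixed ids for generated items.
    let generated = Id::default();
    // Display is implemented, so ids can be logged or interpolated directly.
    println!("tenant={tenant_id} generated={generated}"); // e.g. generated=462d4c_g2COgai
}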

View File

@@ -138,7 +138,8 @@ impl ManagementInterface for ManualManagementInterface {
     }
     fn get_supported_protocol_names(&self) -> String {
-        todo!()
+        // todo!()
+        "none".to_string()
     }
 }

View File

@@ -15,10 +15,14 @@ pub enum InterpretName {
     LoadBalancer,
     Tftp,
     Http,
+    Ipxe,
     Dummy,
     Panic,
     OPNSense,
     K3dInstallation,
+    TenantInterpret,
+    Application,
+    ArgoCD,
 }
 impl std::fmt::Display for InterpretName {
@@ -29,10 +33,14 @@ impl std::fmt::Display for InterpretName {
             InterpretName::LoadBalancer => f.write_str("LoadBalancer"),
             InterpretName::Tftp => f.write_str("Tftp"),
             InterpretName::Http => f.write_str("Http"),
+            InterpretName::Ipxe => f.write_str("iPXE"),
             InterpretName::Dummy => f.write_str("Dummy"),
             InterpretName::Panic => f.write_str("Panic"),
             InterpretName::OPNSense => f.write_str("OPNSense"),
             InterpretName::K3dInstallation => f.write_str("K3dInstallation"),
+            InterpretName::TenantInterpret => f.write_str("Tenant"),
+            InterpretName::Application => f.write_str("Application"),
+            InterpretName::ArgoCD => f.write_str("ArgoCD"),
         }
     }
 }
@@ -120,3 +128,11 @@ impl From<kube::Error> for InterpretError {
         }
     }
 }
+impl From<String> for InterpretError {
+    fn from(value: String) -> Self {
+        Self {
+            msg: format!("InterpretError : {value}"),
+        }
+    }
+}
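
A self-contained sketch of what the new From<String> impl enables: string errors can now bubble up with `?` into InterpretError. The struct below is a stand-in mirroring the shape used in the diff, and render_yaml is a hypothetical helper.

#[derive(Debug)]
struct InterpretError {
    msg: String,
}

// Same conversion as the impl added above.
impl From<String> for InterpretError {
    fn from(value: String) -> Self {
        Self { msg: format!("InterpretError : {value}") }
    }
}

fn render_yaml(ok: bool) -> Result<String, String> {
    if ok { Ok("kind: Namespace".into()) } else { Err("serialization failed".into()) }
}

fn apply(ok: bool) -> Result<(), InterpretError> {
    // `?` converts the String error into an InterpretError via the From impl.
    let _yaml = render_yaml(ok)?;
    Ok(())
}

fn main() {
    assert!(apply(true).is_ok());
    println!("{}", apply(false).unwrap_err().msg); // InterpretError : serialization failed
}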

View File

@@ -34,6 +34,17 @@ pub struct Inventory {
 }
 impl Inventory {
+    pub fn empty() -> Self {
+        Self {
+            location: Location::new("Empty".to_string(), "location".to_string()),
+            switch: vec![],
+            firewall: vec![],
+            worker_host: vec![],
+            storage_host: vec![],
+            control_plane_host: vec![],
+        }
+    }
     pub fn autoload() -> Self {
         Self {
             location: Location::test_building(),
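
Inventory::empty() is handy when a score only targets a Kubernetes topology and no physical hosts are modelled yet. A sketch reusing the bootstrap shown in the example-tenant binary above:

use harmony::{inventory::Inventory, maestro::Maestro, topology::K8sAnywhereTopology};

#[tokio::main]
async fn main() {
    // Same bootstrap as example-tenant, but with an explicitly empty inventory
    // instead of Inventory::autoload().
    let _maestro = Maestro::<K8sAnywhereTopology>::initialize(
        Inventory::empty(),
        K8sAnywhereTopology::from_env(),
    )
    .await
    .unwrap();
}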

View File

@@ -19,7 +19,10 @@ pub struct Maestro<T: Topology> {
 }
 impl<T: Topology> Maestro<T> {
-    pub fn new(inventory: Inventory, topology: T) -> Self {
+    /// Creates a bare maestro without initialization.
+    ///
+    /// This should rarely be used. Most of the time Maestro::initialize should be used instead.
+    pub fn new_without_initialization(inventory: Inventory, topology: T) -> Self {
         Self {
             inventory,
             topology,
@@ -29,7 +32,7 @@ impl<T: Topology> Maestro<T> {
         }
     }
     pub async fn initialize(inventory: Inventory, topology: T) -> Result<Self, InterpretError> {
-        let instance = Self::new(inventory, topology);
+        let instance = Self::new_without_initialization(inventory, topology);
         instance.prepare_topology().await?;
         Ok(instance)

View File

@@ -0,0 +1,59 @@
////////////////////
/// Working idea
///
///
trait ScoreWithDep<T> {
fn create_interpret(&self) -> Box<dyn Interpret<T>>;
fn name(&self) -> String;
fn get_dependencies(&self) -> Vec<TypeId>; // Force T to impl Installer<TypeId> or something
// like that
}
struct PrometheusAlertScore;
impl <T> ScoreWithDep<T> for PrometheusAlertScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
todo!()
}
fn name(&self) -> String {
todo!()
}
fn get_dependencies(&self) -> Vec<TypeId> {
// We have to find a way to constrain this so that at compile time we are only allowed to return
// TypeId for types which can be installed by T
//
// This means, for example that T must implement HelmCommand if the impl <T: HelmCommand> Installable<T> for
// KubePrometheus calls for HelmCommand.
vec![TypeId::of::<KubePrometheus>()]
}
}
trait Installable{}
struct KubePrometheus;
impl Installable for KubePrometheus {}
struct Maestro<T> {
topology: T
}
impl<T> Maestro<T> {
fn execute_store(&self, score: &dyn ScoreWithDep<T>) {
score.get_dependencies().iter().for_each(|dep| {
self.topology.ensure_dependency_ready(dep);
});
}
}
struct TopologyWithDep {
}
impl TopologyWithDep {
fn ensure_dependency_ready(&self, type_id: TypeId) -> Result<(), String> {
self.installer
}
}
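
One possible way to get the compile-time constraint the comment above asks for is to replace the TypeId list with a marker trait, so that "this dependency is installable by T" becomes a trait bound. A hedged sketch; every name below is illustrative and not part of this change:

use std::marker::PhantomData;

// "D can be installed on topology T" expressed as a trait bound instead of a TypeId.
trait InstallableBy<T> {
    fn install(topology: &T) -> Result<(), String>;
}

trait HelmCommand {
    fn helm(&self, args: &[&str]) -> Result<(), String>;
}

struct KubePrometheus;

// KubePrometheus is only installable on topologies that can run helm.
impl<T: HelmCommand> InstallableBy<T> for KubePrometheus {
    fn install(topology: &T) -> Result<(), String> {
        topology.helm(&["install", "kube-prometheus-stack"])
    }
}

struct PrometheusAlertScore<T>(PhantomData<T>);

impl<T> PrometheusAlertScore<T>
where
    KubePrometheus: InstallableBy<T>,
{
    // The where-clause replaces the runtime TypeId lookup: this only compiles
    // when T provides what KubePrometheus::install needs (here, HelmCommand).
    fn ensure_dependencies(&self, topology: &T) -> Result<(), String> {
        <KubePrometheus as InstallableBy<T>>::install(topology)
    }
}

struct K3dTopology;
impl HelmCommand for K3dTopology {
    fn helm(&self, args: &[&str]) -> Result<(), String> {
        println!("helm {}", args.join(" "));
        Ok(())
    }
}

fn main() {
    let score = PrometheusAlertScore::<K3dTopology>(PhantomData);
    score.ensure_dependencies(&K3dTopology).unwrap();
}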

View File

@@ -46,7 +46,7 @@ pub struct HAClusterTopology {
 #[async_trait]
 impl Topology for HAClusterTopology {
     fn name(&self) -> &str {
-        todo!()
+        "HAClusterTopology"
     }
     async fn ensure_ready(&self) -> Result<Outcome, InterpretError> {
         todo!(
@@ -168,6 +168,16 @@ impl DhcpServer for HAClusterTopology {
     async fn commit_config(&self) -> Result<(), ExecutorError> {
         self.dhcp_server.commit_config().await
     }
+    async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError> {
+        self.dhcp_server.set_filename(filename).await
+    }
+    async fn set_filename64(&self, filename64: &str) -> Result<(), ExecutorError> {
+        self.dhcp_server.set_filename64(filename64).await
+    }
+    async fn set_filenameipxe(&self, filenameipxe: &str) -> Result<(), ExecutorError> {
+        self.dhcp_server.set_filenameipxe(filenameipxe).await
+    }
 }
 #[async_trait]
@@ -293,6 +303,15 @@ impl DhcpServer for DummyInfra {
     async fn set_boot_filename(&self, _boot_filename: &str) -> Result<(), ExecutorError> {
         unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
     }
+    async fn set_filename(&self, _filename: &str) -> Result<(), ExecutorError> {
+        unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
+    }
+    async fn set_filename64(&self, _filename: &str) -> Result<(), ExecutorError> {
+        unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
+    }
+    async fn set_filenameipxe(&self, _filenameipxe: &str) -> Result<(), ExecutorError> {
+        unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
+    }
     fn get_ip(&self) -> IpAddress {
         unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
     }
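
The three new DhcpServer setters presumably map to the usual netboot fields: the BIOS boot file, the 64-bit UEFI boot file, and the file handed to clients already running iPXE. A self-contained sketch of how a caller might drive them, using a stand-in trait that mirrors the signatures added above and the async-trait/tokio workspace dependencies; the trait shape, boot file names, and URL are assumptions:

#[derive(Debug)]
struct ExecutorError(String);

// Stand-in mirroring the DhcpServer methods shown in the diff above.
#[async_trait::async_trait]
trait DhcpServer {
    async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError>;
    async fn set_filename64(&self, filename64: &str) -> Result<(), ExecutorError>;
    async fn set_filenameipxe(&self, filenameipxe: &str) -> Result<(), ExecutorError>;
    async fn commit_config(&self) -> Result<(), ExecutorError>;
}

async fn configure_pxe_boot(dhcp: &dyn DhcpServer) -> Result<(), ExecutorError> {
    dhcp.set_filename("undionly.kpxe").await?; // legacy BIOS clients
    dhcp.set_filename64("ipxe.efi").await?; // 64-bit UEFI clients
    dhcp.set_filenameipxe("http://boot.example.internal/boot.ipxe").await?; // clients already running iPXE
    dhcp.commit_config().await
}

struct LoggingDhcp;

#[async_trait::async_trait]
impl DhcpServer for LoggingDhcp {
    async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError> {
        println!("filename = {filename}");
        Ok(())
    }
    async fn set_filename64(&self, filename64: &str) -> Result<(), ExecutorError> {
        println!("filename64 = {filename64}");
        Ok(())
    }
    async fn set_filenameipxe(&self, filenameipxe: &str) -> Result<(), ExecutorError> {
        println!("filenameipxe = {filenameipxe}");
        Ok(())
    }
    async fn commit_config(&self) -> Result<(), ExecutorError> {
        println!("commit");
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    configure_pxe_boot(&LoggingDhcp).await.unwrap();
}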

Some files were not shown because too many files have changed in this diff.