Compare commits

...

88 Commits

Author SHA1 Message Date
613def5e0b feat: deploys cluster monitoring stack from monitoring score on k8sanywhere topology
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Run Check Script / check (pull_request) Successful in 1m47s
2025-06-11 15:06:39 -04:00
238d1f85e2 wip: impl k8sMonitor
Some checks failed
Run Check Script / check (push) Failing after 45s
Run Check Script / check (pull_request) Failing after 42s
2025-06-11 13:35:07 -04:00
dbc66f3d0c feat: setup basic structure for the concrete implementation of kube prometheus monitor, removed discord webhook receiver trait as the dependency is no longer required for prometheus to interact with discord
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Run Check Script / check (pull_request) Successful in 1m49s
2025-06-06 16:41:17 -04:00
31e59937dc Merge pull request 'feat: Initial setup for monitoring and alerting' (#48) from feat/monitor into master
All checks were successful
Run Check Script / check (push) Successful in 1m50s
Reviewed-on: #48
Reviewed-by: johnride <jg@nationtech.io>
2025-06-03 18:17:13 +00:00
12eb4ae31f fix: cargo fmt
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Run Check Script / check (pull_request) Successful in 1m47s
2025-06-02 16:20:49 -04:00
a2be9457b9 wip: removed AlertReceiverConfig
Some checks failed
Run Check Script / check (push) Failing after 44s
Run Check Script / check (pull_request) Failing after 44s
2025-06-02 16:11:36 -04:00
0d56fbc09d wip: applied comments in pr, changed naming of AlertChannel to AlertReceiver and added rust doc to Monitor for clarity
All checks were successful
Run Check Script / check (push) Successful in 1m49s
Run Check Script / check (pull_request) Successful in 1m47s
2025-06-02 14:44:43 -04:00
56dc1e93c1 fix: modified files in mod
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m46s
2025-06-02 11:47:21 -04:00
691540fe64 wip: modified initial monitoring architecture based on pr review
Some checks failed
Run Check Script / check (push) Failing after 46s
Run Check Script / check (pull_request) Failing after 43s
2025-06-02 11:42:37 -04:00
7e3f1b1830 fix: cargo fmt
All checks were successful
Run Check Script / check (push) Successful in 1m45s
Run Check Script / check (pull_request) Successful in 1m45s
2025-05-30 13:59:29 -04:00
b631e8ccbb feat: Initial setup for monitoring and alerting
Some checks failed
Run Check Script / check (push) Failing after 43s
Run Check Script / check (pull_request) Failing after 45s
2025-05-30 13:21:38 -04:00
60f2f31d6c feat: Add TenantScore and TenantInterpret (#45)
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Reviewed-on: #45
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-05-30 13:13:43 +00:00
27f1a9dbdd feat: add more to the tenantmanager k8s impl (#46)
All checks were successful
Run Check Script / check (push) Successful in 1m55s
Co-authored-by: Willem <wrolleman@nationtech.io>
Reviewed-on: #46
Co-authored-by: Taha Hawa <taha@taha.dev>
Co-committed-by: Taha Hawa <taha@taha.dev>
2025-05-29 20:15:38 +00:00
e7917843bc Merge pull request 'feat: Add initial Tenant traits and data structures' (#43) from feat/tenant into master
Some checks failed
Run Check Script / check (push) Has been cancelled
Reviewed-on: #43
2025-05-29 15:51:33 +00:00
7cd541bdd8 chore: Fix pr comments, remove many YAGNI things
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Run Check Script / check (pull_request) Successful in 1m46s
2025-05-29 11:47:25 -04:00
270dd49567 Merge pull request 'docs: Add CONTRIBUTING.md guide' (#44) from doc/contributor into master
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Reviewed-on: #44
2025-05-29 14:48:18 +00:00
0187300473 docs: Add CONTRIBUTING.md guide
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Run Check Script / check (pull_request) Successful in 1m47s
2025-05-29 10:47:38 -04:00
bf16566b4e wip: Clean up some unnecessary bits in the Tenant module and move manager to its own file
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m46s
2025-05-29 07:25:45 -04:00
895fb02f4e feat: Add initial Tenant traits and data structures
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Run Check Script / check (pull_request) Successful in 1m45s
2025-05-28 22:33:46 -04:00
88d6af9815 Merge pull request 'feat/basicCI' (#42) from feat/basicCI into master
All checks were successful
Run Check Script / check (push) Successful in 1m50s
Reviewed-on: #42
Reviewed-by: taha <taha@noreply.git.nationtech.io>
2025-05-28 19:42:19 +00:00
5aa9dc701f fix: Removed forgotten refactoring bits and formatting
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Run Check Script / check (pull_request) Successful in 1m48s
2025-05-28 15:19:39 -04:00
f4ef895d2e feat: Add basic CI configuration
Some checks failed
Run Check Script / check (push) Failing after 51s
2025-05-28 14:40:19 -04:00
6e7148a945 Merge pull request 'adr: Add ADR on multi tenancy using namespace based customer isolation' (#41) from adr/multi-tenancy into master
Reviewed-on: #41
2025-05-26 20:26:36 +00:00
83453273c6 adr: Add ADR on multi tenancy using namespace based customer isolation 2025-05-26 11:56:45 -04:00
76ae5eb747 fix: make HelmRepository public (#39)
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #39
Reviewed-by: johnride <jg@nationtech.io>
2025-05-22 20:07:42 +00:00
9c51040f3b Merge pull request 'feat: added Slack notifications support' (#38) from feat/slack-notifs into master
Reviewed-on: #38
Reviewed-by: johnride <jg@nationtech.io>
2025-05-22 20:04:51 +00:00
e1a8ee1c15 feat: send alerts to multiple alert channels 2025-05-22 14:16:41 -04:00
44b2b092a8 feat: added Slack notifications support 2025-05-21 15:29:14 -04:00
19bd47a545 Merge pull request 'monitoringalerting' (#37) from monitoringalerting into master
Reviewed-on: #37
Reviewed-by: johnride <jg@nationtech.io>
2025-05-21 17:32:26 +00:00
2b6d2e8606 fix: merge conflict 2025-05-20 16:05:38 -04:00
7fc2b1ebfe feat: added monitoring stack example to lamp demo 2025-05-20 15:59:01 -04:00
e80752ea3f feat: install discord alert manager helm chart when Discord is the chosen alerting channel 2025-05-20 15:51:03 -04:00
bae7222d64 Our own Helm Command/Resource/Executor (WIP) (#13)
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #13
Co-authored-by: Taha Hawa <taha@taha.dev>
Co-committed-by: Taha Hawa <taha@taha.dev>
2025-05-20 14:01:10 +00:00
f7d3da3ac9 fix merge conflict 2025-05-15 15:31:26 -04:00
eb8a8a2e04 chore: modified build config to be able to pass namespace to the config 2025-05-15 15:19:40 -04:00
b4c6848433 feat: added default monitoringStackScore implementation 2025-05-15 14:52:04 -04:00
0d94c537a0 feat: add ingress score (#32)
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #32
Reviewed-by: wjro <wrolleman@nationtech.io>
2025-05-15 16:11:40 +00:00
861f266c4e Merge pull request 'feat: LAMP stack and Monitoring stack now work on OKD, we just have to manually set a few serviceaccounts to privileged scc until we find a better solution' (#36) from feat/lampOKD into master
Reviewed-on: #36
2025-05-14 15:48:56 +00:00
51724d0e55 feat: LAMP stack and Monitoring stack now work on OKD, we just have to manually set a few serviceaccounts to privileged scc until we find a better solution 2025-05-14 11:47:39 -04:00
c2d1cb9b76 Merge pull request 'upgrade stack size from default 1MB on windows (k3d stack overflow otherwise)' (#34) from windows-stack-size-increase into master
Reviewed-on: #34
Reviewed-by: johnride <jg@nationtech.io>
2025-05-14 14:29:51 +00:00
tahahawa c84a02c8ec upgrade stack size from default 1MB on windows (k3d stack overflow otherwise) 2025-05-11 22:39:23 -04:00
8d3d167848 fix: Remove todo statements for lamp score and k8s related features that are now complete! 2025-05-06 14:46:57 -04:00
94f6cc6942 fix: kube_prometheus missing new field repo in HelmChartScore 2025-05-06 13:57:58 -04:00
4a9b95acad Merge pull request 'monitoring-alerting' (#30) from monitoring-alerting into master
Reviewed-on: #30
2025-05-06 17:50:56 +00:00
ef9c1cce77 fix: yaml structure 2025-05-06 13:42:59 -04:00
df65ac3439 formatting: Fix format of load_balancer.rs 2025-05-06 13:38:21 -04:00
e5ddd296db Merge pull request 'feat: add cert-manager module and helm repo support' (#31) from feat/awsOKD into master
Reviewed-on: #31
2025-05-06 16:39:19 +00:00
4be008556e feat: add cert-manager module and helm repo support
- Implemented a new `cert-manager` module for deploying cert-manager.
- Added support for specifying a Helm repository in module configurations.
- Introduced `cert_manager` module in `modules/mod.rs`.
- Created `src/modules/cert_manager` directory and its associated code.
- Implemented `add_repo` function in `src/modules/helm.rs` for adding Helm repositories.
- Updated `LAMPInterpret` and `lamp.rs` to integrate the new module.
- Added logging for Helm command execution.
- Updated k8s deployment file to remove unused DeepMerge dependency.
2025-05-06 16:38:57 +00:00
78e9893341 Merge pull request 'feat: started to prepare inventory / topology for NCD' (#1) from feat/settingUpNDC into master
Reviewed-on: #1
2025-05-06 16:38:40 +00:00
d9921b857b fix: installs helm chart 2025-05-06 12:23:03 -04:00
e62ef001ed fix: Fix opnsense test, Host.tll now optional and run cargo fmt 2025-05-06 12:00:56 -04:00
1fb7132c64 Merge branch 'master' into feat/settingUpNDC 2025-05-06 11:58:12 -04:00
2d74c66fc6 wip: trying to get the kube-prometheus score to install 2025-05-06 11:54:10 -04:00
8a199b64f5 feat: Upgrade opnsense-config crates to be compatible with opnsense 25.1_5 2025-05-06 11:45:19 -04:00
b7fe62fcbb feat: ncd0 example complete. Missing files for authentication, ignition etc. are accessible upon demand. This is yet another great step towards fully automated UPI provisioning 2025-05-06 11:44:40 -04:00
cd8542258c Merge remote-tracking branch 'origin/master' into monitoring-alerting 2025-05-06 10:03:27 -04:00
472a3c1051 fix: correctly pass namespace and monitoring stack to topology so it can be used to init the maestro and exec the score 2025-05-06 10:02:21 -04:00
88270ece61 fix: refactor so that the topology installs the MonitoringAlertingStack depending on if it is already present in the cluster 2025-05-05 16:37:15 -04:00
e7cfbf914a feat: added basic alert for pvc 95% full to kube-prometheus score 2025-05-05 15:38:37 -04:00
fbd466a85c added file 2025-05-05 13:40:32 -04:00
2f8e150f41 feat: added Score and topology to create kube prometheus monitoring and alerting stack 2025-05-05 12:49:28 -04:00
764fd6d451 Merge pull request 'chore: added default mariadb size and pass env variables to php app' (#28) from lamp-env-vars into master
Reviewed-on: #28
Reviewed-by: johnride <jg@nationtech.io>
2025-05-03 00:20:47 +00:00
78fffcd725 fix: specified 2Gi db size from LAMPconfig 2025-05-02 15:07:39 -04:00
e1133ea114 use default database_size None in LampConfig to default to value from helm chart 2025-05-02 15:02:50 -04:00
d8e8a49745 Merge pull request 'feat:php program to fill pvc and report database usage' (#29) from pvc-filler into master
Reviewed-on: #29
2025-05-02 16:06:46 +00:00
a7ba9be486 feat: php program to fill pvc and report database usage 2025-05-02 12:03:18 -04:00
1c3669cb47 chore: added default mariadb size and pass env variables to php app 2025-05-02 11:56:27 -04:00
90b80b24bc Merge pull request 'feat: push docker image to registry and deploy with full tag' (#27) from feat/lampDatabase into master
Reviewed-on: #27
Reviewed-by: wjro <wrolleman@nationtech.io>
2025-05-01 17:39:23 +00:00
c879ca143f feat: Add comments explaining a bit of what harmony does in the lamp demo 2025-04-30 23:36:12 -04:00
bc2bd2f2f4 feat: push docker image to registry and deploy with full tag
- Added functionality to tag and push the built Docker image to a specified registry.
- Modified deployment score to use the full image tag (including registry and project).
- Included error handling and logging for the `docker tag` and `docker push` commands.
- Updated the `K8sDeploymentScore` struct to include a namespace field and environment variables for database credentials.
- Added kebab-case conversion for deployment name and namespace.
- Implemented a check_output function for better error reporting.
2025-04-30 22:33:31 -04:00
28978299c9 Merge pull request 'feat: add mariadb helm deployment to lamp interpreter' (#26) from feat/lampDatabase into master
Reviewed-on: #26
Reviewed-by: wjro <wrolleman@nationtech.io>
2025-04-30 20:03:13 +00:00
87f6afc249 feat: add mariadb helm deployment to lamp interpreter
- Adds a `deploy_database` function to the `LAMPInterpret` struct to deploy a MariaDB database using Helm.
- Integrates `HelmCommand` trait requirement to the `LAMPInterpret` struct.
- Introduces `HelmChartScore` to manage MariaDB deployment.
- Adds namespace configuration for helm deployments.
- Updates trait bounds for `LAMPInterpret` to include `HelmCommand`.
- Implements `get_namespace` function to retrieve the namespace.
2025-04-30 15:40:26 -04:00
a6bcaade46 wip: alerting 2025-04-29 11:28:32 -04:00
6c145f1100 wip: initial layout 2025-04-28 16:31:22 -04:00
40cd765019 WIP: initial layout for MonitoringStackScore 2025-04-28 16:18:44 -04:00
c8547e38f2 feat(ipxe): create empty score shell for ipxe 2025-03-02 12:14:04 -05:00
bfc79abfb6 feat(ipxe): added slitaz image (super small image with gui and tools) 2025-03-02 10:05:50 -05:00
7697a170bd feat(ipxe): setup to have MAC specific bootfiles and fallback to a default if not found 2025-03-02 09:37:11 -05:00
941c9bc0b0 fix: missing protocol for ipxe boot file 2025-03-02 07:59:30 -05:00
51aeea1ec9 feat: support new configurable field in dhcp config: filenameipxe 2025-03-01 10:51:01 -05:00
8118df85ee feat: support new configurable field in dhcp config: filename64 2025-03-01 10:41:41 -05:00
7af83910ef doc: fix 2025-03-01 10:29:22 -05:00
1475f4af0c doc: fix 2025-03-01 10:25:24 -05:00
a3a61c734f doc: update README.md with instructions on how to add a field in opnsense config.xml 2025-03-01 10:17:51 -05:00
3f77bc7aef feat: support new configurable field in dhcp config: filename 2025-03-01 08:56:41 -05:00
d5125dd811 feat(iPXE): adding files for iPXE and memtest86 image 2025-02-23 16:39:57 -05:00
1ca316c085 wip: added new xml fields for Caddy + legacy pxe filename 2025-02-22 14:24:51 -05:00
e390f1edb3 feat: started to prepare inventory / topology for NCD 2025-02-22 11:12:28 -05:00
92 changed files with 4855 additions and 318 deletions

5
.cargo/config.toml Normal file

@@ -0,0 +1,5 @@
[target.x86_64-pc-windows-msvc]
rustflags = ["-C", "link-arg=/STACK:8000000"]
[target.x86_64-pc-windows-gnu]
rustflags = ["-C", "link-arg=-Wl,--stack,8000000"]


@@ -0,0 +1,14 @@
name: Run Check Script
on:
push:
pull_request:
jobs:
check:
runs-on: rust-cargo
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run check script
run: bash check.sh

36
CONTRIBUTING.md Normal file

@@ -0,0 +1,36 @@
# Contributing to the Harmony project
## Write small P-Rs
Aim for the smallest piece of work that is mergeable.
Mergeable means that:
- it does not break the build
- it moves the codebase one step forward
P-Rs can be many things; they do not have to be complete features.
### What a P-R **should** be
- Introduce a new trait: this will be the place to discuss the new trait addition, its design and implementation
- A new implementation of a trait: a new concrete implementation of the LoadBalancer trait
- A new CI check: something that improves quality, robustness, or CI performance
- Documentation improvements
- Refactoring
- Bugfix
### What a P-R **should not** be
- Large. Anything over 200 lines (excluding generated lines) should have a very good reason to be this large.
- A mix of refactoring, bug fixes and new features.
- Introducing multiple new features or ideas at once.
- Multiple new implementations of a trait/functionality at once
The general idea is to keep P-Rs small and single purpose.
## Commit message formatting
We follow the Conventional Commits guidelines:
https://www.conventionalcommits.org/en/v1.0.0/
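For example, past commits in this repository follow the pattern:
```
feat: Add basic CI configuration
fix: make HelmRepository public
docs: Add CONTRIBUTING.md guide
```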

402
Cargo.lock generated

@@ -4,19 +4,13 @@ version = 4
[[package]]
name = "addr2line"
version = "0.21.0"
version = "0.24.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a30b2e23b9e17a9f90641c7ab1549cd9b44f296d3ccbf309d2863cfe398a0cb"
checksum = "dfbe277e56a376000877090da837660b4427aad530e3028d44e0bffe4f89a1c1"
dependencies = [
"gimli",
]
[[package]]
name = "adler"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe"
[[package]]
name = "adler2"
version = "2.0.0"
@@ -60,15 +54,15 @@ dependencies = [
[[package]]
name = "ahash"
version = "0.8.11"
version = "0.8.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e89da841a80418a9b391ebaea17f5c112ffaaa96f621d2c285b5174da76b9011"
checksum = "5a15f179cd60c4584b8a8c596927aadc462e27f2ca70c04e0071964a73ba7a75"
dependencies = [
"cfg-if",
"const-random",
"once_cell",
"version_check",
"zerocopy 0.7.35",
"zerocopy",
]
[[package]]
@@ -198,17 +192,17 @@ checksum = "ace50bade8e6234aa140d9a2f552bbee1db4d353f69b8217bc503490fc1a9f26"
[[package]]
name = "backtrace"
version = "0.3.71"
version = "0.3.75"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26b05800d2e817c8b3b4b54abd461726265fa9789ae34330622f2db9ee696f9d"
checksum = "6806a6321ec58106fea15becdad98371e28d92ccbc7c8f1b3b6dd724fe8f1002"
dependencies = [
"addr2line",
"cc",
"cfg-if",
"libc",
"miniz_oxide 0.7.4",
"miniz_oxide",
"object",
"rustc-demangle",
"windows-targets 0.52.6",
]
[[package]]
@@ -254,9 +248,9 @@ checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a"
[[package]]
name = "bitflags"
version = "2.9.0"
version = "2.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c8214115b7bf84099f1309324e63141d4c5d7cc26862f97a0a857dbefe165bd"
checksum = "1b8e56985ec62d17e9c1001dc89c88ecd7dc08e47eba5ec7c29c7b5eeecde967"
dependencies = [
"serde",
]
@@ -356,9 +350,9 @@ dependencies = [
[[package]]
name = "cc"
version = "1.2.20"
version = "1.2.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "04da6a0d40b948dfc4fa8f5bbf402b0fc1a64a28dbf7d12ffd683550f2c1b63a"
checksum = "32db95edf998450acc7881c932f94cd9b05c87b4b2599e8bab064753da4acfd1"
dependencies = [
"shlex",
]
@@ -413,9 +407,9 @@ dependencies = [
[[package]]
name = "clap"
version = "4.5.37"
version = "4.5.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eccb054f56cbd38340b380d4a8e69ef1f02f1af43db2f0cc817a4774d80ae071"
checksum = "ed93b9805f8ba930df42c2590f05453d5ec36cbb85d018868a5b24d31f6ac000"
dependencies = [
"clap_builder",
"clap_derive",
@@ -423,9 +417,9 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.37"
version = "4.5.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "efd9466fac8543255d3b1fcad4762c5e116ffe808c8a3043d4263cd4fd4862a2"
checksum = "379026ff283facf611b0ea629334361c4211d1b12ee01024eec1591133b04120"
dependencies = [
"anstream",
"anstyle",
@@ -453,9 +447,9 @@ checksum = "f46ad14479a25103f283c0f10005961cf086d8dc42205bb44c46ac563475dca6"
[[package]]
name = "color-eyre"
version = "0.6.3"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "55146f5e46f237f7423d74111267d4597b59b0dad0ffaf7303bce9945d843ad5"
checksum = "e6e1761c0e16f8883bbbb8ce5990867f4f06bf11a0253da6495a04ce4b6ef0ec"
dependencies = [
"backtrace",
"color-spantrace",
@@ -468,9 +462,9 @@ dependencies = [
[[package]]
name = "color-spantrace"
version = "0.2.1"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cd6be1b2a7e382e2b98b43b2adcca6bb0e465af0bdd38123873ae61eb17a72c2"
checksum = "2ddd8d5bfda1e11a501d0a7303f3bfed9aa632ebdb859be40d0fd70478ed70d5"
dependencies = [
"once_cell",
"owo-colors",
@@ -524,6 +518,15 @@ dependencies = [
"tiny-keccak",
]
[[package]]
name = "convert_case"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baaaa0ecca5b51987b9423ccdc971514dd8b0bb7b4060b983d3664dad3f1f89f"
dependencies = [
"unicode-segmentation",
]
[[package]]
name = "core-foundation"
version = "0.9.4"
@@ -605,7 +608,7 @@ version = "0.28.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "829d955a0bb380ef178a640b91779e3987da38c9aea133b20614cfed8cdea9c6"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"crossterm_winapi",
"futures-core",
"mio 1.0.3",
@@ -927,6 +930,15 @@ dependencies = [
"zeroize",
]
[[package]]
name = "email_address"
version = "0.2.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e079f19b08ca6239f47f8ba8509c11cf3ea30095831f7fed61441475edd8c449"
dependencies = [
"serde",
]
[[package]]
name = "encoding_rs"
version = "0.8.35"
@@ -1020,8 +1032,8 @@ dependencies = [
"cidr",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_tui",
"harmony_types",
"log",
"tokio",
@@ -1112,7 +1124,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7ced92e76e966ca2fd84c8f7aa01a4aea65b0eb6648d72f7c8f3e2764a67fece"
dependencies = [
"crc32fast",
"miniz_oxide 0.8.8",
"miniz_oxide",
]
[[package]]
@@ -1163,6 +1175,16 @@ dependencies = [
"percent-encoding",
]
[[package]]
name = "fqdn"
version = "0.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0f5d7f7b3eed2f771fc7f6fcb651f9560d7b0c483d75876082acb4649d266b3"
dependencies = [
"punycode",
"serde",
]
[[package]]
name = "funty"
version = "2.0.0"
@@ -1302,9 +1324,9 @@ dependencies = [
[[package]]
name = "getrandom"
version = "0.3.2"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "73fea8450eea4bac3940448fb7ae50d91f034f941199fcd9d909a5a07aa455f0"
checksum = "26145e563e54f2cadc477553f1ec5ee650b00862f0a58bcd12cbdc5f0ea2d2f4"
dependencies = [
"cfg-if",
"libc",
@@ -1324,9 +1346,9 @@ dependencies = [
[[package]]
name = "gimli"
version = "0.28.1"
version = "0.31.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4271d37baee1b8c7e4b708028c57d816cf9d2434acb33a549475f78c181f6253"
checksum = "07e28edb80900c19c28f1072f2e8aeca7fa06b23cd4169cefe1af5aa3260783f"
[[package]]
name = "group"
@@ -1360,9 +1382,9 @@ dependencies = [
[[package]]
name = "h2"
version = "0.4.9"
version = "0.4.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75249d144030531f8dee69fe9cea04d3edf809a017ae445e2abdff6629e86633"
checksum = "a9421a676d1b147b16b82c9225157dc629087ef8ec4d5e2960f9437a90dac0a5"
dependencies = [
"atomic-waker",
"bytes",
@@ -1383,10 +1405,13 @@ version = "0.1.0"
dependencies = [
"async-trait",
"cidr",
"convert_case",
"derive-new",
"directories",
"dockerfile_builder",
"email_address",
"env_logger",
"fqdn",
"harmony_macros",
"harmony_types",
"helm-wrapper-rs",
@@ -1409,6 +1434,7 @@ dependencies = [
"serde-value",
"serde_json",
"serde_yaml",
"temp-dir",
"temp-file",
"tokio",
"url",
@@ -1466,9 +1492,9 @@ dependencies = [
[[package]]
name = "hashbrown"
version = "0.15.2"
version = "0.15.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bf151400ff0baff5465007dd2f3e717f3fe502074ca563069ce3a6629d07b289"
checksum = "84b26c544d002229e640969970a2e74021aadf6e2f96372b9c58eff97de08eb3"
dependencies = [
"allocator-api2",
"equivalent",
@@ -1682,7 +1708,7 @@ dependencies = [
"bytes",
"futures-channel",
"futures-util",
"h2 0.4.9",
"h2 0.4.10",
"http 1.3.1",
"http-body 1.0.1",
"httparse",
@@ -1821,21 +1847,22 @@ dependencies = [
[[package]]
name = "icu_collections"
version = "1.5.0"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "db2fa452206ebee18c4b5c2274dbf1de17008e874b4dc4f0aea9d01ca79e4526"
checksum = "200072f5d0e3614556f94a9930d5dc3e0662a652823904c3a75dc3b0af7fee47"
dependencies = [
"displaydoc",
"potential_utf",
"yoke",
"zerofrom",
"zerovec",
]
[[package]]
name = "icu_locid"
version = "1.5.0"
name = "icu_locale_core"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "13acbb8371917fc971be86fc8057c41a64b521c184808a698c02acc242dbf637"
checksum = "0cde2700ccaed3872079a65fb1a78f6c0a36c91570f28755dda67bc8f7d9f00a"
dependencies = [
"displaydoc",
"litemap",
@@ -1844,31 +1871,11 @@ dependencies = [
"zerovec",
]
[[package]]
name = "icu_locid_transform"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "01d11ac35de8e40fdeda00d9e1e9d92525f3f9d887cdd7aa81d727596788b54e"
dependencies = [
"displaydoc",
"icu_locid",
"icu_locid_transform_data",
"icu_provider",
"tinystr",
"zerovec",
]
[[package]]
name = "icu_locid_transform_data"
version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7515e6d781098bf9f7205ab3fc7e9709d34554ae0b21ddbcb5febfa4bc7df11d"
[[package]]
name = "icu_normalizer"
version = "1.5.0"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "19ce3e0da2ec68599d193c93d088142efd7f9c5d6fc9b803774855747dc6a84f"
checksum = "436880e8e18df4d7bbc06d58432329d6458cc84531f7ac5f024e93deadb37979"
dependencies = [
"displaydoc",
"icu_collections",
@@ -1876,67 +1883,54 @@ dependencies = [
"icu_properties",
"icu_provider",
"smallvec",
"utf16_iter",
"utf8_iter",
"write16",
"zerovec",
]
[[package]]
name = "icu_normalizer_data"
version = "1.5.1"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c5e8338228bdc8ab83303f16b797e177953730f601a96c25d10cb3ab0daa0cb7"
checksum = "00210d6893afc98edb752b664b8890f0ef174c8adbb8d0be9710fa66fbbf72d3"
[[package]]
name = "icu_properties"
version = "1.5.1"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "93d6020766cfc6302c15dbbc9c8778c37e62c14427cb7f6e601d849e092aeef5"
checksum = "2549ca8c7241c82f59c80ba2a6f415d931c5b58d24fb8412caa1a1f02c49139a"
dependencies = [
"displaydoc",
"icu_collections",
"icu_locid_transform",
"icu_locale_core",
"icu_properties_data",
"icu_provider",
"tinystr",
"potential_utf",
"zerotrie",
"zerovec",
]
[[package]]
name = "icu_properties_data"
version = "1.5.1"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "85fb8799753b75aee8d2a21d7c14d9f38921b54b3dbda10f5a3c7a7b82dba5e2"
checksum = "8197e866e47b68f8f7d95249e172903bec06004b18b2937f1095d40a0c57de04"
[[package]]
name = "icu_provider"
version = "1.5.0"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6ed421c8a8ef78d3e2dbc98a973be2f3770cb42b606e3ab18d6237c4dfde68d9"
checksum = "03c80da27b5f4187909049ee2d72f276f0d9f99a42c306bd0131ecfe04d8e5af"
dependencies = [
"displaydoc",
"icu_locid",
"icu_provider_macros",
"icu_locale_core",
"stable_deref_trait",
"tinystr",
"writeable",
"yoke",
"zerofrom",
"zerotrie",
"zerovec",
]
[[package]]
name = "icu_provider_macros"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1ec89e9337638ecdc08744df490b221a7399bf8d164eb52a665454e60e075ad6"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "ident_case"
version = "1.0.1"
@@ -1956,9 +1950,9 @@ dependencies = [
[[package]]
name = "idna_adapter"
version = "1.2.0"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "daca1df1c957320b2cf139ac61e7bd64fed304c5040df000a745aa1de3b4ef71"
checksum = "3acae9609540aa318d1bc588455225fb2085b9ed0c4f6bd0d9d5bcd86f1a0344"
dependencies = [
"icu_normalizer",
"icu_properties",
@@ -2002,7 +1996,7 @@ version = "0.7.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fddf93031af70e75410a2511ec04d49e758ed2f26dad3404a934e0fb45cc12a"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"crossterm 0.25.0",
"dyn-clone",
"fuzzy-matcher",
@@ -2065,9 +2059,9 @@ checksum = "4a5f13b858c8d314ee3e8f639011f7ccefe71f97f96e50151fb991f267928e2c"
[[package]]
name = "jiff"
version = "0.2.10"
version = "0.2.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a064218214dc6a10fbae5ec5fa888d80c45d611aba169222fc272072bf7aef6"
checksum = "f02000660d30638906021176af16b17498bd0d12813dbfe7b276d8bc7f3c0806"
dependencies = [
"jiff-static",
"log",
@@ -2078,9 +2072,9 @@ dependencies = [
[[package]]
name = "jiff-static"
version = "0.2.10"
version = "0.2.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "199b7932d97e325aff3a7030e141eafe7f2c6268e1d1b24859b753a627f45254"
checksum = "f3c30758ddd7188629c6713fc45d1188af4f44c90582311d0c8d8c9907f60c48"
dependencies = [
"proc-macro2",
"quote",
@@ -2239,9 +2233,9 @@ checksum = "d750af042f7ef4f724306de029d18836c26c1765a54a6a3f094cbd23a7267ffa"
[[package]]
name = "libm"
version = "0.2.13"
version = "0.2.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c9627da5196e5d8ed0b0495e61e518847578da83483c37288316d9b2e03a7f72"
checksum = "f9fbbcab51052fe104eb5e5d351cf728d30a5be1fe14d9be8a3b097481fb97de"
[[package]]
name = "libredfish"
@@ -2262,7 +2256,7 @@ version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0ff37bd590ca25063e35af745c343cb7a0271906fb7b37e4813e8f79f00268d"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"libc",
]
@@ -2280,9 +2274,9 @@ checksum = "cd945864f07fe9f5371a27ad7b52a172b4b499999f1d97574c9fa68373937e12"
[[package]]
name = "litemap"
version = "0.7.5"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "23fb14cb19457329c82206317a5663005a4d404783dc74f4252769b0d5f42856"
checksum = "241eaef5fd12c88705a01fc1066c48c4b36e0dd4377dcdc7ec3942cea7a69956"
[[package]]
name = "lock_api"
@@ -2336,15 +2330,6 @@ version = "0.3.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6877bb514081ee2a7ff5ef9de3281f14a4dd4bceac4c09388074a6b5df8a139a"
[[package]]
name = "miniz_oxide"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b8a240ddb74feaf34a79a7add65a741f3167852fba007066dcac1ca548d89c08"
dependencies = [
"adler",
]
[[package]]
name = "miniz_oxide"
version = "0.8.8"
@@ -2489,18 +2474,18 @@ dependencies = [
[[package]]
name = "object"
version = "0.32.2"
version = "0.36.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a6a622008b6e321afc04970976f62ee297fdbaa6f95318ca343e3eebb9648441"
checksum = "62948e14d923ea95ea2c7c86c71013138b66525b86bdc08d2dcc262bdb497b87"
dependencies = [
"memchr",
]
[[package]]
name = "octocrab"
version = "0.44.0"
version = "0.44.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aaf799a9982a4d0b4b3fa15b4c1ff7daf5bd0597f46456744dcbb6ddc2e4c827"
checksum = "86996964f8b721067b6ed238aa0ccee56ecad6ee5e714468aa567992d05d2b91"
dependencies = [
"arc-swap",
"async-trait",
@@ -2554,7 +2539,7 @@ version = "0.10.72"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fedfea7d58a1f73118430a55da6a286e7b044961736ce96a16a17068ea25e5da"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"cfg-if",
"foreign-types",
"libc",
@@ -2582,9 +2567,9 @@ checksum = "d05e27ee213611ffe7d6348b942e8f942b37114c00cc03cec254295a4a17852e"
[[package]]
name = "openssl-sys"
version = "0.9.107"
version = "0.9.108"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8288979acd84749c744a9014b4382d42b8f7b2592847b5afb2ed29e5d16ede07"
checksum = "e145e1651e858e820e4860f7b9c5e169bc1d8ce1c86043be79fa7b7634821847"
dependencies = [
"cc",
"libc",
@@ -2648,9 +2633,9 @@ dependencies = [
[[package]]
name = "owo-colors"
version = "3.5.0"
version = "4.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1b04fb49957986fdce4d6ee7a65027d55d4b6d2265e5848bbb507b58ccfdb6f"
checksum = "1036865bb9422d3300cf723f657c2851d0e9ab12567854b1f4eba3d77decf564"
[[package]]
name = "p256"
@@ -2936,6 +2921,15 @@ dependencies = [
"portable-atomic",
]
[[package]]
name = "potential_utf"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e5a7c30837279ca13e7c867e9e40053bc68740f988cb07f7ca6df43cc734b585"
dependencies = [
"zerovec",
]
[[package]]
name = "powerfmt"
version = "0.2.0"
@@ -2948,7 +2942,7 @@ version = "0.2.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "85eae3c4ed2f50dcfe72643da4befc30deadb458a9b590d720cde2f2b1e97da9"
dependencies = [
"zerocopy 0.8.25",
"zerocopy",
]
[[package]]
@@ -3006,6 +3000,12 @@ dependencies = [
"unicode-ident",
]
[[package]]
name = "punycode"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e9e1dcb320d6839f6edb64f7a4a59d39b30480d4d1765b56873f7c858538a5fe"
[[package]]
name = "quote"
version = "1.0.40"
@@ -3083,7 +3083,7 @@ version = "0.9.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "99d9a13982dcf210057a8a78572b2217b667c3beacbf3a0d8b454f6f82837d38"
dependencies = [
"getrandom 0.3.2",
"getrandom 0.3.3",
]
[[package]]
@@ -3092,7 +3092,7 @@ version = "0.29.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eabd94c2f37801c20583fc49dd5cd6b0ba68c716787c2dd6ed18571e1e63117b"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"cassowary",
"compact_str",
"crossterm 0.28.1",
@@ -3109,11 +3109,11 @@ dependencies = [
[[package]]
name = "redox_syscall"
version = "0.5.11"
version = "0.5.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d2f103c6d277498fbceb16e84d317e2a400f160f46904d5f5410848c829511a3"
checksum = "928fca9cf2aa042393a8325b9ead81d2f0df4cb12e1e24cef072922ccd99c5af"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
]
[[package]]
@@ -3207,7 +3207,7 @@ dependencies = [
"encoding_rs",
"futures-core",
"futures-util",
"h2 0.4.9",
"h2 0.4.10",
"http 1.3.1",
"http-body 1.0.1",
"http-body-util",
@@ -3296,7 +3296,7 @@ dependencies = [
"aes",
"aes-gcm",
"async-trait",
"bitflags 2.9.0",
"bitflags 2.9.1",
"byteorder",
"cbc",
"chacha20",
@@ -3397,7 +3397,7 @@ version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3bb94393cafad0530145b8f626d8687f1ee1dedb93d7ba7740d6ae81868b13b5"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"bytes",
"chrono",
"flurry",
@@ -3444,7 +3444,7 @@ version = "0.38.44"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fdb5bc1ae2baa591800df16c9ca78619bf65c0488b41b96ccec5d11220d8c154"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"errno",
"libc",
"linux-raw-sys 0.4.15",
@@ -3453,11 +3453,11 @@ dependencies = [
[[package]]
name = "rustix"
version = "1.0.5"
version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d97817398dd4bb2e6da002002db259209759911da105da92bec29ccb12cf58bf"
checksum = "c71e83d6afe7ff64890ec6b71d6a69bb8a610ab78ce364b3352876bb4c801266"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"errno",
"libc",
"linux-raw-sys 0.9.4",
@@ -3466,9 +3466,9 @@ dependencies = [
[[package]]
name = "rustls"
version = "0.23.26"
version = "0.23.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df51b5869f3a441595eac5e8ff14d486ff285f7b8c0df8770e49c3b56351f0f0"
checksum = "730944ca083c1c233a75c09f199e973ca499344a2b7ba9e755c457e86fb4a321"
dependencies = [
"log",
"once_cell",
@@ -3524,15 +3524,18 @@ dependencies = [
[[package]]
name = "rustls-pki-types"
version = "1.11.0"
version = "1.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "917ce264624a4b4db1c364dcc35bfca9ded014d0a958cd47ad3e960e988ea51c"
checksum = "229a4a4c221013e7e1f1a043678c5cc39fe5171437c88fb47151a21e6f5b5c79"
dependencies = [
"zeroize",
]
[[package]]
name = "rustls-webpki"
version = "0.103.1"
version = "0.103.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fef8b8769aaccf73098557a87cd1816b4f9c7c16811c9c77142aa695c16f2c03"
checksum = "e4a72fe2bcf7a6ac6fd7d0b9e5cb68aeb7d4c0a0271730218b3e92d43b4eb435"
dependencies = [
"ring",
"rustls-pki-types",
@@ -3615,7 +3618,7 @@ version = "2.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "897b2245f0b511c87893af39b033e5ca9cce68824c4d7e7630b5a1d339658d02"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"core-foundation 0.9.4",
"core-foundation-sys",
"libc",
@@ -3628,7 +3631,7 @@ version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "271720403f46ca04f7ba6f55d438f8bd878d6b8ca0a1046e8228c4145bcbb316"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"core-foundation 0.10.0",
"core-foundation-sys",
"libc",
@@ -3759,9 +3762,9 @@ dependencies = [
[[package]]
name = "sha2"
version = "0.10.8"
version = "0.10.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "793db75ad2bcafc3ffa7c68b215fee268f537982cd901d132f89c6343f3a3dc8"
checksum = "a7507d819769d01a365ab707794a4084392c824f54a7a6a7862f8c3d0892b283"
dependencies = [
"cfg-if",
"cpufeatures",
@@ -3785,9 +3788,9 @@ checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
[[package]]
name = "signal-hook"
version = "0.3.17"
version = "0.3.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8621587d4798caf8eb44879d42e56b9a93ea5dcd315a6487c357130095b62801"
checksum = "d881a16cf4426aa584979d30bd82cb33429027e42122b169753d6ef1085ed6e2"
dependencies = [
"libc",
"signal-hook-registry",
@@ -4023,9 +4026,9 @@ dependencies = [
[[package]]
name = "synstructure"
version = "0.13.1"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8af7666ab7b6390ab78131fb5b0fce11d6b7a6951602017c35fa82800708971"
checksum = "728a70f3dbaf5bab7f0c4b1ac8d7ae5ea60a4b5549c8a5914361c99147a709d2"
dependencies = [
"proc-macro2",
"quote",
@@ -4049,7 +4052,7 @@ version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3c879d448e9d986b661742763247d3693ed13609438cf3d006f51f5368a5ba6b"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"core-foundation 0.9.4",
"system-configuration-sys 0.6.0",
]
@@ -4080,6 +4083,12 @@ version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "55937e1799185b12863d447f42597ed69d9928686b8d88a1df17376a097d8369"
[[package]]
name = "temp-dir"
version = "0.1.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "83176759e9416cf81ee66cb6508dbfe9c96f20b8b56265a39917551c23c70964"
[[package]]
name = "temp-file"
version = "0.1.9"
@@ -4088,14 +4097,14 @@ checksum = "b5ff282c3f91797f0acb021f3af7fffa8a78601f0f2fd0a9f79ee7dcf9a9af9e"
[[package]]
name = "tempfile"
version = "3.19.1"
version = "3.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7437ac7763b9b123ccf33c338a5cc1bac6f69b45a136c19bdd8a65e3916435bf"
checksum = "e8a64e3985349f2441a1a9ef0b853f869006c3855f2cda6862a94d26ebb9d6a1"
dependencies = [
"fastrand",
"getrandom 0.3.2",
"getrandom 0.3.3",
"once_cell",
"rustix 1.0.5",
"rustix 1.0.7",
"windows-sys 0.59.0",
]
@@ -4197,9 +4206,9 @@ dependencies = [
[[package]]
name = "tinystr"
version = "0.7.6"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9117f5d4db391c1cf6927e7bea3db74b9a1c1add8f7eda9ffd5364f40f57b82f"
checksum = "5d4f6d1145dcb577acf783d4e601bc1d76a13337bb54e6233add580b07344c8b"
dependencies = [
"displaydoc",
"zerovec",
@@ -4207,9 +4216,9 @@ dependencies = [
[[package]]
name = "tokio"
version = "1.44.2"
version = "1.45.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6b88822cbe49de4185e3a4cbf8321dd487cf5fe0c5c65695fef6346371e9c48"
checksum = "2513ca694ef9ede0fb23fe71a4ee4107cb102b9dc1930f6d0fd77aae068ae165"
dependencies = [
"backtrace",
"bytes",
@@ -4296,12 +4305,12 @@ dependencies = [
[[package]]
name = "tower-http"
version = "0.6.2"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "403fa3b783d4b626a8ad51d766ab03cb6d2dbfc46b1c5d4448395e6628dc9697"
checksum = "0fdb0c213ca27a9f57ab69ddb290fd80d970922355b83ae380b395d3986b8a2e"
dependencies = [
"base64 0.22.1",
"bitflags 2.9.0",
"bitflags 2.9.1",
"bytes",
"futures-util",
"http 1.3.1",
@@ -4483,12 +4492,6 @@ dependencies = [
"serde",
]
[[package]]
name = "utf16_iter"
version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8232dd3cdaed5356e0f716d285e4b40b932ac434100fe9b7e0e8e935b9e6246"
[[package]]
name = "utf8_iter"
version = "1.0.4"
@@ -4507,7 +4510,7 @@ version = "1.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "458f7a779bf54acc9f347480ac654f68407d3aab21269a6e3c9f922acd9e2da9"
dependencies = [
"getrandom 0.3.2",
"getrandom 0.3.3",
"rand 0.9.1",
"uuid-macro-internal",
]
@@ -5008,20 +5011,14 @@ version = "0.39.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6f42320e61fe2cfd34354ecb597f86f413484a798ba44a8ca1165c58d42da6c1"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
]
[[package]]
name = "write16"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d1890f4022759daae28ed4fe62859b1236caebfc61ede2f63ed4e695f3f6d936"
[[package]]
name = "writeable"
version = "0.5.5"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e9df38ee2d2c3c5948ea468a8406ff0db0b29ae1ffde1bcf20ef305bcc95c51"
checksum = "ea2f10b9bb0928dfb1b42b65e1f9e36f7f54dbdf08457afefb38afcdec4fa2bb"
[[package]]
name = "wyz"
@@ -5070,9 +5067,9 @@ dependencies = [
[[package]]
name = "yoke"
version = "0.7.5"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "120e6aef9aa629e3d4f52dc8cc43a015c7724194c97dfaf45180d2daf2b77f40"
checksum = "5f41bb01b8226ef4bfd589436a297c53d118f65921786300e427be8d487695cc"
dependencies = [
"serde",
"stable_deref_trait",
@@ -5082,9 +5079,9 @@ dependencies = [
[[package]]
name = "yoke-derive"
version = "0.7.5"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2380878cad4ac9aac1e2435f3eb4020e8374b5f13c296cb75b4620ff8e229154"
checksum = "38da3c9736e16c5d3c8c597a9aaa5d1fa565d0532ae05e27c24aa62fb32c0ab6"
dependencies = [
"proc-macro2",
"quote",
@@ -5092,33 +5089,13 @@ dependencies = [
"synstructure",
]
[[package]]
name = "zerocopy"
version = "0.7.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1b9b4fd18abc82b8136838da5d50bae7bdea537c574d8dc1a34ed098d6c166f0"
dependencies = [
"zerocopy-derive 0.7.35",
]
[[package]]
name = "zerocopy"
version = "0.8.25"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1702d9583232ddb9174e01bb7c15a2ab8fb1bc6f227aa1233858c351a3ba0cb"
dependencies = [
"zerocopy-derive 0.8.25",
]
[[package]]
name = "zerocopy-derive"
version = "0.7.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fa4f8080344d4671fb4e831a13ad1e68092748387dfc4f55e356242fae12ce3e"
dependencies = [
"proc-macro2",
"quote",
"syn",
"zerocopy-derive",
]
[[package]]
@@ -5160,10 +5137,21 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ced3678a2879b30306d323f4542626697a464a97c0a07c9aebf7ebca65cd4dde"
[[package]]
name = "zerovec"
version = "0.10.4"
name = "zerotrie"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa2b893d79df23bfb12d5461018d408ea19dfafe76c2c7ef6d4eba614f8ff079"
checksum = "36f0bbd478583f79edad978b407914f61b2972f5af6fa089686016be8f9af595"
dependencies = [
"displaydoc",
"yoke",
"zerofrom",
]
[[package]]
name = "zerovec"
version = "0.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4a05eb080e015ba39cc9e23bbe5e7fb04d5fb040350f99f34e338d5fdd294428"
dependencies = [
"yoke",
"zerofrom",
@@ -5172,9 +5160,9 @@ dependencies = [
[[package]]
name = "zerovec-derive"
version = "0.10.3"
version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6eafa6dfb17584ea3e2bd6e76e0cc15ad7af12b09abdd1ca55961bed9b1063c6"
checksum = "5b96237efa0c878c64bd89c436f661be4e46b2f3eff1ebb976f7ef2321d2f58f"
dependencies = [
"proc-macro2",
"quote",


@@ -35,6 +35,7 @@ serde_yaml = "0.9.34"
serde-value = "0.7.0"
http = "1.2.0"
inquire = "0.7.5"
convert_case = "0.8.0"
[workspace.dependencies.uuid]
version = "1.11.0"

138
README.md

@@ -31,3 +31,141 @@ Options:
![Harmony Core Architecture](docs/diagrams/Harmony_Core_Architecture.drawio.svg)
````
## Supporting a new field in OPNSense `config.xml`
Two steps:
- Supporting the field in `opnsense-config-xml`
- Enabling Harmony to control the field
We'll use the `filename` field in the `dhcpd` section of the file as an example.
### Supporting the field
As type checking is enforced, every field in `config.xml` must be known to the code. Each subsection of `config.xml` has its own `.rs` file. For the `dhcpd` section, we'll modify `opnsense-config-xml/src/data/dhcpd.rs`.
When a new field appears in the XML file, an error like this is thrown and Harmony panics:
```
Running `/home/stremblay/nt/dir/harmony/target/debug/example-nanodc`
Found unauthorized element filename
thread 'main' panicked at opnsense-config-xml/src/data/opnsense.rs:54:14:
OPNSense received invalid string, should be full XML: ()
```
Define the missing field (`filename`) in the `DhcpInterface` struct of `opnsense-config-xml/src/data/dhcpd.rs`:
```
pub struct DhcpInterface {
...
pub filename: Option<String>,
```
Harmony should now be fixed; build and run.
### Controlling the field
Define the `xml field setter` in `opnsense-config/src/modules/dhcpd.rs`.
```
impl<'a> DhcpConfig<'a> {
...
pub fn set_filename(&mut self, filename: &str) {
self.enable_netboot();
self.get_lan_dhcpd().filename = Some(filename.to_string());
}
...
```
Define the `value setter` in the `DhcpServer trait` in `domain/topology/network.rs`
```
#[async_trait]
pub trait DhcpServer: Send + Sync {
...
async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError>;
...
```
Implement the `value setter` in each `DhcpServer` implementation.
`infra/opnsense/dhcp.rs`:
```
#[async_trait]
impl DhcpServer for OPNSenseFirewall {
...
async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError> {
{
let mut writable_opnsense = self.opnsense_config.write().await;
writable_opnsense.dhcp().set_filename(filename);
debug!("OPNsense dhcp server set filename {filename}");
}
Ok(())
}
...
```
`domain/topology/ha_cluster.rs`
```
#[async_trait]
impl DhcpServer for DummyInfra {
...
async fn set_filename(&self, _filename: &str) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
...
```
Add the new field to the DhcpScore in `modules/dhcp.rs`
```
pub struct DhcpScore {
...
pub filename: Option<String>,
```
Define it in its implementation in `modules/okd/dhcp.rs`
```
impl OKDDhcpScore {
...
Self {
dhcp_score: DhcpScore {
...
filename: Some("undionly.kpxe".to_string()),
```
Define it in its implementation in `modules/okd/bootstrap_dhcp.rs`
```
impl OKDDhcpScore {
...
Self {
dhcp_score: DhcpScore::new(
...
Some("undionly.kpxe".to_string()),
```
Update the interpret function (the one called by the interpret's `execute` fn) in `modules/dhcp.rs` so it applies the new `filename` value:
```
impl DhcpInterpret {
...
let filename_outcome = match &self.score.filename {
Some(filename) => {
let dhcp_server = Arc::new(topology.dhcp_server.clone());
dhcp_server.set_filename(&filename).await?;
Outcome::new(
InterpretStatus::SUCCESS,
format!("Dhcp Interpret Set filename to {filename}"),
)
}
None => Outcome::noop(),
};
if next_server_outcome.status == InterpretStatus::NOOP
&& boot_filename_outcome.status == InterpretStatus::NOOP
&& filename_outcome.status == InterpretStatus::NOOP
...
Ok(Outcome::new(
InterpretStatus::SUCCESS,
format!(
"Dhcp Interpret Set next boot to [{:?}], boot_filename to [{:?}], filename to [{:?}]",
self.score.next_server, self.score.boot_filename, self.score.filename
)
...
```


@@ -1,6 +1,6 @@
# Architecture Decision Record: \<Title\>
Name: \<Name\>
Initial Author: \<Name\>
Initial Date: \<Date\>


@@ -1,6 +1,6 @@
# Architecture Decision Record: Helm and Kustomize Handling
Name: Taha Hawa
Initial Author: Taha Hawa
Initial Date: 2025-04-15


@@ -1,7 +1,7 @@
# Architecture Decision Record: Monitoring and Alerting
Proposed by: Willem Rolleman
Date: April 28 2025
Initial Author : Willem Rolleman
Date : April 28 2025
## Status


@@ -0,0 +1,160 @@
# Architecture Decision Record: Multi-Tenancy Strategy for Harmony Managed Clusters
Initial Author: Jean-Gabriel Gill-Couture
Initial Date: 2025-05-26
## Status
Proposed
## Context
Harmony manages production OKD/Kubernetes clusters that serve multiple clients with varying trust levels and operational requirements. We need a multi-tenancy strategy that provides:
1. **Strong isolation** between client workloads while maintaining operational simplicity
2. **Controlled API access** allowing clients self-service capabilities within defined boundaries
3. **Security-first approach** protecting both the cluster infrastructure and tenant data
4. **Harmony-native implementation** using our Score/Interpret pattern for automated tenant provisioning
5. **Scalable management** supporting both small trusted clients and larger enterprise customers
The official Kubernetes multi-tenancy documentation identifies two primary models: namespace-based isolation and virtual control planes per tenant. Given Harmony's focus on operational simplicity, provider-agnostic abstractions (ADR-003), and hexagonal architecture (ADR-002), we must choose an approach that balances security, usability, and maintainability.
Our clients represent a hybrid tenancy model:
- **Customer multi-tenancy**: Each client operates independently with no cross-tenant trust
- **Team multi-tenancy**: Individual clients may have multiple team members requiring coordinated access
- **API access requirement**: Unlike pure SaaS scenarios, clients need controlled Kubernetes API access for self-service operations
The official Kubernetes documentation on multi-tenancy heavily inspired this ADR: https://kubernetes.io/docs/concepts/security/multi-tenancy/
## Decision
Implement **namespace-based multi-tenancy** with the following architecture:
### 1. Network Security Model
- **Private cluster access**: Kubernetes API and OpenShift console accessible only via WireGuard VPN
- **No public exposure**: Control plane endpoints remain internal to prevent unauthorized access attempts
- **VPN-based authentication**: Initial access control through WireGuard client certificates
### 2. Tenant Isolation Strategy
- **Dedicated namespace per tenant**: Each client receives an isolated namespace with access limited only to the required resources and operations
- **Complete network isolation**: NetworkPolicies prevent cross-namespace communication while allowing full egress to public internet
- **Resource governance**: ResourceQuotas and LimitRanges enforce CPU, memory, and storage consumption limits (see the sketch following this list)
- **Storage access control**: Clients can create PersistentVolumeClaims but cannot directly manipulate PersistentVolumes or access other tenants' storage
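As a concrete illustration, the isolation strategy above could be rendered as Kubernetes manifests. A minimal sketch in Rust using `serde_json` (the function name, policy names, and quota values are illustrative assumptions, not Harmony's actual API):
```rust
use serde_json::{json, Value};

// Hypothetical helper: render the isolation manifests for one tenant namespace.
fn tenant_isolation_manifests(ns: &str) -> Vec<Value> {
    vec![
        // Allow ingress only from pods in the same namespace; egress stays
        // unrestricted because `policyTypes` does not include "Egress".
        json!({
            "apiVersion": "networking.k8s.io/v1",
            "kind": "NetworkPolicy",
            "metadata": { "name": "tenant-isolation", "namespace": ns },
            "spec": {
                "podSelector": {},
                "policyTypes": ["Ingress"],
                "ingress": [{ "from": [{ "podSelector": {} }] }]
            }
        }),
        // Cap aggregate CPU, memory, and storage consumption for the tenant.
        json!({
            "apiVersion": "v1",
            "kind": "ResourceQuota",
            "metadata": { "name": "tenant-quota", "namespace": ns },
            "spec": {
                "hard": {
                    "requests.cpu": "8",
                    "requests.memory": "16Gi",
                    "requests.storage": "100Gi",
                    "persistentvolumeclaims": "20"
                }
            }
        }),
    ]
}
```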
### 3. Access Control Framework
- **Principle of Least Privilege**: RBAC grants only necessary permissions within tenant namespace scope
- **Namespace-scoped**: Clients can create/modify/delete resources within their namespace
- **Cluster-level restrictions**: No access to cluster-wide resources, other namespaces, or sensitive cluster operations
- **Whitelisted operations**: Controlled self-service capabilities for ingress, secrets, configmaps, and workload management (see the RBAC sketch below)
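A hedged sketch of the namespace-scoped Role such a framework might grant; the resource and verb whitelists below are illustrative assumptions:
```rust
use serde_json::{json, Value};

// Hypothetical helper: the least-privilege Role bound to a tenant's users.
fn tenant_role(ns: &str) -> Value {
    json!({
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": { "name": "tenant-admin", "namespace": ns },
        "rules": [
            {
                // Core workload and config resources within the namespace only.
                "apiGroups": ["", "apps", "batch"],
                "resources": ["pods", "services", "configmaps", "secrets",
                              "persistentvolumeclaims", "deployments", "jobs"],
                "verbs": ["get", "list", "watch", "create", "update", "patch", "delete"]
            },
            {
                // Whitelisted self-service ingress management.
                "apiGroups": ["networking.k8s.io"],
                "resources": ["ingresses"],
                "verbs": ["get", "list", "watch", "create", "update", "patch", "delete"]
            }
        ]
    })
}
```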
### 4. Identity Management Evolution
- **Phase 1**: Manual provisioning of VPN access and Kubernetes ServiceAccounts/Users
- **Phase 2**: Migration to Keycloak-based identity management (aligning with ADR-006) for centralized authentication and lifecycle management
### 5. Harmony Integration
- **TenantScore implementation**: Declarative tenant provisioning using Harmony's Score/Interpret pattern
- **Topology abstraction**: Tenant configuration abstracted from underlying Kubernetes implementation details
- **Automated deployment**: Complete tenant setup automated through Harmony's orchestration capabilities
## Rationale
### Network Security Through VPN Access
- **Defense in depth**: VPN requirement adds critical security layer preventing unauthorized cluster access
- **Simplified firewall rules**: No need for complex public endpoint protections or rate limiting
- **Audit capability**: VPN access provides clear audit trail of cluster connections
- **Aligns with enterprise practices**: Most enterprise customers already use VPN infrastructure
### Namespace Isolation vs Virtual Control Planes
Following Kubernetes official guidance, namespace isolation provides:
- **Lower resource overhead**: Virtual control planes require dedicated etcd, API server, and controller manager per tenant
- **Operational simplicity**: Single control plane to maintain, upgrade, and monitor
- **Cross-tenant service integration**: Enables future controlled cross-tenant communication if required
- **Proven stability**: Namespace-based isolation is well-tested and widely deployed
- **Cost efficiency**: Significantly lower infrastructure costs compared to dedicated control planes
### Hybrid Tenancy Model Suitability
Our approach addresses both customer and team multi-tenancy requirements:
- **Customer isolation**: Strong network and RBAC boundaries prevent cross-tenant interference
- **Team collaboration**: Multiple team members can share namespace access through group-based RBAC
- **Self-service balance**: Controlled API access enables client autonomy without compromising security
### Harmony Architecture Alignment
- **Provider agnostic**: TenantScore abstracts multi-tenancy concepts, enabling future support for other Kubernetes distributions
- **Hexagonal architecture**: Tenant management becomes an infrastructure capability accessed through well-defined ports
- **Declarative automation**: Tenant lifecycle fully managed through Harmony's Score execution model
## Consequences
### Positive Consequences
- **Strong security posture**: VPN + namespace isolation provides robust tenant separation
- **Operational efficiency**: Single cluster management with automated tenant provisioning
- **Client autonomy**: Self-service capabilities reduce operational support burden
- **Scalable architecture**: Can support hundreds of tenants per cluster without architectural changes
- **Future flexibility**: Foundation supports evolution to more sophisticated multi-tenancy models
- **Cost optimization**: Shared infrastructure maximizes resource utilization
### Negative Consequences
- **VPN operational overhead**: Requires VPN infrastructure management
- **Manual provisioning complexity**: Phase 1 manual user management creates administrative burden
- **Network policy dependency**: Requires CNI with NetworkPolicy support (OVN-Kubernetes provides this and is the OKD/Openshift default)
- **Cluster-wide resource limitations**: Some advanced Kubernetes features require cluster-wide access
- **Single point of failure**: Cluster outage affects all tenants simultaneously
### Migration Challenges
- **Legacy client integration**: Existing clients may need VPN client setup and credential migration
- **Monitoring complexity**: Per-tenant observability requires careful metric and log segmentation
- **Backup considerations**: Tenant data backup must respect isolation boundaries
## Alternatives Considered
### Alternative 1: Virtual Control Plane Per Tenant
**Pros**: Complete control plane isolation, full Kubernetes API access per tenant
**Cons**: 3-5x higher resource usage, complex cross-tenant networking, operational complexity scales linearly with tenants
**Rejected**: Resource overhead incompatible with cost-effective multi-tenancy goals
### Alternative 2: Dedicated Clusters Per Tenant
**Pros**: Maximum isolation, independent upgrade cycles, simplified security model
**Cons**: Exponential operational complexity, prohibitive costs, resource waste
**Rejected**: Operational overhead makes this approach unsustainable for multiple clients
### Alternative 3: Public API with Advanced Authentication
**Pros**: No VPN requirement, potentially simpler client access
**Cons**: Larger attack surface, complex rate limiting and DDoS protection, increased security monitoring requirements
**Rejected**: Risk/benefit analysis favors VPN-based access control
### Alternative 4: Service Mesh Based Isolation
**Pros**: Fine-grained traffic control, encryption, advanced observability
**Cons**: Significant operational complexity, performance overhead, steep learning curve
**Rejected**: Complexity overhead outweighs benefits for current requirements; remains option for future enhancement
## Additional Notes
### Implementation Roadmap
1. **Phase 1**: Implement VPN access and manual tenant provisioning
2. **Phase 2**: Deploy TenantScore automation for namespace, RBAC, and NetworkPolicy management (see the sketch after this list)
3. **Phase 3**: Integrate Keycloak for centralized identity management
4. **Phase 4**: Add advanced monitoring and per-tenant observability
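As a rough sketch of the Phase 2 automation (all names and object shapes are assumptions for illustration, not the actual Harmony implementation), each tenant definition would expand into a fixed set of namespaced objects to reconcile:
```rust
/// Hypothetical tenant definition; field names are illustrative.
struct Tenant {
    name: String,
    cpu_limit: String,
    memory_limit: String,
}

/// List the namespaced objects a TenantScore-style automation
/// would reconcile for one tenant.
fn tenant_objects(t: &Tenant) -> Vec<String> {
    let ns = format!("tenant-{}", t.name);
    vec![
        format!("Namespace/{ns}"),
        // Default-deny ingress from all other namespaces.
        format!("NetworkPolicy/{ns}/deny-cross-tenant"),
        format!("ResourceQuota/{ns}/cpu={} memory={}", t.cpu_limit, t.memory_limit),
        // Group-based RoleBinding, see the RBAC sketch above.
        format!("RoleBinding/{ns}/{}-team-edit", t.name),
    ]
}

fn main() {
    let tenant = Tenant {
        name: "acme".into(),
        cpu_limit: "8".into(),
        memory_limit: "16Gi".into(),
    };
    for obj in tenant_objects(&tenant) {
        println!("reconcile {obj}");
    }
}
```
The real entry point for this automation is the TenantScore structure previewed next.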
### TenantScore Structure Preview
```rust
pub struct TenantScore {
    pub tenant_config: TenantConfig,
    pub resource_quotas: ResourceQuotaConfig,
    pub network_isolation: NetworkIsolationPolicy,
    pub storage_access: StorageAccessConfig,
    pub rbac_config: RBACConfig,
}
```
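For orientation, the nested configuration types might take shapes along the following lines. These are purely illustrative sketches; the authoritative definitions live in the Harmony codebase and may differ:
```rust
/// Illustrative sketches only, not the actual Harmony definitions.
pub struct TenantConfig {
    /// Tenant identifier, used to derive the namespace name.
    pub name: String,
    /// Identity-provider groups granted namespace-scoped access.
    pub groups: Vec<String>,
}

pub struct ResourceQuotaConfig {
    pub cpu: String,     // e.g. "8"
    pub memory: String,  // e.g. "16Gi"
    pub storage: String, // e.g. "100Gi"
}

pub struct NetworkIsolationPolicy {
    /// Deny all cross-namespace traffic by default.
    pub deny_cross_tenant: bool,
    /// Namespaces explicitly allowed to reach this tenant.
    pub allowed_namespaces: Vec<String>,
}
```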
### Future Enhancements
- **Cross-tenant service mesh**: For approved inter-tenant communication
- **Advanced monitoring**: Per-tenant Prometheus/Grafana instances
- **Backup automation**: Tenant-scoped backup policies
- **Cost allocation**: Detailed per-tenant resource usage tracking
This ADR establishes the foundation for secure, scalable multi-tenancy in Harmony-managed clusters while maintaining operational simplicity and cost effectiveness. A follow-up ADR will detail the Tenant abstraction and user management mechanisms within the Harmony framework.

check.sh Normal file → Executable file

@@ -0,0 +1 @@
slitaz/* filter=lfs diff=lfs merge=lfs -text


@@ -0,0 +1,6 @@
#!ipxe
set base-url http://192.168.33.1:8080
set hostfile ${base-url}/byMAC/01-${mac:hexhyp}.ipxe
chain ${hostfile} || chain ${base-url}/default.ipxe


@@ -0,0 +1,35 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item okdinstallation Install OKD
item slitaz Boot to Slitaz - old linux for debugging
choose selected
goto ${selected}
:local
exit
#################################
# okdinstallation
#################################
:okdinstallation
set base-url http://192.168.33.1:8080
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
set install-disk /dev/nvme0n1
set ignition-file ncd0/master.ign
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
initrd --name main ${base-url}/${live-initramfs}
boot
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot


@@ -0,0 +1,35 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item okdinstallation Install OKD
item slitaz Boot to Slitaz - old linux for debugging
choose selected
goto ${selected}
:local
exit
#################################
# okdinstallation
#################################
:okdinstallation
set base-url http://192.168.33.1:8080
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
set install-disk /dev/nvme0n1
set ignition-file ncd0/master.ign
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
initrd --name main ${base-url}/${live-initramfs}
boot
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot


@@ -0,0 +1,35 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item okdinstallation Install OKD
item slitaz Slitaz - an old linux image for debugging
choose selected
goto ${selected}
:local
exit
#################################
# okdinstallation
#################################
:okdinstallation
set base-url http://192.168.33.1:8080
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
set install-disk /dev/sda
set ignition-file ncd0/worker.ign
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
initrd --name main ${base-url}/${live-initramfs}
boot
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot


@@ -0,0 +1,35 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item okdinstallation Install OKD
item slitaz Boot to Slitaz - old linux for debugging
choose selected
goto ${selected}
:local
exit
#################################
# okdinstallation
#################################
:okdinstallation
set base-url http://192.168.33.1:8080
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
set install-disk /dev/nvme0n1
set ignition-file ncd0/master.ign
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
initrd --name main ${base-url}/${live-initramfs}
boot
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot


@@ -0,0 +1,35 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item okdinstallation Install OKD
item slitaz Slitaz - an old linux image for debugging
choose selected
goto ${selected}
:local
exit
#################################
# okdinstallation
#################################
:okdinstallation
set base-url http://192.168.33.1:8080
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
set install-disk /dev/sda
set ignition-file ncd0/worker.ign
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
initrd --name main ${base-url}/${live-initramfs}
boot
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot


@@ -0,0 +1,37 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item okdinstallation Install OKD
item slitaz Slitaz - an old linux image for debugging
choose selected
goto ${selected}
:local
exit
# This is the bootstrap node
# it will become wk2
#################################
# okdinstallation
#################################
:okdinstallation
set base-url http://192.168.33.1:8080
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
set install-disk /dev/sda
set ignition-file ncd0/worker.ign
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
initrd --name main ${base-url}/${live-initramfs}
boot
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot


@@ -0,0 +1,71 @@
#!ipxe
menu PXE Boot Menu - [${mac}]
item local Boot from Hard Disk
item slitaz Boot slitaz live environment [tux|root:root]
#item ubuntu-server Ubuntu 24.04.1 live server
#item ubuntu-desktop Ubuntu 24.04.1 desktop
#item systemrescue System Rescue 11.03
item memtest memtest
#choose --default local --timeout 5000 selected
choose selected
goto ${selected}
:local
exit
#################################
# slitaz
#################################
:slitaz
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/slitaz
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
initrd ${base_url}/rootfs.gz
boot
#################################
# Ubuntu Server
#################################
:ubuntu-server
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/ubuntu/live-server-24.04.1
kernel ${base_url}/vmlinuz ip=dhcp url=${base_url}/ubuntu-24.04.1-live-server-amd64.iso autoinstall ds=nocloud
initrd ${base_url}/initrd
boot
#################################
# Ubuntu Desktop
#################################
:ubuntu-desktop
set server_ip 192.168.33.1:8080
set base_url http://${server_ip}/ubuntu/desktop-24.04.1
kernel ${base_url}/vmlinuz ip=dhcp url=${base_url}/ubuntu-24.04.1-desktop-amd64.iso autoinstall ds=nocloud
initrd ${base_url}/initrd
boot
#################################
# System Rescue
#################################
:systemrescue
set base-url http://192.168.33.1:8080/systemrescue
kernel ${base-url}/vmlinuz initrd=sysresccd.img boot=systemrescue docache
initrd ${base-url}/sysresccd.img
boot
#################################
# MemTest86 (BIOS/UEFI)
#################################
:memtest
iseq ${platform} efi && goto memtest_efi || goto memtest_bios
:memtest_efi
kernel http://192.168.33.1:8080/memtest/memtest64.efi
boot
:memtest_bios
kernel http://192.168.33.1:8080/memtest/memtest64.bin
boot


@@ -1 +0,0 @@
hey i am paul

BIN data/watchguard/pxe-http-files/slitaz/rootfs.gz (Stored with Git LFS) Normal file

BIN data/watchguard/pxe-http-files/slitaz/vmlinuz-2.6.37-slitaz (Stored with Git LFS) Normal file


@@ -8,7 +8,7 @@ publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_tui = { path = "../../harmony_tui" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }


@@ -1,3 +1,85 @@
<?php
print_r("Hello this is from PHP");
ini_set('display_errors', 1);
error_reporting(E_ALL);
$host = getenv('MYSQL_HOST') ?: '';
$user = getenv('MYSQL_USER') ?: 'root';
$pass = getenv('MYSQL_PASSWORD') ?: '';
$db = 'testfill';
$charset = 'utf8mb4';
$dsn = "mysql:host=$host;charset=$charset";
$options = [
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
];
try {
$pdo = new PDO($dsn, $user, $pass, $options);
$pdo->exec("CREATE DATABASE IF NOT EXISTS `$db`");
$pdo->exec("USE `$db`");
$pdo->exec("
CREATE TABLE IF NOT EXISTS filler (
id INT AUTO_INCREMENT PRIMARY KEY,
data LONGBLOB
)
");
} catch (\PDOException $e) {
die("❌ DB connection failed: " . $e->getMessage());
}
function getDbStats($pdo, $db) {
$stmt = $pdo->query("
SELECT
ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_size_gb,
SUM(table_rows) AS total_rows
FROM information_schema.tables
WHERE table_schema = '$db'
");
$result = $stmt->fetch();
$sizeGb = $result['total_size_gb'] ?? '0';
$rows = $result['total_rows'] ?? '0';
$avgMb = ($rows > 0) ? round(($sizeGb * 1024) / $rows, 2) : 0;
return [$sizeGb, $rows, $avgMb];
}
list($dbSize, $rowCount, $avgRowMb) = getDbStats($pdo, $db);
$message = '';
if ($_SERVER['REQUEST_METHOD'] === 'POST' && isset($_POST['fill'])) {
$iterations = 1024;
$data = str_repeat(random_bytes(1024), 1024); // 1MB
$stmt = $pdo->prepare("INSERT INTO filler (data) VALUES (:data)");
for ($i = 0; $i < $iterations; $i++) {
$stmt->execute([':data' => $data]);
}
list($dbSize, $rowCount, $avgRowMb) = getDbStats($pdo, $db);
$message = "<p style='color: green;'>✅ 1GB inserted into MariaDB successfully.</p>";
}
?>
<!DOCTYPE html>
<html>
<head>
<title>MariaDB Filler</title>
</head>
<body>
<h1>MariaDB Storage Filler</h1>
<?= $message ?>
<ul>
<li><strong>📦 MariaDB Used Size:</strong> <?= $dbSize ?> GB</li>
<li><strong>📊 Total Rows:</strong> <?= $rowCount ?></li>
<li><strong>📐 Average Row Size:</strong> <?= $avgRowMb ?> MB</li>
</ul>
<form method="post">
<button name="fill" value="1" type="submit">Insert 1GB into DB</button>
</form>
</body>
</html>


@@ -8,23 +8,40 @@ use harmony::{
#[tokio::main]
async fn main() {
// let _ = env_logger::Builder::from_default_env().filter_level(log::LevelFilter::Info).try_init();
// This is the whole configuration needed to
// - set up a local K3D cluster
// - Build a docker image with the PHP project built in and production-grade settings
// - Deploy a mariadb database using a production-grade helm chart
// - Deploy the new container using a kubernetes deployment
// - Configure networking between the PHP container and the database
// - Provision a public route and an SSL certificate automatically in production environments
//
// Enjoy :)
let lamp_stack = LAMPScore {
name: "harmony-lamp-demo".to_string(),
domain: Url::Url(url::Url::parse("https://lampdemo.harmony.nationtech.io").unwrap()),
php_version: Version::from("8.4.4").unwrap(),
// This config can be extended as needed for more complicated configurations
config: LAMPConfig {
project_root: "./php".into(),
database_size: format!("4Gi").into(),
..Default::default()
},
};
// You can choose the type of Topology you want, we suggest starting with the
// K8sAnywhereTopology as it is the most automatic one that enables you to easily deploy
// locally, to a development environment from CI, to staging, and to production with settings
// that automatically adapt to each environment grade.
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),
K8sAnywhereTopology::new(),
)
.await
.unwrap();
maestro.register_all(vec![Box::new(lamp_stack)]);
harmony_tui::init(maestro).await.unwrap();
// Here we bootstrap the CLI, this gives some nice features if you need them
harmony_cli::init(maestro, None).await.unwrap();
}
// That's it, end of the infra as code.


@@ -0,0 +1,12 @@
[package]
name = "webhook_sender"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { version = "0.1.0", path = "../../harmony" }
harmony_cli = { version = "0.1.0", path = "../../harmony_cli" }
tokio.workspace = true
url.workspace = true


@@ -0,0 +1,23 @@
use harmony::{
inventory::Inventory,
maestro::Maestro,
modules::monitoring::monitoring_alerting::MonitoringAlertingScore,
topology::{K8sAnywhereTopology, oberservability::K8sMonitorConfig},
};
#[tokio::main]
async fn main() {
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),
K8sAnywhereTopology::new(),
)
.await
.unwrap();
let monitoring = MonitoringAlertingScore {
alert_channel_configs: None,
};
maestro.register_all(vec![Box::new(monitoring)]);
harmony_cli::init(maestro, None).await.unwrap();
}


@@ -0,0 +1,4 @@
#!/bin/bash
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \
--set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values.yaml


@@ -0,0 +1,721 @@
# Default values for a single rook-ceph cluster
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# -- Namespace of the main rook operator
operatorNamespace: rook-ceph
# -- The metadata.name of the CephCluster CR
# @default -- The same as the namespace
clusterName:
# -- Optional override of the target kubernetes version
kubeVersion:
# -- Cluster ceph.conf override
configOverride:
# configOverride: |
# [global]
# mon_allow_pool_delete = true
# osd_pool_default_size = 3
# osd_pool_default_min_size = 2
# Installs a debugging toolbox deployment
toolbox:
# -- Enable Ceph debugging pod deployment. See [toolbox](../Troubleshooting/ceph-toolbox.md)
enabled: true
# -- Toolbox image, defaults to the image used by the Ceph cluster
image: #quay.io/ceph/ceph:v19.2.2
# -- Toolbox tolerations
tolerations: []
# -- Toolbox affinity
affinity: {}
# -- Toolbox container security context
containerSecurityContext:
runAsNonRoot: true
runAsUser: 2016
runAsGroup: 2016
capabilities:
drop: ["ALL"]
# -- Toolbox resources
resources:
limits:
memory: "1Gi"
requests:
cpu: "100m"
memory: "128Mi"
# -- Set the priority class for the toolbox if desired
priorityClassName:
monitoring:
# -- Enable Prometheus integration, will also create necessary RBAC rules to allow Operator to create ServiceMonitors.
# Monitoring requires Prometheus to be pre-installed
enabled: false
# -- Whether to disable the metrics reported by Ceph. If false, the prometheus mgr module and Ceph exporter are enabled
metricsDisabled: false
# -- Whether to create the Prometheus rules for Ceph alerts
createPrometheusRules: false
# -- The namespace in which to create the prometheus rules, if different from the rook cluster namespace.
# If you have multiple rook-ceph clusters in the same k8s cluster, choose the same namespace (ideally, namespace with prometheus
# deployed) to set rulesNamespaceOverride for all the clusters. Otherwise, you will get duplicate alerts with multiple alert definitions.
rulesNamespaceOverride:
# Monitoring settings for external clusters:
# externalMgrEndpoints: <list of endpoints>
# externalMgrPrometheusPort: <port>
# Scrape interval for prometheus
# interval: 10s
# allow adding custom labels and annotations to the prometheus rule
prometheusRule:
# -- Labels applied to PrometheusRule
labels: {}
# -- Annotations applied to PrometheusRule
annotations: {}
# -- Create & use PSP resources. Set this to the same value as the rook-ceph chart.
pspEnable: false
# imagePullSecrets option allow to pull docker images from private docker registry. Option will be passed to all service accounts.
# imagePullSecrets:
# - name: my-registry-secret
# All values below are taken from the CephCluster CRD
# -- Cluster configuration.
# @default -- See [below](#ceph-cluster-spec)
cephClusterSpec:
# This cluster spec example is for a converged cluster where all the Ceph daemons are running locally,
# as in the host-based example (cluster.yaml). For a different configuration such as a
# PVC-based cluster (cluster-on-pvc.yaml), external cluster (cluster-external.yaml),
# or stretch cluster (cluster-stretched.yaml), replace this entire `cephClusterSpec`
# with the specs from those examples.
# For more details, check https://rook.io/docs/rook/v1.10/CRDs/Cluster/ceph-cluster-crd/
cephVersion:
# The container image used to launch the Ceph daemon pods (mon, mgr, osd, mds, rgw).
# v18 is Reef, v19 is Squid
# RECOMMENDATION: In production, use a specific version tag instead of the general v18 flag, which pulls the latest release and could result in different
# versions running within the cluster. See tags available at https://hub.docker.com/r/ceph/ceph/tags/.
# If you want to be more precise, you can always use a timestamp tag such as quay.io/ceph/ceph:v19.2.2-20250409
# This tag might not contain a new Ceph version, just security fixes from the underlying operating system, which will reduce vulnerabilities
image: quay.io/ceph/ceph:v19.2.2
# Whether to allow unsupported versions of Ceph. Currently Reef and Squid are supported.
# Future versions such as Tentacle (v20) would require this to be set to `true`.
# Do not set to true in production.
allowUnsupported: false
# The path on the host where configuration files will be persisted. Must be specified. If there are multiple clusters, the directory must be unique for each cluster.
# Important: if you reinstall the cluster, make sure you delete this directory from each host or else the mons will fail to start on the new cluster.
# In Minikube, the '/data' directory is configured to persist across reboots. Use "/data/rook" in Minikube environment.
dataDirHostPath: /var/lib/rook
# Whether or not upgrade should continue even if a check fails
# This means Ceph's status could be degraded and we don't recommend upgrading but you might decide otherwise
# Use at your OWN risk
# To understand Rook's upgrade process of Ceph, read https://rook.io/docs/rook/v1.10/Upgrade/ceph-upgrade/
skipUpgradeChecks: false
# Whether or not continue if PGs are not clean during an upgrade
continueUpgradeAfterChecksEvenIfNotHealthy: false
# WaitTimeoutForHealthyOSDInMinutes defines the time (in minutes) the operator would wait before an OSD can be stopped for upgrade or restart.
# If the timeout exceeds and OSD is not ok to stop, then the operator would skip upgrade for the current OSD and proceed with the next one
# if `continueUpgradeAfterChecksEvenIfNotHealthy` is `false`. If `continueUpgradeAfterChecksEvenIfNotHealthy` is `true`, then operator would
# continue with the upgrade of an OSD even if its not ok to stop after the timeout. This timeout won't be applied if `skipUpgradeChecks` is `true`.
# The default wait timeout is 10 minutes.
waitTimeoutForHealthyOSDInMinutes: 10
# Whether or not requires PGs are clean before an OSD upgrade. If set to `true` OSD upgrade process won't start until PGs are healthy.
# This configuration will be ignored if `skipUpgradeChecks` is `true`.
# Default is false.
upgradeOSDRequiresHealthyPGs: false
mon:
# Set the number of mons to be started. Generally recommended to be 3.
# For highest availability, an odd number of mons should be specified.
count: 3
# The mons should be on unique nodes. For production, at least 3 nodes are recommended for this reason.
# Mons should only be allowed on the same node for test environments where data loss is acceptable.
allowMultiplePerNode: false
mgr:
# When higher availability of the mgr is needed, increase the count to 2.
# In that case, one mgr will be active and one in standby. When Ceph updates which
# mgr is active, Rook will update the mgr services to match the active mgr.
count: 2
allowMultiplePerNode: false
modules:
# List of modules to optionally enable or disable.
# Note the "dashboard" and "monitoring" modules are already configured by other settings in the cluster CR.
# - name: rook
# enabled: true
# enable the ceph dashboard for viewing cluster status
dashboard:
enabled: true
# serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
# urlPrefix: /ceph-dashboard
# serve the dashboard at the given port.
# port: 8443
# Serve the dashboard using SSL (if using ingress to expose the dashboard and `ssl: true` you need to set
# the corresponding "backend protocol" annotation(s) for your ingress controller of choice)
ssl: true
# Network configuration, see: https://github.com/rook/rook/blob/master/Documentation/CRDs/Cluster/ceph-cluster-crd.md#network-configuration-settings
network:
connections:
# Whether to encrypt the data in transit across the wire to prevent eavesdropping the data on the network.
# The default is false. When encryption is enabled, all communication between clients and Ceph daemons, or between Ceph daemons will be encrypted.
# When encryption is not enabled, clients still establish a strong initial authentication and data integrity is still validated with a crc check.
# IMPORTANT: Encryption requires the 5.11 kernel for the latest nbd and cephfs drivers. Alternatively for testing only,
# you can set the "mounter: rbd-nbd" in the rbd storage class, or "mounter: fuse" in the cephfs storage class.
# The nbd and fuse drivers are *not* recommended in production since restarting the csi driver pod will disconnect the volumes.
encryption:
enabled: false
# Whether to compress the data in transit across the wire. The default is false.
# The kernel requirements above for encryption also apply to compression.
compression:
enabled: false
# Whether to require communication over msgr2. If true, the msgr v1 port (6789) will be disabled
# and clients will be required to connect to the Ceph cluster with the v2 port (3300).
# Requires a kernel that supports msgr v2 (kernel 5.11 or CentOS 8.4 or newer).
requireMsgr2: false
# # enable host networking
# provider: host
# # EXPERIMENTAL: enable the Multus network provider
# provider: multus
# selectors:
# # The selector keys are required to be `public` and `cluster`.
# # Based on the configuration, the operator will do the following:
# # 1. if only the `public` selector key is specified both public_network and cluster_network Ceph settings will listen on that interface
# # 2. if both `public` and `cluster` selector keys are specified the first one will point to 'public_network' flag and the second one to 'cluster_network'
# #
# # In order to work, each selector value must match a NetworkAttachmentDefinition object in Multus
# #
# # public: public-conf --> NetworkAttachmentDefinition object name in Multus
# # cluster: cluster-conf --> NetworkAttachmentDefinition object name in Multus
# # Provide internet protocol version. IPv6, IPv4 or empty string are valid options. Empty string would mean IPv4
# ipFamily: "IPv6"
# # Ceph daemons to listen on both IPv4 and IPv6 networks
# dualStack: false
# enable the crash collector for ceph daemon crash collection
crashCollector:
disable: false
# Uncomment daysToRetain to prune ceph crash entries older than the
# specified number of days.
# daysToRetain: 30
# enable log collector, daemons will log on files and rotate
logCollector:
enabled: true
periodicity: daily # one of: hourly, daily, weekly, monthly
maxLogSize: 500M # SUFFIX may be 'M' or 'G'. Must be at least 1M.
# automate [data cleanup process](https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/ceph-teardown.md#delete-the-data-on-hosts) in cluster destruction.
cleanupPolicy:
# Since cluster cleanup is destructive to data, confirmation is required.
# To destroy all Rook data on hosts during uninstall, confirmation must be set to "yes-really-destroy-data".
# This value should only be set when the cluster is about to be deleted. After the confirmation is set,
# Rook will immediately stop configuring the cluster and only wait for the delete command.
# If the empty string is set, Rook will not destroy any data on hosts during uninstall.
confirmation: ""
# sanitizeDisks represents settings for sanitizing OSD disks on cluster deletion
sanitizeDisks:
# method indicates if the entire disk should be sanitized or simply ceph's metadata
# in both case, re-install is possible
# possible choices are 'complete' or 'quick' (default)
method: quick
# dataSource indicate where to get random bytes from to write on the disk
# possible choices are 'zero' (default) or 'random'
# using random sources will consume entropy from the system and will take much more time than the zero source
dataSource: zero
# iteration overwrite N times instead of the default (1)
# takes an integer value
iteration: 1
# allowUninstallWithVolumes defines how the uninstall should be performed
# If set to true, cephCluster deletion does not wait for the PVs to be deleted.
allowUninstallWithVolumes: false
# To control where various services will be scheduled by kubernetes, use the placement configuration sections below.
# The example under 'all' would have all services scheduled on kubernetes nodes labeled with 'role=storage-node' and
# tolerate taints with a key of 'storage-node'.
# placement:
# all:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: role
# operator: In
# values:
# - storage-node
# podAffinity:
# podAntiAffinity:
# topologySpreadConstraints:
# tolerations:
# - key: storage-node
# operator: Exists
# # The above placement information can also be specified for mon, osd, and mgr components
# mon:
# # Monitor deployments may contain an anti-affinity rule for avoiding monitor
# # collocation on the same node. This is a required rule when host network is used
# # or when AllowMultiplePerNode is false. Otherwise this anti-affinity rule is a
# # preferred rule with weight: 50.
# osd:
# mgr:
# cleanup:
# annotations:
# all:
# mon:
# osd:
# cleanup:
# prepareosd:
# # If no mgr annotations are set, prometheus scrape annotations will be set by default.
# mgr:
# dashboard:
# labels:
# all:
# mon:
# osd:
# cleanup:
# mgr:
# prepareosd:
# # monitoring is a list of key-value pairs. It is injected into all the monitoring resources created by operator.
# # These labels can be passed as LabelSelector to Prometheus
# monitoring:
# dashboard:
resources:
mgr:
limits:
memory: "1Gi"
requests:
cpu: "500m"
memory: "512Mi"
mon:
limits:
memory: "2Gi"
requests:
cpu: "1000m"
memory: "1Gi"
osd:
limits:
memory: "4Gi"
requests:
cpu: "1000m"
memory: "4Gi"
prepareosd:
# limits: It is not recommended to set limits on the OSD prepare job
# since it's a one-time burst for memory that must be allowed to
# complete without an OOM kill. Note however that if a k8s
# limitRange guardrail is defined external to Rook, the lack of
# a limit here may result in a sync failure, in which case a
# limit should be added. 1200Mi may suffice for up to 15Ti
# OSDs ; for larger devices 2Gi may be required.
# cf. https://github.com/rook/rook/pull/11103
requests:
cpu: "500m"
memory: "50Mi"
mgr-sidecar:
limits:
memory: "100Mi"
requests:
cpu: "100m"
memory: "40Mi"
crashcollector:
limits:
memory: "60Mi"
requests:
cpu: "100m"
memory: "60Mi"
logcollector:
limits:
memory: "1Gi"
requests:
cpu: "100m"
memory: "100Mi"
cleanup:
limits:
memory: "1Gi"
requests:
cpu: "500m"
memory: "100Mi"
exporter:
limits:
memory: "128Mi"
requests:
cpu: "50m"
memory: "50Mi"
# The option to automatically remove OSDs that are out and are safe to destroy.
removeOSDsIfOutAndSafeToRemove: false
# priority classes to apply to ceph resources
priorityClassNames:
mon: system-node-critical
osd: system-node-critical
mgr: system-cluster-critical
storage: # cluster level storage configuration and selection
useAllNodes: true
useAllDevices: true
# deviceFilter:
# config:
# crushRoot: "custom-root" # specify a non-default root label for the CRUSH map
# metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
# databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
# osdsPerDevice: "1" # this value can be overridden at the node or device level
# encryptedDevice: "true" # the default value for this option is "false"
# # Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the named
# # nodes below will be used as storage resources. Each node's 'name' field should match their 'kubernetes.io/hostname' label.
# nodes:
# - name: "172.17.4.201"
# devices: # specific devices to use for storage can be specified for each node
# - name: "sdb"
# - name: "nvme01" # multiple osds can be created on high performance devices
# config:
# osdsPerDevice: "5"
# - name: "/dev/disk/by-id/ata-ST4000DM004-XXXX" # devices can be specified using full udev paths
# config: # configuration can be specified at the node level which overrides the cluster level config
# - name: "172.17.4.301"
# deviceFilter: "^sd."
# The section for configuring management of daemon disruptions during upgrade or fencing.
disruptionManagement:
# If true, the operator will create and manage PodDisruptionBudgets for OSD, Mon, RGW, and MDS daemons. OSD PDBs are managed dynamically
# via the strategy outlined in the [design](https://github.com/rook/rook/blob/master/design/ceph/ceph-managed-disruptionbudgets.md). The operator will
# block eviction of OSDs by default and unblock them safely when drains are detected.
managePodBudgets: true
# A duration in minutes that determines how long an entire failureDomain like `region/zone/host` will be held in `noout` (in addition to the
# default DOWN/OUT interval) when it is draining. This is only relevant when `managePodBudgets` is `true`. The default value is `30` minutes.
osdMaintenanceTimeout: 30
# Configure the healthcheck and liveness probes for ceph pods.
# Valid values for daemons are 'mon', 'osd', 'status'
healthCheck:
daemonHealth:
mon:
disabled: false
interval: 45s
osd:
disabled: false
interval: 60s
status:
disabled: false
interval: 60s
# Change pod liveness probe, it works for all mon, mgr, and osd pods.
livenessProbe:
mon:
disabled: false
mgr:
disabled: false
osd:
disabled: false
ingress:
# -- Enable an ingress for the ceph-dashboard
dashboard:
# {}
# labels:
# external-dns/private: "true"
annotations:
"route.openshift.io/termination": "passthrough"
# external-dns.alpha.kubernetes.io/hostname: dashboard.example.com
# nginx.ingress.kubernetes.io/rewrite-target: /ceph-dashboard/$2
# If the dashboard has ssl: true the following will make sure the NGINX Ingress controller can expose the dashboard correctly
# nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
# nginx.ingress.kubernetes.io/server-snippet: |
# proxy_ssl_verify off;
host:
name: ceph.apps.ncd0.harmony.mcd
path: null # TODO the chart does not allow removing the path, and it causes openshift to fail creating a route, because path is not supported with termination mode passthrough
pathType: ImplementationSpecific
tls:
- {}
# secretName: testsecret-tls
# Note: Only one of ingress class annotation or the `ingressClassName:` can be used at a time
# to set the ingress class
# ingressClassName: openshift-default
# labels:
# external-dns/private: "true"
# annotations:
# external-dns.alpha.kubernetes.io/hostname: dashboard.example.com
# nginx.ingress.kubernetes.io/rewrite-target: /ceph-dashboard/$2
# If the dashboard has ssl: true the following will make sure the NGINX Ingress controller can expose the dashboard correctly
# nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
# nginx.ingress.kubernetes.io/server-snippet: |
# proxy_ssl_verify off;
# host:
# name: dashboard.example.com
# path: "/ceph-dashboard(/|$)(.*)"
# pathType: Prefix
# tls:
# - hosts:
# - dashboard.example.com
# secretName: testsecret-tls
## Note: Only one of ingress class annotation or the `ingressClassName:` can be used at a time
## to set the ingress class
# ingressClassName: nginx
# -- A list of CephBlockPool configurations to deploy
# @default -- See [below](#ceph-block-pools)
cephBlockPools:
- name: ceph-blockpool
# see https://github.com/rook/rook/blob/master/Documentation/CRDs/Block-Storage/ceph-block-pool-crd.md#spec for available configuration
spec:
failureDomain: host
replicated:
size: 3
# Enables collecting RBD per-image IO statistics by enabling dynamic OSD performance counters. Defaults to false.
# For reference: https://docs.ceph.com/docs/latest/mgr/prometheus/#rbd-io-statistics
# enableRBDStats: true
storageClass:
enabled: true
name: ceph-block
annotations: {}
labels: {}
isDefault: true
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: "Immediate"
mountOptions: []
# see https://kubernetes.io/docs/concepts/storage/storage-classes/#allowed-topologies
allowedTopologies: []
# - matchLabelExpressions:
# - key: rook-ceph-role
# values:
# - storage-node
# see https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/Block-Storage-RBD/block-storage.md#provision-storage for available configuration
parameters:
# (optional) mapOptions is a comma-separated list of map options.
# For krbd options refer
# https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options
# For nbd options refer
# https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options
# mapOptions: lock_on_read,queue_depth=1024
# (optional) unmapOptions is a comma-separated list of unmap options.
# For krbd options refer
# https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options
# For nbd options refer
# https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options
# unmapOptions: force
# RBD image format. Defaults to "2".
imageFormat: "2"
# RBD image features, equivalent to OR'd bitfield value: 63
# Available for imageFormat: "2". Older releases of CSI RBD
# support only the `layering` feature. The Linux kernel (KRBD) supports the
# full feature complement as of 5.4
imageFeatures: layering
# These secrets contain Ceph admin credentials.
csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: "{{ .Release.Namespace }}"
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: "{{ .Release.Namespace }}"
csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
csi.storage.k8s.io/node-stage-secret-namespace: "{{ .Release.Namespace }}"
# Specify the filesystem type of the volume. If not specified, csi-provisioner
# will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
# in hyperconverged settings where the volume is mounted on the same node as the osds.
csi.storage.k8s.io/fstype: ext4
# -- A list of CephFileSystem configurations to deploy
# @default -- See [below](#ceph-file-systems)
cephFileSystems:
- name: ceph-filesystem
# see https://github.com/rook/rook/blob/master/Documentation/CRDs/Shared-Filesystem/ceph-filesystem-crd.md#filesystem-settings for available configuration
spec:
metadataPool:
replicated:
size: 3
dataPools:
- failureDomain: host
replicated:
size: 3
# Optional and highly recommended, 'data0' by default, see https://github.com/rook/rook/blob/master/Documentation/CRDs/Shared-Filesystem/ceph-filesystem-crd.md#pools
name: data0
metadataServer:
activeCount: 1
activeStandby: true
resources:
limits:
memory: "4Gi"
requests:
cpu: "1000m"
memory: "4Gi"
priorityClassName: system-cluster-critical
storageClass:
enabled: true
isDefault: false
name: ceph-filesystem
# (Optional) specify a data pool to use, must be the name of one of the data pools above, 'data0' by default
pool: data0
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: "Immediate"
annotations: {}
labels: {}
mountOptions: []
# see https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage.md#provision-storage for available configuration
parameters:
# The secrets contain Ceph admin credentials.
csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: "{{ .Release.Namespace }}"
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: "{{ .Release.Namespace }}"
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
csi.storage.k8s.io/node-stage-secret-namespace: "{{ .Release.Namespace }}"
# Specify the filesystem type of the volume. If not specified, csi-provisioner
# will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
# in hyperconverged settings where the volume is mounted on the same node as the osds.
csi.storage.k8s.io/fstype: ext4
# -- Settings for the filesystem snapshot class
# @default -- See [CephFS Snapshots](../Storage-Configuration/Ceph-CSI/ceph-csi-snapshot.md#cephfs-snapshots)
cephFileSystemVolumeSnapshotClass:
enabled: false
name: ceph-filesystem
isDefault: true
deletionPolicy: Delete
annotations: {}
labels: {}
# see https://rook.io/docs/rook/v1.10/Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#cephfs-snapshots for available configuration
parameters: {}
# -- Settings for the block pool snapshot class
# @default -- See [RBD Snapshots](../Storage-Configuration/Ceph-CSI/ceph-csi-snapshot.md#rbd-snapshots)
cephBlockPoolsVolumeSnapshotClass:
enabled: false
name: ceph-block
isDefault: false
deletionPolicy: Delete
annotations: {}
labels: {}
# see https://rook.io/docs/rook/v1.10/Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#rbd-snapshots for available configuration
parameters: {}
# -- A list of CephObjectStore configurations to deploy
# @default -- See [below](#ceph-object-stores)
cephObjectStores:
- name: ceph-objectstore
# see https://github.com/rook/rook/blob/master/Documentation/CRDs/Object-Storage/ceph-object-store-crd.md#object-store-settings for available configuration
spec:
metadataPool:
failureDomain: host
replicated:
size: 3
dataPool:
failureDomain: host
erasureCoded:
dataChunks: 2
codingChunks: 1
parameters:
bulk: "true"
preservePoolsOnDelete: true
gateway:
port: 80
resources:
limits:
memory: "2Gi"
requests:
cpu: "1000m"
memory: "1Gi"
# securePort: 443
# sslCertificateRef:
instances: 1
priorityClassName: system-cluster-critical
# opsLogSidecar:
# resources:
# limits:
# memory: "100Mi"
# requests:
# cpu: "100m"
# memory: "40Mi"
storageClass:
enabled: true
name: ceph-bucket
reclaimPolicy: Delete
volumeBindingMode: "Immediate"
annotations: {}
labels: {}
# see https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim.md#storageclass for available configuration
parameters:
# note: objectStoreNamespace and objectStoreName are configured by the chart
region: us-east-1
ingress:
# Enable an ingress for the ceph-objectstore
enabled: true
# The ingress port by default will be the object store's "securePort" (if set), or the gateway "port".
# To override those defaults, set this ingress port to the desired port.
# port: 80
# annotations: {}
host:
name: objectstore.apps.ncd0.harmony.mcd
path: /
pathType: Prefix
# tls:
# - hosts:
# - objectstore.example.com
# secretName: ceph-objectstore-tls
# ingressClassName: nginx
## cephECBlockPools are disabled by default, please remove the comments and set desired values to enable it
## For erasure coded a replicated metadata pool is required.
## https://rook.io/docs/rook/latest/CRDs/Shared-Filesystem/ceph-filesystem-crd/#erasure-coded
#cephECBlockPools:
# - name: ec-pool
# spec:
# metadataPool:
# replicated:
# size: 2
# dataPool:
# failureDomain: osd
# erasureCoded:
# dataChunks: 2
# codingChunks: 1
# deviceClass: hdd
#
# parameters:
# # clusterID is the namespace where the rook cluster is running
# # If you change this namespace, also change the namespace below where the secret namespaces are defined
# clusterID: rook-ceph # namespace:cluster
# # (optional) mapOptions is a comma-separated list of map options.
# # For krbd options refer
# # https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options
# # For nbd options refer
# # https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options
# # mapOptions: lock_on_read,queue_depth=1024
#
# # (optional) unmapOptions is a comma-separated list of unmap options.
# # For krbd options refer
# # https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options
# # For nbd options refer
# # https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options
# # unmapOptions: force
#
# # RBD image format. Defaults to "2".
# imageFormat: "2"
#
# # RBD image features, equivalent to OR'd bitfield value: 63
# # Available for imageFormat: "2". Older releases of CSI RBD
# # support only the `layering` feature. The Linux kernel (KRBD) supports the
# # full feature complement as of 5.4
# # imageFeatures: layering,fast-diff,object-map,deep-flatten,exclusive-lock
# imageFeatures: layering
#
# storageClass:
# provisioner: rook-ceph.rbd.csi.ceph.com # csi-provisioner-name
# enabled: true
# name: rook-ceph-block
# isDefault: false
# annotations: { }
# labels: { }
# allowVolumeExpansion: true
# reclaimPolicy: Delete
# -- CSI driver name prefix for cephfs, rbd and nfs.
# @default -- `namespace name where rook-ceph operator is deployed`
csiDriverNamePrefix:


@@ -0,0 +1,3 @@
#!/bin/bash
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph -f values.yaml


@@ -0,0 +1,674 @@
# Default values for rook-ceph-operator
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
# -- Image
repository: docker.io/rook/ceph
# -- Image tag
# @default -- `master`
tag: v1.17.1
# -- Image pull policy
pullPolicy: IfNotPresent
crds:
# -- Whether the helm chart should create and update the CRDs. If false, the CRDs must be
# managed independently with deploy/examples/crds.yaml.
# **WARNING** Only set during first deployment. If later disabled the cluster may be DESTROYED.
# If the CRDs are deleted in this case, see
# [the disaster recovery guide](https://rook.io/docs/rook/latest/Troubleshooting/disaster-recovery/#restoring-crds-after-deletion)
# to restore them.
enabled: true
# -- Pod resource requests & limits
resources:
limits:
memory: 512Mi
requests:
cpu: 200m
memory: 128Mi
# -- Kubernetes [`nodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) to add to the Deployment.
nodeSelector: {}
# Constraint rook-ceph-operator Deployment to nodes with label `disktype: ssd`.
# For more info, see https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# disktype: ssd
# -- List of Kubernetes [`tolerations`](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) to add to the Deployment.
tolerations: []
# -- Delay to use for the `node.kubernetes.io/unreachable` pod failure toleration to override
# the Kubernetes default of 5 minutes
unreachableNodeTolerationSeconds: 5
# -- Whether the operator should watch cluster CRD in its own namespace or not
currentNamespaceOnly: false
# -- Custom pod labels for the operator
operatorPodLabels: {}
# -- Pod annotations
annotations: {}
# -- Global log level for the operator.
# Options: `ERROR`, `WARNING`, `INFO`, `DEBUG`
logLevel: INFO
# -- If true, create & use RBAC resources
rbacEnable: true
rbacAggregate:
# -- If true, create a ClusterRole aggregated to [user facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) for objectbucketclaims
enableOBCs: false
# -- If true, create & use PSP resources
pspEnable: false
# -- Set the priority class for the rook operator deployment if desired
priorityClassName:
# -- Set the container security context for the operator
containerSecurityContext:
runAsNonRoot: true
runAsUser: 2016
runAsGroup: 2016
capabilities:
drop: ["ALL"]
# -- If true, loop devices are allowed to be used for osds in test clusters
allowLoopDevices: false
# Settings for whether to disable the drivers or other daemons if they are not
# needed
csi:
# -- Enable Ceph CSI RBD driver
enableRbdDriver: true
# -- Enable Ceph CSI CephFS driver
enableCephfsDriver: true
# -- Disable the CSI driver.
disableCsiDriver: "false"
# -- Enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary
# in some network configurations where the SDN does not provide access to an external cluster or
# there is significant drop in read/write performance
enableCSIHostNetwork: true
# -- Enable Snapshotter in CephFS provisioner pod
enableCephfsSnapshotter: true
# -- Enable Snapshotter in NFS provisioner pod
enableNFSSnapshotter: true
# -- Enable Snapshotter in RBD provisioner pod
enableRBDSnapshotter: true
# -- Enable Host mount for `/etc/selinux` directory for Ceph CSI nodeplugins
enablePluginSelinuxHostMount: false
# -- Enable Ceph CSI PVC encryption support
enableCSIEncryption: false
# -- Enable volume group snapshot feature. This feature is
# enabled by default as long as the necessary CRDs are available in the cluster.
enableVolumeGroupSnapshot: true
# -- PriorityClassName to be set on csi driver plugin pods
pluginPriorityClassName: system-node-critical
# -- PriorityClassName to be set on csi driver provisioner pods
provisionerPriorityClassName: system-cluster-critical
# -- Policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
rbdFSGroupPolicy: "File"
# -- Policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
cephFSFSGroupPolicy: "File"
# -- Policy for modifying a volume's ownership or permissions when the NFS PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
nfsFSGroupPolicy: "File"
# -- OMAP generator generates the omap mapping between the PV name and the RBD image
# which helps CSI to identify the rbd images for CSI operations.
# `CSI_ENABLE_OMAP_GENERATOR` needs to be enabled when we are using rbd mirroring feature.
# By default OMAP generator is disabled and when enabled, it will be deployed as a
# sidecar with CSI provisioner pod, to enable set it to true.
enableOMAPGenerator: false
# -- Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options.
# Set to "ms_mode=secure" when connections.encrypted is enabled in CephCluster CR
cephFSKernelMountOptions:
# -- Enable adding volume metadata on the CephFS subvolumes and RBD images.
# Not all users might be interested in getting volume/snapshot details as metadata on CephFS subvolume and RBD images.
# Hence enable metadata is false by default
enableMetadata: false
# -- Set replicas for csi provisioner deployment
provisionerReplicas: 2
# -- Cluster name identifier to set as metadata on the CephFS subvolume and RBD images. This will be useful
# in cases like for example, when two container orchestrator clusters (Kubernetes/OCP) are using a single ceph cluster
clusterName:
# -- Set logging level for cephCSI containers maintained by the cephCSI.
# Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity.
logLevel: 0
# -- Set logging level for Kubernetes-csi sidecar containers.
# Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity.
# @default -- `0`
sidecarLogLevel:
# -- CSI driver name prefix for cephfs, rbd and nfs.
# @default -- `namespace name where rook-ceph operator is deployed`
csiDriverNamePrefix:
# -- CSI RBD plugin daemonset update strategy, supported values are OnDelete and RollingUpdate
# @default -- `RollingUpdate`
rbdPluginUpdateStrategy:
# -- A maxUnavailable parameter of CSI RBD plugin daemonset update strategy.
# @default -- `1`
rbdPluginUpdateStrategyMaxUnavailable:
# -- CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate
# @default -- `RollingUpdate`
cephFSPluginUpdateStrategy:
# -- A maxUnavailable parameter of CSI cephFS plugin daemonset update strategy.
# @default -- `1`
cephFSPluginUpdateStrategyMaxUnavailable:
# -- CSI NFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate
# @default -- `RollingUpdate`
nfsPluginUpdateStrategy:
# -- Set GRPC timeout for csi containers (in seconds). It should be >= 120. If this value is not set or is invalid, it defaults to 150
grpcTimeoutInSeconds: 150
# -- Burst to use while communicating with the kubernetes apiserver.
kubeApiBurst:
# -- QPS to use while communicating with the kubernetes apiserver.
kubeApiQPS:
# -- The volume of the CephCSI RBD plugin DaemonSet
csiRBDPluginVolume:
# - name: lib-modules
# hostPath:
# path: /run/booted-system/kernel-modules/lib/modules/
# - name: host-nix
# hostPath:
# path: /nix
# -- The volume mounts of the CephCSI RBD plugin DaemonSet
csiRBDPluginVolumeMount:
# - name: host-nix
# mountPath: /nix
# readOnly: true
# -- The volume of the CephCSI CephFS plugin DaemonSet
csiCephFSPluginVolume:
# - name: lib-modules
# hostPath:
# path: /run/booted-system/kernel-modules/lib/modules/
# - name: host-nix
# hostPath:
# path: /nix
# -- The volume mounts of the CephCSI CephFS plugin DaemonSet
csiCephFSPluginVolumeMount:
# - name: host-nix
# mountPath: /nix
# readOnly: true
# -- CEPH CSI RBD provisioner resource requirement list
# csi-omap-generator resources will be applied only if `enableOMAPGenerator` is set to `true`
# @default -- see values.yaml
csiRBDProvisionerResource: |
- name : csi-provisioner
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-resizer
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-attacher
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-snapshotter
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-rbdplugin
resource:
requests:
memory: 512Mi
limits:
memory: 1Gi
- name : csi-omap-generator
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
- name : liveness-prometheus
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
# -- CEPH CSI RBD plugin resource requirement list
# @default -- see values.yaml
csiRBDPluginResource: |
- name : driver-registrar
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
- name : csi-rbdplugin
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
- name : liveness-prometheus
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
# -- CEPH CSI CephFS provisioner resource requirement list
# @default -- see values.yaml
csiCephFSProvisionerResource: |
- name : csi-provisioner
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-resizer
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-attacher
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-snapshotter
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-cephfsplugin
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
- name : liveness-prometheus
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
# -- CEPH CSI CephFS plugin resource requirement list
# @default -- see values.yaml
csiCephFSPluginResource: |
- name : driver-registrar
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
- name : csi-cephfsplugin
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
- name : liveness-prometheus
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
# -- CEPH CSI NFS provisioner resource requirement list
# @default -- see values.yaml
csiNFSProvisionerResource: |
- name : csi-provisioner
resource:
requests:
memory: 128Mi
cpu: 100m
limits:
memory: 256Mi
- name : csi-nfsplugin
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
- name : csi-attacher
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
# -- CEPH CSI NFS plugin resource requirement list
# @default -- see values.yaml
csiNFSPluginResource: |
- name : driver-registrar
resource:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
- name : csi-nfsplugin
resource:
requests:
memory: 512Mi
cpu: 250m
limits:
memory: 1Gi
# Set provisionerTolerations and provisionerNodeAffinity for provisioner pod.
# The CSI provisioner would be best to start on the same nodes as other ceph daemons.
# -- Array of tolerations in YAML format which will be added to CSI provisioner deployment
provisionerTolerations:
# - key: key
# operator: Exists
# effect: NoSchedule
# -- The node labels for affinity of the CSI provisioner deployment [^1]
provisionerNodeAffinity: #key1=value1,value2; key2=value3
# Set pluginTolerations and pluginNodeAffinity for plugin daemonset pods.
# The CSI plugins need to be started on all the nodes where the clients need to mount the storage.
# -- Array of tolerations in YAML format which will be added to CephCSI plugin DaemonSet
pluginTolerations:
# - key: key
# operator: Exists
# effect: NoSchedule
# -- The node labels for affinity of the CephCSI RBD plugin DaemonSet [^1]
pluginNodeAffinity: # key1=value1,value2; key2=value3
# -- Enable Ceph CSI Liveness sidecar deployment
enableLiveness: false
# -- CSI CephFS driver metrics port
# @default -- `9081`
cephfsLivenessMetricsPort:
# -- CSI Addons server port
# @default -- `9070`
csiAddonsPort:
# -- CSI Addons server port for the RBD provisioner
# @default -- `9070`
csiAddonsRBDProvisionerPort:
# -- CSI Addons server port for the Ceph FS provisioner
# @default -- `9070`
csiAddonsCephFSProvisionerPort:
# -- Enable Ceph Kernel clients on kernel < 4.17. If your kernel does not support quotas for CephFS
# you may want to disable this setting. However, this will cause an issue during upgrades
# with the FUSE client. See the [upgrade guide](https://rook.io/docs/rook/v1.2/ceph-upgrade.html)
forceCephFSKernelClient: true
# -- Ceph CSI RBD driver metrics port
# @default -- `8080`
rbdLivenessMetricsPort:
serviceMonitor:
# -- Enable ServiceMonitor for Ceph CSI drivers
enabled: false
# -- Service monitor scrape interval
interval: 10s
# -- ServiceMonitor additional labels
labels: {}
# -- Use a different namespace for the ServiceMonitor
namespace:
# -- Kubelet root directory path (if the Kubelet uses a different path for the `--root-dir` flag)
# @default -- `/var/lib/kubelet`
kubeletDirPath:
# -- Duration in seconds that non-leader candidates will wait to force acquire leadership.
# @default -- `137s`
csiLeaderElectionLeaseDuration:
# -- Deadline in seconds that the acting leader will retry refreshing leadership before giving up.
# @default -- `107s`
csiLeaderElectionRenewDeadline:
# -- Retry period in seconds the LeaderElector clients should wait between tries of actions.
# @default -- `26s`
csiLeaderElectionRetryPeriod:
cephcsi:
# -- Ceph CSI image repository
repository: quay.io/cephcsi/cephcsi
# -- Ceph CSI image tag
tag: v3.14.0
registrar:
# -- Kubernetes CSI registrar image repository
repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
# -- Registrar image tag
tag: v2.13.0
provisioner:
# -- Kubernetes CSI provisioner image repository
repository: registry.k8s.io/sig-storage/csi-provisioner
# -- Provisioner image tag
tag: v5.1.0
snapshotter:
# -- Kubernetes CSI snapshotter image repository
repository: registry.k8s.io/sig-storage/csi-snapshotter
# -- Snapshotter image tag
tag: v8.2.0
attacher:
# -- Kubernetes CSI Attacher image repository
repository: registry.k8s.io/sig-storage/csi-attacher
# -- Attacher image tag
tag: v4.8.0
resizer:
# -- Kubernetes CSI resizer image repository
repository: registry.k8s.io/sig-storage/csi-resizer
# -- Resizer image tag
tag: v1.13.1
# -- Image pull policy
imagePullPolicy: IfNotPresent
# -- Labels to add to the CSI CephFS Deployments and DaemonSets Pods
cephfsPodLabels: #"key1=value1,key2=value2"
# -- Labels to add to the CSI NFS Deployments and DaemonSets Pods
nfsPodLabels: #"key1=value1,key2=value2"
# -- Labels to add to the CSI RBD Deployments and DaemonSets Pods
rbdPodLabels: #"key1=value1,key2=value2"
csiAddons:
# -- Enable CSIAddons
enabled: false
# -- CSIAddons sidecar image repository
repository: quay.io/csiaddons/k8s-sidecar
# -- CSIAddons sidecar image tag
tag: v0.12.0
nfs:
# -- Enable the nfs csi driver
enabled: false
topology:
# -- Enable topology based provisioning
enabled: false
# NOTE: the value here serves as an example and needs to be
# updated with node labels that define domains of interest
# -- domainLabels define which node labels to use as domains
# for CSI nodeplugins to advertise their domains
domainLabels:
# - kubernetes.io/hostname
# - topology.kubernetes.io/zone
# - topology.rook.io/rack
# -- Whether to skip any attach operation altogether for CephFS PVCs. See more details
# [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
# If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation
# of pods using the CephFS PVC fast. **WARNING** It's highly discouraged to use this for
# CephFS RWO volumes. Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
cephFSAttachRequired: true
# -- Whether to skip any attach operation altogether for RBD PVCs. See more details
# [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
# If set to false it skips the volume attachments and makes the creation of pods using the RBD PVC fast.
# **WARNING** It's highly discouraged to use this for RWO volumes as it can cause data corruption.
# csi-addons operations like Reclaimspace and PVC Keyrotation will also not be supported if set
# to false since we'll have no VolumeAttachments to determine which node the PVC is mounted on.
# Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
rbdAttachRequired: true
# -- Whether to skip any attach operation altogether for NFS PVCs. See more details
# [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
# If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation
# of pods using the NFS PVC fast. **WARNING** It's highly discouraged to use this for
# NFS RWO volumes. Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
nfsAttachRequired: true
# -- Enable discovery daemon
enableDiscoveryDaemon: false
# -- Set the discovery daemon device discovery interval (default to 60m)
discoveryDaemonInterval: 60m
# -- The timeout for ceph commands in seconds
cephCommandsTimeoutSeconds: "15"
# -- If true, run rook operator on the host network
useOperatorHostNetwork:
# -- If true, scale down the rook operator.
# This is useful for administrative actions where the rook operator must be scaled down, while using gitops style tooling
# to deploy your helm charts.
scaleDownOperator: false
## Rook Discover configuration
## toleration: NoSchedule, PreferNoSchedule or NoExecute
## tolerationKey: Set this to the specific key of the taint to tolerate
## tolerations: Array of tolerations in YAML format which will be added to agent deployment
## nodeAffinity: Set to labels of the node to match
discover:
# -- Toleration for the discover pods.
# Options: `NoSchedule`, `PreferNoSchedule` or `NoExecute`
toleration:
# -- The specific key of the taint to tolerate
tolerationKey:
# -- Array of tolerations in YAML format which will be added to discover deployment
tolerations:
# - key: key
# operator: Exists
# effect: NoSchedule
# -- The node labels for affinity of `discover-agent` [^1]
nodeAffinity:
# key1=value1,value2; key2=value3
#
# or
#
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: storage-node
# operator: Exists
# -- Labels to add to the discover pods
podLabels: # "key1=value1,key2=value2"
# -- Add resources to discover daemon pods
resources:
# - limits:
# memory: 512Mi
# - requests:
# cpu: 100m
# memory: 128Mi
# -- Custom label to identify node hostname. If not set `kubernetes.io/hostname` will be used
customHostnameLabel:
# -- Runs Ceph Pods as privileged to be able to write to `hostPaths` in OpenShift with SELinux restrictions.
hostpathRequiresPrivileged: false
# -- Whether to create all Rook pods to run on the host network, for example in environments where a CNI is not enabled
enforceHostNetwork: false
# -- Disable automatic orchestration when new devices are discovered.
disableDeviceHotplug: false
# -- The revision history limit for all pods created by Rook. If blank, the K8s default is 10.
revisionHistoryLimit:
# -- Blacklist certain disks according to the regex provided.
discoverDaemonUdev:
# -- imagePullSecrets allows pulling Docker images from a private registry. The option is passed to all service accounts.
imagePullSecrets:
# - name: my-registry-secret
# -- Whether the OBC provisioner should watch the operator namespace; if not, the cluster's namespace is used
enableOBCWatchOperatorNamespace: true
# -- Specify the prefix for the OBC provisioner in place of the cluster namespace
# @default -- `ceph cluster namespace`
obcProvisionerNamePrefix:
# -- Many of the additional OBC config fields can be risky for administrators to let users control.
# The safe and default-allowed fields are 'maxObjects' and 'maxSize'.
# Other fields should be considered risky. To allow all additional configs, use this value:
# "maxObjects,maxSize,bucketMaxObjects,bucketMaxSize,bucketPolicy,bucketLifecycle,bucketOwner"
# @default -- "maxObjects,maxSize"
obcAllowAdditionalConfigFields: "maxObjects,maxSize"
monitoring:
# -- Enable monitoring. Requires Prometheus to be pre-installed.
# Enabling will also create RBAC rules to allow Operator to create ServiceMonitors
enabled: false

View File

@@ -1,26 +1,145 @@
use std::{
net::{IpAddr, Ipv4Addr},
sync::Arc,
};
use cidr::Ipv4Cidr;
use harmony::{
hardware::{FirewallGroup, HostCategory, Location, PhysicalHost, SwitchGroup},
infra::opnsense::OPNSenseManagementInterface,
inventory::Inventory,
maestro::Maestro,
modules::dummy::{ErrorScore, PanicScore, SuccessScore},
topology::HAClusterTopology,
modules::{
http::HttpScore,
ipxe::IpxeScore,
okd::{
bootstrap_dhcp::OKDBootstrapDhcpScore,
bootstrap_load_balancer::OKDBootstrapLoadBalancerScore, dhcp::OKDDhcpScore,
dns::OKDDnsScore,
},
tftp::TftpScore,
},
topology::{LogicalHost, UnmanagedRouter, Url},
};
use harmony_macros::{ip, mac_address};
#[tokio::main]
async fn main() {
let inventory = Inventory::autoload();
let topology = HAClusterTopology::autoload();
let mut maestro = Maestro::initialize(inventory, topology).await.unwrap();
let firewall = harmony::topology::LogicalHost {
ip: ip!("192.168.33.1"),
name: String::from("fw0"),
};
let opnsense = Arc::new(
harmony::infra::opnsense::OPNSenseFirewall::new(firewall, None, "root", "opnsense").await,
);
let lan_subnet = Ipv4Addr::new(192, 168, 33, 0);
let gateway_ipv4 = Ipv4Addr::new(192, 168, 33, 1);
let gateway_ip = IpAddr::V4(gateway_ipv4);
let topology = harmony::topology::HAClusterTopology {
domain_name: "ncd0.harmony.mcd".to_string(), // TODO this must be set manually correctly
// when setting up the opnsense firewall
router: Arc::new(UnmanagedRouter::new(
gateway_ip,
Ipv4Cidr::new(lan_subnet, 24).unwrap(),
)),
load_balancer: opnsense.clone(),
firewall: opnsense.clone(),
tftp_server: opnsense.clone(),
http_server: opnsense.clone(),
dhcp_server: opnsense.clone(),
dns_server: opnsense.clone(),
control_plane: vec![
LogicalHost {
ip: ip!("192.168.33.20"),
name: "cp0".to_string(),
},
LogicalHost {
ip: ip!("192.168.33.21"),
name: "cp1".to_string(),
},
LogicalHost {
ip: ip!("192.168.33.22"),
name: "cp2".to_string(),
},
],
bootstrap_host: LogicalHost {
ip: ip!("192.168.33.66"),
name: "bootstrap".to_string(),
},
workers: vec![
LogicalHost {
ip: ip!("192.168.33.30"),
name: "wk0".to_string(),
},
LogicalHost {
ip: ip!("192.168.33.31"),
name: "wk1".to_string(),
},
LogicalHost {
ip: ip!("192.168.33.32"),
name: "wk2".to_string(),
},
],
switch: vec![],
};
let inventory = Inventory {
location: Location::new("I am mobile".to_string(), "earth".to_string()),
switch: SwitchGroup::from([]),
firewall: FirewallGroup::from([PhysicalHost::empty(HostCategory::Firewall)
.management(Arc::new(OPNSenseManagementInterface::new()))]),
storage_host: vec![],
worker_host: vec![
PhysicalHost::empty(HostCategory::Server)
.mac_address(mac_address!("C4:62:37:02:61:0F")),
PhysicalHost::empty(HostCategory::Server)
.mac_address(mac_address!("C4:62:37:02:61:26")),
// TODO for this host: create the iPXE file, set the DNS static leases,
// bootstrap the nodes, start the Ceph cluster, then try installing the
// LAMP score.
PhysicalHost::empty(HostCategory::Server)
.mac_address(mac_address!("C4:62:37:02:61:70")),
],
control_plane_host: vec![
PhysicalHost::empty(HostCategory::Server)
.mac_address(mac_address!("C4:62:37:02:60:FA")),
PhysicalHost::empty(HostCategory::Server)
.mac_address(mac_address!("C4:62:37:02:61:1A")),
PhysicalHost::empty(HostCategory::Server)
.mac_address(mac_address!("C4:62:37:01:BC:68")),
],
};
// TODO: regroup the smaller scores into a larger one such as this
// let okd_bootstrap_preparation();
let bootstrap_dhcp_score = OKDBootstrapDhcpScore::new(&topology, &inventory);
let bootstrap_load_balancer_score = OKDBootstrapLoadBalancerScore::new(&topology);
let dhcp_score = OKDDhcpScore::new(&topology, &inventory);
let dns_score = OKDDnsScore::new(&topology);
let load_balancer_score =
harmony::modules::okd::load_balancer::OKDLoadBalancerScore::new(&topology);
let tftp_score = TftpScore::new(Url::LocalFolder("./data/watchguard/tftpboot".to_string()));
let http_score = HttpScore::new(Url::LocalFolder(
"./data/watchguard/pxe-http-files".to_string(),
));
let ipxe_score = IpxeScore::new();
let mut maestro = Maestro::initialize(inventory, topology).await.unwrap();
maestro.register_all(vec![
// ADD scores :
// 1. OPNSense setup scores
// 2. Bootstrap node setup
// 3. Control plane setup
// 4. Workers setup
// 5. Various tools and apps setup
Box::new(SuccessScore {}),
Box::new(ErrorScore {}),
Box::new(PanicScore {}),
Box::new(dns_score),
Box::new(bootstrap_dhcp_score),
Box::new(bootstrap_load_balancer_score),
Box::new(load_balancer_score),
Box::new(tftp_score),
Box::new(http_score),
Box::new(ipxe_score),
Box::new(dhcp_score),
]);
harmony_tui::init(maestro).await.unwrap();
}

View File

@@ -13,23 +13,23 @@ rust-ipmi = "0.1.1"
semver = "1.0.23"
serde = { version = "1.0.209", features = ["derive"] }
serde_json = "1.0.127"
tokio = { workspace = true }
derive-new = { workspace = true }
log = { workspace = true }
env_logger = { workspace = true }
async-trait = { workspace = true }
cidr = { workspace = true }
tokio.workspace = true
derive-new.workspace = true
log.workspace = true
env_logger.workspace = true
async-trait.workspace = true
cidr.workspace = true
opnsense-config = { path = "../opnsense-config" }
opnsense-config-xml = { path = "../opnsense-config-xml" }
harmony_macros = { path = "../harmony_macros" }
harmony_types = { path = "../harmony_types" }
uuid = { workspace = true }
url = { workspace = true }
kube = { workspace = true }
k8s-openapi = { workspace = true }
serde_yaml = { workspace = true }
http = { workspace = true }
serde-value = { workspace = true }
uuid.workspace = true
url.workspace = true
kube.workspace = true
k8s-openapi.workspace = true
serde_yaml.workspace = true
http.workspace = true
serde-value.workspace = true
inquire.workspace = true
helm-wrapper-rs = "0.4.0"
non-blank-string-rs = "1.0.4"
@@ -38,3 +38,15 @@ directories = "6.0.0"
lazy_static = "1.5.0"
dockerfile_builder = "0.1.5"
temp-file = "0.1.9"
convert_case.workspace = true
email_address = "0.2.9"
fqdn = { version = "0.4.6", features = [
"domain-label-cannot-start-or-end-with-hyphen",
"domain-label-length-limited-to-63",
"domain-name-without-special-chars",
"domain-name-length-limited-to-255",
"punycode",
"serde",
] }
temp-dir = "0.1.14"
dyn-clone = "1.0.19"

View File

@@ -6,4 +6,8 @@ lazy_static! {
.unwrap()
.data_dir()
.join("harmony");
pub static ref REGISTRY_URL: String =
std::env::var("HARMONY_REGISTRY_URL").unwrap_or_else(|_| "hub.nationtech.io".to_string());
pub static ref REGISTRY_PROJECT: String =
std::env::var("HARMONY_REGISTRY_PROJECT").unwrap_or_else(|_| "harmony".to_string());
}
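Both statics resolve their environment override once, at first dereference; a short usage sketch (the image name is a placeholder):

// Resolves HARMONY_REGISTRY_URL / HARMONY_REGISTRY_PROJECT if set,
// otherwise falls back to the defaults above.
let image = format!("{}/{}/myapp:latest", *REGISTRY_URL, *REGISTRY_PROJECT);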

View File

@@ -1,6 +1,6 @@
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize)]
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct Id {
value: String,
}
@@ -10,3 +10,9 @@ impl Id {
Self { value }
}
}
impl std::fmt::Display for Id {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(&self.value)
}
}

View File

@@ -138,7 +138,8 @@ impl ManagementInterface for ManualManagementInterface {
}
fn get_supported_protocol_names(&self) -> String {
todo!()
// todo!()
"none".to_string()
}
}

View File

@@ -15,10 +15,12 @@ pub enum InterpretName {
LoadBalancer,
Tftp,
Http,
Ipxe,
Dummy,
Panic,
OPNSense,
K3dInstallation,
TenantInterpret,
}
impl std::fmt::Display for InterpretName {
@@ -29,10 +31,12 @@ impl std::fmt::Display for InterpretName {
InterpretName::LoadBalancer => f.write_str("LoadBalancer"),
InterpretName::Tftp => f.write_str("Tftp"),
InterpretName::Http => f.write_str("Http"),
InterpretName::Ipxe => f.write_str("iPXE"),
InterpretName::Dummy => f.write_str("Dummy"),
InterpretName::Panic => f.write_str("Panic"),
InterpretName::OPNSense => f.write_str("OPNSense"),
InterpretName::K3dInstallation => f.write_str("K3dInstallation"),
InterpretName::TenantInterpret => f.write_str("Tenant"),
}
}
}

View File

@@ -168,6 +168,16 @@ impl DhcpServer for HAClusterTopology {
async fn commit_config(&self) -> Result<(), ExecutorError> {
self.dhcp_server.commit_config().await
}
async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError> {
self.dhcp_server.set_filename(filename).await
}
async fn set_filename64(&self, filename64: &str) -> Result<(), ExecutorError> {
self.dhcp_server.set_filename64(filename64).await
}
async fn set_filenameipxe(&self, filenameipxe: &str) -> Result<(), ExecutorError> {
self.dhcp_server.set_filenameipxe(filenameipxe).await
}
}
#[async_trait]
@@ -293,6 +303,15 @@ impl DhcpServer for DummyInfra {
async fn set_boot_filename(&self, _boot_filename: &str) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn set_filename(&self, _filename: &str) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn set_filename64(&self, _filename: &str) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn set_filenameipxe(&self, _filenameipxe: &str) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
fn get_ip(&self) -> IpAddress {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}

View File

@@ -1,6 +1,11 @@
use derive_new::new;
use k8s_openapi::NamespaceResourceScope;
use kube::{Api, Client, Error, Resource, api::PostParams};
use kube::{
Api, Client, Config, Error, Resource,
api::PostParams,
config::{KubeConfigOptions, Kubeconfig},
};
use log::error;
use serde::de::DeserializeOwned;
#[derive(new)]
@@ -38,7 +43,11 @@ impl K8sClient {
Ok(result)
}
pub async fn apply_namespaced<K>(&self, resource: &Vec<K>) -> Result<K, Error>
pub async fn apply_namespaced<K>(
&self,
resource: &Vec<K>,
ns: Option<&str>,
) -> Result<Vec<K>, Error>
where
K: Resource<Scope = NamespaceResourceScope>
+ Clone
@@ -48,10 +57,32 @@ impl K8sClient {
+ Default,
<K as kube::Resource>::DynamicType: Default,
{
let mut resources = Vec::new();
for r in resource.iter() {
let api: Api<K> = Api::default_namespaced(self.client.clone());
api.create(&PostParams::default(), &r).await?;
let api: Api<K> = match ns {
Some(ns) => Api::namespaced(self.client.clone(), ns),
None => Api::default_namespaced(self.client.clone()),
};
resources.push(api.create(&PostParams::default(), &r).await?);
}
todo!("")
Ok(resources)
}
pub(crate) async fn from_kubeconfig(path: &str) -> Option<K8sClient> {
let k = match Kubeconfig::read_from(path) {
Ok(k) => k,
Err(e) => {
error!("Failed to load kubeconfig from {path} : {e}");
return None;
}
};
Some(K8sClient::new(
Client::try_from(
Config::from_custom_kubeconfig(k, &KubeConfigOptions::default())
.await
.unwrap(),
)
.unwrap(),
))
}
}
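A minimal usage sketch of the new `apply_namespaced` signature, assuming an existing `client: K8sClient` and a `my_configmap` value (both hypothetical here):

use k8s_openapi::api::core::v1::ConfigMap;

// Passing `Some(ns)` targets that namespace; `None` falls back to the
// client's default namespace.
let created: Vec<ConfigMap> = client
    .apply_namespaced(&vec![my_configmap], Some("harmony-system"))
    .await?;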

View File

@@ -6,27 +6,46 @@ use log::{info, warn};
use tokio::sync::OnceCell;
use crate::{
executors::ExecutorError,
interpret::{InterpretError, Outcome},
inventory::Inventory,
maestro::Maestro,
modules::k3d::K3DInstallationScore,
modules::{
k3d::K3DInstallationScore,
monitoring::kube_prometheus::kube_prometheus_helm_chart_score::kube_prometheus_helm_chart_score,
},
topology::LocalhostTopology,
};
use super::{HelmCommand, K8sclient, Topology, k8s::K8sClient};
use super::{
HelmCommand, K8sclient, Topology,
k8s::K8sClient,
observability::{
K8sMonitorConfig,
k8s::K8sMonitor,
monitoring::{AlertChannel, AlertChannelConfig, Monitor},
},
tenant::{
ResourceLimits, TenantConfig, TenantManager, TenantNetworkPolicy, k8s::K8sTenantManager,
},
};
struct K8sState {
client: Arc<K8sClient>,
_source: K8sSource,
source: K8sSource,
message: String,
}
#[derive(Debug)]
enum K8sSource {
LocalK3d,
Kubeconfig,
}
pub struct K8sAnywhereTopology {
k8s_state: OnceCell<Option<K8sState>>,
tenant_manager: OnceCell<K8sTenantManager>,
k8s_monitor: OnceCell<K8sMonitor>,
}
#[async_trait]
@@ -50,6 +69,8 @@ impl K8sAnywhereTopology {
pub fn new() -> Self {
Self {
k8s_state: OnceCell::new(),
tenant_manager: OnceCell::new(),
k8s_monitor: OnceCell::new(),
}
}
@@ -75,7 +96,7 @@ impl K8sAnywhereTopology {
}
async fn try_load_kubeconfig(&self, path: &str) -> Option<K8sClient> {
todo!("Use kube-rs to load kubeconfig at path {path}");
K8sClient::from_kubeconfig(path).await
}
fn get_k3d_installation_score(&self) -> K3DInstallationScore {
@@ -91,9 +112,7 @@ impl K8sAnywhereTopology {
async fn try_get_or_install_k8s_client(&self) -> Result<Option<K8sState>, InterpretError> {
let k8s_anywhere_config = K8sAnywhereConfig {
kubeconfig: std::env::var("HARMONY_KUBECONFIG")
.ok()
.map(|v| v.to_string()),
kubeconfig: std::env::var("KUBECONFIG").ok().map(|v| v.to_string()),
use_system_kubeconfig: std::env::var("HARMONY_USE_SYSTEM_KUBECONFIG")
.map_or_else(|_| false, |v| v.parse().ok().unwrap_or(false)),
autoinstall: std::env::var("HARMONY_AUTOINSTALL")
@@ -109,8 +128,18 @@ impl K8sAnywhereTopology {
if let Some(kubeconfig) = k8s_anywhere_config.kubeconfig {
match self.try_load_kubeconfig(&kubeconfig).await {
Some(_client) => todo!(),
None => todo!(),
Some(client) => {
return Ok(Some(K8sState {
client: Arc::new(client),
source: K8sSource::Kubeconfig,
message: format!("Loaded k8s client from kubeconfig {kubeconfig}"),
}));
}
None => {
return Err(InterpretError::new(format!(
"Failed to load kubeconfig from {kubeconfig}"
)));
}
}
}
@@ -142,7 +171,7 @@ impl K8sAnywhereTopology {
let state = match k3d.get_client().await {
Ok(client) => K8sState {
client: Arc::new(K8sClient::new(client)),
_source: K8sSource::LocalK3d,
source: K8sSource::LocalK3d,
message: "Successfully installed K3D cluster and acquired client".to_string(),
},
Err(_) => todo!(),
@@ -150,6 +179,39 @@ impl K8sAnywhereTopology {
Ok(Some(state))
}
fn get_k8s_tenant_manager(&self) -> Result<&K8sTenantManager, ExecutorError> {
match self.tenant_manager.get() {
Some(t) => Ok(t),
None => Err(ExecutorError::UnexpectedError(
"K8sTenantManager not available".to_string(),
)),
}
}
async fn ensure_k8s_monitor(&self) -> Result<(), String> {
if self.k8s_monitor.get().is_some() {
return Ok(());
}
self.k8s_monitor
.get_or_try_init(async || -> Result<K8sMonitor, String> {
let config = K8sMonitorConfig::cluster_monitor();
Ok(K8sMonitor { config })
})
.await?;
Ok(())
}
fn get_k8s_monitor(&self) -> Result<&K8sMonitor, ExecutorError> {
match self.k8s_monitor.get() {
Some(k) => Ok(k),
None => Err(ExecutorError::UnexpectedError(
"K8sMonitor not available".to_string(),
)),
}
}
}
struct K8sAnywhereConfig {
@@ -189,6 +251,10 @@ impl Topology for K8sAnywhereTopology {
"No K8s client could be found or installed".to_string(),
))?;
self.ensure_k8s_monitor()
.await
.map_err(InterpretError::new)?;
match self.is_helm_available() {
Ok(()) => Ok(Outcome::success(format!(
"{} + helm available",
@@ -200,3 +266,55 @@ impl Topology for K8sAnywhereTopology {
}
impl HelmCommand for K8sAnywhereTopology {}
#[async_trait]
impl TenantManager for K8sAnywhereTopology {
async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError> {
self.get_k8s_tenant_manager()?
.provision_tenant(config)
.await
}
async fn update_tenant_resource_limits(
&self,
tenant_name: &str,
new_limits: &ResourceLimits,
) -> Result<(), ExecutorError> {
self.get_k8s_tenant_manager()?
.update_tenant_resource_limits(tenant_name, new_limits)
.await
}
async fn update_tenant_network_policy(
&self,
tenant_name: &str,
new_policy: &TenantNetworkPolicy,
) -> Result<(), ExecutorError> {
self.get_k8s_tenant_manager()?
.update_tenant_network_policy(tenant_name, new_policy)
.await
}
async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError> {
self.get_k8s_tenant_manager()?
.deprovision_tenant(tenant_name)
.await
}
}
#[async_trait]
impl Monitor for K8sAnywhereTopology {
async fn provision_monitor<T: Topology + HelmCommand>(
&self,
inventory: &Inventory,
topology: &T,
alert_receivers: Option<Vec<Box<dyn AlertChannelConfig>>>,
) -> Result<Outcome, InterpretError> {
self.get_k8s_monitor()?
.provision_monitor(inventory, topology, alert_receivers)
.await
}
fn delete_monitor(&self) -> Result<Outcome, InterpretError> {
todo!()
}
}

View File

@@ -7,6 +7,12 @@ use serde::Serialize;
use super::{IpAddress, LogicalHost};
use crate::executors::ExecutorError;
impl std::fmt::Debug for dyn LoadBalancer {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_fmt(format_args!("LoadBalancer {}", self.get_ip()))
}
}
#[async_trait]
pub trait LoadBalancer: Send + Sync {
fn get_ip(&self) -> IpAddress;
@@ -32,11 +38,6 @@ pub trait LoadBalancer: Send + Sync {
}
}
impl std::fmt::Debug for dyn LoadBalancer {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_fmt(format_args!("LoadBalancer {}", self.get_ip()))
}
}
#[derive(Debug, PartialEq, Clone, Serialize)]
pub struct LoadBalancerService {
pub backend_servers: Vec<BackendServer>,

View File

@@ -3,6 +3,8 @@ mod host_binding;
mod http;
mod k8s_anywhere;
mod localhost;
pub mod observability;
pub mod tenant;
pub use k8s_anywhere::*;
pub use localhost::*;
pub mod k8s;

View File

@@ -53,6 +53,9 @@ pub trait DhcpServer: Send + Sync + std::fmt::Debug {
async fn list_static_mappings(&self) -> Vec<(MacAddress, IpAddress)>;
async fn set_next_server(&self, ip: IpAddress) -> Result<(), ExecutorError>;
async fn set_boot_filename(&self, boot_filename: &str) -> Result<(), ExecutorError>;
async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError>;
async fn set_filename64(&self, filename64: &str) -> Result<(), ExecutorError>;
async fn set_filenameipxe(&self, filenameipxe: &str) -> Result<(), ExecutorError>;
fn get_ip(&self) -> IpAddress;
fn get_host(&self) -> LogicalHost;
async fn commit_config(&self) -> Result<(), ExecutorError>;

View File

@@ -0,0 +1,71 @@
use std::sync::Arc;
use async_trait::async_trait;
use serde::Serialize;
use crate::score::Score;
use crate::topology::HelmCommand;
use crate::{
interpret::{InterpretError, Outcome},
inventory::Inventory,
topology::Topology,
};
use super::{
K8sMonitorConfig,
monitoring::{AlertChannel, AlertChannelConfig, Monitor},
};
#[derive(Debug, Clone, Serialize)]
pub struct K8sMonitor {
pub config: K8sMonitorConfig,
}
#[async_trait]
impl Monitor for K8sMonitor {
async fn provision_monitor<T: Topology + HelmCommand>(
&self,
inventory: &Inventory,
topology: &T,
alert_channels: Option<Vec<Box<dyn AlertChannelConfig>>>,
) -> Result<Outcome, InterpretError> {
if let Some(channels) = alert_channels {
let alert_channels = self.build_alert_channels(channels).await?;
for channel in alert_channels {
channel.register_alert_channel().await?;
}
}
let chart = self.config.chart.clone();
chart
.create_interpret()
.execute(inventory, topology)
.await?;
Ok(Outcome::success("installed monitor".to_string()))
}
fn delete_monitor(&self) -> Result<Outcome, InterpretError> {
todo!()
}
}
#[async_trait]
impl AlertChannelConfig for K8sMonitor {
async fn build_alert_channel(&self) -> Result<Box<dyn AlertChannel>, InterpretError> {
todo!()
}
}
impl K8sMonitor {
pub async fn build_alert_channels(
&self,
alert_channel_configs: Vec<Box<dyn AlertChannelConfig>>,
) -> Result<Vec<Box<dyn AlertChannel>>, InterpretError> {
let mut alert_channels = Vec::new();
for config in alert_channel_configs {
let channel = config.build_alert_channel().await?;
alert_channels.push(channel)
}
Ok(alert_channels)
}
}

View File

@@ -0,0 +1,23 @@
use serde::Serialize;
use crate::modules::{
helm::chart::HelmChartScore,
monitoring::kube_prometheus::kube_prometheus_helm_chart_score::kube_prometheus_helm_chart_score,
};
pub mod k8s;
pub mod monitoring;
#[derive(Debug, Clone, Serialize)]
pub struct K8sMonitorConfig {
// TODO: model the monitor configuration more richly than a raw HelmChartScore
pub chart: HelmChartScore,
}
impl K8sMonitorConfig {
pub fn cluster_monitor() -> Self {
Self {
chart: kube_prometheus_helm_chart_score(),
}
}
}
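A sketch of wiring this config into a monitor, mirroring what the `OnceCell` init in `K8sAnywhereTopology` does:

let monitor = K8sMonitor {
    config: K8sMonitorConfig::cluster_monitor(),
};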

View File

@@ -0,0 +1,39 @@
use async_trait::async_trait;
use dyn_clone::DynClone;
use std::fmt::Debug;
use crate::executors::ExecutorError;
use crate::interpret::InterpretError;
use crate::inventory::Inventory;
use crate::topology::HelmCommand;
use crate::{interpret::Outcome, topology::Topology};
/// Represents an entity responsible for collecting and organizing observability data
/// from various telemetry sources such as Prometheus or Datadog
/// A `Monitor` abstracts the logic required to scrape, aggregate, and structure
/// monitoring data, enabling consistent processing regardless of the underlying data source.
#[async_trait]
pub trait Monitor {
async fn provision_monitor<T: Topology + HelmCommand>(
&self,
inventory: &Inventory,
topology: &T,
alert_receivers: Option<Vec<Box<dyn AlertChannelConfig>>>,
) -> Result<Outcome, InterpretError>;
fn delete_monitor(&self) -> Result<Outcome, InterpretError>;
}
#[async_trait]
pub trait AlertChannel: Debug + Send + Sync {
async fn register_alert_channel(&self) -> Result<Outcome, ExecutorError>;
//async fn get_channel_id(&self) -> String;
}
#[async_trait]
pub trait AlertChannelConfig: Debug + Send + Sync + DynClone {
async fn build_alert_channel(&self) -> Result<Box<dyn AlertChannel>, InterpretError>;
}
dyn_clone::clone_trait_object!(AlertChannelConfig);
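A minimal sketch of a channel pair implementing these traits, reusing the imports above; the `WebhookChannel` type and its endpoint are illustrative, not part of this change:

#[derive(Debug, Clone)]
struct WebhookChannelConfig {
    url: String,
}

#[derive(Debug)]
struct WebhookChannel {
    url: String,
}

#[async_trait]
impl AlertChannelConfig for WebhookChannelConfig {
    async fn build_alert_channel(&self) -> Result<Box<dyn AlertChannel>, InterpretError> {
        // Validation of the endpoint would happen here.
        Ok(Box::new(WebhookChannel {
            url: self.url.clone(),
        }))
    }
}

#[async_trait]
impl AlertChannel for WebhookChannel {
    async fn register_alert_channel(&self) -> Result<Outcome, ExecutorError> {
        // A real channel would register its receiver with the monitoring stack.
        Ok(Outcome::success(format!("registered webhook {}", self.url)))
    }
}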

View File

@@ -0,0 +1,95 @@
use std::sync::Arc;
use crate::{executors::ExecutorError, topology::k8s::K8sClient};
use async_trait::async_trait;
use derive_new::new;
use k8s_openapi::api::core::v1::Namespace;
use serde_json::json;
use super::{ResourceLimits, TenantConfig, TenantManager, TenantNetworkPolicy};
#[derive(new)]
pub struct K8sTenantManager {
k8s_client: Arc<K8sClient>,
}
#[async_trait]
impl TenantManager for K8sTenantManager {
async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError> {
let namespace = json!(
{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"labels": {
"harmony.nationtech.io/tenant.id": config.id,
"harmony.nationtech.io/tenant.name": config.name,
},
"name": config.name,
},
}
);
todo!("Validate that when tenant already exists (by id) that name has not changed");
let namespace: Namespace = serde_json::from_value(namespace).unwrap();
let resource_quota = json!(
{
"apiVersion": "v1",
"kind": "List",
"items": [
{
"apiVersion": "v1",
"kind": "ResourceQuota",
"metadata": {
"name": config.name,
"labels": {
"harmony.nationtech.io/tenant.id": config.id,
"harmony.nationtech.io/tenant.name": config.name,
},
"namespace": config.name,
},
"spec": {
"hard": {
"limits.cpu": format!("{:.0}",config.resource_limits.cpu_limit_cores),
"limits.memory": format!("{:.3}Gi", config.resource_limits.memory_limit_gb),
"requests.cpu": format!("{:.0}",config.resource_limits.cpu_request_cores),
"requests.memory": format!("{:.3}Gi", config.resource_limits.memory_request_gb),
"requests.storage": format!("{:.3}", config.resource_limits.storage_total_gb),
"pods": "20",
"services": "10",
"configmaps": "30",
"secrets": "30",
"persistentvolumeclaims": "15",
"services.loadbalancers": "2",
"services.nodeports": "5",
}
}
}
]
}
);
}
async fn update_tenant_resource_limits(
&self,
tenant_name: &str,
new_limits: &ResourceLimits,
) -> Result<(), ExecutorError> {
todo!()
}
async fn update_tenant_network_policy(
&self,
tenant_name: &str,
new_policy: &TenantNetworkPolicy,
) -> Result<(), ExecutorError> {
todo!()
}
async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError> {
todo!()
}
}

View File

@@ -0,0 +1,46 @@
use super::*;
use async_trait::async_trait;
use crate::executors::ExecutorError;
#[async_trait]
pub trait TenantManager {
/// Provisions a new tenant based on the provided configuration.
/// This operation should be idempotent; if a tenant with the same `config.name`
/// already exists and matches the config, it will succeed without changes.
/// If it exists but differs, it will be updated, or return an error if the update
/// action is not supported
///
/// # Arguments
/// * `config`: The desired configuration for the new tenant.
async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError>;
/// Updates the resource limits for an existing tenant.
///
/// # Arguments
/// * `tenant_name`: The logical name of the tenant to update.
/// * `new_limits`: The new set of resource limits to apply.
async fn update_tenant_resource_limits(
&self,
tenant_name: &str,
new_limits: &ResourceLimits,
) -> Result<(), ExecutorError>;
/// Updates the high-level network isolation policy for an existing tenant.
///
/// # Arguments
/// * `tenant_name`: The logical name of the tenant to update.
/// * `new_policy`: The new network policy to apply.
async fn update_tenant_network_policy(
&self,
tenant_name: &str,
new_policy: &TenantNetworkPolicy,
) -> Result<(), ExecutorError>;
/// Decommissions an existing tenant, removing its isolated context and associated resources.
/// This operation should be idempotent.
///
/// # Arguments
/// * `tenant_name`: The logical name of the tenant to deprovision.
async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError>;
}
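A sketch of the intended call flow against any `TenantManager` implementation; `tenant_manager`, `config`, and `new_limits` are assumed bindings:

// Idempotent: re-running with the same config succeeds without changes.
tenant_manager.provision_tenant(&config).await?;

// Later lifecycle operations address the tenant by its logical name.
tenant_manager
    .update_tenant_resource_limits(&config.name, &new_limits)
    .await?;
tenant_manager.deprovision_tenant(&config.name).await?;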

View File

@@ -0,0 +1,67 @@
pub mod k8s;
mod manager;
pub use manager::*;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use crate::data::Id;
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)] // Assuming serde for Scores
pub struct TenantConfig {
/// This will be used as the primary unique identifier for management operations and will never
/// change for the entire lifetime of the tenant
pub id: Id,
/// A human-readable name for the tenant (e.g., "client-alpha", "project-phoenix").
pub name: String,
/// Desired resource allocations and limits for the tenant.
pub resource_limits: ResourceLimits,
/// High-level network isolation policies for the tenant.
pub network_policy: TenantNetworkPolicy,
/// Key-value pairs for provider-specific tagging, labeling, or metadata.
/// Useful for billing, organization, or filtering within the provider's console.
pub labels_or_tags: HashMap<String, String>,
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Default)]
pub struct ResourceLimits {
/// Requested/guaranteed CPU cores (e.g., 2.0).
pub cpu_request_cores: f32,
/// Maximum CPU cores the tenant can burst to (e.g., 4.0).
pub cpu_limit_cores: f32,
/// Requested/guaranteed memory in Gigabytes (e.g., 8.0).
pub memory_request_gb: f32,
/// Maximum memory in Gigabytes tenant can burst to (e.g., 16.0).
pub memory_limit_gb: f32,
/// Total persistent storage allocation in Gigabytes across all volumes.
pub storage_total_gb: f32,
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct TenantNetworkPolicy {
/// Policy for ingress traffic originating from other tenants within the same Harmony-managed environment.
pub default_inter_tenant_ingress: InterTenantIngressPolicy,
/// Policy for egress traffic destined for the public internet.
pub default_internet_egress: InternetEgressPolicy,
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub enum InterTenantIngressPolicy {
/// Deny all traffic from other tenants by default.
DenyAll,
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub enum InternetEgressPolicy {
/// Allow all outbound traffic to the internet.
AllowAll,
/// Deny all outbound traffic to the internet by default.
DenyAll,
}
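For example, a config for this shape could be built as follows (values are illustrative; `Id::new` is assumed from the `Id` constructor shown earlier):

use std::collections::HashMap;

let config = TenantConfig {
    id: Id::new("tenant-0001".to_string()),
    name: "client-alpha".to_string(),
    resource_limits: ResourceLimits {
        cpu_request_cores: 2.0,
        cpu_limit_cores: 4.0,
        memory_request_gb: 8.0,
        memory_limit_gb: 16.0,
        storage_total_gb: 100.0,
    },
    network_policy: TenantNetworkPolicy {
        default_inter_tenant_ingress: InterTenantIngressPolicy::DenyAll,
        default_internet_egress: InternetEgressPolicy::AllowAll,
    },
    labels_or_tags: HashMap::new(),
};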

View File

@@ -69,4 +69,34 @@ impl DhcpServer for OPNSenseFirewall {
Ok(())
}
async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError> {
{
let mut writable_opnsense = self.opnsense_config.write().await;
writable_opnsense.dhcp().set_filename(filename);
debug!("OPNsense dhcp server set filename {filename}");
}
Ok(())
}
async fn set_filename64(&self, filename: &str) -> Result<(), ExecutorError> {
{
let mut writable_opnsense = self.opnsense_config.write().await;
writable_opnsense.dhcp().set_filename64(filename);
debug!("OPNsense dhcp server set filename {filename}");
}
Ok(())
}
async fn set_filenameipxe(&self, filenameipxe: &str) -> Result<(), ExecutorError> {
{
let mut writable_opnsense = self.opnsense_config.write().await;
writable_opnsense.dhcp().set_filenameipxe(filenameipxe);
debug!("OPNsense dhcp server set filenameipxe {filenameipxe}");
}
Ok(())
}
}

View File

@@ -61,7 +61,7 @@ impl HttpServer for OPNSenseFirewall {
info!("Adding custom caddy config files");
config
.upload_files(
"../../../watchguard/caddy_config",
"./data/watchguard/caddy_config",
"/usr/local/etc/caddy/caddy.d/",
)
.await

View File

@@ -370,10 +370,13 @@ mod tests {
let result = get_servers_for_backend(&backend, &haproxy);
// Check the result
assert_eq!(result, vec![BackendServer {
address: "192.168.1.1".to_string(),
port: 80,
},]);
assert_eq!(
result,
vec![BackendServer {
address: "192.168.1.1".to_string(),
port: 80,
},]
);
}
#[test]
fn test_get_servers_for_backend_no_linked_servers() {
@@ -430,15 +433,18 @@ mod tests {
// Call the function
let result = get_servers_for_backend(&backend, &haproxy);
// Check the result
assert_eq!(result, vec![
BackendServer {
address: "some-hostname.test.mcd".to_string(),
port: 80,
},
BackendServer {
address: "192.168.1.2".to_string(),
port: 8080,
},
]);
assert_eq!(
result,
vec![
BackendServer {
address: "some-hostname.test.mcd".to_string(),
port: 80,
},
BackendServer {
address: "192.168.1.2".to_string(),
port: 8080,
},
]
);
}
}

View File

@@ -0,0 +1,46 @@
use std::{collections::HashMap, str::FromStr};
use non_blank_string_rs::NonBlankString;
use serde::Serialize;
use url::Url;
use crate::{
modules::helm::chart::{HelmChartScore, HelmRepository},
score::Score,
topology::{HelmCommand, Topology},
};
#[derive(Debug, Serialize, Clone)]
pub struct CertManagerHelmScore {}
impl<T: Topology + HelmCommand> Score<T> for CertManagerHelmScore {
fn create_interpret(&self) -> Box<dyn crate::interpret::Interpret<T>> {
let mut values_overrides = HashMap::new();
values_overrides.insert(
NonBlankString::from_str("crds.enabled").unwrap(),
"true".to_string(),
);
let values_overrides = Some(values_overrides);
HelmChartScore {
namespace: Some(NonBlankString::from_str("cert-manager").unwrap()),
release_name: NonBlankString::from_str("cert-manager").unwrap(),
chart_name: NonBlankString::from_str("jetstack/cert-manager").unwrap(),
chart_version: None,
values_overrides,
values_yaml: None,
create_namespace: true,
install_only: true,
repository: Some(HelmRepository::new(
"jetstack".to_string(),
Url::parse("https://charts.jetstack.io").unwrap(),
true,
)),
}
.create_interpret()
}
fn name(&self) -> String {
format!("CertManagerHelmScore")
}
}

View File

@@ -0,0 +1,2 @@
mod helm;
pub use helm::*;

View File

@@ -17,6 +17,9 @@ pub struct DhcpScore {
pub host_binding: Vec<HostBinding>,
pub next_server: Option<IpAddress>,
pub boot_filename: Option<String>,
pub filename: Option<String>,
pub filename64: Option<String>,
pub filenameipxe: Option<String>,
}
impl<T: Topology + DhcpServer> Score<T> for DhcpScore {
@@ -117,8 +120,44 @@ impl DhcpInterpret {
None => Outcome::noop(),
};
let filename_outcome = match &self.score.filename {
Some(filename) => {
dhcp_server.set_filename(&filename).await?;
Outcome::new(
InterpretStatus::SUCCESS,
format!("Dhcp Interpret Set filename to {filename}"),
)
}
None => Outcome::noop(),
};
let filename64_outcome = match &self.score.filename64 {
Some(filename64) => {
dhcp_server.set_filename64(&filename64).await?;
Outcome::new(
InterpretStatus::SUCCESS,
format!("Dhcp Interpret Set filename64 to {filename64}"),
)
}
None => Outcome::noop(),
};
let filenameipxe_outcome = match &self.score.filenameipxe {
Some(filenameipxe) => {
dhcp_server.set_filenameipxe(&filenameipxe).await?;
Outcome::new(
InterpretStatus::SUCCESS,
format!("Dhcp Interpret Set filenameipxe to {filenameipxe}"),
)
}
None => Outcome::noop(),
};
if next_server_outcome.status == InterpretStatus::NOOP
&& boot_filename_outcome.status == InterpretStatus::NOOP
&& filename_outcome.status == InterpretStatus::NOOP
&& filename64_outcome.status == InterpretStatus::NOOP
&& filenameipxe_outcome.status == InterpretStatus::NOOP
{
return Ok(Outcome::noop());
}
@@ -126,8 +165,12 @@ impl DhcpInterpret {
Ok(Outcome::new(
InterpretStatus::SUCCESS,
format!(
"Dhcp Interpret Set next boot to {:?} and boot_filename to {:?}",
self.score.boot_filename, self.score.boot_filename
"Dhcp Interpret Set next boot to [{:?}], boot_filename to [{:?}], filename to [{:?}], filename64 to [{:?}], filenameipxe to [:{:?}]",
self.score.boot_filename,
self.score.boot_filename,
self.score.filename,
self.score.filename64,
self.score.filenameipxe
),
))
}

View File

@@ -6,11 +6,31 @@ use crate::topology::{HelmCommand, Topology};
use async_trait::async_trait;
use helm_wrapper_rs;
use helm_wrapper_rs::blocking::{DefaultHelmExecutor, HelmExecutor};
use log::{debug, info, warn};
pub use non_blank_string_rs::NonBlankString;
use serde::Serialize;
use std::collections::HashMap;
use std::path::Path;
use std::process::{Command, Output, Stdio};
use std::str::FromStr;
use temp_file::TempFile;
use url::Url;
#[derive(Debug, Clone, Serialize)]
pub struct HelmRepository {
name: String,
url: Url,
force_update: bool,
}
impl HelmRepository {
pub fn new(name: String, url: Url, force_update: bool) -> Self {
Self {
name,
url,
force_update,
}
}
}
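// Example (a sketch; the URL is a placeholder) of declaring a repository
// entry that add_repo() below will ensure exists:
//
//     let repo = HelmRepository::new(
//         "jetstack".to_string(),
//         Url::parse("https://charts.jetstack.io").unwrap(),
//         true, // pass --force-update to `helm repo add`
//     );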
#[derive(Debug, Clone, Serialize)]
pub struct HelmChartScore {
@@ -20,6 +40,11 @@ pub struct HelmChartScore {
pub chart_version: Option<NonBlankString>,
pub values_overrides: Option<HashMap<NonBlankString, String>>,
pub values_yaml: Option<String>,
pub create_namespace: bool,
/// Whether to run `helm upgrade --install` under the hood or to only install when not already present
pub install_only: bool,
pub repository: Option<HelmRepository>,
}
impl<T: Topology + HelmCommand> Score<T> for HelmChartScore {
@@ -38,6 +63,81 @@ impl<T: Topology + HelmCommand> Score<T> for HelmChartScore {
pub struct HelmChartInterpret {
pub score: HelmChartScore,
}
impl HelmChartInterpret {
fn add_repo(&self) -> Result<(), InterpretError> {
let repo = match &self.score.repository {
Some(repo) => repo,
None => {
info!("No Helm repository specified in the score. Skipping repository setup.");
return Ok(());
}
};
info!(
"Ensuring Helm repository exists: Name='{}', URL='{}', ForceUpdate={}",
repo.name, repo.url, repo.force_update
);
let mut add_args = vec!["repo", "add", &repo.name, repo.url.as_str()];
if repo.force_update {
add_args.push("--force-update");
}
let add_output = run_helm_command(&add_args)?;
let full_output = format!(
"{}\n{}",
String::from_utf8_lossy(&add_output.stdout),
String::from_utf8_lossy(&add_output.stderr)
);
match add_output.status.success() {
true => {
return Ok(());
}
false => {
return Err(InterpretError::new(format!(
"Failed to add helm repository!\n{full_output}"
)));
}
}
}
}
fn run_helm_command(args: &[&str]) -> Result<Output, InterpretError> {
let command_str = format!("helm {}", args.join(" "));
debug!(
"Got KUBECONFIG: `{}`",
std::env::var("KUBECONFIG").unwrap_or("".to_string())
);
debug!("Running Helm command: `{}`", command_str);
let output = Command::new("helm")
.args(args)
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.output()
.map_err(|e| {
InterpretError::new(format!(
"Failed to execute helm command '{}': {}. Is helm installed and in PATH?",
command_str, e
))
})?;
if !output.status.success() {
let stdout = String::from_utf8_lossy(&output.stdout);
let stderr = String::from_utf8_lossy(&output.stderr);
warn!(
"Helm command `{}` failed with status: {}\nStdout:\n{}\nStderr:\n{}",
command_str, output.status, stdout, stderr
);
} else {
debug!(
"Helm command `{}` finished successfully. Status: {}",
command_str, output.status
);
}
Ok(output)
}
#[async_trait]
impl<T: Topology + HelmCommand> Interpret<T> for HelmChartInterpret {
@@ -61,7 +161,56 @@ impl<T: Topology + HelmCommand> Interpret<T> for HelmChartInterpret {
None => None,
};
let helm_executor = DefaultHelmExecutor::new();
self.add_repo()?;
let helm_executor = DefaultHelmExecutor::new_with_opts(
&NonBlankString::from_str("helm").unwrap(),
None,
900,
false,
false,
);
let mut helm_options = Vec::new();
if self.score.create_namespace {
helm_options.push(NonBlankString::from_str("--create-namespace").unwrap());
}
if self.score.install_only {
let chart_list = match helm_executor.list(Some(ns)) {
Ok(charts) => charts,
Err(e) => {
return Err(InterpretError::new(format!(
"Failed to list scores in namespace {:?} because of error : {}",
self.score.namespace, e
)));
}
};
if chart_list
.iter()
.any(|item| item.name == self.score.release_name.to_string())
{
info!(
"Release '{}' already exists in namespace '{}'. Skipping installation as install_only is true.",
self.score.release_name, ns
);
return Ok(Outcome::new(
InterpretStatus::SUCCESS,
format!(
"Helm Chart '{}' already installed to namespace {ns} and install_only=true",
self.score.release_name
),
));
} else {
info!(
"Release '{}' not found in namespace '{}'. Proceeding with installation.",
self.score.release_name, ns
);
}
}
let res = helm_executor.install_or_upgrade(
&ns,
&self.score.release_name,
@@ -69,7 +218,7 @@ impl<T: Topology + HelmCommand> Interpret<T> for HelmChartInterpret {
self.score.chart_version.as_ref(),
self.score.values_overrides.as_ref(),
yaml_path,
None,
Some(&helm_options),
);
let status = match res {

View File

@@ -0,0 +1,376 @@
use async_trait::async_trait;
use log::debug;
use serde::Serialize;
use std::collections::HashMap;
use std::io::ErrorKind;
use std::path::PathBuf;
use std::process::{Command, Output};
use temp_dir::{self, TempDir};
use temp_file::TempFile;
use crate::data::{Id, Version};
use crate::interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome};
use crate::inventory::Inventory;
use crate::score::Score;
use crate::topology::{HelmCommand, K8sclient, Topology};
#[derive(Clone)]
pub struct HelmCommandExecutor {
pub env: HashMap<String, String>,
pub path: Option<PathBuf>,
pub args: Vec<String>,
pub api_versions: Option<Vec<String>>,
pub kube_version: String,
pub debug: Option<bool>,
pub globals: HelmGlobals,
pub chart: HelmChart,
}
#[derive(Clone)]
pub struct HelmGlobals {
pub chart_home: Option<PathBuf>,
pub config_home: Option<PathBuf>,
}
#[derive(Debug, Clone, Serialize)]
pub struct HelmChart {
pub name: String,
pub version: Option<String>,
pub repo: Option<String>,
pub release_name: Option<String>,
pub namespace: Option<String>,
pub additional_values_files: Vec<PathBuf>,
pub values_file: Option<PathBuf>,
pub values_inline: Option<String>,
pub include_crds: Option<bool>,
pub skip_hooks: Option<bool>,
pub api_versions: Option<Vec<String>>,
pub kube_version: Option<String>,
pub name_template: String,
pub skip_tests: Option<bool>,
pub debug: Option<bool>,
}
impl HelmCommandExecutor {
pub fn generate(mut self) -> Result<String, std::io::Error> {
if self.globals.chart_home.is_none() {
self.globals.chart_home = Some(PathBuf::from("charts"));
}
if self
.chart
.clone()
.chart_exists_locally(self.globals.chart_home.clone().unwrap())
.is_none()
{
if self.chart.repo.is_none() {
return Err(std::io::Error::new(
ErrorKind::Other,
"Chart doesn't exist locally and no repo specified",
));
}
self.clone().run_command(
self.chart
.clone()
.pull_command(self.globals.chart_home.clone().unwrap()),
)?;
}
let out = match self.clone().run_command(
self.chart
.clone()
.helm_args(self.globals.chart_home.clone().unwrap()),
) {
Ok(out) => out,
Err(e) => return Err(e),
};
// TODO: don't use unwrap here
let s = String::from_utf8(out.stdout).unwrap();
debug!("helm stderr: {}", String::from_utf8(out.stderr).unwrap());
debug!("helm status: {}", out.status);
debug!("helm output: {s}");
let clean = s.split_once("---").unwrap().1;
Ok(clean.to_string())
}
pub fn version(self) -> Result<String, std::io::Error> {
let out = match self.run_command(vec![
"version".to_string(),
"-c".to_string(),
"--short".to_string(),
]) {
Ok(out) => out,
Err(e) => return Err(e),
};
// TODO: don't use unwrap
Ok(String::from_utf8(out.stdout).unwrap())
}
pub fn run_command(mut self, mut args: Vec<String>) -> Result<Output, std::io::Error> {
if let Some(d) = self.debug {
if d {
args.push("--debug".to_string());
}
}
let path = if let Some(p) = self.path {
p
} else {
PathBuf::from("helm")
};
// Keep the handle alive until the command runs; TempDir deletes the directory on drop.
let (_config_tmp, config_home) = match self.globals.config_home {
Some(p) => (None, p),
None => {
let dir = TempDir::new()?;
let path = PathBuf::from(dir.path());
(Some(dir), path)
}
};
// Keep the handle alive until the command runs; TempFile deletes the file on drop.
let _values_tmp: Option<TempFile> = self.chart.values_inline.take().map(|yaml_str| {
let tf = temp_file::with_contents(yaml_str.as_bytes());
self.chart
.additional_values_files
.push(PathBuf::from(tf.path()));
tf
});
self.env.insert(
"HELM_CONFIG_HOME".to_string(),
config_home.to_str().unwrap().to_string(),
);
self.env.insert(
"HELM_CACHE_HOME".to_string(),
config_home.to_str().unwrap().to_string(),
);
self.env.insert(
"HELM_DATA_HOME".to_string(),
config_home.to_str().unwrap().to_string(),
);
Command::new(path).envs(self.env).args(args).output()
}
}
impl HelmChart {
pub fn chart_exists_locally(self, chart_home: PathBuf) -> Option<PathBuf> {
let chart_path = chart_home.join(&self.name);
if chart_path.exists() {
Some(chart_path)
} else {
None
}
}
pub fn pull_command(self, chart_home: PathBuf) -> Vec<String> {
let mut args = vec![
"pull".to_string(),
"--untar".to_string(),
"--untardir".to_string(),
chart_home.to_str().unwrap().to_string(),
];
match self.repo {
Some(r) => {
if r.starts_with("oci://") {
args.push(String::from(
r.trim_end_matches("/").to_string() + "/" + self.name.clone().as_str(),
));
} else {
args.push("--repo".to_string());
args.push(r.to_string());
args.push(self.name);
}
}
None => args.push(self.name),
};
match self.version {
Some(v) => {
args.push("--version".to_string());
args.push(v.to_string());
}
None => (),
}
args
}
pub fn helm_args(self, chart_home: PathBuf) -> Vec<String> {
let mut args: Vec<String> = vec!["template".to_string()];
match self.release_name {
Some(rn) => args.push(rn.to_string()),
None => args.push("--generate-name".to_string()),
}
args.push(
chart_home
.join(&self.name)
.to_str()
.unwrap()
.to_string(),
);
if let Some(n) = self.namespace {
args.push("--namespace".to_string());
args.push(n.to_string());
}
if let Some(f) = self.values_file {
args.push("-f".to_string());
args.push(f.to_str().unwrap().to_string());
}
for f in self.additional_values_files {
args.push("-f".to_string());
args.push(f.to_str().unwrap().to_string());
}
if let Some(vv) = self.api_versions {
for v in vv {
args.push("--api-versions".to_string());
args.push(v);
}
}
if let Some(kv) = self.kube_version {
args.push("--kube-version".to_string());
args.push(kv);
}
if let Some(crd) = self.include_crds {
if crd {
args.push("--include-crds".to_string());
}
}
if let Some(st) = self.skip_tests {
if st {
args.push("--skip-tests".to_string());
}
}
if let Some(sh) = self.skip_hooks {
if sh {
args.push("--no-hooks".to_string());
}
}
if let Some(d) = self.debug {
if d {
args.push("--debug".to_string());
}
}
args
}
}
#[derive(Debug, Clone, Serialize)]
pub struct HelmChartScoreV2 {
pub chart: HelmChart,
}
impl<T: Topology + K8sclient + HelmCommand> Score<T> for HelmChartScoreV2 {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(HelmChartInterpretV2 {
score: self.clone(),
})
}
fn name(&self) -> String {
format!(
"{} {} HelmChartScoreV2",
self.chart
.release_name
.clone()
.unwrap_or("Unknown".to_string()),
self.chart.name
)
}
}
#[derive(Debug, Serialize)]
pub struct HelmChartInterpretV2 {
pub score: HelmChartScoreV2,
}
impl HelmChartInterpretV2 {}
#[async_trait]
impl<T: Topology + K8sclient + HelmCommand> Interpret<T> for HelmChartInterpretV2 {
async fn execute(
&self,
_inventory: &Inventory,
_topology: &T,
) -> Result<Outcome, InterpretError> {
let ns = self
.score
.chart
.namespace
.as_ref()
.unwrap_or_else(|| todo!("Get namespace from active kubernetes cluster"));
let helm_executor = HelmCommandExecutor {
env: HashMap::new(),
path: None,
args: vec![],
api_versions: None,
kube_version: "v1.33.0".to_string(),
debug: Some(false),
globals: HelmGlobals {
chart_home: None,
config_home: None,
},
chart: self.score.chart.clone(),
};
// let mut helm_options = Vec::new();
// if self.score.create_namespace {
// helm_options.push(NonBlankString::from_str("--create-namespace").unwrap());
// }
let res = helm_executor.generate();
let output = match res {
Ok(output) => output,
Err(err) => return Err(InterpretError::new(err.to_string())),
};
// TODO: apply the YAML rendered by generate() to the cluster; passing raw YAML straight through the k8s client is not supported yet.
// let k8s_resource = k8s_openapi::serde_json::from_str(output.as_str()).unwrap();
// let client = topology
// .k8s_client()
// .await
// .expect("Environment should provide enough information to instanciate a client")
// .apply_namespaced(&vec![output], Some(ns.to_string().as_str()));
// match client.apply_yaml(output) {
// Ok(_) => return Ok(Outcome::success("Helm chart deployed".to_string())),
// Err(e) => return Err(InterpretError::new(e)),
// }
Ok(Outcome::success("Helm chart deployed".to_string()))
}
fn get_name(&self) -> InterpretName {
todo!()
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
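A sketch of driving the executor directly to render a chart to YAML; the repo URL, chart, and release names are placeholders:

let executor = HelmCommandExecutor {
    env: HashMap::new(),
    path: None,               // falls back to `helm` on PATH
    args: vec![],
    api_versions: None,
    kube_version: "v1.33.0".to_string(),
    debug: Some(false),
    globals: HelmGlobals {
        chart_home: None,     // defaults to "charts"
        config_home: None,    // falls back to a temp dir
    },
    chart: HelmChart {
        name: "grafana".to_string(),
        version: None,
        repo: Some("https://grafana.github.io/helm-charts".to_string()),
        release_name: Some("grafana".to_string()),
        namespace: Some("monitoring".to_string()),
        additional_values_files: vec![],
        values_file: None,
        values_inline: None,
        include_crds: Some(true),
        skip_hooks: None,
        api_versions: None,
        kube_version: None,
        name_template: String::new(),
        skip_tests: Some(true),
        debug: None,
    },
};
let rendered_yaml = executor.generate()?;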

View File

@@ -1 +1,2 @@
pub mod chart;
pub mod command;

View File

@@ -0,0 +1,66 @@
use async_trait::async_trait;
use derive_new::new;
use serde::Serialize;
use crate::{
data::{Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
score::Score,
topology::Topology,
};
#[derive(Debug, new, Clone, Serialize)]
pub struct IpxeScore {
//files_to_serve: Url,
}
impl<T: Topology> Score<T> for IpxeScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(IpxeInterpret::new(self.clone()))
}
fn name(&self) -> String {
"IpxeScore".to_string()
}
}
#[derive(Debug, new, Clone)]
pub struct IpxeInterpret {
_score: IpxeScore,
}
#[async_trait]
impl<T: Topology> Interpret<T> for IpxeInterpret {
async fn execute(
&self,
_inventory: &Inventory,
_topology: &T,
) -> Result<Outcome, InterpretError> {
/*
let http_server = &topology.http_server;
http_server.ensure_initialized().await?;
Ok(Outcome::success(format!(
"Http Server running and serving files from {}",
self.score.files_to_serve
)))
*/
todo!();
}
fn get_name(&self) -> InterpretName {
InterpretName::Ipxe
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}

View File

@@ -14,11 +14,13 @@ use super::resource::{K8sResourceInterpret, K8sResourceScore};
pub struct K8sDeploymentScore {
pub name: String,
pub image: String,
pub namespace: Option<String>,
pub env_vars: serde_json::Value,
}
impl<T: Topology + K8sclient> Score<T> for K8sDeploymentScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
let deployment: Deployment = serde_json::from_value(json!(
let deployment = json!(
{
"metadata": {
"name": self.name
@@ -38,18 +40,21 @@ impl<T: Topology + K8sclient> Score<T> for K8sDeploymentScore {
"spec": {
"containers": [
{
"image": self.image,
"name": self.image
"image": self.image,
"name": self.name,
"imagePullPolicy": "Always",
"env": self.env_vars,
}
]
}
}
}
}
))
.unwrap();
);
let deployment: Deployment = serde_json::from_value(deployment).unwrap();
Box::new(K8sResourceInterpret {
score: K8sResourceScore::single(deployment.clone()),
score: K8sResourceScore::single(deployment.clone(), self.namespace.clone()),
})
}
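For illustration, a score for this shape might look like the following (image and env values are placeholders):

let score = K8sDeploymentScore {
    name: "webapp".to_string(),
    image: "hub.nationtech.io/harmony/webapp:latest".to_string(),
    namespace: Some("client-alpha".to_string()),
    env_vars: serde_json::json!([
        { "name": "APP_ENV", "value": "production" }
    ]),
};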

View File

@@ -0,0 +1,98 @@
use harmony_macros::ingress_path;
use k8s_openapi::api::networking::v1::Ingress;
use serde::Serialize;
use serde_json::json;
use crate::{
interpret::Interpret,
score::Score,
topology::{K8sclient, Topology},
};
use super::resource::{K8sResourceInterpret, K8sResourceScore};
#[derive(Debug, Clone, Serialize)]
pub enum PathType {
ImplementationSpecific,
Exact,
Prefix,
}
impl PathType {
fn as_str(&self) -> &'static str {
match self {
PathType::ImplementationSpecific => "ImplementationSpecific",
PathType::Exact => "Exact",
PathType::Prefix => "Prefix",
}
}
}
type IngressPath = String;
#[derive(Debug, Clone, Serialize)]
pub struct K8sIngressScore {
pub name: fqdn::FQDN,
pub host: fqdn::FQDN,
pub backend_service: fqdn::FQDN,
pub port: u16,
pub path: Option<IngressPath>,
pub path_type: Option<PathType>,
pub namespace: Option<fqdn::FQDN>,
}
impl<T: Topology + K8sclient> Score<T> for K8sIngressScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
let path = match self.path.clone() {
Some(p) => p,
None => ingress_path!("/"),
};
let path_type = match self.path_type.clone() {
Some(p) => p,
None => PathType::Prefix,
};
let ingress = json!(
{
"metadata": {
"name": self.name
},
"spec": {
"rules": [
{ "host": self.host,
"http": {
"paths": [
{
"path": path,
"pathType": path_type.as_str(),
"backend": [
{
"service": self.backend_service,
"port": self.port
}
]
}
]
}
}
]
}
}
);
let ingress: Ingress = serde_json::from_value(ingress).unwrap();
Box::new(K8sResourceInterpret {
score: K8sResourceScore::single(
ingress.clone(),
self.namespace
.clone()
.map(|f| f.as_c_str().to_str().unwrap().to_string()),
),
})
}
fn name(&self) -> String {
format!("{} K8sIngressScore", self.name)
}
}
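
A hedged construction example for the new ingress score; the host and service names are placeholders, the import path is assumed from this PR's module layout, and the defaults noted in comments mirror the match arms above:

use fqdn::fqdn;
use harmony_macros::ingress_path;
use crate::modules::k8s::ingress::{K8sIngressScore, PathType};

let ingress = K8sIngressScore {
    name: fqdn!("my-app-ingress"),
    host: fqdn!("app.example.com"),
    backend_service: fqdn!("my-app"),
    port: 8080,
    path: Some(ingress_path!("/app")),      // None falls back to ingress_path!("/")
    path_type: Some(PathType::Prefix),      // None falls back to PathType::Prefix
    namespace: Some(fqdn!("my-namespace")), // None is passed through to apply_namespaced
};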

View File

@@ -1,2 +1,4 @@
pub mod deployment;
pub mod ingress;
pub mod namespace;
pub mod resource;

View File

@@ -0,0 +1,46 @@
use k8s_openapi::api::core::v1::Namespace;
use non_blank_string_rs::NonBlankString;
use serde::Serialize;
use serde_json::json;
use crate::{
interpret::Interpret,
score::Score,
topology::{K8sclient, Topology},
};
#[derive(Debug, Clone, Serialize)]
pub struct K8sNamespaceScore {
pub name: Option<NonBlankString>,
}
impl<T: Topology + K8sclient> Score<T> for K8sNamespaceScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
let name = match &self.name {
Some(name) => name,
None => todo!(
"Return NoOp interpret when no namespace specified or something that makes sense"
),
};
let _namespace: Namespace = serde_json::from_value(json!(
{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"name": name,
},
}
))
.unwrap();
todo!(
"We currently only support namespaced ressources (see Scope = NamespaceResourceScope)"
);
// Box::new(K8sResourceInterpret {
// score: K8sResourceScore::single(namespace.clone()),
// })
}
fn name(&self) -> String {
"K8sNamespaceScore".to_string()
}
}

View File

@@ -14,12 +14,14 @@ use crate::{
#[derive(Debug, Clone, Serialize)]
pub struct K8sResourceScore<K: Resource + std::fmt::Debug> {
pub resource: Vec<K>,
pub namespace: Option<String>,
}
impl<K: Resource + std::fmt::Debug> K8sResourceScore<K> {
pub fn single(resource: K) -> Self {
pub fn single(resource: K, namespace: Option<String>) -> Self {
Self {
resource: vec![resource],
namespace,
}
}
}
@@ -77,7 +79,7 @@ where
.k8s_client()
.await
.expect("Environment should provide enough information to instanciate a client")
.apply_namespaced(&self.score.resource)
.apply_namespaced(&self.score.resource, self.score.namespace.as_deref())
.await?;
Ok(Outcome::success(

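Call sites now thread the target namespace through explicitly; a one-line sketch with a placeholder resource and namespace:

let score = K8sResourceScore::single(deployment.clone(), Some("my-namespace".to_string()));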
View File

@@ -1,9 +1,22 @@
use convert_case::{Case, Casing};
use dockerfile_builder::instruction::{CMD, COPY, ENV, EXPOSE, FROM, RUN, WORKDIR};
use dockerfile_builder::{Dockerfile, instruction_builder::EnvBuilder};
use fqdn::fqdn;
use harmony_macros::ingress_path;
use non_blank_string_rs::NonBlankString;
use serde_json::json;
use std::collections::HashMap;
use std::fs;
use std::path::{Path, PathBuf};
use std::str::FromStr;
use async_trait::async_trait;
use log::info;
use log::{debug, info};
use serde::Serialize;
use crate::config::{REGISTRY_PROJECT, REGISTRY_URL};
use crate::modules::k8s::ingress::K8sIngressScore;
use crate::topology::HelmCommand;
use crate::{
data::{Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
@@ -13,6 +26,8 @@ use crate::{
topology::{K8sclient, Topology, Url},
};
use super::helm::chart::HelmChartScore;
#[derive(Debug, Clone, Serialize)]
pub struct LAMPScore {
pub name: String,
@@ -25,6 +40,8 @@ pub struct LAMPScore {
pub struct LAMPConfig {
pub project_root: PathBuf,
pub ssl_enabled: bool,
pub database_size: Option<String>,
pub namespace: String,
}
impl Default for LAMPConfig {
@@ -32,11 +49,13 @@ impl Default for LAMPConfig {
LAMPConfig {
project_root: Path::new("./src").to_path_buf(),
ssl_enabled: true,
database_size: None,
namespace: "harmony-lamp".to_string(),
}
}
}
impl<T: Topology + K8sclient> Score<T> for LAMPScore {
impl<T: Topology + K8sclient + HelmCommand> Score<T> for LAMPScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(LAMPInterpret {
score: self.clone(),
@@ -54,7 +73,7 @@ pub struct LAMPInterpret {
}
#[async_trait]
impl<T: Topology + K8sclient> Interpret<T> for LAMPInterpret {
impl<T: Topology + K8sclient + HelmCommand> Interpret<T> for LAMPInterpret {
async fn execute(
&self,
inventory: &Inventory,
@@ -70,18 +89,78 @@ impl<T: Topology + K8sclient> Interpret<T> for LAMPInterpret {
};
info!("LAMP docker image built {image_name}");
let remote_name = match self.push_docker_image(&image_name) {
Ok(remote_name) => remote_name,
Err(e) => {
return Err(InterpretError::new(format!(
"Could not push docker image {e}"
)));
}
};
info!("LAMP docker image pushed to {remote_name}");
info!("Deploying database");
self.deploy_database(inventory, topology).await?;
let base_name = self.score.name.to_case(Case::Kebab);
let secret_name = format!("{}-database-mariadb", base_name);
let deployment_score = K8sDeploymentScore {
name: <LAMPScore as Score<T>>::name(&self.score),
image: image_name,
name: <LAMPScore as Score<T>>::name(&self.score).to_case(Case::Kebab),
image: remote_name,
namespace: self.get_namespace().map(|nbs| nbs.to_string()),
env_vars: json!([
{
"name": "MYSQL_PASSWORD",
"valueFrom": {
"secretKeyRef": {
"name": secret_name,
"key": "mariadb-root-password"
}
}
},
{
"name": "MYSQL_HOST",
"value": secret_name
},
]),
};
info!("LAMP deployment_score {deployment_score:?}");
todo!();
info!("Deploying score {deployment_score:#?}");
deployment_score
.create_interpret()
.execute(inventory, topology)
.await?;
todo!()
info!("LAMP deployment_score {deployment_score:?}");
let lamp_ingress = K8sIngressScore {
name: fqdn!("lamp-ingress"),
host: fqdn!("test"),
backend_service: fqdn!(
<LAMPScore as Score<T>>::name(&self.score)
.to_case(Case::Kebab)
.as_str()
),
port: 8080,
path: Some(ingress_path!("/")),
path_type: None,
namespace: self
.get_namespace()
.map(|nbs| fqdn!(nbs.to_string().as_str())),
};
lamp_ingress
.create_interpret()
.execute(inventory, topology)
.await?;
info!("LAMP lamp_ingress {lamp_ingress:?}");
Ok(Outcome::success(
"Successfully deployed LAMP Stack!".to_string(),
))
}
fn get_name(&self) -> InterpretName {
@@ -101,15 +180,42 @@ impl<T: Topology + K8sclient> Interpret<T> for LAMPInterpret {
}
}
use dockerfile_builder::instruction::{CMD, COPY, ENV, EXPOSE, FROM, RUN, WORKDIR};
use dockerfile_builder::{Dockerfile, instruction_builder::EnvBuilder};
use std::fs;
impl LAMPInterpret {
pub fn build_dockerfile(
async fn deploy_database<T: Topology + K8sclient + HelmCommand>(
&self,
score: &LAMPScore,
) -> Result<PathBuf, Box<dyn std::error::Error>> {
inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
let mut values_overrides = HashMap::new();
if let Some(database_size) = self.score.config.database_size.clone() {
values_overrides.insert(
NonBlankString::from_str("primary.persistence.size").unwrap(),
database_size,
);
}
// The deployment reads the root password from the chart's generated secret,
// so set it regardless of whether a database size override is given.
values_overrides.insert(
NonBlankString::from_str("auth.rootPassword").unwrap(),
"mariadb-changethis".to_string(),
);
let score = HelmChartScore {
namespace: self.get_namespace(),
// Kebab-case the release name so the chart's generated secret name matches
// the secret_name built in execute().
release_name: NonBlankString::from_str(&format!(
"{}-database",
self.score.name.to_case(Case::Kebab)
))
.unwrap(),
chart_name: NonBlankString::from_str(
"oci://registry-1.docker.io/bitnamicharts/mariadb",
)
.unwrap(),
chart_version: None,
values_overrides: Some(values_overrides),
create_namespace: true,
install_only: false,
values_yaml: None,
repository: None,
};
score.create_interpret().execute(inventory, topology).await
}
fn build_dockerfile(&self, score: &LAMPScore) -> Result<PathBuf, Box<dyn std::error::Error>> {
let mut dockerfile = Dockerfile::new();
// Use the PHP version from the score to determine the base image
@@ -157,6 +263,9 @@ impl LAMPInterpret {
opcache",
));
dockerfile.push(RUN::from(r#"sed -i 's/VirtualHost \*:80/VirtualHost *:8080/' /etc/apache2/sites-available/000-default.conf && \
sed -i 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf"#));
// Copy PHP configuration
dockerfile.push(RUN::from("mkdir -p /usr/local/etc/php/conf.d/"));
@@ -196,6 +305,13 @@ opcache.fast_shutdown=1
sed -i 's/ServerSignature On/ServerSignature Off/' /etc/apache2/conf-enabled/security.conf"
));
// Set env vars
dockerfile.push(RUN::from(
"echo 'PassEnv MYSQL_PASSWORD' >> /etc/apache2/sites-available/000-default.conf \
&& echo 'PassEnv MYSQL_USER' >> /etc/apache2/sites-available/000-default.conf \
&& echo 'PassEnv MYSQL_HOST' >> /etc/apache2/sites-available/000-default.conf",
));
// Create a dedicated user for running Apache
dockerfile.push(RUN::from(
"groupadd -g 1000 appuser && \
@@ -215,7 +331,7 @@ opcache.fast_shutdown=1
dockerfile.push(RUN::from("chown -R appuser:appuser /var/www/html"));
// Expose Apache port
dockerfile.push(EXPOSE::from("80/tcp"));
dockerfile.push(EXPOSE::from("8080/tcp"));
// Set the default command
dockerfile.push(CMD::from("apache2-foreground"));
@@ -227,6 +343,43 @@ opcache.fast_shutdown=1
Ok(dockerfile_path)
}
fn check_output(
&self,
output: &std::process::Output,
msg: &str,
) -> Result<(), Box<dyn std::error::Error>> {
if !output.status.success() {
return Err(format!("{msg}: {}", String::from_utf8_lossy(&output.stderr)).into());
}
Ok(())
}
fn push_docker_image(&self, image_name: &str) -> Result<String, Box<dyn std::error::Error>> {
let full_tag = format!("{}/{}/{}", *REGISTRY_URL, *REGISTRY_PROJECT, &image_name);
let output = std::process::Command::new("docker")
.args(["tag", image_name, &full_tag])
.output()?;
self.check_output(&output, "Tagging docker image failed")?;
debug!(
"docker tag output {} {}",
String::from_utf8_lossy(&output.stdout),
String::from_utf8_lossy(&output.stderr)
);
let output = std::process::Command::new("docker")
.args(["push", &full_tag])
.output()?;
self.check_output(&output, "Pushing docker image failed")?;
debug!(
"docker push output {} {}",
String::from_utf8_lossy(&output.stdout),
String::from_utf8_lossy(&output.stderr)
);
Ok(full_tag)
}
pub fn build_docker_image(&self) -> Result<String, Box<dyn std::error::Error>> {
info!("Generating Dockerfile");
let dockerfile = self.build_dockerfile(&self.score)?;
@@ -260,4 +413,8 @@ opcache.fast_shutdown=1
Ok(image_name)
}
fn get_namespace(&self) -> Option<NonBlankString> {
Some(NonBlankString::from_str(&self.score.config.namespace).unwrap())
}
}
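
The new config fields compose with the existing Default impl; a minimal sketch using struct-update syntax (the size value is a placeholder that is forwarded to the chart's primary.persistence.size):

let config = LAMPConfig {
    database_size: Some("10Gi".to_string()),
    namespace: "my-lamp".to_string(),
    ..Default::default()
};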

View File

@@ -1,12 +1,16 @@
pub mod cert_manager;
pub mod dhcp;
pub mod dns;
pub mod dummy;
pub mod helm;
pub mod http;
pub mod ipxe;
pub mod k3d;
pub mod k8s;
pub mod lamp;
pub mod load_balancer;
pub mod monitoring;
pub mod okd;
pub mod opnsense;
pub mod tenant;
pub mod tftp;

View File

@@ -0,0 +1,42 @@
use url::Url;
#[derive(Debug, Clone)]
pub struct DiscordWebhookAlertChannel {
pub webhook_url: Url,
pub name: String,
pub send_resolved_notifications: bool,
}
//impl AlertChannelConfig for DiscordWebhookAlertChannel {
// fn build_alert_channel(&self) -> Box<dyn AlertChannel> {
// Box::new(DiscordWebhookAlertChannel {
// webhook_url: self.webhook_url.clone(),
// name: self.name.clone(),
// send_resolved_notifications: self.send_resolved_notifications.clone(),
// })
// }
// fn channel_type(&self) -> String {
// "discord".to_string()
// }
//}
//
//#[async_trait]
//impl AlertChannel for DiscordWebhookAlertChannel {
// async fn get_channel_id(&self) -> String {
// self.name.clone()
// }
//}
//
//impl PrometheusAlertChannel for DiscordWebhookAlertChannel {
// fn get_alert_channel_global_settings(&self) -> Option<AlertManagerChannelGlobalConfigs> {
// None
// }
//
// fn get_alert_channel_route(&self) -> AlertManagerChannelRoute {
// todo!()
// }
//
// fn get_alert_channel_receiver(&self) -> AlertManagerChannelReceiver {
// todo!()
// }
//}

View File

@@ -0,0 +1 @@
pub mod discord_alert_channel;

View File

@@ -0,0 +1,49 @@
use serde::Serialize;
use super::types::AlertManagerChannelConfig;
#[derive(Debug, Clone, Serialize)]
pub struct KubePrometheusConfig {
pub namespace: String,
pub default_rules: bool,
pub windows_monitoring: bool,
pub alert_manager: bool,
pub node_exporter: bool,
pub prometheus: bool,
pub grafana: bool,
pub kubernetes_service_monitors: bool,
pub kubernetes_api_server: bool,
pub kubelet: bool,
pub kube_controller_manager: bool,
pub core_dns: bool,
pub kube_etcd: bool,
pub kube_scheduler: bool,
pub kube_proxy: bool,
pub kube_state_metrics: bool,
pub prometheus_operator: bool,
pub alert_channels: Vec<AlertManagerChannelConfig>,
}
impl KubePrometheusConfig {
pub fn new() -> Self {
Self {
namespace: "monitoring".into(),
default_rules: true,
windows_monitoring: false,
alert_manager: true,
grafana: true,
node_exporter: false,
prometheus: true,
kubernetes_service_monitors: true,
kubernetes_api_server: false,
kubelet: false,
kube_controller_manager: false,
kube_etcd: false,
kube_proxy: false,
kube_state_metrics: true,
prometheus_operator: true,
core_dns: false,
kube_scheduler: false,
alert_channels: Vec::new(),
}
}
}
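
Because every field is public, callers can start from new() and flip individual collectors; a small sketch (the values here are arbitrary):

let mut config = KubePrometheusConfig::new();
config.namespace = "observability".into();
config.node_exporter = true; // enable host-level metrics
config.kube_etcd = true;     // only useful where etcd metrics are reachable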

View File

@@ -0,0 +1,261 @@
use super::config::KubePrometheusConfig;
use non_blank_string_rs::NonBlankString;
use std::str::FromStr;
use crate::modules::helm::chart::HelmChartScore;
pub fn kube_prometheus_helm_chart_score() -> HelmChartScore {
let config = KubePrometheusConfig::new();
// TODO: this should be made into a rule type with default formatting that can easily be
// passed as a vec to the overrides; leaving the user to deal with formatting here seems bad
let default_rules = config.default_rules.to_string();
let windows_monitoring = config.windows_monitoring.to_string();
let alert_manager = config.alert_manager.to_string();
let grafana = config.grafana.to_string();
let kubernetes_service_monitors = config.kubernetes_service_monitors.to_string();
let kubernetes_api_server = config.kubernetes_api_server.to_string();
let kubelet = config.kubelet.to_string();
let kube_controller_manager = config.kube_controller_manager.to_string();
let core_dns = config.core_dns.to_string();
let kube_etcd = config.kube_etcd.to_string();
let kube_scheduler = config.kube_scheduler.to_string();
let kube_proxy = config.kube_proxy.to_string();
let kube_state_metrics = config.kube_state_metrics.to_string();
let node_exporter = config.node_exporter.to_string();
let prometheus_operator = config.prometheus_operator.to_string();
let prometheus = config.prometheus.to_string();
let values = format!(
r#"
additionalPrometheusRulesMap:
pods-status-alerts:
groups:
- name: pods
rules:
- alert: "[CRIT] POD not healthy"
expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{{phase=~"Pending|Unknown|Failed"}})[15m:1m]) > 0
for: 0m
labels:
severity: critical
annotations:
title: "[CRIT] POD not healthy : {{{{ $labels.pod }}}}"
description: |
A POD is in a non-ready state!
- **Pod**: {{{{ $labels.pod }}}}
- **Namespace**: {{{{ $labels.namespace }}}}
- alert: "[CRIT] POD crash looping"
expr: increase(kube_pod_container_status_restarts_total[5m]) > 3
for: 0m
labels:
severity: critical
annotations:
title: "[CRIT] POD crash looping : {{{{ $labels.pod }}}}"
description: |
A POD is drowning in a crash loop!
- **Pod**: {{{{ $labels.pod }}}}
- **Namespace**: {{{{ $labels.namespace }}}}
- **Instance**: {{{{ $labels.instance }}}}
pvc-alerts:
groups:
- name: pvc-alerts
rules:
- alert: 'PVC Fill Over 95 Percent In 2 Days'
expr: |
(
kubelet_volume_stats_used_bytes
/
kubelet_volume_stats_capacity_bytes
) > 0.95
and
predict_linear(kubelet_volume_stats_used_bytes[2d], 2 * 24 * 60 * 60)
/
kubelet_volume_stats_capacity_bytes
> 0.95
for: 1m
labels:
severity: warning
annotations:
description: The PVC {{{{ $labels.persistentvolumeclaim }}}} in namespace {{{{ $labels.namespace }}}} is predicted to fill over 95% in less than 2 days.
title: PVC {{{{ $labels.persistentvolumeclaim }}}} in namespace {{{{ $labels.namespace }}}} will fill over 95% in less than 2 days
defaultRules:
create: {default_rules}
rules:
alertmanager: true
etcd: true
configReloaders: true
general: true
k8sContainerCpuUsageSecondsTotal: true
k8sContainerMemoryCache: true
k8sContainerMemoryRss: true
k8sContainerMemorySwap: true
k8sContainerResource: true
k8sContainerMemoryWorkingSetBytes: true
k8sPodOwner: true
kubeApiserverAvailability: true
kubeApiserverBurnrate: true
kubeApiserverHistogram: true
kubeApiserverSlos: true
kubeControllerManager: true
kubelet: true
kubeProxy: true
kubePrometheusGeneral: true
kubePrometheusNodeRecording: true
kubernetesApps: true
kubernetesResources: true
kubernetesStorage: true
kubernetesSystem: true
kubeSchedulerAlerting: true
kubeSchedulerRecording: true
kubeStateMetrics: true
network: true
node: true
nodeExporterAlerting: true
nodeExporterRecording: true
prometheus: true
prometheusOperator: true
windows: true
windowsMonitoring:
enabled: {windows_monitoring}
grafana:
enabled: {grafana}
kubernetesServiceMonitors:
enabled: {kubernetes_service_monitors}
kubeApiServer:
enabled: {kubernetes_api_server}
kubelet:
enabled: {kubelet}
kubeControllerManager:
enabled: {kube_controller_manager}
coreDns:
enabled: {core_dns}
kubeEtcd:
enabled: {kube_etcd}
kubeScheduler:
enabled: {kube_scheduler}
kubeProxy:
enabled: {kube_proxy}
kubeStateMetrics:
enabled: {kube_state_metrics}
nodeExporter:
enabled: {node_exporter}
prometheusOperator:
enabled: {prometheus_operator}
prometheus:
enabled: {prometheus}
"#,
);
HelmChartScore {
namespace: Some(NonBlankString::from_str(&config.namespace).unwrap()),
release_name: NonBlankString::from_str("kube-prometheus").unwrap(),
chart_name: NonBlankString::from_str(
"oci://ghcr.io/prometheus-community/charts/kube-prometheus-stack",
)
.unwrap(),
chart_version: None,
values_overrides: None,
values_yaml: Some(values),
create_namespace: true,
install_only: true,
repository: None,
}
}
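
A hedged sketch of running this score through the usual Score -> Interpret flow, inside an async fn with an inventory and a topology implementing HelmCommand in scope:

let chart = kube_prometheus_helm_chart_score();
chart
    .create_interpret()
    .execute(&inventory, &topology)
    .await?;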
// let alertmanager_config = alert_manager_yaml_builder(&config);
// values.push_str(&alertmanager_config);
//
// fn alert_manager_yaml_builder(config: &KubePrometheusConfig) -> String {
// let mut receivers = String::new();
// let mut routes = String::new();
// let mut global_configs = String::new();
// let alert_manager = config.alert_manager;
// for alert_channel in &config.alert_channel {
// match alert_channel {
// AlertChannel::Discord { name, .. } => {
// let (receiver, route) = discord_alert_builder(name);
// info!("discord receiver: {} \nroute: {}", receiver, route);
// receivers.push_str(&receiver);
// routes.push_str(&route);
// }
// AlertChannel::Slack {
// slack_channel,
// webhook_url,
// } => {
// let (receiver, route) = slack_alert_builder(slack_channel);
// info!("slack receiver: {} \nroute: {}", receiver, route);
// receivers.push_str(&receiver);
//
// routes.push_str(&route);
// let global_config = format!(
// r#"
// global:
// slack_api_url: {webhook_url}"#
// );
//
// global_configs.push_str(&global_config);
// }
// AlertChannel::Smpt { .. } => todo!(),
// }
// }
// info!("after alert receiver: {}", receivers);
// info!("after alert routes: {}", routes);
//
// let alertmanager_config = format!(
// r#"
//alertmanager:
// enabled: {alert_manager}
// config: {global_configs}
// route:
// group_by: ['job']
// group_wait: 30s
// group_interval: 5m
// repeat_interval: 12h
// routes:
//{routes}
// receivers:
// - name: 'null'
//{receivers}"#
// );
//
// info!("alert manager config: {}", alertmanager_config);
// alertmanager_config
// }
//fn discord_alert_builder(release_name: &String) -> (String, String) {
// let discord_receiver_name = format!("Discord-{}", release_name);
// let receiver = format!(
// r#"
// - name: '{discord_receiver_name}'
// webhook_configs:
// - url: 'http://{release_name}-alertmanager-discord:9094'
// send_resolved: true"#,
// );
// let route = format!(
// r#"
// - receiver: '{discord_receiver_name}'
// matchers:
// - alertname!=Watchdog
// continue: true"#,
// );
// (receiver, route)
//}
//
//fn slack_alert_builder(slack_channel: &String) -> (String, String) {
// let slack_receiver_name = format!("Slack-{}", slack_channel);
// let receiver = format!(
// r#"
// - name: '{slack_receiver_name}'
// slack_configs:
// - channel: '{slack_channel}'
// send_resolved: true
// title: '{{{{ .CommonAnnotations.title }}}}'
// text: '{{{{ .CommonAnnotations.description }}}}'"#,
// );
// let route = format!(
// r#"
// - receiver: '{slack_receiver_name}'
// matchers:
// - alertname!=Watchdog
// continue: true"#,
// );
// (receiver, route)
//}

View File

@@ -0,0 +1,85 @@
//#[derive(Debug, Clone, Serialize)]
//pub struct KubePrometheusMonitorScore {
// pub kube_prometheus_config: KubePrometheusConfig,
// pub alert_channel_configs: Vec<dyn AlertChannelConfig>,
//}
//impl<T: Topology + Debug + HelmCommand + Monitor<T>> MonitorConfig<T>
// for KubePrometheusMonitorScore
//{
// fn build_monitor(&self) -> Box<dyn Monitor<T>> {
// Box::new(self.clone())
// }
//}
//impl<T: Topology + HelmCommand + Debug + Clone + 'static + Monitor<T>> Score<T>
// for KubePrometheusMonitorScore
//{
// fn create_interpret(&self) -> Box<dyn Interpret<T>> {
// Box::new(KubePrometheusMonitorInterpret {
// score: self.clone(),
// })
// }
//
// fn name(&self) -> String {
// "KubePrometheusMonitorScore".to_string()
// }
//}
//#[derive(Debug, Clone)]
//pub struct KubePrometheusMonitorInterpret {
// score: KubePrometheusMonitorScore,
//}
//#[async_trait]
//impl AlertChannelConfig for KubePrometheusMonitorInterpret {
// async fn build_alert_channel(
// &self,
// ) -> Box<dyn AlertChannel> {
// todo!()
// }
//}
//#[async_trait]
//impl<T: Topology + HelmCommand + Debug + Monitor<T>> Interpret<T>
// for KubePrometheusMonitorInterpret
//{
// async fn execute(
// &self,
// inventory: &Inventory,
// topology: &T,
// ) -> Result<Outcome, InterpretError> {
// let monitor = self.score.build_monitor();
//
// let mut alert_channels = Vec::new();
// //for config in self.score.alert_channel_configs {
// // alert_channels.push(self.build_alert_channel());
// //}
//
// monitor
// .deploy_monitor(inventory, topology, alert_channels)
// .await
// }
//
// fn get_name(&self) -> InterpretName {
// todo!()
// }
//
// fn get_version(&self) -> Version {
// todo!()
// }
//
// fn get_status(&self) -> InterpretStatus {
// todo!()
// }
//
// fn get_children(&self) -> Vec<Id> {
// todo!()
// }
//}
//#[async_trait]
//pub trait PrometheusAlertChannel {
// fn get_alert_channel_global_settings(&self) -> Option<AlertManagerChannelGlobalConfigs>;
// fn get_alert_channel_route(&self) -> AlertManagerChannelRoute;
// fn get_alert_channel_receiver(&self) -> AlertManagerChannelReceiver;
//}

View File

@@ -0,0 +1,4 @@
pub mod config;
pub mod kube_prometheus_helm_chart_score;
pub mod kube_prometheus_monitor;
pub mod types;

View File

@@ -0,0 +1,14 @@
use serde::Serialize;
#[derive(Debug, Clone, Serialize)]
pub struct AlertManagerChannelConfig {
pub global_configs: AlertManagerChannelGlobalConfigs,
pub route: AlertManagerChannelRoute,
pub receiver: AlertManagerChannelReceiver,
}
#[derive(Debug, Clone, Serialize)]
pub struct AlertManagerChannelGlobalConfigs {}
#[derive(Debug, Clone, Serialize)]
pub struct AlertManagerChannelReceiver {}
#[derive(Debug, Clone, Serialize)]
pub struct AlertManagerChannelRoute {}

View File

@@ -0,0 +1,3 @@
pub mod alert_channel;
pub mod kube_prometheus;
pub mod monitoring_alerting;

View File

@@ -0,0 +1,69 @@
use async_trait::async_trait;
use serde::Serialize;
use crate::{
data::{Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
score::Score,
topology::{
HelmCommand, Topology,
oberservability::monitoring::{AlertChannelConfig, Monitor},
},
};
#[derive(Debug, Clone, Serialize)]
pub struct MonitoringAlertingScore {
#[serde(skip)]
pub alert_channel_configs: Option<Vec<Box<dyn AlertChannelConfig>>>,
}
impl<T: Topology + HelmCommand + Monitor> Score<T> for MonitoringAlertingScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(MonitoringAlertingInterpret {
score: self.clone(),
})
}
fn name(&self) -> String {
"MonitoringAlertingScore".to_string()
}
}
#[derive(Debug)]
struct MonitoringAlertingInterpret {
score: MonitoringAlertingScore,
}
#[async_trait]
impl<T: Topology + HelmCommand + Monitor> Interpret<T> for MonitoringAlertingInterpret {
async fn execute(
&self,
inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
topology
.provision_monitor(
inventory,
topology,
self.score.alert_channel_configs.clone(),
)
.await
}
fn get_name(&self) -> InterpretName {
todo!()
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
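
A minimal sketch of deploying the stack with no alert channels wired up yet, assuming a topology that satisfies Topology + HelmCommand + Monitor:

let score = MonitoringAlertingScore {
    alert_channel_configs: None,
};
score.create_interpret().execute(&inventory, &topology).await?;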

View File

@@ -36,13 +36,20 @@ impl OKDBootstrapDhcpScore {
.expect("Should have at least one worker to be used as bootstrap node")
.clone(),
});
// TODO: refactor this so it is not copy-pasted from dhcp.rs
Self {
dhcp_score: DhcpScore::new(
host_binding,
// TODO : we should add a tftp server to the topology instead of relying on the
// router address, this is leaking implementation details
Some(topology.router.get_gateway()),
Some("bootx64.efi".to_string()),
None, // To allow UEFI boot we cannot provide a legacy file
Some("undionly.kpxe".to_string()),
Some("ipxe.efi".to_string()),
Some(format!(
"http://{}:8080/boot.ipxe",
topology.router.get_gateway()
)),
),
}
}
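
For reference, the same call with the positional arguments annotated; the parameter order is inferred from the DhcpScore struct literal in OKDDhcpScore below and should be treated as an assumption:

DhcpScore::new(
    host_binding,
    Some(topology.router.get_gateway()), // next_server (TFTP)
    None,                                // boot_filename: unset so UEFI clients are not handed a legacy image
    Some("undionly.kpxe".to_string()),   // filename: BIOS clients chain-load iPXE
    Some("ipxe.efi".to_string()),        // filename64: UEFI clients chain-load iPXE
    Some(format!(
        "http://{}:8080/boot.ipxe",
        topology.router.get_gateway()
    )),                                  // filenameipxe: URL of the iPXE boot script
);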

View File

@@ -15,7 +15,7 @@ pub struct OKDDhcpScore {
impl OKDDhcpScore {
pub fn new(topology: &HAClusterTopology, inventory: &Inventory) -> Self {
let host_binding = topology
let mut host_binding: Vec<HostBinding> = topology
.control_plane
.iter()
.enumerate()
@@ -28,13 +28,35 @@ impl OKDDhcpScore {
.clone(),
})
.collect();
topology
.workers
.iter()
.enumerate()
.for_each(|(index, topology_entry)| {
host_binding.push(HostBinding {
logical_host: topology_entry.clone(),
physical_host: inventory
.worker_host
.get(index)
.expect("There should be enough worker hosts to fill topology")
.clone(),
})
});
Self {
// TODO : we should add a tftp server to the topology instead of relying on the
// router address, this is leaking implementation details
dhcp_score: DhcpScore {
host_binding,
next_server: Some(topology.router.get_gateway()),
boot_filename: Some("bootx64.efi".to_string()),
boot_filename: None,
filename: Some("undionly.kpxe".to_string()),
filename64: Some("ipxe.efi".to_string()),
filenameipxe: Some(format!(
"http://{}:8080/boot.ipxe",
topology.router.get_gateway()
)),
},
}
}

View File

@@ -0,0 +1,67 @@
use async_trait::async_trait;
use serde::Serialize;
use crate::{
data::{Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
score::Score,
topology::{
Topology,
tenant::{TenantConfig, TenantManager},
},
};
#[derive(Debug, Serialize, Clone)]
pub struct TenantScore {
config: TenantConfig,
}
impl<T: Topology + TenantManager> Score<T> for TenantScore {
fn create_interpret(&self) -> Box<dyn crate::interpret::Interpret<T>> {
Box::new(TenantInterpret {
tenant_config: self.config.clone(),
})
}
fn name(&self) -> String {
format!("{} TenantScore", self.config.name)
}
}
#[derive(Debug)]
pub struct TenantInterpret {
tenant_config: TenantConfig,
}
#[async_trait]
impl<T: Topology + TenantManager> Interpret<T> for TenantInterpret {
async fn execute(
&self,
_inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
topology.provision_tenant(&self.tenant_config).await?;
Ok(Outcome::success(format!(
"Successfully provisioned tenant {} with id {}",
self.tenant_config.name, self.tenant_config.id
)))
}
fn get_name(&self) -> InterpretName {
InterpretName::TenantInterpret
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}

View File

@@ -116,3 +116,19 @@ pub fn yaml(input: TokenStream) -> TokenStream {
}
.into()
}
/// Verify that a string is a valid(ish) ingress path
/// Panics if path does not start with `/`
#[proc_macro]
pub fn ingress_path(input: TokenStream) -> TokenStream {
let input = parse_macro_input!(input as LitStr);
let path_str = input.value();
match path_str.starts_with("/") {
true => {
let expanded = quote! {(#path_str.to_string()) };
return TokenStream::from(expanded);
}
false => panic!("Invalid ingress path '{path_str}': ingress paths must start with '/'"),
}
}
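
Usage sketch; the invalid case fails at macro expansion time, i.e. at compile time:

let root = ingress_path!("/");       // expands to ("/".to_string())
let api = ingress_path!("/api/v1");  // valid: starts with '/'
// let bad = ingress_path!("api");   // compile error: panics during expansion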

View File

@@ -40,7 +40,11 @@ pub struct CaddyGeneral {
#[yaserde(rename = "TlsDnsOptionalField4")]
pub tls_dns_optional_field4: MaybeString,
#[yaserde(rename = "TlsDnsPropagationTimeout")]
pub tls_dns_propagation_timeout: MaybeString,
pub tls_dns_propagation_timeout: Option<MaybeString>,
#[yaserde(rename = "TlsDnsPropagationTimeoutPeriod")]
pub tls_dns_propagation_timeout_period: Option<MaybeString>,
#[yaserde(rename = "TlsDnsPropagationDelay")]
pub tls_dns_propagation_delay: Option<MaybeString>,
#[yaserde(rename = "TlsDnsPropagationResolvers")]
pub tls_dns_propagation_resolvers: MaybeString,
pub accesslist: MaybeString,
@@ -82,4 +86,8 @@ pub struct CaddyGeneral {
pub auth_to_tls: Option<i32>,
#[yaserde(rename = "AuthToUri")]
pub auth_to_uri: MaybeString,
#[yaserde(rename = "ClientIpHeaders")]
pub client_ip_headers: MaybeString,
#[yaserde(rename = "CopyHeaders")]
pub copy_headers: MaybeString,
}

View File

@@ -14,6 +14,8 @@ pub struct DhcpInterface {
pub netboot: Option<u32>,
pub nextserver: Option<String>,
pub filename64: Option<String>,
pub filename: Option<String>,
pub filenameipxe: Option<String>,
#[yaserde(rename = "ddnsdomainalgorithm")]
pub ddns_domain_algorithm: Option<MaybeString>,
#[yaserde(rename = "numberoptions")]

View File

@@ -45,6 +45,7 @@ pub struct OPNsense {
#[yaserde(rename = "Pischem")]
pub pischem: Option<Pischem>,
pub ifgroups: Ifgroups,
pub dnsmasq: Option<RawXml>,
}
impl From<String> for OPNsense {
@@ -166,7 +167,7 @@ pub struct Sysctl {
pub struct SysctlItem {
pub descr: MaybeString,
pub tunable: String,
pub value: String,
pub value: MaybeString,
}
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]
@@ -279,6 +280,7 @@ pub struct User {
pub scope: String,
pub groupname: Option<MaybeString>,
pub password: String,
pub pwd_changed_at: Option<MaybeString>,
pub uid: u32,
pub disabled: Option<u8>,
pub landing_page: Option<MaybeString>,
@@ -540,6 +542,8 @@ pub struct GeneralIpsec {
preferred_oldsa: Option<MaybeString>,
disablevpnrules: Option<MaybeString>,
passthrough_networks: Option<MaybeString>,
user_source: Option<MaybeString>,
local_group: Option<MaybeString>,
}
#[derive(Debug, YaSerialize, YaDeserialize, PartialEq)]
@@ -1219,6 +1223,7 @@ pub struct Host {
pub rr: String,
pub mxprio: MaybeString,
pub mx: MaybeString,
pub ttl: Option<MaybeString>,
pub server: String,
pub description: Option<String>,
}
@@ -1233,6 +1238,7 @@ impl Host {
rr,
server,
mxprio: MaybeString::default(),
ttl: Some(MaybeString::default()),
mx: MaybeString::default(),
description: None,
}
@@ -1421,7 +1427,7 @@ pub struct VirtualIp {
#[yaserde(attribute = true)]
pub version: String,
#[yaserde(rename = "vip")]
pub vip: Vip,
pub vip: Option<Vip>,
}
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]

View File

@@ -4,14 +4,14 @@ pub mod modules;
pub use config::Config;
pub use error::Error;
#[cfg(test)]
mod test {
#[cfg(e2e_test)]
mod e2e_test {
use opnsense_config_xml::StaticMap;
use std::net::Ipv4Addr;
use crate::Config;
#[cfg(opnsenseendtoend)]
#[tokio::test]
async fn test_public_sdk() {
use pretty_assertions::assert_eq;

View File

@@ -179,7 +179,21 @@ impl<'a> DhcpConfig<'a> {
pub fn set_boot_filename(&mut self, boot_filename: &str) {
self.enable_netboot();
self.get_lan_dhcpd().filename64 = Some(boot_filename.to_string());
self.get_lan_dhcpd().bootfilename = Some(boot_filename.to_string());
}
pub fn set_filename(&mut self, filename: &str) {
self.enable_netboot();
self.get_lan_dhcpd().filename = Some(filename.to_string());
}
pub fn set_filename64(&mut self, filename64: &str) {
self.enable_netboot();
self.get_lan_dhcpd().filename64 = Some(filename64.to_string());
}
pub fn set_filenameipxe(&mut self, filenameipxe: &str) {
self.enable_netboot();
self.get_lan_dhcpd().filenameipxe = Some(filenameipxe.to_string());
}
}
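
Taken together, these setters reproduce the iPXE chain-boot layout used by the OKD DHCP scores above; a sketch with a placeholder gateway address, where dhcp is a DhcpConfig handle (each setter calls enable_netboot(), so netboot is switched on as a side effect):

let gateway = "192.168.1.1";
dhcp.set_filename("undionly.kpxe"); // legacy BIOS clients chain-load iPXE
dhcp.set_filename64("ipxe.efi");    // UEFI clients chain-load iPXE
dhcp.set_filenameipxe(&format!("http://{gateway}:8080/boot.ipxe")); // iPXE then fetches its script over HTTP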