Compare commits

...

50 Commits

Author SHA1 Message Date
14fc4345c1 feat: Initialize k8s tenant properly
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m49s
2025-06-08 23:49:08 -04:00
8e472e4c65 feat: Add Default implementation for Harmony Id along with documentation.
Some checks failed
Run Check Script / check (push) Failing after 47s
Run Check Script / check (pull_request) Failing after 45s
This Id implementation is optimized for ease of use. Ids are prefixed with the unix epoch and suffixed with 7 alphanumeric characters. But Ids can also contain any String the user wants to pass in.
2025-06-08 21:23:29 -04:00
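The Id format described in the commit above could be sketched roughly as follows. This is a hypothetical illustration, not the actual Harmony implementation: the `-` separator is assumed, and the suffix generator is a dependency-free LCG stand-in for whatever RNG the real code uses.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

const CHARSET: &[u8] = b"abcdefghijklmnopqrstuvwxyz0123456789";

/// Generate `len` pseudo-random alphanumeric characters.
/// A real implementation would use a proper RNG; this LCG seeded from
/// the clock keeps the sketch dependency-free.
fn alnum_suffix(len: usize) -> String {
    let mut seed = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before unix epoch")
        .as_nanos();
    (0..len)
        .map(|_| {
            seed = seed
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            // Take the high 64 bits, then map into the charset.
            CHARSET[(seed >> 64) as usize % CHARSET.len()] as char
        })
        .collect()
}

/// Id prefixed with the unix epoch and suffixed with 7 alphanumeric
/// characters, e.g. "1749440948-k3f9q0z" (separator assumed).
fn new_id() -> String {
    let epoch = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before unix epoch")
        .as_secs();
    format!("{epoch}-{}", alnum_suffix(7))
}

fn main() {
    println!("{}", new_id());
}
```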
ec17ccc246 feat: Add example-tenant (WIP)
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m53s
2025-06-06 13:59:48 -04:00
5127f44ab3 docs: Add note about pod privilege escalation in ADR 011 Tenant
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Run Check Script / check (pull_request) Successful in 1m46s
2025-06-06 13:56:40 -04:00
2ff70db0b1 wip: Tenant example project
All checks were successful
Run Check Script / check (push) Successful in 1m49s
Run Check Script / check (pull_request) Successful in 1m48s
2025-06-06 13:52:40 -04:00
e17ac1af83 Merge remote-tracking branch 'origin/master' into TenantManager_impl_k8s_anywhere
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m47s
2025-06-04 16:14:21 -04:00
31e59937dc Merge pull request 'feat: Initial setup for monitoring and alerting' (#48) from feat/monitor into master
All checks were successful
Run Check Script / check (push) Successful in 1m50s
Reviewed-on: #48
Reviewed-by: johnride <jg@nationtech.io>
2025-06-03 18:17:13 +00:00
12eb4ae31f fix: cargo fmt
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Run Check Script / check (pull_request) Successful in 1m47s
2025-06-02 16:20:49 -04:00
a2be9457b9 wip: removed AlertReceiverConfig
Some checks failed
Run Check Script / check (push) Failing after 44s
Run Check Script / check (pull_request) Failing after 44s
2025-06-02 16:11:36 -04:00
0d56fbc09d wip: applied comments in pr, changed naming of AlertChannel to AlertReceiver and added rust doc to Monitor for clarity
All checks were successful
Run Check Script / check (push) Successful in 1m49s
Run Check Script / check (pull_request) Successful in 1m47s
2025-06-02 14:44:43 -04:00
56dc1e93c1 fix: modified files in mod
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m46s
2025-06-02 11:47:21 -04:00
691540fe64 wip: modified initial monitoring architecture based on pr review
Some checks failed
Run Check Script / check (push) Failing after 46s
Run Check Script / check (pull_request) Failing after 43s
2025-06-02 11:42:37 -04:00
7e3f1b1830 fix:cargo fmt
All checks were successful
Run Check Script / check (push) Successful in 1m45s
Run Check Script / check (pull_request) Successful in 1m45s
2025-05-30 13:59:29 -04:00
b631e8ccbb feat: Initial setup for monitoring and alerting
Some checks failed
Run Check Script / check (push) Failing after 43s
Run Check Script / check (pull_request) Failing after 45s
2025-05-30 13:21:38 -04:00
60f2f31d6c feat: Add TenantScore and TenantInterpret (#45)
All checks were successful
Run Check Script / check (push) Successful in 1m47s
Reviewed-on: #45
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-05-30 13:13:43 +00:00
045954f8d3 start network policy
All checks were successful
Run Check Script / check (push) Successful in 1m50s
Run Check Script / check (pull_request) Successful in 1m46s
2025-05-29 18:06:16 -04:00
27f1a9dbdd feat: add more to the tenantmanager k8s impl (#46)
All checks were successful
Run Check Script / check (push) Successful in 1m55s
Co-authored-by: Willem <wrolleman@nationtech.io>
Reviewed-on: #46
Co-authored-by: Taha Hawa <taha@taha.dev>
Co-committed-by: Taha Hawa <taha@taha.dev>
2025-05-29 20:15:38 +00:00
7c809bf18a Make k8stenantmanager a oncecell
All checks were successful
Run Check Script / check (push) Successful in 1m49s
Run Check Script / check (pull_request) Successful in 1m46s
2025-05-29 16:03:58 -04:00
6490e5e82a Hardcode some limits to protect the overall cluster
Some checks failed
Run Check Script / check (push) Failing after 42s
Run Check Script / check (pull_request) Failing after 47s
2025-05-29 15:49:46 -04:00
5e51f7490c Update request quota
Some checks failed
Run Check Script / check (push) Failing after 46s
2025-05-29 15:41:57 -04:00
97fba07f4e feat: adding kubernetes implentation of tenant manager
Some checks failed
Run Check Script / check (push) Failing after 43s
2025-05-29 14:35:58 -04:00
624e4330bb boilerplate
All checks were successful
Run Check Script / check (push) Successful in 1m47s
2025-05-29 13:36:30 -04:00
e7917843bc Merge pull request 'feat: Add initial Tenant traits and data structures' (#43) from feat/tenant into master
Some checks failed
Run Check Script / check (push) Has been cancelled
Reviewed-on: #43
2025-05-29 15:51:33 +00:00
7cd541bdd8 chore: Fix pr comments, remove many YAGNI things
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Run Check Script / check (pull_request) Successful in 1m46s
2025-05-29 11:47:25 -04:00
270dd49567 Merge pull request 'docs: Add CONTRIBUTING.md guide' (#44) from doc/contributor into master
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Reviewed-on: #44
2025-05-29 14:48:18 +00:00
0187300473 docs: Add CONTRIBUTING.md guide
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Run Check Script / check (pull_request) Successful in 1m47s
2025-05-29 10:47:38 -04:00
bf16566b4e wip: Clean up some unnecessary bits in the Tenant module and move manager to its own file
All checks were successful
Run Check Script / check (push) Successful in 1m48s
Run Check Script / check (pull_request) Successful in 1m46s
2025-05-29 07:25:45 -04:00
895fb02f4e feat: Add initial Tenant traits and data structures
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Run Check Script / check (pull_request) Successful in 1m45s
2025-05-28 22:33:46 -04:00
88d6af9815 Merge pull request 'feat/basicCI' (#42) from feat/basicCI into master
All checks were successful
Run Check Script / check (push) Successful in 1m50s
Reviewed-on: #42
Reviewed-by: taha <taha@noreply.git.nationtech.io>
2025-05-28 19:42:19 +00:00
5aa9dc701f fix: Removed forgotten refactoring bits and formatting
All checks were successful
Run Check Script / check (push) Successful in 1m46s
Run Check Script / check (pull_request) Successful in 1m48s
2025-05-28 15:19:39 -04:00
f4ef895d2e feat: Add basic CI configuration
Some checks failed
Run Check Script / check (push) Failing after 51s
2025-05-28 14:40:19 -04:00
6e7148a945 Merge pull request 'adr: Add ADR on multi tenancy using namespace based customer isolation' (#41) from adr/multi-tenancy into master
Reviewed-on: #41
2025-05-26 20:26:36 +00:00
83453273c6 adr: Add ADR on multi tenancy using namespace based customer isolation 2025-05-26 11:56:45 -04:00
76ae5eb747 fix: make HelmRepository public (#39)
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #39
Reviewed-by: johnride <jg@nationtech.io>
2025-05-22 20:07:42 +00:00
9c51040f3b Merge pull request 'feat:added Slack notifications support' (#38) from feat/slack-notifs into master
Reviewed-on: #38
Reviewed-by: johnride <jg@nationtech.io>
2025-05-22 20:04:51 +00:00
e1a8ee1c15 feat: send alerts to multiple alert channels 2025-05-22 14:16:41 -04:00
44b2b092a8 feat:added Slack notifications support 2025-05-21 15:29:14 -04:00
19bd47a545 Merge pull request 'monitoringalerting' (#37) from monitoringalerting into master
Reviewed-on: #37
Reviewed-by: johnride <jg@nationtech.io>
2025-05-21 17:32:26 +00:00
2b6d2e8606 fix:merge confict 2025-05-20 16:05:38 -04:00
7fc2b1ebfe feat: added monitoring stack example to lamp demo 2025-05-20 15:59:01 -04:00
e80752ea3f feat: install discord alert manager helm chart when Discord is the chosen alerting channel 2025-05-20 15:51:03 -04:00
bae7222d64 Our own Helm Command/Resource/Executor (WIP) (#13)
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #13
Co-authored-by: Taha Hawa <taha@taha.dev>
Co-committed-by: Taha Hawa <taha@taha.dev>
2025-05-20 14:01:10 +00:00
f7d3da3ac9 fix merge conflict 2025-05-15 15:31:26 -04:00
eb8a8a2e04 chore: modified build config to be able to pass namespace to the config 2025-05-15 15:19:40 -04:00
b4c6848433 feat: added default monitoringStackScore implementation 2025-05-15 14:52:04 -04:00
0d94c537a0 feat: add ingress score (#32)
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #32
Reviewed-by: wjro <wrolleman@nationtech.io>
2025-05-15 16:11:40 +00:00
861f266c4e Merge pull request 'feat: LAMP stack and Monitoring stack now work on OKD, we just have to manually set a few serviceaccounts to privileged scc until we find a better solution' (#36) from feat/lampOKD into master
Reviewed-on: #36
2025-05-14 15:48:56 +00:00
51724d0e55 feat: LAMP stack and Monitoring stack now work on OKD, we just have to manually set a few serviceaccounts to privileged scc until we find a better solution 2025-05-14 11:47:39 -04:00
c2d1cb9b76 Merge pull request 'upgrade stack size from default 1MB on windows (k3d stack overflow otherwise)' (#34) from windows-stack-size-increase into master
Reviewed-on: #34
Reviewed-by: johnride <jg@nationtech.io>
2025-05-14 14:29:51 +00:00
tahahawa
c84a02c8ec upgrade stack size from default 1MB on windows (k3d stack overflow otherwise) 2025-05-11 22:39:23 -04:00
43 changed files with 2161 additions and 458 deletions

5
.cargo/config.toml Normal file

@@ -0,0 +1,5 @@
[target.x86_64-pc-windows-msvc]
rustflags = ["-C", "link-arg=/STACK:8000000"]
[target.x86_64-pc-windows-gnu]
rustflags = ["-C", "link-arg=-Wl,--stack,8000000"]


@@ -0,0 +1,14 @@
name: Run Check Script
on:
push:
pull_request:
jobs:
check:
runs-on: rust-cargo
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run check script
run: bash check.sh

36
CONTRIBUTING.md Normal file

@@ -0,0 +1,36 @@
# Contributing to the Harmony project
## Write small PRs
Aim for the smallest piece of work that is mergeable.
Mergeable means that:
- it does not break the build
- it moves the codebase one step forward
PRs can be many things; they do not have to be complete features.
### What a PR **should** be
- Introduce a new trait: this will be the place to discuss the new trait's addition, design, and implementation
- A new implementation of a trait: e.g. a concrete implementation of the LoadBalancer trait
- A new CI check: something that improves quality, robustness, or CI performance
- Documentation improvements
- Refactoring
- A bugfix
### What a PR **should not** be
- Large. Anything over 200 lines (excluding generated lines) needs a very good reason to be that large.
- A mix of refactoring, bug fixes, and new features.
- Multiple new features or ideas introduced at once.
- Multiple new implementations of a trait/functionality at once.
The general idea is to keep PRs small and single-purpose.
## Commit message formatting
We follow the Conventional Commits guidelines:
https://www.conventionalcommits.org/en/v1.0.0/
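A few example messages in that format (illustrative only, not tied to any actual commit):

```
feat: add readiness probe to the monitoring stack
fix: handle empty namespace in tenant creation
docs: document the AlertReceiver trait
chore: bump helm-wrapper-rs version
```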

414
Cargo.lock generated

@@ -4,19 +4,13 @@ version = 4
[[package]]
name = "addr2line"
version = "0.21.0"
version = "0.24.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a30b2e23b9e17a9f90641c7ab1549cd9b44f296d3ccbf309d2863cfe398a0cb"
checksum = "dfbe277e56a376000877090da837660b4427aad530e3028d44e0bffe4f89a1c1"
dependencies = [
"gimli",
]
[[package]]
name = "adler"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe"
[[package]]
name = "adler2"
version = "2.0.0"
@@ -60,15 +54,15 @@ dependencies = [
[[package]]
name = "ahash"
version = "0.8.11"
version = "0.8.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e89da841a80418a9b391ebaea17f5c112ffaaa96f621d2c285b5174da76b9011"
checksum = "5a15f179cd60c4584b8a8c596927aadc462e27f2ca70c04e0071964a73ba7a75"
dependencies = [
"cfg-if",
"const-random",
"once_cell",
"version_check",
"zerocopy 0.7.35",
"zerocopy",
]
[[package]]
@@ -198,17 +192,17 @@ checksum = "ace50bade8e6234aa140d9a2f552bbee1db4d353f69b8217bc503490fc1a9f26"
[[package]]
name = "backtrace"
version = "0.3.71"
version = "0.3.75"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26b05800d2e817c8b3b4b54abd461726265fa9789ae34330622f2db9ee696f9d"
checksum = "6806a6321ec58106fea15becdad98371e28d92ccbc7c8f1b3b6dd724fe8f1002"
dependencies = [
"addr2line",
"cc",
"cfg-if",
"libc",
"miniz_oxide 0.7.4",
"miniz_oxide",
"object",
"rustc-demangle",
"windows-targets 0.52.6",
]
[[package]]
@@ -254,9 +248,9 @@ checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a"
[[package]]
name = "bitflags"
version = "2.9.0"
version = "2.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c8214115b7bf84099f1309324e63141d4c5d7cc26862f97a0a857dbefe165bd"
checksum = "1b8e56985ec62d17e9c1001dc89c88ecd7dc08e47eba5ec7c29c7b5eeecde967"
dependencies = [
"serde",
]
@@ -356,9 +350,9 @@ dependencies = [
[[package]]
name = "cc"
version = "1.2.20"
version = "1.2.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "04da6a0d40b948dfc4fa8f5bbf402b0fc1a64a28dbf7d12ffd683550f2c1b63a"
checksum = "32db95edf998450acc7881c932f94cd9b05c87b4b2599e8bab064753da4acfd1"
dependencies = [
"shlex",
]
@@ -413,9 +407,9 @@ dependencies = [
[[package]]
name = "clap"
version = "4.5.37"
version = "4.5.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eccb054f56cbd38340b380d4a8e69ef1f02f1af43db2f0cc817a4774d80ae071"
checksum = "ed93b9805f8ba930df42c2590f05453d5ec36cbb85d018868a5b24d31f6ac000"
dependencies = [
"clap_builder",
"clap_derive",
@@ -423,9 +417,9 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.37"
version = "4.5.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "efd9466fac8543255d3b1fcad4762c5e116ffe808c8a3043d4263cd4fd4862a2"
checksum = "379026ff283facf611b0ea629334361c4211d1b12ee01024eec1591133b04120"
dependencies = [
"anstream",
"anstyle",
@@ -453,9 +447,9 @@ checksum = "f46ad14479a25103f283c0f10005961cf086d8dc42205bb44c46ac563475dca6"
[[package]]
name = "color-eyre"
version = "0.6.3"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "55146f5e46f237f7423d74111267d4597b59b0dad0ffaf7303bce9945d843ad5"
checksum = "e6e1761c0e16f8883bbbb8ce5990867f4f06bf11a0253da6495a04ce4b6ef0ec"
dependencies = [
"backtrace",
"color-spantrace",
@@ -468,9 +462,9 @@ dependencies = [
[[package]]
name = "color-spantrace"
version = "0.2.1"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cd6be1b2a7e382e2b98b43b2adcca6bb0e465af0bdd38123873ae61eb17a72c2"
checksum = "2ddd8d5bfda1e11a501d0a7303f3bfed9aa632ebdb859be40d0fd70478ed70d5"
dependencies = [
"once_cell",
"owo-colors",
@@ -614,7 +608,7 @@ version = "0.28.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "829d955a0bb380ef178a640b91779e3987da38c9aea133b20614cfed8cdea9c6"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"crossterm_winapi",
"futures-core",
"mio 1.0.3",
@@ -936,6 +930,15 @@ dependencies = [
"zeroize",
]
[[package]]
name = "email_address"
version = "0.2.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e079f19b08ca6239f47f8ba8509c11cf3ea30095831f7fed61441475edd8c449"
dependencies = [
"serde",
]
[[package]]
name = "encoding_rs"
version = "0.8.35"
@@ -1067,6 +1070,21 @@ dependencies = [
"url",
]
[[package]]
name = "example-tenant"
version = "0.1.0"
dependencies = [
"cidr",
"env_logger",
"harmony",
"harmony_cli",
"harmony_macros",
"harmony_types",
"log",
"tokio",
"url",
]
[[package]]
name = "example-tui"
version = "0.1.0"
@@ -1121,7 +1139,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7ced92e76e966ca2fd84c8f7aa01a4aea65b0eb6648d72f7c8f3e2764a67fece"
dependencies = [
"crc32fast",
"miniz_oxide 0.8.8",
"miniz_oxide",
]
[[package]]
@@ -1172,6 +1190,16 @@ dependencies = [
"percent-encoding",
]
[[package]]
name = "fqdn"
version = "0.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0f5d7f7b3eed2f771fc7f6fcb651f9560d7b0c483d75876082acb4649d266b3"
dependencies = [
"punycode",
"serde",
]
[[package]]
name = "funty"
version = "2.0.0"
@@ -1311,9 +1339,9 @@ dependencies = [
[[package]]
name = "getrandom"
version = "0.3.2"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "73fea8450eea4bac3940448fb7ae50d91f034f941199fcd9d909a5a07aa455f0"
checksum = "26145e563e54f2cadc477553f1ec5ee650b00862f0a58bcd12cbdc5f0ea2d2f4"
dependencies = [
"cfg-if",
"libc",
@@ -1333,9 +1361,9 @@ dependencies = [
[[package]]
name = "gimli"
version = "0.28.1"
version = "0.31.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4271d37baee1b8c7e4b708028c57d816cf9d2434acb33a549475f78c181f6253"
checksum = "07e28edb80900c19c28f1072f2e8aeca7fa06b23cd4169cefe1af5aa3260783f"
[[package]]
name = "group"
@@ -1369,9 +1397,9 @@ dependencies = [
[[package]]
name = "h2"
version = "0.4.9"
version = "0.4.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75249d144030531f8dee69fe9cea04d3edf809a017ae445e2abdff6629e86633"
checksum = "a9421a676d1b147b16b82c9225157dc629087ef8ec4d5e2960f9437a90dac0a5"
dependencies = [
"atomic-waker",
"bytes",
@@ -1396,10 +1424,14 @@ dependencies = [
"derive-new",
"directories",
"dockerfile_builder",
"dyn-clone",
"email_address",
"env_logger",
"fqdn",
"harmony_macros",
"harmony_types",
"helm-wrapper-rs",
"hex",
"http 1.3.1",
"inquire",
"k3d-rs",
@@ -1411,6 +1443,7 @@ dependencies = [
"non-blank-string-rs",
"opnsense-config",
"opnsense-config-xml",
"rand 0.9.1",
"reqwest 0.11.27",
"russh",
"rust-ipmi",
@@ -1419,6 +1452,7 @@ dependencies = [
"serde-value",
"serde_json",
"serde_yaml",
"temp-dir",
"temp-file",
"tokio",
"url",
@@ -1476,9 +1510,9 @@ dependencies = [
[[package]]
name = "hashbrown"
version = "0.15.2"
version = "0.15.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bf151400ff0baff5465007dd2f3e717f3fe502074ca563069ce3a6629d07b289"
checksum = "84b26c544d002229e640969970a2e74021aadf6e2f96372b9c58eff97de08eb3"
dependencies = [
"allocator-api2",
"equivalent",
@@ -1534,6 +1568,12 @@ version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d231dfb89cfffdbc30e7fc41579ed6066ad03abda9e567ccafae602b97ec5024"
[[package]]
name = "hex"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "hex-literal"
version = "0.4.1"
@@ -1692,7 +1732,7 @@ dependencies = [
"bytes",
"futures-channel",
"futures-util",
"h2 0.4.9",
"h2 0.4.10",
"http 1.3.1",
"http-body 1.0.1",
"httparse",
@@ -1831,21 +1871,22 @@ dependencies = [
[[package]]
name = "icu_collections"
version = "1.5.0"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "db2fa452206ebee18c4b5c2274dbf1de17008e874b4dc4f0aea9d01ca79e4526"
checksum = "200072f5d0e3614556f94a9930d5dc3e0662a652823904c3a75dc3b0af7fee47"
dependencies = [
"displaydoc",
"potential_utf",
"yoke",
"zerofrom",
"zerovec",
]
[[package]]
name = "icu_locid"
version = "1.5.0"
name = "icu_locale_core"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "13acbb8371917fc971be86fc8057c41a64b521c184808a698c02acc242dbf637"
checksum = "0cde2700ccaed3872079a65fb1a78f6c0a36c91570f28755dda67bc8f7d9f00a"
dependencies = [
"displaydoc",
"litemap",
@@ -1854,31 +1895,11 @@ dependencies = [
"zerovec",
]
[[package]]
name = "icu_locid_transform"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "01d11ac35de8e40fdeda00d9e1e9d92525f3f9d887cdd7aa81d727596788b54e"
dependencies = [
"displaydoc",
"icu_locid",
"icu_locid_transform_data",
"icu_provider",
"tinystr",
"zerovec",
]
[[package]]
name = "icu_locid_transform_data"
version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7515e6d781098bf9f7205ab3fc7e9709d34554ae0b21ddbcb5febfa4bc7df11d"
[[package]]
name = "icu_normalizer"
version = "1.5.0"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "19ce3e0da2ec68599d193c93d088142efd7f9c5d6fc9b803774855747dc6a84f"
checksum = "436880e8e18df4d7bbc06d58432329d6458cc84531f7ac5f024e93deadb37979"
dependencies = [
"displaydoc",
"icu_collections",
@@ -1886,67 +1907,54 @@ dependencies = [
"icu_properties",
"icu_provider",
"smallvec",
"utf16_iter",
"utf8_iter",
"write16",
"zerovec",
]
[[package]]
name = "icu_normalizer_data"
version = "1.5.1"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c5e8338228bdc8ab83303f16b797e177953730f601a96c25d10cb3ab0daa0cb7"
checksum = "00210d6893afc98edb752b664b8890f0ef174c8adbb8d0be9710fa66fbbf72d3"
[[package]]
name = "icu_properties"
version = "1.5.1"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "93d6020766cfc6302c15dbbc9c8778c37e62c14427cb7f6e601d849e092aeef5"
checksum = "2549ca8c7241c82f59c80ba2a6f415d931c5b58d24fb8412caa1a1f02c49139a"
dependencies = [
"displaydoc",
"icu_collections",
"icu_locid_transform",
"icu_locale_core",
"icu_properties_data",
"icu_provider",
"tinystr",
"potential_utf",
"zerotrie",
"zerovec",
]
[[package]]
name = "icu_properties_data"
version = "1.5.1"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "85fb8799753b75aee8d2a21d7c14d9f38921b54b3dbda10f5a3c7a7b82dba5e2"
checksum = "8197e866e47b68f8f7d95249e172903bec06004b18b2937f1095d40a0c57de04"
[[package]]
name = "icu_provider"
version = "1.5.0"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6ed421c8a8ef78d3e2dbc98a973be2f3770cb42b606e3ab18d6237c4dfde68d9"
checksum = "03c80da27b5f4187909049ee2d72f276f0d9f99a42c306bd0131ecfe04d8e5af"
dependencies = [
"displaydoc",
"icu_locid",
"icu_provider_macros",
"icu_locale_core",
"stable_deref_trait",
"tinystr",
"writeable",
"yoke",
"zerofrom",
"zerotrie",
"zerovec",
]
[[package]]
name = "icu_provider_macros"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1ec89e9337638ecdc08744df490b221a7399bf8d164eb52a665454e60e075ad6"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "ident_case"
version = "1.0.1"
@@ -1966,9 +1974,9 @@ dependencies = [
[[package]]
name = "idna_adapter"
version = "1.2.0"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "daca1df1c957320b2cf139ac61e7bd64fed304c5040df000a745aa1de3b4ef71"
checksum = "3acae9609540aa318d1bc588455225fb2085b9ed0c4f6bd0d9d5bcd86f1a0344"
dependencies = [
"icu_normalizer",
"icu_properties",
@@ -2012,7 +2020,7 @@ version = "0.7.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fddf93031af70e75410a2511ec04d49e758ed2f26dad3404a934e0fb45cc12a"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"crossterm 0.25.0",
"dyn-clone",
"fuzzy-matcher",
@@ -2075,9 +2083,9 @@ checksum = "4a5f13b858c8d314ee3e8f639011f7ccefe71f97f96e50151fb991f267928e2c"
[[package]]
name = "jiff"
version = "0.2.10"
version = "0.2.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a064218214dc6a10fbae5ec5fa888d80c45d611aba169222fc272072bf7aef6"
checksum = "f02000660d30638906021176af16b17498bd0d12813dbfe7b276d8bc7f3c0806"
dependencies = [
"jiff-static",
"log",
@@ -2088,9 +2096,9 @@ dependencies = [
[[package]]
name = "jiff-static"
version = "0.2.10"
version = "0.2.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "199b7932d97e325aff3a7030e141eafe7f2c6268e1d1b24859b753a627f45254"
checksum = "f3c30758ddd7188629c6713fc45d1188af4f44c90582311d0c8d8c9907f60c48"
dependencies = [
"proc-macro2",
"quote",
@@ -2249,9 +2257,9 @@ checksum = "d750af042f7ef4f724306de029d18836c26c1765a54a6a3f094cbd23a7267ffa"
[[package]]
name = "libm"
version = "0.2.13"
version = "0.2.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c9627da5196e5d8ed0b0495e61e518847578da83483c37288316d9b2e03a7f72"
checksum = "f9fbbcab51052fe104eb5e5d351cf728d30a5be1fe14d9be8a3b097481fb97de"
[[package]]
name = "libredfish"
@@ -2272,7 +2280,7 @@ version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0ff37bd590ca25063e35af745c343cb7a0271906fb7b37e4813e8f79f00268d"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"libc",
]
@@ -2290,9 +2298,9 @@ checksum = "cd945864f07fe9f5371a27ad7b52a172b4b499999f1d97574c9fa68373937e12"
[[package]]
name = "litemap"
version = "0.7.5"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "23fb14cb19457329c82206317a5663005a4d404783dc74f4252769b0d5f42856"
checksum = "241eaef5fd12c88705a01fc1066c48c4b36e0dd4377dcdc7ec3942cea7a69956"
[[package]]
name = "lock_api"
@@ -2346,15 +2354,6 @@ version = "0.3.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6877bb514081ee2a7ff5ef9de3281f14a4dd4bceac4c09388074a6b5df8a139a"
[[package]]
name = "miniz_oxide"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b8a240ddb74feaf34a79a7add65a741f3167852fba007066dcac1ca548d89c08"
dependencies = [
"adler",
]
[[package]]
name = "miniz_oxide"
version = "0.8.8"
@@ -2499,18 +2498,18 @@ dependencies = [
[[package]]
name = "object"
version = "0.32.2"
version = "0.36.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a6a622008b6e321afc04970976f62ee297fdbaa6f95318ca343e3eebb9648441"
checksum = "62948e14d923ea95ea2c7c86c71013138b66525b86bdc08d2dcc262bdb497b87"
dependencies = [
"memchr",
]
[[package]]
name = "octocrab"
version = "0.44.0"
version = "0.44.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aaf799a9982a4d0b4b3fa15b4c1ff7daf5bd0597f46456744dcbb6ddc2e4c827"
checksum = "86996964f8b721067b6ed238aa0ccee56ecad6ee5e714468aa567992d05d2b91"
dependencies = [
"arc-swap",
"async-trait",
@@ -2564,7 +2563,7 @@ version = "0.10.72"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fedfea7d58a1f73118430a55da6a286e7b044961736ce96a16a17068ea25e5da"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"cfg-if",
"foreign-types",
"libc",
@@ -2592,9 +2591,9 @@ checksum = "d05e27ee213611ffe7d6348b942e8f942b37114c00cc03cec254295a4a17852e"
[[package]]
name = "openssl-sys"
version = "0.9.107"
version = "0.9.108"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8288979acd84749c744a9014b4382d42b8f7b2592847b5afb2ed29e5d16ede07"
checksum = "e145e1651e858e820e4860f7b9c5e169bc1d8ce1c86043be79fa7b7634821847"
dependencies = [
"cc",
"libc",
@@ -2658,9 +2657,9 @@ dependencies = [
[[package]]
name = "owo-colors"
version = "3.5.0"
version = "4.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1b04fb49957986fdce4d6ee7a65027d55d4b6d2265e5848bbb507b58ccfdb6f"
checksum = "1036865bb9422d3300cf723f657c2851d0e9ab12567854b1f4eba3d77decf564"
[[package]]
name = "p256"
@@ -2946,6 +2945,15 @@ dependencies = [
"portable-atomic",
]
[[package]]
name = "potential_utf"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e5a7c30837279ca13e7c867e9e40053bc68740f988cb07f7ca6df43cc734b585"
dependencies = [
"zerovec",
]
[[package]]
name = "powerfmt"
version = "0.2.0"
@@ -2958,7 +2966,7 @@ version = "0.2.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "85eae3c4ed2f50dcfe72643da4befc30deadb458a9b590d720cde2f2b1e97da9"
dependencies = [
"zerocopy 0.8.25",
"zerocopy",
]
[[package]]
@@ -3016,6 +3024,12 @@ dependencies = [
"unicode-ident",
]
[[package]]
name = "punycode"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e9e1dcb320d6839f6edb64f7a4a59d39b30480d4d1765b56873f7c858538a5fe"
[[package]]
name = "quote"
version = "1.0.40"
@@ -3093,7 +3107,7 @@ version = "0.9.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "99d9a13982dcf210057a8a78572b2217b667c3beacbf3a0d8b454f6f82837d38"
dependencies = [
"getrandom 0.3.2",
"getrandom 0.3.3",
]
[[package]]
@@ -3102,7 +3116,7 @@ version = "0.29.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eabd94c2f37801c20583fc49dd5cd6b0ba68c716787c2dd6ed18571e1e63117b"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"cassowary",
"compact_str",
"crossterm 0.28.1",
@@ -3119,11 +3133,11 @@ dependencies = [
[[package]]
name = "redox_syscall"
version = "0.5.11"
version = "0.5.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d2f103c6d277498fbceb16e84d317e2a400f160f46904d5f5410848c829511a3"
checksum = "928fca9cf2aa042393a8325b9ead81d2f0df4cb12e1e24cef072922ccd99c5af"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
]
[[package]]
@@ -3217,7 +3231,7 @@ dependencies = [
"encoding_rs",
"futures-core",
"futures-util",
"h2 0.4.9",
"h2 0.4.10",
"http 1.3.1",
"http-body 1.0.1",
"http-body-util",
@@ -3306,7 +3320,7 @@ dependencies = [
"aes",
"aes-gcm",
"async-trait",
"bitflags 2.9.0",
"bitflags 2.9.1",
"byteorder",
"cbc",
"chacha20",
@@ -3407,7 +3421,7 @@ version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3bb94393cafad0530145b8f626d8687f1ee1dedb93d7ba7740d6ae81868b13b5"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"bytes",
"chrono",
"flurry",
@@ -3454,7 +3468,7 @@ version = "0.38.44"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fdb5bc1ae2baa591800df16c9ca78619bf65c0488b41b96ccec5d11220d8c154"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"errno",
"libc",
"linux-raw-sys 0.4.15",
@@ -3463,11 +3477,11 @@ dependencies = [
[[package]]
name = "rustix"
version = "1.0.5"
version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d97817398dd4bb2e6da002002db259209759911da105da92bec29ccb12cf58bf"
checksum = "c71e83d6afe7ff64890ec6b71d6a69bb8a610ab78ce364b3352876bb4c801266"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"errno",
"libc",
"linux-raw-sys 0.9.4",
@@ -3476,9 +3490,9 @@ dependencies = [
[[package]]
name = "rustls"
version = "0.23.26"
version = "0.23.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df51b5869f3a441595eac5e8ff14d486ff285f7b8c0df8770e49c3b56351f0f0"
checksum = "730944ca083c1c233a75c09f199e973ca499344a2b7ba9e755c457e86fb4a321"
dependencies = [
"log",
"once_cell",
@@ -3534,15 +3548,18 @@ dependencies = [
[[package]]
name = "rustls-pki-types"
version = "1.11.0"
version = "1.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "917ce264624a4b4db1c364dcc35bfca9ded014d0a958cd47ad3e960e988ea51c"
checksum = "229a4a4c221013e7e1f1a043678c5cc39fe5171437c88fb47151a21e6f5b5c79"
dependencies = [
"zeroize",
]
[[package]]
name = "rustls-webpki"
version = "0.103.1"
version = "0.103.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fef8b8769aaccf73098557a87cd1816b4f9c7c16811c9c77142aa695c16f2c03"
checksum = "e4a72fe2bcf7a6ac6fd7d0b9e5cb68aeb7d4c0a0271730218b3e92d43b4eb435"
dependencies = [
"ring",
"rustls-pki-types",
@@ -3625,7 +3642,7 @@ version = "2.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "897b2245f0b511c87893af39b033e5ca9cce68824c4d7e7630b5a1d339658d02"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"core-foundation 0.9.4",
"core-foundation-sys",
"libc",
@@ -3638,7 +3655,7 @@ version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "271720403f46ca04f7ba6f55d438f8bd878d6b8ca0a1046e8228c4145bcbb316"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"core-foundation 0.10.0",
"core-foundation-sys",
"libc",
@@ -3769,9 +3786,9 @@ dependencies = [
[[package]]
name = "sha2"
version = "0.10.8"
version = "0.10.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "793db75ad2bcafc3ffa7c68b215fee268f537982cd901d132f89c6343f3a3dc8"
checksum = "a7507d819769d01a365ab707794a4084392c824f54a7a6a7862f8c3d0892b283"
dependencies = [
"cfg-if",
"cpufeatures",
@@ -3795,9 +3812,9 @@ checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
[[package]]
name = "signal-hook"
version = "0.3.17"
version = "0.3.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8621587d4798caf8eb44879d42e56b9a93ea5dcd315a6487c357130095b62801"
checksum = "d881a16cf4426aa584979d30bd82cb33429027e42122b169753d6ef1085ed6e2"
dependencies = [
"libc",
"signal-hook-registry",
@@ -4033,9 +4050,9 @@ dependencies = [
[[package]]
name = "synstructure"
version = "0.13.1"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8af7666ab7b6390ab78131fb5b0fce11d6b7a6951602017c35fa82800708971"
checksum = "728a70f3dbaf5bab7f0c4b1ac8d7ae5ea60a4b5549c8a5914361c99147a709d2"
dependencies = [
"proc-macro2",
"quote",
@@ -4059,7 +4076,7 @@ version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3c879d448e9d986b661742763247d3693ed13609438cf3d006f51f5368a5ba6b"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
"core-foundation 0.9.4",
"system-configuration-sys 0.6.0",
]
@@ -4090,6 +4107,12 @@ version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "55937e1799185b12863d447f42597ed69d9928686b8d88a1df17376a097d8369"
[[package]]
name = "temp-dir"
version = "0.1.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "83176759e9416cf81ee66cb6508dbfe9c96f20b8b56265a39917551c23c70964"
[[package]]
name = "temp-file"
version = "0.1.9"
@@ -4098,14 +4121,14 @@ checksum = "b5ff282c3f91797f0acb021f3af7fffa8a78601f0f2fd0a9f79ee7dcf9a9af9e"
[[package]]
name = "tempfile"
version = "3.19.1"
version = "3.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7437ac7763b9b123ccf33c338a5cc1bac6f69b45a136c19bdd8a65e3916435bf"
checksum = "e8a64e3985349f2441a1a9ef0b853f869006c3855f2cda6862a94d26ebb9d6a1"
dependencies = [
"fastrand",
"getrandom 0.3.2",
"getrandom 0.3.3",
"once_cell",
"rustix 1.0.5",
"rustix 1.0.7",
"windows-sys 0.59.0",
]
@@ -4207,9 +4230,9 @@ dependencies = [
[[package]]
name = "tinystr"
version = "0.7.6"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9117f5d4db391c1cf6927e7bea3db74b9a1c1add8f7eda9ffd5364f40f57b82f"
checksum = "5d4f6d1145dcb577acf783d4e601bc1d76a13337bb54e6233add580b07344c8b"
dependencies = [
"displaydoc",
"zerovec",
@@ -4217,9 +4240,9 @@ dependencies = [
[[package]]
name = "tokio"
version = "1.44.2"
version = "1.45.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6b88822cbe49de4185e3a4cbf8321dd487cf5fe0c5c65695fef6346371e9c48"
checksum = "2513ca694ef9ede0fb23fe71a4ee4107cb102b9dc1930f6d0fd77aae068ae165"
dependencies = [
"backtrace",
"bytes",
@@ -4306,12 +4329,12 @@ dependencies = [
[[package]]
name = "tower-http"
version = "0.6.2"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "403fa3b783d4b626a8ad51d766ab03cb6d2dbfc46b1c5d4448395e6628dc9697"
checksum = "0fdb0c213ca27a9f57ab69ddb290fd80d970922355b83ae380b395d3986b8a2e"
dependencies = [
"base64 0.22.1",
"bitflags 2.9.0",
"bitflags 2.9.1",
"bytes",
"futures-util",
"http 1.3.1",
@@ -4493,12 +4516,6 @@ dependencies = [
"serde",
]
[[package]]
name = "utf16_iter"
version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8232dd3cdaed5356e0f716d285e4b40b932ac434100fe9b7e0e8e935b9e6246"
[[package]]
name = "utf8_iter"
version = "1.0.4"
@@ -4517,7 +4534,7 @@ version = "1.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "458f7a779bf54acc9f347480ac654f68407d3aab21269a6e3c9f922acd9e2da9"
dependencies = [
"getrandom 0.3.2",
"getrandom 0.3.3",
"rand 0.9.1",
"uuid-macro-internal",
]
@@ -5018,20 +5035,14 @@ version = "0.39.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6f42320e61fe2cfd34354ecb597f86f413484a798ba44a8ca1165c58d42da6c1"
dependencies = [
"bitflags 2.9.0",
"bitflags 2.9.1",
]
[[package]]
name = "write16"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d1890f4022759daae28ed4fe62859b1236caebfc61ede2f63ed4e695f3f6d936"
[[package]]
name = "writeable"
version = "0.5.5"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e9df38ee2d2c3c5948ea468a8406ff0db0b29ae1ffde1bcf20ef305bcc95c51"
checksum = "ea2f10b9bb0928dfb1b42b65e1f9e36f7f54dbdf08457afefb38afcdec4fa2bb"
[[package]]
name = "wyz"
@@ -5080,9 +5091,9 @@ dependencies = [
[[package]]
name = "yoke"
version = "0.7.5"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "120e6aef9aa629e3d4f52dc8cc43a015c7724194c97dfaf45180d2daf2b77f40"
checksum = "5f41bb01b8226ef4bfd589436a297c53d118f65921786300e427be8d487695cc"
dependencies = [
"serde",
"stable_deref_trait",
@@ -5092,9 +5103,9 @@ dependencies = [
[[package]]
name = "yoke-derive"
version = "0.7.5"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2380878cad4ac9aac1e2435f3eb4020e8374b5f13c296cb75b4620ff8e229154"
checksum = "38da3c9736e16c5d3c8c597a9aaa5d1fa565d0532ae05e27c24aa62fb32c0ab6"
dependencies = [
"proc-macro2",
"quote",
@@ -5102,33 +5113,13 @@ dependencies = [
"synstructure",
]
[[package]]
name = "zerocopy"
version = "0.7.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1b9b4fd18abc82b8136838da5d50bae7bdea537c574d8dc1a34ed098d6c166f0"
dependencies = [
"zerocopy-derive 0.7.35",
]
[[package]]
name = "zerocopy"
version = "0.8.25"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1702d9583232ddb9174e01bb7c15a2ab8fb1bc6f227aa1233858c351a3ba0cb"
dependencies = [
"zerocopy-derive 0.8.25",
]
[[package]]
name = "zerocopy-derive"
version = "0.7.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fa4f8080344d4671fb4e831a13ad1e68092748387dfc4f55e356242fae12ce3e"
dependencies = [
"proc-macro2",
"quote",
"syn",
"zerocopy-derive",
]
[[package]]
@@ -5170,10 +5161,21 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ced3678a2879b30306d323f4542626697a464a97c0a07c9aebf7ebca65cd4dde"
[[package]]
name = "zerovec"
version = "0.10.4"
name = "zerotrie"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa2b893d79df23bfb12d5461018d408ea19dfafe76c2c7ef6d4eba614f8ff079"
checksum = "36f0bbd478583f79edad978b407914f61b2972f5af6fa089686016be8f9af595"
dependencies = [
"displaydoc",
"yoke",
"zerofrom",
]
[[package]]
name = "zerovec"
version = "0.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4a05eb080e015ba39cc9e23bbe5e7fb04d5fb040350f99f34e338d5fdd294428"
dependencies = [
"yoke",
"zerofrom",
@@ -5182,9 +5184,9 @@ dependencies = [
[[package]]
name = "zerovec-derive"
version = "0.10.3"
version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6eafa6dfb17584ea3e2bd6e76e0cc15ad7af12b09abdd1ca55961bed9b1063c6"
checksum = "5b96237efa0c878c64bd89c436f661be4e46b2f3eff1ebb976f7ef2321d2f58f"
dependencies = [
"proc-macro2",
"quote",

View File

@@ -1,6 +1,6 @@
# Architecture Decision Record: \<Title\>
Name: \<Name\>
Initial Author: \<Name\>
Initial Date: \<Date\>

View File

@@ -1,6 +1,6 @@
# Architecture Decision Record: Helm and Kustomize Handling
Name: Taha Hawa
Initial Author: Taha Hawa
Initial Date: 2025-04-15

View File

@@ -1,7 +1,7 @@
# Architecture Decision Record: Monitoring and Alerting
Proposed by: Willem Rolleman
Date: April 28 2025
Initial Author : Willem Rolleman
Date : April 28 2025
## Status

View File

@@ -0,0 +1,161 @@
# Architecture Decision Record: Multi-Tenancy Strategy for Harmony Managed Clusters
Initial Author: Jean-Gabriel Gill-Couture
Initial Date: 2025-05-26
## Status
Proposed
## Context
Harmony manages production OKD/Kubernetes clusters that serve multiple clients with varying trust levels and operational requirements. We need a multi-tenancy strategy that provides:
1. **Strong isolation** between client workloads while maintaining operational simplicity
2. **Controlled API access** allowing clients self-service capabilities within defined boundaries
3. **Security-first approach** protecting both the cluster infrastructure and tenant data
4. **Harmony-native implementation** using our Score/Interpret pattern for automated tenant provisioning
5. **Scalable management** supporting both small trusted clients and larger enterprise customers
The official Kubernetes multi-tenancy documentation identifies two primary models: namespace-based isolation and virtual control planes per tenant. Given Harmony's focus on operational simplicity, provider-agnostic abstractions (ADR-003), and hexagonal architecture (ADR-002), we must choose an approach that balances security, usability, and maintainability.
Our clients represent a hybrid tenancy model:
- **Customer multi-tenancy**: Each client operates independently with no cross-tenant trust
- **Team multi-tenancy**: Individual clients may have multiple team members requiring coordinated access
- **API access requirement**: Unlike pure SaaS scenarios, clients need controlled Kubernetes API access for self-service operations
The official Kubernetes documentation on multi-tenancy heavily inspired this ADR: https://kubernetes.io/docs/concepts/security/multi-tenancy/
## Decision
Implement **namespace-based multi-tenancy** with the following architecture:
### 1. Network Security Model
- **Private cluster access**: Kubernetes API and OpenShift console accessible only via WireGuard VPN
- **No public exposure**: Control plane endpoints remain internal to prevent unauthorized access attempts
- **VPN-based authentication**: Initial access control through WireGuard client certificates
### 2. Tenant Isolation Strategy
- **Dedicated namespace per tenant**: Each client receives an isolated namespace with access limited only to the required resources and operations
- **Complete network isolation**: NetworkPolicies prevent cross-namespace communication while allowing full egress to public internet
- **Resource governance**: ResourceQuotas and LimitRanges enforce CPU, memory, and storage consumption limits
- **Storage access control**: Clients can create PersistentVolumeClaims but cannot directly manipulate PersistentVolumes or access other tenants' storage
### 3. Access Control Framework
- **Principle of Least Privilege**: RBAC grants only necessary permissions within tenant namespace scope
- **Namespace-scoped**: Clients can create/modify/delete resources within their namespace
- **Cluster-level restrictions**: No access to cluster-wide resources, other namespaces, or sensitive cluster operations
- **Whitelisted operations**: Controlled self-service capabilities for ingress, secrets, configmaps, and workload management
### 4. Identity Management Evolution
- **Phase 1**: Manual provisioning of VPN access and Kubernetes ServiceAccounts/Users
- **Phase 2**: Migration to Keycloak-based identity management (aligning with ADR-006) for centralized authentication and lifecycle management
### 5. Harmony Integration
- **TenantScore implementation**: Declarative tenant provisioning using Harmony's Score/Interpret pattern
- **Topology abstraction**: Tenant configuration abstracted from underlying Kubernetes implementation details
- **Automated deployment**: Complete tenant setup automated through Harmony's orchestration capabilities
## Rationale
### Network Security Through VPN Access
- **Defense in depth**: VPN requirement adds critical security layer preventing unauthorized cluster access
- **Simplified firewall rules**: No need for complex public endpoint protections or rate limiting
- **Audit capability**: VPN access provides clear audit trail of cluster connections
- **Aligns with enterprise practices**: Most enterprise customers already use VPN infrastructure
### Namespace Isolation vs Virtual Control Planes
Following Kubernetes official guidance, namespace isolation provides:
- **Lower resource overhead**: Virtual control planes require dedicated etcd, API server, and controller manager per tenant
- **Operational simplicity**: Single control plane to maintain, upgrade, and monitor
- **Cross-tenant service integration**: Enables future controlled cross-tenant communication if required
- **Proven stability**: Namespace-based isolation is well-tested and widely deployed
- **Cost efficiency**: Significantly lower infrastructure costs compared to dedicated control planes
### Hybrid Tenancy Model Suitability
Our approach addresses both customer and team multi-tenancy requirements:
- **Customer isolation**: Strong network and RBAC boundaries prevent cross-tenant interference
- **Team collaboration**: Multiple team members can share namespace access through group-based RBAC
- **Self-service balance**: Controlled API access enables client autonomy without compromising security
### Harmony Architecture Alignment
- **Provider agnostic**: TenantScore abstracts multi-tenancy concepts, enabling future support for other Kubernetes distributions
- **Hexagonal architecture**: Tenant management becomes an infrastructure capability accessed through well-defined ports
- **Declarative automation**: Tenant lifecycle fully managed through Harmony's Score execution model
## Consequences
### Positive Consequences
- **Strong security posture**: VPN + namespace isolation provides robust tenant separation
- **Operational efficiency**: Single cluster management with automated tenant provisioning
- **Client autonomy**: Self-service capabilities reduce operational support burden
- **Scalable architecture**: Can support hundreds of tenants per cluster without architectural changes
- **Future flexibility**: Foundation supports evolution to more sophisticated multi-tenancy models
- **Cost optimization**: Shared infrastructure maximizes resource utilization
### Negative Consequences
- **VPN operational overhead**: Requires VPN infrastructure management
- **Manual provisioning complexity**: Phase 1 manual user management creates administrative burden
- **Network policy dependency**: Requires CNI with NetworkPolicy support (OVN-Kubernetes provides this and is the OKD/Openshift default)
- **Cluster-wide resource limitations**: Some advanced Kubernetes features require cluster-wide access
- **Single point of failure**: Cluster outage affects all tenants simultaneously
### Migration Challenges
- **Legacy client integration**: Existing clients may need VPN client setup and credential migration
- **Monitoring complexity**: Per-tenant observability requires careful metric and log segmentation
- **Backup considerations**: Tenant data backup must respect isolation boundaries
## Alternatives Considered
### Alternative 1: Virtual Control Plane Per Tenant
**Pros**: Complete control plane isolation, full Kubernetes API access per tenant
**Cons**: 3-5x higher resource usage, complex cross-tenant networking, operational complexity scales linearly with tenants
**Rejected**: Resource overhead incompatible with cost-effective multi-tenancy goals
### Alternative 2: Dedicated Clusters Per Tenant
**Pros**: Maximum isolation, independent upgrade cycles, simplified security model
**Cons**: Exponential operational complexity, prohibitive costs, resource waste
**Rejected**: Operational overhead makes this approach unsustainable for multiple clients
### Alternative 3: Public API with Advanced Authentication
**Pros**: No VPN requirement, potentially simpler client access
**Cons**: Larger attack surface, complex rate limiting and DDoS protection, increased security monitoring requirements
**Rejected**: Risk/benefit analysis favors VPN-based access control
### Alternative 4: Service Mesh Based Isolation
**Pros**: Fine-grained traffic control, encryption, advanced observability
**Cons**: Significant operational complexity, performance overhead, steep learning curve
**Rejected**: Complexity overhead outweighs benefits for current requirements; remains option for future enhancement
## Additional Notes
### Implementation Roadmap
1. **Phase 1**: Implement VPN access and manual tenant provisioning
2. **Phase 2**: Deploy TenantScore automation for namespace, RBAC, and NetworkPolicy management
3. **Phase 3**: Harden against privilege escalation from pods, audit for weaknesses, enforce security policies on pod runtimes
4. **Phase 4**: Integrate Keycloak for centralized identity management
5. **Phase 5**: Add advanced monitoring and per-tenant observability
### TenantScore Structure Preview
```rust
pub struct TenantScore {
    pub tenant_config: TenantConfig,
    pub resource_quotas: ResourceQuotaConfig,
    pub network_isolation: NetworkIsolationPolicy,
    pub storage_access: StorageAccessConfig,
    pub rbac_config: RBACConfig,
}
```
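To make the intended declarative shape concrete, a self-contained sketch follows. Every type here is a stub standing in for the real Harmony config structs, and the field names (`cpu_limit_millis`, `allow_internet_egress`, etc.) are invented for illustration; the actual definitions live in Harmony's tenant module and will differ:

```rust
// Stub types only: stand-ins for the real Harmony config structs.
#[derive(Default, Debug)]
struct TenantConfig { name: String }
#[derive(Default, Debug)]
struct ResourceQuotaConfig { cpu_limit_millis: u64, memory_limit_mi: u64 }
#[derive(Default, Debug)]
struct NetworkIsolationPolicy { allow_internet_egress: bool }
#[derive(Default, Debug)]
struct StorageAccessConfig { max_pvc_gi: u64 }
#[derive(Default, Debug)]
struct RBACConfig { admin_group: String }

#[derive(Default, Debug)]
pub struct TenantScore {
    pub tenant_config: TenantConfig,
    pub resource_quotas: ResourceQuotaConfig,
    pub network_isolation: NetworkIsolationPolicy,
    pub storage_access: StorageAccessConfig,
    pub rbac_config: RBACConfig,
}

// A tenant declaration: only the fields that matter are spelled out,
// everything else falls back to defaults.
fn make_score() -> TenantScore {
    TenantScore {
        tenant_config: TenantConfig { name: "acme".into() },
        resource_quotas: ResourceQuotaConfig { cpu_limit_millis: 4000, memory_limit_mi: 8192 },
        network_isolation: NetworkIsolationPolicy { allow_internet_egress: true },
        ..Default::default()
    }
}

fn main() {
    let score = make_score();
    println!("tenant: {}", score.tenant_config.name);
}
```

The point of the shape is that a tenant is fully described by one value that Harmony can interpret, diff, and re-apply.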
### Future Enhancements
- **Cross-tenant service mesh**: For approved inter-tenant communication
- **Advanced monitoring**: Per-tenant Prometheus/Grafana instances
- **Backup automation**: Tenant-scoped backup policies
- **Cost allocation**: Detailed per-tenant resource usage tracking
This ADR establishes the foundation for secure, scalable multi-tenancy in Harmony-managed clusters while maintaining operational simplicity and cost effectiveness. A follow-up ADR will detail the Tenant abstraction and user management mechanisms within the Harmony framework.

View File

@@ -0,0 +1,41 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-isolation-policy
  namespace: testtenant
spec:
  podSelector: {} # Selects all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {} # Allow from all pods in the same namespace
  egress:
    - to:
        - podSelector: {} # Allow to all pods in the same namespace
    - to:
        - podSelector: {}
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-dns # Target the openshift-dns namespace
      # Note: opening only port 53 is not enough; we will have to dig deeper into this eventually
      # ports:
      #   - protocol: UDP
      #     port: 53
      #   - protocol: TCP
      #     port: 53
    # Allow egress to the public internet only
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8 # RFC1918
              - 172.16.0.0/12 # RFC1918
              - 192.168.0.0/16 # RFC1918
              - 169.254.0.0/16 # Link-local
              - 127.0.0.0/8 # Loopback
              - 224.0.0.0/4 # Multicast
              - 240.0.0.0/4 # Reserved
              - 100.64.0.0/10 # Carrier-grade NAT
              - 0.0.0.0/8 # Reserved
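Read literally, the `except` list above blocks all private, link-local, and otherwise non-routable IPv4 ranges while allowing everything else. A small std-only sketch (illustrative only, not part of the repository) of the same matching logic:

```rust
use std::net::Ipv4Addr;

// Mirrors the `except` list of the NetworkPolicy above: (network, prefix length).
const BLOCKED: &[(Ipv4Addr, u32)] = &[
    (Ipv4Addr::new(10, 0, 0, 0), 8),     // RFC1918
    (Ipv4Addr::new(172, 16, 0, 0), 12),  // RFC1918
    (Ipv4Addr::new(192, 168, 0, 0), 16), // RFC1918
    (Ipv4Addr::new(169, 254, 0, 0), 16), // Link-local
    (Ipv4Addr::new(127, 0, 0, 0), 8),    // Loopback
    (Ipv4Addr::new(224, 0, 0, 0), 4),    // Multicast
    (Ipv4Addr::new(240, 0, 0, 0), 4),    // Reserved
    (Ipv4Addr::new(100, 64, 0, 0), 10),  // Carrier-grade NAT
    (Ipv4Addr::new(0, 0, 0, 0), 8),      // Reserved
];

/// Returns true if the policy above would allow egress to `ip`.
fn egress_allowed(ip: Ipv4Addr) -> bool {
    let ip = u32::from(ip);
    !BLOCKED.iter().any(|&(net, len)| {
        // Compare only the first `len` bits of the address and the network.
        let shift = 32 - len;
        (ip >> shift) == (u32::from(net) >> shift)
    })
}

fn main() {
    assert!(egress_allowed(Ipv4Addr::new(8, 8, 8, 8)));   // public internet: allowed
    assert!(!egress_allowed(Ipv4Addr::new(10, 1, 2, 3))); // RFC1918: blocked
    assert!(!egress_allowed(Ipv4Addr::new(100, 64, 0, 1))); // CGNAT: blocked
    println!("ok");
}
```

This mirrors only the IPv4 semantics of `ipBlock`; the actual enforcement is done by the CNI (OVN-Kubernetes on OKD).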

View File

@@ -0,0 +1,95 @@
apiVersion: v1
kind: Namespace
metadata:
  name: testtenant
---
apiVersion: v1
kind: Namespace
metadata:
  name: testtenant2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-web
  namespace: testtenant
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-web
  template:
    metadata:
      labels:
        app: test-web
    spec:
      containers:
        - name: nginx
          image: nginxinc/nginx-unprivileged
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-web
  namespace: testtenant
spec:
  selector:
    app: test-web
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-client
  namespace: testtenant
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-client
  template:
    metadata:
      labels:
        app: test-client
    spec:
      containers:
        - name: curl
          image: curlimages/curl:latest
          command: ["/bin/sh", "-c", "sleep 3600"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-web
  namespace: testtenant2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-web
  template:
    metadata:
      labels:
        app: test-web
    spec:
      containers:
        - name: nginx
          image: nginxinc/nginx-unprivileged
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-web
  namespace: testtenant2
spec:
  selector:
    app: test-web
  ports:
    - port: 80
      targetPort: 8080

check.sh Normal file → Executable file
View File

View File

@@ -2,7 +2,10 @@ use harmony::{
data::Version,
inventory::Inventory,
maestro::Maestro,
modules::lamp::{LAMPConfig, LAMPScore},
modules::{
lamp::{LAMPConfig, LAMPScore},
monitoring::monitoring_alerting::{AlertChannel, MonitoringAlertingStackScore},
},
topology::{K8sAnywhereTopology, Url},
};
@@ -24,7 +27,7 @@ async fn main() {
// This config can be extended as needed for more complicated configurations
config: LAMPConfig {
project_root: "./php".into(),
database_size: format!("2Gi").into(),
database_size: format!("4Gi").into(),
..Default::default()
},
};
@@ -39,7 +42,14 @@ async fn main() {
)
.await
.unwrap();
maestro.register_all(vec![Box::new(lamp_stack)]);
let url = url::Url::parse("https://discord.com/api/webhooks/dummy_channel/dummy_token")
.expect("invalid URL");
let mut monitoring_stack_score = MonitoringAlertingStackScore::new();
monitoring_stack_score.namespace = Some(lamp_stack.config.namespace.clone());
maestro.register_all(vec![Box::new(lamp_stack), Box::new(monitoring_stack_score)]);
// Here we bootstrap the CLI, this gives some nice features if you need them
harmony_cli::init(maestro, None).await.unwrap();
}

View File

@@ -0,0 +1,18 @@
[package]
name = "example-tenant"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }

View File

@@ -0,0 +1,41 @@
use harmony::{
    data::Id,
    inventory::Inventory,
    maestro::Maestro,
    modules::tenant::TenantScore,
    topology::{K8sAnywhereTopology, tenant::TenantConfig},
};

#[tokio::main]
async fn main() {
    let tenant = TenantScore {
        config: TenantConfig {
            id: Id::default(),
            name: "TestTenant".to_string(),
            ..Default::default()
        },
    };

    let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
        Inventory::autoload(),
        K8sAnywhereTopology::new(),
    )
    .await
    .unwrap();
    maestro.register_all(vec![Box::new(tenant)]);
    harmony_cli::init(maestro, None).await.unwrap();
}
// TODO write tests
// - Create Tenant with a mostly-default config, make sure the namespace is created
// - deploy sample client/server app with nginx unprivileged and a service
// - exec in the client pod and validate the following
// - can reach internet
// - can reach server pod
// - can resolve dns queries to internet
// - can resolve dns queries to services
// - cannot reach services and pods in other namespaces
// - Create Tenant with specific cpu/ram/storage requests / limits and make sure they are enforced by trying to
// deploy a pod with lower requests/limits (accepted) and higher requests/limits (rejected)
// - Create TenantCredentials and make sure they give only access to the correct tenant

View File

@@ -6,6 +6,8 @@ readme.workspace = true
license.workspace = true
[dependencies]
rand = "0.9"
hex = "0.4"
libredfish = "0.1.1"
reqwest = { version = "0.11", features = ["blocking", "json"] }
russh = "0.45.0"
@@ -39,3 +41,14 @@ lazy_static = "1.5.0"
dockerfile_builder = "0.1.5"
temp-file = "0.1.9"
convert_case.workspace = true
email_address = "0.2.9"
fqdn = { version = "0.4.6", features = [
"domain-label-cannot-start-or-end-with-hyphen",
"domain-label-length-limited-to-63",
"domain-name-without-special-chars",
"domain-name-length-limited-to-255",
"punycode",
"serde",
] }
temp-dir = "0.1.14"
dyn-clone = "1.0.19"

View File

@@ -1,6 +1,24 @@
use rand::distr::Alphanumeric;
use rand::distr::SampleString;
use std::time::SystemTime;
use std::time::UNIX_EPOCH;
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize)]
/// A unique identifier designed for ease of use.
///
/// You can pass it any String to use as an Id, or you can use the default format with `Id::default()`
///
/// The default format looks like this
///
/// `462d4c_g2COgai`
///
/// The first part is the unix timestamp in hexadecimal, which makes Ids easily sortable by creation time.
/// The second part is a series of 7 random characters.
///
/// **It is not meant to be very secure or unique**; it is suitable for generating up to 10 000 items per
/// second with a reasonable collision rate of 0.000014 % as calculated by this calculator: https://kevingal.com/apps/collision.html
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct Id {
    value: String,
}
@@ -10,3 +28,26 @@ impl Id {
Self { value }
}
}
impl std::fmt::Display for Id {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str(&self.value)
    }
}

impl Default for Id {
    fn default() -> Self {
        let start = SystemTime::now();
        let since_the_epoch = start
            .duration_since(UNIX_EPOCH)
            .expect("Time went backwards");
        let timestamp = since_the_epoch.as_secs();
        let hex_timestamp = format!("{:x}", timestamp & 0xffffff);
        let random_part: String = Alphanumeric.sample_string(&mut rand::rng(), 7);
        let value = format!("{}_{}", hex_timestamp, random_part);
        Self { value }
    }
}
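A std-only sketch of the timestamp-prefix logic above (the random suffix is replaced with a fixed placeholder, and `hex_prefix` is a name introduced here, not part of the crate). Note that masking with `0xffffff` keeps 24 bits, so the prefix wraps roughly every 194 days (2^24 seconds) and sort-by-creation-time holds only within that window:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Reproduces the timestamp prefix used by `Id::default`:
/// the unix time masked to 24 bits, formatted as lowercase hex.
fn hex_prefix(timestamp_secs: u64) -> String {
    format!("{:x}", timestamp_secs & 0xffffff)
}

fn main() {
    // A fixed timestamp keeps the example deterministic:
    // only the low 24 bits survive the mask.
    assert_eq!(hex_prefix(0x1462d4c), "462d4c");

    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("Time went backwards")
        .as_secs();
    // An Id would append a 7-char random suffix after the underscore.
    println!("{}_{}", hex_prefix(now), "placehld");
}
```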

View File

@@ -20,6 +20,7 @@ pub enum InterpretName {
Panic,
OPNSense,
K3dInstallation,
TenantInterpret,
}
impl std::fmt::Display for InterpretName {
@@ -35,6 +36,7 @@ impl std::fmt::Display for InterpretName {
InterpretName::Panic => f.write_str("Panic"),
InterpretName::OPNSense => f.write_str("OPNSense"),
InterpretName::K3dInstallation => f.write_str("K3dInstallation"),
InterpretName::TenantInterpret => f.write_str("Tenant"),
}
}
}

View File

@@ -1,4 +1,4 @@
use std::{process::Command, sync::Arc};
use std::{io::Error, process::Command, sync::Arc};
use async_trait::async_trait;
use inquire::Confirm;
@@ -6,6 +6,7 @@ use log::{info, warn};
use tokio::sync::OnceCell;
use crate::{
executors::ExecutorError,
interpret::{InterpretError, Outcome},
inventory::Inventory,
maestro::Maestro,
@@ -13,7 +14,13 @@ use crate::{
topology::LocalhostTopology,
};
use super::{HelmCommand, K8sclient, Topology, k8s::K8sClient};
use super::{
HelmCommand, K8sclient, Topology,
k8s::K8sClient,
tenant::{
ResourceLimits, TenantConfig, TenantManager, TenantNetworkPolicy, k8s::K8sTenantManager,
},
};
struct K8sState {
client: Arc<K8sClient>,
@@ -21,6 +28,7 @@ struct K8sState {
message: String,
}
#[derive(Debug)]
enum K8sSource {
LocalK3d,
Kubeconfig,
@@ -28,6 +36,7 @@ enum K8sSource {
pub struct K8sAnywhereTopology {
k8s_state: OnceCell<Option<K8sState>>,
tenant_manager: OnceCell<K8sTenantManager>,
}
#[async_trait]
@@ -51,6 +60,7 @@ impl K8sAnywhereTopology {
pub fn new() -> Self {
Self {
k8s_state: OnceCell::new(),
tenant_manager: OnceCell::new(),
}
}
@@ -92,9 +102,7 @@ impl K8sAnywhereTopology {
async fn try_get_or_install_k8s_client(&self) -> Result<Option<K8sState>, InterpretError> {
let k8s_anywhere_config = K8sAnywhereConfig {
kubeconfig: std::env::var("HARMONY_KUBECONFIG")
.ok()
.map(|v| v.to_string()),
kubeconfig: std::env::var("KUBECONFIG").ok().map(|v| v.to_string()),
use_system_kubeconfig: std::env::var("HARMONY_USE_SYSTEM_KUBECONFIG")
.map_or_else(|_| false, |v| v.parse().ok().unwrap_or(false)),
autoinstall: std::env::var("HARMONY_AUTOINSTALL")
@@ -161,6 +169,31 @@ impl K8sAnywhereTopology {
Ok(Some(state))
}
async fn ensure_k8s_tenant_manager(&self) -> Result<(), String> {
if let Some(_) = self.tenant_manager.get() {
return Ok(());
}
self.tenant_manager
.get_or_try_init(async || -> Result<K8sTenantManager, String> {
let k8s_client = self.k8s_client().await?;
Ok(K8sTenantManager::new(k8s_client))
})
.await?;
Ok(())
}
fn get_k8s_tenant_manager(&self) -> Result<&K8sTenantManager, ExecutorError> {
match self.tenant_manager.get() {
Some(t) => Ok(t),
None => Err(ExecutorError::UnexpectedError(
"K8sTenantManager not available".to_string(),
)),
}
}
}
struct K8sAnywhereConfig {
@@ -200,6 +233,10 @@ impl Topology for K8sAnywhereTopology {
"No K8s client could be found or installed".to_string(),
))?;
self.ensure_k8s_tenant_manager()
.await
.map_err(|e| InterpretError::new(e))?;
match self.is_helm_available() {
Ok(()) => Ok(Outcome::success(format!(
"{} + helm available",
@@ -211,3 +248,38 @@ impl Topology for K8sAnywhereTopology {
}
impl HelmCommand for K8sAnywhereTopology {}
#[async_trait]
impl TenantManager for K8sAnywhereTopology {
async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError> {
self.get_k8s_tenant_manager()?
.provision_tenant(config)
.await
}
async fn update_tenant_resource_limits(
&self,
tenant_name: &str,
new_limits: &ResourceLimits,
) -> Result<(), ExecutorError> {
self.get_k8s_tenant_manager()?
.update_tenant_resource_limits(tenant_name, new_limits)
.await
}
async fn update_tenant_network_policy(
&self,
tenant_name: &str,
new_policy: &TenantNetworkPolicy,
) -> Result<(), ExecutorError> {
self.get_k8s_tenant_manager()?
.update_tenant_network_policy(tenant_name, new_policy)
.await
}
async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError> {
self.get_k8s_tenant_manager()?
.deprovision_tenant(tenant_name)
.await
}
}

View File

@@ -7,6 +7,12 @@ use serde::Serialize;
use super::{IpAddress, LogicalHost};
use crate::executors::ExecutorError;
impl std::fmt::Debug for dyn LoadBalancer {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_fmt(format_args!("LoadBalancer {}", self.get_ip()))
}
}
#[async_trait]
pub trait LoadBalancer: Send + Sync {
fn get_ip(&self) -> IpAddress;
@@ -32,11 +38,6 @@ pub trait LoadBalancer: Send + Sync {
}
}
impl std::fmt::Debug for dyn LoadBalancer {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_fmt(format_args!("LoadBalancer {}", self.get_ip()))
}
}
#[derive(Debug, PartialEq, Clone, Serialize)]
pub struct LoadBalancerService {
pub backend_servers: Vec<BackendServer>,

View File

@@ -1,9 +1,10 @@
pub mod monitoring_alerting;
mod ha_cluster;
mod host_binding;
mod http;
mod k8s_anywhere;
mod localhost;
pub mod oberservability;
pub mod tenant;
pub use k8s_anywhere::*;
pub use localhost::*;
pub mod k8s;

View File

@@ -1,108 +0,0 @@
use std::sync::Arc;
use log::warn;
use tokio::sync::OnceCell;
use k8s_openapi::api::core::v1::Pod;
use kube::{
Client,
api::{Api, ListParams},
};
use async_trait::async_trait;
use crate::{
interpret::{InterpretError, Outcome},
inventory::Inventory,
maestro::Maestro,
modules::monitoring::monitoring_alerting::MonitoringAlertingStackScore,
score::Score,
};
use super::{HelmCommand, K8sAnywhereTopology, Topology, k8s::K8sClient};
#[derive(Clone, Debug)]
struct MonitoringState {
message: String,
}
#[derive(Debug)]
pub struct MonitoringAlertingTopology {
monitoring_state: OnceCell<Option<MonitoringState>>,
}
impl MonitoringAlertingTopology {
pub fn new() -> Self {
Self {
monitoring_state: OnceCell::new(),
}
}
async fn get_monitoring_state(&self) -> Result<Option<MonitoringState>, InterpretError> {
let client = Client::try_default()
.await
.map_err(|e| InterpretError::new(format!("Kubernetes client error: {}", e)))?;
for ns in &["monitoring", "openshift-monitoring"] {
let pods: Api<Pod> = Api::namespaced(client.clone(), ns);
//TODO hardcoding the label is a problem
//check all pods are ready
let lp = ListParams::default().labels("app.kubernetes.io/name=prometheus");
match pods.list(&lp).await {
Ok(pod_list) => {
for p in pod_list.items {
if let Some(status) = p.status {
if let Some(conditions) = status.conditions {
if conditions
.iter()
.any(|c| c.type_ == "Ready" && c.status == "True")
{
return Ok(Some(MonitoringState {
message: format!(
"Prometheus is ready in namespace: {}",
ns
),
}));
}
}
}
}
}
Err(e) => {
warn!("Failed to query pods in ns {}: {}", ns, e);
}
}
}
Ok(None)
}
}
impl<T: Topology> Clone for Box<dyn Score<T>> {
fn clone(&self) -> Box<dyn Score<T>> {
self.clone_box()
}
}
#[async_trait]
impl Topology for MonitoringAlertingTopology {
fn name(&self) -> &str {
"MonitoringAlertingTopology"
}
async fn ensure_ready(&self) -> Result<Outcome, InterpretError> {
if let Some(state) = self.get_monitoring_state().await? {
// Monitoring stack is already ready — stop app.
println!("{}", state.message);
std::process::exit(0);
}
// Monitoring not found — proceed with installation.
Ok(Outcome::success(
"Monitoring stack installation started.".to_string(),
))
}
}
impl HelmCommand for MonitoringAlertingTopology {}

View File

@@ -0,0 +1 @@
pub mod monitoring;

View File

@@ -0,0 +1,31 @@
use async_trait::async_trait;
use std::fmt::Debug;
use url::Url;
use crate::interpret::InterpretError;
use crate::{interpret::Outcome, topology::Topology};
/// Represents an entity responsible for collecting and organizing observability data
/// from various telemetry sources.
///
/// A `Monitor` abstracts the logic required to scrape, aggregate, and structure
/// monitoring data, enabling consistent processing regardless of the underlying data source.
#[async_trait]
pub trait Monitor<T: Topology>: Debug + Send + Sync {
async fn deploy_monitor(
&self,
topology: &T,
alert_receivers: Vec<AlertReceiver>,
) -> Result<Outcome, InterpretError>;
async fn delete_monitor(
&self,
topology: &T,
alert_receivers: Vec<AlertReceiver>,
) -> Result<Outcome, InterpretError>;
}
pub struct AlertReceiver {
pub receiver_id: String,
}

View File

@@ -0,0 +1,110 @@
use std::sync::Arc;
use crate::{executors::ExecutorError, topology::k8s::K8sClient};
use async_trait::async_trait;
use derive_new::new;
use k8s_openapi::api::core::v1::Namespace;
use serde_json::json;
use super::{ResourceLimits, TenantConfig, TenantManager, TenantNetworkPolicy};
#[derive(new)]
pub struct K8sTenantManager {
k8s_client: Arc<K8sClient>,
}
#[async_trait]
impl TenantManager for K8sTenantManager {
async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError> {
let namespace = json!(
{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"labels": {
"harmony.nationtech.io/tenant.id": config.id,
"harmony.nationtech.io/tenant.name": config.name,
},
"name": config.name,
},
}
);
todo!("Validate that when tenant already exists (by id) that name has not changed");
let namespace: Namespace = serde_json::from_value(namespace).unwrap();
let resource_quota = json!(
{
"apiVersion": "v1",
"kind": "List",
"items": [
{
"apiVersion": "v1",
"kind": "ResourceQuota",
"metadata": {
"name": config.name,
"labels": {
"harmony.nationtech.io/tenant.id": config.id,
"harmony.nationtech.io/tenant.name": config.name,
},
"namespace": config.name,
},
"spec": {
"hard": {
"limits.cpu": format!("{:.0}",config.resource_limits.cpu_limit_cores),
"limits.memory": format!("{:.3}Gi", config.resource_limits.memory_limit_gb),
"requests.cpu": format!("{:.0}",config.resource_limits.cpu_request_cores),
"requests.memory": format!("{:.3}Gi", config.resource_limits.memory_request_gb),
"requests.storage": format!("{:.3}", config.resource_limits.storage_total_gb),
"pods": "20",
"services": "10",
"configmaps": "30",
"secrets": "30",
"persistentvolumeclaims": "15",
"services.loadbalancers": "2",
"services.nodeports": "5",
}
}
}
]
}
);
let network_policy = json!({
"apiVersion": "networking.k8s.io/v1",
"kind": "NetworkPolicy",
"metadata": {
"name": format!("{}-network-policy", config.name),
},
"spec": {
"podSelector": {},
"egress": [],
"ingress": [],
"policyTypes": [
]
}
});
}
async fn update_tenant_resource_limits(
&self,
tenant_name: &str,
new_limits: &ResourceLimits,
) -> Result<(), ExecutorError> {
todo!()
}
async fn update_tenant_network_policy(
&self,
tenant_name: &str,
new_policy: &TenantNetworkPolicy,
) -> Result<(), ExecutorError> {
todo!()
}
async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError> {
todo!()
}
}
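The ResourceQuota above turns `f32` limits into Kubernetes quantity strings with `format!` precision specifiers. A minimal sketch of that rendering (the helper names are ours, not part of the change):

```rust
// Sketch of the quantity formatting used in the ResourceQuota above.
fn cpu_quantity(cores: f32) -> String {
    // "{:.0}" rounds to a whole number of cores, e.g. 4.0 -> "4".
    format!("{:.0}", cores)
}

fn memory_quantity(gb: f32) -> String {
    // "{:.3}Gi" keeps three decimals and appends the binary unit suffix.
    format!("{:.3}Gi", gb)
}

fn main() {
    assert_eq!(cpu_quantity(4.0), "4");
    assert_eq!(memory_quantity(4.0), "4.000Gi");
    assert_eq!(memory_quantity(1.5), "1.500Gi");
}
```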

View File

@@ -0,0 +1,46 @@
use super::*;
use async_trait::async_trait;
use crate::executors::ExecutorError;
#[async_trait]
pub trait TenantManager {
/// Provisions a new tenant based on the provided configuration.
/// This operation should be idempotent; if a tenant with the same `config.name`
/// already exists and matches the config, it will succeed without changes.
/// If it exists but differs, it will be updated, or an error will be returned
/// if the update is not supported.
///
/// # Arguments
/// * `config`: The desired configuration for the new tenant.
async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError>;
/// Updates the resource limits for an existing tenant.
///
/// # Arguments
/// * `tenant_name`: The logical name of the tenant to update.
/// * `new_limits`: The new set of resource limits to apply.
async fn update_tenant_resource_limits(
&self,
tenant_name: &str,
new_limits: &ResourceLimits,
) -> Result<(), ExecutorError>;
/// Updates the high-level network isolation policy for an existing tenant.
///
/// # Arguments
/// * `tenant_name`: The logical name of the tenant to update.
/// * `new_policy`: The new network policy to apply.
async fn update_tenant_network_policy(
&self,
tenant_name: &str,
new_policy: &TenantNetworkPolicy,
) -> Result<(), ExecutorError>;
/// Decommissions an existing tenant, removing its isolated context and associated resources.
/// This operation should be idempotent.
///
/// # Arguments
/// * `tenant_name`: The logical name of the tenant to deprovision.
async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError>;
}
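To make the idempotency contract concrete, here is a simplified, synchronous, in-memory sketch (all names invented for illustration; the real trait is async and returns `ExecutorError`):

```rust
use std::collections::HashMap;

#[derive(Clone, PartialEq, Debug)]
struct Config {
    name: String,
    cpu_limit_cores: f32,
}

#[derive(Default)]
struct InMemoryTenantManager {
    tenants: HashMap<String, Config>,
}

impl InMemoryTenantManager {
    fn provision_tenant(&mut self, config: &Config) -> Result<(), String> {
        match self.tenants.get(&config.name) {
            // Same name, same config: succeed without changes.
            Some(existing) if existing == config => Ok(()),
            // Same name, different config (or new tenant): upsert.
            _ => {
                self.tenants.insert(config.name.clone(), config.clone());
                Ok(())
            }
        }
    }
}

fn main() {
    let mut mgr = InMemoryTenantManager::default();
    let cfg = Config { name: "client-alpha".into(), cpu_limit_cores: 4.0 };
    assert!(mgr.provision_tenant(&cfg).is_ok());
    // Provisioning again with the same config is a no-op, not an error.
    assert!(mgr.provision_tenant(&cfg).is_ok());
    assert_eq!(mgr.tenants.len(), 1);
}
```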

View File

@@ -0,0 +1,89 @@
pub mod k8s;
mod manager;
pub use manager::*;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use crate::data::Id;
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)] // Assuming serde for Scores
pub struct TenantConfig {
/// Used as the primary unique identifier for management operations; it will
/// never change for the entire lifetime of the tenant.
pub id: Id,
/// A human-readable name for the tenant (e.g., "client-alpha", "project-phoenix").
pub name: String,
/// Desired resource allocations and limits for the tenant.
pub resource_limits: ResourceLimits,
/// High-level network isolation policies for the tenant.
pub network_policy: TenantNetworkPolicy,
/// Key-value pairs for provider-specific tagging, labeling, or metadata.
/// Useful for billing, organization, or filtering within the provider's console.
pub labels_or_tags: HashMap<String, String>,
}
impl Default for TenantConfig {
fn default() -> Self {
let id = Id::default();
Self {
name: format!("tenant_{id}"),
id,
resource_limits: ResourceLimits {
cpu_request_cores: 4.0,
cpu_limit_cores: 4.0,
memory_request_gb: 4.0,
memory_limit_gb: 4.0,
storage_total_gb: 20.0,
},
network_policy: TenantNetworkPolicy {
default_inter_tenant_ingress: InterTenantIngressPolicy::DenyAll,
default_internet_egress: InternetEgressPolicy::AllowAll,
},
labels_or_tags: HashMap::new(),
}
}
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Default)]
pub struct ResourceLimits {
/// Requested/guaranteed CPU cores (e.g., 2.0).
pub cpu_request_cores: f32,
/// Maximum CPU cores the tenant can burst to (e.g., 4.0).
pub cpu_limit_cores: f32,
/// Requested/guaranteed memory in Gigabytes (e.g., 8.0).
pub memory_request_gb: f32,
/// Maximum memory in Gigabytes the tenant can burst to (e.g., 16.0).
pub memory_limit_gb: f32,
/// Total persistent storage allocation in Gigabytes across all volumes.
pub storage_total_gb: f32,
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct TenantNetworkPolicy {
/// Policy for ingress traffic originating from other tenants within the same Harmony-managed environment.
pub default_inter_tenant_ingress: InterTenantIngressPolicy,
/// Policy for egress traffic destined for the public internet.
pub default_internet_egress: InternetEgressPolicy,
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub enum InterTenantIngressPolicy {
/// Deny all traffic from other tenants by default.
DenyAll,
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub enum InternetEgressPolicy {
/// Allow all outbound traffic to the internet.
AllowAll,
/// Deny all outbound traffic to the internet by default.
DenyAll,
}
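Since `TenantConfig` implements `Default`, callers can override just the fields they care about with struct update syntax; a self-contained sketch with stand-in types (not the real `TenantConfig`):

```rust
// Stand-in types illustrating the `..Default::default()` pattern enabled by
// the `Default` impl on `TenantConfig` above.
struct Limits {
    cpu_limit_cores: f32,
    memory_limit_gb: f32,
}

struct Config {
    name: String,
    limits: Limits,
}

impl Default for Config {
    fn default() -> Self {
        Self {
            name: "tenant".into(),
            limits: Limits { cpu_limit_cores: 4.0, memory_limit_gb: 4.0 },
        }
    }
}

fn main() {
    // Override only the name; resource limits keep their defaults.
    let cfg = Config { name: "project-phoenix".into(), ..Config::default() };
    assert_eq!(cfg.name, "project-phoenix");
    assert_eq!(cfg.limits.cpu_limit_cores, 4.0);
}
```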

View File

@@ -370,10 +370,13 @@ mod tests {
let result = get_servers_for_backend(&backend, &haproxy);
// Check the result
assert_eq!(result, vec![BackendServer {
address: "192.168.1.1".to_string(),
port: 80,
},]);
assert_eq!(
result,
vec![BackendServer {
address: "192.168.1.1".to_string(),
port: 80,
},]
);
}
#[test]
fn test_get_servers_for_backend_no_linked_servers() {
@@ -430,15 +433,18 @@ mod tests {
// Call the function
let result = get_servers_for_backend(&backend, &haproxy);
// Check the result
assert_eq!(result, vec![
BackendServer {
address: "some-hostname.test.mcd".to_string(),
port: 80,
},
BackendServer {
address: "192.168.1.2".to_string(),
port: 8080,
},
]);
assert_eq!(
result,
vec![
BackendServer {
address: "some-hostname.test.mcd".to_string(),
port: 80,
},
BackendServer {
address: "192.168.1.2".to_string(),
port: 8080,
},
]
);
}
}

View File

@@ -6,7 +6,7 @@ use crate::topology::{HelmCommand, Topology};
use async_trait::async_trait;
use helm_wrapper_rs;
use helm_wrapper_rs::blocking::{DefaultHelmExecutor, HelmExecutor};
use log::{debug, error, info, warn};
use log::{debug, info, warn};
pub use non_blank_string_rs::NonBlankString;
use serde::Serialize;
use std::collections::HashMap;
@@ -23,7 +23,7 @@ pub struct HelmRepository {
force_update: bool,
}
impl HelmRepository {
pub(crate) fn new(name: String, url: Url, force_update: bool) -> Self {
pub fn new(name: String, url: Url, force_update: bool) -> Self {
Self {
name,
url,
@@ -104,6 +104,10 @@ impl HelmChartInterpret {
fn run_helm_command(args: &[&str]) -> Result<Output, InterpretError> {
let command_str = format!("helm {}", args.join(" "));
debug!(
"Got KUBECONFIG: `{}`",
std::env::var("KUBECONFIG").unwrap_or("".to_string())
);
debug!("Running Helm command: `{}`", command_str);
let output = Command::new("helm")
@@ -159,7 +163,13 @@ impl<T: Topology + HelmCommand> Interpret<T> for HelmChartInterpret {
self.add_repo()?;
let helm_executor = DefaultHelmExecutor::new();
let helm_executor = DefaultHelmExecutor::new_with_opts(
&NonBlankString::from_str("helm").unwrap(),
None,
900,
false,
false,
);
let mut helm_options = Vec::new();
if self.score.create_namespace {

View File

@@ -0,0 +1,376 @@
use async_trait::async_trait;
use log::debug;
use serde::Serialize;
use std::collections::HashMap;
use std::io::ErrorKind;
use std::path::PathBuf;
use std::process::{Command, Output};
use temp_dir::{self, TempDir};
use temp_file::TempFile;
use crate::data::{Id, Version};
use crate::interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome};
use crate::inventory::Inventory;
use crate::score::Score;
use crate::topology::{HelmCommand, K8sclient, Topology};
#[derive(Clone)]
pub struct HelmCommandExecutor {
pub env: HashMap<String, String>,
pub path: Option<PathBuf>,
pub args: Vec<String>,
pub api_versions: Option<Vec<String>>,
pub kube_version: String,
pub debug: Option<bool>,
pub globals: HelmGlobals,
pub chart: HelmChart,
}
#[derive(Clone)]
pub struct HelmGlobals {
pub chart_home: Option<PathBuf>,
pub config_home: Option<PathBuf>,
}
#[derive(Debug, Clone, Serialize)]
pub struct HelmChart {
pub name: String,
pub version: Option<String>,
pub repo: Option<String>,
pub release_name: Option<String>,
pub namespace: Option<String>,
pub additional_values_files: Vec<PathBuf>,
pub values_file: Option<PathBuf>,
pub values_inline: Option<String>,
pub include_crds: Option<bool>,
pub skip_hooks: Option<bool>,
pub api_versions: Option<Vec<String>>,
pub kube_version: Option<String>,
pub name_template: String,
pub skip_tests: Option<bool>,
pub debug: Option<bool>,
}
impl HelmCommandExecutor {
pub fn generate(mut self) -> Result<String, std::io::Error> {
if self.globals.chart_home.is_none() {
self.globals.chart_home = Some(PathBuf::from("charts"));
}
if self
    .chart
    .clone()
    .chart_exists_locally(self.globals.chart_home.clone().unwrap())
    .is_none()
{
if self.chart.repo.is_none() {
return Err(std::io::Error::new(
ErrorKind::Other,
"Chart doesn't exist locally and no repo specified",
));
}
self.clone().run_command(
self.chart
.clone()
.pull_command(self.globals.chart_home.clone().unwrap()),
)?;
}
let out = self.clone().run_command(
    self.chart
        .clone()
        .helm_args(self.globals.chart_home.clone().unwrap()),
)?;
// TODO: don't use unwrap here
let s = String::from_utf8(out.stdout).unwrap();
debug!("helm stderr: {}", String::from_utf8(out.stderr).unwrap());
debug!("helm status: {}", out.status);
debug!("helm output: {s}");
let clean = s.split_once("---").unwrap().1;
Ok(clean.to_string())
}
pub fn version(self) -> Result<String, std::io::Error> {
let out = self.run_command(vec![
    "version".to_string(),
    "-c".to_string(),
    "--short".to_string(),
])?;
// TODO: don't use unwrap
Ok(String::from_utf8(out.stdout).unwrap())
}
pub fn run_command(mut self, mut args: Vec<String>) -> Result<Output, std::io::Error> {
if let Some(d) = self.debug {
if d {
args.push("--debug".to_string());
}
}
let path = if let Some(p) = self.path {
p
} else {
PathBuf::from("helm")
};
let config_home = match self.globals.config_home {
Some(p) => p,
None => PathBuf::from(TempDir::new()?.path()),
};
// Keep the temp file handle alive until after the command runs: the file
// is deleted when the handle is dropped, so binding it inside the match
// arm would remove it before helm could read it.
let _values_file: Option<TempFile> = match self.chart.values_inline.take() {
    Some(yaml_str) => {
        let tf = temp_file::with_contents(yaml_str.as_bytes());
        self.chart
            .additional_values_files
            .push(PathBuf::from(tf.path()));
        Some(tf)
    }
    None => None,
};
self.env.insert(
"HELM_CONFIG_HOME".to_string(),
config_home.to_str().unwrap().to_string(),
);
self.env.insert(
"HELM_CACHE_HOME".to_string(),
config_home.to_str().unwrap().to_string(),
);
self.env.insert(
"HELM_DATA_HOME".to_string(),
config_home.to_str().unwrap().to_string(),
);
Command::new(path).envs(self.env).args(args).output()
}
}
impl HelmChart {
pub fn chart_exists_locally(self, chart_home: PathBuf) -> Option<PathBuf> {
    let chart_path = chart_home.join(&self.name);
    if chart_path.exists() {
        Some(chart_path)
    } else {
        None
    }
}
pub fn pull_command(self, chart_home: PathBuf) -> Vec<String> {
let mut args = vec![
"pull".to_string(),
"--untar".to_string(),
"--untardir".to_string(),
chart_home.to_str().unwrap().to_string(),
];
match self.repo {
Some(r) => {
if r.starts_with("oci://") {
args.push(String::from(
r.trim_end_matches("/").to_string() + "/" + self.name.clone().as_str(),
));
} else {
args.push("--repo".to_string());
args.push(r.to_string());
args.push(self.name);
}
}
None => args.push(self.name),
};
if let Some(v) = self.version {
    args.push("--version".to_string());
    args.push(v.to_string());
}
args
}
pub fn helm_args(self, chart_home: PathBuf) -> Vec<String> {
let mut args: Vec<String> = vec!["template".to_string()];
match self.release_name {
Some(rn) => args.push(rn.to_string()),
None => args.push("--generate-name".to_string()),
}
args.push(
    chart_home
        .join(self.name.as_str())
        .to_str()
        .unwrap()
        .to_string(),
);
if let Some(n) = self.namespace {
args.push("--namespace".to_string());
args.push(n.to_string());
}
if let Some(f) = self.values_file {
args.push("-f".to_string());
args.push(f.to_str().unwrap().to_string());
}
for f in self.additional_values_files {
args.push("-f".to_string());
args.push(f.to_str().unwrap().to_string());
}
if let Some(vv) = self.api_versions {
for v in vv {
args.push("--api-versions".to_string());
args.push(v);
}
}
if let Some(kv) = self.kube_version {
args.push("--kube-version".to_string());
args.push(kv);
}
if let Some(crd) = self.include_crds {
if crd {
args.push("--include-crds".to_string());
}
}
if let Some(st) = self.skip_tests {
if st {
args.push("--skip-tests".to_string());
}
}
if let Some(sh) = self.skip_hooks {
if sh {
args.push("--no-hooks".to_string());
}
}
if let Some(d) = self.debug {
if d {
args.push("--debug".to_string());
}
}
args
}
}
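The `generate` method above trims everything before the first `---` document separator out of helm's stdout. A hedged sketch of that cleanup, returning an `Option` instead of calling `unwrap`:

```rust
// Everything before the first `---` (repo chatter, pull messages) is dropped;
// the manifests after it are kept.
fn clean_helm_output(raw: &str) -> Option<String> {
    raw.split_once("---").map(|(_, manifests)| manifests.to_string())
}

fn main() {
    let raw = "Pulled: example.invalid/chart\n---\napiVersion: v1\nkind: Namespace\n";
    let clean = clean_helm_output(raw).unwrap();
    assert!(clean.starts_with("\napiVersion: v1"));
    // With no separator at all, the original `split_once(...).unwrap()` would
    // panic; an Option makes that case explicit.
    assert_eq!(clean_helm_output("no separator"), None);
}
```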
#[derive(Debug, Clone, Serialize)]
pub struct HelmChartScoreV2 {
pub chart: HelmChart,
}
impl<T: Topology + K8sclient + HelmCommand> Score<T> for HelmChartScoreV2 {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(HelmChartInterpretV2 {
score: self.clone(),
})
}
fn name(&self) -> String {
format!(
"{} {} HelmChartScoreV2",
self.chart
.release_name
.clone()
.unwrap_or("Unknown".to_string()),
self.chart.name
)
}
}
#[derive(Debug, Serialize)]
pub struct HelmChartInterpretV2 {
pub score: HelmChartScoreV2,
}
impl HelmChartInterpretV2 {}
#[async_trait]
impl<T: Topology + K8sclient + HelmCommand> Interpret<T> for HelmChartInterpretV2 {
async fn execute(
&self,
_inventory: &Inventory,
_topology: &T,
) -> Result<Outcome, InterpretError> {
let ns = self
.score
.chart
.namespace
.as_ref()
.unwrap_or_else(|| todo!("Get namespace from active kubernetes cluster"));
let helm_executor = HelmCommandExecutor {
env: HashMap::new(),
path: None,
args: vec![],
api_versions: None,
kube_version: "v1.33.0".to_string(),
debug: Some(false),
globals: HelmGlobals {
chart_home: None,
config_home: None,
},
chart: self.score.chart.clone(),
};
// let mut helm_options = Vec::new();
// if self.score.create_namespace {
// helm_options.push(NonBlankString::from_str("--create-namespace").unwrap());
// }
let res = helm_executor.generate();
let output = match res {
Ok(output) => output,
Err(err) => return Err(InterpretError::new(err.to_string())),
};
// TODO: actually apply the YAML produced by `generate` to a k8s cluster;
// we're having trouble passing raw YAML into the k8s client.
// let k8s_resource = k8s_openapi::serde_json::from_str(output.as_str()).unwrap();
// let client = topology
// .k8s_client()
// .await
// .expect("Environment should provide enough information to instanciate a client")
// .apply_namespaced(&vec![output], Some(ns.to_string().as_str()));
// match client.apply_yaml(output) {
// Ok(_) => return Ok(Outcome::success("Helm chart deployed".to_string())),
// Err(e) => return Err(InterpretError::new(e)),
// }
Ok(Outcome::success("Helm chart deployed".to_string()))
}
fn get_name(&self) -> InterpretName {
todo!()
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}

View File

@@ -1 +1,2 @@
pub mod chart;
pub mod command;

View File

@@ -42,7 +42,7 @@ impl<T: Topology + K8sclient> Score<T> for K8sDeploymentScore {
{
"image": self.image,
"name": self.name,
"imagePullPolicy": "IfNotPresent",
"imagePullPolicy": "Always",
"env": self.env_vars,
}
]

View File

@@ -0,0 +1,98 @@
use harmony_macros::ingress_path;
use k8s_openapi::api::networking::v1::Ingress;
use serde::Serialize;
use serde_json::json;
use crate::{
interpret::Interpret,
score::Score,
topology::{K8sclient, Topology},
};
use super::resource::{K8sResourceInterpret, K8sResourceScore};
#[derive(Debug, Clone, Serialize)]
pub enum PathType {
ImplementationSpecific,
Exact,
Prefix,
}
impl PathType {
fn as_str(&self) -> &'static str {
match self {
PathType::ImplementationSpecific => "ImplementationSpecific",
PathType::Exact => "Exact",
PathType::Prefix => "Prefix",
}
}
}
type IngressPath = String;
#[derive(Debug, Clone, Serialize)]
pub struct K8sIngressScore {
pub name: fqdn::FQDN,
pub host: fqdn::FQDN,
pub backend_service: fqdn::FQDN,
pub port: u16,
pub path: Option<IngressPath>,
pub path_type: Option<PathType>,
pub namespace: Option<fqdn::FQDN>,
}
impl<T: Topology + K8sclient> Score<T> for K8sIngressScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
let path = match self.path.clone() {
Some(p) => p,
None => ingress_path!("/"),
};
let path_type = match self.path_type.clone() {
Some(p) => p,
None => PathType::Prefix,
};
let ingress = json!(
{
"metadata": {
"name": self.name
},
"spec": {
"rules": [
{ "host": self.host,
"http": {
"paths": [
{
"path": path,
"pathType": path_type.as_str(),
"backend": [
{
"service": self.backend_service,
"port": self.port
}
]
}
]
}
}
]
}
}
);
let ingress: Ingress = serde_json::from_value(ingress).unwrap();
Box::new(K8sResourceInterpret {
score: K8sResourceScore::single(
ingress.clone(),
self.namespace
.clone()
.map(|f| f.as_c_str().to_str().unwrap().to_string()),
),
})
}
fn name(&self) -> String {
format!("{} K8sIngressScore", self.name)
}
}

View File

@@ -1,3 +1,4 @@
pub mod deployment;
pub mod ingress;
pub mod namespace;
pub mod resource;

View File

@@ -1,6 +1,8 @@
use convert_case::{Case, Casing};
use dockerfile_builder::instruction::{CMD, COPY, ENV, EXPOSE, FROM, RUN, WORKDIR};
use dockerfile_builder::{Dockerfile, instruction_builder::EnvBuilder};
use fqdn::fqdn;
use harmony_macros::ingress_path;
use non_blank_string_rs::NonBlankString;
use serde_json::json;
use std::collections::HashMap;
@@ -13,6 +15,7 @@ use log::{debug, info};
use serde::Serialize;
use crate::config::{REGISTRY_PROJECT, REGISTRY_URL};
use crate::modules::k8s::ingress::K8sIngressScore;
use crate::topology::HelmCommand;
use crate::{
data::{Id, Version},
@@ -38,6 +41,7 @@ pub struct LAMPConfig {
pub project_root: PathBuf,
pub ssl_enabled: bool,
pub database_size: Option<String>,
pub namespace: String,
}
impl Default for LAMPConfig {
@@ -46,6 +50,7 @@ impl Default for LAMPConfig {
project_root: Path::new("./src").to_path_buf(),
ssl_enabled: true,
database_size: None,
namespace: "harmony-lamp".to_string(),
}
}
}
@@ -54,7 +59,6 @@ impl<T: Topology + K8sclient + HelmCommand> Score<T> for LAMPScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(LAMPInterpret {
score: self.clone(),
namespace: "harmony-lamp".to_string(),
})
}
@@ -66,7 +70,6 @@ impl<T: Topology + K8sclient + HelmCommand> Score<T> for LAMPScore {
#[derive(Debug)]
pub struct LAMPInterpret {
score: LAMPScore,
namespace: String,
}
#[async_trait]
@@ -132,7 +135,32 @@ impl<T: Topology + K8sclient + HelmCommand> Interpret<T> for LAMPInterpret {
info!("LAMP deployment_score {deployment_score:?}");
Ok(Outcome::success("Successfully deployed LAMP Stack!".to_string()))
let lamp_ingress = K8sIngressScore {
name: fqdn!("lamp-ingress"),
host: fqdn!("test"),
backend_service: fqdn!(
<LAMPScore as Score<T>>::name(&self.score)
.to_case(Case::Kebab)
.as_str()
),
port: 8080,
path: Some(ingress_path!("/")),
path_type: None,
namespace: self
.get_namespace()
.map(|nbs| fqdn!(nbs.to_string().as_str())),
};
lamp_ingress
.create_interpret()
.execute(inventory, topology)
.await?;
info!("LAMP lamp_ingress {lamp_ingress:?}");
Ok(Outcome::success(
"Successfully deployed LAMP Stack!".to_string(),
))
}
fn get_name(&self) -> InterpretName {
@@ -164,6 +192,10 @@ impl LAMPInterpret {
NonBlankString::from_str("primary.persistence.size").unwrap(),
database_size,
);
values_overrides.insert(
NonBlankString::from_str("auth.rootPassword").unwrap(),
"mariadb-changethis".to_string(),
);
}
let score = HelmChartScore {
namespace: self.get_namespace(),
@@ -176,7 +208,7 @@ impl LAMPInterpret {
chart_version: None,
values_overrides: Some(values_overrides),
create_namespace: true,
install_only: true,
install_only: false,
values_yaml: None,
repository: None,
};
@@ -231,6 +263,9 @@ impl LAMPInterpret {
opcache",
));
dockerfile.push(RUN::from(r#"sed -i 's/VirtualHost \*:80/VirtualHost *:8080/' /etc/apache2/sites-available/000-default.conf && \
sed -i 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf"#));
// Copy PHP configuration
dockerfile.push(RUN::from("mkdir -p /usr/local/etc/php/conf.d/"));
@@ -296,7 +331,7 @@ opcache.fast_shutdown=1
dockerfile.push(RUN::from("chown -R appuser:appuser /var/www/html"));
// Expose Apache port
dockerfile.push(EXPOSE::from("80/tcp"));
dockerfile.push(EXPOSE::from("8080/tcp"));
// Set the default command
dockerfile.push(CMD::from("apache2-foreground"));
@@ -380,6 +415,6 @@ opcache.fast_shutdown=1
}
fn get_namespace(&self) -> Option<NonBlankString> {
Some(NonBlankString::from_str(&self.namespace).unwrap())
Some(NonBlankString::from_str(&self.score.config.namespace).unwrap())
}
}

View File

@@ -9,7 +9,8 @@ pub mod k3d;
pub mod k8s;
pub mod lamp;
pub mod load_balancer;
pub mod monitoring;
pub mod okd;
pub mod opnsense;
pub mod tenant;
pub mod tftp;
pub mod monitoring;

View File

@@ -0,0 +1,49 @@
use serde::Serialize;
use super::monitoring_alerting::AlertChannel;
#[derive(Debug, Clone, Serialize)]
pub struct KubePrometheusConfig {
pub namespace: String,
pub default_rules: bool,
pub windows_monitoring: bool,
pub alert_manager: bool,
pub node_exporter: bool,
pub prometheus: bool,
pub grafana: bool,
pub kubernetes_service_monitors: bool,
pub kubernetes_api_server: bool,
pub kubelet: bool,
pub kube_controller_manager: bool,
pub core_dns: bool,
pub kube_etcd: bool,
pub kube_scheduler: bool,
pub kube_proxy: bool,
pub kube_state_metrics: bool,
pub prometheus_operator: bool,
pub alert_channel: Vec<AlertChannel>,
}
impl KubePrometheusConfig {
pub fn new() -> Self {
Self {
namespace: "monitoring".into(),
default_rules: true,
windows_monitoring: false,
alert_manager: true,
alert_channel: Vec::new(),
grafana: true,
node_exporter: false,
prometheus: true,
kubernetes_service_monitors: true,
kubernetes_api_server: false,
kubelet: false,
kube_controller_manager: false,
kube_etcd: false,
kube_proxy: false,
kube_state_metrics: true,
prometheus_operator: true,
core_dns: false,
kube_scheduler: false,
}
}
}

View File

@@ -0,0 +1,35 @@
use std::str::FromStr;
use non_blank_string_rs::NonBlankString;
use url::Url;
use crate::modules::helm::chart::HelmChartScore;
pub fn discord_alert_manager_score(
webhook_url: Url,
namespace: String,
name: String,
) -> HelmChartScore {
let values = format!(
r#"
environment:
- name: "DISCORD_WEBHOOK"
value: "{webhook_url}"
"#,
);
HelmChartScore {
namespace: Some(NonBlankString::from_str(&namespace).unwrap()),
release_name: NonBlankString::from_str(&name).unwrap(),
chart_name: NonBlankString::from_str(
"oci://hub.nationtech.io/library/alertmanager-discord",
)
.unwrap(),
chart_version: None,
values_overrides: None,
values_yaml: Some(values.to_string()),
create_namespace: true,
install_only: true,
repository: None,
}
}
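The values YAML here, like the kube-prometheus values later in this change, is built with `format!`, where a literal `{` must be written as `{{`. A quick sketch of the escaping rules (the webhook URL is a placeholder):

```rust
fn main() {
    // `{{{{` in a format string emits `{{`, so Prometheus template variables
    // like `{{ $labels.pod }}` are written with four braces in the source.
    let title = format!("title: {{{{ $labels.pod }}}}");
    assert_eq!(title, "title: {{ $labels.pod }}");

    // Ordinary interpolation still uses plain braces.
    let webhook_url = "https://example.invalid/hook"; // placeholder, not a real endpoint
    let line = format!("value: \"{webhook_url}\"");
    assert_eq!(line, "value: \"https://example.invalid/hook\"");
}
```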

View File

@@ -0,0 +1,55 @@
use async_trait::async_trait;
use serde_json::Value;
use url::Url;
use crate::{
interpret::{InterpretError, Outcome},
topology::K8sAnywhereTopology,
};
#[derive(Debug, Clone)]
pub struct DiscordWebhookConfig {
pub webhook_url: Url,
pub name: String,
pub send_resolved_notifications: bool,
}
pub trait DiscordWebhookReceiver {
fn deploy_discord_webhook_receiver(
&self,
_notification_adapter_id: &str,
) -> Result<Outcome, InterpretError>;
fn delete_discord_webhook_receiver(
&self,
_notification_adapter_id: &str,
) -> Result<Outcome, InterpretError>;
}
// Trait used to generate Alertmanager config values, e.g.
// `impl<T: Topology + AlertManagerConfig<T>> Monitor<T> for KubePrometheus`.
pub trait AlertManagerConfig<T> {
fn get_alert_manager_config(&self) -> Result<Value, InterpretError>;
}
#[async_trait]
impl<T: DiscordWebhookReceiver> AlertManagerConfig<T> for DiscordWebhookConfig {
fn get_alert_manager_config(&self) -> Result<Value, InterpretError> {
todo!()
}
}
#[async_trait]
impl DiscordWebhookReceiver for K8sAnywhereTopology {
fn deploy_discord_webhook_receiver(
&self,
_notification_adapter_id: &str,
) -> Result<Outcome, InterpretError> {
todo!()
}
fn delete_discord_webhook_receiver(
&self,
_notification_adapter_id: &str,
) -> Result<Outcome, InterpretError> {
todo!()
}
}

View File

@@ -1,14 +1,60 @@
use std::str::FromStr;
use super::{config::KubePrometheusConfig, monitoring_alerting::AlertChannel};
use log::info;
use non_blank_string_rs::NonBlankString;
use std::{collections::HashMap, str::FromStr};
use url::Url;
use crate::modules::helm::chart::HelmChartScore;
pub fn kube_prometheus_score(ns: &str) -> HelmChartScore {
pub fn kube_prometheus_helm_chart_score(config: &KubePrometheusConfig) -> HelmChartScore {
//TODO: this should be made into a rule with default formatting that can be easily
//passed as a vec to the overrides; leaving the user to deal with formatting here seems bad
let values = r#"
let default_rules = config.default_rules.to_string();
let windows_monitoring = config.windows_monitoring.to_string();
let alert_manager = config.alert_manager.to_string();
let grafana = config.grafana.to_string();
let kubernetes_service_monitors = config.kubernetes_service_monitors.to_string();
let kubernetes_api_server = config.kubernetes_api_server.to_string();
let kubelet = config.kubelet.to_string();
let kube_controller_manager = config.kube_controller_manager.to_string();
let core_dns = config.core_dns.to_string();
let kube_etcd = config.kube_etcd.to_string();
let kube_scheduler = config.kube_scheduler.to_string();
let kube_proxy = config.kube_proxy.to_string();
let kube_state_metrics = config.kube_state_metrics.to_string();
let node_exporter = config.node_exporter.to_string();
let prometheus_operator = config.prometheus_operator.to_string();
let prometheus = config.prometheus.to_string();
let mut values = format!(
r#"
additionalPrometheusRulesMap:
pods-status-alerts:
groups:
- name: pods
rules:
- alert: "[CRIT] POD not healthy"
expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{{phase=~"Pending|Unknown|Failed"}})[15m:1m]) > 0
for: 0m
labels:
severity: critical
annotations:
title: "[CRIT] POD not healthy : {{{{ $labels.pod }}}}"
description: |
A POD is in a non-ready state!
- **Pod**: {{{{ $labels.pod }}}}
- **Namespace**: {{{{ $labels.namespace }}}}
- alert: "[CRIT] POD crash looping"
expr: increase(kube_pod_container_status_restarts_total[5m]) > 3
for: 0m
labels:
severity: critical
annotations:
title: "[CRIT] POD crash looping : {{{{ $labels.pod }}}}"
description: |
A POD is drowning in a crash loop!
- **Pod**: {{{{ $labels.pod }}}}
- **Namespace**: {{{{ $labels.namespace }}}}
- **Instance**: {{{{ $labels.instance }}}}
pvc-alerts:
groups:
- name: pvc-alerts
@@ -29,15 +75,141 @@ additionalPrometheusRulesMap:
labels:
severity: warning
annotations:
description: The PVC {{ $labels.persistentvolumeclaim }} in namespace {{ $labels.namespace }} is predicted to fill over 95% in less than 2 days.
title: PVC {{ $labels.persistentvolumeclaim }} in namespace {{ $labels.namespace }} will fill over 95% in less than 2 days
"#;
description: The PVC {{{{ $labels.persistentvolumeclaim }}}} in namespace {{{{ $labels.namespace }}}} is predicted to fill over 95% in less than 2 days.
title: PVC {{{{ $labels.persistentvolumeclaim }}}} in namespace {{{{ $labels.namespace }}}} will fill over 95% in less than 2 days
defaultRules:
create: {default_rules}
rules:
alertmanager: true
etcd: true
configReloaders: true
general: true
k8sContainerCpuUsageSecondsTotal: true
k8sContainerMemoryCache: true
k8sContainerMemoryRss: true
k8sContainerMemorySwap: true
k8sContainerResource: true
k8sContainerMemoryWorkingSetBytes: true
k8sPodOwner: true
kubeApiserverAvailability: true
kubeApiserverBurnrate: true
kubeApiserverHistogram: true
kubeApiserverSlos: true
kubeControllerManager: true
kubelet: true
kubeProxy: true
kubePrometheusGeneral: true
kubePrometheusNodeRecording: true
kubernetesApps: true
kubernetesResources: true
kubernetesStorage: true
kubernetesSystem: true
kubeSchedulerAlerting: true
kubeSchedulerRecording: true
kubeStateMetrics: true
network: true
node: true
nodeExporterAlerting: true
nodeExporterRecording: true
prometheus: true
prometheusOperator: true
windows: true
windowsMonitoring:
enabled: {windows_monitoring}
grafana:
enabled: {grafana}
kubernetesServiceMonitors:
enabled: {kubernetes_service_monitors}
kubeApiServer:
enabled: {kubernetes_api_server}
kubelet:
enabled: {kubelet}
kubeControllerManager:
enabled: {kube_controller_manager}
coreDns:
enabled: {core_dns}
kubeEtcd:
enabled: {kube_etcd}
kubeScheduler:
enabled: {kube_scheduler}
kubeProxy:
enabled: {kube_proxy}
kubeStateMetrics:
enabled: {kube_state_metrics}
nodeExporter:
enabled: {node_exporter}
prometheusOperator:
enabled: {prometheus_operator}
prometheus:
enabled: {prometheus}
"#,
);
let alertmanager_config = alert_manager_yaml_builder(&config);
values.push_str(&alertmanager_config);
fn alert_manager_yaml_builder(config: &KubePrometheusConfig) -> String {
let mut receivers = String::new();
let mut routes = String::new();
let mut global_configs = String::new();
let alert_manager = config.alert_manager;
for alert_channel in &config.alert_channel {
match alert_channel {
AlertChannel::Discord { name, .. } => {
let (receiver, route) = discord_alert_builder(name);
info!("discord receiver: {} \nroute: {}", receiver, route);
receivers.push_str(&receiver);
routes.push_str(&route);
}
AlertChannel::Slack {
slack_channel,
webhook_url,
} => {
let (receiver, route) = slack_alert_builder(slack_channel);
info!("slack receiver: {} \nroute: {}", receiver, route);
receivers.push_str(&receiver);
routes.push_str(&route);
let global_config = format!(
r#"
global:
slack_api_url: {webhook_url}"#
);
global_configs.push_str(&global_config);
}
AlertChannel::Smpt { .. } => todo!(),
}
}
info!("after alert receiver: {}", receivers);
info!("after alert routes: {}", routes);
let alertmanager_config = format!(
r#"
alertmanager:
enabled: {alert_manager}
config: {global_configs}
route:
group_by: ['job']
group_wait: 30s
group_interval: 5m
repeat_interval: 12h
routes:
{routes}
receivers:
- name: 'null'
{receivers}"#
);
info!("alert manager config: {}", alertmanager_config);
alertmanager_config
}
HelmChartScore {
namespace: Some(NonBlankString::from_str(&config.namespace).unwrap()),
release_name: NonBlankString::from_str("kube-prometheus").unwrap(),
chart_name: NonBlankString::from_str(
"oci://ghcr.io/prometheus-community/charts/kube-prometheus-stack",
)
.unwrap(),
chart_version: None,
@@ -48,3 +220,43 @@ additionalPrometheusRulesMap:
repository: None,
}
}
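A quick aside on the brace escaping used throughout these `format!` templates: each `{{` in a `format!` string renders as a literal `{`, so `{{{{` yields the `{{` delimiter that Alertmanager's Go templating expects. A minimal standalone check (not part of the crate):

```rust
fn main() {
    // `{{{{` escapes to the literal `{{`, `}}}}` to `}}`.
    let rendered = format!(r#"title: "[CRIT] Pod crash looping: {{{{ $labels.pod }}}}""#);
    // Raw strings do no brace escaping, so this is the expected literal output.
    assert_eq!(
        rendered,
        r#"title: "[CRIT] Pod crash looping: {{ $labels.pod }}""#
    );
    println!("{rendered}");
}
```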
fn discord_alert_builder(release_name: &str) -> (String, String) {
let discord_receiver_name = format!("Discord-{}", release_name);
let receiver = format!(
r#"
- name: '{discord_receiver_name}'
webhook_configs:
- url: 'http://{release_name}-alertmanager-discord:9094'
send_resolved: true"#,
);
let route = format!(
r#"
- receiver: '{discord_receiver_name}'
matchers:
- alertname!=Watchdog
continue: true"#,
);
(receiver, route)
}
fn slack_alert_builder(slack_channel: &str) -> (String, String) {
let slack_receiver_name = format!("Slack-{}", slack_channel);
let receiver = format!(
r#"
- name: '{slack_receiver_name}'
slack_configs:
- channel: '{slack_channel}'
send_resolved: true
title: '{{{{ .CommonAnnotations.title }}}}'
text: '{{{{ .CommonAnnotations.description }}}}'"#,
);
let route = format!(
r#"
- receiver: '{slack_receiver_name}'
matchers:
- alertname!=Watchdog
continue: true"#,
);
(receiver, route)
}
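To see the YAML fragments these builders concatenate into the Alertmanager config, here is a standalone sketch (the body mirrors `slack_alert_builder` above; channel name is an arbitrary example):

```rust
// Mirrors the slack builder: one receiver entry and one route entry,
// indented to slot under the `receivers:` and `routes:` lists.
fn slack_alert_builder(slack_channel: &str) -> (String, String) {
    let name = format!("Slack-{}", slack_channel);
    let receiver = format!(
        r#"
    - name: '{name}'
      slack_configs:
      - channel: '{slack_channel}'
        send_resolved: true"#
    );
    let route = format!(
        r#"
      - receiver: '{name}'
        matchers:
        - alertname!=Watchdog
        continue: true"#
    );
    (receiver, route)
}

fn main() {
    let (receiver, route) = slack_alert_builder("ops-alerts");
    assert!(receiver.contains("- name: 'Slack-ops-alerts'"));
    assert!(route.contains("- receiver: 'Slack-ops-alerts'"));
    println!("{receiver}\n{route}");
}
```

The `alertname!=Watchdog` matcher keeps the always-firing Watchdog heartbeat alert out of the channel.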


@@ -1,3 +1,5 @@
mod config;
mod discord_alert_manager;
pub mod discord_webhook_sender;
mod kube_prometheus;
pub mod monitoring_alerting;


@@ -1,128 +1,145 @@
use async_trait::async_trait;
use email_address::EmailAddress;
use log::info;
use serde::Serialize;
use url::Url;
use crate::{
data::{Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
maestro::Maestro,
score::Score,
topology::{HelmCommand, Topology},
};
use super::{
config::KubePrometheusConfig, discord_alert_manager::discord_alert_manager_score,
kube_prometheus::kube_prometheus_helm_chart_score,
};
#[derive(Debug, Clone, Serialize)]
pub enum AlertChannel {
Discord {
name: String,
webhook_url: Url,
},
Slack {
slack_channel: String,
webhook_url: Url,
},
//TODO test and implement in helm chart
//currently does not work
Smpt {
email_address: EmailAddress,
service_name: String,
},
}
#[derive(Debug, Clone, Serialize)]
pub struct MonitoringAlertingStackScore {
//TODO: add documentation explaining why this is here.
//Keeps it open for the end user to specify which stack they want
//if it isn't the default kube-prometheus.
pub alert_channel: Vec<AlertChannel>,
pub namespace: Option<String>,
}
impl MonitoringAlertingStackScore {
pub fn new() -> Self {
Self {
alert_channel: Vec::new(),
namespace: None,
}
}
}
impl<T: Topology + HelmCommand> Score<T> for MonitoringAlertingStackScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(MonitoringAlertingStackInterpret {
score: self.clone(),
})
}
fn name(&self) -> String {
"MonitoringAlertingStackScore".to_string()
}
}
#[derive(Debug, Clone, Serialize)]
struct MonitoringAlertingStackInterpret {
score: MonitoringAlertingStackScore,
}
impl MonitoringAlertingStackInterpret {
async fn build_kube_prometheus_helm_chart_config(&self) -> KubePrometheusConfig {
let mut config = KubePrometheusConfig::new();
if let Some(ns) = &self.score.namespace {
config.namespace = ns.clone();
}
config.alert_channel = self.score.alert_channel.clone();
config
}
async fn deploy_kube_prometheus_helm_chart_score<T: Topology + HelmCommand>(
&self,
inventory: &Inventory,
topology: &T,
config: &KubePrometheusConfig,
) -> Result<Outcome, InterpretError> {
let helm_chart = kube_prometheus_helm_chart_score(config);
helm_chart
.create_interpret()
.execute(inventory, topology)
.await
}
async fn deploy_alert_channel_service<T: Topology + HelmCommand>(
&self,
inventory: &Inventory,
topology: &T,
config: &KubePrometheusConfig,
) -> Result<Outcome, InterpretError> {
//let mut outcomes = vec![];
//for channel in &self.score.alert_channel {
// let outcome = match channel {
// AlertChannel::Discord { .. } => {
// discord_alert_manager_score(config)
// .create_interpret()
// .execute(inventory, topology)
// .await
// }
// AlertChannel::Slack { .. } => Ok(Outcome::success(
// "No extra configs for slack alerting".to_string(),
// )),
// AlertChannel::Smpt { .. } => {
// todo!()
// }
// };
// outcomes.push(outcome);
//}
//for result in outcomes {
// result?;
//}
Ok(Outcome::success("All alert channels deployed".to_string()))
}
}
#[async_trait]
impl<T: Topology + HelmCommand> Interpret<T> for MonitoringAlertingStackInterpret {
async fn execute(
&self,
inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
let config = self.build_kube_prometheus_helm_chart_config().await;
info!("Built kube prometheus config");
info!("Installing kube prometheus chart");
self.deploy_kube_prometheus_helm_chart_score(inventory, topology, &config)
.await?;
info!("Installing alert channel service");
self.deploy_alert_channel_service(inventory, topology, &config)
.await?;
Ok(Outcome::success(format!(
"successfully deployed monitoring and alerting stack"
)))
}


@@ -0,0 +1,67 @@
use async_trait::async_trait;
use serde::Serialize;
use crate::{
data::{Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
score::Score,
topology::{
Topology,
tenant::{TenantConfig, TenantManager},
},
};
#[derive(Debug, Serialize, Clone)]
pub struct TenantScore {
pub config: TenantConfig,
}
impl<T: Topology + TenantManager> Score<T> for TenantScore {
fn create_interpret(&self) -> Box<dyn crate::interpret::Interpret<T>> {
Box::new(TenantInterpret {
tenant_config: self.config.clone(),
})
}
fn name(&self) -> String {
format!("{} TenantScore", self.config.name)
}
}
#[derive(Debug)]
pub struct TenantInterpret {
tenant_config: TenantConfig,
}
#[async_trait]
impl<T: Topology + TenantManager> Interpret<T> for TenantInterpret {
async fn execute(
&self,
_inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
topology.provision_tenant(&self.tenant_config).await?;
Ok(Outcome::success(format!(
"Successfully provisioned tenant {} with id {}",
self.tenant_config.name, self.tenant_config.id
)))
}
fn get_name(&self) -> InterpretName {
InterpretName::TenantInterpret
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}


@@ -116,3 +116,19 @@ pub fn yaml(input: TokenStream) -> TokenStream {
}
.into()
}
/// Verify that a string is a valid(ish) ingress path
/// Panics if path does not start with `/`
#[proc_macro]
pub fn ingress_path(input: TokenStream) -> TokenStream {
let input = parse_macro_input!(input as LitStr);
let path_str = input.value();
if path_str.starts_with('/') {
let expanded = quote! { (#path_str.to_string()) };
TokenStream::from(expanded)
} else {
panic!("Invalid ingress path {path_str:?}: must start with '/'")
}
}
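For reference, the compile-time check the macro performs corresponds to this runtime validation (a hypothetical free function shown only to illustrate the macro's contract, not part of the crate):

```rust
/// Runtime equivalent of the `ingress_path!` compile-time check:
/// returns the path unchanged if it starts with `/`, panics otherwise.
fn validate_ingress_path(path: &str) -> String {
    if path.starts_with('/') {
        path.to_string()
    } else {
        panic!("Invalid ingress path: {path}")
    }
}

fn main() {
    assert_eq!(validate_ingress_path("/api/v1"), "/api/v1");
    println!("ok");
}
```

The proc-macro version moves this failure to compile time, so a bad literal path is rejected before the program ever runs.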