Compare commits

..

27 Commits

Author SHA1 Message Date
Ricky Ng-Adam
0b8525fe05 feat: postgres 2025-07-03 16:46:47 -04:00
6bf10b093c Merge pull request 'refactor/ns' (#74) from refactor/ns into master
All checks were successful
Run Check Script / check (push) Successful in 0s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 4m7s
Reviewed-on: #74
Reviewed-by: taha <taha@noreply.git.nationtech.io>
2025-07-02 19:54:28 +00:00
3eecc2f590 fix: K8sTenantManager is responsible for concrete implementation. K8sAnywhere should delegate
All checks were successful
Run Check Script / check (pull_request) Successful in 4s
2025-07-02 15:51:30 -04:00
3959c07261 Merge remote-tracking branch 'origin/master' into refactor/ns 2025-07-02 15:13:13 -04:00
e50c01c0b3 fix: Forgotten file 🙈
Some checks failed
Run Check Script / check (push) Failing after -28s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m15s
2025-07-02 15:11:03 -04:00
286460d59e Merge pull request 'feat: added default resource limit and request to k8s tenant' (#75) from feat/tenant_limit_range into master
Some checks failed
Run Check Script / check (push) Failing after 38s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m0s
Reviewed-on: #75
Reviewed-by: taha <taha@noreply.git.nationtech.io>
2025-07-02 18:55:04 +00:00
4baa3ae707 feat: added default resource limit and request to k8s tenant
Some checks failed
Run Check Script / check (pull_request) Failing after 35s
2025-07-02 14:06:08 -04:00
82119076cf fix: merge conflict
Some checks failed
Run Check Script / check (pull_request) Failing after 41s
2025-07-02 13:46:26 -04:00
f2a350fae6 fix: comments from pr
All checks were successful
Run Check Script / check (pull_request) Successful in 5s
2025-07-02 13:35:20 -04:00
197770a603 feat: Add ntfy score (#69)
Some checks failed
Run Check Script / check (push) Failing after 42s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 4m4s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #69
2025-07-02 16:19:35 +00:00
ab69a2c264 feat: add service monitors support to prom (#66)
Some checks failed
Run Check Script / check (push) Failing after 45s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m30s
Co-authored-by: tahahawa <tahahawa@gmail.com>
Reviewed-on: #66
Co-authored-by: taha <taha@noreply.git.nationtech.io>
Co-committed-by: taha <taha@noreply.git.nationtech.io>
2025-07-02 15:29:16 +00:00
e857efa92f fix merge conflict
All checks were successful
Run Check Script / check (pull_request) Successful in 1m50s
2025-07-02 11:26:27 -04:00
2ff3f4afa9 Merge pull request 'feat: Introduce Application trait, not too sure how it will evolve but it makes sense, at the very least to identify the Application, also some minor refactoring' (#73) from feat/applicationTrait into master
Some checks failed
Run Check Script / check (push) Failing after 50s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #73
2025-07-02 15:25:26 +00:00
2f6a11ead7 Merge pull request 'feat: Application Interpret still WIP but now call ensure_installed on features, also introduced a rust app example, completed work on clone_box behavior' (#72) from feat/rust_cd into master
Some checks failed
Run Check Script / check (push) Successful in 2m4s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
Reviewed-on: #72
2025-07-02 15:20:24 +00:00
7de9860dcf refactor: monitoring takes namespace from tenant
All checks were successful
Run Check Script / check (pull_request) Successful in -6s
2025-07-02 11:14:24 -04:00
6e884cff3a feat: Start default implementation to ArgoCD for ContinuousDelivery feature
Some checks failed
Run Check Script / check (pull_request) Failing after -34s
2025-07-02 11:14:24 -04:00
c74c51090a feat: Introduce Application trait, not too sure how it will evolve but it makes sense, at the very least to identify the Application, also some minor refactoring
Some checks failed
Run Check Script / check (pull_request) Failing after -38s
2025-07-02 09:48:26 -04:00
8ae0d6b548 feat: Application Interpret still WIP but now call ensure_installed on features, also introduced a rust app example, completed work on clone_box behavior
All checks were successful
Run Check Script / check (pull_request) Successful in -6s
2025-07-01 22:44:44 -04:00
ee02906ce9 fix(composer): spawn commands to allow interaction (#71)
All checks were successful
Run Check Script / check (push) Successful in 1m39s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m5s
Using `Command::output()` executes the command and waits for it to finish before returning the output.
In some cases, though, the user needs to interact with the CLI before it can continue, which hangs the command execution.

Instead, using `Command::spawn()` allows stdin/stdout to be forwarded to the parent process.
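A minimal sketch of the difference, assuming a plain `std::process::Command` and an illustrative `helm install` invocation (the actual composer command and async runtime may differ):

```rust
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // `Command::output()` would buffer stdout/stderr and only return once the
    // child exits, so any interactive prompt waiting on stdin hangs forever.

    // `Command::spawn()` with inherited stdio lets the child talk to the
    // user's terminal directly; the parent then simply waits for it to finish.
    let status = Command::new("helm") // illustrative command, not Harmony's actual call
        .arg("install")
        .stdin(Stdio::inherit())
        .stdout(Stdio::inherit())
        .stderr(Stdio::inherit())
        .spawn()?
        .wait()?;

    println!("child exited with status {status}");
    Ok(())
}
```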

Reviewed-on: #71
Reviewed-by: johnride <jg@nationtech.io>
2025-07-01 21:08:19 +00:00
284cc6afd7 feat: Application module architecture and placeholder features (#70)
Some checks failed
Run Check Script / check (push) Successful in 1m34s
Compile and package harmony_composer / package_harmony_composer (push) Failing after 11m22s
With this architecture, we have an extensible application module for which we can easily define new features and add them to application scores.

All this is driven by the ApplicationInterpret, which understands features and makes sure they are "installed".

The drawback of this design is that we now have three different places to launch scores within Harmony: Maestro, Topology and Interpret. This is an architectural smell and I am not sure how to deal with it at the moment.

However, all these places where execution is performed make sense semantically: an ApplicationInterpret must understand ApplicationFeatures and can very well be responsible for them. The same goes for a Topology which provides features itself by composition (e.g. K8sAnywhereTopology implements TenantManager), so it is natural for this very implementation to know how to install itself.
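As an illustration of that composition, here is a simplified, synchronous sketch: the feature only asks for the capabilities it needs, and the topology provides them by implementing the corresponding trait. The trait names follow this compare, but the method signatures (e.g. `tenant_namespace`) are condensed stand-ins for the real async APIs.

```rust
// A capability a topology may provide (condensed stand-in for the real trait).
trait TenantManager {
    fn tenant_namespace(&self) -> String;
}

// A feature declares which capabilities it needs through its bounds.
trait ApplicationFeature<T> {
    fn ensure_installed(&self, topology: &T) -> Result<(), String>;
    fn name(&self) -> String;
}

struct Monitoring;

// Monitoring only requires a topology that can manage tenants.
impl<T: TenantManager> ApplicationFeature<T> for Monitoring {
    fn ensure_installed(&self, topology: &T) -> Result<(), String> {
        println!("installing monitoring into {}", topology.tenant_namespace());
        Ok(())
    }
    fn name(&self) -> String {
        "Monitoring".into()
    }
}

struct K8sAnywhereTopology;

impl TenantManager for K8sAnywhereTopology {
    fn tenant_namespace(&self) -> String {
        "test-tenant".into()
    }
}

fn main() {
    let topology = K8sAnywhereTopology;
    let features: Vec<Box<dyn ApplicationFeature<K8sAnywhereTopology>>> =
        vec![Box::new(Monitoring)];
    for feature in &features {
        feature.ensure_installed(&topology).expect("feature install failed");
    }
}
```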

Co-authored-by: Ian Letourneau <ian@noma.to>
Reviewed-on: #70
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
2025-07-01 19:40:30 +00:00
9bf6aac82e doc: Fix curl command for environments without ~/.local/bin/ folder
All checks were successful
Run Check Script / check (push) Successful in 1m21s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m1s
2025-07-01 11:32:24 -04:00
460c8b59e1 wip: helm chart deploys to namespace with resource limits and requests, trying to fix connection refused to api error 2025-06-27 14:47:28 -04:00
8e857bc72a wip: using the name from tenant config as deployment namespace for kubeprometheus deployment or defaulting to monitoring if no tenant config exists 2025-06-26 16:24:19 -04:00
e8d55d27e4 Merge pull request 'feat: added webhook receiver to alertchannels' (#68) from feat/webhook_receiver into master
All checks were successful
Run Check Script / check (push) Successful in 1m34s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 3m8s
Reviewed-on: #68
Reviewed-by: taha <taha@noreply.git.nationtech.io>
2025-06-26 16:43:25 +00:00
fea7e9ddb9 doc: Improve harmony_composer README single command usage
Some checks failed
Run Check Script / check (push) Successful in 1m34s
Compile and package harmony_composer / package_harmony_composer (push) Has been cancelled
2025-06-26 12:40:39 -04:00
55143dcad4 Merge pull request 'feat: add dry-run functionality and similar dependency' (#62) from feat/dryRun into master
All checks were successful
Run Check Script / check (push) Successful in 1m42s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 9m8s
Reviewed-on: #62
Reviewed-by: wjro <wrolleman@nationtech.io>
2025-06-26 15:14:25 +00:00
acfb93f1a2 feat: add dry-run functionality and similar dependency
All checks were successful
Run Check Script / check (pull_request) Successful in 1m45s
- Implemented a dry-run mode for K8s resource patching, displaying diffs before applying changes.
- Added the `similar` dependency for calculating and displaying text diffs.
- Enhanced K8s resource application to handle various port specifications in NetworkPolicy ingress rules.
- Added support for port ranges and lists of ports in NetworkPolicy rules.
- Updated K8s client to utilize the dry-run configuration setting.
- Added configuration option `HARMONY_DRY_RUN` to enable or disable dry-run mode.
2025-06-24 14:54:22 -04:00
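The dry-run behaviour described above reduces to computing a line diff with the `similar` crate and printing it git-style. A condensed, self-contained sketch (toy YAML strings; `HARMONY_DRY_RUN` defaults to enabled, as in this compare):

```rust
use similar::{ChangeTag, TextDiff};

fn print_git_like_diff(current_yaml: &str, new_yaml: &str) {
    if current_yaml == new_yaml {
        println!("No changes detected.");
        return;
    }
    println!("Changes detected:");
    let diff = TextDiff::from_lines(current_yaml, new_yaml);
    for change in diff.iter_all_changes() {
        let sign = match change.tag() {
            ChangeTag::Delete => "-",
            ChangeTag::Insert => "+",
            ChangeTag::Equal => " ",
        };
        // Each `change` already carries its trailing newline, so `print!` suffices.
        print!("{sign}{change}");
    }
}

fn main() {
    // HARMONY_DRY_RUN defaults to enabled; unparsable values also fall back to `true`.
    let dry_run: bool = std::env::var("HARMONY_DRY_RUN")
        .map_or(true, |value| value.parse().unwrap_or(true));

    if dry_run {
        print_git_like_diff("replicas: 1\nimage: app:v1\n", "replicas: 3\nimage: app:v2\n");
    }
}
```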
52 changed files with 2651 additions and 367 deletions

Cargo.lock (generated, 818 lines changed): file diff suppressed because it is too large.

@@ -20,34 +20,35 @@ readme = "README.md"
license = "GNU AGPL v3"
[workspace.dependencies]
log = "0.4.22"
env_logger = "0.11.5"
derive-new = "0.7.0"
async-trait = "0.1.82"
tokio = { version = "1.40.0", features = [
log = "0.4"
env_logger = "0.11"
derive-new = "0.7"
async-trait = "0.1"
tokio = { version = "1.40", features = [
"io-std",
"fs",
"macros",
"rt-multi-thread",
] }
cidr = { features = ["serde"], version = "0.2" }
russh = "0.45.0"
russh-keys = "0.45.0"
rand = "0.8.5"
url = "2.5.4"
kube = "0.98.0"
k8s-openapi = { version = "0.24.0", features = ["v1_30"] }
serde_yaml = "0.9.34"
serde-value = "0.7.0"
http = "1.2.0"
inquire = "0.7.5"
convert_case = "0.8.0"
russh = "0.45"
russh-keys = "0.45"
rand = "0.8"
url = "2.5"
kube = { version = "1.1.0", features = [
"config",
"client",
"runtime",
"rustls-tls",
"ws",
"jsonpatch",
] }
k8s-openapi = { version = "0.25", features = ["v1_30"] }
serde_yaml = "0.9"
serde-value = "0.7"
http = "1.2"
inquire = "0.7"
convert_case = "0.8"
chrono = "0.4"
[workspace.dependencies.uuid]
version = "1.11.0"
features = [
"v4", # Lets you generate random UUIDs
"fast-rng", # Use a faster (but still sufficiently random) RNG
"macro-diagnostics", # Enable better diagnostics for compile-time UUIDs
]
similar = "2"
uuid = { version = "1.11", features = ["v4", "fast-rng", "macro-diagnostics"] }


@@ -0,0 +1,78 @@
# Architecture Decision Record: Monitoring Notifications
Initial Author: Taha Hawa
Initial Date: 2025-06-26
Last Updated Date: 2025-06-26
## Status
Proposed
## Context
We need to send notifications (typically from AlertManager/Prometheus) and to receive them on mobile devices in some form, whether push messages, SMS, phone calls, email, or all of the above.
## Decision
We should go with https://ntfy.sh, but host it ourselves.
`ntfy` is an open source solution written in Go that has the features we need.
## Rationale
`ntfy` has pretty much everything we need (push notifications, email forwarding, receiving via webhook) and not much that we don't. A good, lightweight fit.
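Publishing to a self-hosted ntfy instance is just an HTTP POST to a topic URL, which is also how an AlertManager webhook receiver would reach it. A minimal sketch, assuming `reqwest` and `tokio` (host, topic and message are placeholders, not Harmony's actual configuration):

```rust
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let client = reqwest::Client::new();
    client
        // Hypothetical self-hosted instance and topic.
        .post("https://ntfy.internal.example.com/alerts")
        .header("Title", "Prometheus alert")
        .header("Priority", "high")
        .body("PVC fill rate above threshold on cluster-a")
        .send()
        .await?
        .error_for_status()?;
    Ok(())
}
```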
## Consequences
Pros:
- topics, with ACLs
- lightweight
- reliable
- easy to configure
- mobile app
- the mobile app can listen via websocket, poll, or receive via Firebase/GCM on Android, or similar on iOS.
- Forward to email
- Text-to-Speech phone call messages using Twilio integration
- Operates based on simple HTTP requests/Webhooks, easily usable via AlertManager
Cons:
- No SMS pushes
- SQLite DB, which makes it harder to run HA or scale
## Alternatives considered
[AWS SNS](https://aws.amazon.com/sns/):
Pros:
- highly reliable
- no hosting needed
Cons:
- no control, not self hosted
- costs (per usage)
[Apprise](https://github.com/caronc/apprise):
Pros:
- Way more ways of sending notifications
- Can use ntfy as one of the backends/ways of sending
Cons:
- Overkill for what we need in terms of features
[Gotify](https://github.com/gotify/server):
Pros:
- simple, lightweight, golang, etc
Cons:
- Push topics are per-user
## Additional Notes


@@ -14,8 +14,8 @@ harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }
kube = "0.98.0"
k8s-openapi = { version = "0.24.0", features = [ "v1_30" ] }
kube = "1.1.0"
k8s-openapi = { version = "0.25.0", features = ["v1_30"] }
http = "1.2.0"
serde_yaml = "0.9.34"
inquire.workspace = true


@@ -8,5 +8,6 @@ license.workspace = true
[dependencies]
harmony = { version = "0.1.0", path = "../../harmony" }
harmony_cli = { version = "0.1.0", path = "../../harmony_cli" }
harmony_macros = { version = "0.1.0", path = "../../harmony_macros" }
tokio.workspace = true
url.workspace = true


@@ -1,3 +1,5 @@
use std::collections::HashMap;
use harmony::{
inventory::Inventory,
maestro::Maestro,
@@ -5,7 +7,13 @@ use harmony::{
monitoring::{
alert_channel::discord_alert_channel::DiscordWebhook,
alert_rule::prometheus_alert_rule::AlertManagerRuleGroup,
kube_prometheus::helm_prometheus_alert_score::HelmPrometheusAlertingScore,
kube_prometheus::{
helm_prometheus_alert_score::HelmPrometheusAlertingScore,
types::{
HTTPScheme, MatchExpression, Operator, Selector, ServiceMonitor,
ServiceMonitorEndpoint,
},
},
},
prometheus::alerts::{
infra::dell_server::{
@@ -41,9 +49,30 @@ async fn main() {
],
);
let service_monitor_endpoint = ServiceMonitorEndpoint {
port: Some("80".to_string()),
path: "/metrics".to_string(),
scheme: HTTPScheme::HTTP,
..Default::default()
};
let service_monitor = ServiceMonitor {
name: "test-service-monitor".to_string(),
selector: Selector {
match_labels: HashMap::new(),
match_expressions: vec![MatchExpression {
key: "test".to_string(),
operator: Operator::In,
values: vec!["test-service".to_string()],
}],
},
endpoints: vec![service_monitor_endpoint],
..Default::default()
};
let alerting_score = HelmPrometheusAlertingScore {
receivers: vec![Box::new(discord_receiver)],
rules: vec![Box::new(additional_rules), Box::new(additional_rules2)],
service_monitors: vec![service_monitor],
};
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),


@@ -0,0 +1,13 @@
[package]
name = "example-monitoring-with-tenant"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
cidr.workspace = true
harmony = { version = "0.1.0", path = "../../harmony" }
harmony_cli = { version = "0.1.0", path = "../../harmony_cli" }
tokio.workspace = true
url.workspace = true


@@ -0,0 +1,90 @@
use std::collections::HashMap;
use harmony::{
data::Id,
inventory::Inventory,
maestro::Maestro,
modules::{
monitoring::{
alert_channel::discord_alert_channel::DiscordWebhook,
alert_rule::prometheus_alert_rule::AlertManagerRuleGroup,
kube_prometheus::{
helm_prometheus_alert_score::HelmPrometheusAlertingScore,
types::{
HTTPScheme, MatchExpression, Operator, Selector, ServiceMonitor,
ServiceMonitorEndpoint,
},
},
},
prometheus::alerts::k8s::pvc::high_pvc_fill_rate_over_two_days,
tenant::TenantScore,
},
topology::{
K8sAnywhereTopology, Url,
tenant::{ResourceLimits, TenantConfig, TenantNetworkPolicy},
},
};
#[tokio::main]
async fn main() {
let tenant = TenantScore {
config: TenantConfig {
id: Id::from_string("1234".to_string()),
name: "test-tenant".to_string(),
resource_limits: ResourceLimits {
cpu_request_cores: 6.0,
cpu_limit_cores: 4.0,
memory_request_gb: 4.0,
memory_limit_gb: 4.0,
storage_total_gb: 10.0,
},
network_policy: TenantNetworkPolicy::default(),
},
};
let discord_receiver = DiscordWebhook {
name: "test-discord".to_string(),
url: Url::Url(url::Url::parse("https://discord.doesnt.exist.com").unwrap()),
};
let high_pvc_fill_rate_over_two_days_alert = high_pvc_fill_rate_over_two_days();
let additional_rules =
AlertManagerRuleGroup::new("pvc-alerts", vec![high_pvc_fill_rate_over_two_days_alert]);
let service_monitor_endpoint = ServiceMonitorEndpoint {
port: Some("80".to_string()),
path: "/metrics".to_string(),
scheme: HTTPScheme::HTTP,
..Default::default()
};
let service_monitor = ServiceMonitor {
name: "test-service-monitor".to_string(),
selector: Selector {
match_labels: HashMap::new(),
match_expressions: vec![MatchExpression {
key: "test".to_string(),
operator: Operator::In,
values: vec!["test-service".to_string()],
}],
},
endpoints: vec![service_monitor_endpoint],
..Default::default()
};
let alerting_score = HelmPrometheusAlertingScore {
receivers: vec![Box::new(discord_receiver)],
rules: vec![Box::new(additional_rules)],
service_monitors: vec![service_monitor],
};
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
)
.await
.unwrap();
maestro.register_all(vec![Box::new(tenant), Box::new(alerting_score)]);
harmony_cli::init(maestro, None).await.unwrap();
}

examples/ntfy/Cargo.toml (new file, 12 lines)

@@ -0,0 +1,12 @@
[package]
name = "example-ntfy"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { version = "0.1.0", path = "../../harmony" }
harmony_cli = { version = "0.1.0", path = "../../harmony_cli" }
tokio.workspace = true
url.workspace = true

examples/ntfy/src/main.rs (new file, 19 lines)

@@ -0,0 +1,19 @@
use harmony::{
inventory::Inventory, maestro::Maestro, modules::monitoring::ntfy::ntfy::NtfyScore,
topology::K8sAnywhereTopology,
};
#[tokio::main]
async fn main() {
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
Inventory::autoload(),
K8sAnywhereTopology::from_env(),
)
.await
.unwrap();
maestro.register_all(vec![Box::new(NtfyScore {
namespace: "monitoring".to_string(),
})]);
harmony_cli::init(maestro, None).await.unwrap();
}


@@ -0,0 +1,10 @@
[package]
name = "example-postgres"
version = "0.1.0"
edition = "2021"
[dependencies]
harmony = { path = "../../harmony" }
tokio = { version = "1", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
async-trait = "0.1.80"


@@ -0,0 +1,84 @@
use async_trait::async_trait;
use harmony::{
data::{PostgresDatabase, PostgresUser},
interpret::InterpretError,
inventory::Inventory,
maestro::Maestro,
modules::postgres::PostgresScore,
topology::{PostgresServer, Topology},
};
use std::error::Error;
#[derive(Debug, Clone)]
struct MockTopology;
#[async_trait]
impl Topology for MockTopology {
fn name(&self) -> &str {
"MockTopology"
}
async fn ensure_ready(&self) -> Result<harmony::interpret::Outcome, InterpretError> {
Ok(harmony::interpret::Outcome::new(
harmony::interpret::InterpretStatus::SUCCESS,
"Mock topology is always ready".to_string(),
))
}
}
#[async_trait]
impl PostgresServer for MockTopology {
async fn ensure_users_exist(&self, users: Vec<PostgresUser>) -> Result<(), InterpretError> {
println!("Ensuring users exist:");
for user in users {
println!(" - {}: {}", user.name, user.password);
}
Ok(())
}
async fn ensure_databases_exist(
&self,
databases: Vec<PostgresDatabase>,
) -> Result<(), InterpretError> {
println!("Ensuring databases exist:");
for db in databases {
println!(" - {}: owner={}", db.name, db.owner);
}
Ok(())
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let users = vec![
PostgresUser {
name: "admin".to_string(),
password: "password".to_string(),
},
PostgresUser {
name: "user".to_string(),
password: "password".to_string(),
},
];
let databases = vec![
PostgresDatabase {
name: "app_db".to_string(),
owner: "admin".to_string(),
},
PostgresDatabase {
name: "user_db".to_string(),
owner: "user".to_string(),
},
];
let postgres_score = PostgresScore::new(users, databases);
let inventory = Inventory::empty();
let topology = MockTopology;
let maestro = Maestro::new(inventory, topology);
maestro.interpret(Box::new(postgres_score)).await?;
Ok(())
}

examples/rust/Cargo.toml (new file, 14 lines)

@@ -0,0 +1,14 @@
[package]
name = "example-rust"
version = "0.1.0"
edition = "2024"
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
harmony_macros = { path = "../../harmony_macros" }
tokio = { workspace = true }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }

examples/rust/src/main.rs (new file, 20 lines)

@@ -0,0 +1,20 @@
use harmony::{
inventory::Inventory,
maestro::Maestro,
modules::application::{RustWebappScore, features::ContinuousDelivery},
topology::{K8sAnywhereTopology, Url},
};
#[tokio::main]
async fn main() {
let app = RustWebappScore {
name: "Example Rust Webapp".to_string(),
domain: Url::Url(url::Url::parse("https://rustapp.harmony.example.com").unwrap()),
features: vec![Box::new(ContinuousDelivery {})],
};
let topology = K8sAnywhereTopology::from_env();
let mut maestro = Maestro::new(Inventory::autoload(), topology);
maestro.register_all(vec![Box::new(app)]);
harmony_cli::init(maestro, None).await.unwrap();
}


@@ -10,9 +10,9 @@ publish = false
harmony = { path = "../../harmony" }
harmony_tui = { path = "../../harmony_tui" }
harmony_types = { path = "../../harmony_types" }
harmony_macros = { path = "../../harmony_macros" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }


@@ -53,3 +53,7 @@ fqdn = { version = "0.4.6", features = [
] }
temp-dir = "0.1.14"
dyn-clone = "1.0.19"
similar.workspace = true
futures-util = "0.3.31"
tokio-util = "0.7.15"
strum = { version = "0.27.1", features = ["derive"] }


@@ -10,4 +10,6 @@ lazy_static! {
std::env::var("HARMONY_REGISTRY_URL").unwrap_or_else(|_| "hub.nationtech.io".to_string());
pub static ref REGISTRY_PROJECT: String =
std::env::var("HARMONY_REGISTRY_PROJECT").unwrap_or_else(|_| "harmony".to_string());
pub static ref DRY_RUN: bool =
std::env::var("HARMONY_DRY_RUN").map_or(true, |value| value.parse().unwrap_or(true));
}


@@ -2,3 +2,6 @@ mod id;
mod version;
pub use id::*;
pub use version::*;
mod postgres;
pub use postgres::*;


@@ -0,0 +1,13 @@
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PostgresUser {
pub name: String,
pub password: String, // In a real scenario, this should be a secret type
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PostgresDatabase {
pub name: String,
pub owner: String,
}


@@ -21,6 +21,8 @@ pub enum InterpretName {
OPNSense,
K3dInstallation,
TenantInterpret,
Application,
Postgres,
}
impl std::fmt::Display for InterpretName {
@@ -37,6 +39,8 @@ impl std::fmt::Display for InterpretName {
InterpretName::OPNSense => f.write_str("OPNSense"),
InterpretName::K3dInstallation => f.write_str("K3dInstallation"),
InterpretName::TenantInterpret => f.write_str("Tenant"),
InterpretName::Application => f.write_str("Application"),
InterpretName::Postgres => f.write_str("Postgres"),
}
}
}
@@ -124,3 +128,11 @@ impl From<kube::Error> for InterpretError {
}
}
}
impl From<String> for InterpretError {
fn from(value: String) -> Self {
Self {
msg: format!("InterpretError : {value}"),
}
}
}


@@ -34,6 +34,17 @@ pub struct Inventory {
}
impl Inventory {
pub fn empty() -> Self {
Self {
location: Location::new("Empty".to_string(), "location".to_string()),
switch: vec![],
firewall: vec![],
worker_host: vec![],
storage_host: vec![],
control_plane_host: vec![],
}
}
pub fn autoload() -> Self {
Self {
location: Location::test_building(),


@@ -46,7 +46,7 @@ pub struct HAClusterTopology {
#[async_trait]
impl Topology for HAClusterTopology {
fn name(&self) -> &str {
todo!()
"HAClusterTopology"
}
async fn ensure_ready(&self) -> Result<Outcome, InterpretError> {
todo!(


@@ -4,6 +4,8 @@ use crate::{interpret::InterpretError, inventory::Inventory};
#[async_trait]
pub trait Installable<T>: Send + Sync {
async fn configure(&self, inventory: &Inventory, topology: &T) -> Result<(), InterpretError>;
async fn ensure_installed(
&self,
inventory: &Inventory,


@@ -1,18 +1,38 @@
use derive_new::new;
use k8s_openapi::{ClusterResourceScope, NamespaceResourceScope};
use futures_util::StreamExt;
use k8s_openapi::{
ClusterResourceScope, NamespaceResourceScope,
api::{apps::v1::Deployment, core::v1::Pod},
};
use kube::runtime::conditions;
use kube::runtime::wait::await_condition;
use kube::{
Api, Client, Config, Error, Resource,
api::{Patch, PatchParams},
Client, Config, Error, Resource,
api::{Api, AttachParams, ListParams, Patch, PatchParams, ResourceExt},
config::{KubeConfigOptions, Kubeconfig},
core::ErrorResponse,
runtime::reflector::Lookup,
};
use log::{debug, error, trace};
use serde::de::DeserializeOwned;
use similar::{DiffableStr, TextDiff};
#[derive(new)]
#[derive(new, Clone)]
pub struct K8sClient {
client: Client,
}
impl std::fmt::Debug for K8sClient {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
// This is a poor man's debug implementation for now as kube::Client does not provide much
// useful information
f.write_fmt(format_args!(
"K8sClient {{ kube client using default namespace {} }}",
self.client.default_namespace()
))
}
}
impl K8sClient {
pub async fn try_default() -> Result<Self, Error> {
Ok(Self {
@@ -20,6 +40,88 @@ impl K8sClient {
})
}
pub async fn wait_until_deployment_ready(
&self,
name: String,
namespace: Option<&str>,
timeout: Option<u64>,
) -> Result<(), String> {
let api: Api<Deployment>;
if let Some(ns) = namespace {
api = Api::namespaced(self.client.clone(), ns);
} else {
api = Api::default_namespaced(self.client.clone());
}
let establish = await_condition(api, name.as_str(), conditions::is_deployment_completed());
let t = if let Some(t) = timeout { t } else { 300 };
let res = tokio::time::timeout(std::time::Duration::from_secs(t), establish).await;
if let Ok(r) = res {
return Ok(());
} else {
return Err("timed out while waiting for deployment".to_string());
}
}
/// Will execute a command in the first pod found that matches the label `app.kubernetes.io/name={name}`
pub async fn exec_app(
&self,
name: String,
namespace: Option<&str>,
command: Vec<&str>,
) -> Result<(), String> {
let api: Api<Pod>;
if let Some(ns) = namespace {
api = Api::namespaced(self.client.clone(), ns);
} else {
api = Api::default_namespaced(self.client.clone());
}
let pod_list = api
.list(&ListParams::default().labels(format!("app.kubernetes.io/name={name}").as_str()))
.await
.expect("couldn't get list of pods");
let res = api
.exec(
pod_list
.items
.first()
.expect("couldn't get pod")
.name()
.expect("couldn't get pod name")
.into_owned()
.as_str(),
command,
&AttachParams::default(),
)
.await;
match res {
Err(e) => return Err(e.to_string()),
Ok(mut process) => {
let status = process
.take_status()
.expect("Couldn't get status")
.await
.expect("Couldn't unwrap status");
if let Some(s) = status.status {
debug!("Status: {}", s);
if s == "Success" {
return Ok(());
} else {
return Err(s);
}
} else {
return Err("Couldn't get inner status of pod exec".to_string());
}
}
}
}
/// Apply a resource in namespace
///
/// See `kubectl apply` for more information on the expected behavior of this function
@@ -48,8 +150,79 @@ impl K8sClient {
.name
.as_ref()
.expect("K8s Resource should have a name");
api.patch(name, &patch_params, &Patch::Apply(resource))
.await
if *crate::config::DRY_RUN {
match api.get(name).await {
Ok(current) => {
trace!("Received current value {current:#?}");
// The resource exists, so we calculate and display a diff.
println!("\nPerforming dry-run for resource: '{}'", name);
let mut current_yaml = serde_yaml::to_value(&current)
.expect(&format!("Could not serialize current value : {current:#?}"));
if current_yaml.is_mapping() && current_yaml.get("status").is_some() {
let map = current_yaml.as_mapping_mut().unwrap();
let removed = map.remove_entry("status");
trace!("Removed status {:?}", removed);
} else {
trace!(
"Did not find status entry for current object {}/{}",
current.meta().namespace.as_ref().unwrap_or(&"".to_string()),
current.meta().name.as_ref().unwrap_or(&"".to_string())
);
}
let current_yaml = serde_yaml::to_string(&current_yaml)
.unwrap_or_else(|_| "Failed to serialize current resource".to_string());
let new_yaml = serde_yaml::to_string(resource)
.unwrap_or_else(|_| "Failed to serialize new resource".to_string());
if current_yaml == new_yaml {
println!("No changes detected.");
// Return the current resource state as there are no changes.
return Ok(current);
}
println!("Changes detected:");
let diff = TextDiff::from_lines(&current_yaml, &new_yaml);
// Iterate over the changes and print them in a git-like diff format.
for change in diff.iter_all_changes() {
let sign = match change.tag() {
similar::ChangeTag::Delete => "-",
similar::ChangeTag::Insert => "+",
similar::ChangeTag::Equal => " ",
};
print!("{}{}", sign, change);
}
// In a dry run, we return the new resource state that would have been applied.
Ok(resource.clone())
}
Err(Error::Api(ErrorResponse { code: 404, .. })) => {
// The resource does not exist, so the "diff" is the entire new resource.
println!("\nPerforming dry-run for new resource: '{}'", name);
println!(
"Resource does not exist. It would be created with the following content:"
);
let new_yaml = serde_yaml::to_string(resource)
.unwrap_or_else(|_| "Failed to serialize new resource".to_string());
// Print each line of the new resource with a '+' prefix.
for line in new_yaml.lines() {
println!("+{}", line);
}
// In a dry run, we return the new resource state that would have been created.
Ok(resource.clone())
}
Err(e) => {
// Another API error occurred.
error!("Failed to get resource '{}': {}", name, e);
Err(e)
}
}
} else {
return api
.patch(name, &patch_params, &Patch::Apply(resource))
.await;
}
}
pub async fn apply_many<K>(&self, resource: &Vec<K>, ns: Option<&str>) -> Result<Vec<K>, Error>


@@ -3,6 +3,7 @@ use std::{process::Command, sync::Arc};
use async_trait::async_trait;
use inquire::Confirm;
use log::{debug, info, warn};
use serde::Serialize;
use tokio::sync::OnceCell;
use crate::{
@@ -20,22 +21,24 @@ use super::{
tenant::{TenantConfig, TenantManager, k8s::K8sTenantManager},
};
#[derive(Clone, Debug)]
struct K8sState {
client: Arc<K8sClient>,
_source: K8sSource,
message: String,
}
#[derive(Debug)]
#[derive(Debug, Clone)]
enum K8sSource {
LocalK3d,
Kubeconfig,
}
#[derive(Clone, Debug)]
pub struct K8sAnywhereTopology {
k8s_state: OnceCell<Option<K8sState>>,
tenant_manager: OnceCell<K8sTenantManager>,
config: K8sAnywhereConfig,
k8s_state: Arc<OnceCell<Option<K8sState>>>,
tenant_manager: Arc<OnceCell<K8sTenantManager>>,
config: Arc<K8sAnywhereConfig>,
}
#[async_trait]
@@ -55,20 +58,29 @@ impl K8sclient for K8sAnywhereTopology {
}
}
impl Serialize for K8sAnywhereTopology {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
todo!()
}
}
impl K8sAnywhereTopology {
pub fn from_env() -> Self {
Self {
k8s_state: OnceCell::new(),
tenant_manager: OnceCell::new(),
config: K8sAnywhereConfig::from_env(),
k8s_state: Arc::new(OnceCell::new()),
tenant_manager: Arc::new(OnceCell::new()),
config: Arc::new(K8sAnywhereConfig::from_env()),
}
}
pub fn with_config(config: K8sAnywhereConfig) -> Self {
Self {
k8s_state: OnceCell::new(),
tenant_manager: OnceCell::new(),
config,
k8s_state: Arc::new(OnceCell::new()),
tenant_manager: Arc::new(OnceCell::new()),
config: Arc::new(config),
}
}
@@ -184,8 +196,7 @@ impl K8sAnywhereTopology {
let k8s_client = self.k8s_client().await?;
Ok(K8sTenantManager::new(k8s_client))
})
.await
.unwrap();
.await?;
Ok(())
}
@@ -200,6 +211,7 @@ impl K8sAnywhereTopology {
}
}
#[derive(Clone, Debug)]
pub struct K8sAnywhereConfig {
/// The path of the KUBECONFIG file that Harmony should use to interact with the Kubernetes
/// cluster
@@ -272,4 +284,11 @@ impl TenantManager for K8sAnywhereTopology {
.provision_tenant(config)
.await
}
async fn get_tenant_config(&self) -> Option<TenantConfig> {
self.get_k8s_tenant_manager()
.ok()?
.get_tenant_config()
.await
}
}


@@ -23,6 +23,9 @@ pub use network::*;
use serde::Serialize;
pub use tftp::*;
mod postgres;
pub use postgres::*;
mod helm_command;
pub use helm_command::*;


@@ -27,6 +27,7 @@ impl<S: AlertSender + Installable<T>, T: Topology> Interpret<T> for AlertingInte
inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
self.sender.configure(inventory, topology).await?;
for receiver in self.receivers.iter() {
receiver.install(&self.sender).await?;
}


@@ -0,0 +1,14 @@
use crate::{
data::{PostgresDatabase, PostgresUser},
interpret::InterpretError,
};
use async_trait::async_trait;
#[async_trait]
pub trait PostgresServer {
async fn ensure_users_exist(&self, users: Vec<PostgresUser>) -> Result<(), InterpretError>;
async fn ensure_databases_exist(
&self,
databases: Vec<PostgresDatabase>,
) -> Result<(), InterpretError>;
}


@@ -5,10 +5,9 @@ use crate::{
topology::k8s::{ApplyStrategy, K8sClient},
};
use async_trait::async_trait;
use derive_new::new;
use k8s_openapi::{
api::{
core::v1::{Namespace, ResourceQuota},
core::v1::{LimitRange, Namespace, ResourceQuota},
networking::v1::{
NetworkPolicy, NetworkPolicyEgressRule, NetworkPolicyIngressRule, NetworkPolicyPort,
},
@@ -19,12 +18,23 @@ use kube::Resource;
use log::{debug, info, warn};
use serde::de::DeserializeOwned;
use serde_json::json;
use tokio::sync::OnceCell;
use super::{TenantConfig, TenantManager};
#[derive(new)]
#[derive(Clone, Debug)]
pub struct K8sTenantManager {
k8s_client: Arc<K8sClient>,
k8s_tenant_config: Arc<OnceCell<TenantConfig>>,
}
impl K8sTenantManager {
pub fn new(client: Arc<K8sClient>) -> Self {
Self {
k8s_client: client,
k8s_tenant_config: Arc::new(OnceCell::new()),
}
}
}
impl K8sTenantManager {
@@ -112,8 +122,8 @@ impl K8sTenantManager {
"requests.storage": format!("{:.3}Gi", config.resource_limits.storage_total_gb),
"pods": "20",
"services": "10",
"configmaps": "30",
"secrets": "30",
"configmaps": "60",
"secrets": "60",
"persistentvolumeclaims": "15",
"services.loadbalancers": "2",
"services.nodeports": "5",
@@ -132,70 +142,142 @@ impl K8sTenantManager {
})
}
fn build_limit_range(&self, config: &TenantConfig) -> Result<LimitRange, ExecutorError> {
let limit_range = json!({
"apiVersion": "v1",
"kind": "LimitRange",
"metadata": {
"name": format!("{}-defaults", config.name),
"labels": {
"harmony.nationtech.io/tenant.id": config.id.to_string(),
"harmony.nationtech.io/tenant.name": config.name,
},
"namespace": self.get_namespace_name(config),
},
"spec": {
"limits": [
{
"type": "Container",
"default": {
"cpu": "500m",
"memory": "500Mi"
},
"defaultRequest": {
"cpu": "100m",
"memory": "100Mi"
},
}
]
}
});
serde_json::from_value(limit_range).map_err(|e| {
ExecutorError::ConfigurationError(format!(
"Could not build TenantManager LimitRange. {}",
e
))
})
}
fn build_network_policy(&self, config: &TenantConfig) -> Result<NetworkPolicy, ExecutorError> {
let network_policy = json!({
"apiVersion": "networking.k8s.io/v1",
"kind": "NetworkPolicy",
"metadata": {
"name": format!("{}-network-policy", config.name),
"namespace": self.get_namespace_name(config),
},
"spec": {
"podSelector": {},
"egress": [
{ "to": [ {"podSelector": {}}]},
{ "to":
[
{
"podSelector": {},
"namespaceSelector": {
"matchLabels": {
"kubernetes.io/metadata.name":"openshift-dns"
}
}
},
]
},
{ "to": [
{
"ipBlock": {
"cidr": "0.0.0.0/0",
// See https://en.wikipedia.org/wiki/Reserved_IP_addresses
"except": [
"10.0.0.0/8",
"172.16.0.0/12",
"192.168.0.0/16",
"192.0.0.0/24",
"192.0.2.0/24",
"192.88.99.0/24",
"192.18.0.0/15",
"198.51.100.0/24",
"169.254.0.0/16",
"203.0.113.0/24",
"127.0.0.0/8",
// Not sure we should block this one as it is
// used for multicast. But better block more than less.
"224.0.0.0/4",
"240.0.0.0/4",
"100.64.0.0/10",
"233.252.0.0/24",
"0.0.0.0/8",
],
}
{
"to": [
{ "podSelector": {} }
]
},
{
"to": [
{
"podSelector": {},
"namespaceSelector": {
"matchLabels": {
"kubernetes.io/metadata.name": "kube-system"
}
]
},
}
}
]
},
{
"to": [
{
"podSelector": {},
"namespaceSelector": {
"matchLabels": {
"kubernetes.io/metadata.name": "openshift-dns"
}
}
}
]
},
{
"to": [
{
"ipBlock": {
"cidr": "10.43.0.1/32",
}
}
]
},
{
"to": [
{
"ipBlock": {
"cidr": "172.23.0.0/16",
}
}
]
},
{
"to": [
{
"ipBlock": {
"cidr": "0.0.0.0/0",
"except": [
"10.0.0.0/8",
"172.16.0.0/12",
"192.168.0.0/16",
"192.0.0.0/24",
"192.0.2.0/24",
"192.88.99.0/24",
"192.18.0.0/15",
"198.51.100.0/24",
"169.254.0.0/16",
"203.0.113.0/24",
"127.0.0.0/8",
"224.0.0.0/4",
"240.0.0.0/4",
"100.64.0.0/10",
"233.252.0.0/24",
"0.0.0.0/8"
]
}
}
]
}
],
"ingress": [
{ "from": [ {"podSelector": {}}]}
{
"from": [
{ "podSelector": {} }
]
}
],
"policyTypes": [
"Ingress", "Egress",
"Ingress",
"Egress"
]
}
});
let mut network_policy: NetworkPolicy =
serde_json::from_value(network_policy).map_err(|e| {
ExecutorError::ConfigurationError(format!(
@@ -219,8 +301,29 @@ impl K8sTenantManager {
})
})
.collect();
let ports: Option<Vec<NetworkPolicyPort>> =
c.1.as_ref().map(|spec| match &spec.data {
super::PortSpecData::SinglePort(port) => vec![NetworkPolicyPort {
port: Some(IntOrString::Int(port.clone().into())),
..Default::default()
}],
super::PortSpecData::PortRange(start, end) => vec![NetworkPolicyPort {
port: Some(IntOrString::Int(start.clone().into())),
end_port: Some(end.clone().into()),
protocol: None, // Not currently supported by Harmony
}],
super::PortSpecData::ListOfPorts(items) => items
.iter()
.map(|i| NetworkPolicyPort {
port: Some(IntOrString::Int(i.clone().into())),
..Default::default()
})
.collect(),
});
let rule = serde_json::from_value::<NetworkPolicyIngressRule>(json!({
"from": cidr_list
"from": cidr_list,
"ports": ports,
}))
.map_err(|e| {
ExecutorError::ConfigurationError(format!(
@@ -298,6 +401,9 @@ impl K8sTenantManager {
Ok(network_policy)
}
fn store_config(&self, config: &TenantConfig) {
let _ = self.k8s_tenant_config.set(config.clone());
}
}
#[async_trait]
@@ -306,6 +412,7 @@ impl TenantManager for K8sTenantManager {
let namespace = self.build_namespace(config)?;
let resource_quota = self.build_resource_quota(config)?;
let network_policy = self.build_network_policy(config)?;
let resource_limit_range = self.build_limit_range(config)?;
self.ensure_constraints(&namespace)?;
@@ -315,6 +422,9 @@ impl TenantManager for K8sTenantManager {
debug!("Creating resource_quota for tenant {}", config.name);
self.apply_resource(resource_quota, config).await?;
debug!("Creating limit_range for tenant {}", config.name);
self.apply_resource(resource_limit_range, config).await?;
debug!("Creating network_policy for tenant {}", config.name);
self.apply_resource(network_policy, config).await?;
@@ -322,6 +432,10 @@ impl TenantManager for K8sTenantManager {
"Success provisionning K8s tenant id {} name {}",
config.id, config.name
);
self.store_config(config);
Ok(())
}
async fn get_tenant_config(&self) -> Option<TenantConfig> {
self.k8s_tenant_config.get().cloned()
}
}


@@ -15,4 +15,6 @@ pub trait TenantManager {
/// # Arguments
/// * `config`: The desired configuration for the new tenant.
async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError>;
async fn get_tenant_config(&self) -> Option<TenantConfig>;
}


@@ -0,0 +1,41 @@
use async_trait::async_trait;
use serde::Serialize;
use crate::topology::Topology;
/// An ApplicationFeature provided by harmony, such as Backups, Monitoring, MultisiteAvailability,
/// ContinuousIntegration, ContinuousDelivery
#[async_trait]
pub trait ApplicationFeature<T: Topology>:
std::fmt::Debug + Send + Sync + ApplicationFeatureClone<T>
{
async fn ensure_installed(&self, topology: &T) -> Result<(), String>;
fn name(&self) -> String;
}
trait ApplicationFeatureClone<T: Topology> {
fn clone_box(&self) -> Box<dyn ApplicationFeature<T>>;
}
impl<A, T: Topology> ApplicationFeatureClone<T> for A
where
A: ApplicationFeature<T> + Clone + 'static,
{
fn clone_box(&self) -> Box<dyn ApplicationFeature<T>> {
Box::new(self.clone())
}
}
impl<T: Topology> Serialize for Box<dyn ApplicationFeature<T>> {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
todo!()
}
}
impl<T: Topology> Clone for Box<dyn ApplicationFeature<T>> {
fn clone(&self) -> Self {
self.clone_box()
}
}


@@ -0,0 +1,84 @@
use async_trait::async_trait;
use log::info;
use serde_json::Value;
use crate::{
data::Version,
inventory::Inventory,
modules::{application::ApplicationFeature, helm::chart::HelmChartScore},
score::Score,
topology::{HelmCommand, Topology, Url},
};
/// ContinuousDelivery in Harmony provides this functionality :
///
/// - **Package** the application
/// - **Push** to an artifact registry
/// - **Deploy** to a testing environment
/// - **Deploy** to a production environment
///
/// It is intended to be used as an application feature passed down to an ApplicationInterpret. For
/// example :
///
/// ```rust,ignore
/// let app = RustApplicationScore {
/// name: "My Rust App".to_string(),
/// features: vec![ContinuousDelivery::default()],
/// };
/// ```
///
/// *Note :*
///
/// By default, the Harmony Opinionated Pipeline is built using these technologies :
///
/// - Gitea Action (executes pipeline steps)
/// - Docker to build an OCI container image
/// - Helm chart to package Kubernetes resources
/// - Harbor as artifact registry
/// - ArgoCD to install/upgrade/rollback/inspect k8s resources
/// - Kubernetes for runtime orchestration
#[derive(Debug, Default, Clone)]
pub struct ContinuousDelivery {}
#[async_trait]
impl<T: Topology + HelmCommand + 'static> ApplicationFeature<T> for ContinuousDelivery {
async fn ensure_installed(&self, topology: &T) -> Result<(), String> {
info!("Installing ContinuousDelivery feature");
let cd_server = HelmChartScore {
namespace: todo!(
"ArgoCD Helm chart with proper understanding of Tenant, see how Will did it for Monitoring for now"
),
release_name: todo!("argocd helm chart whatever"),
chart_name: todo!(),
chart_version: todo!(),
values_overrides: todo!(),
values_yaml: todo!(),
create_namespace: todo!(),
install_only: todo!(),
repository: todo!(),
};
let interpret = cd_server.create_interpret();
interpret.execute(&Inventory::empty(), topology);
todo!("1. Create ArgoCD score that installs argo using helm chart, see if Taha's already done it
2. Package app (docker image, helm chart)
3. Push to registry if staging or prod
4. Poke Argo
5. Ensure app is up")
}
fn name(&self) -> String {
"ContinuousDelivery".to_string()
}
}
/// For now this is entirely bound to K8s / ArgoCD, will have to be revisited when we support
/// more CD systems
pub struct CDApplicationConfig {
version: Version,
helm_chart_url: Url,
values_overrides: Value,
}
pub trait ContinuousDeliveryApplication {
fn get_config(&self) -> CDApplicationConfig;
}


@@ -0,0 +1,42 @@
use async_trait::async_trait;
use log::info;
use crate::{
modules::application::ApplicationFeature,
topology::{K8sclient, Topology},
};
#[derive(Debug, Clone)]
pub struct PublicEndpoint {
application_port: u16,
}
/// Use port 3000 as default port. Harmony wants to provide "sane defaults" in general, and in this
/// particular context, using port 80 goes against our philosophy to provide production grade
/// defaults out of the box. Using an unprivileged port is a good security practice and will allow
/// for unprivileged containers to work with this out of the box.
///
/// Now, why 3000 specifically? Many popular web/network frameworks use it by default, there is no
/// perfect answer for this but many Rust and Python libraries tend to use 3000.
impl Default for PublicEndpoint {
fn default() -> Self {
Self {
application_port: 3000,
}
}
}
/// For now we only support K8s ingress, but we will support more stuff at some point
#[async_trait]
impl<T: Topology + K8sclient + 'static> ApplicationFeature<T> for PublicEndpoint {
async fn ensure_installed(&self, _topology: &T) -> Result<(), String> {
info!(
"Making sure public endpoint is installed for port {}",
self.application_port
);
todo!()
}
fn name(&self) -> String {
"PublicEndpoint".to_string()
}
}


@@ -0,0 +1,8 @@
mod endpoint;
pub use endpoint::*;
mod monitoring;
pub use monitoring::*;
mod continuous_delivery;
pub use continuous_delivery::*;


@@ -0,0 +1,21 @@
use async_trait::async_trait;
use log::info;
use crate::{
modules::application::ApplicationFeature,
topology::{HelmCommand, Topology},
};
#[derive(Debug, Default, Clone)]
pub struct Monitoring {}
#[async_trait]
impl<T: Topology + HelmCommand + 'static> ApplicationFeature<T> for Monitoring {
async fn ensure_installed(&self, _topology: &T) -> Result<(), String> {
info!("Ensuring monitoring is available for application");
todo!("create and execute k8s prometheus score, depends on Will's work")
}
fn name(&self) -> String {
"Monitoring".to_string()
}
}


@@ -0,0 +1,78 @@
mod feature;
pub mod features;
mod rust;
pub use feature::*;
use log::info;
pub use rust::*;
use async_trait::async_trait;
use crate::{
data::{Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
topology::Topology,
};
pub trait Application: std::fmt::Debug + Send + Sync {
fn name(&self) -> String;
}
#[derive(Debug)]
pub struct ApplicationInterpret<T: Topology + std::fmt::Debug> {
features: Vec<Box<dyn ApplicationFeature<T>>>,
application: Box<dyn Application>,
}
#[async_trait]
impl<T: Topology + std::fmt::Debug> Interpret<T> for ApplicationInterpret<T> {
async fn execute(
&self,
_inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
let app_name = self.application.name();
info!(
"Preparing {} features [{}] for application {app_name}",
self.features.len(),
self.features
.iter()
.map(|f| f.name())
.collect::<Vec<String>>()
.join(", ")
);
for feature in self.features.iter() {
info!(
"Installing feature {} for application {app_name}",
feature.name()
);
let _ = match feature.ensure_installed(topology).await {
Ok(()) => (),
Err(msg) => {
return Err(InterpretError::new(format!(
"Application Interpret failed to install feature : {msg}"
)));
}
};
}
todo!(
"Do I need to do anything more than this here?? I feel like the Application trait itself should expose something like ensure_ready but its becoming redundant. We'll see as this evolves."
)
}
fn get_name(&self) -> InterpretName {
InterpretName::Application
}
fn get_version(&self) -> Version {
Version::from("1.0.0").unwrap()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}


@@ -0,0 +1,41 @@
use serde::Serialize;
use crate::{
score::Score,
topology::{Topology, Url},
};
use super::{Application, ApplicationFeature, ApplicationInterpret};
#[derive(Debug, Serialize, Clone)]
pub struct RustWebappScore<T: Topology + Clone + Serialize> {
pub name: String,
pub domain: Url,
pub features: Vec<Box<dyn ApplicationFeature<T>>>,
}
impl<T: Topology + std::fmt::Debug + Clone + Serialize + 'static> Score<T> for RustWebappScore<T> {
fn create_interpret(&self) -> Box<dyn crate::interpret::Interpret<T>> {
Box::new(ApplicationInterpret {
features: self.features.clone(),
application: Box::new(RustWebapp {
name: self.name.clone(),
}),
})
}
fn name(&self) -> String {
format!("{}-RustWebapp", self.name)
}
}
#[derive(Debug)]
struct RustWebapp {
name: String,
}
impl Application for RustWebapp {
fn name(&self) -> String {
self.name.clone()
}
}


@@ -1,3 +1,4 @@
pub mod application;
pub mod cert_manager;
pub mod dhcp;
pub mod dns;
@@ -15,3 +16,4 @@ pub mod opnsense;
pub mod prometheus;
pub mod tenant;
pub mod tftp;
pub mod postgres;


@@ -1,13 +1,12 @@
use serde::Serialize;
use crate::modules::monitoring::{
alert_rule::prometheus_alert_rule::AlertManagerRuleGroup,
kube_prometheus::types::{AlertManagerAdditionalPromRules, AlertManagerChannelConfig},
use crate::modules::monitoring::kube_prometheus::types::{
AlertManagerAdditionalPromRules, AlertManagerChannelConfig, ServiceMonitor,
};
#[derive(Debug, Clone, Serialize)]
pub struct KubePrometheusConfig {
pub namespace: String,
pub namespace: Option<String>,
pub default_rules: bool,
pub windows_monitoring: bool,
pub alert_manager: bool,
@@ -26,11 +25,12 @@ pub struct KubePrometheusConfig {
pub prometheus_operator: bool,
pub alert_receiver_configs: Vec<AlertManagerChannelConfig>,
pub alert_rules: Vec<AlertManagerAdditionalPromRules>,
pub additional_service_monitors: Vec<ServiceMonitor>,
}
impl KubePrometheusConfig {
pub fn new() -> Self {
Self {
namespace: "monitoring".into(),
namespace: None,
default_rules: true,
windows_monitoring: false,
alert_manager: true,
@@ -39,7 +39,7 @@ impl KubePrometheusConfig {
prometheus: true,
kubernetes_service_monitors: true,
kubernetes_api_server: false,
kubelet: false,
kubelet: true,
kube_controller_manager: false,
kube_etcd: false,
kube_proxy: false,
@@ -49,6 +49,7 @@ impl KubePrometheusConfig {
kube_scheduler: false,
alert_receiver_configs: vec![],
alert_rules: vec![],
additional_service_monitors: vec![],
}
}
}


@@ -12,7 +12,8 @@ use crate::modules::{
helm::chart::HelmChartScore,
monitoring::kube_prometheus::types::{
AlertGroup, AlertManager, AlertManagerAdditionalPromRules, AlertManagerConfig,
AlertManagerRoute, AlertManagerValues,
AlertManagerRoute, AlertManagerSpec, AlertManagerValues, ConfigReloader, Limits,
PrometheusConfig, Requests, Resources,
},
};
@@ -36,8 +37,47 @@ pub fn kube_prometheus_helm_chart_score(
let node_exporter = config.node_exporter.to_string();
let prometheus_operator = config.prometheus_operator.to_string();
let prometheus = config.prometheus.to_string();
let resource_limit = Resources {
limits: Limits {
memory: "100Mi".to_string(),
cpu: "100m".to_string(),
},
requests: Requests {
memory: "100Mi".to_string(),
cpu: "100m".to_string(),
},
};
fn indent_lines(s: &str, spaces: usize) -> String {
let pad = " ".repeat(spaces);
s.lines()
.map(|line| format!("{pad}{line}"))
.collect::<Vec<_>>()
.join("\n")
}
fn resource_block(resource: &Resources, indent_level: usize) -> String {
let yaml = serde_yaml::to_string(resource).unwrap();
format!(
"{}resources:\n{}",
" ".repeat(indent_level),
indent_lines(&yaml, indent_level + 2)
)
}
let resource_section = resource_block(&resource_limit, 2);
let mut values = format!(
r#"
prometheus:
enabled: {prometheus}
prometheusSpec:
resources:
requests:
cpu: 100m
memory: 500Mi
limits:
cpu: 200m
memory: 1000Mi
defaultRules:
create: {default_rules}
rules:
@@ -77,35 +117,183 @@ defaultRules:
windows: true
windowsMonitoring:
enabled: {windows_monitoring}
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 200m
memory: 250Mi
grafana:
enabled: {grafana}
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 200m
memory: 250Mi
initChownData:
resources:
requests:
cpu: 10m
memory: 50Mi
limits:
cpu: 50m
memory: 100Mi
sidecar:
resources:
requests:
cpu: 10m
memory: 50Mi
limits:
cpu: 50m
memory: 100Mi
kubernetesServiceMonitors:
enabled: {kubernetes_service_monitors}
kubeApiServer:
enabled: {kubernetes_api_server}
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 200m
memory: 250Mi
kubelet:
enabled: {kubelet}
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 200m
memory: 250Mi
kubeControllerManager:
enabled: {kube_controller_manager}
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 200m
memory: 250Mi
coreDns:
enabled: {core_dns}
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 200m
memory: 250Mi
kubeEtcd:
enabled: {kube_etcd}
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 200m
memory: 250Mi
kubeScheduler:
enabled: {kube_scheduler}
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 200m
memory: 250Mi
kubeProxy:
enabled: {kube_proxy}
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 200m
memory: 250Mi
kubeStateMetrics:
enabled: {kube_state_metrics}
kube-state-metrics:
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 200m
memory: 250Mi
nodeExporter:
enabled: {node_exporter}
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 200m
memory: 250Mi
prometheus-node-exporter:
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 200m
memory: 250Mi
prometheusOperator:
enabled: {prometheus_operator}
prometheus:
enabled: {prometheus}
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 100m
memory: 200Mi
prometheusConfigReloader:
resources:
requests:
cpu: 100m
memory: 150Mi
limits:
cpu: 100m
memory: 200Mi
admissionWebhooks:
deployment:
resources:
limits:
cpu: 10m
memory: 100Mi
requests:
cpu: 10m
memory: 100Mi
patch:
resources:
limits:
cpu: 10m
memory: 100Mi
requests:
cpu: 10m
memory: 100Mi
"#,
);
let prometheus_config =
crate::modules::monitoring::kube_prometheus::types::PrometheusConfigValues {
prometheus: PrometheusConfig {
prometheus: bool::from_str(prometheus.as_str()).expect("couldn't parse bool"),
additional_service_monitors: config.additional_service_monitors.clone(),
},
};
let prometheus_config_yaml =
serde_yaml::to_string(&prometheus_config).expect("Failed to serialize YAML");
debug!(
"serialized prometheus config: \n {:#}",
prometheus_config_yaml
);
values.push_str(&prometheus_config_yaml);
// add required null receiver for prometheus alert manager
let mut null_receiver = Mapping::new();
null_receiver.insert(
@@ -145,6 +333,30 @@ prometheus:
alertmanager: AlertManager {
enabled: config.alert_manager,
config: alert_manager_channel_config,
alertmanager_spec: AlertManagerSpec {
resources: Resources {
limits: Limits {
memory: "100Mi".to_string(),
cpu: "100m".to_string(),
},
requests: Requests {
memory: "100Mi".to_string(),
cpu: "100m".to_string(),
},
},
},
init_config_reloader: ConfigReloader {
resources: Resources {
limits: Limits {
memory: "100Mi".to_string(),
cpu: "100m".to_string(),
},
requests: Requests {
memory: "100Mi".to_string(),
cpu: "100m".to_string(),
},
},
},
},
};
@@ -185,7 +397,7 @@ prometheus:
debug!("full values.yaml: \n {:#}", values);
HelmChartScore {
namespace: Some(NonBlankString::from_str(&config.namespace).unwrap()),
namespace: Some(NonBlankString::from_str(&config.namespace.clone().unwrap()).unwrap()),
release_name: NonBlankString::from_str("kube-prometheus").unwrap(),
chart_name: NonBlankString::from_str(
"oci://ghcr.io/prometheus-community/charts/kube-prometheus-stack",


@@ -4,10 +4,12 @@ use serde::Serialize;
use super::{helm::config::KubePrometheusConfig, prometheus::Prometheus};
use crate::{
modules::monitoring::kube_prometheus::types::ServiceMonitor,
score::Score,
topology::{
HelmCommand, Topology,
oberservability::monitoring::{AlertReceiver, AlertRule, AlertingInterpret},
tenant::TenantManager,
},
};
@@ -15,14 +17,18 @@ use crate::{
pub struct HelmPrometheusAlertingScore {
pub receivers: Vec<Box<dyn AlertReceiver<Prometheus>>>,
pub rules: Vec<Box<dyn AlertRule<Prometheus>>>,
pub service_monitors: Vec<ServiceMonitor>,
}
impl<T: Topology + HelmCommand> Score<T> for HelmPrometheusAlertingScore {
impl<T: Topology + HelmCommand + TenantManager> Score<T> for HelmPrometheusAlertingScore {
fn create_interpret(&self) -> Box<dyn crate::interpret::Interpret<T>> {
let config = Arc::new(Mutex::new(KubePrometheusConfig::new()));
config
.try_lock()
.expect("couldn't lock config")
.additional_service_monitors = self.service_monitors.clone();
Box::new(AlertingInterpret {
sender: Prometheus {
config: Arc::new(Mutex::new(KubePrometheusConfig::new())),
},
sender: Prometheus::new(),
receivers: self.receivers.clone(),
rules: self.rules.clone(),
})


@@ -1,7 +1,7 @@
use std::sync::{Arc, Mutex};
use async_trait::async_trait;
use log::debug;
use log::{debug, error};
use serde::Serialize;
use crate::{
@@ -10,9 +10,10 @@ use crate::{
modules::monitoring::alert_rule::prometheus_alert_rule::AlertManagerRuleGroup,
score,
topology::{
HelmCommand, Topology,
HelmCommand, K8sAnywhereTopology, Topology,
installable::Installable,
oberservability::monitoring::{AlertReceiver, AlertRule, AlertSender},
tenant::TenantManager,
},
};
@@ -33,7 +34,12 @@ impl AlertSender for Prometheus {
}
#[async_trait]
impl<T: Topology + HelmCommand> Installable<T> for Prometheus {
impl<T: Topology + HelmCommand + TenantManager> Installable<T> for Prometheus {
async fn configure(&self, _inventory: &Inventory, topology: &T) -> Result<(), InterpretError> {
self.configure_with_topology(topology).await;
Ok(())
}
async fn ensure_installed(
&self,
inventory: &Inventory,
@@ -50,6 +56,23 @@ pub struct Prometheus {
}
impl Prometheus {
pub fn new() -> Self {
Self {
config: Arc::new(Mutex::new(KubePrometheusConfig::new())),
}
}
pub async fn configure_with_topology<T: TenantManager>(&self, topology: &T) {
let ns = topology
.get_tenant_config()
.await
.map(|cfg| cfg.name.clone())
.unwrap_or_else(|| "monitoring".to_string());
error!("This must be refactored, see comments in pr #74");
debug!("NS: {}", ns);
self.config.lock().unwrap().namespace = Some(ns);
}
pub async fn install_receiver(
&self,
prometheus_receiver: &dyn PrometheusReceiver,


@@ -1,4 +1,4 @@
use std::collections::BTreeMap;
use std::collections::{BTreeMap, HashMap};
use async_trait::async_trait;
use serde::Serialize;
@@ -16,9 +16,17 @@ pub struct AlertManagerValues {
pub alertmanager: AlertManager,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct AlertManager {
pub enabled: bool,
pub config: AlertManagerConfig,
pub alertmanager_spec: AlertManagerSpec,
pub init_config_reloader: ConfigReloader,
}
#[derive(Debug, Clone, Serialize)]
pub struct ConfigReloader {
pub resources: Resources,
}
#[derive(Debug, Clone, Serialize)]
@@ -43,6 +51,30 @@ pub struct AlertManagerChannelConfig {
pub channel_receiver: Value,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct AlertManagerSpec {
pub(crate) resources: Resources,
}
#[derive(Debug, Clone, Serialize)]
pub struct Resources {
pub limits: Limits,
pub requests: Requests,
}
#[derive(Debug, Clone, Serialize)]
pub struct Limits {
pub memory: String,
pub cpu: String,
}
#[derive(Debug, Clone, Serialize)]
pub struct Requests {
pub memory: String,
pub cpu: String,
}
#[derive(Debug, Clone, Serialize)]
pub struct AlertManagerAdditionalPromRules {
#[serde(flatten)]
@@ -53,3 +85,202 @@ pub struct AlertManagerAdditionalPromRules {
pub struct AlertGroup {
pub groups: Vec<AlertManagerRuleGroup>,
}
#[derive(Debug, Clone, Serialize)]
pub enum HTTPScheme {
#[serde(rename = "http")]
HTTP,
#[serde(rename = "https")]
HTTPS,
}
#[derive(Debug, Clone, Serialize)]
pub enum Operator {
In,
NotIn,
Exists,
DoesNotExist,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct PrometheusConfigValues {
pub prometheus: PrometheusConfig,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct PrometheusConfig {
pub prometheus: bool,
pub additional_service_monitors: Vec<ServiceMonitor>,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct ServiceMonitorTLSConfig {
// ## Path to the CA file
// ##
pub ca_file: Option<String>,
// ## Path to client certificate file
// ##
pub cert_file: Option<String>,
// ## Skip certificate verification
// ##
pub insecure_skip_verify: Option<bool>,
// ## Path to client key file
// ##
pub key_file: Option<String>,
// ## Server name used to verify host name
// ##
pub server_name: Option<String>,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct ServiceMonitorEndpoint {
// ## Name of the endpoint's service port
// ## Mutually exclusive with targetPort
pub port: Option<String>,
// ## Name or number of the endpoint's target port
// ## Mutually exclusive with port
pub target_port: Option<String>,
// ## File containing bearer token to be used when scraping targets
// ##
pub bearer_token_file: Option<String>,
// ## Interval at which metrics should be scraped
// ##
pub interval: Option<String>,
// ## HTTP path to scrape for metrics
// ##
pub path: String,
// ## HTTP scheme to use for scraping
// ##
pub scheme: HTTPScheme,
// ## TLS configuration to use when scraping the endpoint
// ##
pub tls_config: Option<ServiceMonitorTLSConfig>,
// ## MetricRelabelConfigs to apply to samples after scraping, but before ingestion.
// ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api-reference/api.md#relabelconfig
// ##
// # - action: keep
// # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
// # sourceLabels: [__name__]
pub metric_relabelings: Vec<Mapping>,
// ## RelabelConfigs to apply to samples before scraping
// ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api-reference/api.md#relabelconfig
// ##
// # - sourceLabels: [__meta_kubernetes_pod_node_name]
// # separator: ;
// # regex: ^(.*)$
// # targetLabel: nodename
// # replacement: $1
// # action: replace
pub relabelings: Vec<Mapping>,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct MatchExpression {
pub key: String,
pub operator: Operator,
pub values: Vec<String>,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct Selector {
// # label selector for services
pub match_labels: HashMap<String, String>,
pub match_expressions: Vec<MatchExpression>,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct ServiceMonitor {
pub name: String,
// # Additional labels to set used for the ServiceMonitorSelector. Together with standard labels from the chart
pub additional_labels: Option<Mapping>,
// # Service label for use in assembling a job name of the form <label value>-<port>
// # If no label is specified, the service name is used.
pub job_label: Option<String>,
// # labels to transfer from the kubernetes service to the target
pub target_labels: Vec<String>,
// # labels to transfer from the kubernetes pods to the target
pub pod_target_labels: Vec<String>,
// # Label selector for services to which this ServiceMonitor applies
// # Example which selects all services to be monitored
// # with label "monitoredby" with values any of "example-service-1" or "example-service-2"
// matchExpressions:
// - key: "monitoredby"
// operator: In
// values:
// - example-service-1
// - example-service-2
pub selector: Selector,
// # Namespaces from which services are selected
// # Match any namespace
// any: bool,
// # Explicit list of namespace names to select
// matchNames: Vec,
pub namespace_selector: Option<Mapping>,
// # Endpoints of the selected service to be monitored
pub endpoints: Vec<ServiceMonitorEndpoint>,
// # Fallback scrape protocol used by Prometheus for scraping metrics
// # ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api-reference/api.md#monitoring.coreos.com/v1.ScrapeProtocol
pub fallback_scrape_protocol: Option<String>,
}
impl Default for ServiceMonitor {
fn default() -> Self {
Self {
name: Default::default(),
additional_labels: Default::default(),
job_label: Default::default(),
target_labels: Default::default(),
pod_target_labels: Default::default(),
selector: Selector {
match_labels: HashMap::new(),
match_expressions: vec![],
},
namespace_selector: Default::default(),
endpoints: Default::default(),
fallback_scrape_protocol: Default::default(),
}
}
}
impl Default for ServiceMonitorEndpoint {
fn default() -> Self {
Self {
port: Some("80".to_string()),
target_port: Default::default(),
bearer_token_file: Default::default(),
interval: Default::default(),
path: "/metrics".to_string(),
scheme: HTTPScheme::HTTP,
tls_config: Default::default(),
metric_relabelings: Default::default(),
relabelings: Default::default(),
}
}
}
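As a hedged usage sketch (not part of the diff), a ServiceMonitor selecting services labelled `app: my-service` and scraping their `http` port could be built on top of the Default impls above; the service name and label are invented for illustration.

```rust
use std::collections::HashMap;

// Sketch: scrape /metrics over HTTP (both taken from ServiceMonitorEndpoint::default)
// on the "http" port of services labelled app=my-service.
fn example_service_monitor() -> ServiceMonitor {
    ServiceMonitor {
        name: "my-service".to_string(),
        selector: Selector {
            match_labels: HashMap::from([("app".to_string(), "my-service".to_string())]),
            match_expressions: vec![],
        },
        endpoints: vec![ServiceMonitorEndpoint {
            port: Some("http".to_string()),
            ..Default::default()
        }],
        ..Default::default()
    }
}
```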

View File

@@ -1,3 +1,4 @@
pub mod alert_channel;
pub mod alert_rule;
pub mod kube_prometheus;
pub mod ntfy;

View File

@@ -0,0 +1 @@
pub mod ntfy_helm_chart;

View File

@@ -0,0 +1,83 @@
use non_blank_string_rs::NonBlankString;
use std::str::FromStr;
use crate::modules::helm::chart::{HelmChartScore, HelmRepository};
pub fn ntfy_helm_chart_score(namespace: String) -> HelmChartScore {
let values = format!(
r#"
replicaCount: 1
image:
repository: binwiederhier/ntfy
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: "v2.12.0"
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
# annotations:
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
# name: ""
service:
type: ClusterIP
port: 80
ingress:
enabled: false
# annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: ntfy.host.com
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
autoscaling:
enabled: false
config:
enabled: true
data:
# base-url: "https://ntfy.something.com"
auth-file: "/var/cache/ntfy/user.db"
auth-default-access: "deny-all"
cache-file: "/var/cache/ntfy/cache.db"
attachment-cache-dir: "/var/cache/ntfy/attachments"
behind-proxy: true
# web-root: "disable"
enable-signup: false
enable-login: "true"
persistence:
enabled: true
size: 200Mi
"#,
);
HelmChartScore {
namespace: Some(NonBlankString::from_str(&namespace).unwrap()),
release_name: NonBlankString::from_str("ntfy").unwrap(),
chart_name: NonBlankString::from_str("sarab97/ntfy").unwrap(),
chart_version: Some(NonBlankString::from_str("0.1.7").unwrap()),
values_overrides: None,
values_yaml: Some(values.to_string()),
create_namespace: true,
install_only: false,
repository: Some(HelmRepository::new(
"sarab97".to_string(),
url::Url::parse("https://charts.sarabsingh.com").unwrap(),
true,
)),
}
}

View File

@@ -0,0 +1,2 @@
pub mod helm;
pub mod ntfy;

View File

@@ -0,0 +1,169 @@
use std::sync::Arc;
use async_trait::async_trait;
use log::debug;
use serde::Serialize;
use strum::{Display, EnumString};
use crate::{
data::{Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
modules::monitoring::ntfy::helm::ntfy_helm_chart::ntfy_helm_chart_score,
score::Score,
topology::{HelmCommand, K8sclient, Topology, k8s::K8sClient},
};
#[derive(Debug, Clone, Serialize)]
pub struct NtfyScore {
pub namespace: String,
}
impl<T: Topology + HelmCommand + K8sclient> Score<T> for NtfyScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(NtfyInterpret {
score: self.clone(),
})
}
fn name(&self) -> String {
format!("Ntfy")
}
}
#[derive(Debug, Serialize)]
pub struct NtfyInterpret {
pub score: NtfyScore,
}
#[derive(Debug, EnumString, Display)]
enum NtfyAccessMode {
#[strum(serialize = "read-write", serialize = "rw", to_string = "read-write")]
ReadWrite,
#[strum(
serialize = "read-only",
serialize = "ro",
serialize = "read",
to_string = "read-only"
)]
ReadOnly,
#[strum(
serialize = "write-only",
serialize = "wo",
serialize = "write",
to_string = "write-only"
)]
WriteOnly,
#[strum(serialize = "none", to_string = "deny")]
Deny,
}
#[derive(Debug, EnumString, Display)]
enum NtfyRole {
#[strum(serialize = "user", to_string = "user")]
User,
#[strum(serialize = "admin", to_string = "admin")]
Admin,
}
impl NtfyInterpret {
async fn add_user(
&self,
k8s_client: Arc<K8sClient>,
username: &str,
password: &str,
role: Option<NtfyRole>,
) -> Result<(), String> {
let role = match role {
Some(r) => r,
None => NtfyRole::User,
};
k8s_client
.exec_app(
"ntfy".to_string(),
Some(&self.score.namespace),
vec![
"sh",
"-c",
format!("NTFY_PASSWORD={password} ntfy user add --role={role} {username}")
.as_str(),
],
)
.await?;
Ok(())
}
async fn set_access(
&self,
k8s_client: Arc<K8sClient>,
username: &str,
topic: &str,
mode: NtfyAccessMode,
) -> Result<(), String> {
k8s_client
.exec_app(
"ntfy".to_string(),
Some(&self.score.namespace),
vec![
"sh",
"-c",
format!("ntfy access {username} {topic} {mode}").as_str(),
],
)
.await?;
Ok(())
}
}
/// We need a ntfy Interpret to wrap the HelmChartScore so we can run the score and then bootstrap the config inside ntfy
#[async_trait]
impl<T: Topology + HelmCommand + K8sclient> Interpret<T> for NtfyInterpret {
async fn execute(
&self,
inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
ntfy_helm_chart_score(self.score.namespace.clone())
.create_interpret()
.execute(inventory, topology)
.await?;
debug!("installed ntfy helm chart");
let client = topology
.k8s_client()
.await
.expect("couldn't get k8s client");
client
.wait_until_deployment_ready(
"ntfy".to_string(),
Some(&self.score.namespace.as_str()),
None,
)
.await?;
debug!("created k8s client");
self.add_user(client, "harmony", "harmony", Some(NtfyRole::Admin))
.await?;
debug!("exec into pod done");
Ok(Outcome::success("installed ntfy".to_string()))
}
fn get_name(&self) -> InterpretName {
todo!()
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
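A hedged sketch of how this score would typically be driven, mirroring the `create_interpret().execute()` pattern used inside `NtfyInterpret::execute` above; the helper name and namespace are made up for illustration.

```rust
// Sketch only: run the ntfy score against any topology providing Helm and k8s access.
async fn install_ntfy<T: Topology + HelmCommand + K8sclient>(
    inventory: &Inventory,
    topology: &T,
) -> Result<Outcome, InterpretError> {
    NtfyScore {
        namespace: "monitoring".to_string(),
    }
    .create_interpret()
    .execute(inventory, topology)
    .await
}
```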

View File

@@ -0,0 +1,119 @@
use async_trait::async_trait;
use derive_new::new;
use log::info;
use serde::{Deserialize, Serialize};
use crate::{
data::{PostgresDatabase, PostgresUser, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
score::Score,
topology::{PostgresServer, Topology},
};
#[derive(Debug, new, Clone, Serialize, Deserialize)]
pub struct PostgresScore {
users: Vec<PostgresUser>,
databases: Vec<PostgresDatabase>,
}
impl<T: Topology + PostgresServer> Score<T> for PostgresScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(PostgresInterpret::new(self.clone()))
}
fn name(&self) -> String {
"PostgresScore".to_string()
}
}
#[derive(Debug, Clone)]
pub struct PostgresInterpret {
score: PostgresScore,
version: Version,
status: InterpretStatus,
}
impl PostgresInterpret {
pub fn new(score: PostgresScore) -> Self {
let version = Version::from("1.0.0").expect("Version should be valid");
Self {
version,
score,
status: InterpretStatus::QUEUED,
}
}
async fn ensure_users_exist<P: PostgresServer>(
&self,
postgres_server: &P,
) -> Result<Outcome, InterpretError> {
let users = &self.score.users;
postgres_server.ensure_users_exist(users.clone()).await?;
Ok(Outcome::new(
InterpretStatus::SUCCESS,
format!(
"PostgresInterpret ensured {} users exist successfully",
users.len()
),
))
}
async fn ensure_databases_exist<P: PostgresServer>(
&self,
postgres_server: &P,
) -> Result<Outcome, InterpretError> {
let databases = &self.score.databases;
postgres_server
.ensure_databases_exist(databases.clone())
.await?;
Ok(Outcome::new(
InterpretStatus::SUCCESS,
format!(
"PostgresInterpret ensured {} databases exist successfully",
databases.len()
),
))
}
}
#[async_trait]
impl<T: Topology + PostgresServer> Interpret<T> for PostgresInterpret {
fn get_name(&self) -> InterpretName {
InterpretName::Postgres
}
fn get_version(&self) -> crate::domain::data::Version {
self.version.clone()
}
fn get_status(&self) -> InterpretStatus {
self.status.clone()
}
fn get_children(&self) -> Vec<crate::domain::data::Id> {
todo!()
}
async fn execute(
&self,
inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
info!(
"Executing {} on inventory {inventory:?})",
<PostgresInterpret as Interpret<T>>::get_name(self)
);
self.ensure_users_exist(topology).await?;
self.ensure_databases_exist(topology).await?;
Ok(Outcome::new(
InterpretStatus::SUCCESS,
"Postgres Interpret execution successful".to_string(),
))
}
}
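Likewise, a hedged sketch of driving the new Postgres score; the construction of `PostgresUser` and `PostgresDatabase` is not shown in this compare, so the vectors are left empty here.

```rust
// Sketch only: declare the desired users/databases and let the interpret reconcile them.
async fn ensure_postgres<T: Topology + PostgresServer>(
    inventory: &Inventory,
    topology: &T,
) -> Result<Outcome, InterpretError> {
    let users: Vec<PostgresUser> = vec![];
    let databases: Vec<PostgresDatabase> = vec![];
    PostgresScore::new(users, databases)
        .create_interpret()
        .execute(inventory, topology)
        .await
}
```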

View File

@@ -12,15 +12,15 @@ use harmony_tui;
pub struct Args {
/// Run score(s) without prompt
#[arg(short, long, default_value_t = false, conflicts_with = "interactive")]
yes: bool,
pub yes: bool,
/// Filter query
#[arg(short, long, conflicts_with = "interactive")]
filter: Option<String>,
pub filter: Option<String>,
/// Run interactive TUI or not
#[arg(short, long, default_value_t = false)]
interactive: bool,
pub interactive: bool,
/// Run all or nth, defaults to all
#[arg(
@@ -31,15 +31,15 @@ pub struct Args {
conflicts_with = "number",
conflicts_with = "interactive"
)]
all: bool,
pub all: bool,
/// Run nth matching, zero indexed
#[arg(short, long, default_value_t = 0, conflicts_with = "interactive")]
number: usize,
pub number: usize,
/// list scores, will also be affected by run filter
#[arg(short, long, default_value_t = false, conflicts_with = "interactive")]
list: bool,
pub list: bool,
}
fn maestro_scores_filter<T: Topology>(

View File

@@ -9,12 +9,20 @@ It's designed to simplify the build process by either compiling a Harmony projec
You can download and run the latest snapshot build with a single command. This will place the binary in ~/.local/bin, which should be in your PATH on most modern Linux distributions.
```bash
curl -Ls https://git.nationtech.io/NationTech/harmony/releases/download/snapshot-latest/harmony_composer \
mkdir -p ~/.local/bin && \
curl -L https://git.nationtech.io/NationTech/harmony/releases/download/snapshot-latest/harmony_composer \
-o ~/.local/bin/harmony_composer && \
chmod +x ~/.local/bin/harmony_composer
chmod +x ~/.local/bin/harmony_composer && \
alias hc=~/.local/bin/harmony_composer && \
echo "\n\nharmony_composer installed successfully\!\n\nUse \`hc\` to run it.\n\nNote : this hc alias only works for the current shell session. Add 'alias hc=~/.local/bin/harmony_composer' to your '~/.bashrc' or '~/.zshrc' file to make it permanently available to your user."
```
Then you can start using it with either:
- `harmony_composer` if `~/.local/bin` is in your `$PATH`
- `hc` alias set up in your current shell session.
- If you want to make the `hc` command always available, add `alias hc=~/.local/bin/harmony_composer` to your shell profile. Usually `~/.bashrc` for bash, `~/.zshrc` for zsh.
> ⚠️ Warning: Unstable Builds
> The snapshot-latest tag points to the latest build from the master branch. It is unstable, unsupported, and intended only for early testing of new features. Please do not use it in production environments.

View File

@@ -73,10 +73,9 @@ async fn main() {
.try_exists()
.expect("couldn't check if path exists");
let harmony_bin_path: PathBuf;
match harmony_path {
let harmony_bin_path: PathBuf = match harmony_path {
true => {
harmony_bin_path = compile_harmony(
compile_harmony(
cli_args.compile_method,
cli_args.compile_platform,
cli_args.harmony_path.clone(),
@@ -84,7 +83,7 @@ async fn main() {
.await
}
false => todo!("implement autodetect code"),
}
};
match cli_args.command {
Some(command) => match command {
@@ -103,8 +102,10 @@ async fn main() {
};
let check_output = Command::new(check_script)
.output()
.expect("failed to run check script");
.spawn()
.expect("failed to run check script")
.wait_with_output()
.unwrap();
info!(
"check stdout: {}, check stderr: {}",
String::from_utf8(check_output.stdout).expect("couldn't parse from utf8"),
@@ -112,18 +113,16 @@ async fn main() {
);
}
Commands::Deploy(args) => {
if args.staging {
todo!("implement staging deployment");
let deploy = if args.staging {
todo!("implement staging deployment")
} else if args.prod {
todo!("implement prod deployment")
} else {
Command::new(harmony_bin_path).arg("-y").arg("-a").spawn()
}
.expect("failed to run harmony deploy");
if args.prod {
todo!("implement prod deployment");
}
let deploy_output = Command::new(harmony_bin_path)
.arg("-y")
.arg("-a")
.output()
.expect("failed to run harmony deploy");
let deploy_output = deploy.wait_with_output().unwrap();
println!(
"deploy output: {}",
String::from_utf8(deploy_output.stdout).expect("couldn't parse from utf8")