Compare commits
26 commits
feat/slack...feat/kube-
| Author | SHA1 | Date |
|---|---|---|
| | de3e7869f7 | |
| | 57eabc9834 | |
| | cd40660350 | |
| | 2ca732cecd | |
| | 12eb4ae31f | |
| | a2be9457b9 | |
| | 0d56fbc09d | |
| | 56dc1e93c1 | |
| | 691540fe64 | |
| | 7e3f1b1830 | |
| | b631e8ccbb | |
| | 27f1a9dbdd | |
| | e7917843bc | |
| | 7cd541bdd8 | |
| | 270dd49567 | |
| | 0187300473 | |
| | bf16566b4e | |
| | 895fb02f4e | |
| | 88d6af9815 | |
| | 5aa9dc701f | |
| | f4ef895d2e | |
| | 6e7148a945 | |
| | 83453273c6 | |
| | 76ae5eb747 | |
| | 9c51040f3b | |
| | 19bd47a545 | |
.gitea/workflows/check.yml (new file, 14 lines)
@@ -0,0 +1,14 @@
name: Run Check Script
on:
  push:
  pull_request:

jobs:
  check:
    runs-on: rust-cargo
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run check script
        run: bash check.sh
CONTRIBUTING.md (new file, 36 lines)
@@ -0,0 +1,36 @@
# Contributing to the Harmony project

## Write small PRs

Aim for the smallest piece of work that is mergeable.

Mergeable means that:

- it does not break the build
- it moves the codebase one step forward

PRs can be many things; they do not have to be complete features.

### What a PR **should** be

- Introduce a new trait: this will be the place to discuss the new trait addition, its design, and its implementation
- A new implementation of a trait: e.g. a new concrete implementation of the LoadBalancer trait
- A new CI check: something that improves quality, robustness, or CI performance
- Documentation improvements
- Refactoring
- Bugfix

### What a PR **should not** be

- Large. Anything over 200 lines (excluding generated lines) should have a very good reason to be this large.
- A mix of refactoring, bug fixes, and new features.
- Introducing multiple new features or ideas at once.
- Multiple new implementations of a trait/functionality at once.

The general idea is to keep PRs small and single-purpose.

## Commit message formatting

We follow the Conventional Commits guidelines:

https://www.conventionalcommits.org/en/v1.0.0/
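
For example, commits in this compare range could be titled `feat(tenant): add TenantManager trait and k8s implementation` or `fix(helm): avoid panicking when KUBECONFIG is unset`. These titles are illustrative, not the actual messages behind the SHAs listed above.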
@@ -1,6 +1,6 @@
 # Architecture Decision Record: \<Title\>
 
-Name: \<Name\>
+Initial Author: \<Name\>
 
 Initial Date: \<Date\>
 
@@ -1,6 +1,6 @@
 # Architecture Decision Record: Helm and Kustomize Handling
 
-Name: Taha Hawa
+Initial Author: Taha Hawa
 
 Initial Date: 2025-04-15
 
@@ -1,7 +1,7 @@
 # Architecture Decision Record: Monitoring and Alerting
 
-Proposed by: Willem Rolleman
-Date: April 28 2025
+Initial Author : Willem Rolleman
+Date : April 28 2025
 
 ## Status
 
adr/011-multi-tenant-cluster.md (new file, 160 lines)
@@ -0,0 +1,160 @@
# Architecture Decision Record: Multi-Tenancy Strategy for Harmony Managed Clusters

Initial Author: Jean-Gabriel Gill-Couture

Initial Date: 2025-05-26

## Status

Proposed

## Context

Harmony manages production OKD/Kubernetes clusters that serve multiple clients with varying trust levels and operational requirements. We need a multi-tenancy strategy that provides:

1. **Strong isolation** between client workloads while maintaining operational simplicity
2. **Controlled API access** allowing clients self-service capabilities within defined boundaries
3. **Security-first approach** protecting both the cluster infrastructure and tenant data
4. **Harmony-native implementation** using our Score/Interpret pattern for automated tenant provisioning
5. **Scalable management** supporting both small trusted clients and larger enterprise customers

The official Kubernetes multi-tenancy documentation identifies two primary models: namespace-based isolation and virtual control planes per tenant. Given Harmony's focus on operational simplicity, provider-agnostic abstractions (ADR-003), and hexagonal architecture (ADR-002), we must choose an approach that balances security, usability, and maintainability.

Our clients represent a hybrid tenancy model:
- **Customer multi-tenancy**: Each client operates independently with no cross-tenant trust
- **Team multi-tenancy**: Individual clients may have multiple team members requiring coordinated access
- **API access requirement**: Unlike pure SaaS scenarios, clients need controlled Kubernetes API access for self-service operations

The official Kubernetes documentation on multi-tenancy heavily inspired this ADR: https://kubernetes.io/docs/concepts/security/multi-tenancy/

## Decision

Implement **namespace-based multi-tenancy** with the following architecture:

### 1. Network Security Model
- **Private cluster access**: Kubernetes API and OpenShift console accessible only via WireGuard VPN
- **No public exposure**: Control plane endpoints remain internal to prevent unauthorized access attempts
- **VPN-based authentication**: Initial access control through WireGuard client certificates

### 2. Tenant Isolation Strategy
- **Dedicated namespace per tenant**: Each client receives an isolated namespace with access limited to only the required resources and operations
- **Complete network isolation**: NetworkPolicies prevent cross-namespace communication while allowing full egress to the public internet (a sketch of such a policy follows this list)
- **Resource governance**: ResourceQuotas and LimitRanges enforce CPU, memory, and storage consumption limits
- **Storage access control**: Clients can create PersistentVolumeClaims but cannot directly manipulate PersistentVolumes or access other tenants' storage
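
As an editorial sketch (not part of the ADR text), the "complete network isolation" requirement above maps naturally onto a default-deny ingress NetworkPolicy. The snippet below uses the same `serde_json::json!` style as the `K8sTenantManager` introduced later in this PR; the policy name is an assumption:

```rust
use serde_json::json;

// Hypothetical sketch: allow ingress only from pods in the tenant's own
// namespace, deny everything else. Egress is left unrestricted, matching the
// "full egress to the public internet" requirement above.
fn default_deny_cross_namespace_ingress(tenant_namespace: &str) -> serde_json::Value {
    json!({
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {
            "name": "deny-cross-namespace-ingress", // assumed name
            "namespace": tenant_namespace
        },
        "spec": {
            "podSelector": {}, // selects every pod in the namespace
            "policyTypes": ["Ingress"],
            // An empty podSelector in `from` (with no namespaceSelector)
            // matches only pods in this same namespace.
            "ingress": [{ "from": [{ "podSelector": {} }] }]
        }
    })
}
```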

### 3. Access Control Framework
- **Principle of Least Privilege**: RBAC grants only necessary permissions within tenant namespace scope
- **Namespace-scoped**: Clients can create/modify/delete resources within their namespace
- **Cluster-level restrictions**: No access to cluster-wide resources, other namespaces, or sensitive cluster operations
- **Whitelisted operations**: Controlled self-service capabilities for ingress, secrets, configmaps, and workload management (a sketch of such a Role follows this list)
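
A matching editorial sketch for the RBAC side; the role name, API groups, and resource whitelist here are illustrative assumptions, not the final whitelist:

```rust
use serde_json::json;

// Hypothetical namespace-scoped Role: self-service on common workload
// resources, no cluster-scoped access. Pair with a RoleBinding per team member.
fn tenant_editor_role(tenant_namespace: &str) -> serde_json::Value {
    json!({
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": { "name": "tenant-editor", "namespace": tenant_namespace },
        "rules": [{
            "apiGroups": ["", "apps", "networking.k8s.io"],
            "resources": [
                "pods", "services", "secrets", "configmaps",
                "deployments", "ingresses", "persistentvolumeclaims"
            ],
            "verbs": ["get", "list", "watch", "create", "update", "patch", "delete"]
        }]
    })
}
```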

### 4. Identity Management Evolution
- **Phase 1**: Manual provisioning of VPN access and Kubernetes ServiceAccounts/Users
- **Phase 2**: Migration to Keycloak-based identity management (aligning with ADR-006) for centralized authentication and lifecycle management

### 5. Harmony Integration
- **TenantScore implementation**: Declarative tenant provisioning using Harmony's Score/Interpret pattern
- **Topology abstraction**: Tenant configuration abstracted from underlying Kubernetes implementation details
- **Automated deployment**: Complete tenant setup automated through Harmony's orchestration capabilities

## Rationale

### Network Security Through VPN Access
- **Defense in depth**: The VPN requirement adds a critical security layer preventing unauthorized cluster access
- **Simplified firewall rules**: No need for complex public endpoint protections or rate limiting
- **Audit capability**: VPN access provides a clear audit trail of cluster connections
- **Aligns with enterprise practices**: Most enterprise customers already use VPN infrastructure

### Namespace Isolation vs Virtual Control Planes
Following official Kubernetes guidance, namespace isolation provides:
- **Lower resource overhead**: Virtual control planes require a dedicated etcd, API server, and controller manager per tenant
- **Operational simplicity**: A single control plane to maintain, upgrade, and monitor
- **Cross-tenant service integration**: Enables future controlled cross-tenant communication if required
- **Proven stability**: Namespace-based isolation is well-tested and widely deployed
- **Cost efficiency**: Significantly lower infrastructure costs compared to dedicated control planes

### Hybrid Tenancy Model Suitability
Our approach addresses both customer and team multi-tenancy requirements:
- **Customer isolation**: Strong network and RBAC boundaries prevent cross-tenant interference
- **Team collaboration**: Multiple team members can share namespace access through group-based RBAC
- **Self-service balance**: Controlled API access enables client autonomy without compromising security

### Harmony Architecture Alignment
- **Provider agnostic**: TenantScore abstracts multi-tenancy concepts, enabling future support for other Kubernetes distributions
- **Hexagonal architecture**: Tenant management becomes an infrastructure capability accessed through well-defined ports
- **Declarative automation**: Tenant lifecycle fully managed through Harmony's Score execution model

## Consequences

### Positive Consequences
- **Strong security posture**: VPN + namespace isolation provides robust tenant separation
- **Operational efficiency**: Single-cluster management with automated tenant provisioning
- **Client autonomy**: Self-service capabilities reduce the operational support burden
- **Scalable architecture**: Can support hundreds of tenants per cluster without architectural changes
- **Future flexibility**: The foundation supports evolution to more sophisticated multi-tenancy models
- **Cost optimization**: Shared infrastructure maximizes resource utilization

### Negative Consequences
- **VPN operational overhead**: Requires VPN infrastructure management
- **Manual provisioning complexity**: Phase 1 manual user management creates an administrative burden
- **Network policy dependency**: Requires a CNI with NetworkPolicy support (OVN-Kubernetes provides this and is the OKD/OpenShift default)
- **Cluster-wide resource limitations**: Some advanced Kubernetes features require cluster-wide access
- **Single point of failure**: A cluster outage affects all tenants simultaneously

### Migration Challenges
- **Legacy client integration**: Existing clients may need VPN client setup and credential migration
- **Monitoring complexity**: Per-tenant observability requires careful metric and log segmentation
- **Backup considerations**: Tenant data backup must respect isolation boundaries

## Alternatives Considered

### Alternative 1: Virtual Control Plane Per Tenant
**Pros**: Complete control plane isolation, full Kubernetes API access per tenant
**Cons**: 3-5x higher resource usage, complex cross-tenant networking, operational complexity scales linearly with tenants

**Rejected**: Resource overhead incompatible with cost-effective multi-tenancy goals

### Alternative 2: Dedicated Clusters Per Tenant
**Pros**: Maximum isolation, independent upgrade cycles, simplified security model
**Cons**: Exponential operational complexity, prohibitive costs, resource waste

**Rejected**: Operational overhead makes this approach unsustainable for multiple clients

### Alternative 3: Public API with Advanced Authentication
**Pros**: No VPN requirement, potentially simpler client access
**Cons**: Larger attack surface, complex rate limiting and DDoS protection, increased security monitoring requirements

**Rejected**: Risk/benefit analysis favors VPN-based access control

### Alternative 4: Service Mesh Based Isolation
**Pros**: Fine-grained traffic control, encryption, advanced observability
**Cons**: Significant operational complexity, performance overhead, steep learning curve

**Rejected**: Complexity overhead outweighs the benefits for current requirements; remains an option for future enhancement

## Additional Notes

### Implementation Roadmap
1. **Phase 1**: Implement VPN access and manual tenant provisioning
2. **Phase 2**: Deploy TenantScore automation for namespace, RBAC, and NetworkPolicy management
3. **Phase 3**: Integrate Keycloak for centralized identity management
4. **Phase 4**: Add advanced monitoring and per-tenant observability

### TenantScore Structure Preview
```rust
pub struct TenantScore {
    pub tenant_config: TenantConfig,
    pub resource_quotas: ResourceQuotaConfig,
    pub network_isolation: NetworkIsolationPolicy,
    pub storage_access: StorageAccessConfig,
    pub rbac_config: RBACConfig,
}
```
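
Of these fields, only `TenantConfig` is made concrete elsewhere in this PR (harmony/src/domain/topology/tenant/mod.rs); the other `*Config` types are previews. A minimal editorial sketch of filling it; module paths and all values are assumptions, and since `Id` exposes no public constructor in this diff, the id is taken as a parameter:

```rust
use std::collections::HashMap;

// Paths assumed from the module layout added in this PR.
use harmony::data::Id;
use harmony::topology::tenant::{
    InterTenantIngressPolicy, InternetEgressPolicy, ResourceLimits, TenantConfig,
    TenantNetworkPolicy,
};

fn example_tenant_config(id: Id) -> TenantConfig {
    TenantConfig {
        id, // stable for the tenant's whole lifetime
        name: "client-alpha".to_string(),
        resource_limits: ResourceLimits {
            cpu_request_cores: 2.0,
            cpu_limit_cores: 4.0,
            memory_request_gb: 8.0,
            memory_limit_gb: 16.0,
            storage_total_gb: 100.0,
        },
        network_policy: TenantNetworkPolicy {
            default_inter_tenant_ingress: InterTenantIngressPolicy::DenyAll,
            default_internet_egress: InternetEgressPolicy::AllowAll,
        },
        labels_or_tags: HashMap::from([("billing".to_string(), "client-alpha".to_string())]),
    }
}
```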

### Future Enhancements
- **Cross-tenant service mesh**: For approved inter-tenant communication
- **Advanced monitoring**: Per-tenant Prometheus/Grafana instances
- **Backup automation**: Tenant-scoped backup policies
- **Cost allocation**: Detailed per-tenant resource usage tracking

This ADR establishes the foundation for secure, scalable multi-tenancy in Harmony-managed clusters while maintaining operational simplicity and cost effectiveness. A follow-up ADR will detail the Tenant abstraction and user management mechanisms within the Harmony framework.
@@ -4,9 +4,7 @@ use harmony::{
     maestro::Maestro,
     modules::{
         lamp::{LAMPConfig, LAMPScore},
-        monitoring::monitoring_alerting::{
-            AlertChannel, MonitoringAlertingStackScore, WebhookServiceType,
-        },
+        monitoring::monitoring_alerting::{AlertChannel, MonitoringAlertingStackScore},
     },
     topology::{K8sAnywhereTopology, Url},
 };
@@ -50,10 +48,6 @@ async fn main() {
 
     let mut monitoring_stack_score = MonitoringAlertingStackScore::new();
     monitoring_stack_score.namespace = Some(lamp_stack.config.namespace.clone());
-    monitoring_stack_score.alert_channel = Some(AlertChannel::WebHookUrl {
-        url: url,
-        webhook_service_type: WebhookServiceType::Discord,
-    });
 
     maestro.register_all(vec![Box::new(lamp_stack), Box::new(monitoring_stack_score)]);
     // Here we bootstrap the CLI, this gives some nice features if you need them
@@ -49,3 +49,4 @@ fqdn = { version = "0.4.6", features = [
     "serde",
 ] }
 temp-dir = "0.1.14"
+dyn-clone = "1.0.19"
@@ -1,6 +1,6 @@
 use serde::{Deserialize, Serialize};
 
-#[derive(Debug, Clone, Serialize, Deserialize)]
+#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
 pub struct Id {
     value: String,
 }
@@ -1,11 +1,12 @@
-use std::{process::Command, sync::Arc};
+use std::{collections::HashMap, process::Command, sync::Arc};
 
 use async_trait::async_trait;
 use inquire::Confirm;
 use log::{info, warn};
-use tokio::sync::OnceCell;
+use tokio::sync::{Mutex, OnceCell};
 
 use crate::{
+    executors::ExecutorError,
     interpret::{InterpretError, Outcome},
     inventory::Inventory,
     maestro::Maestro,
@@ -13,7 +14,14 @@ use crate::{
     topology::LocalhostTopology,
 };
 
-use super::{HelmCommand, K8sclient, Topology, k8s::K8sClient};
+use super::{
+    HelmCommand, K8sclient, Topology,
+    k8s::K8sClient,
+    oberservability::monitoring::AlertReceiver,
+    tenant::{
+        ResourceLimits, TenantConfig, TenantManager, TenantNetworkPolicy, k8s::K8sTenantManager,
+    },
+};
 
 struct K8sState {
     client: Arc<K8sClient>,
@@ -21,6 +29,7 @@ struct K8sState {
     message: String,
 }
 
+#[derive(Debug)]
 enum K8sSource {
     LocalK3d,
     Kubeconfig,
@@ -28,6 +37,8 @@ enum K8sSource {
 
 pub struct K8sAnywhereTopology {
     k8s_state: OnceCell<Option<K8sState>>,
+    tenant_manager: OnceCell<K8sTenantManager>,
+    pub alert_receivers: Mutex<HashMap<String, OnceCell<AlertReceiver>>>,
 }
 
 #[async_trait]
@@ -51,6 +62,8 @@ impl K8sAnywhereTopology {
     pub fn new() -> Self {
         Self {
             k8s_state: OnceCell::new(),
+            tenant_manager: OnceCell::new(),
+            alert_receivers: Mutex::new(HashMap::new()),
         }
     }
 
@@ -159,6 +172,15 @@ impl K8sAnywhereTopology {
 
         Ok(Some(state))
     }
 
+    fn get_k8s_tenant_manager(&self) -> Result<&K8sTenantManager, ExecutorError> {
+        match self.tenant_manager.get() {
+            Some(t) => Ok(t),
+            None => Err(ExecutorError::UnexpectedError(
+                "K8sTenantManager not available".to_string(),
+            )),
+        }
+    }
 }
 
 struct K8sAnywhereConfig {
@@ -209,3 +231,38 @@ impl Topology for K8sAnywhereTopology {
 }
 
 impl HelmCommand for K8sAnywhereTopology {}
+
+#[async_trait]
+impl TenantManager for K8sAnywhereTopology {
+    async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError> {
+        self.get_k8s_tenant_manager()?
+            .provision_tenant(config)
+            .await
+    }
+
+    async fn update_tenant_resource_limits(
+        &self,
+        tenant_name: &str,
+        new_limits: &ResourceLimits,
+    ) -> Result<(), ExecutorError> {
+        self.get_k8s_tenant_manager()?
+            .update_tenant_resource_limits(tenant_name, new_limits)
+            .await
+    }
+
+    async fn update_tenant_network_policy(
+        &self,
+        tenant_name: &str,
+        new_policy: &TenantNetworkPolicy,
+    ) -> Result<(), ExecutorError> {
+        self.get_k8s_tenant_manager()?
+            .update_tenant_network_policy(tenant_name, new_policy)
+            .await
+    }
+
+    async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError> {
+        self.get_k8s_tenant_manager()?
+            .deprovision_tenant(tenant_name)
+            .await
+    }
+}
@@ -7,6 +7,12 @@ use serde::Serialize;
 use super::{IpAddress, LogicalHost};
 use crate::executors::ExecutorError;
 
+impl std::fmt::Debug for dyn LoadBalancer {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.write_fmt(format_args!("LoadBalancer {}", self.get_ip()))
+    }
+}
+
 #[async_trait]
 pub trait LoadBalancer: Send + Sync {
     fn get_ip(&self) -> IpAddress;
@@ -32,11 +38,6 @@ pub trait LoadBalancer: Send + Sync {
     }
 }
 
-impl std::fmt::Debug for dyn LoadBalancer {
-    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        f.write_fmt(format_args!("LoadBalancer {}", self.get_ip()))
-    }
-}
 #[derive(Debug, PartialEq, Clone, Serialize)]
 pub struct LoadBalancerService {
     pub backend_servers: Vec<BackendServer>,
@@ -3,6 +3,8 @@ mod host_binding;
 mod http;
 mod k8s_anywhere;
 mod localhost;
+pub mod oberservability;
+pub mod tenant;
 pub use k8s_anywhere::*;
 pub use localhost::*;
 pub mod k8s;
harmony/src/domain/topology/oberservability/mod.rs (new file, 1 line)
@@ -0,0 +1 @@
pub mod monitoring;
harmony/src/domain/topology/oberservability/monitoring.rs (new file, 33 lines)
@@ -0,0 +1,33 @@
use async_trait::async_trait;
use dyn_clone::DynClone;
use serde::Serialize;

use std::fmt::Debug;

use crate::interpret::InterpretError;

use crate::{interpret::Outcome, topology::Topology};

/// Represents an entity responsible for collecting and organizing observability data
/// from various telemetry sources.
/// A `Monitor` abstracts the logic required to scrape, aggregate, and structure
/// monitoring data, enabling consistent processing regardless of the underlying data source.
#[async_trait]
pub trait Monitor<T: Topology>: Debug + Send + Sync {
    async fn deploy_monitor(&self, topology: &T) -> Result<Outcome, InterpretError>;

    async fn delete_monitor(&self, topolgy: &T) -> Result<Outcome, InterpretError>;
}

#[async_trait]
pub trait AlertReceiverDeployment<T: Topology>: Debug + DynClone + Send + Sync {
    async fn deploy_alert_receiver(&self, topology: &T) -> Result<Outcome, InterpretError>;
}

dyn_clone::clone_trait_object!(<T> AlertReceiverDeployment<T>);

#[derive(Debug, Clone, Serialize)]
pub struct AlertReceiver {
    pub receiver_id: String,
    pub receiver_installed: bool,
}
harmony/src/domain/topology/tenant/k8s.rs (new file, 95 lines)
@@ -0,0 +1,95 @@
use std::sync::Arc;

use crate::{executors::ExecutorError, topology::k8s::K8sClient};
use async_trait::async_trait;
use derive_new::new;
use k8s_openapi::api::core::v1::Namespace;
use serde_json::json;

use super::{ResourceLimits, TenantConfig, TenantManager, TenantNetworkPolicy};

#[derive(new)]
pub struct K8sTenantManager {
    k8s_client: Arc<K8sClient>,
}

#[async_trait]
impl TenantManager for K8sTenantManager {
    async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError> {
        let namespace = json!(
            {
                "apiVersion": "v1",
                "kind": "Namespace",
                "metadata": {
                    "labels": {
                        "harmony.nationtech.io/tenant.id": config.id,
                        "harmony.nationtech.io/tenant.name": config.name,
                    },
                    "name": config.name,
                },
            }
        );
        todo!("Validate that when tenant already exists (by id) that name has not changed");

        let namespace: Namespace = serde_json::from_value(namespace).unwrap();

        let resource_quota = json!(
            {
                "apiVersion": "v1",
                "kind": "List",
                "items": [
                    {
                        "apiVersion": "v1",
                        "kind": "ResourceQuota",
                        "metadata": {
                            "name": config.name,
                            "labels": {
                                "harmony.nationtech.io/tenant.id": config.id,
                                "harmony.nationtech.io/tenant.name": config.name,
                            },
                            "namespace": config.name,
                        },
                        "spec": {
                            "hard": {
                                "limits.cpu": format!("{:.0}", config.resource_limits.cpu_limit_cores),
                                "limits.memory": format!("{:.3}Gi", config.resource_limits.memory_limit_gb),
                                "requests.cpu": format!("{:.0}", config.resource_limits.cpu_request_cores),
                                "requests.memory": format!("{:.3}Gi", config.resource_limits.memory_request_gb),
                                "requests.storage": format!("{:.3}", config.resource_limits.storage_total_gb),
                                "pods": "20",
                                "services": "10",
                                "configmaps": "30",
                                "secrets": "30",
                                "persistentvolumeclaims": "15",
                                "services.loadbalancers": "2",
                                "services.nodeports": "5",
                            }
                        }
                    }
                ]
            }
        );
    }

    async fn update_tenant_resource_limits(
        &self,
        tenant_name: &str,
        new_limits: &ResourceLimits,
    ) -> Result<(), ExecutorError> {
        todo!()
    }

    async fn update_tenant_network_policy(
        &self,
        tenant_name: &str,
        new_policy: &TenantNetworkPolicy,
    ) -> Result<(), ExecutorError> {
        todo!()
    }

    async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError> {
        todo!()
    }
}
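
Review note: as written, `provision_tenant` stops at the `todo!`, never applies the manifests it builds (`resource_quota` is unused), and never returns `Ok(())`. A sketch of the missing tail, assuming a hypothetical `K8sClient::apply_yaml` helper; the real `K8sClient` API is not shown in this diff:

```rust
// Hypothetical continuation, to run after the duplicate-id/name validation
// that the todo! calls for. `apply_yaml` is an assumed method on K8sClient.
async fn apply_tenant_manifests(
    client: &K8sClient,
    namespace: &serde_json::Value,
    resource_quota: &serde_json::Value,
) -> Result<(), ExecutorError> {
    client.apply_yaml(namespace).await?;      // create/update the Namespace first
    client.apply_yaml(resource_quota).await?; // then enforce the ResourceQuota
    Ok(())
}
```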
harmony/src/domain/topology/tenant/manager.rs (new file, 46 lines)
@@ -0,0 +1,46 @@
use super::*;
use async_trait::async_trait;

use crate::executors::ExecutorError;

#[async_trait]
pub trait TenantManager {
    /// Provisions a new tenant based on the provided configuration.
    /// This operation should be idempotent; if a tenant with the same `config.name`
    /// already exists and matches the config, it will succeed without changes.
    /// If it exists but differs, it will be updated, or return an error if the update
    /// action is not supported.
    ///
    /// # Arguments
    /// * `config`: The desired configuration for the new tenant.
    async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError>;

    /// Updates the resource limits for an existing tenant.
    ///
    /// # Arguments
    /// * `tenant_name`: The logical name of the tenant to update.
    /// * `new_limits`: The new set of resource limits to apply.
    async fn update_tenant_resource_limits(
        &self,
        tenant_name: &str,
        new_limits: &ResourceLimits,
    ) -> Result<(), ExecutorError>;

    /// Updates the high-level network isolation policy for an existing tenant.
    ///
    /// # Arguments
    /// * `tenant_name`: The logical name of the tenant to update.
    /// * `new_policy`: The new network policy to apply.
    async fn update_tenant_network_policy(
        &self,
        tenant_name: &str,
        new_policy: &TenantNetworkPolicy,
    ) -> Result<(), ExecutorError>;

    /// Decommissions an existing tenant, removing its isolated context and associated resources.
    /// This operation should be idempotent.
    ///
    /// # Arguments
    /// * `tenant_name`: The logical name of the tenant to deprovision.
    async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError>;
}
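
A minimal usage sketch for the trait above (module paths assumed); `manager` can be any implementor, e.g. the `K8sTenantManager` from this PR:

```rust
use harmony::executors::ExecutorError;
use harmony::topology::tenant::{TenantConfig, TenantManager};

// Provision idempotently, then tear the tenant down by its logical name.
async fn tenant_lifecycle(
    manager: &impl TenantManager,
    config: &TenantConfig,
) -> Result<(), ExecutorError> {
    manager.provision_tenant(config).await?; // safe to repeat per the doc contract
    manager.deprovision_tenant(&config.name).await?;
    Ok(())
}
```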
harmony/src/domain/topology/tenant/mod.rs (new file, 67 lines)
@@ -0,0 +1,67 @@
pub mod k8s;
mod manager;
pub use manager::*;
use serde::{Deserialize, Serialize};

use std::collections::HashMap;

use crate::data::Id;

#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)] // Assuming serde for Scores
pub struct TenantConfig {
    /// This will be used as the primary unique identifier for management operations and will never
    /// change for the entire lifetime of the tenant
    pub id: Id,

    /// A human-readable name for the tenant (e.g., "client-alpha", "project-phoenix").
    pub name: String,

    /// Desired resource allocations and limits for the tenant.
    pub resource_limits: ResourceLimits,

    /// High-level network isolation policies for the tenant.
    pub network_policy: TenantNetworkPolicy,

    /// Key-value pairs for provider-specific tagging, labeling, or metadata.
    /// Useful for billing, organization, or filtering within the provider's console.
    pub labels_or_tags: HashMap<String, String>,
}

#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Default)]
pub struct ResourceLimits {
    /// Requested/guaranteed CPU cores (e.g., 2.0).
    pub cpu_request_cores: f32,
    /// Maximum CPU cores the tenant can burst to (e.g., 4.0).
    pub cpu_limit_cores: f32,

    /// Requested/guaranteed memory in Gigabytes (e.g., 8.0).
    pub memory_request_gb: f32,
    /// Maximum memory in Gigabytes the tenant can burst to (e.g., 16.0).
    pub memory_limit_gb: f32,

    /// Total persistent storage allocation in Gigabytes across all volumes.
    pub storage_total_gb: f32,
}

#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct TenantNetworkPolicy {
    /// Policy for ingress traffic originating from other tenants within the same Harmony-managed environment.
    pub default_inter_tenant_ingress: InterTenantIngressPolicy,

    /// Policy for egress traffic destined for the public internet.
    pub default_internet_egress: InternetEgressPolicy,
}

#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub enum InterTenantIngressPolicy {
    /// Deny all traffic from other tenants by default.
    DenyAll,
}

#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub enum InternetEgressPolicy {
    /// Allow all outbound traffic to the internet.
    AllowAll,
    /// Deny all outbound traffic to the internet by default.
    DenyAll,
}
@@ -23,7 +23,7 @@ pub struct HelmRepository {
     force_update: bool,
 }
 impl HelmRepository {
-    pub(crate) fn new(name: String, url: Url, force_update: bool) -> Self {
+    pub fn new(name: String, url: Url, force_update: bool) -> Self {
         Self {
             name,
             url,
@@ -104,7 +104,10 @@ impl HelmChartInterpret {
 
 fn run_helm_command(args: &[&str]) -> Result<Output, InterpretError> {
     let command_str = format!("helm {}", args.join(" "));
-    debug!("Got KUBECONFIG: `{}`", std::env::var("KUBECONFIG").unwrap());
+    debug!(
+        "Got KUBECONFIG: `{}`",
+        std::env::var("KUBECONFIG").unwrap_or("".to_string())
+    );
     debug!("Running Helm command: `{}`", command_str);
 
     let output = Command::new("helm")
@@ -1,12 +1,9 @@
 use async_trait::async_trait;
 use log::debug;
-use non_blank_string_rs::NonBlankString;
 use serde::Serialize;
 use std::collections::HashMap;
-use std::env::temp_dir;
-use std::ffi::OsStr;
 use std::io::ErrorKind;
-use std::path::{Path, PathBuf};
+use std::path::PathBuf;
 use std::process::{Command, Output};
 use temp_dir::{self, TempDir};
 use temp_file::TempFile;
harmony/src/modules/monitoring/alertmanager_types.rs (new file, 102 lines)
@@ -0,0 +1,102 @@
use serde::{Deserialize, Serialize};
use url::Url;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AlertManagerValues {
    pub alertmanager: AlertManager,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AlertManager {
    pub enabled: bool,
    pub config: AlertManagerConfig,
}

#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct AlertChannelConfig {
    pub receiver: AlertChannelReceiver,
    pub route: AlertChannelRoute,
    pub global_config: Option<AlertChannelGlobalConfig>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AlertChannelReceiver {
    pub name: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub slack_configs: Option<Vec<SlackConfig>>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub webhook_configs: Option<Vec<WebhookConfig>>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AlertManagerRoute {
    pub group_by: Vec<String>,
    pub group_wait: String,
    pub group_interval: String,
    pub repeat_interval: String,
    pub routes: Vec<AlertChannelRoute>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AlertChannelGlobalConfig {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub slack_api_url: Option<Url>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SlackConfig {
    pub channel: String,
    pub send_resolved: bool,
    pub title: String,
    pub text: String,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WebhookConfig {
    pub url: Url,
    pub send_resolved: bool,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AlertChannelRoute {
    pub receiver: String,
    pub matchers: Vec<String>,
    #[serde(default)]
    pub r#continue: bool,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AlertManagerConfig {
    pub global: Option<AlertChannelGlobalConfig>,
    pub route: AlertManagerRoute,
    pub receivers: Vec<AlertChannelReceiver>,
}

impl AlertManagerValues {
    pub fn default() -> Self {
        Self {
            alertmanager: AlertManager {
                enabled: true,
                config: AlertManagerConfig {
                    global: None,
                    route: AlertManagerRoute {
                        group_by: vec!["job".to_string()],
                        group_wait: "30s".to_string(),
                        group_interval: "5m".to_string(),
                        repeat_interval: "12h".to_string(),
                        routes: vec![AlertChannelRoute {
                            receiver: "null".to_string(),
                            matchers: vec!["alertname=Watchdog".to_string()],
                            r#continue: false,
                        }],
                    },
                    receivers: vec![AlertChannelReceiver {
                        name: "null".to_string(),
                        slack_configs: None,
                        webhook_configs: None,
                    }],
                },
            },
        }
    }
}
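
Since these structs exist to be rendered into Helm values, a quick editorial sketch of what `AlertManagerValues::default()` serializes to; only `serde_yaml::to_string` is relied upon, and the expected output is assumed from the derives above:

```rust
// Render the default config to YAML, e.g. for a Helm values file.
fn default_alertmanager_yaml() -> String {
    serde_yaml::to_string(&AlertManagerValues::default())
        .expect("serializing a static struct should not fail")
}
// Expected shape (abbreviated, assumed):
// alertmanager:
//   enabled: true
//   config:
//     global: null
//     route:
//       group_by:
//       - job
//       group_wait: 30s
//       ...
//     receivers:
//     - name: "null"
```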
@@ -2,7 +2,6 @@ use serde::Serialize;
 
 use super::monitoring_alerting::AlertChannel;
 
-
 #[derive(Debug, Clone, Serialize)]
 pub struct KubePrometheusConfig {
     pub namespace: String,
@@ -1,46 +1,35 @@
 use std::str::FromStr;
 
 use non_blank_string_rs::NonBlankString;
+use url::Url;
 
 use crate::modules::helm::chart::HelmChartScore;
 
-use super::{config::KubePrometheusConfig, monitoring_alerting::AlertChannel};
-
-fn get_discord_alert_manager_score(config: &KubePrometheusConfig) -> Option<HelmChartScore> {
-    let (url, name) = config.alert_channel.iter().find_map(|channel| {
-        if let AlertChannel::Discord { webhook_url, name } = channel {
-            Some((webhook_url, name))
-        } else {
-            None
-        }
-    })?;
-
+pub fn discord_alert_manager_score(
+    webhook_url: Url,
+    namespace: String,
+    name: String,
+) -> HelmChartScore {
     let values = format!(
         r#"
 environment:
   - name: "DISCORD_WEBHOOK"
-    value: "{url}"
+    value: "{webhook_url}"
 "#,
     );
 
-    Some(HelmChartScore {
-        namespace: Some(NonBlankString::from_str(&config.namespace).unwrap()),
+    HelmChartScore {
+        namespace: Some(NonBlankString::from_str(&namespace).unwrap()),
         release_name: NonBlankString::from_str(&name).unwrap(),
-        chart_name: NonBlankString::from_str("oci://hub.nationtech.io/library/alertmanager-discord")
-            .unwrap(),
+        chart_name: NonBlankString::from_str(
+            "oci://hub.nationtech.io/library/alertmanager-discord",
+        )
+        .unwrap(),
         chart_version: None,
         values_overrides: None,
         values_yaml: Some(values.to_string()),
         create_namespace: true,
         install_only: true,
         repository: None,
-    })
-}
-
-pub fn discord_alert_manager_score(config: &KubePrometheusConfig) -> HelmChartScore {
-    if let Some(chart) = get_discord_alert_manager_score(config) {
-        chart
-    } else {
-        panic!("Expected discord alert manager helm chart");
     }
 }
harmony/src/modules/monitoring/discord_webhook_sender.rs (new file, 168 lines)
@@ -0,0 +1,168 @@
use super::{
    discord_alert_manager::discord_alert_manager_score, kube_prometheus_monitor::AlertManagerConfig,
};
use async_trait::async_trait;
use serde::Serialize;
use serde_yaml::Value;
use tokio::sync::OnceCell;
use url::Url;

use crate::{
    data::{Id, Version},
    interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
    inventory::Inventory,
    score::Score,
    topology::{
        HelmCommand, K8sAnywhereTopology, Topology,
        oberservability::monitoring::{AlertReceiver, AlertReceiverDeployment},
    },
};

#[async_trait]
impl<T: Topology + DiscordWebhookReceiver> AlertReceiverDeployment<T> for DiscordWebhookConfig {
    async fn deploy_alert_receiver(&self, topology: &T) -> Result<Outcome, InterpretError> {
        topology.deploy_discord_webhook_receiver(self.clone()).await
    }
}

#[derive(Debug, Clone, Serialize)]
pub struct DiscordWebhookConfig {
    pub webhook_url: Url,
    pub name: String,
    pub send_resolved_notifications: bool,
}

#[async_trait]
pub trait DiscordWebhookReceiver {
    async fn deploy_discord_webhook_receiver(
        &self,
        config: DiscordWebhookConfig,
    ) -> Result<Outcome, InterpretError>;
    fn delete_discord_webhook_receiver(
        &self,
        config: DiscordWebhookConfig,
    ) -> Result<Outcome, InterpretError>;
}

#[async_trait]
impl<T: DiscordWebhookReceiver> AlertManagerConfig<T> for DiscordWebhookConfig {
    async fn get_alert_manager_config(&self) -> Result<Value, InterpretError> {
        todo!()
    }
}

#[async_trait]
impl DiscordWebhookReceiver for K8sAnywhereTopology {
    async fn deploy_discord_webhook_receiver(
        &self,
        config: DiscordWebhookConfig,
    ) -> Result<Outcome, InterpretError> {
        let receiver_key = config.name.clone();
        let mut adapters_map_guard = self.alert_receivers.lock().await;

        let cell = adapters_map_guard
            .entry(receiver_key.clone())
            .or_insert_with(OnceCell::new);

        if let Some(initialized_receiver) = cell.get() {
            return Ok(Outcome::success(format!(
                "Discord Webhook adapter for '{}' already initialized.",
                initialized_receiver.receiver_id
            )));
        }

        let final_state = cell
            .get_or_try_init(|| async {
                initialize_discord_webhook_receiver(config.clone(), self).await
            })
            .await?;

        Ok(Outcome::success(format!(
            "Discord Webhook Receiver for '{}' ensured/initialized.",
            final_state.receiver_id
        )))
    }

    fn delete_discord_webhook_receiver(
        &self,
        _config: DiscordWebhookConfig,
    ) -> Result<Outcome, InterpretError> {
        todo!()
    }
}

async fn initialize_discord_webhook_receiver(
    conf: DiscordWebhookConfig,
    topology: &K8sAnywhereTopology,
) -> Result<AlertReceiver, InterpretError> {
    println!(
        "Attempting to initialize Discord adapter for: {}",
        conf.name
    );
    let score = DiscordWebhookReceiverScore {
        config: conf.clone(),
    };
    let inventory = Inventory::autoload();
    let interpret = score.create_interpret();

    interpret.execute(&inventory, topology).await?;

    Ok(AlertReceiver {
        receiver_id: conf.name,
        receiver_installed: true,
    })
}

#[derive(Debug, Clone, Serialize)]
struct DiscordWebhookReceiverScore {
    config: DiscordWebhookConfig,
}

impl<T: Topology + HelmCommand> Score<T> for DiscordWebhookReceiverScore {
    fn create_interpret(&self) -> Box<dyn Interpret<T>> {
        Box::new(DiscordWebhookReceiverScoreInterpret {
            config: self.config.clone(),
        })
    }

    fn name(&self) -> String {
        "DiscordWebhookReceiverScore".to_string()
    }
}

#[derive(Debug)]
struct DiscordWebhookReceiverScoreInterpret {
    config: DiscordWebhookConfig,
}

#[async_trait]
impl<T: Topology + HelmCommand> Interpret<T> for DiscordWebhookReceiverScoreInterpret {
    async fn execute(
        &self,
        inventory: &Inventory,
        topology: &T,
    ) -> Result<Outcome, InterpretError> {
        discord_alert_manager_score(
            self.config.webhook_url.clone(),
            self.config.name.clone(),
            self.config.name.clone(),
        )
        .create_interpret()
        .execute(inventory, topology)
        .await
    }

    fn get_name(&self) -> InterpretName {
        todo!()
    }

    fn get_version(&self) -> Version {
        todo!()
    }

    fn get_status(&self) -> InterpretStatus {
        todo!()
    }

    fn get_children(&self) -> Vec<Id> {
        todo!()
    }
}
harmony/src/modules/monitoring/kube_prometheus_monitor.rs (new file, 108 lines)
@@ -0,0 +1,108 @@
use async_trait::async_trait;
use serde::Serialize;
use serde_yaml::Value;

use crate::{
    data::{Id, Version},
    interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
    inventory::Inventory,
    score::Score,
    topology::{
        HelmCommand, Topology,
        oberservability::monitoring::{AlertReceiverDeployment, Monitor},
    },
};

use super::{
    config::KubePrometheusConfig, kube_prometheus_helm_chart::kube_prometheus_helm_chart_score,
};

#[derive(Debug, Clone)]
pub struct KubePrometheus<T> {
    alert_receivers: Vec<Box<dyn AlertReceiverDeployment<T>>>,
    config: KubePrometheusConfig,
}

#[async_trait]
pub trait AlertManagerConfig<T> {
    async fn get_alert_manager_config(&self) -> Result<Value, InterpretError>;
}

impl<T: Topology> KubePrometheus<T> {
    pub fn new() -> Self {
        Self {
            alert_receivers: Vec::new(),
            config: KubePrometheusConfig::new(),
        }
    }
}

#[async_trait]
impl<T: Topology + HelmCommand + std::fmt::Debug> Monitor<T> for KubePrometheus<T> {
    async fn deploy_monitor(&self, topology: &T) -> Result<Outcome, InterpretError> {
        for alert_receiver in &self.alert_receivers {
            alert_receiver.deploy_alert_receiver(topology).await?;
        }
        let score = KubePrometheusScore {
            config: self.config.clone(),
        };
        let inventory = Inventory::autoload();
        score.create_interpret().execute(&inventory, topology).await
    }

    async fn delete_monitor(&self, _topolgy: &T) -> Result<Outcome, InterpretError> {
        todo!()
    }
}

#[derive(Debug, Clone, Serialize)]
struct KubePrometheusScore {
    config: KubePrometheusConfig,
}

impl<T: Topology + HelmCommand> Score<T> for KubePrometheusScore {
    fn create_interpret(&self) -> Box<dyn Interpret<T>> {
        Box::new(KubePromethusScoreInterpret {
            score: self.clone(),
        })
    }

    fn name(&self) -> String {
        todo!()
    }
}

#[derive(Debug, Clone, Serialize)]
struct KubePromethusScoreInterpret {
    score: KubePrometheusScore,
}

#[async_trait]
impl<T: Topology + HelmCommand> Interpret<T> for KubePromethusScoreInterpret {
    async fn execute(
        &self,
        inventory: &Inventory,
        topology: &T,
    ) -> Result<Outcome, InterpretError> {
        kube_prometheus_helm_chart_score(&self.score.config)
            .create_interpret()
            .execute(inventory, topology)
            .await
    }

    fn get_name(&self) -> InterpretName {
        todo!()
    }

    fn get_version(&self) -> Version {
        todo!()
    }

    fn get_status(&self) -> InterpretStatus {
        todo!()
    }

    fn get_children(&self) -> Vec<Id> {
        todo!()
    }
}
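
Review note: `KubePrometheus::new()` starts with an empty receiver list, and nothing in this diff populates `alert_receivers`. A sketch of the intended flow, assuming a hypothetical `add_receiver` helper that does not exist here:

```rust
// Hypothetical wiring; `add_receiver` is NOT in this diff and is assumed.
async fn deploy(topology: &K8sAnywhereTopology, discord: DiscordWebhookConfig) {
    let mut monitor = KubePrometheus::new();
    monitor.add_receiver(Box::new(discord));         // assumed helper
    monitor.deploy_monitor(topology).await.unwrap(); // receivers first, then kube-prometheus
}
```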
@@ -1,4 +1,7 @@
-mod kube_prometheus;
-pub mod monitoring_alerting;
-mod discord_alert_manager;
+pub mod alertmanager_types;
 mod config;
+mod discord_alert_manager;
+pub mod discord_webhook_sender;
+mod kube_prometheus_helm_chart;
+pub mod kube_prometheus_monitor;
+pub mod monitoring_alerting;
@@ -14,8 +14,7 @@ use crate::{
 };
 
 use super::{
-    config::KubePrometheusConfig, discord_alert_manager::discord_alert_manager_score,
-    kube_prometheus::kube_prometheus_helm_chart_score,
+    config::KubePrometheusConfig, kube_prometheus_helm_chart::kube_prometheus_helm_chart_score,
 };
 
 #[derive(Debug, Clone, Serialize)]
@@ -96,28 +95,28 @@ impl MonitoringAlertingStackInterpret {
         topology: &T,
         config: &KubePrometheusConfig,
     ) -> Result<Outcome, InterpretError> {
-        let mut outcomes = vec![];
-        for channel in &self.score.alert_channel {
-            let outcome = match channel {
-                AlertChannel::Discord { .. } => {
-                    discord_alert_manager_score(config)
-                        .create_interpret()
-                        .execute(inventory, topology)
-                        .await
-                }
-                AlertChannel::Slack { .. } => Ok(Outcome::success(
-                    "No extra configs for slack alerting".to_string(),
-                )),
-                AlertChannel::Smpt { .. } => {
-                    todo!()
-                }
-            };
-            outcomes.push(outcome);
-        }
-        for result in outcomes {
-            result?;
-        }
+        //let mut outcomes = vec![];
+        //for channel in &self.score.alert_channel {
+        //    let outcome = match channel {
+        //        AlertChannel::Discord { .. } => {
+        //            discord_alert_manager_score(config)
+        //                .create_interpret()
+        //                .execute(inventory, topology)
+        //                .await
+        //        }
+        //        AlertChannel::Slack { .. } => Ok(Outcome::success(
+        //            "No extra configs for slack alerting".to_string(),
+        //        )),
+        //        AlertChannel::Smpt { .. } => {
+        //            todo!()
+        //        }
+        //    };
+        //    outcomes.push(outcome);
+        //}
+        //for result in outcomes {
+        //    result?;
+        //}
 
         Ok(Outcome::success("All alert channels deployed".to_string()))
     }