Compare commits: 100 commits, 80bdd0ee8a...feat/tenan
| SHA1 | Author | Date |
|---|---|---|
| 31e59937dc | |||
| 12eb4ae31f | |||
| a2be9457b9 | |||
| 0d56fbc09d | |||
| 56dc1e93c1 | |||
| 691540fe64 | |||
| 7e3f1b1830 | |||
| b631e8ccbb | |||
| 60f2f31d6c | |||
| 27f1a9dbdd | |||
| e7917843bc | |||
| 7cd541bdd8 | |||
| 270dd49567 | |||
| 0187300473 | |||
| bf16566b4e | |||
| 895fb02f4e | |||
| 88d6af9815 | |||
| 5aa9dc701f | |||
| f4ef895d2e | |||
| 6e7148a945 | |||
| 83453273c6 | |||
| 76ae5eb747 | |||
| 9c51040f3b | |||
| e1a8ee1c15 | |||
| 44b2b092a8 | |||
| 19bd47a545 | |||
| 2b6d2e8606 | |||
| 7fc2b1ebfe | |||
| e80752ea3f | |||
| bae7222d64 | |||
| f7d3da3ac9 | |||
| eb8a8a2e04 | |||
| b4c6848433 | |||
| 0d94c537a0 | |||
| 861f266c4e | |||
| 51724d0e55 | |||
| c2d1cb9b76 | |||
| c84a02c8ec | |||
| 8d3d167848 | |||
| 94f6cc6942 | |||
| 4a9b95acad | |||
| ef9c1cce77 | |||
| df65ac3439 | |||
| e5ddd296db | |||
| 4be008556e | |||
| 78e9893341 | |||
| d9921b857b | |||
| e62ef001ed | |||
| 1fb7132c64 | |||
| 2d74c66fc6 | |||
| 8a199b64f5 | |||
| b7fe62fcbb | |||
| cd8542258c | |||
| 472a3c1051 | |||
| 88270ece61 | |||
| e7cfbf914a | |||
| fbd466a85c | |||
| 2f8e150f41 | |||
| 764fd6d451 | |||
| 78fffcd725 | |||
| e1133ea114 | |||
| d8e8a49745 | |||
| a7ba9be486 | |||
| 1c3669cb47 | |||
| 90b80b24bc | |||
| c879ca143f | |||
| bc2bd2f2f4 | |||
| 28978299c9 | |||
| 87f6afc249 | |||
| 254f392cb5 | |||
| a6bcaade46 | |||
| 6c145f1100 | |||
| 40cd765019 | |||
| db9c8d83e6 | |||
| 20551b4a80 | |||
| 5c026ae6dd | |||
| 76c0cacc1b | |||
| f17948397f | |||
| 16a665241e | |||
| 065e3904b8 | |||
| 22752960f9 | |||
| 23971ecd7c | |||
| fbcd3e4f7f | |||
| d307893f15 | |||
| 00c0566533 | |||
| f5e3f1aaea | |||
| 508b97ca7c | |||
| c8547e38f2 | |||
| bfc79abfb6 | |||
| 7697a170bd | |||
| 941c9bc0b0 | |||
| 51aeea1ec9 | |||
| 8118df85ee | |||
| 7af83910ef | |||
| 1475f4af0c | |||
| a3a61c734f | |||
| 3f77bc7aef | |||
| d5125dd811 | |||
| 1ca316c085 | |||
| e390f1edb3 |
5
.cargo/config.toml
Normal file
@@ -0,0 +1,5 @@
|
||||
[target.x86_64-pc-windows-msvc]
|
||||
rustflags = ["-C", "link-arg=/STACK:8000000"]
|
||||
|
||||
[target.x86_64-pc-windows-gnu]
|
||||
rustflags = ["-C", "link-arg=-Wl,--stack,8000000"]
|
||||
14
.gitea/workflows/check.yml
Normal file
@@ -0,0 +1,14 @@
|
||||
name: Run Check Script
|
||||
on:
|
||||
push:
|
||||
pull_request:
|
||||
|
||||
jobs:
|
||||
check:
|
||||
runs-on: rust-cargo
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Run check script
|
||||
run: bash check.sh
|
||||
36
CONTRIBUTING.md
Normal file
@@ -0,0 +1,36 @@
# Contributing to the Harmony project

## Write small PRs

Aim for the smallest piece of work that is mergeable.

Mergeable means that:

- it does not break the build
- it moves the codebase one step forward

PRs can be many things; they do not have to be complete features.

### What a PR **should** be

- Introduce a new trait: this is the place to discuss the new trait, its design and its implementation
- A new implementation of a trait: for example, a new concrete implementation of the LoadBalancer trait
- A new CI check: something that improves quality, robustness or CI performance
- Documentation improvements
- Refactoring
- Bugfix

### What a PR **should not** be

- Large. Anything over 200 lines (excluding generated lines) should have a very good reason to be this large.
- A mix of refactoring, bug fixes and new features.
- Introducing multiple new features or ideas at once.
- Multiple new implementations of a trait/functionality at once.

The general idea is to keep PRs small and single-purpose.

## Commit message formatting

We follow the Conventional Commits guidelines.

https://www.conventionalcommits.org/en/v1.0.0/
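For instance, commit messages in that format would look like the following (hypothetical examples, not actual commits from this range):

```
feat(dhcp): support the filename field in DhcpScore
fix(opnsense): accept the filename element when parsing config.xml
```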
519
Cargo.lock
generated
File diff suppressed because it is too large
Load Diff
@@ -35,6 +35,7 @@ serde_yaml = "0.9.34"
|
||||
serde-value = "0.7.0"
|
||||
http = "1.2.0"
|
||||
inquire = "0.7.5"
|
||||
convert_case = "0.8.0"
|
||||
|
||||
[workspace.dependencies.uuid]
|
||||
version = "1.11.0"
|
||||
|
||||
138
README.md
@@ -31,3 +31,141 @@ Options:


````

## Supporting a new field in OPNSense `config.xml`

Two steps:

- Supporting the field in `opnsense-config-xml`
- Enabling Harmony to control the field

We'll use the `filename` field in the `dhcpd` section of the file as an example.

### Supporting the field

As type checking is enforced, every field from `config.xml` must be known by the code. Each subsection of `config.xml` has its own `.rs` file. For the `dhcpd` section, we'll modify `opnsense-config-xml/src/data/dhcpd.rs`.

When a new field appears in the XML file, an error like this is thrown and Harmony panics:

```
     Running `/home/stremblay/nt/dir/harmony/target/debug/example-nanodc`
Found unauthorized element filename
thread 'main' panicked at opnsense-config-xml/src/data/opnsense.rs:54:14:
OPNSense received invalid string, should be full XML: ()
```

Define the missing field (`filename`) in the `DhcpInterface` struct of `opnsense-config-xml/src/data/dhcpd.rs`:

```rust
pub struct DhcpInterface {
    ...
    pub filename: Option<String>,
```

Harmony should now build and run.

### Controlling the field

Define the XML field setter in `opnsense-config/src/modules/dhcpd.rs`:

```rust
impl<'a> DhcpConfig<'a> {
    ...
    pub fn set_filename(&mut self, filename: &str) {
        self.enable_netboot();
        self.get_lan_dhcpd().filename = Some(filename.to_string());
    }
    ...
```

Define the value setter in the `DhcpServer` trait in `domain/topology/network.rs`:

```rust
#[async_trait]
pub trait DhcpServer: Send + Sync {
    ...
    async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError>;
    ...
```

Implement the value setter in each `DhcpServer` implementation.

`infra/opnsense/dhcp.rs`:

```rust
#[async_trait]
impl DhcpServer for OPNSenseFirewall {
    ...
    async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError> {
        {
            let mut writable_opnsense = self.opnsense_config.write().await;
            writable_opnsense.dhcp().set_filename(filename);
            debug!("OPNsense dhcp server set filename {filename}");
        }

        Ok(())
    }
    ...
```

`domain/topology/ha_cluster.rs`:

```rust
#[async_trait]
impl DhcpServer for DummyInfra {
    ...
    async fn set_filename(&self, _filename: &str) -> Result<(), ExecutorError> {
        unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
    }
    ...
```

Add the new field to the `DhcpScore` in `modules/dhcp.rs`:

```rust
pub struct DhcpScore {
    ...
    pub filename: Option<String>,
```

Set it in its implementation in `modules/okd/dhcp.rs`:

```rust
impl OKDDhcpScore {
    ...
        Self {
            dhcp_score: DhcpScore {
                ...
                filename: Some("undionly.kpxe".to_string()),
```

Set it in its implementation in `modules/okd/bootstrap_dhcp.rs`:

```rust
impl OKDDhcpScore {
    ...
        Self {
            dhcp_score: DhcpScore::new(
                ...
                Some("undionly.kpxe".to_string()),
```

Update the interpret (the function called by the `execute` fn of the interpret) so that it now updates the `filename` field value in `modules/dhcp.rs`:

```rust
impl DhcpInterpret {
    ...
        let filename_outcome = match &self.score.filename {
            Some(filename) => {
                let dhcp_server = Arc::new(topology.dhcp_server.clone());
                dhcp_server.set_filename(&filename).await?;
                Outcome::new(
                    InterpretStatus::SUCCESS,
                    format!("Dhcp Interpret Set filename to {filename}"),
                )
            }
            None => Outcome::noop(),
        };

        if next_server_outcome.status == InterpretStatus::NOOP
            && boot_filename_outcome.status == InterpretStatus::NOOP
            && filename_outcome.status == InterpretStatus::NOOP

        ...

        Ok(Outcome::new(
            InterpretStatus::SUCCESS,
            format!(
                "Dhcp Interpret Set next boot to [{:?}], boot_filename to [{:?}], filename to [{:?}]",
                self.score.boot_filename, self.score.boot_filename, self.score.filename
            )
        ...
```
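To tie the steps together, here is a minimal usage sketch. Only `filename` and the `undionly.kpxe` value come from the steps above; the `Default` impl and the elided fields are assumptions for illustration.

```rust
// Hypothetical sketch: assumes DhcpScore implements Default and that the
// other fields can be elided with `..`; only `filename` is from this guide.
let dhcp_score = DhcpScore {
    filename: Some("undionly.kpxe".to_string()),
    ..Default::default()
};
```

When the `DhcpInterpret` executes, the `Some(filename)` branch above calls `set_filename` on the topology's `DhcpServer`, and the OPNsense implementation writes the value back into `config.xml`.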
@@ -1,6 +1,6 @@
|
||||
# Architecture Decision Record: \<Title\>
|
||||
|
||||
Name: \<Name\>
|
||||
Initial Author: \<Name\>
|
||||
|
||||
Initial Date: \<Date\>
|
||||
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Architecture Decision Record: Helm and Kustomize Handling
|
||||
|
||||
Name: Taha Hawa
|
||||
Initial Author: Taha Hawa
|
||||
|
||||
Initial Date: 2025-04-15
|
||||
|
||||
|
||||
68
adr/010-monitoring-and-alerting.md
Normal file
@@ -0,0 +1,68 @@
# Architecture Decision Record: Monitoring and Alerting

Initial Author: Willem Rolleman
Date: April 28, 2025

## Status

Proposed

## Context

A Harmony user should be able to initialize a monitoring stack easily, either on the first run of Harmony or by integrating with existing projects and infrastructure, without creating multiple instances of the monitoring stack or overwriting existing alerts/configurations. The user also needs a simple way to configure the stack so that it watches their projects. There should be reasonable defaults that are easily customizable for each project.

## Decision

Create a MonitoringStack score that creates a maestro which launches the monitoring stack, or does nothing if it is already present.
The MonitoringStack score can be passed to the maestro in the `vec!` of scores.
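A minimal sketch of that registration, following the `MonitoringAlertingStackScore` usage that appears in the LAMP example elsewhere in this changeset (the surrounding maestro setup is assumed, and the namespace value is a placeholder):

```rust
// Illustrative only: mirrors the example main.rs in this changeset.
// `maestro` is assumed to be an already-initialized Maestro<K8sAnywhereTopology>.
let mut monitoring_stack_score = MonitoringAlertingStackScore::new();
monitoring_stack_score.namespace = Some("my-project".to_string());

maestro.register_all(vec![Box::new(monitoring_stack_score)]);
```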
## Rationale

Having the score launch a maestro allows the user to easily create a new monitoring stack and keeps its components grouped together. The MonitoringScore can handle all the logic for adding alerts, ensuring that the stack is running, etc.

## Alternatives considered

- ### Implement the alerting and monitoring stack using the existing HelmScore for each project
  - **Pros**:
    - Each project can choose the monitoring and alerting stack it wants
    - Less overhead in terms of core Harmony code
    - Can add `Box::new(grafana::grafanascore(namespace))`
  - **Cons**:
    - No default solution implemented
    - Devs need to choose what they use
    - Increases the complexity of score projects
    - Each project will create a new monitoring and alerting instance rather than joining the existing one

- ### Use OKD Grafana and Prometheus
  - **Pros**:
    - Minimal config to do in Harmony
  - **Cons**:
    - Relies on OKD, so it will not work for local testing via k3d

- ### Create a monitoring and alerting crate similar to harmony_tui
  - **Pros**:
    - Creates a default solution that can be implemented once by Harmony
    - Can provide a join function that allows a project to connect to the existing solution
    - Eliminates the risk of creating multiple instances of Grafana or Prometheus
  - **Cons**:
    - More complex than using a Helm score
    - Managing values files for individual functions becomes more complicated, i.e. how do you create alerts for one project via `helm install` without overwriting the other alerts?

- ### Add monitoring to the Maestro struct so that whether the monitoring stack is used must be defined
  - **Pros**:
    - Less for the user to define
    - May be easier to set defaults
  - **Cons**:
    - Feels counterintuitive
    - Would require modifying the structure of the Maestro and how it operates, which seems like a bad idea
    - Unclear how to let the user pass custom values/configs to the monitoring stack for subsequent projects

- ### Create a MonitoringStack score to add to the scores `vec!`, which loads a maestro that installs the stack if it is not ready, or adds custom endpoints/alerts to the existing stack
  - **Pros**:
    - The Maestro already accepts a list of scores to initialize
    - Leaving out the monitoring score simply means the user does not want monitoring
    - If the monitoring stack is already created, the MonitoringStack score doesn't necessarily need to be added to each project
    - Components of the monitoring stack are bundled together and can be expanded or modified from the same place
  - **Cons**:
    - Maybe need to create
160
adr/011-multi-tenant-cluster.md
Normal file
@@ -0,0 +1,160 @@
|
||||
# Architecture Decision Record: Multi-Tenancy Strategy for Harmony Managed Clusters
|
||||
|
||||
Initial Author: Jean-Gabriel Gill-Couture
|
||||
|
||||
Initial Date: 2025-05-26
|
||||
|
||||
## Status
|
||||
|
||||
Proposed
|
||||
|
||||
## Context
|
||||
|
||||
Harmony manages production OKD/Kubernetes clusters that serve multiple clients with varying trust levels and operational requirements. We need a multi-tenancy strategy that provides:
|
||||
|
||||
1. **Strong isolation** between client workloads while maintaining operational simplicity
|
||||
2. **Controlled API access** allowing clients self-service capabilities within defined boundaries
|
||||
3. **Security-first approach** protecting both the cluster infrastructure and tenant data
|
||||
4. **Harmony-native implementation** using our Score/Interpret pattern for automated tenant provisioning
|
||||
5. **Scalable management** supporting both small trusted clients and larger enterprise customers
|
||||
|
||||
The official Kubernetes multi-tenancy documentation identifies two primary models: namespace-based isolation and virtual control planes per tenant. Given Harmony's focus on operational simplicity, provider-agnostic abstractions (ADR-003), and hexagonal architecture (ADR-002), we must choose an approach that balances security, usability, and maintainability.
|
||||
|
||||
Our clients represent a hybrid tenancy model:
|
||||
- **Customer multi-tenancy**: Each client operates independently with no cross-tenant trust
|
||||
- **Team multi-tenancy**: Individual clients may have multiple team members requiring coordinated access
|
||||
- **API access requirement**: Unlike pure SaaS scenarios, clients need controlled Kubernetes API access for self-service operations
|
||||
|
||||
The official Kubernetes documentation on multi-tenancy heavily inspired this ADR: https://kubernetes.io/docs/concepts/security/multi-tenancy/
|
||||
|
||||
## Decision
|
||||
|
||||
Implement **namespace-based multi-tenancy** with the following architecture:
|
||||
|
||||
### 1. Network Security Model
|
||||
- **Private cluster access**: Kubernetes API and OpenShift console accessible only via WireGuard VPN
|
||||
- **No public exposure**: Control plane endpoints remain internal to prevent unauthorized access attempts
|
||||
- **VPN-based authentication**: Initial access control through WireGuard client certificates
|
||||
|
||||
### 2. Tenant Isolation Strategy
|
||||
- **Dedicated namespace per tenant**: Each client receives an isolated namespace with access limited only to the required resources and operations
|
||||
- **Complete network isolation**: NetworkPolicies prevent cross-namespace communication while allowing full egress to public internet
|
||||
- **Resource governance**: ResourceQuotas and LimitRanges enforce CPU, memory, and storage consumption limits
|
||||
- **Storage access control**: Clients can create PersistentVolumeClaims but cannot directly manipulate PersistentVolumes or access other tenants' storage
|
||||
|
||||
### 3. Access Control Framework
|
||||
- **Principle of Least Privilege**: RBAC grants only necessary permissions within tenant namespace scope
|
||||
- **Namespace-scoped**: Clients can create/modify/delete resources within their namespace
|
||||
- **Cluster-level restrictions**: No access to cluster-wide resources, other namespaces, or sensitive cluster operations
|
||||
- **Whitelisted operations**: Controlled self-service capabilities for ingress, secrets, configmaps, and workload management
|
||||
|
||||
### 4. Identity Management Evolution
|
||||
- **Phase 1**: Manual provisioning of VPN access and Kubernetes ServiceAccounts/Users
|
||||
- **Phase 2**: Migration to Keycloak-based identity management (aligning with ADR-006) for centralized authentication and lifecycle management
|
||||
|
||||
### 5. Harmony Integration
|
||||
- **TenantScore implementation**: Declarative tenant provisioning using Harmony's Score/Interpret pattern
|
||||
- **Topology abstraction**: Tenant configuration abstracted from underlying Kubernetes implementation details
|
||||
- **Automated deployment**: Complete tenant setup automated through Harmony's orchestration capabilities
|
||||
|
||||
## Rationale
|
||||
|
||||
### Network Security Through VPN Access
|
||||
- **Defense in depth**: VPN requirement adds critical security layer preventing unauthorized cluster access
|
||||
- **Simplified firewall rules**: No need for complex public endpoint protections or rate limiting
|
||||
- **Audit capability**: VPN access provides clear audit trail of cluster connections
|
||||
- **Aligns with enterprise practices**: Most enterprise customers already use VPN infrastructure
|
||||
|
||||
### Namespace Isolation vs Virtual Control Planes
|
||||
Following Kubernetes official guidance, namespace isolation provides:
|
||||
- **Lower resource overhead**: Virtual control planes require dedicated etcd, API server, and controller manager per tenant
|
||||
- **Operational simplicity**: Single control plane to maintain, upgrade, and monitor
|
||||
- **Cross-tenant service integration**: Enables future controlled cross-tenant communication if required
|
||||
- **Proven stability**: Namespace-based isolation is well-tested and widely deployed
|
||||
- **Cost efficiency**: Significantly lower infrastructure costs compared to dedicated control planes
|
||||
|
||||
### Hybrid Tenancy Model Suitability
|
||||
Our approach addresses both customer and team multi-tenancy requirements:
|
||||
- **Customer isolation**: Strong network and RBAC boundaries prevent cross-tenant interference
|
||||
- **Team collaboration**: Multiple team members can share namespace access through group-based RBAC
|
||||
- **Self-service balance**: Controlled API access enables client autonomy without compromising security
|
||||
|
||||
### Harmony Architecture Alignment
|
||||
- **Provider agnostic**: TenantScore abstracts multi-tenancy concepts, enabling future support for other Kubernetes distributions
|
||||
- **Hexagonal architecture**: Tenant management becomes an infrastructure capability accessed through well-defined ports
|
||||
- **Declarative automation**: Tenant lifecycle fully managed through Harmony's Score execution model
|
||||
|
||||
## Consequences
|
||||
|
||||
### Positive Consequences
|
||||
- **Strong security posture**: VPN + namespace isolation provides robust tenant separation
|
||||
- **Operational efficiency**: Single cluster management with automated tenant provisioning
|
||||
- **Client autonomy**: Self-service capabilities reduce operational support burden
|
||||
- **Scalable architecture**: Can support hundreds of tenants per cluster without architectural changes
|
||||
- **Future flexibility**: Foundation supports evolution to more sophisticated multi-tenancy models
|
||||
- **Cost optimization**: Shared infrastructure maximizes resource utilization
|
||||
|
||||
### Negative Consequences
|
||||
- **VPN operational overhead**: Requires VPN infrastructure management
|
||||
- **Manual provisioning complexity**: Phase 1 manual user management creates administrative burden
|
||||
- **Network policy dependency**: Requires CNI with NetworkPolicy support (OVN-Kubernetes provides this and is the OKD/Openshift default)
|
||||
- **Cluster-wide resource limitations**: Some advanced Kubernetes features require cluster-wide access
|
||||
- **Single point of failure**: Cluster outage affects all tenants simultaneously
|
||||
|
||||
### Migration Challenges
|
||||
- **Legacy client integration**: Existing clients may need VPN client setup and credential migration
|
||||
- **Monitoring complexity**: Per-tenant observability requires careful metric and log segmentation
|
||||
- **Backup considerations**: Tenant data backup must respect isolation boundaries
|
||||
|
||||
## Alternatives Considered
|
||||
|
||||
### Alternative 1: Virtual Control Plane Per Tenant
|
||||
**Pros**: Complete control plane isolation, full Kubernetes API access per tenant
|
||||
**Cons**: 3-5x higher resource usage, complex cross-tenant networking, operational complexity scales linearly with tenants
|
||||
|
||||
**Rejected**: Resource overhead incompatible with cost-effective multi-tenancy goals
|
||||
|
||||
### Alternative 2: Dedicated Clusters Per Tenant
|
||||
**Pros**: Maximum isolation, independent upgrade cycles, simplified security model
|
||||
**Cons**: Exponential operational complexity, prohibitive costs, resource waste
|
||||
|
||||
**Rejected**: Operational overhead makes this approach unsustainable for multiple clients
|
||||
|
||||
### Alternative 3: Public API with Advanced Authentication
|
||||
**Pros**: No VPN requirement, potentially simpler client access
|
||||
**Cons**: Larger attack surface, complex rate limiting and DDoS protection, increased security monitoring requirements
|
||||
|
||||
**Rejected**: Risk/benefit analysis favors VPN-based access control
|
||||
|
||||
### Alternative 4: Service Mesh Based Isolation
|
||||
**Pros**: Fine-grained traffic control, encryption, advanced observability
|
||||
**Cons**: Significant operational complexity, performance overhead, steep learning curve
|
||||
|
||||
**Rejected**: Complexity overhead outweighs benefits for current requirements; remains option for future enhancement
|
||||
|
||||
## Additional Notes
|
||||
|
||||
### Implementation Roadmap
|
||||
1. **Phase 1**: Implement VPN access and manual tenant provisioning
|
||||
2. **Phase 2**: Deploy TenantScore automation for namespace, RBAC, and NetworkPolicy management
|
||||
3. **Phase 3**: Integrate Keycloak for centralized identity management
|
||||
4. **Phase 4**: Add advanced monitoring and per-tenant observability
|
||||
|
||||
### TenantScore Structure Preview
|
||||
```rust
|
||||
pub struct TenantScore {
|
||||
pub tenant_config: TenantConfig,
|
||||
pub resource_quotas: ResourceQuotaConfig,
|
||||
pub network_isolation: NetworkIsolationPolicy,
|
||||
pub storage_access: StorageAccessConfig,
|
||||
pub rbac_config: RBACConfig,
|
||||
}
|
||||
```
|
||||
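As a usage sketch, a tenant could then be provisioned through the usual Score registration pattern shown in the examples elsewhere in this changeset. The field values and `Default` impls below are assumptions for illustration; only the `TenantScore` field names come from the preview above.

```rust
// Hypothetical sketch: the config structs' contents and Default impls are
// assumptions; `maestro` is assumed to be an already-initialized Maestro.
let tenant = TenantScore {
    tenant_config: TenantConfig::default(),
    resource_quotas: ResourceQuotaConfig::default(),
    network_isolation: NetworkIsolationPolicy::default(),
    storage_access: StorageAccessConfig::default(),
    rbac_config: RBACConfig::default(),
};

maestro.register_all(vec![Box::new(tenant)]);
```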
|
||||
### Future Enhancements
|
||||
- **Cross-tenant service mesh**: For approved inter-tenant communication
|
||||
- **Advanced monitoring**: Per-tenant Prometheus/Grafana instances
|
||||
- **Backup automation**: Tenant-scoped backup policies
|
||||
- **Cost allocation**: Detailed per-tenant resource usage tracking
|
||||
|
||||
This ADR establishes the foundation for secure, scalable multi-tenancy in Harmony-managed clusters while maintaining operational simplicity and cost effectiveness. A follow-up ADR will detail the Tenant abstraction and user management mechanisms within the Harmony framework.
|
||||
5
check.sh
Executable file
@@ -0,0 +1,5 @@
|
||||
#!/bin/sh
|
||||
set -e
|
||||
cargo check --all-targets --all-features --keep-going
|
||||
cargo fmt --check
|
||||
cargo test
|
||||
1
data/watchguard/pxe-http-files/.gitattributes
vendored
Normal file
@@ -0,0 +1 @@
|
||||
slitaz/* filter=lfs diff=lfs merge=lfs -text
|
||||
6
data/watchguard/pxe-http-files/boot.ipxe
Normal file
@@ -0,0 +1,6 @@
|
||||
#!ipxe
|
||||
|
||||
set base-url http://192.168.33.1:8080
|
||||
set hostfile ${base-url}/byMAC/01-${mac:hexhyp}.ipxe
|
||||
|
||||
chain ${hostfile} || chain ${base-url}/default.ipxe
|
||||
@@ -0,0 +1,35 @@
|
||||
#!ipxe
|
||||
menu PXE Boot Menu - [${mac}]
|
||||
item okdinstallation Install OKD
|
||||
item slitaz Boot to Slitaz - old linux for debugging
|
||||
choose selected
|
||||
|
||||
goto ${selected}
|
||||
|
||||
:local
|
||||
exit
|
||||
|
||||
#################################
|
||||
# okdinstallation
|
||||
#################################
|
||||
:okdinstallation
|
||||
set base-url http://192.168.33.1:8080
|
||||
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
|
||||
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
|
||||
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
|
||||
set install-disk /dev/nvme0n1
|
||||
set ignition-file ncd0/master.ign
|
||||
|
||||
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
|
||||
initrd --name main ${base-url}/${live-initramfs}
|
||||
boot
|
||||
|
||||
#################################
|
||||
# slitaz
|
||||
#################################
|
||||
:slitaz
|
||||
set server_ip 192.168.33.1:8080
|
||||
set base_url http://${server_ip}/slitaz
|
||||
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
|
||||
initrd ${base_url}/rootfs.gz
|
||||
boot
|
||||
@@ -0,0 +1,35 @@
|
||||
#!ipxe
|
||||
menu PXE Boot Menu - [${mac}]
|
||||
item okdinstallation Install OKD
|
||||
item slitaz Boot to Slitaz - old linux for debugging
|
||||
choose selected
|
||||
|
||||
goto ${selected}
|
||||
|
||||
:local
|
||||
exit
|
||||
|
||||
#################################
|
||||
# okdinstallation
|
||||
#################################
|
||||
:okdinstallation
|
||||
set base-url http://192.168.33.1:8080
|
||||
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
|
||||
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
|
||||
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
|
||||
set install-disk /dev/nvme0n1
|
||||
set ignition-file ncd0/master.ign
|
||||
|
||||
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
|
||||
initrd --name main ${base-url}/${live-initramfs}
|
||||
boot
|
||||
|
||||
#################################
|
||||
# slitaz
|
||||
#################################
|
||||
:slitaz
|
||||
set server_ip 192.168.33.1:8080
|
||||
set base_url http://${server_ip}/slitaz
|
||||
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
|
||||
initrd ${base_url}/rootfs.gz
|
||||
boot
|
||||
@@ -0,0 +1,35 @@
|
||||
#!ipxe
|
||||
menu PXE Boot Menu - [${mac}]
|
||||
item okdinstallation Install OKD
|
||||
item slitaz Slitaz - an old linux image for debugging
|
||||
choose selected
|
||||
|
||||
goto ${selected}
|
||||
|
||||
:local
|
||||
exit
|
||||
|
||||
#################################
|
||||
# okdinstallation
|
||||
#################################
|
||||
:okdinstallation
|
||||
set base-url http://192.168.33.1:8080
|
||||
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
|
||||
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
|
||||
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
|
||||
set install-disk /dev/sda
|
||||
set ignition-file ncd0/worker.ign
|
||||
|
||||
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
|
||||
initrd --name main ${base-url}/${live-initramfs}
|
||||
boot
|
||||
|
||||
#################################
|
||||
# slitaz
|
||||
#################################
|
||||
:slitaz
|
||||
set server_ip 192.168.33.1:8080
|
||||
set base_url http://${server_ip}/slitaz
|
||||
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
|
||||
initrd ${base_url}/rootfs.gz
|
||||
boot
|
||||
@@ -0,0 +1,35 @@
|
||||
#!ipxe
|
||||
menu PXE Boot Menu - [${mac}]
|
||||
item okdinstallation Install OKD
|
||||
item slitaz Boot to Slitaz - old linux for debugging
|
||||
choose selected
|
||||
|
||||
goto ${selected}
|
||||
|
||||
:local
|
||||
exit
|
||||
|
||||
#################################
|
||||
# okdinstallation
|
||||
#################################
|
||||
:okdinstallation
|
||||
set base-url http://192.168.33.1:8080
|
||||
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
|
||||
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
|
||||
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
|
||||
set install-disk /dev/nvme0n1
|
||||
set ignition-file ncd0/master.ign
|
||||
|
||||
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
|
||||
initrd --name main ${base-url}/${live-initramfs}
|
||||
boot
|
||||
|
||||
#################################
|
||||
# slitaz
|
||||
#################################
|
||||
:slitaz
|
||||
set server_ip 192.168.33.1:8080
|
||||
set base_url http://${server_ip}/slitaz
|
||||
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
|
||||
initrd ${base_url}/rootfs.gz
|
||||
boot
|
||||
@@ -0,0 +1,35 @@
|
||||
#!ipxe
|
||||
menu PXE Boot Menu - [${mac}]
|
||||
item okdinstallation Install OKD
|
||||
item slitaz Slitaz - an old linux image for debugging
|
||||
choose selected
|
||||
|
||||
goto ${selected}
|
||||
|
||||
:local
|
||||
exit
|
||||
|
||||
#################################
|
||||
# okdinstallation
|
||||
#################################
|
||||
:okdinstallation
|
||||
set base-url http://192.168.33.1:8080
|
||||
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
|
||||
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
|
||||
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
|
||||
set install-disk /dev/sda
|
||||
set ignition-file ncd0/worker.ign
|
||||
|
||||
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
|
||||
initrd --name main ${base-url}/${live-initramfs}
|
||||
boot
|
||||
|
||||
#################################
|
||||
# slitaz
|
||||
#################################
|
||||
:slitaz
|
||||
set server_ip 192.168.33.1:8080
|
||||
set base_url http://${server_ip}/slitaz
|
||||
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
|
||||
initrd ${base_url}/rootfs.gz
|
||||
boot
|
||||
@@ -0,0 +1,37 @@
|
||||
#!ipxe
|
||||
menu PXE Boot Menu - [${mac}]
|
||||
item okdinstallation Install OKD
|
||||
item slitaz Slitaz - an old linux image for debugging
|
||||
choose selected
|
||||
|
||||
goto ${selected}
|
||||
|
||||
:local
|
||||
exit
|
||||
# This is the bootstrap node
|
||||
# it will become wk2
|
||||
|
||||
#################################
|
||||
# okdinstallation
|
||||
#################################
|
||||
:okdinstallation
|
||||
set base-url http://192.168.33.1:8080
|
||||
set kernel-image fcos/fedora-coreos-39.20231101.3.0-live-kernel-x86_64
|
||||
set live-rootfs fcos/fedora-coreos-39.20231101.3.0-live-rootfs.x86_64.img
|
||||
set live-initramfs fcos/fedora-coreos-39.20231101.3.0-live-initramfs.x86_64.img
|
||||
set install-disk /dev/sda
|
||||
set ignition-file ncd0/worker.ign
|
||||
|
||||
kernel ${base-url}/${kernel-image} initrd=main coreos.live.rootfs_url=${base-url}/${live-rootfs} coreos.inst.install_dev=${install-disk} coreos.inst.ignition_url=${base-url}/${ignition-file} ip=enp1s0:dhcp
|
||||
initrd --name main ${base-url}/${live-initramfs}
|
||||
boot
|
||||
|
||||
#################################
|
||||
# slitaz
|
||||
#################################
|
||||
:slitaz
|
||||
set server_ip 192.168.33.1:8080
|
||||
set base_url http://${server_ip}/slitaz
|
||||
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
|
||||
initrd ${base_url}/rootfs.gz
|
||||
boot
|
||||
71
data/watchguard/pxe-http-files/default.ipxe
Normal file
@@ -0,0 +1,71 @@
|
||||
#!ipxe
|
||||
menu PXE Boot Menu - [${mac}]
|
||||
item local Boot from Hard Disk
|
||||
item slitaz Boot slitaz live environment [tux|root:root]
|
||||
#item ubuntu-server Ubuntu 24.04.1 live server
|
||||
#item ubuntu-desktop Ubuntu 24.04.1 desktop
|
||||
#item systemrescue System Rescue 11.03
|
||||
item memtest memtest
|
||||
#choose --default local --timeout 5000 selected
|
||||
choose selected
|
||||
|
||||
goto ${selected}
|
||||
|
||||
:local
|
||||
exit
|
||||
|
||||
#################################
|
||||
# slitaz
|
||||
#################################
|
||||
:slitaz
|
||||
set server_ip 192.168.33.1:8080
|
||||
set base_url http://${server_ip}/slitaz
|
||||
kernel ${base_url}/vmlinuz-2.6.37-slitaz rw root=/dev/null vga=788 initrd=rootfs.gz
|
||||
initrd ${base_url}/rootfs.gz
|
||||
boot
|
||||
|
||||
#################################
|
||||
# Ubuntu Server
|
||||
#################################
|
||||
:ubuntu-server
|
||||
set server_ip 192.168.33.1:8080
|
||||
set base_url http://${server_ip}/ubuntu/live-server-24.04.1
|
||||
|
||||
kernel ${base_url}/vmlinuz ip=dhcp url=${base_url}/ubuntu-24.04.1-live-server-amd64.iso autoinstall ds=nocloud
|
||||
initrd ${base_url}/initrd
|
||||
boot
|
||||
|
||||
#################################
|
||||
# Ubuntu Desktop
|
||||
#################################
|
||||
:ubuntu-desktop
|
||||
set server_ip 192.168.33.1:8080
|
||||
set base_url http://${server_ip}/ubuntu/desktop-24.04.1
|
||||
|
||||
kernel ${base_url}/vmlinuz ip=dhcp url=${base_url}/ubuntu-24.04.1-desktop-amd64.iso autoinstall ds=nocloud
|
||||
initrd ${base_url}/initrd
|
||||
boot
|
||||
|
||||
#################################
|
||||
# System Rescue
|
||||
#################################
|
||||
:systemrescue
|
||||
set base-url http://192.168.33.1:8080/systemrescue
|
||||
|
||||
kernel ${base-url}/vmlinuz initrd=sysresccd.img boot=systemrescue docache
|
||||
initrd ${base-url}/sysresccd.img
|
||||
boot
|
||||
|
||||
#################################
|
||||
# MemTest86 (BIOS/UEFI)
|
||||
#################################
|
||||
:memtest
|
||||
iseq ${platform} efi && goto memtest_efi || goto memtest_bios
|
||||
|
||||
:memtest_efi
|
||||
kernel http://192.168.33.1:8080/memtest/memtest64.efi
|
||||
boot
|
||||
|
||||
:memtest_bios
|
||||
kernel http://192.168.33.1:8080/memtest/memtest64.bin
|
||||
boot
|
||||
BIN
data/watchguard/pxe-http-files/memtest86/memtest32.bin
Normal file
Binary file not shown.
BIN
data/watchguard/pxe-http-files/memtest86/memtest32.efi
Normal file
Binary file not shown.
BIN
data/watchguard/pxe-http-files/memtest86/memtest64.bin
Normal file
Binary file not shown.
BIN
data/watchguard/pxe-http-files/memtest86/memtest64.efi
Normal file
Binary file not shown.
BIN
data/watchguard/pxe-http-files/memtest86/memtestla64.efi
Normal file
Binary file not shown.
@@ -1 +0,0 @@
|
||||
hey i am paul
|
||||
BIN
data/watchguard/pxe-http-files/slitaz/rootfs.gz
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
data/watchguard/pxe-http-files/slitaz/vmlinuz-2.6.37-slitaz
(Stored with Git LFS)
Normal file
Binary file not shown.
BIN
data/watchguard/tftpboot/ipxe.efi
Normal file
Binary file not shown.
BIN
data/watchguard/tftpboot/undionly.kpxe
Normal file
Binary file not shown.
@@ -8,7 +8,7 @@ publish = false
|
||||
|
||||
[dependencies]
|
||||
harmony = { path = "../../harmony" }
|
||||
harmony_tui = { path = "../../harmony_tui" }
|
||||
harmony_cli = { path = "../../harmony_cli" }
|
||||
harmony_types = { path = "../../harmony_types" }
|
||||
cidr = { workspace = true }
|
||||
tokio = { workspace = true }
|
||||
|
||||
@@ -1,3 +1,85 @@
|
||||
<?php
|
||||
print_r("Hello this is from PHP");
|
||||
|
||||
ini_set('display_errors', 1);
|
||||
error_reporting(E_ALL);
|
||||
|
||||
$host = getenv('MYSQL_HOST') ?: '';
|
||||
$user = getenv('MYSQL_USER') ?: 'root';
|
||||
$pass = getenv('MYSQL_PASSWORD') ?: '';
|
||||
$db = 'testfill';
|
||||
$charset = 'utf8mb4';
|
||||
|
||||
$dsn = "mysql:host=$host;charset=$charset";
|
||||
$options = [
|
||||
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
|
||||
PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
|
||||
];
|
||||
|
||||
try {
|
||||
$pdo = new PDO($dsn, $user, $pass, $options);
|
||||
$pdo->exec("CREATE DATABASE IF NOT EXISTS `$db`");
|
||||
$pdo->exec("USE `$db`");
|
||||
$pdo->exec("
|
||||
CREATE TABLE IF NOT EXISTS filler (
|
||||
id INT AUTO_INCREMENT PRIMARY KEY,
|
||||
data LONGBLOB
|
||||
)
|
||||
");
|
||||
} catch (\PDOException $e) {
|
||||
die("❌ DB connection failed: " . $e->getMessage());
|
||||
}
|
||||
|
||||
function getDbStats($pdo, $db) {
|
||||
$stmt = $pdo->query("
|
||||
SELECT
|
||||
ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_size_gb,
|
||||
SUM(table_rows) AS total_rows
|
||||
FROM information_schema.tables
|
||||
WHERE table_schema = '$db'
|
||||
");
|
||||
$result = $stmt->fetch();
|
||||
$sizeGb = $result['total_size_gb'] ?? '0';
|
||||
$rows = $result['total_rows'] ?? '0';
|
||||
$avgMb = ($rows > 0) ? round(($sizeGb * 1024) / $rows, 2) : 0;
|
||||
return [$sizeGb, $rows, $avgMb];
|
||||
}
|
||||
|
||||
list($dbSize, $rowCount, $avgRowMb) = getDbStats($pdo, $db);
|
||||
|
||||
$message = '';
|
||||
|
||||
if ($_SERVER['REQUEST_METHOD'] === 'POST' && isset($_POST['fill'])) {
|
||||
$iterations = 1024;
|
||||
$data = str_repeat(random_bytes(1024), 1024); // 1MB
|
||||
$stmt = $pdo->prepare("INSERT INTO filler (data) VALUES (:data)");
|
||||
|
||||
for ($i = 0; $i < $iterations; $i++) {
|
||||
$stmt->execute([':data' => $data]);
|
||||
}
|
||||
|
||||
list($dbSize, $rowCount, $avgRowMb) = getDbStats($pdo, $db);
|
||||
|
||||
$message = "<p style='color: green;'>✅ 1GB inserted into MariaDB successfully.</p>";
|
||||
}
|
||||
?>
|
||||
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>MariaDB Filler</title>
|
||||
</head>
|
||||
<body>
|
||||
<h1>MariaDB Storage Filler</h1>
|
||||
<?= $message ?>
|
||||
<ul>
|
||||
<li><strong>📦 MariaDB Used Size:</strong> <?= $dbSize ?> GB</li>
|
||||
<li><strong>📊 Total Rows:</strong> <?= $rowCount ?></li>
|
||||
<li><strong>📐 Average Row Size:</strong> <?= $avgRowMb ?> MB</li>
|
||||
</ul>
|
||||
|
||||
<form method="post">
|
||||
<button name="fill" value="1" type="submit">Insert 1GB into DB</button>
|
||||
</form>
|
||||
</body>
|
||||
</html>
|
||||
|
||||
|
||||
@@ -1,23 +1,56 @@
|
||||
use harmony::{
|
||||
data::Version,
|
||||
inventory::Inventory,
|
||||
maestro::Maestro,
|
||||
modules::lamp::{LAMPConfig, LAMPScore},
|
||||
modules::{
|
||||
lamp::{LAMPConfig, LAMPScore},
|
||||
monitoring::monitoring_alerting::{AlertChannel, MonitoringAlertingStackScore},
|
||||
},
|
||||
topology::{K8sAnywhereTopology, Url},
|
||||
};
|
||||
|
||||
#[tokio::main]
|
||||
async fn main() {
|
||||
// This here is the whole configuration to
|
||||
// - setup a local K3D cluster
|
||||
// - Build a docker image with the PHP project builtin and production grade settings
|
||||
// - Deploy a mariadb database using a production grade helm chart
|
||||
// - Deploy the new container using a kubernetes deployment
|
||||
// - Configure networking between the PHP container and the database
|
||||
// - Provision a public route and an SSL certificate automatically on production environments
|
||||
//
|
||||
// Enjoy :)
|
||||
let lamp_stack = LAMPScore {
|
||||
name: "harmony-lamp-demo".to_string(),
|
||||
domain: Url::Url(url::Url::parse("https://lampdemo.harmony.nationtech.io").unwrap()),
|
||||
php_version: Version::from("8.4.4").unwrap(),
|
||||
// This config can be extended as needed for more complicated configurations
|
||||
config: LAMPConfig {
|
||||
project_root: "./php".into(),
|
||||
database_size: format!("4Gi").into(),
|
||||
..Default::default()
|
||||
},
|
||||
};
|
||||
|
||||
let mut maestro = Maestro::<K8sAnywhereTopology>::load_from_env();
|
||||
maestro.register_all(vec![Box::new(lamp_stack)]);
|
||||
harmony_tui::init(maestro).await.unwrap();
|
||||
// You can choose the type of Topology you want, we suggest starting with the
|
||||
// K8sAnywhereTopology as it is the most automatic one that enables you to easily deploy
|
||||
// locally, to development environment from a CI, to staging, and to production with settings
|
||||
// that automatically adapt to each environment grade.
|
||||
let mut maestro = Maestro::<K8sAnywhereTopology>::initialize(
|
||||
Inventory::autoload(),
|
||||
K8sAnywhereTopology::new(),
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let url = url::Url::parse("https://discord.com/api/webhooks/dummy_channel/dummy_token")
|
||||
.expect("invalid URL");
|
||||
|
||||
let mut monitoring_stack_score = MonitoringAlertingStackScore::new();
|
||||
monitoring_stack_score.namespace = Some(lamp_stack.config.namespace.clone());
|
||||
|
||||
maestro.register_all(vec![Box::new(lamp_stack), Box::new(monitoring_stack_score)]);
|
||||
// Here we bootstrap the CLI, this gives some nice features if you need them
|
||||
harmony_cli::init(maestro, None).await.unwrap();
|
||||
}
|
||||
// That's it, end of the infra as code.
|
||||
|
||||
@@ -0,0 +1,4 @@
|
||||
#!/bin/bash
|
||||
|
||||
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \
|
||||
--set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values.yaml
|
||||
721
examples/nanodc/rook-cephcluster/values.yaml
Normal file
@@ -0,0 +1,721 @@
|
||||
# Default values for a single rook-ceph cluster
|
||||
# This is a YAML-formatted file.
|
||||
# Declare variables to be passed into your templates.
|
||||
|
||||
# -- Namespace of the main rook operator
|
||||
operatorNamespace: rook-ceph
|
||||
|
||||
# -- The metadata.name of the CephCluster CR
|
||||
# @default -- The same as the namespace
|
||||
clusterName:
|
||||
|
||||
# -- Optional override of the target kubernetes version
|
||||
kubeVersion:
|
||||
|
||||
# -- Cluster ceph.conf override
|
||||
configOverride:
|
||||
# configOverride: |
|
||||
# [global]
|
||||
# mon_allow_pool_delete = true
|
||||
# osd_pool_default_size = 3
|
||||
# osd_pool_default_min_size = 2
|
||||
|
||||
# Installs a debugging toolbox deployment
|
||||
toolbox:
|
||||
# -- Enable Ceph debugging pod deployment. See [toolbox](../Troubleshooting/ceph-toolbox.md)
|
||||
enabled: true
|
||||
# -- Toolbox image, defaults to the image used by the Ceph cluster
|
||||
image: #quay.io/ceph/ceph:v19.2.2
|
||||
# -- Toolbox tolerations
|
||||
tolerations: []
|
||||
# -- Toolbox affinity
|
||||
affinity: {}
|
||||
# -- Toolbox container security context
|
||||
containerSecurityContext:
|
||||
runAsNonRoot: true
|
||||
runAsUser: 2016
|
||||
runAsGroup: 2016
|
||||
capabilities:
|
||||
drop: ["ALL"]
|
||||
# -- Toolbox resources
|
||||
resources:
|
||||
limits:
|
||||
memory: "1Gi"
|
||||
requests:
|
||||
cpu: "100m"
|
||||
memory: "128Mi"
|
||||
# -- Set the priority class for the toolbox if desired
|
||||
priorityClassName:
|
||||
|
||||
monitoring:
|
||||
# -- Enable Prometheus integration, will also create necessary RBAC rules to allow Operator to create ServiceMonitors.
|
||||
# Monitoring requires Prometheus to be pre-installed
|
||||
enabled: false
|
||||
# -- Whether to disable the metrics reported by Ceph. If false, the prometheus mgr module and Ceph exporter are enabled
|
||||
metricsDisabled: false
|
||||
# -- Whether to create the Prometheus rules for Ceph alerts
|
||||
createPrometheusRules: false
|
||||
# -- The namespace in which to create the prometheus rules, if different from the rook cluster namespace.
|
||||
# If you have multiple rook-ceph clusters in the same k8s cluster, choose the same namespace (ideally, namespace with prometheus
|
||||
# deployed) to set rulesNamespaceOverride for all the clusters. Otherwise, you will get duplicate alerts with multiple alert definitions.
|
||||
rulesNamespaceOverride:
|
||||
# Monitoring settings for external clusters:
|
||||
# externalMgrEndpoints: <list of endpoints>
|
||||
# externalMgrPrometheusPort: <port>
|
||||
# Scrape interval for prometheus
|
||||
# interval: 10s
|
||||
# allow adding custom labels and annotations to the prometheus rule
|
||||
prometheusRule:
|
||||
# -- Labels applied to PrometheusRule
|
||||
labels: {}
|
||||
# -- Annotations applied to PrometheusRule
|
||||
annotations: {}
|
||||
|
||||
# -- Create & use PSP resources. Set this to the same value as the rook-ceph chart.
|
||||
pspEnable: false
|
||||
|
||||
# imagePullSecrets option allow to pull docker images from private docker registry. Option will be passed to all service accounts.
|
||||
# imagePullSecrets:
|
||||
# - name: my-registry-secret
|
||||
|
||||
# All values below are taken from the CephCluster CRD
|
||||
# -- Cluster configuration.
|
||||
# @default -- See [below](#ceph-cluster-spec)
|
||||
cephClusterSpec:
|
||||
# This cluster spec example is for a converged cluster where all the Ceph daemons are running locally,
|
||||
# as in the host-based example (cluster.yaml). For a different configuration such as a
|
||||
# PVC-based cluster (cluster-on-pvc.yaml), external cluster (cluster-external.yaml),
|
||||
# or stretch cluster (cluster-stretched.yaml), replace this entire `cephClusterSpec`
|
||||
# with the specs from those examples.
|
||||
|
||||
# For more details, check https://rook.io/docs/rook/v1.10/CRDs/Cluster/ceph-cluster-crd/
|
||||
cephVersion:
|
||||
# The container image used to launch the Ceph daemon pods (mon, mgr, osd, mds, rgw).
|
||||
# v18 is Reef, v19 is Squid
|
||||
# RECOMMENDATION: In production, use a specific version tag instead of the general v18 flag, which pulls the latest release and could result in different
|
||||
# versions running within the cluster. See tags available at https://hub.docker.com/r/ceph/ceph/tags/.
|
||||
# If you want to be more precise, you can always use a timestamp tag such as quay.io/ceph/ceph:v19.2.2-20250409
|
||||
# This tag might not contain a new Ceph version, just security fixes from the underlying operating system, which will reduce vulnerabilities
|
||||
image: quay.io/ceph/ceph:v19.2.2
|
||||
# Whether to allow unsupported versions of Ceph. Currently Reef and Squid are supported.
|
||||
# Future versions such as Tentacle (v20) would require this to be set to `true`.
|
||||
# Do not set to true in production.
|
||||
allowUnsupported: false
|
||||
|
||||
# The path on the host where configuration files will be persisted. Must be specified. If there are multiple clusters, the directory must be unique for each cluster.
|
||||
# Important: if you reinstall the cluster, make sure you delete this directory from each host or else the mons will fail to start on the new cluster.
|
||||
# In Minikube, the '/data' directory is configured to persist across reboots. Use "/data/rook" in Minikube environment.
|
||||
dataDirHostPath: /var/lib/rook
|
||||
|
||||
# Whether or not upgrade should continue even if a check fails
|
||||
# This means Ceph's status could be degraded and we don't recommend upgrading but you might decide otherwise
|
||||
# Use at your OWN risk
|
||||
# To understand Rook's upgrade process of Ceph, read https://rook.io/docs/rook/v1.10/Upgrade/ceph-upgrade/
|
||||
skipUpgradeChecks: false
|
||||
|
||||
# Whether or not continue if PGs are not clean during an upgrade
|
||||
continueUpgradeAfterChecksEvenIfNotHealthy: false
|
||||
|
||||
# WaitTimeoutForHealthyOSDInMinutes defines the time (in minutes) the operator would wait before an OSD can be stopped for upgrade or restart.
|
||||
# If the timeout exceeds and OSD is not ok to stop, then the operator would skip upgrade for the current OSD and proceed with the next one
|
||||
# if `continueUpgradeAfterChecksEvenIfNotHealthy` is `false`. If `continueUpgradeAfterChecksEvenIfNotHealthy` is `true`, then operator would
|
||||
# continue with the upgrade of an OSD even if its not ok to stop after the timeout. This timeout won't be applied if `skipUpgradeChecks` is `true`.
|
||||
# The default wait timeout is 10 minutes.
|
||||
waitTimeoutForHealthyOSDInMinutes: 10
|
||||
|
||||
# Whether or not requires PGs are clean before an OSD upgrade. If set to `true` OSD upgrade process won't start until PGs are healthy.
|
||||
# This configuration will be ignored if `skipUpgradeChecks` is `true`.
|
||||
# Default is false.
|
||||
upgradeOSDRequiresHealthyPGs: false
|
||||
|
||||
mon:
|
||||
# Set the number of mons to be started. Generally recommended to be 3.
|
||||
# For highest availability, an odd number of mons should be specified.
|
||||
count: 3
|
||||
# The mons should be on unique nodes. For production, at least 3 nodes are recommended for this reason.
|
||||
# Mons should only be allowed on the same node for test environments where data loss is acceptable.
|
||||
allowMultiplePerNode: false
|
||||
|
||||
mgr:
|
||||
# When higher availability of the mgr is needed, increase the count to 2.
|
||||
# In that case, one mgr will be active and one in standby. When Ceph updates which
|
||||
# mgr is active, Rook will update the mgr services to match the active mgr.
|
||||
count: 2
|
||||
allowMultiplePerNode: false
|
||||
modules:
|
||||
# List of modules to optionally enable or disable.
|
||||
# Note the "dashboard" and "monitoring" modules are already configured by other settings in the cluster CR.
|
||||
# - name: rook
|
||||
# enabled: true
|
||||
|
||||
# enable the ceph dashboard for viewing cluster status
|
||||
dashboard:
|
||||
enabled: true
|
||||
# serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
|
||||
# urlPrefix: /ceph-dashboard
|
||||
# serve the dashboard at the given port.
|
||||
# port: 8443
|
||||
# Serve the dashboard using SSL (if using ingress to expose the dashboard and `ssl: true` you need to set
|
||||
# the corresponding "backend protocol" annotation(s) for your ingress controller of choice)
|
||||
ssl: true
|
||||
|
||||
# Network configuration, see: https://github.com/rook/rook/blob/master/Documentation/CRDs/Cluster/ceph-cluster-crd.md#network-configuration-settings
|
||||
network:
|
||||
connections:
|
||||
# Whether to encrypt the data in transit across the wire to prevent eavesdropping the data on the network.
|
||||
# The default is false. When encryption is enabled, all communication between clients and Ceph daemons, or between Ceph daemons will be encrypted.
|
||||
# When encryption is not enabled, clients still establish a strong initial authentication and data integrity is still validated with a crc check.
|
||||
# IMPORTANT: Encryption requires the 5.11 kernel for the latest nbd and cephfs drivers. Alternatively for testing only,
|
||||
# you can set the "mounter: rbd-nbd" in the rbd storage class, or "mounter: fuse" in the cephfs storage class.
|
||||
# The nbd and fuse drivers are *not* recommended in production since restarting the csi driver pod will disconnect the volumes.
|
||||
encryption:
|
||||
enabled: false
|
||||
# Whether to compress the data in transit across the wire. The default is false.
|
||||
# The kernel requirements above for encryption also apply to compression.
|
||||
compression:
|
||||
enabled: false
|
||||
# Whether to require communication over msgr2. If true, the msgr v1 port (6789) will be disabled
|
||||
# and clients will be required to connect to the Ceph cluster with the v2 port (3300).
|
||||
# Requires a kernel that supports msgr v2 (kernel 5.11 or CentOS 8.4 or newer).
|
||||
requireMsgr2: false
|
||||
# # enable host networking
|
||||
# provider: host
|
||||
# # EXPERIMENTAL: enable the Multus network provider
|
||||
# provider: multus
|
||||
# selectors:
|
||||
# # The selector keys are required to be `public` and `cluster`.
|
||||
# # Based on the configuration, the operator will do the following:
|
||||
# # 1. if only the `public` selector key is specified both public_network and cluster_network Ceph settings will listen on that interface
|
||||
# # 2. if both `public` and `cluster` selector keys are specified the first one will point to 'public_network' flag and the second one to 'cluster_network'
|
||||
# #
|
||||
# # In order to work, each selector value must match a NetworkAttachmentDefinition object in Multus
|
||||
# #
|
||||
# # public: public-conf --> NetworkAttachmentDefinition object name in Multus
|
||||
# # cluster: cluster-conf --> NetworkAttachmentDefinition object name in Multus
|
||||
# # Provide internet protocol version. IPv6, IPv4 or empty string are valid options. Empty string would mean IPv4
|
||||
# ipFamily: "IPv6"
|
||||
# # Ceph daemons to listen on both IPv4 and Ipv6 networks
|
||||
# dualStack: false
|
||||
|
||||
# enable the crash collector for ceph daemon crash collection
|
||||
crashCollector:
|
||||
disable: false
|
||||
# Uncomment daysToRetain to prune ceph crash entries older than the
|
||||
# specified number of days.
|
||||
# daysToRetain: 30
|
||||
|
||||
# enable log collector, daemons will log on files and rotate
|
||||
logCollector:
|
||||
enabled: true
|
||||
periodicity: daily # one of: hourly, daily, weekly, monthly
|
||||
maxLogSize: 500M # SUFFIX may be 'M' or 'G'. Must be at least 1M.
|
||||
|
||||
# automate [data cleanup process](https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/ceph-teardown.md#delete-the-data-on-hosts) in cluster destruction.
|
||||
cleanupPolicy:
|
||||
# Since cluster cleanup is destructive to data, confirmation is required.
|
||||
# To destroy all Rook data on hosts during uninstall, confirmation must be set to "yes-really-destroy-data".
|
||||
# This value should only be set when the cluster is about to be deleted. After the confirmation is set,
|
||||
# Rook will immediately stop configuring the cluster and only wait for the delete command.
|
||||
# If the empty string is set, Rook will not destroy any data on hosts during uninstall.
|
||||
confirmation: ""
|
||||
# sanitizeDisks represents settings for sanitizing OSD disks on cluster deletion
|
||||
sanitizeDisks:
|
||||
# method indicates if the entire disk should be sanitized or simply ceph's metadata
|
||||
# in both case, re-install is possible
|
||||
# possible choices are 'complete' or 'quick' (default)
|
||||
method: quick
|
||||
# dataSource indicate where to get random bytes from to write on the disk
|
||||
# possible choices are 'zero' (default) or 'random'
|
||||
# using random sources will consume entropy from the system and will take much more time then the zero source
|
||||
dataSource: zero
|
||||
# iteration overwrite N times instead of the default (1)
|
||||
# takes an integer value
|
||||
iteration: 1
|
||||
# allowUninstallWithVolumes defines how the uninstall should be performed
|
||||
# If set to true, cephCluster deletion does not wait for the PVs to be deleted.
|
||||
allowUninstallWithVolumes: false
|
||||
|
||||
# To control where various services will be scheduled by kubernetes, use the placement configuration sections below.
|
||||
# The example under 'all' would have all services scheduled on kubernetes nodes labeled with 'role=storage-node' and
|
||||
# tolerate taints with a key of 'storage-node'.
|
||||
# placement:
|
||||
# all:
|
||||
# nodeAffinity:
|
||||
# requiredDuringSchedulingIgnoredDuringExecution:
|
||||
# nodeSelectorTerms:
|
||||
# - matchExpressions:
|
||||
# - key: role
|
||||
# operator: In
|
||||
# values:
|
||||
# - storage-node
|
||||
# podAffinity:
|
||||
# podAntiAffinity:
|
||||
# topologySpreadConstraints:
|
||||
# tolerations:
|
||||
# - key: storage-node
|
||||
# operator: Exists
|
||||
# # The above placement information can also be specified for mon, osd, and mgr components
|
||||
# mon:
|
||||
# # Monitor deployments may contain an anti-affinity rule for avoiding monitor
|
||||
# # collocation on the same node. This is a required rule when host network is used
|
||||
# # or when AllowMultiplePerNode is false. Otherwise this anti-affinity rule is a
|
||||
# # preferred rule with weight: 50.
|
||||
# osd:
|
||||
# mgr:
|
||||
# cleanup:
|
||||
|
||||
# annotations:
|
||||
# all:
|
||||
# mon:
|
||||
# osd:
|
||||
# cleanup:
|
||||
# prepareosd:
|
||||
# # If no mgr annotations are set, prometheus scrape annotations will be set by default.
|
||||
# mgr:
|
||||
# dashboard:
|
||||
|
||||
# labels:
|
||||
# all:
|
||||
# mon:
|
||||
# osd:
|
||||
# cleanup:
|
||||
# mgr:
|
||||
# prepareosd:
|
||||
# # monitoring is a list of key-value pairs. It is injected into all the monitoring resources created by operator.
|
||||
# # These labels can be passed as LabelSelector to Prometheus
|
||||
# monitoring:
|
||||
# dashboard:
|
||||
|
||||
resources:
|
||||
mgr:
|
||||
limits:
|
||||
memory: "1Gi"
|
||||
requests:
|
||||
cpu: "500m"
|
||||
memory: "512Mi"
|
||||
mon:
|
||||
limits:
|
||||
memory: "2Gi"
|
||||
requests:
|
||||
cpu: "1000m"
|
||||
memory: "1Gi"
|
||||
osd:
|
||||
limits:
|
||||
memory: "4Gi"
|
||||
requests:
|
||||
cpu: "1000m"
|
||||
memory: "4Gi"
|
||||
prepareosd:
|
||||
# limits: It is not recommended to set limits on the OSD prepare job
|
||||
# since it's a one-time burst for memory that must be allowed to
|
||||
# complete without an OOM kill. Note however that if a k8s
|
||||
# limitRange guardrail is defined external to Rook, the lack of
|
||||
# a limit here may result in a sync failure, in which case a
|
||||
# limit should be added. 1200Mi may suffice for up to 15Ti
|
||||
# OSDs; for larger devices 2Gi may be required.
|
||||
# cf. https://github.com/rook/rook/pull/11103
|
||||
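# For example, if an external limitRange does require a limit here, a value in
# line with the guidance above might look like (illustrative only):
# limits:
#   memory: "1200Mi"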
requests:
|
||||
cpu: "500m"
|
||||
memory: "50Mi"
|
||||
mgr-sidecar:
|
||||
limits:
|
||||
memory: "100Mi"
|
||||
requests:
|
||||
cpu: "100m"
|
||||
memory: "40Mi"
|
||||
crashcollector:
|
||||
limits:
|
||||
memory: "60Mi"
|
||||
requests:
|
||||
cpu: "100m"
|
||||
memory: "60Mi"
|
||||
logcollector:
|
||||
limits:
|
||||
memory: "1Gi"
|
||||
requests:
|
||||
cpu: "100m"
|
||||
memory: "100Mi"
|
||||
cleanup:
|
||||
limits:
|
||||
memory: "1Gi"
|
||||
requests:
|
||||
cpu: "500m"
|
||||
memory: "100Mi"
|
||||
exporter:
|
||||
limits:
|
||||
memory: "128Mi"
|
||||
requests:
|
||||
cpu: "50m"
|
||||
memory: "50Mi"
|
||||
|
||||
# The option to automatically remove OSDs that are out and are safe to destroy.
|
||||
removeOSDsIfOutAndSafeToRemove: false
|
||||
|
||||
# priority classes to apply to ceph resources
|
||||
priorityClassNames:
|
||||
mon: system-node-critical
|
||||
osd: system-node-critical
|
||||
mgr: system-cluster-critical
|
||||
|
||||
storage: # cluster level storage configuration and selection
|
||||
useAllNodes: true
|
||||
useAllDevices: true
|
||||
# deviceFilter:
|
||||
# config:
|
||||
# crushRoot: "custom-root" # specify a non-default root label for the CRUSH map
|
||||
# metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
|
||||
# databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
|
||||
# osdsPerDevice: "1" # this value can be overridden at the node or device level
|
||||
# encryptedDevice: "true" # the default value for this option is "false"
|
||||
# # Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the named
|
||||
# # nodes below will be used as storage resources. Each node's 'name' field should match their 'kubernetes.io/hostname' label.
|
||||
# nodes:
|
||||
# - name: "172.17.4.201"
|
||||
# devices: # specific devices to use for storage can be specified for each node
|
||||
# - name: "sdb"
|
||||
# - name: "nvme01" # multiple osds can be created on high performance devices
|
||||
# config:
|
||||
# osdsPerDevice: "5"
|
||||
# - name: "/dev/disk/by-id/ata-ST4000DM004-XXXX" # devices can be specified using full udev paths
|
||||
# config: # configuration can be specified at the node level which overrides the cluster level config
|
||||
# - name: "172.17.4.301"
|
||||
# deviceFilter: "^sd."
|
||||
|
||||
# The section for configuring management of daemon disruptions during upgrade or fencing.
|
||||
disruptionManagement:
|
||||
# If true, the operator will create and manage PodDisruptionBudgets for OSD, Mon, RGW, and MDS daemons. OSD PDBs are managed dynamically
|
||||
# via the strategy outlined in the [design](https://github.com/rook/rook/blob/master/design/ceph/ceph-managed-disruptionbudgets.md). The operator will
|
||||
# block eviction of OSDs by default and unblock them safely when drains are detected.
|
||||
managePodBudgets: true
|
||||
# A duration in minutes that determines how long an entire failureDomain like `region/zone/host` will be held in `noout` (in addition to the
|
||||
# default DOWN/OUT interval) when it is draining. This is only relevant when `managePodBudgets` is `true`. The default value is `30` minutes.
|
||||
osdMaintenanceTimeout: 30
|
||||
|
||||
# Configure the healthcheck and liveness probes for ceph pods.
|
||||
# Valid values for daemons are 'mon', 'osd', 'status'
|
||||
healthCheck:
|
||||
daemonHealth:
|
||||
mon:
|
||||
disabled: false
|
||||
interval: 45s
|
||||
osd:
|
||||
disabled: false
|
||||
interval: 60s
|
||||
status:
|
||||
disabled: false
|
||||
interval: 60s
|
||||
# Change the pod liveness probe; it applies to all mon, mgr, and osd pods.
|
||||
livenessProbe:
|
||||
mon:
|
||||
disabled: false
|
||||
mgr:
|
||||
disabled: false
|
||||
osd:
|
||||
disabled: false
|
||||
|
||||
ingress:
|
||||
# -- Enable an ingress for the ceph-dashboard
|
||||
dashboard:
|
||||
# {}
|
||||
# labels:
|
||||
# external-dns/private: "true"
|
||||
annotations:
|
||||
"route.openshift.io/termination": "passthrough"
|
||||
# external-dns.alpha.kubernetes.io/hostname: dashboard.example.com
|
||||
# nginx.ingress.kubernetes.io/rewrite-target: /ceph-dashboard/$2
|
||||
# If the dashboard has ssl: true the following will make sure the NGINX Ingress controller can expose the dashboard correctly
|
||||
# nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
|
||||
# nginx.ingress.kubernetes.io/server-snippet: |
|
||||
# proxy_ssl_verify off;
|
||||
host:
|
||||
name: ceph.apps.ncd0.harmony.mcd
|
||||
path: null # TODO the chart does not allow removing the path, which causes OpenShift to fail to create the route, because a path is not supported with termination mode passthrough
|
||||
pathType: ImplementationSpecific
|
||||
tls:
|
||||
- {}
|
||||
# secretName: testsecret-tls
|
||||
# Note: Only one of ingress class annotation or the `ingressClassName:` can be used at a time
|
||||
# to set the ingress class
|
||||
# ingressClassName: openshift-default
|
||||
# labels:
|
||||
# external-dns/private: "true"
|
||||
# annotations:
|
||||
# external-dns.alpha.kubernetes.io/hostname: dashboard.example.com
|
||||
# nginx.ingress.kubernetes.io/rewrite-target: /ceph-dashboard/$2
|
||||
# If the dashboard has ssl: true the following will make sure the NGINX Ingress controller can expose the dashboard correctly
|
||||
# nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
|
||||
# nginx.ingress.kubernetes.io/server-snippet: |
|
||||
# proxy_ssl_verify off;
|
||||
# host:
|
||||
# name: dashboard.example.com
|
||||
# path: "/ceph-dashboard(/|$)(.*)"
|
||||
# pathType: Prefix
|
||||
# tls:
|
||||
# - hosts:
|
||||
# - dashboard.example.com
|
||||
# secretName: testsecret-tls
|
||||
## Note: Only one of ingress class annotation or the `ingressClassName:` can be used at a time
|
||||
## to set the ingress class
|
||||
# ingressClassName: nginx
|
||||
|
||||
# -- A list of CephBlockPool configurations to deploy
|
||||
# @default -- See [below](#ceph-block-pools)
|
||||
cephBlockPools:
|
||||
- name: ceph-blockpool
|
||||
# see https://github.com/rook/rook/blob/master/Documentation/CRDs/Block-Storage/ceph-block-pool-crd.md#spec for available configuration
|
||||
spec:
|
||||
failureDomain: host
|
||||
replicated:
|
||||
size: 3
|
||||
# Enables collecting RBD per-image IO statistics by enabling dynamic OSD performance counters. Defaults to false.
|
||||
# For reference: https://docs.ceph.com/docs/latest/mgr/prometheus/#rbd-io-statistics
|
||||
# enableRBDStats: true
|
||||
storageClass:
|
||||
enabled: true
|
||||
name: ceph-block
|
||||
annotations: {}
|
||||
labels: {}
|
||||
isDefault: true
|
||||
reclaimPolicy: Delete
|
||||
allowVolumeExpansion: true
|
||||
volumeBindingMode: "Immediate"
|
||||
mountOptions: []
|
||||
# see https://kubernetes.io/docs/concepts/storage/storage-classes/#allowed-topologies
|
||||
allowedTopologies: []
|
||||
# - matchLabelExpressions:
|
||||
# - key: rook-ceph-role
|
||||
# values:
|
||||
# - storage-node
|
||||
# see https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/Block-Storage-RBD/block-storage.md#provision-storage for available configuration
|
||||
parameters:
|
||||
# (optional) mapOptions is a comma-separated list of map options.
|
||||
# For krbd options refer
|
||||
# https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options
|
||||
# For nbd options refer
|
||||
# https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options
|
||||
# mapOptions: lock_on_read,queue_depth=1024
|
||||
|
||||
# (optional) unmapOptions is a comma-separated list of unmap options.
|
||||
# For krbd options refer
|
||||
# https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options
|
||||
# For nbd options refer
|
||||
# https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options
|
||||
# unmapOptions: force
|
||||
|
||||
# RBD image format. Defaults to "2".
|
||||
imageFormat: "2"
|
||||
|
||||
# RBD image features, equivalent to OR'd bitfield value: 63
|
||||
# Available for imageFormat: "2". Older releases of CSI RBD
|
||||
# support only the `layering` feature. The Linux kernel (KRBD) supports the
|
||||
# full feature complement as of 5.4
|
||||
imageFeatures: layering
|
||||
|
||||
# These secrets contain Ceph admin credentials.
|
||||
csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
|
||||
csi.storage.k8s.io/provisioner-secret-namespace: "{{ .Release.Namespace }}"
|
||||
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
|
||||
csi.storage.k8s.io/controller-expand-secret-namespace: "{{ .Release.Namespace }}"
|
||||
csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
|
||||
csi.storage.k8s.io/node-stage-secret-namespace: "{{ .Release.Namespace }}"
|
||||
# Specify the filesystem type of the volume. If not specified, csi-provisioner
|
||||
# will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
|
||||
# in hyperconverged settings where the volume is mounted on the same node as the osds.
|
||||
csi.storage.k8s.io/fstype: ext4
|
||||
|
||||
# -- A list of CephFileSystem configurations to deploy
|
||||
# @default -- See [below](#ceph-file-systems)
|
||||
cephFileSystems:
|
||||
- name: ceph-filesystem
|
||||
# see https://github.com/rook/rook/blob/master/Documentation/CRDs/Shared-Filesystem/ceph-filesystem-crd.md#filesystem-settings for available configuration
|
||||
spec:
|
||||
metadataPool:
|
||||
replicated:
|
||||
size: 3
|
||||
dataPools:
|
||||
- failureDomain: host
|
||||
replicated:
|
||||
size: 3
|
||||
# Optional and highly recommended, 'data0' by default, see https://github.com/rook/rook/blob/master/Documentation/CRDs/Shared-Filesystem/ceph-filesystem-crd.md#pools
|
||||
name: data0
|
||||
metadataServer:
|
||||
activeCount: 1
|
||||
activeStandby: true
|
||||
resources:
|
||||
limits:
|
||||
memory: "4Gi"
|
||||
requests:
|
||||
cpu: "1000m"
|
||||
memory: "4Gi"
|
||||
priorityClassName: system-cluster-critical
|
||||
storageClass:
|
||||
enabled: true
|
||||
isDefault: false
|
||||
name: ceph-filesystem
|
||||
# (Optional) specify a data pool to use, must be the name of one of the data pools above, 'data0' by default
|
||||
pool: data0
|
||||
reclaimPolicy: Delete
|
||||
allowVolumeExpansion: true
|
||||
volumeBindingMode: "Immediate"
|
||||
annotations: {}
|
||||
labels: {}
|
||||
mountOptions: []
|
||||
# see https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage.md#provision-storage for available configuration
|
||||
parameters:
|
||||
# The secrets contain Ceph admin credentials.
|
||||
csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
|
||||
csi.storage.k8s.io/provisioner-secret-namespace: "{{ .Release.Namespace }}"
|
||||
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
|
||||
csi.storage.k8s.io/controller-expand-secret-namespace: "{{ .Release.Namespace }}"
|
||||
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
|
||||
csi.storage.k8s.io/node-stage-secret-namespace: "{{ .Release.Namespace }}"
|
||||
# Specify the filesystem type of the volume. If not specified, csi-provisioner
|
||||
# will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
|
||||
# in hyperconverged settings where the volume is mounted on the same node as the osds.
|
||||
csi.storage.k8s.io/fstype: ext4
|
||||
|
||||
# -- Settings for the filesystem snapshot class
|
||||
# @default -- See [CephFS Snapshots](../Storage-Configuration/Ceph-CSI/ceph-csi-snapshot.md#cephfs-snapshots)
|
||||
cephFileSystemVolumeSnapshotClass:
|
||||
enabled: false
|
||||
name: ceph-filesystem
|
||||
isDefault: true
|
||||
deletionPolicy: Delete
|
||||
annotations: {}
|
||||
labels: {}
|
||||
# see https://rook.io/docs/rook/v1.10/Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#cephfs-snapshots for available configuration
|
||||
parameters: {}
|
||||
|
||||
# -- Settings for the block pool snapshot class
|
||||
# @default -- See [RBD Snapshots](../Storage-Configuration/Ceph-CSI/ceph-csi-snapshot.md#rbd-snapshots)
|
||||
cephBlockPoolsVolumeSnapshotClass:
|
||||
enabled: false
|
||||
name: ceph-block
|
||||
isDefault: false
|
||||
deletionPolicy: Delete
|
||||
annotations: {}
|
||||
labels: {}
|
||||
# see https://rook.io/docs/rook/v1.10/Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#rbd-snapshots for available configuration
|
||||
parameters: {}
|
||||
|
||||
# -- A list of CephObjectStore configurations to deploy
|
||||
# @default -- See [below](#ceph-object-stores)
|
||||
cephObjectStores:
|
||||
- name: ceph-objectstore
|
||||
# see https://github.com/rook/rook/blob/master/Documentation/CRDs/Object-Storage/ceph-object-store-crd.md#object-store-settings for available configuration
|
||||
spec:
|
||||
metadataPool:
|
||||
failureDomain: host
|
||||
replicated:
|
||||
size: 3
|
||||
dataPool:
|
||||
failureDomain: host
|
||||
erasureCoded:
|
||||
dataChunks: 2
|
||||
codingChunks: 1
|
||||
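# Note: with dataChunks: 2 and codingChunks: 1, each object is split into 2 data
# chunks plus 1 coding chunk, so usable capacity is 2/3 of the raw capacity and
# the data pool tolerates the loss of a single failure domain (a host here).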
parameters:
|
||||
bulk: "true"
|
||||
preservePoolsOnDelete: true
|
||||
gateway:
|
||||
port: 80
|
||||
resources:
|
||||
limits:
|
||||
memory: "2Gi"
|
||||
requests:
|
||||
cpu: "1000m"
|
||||
memory: "1Gi"
|
||||
# securePort: 443
|
||||
# sslCertificateRef:
|
||||
instances: 1
|
||||
priorityClassName: system-cluster-critical
|
||||
# opsLogSidecar:
|
||||
# resources:
|
||||
# limits:
|
||||
# memory: "100Mi"
|
||||
# requests:
|
||||
# cpu: "100m"
|
||||
# memory: "40Mi"
|
||||
storageClass:
|
||||
enabled: true
|
||||
name: ceph-bucket
|
||||
reclaimPolicy: Delete
|
||||
volumeBindingMode: "Immediate"
|
||||
annotations: {}
|
||||
labels: {}
|
||||
# see https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim.md#storageclass for available configuration
|
||||
parameters:
|
||||
# note: objectStoreNamespace and objectStoreName are configured by the chart
|
||||
region: us-east-1
|
||||
ingress:
|
||||
# Enable an ingress for the ceph-objectstore
|
||||
enabled: true
|
||||
# The ingress port by default will be the object store's "securePort" (if set), or the gateway "port".
|
||||
# To override those defaults, set this ingress port to the desired port.
|
||||
# port: 80
|
||||
# annotations: {}
|
||||
host:
|
||||
name: objectstore.apps.ncd0.harmony.mcd
|
||||
path: /
|
||||
pathType: Prefix
|
||||
# tls:
|
||||
# - hosts:
|
||||
# - objectstore.example.com
|
||||
# secretName: ceph-objectstore-tls
|
||||
# ingressClassName: nginx
|
||||
## cephECBlockPools are disabled by default, please remove the comments and set desired values to enable it
|
||||
## For erasure coding, a replicated metadata pool is required.
|
||||
## https://rook.io/docs/rook/latest/CRDs/Shared-Filesystem/ceph-filesystem-crd/#erasure-coded
|
||||
#cephECBlockPools:
|
||||
# - name: ec-pool
|
||||
# spec:
|
||||
# metadataPool:
|
||||
# replicated:
|
||||
# size: 2
|
||||
# dataPool:
|
||||
# failureDomain: osd
|
||||
# erasureCoded:
|
||||
# dataChunks: 2
|
||||
# codingChunks: 1
|
||||
# deviceClass: hdd
|
||||
#
|
||||
# parameters:
|
||||
# # clusterID is the namespace where the rook cluster is running
|
||||
# # If you change this namespace, also change the namespace below where the secret namespaces are defined
|
||||
# clusterID: rook-ceph # namespace:cluster
|
||||
# # (optional) mapOptions is a comma-separated list of map options.
|
||||
# # For krbd options refer
|
||||
# # https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options
|
||||
# # For nbd options refer
|
||||
# # https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options
|
||||
# # mapOptions: lock_on_read,queue_depth=1024
|
||||
#
|
||||
# # (optional) unmapOptions is a comma-separated list of unmap options.
|
||||
# # For krbd options refer
|
||||
# # https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options
|
||||
# # For nbd options refer
|
||||
# # https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options
|
||||
# # unmapOptions: force
|
||||
#
|
||||
# # RBD image format. Defaults to "2".
|
||||
# imageFormat: "2"
|
||||
#
|
||||
# # RBD image features, equivalent to OR'd bitfield value: 63
|
||||
# # Available for imageFormat: "2". Older releases of CSI RBD
|
||||
# # support only the `layering` feature. The Linux kernel (KRBD) supports the
|
||||
# # full feature complement as of 5.4
|
||||
# # imageFeatures: layering,fast-diff,object-map,deep-flatten,exclusive-lock
|
||||
# imageFeatures: layering
|
||||
#
|
||||
# storageClass:
|
||||
# provisioner: rook-ceph.rbd.csi.ceph.com # csi-provisioner-name
|
||||
# enabled: true
|
||||
# name: rook-ceph-block
|
||||
# isDefault: false
|
||||
# annotations: { }
|
||||
# labels: { }
|
||||
# allowVolumeExpansion: true
|
||||
# reclaimPolicy: Delete
|
||||
|
||||
# -- CSI driver name prefix for cephfs, rbd and nfs.
|
||||
# @default -- `namespace name where rook-ceph operator is deployed`
|
||||
csiDriverNamePrefix:
|
||||
3
examples/nanodc/rook-operator/install-rook-operator.sh
Normal file
@@ -0,0 +1,3 @@
|
||||
#!/bin/bash
|
||||
helm repo add rook-release https://charts.rook.io/release
|
||||
helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph -f values.yaml
|
||||
674
examples/nanodc/rook-operator/values.yaml
Normal file
@@ -0,0 +1,674 @@
|
||||
# Default values for rook-ceph-operator
|
||||
# This is a YAML-formatted file.
|
||||
# Declare variables to be passed into your templates.
|
||||
|
||||
image:
|
||||
# -- Image
|
||||
repository: docker.io/rook/ceph
|
||||
# -- Image tag
|
||||
# @default -- `master`
|
||||
tag: v1.17.1
|
||||
# -- Image pull policy
|
||||
pullPolicy: IfNotPresent
|
||||
|
||||
crds:
|
||||
# -- Whether the helm chart should create and update the CRDs. If false, the CRDs must be
|
||||
# managed independently with deploy/examples/crds.yaml.
|
||||
# **WARNING** Only set during first deployment. If later disabled the cluster may be DESTROYED.
|
||||
# If the CRDs are deleted in this case, see
|
||||
# [the disaster recovery guide](https://rook.io/docs/rook/latest/Troubleshooting/disaster-recovery/#restoring-crds-after-deletion)
|
||||
# to restore them.
|
||||
enabled: true
|
||||
|
||||
# -- Pod resource requests & limits
|
||||
resources:
|
||||
limits:
|
||||
memory: 512Mi
|
||||
requests:
|
||||
cpu: 200m
|
||||
memory: 128Mi
|
||||
|
||||
# -- Kubernetes [`nodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) to add to the Deployment.
|
||||
nodeSelector: {}
|
||||
# Constrain the rook-ceph-operator Deployment to nodes with the label `disktype: ssd`.
|
||||
# For more info, see https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
|
||||
# disktype: ssd
|
||||
|
||||
# -- List of Kubernetes [`tolerations`](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) to add to the Deployment.
|
||||
tolerations: []
|
||||
|
||||
# -- Delay to use for the `node.kubernetes.io/unreachable` pod failure toleration to override
|
||||
# the Kubernetes default of 5 minutes
|
||||
unreachableNodeTolerationSeconds: 5
|
||||
|
||||
# -- Whether the operator should watch cluster CRD in its own namespace or not
|
||||
currentNamespaceOnly: false
|
||||
|
||||
# -- Custom pod labels for the operator
|
||||
operatorPodLabels: {}
|
||||
|
||||
# -- Pod annotations
|
||||
annotations: {}
|
||||
|
||||
# -- Global log level for the operator.
|
||||
# Options: `ERROR`, `WARNING`, `INFO`, `DEBUG`
|
||||
logLevel: INFO
|
||||
|
||||
# -- If true, create & use RBAC resources
|
||||
rbacEnable: true
|
||||
|
||||
rbacAggregate:
|
||||
# -- If true, create a ClusterRole aggregated to [user facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) for objectbucketclaims
|
||||
enableOBCs: false
|
||||
|
||||
# -- If true, create & use PSP resources
|
||||
pspEnable: false
|
||||
|
||||
# -- Set the priority class for the rook operator deployment if desired
|
||||
priorityClassName:
|
||||
|
||||
# -- Set the container security context for the operator
|
||||
containerSecurityContext:
|
||||
runAsNonRoot: true
|
||||
runAsUser: 2016
|
||||
runAsGroup: 2016
|
||||
capabilities:
|
||||
drop: ["ALL"]
|
||||
# -- If true, loop devices are allowed to be used for osds in test clusters
|
||||
allowLoopDevices: false
|
||||
|
||||
# Settings for whether to disable the drivers or other daemons if they are not
|
||||
# needed
|
||||
csi:
|
||||
# -- Enable Ceph CSI RBD driver
|
||||
enableRbdDriver: true
|
||||
# -- Enable Ceph CSI CephFS driver
|
||||
enableCephfsDriver: true
|
||||
# -- Disable the CSI driver.
|
||||
disableCsiDriver: "false"
|
||||
|
||||
# -- Enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary
|
||||
# in some network configurations where the SDN does not provide access to an external cluster or
|
||||
# there is a significant drop in read/write performance
|
||||
enableCSIHostNetwork: true
|
||||
# -- Enable Snapshotter in CephFS provisioner pod
|
||||
enableCephfsSnapshotter: true
|
||||
# -- Enable Snapshotter in NFS provisioner pod
|
||||
enableNFSSnapshotter: true
|
||||
# -- Enable Snapshotter in RBD provisioner pod
|
||||
enableRBDSnapshotter: true
|
||||
# -- Enable Host mount for `/etc/selinux` directory for Ceph CSI nodeplugins
|
||||
enablePluginSelinuxHostMount: false
|
||||
# -- Enable Ceph CSI PVC encryption support
|
||||
enableCSIEncryption: false
|
||||
|
||||
# -- Enable volume group snapshot feature. This feature is
|
||||
# enabled by default as long as the necessary CRDs are available in the cluster.
|
||||
enableVolumeGroupSnapshot: true
|
||||
# -- PriorityClassName to be set on csi driver plugin pods
|
||||
pluginPriorityClassName: system-node-critical
|
||||
|
||||
# -- PriorityClassName to be set on csi driver provisioner pods
|
||||
provisionerPriorityClassName: system-cluster-critical
|
||||
|
||||
# -- Policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted.
|
||||
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
|
||||
rbdFSGroupPolicy: "File"
|
||||
|
||||
# -- Policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted.
|
||||
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
|
||||
cephFSFSGroupPolicy: "File"
|
||||
|
||||
# -- Policy for modifying a volume's ownership or permissions when the NFS PVC is being mounted.
|
||||
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
|
||||
nfsFSGroupPolicy: "File"
|
||||
|
||||
# -- OMAP generator generates the omap mapping between the PV name and the RBD image
|
||||
# which helps CSI to identify the rbd images for CSI operations.
|
||||
# `CSI_ENABLE_OMAP_GENERATOR` needs to be enabled when using the rbd mirroring feature.
|
||||
# By default OMAP generator is disabled and when enabled, it will be deployed as a
|
||||
# sidecar with the CSI provisioner pod; set this to true to enable it.
|
||||
enableOMAPGenerator: false
|
||||
|
||||
# -- Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options.
|
||||
# Set to "ms_mode=secure" when connections.encrypted is enabled in CephCluster CR
|
||||
cephFSKernelMountOptions:
|
||||
|
||||
# -- Enable adding volume metadata on the CephFS subvolumes and RBD images.
|
||||
# Not all users might be interested in getting volume/snapshot details as metadata on CephFS subvolume and RBD images.
|
||||
# Hence enableMetadata is false by default
|
||||
enableMetadata: false
|
||||
|
||||
# -- Set replicas for csi provisioner deployment
|
||||
provisionerReplicas: 2
|
||||
|
||||
# -- Cluster name identifier to set as metadata on the CephFS subvolume and RBD images. This will be useful
|
||||
# for example, when two container orchestrator clusters (Kubernetes/OCP) are using a single ceph cluster
|
||||
clusterName:
|
||||
|
||||
# -- Set the logging level for cephCSI containers maintained by cephCSI.
|
||||
# Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity.
|
||||
logLevel: 0
|
||||
|
||||
# -- Set logging level for Kubernetes-csi sidecar containers.
|
||||
# Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity.
|
||||
# @default -- `0`
|
||||
sidecarLogLevel:
|
||||
|
||||
# -- CSI driver name prefix for cephfs, rbd and nfs.
|
||||
# @default -- `namespace name where rook-ceph operator is deployed`
|
||||
csiDriverNamePrefix:
|
||||
|
||||
# -- CSI RBD plugin daemonset update strategy, supported values are OnDelete and RollingUpdate
|
||||
# @default -- `RollingUpdate`
|
||||
rbdPluginUpdateStrategy:
|
||||
|
||||
# -- A maxUnavailable parameter of CSI RBD plugin daemonset update strategy.
|
||||
# @default -- `1`
|
||||
rbdPluginUpdateStrategyMaxUnavailable:
|
||||
|
||||
# -- CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate
|
||||
# @default -- `RollingUpdate`
|
||||
cephFSPluginUpdateStrategy:
|
||||
|
||||
# -- A maxUnavailable parameter of CSI cephFS plugin daemonset update strategy.
|
||||
# @default -- `1`
|
||||
cephFSPluginUpdateStrategyMaxUnavailable:
|
||||
|
||||
# -- CSI NFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate
|
||||
# @default -- `RollingUpdate`
|
||||
nfsPluginUpdateStrategy:
|
||||
|
||||
# -- Set GRPC timeout for csi containers (in seconds). It should be >= 120. If this value is not set or is invalid, it defaults to 150
|
||||
grpcTimeoutInSeconds: 150
|
||||
|
||||
# -- Burst to use while communicating with the kubernetes apiserver.
|
||||
kubeApiBurst:
|
||||
|
||||
# -- QPS to use while communicating with the kubernetes apiserver.
|
||||
kubeApiQPS:
|
||||
|
||||
# -- The volume of the CephCSI RBD plugin DaemonSet
|
||||
csiRBDPluginVolume:
|
||||
# - name: lib-modules
|
||||
# hostPath:
|
||||
# path: /run/booted-system/kernel-modules/lib/modules/
|
||||
# - name: host-nix
|
||||
# hostPath:
|
||||
# path: /nix
|
||||
|
||||
# -- The volume mounts of the CephCSI RBD plugin DaemonSet
|
||||
csiRBDPluginVolumeMount:
|
||||
# - name: host-nix
|
||||
# mountPath: /nix
|
||||
# readOnly: true
|
||||
|
||||
# -- The volume of the CephCSI CephFS plugin DaemonSet
|
||||
csiCephFSPluginVolume:
|
||||
# - name: lib-modules
|
||||
# hostPath:
|
||||
# path: /run/booted-system/kernel-modules/lib/modules/
|
||||
# - name: host-nix
|
||||
# hostPath:
|
||||
# path: /nix
|
||||
|
||||
# -- The volume mounts of the CephCSI CephFS plugin DaemonSet
|
||||
csiCephFSPluginVolumeMount:
|
||||
# - name: host-nix
|
||||
# mountPath: /nix
|
||||
# readOnly: true
|
||||
|
||||
# -- CEPH CSI RBD provisioner resource requirement list
|
||||
# csi-omap-generator resources will be applied only if `enableOMAPGenerator` is set to `true`
|
||||
# @default -- see values.yaml
|
||||
csiRBDProvisionerResource: |
|
||||
- name : csi-provisioner
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 100m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
- name : csi-resizer
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 100m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
- name : csi-attacher
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 100m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
- name : csi-snapshotter
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 100m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
- name : csi-rbdplugin
|
||||
resource:
|
||||
requests:
|
||||
memory: 512Mi
|
||||
limits:
|
||||
memory: 1Gi
|
||||
- name : csi-omap-generator
|
||||
resource:
|
||||
requests:
|
||||
memory: 512Mi
|
||||
cpu: 250m
|
||||
limits:
|
||||
memory: 1Gi
|
||||
- name : liveness-prometheus
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 50m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
|
||||
# -- CEPH CSI RBD plugin resource requirement list
|
||||
# @default -- see values.yaml
|
||||
csiRBDPluginResource: |
|
||||
- name : driver-registrar
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 50m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
- name : csi-rbdplugin
|
||||
resource:
|
||||
requests:
|
||||
memory: 512Mi
|
||||
cpu: 250m
|
||||
limits:
|
||||
memory: 1Gi
|
||||
- name : liveness-prometheus
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 50m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
|
||||
# -- CEPH CSI CephFS provisioner resource requirement list
|
||||
# @default -- see values.yaml
|
||||
csiCephFSProvisionerResource: |
|
||||
- name : csi-provisioner
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 100m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
- name : csi-resizer
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 100m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
- name : csi-attacher
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 100m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
- name : csi-snapshotter
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 100m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
- name : csi-cephfsplugin
|
||||
resource:
|
||||
requests:
|
||||
memory: 512Mi
|
||||
cpu: 250m
|
||||
limits:
|
||||
memory: 1Gi
|
||||
- name : liveness-prometheus
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 50m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
|
||||
# -- CEPH CSI CephFS plugin resource requirement list
|
||||
# @default -- see values.yaml
|
||||
csiCephFSPluginResource: |
|
||||
- name : driver-registrar
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 50m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
- name : csi-cephfsplugin
|
||||
resource:
|
||||
requests:
|
||||
memory: 512Mi
|
||||
cpu: 250m
|
||||
limits:
|
||||
memory: 1Gi
|
||||
- name : liveness-prometheus
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 50m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
|
||||
# -- CEPH CSI NFS provisioner resource requirement list
|
||||
# @default -- see values.yaml
|
||||
csiNFSProvisionerResource: |
|
||||
- name : csi-provisioner
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 100m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
- name : csi-nfsplugin
|
||||
resource:
|
||||
requests:
|
||||
memory: 512Mi
|
||||
cpu: 250m
|
||||
limits:
|
||||
memory: 1Gi
|
||||
- name : csi-attacher
|
||||
resource:
|
||||
requests:
|
||||
memory: 512Mi
|
||||
cpu: 250m
|
||||
limits:
|
||||
memory: 1Gi
|
||||
|
||||
# -- CEPH CSI NFS plugin resource requirement list
|
||||
# @default -- see values.yaml
|
||||
csiNFSPluginResource: |
|
||||
- name : driver-registrar
|
||||
resource:
|
||||
requests:
|
||||
memory: 128Mi
|
||||
cpu: 50m
|
||||
limits:
|
||||
memory: 256Mi
|
||||
- name : csi-nfsplugin
|
||||
resource:
|
||||
requests:
|
||||
memory: 512Mi
|
||||
cpu: 250m
|
||||
limits:
|
||||
memory: 1Gi
|
||||
|
||||
# Set provisionerTolerations and provisionerNodeAffinity for provisioner pod.
|
||||
# The CSI provisioner is best started on the same nodes as the other ceph daemons.
|
||||
|
||||
# -- Array of tolerations in YAML format which will be added to CSI provisioner deployment
|
||||
provisionerTolerations:
|
||||
# - key: key
|
||||
# operator: Exists
|
||||
# effect: NoSchedule
|
||||
|
||||
# -- The node labels for affinity of the CSI provisioner deployment [^1]
|
||||
provisionerNodeAffinity: #key1=value1,value2; key2=value3
|
||||
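# For example, using the "key=value" syntax hinted at above (illustrative label):
# provisionerNodeAffinity: "role=storage-node"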
# Set pluginTolerations and pluginNodeAffinity for plugin daemonset pods.
|
||||
# The CSI plugins need to be started on all the nodes where the clients need to mount the storage.
|
||||
|
||||
# -- Array of tolerations in YAML format which will be added to CephCSI plugin DaemonSet
|
||||
pluginTolerations:
|
||||
# - key: key
|
||||
# operator: Exists
|
||||
# effect: NoSchedule
|
||||
|
||||
# -- The node labels for affinity of the CephCSI RBD plugin DaemonSet [^1]
|
||||
pluginNodeAffinity: # key1=value1,value2; key2=value3
|
||||
|
||||
# -- Enable Ceph CSI Liveness sidecar deployment
|
||||
enableLiveness: false
|
||||
|
||||
# -- CSI CephFS driver metrics port
|
||||
# @default -- `9081`
|
||||
cephfsLivenessMetricsPort:
|
||||
|
||||
# -- CSI Addons server port
|
||||
# @default -- `9070`
|
||||
csiAddonsPort:
|
||||
# -- CSI Addons server port for the RBD provisioner
|
||||
# @default -- `9070`
|
||||
csiAddonsRBDProvisionerPort:
|
||||
# -- CSI Addons server port for the Ceph FS provisioner
|
||||
# @default -- `9070`
|
||||
csiAddonsCephFSProvisionerPort:
|
||||
|
||||
# -- Enable Ceph Kernel clients on kernel < 4.17. If your kernel does not support quotas for CephFS
|
||||
# you may want to disable this setting. However, this will cause an issue during upgrades
|
||||
# with the FUSE client. See the [upgrade guide](https://rook.io/docs/rook/v1.2/ceph-upgrade.html)
|
||||
forceCephFSKernelClient: true
|
||||
|
||||
# -- Ceph CSI RBD driver metrics port
|
||||
# @default -- `8080`
|
||||
rbdLivenessMetricsPort:
|
||||
|
||||
serviceMonitor:
|
||||
# -- Enable ServiceMonitor for Ceph CSI drivers
|
||||
enabled: false
|
||||
# -- Service monitor scrape interval
|
||||
interval: 10s
|
||||
# -- ServiceMonitor additional labels
|
||||
labels: {}
|
||||
# -- Use a different namespace for the ServiceMonitor
|
||||
namespace:
|
||||
|
||||
# -- Kubelet root directory path (if the Kubelet uses a different path for the `--root-dir` flag)
|
||||
# @default -- `/var/lib/kubelet`
|
||||
kubeletDirPath:
|
||||
|
||||
# -- Duration in seconds that non-leader candidates will wait to force acquire leadership.
|
||||
# @default -- `137s`
|
||||
csiLeaderElectionLeaseDuration:
|
||||
|
||||
# -- Deadline in seconds that the acting leader will retry refreshing leadership before giving up.
|
||||
# @default -- `107s`
|
||||
csiLeaderElectionRenewDeadline:
|
||||
|
||||
# -- Retry period in seconds the LeaderElector clients should wait between tries of actions.
|
||||
# @default -- `26s`
|
||||
csiLeaderElectionRetryPeriod:
|
||||
|
||||
cephcsi:
|
||||
# -- Ceph CSI image repository
|
||||
repository: quay.io/cephcsi/cephcsi
|
||||
# -- Ceph CSI image tag
|
||||
tag: v3.14.0
|
||||
|
||||
registrar:
|
||||
# -- Kubernetes CSI registrar image repository
|
||||
repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
|
||||
# -- Registrar image tag
|
||||
tag: v2.13.0
|
||||
|
||||
provisioner:
|
||||
# -- Kubernetes CSI provisioner image repository
|
||||
repository: registry.k8s.io/sig-storage/csi-provisioner
|
||||
# -- Provisioner image tag
|
||||
tag: v5.1.0
|
||||
|
||||
snapshotter:
|
||||
# -- Kubernetes CSI snapshotter image repository
|
||||
repository: registry.k8s.io/sig-storage/csi-snapshotter
|
||||
# -- Snapshotter image tag
|
||||
tag: v8.2.0
|
||||
|
||||
attacher:
|
||||
# -- Kubernetes CSI Attacher image repository
|
||||
repository: registry.k8s.io/sig-storage/csi-attacher
|
||||
# -- Attacher image tag
|
||||
tag: v4.8.0
|
||||
|
||||
resizer:
|
||||
# -- Kubernetes CSI resizer image repository
|
||||
repository: registry.k8s.io/sig-storage/csi-resizer
|
||||
# -- Resizer image tag
|
||||
tag: v1.13.1
|
||||
|
||||
# -- Image pull policy
|
||||
imagePullPolicy: IfNotPresent
|
||||
|
||||
# -- Labels to add to the CSI CephFS Deployments and DaemonSets Pods
|
||||
cephfsPodLabels: #"key1=value1,key2=value2"
|
||||
|
||||
# -- Labels to add to the CSI NFS Deployments and DaemonSets Pods
|
||||
nfsPodLabels: #"key1=value1,key2=value2"
|
||||
|
||||
# -- Labels to add to the CSI RBD Deployments and DaemonSets Pods
|
||||
rbdPodLabels: #"key1=value1,key2=value2"
|
||||
|
||||
csiAddons:
|
||||
# -- Enable CSIAddons
|
||||
enabled: false
|
||||
# -- CSIAddons sidecar image repository
|
||||
repository: quay.io/csiaddons/k8s-sidecar
|
||||
# -- CSIAddons sidecar image tag
|
||||
tag: v0.12.0
|
||||
|
||||
nfs:
|
||||
# -- Enable the nfs csi driver
|
||||
enabled: false
|
||||
|
||||
topology:
|
||||
# -- Enable topology based provisioning
|
||||
enabled: false
|
||||
# NOTE: the value here serves as an example and needs to be
|
||||
# updated with node labels that define domains of interest
|
||||
# -- domainLabels define which node labels to use as domains
|
||||
# for CSI nodeplugins to advertise their domains
|
||||
domainLabels:
|
||||
# - kubernetes.io/hostname
|
||||
# - topology.kubernetes.io/zone
|
||||
# - topology.rook.io/rack
|
||||
|
||||
# -- Whether to skip any attach operation altogether for CephFS PVCs. See more details
|
||||
# [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
|
||||
# If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation
|
||||
# of pods using the CephFS PVC fast. **WARNING** It's highly discouraged to use this for
|
||||
# CephFS RWO volumes. Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
|
||||
cephFSAttachRequired: true
|
||||
# -- Whether to skip any attach operation altogether for RBD PVCs. See more details
|
||||
# [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
|
||||
# If set to false it skips the volume attachments and makes the creation of pods using the RBD PVC fast.
|
||||
# **WARNING** It's highly discouraged to use this for RWO volumes as it can cause data corruption.
|
||||
# csi-addons operations like Reclaimspace and PVC Keyrotation will also not be supported if set
|
||||
# to false since we'll have no VolumeAttachments to determine which node the PVC is mounted on.
|
||||
# Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
|
||||
rbdAttachRequired: true
|
||||
# -- Whether to skip any attach operation altogether for NFS PVCs. See more details
|
||||
# [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
|
||||
# If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation
|
||||
# of pods using the NFS PVC fast. **WARNING** It's highly discouraged to use this for
|
||||
# NFS RWO volumes. Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
|
||||
nfsAttachRequired: true
|
||||
|
||||
# -- Enable discovery daemon
|
||||
enableDiscoveryDaemon: false
|
||||
# -- Set the discovery daemon device discovery interval (default to 60m)
|
||||
discoveryDaemonInterval: 60m
|
||||
|
||||
# -- The timeout for ceph commands in seconds
|
||||
cephCommandsTimeoutSeconds: "15"
|
||||
|
||||
# -- If true, run rook operator on the host network
|
||||
useOperatorHostNetwork:
|
||||
|
||||
# -- If true, scale down the rook operator.
|
||||
# This is useful for administrative actions where the rook operator must be scaled down, while using gitops style tooling
|
||||
# to deploy your helm charts.
|
||||
scaleDownOperator: false
|
||||
|
||||
## Rook Discover configuration
|
||||
## toleration: NoSchedule, PreferNoSchedule or NoExecute
|
||||
## tolerationKey: Set this to the specific key of the taint to tolerate
|
||||
## tolerations: Array of tolerations in YAML format which will be added to agent deployment
|
||||
## nodeAffinity: Set to labels of the node to match
|
||||
|
||||
discover:
|
||||
# -- Toleration for the discover pods.
|
||||
# Options: `NoSchedule`, `PreferNoSchedule` or `NoExecute`
|
||||
toleration:
|
||||
# -- The specific key of the taint to tolerate
|
||||
tolerationKey:
|
||||
# -- Array of tolerations in YAML format which will be added to discover deployment
|
||||
tolerations:
|
||||
# - key: key
|
||||
# operator: Exists
|
||||
# effect: NoSchedule
|
||||
# -- The node labels for affinity of `discover-agent` [^1]
|
||||
nodeAffinity:
|
||||
# key1=value1,value2; key2=value3
|
||||
#
|
||||
# or
|
||||
#
|
||||
# requiredDuringSchedulingIgnoredDuringExecution:
|
||||
# nodeSelectorTerms:
|
||||
# - matchExpressions:
|
||||
# - key: storage-node
|
||||
# operator: Exists
|
||||
# -- Labels to add to the discover pods
|
||||
podLabels: # "key1=value1,key2=value2"
|
||||
# -- Add resources to discover daemon pods
|
||||
resources:
|
||||
# - limits:
|
||||
# memory: 512Mi
|
||||
# - requests:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
|
||||
# -- Custom label to identify node hostname. If not set `kubernetes.io/hostname` will be used
|
||||
customHostnameLabel:
|
||||
|
||||
# -- Runs Ceph Pods as privileged to be able to write to `hostPaths` in OpenShift with SELinux restrictions.
|
||||
hostpathRequiresPrivileged: false
|
||||
|
||||
# -- Whether to create all Rook pods to run on the host network, for example in environments where a CNI is not enabled
|
||||
enforceHostNetwork: false
|
||||
|
||||
# -- Disable automatic orchestration when new devices are discovered.
|
||||
disableDeviceHotplug: false
|
||||
|
||||
# -- The revision history limit for all pods created by Rook. If blank, the K8s default is 10.
|
||||
revisionHistoryLimit:
|
||||
|
||||
# -- Blacklist certain disks according to the regex provided.
|
||||
discoverDaemonUdev:
|
||||
|
||||
# -- The imagePullSecrets option allows pulling docker images from a private docker registry. The option will be passed to all service accounts.
|
||||
imagePullSecrets:
|
||||
# - name: my-registry-secret
|
||||
|
||||
# -- Whether the OBC provisioner should watch the operator namespace; if not, the namespace of the cluster will be used
|
||||
enableOBCWatchOperatorNamespace: true
|
||||
|
||||
# -- Specify the prefix for the OBC provisioner in place of the cluster namespace
|
||||
# @default -- `ceph cluster namespace`
|
||||
obcProvisionerNamePrefix:
|
||||
|
||||
# -- Many OBC additional config fields may be risky for administrators to allow users control over.
|
||||
# The safe and default-allowed fields are 'maxObjects' and 'maxSize'.
|
||||
# Other fields should be considered risky. To allow all additional configs, use this value:
|
||||
# "maxObjects,maxSize,bucketMaxObjects,bucketMaxSize,bucketPolicy,bucketLifecycle,bucketOwner"
|
||||
# @default -- "maxObjects,maxSize"
|
||||
obcAllowAdditionalConfigFields: "maxObjects,maxSize"
|
||||
|
||||
monitoring:
|
||||
# -- Enable monitoring. Requires Prometheus to be pre-installed.
|
||||
# Enabling will also create RBAC rules to allow Operator to create ServiceMonitors
|
||||
enabled: false
|
||||
@@ -1,26 +1,145 @@
|
||||
use std::{
|
||||
net::{IpAddr, Ipv4Addr},
|
||||
sync::Arc,
|
||||
};
|
||||
|
||||
use cidr::Ipv4Cidr;
|
||||
use harmony::{
|
||||
hardware::{FirewallGroup, HostCategory, Location, PhysicalHost, SwitchGroup},
|
||||
infra::opnsense::OPNSenseManagementInterface,
|
||||
inventory::Inventory,
|
||||
maestro::Maestro,
|
||||
modules::dummy::{ErrorScore, PanicScore, SuccessScore},
|
||||
topology::HAClusterTopology,
|
||||
modules::{
|
||||
http::HttpScore,
|
||||
ipxe::IpxeScore,
|
||||
okd::{
|
||||
bootstrap_dhcp::OKDBootstrapDhcpScore,
|
||||
bootstrap_load_balancer::OKDBootstrapLoadBalancerScore, dhcp::OKDDhcpScore,
|
||||
dns::OKDDnsScore,
|
||||
},
|
||||
tftp::TftpScore,
|
||||
},
|
||||
topology::{LogicalHost, UnmanagedRouter, Url},
|
||||
};
|
||||
use harmony_macros::{ip, mac_address};
|
||||
|
||||
#[tokio::main]
|
||||
async fn main() {
|
||||
let inventory = Inventory::autoload();
|
||||
let topology = HAClusterTopology::autoload();
|
||||
let mut maestro = Maestro::initialize(inventory, topology).await.unwrap();
|
||||
let firewall = harmony::topology::LogicalHost {
|
||||
ip: ip!("192.168.33.1"),
|
||||
name: String::from("fw0"),
|
||||
};
|
||||
|
||||
let opnsense = Arc::new(
|
||||
harmony::infra::opnsense::OPNSenseFirewall::new(firewall, None, "root", "opnsense").await,
|
||||
);
|
||||
let lan_subnet = Ipv4Addr::new(192, 168, 33, 0);
|
||||
let gateway_ipv4 = Ipv4Addr::new(192, 168, 33, 1);
|
||||
let gateway_ip = IpAddr::V4(gateway_ipv4);
|
||||
let topology = harmony::topology::HAClusterTopology {
|
||||
domain_name: "ncd0.harmony.mcd".to_string(), // TODO this must be set manually correctly
|
||||
// when setting up the opnsense firewall
|
||||
router: Arc::new(UnmanagedRouter::new(
|
||||
gateway_ip,
|
||||
Ipv4Cidr::new(lan_subnet, 24).unwrap(),
|
||||
)),
|
||||
load_balancer: opnsense.clone(),
|
||||
firewall: opnsense.clone(),
|
||||
tftp_server: opnsense.clone(),
|
||||
http_server: opnsense.clone(),
|
||||
dhcp_server: opnsense.clone(),
|
||||
dns_server: opnsense.clone(),
|
||||
control_plane: vec![
|
||||
LogicalHost {
|
||||
ip: ip!("192.168.33.20"),
|
||||
name: "cp0".to_string(),
|
||||
},
|
||||
LogicalHost {
|
||||
ip: ip!("192.168.33.21"),
|
||||
name: "cp1".to_string(),
|
||||
},
|
||||
LogicalHost {
|
||||
ip: ip!("192.168.33.22"),
|
||||
name: "cp2".to_string(),
|
||||
},
|
||||
],
|
||||
bootstrap_host: LogicalHost {
|
||||
ip: ip!("192.168.33.66"),
|
||||
name: "bootstrap".to_string(),
|
||||
},
|
||||
workers: vec![
|
||||
LogicalHost {
|
||||
ip: ip!("192.168.33.30"),
|
||||
name: "wk0".to_string(),
|
||||
},
|
||||
LogicalHost {
|
||||
ip: ip!("192.168.33.31"),
|
||||
name: "wk1".to_string(),
|
||||
},
|
||||
LogicalHost {
|
||||
ip: ip!("192.168.33.32"),
|
||||
name: "wk2".to_string(),
|
||||
},
|
||||
],
|
||||
switch: vec![],
|
||||
};
|
||||
|
||||
let inventory = Inventory {
|
||||
location: Location::new("I am mobile".to_string(), "earth".to_string()),
|
||||
switch: SwitchGroup::from([]),
|
||||
firewall: FirewallGroup::from([PhysicalHost::empty(HostCategory::Firewall)
|
||||
.management(Arc::new(OPNSenseManagementInterface::new()))]),
|
||||
storage_host: vec![],
|
||||
worker_host: vec![
|
||||
PhysicalHost::empty(HostCategory::Server)
|
||||
.mac_address(mac_address!("C4:62:37:02:61:0F")),
|
||||
PhysicalHost::empty(HostCategory::Server)
|
||||
.mac_address(mac_address!("C4:62:37:02:61:26")),
|
||||
// thisone
|
||||
// Then create the ipxe file
|
||||
// set the dns static leases
|
||||
// bootstrap nodes
|
||||
// start ceph cluster
|
||||
// try installation of lampscore
|
||||
// bingo?
|
||||
PhysicalHost::empty(HostCategory::Server)
|
||||
.mac_address(mac_address!("C4:62:37:02:61:70")),
|
||||
],
|
||||
control_plane_host: vec![
|
||||
PhysicalHost::empty(HostCategory::Server)
|
||||
.mac_address(mac_address!("C4:62:37:02:60:FA")),
|
||||
PhysicalHost::empty(HostCategory::Server)
|
||||
.mac_address(mac_address!("C4:62:37:02:61:1A")),
|
||||
PhysicalHost::empty(HostCategory::Server)
|
||||
.mac_address(mac_address!("C4:62:37:01:BC:68")),
|
||||
],
|
||||
};
|
||||
|
||||
// TODO regroup smaller scores in a larger one such as this
|
||||
// let okd_boostrap_preparation();
|
||||
|
||||
let bootstrap_dhcp_score = OKDBootstrapDhcpScore::new(&topology, &inventory);
|
||||
let bootstrap_load_balancer_score = OKDBootstrapLoadBalancerScore::new(&topology);
|
||||
let dhcp_score = OKDDhcpScore::new(&topology, &inventory);
|
||||
let dns_score = OKDDnsScore::new(&topology);
|
||||
let load_balancer_score =
|
||||
harmony::modules::okd::load_balancer::OKDLoadBalancerScore::new(&topology);
|
||||
|
||||
let tftp_score = TftpScore::new(Url::LocalFolder("./data/watchguard/tftpboot".to_string()));
|
||||
let http_score = HttpScore::new(Url::LocalFolder(
|
||||
"./data/watchguard/pxe-http-files".to_string(),
|
||||
));
|
||||
let ipxe_score = IpxeScore::new();
|
||||
let mut maestro = Maestro::initialize(inventory, topology).await.unwrap();
|
||||
maestro.register_all(vec![
|
||||
// ADD scores :
|
||||
// 1. OPNSense setup scores
|
||||
// 2. Bootstrap node setup
|
||||
// 3. Control plane setup
|
||||
// 4. Workers setup
|
||||
// 5. Various tools and apps setup
|
||||
Box::new(SuccessScore {}),
|
||||
Box::new(ErrorScore {}),
|
||||
Box::new(PanicScore {}),
|
||||
Box::new(dns_score),
|
||||
Box::new(bootstrap_dhcp_score),
|
||||
Box::new(bootstrap_load_balancer_score),
|
||||
Box::new(load_balancer_score),
|
||||
Box::new(tftp_score),
|
||||
Box::new(http_score),
|
||||
Box::new(ipxe_score),
|
||||
Box::new(dhcp_score),
|
||||
]);
|
||||
harmony_tui::init(maestro).await.unwrap();
|
||||
}
|
||||
|
||||
@@ -13,23 +13,40 @@ rust-ipmi = "0.1.1"
|
||||
semver = "1.0.23"
|
||||
serde = { version = "1.0.209", features = ["derive"] }
|
||||
serde_json = "1.0.127"
|
||||
tokio = { workspace = true }
|
||||
derive-new = { workspace = true }
|
||||
log = { workspace = true }
|
||||
env_logger = { workspace = true }
|
||||
async-trait = { workspace = true }
|
||||
cidr = { workspace = true }
|
||||
tokio.workspace = true
|
||||
derive-new.workspace = true
|
||||
log.workspace = true
|
||||
env_logger.workspace = true
|
||||
async-trait.workspace = true
|
||||
cidr.workspace = true
|
||||
opnsense-config = { path = "../opnsense-config" }
|
||||
opnsense-config-xml = { path = "../opnsense-config-xml" }
|
||||
harmony_macros = { path = "../harmony_macros" }
|
||||
harmony_types = { path = "../harmony_types" }
|
||||
uuid = { workspace = true }
|
||||
url = { workspace = true }
|
||||
kube = { workspace = true }
|
||||
k8s-openapi = { workspace = true }
|
||||
serde_yaml = { workspace = true }
|
||||
http = { workspace = true }
|
||||
serde-value = { workspace = true }
|
||||
uuid.workspace = true
|
||||
url.workspace = true
|
||||
kube.workspace = true
|
||||
k8s-openapi.workspace = true
|
||||
serde_yaml.workspace = true
|
||||
http.workspace = true
|
||||
serde-value.workspace = true
|
||||
inquire.workspace = true
|
||||
helm-wrapper-rs = "0.4.0"
|
||||
non-blank-string-rs = "1.0.4"
|
||||
k3d-rs = { path = "../k3d" }
|
||||
directories = "6.0.0"
|
||||
lazy_static = "1.5.0"
|
||||
dockerfile_builder = "0.1.5"
|
||||
temp-file = "0.1.9"
|
||||
convert_case.workspace = true
|
||||
email_address = "0.2.9"
|
||||
fqdn = { version = "0.4.6", features = [
|
||||
"domain-label-cannot-start-or-end-with-hyphen",
|
||||
"domain-label-length-limited-to-63",
|
||||
"domain-name-without-special-chars",
|
||||
"domain-name-length-limited-to-255",
|
||||
"punycode",
|
||||
"serde",
|
||||
] }
|
||||
temp-dir = "0.1.14"
|
||||
dyn-clone = "1.0.19"
|
||||
|
||||
13
harmony/src/domain/config.rs
Normal file
@@ -0,0 +1,13 @@
|
||||
use lazy_static::lazy_static;
|
||||
use std::path::PathBuf;
|
||||
|
||||
lazy_static! {
|
||||
pub static ref HARMONY_CONFIG_DIR: PathBuf = directories::BaseDirs::new()
|
||||
.unwrap()
|
||||
.data_dir()
|
||||
.join("harmony");
|
||||
pub static ref REGISTRY_URL: String =
|
||||
std::env::var("HARMONY_REGISTRY_URL").unwrap_or_else(|_| "hub.nationtech.io".to_string());
|
||||
pub static ref REGISTRY_PROJECT: String =
|
||||
std::env::var("HARMONY_REGISTRY_PROJECT").unwrap_or_else(|_| "harmony".to_string());
|
||||
}
|
||||
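// Illustrative usage sketch: the statics above can be dereferenced anywhere in
// the crate. The `image_ref` helper below is hypothetical; only REGISTRY_URL and
// REGISTRY_PROJECT come from the config module shown here.
use crate::config::{REGISTRY_PROJECT, REGISTRY_URL};

fn image_ref(name: &str, tag: &str) -> String {
    format!("{}/{}/{}:{}", *REGISTRY_URL, *REGISTRY_PROJECT, name, tag)
}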
@@ -1,6 +1,6 @@
|
||||
use serde::{Deserialize, Serialize};
|
||||
|
||||
#[derive(Debug, Clone, Serialize, Deserialize)]
|
||||
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
|
||||
pub struct Id {
|
||||
value: String,
|
||||
}
|
||||
@@ -10,3 +10,9 @@ impl Id {
|
||||
Self { value }
|
||||
}
|
||||
}
|
||||
|
||||
impl std::fmt::Display for Id {
|
||||
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
|
||||
f.write_str(&self.value)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -138,7 +138,8 @@ impl ManagementInterface for ManualManagementInterface {
|
||||
}
|
||||
|
||||
fn get_supported_protocol_names(&self) -> String {
|
||||
todo!()
|
||||
// todo!()
|
||||
"none".to_string()
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -15,10 +15,12 @@ pub enum InterpretName {
|
||||
LoadBalancer,
|
||||
Tftp,
|
||||
Http,
|
||||
Ipxe,
|
||||
Dummy,
|
||||
Panic,
|
||||
OPNSense,
|
||||
K3dInstallation,
|
||||
TenantInterpret,
|
||||
}
|
||||
|
||||
impl std::fmt::Display for InterpretName {
|
||||
@@ -29,10 +31,12 @@ impl std::fmt::Display for InterpretName {
|
||||
InterpretName::LoadBalancer => f.write_str("LoadBalancer"),
|
||||
InterpretName::Tftp => f.write_str("Tftp"),
|
||||
InterpretName::Http => f.write_str("Http"),
|
||||
InterpretName::Ipxe => f.write_str("iPXE"),
|
||||
InterpretName::Dummy => f.write_str("Dummy"),
|
||||
InterpretName::Panic => f.write_str("Panic"),
|
||||
InterpretName::OPNSense => f.write_str("OPNSense"),
|
||||
InterpretName::K3dInstallation => f.write_str("K3dInstallation"),
|
||||
InterpretName::TenantInterpret => f.write_str("Tenant"),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -52,27 +52,6 @@ impl<T: Topology> Maestro<T> {
|
||||
Ok(outcome)
|
||||
}
|
||||
|
||||
// Load the inventory and topology from the environment.
|
||||
// This function is able to discover the context that it is running in, such as k8s clusters, aws cloud, linux host, etc.
|
||||
// When the HARMONY_TOPOLOGY environment variable is not set, it will default to install k3s
|
||||
// locally (lazily, if not installed yet, when the first execution occurs) and use that as a topology
|
||||
// So, by default, the inventory is a single host that the binary is running on, and the
|
||||
// topology is a single node k3s
|
||||
//
|
||||
// By default :
|
||||
// - Linux => k3s
|
||||
// - macos, windows => docker compose
|
||||
//
|
||||
// To run more complex cases like OKDHACluster, either provide the default target in the
|
||||
// harmony infrastructure as code or as an environment variable
|
||||
pub fn load_from_env() -> Self {
|
||||
// Load env var HARMONY_TOPOLOGY
|
||||
match std::env::var("HARMONY_TOPOLOGY") {
|
||||
Ok(_) => todo!(),
|
||||
Err(_) => todo!(),
|
||||
}
|
||||
}
|
||||
|
||||
pub fn register_all(&mut self, mut scores: ScoreVec<T>) {
|
||||
let mut score_mut = self.scores.write().expect("Should acquire lock");
|
||||
score_mut.append(&mut scores);
|
||||
|
||||
@@ -1,3 +1,4 @@
|
||||
pub mod config;
|
||||
pub mod data;
|
||||
pub mod executors;
|
||||
pub mod filter;
|
||||
|
||||
@@ -218,7 +218,7 @@ where
|
||||
mod tests {
|
||||
use super::*;
|
||||
use crate::modules::dns::DnsScore;
|
||||
use crate::topology::{self, HAClusterTopology};
|
||||
use crate::topology::HAClusterTopology;
|
||||
|
||||
#[test]
|
||||
fn test_format_values_as_string() {
|
||||
|
||||
@@ -57,8 +57,10 @@ impl Topology for HAClusterTopology {
|
||||
|
||||
#[async_trait]
|
||||
impl K8sclient for HAClusterTopology {
|
||||
async fn k8s_client(&self) -> Result<Arc<K8sClient>, kube::Error> {
|
||||
Ok(Arc::new(K8sClient::try_default().await?))
|
||||
async fn k8s_client(&self) -> Result<Arc<K8sClient>, String> {
|
||||
Ok(Arc::new(
|
||||
K8sClient::try_default().await.map_err(|e| e.to_string())?,
|
||||
))
|
||||
}
|
||||
}
|
||||
|
||||
@@ -166,6 +168,16 @@ impl DhcpServer for HAClusterTopology {
|
||||
async fn commit_config(&self) -> Result<(), ExecutorError> {
|
||||
self.dhcp_server.commit_config().await
|
||||
}
|
||||
|
||||
async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError> {
|
||||
self.dhcp_server.set_filename(filename).await
|
||||
}
|
||||
async fn set_filename64(&self, filename64: &str) -> Result<(), ExecutorError> {
|
||||
self.dhcp_server.set_filename64(filename64).await
|
||||
}
|
||||
async fn set_filenameipxe(&self, filenameipxe: &str) -> Result<(), ExecutorError> {
|
||||
self.dhcp_server.set_filenameipxe(filenameipxe).await
|
||||
}
|
||||
}
|
||||
|
||||
#[async_trait]
|
||||
@@ -291,6 +303,15 @@ impl DhcpServer for DummyInfra {
|
||||
async fn set_boot_filename(&self, _boot_filename: &str) -> Result<(), ExecutorError> {
|
||||
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
|
||||
}
|
||||
async fn set_filename(&self, _filename: &str) -> Result<(), ExecutorError> {
|
||||
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
|
||||
}
|
||||
async fn set_filename64(&self, _filename: &str) -> Result<(), ExecutorError> {
|
||||
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
|
||||
}
|
||||
async fn set_filenameipxe(&self, _filenameipxe: &str) -> Result<(), ExecutorError> {
|
||||
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
|
||||
}
|
||||
fn get_ip(&self) -> IpAddress {
|
||||
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
|
||||
}
|
||||
|
||||
@@ -1,7 +1,14 @@
|
||||
use derive_new::new;
|
||||
use k8s_openapi::NamespaceResourceScope;
|
||||
use kube::{Api, Client, Error, Resource, api::PostParams};
|
||||
use kube::{
|
||||
Api, Client, Config, Error, Resource,
|
||||
api::PostParams,
|
||||
config::{KubeConfigOptions, Kubeconfig},
|
||||
};
|
||||
use log::error;
|
||||
use serde::de::DeserializeOwned;
|
||||
|
||||
#[derive(new)]
|
||||
pub struct K8sClient {
|
||||
client: Client,
|
||||
}
|
||||
@@ -36,7 +43,11 @@ impl K8sClient {
|
||||
Ok(result)
|
||||
}
|
||||
|
||||
pub async fn apply_namespaced<K>(&self, resource: &Vec<K>) -> Result<K, Error>
|
||||
pub async fn apply_namespaced<K>(
|
||||
&self,
|
||||
resource: &Vec<K>,
|
||||
ns: Option<&str>,
|
||||
) -> Result<Vec<K>, Error>
|
||||
where
|
||||
K: Resource<Scope = NamespaceResourceScope>
|
||||
+ Clone
|
||||
@@ -46,10 +57,32 @@ impl K8sClient {
|
||||
+ Default,
|
||||
<K as kube::Resource>::DynamicType: Default,
|
||||
{
|
||||
let mut resources = Vec::new();
|
||||
for r in resource.iter() {
|
||||
let api: Api<K> = Api::default_namespaced(self.client.clone());
|
||||
api.create(&PostParams::default(), &r).await?;
|
||||
let api: Api<K> = match ns {
|
||||
Some(ns) => Api::namespaced(self.client.clone(), ns),
|
||||
None => Api::default_namespaced(self.client.clone()),
|
||||
};
|
||||
resources.push(api.create(&PostParams::default(), &r).await?);
|
||||
}
|
||||
todo!("")
|
||||
Ok(resources)
|
||||
}
|
||||
|
||||
pub(crate) async fn from_kubeconfig(path: &str) -> Option<K8sClient> {
|
||||
let k = match Kubeconfig::read_from(path) {
|
||||
Ok(k) => k,
|
||||
Err(e) => {
|
||||
error!("Failed to load kubeconfig from {path} : {e}");
|
||||
return None;
|
||||
}
|
||||
};
|
||||
Some(K8sClient::new(
|
||||
Client::try_from(
|
||||
Config::from_custom_kubeconfig(k, &KubeConfigOptions::default())
|
||||
.await
|
||||
.unwrap(),
|
||||
)
|
||||
.unwrap(),
|
||||
))
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
use std::process::Command;
|
||||
use std::{process::Command, sync::Arc};
|
||||
|
||||
use async_trait::async_trait;
|
||||
use inquire::Confirm;
|
||||
@@ -6,6 +6,7 @@ use log::{info, warn};
|
||||
use tokio::sync::OnceCell;
|
||||
|
||||
use crate::{
|
||||
executors::ExecutorError,
|
||||
interpret::{InterpretError, Outcome},
|
||||
inventory::Inventory,
|
||||
maestro::Maestro,
|
||||
@@ -13,26 +14,53 @@ use crate::{
|
||||
topology::LocalhostTopology,
|
||||
};
|
||||
|
||||
use super::{HelmCommand, Topology, k8s::K8sClient};
|
||||
use super::{
|
||||
HelmCommand, K8sclient, Topology,
|
||||
k8s::K8sClient,
|
||||
tenant::{
|
||||
ResourceLimits, TenantConfig, TenantManager, TenantNetworkPolicy, k8s::K8sTenantManager,
|
||||
},
|
||||
};
|
||||
|
||||
struct K8sState {
|
||||
_client: K8sClient,
|
||||
_source: K8sSource,
|
||||
client: Arc<K8sClient>,
|
||||
source: K8sSource,
|
||||
message: String,
|
||||
}
|
||||
|
||||
#[derive(Debug)]
|
||||
enum K8sSource {
|
||||
LocalK3d,
|
||||
Kubeconfig,
|
||||
}
|
||||
|
||||
pub struct K8sAnywhereTopology {
|
||||
k8s_state: OnceCell<Option<K8sState>>,
|
||||
tenant_manager: OnceCell<K8sTenantManager>,
|
||||
}
|
||||
|
||||
#[async_trait]
|
||||
impl K8sclient for K8sAnywhereTopology {
|
||||
async fn k8s_client(&self) -> Result<Arc<K8sClient>, String> {
|
||||
let state = match self.k8s_state.get() {
|
||||
Some(state) => state,
|
||||
None => return Err("K8s state not initialized yet".to_string()),
|
||||
};
|
||||
|
||||
let state = match state {
|
||||
Some(state) => state,
|
||||
None => return Err("K8s client initialized but empty".to_string()),
|
||||
};
|
||||
|
||||
Ok(state.client.clone())
|
||||
}
|
||||
}
|
||||
|
||||
impl K8sAnywhereTopology {
|
||||
pub fn new() -> Self {
|
||||
Self {
|
||||
k8s_state: OnceCell::new(),
|
||||
tenant_manager: OnceCell::new(),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -58,23 +86,23 @@ impl K8sAnywhereTopology {
|
||||
}
|
||||
|
||||
async fn try_load_kubeconfig(&self, path: &str) -> Option<K8sClient> {
|
||||
todo!("Use kube-rs to load kubeconfig at path {path}");
|
||||
K8sClient::from_kubeconfig(path).await
|
||||
}
|
||||
|
||||
async fn try_install_k3d(&self) -> Result<K8sClient, InterpretError> {
|
||||
fn get_k3d_installation_score(&self) -> K3DInstallationScore {
|
||||
K3DInstallationScore::default()
|
||||
}
|
||||
|
||||
async fn try_install_k3d(&self) -> Result<(), InterpretError> {
|
||||
let maestro = Maestro::initialize(Inventory::autoload(), LocalhostTopology::new()).await?;
|
||||
let k3d_score = K3DInstallationScore::new();
|
||||
let k3d_score = self.get_k3d_installation_score();
|
||||
maestro.interpret(Box::new(k3d_score)).await?;
|
||||
todo!(
|
||||
"Create Maestro with LocalDockerTopology or something along these lines and run a K3dInstallationScore on it"
|
||||
);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
async fn try_get_or_install_k8s_client(&self) -> Result<Option<K8sState>, InterpretError> {
|
||||
let k8s_anywhere_config = K8sAnywhereConfig {
|
||||
kubeconfig: std::env::var("HARMONY_KUBECONFIG")
|
||||
.ok()
|
||||
.map(|v| v.to_string()),
|
||||
kubeconfig: std::env::var("KUBECONFIG").ok().map(|v| v.to_string()),
|
||||
use_system_kubeconfig: std::env::var("HARMONY_USE_SYSTEM_KUBECONFIG")
|
||||
.map_or_else(|_| false, |v| v.parse().ok().unwrap_or(false)),
|
||||
autoinstall: std::env::var("HARMONY_AUTOINSTALL")
|
||||
@@ -90,8 +118,18 @@ impl K8sAnywhereTopology {
|
||||
|
||||
if let Some(kubeconfig) = k8s_anywhere_config.kubeconfig {
|
||||
match self.try_load_kubeconfig(&kubeconfig).await {
|
||||
Some(_client) => todo!(),
|
||||
None => todo!(),
|
||||
Some(client) => {
|
||||
return Ok(Some(K8sState {
|
||||
client: Arc::new(client),
|
||||
source: K8sSource::Kubeconfig,
|
||||
message: format!("Loaded k8s client from kubeconfig {kubeconfig}"),
|
||||
}));
|
||||
}
|
||||
None => {
|
||||
return Err(InterpretError::new(format!(
|
||||
"Failed to load kubeconfig from {kubeconfig}"
|
||||
)));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -112,13 +150,32 @@ impl K8sAnywhereTopology {
|
||||
}
|
||||
|
||||
info!("Starting K8sAnywhere installation");
|
||||
match self.try_install_k3d().await {
|
||||
Ok(client) => Ok(Some(K8sState {
|
||||
_client: client,
|
||||
_source: K8sSource::LocalK3d,
|
||||
self.try_install_k3d().await?;
|
||||
let k3d_score = self.get_k3d_installation_score();
|
||||
// I feel like having to rely on the k3d_rs crate here is a smell
|
||||
// I think we should have a way to interact more deeply with scores/interpret. Maybe the
|
||||
// K3DInstallationScore should expose a method to get_client ? Not too sure what would be a
|
||||
// good implementation due to the stateful nature of the k3d thing. Which is why I went
|
||||
// with this solution for now
|
||||
let k3d = k3d_rs::K3d::new(k3d_score.installation_path, Some(k3d_score.cluster_name));
|
||||
let state = match k3d.get_client().await {
|
||||
Ok(client) => K8sState {
|
||||
client: Arc::new(K8sClient::new(client)),
|
||||
source: K8sSource::LocalK3d,
|
||||
message: "Successfully installed K3D cluster and acquired client".to_string(),
|
||||
})),
|
||||
},
|
||||
Err(_) => todo!(),
|
||||
};
|
||||
|
||||
Ok(Some(state))
|
||||
}
|
||||
|
||||
fn get_k8s_tenant_manager(&self) -> Result<&K8sTenantManager, ExecutorError> {
|
||||
match self.tenant_manager.get() {
|
||||
Some(t) => Ok(t),
|
||||
None => Err(ExecutorError::UnexpectedError(
|
||||
"K8sTenantManager not available".to_string(),
|
||||
)),
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -147,29 +204,62 @@ struct K8sAnywhereConfig {
|
||||
#[async_trait]
|
||||
impl Topology for K8sAnywhereTopology {
|
||||
fn name(&self) -> &str {
|
||||
todo!()
|
||||
"K8sAnywhereTopology"
|
||||
}
|
||||
|
||||
async fn ensure_ready(&self) -> Result<Outcome, InterpretError> {
|
||||
let k8s_state = self
|
||||
let k8s_state = self
|
||||
.k8s_state
|
||||
.get_or_try_init(|| self.try_get_or_install_k8s_client())
|
||||
.await?;
|
||||
|
||||
let k8s_state: &K8sState = k8s_state
|
||||
.as_ref()
|
||||
.ok_or(InterpretError::new(
|
||||
"No K8s client could be found or installed".to_string(),
|
||||
))?;
|
||||
|
||||
let k8s_state: &K8sState = k8s_state.as_ref().ok_or(InterpretError::new(
|
||||
"No K8s client could be found or installed".to_string(),
|
||||
))?;
|
||||
|
||||
match self.is_helm_available() {
|
||||
Ok(()) => Ok(Outcome::success(format!(
|
||||
"{} + helm available",
|
||||
k8s_state.message.clone()
|
||||
))),
|
||||
Err(_) => Err(InterpretError::new("helm unavailable".to_string())),
|
||||
"{} + helm available",
|
||||
k8s_state.message.clone()
|
||||
))),
|
||||
Err(e) => Err(InterpretError::new(format!("helm unavailable: {}", e))),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl HelmCommand for K8sAnywhereTopology {}
|
||||
|
||||
#[async_trait]
|
||||
impl TenantManager for K8sAnywhereTopology {
|
||||
async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError> {
|
||||
self.get_k8s_tenant_manager()?
|
||||
.provision_tenant(config)
|
||||
.await
|
||||
}
|
||||
|
||||
async fn update_tenant_resource_limits(
|
||||
&self,
|
||||
tenant_name: &str,
|
||||
new_limits: &ResourceLimits,
|
||||
) -> Result<(), ExecutorError> {
|
||||
self.get_k8s_tenant_manager()?
|
||||
.update_tenant_resource_limits(tenant_name, new_limits)
|
||||
.await
|
||||
}
|
||||
|
||||
async fn update_tenant_network_policy(
|
||||
&self,
|
||||
tenant_name: &str,
|
||||
new_policy: &TenantNetworkPolicy,
|
||||
) -> Result<(), ExecutorError> {
|
||||
self.get_k8s_tenant_manager()?
|
||||
.update_tenant_network_policy(tenant_name, new_policy)
|
||||
.await
|
||||
}
|
||||
|
||||
async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError> {
|
||||
self.get_k8s_tenant_manager()?
|
||||
.deprovision_tenant(tenant_name)
|
||||
.await
|
||||
}
|
||||
}
|
||||
|
||||
@@ -7,6 +7,12 @@ use serde::Serialize;
|
||||
use super::{IpAddress, LogicalHost};
|
||||
use crate::executors::ExecutorError;
|
||||
|
||||
impl std::fmt::Debug for dyn LoadBalancer {
|
||||
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
|
||||
f.write_fmt(format_args!("LoadBalancer {}", self.get_ip()))
|
||||
}
|
||||
}
|
||||
|
||||
#[async_trait]
|
||||
pub trait LoadBalancer: Send + Sync {
|
||||
fn get_ip(&self) -> IpAddress;
|
||||
@@ -32,11 +38,6 @@ pub trait LoadBalancer: Send + Sync {
|
||||
}
|
||||
}
|
||||
|
||||
impl std::fmt::Debug for dyn LoadBalancer {
|
||||
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
|
||||
f.write_fmt(format_args!("LoadBalancer {}", self.get_ip()))
|
||||
}
|
||||
}
|
||||
#[derive(Debug, PartialEq, Clone, Serialize)]
|
||||
pub struct LoadBalancerService {
|
||||
pub backend_servers: Vec<BackendServer>,
|
||||
|
||||
@@ -3,6 +3,8 @@ mod host_binding;
|
||||
mod http;
|
||||
mod k8s_anywhere;
|
||||
mod localhost;
|
||||
pub mod oberservability;
|
||||
pub mod tenant;
|
||||
pub use k8s_anywhere::*;
|
||||
pub use localhost::*;
|
||||
pub mod k8s;
|
||||
|
||||
@@ -42,8 +42,8 @@ pub struct NetworkDomain {
|
||||
pub name: String,
|
||||
}
|
||||
#[async_trait]
|
||||
pub trait K8sclient: Send + Sync + std::fmt::Debug {
|
||||
async fn k8s_client(&self) -> Result<Arc<K8sClient>, kube::Error>;
|
||||
pub trait K8sclient: Send + Sync {
|
||||
async fn k8s_client(&self) -> Result<Arc<K8sClient>, String>;
|
||||
}
|
||||
|
||||
#[async_trait]
|
||||
@@ -53,6 +53,9 @@ pub trait DhcpServer: Send + Sync + std::fmt::Debug {
|
||||
async fn list_static_mappings(&self) -> Vec<(MacAddress, IpAddress)>;
|
||||
async fn set_next_server(&self, ip: IpAddress) -> Result<(), ExecutorError>;
|
||||
async fn set_boot_filename(&self, boot_filename: &str) -> Result<(), ExecutorError>;
|
||||
async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError>;
|
||||
async fn set_filename64(&self, filename64: &str) -> Result<(), ExecutorError>;
|
||||
async fn set_filenameipxe(&self, filenameipxe: &str) -> Result<(), ExecutorError>;
|
||||
fn get_ip(&self) -> IpAddress;
|
||||
fn get_host(&self) -> LogicalHost;
|
||||
async fn commit_config(&self) -> Result<(), ExecutorError>;
|
||||
|
||||
1
harmony/src/domain/topology/oberservability/mod.rs
Normal file
1
harmony/src/domain/topology/oberservability/mod.rs
Normal file
@@ -0,0 +1 @@
|
||||
pub mod monitoring;
|
||||
31
harmony/src/domain/topology/oberservability/monitoring.rs
Normal file
31
harmony/src/domain/topology/oberservability/monitoring.rs
Normal file
@@ -0,0 +1,31 @@
|
||||
use async_trait::async_trait;
|
||||
|
||||
use std::fmt::Debug;
|
||||
use url::Url;
|
||||
|
||||
use crate::interpret::InterpretError;
|
||||
|
||||
use crate::{interpret::Outcome, topology::Topology};
|
||||
|
||||
/// Represents an entity responsible for collecting and organizing observability data
|
||||
/// from various telemetry sources
|
||||
/// A `Monitor` abstracts the logic required to scrape, aggregate, and structure
|
||||
/// monitoring data, enabling consistent processing regardless of the underlying data source.
|
||||
#[async_trait]
|
||||
pub trait Monitor<T: Topology>: Debug + Send + Sync {
|
||||
async fn deploy_monitor(
|
||||
&self,
|
||||
topology: &T,
|
||||
alert_receivers: Vec<AlertReceiver>,
|
||||
) -> Result<Outcome, InterpretError>;
|
||||
|
||||
async fn delete_monitor(
|
||||
&self,
|
||||
topolgy: &T,
|
||||
alert_receivers: Vec<AlertReceiver>,
|
||||
) -> Result<Outcome, InterpretError>;
|
||||
}
|
||||
|
||||
pub struct AlertReceiver {
|
||||
pub receiver_id: String,
|
||||
}
|
||||
95
harmony/src/domain/topology/tenant/k8s.rs
Normal file
95
harmony/src/domain/topology/tenant/k8s.rs
Normal file
@@ -0,0 +1,95 @@
|
||||
use std::sync::Arc;
|
||||
|
||||
use crate::{executors::ExecutorError, topology::k8s::K8sClient};
|
||||
use async_trait::async_trait;
|
||||
use derive_new::new;
|
||||
use k8s_openapi::api::core::v1::Namespace;
|
||||
use serde_json::json;
|
||||
|
||||
use super::{ResourceLimits, TenantConfig, TenantManager, TenantNetworkPolicy};
|
||||
|
||||
#[derive(new)]
|
||||
pub struct K8sTenantManager {
|
||||
k8s_client: Arc<K8sClient>,
|
||||
}
|
||||
|
||||
#[async_trait]
|
||||
impl TenantManager for K8sTenantManager {
|
||||
async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError> {
|
||||
let namespace = json!(
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Namespace",
|
||||
"metadata": {
|
||||
"labels": {
|
||||
"harmony.nationtech.io/tenant.id": config.id,
|
||||
"harmony.nationtech.io/tenant.name": config.name,
|
||||
},
|
||||
"name": config.name,
|
||||
},
|
||||
}
|
||||
);
|
||||
todo!("Validate that when tenant already exists (by id) that name has not changed");
|
||||
|
||||
let namespace: Namespace = serde_json::from_value(namespace).unwrap();
|
||||
|
||||
let resource_quota = json!(
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "List",
|
||||
"items": [
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "ResourceQuota",
|
||||
"metadata": {
|
||||
"name": config.name,
|
||||
"labels": {
|
||||
"harmony.nationtech.io/tenant.id": config.id,
|
||||
"harmony.nationtech.io/tenant.name": config.name,
|
||||
},
|
||||
"namespace": config.name,
|
||||
},
|
||||
"spec": {
|
||||
"hard": {
|
||||
"limits.cpu": format!("{:.0}",config.resource_limits.cpu_limit_cores),
|
||||
"limits.memory": format!("{:.3}Gi", config.resource_limits.memory_limit_gb),
|
||||
"requests.cpu": format!("{:.0}",config.resource_limits.cpu_request_cores),
|
||||
"requests.memory": format!("{:.3}Gi", config.resource_limits.memory_request_gb),
|
||||
"requests.storage": format!("{:.3}", config.resource_limits.storage_total_gb),
|
||||
"pods": "20",
|
||||
"services": "10",
|
||||
"configmaps": "30",
|
||||
"secrets": "30",
|
||||
"persistentvolumeclaims": "15",
|
||||
"services.loadbalancers": "2",
|
||||
"services.nodeports": "5",
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
);
|
||||
}
|
||||
|
||||
async fn update_tenant_resource_limits(
|
||||
&self,
|
||||
tenant_name: &str,
|
||||
new_limits: &ResourceLimits,
|
||||
) -> Result<(), ExecutorError> {
|
||||
todo!()
|
||||
}
|
||||
|
||||
async fn update_tenant_network_policy(
|
||||
&self,
|
||||
tenant_name: &str,
|
||||
new_policy: &TenantNetworkPolicy,
|
||||
) -> Result<(), ExecutorError> {
|
||||
todo!()
|
||||
}
|
||||
|
||||
async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError> {
|
||||
todo!()
|
||||
}
|
||||
}
|
||||
46
harmony/src/domain/topology/tenant/manager.rs
Normal file
46
harmony/src/domain/topology/tenant/manager.rs
Normal file
@@ -0,0 +1,46 @@
|
||||
use super::*;
|
||||
use async_trait::async_trait;
|
||||
|
||||
use crate::executors::ExecutorError;
|
||||
|
||||
#[async_trait]
|
||||
pub trait TenantManager {
|
||||
/// Provisions a new tenant based on the provided configuration.
|
||||
/// This operation should be idempotent; if a tenant with the same `config.name`
|
||||
/// already exists and matches the config, it will succeed without changes.
|
||||
/// If it exists but differs, it will be updated, or return an error if the update
|
||||
/// action is not supported
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `config`: The desired configuration for the new tenant.
|
||||
async fn provision_tenant(&self, config: &TenantConfig) -> Result<(), ExecutorError>;
|
||||
|
||||
/// Updates the resource limits for an existing tenant.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `tenant_name`: The logical name of the tenant to update.
|
||||
/// * `new_limits`: The new set of resource limits to apply.
|
||||
async fn update_tenant_resource_limits(
|
||||
&self,
|
||||
tenant_name: &str,
|
||||
new_limits: &ResourceLimits,
|
||||
) -> Result<(), ExecutorError>;
|
||||
|
||||
/// Updates the high-level network isolation policy for an existing tenant.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `tenant_name`: The logical name of the tenant to update.
|
||||
/// * `new_policy`: The new network policy to apply.
|
||||
async fn update_tenant_network_policy(
|
||||
&self,
|
||||
tenant_name: &str,
|
||||
new_policy: &TenantNetworkPolicy,
|
||||
) -> Result<(), ExecutorError>;
|
||||
|
||||
/// Decommissions an existing tenant, removing its isolated context and associated resources.
|
||||
/// This operation should be idempotent.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `tenant_name`: The logical name of the tenant to deprovision.
|
||||
async fn deprovision_tenant(&self, tenant_name: &str) -> Result<(), ExecutorError>;
|
||||
}
|
||||
67
harmony/src/domain/topology/tenant/mod.rs
Normal file
67
harmony/src/domain/topology/tenant/mod.rs
Normal file
@@ -0,0 +1,67 @@
|
||||
pub mod k8s;
|
||||
mod manager;
|
||||
pub use manager::*;
|
||||
use serde::{Deserialize, Serialize};
|
||||
|
||||
use std::collections::HashMap;
|
||||
|
||||
use crate::data::Id;
|
||||
|
||||
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)] // Assuming serde for Scores
|
||||
pub struct TenantConfig {
|
||||
/// This will be used as the primary unique identifier for management operations and will never
|
||||
/// change for the entire lifetime of the tenant
|
||||
pub id: Id,
|
||||
|
||||
/// A human-readable name for the tenant (e.g., "client-alpha", "project-phoenix").
|
||||
pub name: String,
|
||||
|
||||
/// Desired resource allocations and limits for the tenant.
|
||||
pub resource_limits: ResourceLimits,
|
||||
|
||||
/// High-level network isolation policies for the tenant.
|
||||
pub network_policy: TenantNetworkPolicy,
|
||||
|
||||
/// Key-value pairs for provider-specific tagging, labeling, or metadata.
|
||||
/// Useful for billing, organization, or filtering within the provider's console.
|
||||
pub labels_or_tags: HashMap<String, String>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Default)]
|
||||
pub struct ResourceLimits {
|
||||
/// Requested/guaranteed CPU cores (e.g., 2.0).
|
||||
pub cpu_request_cores: f32,
|
||||
/// Maximum CPU cores the tenant can burst to (e.g., 4.0).
|
||||
pub cpu_limit_cores: f32,
|
||||
|
||||
/// Requested/guaranteed memory in Gigabytes (e.g., 8.0).
|
||||
pub memory_request_gb: f32,
|
||||
/// Maximum memory in Gigabytes tenant can burst to (e.g., 16.0).
|
||||
pub memory_limit_gb: f32,
|
||||
|
||||
/// Total persistent storage allocation in Gigabytes across all volumes.
|
||||
pub storage_total_gb: f32,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
|
||||
pub struct TenantNetworkPolicy {
|
||||
/// Policy for ingress traffic originating from other tenants within the same Harmony-managed environment.
|
||||
pub default_inter_tenant_ingress: InterTenantIngressPolicy,
|
||||
|
||||
/// Policy for egress traffic destined for the public internet.
|
||||
pub default_internet_egress: InternetEgressPolicy,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
|
||||
pub enum InterTenantIngressPolicy {
|
||||
/// Deny all traffic from other tenants by default.
|
||||
DenyAll,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
|
||||
pub enum InternetEgressPolicy {
|
||||
/// Allow all outbound traffic to the internet.
|
||||
AllowAll,
|
||||
/// Deny all outbound traffic to the internet by default.
|
||||
DenyAll,
|
||||
}
|
||||
@@ -69,4 +69,34 @@ impl DhcpServer for OPNSenseFirewall {
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError> {
|
||||
{
|
||||
let mut writable_opnsense = self.opnsense_config.write().await;
|
||||
writable_opnsense.dhcp().set_filename(filename);
|
||||
debug!("OPNsense dhcp server set filename {filename}");
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
async fn set_filename64(&self, filename: &str) -> Result<(), ExecutorError> {
|
||||
{
|
||||
let mut writable_opnsense = self.opnsense_config.write().await;
|
||||
writable_opnsense.dhcp().set_filename64(filename);
|
||||
debug!("OPNsense dhcp server set filename {filename}");
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
async fn set_filenameipxe(&self, filenameipxe: &str) -> Result<(), ExecutorError> {
|
||||
{
|
||||
let mut writable_opnsense = self.opnsense_config.write().await;
|
||||
writable_opnsense.dhcp().set_filenameipxe(filenameipxe);
|
||||
debug!("OPNsense dhcp server set filenameipxe {filenameipxe}");
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
@@ -61,7 +61,7 @@ impl HttpServer for OPNSenseFirewall {
|
||||
info!("Adding custom caddy config files");
|
||||
config
|
||||
.upload_files(
|
||||
"../../../watchguard/caddy_config",
|
||||
"./data/watchguard/caddy_config",
|
||||
"/usr/local/etc/caddy/caddy.d/",
|
||||
)
|
||||
.await
|
||||
|
||||
@@ -370,10 +370,13 @@ mod tests {
|
||||
let result = get_servers_for_backend(&backend, &haproxy);
|
||||
|
||||
// Check the result
|
||||
assert_eq!(result, vec![BackendServer {
|
||||
address: "192.168.1.1".to_string(),
|
||||
port: 80,
|
||||
},]);
|
||||
assert_eq!(
|
||||
result,
|
||||
vec![BackendServer {
|
||||
address: "192.168.1.1".to_string(),
|
||||
port: 80,
|
||||
},]
|
||||
);
|
||||
}
|
||||
#[test]
|
||||
fn test_get_servers_for_backend_no_linked_servers() {
|
||||
@@ -430,15 +433,18 @@ mod tests {
|
||||
// Call the function
|
||||
let result = get_servers_for_backend(&backend, &haproxy);
|
||||
// Check the result
|
||||
assert_eq!(result, vec![
|
||||
BackendServer {
|
||||
address: "some-hostname.test.mcd".to_string(),
|
||||
port: 80,
|
||||
},
|
||||
BackendServer {
|
||||
address: "192.168.1.2".to_string(),
|
||||
port: 8080,
|
||||
},
|
||||
]);
|
||||
assert_eq!(
|
||||
result,
|
||||
vec![
|
||||
BackendServer {
|
||||
address: "some-hostname.test.mcd".to_string(),
|
||||
port: 80,
|
||||
},
|
||||
BackendServer {
|
||||
address: "192.168.1.2".to_string(),
|
||||
port: 8080,
|
||||
},
|
||||
]
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
46
harmony/src/modules/cert_manager/helm.rs
Normal file
46
harmony/src/modules/cert_manager/helm.rs
Normal file
@@ -0,0 +1,46 @@
|
||||
use std::{collections::HashMap, str::FromStr};
|
||||
|
||||
use non_blank_string_rs::NonBlankString;
|
||||
use serde::Serialize;
|
||||
use url::Url;
|
||||
|
||||
use crate::{
|
||||
modules::helm::chart::{HelmChartScore, HelmRepository},
|
||||
score::Score,
|
||||
topology::{HelmCommand, Topology},
|
||||
};
|
||||
|
||||
#[derive(Debug, Serialize, Clone)]
|
||||
pub struct CertManagerHelmScore {}
|
||||
|
||||
impl<T: Topology + HelmCommand> Score<T> for CertManagerHelmScore {
|
||||
fn create_interpret(&self) -> Box<dyn crate::interpret::Interpret<T>> {
|
||||
let mut values_overrides = HashMap::new();
|
||||
values_overrides.insert(
|
||||
NonBlankString::from_str("crds.enabled").unwrap(),
|
||||
"true".to_string(),
|
||||
);
|
||||
let values_overrides = Some(values_overrides);
|
||||
|
||||
HelmChartScore {
|
||||
namespace: Some(NonBlankString::from_str("cert-manager").unwrap()),
|
||||
release_name: NonBlankString::from_str("cert-manager").unwrap(),
|
||||
chart_name: NonBlankString::from_str("jetstack/cert-manager").unwrap(),
|
||||
chart_version: None,
|
||||
values_overrides,
|
||||
values_yaml: None,
|
||||
create_namespace: true,
|
||||
install_only: true,
|
||||
repository: Some(HelmRepository::new(
|
||||
"jetstack".to_string(),
|
||||
Url::parse("https://charts.jetstack.io").unwrap(),
|
||||
true,
|
||||
)),
|
||||
}
|
||||
.create_interpret()
|
||||
}
|
||||
|
||||
fn name(&self) -> String {
|
||||
format!("CertManagerHelmScore")
|
||||
}
|
||||
}
|
||||
2
harmony/src/modules/cert_manager/mod.rs
Normal file
2
harmony/src/modules/cert_manager/mod.rs
Normal file
@@ -0,0 +1,2 @@
|
||||
mod helm;
|
||||
pub use helm::*;
|
||||
@@ -17,6 +17,9 @@ pub struct DhcpScore {
|
||||
pub host_binding: Vec<HostBinding>,
|
||||
pub next_server: Option<IpAddress>,
|
||||
pub boot_filename: Option<String>,
|
||||
pub filename: Option<String>,
|
||||
pub filename64: Option<String>,
|
||||
pub filenameipxe: Option<String>,
|
||||
}
|
||||
|
||||
impl<T: Topology + DhcpServer> Score<T> for DhcpScore {
|
||||
@@ -117,8 +120,44 @@ impl DhcpInterpret {
|
||||
None => Outcome::noop(),
|
||||
};
|
||||
|
||||
let filename_outcome = match &self.score.filename {
|
||||
Some(filename) => {
|
||||
dhcp_server.set_filename(&filename).await?;
|
||||
Outcome::new(
|
||||
InterpretStatus::SUCCESS,
|
||||
format!("Dhcp Interpret Set filename to {filename}"),
|
||||
)
|
||||
}
|
||||
None => Outcome::noop(),
|
||||
};
|
||||
|
||||
let filename64_outcome = match &self.score.filename64 {
|
||||
Some(filename64) => {
|
||||
dhcp_server.set_filename64(&filename64).await?;
|
||||
Outcome::new(
|
||||
InterpretStatus::SUCCESS,
|
||||
format!("Dhcp Interpret Set filename64 to {filename64}"),
|
||||
)
|
||||
}
|
||||
None => Outcome::noop(),
|
||||
};
|
||||
|
||||
let filenameipxe_outcome = match &self.score.filenameipxe {
|
||||
Some(filenameipxe) => {
|
||||
dhcp_server.set_filenameipxe(&filenameipxe).await?;
|
||||
Outcome::new(
|
||||
InterpretStatus::SUCCESS,
|
||||
format!("Dhcp Interpret Set filenameipxe to {filenameipxe}"),
|
||||
)
|
||||
}
|
||||
None => Outcome::noop(),
|
||||
};
|
||||
|
||||
if next_server_outcome.status == InterpretStatus::NOOP
|
||||
&& boot_filename_outcome.status == InterpretStatus::NOOP
|
||||
&& filename_outcome.status == InterpretStatus::NOOP
|
||||
&& filename64_outcome.status == InterpretStatus::NOOP
|
||||
&& filenameipxe_outcome.status == InterpretStatus::NOOP
|
||||
{
|
||||
return Ok(Outcome::noop());
|
||||
}
|
||||
@@ -126,8 +165,12 @@ impl DhcpInterpret {
|
||||
Ok(Outcome::new(
|
||||
InterpretStatus::SUCCESS,
|
||||
format!(
|
||||
"Dhcp Interpret Set next boot to {:?} and boot_filename to {:?}",
|
||||
self.score.boot_filename, self.score.boot_filename
|
||||
"Dhcp Interpret Set next boot to [{:?}], boot_filename to [{:?}], filename to [{:?}], filename64 to [{:?}], filenameipxe to [:{:?}]",
|
||||
self.score.boot_filename,
|
||||
self.score.boot_filename,
|
||||
self.score.filename,
|
||||
self.score.filename64,
|
||||
self.score.filenameipxe
|
||||
),
|
||||
))
|
||||
}
|
||||
|
||||
@@ -6,9 +6,31 @@ use crate::topology::{HelmCommand, Topology};
|
||||
use async_trait::async_trait;
|
||||
use helm_wrapper_rs;
|
||||
use helm_wrapper_rs::blocking::{DefaultHelmExecutor, HelmExecutor};
|
||||
use non_blank_string_rs::NonBlankString;
|
||||
use log::{debug, info, warn};
|
||||
pub use non_blank_string_rs::NonBlankString;
|
||||
use serde::Serialize;
|
||||
use std::collections::HashMap;
|
||||
use std::path::Path;
|
||||
use std::process::{Command, Output, Stdio};
|
||||
use std::str::FromStr;
|
||||
use temp_file::TempFile;
|
||||
use url::Url;
|
||||
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
pub struct HelmRepository {
|
||||
name: String,
|
||||
url: Url,
|
||||
force_update: bool,
|
||||
}
|
||||
impl HelmRepository {
|
||||
pub fn new(name: String, url: Url, force_update: bool) -> Self {
|
||||
Self {
|
||||
name,
|
||||
url,
|
||||
force_update,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
pub struct HelmChartScore {
|
||||
@@ -17,6 +39,12 @@ pub struct HelmChartScore {
|
||||
pub chart_name: NonBlankString,
|
||||
pub chart_version: Option<NonBlankString>,
|
||||
pub values_overrides: Option<HashMap<NonBlankString, String>>,
|
||||
pub values_yaml: Option<String>,
|
||||
pub create_namespace: bool,
|
||||
|
||||
/// Wether to run `helm upgrade --install` under the hood or only install when not present
|
||||
pub install_only: bool,
|
||||
pub repository: Option<HelmRepository>,
|
||||
}
|
||||
|
||||
impl<T: Topology + HelmCommand> Score<T> for HelmChartScore {
|
||||
@@ -27,7 +55,7 @@ impl<T: Topology + HelmCommand> Score<T> for HelmChartScore {
|
||||
}
|
||||
|
||||
fn name(&self) -> String {
|
||||
"HelmChartScore".to_string()
|
||||
format!("{} {} HelmChartScore", self.release_name, self.chart_name)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -35,6 +63,81 @@ impl<T: Topology + HelmCommand> Score<T> for HelmChartScore {
|
||||
pub struct HelmChartInterpret {
|
||||
pub score: HelmChartScore,
|
||||
}
|
||||
impl HelmChartInterpret {
|
||||
fn add_repo(&self) -> Result<(), InterpretError> {
|
||||
let repo = match &self.score.repository {
|
||||
Some(repo) => repo,
|
||||
None => {
|
||||
info!("No Helm repository specified in the score. Skipping repository setup.");
|
||||
return Ok(());
|
||||
}
|
||||
};
|
||||
info!(
|
||||
"Ensuring Helm repository exists: Name='{}', URL='{}', ForceUpdate={}",
|
||||
repo.name, repo.url, repo.force_update
|
||||
);
|
||||
|
||||
let mut add_args = vec!["repo", "add", &repo.name, repo.url.as_str()];
|
||||
if repo.force_update {
|
||||
add_args.push("--force-update");
|
||||
}
|
||||
|
||||
let add_output = run_helm_command(&add_args)?;
|
||||
let full_output = format!(
|
||||
"{}\n{}",
|
||||
String::from_utf8_lossy(&add_output.stdout),
|
||||
String::from_utf8_lossy(&add_output.stderr)
|
||||
);
|
||||
|
||||
match add_output.status.success() {
|
||||
true => {
|
||||
return Ok(());
|
||||
}
|
||||
false => {
|
||||
return Err(InterpretError::new(format!(
|
||||
"Failed to add helm repository!\n{full_output}"
|
||||
)));
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn run_helm_command(args: &[&str]) -> Result<Output, InterpretError> {
|
||||
let command_str = format!("helm {}", args.join(" "));
|
||||
debug!(
|
||||
"Got KUBECONFIG: `{}`",
|
||||
std::env::var("KUBECONFIG").unwrap_or("".to_string())
|
||||
);
|
||||
debug!("Running Helm command: `{}`", command_str);
|
||||
|
||||
let output = Command::new("helm")
|
||||
.args(args)
|
||||
.stdout(Stdio::piped())
|
||||
.stderr(Stdio::piped())
|
||||
.output()
|
||||
.map_err(|e| {
|
||||
InterpretError::new(format!(
|
||||
"Failed to execute helm command '{}': {}. Is helm installed and in PATH?",
|
||||
command_str, e
|
||||
))
|
||||
})?;
|
||||
|
||||
if !output.status.success() {
|
||||
let stdout = String::from_utf8_lossy(&output.stdout);
|
||||
let stderr = String::from_utf8_lossy(&output.stderr);
|
||||
warn!(
|
||||
"Helm command `{}` failed with status: {}\nStdout:\n{}\nStderr:\n{}",
|
||||
command_str, output.status, stdout, stderr
|
||||
);
|
||||
} else {
|
||||
debug!(
|
||||
"Helm command `{}` finished successfully. Status: {}",
|
||||
command_str, output.status
|
||||
);
|
||||
}
|
||||
|
||||
Ok(output)
|
||||
}
|
||||
|
||||
#[async_trait]
|
||||
impl<T: Topology + HelmCommand> Interpret<T> for HelmChartInterpret {
|
||||
@@ -48,16 +151,76 @@ impl<T: Topology + HelmCommand> Interpret<T> for HelmChartInterpret {
|
||||
.namespace
|
||||
.as_ref()
|
||||
.unwrap_or_else(|| todo!("Get namespace from active kubernetes cluster"));
|
||||
let helm_executor = DefaultHelmExecutor::new();
|
||||
|
||||
let tf: TempFile;
|
||||
let yaml_path: Option<&Path> = match self.score.values_yaml.as_ref() {
|
||||
Some(yaml_str) => {
|
||||
tf = temp_file::with_contents(yaml_str.as_bytes());
|
||||
Some(tf.path())
|
||||
}
|
||||
None => None,
|
||||
};
|
||||
|
||||
self.add_repo()?;
|
||||
|
||||
let helm_executor = DefaultHelmExecutor::new_with_opts(
|
||||
&NonBlankString::from_str("helm").unwrap(),
|
||||
None,
|
||||
900,
|
||||
false,
|
||||
false,
|
||||
);
|
||||
|
||||
let mut helm_options = Vec::new();
|
||||
if self.score.create_namespace {
|
||||
helm_options.push(NonBlankString::from_str("--create-namespace").unwrap());
|
||||
}
|
||||
|
||||
if self.score.install_only {
|
||||
let chart_list = match helm_executor.list(Some(ns)) {
|
||||
Ok(charts) => charts,
|
||||
Err(e) => {
|
||||
return Err(InterpretError::new(format!(
|
||||
"Failed to list scores in namespace {:?} because of error : {}",
|
||||
self.score.namespace, e
|
||||
)));
|
||||
}
|
||||
};
|
||||
|
||||
if chart_list
|
||||
.iter()
|
||||
.any(|item| item.name == self.score.release_name.to_string())
|
||||
{
|
||||
info!(
|
||||
"Release '{}' already exists in namespace '{}'. Skipping installation as install_only is true.",
|
||||
self.score.release_name, ns
|
||||
);
|
||||
|
||||
return Ok(Outcome::new(
|
||||
InterpretStatus::SUCCESS,
|
||||
format!(
|
||||
"Helm Chart '{}' already installed to namespace {ns} and install_only=true",
|
||||
self.score.release_name
|
||||
),
|
||||
));
|
||||
} else {
|
||||
info!(
|
||||
"Release '{}' not found in namespace '{}'. Proceeding with installation.",
|
||||
self.score.release_name, ns
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
let res = helm_executor.install_or_upgrade(
|
||||
&ns,
|
||||
&self.score.release_name,
|
||||
&self.score.chart_name,
|
||||
self.score.chart_version.as_ref(),
|
||||
self.score.values_overrides.as_ref(),
|
||||
None,
|
||||
None,
|
||||
yaml_path,
|
||||
Some(&helm_options),
|
||||
);
|
||||
|
||||
let status = match res {
|
||||
Ok(status) => status,
|
||||
Err(err) => return Err(InterpretError::new(err.to_string())),
|
||||
|
||||
376
harmony/src/modules/helm/command.rs
Normal file
376
harmony/src/modules/helm/command.rs
Normal file
@@ -0,0 +1,376 @@
|
||||
use async_trait::async_trait;
|
||||
use log::debug;
|
||||
use serde::Serialize;
|
||||
use std::collections::HashMap;
|
||||
use std::io::ErrorKind;
|
||||
use std::path::PathBuf;
|
||||
use std::process::{Command, Output};
|
||||
use temp_dir::{self, TempDir};
|
||||
use temp_file::TempFile;
|
||||
|
||||
use crate::data::{Id, Version};
|
||||
use crate::interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome};
|
||||
use crate::inventory::Inventory;
|
||||
use crate::score::Score;
|
||||
use crate::topology::{HelmCommand, K8sclient, Topology};
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct HelmCommandExecutor {
|
||||
pub env: HashMap<String, String>,
|
||||
pub path: Option<PathBuf>,
|
||||
pub args: Vec<String>,
|
||||
pub api_versions: Option<Vec<String>>,
|
||||
pub kube_version: String,
|
||||
pub debug: Option<bool>,
|
||||
pub globals: HelmGlobals,
|
||||
pub chart: HelmChart,
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct HelmGlobals {
|
||||
pub chart_home: Option<PathBuf>,
|
||||
pub config_home: Option<PathBuf>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
pub struct HelmChart {
|
||||
pub name: String,
|
||||
pub version: Option<String>,
|
||||
pub repo: Option<String>,
|
||||
pub release_name: Option<String>,
|
||||
pub namespace: Option<String>,
|
||||
pub additional_values_files: Vec<PathBuf>,
|
||||
pub values_file: Option<PathBuf>,
|
||||
pub values_inline: Option<String>,
|
||||
pub include_crds: Option<bool>,
|
||||
pub skip_hooks: Option<bool>,
|
||||
pub api_versions: Option<Vec<String>>,
|
||||
pub kube_version: Option<String>,
|
||||
pub name_template: String,
|
||||
pub skip_tests: Option<bool>,
|
||||
pub debug: Option<bool>,
|
||||
}
|
||||
|
||||
impl HelmCommandExecutor {
|
||||
pub fn generate(mut self) -> Result<String, std::io::Error> {
|
||||
if self.globals.chart_home.is_none() {
|
||||
self.globals.chart_home = Some(PathBuf::from("charts"));
|
||||
}
|
||||
|
||||
if self
|
||||
.clone()
|
||||
.chart
|
||||
.clone()
|
||||
.chart_exists_locally(self.clone().globals.chart_home.unwrap())
|
||||
.is_none()
|
||||
{
|
||||
if self.chart.repo.is_none() {
|
||||
return Err(std::io::Error::new(
|
||||
ErrorKind::Other,
|
||||
"Chart doesn't exist locally and no repo specified",
|
||||
));
|
||||
}
|
||||
self.clone().run_command(
|
||||
self.chart
|
||||
.clone()
|
||||
.pull_command(self.globals.chart_home.clone().unwrap()),
|
||||
)?;
|
||||
}
|
||||
|
||||
let out = match self.clone().run_command(
|
||||
self.chart
|
||||
.clone()
|
||||
.helm_args(self.globals.chart_home.clone().unwrap()),
|
||||
) {
|
||||
Ok(out) => out,
|
||||
Err(e) => return Err(e),
|
||||
};
|
||||
|
||||
// TODO: don't use unwrap here
|
||||
let s = String::from_utf8(out.stdout).unwrap();
|
||||
debug!("helm stderr: {}", String::from_utf8(out.stderr).unwrap());
|
||||
debug!("helm status: {}", out.status);
|
||||
debug!("helm output: {s}");
|
||||
|
||||
let clean = s.split_once("---").unwrap().1;
|
||||
|
||||
Ok(clean.to_string())
|
||||
}
|
||||
|
||||
pub fn version(self) -> Result<String, std::io::Error> {
|
||||
let out = match self.run_command(vec![
|
||||
"version".to_string(),
|
||||
"-c".to_string(),
|
||||
"--short".to_string(),
|
||||
]) {
|
||||
Ok(out) => out,
|
||||
Err(e) => return Err(e),
|
||||
};
|
||||
|
||||
// TODO: don't use unwrap
|
||||
Ok(String::from_utf8(out.stdout).unwrap())
|
||||
}
|
||||
|
||||
pub fn run_command(mut self, mut args: Vec<String>) -> Result<Output, std::io::Error> {
|
||||
if let Some(d) = self.debug {
|
||||
if d {
|
||||
args.push("--debug".to_string());
|
||||
}
|
||||
}
|
||||
|
||||
let path = if let Some(p) = self.path {
|
||||
p
|
||||
} else {
|
||||
PathBuf::from("helm")
|
||||
};
|
||||
|
||||
let config_home = match self.globals.config_home {
|
||||
Some(p) => p,
|
||||
None => PathBuf::from(TempDir::new()?.path()),
|
||||
};
|
||||
|
||||
match self.chart.values_inline {
|
||||
Some(yaml_str) => {
|
||||
let tf: TempFile;
|
||||
tf = temp_file::with_contents(yaml_str.as_bytes());
|
||||
self.chart
|
||||
.additional_values_files
|
||||
.push(PathBuf::from(tf.path()));
|
||||
}
|
||||
None => (),
|
||||
};
|
||||
|
||||
self.env.insert(
|
||||
"HELM_CONFIG_HOME".to_string(),
|
||||
config_home.to_str().unwrap().to_string(),
|
||||
);
|
||||
self.env.insert(
|
||||
"HELM_CACHE_HOME".to_string(),
|
||||
config_home.to_str().unwrap().to_string(),
|
||||
);
|
||||
self.env.insert(
|
||||
"HELM_DATA_HOME".to_string(),
|
||||
config_home.to_str().unwrap().to_string(),
|
||||
);
|
||||
|
||||
Command::new(path).envs(self.env).args(args).output()
|
||||
}
|
||||
}
|
||||
|
||||
impl HelmChart {
|
||||
pub fn chart_exists_locally(self, chart_home: PathBuf) -> Option<PathBuf> {
|
||||
let chart_path =
|
||||
PathBuf::from(chart_home.to_str().unwrap().to_string() + "/" + &self.name.to_string());
|
||||
|
||||
if chart_path.exists() {
|
||||
Some(chart_path)
|
||||
} else {
|
||||
None
|
||||
}
|
||||
}
|
||||
|
||||
pub fn pull_command(self, chart_home: PathBuf) -> Vec<String> {
|
||||
let mut args = vec![
|
||||
"pull".to_string(),
|
||||
"--untar".to_string(),
|
||||
"--untardir".to_string(),
|
||||
chart_home.to_str().unwrap().to_string(),
|
||||
];
|
||||
|
||||
match self.repo {
|
||||
Some(r) => {
|
||||
if r.starts_with("oci://") {
|
||||
args.push(String::from(
|
||||
r.trim_end_matches("/").to_string() + "/" + self.name.clone().as_str(),
|
||||
));
|
||||
} else {
|
||||
args.push("--repo".to_string());
|
||||
args.push(r.to_string());
|
||||
|
||||
args.push(self.name);
|
||||
}
|
||||
}
|
||||
None => args.push(self.name),
|
||||
};
|
||||
|
||||
match self.version {
|
||||
Some(v) => {
|
||||
args.push("--version".to_string());
|
||||
args.push(v.to_string());
|
||||
}
|
||||
None => (),
|
||||
}
|
||||
|
||||
args
|
||||
}
|
||||
|
||||
pub fn helm_args(self, chart_home: PathBuf) -> Vec<String> {
|
||||
let mut args: Vec<String> = vec!["template".to_string()];
|
||||
|
||||
match self.release_name {
|
||||
Some(rn) => args.push(rn.to_string()),
|
||||
None => args.push("--generate-name".to_string()),
|
||||
}
|
||||
|
||||
args.push(
|
||||
PathBuf::from(chart_home.to_str().unwrap().to_string() + "/" + self.name.as_str())
|
||||
.to_str()
|
||||
.unwrap()
|
||||
.to_string(),
|
||||
);
|
||||
|
||||
if let Some(n) = self.namespace {
|
||||
args.push("--namespace".to_string());
|
||||
args.push(n.to_string());
|
||||
}
|
||||
|
||||
if let Some(f) = self.values_file {
|
||||
args.push("-f".to_string());
|
||||
args.push(f.to_str().unwrap().to_string());
|
||||
}
|
||||
|
||||
for f in self.additional_values_files {
|
||||
args.push("-f".to_string());
|
||||
args.push(f.to_str().unwrap().to_string());
|
||||
}
|
||||
|
||||
if let Some(vv) = self.api_versions {
|
||||
for v in vv {
|
||||
args.push("--api-versions".to_string());
|
||||
args.push(v);
|
||||
}
|
||||
}
|
||||
|
||||
if let Some(kv) = self.kube_version {
|
||||
args.push("--kube-version".to_string());
|
||||
args.push(kv);
|
||||
}
|
||||
|
||||
if let Some(crd) = self.include_crds {
|
||||
if crd {
|
||||
args.push("--include-crds".to_string());
|
||||
}
|
||||
}
|
||||
|
||||
if let Some(st) = self.skip_tests {
|
||||
if st {
|
||||
args.push("--skip-tests".to_string());
|
||||
}
|
||||
}
|
||||
|
||||
if let Some(sh) = self.skip_hooks {
|
||||
if sh {
|
||||
args.push("--no-hooks".to_string());
|
||||
}
|
||||
}
|
||||
|
||||
if let Some(d) = self.debug {
|
||||
if d {
|
||||
args.push("--debug".to_string());
|
||||
}
|
||||
}
|
||||
|
||||
args
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
pub struct HelmChartScoreV2 {
|
||||
pub chart: HelmChart,
|
||||
}
|
||||
|
||||
impl<T: Topology + K8sclient + HelmCommand> Score<T> for HelmChartScoreV2 {
|
||||
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
|
||||
Box::new(HelmChartInterpretV2 {
|
||||
score: self.clone(),
|
||||
})
|
||||
}
|
||||
|
||||
fn name(&self) -> String {
|
||||
format!(
|
||||
"{} {} HelmChartScoreV2",
|
||||
self.chart
|
||||
.release_name
|
||||
.clone()
|
||||
.unwrap_or("Unknown".to_string()),
|
||||
self.chart.name
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize)]
|
||||
pub struct HelmChartInterpretV2 {
|
||||
pub score: HelmChartScoreV2,
|
||||
}
|
||||
impl HelmChartInterpretV2 {}
|
||||
|
||||
#[async_trait]
|
||||
impl<T: Topology + K8sclient + HelmCommand> Interpret<T> for HelmChartInterpretV2 {
|
||||
async fn execute(
|
||||
&self,
|
||||
_inventory: &Inventory,
|
||||
_topology: &T,
|
||||
) -> Result<Outcome, InterpretError> {
|
||||
let ns = self
|
||||
.score
|
||||
.chart
|
||||
.namespace
|
||||
.as_ref()
|
||||
.unwrap_or_else(|| todo!("Get namespace from active kubernetes cluster"));
|
||||
|
||||
let helm_executor = HelmCommandExecutor {
|
||||
env: HashMap::new(),
|
||||
path: None,
|
||||
args: vec![],
|
||||
api_versions: None,
|
||||
kube_version: "v1.33.0".to_string(),
|
||||
debug: Some(false),
|
||||
globals: HelmGlobals {
|
||||
chart_home: None,
|
||||
config_home: None,
|
||||
},
|
||||
chart: self.score.chart.clone(),
|
||||
};
|
||||
|
||||
// let mut helm_options = Vec::new();
|
||||
// if self.score.create_namespace {
|
||||
// helm_options.push(NonBlankString::from_str("--create-namespace").unwrap());
|
||||
// }
|
||||
|
||||
let res = helm_executor.generate();
|
||||
|
||||
let output = match res {
|
||||
Ok(output) => output,
|
||||
Err(err) => return Err(InterpretError::new(err.to_string())),
|
||||
};
|
||||
|
||||
// TODO: implement actually applying the YAML from the templating in the generate function to a k8s cluster, having trouble passing in straight YAML into the k8s client
|
||||
|
||||
// let k8s_resource = k8s_openapi::serde_json::from_str(output.as_str()).unwrap();
|
||||
|
||||
// let client = topology
|
||||
// .k8s_client()
|
||||
// .await
|
||||
// .expect("Environment should provide enough information to instanciate a client")
|
||||
// .apply_namespaced(&vec![output], Some(ns.to_string().as_str()));
|
||||
// match client.apply_yaml(output) {
|
||||
// Ok(_) => return Ok(Outcome::success("Helm chart deployed".to_string())),
|
||||
// Err(e) => return Err(InterpretError::new(e)),
|
||||
// }
|
||||
|
||||
Ok(Outcome::success("Helm chart deployed".to_string()))
|
||||
}
|
||||
|
||||
fn get_name(&self) -> InterpretName {
|
||||
todo!()
|
||||
}
|
||||
fn get_version(&self) -> Version {
|
||||
todo!()
|
||||
}
|
||||
fn get_status(&self) -> InterpretStatus {
|
||||
todo!()
|
||||
}
|
||||
fn get_children(&self) -> Vec<Id> {
|
||||
todo!()
|
||||
}
|
||||
}
|
||||
@@ -1 +1,2 @@
|
||||
pub mod chart;
|
||||
pub mod command;
|
||||
|
||||
66
harmony/src/modules/ipxe.rs
Normal file
66
harmony/src/modules/ipxe.rs
Normal file
@@ -0,0 +1,66 @@
|
||||
use async_trait::async_trait;
|
||||
use derive_new::new;
|
||||
use serde::Serialize;
|
||||
|
||||
use crate::{
|
||||
data::{Id, Version},
|
||||
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
|
||||
inventory::Inventory,
|
||||
score::Score,
|
||||
topology::Topology,
|
||||
};
|
||||
|
||||
#[derive(Debug, new, Clone, Serialize)]
|
||||
pub struct IpxeScore {
|
||||
//files_to_serve: Url,
|
||||
}
|
||||
|
||||
impl<T: Topology> Score<T> for IpxeScore {
|
||||
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
|
||||
Box::new(IpxeInterpret::new(self.clone()))
|
||||
}
|
||||
|
||||
fn name(&self) -> String {
|
||||
"IpxeScore".to_string()
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, new, Clone)]
|
||||
pub struct IpxeInterpret {
|
||||
_score: IpxeScore,
|
||||
}
|
||||
|
||||
#[async_trait]
|
||||
impl<T: Topology> Interpret<T> for IpxeInterpret {
|
||||
async fn execute(
|
||||
&self,
|
||||
_inventory: &Inventory,
|
||||
_topology: &T,
|
||||
) -> Result<Outcome, InterpretError> {
|
||||
/*
|
||||
let http_server = &topology.http_server;
|
||||
http_server.ensure_initialized().await?;
|
||||
Ok(Outcome::success(format!(
|
||||
"Http Server running and serving files from {}",
|
||||
self.score.files_to_serve
|
||||
)))
|
||||
*/
|
||||
todo!();
|
||||
}
|
||||
|
||||
fn get_name(&self) -> InterpretName {
|
||||
InterpretName::Ipxe
|
||||
}
|
||||
|
||||
fn get_version(&self) -> Version {
|
||||
todo!()
|
||||
}
|
||||
|
||||
fn get_status(&self) -> InterpretStatus {
|
||||
todo!()
|
||||
}
|
||||
|
||||
fn get_children(&self) -> Vec<Id> {
|
||||
todo!()
|
||||
}
|
||||
}
|
||||
@@ -1,7 +1,11 @@
|
||||
use std::path::PathBuf;
|
||||
|
||||
use async_trait::async_trait;
|
||||
use log::info;
|
||||
use serde::Serialize;
|
||||
|
||||
use crate::{
|
||||
config::HARMONY_CONFIG_DIR,
|
||||
data::{Id, Version},
|
||||
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
|
||||
inventory::Inventory,
|
||||
@@ -10,26 +14,25 @@ use crate::{
|
||||
};
|
||||
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
pub struct K3DInstallationScore {}
|
||||
pub struct K3DInstallationScore {
|
||||
pub installation_path: PathBuf,
|
||||
pub cluster_name: String,
|
||||
}
|
||||
|
||||
impl K3DInstallationScore {
|
||||
pub fn new() -> Self {
|
||||
Self {}
|
||||
impl Default for K3DInstallationScore {
|
||||
fn default() -> Self {
|
||||
Self {
|
||||
installation_path: HARMONY_CONFIG_DIR.join("k3d"),
|
||||
cluster_name: "harmony".to_string(),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<T: Topology> Score<T> for K3DInstallationScore {
|
||||
fn create_interpret(&self) -> Box<dyn crate::interpret::Interpret<T>> {
|
||||
todo!("
|
||||
1. Decide if I create a new crate for k3d management, especially to avoid the ocrtograb dependency
|
||||
2. Implement k3d management
|
||||
3. Find latest tag
|
||||
4. Download k3d to some path managed by harmony (or not?)
|
||||
5. Bootstrap cluster
|
||||
6. Get kubeconfig
|
||||
7. Load kubeconfig in k8s anywhere
|
||||
8. Complete k8sanywhere setup
|
||||
")
|
||||
Box::new(K3dInstallationInterpret {
|
||||
score: self.clone(),
|
||||
})
|
||||
}
|
||||
|
||||
fn name(&self) -> String {
|
||||
@@ -38,7 +41,9 @@ impl<T: Topology> Score<T> for K3DInstallationScore {
|
||||
}
|
||||
|
||||
#[derive(Debug)]
|
||||
pub struct K3dInstallationInterpret {}
|
||||
pub struct K3dInstallationInterpret {
|
||||
score: K3DInstallationScore,
|
||||
}
|
||||
|
||||
#[async_trait]
|
||||
impl<T: Topology> Interpret<T> for K3dInstallationInterpret {
|
||||
@@ -47,7 +52,20 @@ impl<T: Topology> Interpret<T> for K3dInstallationInterpret {
|
||||
_inventory: &Inventory,
|
||||
_topology: &T,
|
||||
) -> Result<Outcome, InterpretError> {
|
||||
todo!()
|
||||
let k3d = k3d_rs::K3d::new(
|
||||
self.score.installation_path.clone(),
|
||||
Some(self.score.cluster_name.clone()),
|
||||
);
|
||||
match k3d.ensure_installed().await {
|
||||
Ok(_client) => {
|
||||
let msg = format!("k3d cluster {} is installed ", self.score.cluster_name);
|
||||
info!("{msg}");
|
||||
Ok(Outcome::success(msg))
|
||||
}
|
||||
Err(msg) => Err(InterpretError::new(format!(
|
||||
"K3dInstallationInterpret failed to ensure k3d is installed : {msg}"
|
||||
))),
|
||||
}
|
||||
}
|
||||
fn get_name(&self) -> InterpretName {
|
||||
InterpretName::K3dInstallation
|
||||
|
||||
@@ -14,11 +14,13 @@ use super::resource::{K8sResourceInterpret, K8sResourceScore};
|
||||
pub struct K8sDeploymentScore {
|
||||
pub name: String,
|
||||
pub image: String,
|
||||
pub namespace: Option<String>,
|
||||
pub env_vars: serde_json::Value,
|
||||
}
|
||||
|
||||
impl<T: Topology + K8sclient> Score<T> for K8sDeploymentScore {
|
||||
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
|
||||
let deployment: Deployment = serde_json::from_value(json!(
|
||||
let deployment = json!(
|
||||
{
|
||||
"metadata": {
|
||||
"name": self.name
|
||||
@@ -38,18 +40,21 @@ impl<T: Topology + K8sclient> Score<T> for K8sDeploymentScore {
|
||||
"spec": {
|
||||
"containers": [
|
||||
{
|
||||
"image": self.image,
|
||||
"name": self.image
|
||||
"image": self.image,
|
||||
"name": self.name,
|
||||
"imagePullPolicy": "Always",
|
||||
"env": self.env_vars,
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
))
|
||||
.unwrap();
|
||||
);
|
||||
|
||||
let deployment: Deployment = serde_json::from_value(deployment).unwrap();
|
||||
Box::new(K8sResourceInterpret {
|
||||
score: K8sResourceScore::single(deployment.clone()),
|
||||
score: K8sResourceScore::single(deployment.clone(), self.namespace.clone()),
|
||||
})
|
||||
}
|
||||
|
||||
|
||||
98
harmony/src/modules/k8s/ingress.rs
Normal file
98
harmony/src/modules/k8s/ingress.rs
Normal file
@@ -0,0 +1,98 @@
|
||||
use harmony_macros::ingress_path;
|
||||
use k8s_openapi::api::networking::v1::Ingress;
|
||||
use serde::Serialize;
|
||||
use serde_json::json;
|
||||
|
||||
use crate::{
|
||||
interpret::Interpret,
|
||||
score::Score,
|
||||
topology::{K8sclient, Topology},
|
||||
};
|
||||
|
||||
use super::resource::{K8sResourceInterpret, K8sResourceScore};
|
||||
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
pub enum PathType {
|
||||
ImplementationSpecific,
|
||||
Exact,
|
||||
Prefix,
|
||||
}
|
||||
|
||||
impl PathType {
|
||||
fn as_str(&self) -> &'static str {
|
||||
match self {
|
||||
PathType::ImplementationSpecific => "ImplementationSpecific",
|
||||
PathType::Exact => "Exact",
|
||||
PathType::Prefix => "Prefix",
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
type IngressPath = String;
|
||||
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
pub struct K8sIngressScore {
|
||||
pub name: fqdn::FQDN,
|
||||
pub host: fqdn::FQDN,
|
||||
pub backend_service: fqdn::FQDN,
|
||||
pub port: u16,
|
||||
pub path: Option<IngressPath>,
|
||||
pub path_type: Option<PathType>,
|
||||
pub namespace: Option<fqdn::FQDN>,
|
||||
}
|
||||
|
||||
impl<T: Topology + K8sclient> Score<T> for K8sIngressScore {
|
||||
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
|
||||
let path = match self.path.clone() {
|
||||
Some(p) => p,
|
||||
None => ingress_path!("/"),
|
||||
};
|
||||
|
||||
let path_type = match self.path_type.clone() {
|
||||
Some(p) => p,
|
||||
None => PathType::Prefix,
|
||||
};
|
||||
|
||||
let ingress = json!(
|
||||
{
|
||||
"metadata": {
|
||||
"name": self.name
|
||||
},
|
||||
"spec": {
|
||||
"rules": [
|
||||
{ "host": self.host,
|
||||
"http": {
|
||||
"paths": [
|
||||
{
|
||||
"path": path,
|
||||
"pathType": path_type.as_str(),
|
||||
"backend": [
|
||||
{
|
||||
"service": self.backend_service,
|
||||
"port": self.port
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
let ingress: Ingress = serde_json::from_value(ingress).unwrap();
|
||||
Box::new(K8sResourceInterpret {
|
||||
score: K8sResourceScore::single(
|
||||
ingress.clone(),
|
||||
self.namespace
|
||||
.clone()
|
||||
.map(|f| f.as_c_str().to_str().unwrap().to_string()),
|
||||
),
|
||||
})
|
||||
}
|
||||
|
||||
fn name(&self) -> String {
|
||||
format!("{} K8sIngressScore", self.name)
|
||||
}
|
||||
}
|
||||
@@ -1,2 +1,4 @@
|
||||
pub mod deployment;
|
||||
pub mod ingress;
|
||||
pub mod namespace;
|
||||
pub mod resource;
|
||||
|
||||
46
harmony/src/modules/k8s/namespace.rs
Normal file
46
harmony/src/modules/k8s/namespace.rs
Normal file
@@ -0,0 +1,46 @@
|
||||
use k8s_openapi::api::core::v1::Namespace;
|
||||
use non_blank_string_rs::NonBlankString;
|
||||
use serde::Serialize;
|
||||
use serde_json::json;
|
||||
|
||||
use crate::{
|
||||
interpret::Interpret,
|
||||
score::Score,
|
||||
topology::{K8sclient, Topology},
|
||||
};
|
||||
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
pub struct K8sNamespaceScore {
|
||||
pub name: Option<NonBlankString>,
|
||||
}
|
||||
|
||||
impl<T: Topology + K8sclient> Score<T> for K8sNamespaceScore {
|
||||
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
|
||||
let name = match &self.name {
|
||||
Some(name) => name,
|
||||
None => todo!(
|
||||
"Return NoOp interpret when no namespace specified or something that makes sense"
|
||||
),
|
||||
};
|
||||
let _namespace: Namespace = serde_json::from_value(json!(
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Namespace",
|
||||
"metadata": {
|
||||
"name": name,
|
||||
},
|
||||
}
|
||||
))
|
||||
.unwrap();
|
||||
todo!(
|
||||
"We currently only support namespaced ressources (see Scope = NamespaceResourceScope)"
|
||||
);
|
||||
// Box::new(K8sResourceInterpret {
|
||||
// score: K8sResourceScore::single(namespace.clone()),
|
||||
// })
|
||||
}
|
||||
|
||||
fn name(&self) -> String {
|
||||
"K8sNamespaceScore".to_string()
|
||||
}
|
||||
}
|
||||
@@ -14,12 +14,14 @@ use crate::{
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
pub struct K8sResourceScore<K: Resource + std::fmt::Debug> {
|
||||
pub resource: Vec<K>,
|
||||
pub namespace: Option<String>,
|
||||
}
|
||||
|
||||
impl<K: Resource + std::fmt::Debug> K8sResourceScore<K> {
|
||||
pub fn single(resource: K) -> Self {
|
||||
pub fn single(resource: K, namespace: Option<String>) -> Self {
|
||||
Self {
|
||||
resource: vec![resource],
|
||||
namespace,
|
||||
}
|
||||
}
|
||||
}
|
||||
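
With the updated constructor, callers pass the target namespace alongside the resource; a short sketch (the deployment value and namespace are placeholders):

    // `deployment` is any k8s_openapi resource built elsewhere
    let score = K8sResourceScore::single(deployment, Some("my-namespace".to_string()));
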
@@ -77,7 +79,7 @@ where
|
||||
.k8s_client()
|
||||
.await
|
||||
.expect("Environment should provide enough information to instanciate a client")
|
||||
.apply_namespaced(&self.score.resource)
|
||||
.apply_namespaced(&self.score.resource, self.score.namespace.as_deref())
|
||||
.await?;
|
||||
|
||||
Ok(Outcome::success(
|
||||
|
||||
@@ -1,8 +1,22 @@
|
||||
use convert_case::{Case, Casing};
|
||||
use dockerfile_builder::instruction::{CMD, COPY, ENV, EXPOSE, FROM, RUN, WORKDIR};
|
||||
use dockerfile_builder::{Dockerfile, instruction_builder::EnvBuilder};
|
||||
use fqdn::fqdn;
|
||||
use harmony_macros::ingress_path;
|
||||
use non_blank_string_rs::NonBlankString;
|
||||
use serde_json::json;
|
||||
use std::collections::HashMap;
|
||||
use std::fs;
|
||||
use std::path::{Path, PathBuf};
|
||||
use std::str::FromStr;
|
||||
|
||||
use async_trait::async_trait;
|
||||
use log::{debug, info};
|
||||
use serde::Serialize;
|
||||
|
||||
use crate::config::{REGISTRY_PROJECT, REGISTRY_URL};
|
||||
use crate::modules::k8s::ingress::K8sIngressScore;
|
||||
use crate::topology::HelmCommand;
|
||||
use crate::{
|
||||
data::{Id, Version},
|
||||
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
|
||||
@@ -12,6 +26,8 @@ use crate::{
|
||||
topology::{K8sclient, Topology, Url},
|
||||
};
|
||||
|
||||
use super::helm::chart::HelmChartScore;
|
||||
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
pub struct LAMPScore {
|
||||
pub name: String,
|
||||
@@ -24,6 +40,8 @@ pub struct LAMPScore {
|
||||
pub struct LAMPConfig {
|
||||
pub project_root: PathBuf,
|
||||
pub ssl_enabled: bool,
|
||||
pub database_size: Option<String>,
|
||||
pub namespace: String,
|
||||
}
|
||||
|
||||
impl Default for LAMPConfig {
|
||||
@@ -31,13 +49,17 @@ impl Default for LAMPConfig {
|
||||
LAMPConfig {
|
||||
project_root: Path::new("./src").to_path_buf(),
|
||||
ssl_enabled: true,
|
||||
database_size: None,
|
||||
namespace: "harmony-lamp".to_string(),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<T: Topology> Score<T> for LAMPScore {
|
||||
impl<T: Topology + K8sclient + HelmCommand> Score<T> for LAMPScore {
|
||||
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
|
||||
todo!()
|
||||
Box::new(LAMPInterpret {
|
||||
score: self.clone(),
|
||||
})
|
||||
}
|
||||
|
||||
fn name(&self) -> String {
|
||||
@@ -51,22 +73,94 @@ pub struct LAMPInterpret {
|
||||
}
|
||||
|
||||
#[async_trait]
|
||||
impl<T: Topology + K8sclient> Interpret<T> for LAMPInterpret {
|
||||
impl<T: Topology + K8sclient + HelmCommand> Interpret<T> for LAMPInterpret {
|
||||
async fn execute(
|
||||
&self,
|
||||
inventory: &Inventory,
|
||||
topology: &T,
|
||||
) -> Result<Outcome, InterpretError> {
|
||||
let deployment_score = K8sDeploymentScore {
|
||||
name: <LAMPScore as Score<T>>::name(&self.score),
|
||||
image: "local_image".to_string(),
|
||||
let image_name = match self.build_docker_image() {
|
||||
Ok(name) => name,
|
||||
Err(e) => {
|
||||
return Err(InterpretError::new(format!(
|
||||
"Could not build LAMP docker image {e}"
|
||||
)));
|
||||
}
|
||||
};
|
||||
info!("LAMP docker image built {image_name}");
|
||||
|
||||
let remote_name = match self.push_docker_image(&image_name) {
|
||||
Ok(remote_name) => remote_name,
|
||||
Err(e) => {
|
||||
return Err(InterpretError::new(format!(
|
||||
"Could not push docker image {e}"
|
||||
)));
|
||||
}
|
||||
};
|
||||
info!("LAMP docker image pushed to {remote_name}");
|
||||
|
||||
info!("Deploying database");
|
||||
self.deploy_database(inventory, topology).await?;
|
||||
|
||||
let base_name = self.score.name.to_case(Case::Kebab);
|
||||
let secret_name = format!("{}-database-mariadb", base_name);
|
||||
|
||||
let deployment_score = K8sDeploymentScore {
|
||||
name: <LAMPScore as Score<T>>::name(&self.score).to_case(Case::Kebab),
|
||||
image: remote_name,
|
||||
namespace: self.get_namespace().map(|nbs| nbs.to_string()),
|
||||
env_vars: json!([
|
||||
{
|
||||
"name": "MYSQL_PASSWORD",
|
||||
"valueFrom": {
|
||||
"secretKeyRef": {
|
||||
"name": secret_name,
|
||||
"key": "mariadb-root-password"
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "MYSQL_HOST",
|
||||
"value": secret_name
|
||||
},
|
||||
]),
|
||||
};
|
||||
|
||||
info!("Deploying score {deployment_score:#?}");
|
||||
|
||||
deployment_score
|
||||
.create_interpret()
|
||||
.execute(inventory, topology)
|
||||
.await?;
|
||||
todo!()
|
||||
|
||||
info!("LAMP deployment_score {deployment_score:?}");
|
||||
|
||||
let lamp_ingress = K8sIngressScore {
|
||||
name: fqdn!("lamp-ingress"),
|
||||
host: fqdn!("test"),
|
||||
backend_service: fqdn!(
|
||||
<LAMPScore as Score<T>>::name(&self.score)
|
||||
.to_case(Case::Kebab)
|
||||
.as_str()
|
||||
),
|
||||
port: 8080,
|
||||
path: Some(ingress_path!("/")),
|
||||
path_type: None,
|
||||
namespace: self
|
||||
.get_namespace()
|
||||
.map(|nbs| fqdn!(nbs.to_string().as_str())),
|
||||
};
|
||||
|
||||
lamp_ingress
|
||||
.create_interpret()
|
||||
.execute(inventory, topology)
|
||||
.await?;
|
||||
|
||||
info!("LAMP lamp_ingress {lamp_ingress:?}");
|
||||
|
||||
Ok(Outcome::success(
|
||||
"Successfully deployed LAMP Stack!".to_string(),
|
||||
))
|
||||
}
|
||||
|
||||
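
The secret name above follows from kebab-casing the score name; a small illustration of the convention (the score name is hypothetical):

    use convert_case::{Case, Casing};

    let base_name = "My Lamp App".to_case(Case::Kebab);           // "my-lamp-app"
    let secret_name = format!("{}-database-mariadb", base_name);  // "my-lamp-app-database-mariadb"
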
fn get_name(&self) -> InterpretName {
|
||||
@@ -85,3 +179,242 @@ impl<T: Topology + K8sclient> Interpret<T> for LAMPInterpret {
|
||||
todo!()
|
||||
}
|
||||
}
|
||||
|
||||
impl LAMPInterpret {
|
||||
async fn deploy_database<T: Topology + K8sclient + HelmCommand>(
|
||||
&self,
|
||||
inventory: &Inventory,
|
||||
topology: &T,
|
||||
) -> Result<Outcome, InterpretError> {
|
||||
let mut values_overrides = HashMap::new();
|
||||
if let Some(database_size) = self.score.config.database_size.clone() {
|
||||
values_overrides.insert(
|
||||
NonBlankString::from_str("primary.persistence.size").unwrap(),
|
||||
database_size,
|
||||
);
|
||||
values_overrides.insert(
|
||||
NonBlankString::from_str("auth.rootPassword").unwrap(),
|
||||
"mariadb-changethis".to_string(),
|
||||
);
|
||||
}
|
||||
let score = HelmChartScore {
|
||||
namespace: self.get_namespace(),
|
||||
release_name: NonBlankString::from_str(&format!("{}-database", self.score.name))
|
||||
.unwrap(),
|
||||
chart_name: NonBlankString::from_str(
|
||||
"oci://registry-1.docker.io/bitnamicharts/mariadb",
|
||||
)
|
||||
.unwrap(),
|
||||
chart_version: None,
|
||||
values_overrides: Some(values_overrides),
|
||||
create_namespace: true,
|
||||
install_only: false,
|
||||
values_yaml: None,
|
||||
repository: None,
|
||||
};
|
||||
|
||||
score.create_interpret().execute(inventory, topology).await
|
||||
}
|
||||
fn build_dockerfile(&self, score: &LAMPScore) -> Result<PathBuf, Box<dyn std::error::Error>> {
|
||||
let mut dockerfile = Dockerfile::new();
|
||||
|
||||
// Use the PHP version from the score to determine the base image
|
||||
let php_version = score.php_version.to_string();
|
||||
let php_major_minor = php_version
|
||||
.split('.')
|
||||
.take(2)
|
||||
.collect::<Vec<&str>>()
|
||||
.join(".");
|
||||
|
||||
// Base image selection - using official PHP image with Apache
|
||||
dockerfile.push(FROM::from(format!("php:{}-apache", php_major_minor)));
|
||||
|
||||
// Set environment variables for PHP configuration
|
||||
dockerfile.push(ENV::from("PHP_MEMORY_LIMIT=256M"));
|
||||
dockerfile.push(ENV::from("PHP_MAX_EXECUTION_TIME=30"));
|
||||
dockerfile.push(
|
||||
EnvBuilder::builder()
|
||||
.key("PHP_ERROR_REPORTING")
|
||||
.value("\"E_ERROR | E_WARNING | E_PARSE\"")
|
||||
.build()
|
||||
.unwrap(),
|
||||
);
|
||||
|
||||
// Install necessary PHP extensions and dependencies
|
||||
dockerfile.push(RUN::from(
|
||||
"apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
libfreetype6-dev \
|
||||
libjpeg62-turbo-dev \
|
||||
libpng-dev \
|
||||
libzip-dev \
|
||||
unzip \
|
||||
&& apt-get clean \
|
||||
&& rm -rf /var/lib/apt/lists/*",
|
||||
));
|
||||
|
||||
dockerfile.push(RUN::from(
|
||||
"docker-php-ext-configure gd --with-freetype --with-jpeg && \
|
||||
docker-php-ext-install -j$(nproc) \
|
||||
gd \
|
||||
mysqli \
|
||||
pdo_mysql \
|
||||
zip \
|
||||
opcache",
|
||||
));
|
||||
|
||||
dockerfile.push(RUN::from(r#"sed -i 's/VirtualHost \*:80/VirtualHost *:8080/' /etc/apache2/sites-available/000-default.conf && \
|
||||
sed -i 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf"#));
|
||||
|
||||
// Copy PHP configuration
|
||||
dockerfile.push(RUN::from("mkdir -p /usr/local/etc/php/conf.d/"));
|
||||
|
||||
// Create and copy a custom PHP configuration
|
||||
let php_config = r#"
|
||||
memory_limit = ${PHP_MEMORY_LIMIT}
|
||||
max_execution_time = ${PHP_MAX_EXECUTION_TIME}
|
||||
error_reporting = ${PHP_ERROR_REPORTING}
|
||||
display_errors = Off
|
||||
log_errors = On
|
||||
error_log = /dev/stderr
|
||||
date.timezone = UTC
|
||||
|
||||
; Opcache configuration for production
|
||||
opcache.enable=1
|
||||
opcache.memory_consumption=128
|
||||
opcache.interned_strings_buffer=8
|
||||
opcache.max_accelerated_files=4000
|
||||
opcache.revalidate_freq=2
|
||||
opcache.fast_shutdown=1
|
||||
"#;
|
||||
|
||||
// Save this configuration to a temporary file within the project root
|
||||
let config_path = Path::new(&score.config.project_root).join("docker-php.ini");
|
||||
fs::write(&config_path, php_config)?;
|
||||
|
||||
// Reference the file within the Docker context (where the build runs)
|
||||
dockerfile.push(COPY::from(
|
||||
"docker-php.ini /usr/local/etc/php/conf.d/docker-php.ini",
|
||||
));
|
||||
|
||||
// Security hardening
|
||||
dockerfile.push(RUN::from(
|
||||
"a2enmod headers && \
|
||||
a2enmod rewrite && \
|
||||
sed -i 's/ServerTokens OS/ServerTokens Prod/' /etc/apache2/conf-enabled/security.conf && \
|
||||
sed -i 's/ServerSignature On/ServerSignature Off/' /etc/apache2/conf-enabled/security.conf"
|
||||
));
|
||||
|
||||
// Set env vars
|
||||
dockerfile.push(RUN::from(
|
||||
"echo 'PassEnv MYSQL_PASSWORD' >> /etc/apache2/sites-available/000-default.conf \
|
||||
&& echo 'PassEnv MYSQL_USER' >> /etc/apache2/sites-available/000-default.conf \
|
||||
&& echo 'PassEnv MYSQL_HOST' >> /etc/apache2/sites-available/000-default.conf",
|
||||
));
|
||||
|
||||
// Create a dedicated user for running Apache
|
||||
dockerfile.push(RUN::from(
|
||||
"groupadd -g 1000 appuser && \
|
||||
useradd -u 1000 -g appuser -m -s /bin/bash appuser && \
|
||||
chown -R appuser:appuser /var/www/html",
|
||||
));
|
||||
|
||||
// Set the working directory
|
||||
dockerfile.push(WORKDIR::from("/var/www/html"));
|
||||
|
||||
// Copy application code from the project root to the container
|
||||
// Note: In Dockerfile, the COPY context is relative to the build context
|
||||
// We'll handle the actual context in the build_docker_image method
|
||||
dockerfile.push(COPY::from(". /var/www/html"));
|
||||
|
||||
// Fix permissions
|
||||
dockerfile.push(RUN::from("chown -R appuser:appuser /var/www/html"));
|
||||
|
||||
// Expose Apache port
|
||||
dockerfile.push(EXPOSE::from("8080/tcp"));
|
||||
|
||||
// Set the default command
|
||||
dockerfile.push(CMD::from("apache2-foreground"));
|
||||
|
||||
// Save the Dockerfile to disk in the project root
|
||||
let dockerfile_path = Path::new(&score.config.project_root).join("Dockerfile");
|
||||
fs::write(&dockerfile_path, dockerfile.to_string())?;
|
||||
|
||||
Ok(dockerfile_path)
|
||||
}
|
||||
|
||||
fn check_output(
|
||||
&self,
|
||||
output: &std::process::Output,
|
||||
msg: &str,
|
||||
) -> Result<(), Box<dyn std::error::Error>> {
|
||||
if !output.status.success() {
|
||||
return Err(format!("{msg}: {}", String::from_utf8_lossy(&output.stderr)).into());
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn push_docker_image(&self, image_name: &str) -> Result<String, Box<dyn std::error::Error>> {
|
||||
let full_tag = format!("{}/{}/{}", *REGISTRY_URL, *REGISTRY_PROJECT, &image_name);
|
||||
let output = std::process::Command::new("docker")
|
||||
.args(["tag", image_name, &full_tag])
|
||||
.output()?;
|
||||
self.check_output(&output, "Tagging docker image failed")?;
|
||||
|
||||
debug!(
|
||||
"docker tag output {} {}",
|
||||
String::from_utf8_lossy(&output.stdout),
|
||||
String::from_utf8_lossy(&output.stderr)
|
||||
);
|
||||
|
||||
let output = std::process::Command::new("docker")
|
||||
.args(["push", &full_tag])
|
||||
.output()?;
|
||||
self.check_output(&output, "Pushing docker image failed")?;
|
||||
debug!(
|
||||
"docker push output {} {}",
|
||||
String::from_utf8_lossy(&output.stdout),
|
||||
String::from_utf8_lossy(&output.stderr)
|
||||
);
|
||||
|
||||
Ok(full_tag)
|
||||
}
|
||||
|
||||
pub fn build_docker_image(&self) -> Result<String, Box<dyn std::error::Error>> {
|
||||
info!("Generating Dockerfile");
|
||||
let dockerfile = self.build_dockerfile(&self.score)?;
|
||||
|
||||
info!(
|
||||
"Building Docker image with file {} from root {}",
|
||||
dockerfile.to_string_lossy(),
|
||||
self.score.config.project_root.to_string_lossy()
|
||||
);
|
||||
let image_name = format!("{}-php-apache", self.score.name);
|
||||
let project_root = &self.score.config.project_root;
|
||||
|
||||
let output = std::process::Command::new("docker")
|
||||
.args([
|
||||
"build",
|
||||
"--file",
|
||||
dockerfile.to_str().unwrap(),
|
||||
"-t",
|
||||
&image_name,
|
||||
project_root.to_str().unwrap(),
|
||||
])
|
||||
.output()?;
|
||||
|
||||
if !output.status.success() {
|
||||
return Err(format!(
|
||||
"Failed to build Docker image: {}",
|
||||
String::from_utf8_lossy(&output.stderr)
|
||||
)
|
||||
.into());
|
||||
}
|
||||
|
||||
Ok(image_name)
|
||||
}
|
||||
|
||||
fn get_namespace(&self) -> Option<NonBlankString> {
|
||||
Some(NonBlankString::from_str(&self.score.config.namespace).unwrap())
|
||||
}
|
||||
}
|
||||
|
||||
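
For callers, the new config fields can be set individually while the rest falls back to the defaults shown above; a hedged sketch (values are illustrative):

    let config = LAMPConfig {
        namespace: "my-shop".to_string(),
        database_size: Some("10Gi".to_string()),
        ..Default::default()
    };
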
@@ -1,12 +1,16 @@
|
||||
pub mod cert_manager;
|
||||
pub mod dhcp;
|
||||
pub mod dns;
|
||||
pub mod dummy;
|
||||
pub mod helm;
|
||||
pub mod http;
|
||||
pub mod ipxe;
|
||||
pub mod k3d;
|
||||
pub mod k8s;
|
||||
pub mod lamp;
|
||||
pub mod load_balancer;
|
||||
pub mod monitoring;
|
||||
pub mod okd;
|
||||
pub mod opnsense;
|
||||
pub mod tenant;
|
||||
pub mod tftp;
|
||||
|
||||
harmony/src/modules/monitoring/config.rs (new file, 49 lines)
@@ -0,0 +1,49 @@
|
||||
use serde::Serialize;
|
||||
|
||||
use super::monitoring_alerting::AlertChannel;
|
||||
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
pub struct KubePrometheusConfig {
|
||||
pub namespace: String,
|
||||
pub default_rules: bool,
|
||||
pub windows_monitoring: bool,
|
||||
pub alert_manager: bool,
|
||||
pub node_exporter: bool,
|
||||
pub prometheus: bool,
|
||||
pub grafana: bool,
|
||||
pub kubernetes_service_monitors: bool,
|
||||
pub kubernetes_api_server: bool,
|
||||
pub kubelet: bool,
|
||||
pub kube_controller_manager: bool,
|
||||
pub core_dns: bool,
|
||||
pub kube_etcd: bool,
|
||||
pub kube_scheduler: bool,
|
||||
pub kube_proxy: bool,
|
||||
pub kube_state_metrics: bool,
|
||||
pub prometheus_operator: bool,
|
||||
pub alert_channel: Vec<AlertChannel>,
|
||||
}
|
||||
impl KubePrometheusConfig {
|
||||
pub fn new() -> Self {
|
||||
Self {
|
||||
namespace: "monitoring".into(),
|
||||
default_rules: true,
|
||||
windows_monitoring: false,
|
||||
alert_manager: true,
|
||||
alert_channel: Vec::new(),
|
||||
grafana: true,
|
||||
node_exporter: false,
|
||||
prometheus: true,
|
||||
kubernetes_service_monitors: true,
|
||||
kubernetes_api_server: false,
|
||||
kubelet: false,
|
||||
kube_controller_manager: false,
|
||||
kube_etcd: false,
|
||||
kube_proxy: false,
|
||||
kube_state_metrics: true,
|
||||
prometheus_operator: true,
|
||||
core_dns: false,
|
||||
kube_scheduler: false,
|
||||
}
|
||||
}
|
||||
}
|
||||
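
`KubePrometheusConfig::new()` enables a conservative default set; individual collectors can be toggled before the chart is rendered. A short sketch (values illustrative):

    let mut config = KubePrometheusConfig::new();
    config.namespace = "observability".into();
    config.node_exporter = true;
    config.kube_etcd = true;
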
harmony/src/modules/monitoring/discord_alert_manager.rs (new file, 35 lines)
@@ -0,0 +1,35 @@
|
||||
use std::str::FromStr;
|
||||
|
||||
use non_blank_string_rs::NonBlankString;
|
||||
use url::Url;
|
||||
|
||||
use crate::modules::helm::chart::HelmChartScore;
|
||||
|
||||
pub fn discord_alert_manager_score(
|
||||
webhook_url: Url,
|
||||
namespace: String,
|
||||
name: String,
|
||||
) -> HelmChartScore {
|
||||
let values = format!(
|
||||
r#"
|
||||
environment:
|
||||
- name: "DISCORD_WEBHOOK"
|
||||
value: "{webhook_url}"
|
||||
"#,
|
||||
);
|
||||
|
||||
HelmChartScore {
|
||||
namespace: Some(NonBlankString::from_str(&namespace).unwrap()),
|
||||
release_name: NonBlankString::from_str(&name).unwrap(),
|
||||
chart_name: NonBlankString::from_str(
|
||||
"oci://hub.nationtech.io/library/alertmanager-discord",
|
||||
)
|
||||
.unwrap(),
|
||||
chart_version: None,
|
||||
values_overrides: None,
|
||||
values_yaml: Some(values.to_string()),
|
||||
create_namespace: true,
|
||||
install_only: true,
|
||||
repository: None,
|
||||
}
|
||||
}
|
||||
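
A usage sketch for this crate-internal helper (the webhook URL is a placeholder):

    use url::Url;

    let webhook = Url::parse("https://discord.com/api/webhooks/123/abc").unwrap();
    let score = discord_alert_manager_score(webhook, "monitoring".to_string(), "alerts".to_string());
    // installs the alertmanager-discord chart as release "alerts" in the "monitoring" namespace
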
harmony/src/modules/monitoring/discord_webhook_sender.rs (new file, 55 lines)
@@ -0,0 +1,55 @@
|
||||
use async_trait::async_trait;
|
||||
use serde_json::Value;
|
||||
use url::Url;
|
||||
|
||||
use crate::{
|
||||
interpret::{InterpretError, Outcome},
|
||||
topology::K8sAnywhereTopology,
|
||||
};
|
||||
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct DiscordWebhookConfig {
|
||||
pub webhook_url: Url,
|
||||
pub name: String,
|
||||
pub send_resolved_notifications: bool,
|
||||
}
|
||||
|
||||
pub trait DiscordWebhookReceiver {
|
||||
fn deploy_discord_webhook_receiver(
|
||||
&self,
|
||||
_notification_adapter_id: &str,
|
||||
) -> Result<Outcome, InterpretError>;
|
||||
|
||||
fn delete_discord_webhook_receiver(
|
||||
&self,
|
||||
_notification_adapter_id: &str,
|
||||
) -> Result<Outcome, InterpretError>;
|
||||
}
|
||||
|
||||
// Trait used to generate Alertmanager config values, e.g. impl<T: Topology + AlertManagerConfig> Monitor for KubePrometheus
|
||||
pub trait AlertManagerConfig<T> {
|
||||
fn get_alert_manager_config(&self) -> Result<Value, InterpretError>;
|
||||
}
|
||||
|
||||
#[async_trait]
|
||||
impl<T: DiscordWebhookReceiver> AlertManagerConfig<T> for DiscordWebhookConfig {
|
||||
fn get_alert_manager_config(&self) -> Result<Value, InterpretError> {
|
||||
todo!()
|
||||
}
|
||||
}
|
||||
|
||||
#[async_trait]
|
||||
impl DiscordWebhookReceiver for K8sAnywhereTopology {
|
||||
fn deploy_discord_webhook_receiver(
|
||||
&self,
|
||||
_notification_adapter_id: &str,
|
||||
) -> Result<Outcome, InterpretError> {
|
||||
todo!()
|
||||
}
|
||||
fn delete_discord_webhook_receiver(
|
||||
&self,
|
||||
_notification_adapter_id: &str,
|
||||
) -> Result<Outcome, InterpretError> {
|
||||
todo!()
|
||||
}
|
||||
}
|
||||
harmony/src/modules/monitoring/kube_prometheus.rs (new file, 262 lines)
@@ -0,0 +1,262 @@
|
||||
use super::{config::KubePrometheusConfig, monitoring_alerting::AlertChannel};
|
||||
use log::info;
|
||||
use non_blank_string_rs::NonBlankString;
|
||||
use std::{collections::HashMap, str::FromStr};
|
||||
use url::Url;
|
||||
|
||||
use crate::modules::helm::chart::HelmChartScore;
|
||||
|
||||
pub fn kube_prometheus_helm_chart_score(config: &KubePrometheusConfig) -> HelmChartScore {
|
||||
//TODO this should be made into a rule with default formatting that can easily be passed as a vec
|
||||
//to the overrides or something; leaving the user to deal with formatting here seems bad
|
||||
let default_rules = config.default_rules.to_string();
|
||||
let windows_monitoring = config.windows_monitoring.to_string();
|
||||
let alert_manager = config.alert_manager.to_string();
|
||||
let grafana = config.grafana.to_string();
|
||||
let kubernetes_service_monitors = config.kubernetes_service_monitors.to_string();
|
||||
let kubernetes_api_server = config.kubernetes_api_server.to_string();
|
||||
let kubelet = config.kubelet.to_string();
|
||||
let kube_controller_manager = config.kube_controller_manager.to_string();
|
||||
let core_dns = config.core_dns.to_string();
|
||||
let kube_etcd = config.kube_etcd.to_string();
|
||||
let kube_scheduler = config.kube_scheduler.to_string();
|
||||
let kube_proxy = config.kube_proxy.to_string();
|
||||
let kube_state_metrics = config.kube_state_metrics.to_string();
|
||||
let node_exporter = config.node_exporter.to_string();
|
||||
let prometheus_operator = config.prometheus_operator.to_string();
|
||||
let prometheus = config.prometheus.to_string();
|
||||
let mut values = format!(
|
||||
r#"
|
||||
additionalPrometheusRulesMap:
|
||||
pods-status-alerts:
|
||||
groups:
|
||||
- name: pods
|
||||
rules:
|
||||
- alert: "[CRIT] POD not healthy"
|
||||
expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{{phase=~"Pending|Unknown|Failed"}})[15m:1m]) > 0
|
||||
for: 0m
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
title: "[CRIT] POD not healthy : {{{{ $labels.pod }}}}"
|
||||
description: |
|
||||
A POD is in a non-ready state!
|
||||
- **Pod**: {{{{ $labels.pod }}}}
|
||||
- **Namespace**: {{{{ $labels.namespace }}}}
|
||||
- alert: "[CRIT] POD crash looping"
|
||||
expr: increase(kube_pod_container_status_restarts_total[5m]) > 3
|
||||
for: 0m
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
title: "[CRIT] POD crash looping : {{{{ $labels.pod }}}}"
|
||||
description: |
|
||||
A POD is drowning in a crash loop!
|
||||
- **Pod**: {{{{ $labels.pod }}}}
|
||||
- **Namespace**: {{{{ $labels.namespace }}}}
|
||||
- **Instance**: {{{{ $labels.instance }}}}
|
||||
pvc-alerts:
|
||||
groups:
|
||||
- name: pvc-alerts
|
||||
rules:
|
||||
- alert: 'PVC Fill Over 95 Percent In 2 Days'
|
||||
expr: |
|
||||
(
|
||||
kubelet_volume_stats_used_bytes
|
||||
/
|
||||
kubelet_volume_stats_capacity_bytes
|
||||
) > 0.95
|
||||
AND
|
||||
predict_linear(kubelet_volume_stats_used_bytes[2d], 2 * 24 * 60 * 60)
|
||||
/
|
||||
kubelet_volume_stats_capacity_bytes
|
||||
> 0.95
|
||||
for: 1m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
description: The PVC {{{{ $labels.persistentvolumeclaim }}}} in namespace {{{{ $labels.namespace }}}} is predicted to fill over 95% in less than 2 days.
|
||||
title: PVC {{{{ $labels.persistentvolumeclaim }}}} in namespace {{{{ $labels.namespace }}}} will fill over 95% in less than 2 days
|
||||
defaultRules:
|
||||
create: {default_rules}
|
||||
rules:
|
||||
alertmanager: true
|
||||
etcd: true
|
||||
configReloaders: true
|
||||
general: true
|
||||
k8sContainerCpuUsageSecondsTotal: true
|
||||
k8sContainerMemoryCache: true
|
||||
k8sContainerMemoryRss: true
|
||||
k8sContainerMemorySwap: true
|
||||
k8sContainerResource: true
|
||||
k8sContainerMemoryWorkingSetBytes: true
|
||||
k8sPodOwner: true
|
||||
kubeApiserverAvailability: true
|
||||
kubeApiserverBurnrate: true
|
||||
kubeApiserverHistogram: true
|
||||
kubeApiserverSlos: true
|
||||
kubeControllerManager: true
|
||||
kubelet: true
|
||||
kubeProxy: true
|
||||
kubePrometheusGeneral: true
|
||||
kubePrometheusNodeRecording: true
|
||||
kubernetesApps: true
|
||||
kubernetesResources: true
|
||||
kubernetesStorage: true
|
||||
kubernetesSystem: true
|
||||
kubeSchedulerAlerting: true
|
||||
kubeSchedulerRecording: true
|
||||
kubeStateMetrics: true
|
||||
network: true
|
||||
node: true
|
||||
nodeExporterAlerting: true
|
||||
nodeExporterRecording: true
|
||||
prometheus: true
|
||||
prometheusOperator: true
|
||||
windows: true
|
||||
windowsMonitoring:
|
||||
enabled: {windows_monitoring}
|
||||
grafana:
|
||||
enabled: {grafana}
|
||||
kubernetesServiceMonitors:
|
||||
enabled: {kubernetes_service_monitors}
|
||||
kubeApiServer:
|
||||
enabled: {kubernetes_api_server}
|
||||
kubelet:
|
||||
enabled: {kubelet}
|
||||
kubeControllerManager:
|
||||
enabled: {kube_controller_manager}
|
||||
coreDns:
|
||||
enabled: {core_dns}
|
||||
kubeEtcd:
|
||||
enabled: {kube_etcd}
|
||||
kubeScheduler:
|
||||
enabled: {kube_scheduler}
|
||||
kubeProxy:
|
||||
enabled: {kube_proxy}
|
||||
kubeStateMetrics:
|
||||
enabled: {kube_state_metrics}
|
||||
nodeExporter:
|
||||
enabled: {node_exporter}
|
||||
prometheusOperator:
|
||||
enabled: {prometheus_operator}
|
||||
prometheus:
|
||||
enabled: {prometheus}
|
||||
"#,
|
||||
);
|
||||
|
||||
let alertmanager_config = alert_manager_yaml_builder(&config);
|
||||
values.push_str(&alertmanager_config);
|
||||
|
||||
fn alert_manager_yaml_builder(config: &KubePrometheusConfig) -> String {
|
||||
let mut receivers = String::new();
|
||||
let mut routes = String::new();
|
||||
let mut global_configs = String::new();
|
||||
let alert_manager = config.alert_manager;
|
||||
for alert_channel in &config.alert_channel {
|
||||
match alert_channel {
|
||||
AlertChannel::Discord { name, .. } => {
|
||||
let (receiver, route) = discord_alert_builder(name);
|
||||
info!("discord receiver: {} \nroute: {}", receiver, route);
|
||||
receivers.push_str(&receiver);
|
||||
routes.push_str(&route);
|
||||
}
|
||||
AlertChannel::Slack {
|
||||
slack_channel,
|
||||
webhook_url,
|
||||
} => {
|
||||
let (receiver, route) = slack_alert_builder(slack_channel);
|
||||
info!("slack receiver: {} \nroute: {}", receiver, route);
|
||||
receivers.push_str(&receiver);
|
||||
|
||||
routes.push_str(&route);
|
||||
let global_config = format!(
|
||||
r#"
|
||||
global:
|
||||
slack_api_url: {webhook_url}"#
|
||||
);
|
||||
|
||||
global_configs.push_str(&global_config);
|
||||
}
|
||||
AlertChannel::Smpt { .. } => todo!(),
|
||||
}
|
||||
}
|
||||
info!("after alert receiver: {}", receivers);
|
||||
info!("after alert routes: {}", routes);
|
||||
|
||||
let alertmanager_config = format!(
|
||||
r#"
|
||||
alertmanager:
|
||||
enabled: {alert_manager}
|
||||
config: {global_configs}
|
||||
route:
|
||||
group_by: ['job']
|
||||
group_wait: 30s
|
||||
group_interval: 5m
|
||||
repeat_interval: 12h
|
||||
routes:
|
||||
{routes}
|
||||
receivers:
|
||||
- name: 'null'
|
||||
{receivers}"#
|
||||
);
|
||||
|
||||
info!("alert manager config: {}", alertmanager_config);
|
||||
alertmanager_config
|
||||
}
|
||||
|
||||
HelmChartScore {
|
||||
namespace: Some(NonBlankString::from_str(&config.namespace).unwrap()),
|
||||
release_name: NonBlankString::from_str("kube-prometheus").unwrap(),
|
||||
chart_name: NonBlankString::from_str(
|
||||
"oci://ghcr.io/prometheus-community/charts/kube-prometheus-stack",
|
||||
)
|
||||
.unwrap(),
|
||||
chart_version: None,
|
||||
values_overrides: None,
|
||||
values_yaml: Some(values.to_string()),
|
||||
create_namespace: true,
|
||||
install_only: true,
|
||||
repository: None,
|
||||
}
|
||||
}
|
||||
|
||||
fn discord_alert_builder(release_name: &String) -> (String, String) {
|
||||
let discord_receiver_name = format!("Discord-{}", release_name);
|
||||
let receiver = format!(
|
||||
r#"
|
||||
- name: '{discord_receiver_name}'
|
||||
webhook_configs:
|
||||
- url: 'http://{release_name}-alertmanager-discord:9094'
|
||||
send_resolved: true"#,
|
||||
);
|
||||
let route = format!(
|
||||
r#"
|
||||
- receiver: '{discord_receiver_name}'
|
||||
matchers:
|
||||
- alertname!=Watchdog
|
||||
continue: true"#,
|
||||
);
|
||||
(receiver, route)
|
||||
}
|
||||
|
||||
fn slack_alert_builder(slack_channel: &String) -> (String, String) {
|
||||
let slack_receiver_name = format!("Slack-{}", slack_channel);
|
||||
let receiver = format!(
|
||||
r#"
|
||||
- name: '{slack_receiver_name}'
|
||||
slack_configs:
|
||||
- channel: '{slack_channel}'
|
||||
send_resolved: true
|
||||
title: '{{{{ .CommonAnnotations.title }}}}'
|
||||
text: '{{{{ .CommonAnnotations.description }}}}'"#,
|
||||
);
|
||||
let route = format!(
|
||||
r#"
|
||||
- receiver: '{slack_receiver_name}'
|
||||
matchers:
|
||||
- alertname!=Watchdog
|
||||
continue: true"#,
|
||||
);
|
||||
(receiver, route)
|
||||
}
|
||||
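
Putting the two pieces together, a hedged sketch of rendering the chart score with a Discord channel attached (webhook URL is a placeholder):

    use url::Url;

    let mut config = KubePrometheusConfig::new();
    config.alert_channel.push(AlertChannel::Discord {
        name: "alerts".to_string(),
        webhook_url: Url::parse("https://discord.com/api/webhooks/123/abc").unwrap(),
    });
    let chart = kube_prometheus_helm_chart_score(&config);
    // chart.values_yaml now carries the "Discord-alerts" receiver and its route
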
harmony/src/modules/monitoring/mod.rs (new file, 5 lines)
@@ -0,0 +1,5 @@
|
||||
mod config;
|
||||
mod discord_alert_manager;
|
||||
pub mod discord_webhook_sender;
|
||||
mod kube_prometheus;
|
||||
pub mod monitoring_alerting;
|
||||
harmony/src/modules/monitoring/monitoring_alerting.rs (new file, 161 lines)
@@ -0,0 +1,161 @@
|
||||
use async_trait::async_trait;
|
||||
use email_address::EmailAddress;
|
||||
|
||||
use log::info;
|
||||
use serde::Serialize;
|
||||
use url::Url;
|
||||
|
||||
use crate::{
|
||||
data::{Id, Version},
|
||||
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
|
||||
inventory::Inventory,
|
||||
score::Score,
|
||||
topology::{HelmCommand, Topology},
|
||||
};
|
||||
|
||||
use super::{
|
||||
config::KubePrometheusConfig, discord_alert_manager::discord_alert_manager_score,
|
||||
kube_prometheus::kube_prometheus_helm_chart_score,
|
||||
};
|
||||
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
pub enum AlertChannel {
|
||||
Discord {
|
||||
name: String,
|
||||
webhook_url: Url,
|
||||
},
|
||||
Slack {
|
||||
slack_channel: String,
|
||||
webhook_url: Url,
|
||||
},
|
||||
//TODO test and implement in helm chart
|
||||
//currently does not work
|
||||
Smpt {
|
||||
email_address: EmailAddress,
|
||||
service_name: String,
|
||||
},
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
pub struct MonitoringAlertingStackScore {
|
||||
pub alert_channel: Vec<AlertChannel>,
|
||||
pub namespace: Option<String>,
|
||||
}
|
||||
|
||||
impl MonitoringAlertingStackScore {
|
||||
pub fn new() -> Self {
|
||||
Self {
|
||||
alert_channel: Vec::new(),
|
||||
namespace: None,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<T: Topology + HelmCommand> Score<T> for MonitoringAlertingStackScore {
|
||||
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
|
||||
Box::new(MonitoringAlertingStackInterpret {
|
||||
score: self.clone(),
|
||||
})
|
||||
}
|
||||
fn name(&self) -> String {
|
||||
format!("MonitoringAlertingStackScore")
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize)]
|
||||
struct MonitoringAlertingStackInterpret {
|
||||
score: MonitoringAlertingStackScore,
|
||||
}
|
||||
|
||||
impl MonitoringAlertingStackInterpret {
|
||||
async fn build_kube_prometheus_helm_chart_config(&self) -> KubePrometheusConfig {
|
||||
let mut config = KubePrometheusConfig::new();
|
||||
if let Some(ns) = &self.score.namespace {
|
||||
config.namespace = ns.clone();
|
||||
}
|
||||
config.alert_channel = self.score.alert_channel.clone();
|
||||
config
|
||||
}
|
||||
|
||||
async fn deploy_kube_prometheus_helm_chart_score<T: Topology + HelmCommand>(
|
||||
&self,
|
||||
inventory: &Inventory,
|
||||
topology: &T,
|
||||
config: &KubePrometheusConfig,
|
||||
) -> Result<Outcome, InterpretError> {
|
||||
let helm_chart = kube_prometheus_helm_chart_score(config);
|
||||
helm_chart
|
||||
.create_interpret()
|
||||
.execute(inventory, topology)
|
||||
.await
|
||||
}
|
||||
|
||||
async fn deploy_alert_channel_service<T: Topology + HelmCommand>(
|
||||
&self,
|
||||
inventory: &Inventory,
|
||||
topology: &T,
|
||||
config: &KubePrometheusConfig,
|
||||
) -> Result<Outcome, InterpretError> {
|
||||
//let mut outcomes = vec![];
|
||||
|
||||
//for channel in &self.score.alert_channel {
|
||||
// let outcome = match channel {
|
||||
// AlertChannel::Discord { .. } => {
|
||||
// discord_alert_manager_score(config)
|
||||
// .create_interpret()
|
||||
// .execute(inventory, topology)
|
||||
// .await
|
||||
// }
|
||||
// AlertChannel::Slack { .. } => Ok(Outcome::success(
|
||||
// "No extra configs for slack alerting".to_string(),
|
||||
// )),
|
||||
// AlertChannel::Smpt { .. } => {
|
||||
// todo!()
|
||||
// }
|
||||
// };
|
||||
// outcomes.push(outcome);
|
||||
//}
|
||||
//for result in outcomes {
|
||||
// result?;
|
||||
//}
|
||||
|
||||
Ok(Outcome::success("All alert channels deployed".to_string()))
|
||||
}
|
||||
}
|
||||
|
||||
#[async_trait]
|
||||
impl<T: Topology + HelmCommand> Interpret<T> for MonitoringAlertingStackInterpret {
|
||||
async fn execute(
|
||||
&self,
|
||||
inventory: &Inventory,
|
||||
topology: &T,
|
||||
) -> Result<Outcome, InterpretError> {
|
||||
let config = self.build_kube_prometheus_helm_chart_config().await;
|
||||
info!("Built kube prometheus config");
|
||||
info!("Installing kube prometheus chart");
|
||||
self.deploy_kube_prometheus_helm_chart_score(inventory, topology, &config)
|
||||
.await?;
|
||||
info!("Installing alert channel service");
|
||||
self.deploy_alert_channel_service(inventory, topology, &config)
|
||||
.await?;
|
||||
Ok(Outcome::success(format!(
|
||||
"succesfully deployed monitoring and alerting stack"
|
||||
)))
|
||||
}
|
||||
|
||||
fn get_name(&self) -> InterpretName {
|
||||
todo!()
|
||||
}
|
||||
|
||||
fn get_version(&self) -> Version {
|
||||
todo!()
|
||||
}
|
||||
|
||||
fn get_status(&self) -> InterpretStatus {
|
||||
todo!()
|
||||
}
|
||||
|
||||
fn get_children(&self) -> Vec<Id> {
|
||||
todo!()
|
||||
}
|
||||
}
|
||||
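
A hedged usage sketch for the stack score (channel and namespace are illustrative):

    use url::Url;

    let mut score = MonitoringAlertingStackScore::new();
    score.namespace = Some("observability".to_string());
    score.alert_channel.push(AlertChannel::Slack {
        slack_channel: "#alerts".to_string(),
        webhook_url: Url::parse("https://hooks.slack.com/services/T000/B000/XXXX").unwrap(),
    });
    // score.create_interpret().execute(&inventory, &topology).await? then installs
    // kube-prometheus and wires the channel into the Alertmanager config.
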
@@ -36,13 +36,20 @@ impl OKDBootstrapDhcpScore {
|
||||
.expect("Should have at least one worker to be used as bootstrap node")
|
||||
.clone(),
|
||||
});
|
||||
// TODO refactor this so it is not copy pasted from dhcp.rs
|
||||
Self {
|
||||
dhcp_score: DhcpScore::new(
|
||||
host_binding,
|
||||
// TODO : we should add a tftp server to the topology instead of relying on the
|
||||
// router address, this is leaking implementation details
|
||||
Some(topology.router.get_gateway()),
|
||||
Some("bootx64.efi".to_string()),
|
||||
None, // To allow UEFI boot we cannot provide a legacy file
|
||||
Some("undionly.kpxe".to_string()),
|
||||
Some("ipxe.efi".to_string()),
|
||||
Some(format!(
|
||||
"http://{}:8080/boot.ipxe",
|
||||
topology.router.get_gateway()
|
||||
)),
|
||||
),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -15,7 +15,7 @@ pub struct OKDDhcpScore {
|
||||
|
||||
impl OKDDhcpScore {
|
||||
pub fn new(topology: &HAClusterTopology, inventory: &Inventory) -> Self {
|
||||
let host_binding = topology
|
||||
let mut host_binding: Vec<HostBinding> = topology
|
||||
.control_plane
|
||||
.iter()
|
||||
.enumerate()
|
||||
@@ -28,13 +28,35 @@ impl OKDDhcpScore {
|
||||
.clone(),
|
||||
})
|
||||
.collect();
|
||||
|
||||
topology
|
||||
.workers
|
||||
.iter()
|
||||
.enumerate()
|
||||
.for_each(|(index, topology_entry)| {
|
||||
host_binding.push(HostBinding {
|
||||
logical_host: topology_entry.clone(),
|
||||
physical_host: inventory
|
||||
.worker_host
|
||||
.get(index)
|
||||
.expect("There should be enough worker hosts to fill topology")
|
||||
.clone(),
|
||||
})
|
||||
});
|
||||
|
||||
Self {
|
||||
// TODO : we should add a tftp server to the topology instead of relying on the
|
||||
// router address, this is leaking implementation details
|
||||
dhcp_score: DhcpScore {
|
||||
host_binding,
|
||||
next_server: Some(topology.router.get_gateway()),
|
||||
boot_filename: Some("bootx64.efi".to_string()),
|
||||
boot_filename: None,
|
||||
filename: Some("undionly.kpxe".to_string()),
|
||||
filename64: Some("ipxe.efi".to_string()),
|
||||
filenameipxe: Some(format!(
|
||||
"http://{}:8080/boot.ipxe",
|
||||
topology.router.get_gateway()
|
||||
)),
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
harmony/src/modules/tenant/mod.rs (new file, 67 lines)
@@ -0,0 +1,67 @@
|
||||
use async_trait::async_trait;
|
||||
use serde::Serialize;
|
||||
|
||||
use crate::{
|
||||
data::{Id, Version},
|
||||
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
|
||||
inventory::Inventory,
|
||||
score::Score,
|
||||
topology::{
|
||||
Topology,
|
||||
tenant::{TenantConfig, TenantManager},
|
||||
},
|
||||
};
|
||||
|
||||
#[derive(Debug, Serialize, Clone)]
|
||||
pub struct TenantScore {
|
||||
config: TenantConfig,
|
||||
}
|
||||
|
||||
impl<T: Topology + TenantManager> Score<T> for TenantScore {
|
||||
fn create_interpret(&self) -> Box<dyn crate::interpret::Interpret<T>> {
|
||||
Box::new(TenantInterpret {
|
||||
tenant_config: self.config.clone(),
|
||||
})
|
||||
}
|
||||
|
||||
fn name(&self) -> String {
|
||||
format!("{} TenantScore", self.config.name)
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug)]
|
||||
pub struct TenantInterpret {
|
||||
tenant_config: TenantConfig,
|
||||
}
|
||||
|
||||
#[async_trait]
|
||||
impl<T: Topology + TenantManager> Interpret<T> for TenantInterpret {
|
||||
async fn execute(
|
||||
&self,
|
||||
_inventory: &Inventory,
|
||||
topology: &T,
|
||||
) -> Result<Outcome, InterpretError> {
|
||||
topology.provision_tenant(&self.tenant_config).await?;
|
||||
|
||||
Ok(Outcome::success(format!(
|
||||
"Successfully provisioned tenant {} with id {}",
|
||||
self.tenant_config.name, self.tenant_config.id
|
||||
)))
|
||||
}
|
||||
|
||||
fn get_name(&self) -> InterpretName {
|
||||
InterpretName::TenantInterpret
|
||||
}
|
||||
|
||||
fn get_version(&self) -> Version {
|
||||
todo!()
|
||||
}
|
||||
|
||||
fn get_status(&self) -> InterpretStatus {
|
||||
todo!()
|
||||
}
|
||||
|
||||
fn get_children(&self) -> Vec<Id> {
|
||||
todo!()
|
||||
}
|
||||
}
|
||||
@@ -12,6 +12,7 @@ harmony = { path = "../harmony" }
|
||||
harmony_tui = { path = "../harmony_tui", optional = true }
|
||||
inquire.workspace = true
|
||||
tokio.workspace = true
|
||||
env_logger.workspace = true
|
||||
|
||||
|
||||
[features]
|
||||
|
||||
@@ -99,6 +99,8 @@ pub async fn init<T: Topology + Send + Sync + 'static>(
|
||||
return Err("Not compiled with interactive support".into());
|
||||
}
|
||||
|
||||
let _ = env_logger::builder().try_init();
|
||||
|
||||
let scores_vec = maestro_scores_filter(&maestro, args.all, args.filter, args.number);
|
||||
|
||||
if scores_vec.len() == 0 {
|
||||
@@ -147,7 +149,6 @@ mod test {
|
||||
modules::dummy::{ErrorScore, PanicScore, SuccessScore},
|
||||
topology::HAClusterTopology,
|
||||
};
|
||||
use harmony::{score::Score, topology::Topology};
|
||||
|
||||
fn init_test_maestro() -> Maestro<HAClusterTopology> {
|
||||
let inventory = Inventory::autoload();
|
||||
|
||||
@@ -116,3 +116,19 @@ pub fn yaml(input: TokenStream) -> TokenStream {
|
||||
}
|
||||
.into()
|
||||
}
|
||||
|
||||
/// Verify that a string is a valid(ish) ingress path
|
||||
/// Panics if path does not start with `/`
|
||||
#[proc_macro]
|
||||
pub fn ingress_path(input: TokenStream) -> TokenStream {
|
||||
let input = parse_macro_input!(input as LitStr);
|
||||
let path_str = input.value();
|
||||
|
||||
match path_str.starts_with("/") {
|
||||
true => {
|
||||
let expanded = quote! {(#path_str.to_string()) };
|
||||
return TokenStream::from(expanded);
|
||||
}
|
||||
false => panic!("Invalid ingress path"),
|
||||
}
|
||||
}
|
||||
|
||||
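
The macro validates the path at expansion time; a short illustration:

    let root = ingress_path!("/");       // expands to "/".to_string()
    let api = ingress_path!("/api/v1");  // fine, starts with '/'
    // ingress_path!("api");             // fails to compile: "Invalid ingress path"
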
@@ -15,6 +15,7 @@ reqwest = { version = "0.12", features = ["stream"] }
|
||||
url.workspace = true
|
||||
sha2 = "0.10.8"
|
||||
futures-util = "0.3.31"
|
||||
kube.workspace = true
|
||||
|
||||
[dev-dependencies]
|
||||
env_logger = { workspace = true }
|
||||
|
||||
k3d/src/lib.rs (266 lines changed)
@@ -1,18 +1,23 @@
|
||||
mod downloadable_asset;
|
||||
use downloadable_asset::*;
|
||||
|
||||
use log::{debug, info};
|
||||
use kube::Client;
|
||||
use log::{debug, info, warn};
|
||||
use std::path::PathBuf;
|
||||
|
||||
const K3D_BIN_FILE_NAME: &str = "k3d";
|
||||
|
||||
pub struct K3d {
|
||||
base_dir: PathBuf,
|
||||
cluster_name: Option<String>,
|
||||
}
|
||||
|
||||
impl K3d {
|
||||
pub fn new(base_dir: PathBuf) -> Self {
|
||||
Self { base_dir }
|
||||
pub fn new(base_dir: PathBuf, cluster_name: Option<String>) -> Self {
|
||||
Self {
|
||||
base_dir,
|
||||
cluster_name,
|
||||
}
|
||||
}
|
||||
|
||||
async fn get_binary_for_current_platform(
|
||||
@@ -24,7 +29,6 @@ impl K3d {
|
||||
|
||||
debug!("Detecting platform: OS={}, ARCH={}", os, arch);
|
||||
|
||||
// 2. Construct the binary name pattern based on platform
|
||||
let binary_pattern = match (os, arch) {
|
||||
("linux", "x86") => "k3d-linux-386",
|
||||
("linux", "x86_64") => "k3d-linux-amd64",
|
||||
@@ -38,7 +42,6 @@ impl K3d {
|
||||
|
||||
debug!("Looking for binary matching pattern: {}", binary_pattern);
|
||||
|
||||
// 3. Find the matching binary in release assets
|
||||
let binary_asset = latest_release
|
||||
.assets
|
||||
.iter()
|
||||
@@ -47,14 +50,12 @@ impl K3d {
|
||||
|
||||
let binary_url = binary_asset.browser_download_url.clone();
|
||||
|
||||
// 4. Find and parse the checksums file
|
||||
let checksums_asset = latest_release
|
||||
.assets
|
||||
.iter()
|
||||
.find(|asset| asset.name == "checksums.txt")
|
||||
.expect("Checksums file not found in release assets");
|
||||
|
||||
// 5. Download and parse checksums file
|
||||
let checksums_url = checksums_asset.browser_download_url.clone();
|
||||
|
||||
let body = reqwest::get(checksums_url)
|
||||
@@ -65,7 +66,6 @@ impl K3d {
|
||||
.unwrap();
|
||||
println!("body: {body}");
|
||||
|
||||
// 6. Find the checksum for our binary
|
||||
let checksum = body
|
||||
.lines()
|
||||
.find_map(|line| {
|
||||
@@ -109,6 +109,252 @@ impl K3d {
|
||||
|
||||
Ok(latest_release)
|
||||
}
|
||||
|
||||
/// Checks if k3d binary exists and is executable
|
||||
///
|
||||
/// Verifies that:
|
||||
/// 1. The k3d binary exists in the base directory
|
||||
/// 2. It has proper executable permissions (on Unix systems)
|
||||
/// 3. It responds correctly to a simple command (`k3d --version`)
|
||||
pub fn is_installed(&self) -> bool {
|
||||
let binary_path = self.get_k3d_binary_path();
|
||||
|
||||
if !binary_path.exists() {
|
||||
debug!("K3d binary not found at {:?}", binary_path);
|
||||
return false;
|
||||
}
|
||||
|
||||
if !self.ensure_binary_executable(&binary_path) {
|
||||
return false;
|
||||
}
|
||||
|
||||
self.can_execute_binary_check(&binary_path)
|
||||
}
|
||||
|
||||
/// Verifies if the specified cluster is already created
|
||||
///
|
||||
/// Executes `k3d cluster list <cluster_name>` and checks for a successful response,
|
||||
/// indicating that the cluster exists and is registered with k3d.
|
||||
pub fn is_cluster_initialized(&self) -> bool {
|
||||
let cluster_name = match self.get_cluster_name() {
|
||||
Ok(name) => name,
|
||||
Err(_) => {
|
||||
debug!("Could not get cluster name, can't verify if cluster is initialized");
|
||||
return false;
|
||||
}
|
||||
};
|
||||
|
||||
let binary_path = self.base_dir.join(K3D_BIN_FILE_NAME);
|
||||
if !binary_path.exists() {
|
||||
return false;
|
||||
}
|
||||
|
||||
self.verify_cluster_exists(&binary_path, cluster_name)
|
||||
}
|
||||
|
||||
fn get_cluster_name(&self) -> Result<&String, String> {
|
||||
match &self.cluster_name {
|
||||
Some(name) => Ok(name),
|
||||
None => Err("No cluster name available".to_string()),
|
||||
}
|
||||
}
|
||||
|
||||
/// Creates a new k3d cluster with the specified name
|
||||
///
|
||||
/// This method:
|
||||
/// 1. Creates a new k3d cluster using `k3d cluster create <cluster_name>`
|
||||
/// 2. Waits for the cluster to initialize
|
||||
/// 3. Returns a configured Kubernetes client connected to the cluster
|
||||
///
|
||||
/// # Returns
|
||||
/// - `Ok(Client)` - Successfully created cluster and connected client
|
||||
/// - `Err(String)` - Error message detailing what went wrong
|
||||
pub async fn initialize_cluster(&self) -> Result<Client, String> {
|
||||
let cluster_name = match self.get_cluster_name() {
|
||||
Ok(name) => name,
|
||||
Err(_) => return Err("Could not get cluster_name, cannot initialize".to_string()),
|
||||
};
|
||||
|
||||
info!("Initializing k3d cluster '{}'", cluster_name);
|
||||
|
||||
self.create_cluster(cluster_name)?;
|
||||
self.create_kubernetes_client().await
|
||||
}
|
||||
|
||||
fn get_k3d_binary_path(&self) -> PathBuf {
|
||||
self.base_dir.join(K3D_BIN_FILE_NAME)
|
||||
}
|
||||
|
||||
fn get_k3d_binary(&self) -> Result<PathBuf, String> {
|
||||
let path = self.get_k3d_binary_path();
|
||||
if !path.exists() {
|
||||
return Err(format!("K3d binary not found at {:?}", path));
|
||||
}
|
||||
Ok(path)
|
||||
}
|
||||
|
||||
/// Ensures k3d is installed and the cluster is initialized
|
||||
///
|
||||
/// This method provides a complete setup flow:
|
||||
/// 1. Checks if k3d is installed, downloads and installs it if needed
|
||||
/// 2. Verifies if the specified cluster exists, creates it if not
|
||||
/// 3. Returns a Kubernetes client connected to the cluster
|
||||
///
|
||||
/// # Returns
|
||||
/// - `Ok(Client)` - Successfully ensured k3d and cluster are ready
|
||||
/// - `Err(String)` - Error message if any step failed
|
||||
pub async fn ensure_installed(&self) -> Result<Client, String> {
|
||||
if !self.is_installed() {
|
||||
info!("K3d is not installed, downloading latest release");
|
||||
self.download_latest_release()
|
||||
.await
|
||||
.map_err(|e| format!("Failed to download k3d: {}", e))?;
|
||||
|
||||
if !self.is_installed() {
|
||||
return Err("Failed to install k3d properly".to_string());
|
||||
}
|
||||
}
|
||||
|
||||
if !self.is_cluster_initialized() {
|
||||
info!("Cluster is not initialized, initializing now");
|
||||
return self.initialize_cluster().await;
|
||||
}
|
||||
|
||||
self.start_cluster().await?;
|
||||
|
||||
info!("K3d and cluster are already properly set up");
|
||||
self.create_kubernetes_client().await
|
||||
}
|
||||
|
||||
// Private helper methods
|
||||
|
||||
#[cfg(not(target_os = "windows"))]
|
||||
fn ensure_binary_executable(&self, binary_path: &PathBuf) -> bool {
|
||||
use std::os::unix::fs::PermissionsExt;
|
||||
|
||||
let mut perms = match std::fs::metadata(binary_path) {
|
||||
Ok(metadata) => metadata.permissions(),
|
||||
Err(e) => {
|
||||
debug!("Failed to get binary metadata: {}", e);
|
||||
return false;
|
||||
}
|
||||
};
|
||||
|
||||
perms.set_mode(0o755);
|
||||
|
||||
if let Err(e) = std::fs::set_permissions(binary_path, perms) {
|
||||
debug!("Failed to set executable permissions on k3d binary: {}", e);
|
||||
return false;
|
||||
}
|
||||
|
||||
true
|
||||
}
|
||||
|
||||
#[cfg(target_os = "windows")]
|
||||
fn ensure_binary_executable(&self, _binary_path: &PathBuf) -> bool {
|
||||
// Windows doesn't use executable file permissions
|
||||
true
|
||||
}
|
||||
|
||||
fn can_execute_binary_check(&self, binary_path: &PathBuf) -> bool {
|
||||
match std::process::Command::new(binary_path)
|
||||
.arg("--version")
|
||||
.output()
|
||||
{
|
||||
Ok(output) => {
|
||||
if output.status.success() {
|
||||
debug!("K3d binary is installed and working");
|
||||
true
|
||||
} else {
|
||||
debug!("K3d binary check failed: {:?}", output);
|
||||
false
|
||||
}
|
||||
}
|
||||
Err(e) => {
|
||||
debug!("Failed to execute K3d binary: {}", e);
|
||||
false
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn verify_cluster_exists(&self, binary_path: &PathBuf, cluster_name: &str) -> bool {
|
||||
match std::process::Command::new(binary_path)
|
||||
.args(["cluster", "list", cluster_name, "--no-headers"])
|
||||
.output()
|
||||
{
|
||||
Ok(output) => {
|
||||
if output.status.success() && !output.stdout.is_empty() {
|
||||
debug!("Cluster '{}' is initialized", cluster_name);
|
||||
true
|
||||
} else {
|
||||
debug!("Cluster '{}' is not initialized", cluster_name);
|
||||
false
|
||||
}
|
||||
}
|
||||
Err(e) => {
|
||||
debug!("Failed to check cluster initialization: {}", e);
|
||||
false
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn run_k3d_command<I, S>(&self, args: I) -> Result<std::process::Output, String>
|
||||
where
|
||||
I: IntoIterator<Item = S>,
|
||||
S: AsRef<std::ffi::OsStr>,
|
||||
{
|
||||
let binary_path = self.get_k3d_binary()?;
|
||||
let output = std::process::Command::new(binary_path).args(args).output();
|
||||
match output {
|
||||
Ok(output) => {
|
||||
let stderr = String::from_utf8_lossy(&output.stderr);
|
||||
debug!("stderr : {}", stderr);
|
||||
let stdout = String::from_utf8_lossy(&output.stdout);
|
||||
debug!("stdout : {}", stdout);
|
||||
Ok(output)
|
||||
}
|
||||
Err(e) => Err(format!("Failed to execute k3d command: {}", e)),
|
||||
}
|
||||
}
|
||||
|
||||
fn create_cluster(&self, cluster_name: &str) -> Result<(), String> {
|
||||
let output = self.run_k3d_command(["cluster", "create", cluster_name])?;
|
||||
|
||||
if !output.status.success() {
|
||||
let stderr = String::from_utf8_lossy(&output.stderr);
|
||||
return Err(format!("Failed to create cluster: {}", stderr));
|
||||
}
|
||||
|
||||
info!("Successfully created k3d cluster '{}'", cluster_name);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
async fn create_kubernetes_client(&self) -> Result<Client, String> {
|
||||
warn!("TODO this method is way too dumb, it should make sure that the client is connected to the k3d cluster actually represented by this instance, not just any default client");
|
||||
Client::try_default()
|
||||
.await
|
||||
.map_err(|e| format!("Failed to create Kubernetes client: {}", e))
|
||||
}
|
||||
|
||||
pub async fn get_client(&self) -> Result<Client, String> {
|
||||
match self.is_cluster_initialized() {
|
||||
true => Ok(self.create_kubernetes_client().await?),
|
||||
false => Err("Cannot get client! Cluster not initialized yet".to_string()),
|
||||
}
|
||||
}
|
||||
|
||||
async fn start_cluster(&self) -> Result<(), String> {
|
||||
let cluster_name = self.get_cluster_name()?;
|
||||
let output = self.run_k3d_command(["cluster", "start", cluster_name])?;
|
||||
|
||||
if !output.status.success() {
|
||||
let stderr = String::from_utf8_lossy(&output.stderr);
|
||||
return Err(format!("Failed to start cluster: {}", stderr));
|
||||
}
|
||||
|
||||
info!("Successfully started k3d cluster '{}'", cluster_name);
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
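
A minimal end-to-end sketch, assuming an async context and that errors are propagated upward (the base directory and cluster name are illustrative):

    use std::path::PathBuf;

    let k3d = K3d::new(PathBuf::from("./.harmony/k3d"), Some("harmony-dev".to_string()));
    let client = k3d.ensure_installed().await?;
    // downloads the k3d binary if missing, creates or starts the "harmony-dev" cluster,
    // and returns a kube::Client (currently the default-context client, per the warning above)
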
#[cfg(test)]
|
||||
@@ -124,7 +370,7 @@ mod test {
|
||||
|
||||
assert_eq!(dir.join(K3D_BIN_FILE_NAME).exists(), false);
|
||||
|
||||
let k3d = K3d::new(dir.clone());
|
||||
let k3d = K3d::new(dir.clone(), None);
|
||||
let latest_release = k3d.get_latest_release_tag().await.unwrap();
|
||||
|
||||
let tag_regex = Regex::new(r"^v\d+\.\d+\.\d+$").unwrap();
|
||||
@@ -138,7 +384,7 @@ mod test {
|
||||
|
||||
assert_eq!(dir.join(K3D_BIN_FILE_NAME).exists(), false);
|
||||
|
||||
let k3d = K3d::new(dir.clone());
|
||||
let k3d = K3d::new(dir.clone(), None);
|
||||
let bin_file_path = k3d.download_latest_release().await.unwrap();
|
||||
assert_eq!(bin_file_path, dir.join(K3D_BIN_FILE_NAME));
|
||||
assert_eq!(dir.join(K3D_BIN_FILE_NAME).exists(), true);
|
||||
|
||||
@@ -40,7 +40,11 @@ pub struct CaddyGeneral {
|
||||
#[yaserde(rename = "TlsDnsOptionalField4")]
|
||||
pub tls_dns_optional_field4: MaybeString,
|
||||
#[yaserde(rename = "TlsDnsPropagationTimeout")]
|
||||
pub tls_dns_propagation_timeout: MaybeString,
|
||||
pub tls_dns_propagation_timeout: Option<MaybeString>,
|
||||
#[yaserde(rename = "TlsDnsPropagationTimeoutPeriod")]
|
||||
pub tls_dns_propagation_timeout_period: Option<MaybeString>,
|
||||
#[yaserde(rename = "TlsDnsPropagationDelay")]
|
||||
pub tls_dns_propagation_delay: Option<MaybeString>,
|
||||
#[yaserde(rename = "TlsDnsPropagationResolvers")]
|
||||
pub tls_dns_propagation_resolvers: MaybeString,
|
||||
pub accesslist: MaybeString,
|
||||
@@ -82,4 +86,8 @@ pub struct CaddyGeneral {
|
||||
pub auth_to_tls: Option<i32>,
|
||||
#[yaserde(rename = "AuthToUri")]
|
||||
pub auth_to_uri: MaybeString,
|
||||
#[yaserde(rename = "ClientIpHeaders")]
|
||||
pub client_ip_headers: MaybeString,
|
||||
#[yaserde(rename = "CopyHeaders")]
|
||||
pub copy_headers: MaybeString,
|
||||
}
|
||||
|
||||
@@ -14,6 +14,8 @@ pub struct DhcpInterface {
|
||||
pub netboot: Option<u32>,
|
||||
pub nextserver: Option<String>,
|
||||
pub filename64: Option<String>,
|
||||
pub filename: Option<String>,
|
||||
pub filenameipxe: Option<String>,
|
||||
#[yaserde(rename = "ddnsdomainalgorithm")]
|
||||
pub ddns_domain_algorithm: Option<MaybeString>,
|
||||
#[yaserde(rename = "numberoptions")]
|
||||
|
||||
@@ -45,6 +45,7 @@ pub struct OPNsense {
|
||||
#[yaserde(rename = "Pischem")]
|
||||
pub pischem: Option<Pischem>,
|
||||
pub ifgroups: Ifgroups,
|
||||
pub dnsmasq: Option<RawXml>,
|
||||
}
|
||||
|
||||
impl From<String> for OPNsense {
|
||||
@@ -166,7 +167,7 @@ pub struct Sysctl {
|
||||
pub struct SysctlItem {
|
||||
pub descr: MaybeString,
|
||||
pub tunable: String,
|
||||
pub value: String,
|
||||
pub value: MaybeString,
|
||||
}
|
||||
|
||||
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]
|
||||
@@ -279,6 +280,7 @@ pub struct User {
|
||||
pub scope: String,
|
||||
pub groupname: Option<MaybeString>,
|
||||
pub password: String,
|
||||
pub pwd_changed_at: Option<MaybeString>,
|
||||
pub uid: u32,
|
||||
pub disabled: Option<u8>,
|
||||
pub landing_page: Option<MaybeString>,
|
||||
@@ -540,6 +542,8 @@ pub struct GeneralIpsec {
|
||||
preferred_oldsa: Option<MaybeString>,
|
||||
disablevpnrules: Option<MaybeString>,
|
||||
passthrough_networks: Option<MaybeString>,
|
||||
user_source: Option<MaybeString>,
|
||||
local_group: Option<MaybeString>,
|
||||
}
|
||||
|
||||
#[derive(Debug, YaSerialize, YaDeserialize, PartialEq)]
|
||||
@@ -1219,6 +1223,7 @@ pub struct Host {
|
||||
pub rr: String,
|
||||
pub mxprio: MaybeString,
|
||||
pub mx: MaybeString,
|
||||
pub ttl: Option<MaybeString>,
|
||||
pub server: String,
|
||||
pub description: Option<String>,
|
||||
}
|
||||
@@ -1233,6 +1238,7 @@ impl Host {
|
||||
rr,
|
||||
server,
|
||||
mxprio: MaybeString::default(),
|
||||
ttl: Some(MaybeString::default()),
|
||||
mx: MaybeString::default(),
|
||||
description: None,
|
||||
}
|
||||
@@ -1421,7 +1427,7 @@ pub struct VirtualIp {
|
||||
#[yaserde(attribute = true)]
|
||||
pub version: String,
|
||||
#[yaserde(rename = "vip")]
|
||||
pub vip: Vip,
|
||||
pub vip: Option<Vip>,
|
||||
}
|
||||
|
||||
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]
|
||||
|
||||
@@ -23,7 +23,7 @@ pub struct Config {
|
||||
}
|
||||
|
||||
impl Serialize for Config {
|
||||
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
|
||||
fn serialize<S>(&self, _serializer: S) -> Result<S::Ok, S::Error>
|
||||
where
|
||||
S: serde::Serializer,
|
||||
{
|
||||
|
||||
@@ -4,17 +4,17 @@ pub mod modules;
|
||||
|
||||
pub use config::Config;
|
||||
pub use error::Error;
|
||||
#[cfg(test)]
|
||||
mod test {
|
||||
|
||||
#[cfg(e2e_test)]
|
||||
mod e2e_test {
|
||||
use opnsense_config_xml::StaticMap;
|
||||
use std::net::Ipv4Addr;
|
||||
|
||||
use crate::Config;
|
||||
use pretty_assertions::assert_eq;
|
||||
|
||||
#[cfg(opnsenseendtoend)]
|
||||
#[tokio::test]
|
||||
async fn test_public_sdk() {
|
||||
use pretty_assertions::assert_eq;
|
||||
let mac = "11:22:33:44:55:66";
|
||||
let ip = Ipv4Addr::new(10, 100, 8, 200);
|
||||
let hostname = "test_hostname";
|
||||
|
||||
@@ -179,7 +179,21 @@ impl<'a> DhcpConfig<'a> {
|
||||
|
||||
pub fn set_boot_filename(&mut self, boot_filename: &str) {
|
||||
self.enable_netboot();
|
||||
self.get_lan_dhcpd().filename64 = Some(boot_filename.to_string());
|
||||
self.get_lan_dhcpd().bootfilename = Some(boot_filename.to_string());
|
||||
}
|
||||
|
||||
pub fn set_filename(&mut self, filename: &str) {
|
||||
self.enable_netboot();
|
||||
self.get_lan_dhcpd().filename = Some(filename.to_string());
|
||||
}
|
||||
|
||||
pub fn set_filename64(&mut self, filename64: &str) {
|
||||
self.enable_netboot();
|
||||
self.get_lan_dhcpd().filename64 = Some(filename64.to_string());
|
||||
}
|
||||
|
||||
pub fn set_filenameipxe(&mut self, filenameipxe: &str) {
|
||||
self.enable_netboot();
|
||||
self.get_lan_dhcpd().filenameipxe = Some(filenameipxe.to_string());
|
||||
}
|
||||
}
|
||||
|
||||
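
The setters map onto the netboot fields added to DhcpInterface above; a hedged sketch, assuming `dhcp` is a DhcpConfig handle obtained from the OPNsense config wrapper (addresses are illustrative):

    dhcp.set_filename("undionly.kpxe");                           // legacy BIOS PXE clients
    dhcp.set_filename64("ipxe.efi");                              // UEFI clients
    dhcp.set_filenameipxe("http://192.168.1.1:8080/boot.ipxe");   // clients already running iPXE
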
@@ -1,2 +1,3 @@
|
||||
[package]
|
||||
name = "example"
|
||||
edition = "2024"
|
||||
|
||||