* it was named `hurl!` instead of just `url!` because it clashed with the `url` crate, which would have forced us to use it as `harmony_macros::url!`, which is less sexy
Reviewed-on: #135
* OKD needs to use the Cluster Observability Operator in order to deploy namespaced Prometheus and Alertmanager instances
* allow namespaced deployments of Alertmanager and Prometheus, as well as their associated rules, etc.
Co-authored-by: Ian Letourneau <ian@noma.to>
Reviewed-on: #134
Co-authored-by: Willem <wrolleman@nationtech.io>
Co-committed-by: Willem <wrolleman@nationtech.io>
## Fully automated inventory gathering now works!
Boot up `harmony_inventory_agent` with `cargo run -p harmony_inventory_agent`.
Launch the `DiscoverInventoryAgentScore`, currently available this way:
`RUST_LOG=info cargo run -p example-cli -- -f Discover -y`
All hosts will then automatically be saved to the database. Run `cargo sqlx setup` if you have not done so yet.
Co-authored-by: Ian Letourneau <ian@noma.to>
Reviewed-on: #127
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
The process sets up dnsmasq DHCP on OPNsense to boot the correct iPXE file depending on the architecture.
iPXE then chainloads either a MAC-specific iPXE boot file or the fallback inventory boot file.
A Kickstart pre script then sets up the cluster SSH key to allow SSH connections to the machine, and also sets up and starts harmony_inventory_agent so it can be scraped.
Note: there is currently a bug with the inventory agent; it cannot find lsmod on CentOS Stream 9. This will be fixed soon.
Reviewed-on: #111
This pull request introduces a comprehensive and ergonomic secret management system via a new `harmony-secret` crate.
## What's Done
**New `harmony-secret` crate:**
* A new crate dedicated to secret management, providing a clean, static API: `SecretManager::get::<MySecret>()` and `SecretManager::set(&my_secret)`.
* A `#[derive(Secret)]` procedural macro that automatically uses the struct's name as the secret key, simplifying usage.
* An async `SecretStore` trait to support various backend implementations.
**Two secret store implementations:**
* `LocalFileSecretStore`: a simple file-based store that saves secrets as JSON in the user's data directory. Ideal for local development and testing.
* `InfisicalSecretStore`: a production-ready implementation that integrates with Infisical for centralized, secure secret management.
**Configuration via environment variables:**
* The secret store is selected at runtime via the `HARMONY_SECRET_STORE` environment variable (`file` or `infisical`).
* Infisical integration is configured through `HARMONY_SECRET_INFISICAL_*` variables.
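A usage sketch of the API above; the `DatabaseCredentials` struct, its fields, and the exact signatures (async, error type) are illustrative assumptions:

```rs
use harmony_secret::{Secret, SecretManager};
use serde::{Deserialize, Serialize};

// Hypothetical secret type: #[derive(Secret)] uses the struct name as the key.
#[derive(Secret, Serialize, Deserialize, Debug)]
struct DatabaseCredentials {
    username: String,
    password: String,
}

async fn example() -> Result<(), Box<dyn std::error::Error>> {
    // Store the secret through whichever backend HARMONY_SECRET_STORE selects.
    SecretManager::set(&DatabaseCredentials {
        username: "app".into(),
        password: "s3cret".into(),
    })
    .await?;

    // Retrieve it later; the key is derived from the struct name.
    let creds = SecretManager::get::<DatabaseCredentials>().await?;
    println!("db user: {}", creds.username);
    Ok(())
}
```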
## What's Not Done (Future Work)
* **Automated Infisical setup:** The initial configuration for the Infisical backend is currently manual. Developers must create a project and a Universal Auth identity in Infisical and set the corresponding environment variables to run tests or use the backend. The new `test_harmony_secret_infisical.sh` script serves as a clear example of the required variables.
This new secrets module provides a solid and secure foundation for managing credentials for components like OPNsense, Kubernetes, and other infrastructure services going forward. Even with the manual first-time setup for Infisical, this architecture is robust enough to serve our needs for the foreseeable future.
* define Ntfy ingress (naive implementation) based on current target
* use patched Ntfy Helm Chart
* create Ntfy main user only if needed
* add info logs
* better error bubbling
* instrument feature installations
* upgrade prometheus alerting charts if already installed
* harmony_composer params to control deployment `target` and `profile`
Co-authored-by: Ian Letourneau <letourneau.ian@gmail.com>
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Reviewed-on: #107
The `MultiProgress` wasn't used properly, leading to conflicting progress bars (within our own progress bars, as well as the log wrapper).
This PR introduces a layer on top of `indicatif::MultiProgress` to properly handle sections of progress bars, where we can dynamically add/update/remove progress bars from any section.
We can see in the demo that new sections + progress bars are added on the fly and that extra logs (e.g. info logs) are appended on top of the progress bars.
Progress bars are also grouped together based on their parent score.
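A minimal sketch of what such a section layer over `indicatif::MultiProgress` could look like (the `Sections` type and its methods are illustrative, not the actual implementation):

```rs
use indicatif::{MultiProgress, ProgressBar, ProgressStyle};
use std::collections::HashMap;

// Illustrative wrapper: one header bar per section, progress bars inserted under it.
struct Sections {
    multi: MultiProgress,
    headers: HashMap<String, ProgressBar>,
}

impl Sections {
    fn new() -> Self {
        Self { multi: MultiProgress::new(), headers: HashMap::new() }
    }

    // Add a bar under the named section, creating the section header on the fly.
    fn add_bar(&mut self, section: &str, len: u64) -> ProgressBar {
        if !self.headers.contains_key(section) {
            let header = self.multi.add(ProgressBar::new_spinner());
            header.set_message(section.to_string());
            self.headers.insert(section.to_string(), header);
        }
        let bar = ProgressBar::new(len)
            .with_style(ProgressStyle::with_template("  {msg} {bar:30} {pos}/{len}").unwrap());
        self.multi.insert_after(&self.headers[section], bar)
    }

    // Remove a finished bar without disturbing the other sections.
    fn remove(&self, bar: &ProgressBar) {
        bar.finish();
        self.multi.remove(bar);
    }
}
```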
Co-authored-by: Ian Letourneau <letourneau.ian@gmail.com>
Co-authored-by: johnride <jg@nationtech.io>
Reviewed-on: #101
The CI pipeline (`./check.sh`) was failing because of test errors, caused by the instrumentation framework complaining that no subscribers/listeners were registered.
Instead of setting up all tests to run with a dummy subscriber, move the implementation of the instrumentation behind a feature flag so that it runs only for tests.
There's a catch though: the `#[cfg(test)]` directive works only when directly testing the crate. If a crate `A` depends on another crate `B`, `B` will be compiled as usual (aka not in test mode) which will not trigger the `test` flag.
So we need to introduce our own `testing` feature flag for `harmony` core and import it with that flag (only during dev/test).
More info: https://github.com/rust-lang/rust/issues/59168
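Roughly, the pattern looks like this (the `testing` feature name matches the PR; module and function names are illustrative):

```rs
// Cargo.toml (harmony core):
//   [features]
//   testing = []
//
// Cargo.toml (a dependent crate, dev/test only):
//   [dev-dependencies]
//   harmony = { path = "../harmony", features = ["testing"] }

// Compiled when this crate itself runs `cargo test`, or when a dependent enables
// the `testing` feature, since #[cfg(test)] does not propagate across crates.
#[cfg(any(test, feature = "testing"))]
pub mod test_instrumentation {
    // Hypothetical helper: register a dummy subscriber so instrumented code
    // always has a listener during tests.
    pub fn install_test_subscriber() {
        // ...
    }
}
```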
Co-authored-by: Ian Letourneau <letourneau.ian@gmail.com>
Reviewed-on: https://git.nationtech.io/NationTech/harmony/pulls/102
First step in a direction to better orchestrate the core flow, even though it feels weird to move this logic into the `Score`. We'll refactor this as soon as we have a better solution.
Co-authored-by: Ian Letourneau <letourneau.ian@gmail.com>
Reviewed-on: #100
Introduce a way to instrument what happens within Harmony and around Harmony (e.g. in the CLI or in Composer).
The goal is to provide visual feedback to the end users and inform them of the progress of their tasks (e.g. deployment) as clearly as possible. It is important to also let them know of the outcome of their tasks (what was created, where to access stuff, etc.).
<img src="https://media.discordapp.net/attachments/1295353830300713062/1400289618636574741/demo.gif?ex=688c18d5&is=688ac755&hm=2c70884aacb08f7bd15cbb65a7562a174846906718aa15294bbb238e64febbce&=" />
## Changes
### Instrumentation architecture
Extensibility and ease of use are key here, while preserving type safety as much as possible.
The proposed API is quite simple:
```rs
// Emit an event
instrumentation::instrument(HarmonyEvent::TopologyPrepared {
    topology: "k8s-anywhere",
    outcome: Outcome::success("yay"),
});

// Consume events
instrumentation::subscribe("Harmony CLI Logger", async |event| {
    match event {
        HarmonyEvent::TopologyPrepared { topology, outcome } => todo!(),
        _ => {}
    }
});
```
#### Current limitations
* this API is not very extensible, but it could be easily changed to allow end users to define custom events in addition to Harmony core events
* we use a tokio broadcast channel behind the scenes, so only in-process communication can happen, but it could easily be changed to a more flexible communication mechanism as implementation details are hidden
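For reference, a rough sketch of the underlying pattern (simplified, not the actual implementation), assuming a `tokio::sync::broadcast` channel as described:

```rs
use tokio::sync::broadcast;

#[derive(Clone, Debug)]
enum HarmonyEvent {
    TopologyPrepared { topology: String },
}

#[tokio::main]
async fn main() {
    // One sender shared by emitters; each subscriber gets its own receiver.
    let (tx, _) = broadcast::channel::<HarmonyEvent>(64);

    let mut rx = tx.subscribe();
    let listener = tokio::spawn(async move {
        while let Ok(event) = rx.recv().await {
            println!("observed: {event:?}");
        }
    });

    tx.send(HarmonyEvent::TopologyPrepared { topology: "k8s-anywhere".into() }).unwrap();
    drop(tx); // closing the channel ends the listener loop
    listener.await.unwrap();
}
```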
### `harmony_composer` VS `harmony_cli`
As Harmony Composer launches commands from Harmony (CLI), they live in different processes. Because of this, we cannot easily make all the logging happen in one place (Harmony Composer) and get rid of Harmony CLI. At least not without introducing additional complexity such as communication through a server, unix socket, etc.
So for the time being, it was decided to preserve both `harmony_composer` and `harmony_cli` and let them independently log their stuff and handle their own responsibilities:
* `harmony_composer`: takes care only of setting up & packaging a project, delegates everything else to `harmony_cli`
* `harmony_cli`: takes care of configuring & running Harmony
### Logging & prompts
* [indicatif](https://github.com/console-rs/indicatif) is used to create progress bars and track progress within Harmony, Harmony CLI, and Harmony Composer
* [inquire](https://github.com/mikaelmello/inquire) is preserved, but was removed from `harmony` (core) as UI concerns shouldn't go that deep
* note: for now the only prompt we had was simply deleted, we'll have to find a better way to prompt stuff in the future
## Todos
* [ ] Update/Create ADRs
* [ ] Continue instrumentation for missing branches
* [ ] Allow instrumentation to emit and subscribe to custom events
Co-authored-by: Ian Letourneau <letourneau.ian@gmail.com>
Reviewed-on: #91
Reviewed-by: johnride <jg@nationtech.io>
A Maestro was initialized with a new inventory simply to provide a localhost topology to install K3D locally. But in practice, the K3D installation wasn't actually using the topology or the inventory.
Directly installing K3D within the K8s Anywhere topology makes things simpler and actually enforces that the topology provides the capabilities required to install K3D.
- Added functionality to generate a Helm chart for the application.
- Implemented chart packaging and pushing to an OCI registry.
- Utilized `helm package` and `helm push` commands.
- Included configurable registry URL and project name.
- Added tests to verify chart generation and packaging.
- Improved error handling and logging.
Using `Command::output()` executes the command and waits for it to finish before returning the output.
However, in some cases the user might need to interact with the CLI before continuing, which hangs the command execution.
Instead, using `Command::spawn()` allows forwarding stdin/stdout to the parent process.
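A minimal illustration of the spawn-based approach (the command shown is arbitrary):

```rs
use std::process::{Command, Stdio};

fn run_interactive() -> std::io::Result<()> {
    // Command::output() buffers stdout and blocks until the process exits,
    // so the user would never see (or be able to answer) an interactive prompt.
    // Spawning with inherited stdio forwards the prompt to the parent terminal.
    let status = Command::new("helm")
        .args(["upgrade", "--install", "my-release", "./chart"])
        .stdin(Stdio::inherit())
        .stdout(Stdio::inherit())
        .stderr(Stdio::inherit())
        .spawn()?
        .wait()?;
    println!("exited with {status}");
    Ok(())
}
```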
Reviewed-on: #71
Reviewed-by: johnride <jg@nationtech.io>
With this architecture, we have an extensible application module for which we can easily define new features and add them to application scores.
All this is driven by the ApplicationInterpret, which understands features and makes sure they are "installed".
The drawback of this design is that we now have three different places to launch scores within Harmony : Maestro, Topology and Interpret. This is an architectural smell and I am not sure how to deal with it at the moment.
However, all these places where execution is performed make sense semantically: an ApplicationInterpret must understand ApplicationFeatures and can very well be responsible for them. The same goes for a Topology, which provides features itself by composition (e.g. K8sAnywhereTopology implements TenantManager), so it is natural for this very implementation to know how to install itself.
Co-authored-by: Ian Letourneau <ian@noma.to>
Reviewed-on: #70
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
- Implemented a dry-run mode for K8s resource patching, displaying diffs before applying changes.
- Added the `similar` dependency for calculating and displaying text diffs (see the sketch after this list).
- Enhanced K8s resource application to handle various port specifications in NetworkPolicy ingress rules.
- Added support for port ranges and lists of ports in NetworkPolicy rules.
- Updated K8s client to utilize the dry-run configuration setting.
- Added configuration option `HARMONY_DRY_RUN` to enable or disable dry-run mode.
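A sketch of how the dry-run diff can be produced with the `similar` crate (function and variable names are placeholders):

```rs
use similar::{ChangeTag, TextDiff};

// Render a line-based diff between the live manifest and the desired one.
fn print_dry_run_diff(current_yaml: &str, desired_yaml: &str) {
    let diff = TextDiff::from_lines(current_yaml, desired_yaml);
    for change in diff.iter_all_changes() {
        let sign = match change.tag() {
            ChangeTag::Delete => "-",
            ChangeTag::Insert => "+",
            ChangeTag::Equal => " ",
        };
        print!("{sign}{change}");
    }
}
```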
Adds the foundation for managing tenant credentials, including:
- `TenantCredentialScore` for scoring credential-related operations.
- `TenantCredentialManager` trait for creating users.
- `CredentialMetadata` struct to store credential information.
- `CredentialData` enum to hold credential content.
- `TenantCredentialBundle` struct to encapsulate metadata and content.
This provides a starting point for implementing credential creation, storage, and retrieval within the harmony system.
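A rough shape of the types listed above (field and method names are illustrative, not the actual definitions):

```rs
use async_trait::async_trait;

// Illustrative shapes only; the real definitions live in the tenant module.
pub struct CredentialMetadata {
    pub name: String,
    pub tenant: String,
}

pub enum CredentialData {
    UserPassword { username: String, password: String },
    SshKey { public_key: String, private_key: String },
}

pub struct TenantCredentialBundle {
    pub metadata: CredentialMetadata,
    pub data: CredentialData,
}

#[async_trait]
pub trait TenantCredentialManager {
    // Create a user for the tenant from the given credential bundle.
    async fn create_user(&self, bundle: &TenantCredentialBundle) -> Result<(), String>;
}
```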
Reviewed-on: #63
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
- Added `additional_allowed_cidr_ingress` and `additional_allowed_cidr_egress` fields to `TenantNetworkPolicy` to allow specifying custom CIDR blocks for network access.
- Updated K8sTenantManager to parse and apply these CIDR rules to NetworkPolicy ingress and egress rules.
- Added `cidr` dependency to `harmony_macros` and a custom proc macro `cidrv4` to easily parse CIDR strings.
- Updated TenantConfig to default inter-tenant and internet egress to deny-all, and added default empty vectors for CIDR ingress and egress.
- Updated ResourceLimits to implement `Default`.
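For illustration, this is roughly what `cidrv4!("192.168.10.0/24")` is presumed to boil down to, using the `cidr` crate directly (the network is a placeholder); the macro additionally rejects an invalid literal at compile time:

```rs
use cidr::Ipv4Cidr;
use std::str::FromStr;

fn main() {
    // Parse a CIDR block that could be added to additional_allowed_cidr_ingress.
    let office = Ipv4Cidr::from_str("192.168.10.0/24").expect("valid CIDR");
    println!("additional allowed ingress: {office}");
}
```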
Reviewed-on: #60
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
This Id implementation is optimized for ease of use. Ids are prefixed with the unix epoch and suffixed with 7 alphanumeric characters, but Ids can also contain any String the user wants to pass to it.
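A sketch of how such an id could be generated (the separator and function name are assumptions):

```rs
use rand::{distributions::Alphanumeric, Rng};
use std::time::{SystemTime, UNIX_EPOCH};

// e.g. "1717171717-aB3xZ9q": unix epoch prefix + 7 alphanumeric characters.
fn new_id() -> String {
    let epoch = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before unix epoch")
        .as_secs();
    let suffix: String = rand::thread_rng()
        .sample_iter(&Alphanumeric)
        .take(7)
        .map(char::from)
        .collect();
    format!("{epoch}-{suffix}")
}
```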
- Implemented a new `cert-manager` module for deploying cert-manager.
- Added support for specifying a Helm repository in module configurations.
- Introduced `cert_manager` module in `modules/mod.rs`.
- Created `src/modules/cert_manager` directory and its associated code.
- Implemented `add_repo` function in `src/modules/helm.rs` for adding Helm repositories.
- Updated `LAMPInterpret` and `lamp.rs` to integrate the new module.
- Added logging for Helm command execution.
- Updated k8s deployment file to remove unused DeepMerge dependency.
- Added functionality to tag and push the built Docker image to a specified registry.
- Modified deployment score to use the full image tag (including registry and project).
- Included error handling and logging for the `docker tag` and `docker push` commands.
- Updated the `K8sDeploymentScore` struct to include a namespace field and environment variables for database credentials.
- Added kebab-case conversion for deployment name and namespace.
- Implemented a check_output function for better error reporting.
- Adds a `deploy_database` function to the `LAMPInterpret` struct to deploy a MariaDB database using Helm.
- Integrates `HelmCommand` trait requirement to the `LAMPInterpret` struct.
- Introduces `HelmChartScore` to manage MariaDB deployment.
- Adds namespace configuration for helm deployments.
- Updates trait bounds for `LAMPInterpret` to include `HelmCommand`.
- Implements `get_namespace` function to retrieve the namespace.
Escapes the value of the PHP_ERROR_REPORTING environment variable in the Dockerfile to prevent potential issues with shell interpretation. Uses EnvBuilder for a more structured approach.
- Added `build_dockerfile` function to generate a Dockerfile based on the LAMP stack for the given project.
- Implemented `build_docker_image` to execute the docker build command and create the image.
- Configured user and permissions for apache.
- Included necessary apache configuration for security.
- Added error handling for docker build failures.
- Exposed port 80 for external access.
- Added basic serialization to Config struct.
- Refactor k3d cluster management to explicitly start the cluster.
- Introduce `start_cluster` function to ensure cluster is running before operations.
- Improve error handling and logging during cluster startup.
- Update `create_cluster` and other related functions to utilize the new startup mechanism.
- Enhance reliability and prevent potential issues caused by an uninitialized cluster.
- Add `run_k3d_command` to handle k3d commands with logging and error handling.
- Adds functionality to download, install, and manage k3d clusters.
- Includes methods for downloading the latest release, creating clusters, and verifying cluster existence.
- Implements `ensure_k3d_installed`, `get_latest_release_tag`, `download_latest_release`, `is_k3d_installed`, `verify_cluster_exists`, `create_cluster` and `create_kubernetes_client`.
- Provides a `get_client` method to access the Kubernetes client.
- Includes unit tests for download and installation.
- Adds handling for different operating systems.
- Improves error handling and logging.
- Introduces a `K3d` struct to encapsulate k3d cluster management logic.
- Adds the ability to specify the cluster name during K3d initialization.
Adds a new interpret for k3d installation. This includes defining the `K3dInstallationInterpret` struct, implementing the `Interpret` trait for it, and adding the `K3dInstallation` variant to the `InterpretName` enum. The implementation currently contains `todo!()` placeholders for the actual logic.
- Implemented functionality to fetch the latest k3d release tag from GitHub.
- Added logic to determine the appropriate binary URL based on the current platform.
- Implemented downloading and saving the binary to a specified directory.
- Included unit tests to verify the download and installation process.
- Added a `K3D_BIN_FILE_NAME` constant for clarity.
- Added logging for better debugging.
- Added initial K8sAnywhere topology and related modules.
- Implemented a basic K3d installation score for cluster bootstrapping.
- Introduced LocalhostTopology for local development and testing.
- Added necessary module structure and dependencies.
- Implemented user prompt for K3d installation confirmation.
- Added basic error handling and logging.
- Refactored existing code to improve modularity and maintainability.
- Included necessary tests to ensure functionality.
This commit introduces a new topology, `K8sAnywhereTopology`, designed to handle Kubernetes deployments more flexibly.
Key changes include:
- Introduced `K8sAnywhereTopology` to encapsulate Kubernetes client management and configuration.
- Refactored existing Kubernetes-related code to utilize the new topology.
- Updated `OcK8sclient` to `K8sclient` across modules (k8s, lamp, deployment, resource) for consistency.
- Ensured all relevant modules now interface with Kubernetes through the `K8sclient` trait.
This change promotes a more modular and maintainable codebase for Kubernetes integrations within Harmony.
Adds an `ensure_ready` method to the `Topology` trait to ensure the infrastructure is prepared before score execution.
- Introduces a new `Outcome` status to indicate the result of the readiness check.
- Implements a `topology_preparation_result` field in `Maestro` to track initialization status.
- Adds a check in `interpret` to warn if the topology isn't fully initialized.
- Provides detailed documentation for the `Topology` trait and `ensure_ready` method, including recommended patterns for complex setups.
- Adds `async_trait` dependency.
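A simplified sketch of the resulting trait shape (the `Outcome` variants and the exact signature are assumptions):

```rs
use async_trait::async_trait;

// Illustrative outcome type; the real `Outcome` has its own constructors.
pub enum Outcome {
    Success(String),
    Noop(String),
}

#[async_trait]
pub trait Topology {
    fn name(&self) -> &str;

    // Called before interpreting scores: prepare the underlying infrastructure
    // (e.g. boot a local cluster) and report what happened.
    async fn ensure_ready(&self) -> Result<Outcome, String>;
}
```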
- Corrected XML test data to remove unnecessary `<descr>` tags, resolving failing tests.
- Removed the unused `ratatui_utils` module and its associated code.
- Simplified example in `harmony_tui/src/lib.rs` to use `tokio::main` and register scores directly with `Maestro`. This aligns with the project's evolving structure.
Adds a quick demo command using `cargo run -p example-tui` to launch a minimalist TUI with demo scores.
Also includes a core architecture diagram and overview in the README for better understanding of the project structure.
Decouples score definitions from UI implementations by mandating `serde::Serialize` and `serde::Deserialize` for all `Score` structs. UIs will interact with scores via their serialized representation, enabling scalability and reducing complexity for score authors.
This approach:
- Scales better with new score types and UI targets.
- Simplifies score authoring by removing the need for UI-specific display traits.
- Leverages the `serde` ecosystem for robust data handling.
Adding new field types requires updates to all UIs, a trade-off acknowledged in the ADR.
This commit adds the `serde` dependency and derives the `Serialize` trait for `Score` types. This is necessary for serialization and deserialization of these types, which is required to display Scores in various user interfaces.
- Added `serde` dependency to `harmony_types/Cargo.toml`.
- Added `serde::Serialize` derive macro to `MacAddress` in `harmony_types/src/lib.rs`.
- Added `serde::Serialize` derive macro to `Config` in `opnsense-config/src/config/config.rs`.
- Added `serde::Serialize` derive macro to `Score` in `harmony_types/src/lib.rs`.
- Added `serde::Serialize` derive macro to `Config` and `Score` in relevant modules.
- Added placeholder `todo!()` implementations for `serialize` methods. These will be implemented in future commits.
# TODO: build ARM images and MacOS binaries (or other targets) too
- name: Update snapshot-latest tag
  run: |
    git config user.name "Gitea CI"
    git config user.email "ci@nationtech.io"
    git tag -f snapshot-latest
    git push origin snapshot-latest --force
- name: Install jq
  run: apt install -y jq # The current image includes apt lists so we don't have to apt update and rm /var/lib/apt... every time. But if the image is optimized it won't work anymore
- name: Create or update release
  run: |
    # First, check if release exists and delete it if it does
# Harmony: Open-source infrastructure orchestration that treats your platform like first-class code
Due to the current setup being a mix of separate repositories with gitignore and a Rust workspace, a few options are required for cargo-watch to have the desired behavior:
From a **developer laptop** to a **global production cluster**, a single **source of truth** drives the **full software lifecycle.**
---
## 1 · The Harmony Philosophy
Infrastructure is essential, but it shouldn’t be your core business. Harmony is built on three guiding principles that make modern platforms reliable, repeatable, and easy to reason about.
| Principle | What it means |
|---|---|
| **Infrastructure as Resilient Code** | Replace sprawling YAML and bash scripts with type-safe Rust. Test, refactor, and version your platform just like application code. |
| **Prove It Works — Before You Deploy** | Harmony uses the compiler to verify that your application’s needs match the target environment’s capabilities at **compile-time**, eliminating an entire class of runtime outages. |
| **One Unified Model** | Software and infrastructure are a single system. Harmony models them together, enabling deep automation—from bare-metal servers to Kubernetes workloads—with zero context switching. |
These principles surface as simple, ergonomic Rust APIs that let teams focus on their product while trusting the platform underneath.
---
## 2 · Quick Start
The snippet below spins up a complete **production-grade Rust + Leptos Webapp** with monitoring. Swap it for your own scores to deploy anything from microservices to machine-learning pipelines.
- **Extending Harmony** – write new Scores / Interprets, add hardware like OPNsense firewalls, or embed Harmony in your own tooling (`/docs`).
- **Community** – discussions and roadmap live in [GitLab issues](https://git.nationtech.io/nationtech/harmony/-/issues). PRs, ideas, and feedback are welcome!
---
## 6 · License
Harmony is released under the **GNU AGPL v3**.
> We choose a strong copyleft license to ensure the project—and every improvement to it—remains open and benefits the entire community. Fork it, enhance it, even out-innovate us; just keep it open.
See [LICENSE](LICENSE) for the full text.
---
_Made with ❤️ & 🦀 by the NationTech and the Harmony community_
## Architecture Decision Record: Data Representation and UI Rendering for Score Types
**Status:** Proposed
**TL;DR:** `Score` types will be serialized (using `serde`) for presentation in UIs. This decouples data definition from presentation, improving scalability and reducing complexity for developers defining `Score` types. New UI types only need to handle existing field types, and new `Score` types don’t require UI changes as long as they use existing field types. Adding a new field type *does* require updates to all UIs.
**Key benefits:** Scalability, reduced complexity for `Score` authors, decoupling of data and presentation.
**Key trade-off:** Adding new field types requires updating all UIs.
---
**Context:**
Harmony is a pure Rust infrastructure orchestrator focused on compile-time safety and providing a developer-friendly, Ansible-module-like experience for defining infrastructure configurations via "Scores". These Scores (e.g., `LAMPScore`) are Rust structs composed of specific, strongly-typed fields (e.g., `VersionField`, `UrlField`, `PathField`) which are validated at compile-time using macros (`Version!`, `Url!`, etc.).
A key requirement is displaying the configuration defined in these Scores across various user interfaces (Web UI, TUI, potentially Mobile UI, etc.) in a consistent and type-safe manner. As the number of Score types is expected to grow significantly (hundreds or thousands), we need a scalable approach for rendering their data that avoids tightly coupling Score definitions to specific UI implementations.
The primary challenge is preventing the need for every `Score` struct author to implement multiple display traits (e.g., `Display`, `WebDisplay`, `TuiDisplay`) for every potential UI target. This would create an N x M complexity problem (N Scores * M UI types) and place an unreasonable burden on Score developers, hindering scalability and maintainability.
**Decision:**
1. **Mandatory Serialization:** All `Score` structs *must* implement `serde::Serialize` and `serde::Deserialize`. They *will not* be required to implement `std::fmt::Display` or any custom UI-specific display traits (e.g., `WebDisplay`, `TuiDisplay`).
2. **Field-Level Rendering:** Responsibility for rendering data will reside within the UI components. Each UI (Web, TUI, etc.) will implement logic to display *individual field types* (e.g., `UrlField`, `VersionField`, `IpAddressField`, `SecretField`).
3. **Data Access via Serialization:** UIs will primarily interact with `Score` data through its serialized representation (e.g., JSON obtained via `serde_json`). This provides a standardized interface for UIs to consume the data structure agnostic of the specific `Score` type. Alternatively, UIs *could* potentially use reflection or specific visitor patterns on the `Score` struct itself, but serialization is the preferred decoupling mechanism.
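A minimal sketch of this contract (the `LampScore` fields here are illustrative, plain `String`s rather than the typed fields mentioned above):

```rs
use serde::{Deserialize, Serialize};

// A Score only needs to be serializable; no UI-specific traits are required.
#[derive(Serialize, Deserialize)]
struct LampScore {
    name: String,
    php_version: String,
    domain: String,
}

fn main() {
    let score = LampScore {
        name: "my-app".into(),
        php_version: "8.3".into(),
        domain: "example.com".into(),
    };
    // UIs consume this structured representation and decide how to render each field.
    let json = serde_json::to_value(&score).unwrap();
    println!("{json}");
}
```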
**Rationale:**
1. **Decoupling Data from Presentation:** This decision cleanly separates the data definition (`Score` structs and their fields) from the presentation logic (UI rendering). `Score` authors can focus solely on defining the data and its structure, while UI developers focus on how to best present known data *types*.
2. **Scalability:** This approach scales significantly better than requiring display trait implementations on Scores:
* Adding a *new Score type* requires *no changes* to existing UI code, provided it uses existing field types.
* Adding a *new UI type* requires implementing rendering logic only for the defined set of *field types*, not for every individual `Score` type. This reduces the N x M complexity to N + M complexity (approximately).
3. **Simplicity for Score Authors:** Requiring only `serde::Serialize + Deserialize` (which can often be derived automatically with `#[derive(Serialize, Deserialize)]`) is a much lower burden than implementing custom rendering logic for multiple, potentially unknown, UI targets.
4. **Leverages Rust Ecosystem Standards:** `serde` is the de facto standard for serialization and deserialization in Rust. Relying on it aligns with common Rust practices and benefits from its robustness, performance, and extensive tooling.
5. **Consistency for UIs:** Serialization provides a consistent, structured format (like JSON) for UIs to consume data, regardless of the underlying `Score` struct's complexity or composition.
6. **Flexibility for UI Implementation:** UIs can choose the best way to render each field type based on their capabilities (e.g., a `UrlField` might be a clickable link in a Web UI, plain text in a TUI; a `SecretField` might be masked).
**Consequences:**
**Positive:**
* Greatly improved scalability for adding new Score types and UI targets.
* Strong separation of concerns between data definition and presentation.
* Reduced implementation burden and complexity for Score authors.
* Consistent mechanism for UIs to access and interpret Score data.
* Aligns well with the Hexagonal Architecture (ADR-002) by treating UIs as adapters interacting with the application core via a defined port (the serialized data contract).
**Negative:**
* Adding a *new field type* (e.g., `EmailField`) requires updates to *all* existing UI implementations to support rendering it.
* UI components become dependent on the set of defined field types and need comprehensive logic to handle each one appropriately.
* Potential minor overhead of serialization/deserialization compared to direct function calls (though likely negligible for UI purposes).
* Requires careful design and management of the standard library of field types.
**Alternatives Considered:**
1. **`Score` Implements `std::fmt::Display`:**
* _Rejected:_ Too simplistic. Only suitable for basic text rendering, doesn't cater to structured UIs (Web, etc.), and doesn't allow type-specific rendering logic (e.g., masking secrets). Doesn't scale to multiple UI formats.
2. **UI-Specific Display Traits per Score (e.g., `WebDisplay`, `TuiDisplay`):**
* _Rejected:_ Leads directly to the N x M complexity problem. Tightly couples Score definitions to specific UI implementations. Places an excessive burden on Score authors, hindering adoption and scalability.
3. **Generic Display Trait with Context (`Score` implements `DisplayWithContext<UIContext>`):**
* _Rejected:_ More flexible than multiple traits, but still requires Score authors to implement potentially complex rendering logic within the `Score` definition itself. The `Score` would still need awareness of different UI contexts, leading to undesirable coupling. Managing context types adds complexity.
# Architecture Decision Record: Helm and Kustomize Handling
Initial Author: Taha Hawa
Initial Date: 2025-04-15
Last Updated Date: 2025-04-15
## Status
Proposed
## Context
We need to find a way to handle Helm charts and deploy them to a Kubernetes cluster. Helm has a lot of extra functionality that we may or may not need. Kustomize handles Helm charts by inflating them and applying them as vanilla Kubernetes yaml. How should Harmony handle it?
## Decision
In order to move quickly and efficiently, Harmony should handle Helm charts similarly to how Kustomize does: invoke Helm to inflate/render the charts with the needed inputs, and deploy the rendered artifacts to Kubernetes as if they were vanilla manifests.
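A rough sketch of that flow (chart, release, and namespace are placeholders; error handling trimmed):

```rs
use std::io::Write;
use std::process::{Command, Stdio};

fn render_and_apply(release: &str, chart: &str, namespace: &str) -> std::io::Result<()> {
    // 1. Inflate the chart to plain manifests, like Kustomize's Helm support does.
    let rendered = Command::new("helm")
        .args(["template", release, chart, "--namespace", namespace])
        .output()?;

    // 2. Apply the rendered output as vanilla Kubernetes manifests.
    let mut kubectl = Command::new("kubectl")
        .args(["apply", "--namespace", namespace, "-f", "-"])
        .stdin(Stdio::piped())
        .spawn()?;
    kubectl
        .stdin
        .take()
        .expect("piped stdin")
        .write_all(&rendered.stdout)?;
    kubectl.wait()?;
    Ok(())
}
```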
## Rationale
A lot of Helm's features aren't strictly necessary and would add unneeded overhead. This is likely the fastest way to go from zero to deployed. Other tools (e.g. Kustomize) already do this. Kustomize has tooling for patching and modifying k8s manifests before deploying, and Harmony should have that power too, even if it's not what Helm typically intends.
Perhaps in future also have a Kustomize resource in Harmony? Which could handle Helm charts for Harmony as well/instead.
## Consequences
**Pros**:
- Much easier (and faster) than implementing all of Helm's featureset
- Can potentially re-use code from K8sResource already present in Harmony
- Harmony retains more control over how the deployment goes after rendering (i.e. can act like Kustomize, or leverage Kustomize itself to modify deployments after rendering/inflation)
- Reduce (unstable) surface of dealing with Helm binary
**Cons**:
- Lose some Helm functionality
- Potentially lose some compatibility with Helm
## Alternatives considered
- ### Implement Helm resource/client fully in Harmony
- **Pros**:
- Retain full compatibility with Helm as a tool
- Retain full functionality of Helm
- **Cons**:
- Longer dev time
- More complex integration
- Dealing with larger (unstable) surface of Helm as a binary
- ### Leverage Kustomize to deal with Helm charts
- **Pros**:
- Already has a good, minimal inflation solution built
- Powerful post-processing/patching
- Can integrate with `kubectl`
- **Cons**:
- Unstable binary tool/surface to deal with
- Still requires Helm to be installed as well as Kustomize
# Architecture Decision Record: Monitoring and Alerting
Initial Author: Willem Rolleman
Date: April 28, 2025
## Status
Proposed
## Context
A Harmony user should be able to initialize a monitoring stack easily, either on the first run of Harmony or in a way that integrates with existing projects and infra, without creating multiple instances of the monitoring stack or overwriting existing alerts/configurations. The user also needs a simple way to configure the stack so that it watches the projects. There should be reasonable defaults configured that are easily customizable for each project.
## Decision
Create a MonitoringStack score that creates a maestro to launch the monitoring stack, or skips launching it if the stack is already present.
The MonitoringStack score can be passed to the maestro in the `vec!` scores list.
## Rationale
Having the score launch a maestro will allow the user to easily create a new monitoring stack and keeps components grouped together. The MonitoringScore can handle all the logic for adding alerts, ensuring that the stack is running, etc.
## Alternatives considered
- ### Implement alerting and monitoring stack using existing HelmScore for each project
- **Pros**:
- Each project can choose to use the monitoring and alerting stack that they choose
- Less overhead in terms of core Harmony code
- can add `Box::new(grafana::grafanascore(namespace))`
- **Cons**:
- No default solution implemented
- Dev needs to choose what they use
- Increases complexity of score projects
- Each project will create a new monitoring and alerting instance rather than joining the existing one
- ### Use OKD grafana and prometheus
- **Pros**:
- Minimal config to do in Harmony
- **Cons**:
- relies on OKD so it will not work for local testing via k3d
- ### Create a monitoring and alerting crate similar to harmony tui
- **Pros**:
- Creates a default solution that can be implemented once by harmony
- can create a join function that will allow a project to connect to the existing solution
- eliminates risk of creating multiple instances of grafana or prometheus
- **Cons**:
- more complex than using a helm score
- management of values files for individual functions becomes more complicated, i.e. how do you create alerts for one project via helm install without overwriting the other alerts
- ### Add monitoring to Maestro struct so whether the monitoring stack is used must be defined
- **Pros**:
- less for the user to define
- may be easier to set defaults
- **Cons**:
- feels counterintuitive
- would need to modify the structure of the maestro and how it operates which seems like a bad idea
- unclear how to allow user to pass custom values/configs to the monitoring stack for subsequent projects
- ### Create MonitoringStack score to add to scores vec! which loads a maestro to install stack if not ready or add custom endpoints/alerts to existing stack
- **Pros**:
- Maestro already accepts a list of scores to initialize
- leaving out the monitoring score simply means the user does not want monitoring
- if the monitoring stack is already created, the MonitoringStack score doesn't necessarily need to be added to each project
- components of the monitoring stack are bundled together and can be expanded or modified from the same place
# Architecture Decision Record: Multi-Tenancy Strategy for Harmony Managed Clusters
Initial Author: Jean-Gabriel Gill-Couture
Initial Date: 2025-05-26
## Status
Proposed
## Context
Harmony manages production OKD/Kubernetes clusters that serve multiple clients with varying trust levels and operational requirements. We need a multi-tenancy strategy that provides:
1. **Strong isolation** between client workloads while maintaining operational simplicity
2. **Controlled API access** allowing clients self-service capabilities within defined boundaries
3. **Security-first approach** protecting both the cluster infrastructure and tenant data
4. **Harmony-native implementation** using our Score/Interpret pattern for automated tenant provisioning
5. **Scalable management** supporting both small trusted clients and larger enterprise customers
The official Kubernetes multi-tenancy documentation identifies two primary models: namespace-based isolation and virtual control planes per tenant. Given Harmony's focus on operational simplicity, provider-agnostic abstractions (ADR-003), and hexagonal architecture (ADR-002), we must choose an approach that balances security, usability, and maintainability.
Our clients represent a hybrid tenancy model:
- **Customer multi-tenancy**: Each client operates independently with no cross-tenant trust
- **Team multi-tenancy**: Individual clients may have multiple team members requiring coordinated access
- **API access requirement**: Unlike pure SaaS scenarios, clients need controlled Kubernetes API access for self-service operations
The official Kubernetes documentation on multi-tenancy heavily inspired this ADR: https://kubernetes.io/docs/concepts/security/multi-tenancy/
## Decision
Implement **namespace-based multi-tenancy** with the following architecture:
### 1. Network Security Model
- **Private cluster access**: Kubernetes API and OpenShift console accessible only via WireGuard VPN
- **No public exposure**: Control plane endpoints remain internal to prevent unauthorized access attempts
- **VPN-based authentication**: Initial access control through WireGuard client certificates
### 2. Tenant Isolation Strategy
- **Dedicated namespace per tenant**: Each client receives an isolated namespace with access limited only to the required resources and operations
- **Complete network isolation**: NetworkPolicies prevent cross-namespace communication while allowing full egress to public internet
- **Resource governance**: ResourceQuotas and LimitRanges enforce CPU, memory, and storage consumption limits
- **Storage access control**: Clients can create PersistentVolumeClaims but cannot directly manipulate PersistentVolumes or access other tenants' storage
### 3. Access Control Framework
- **Principle of Least Privilege**: RBAC grants only necessary permissions within tenant namespace scope
- **Namespace-scoped**: Clients can create/modify/delete resources within their namespace
- **Cluster-level restrictions**: No access to cluster-wide resources, other namespaces, or sensitive cluster operations
- **Whitelisted operations**: Controlled self-service capabilities for ingress, secrets, configmaps, and workload management
### 4. Identity Management Evolution
- **Phase 1**: Manual provisioning of VPN access and Kubernetes ServiceAccounts/Users
- **Phase 2**: Migration to Keycloak-based identity management (aligning with ADR-006) for centralized authentication and lifecycle management
### 5. Harmony Integration
- **TenantScore implementation**: Declarative tenant provisioning using Harmony's Score/Interpret pattern
This ADR establishes the foundation for secure, scalable multi-tenancy in Harmony-managed clusters while maintaining operational simplicity and cost effectiveness. A follow-up ADR will detail the Tenant abstraction and user management mechanisms within the Harmony framework.
As Harmony's goal is to make software delivery easier, we must provide an easy way for developers to express their app's semantics and dependencies with great abstractions, in a similar fashion to what the score.dev project is doing.
Thus, we started working on ways to package common types of applications such as LAMP, which we started working on with `LAMPScore`.
Now it is time for the next step: we want to pave the way towards complete lifecycle automation. To do this, we will start with a way to execute Harmony's modules easily from anywhere, starting locally and in CI environments.
## Decision
To achieve easy, portable execution of Harmony, we will follow this architecture:
- Host a basic harmony release that is compiled with the CLI by our gitea/github server
- This binary will do the following: check if there is a `harmony` folder in the current path (a sketch of this bootstrap flow follows the pipeline steps below)
- If yes
- Check if cargo is available locally and compile the harmony binary, or compile the harmony binary using a Rust docker container; if neither cargo nor a container runtime is available, output a message explaining the situation
- Run the newly compiled binary. (Ideally using pid handoff like exec does but some research around this should be done. I think handing off the process is to help with OS interaction such as terminal apps, signals, exit codes, process handling, etc but there might be some side effects)
- If not
- Suggest initializing a project by auto detecting what the project looks like
- When the project type cannot be auto detected, provide links to Harmony's documentation on how to set up a project and a link to the examples folder, and ask the user if they want to initialize an empty Harmony project in the current folder:
- harmony/Cargo.toml with dependencies set
- harmony/src/main.rs with an example LAMPScore setup and ready to run
- This same binary can be used in a CI environment to run the target project's Harmony module. By default, we provide these opinionated steps:
1. **An empty check step.** The purpose of this step is to run all tests and checks against the codebase. For complex projects this could involve a very complex pipeline of test environments setup and execution but this is out of scope for now. This is not handled by harmony. For projects with automatic setup, we can fill this step with something like `cargo fmt --check; cargo test; cargo build` but Harmony is not directly involved in the execution of this step.
2. **Package and publish.** Once all checks have passed, the production ready container is built and pushed to a registry. This is done by Harmony.
3. **Deploy to staging automatically.**
4. **Run a sanity check on staging.** As Harmony is responsible for deploying, Harmony should have all the knowledge of how to perform a sanity check on the staging environment. This will, most of the time, be a simple verification of the kubernetes health of all deployed components, and a poke on the public endpoint when there is one.
5. **Deploy to production automatically.** Many projects will require manual approval here; this can easily be set up in the CI afterwards, but our opinion is that deploying automatically should be the default.
6. **Run a sanity check on production.** Same check as staging, but on production.
*Note on providing a base pipeline:* Having a complete pipeline set up automatically will encourage development teams to build upon these by adding tests where they belong. The goal here is to provide an opinionated solution that works for most small and large projects. Of course, many organizations will need to add steps such as deploying to sandbox environments, requiring more advanced approvals, or more complex publication and coordination with other projects. But this encompasses the basics required to build and deploy software reliably at any scale.
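A simplified sketch of the bootstrap logic from the steps above (paths, container image, and commands are illustrative):

```rs
use std::path::Path;
use std::process::Command;

fn bootstrap() -> std::io::Result<()> {
    if Path::new("harmony").exists() {
        // Prefer a local toolchain; fall back to a Rust container if cargo is missing.
        if Command::new("cargo").arg("--version").output().is_ok() {
            Command::new("cargo")
                .args(["build", "--release"])
                .current_dir("harmony")
                .status()?;
        } else {
            let workdir = std::env::current_dir()?.join("harmony");
            Command::new("docker")
                .args(["run", "--rm", "-v"])
                .arg(format!("{}:/work", workdir.display()))
                .args(["-w", "/work", "rust:latest", "cargo", "build", "--release"])
                .status()?;
        }
        // Hand off to the freshly built project binary (illustrative path).
        Command::new("harmony/target/release/harmony").status()?;
    } else {
        println!("No harmony/ folder found; see the docs and examples to initialize a project.");
    }
    Ok(())
}
```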
### Environment setup
TBD: For now, environments (tenants) will be set up and configured manually. Harmony will rely on the kubeconfig provided in the environment where it is running to deploy in the namespace.
CD tools such as Argo or Flux will be activated by default by Harmony when using application-level Scores such as LAMPScore, in a similar way to how the container is automatically built. Then, CI deployment steps will notify the CD tool of the new release to deploy through its API.
## Rationale
Reasoning behind the decision
## Consequences
Pros/Cons of chosen solution
## Alternatives considered
Pros/Cons of various proposed solutions considered
We need to send notifications (typically from AlertManager/Prometheus) and to receive them on mobile devices in some way, whether it's push messages, SMS, phone calls, email, etc., or all of the above.
## Decision
We should go with https://ntfy.sh except host it ourselves.
`ntfy` is an open source solution written in Go that has the features we need.
## Rationale
`ntfy` has pretty much everything we need (push notifications, email forwarding, receives via webhook), and nothing/not much we don't. Good fit, lightweight.
## Consequences
Pros:
- topics, with ACLs
- lightweight
- reliable
- easy to configure
- mobile app
- the mobile app can listen via websocket, poll, or receive via Firebase/GCM on Android, or similar on iOS.
- Forward to email
- Text-to-Speech phone call messages using Twilio integration
- Operates based on simple HTTP requests/webhooks, easily usable via AlertManager (see the sketch after this list)
Cons:
- No SMS pushes
- SQLite DB makes it harder to run HA/scale
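As an illustration of the webhook-friendly publish API mentioned in the pros (host and topic are placeholders), sending a notification is a single HTTP request:

```rs
use reqwest::Client;

// ntfy publishes by POSTing the message body to https://<host>/<topic>.
async fn notify(message: &str) -> Result<(), reqwest::Error> {
    Client::new()
        .post("https://ntfy.example.com/alerts") // placeholder self-hosted instance and topic
        .header("Title", "AlertManager notification")
        .body(message.to_string())
        .send()
        .await?
        .error_for_status()?;
    Ok(())
}
```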
## Alternatives considered
[AWS SNS](https://aws.amazon.com/sns/):
Pros:
- highly reliable
- no hosting needed
Cons:
- no control, not self hosted
- costs (per usage)
[Apprise](https://github.com/caronc/apprise):
Pros:
- Way more ways of sending notifications
- Can use ntfy as one of the backends/ways of sending
Cons:
- Way too overkill for what we need in terms of features