* it was named `hurl!` instead of just `url!` because `url!` clashed with the `url` crate, which would have forced us to call it as `harmony_macros::url!`, which is less sexy
Reviewed-on: #135
* OKD needs to use the Cluster Observability Operator in order to deploy namespaced Prometheus and Alertmanager instances
* allow namespaced deployments of Alertmanager and Prometheus, as well as their associated rules, etc.
Co-authored-by: Ian Letourneau <ian@noma.to>
Reviewed-on: #134
Co-authored-by: Willem <wrolleman@nationtech.io>
Co-committed-by: Willem <wrolleman@nationtech.io>
## Fully automated inventory gathering now works!
Boot up harmony_inventory_agent with `cargo run -p harmony_inventory_agent`
Launch the `DiscoverInventoryAgentScore`, currently available this way:
`RUST_LOG=info cargo run -p example-cli -- -f Discover -y`
All discovered hosts will automatically be saved to the database. Run `cargo sqlx setup` if you have not done so yet.
Co-authored-by: Ian Letourneau <ian@noma.to>
Reviewed-on: #127
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
The process sets up dnsmasq DHCP on OPNsense to boot the correct iPXE file depending on the architecture
Then iPXE chainloads either a MAC-specific iPXE boot file or the fallback inventory boot file
Then a Kickstart pre script sets up the cluster SSH key to allow SSH connections to the machine, and also sets up and starts harmony_inventory_agent so it can be scraped
Note: there is currently a bug with the inventory agent; it cannot find `lsmod` on CentOS Stream 9. This will be fixed soon.
Reviewed-on: #111
This pull request introduces a comprehensive and ergonomic secret management system via a new `harmony-secret` crate.

## What's Done

**New `harmony-secret` crate:**
- A new crate dedicated to secret management, providing a clean, static API: `SecretManager::get::<MySecret>()` and `SecretManager::set(&my_secret)`.
- A `#[derive(Secret)]` procedural macro that automatically uses the struct's name as the secret key, simplifying usage.
- An async `SecretStore` trait to support various backend implementations.

**Two secret store implementations:**
- `LocalFileSecretStore`: a simple file-based store that saves secrets as JSON in the user's data directory. Ideal for local development and testing.
- `InfisicalSecretStore`: a production-ready implementation that integrates with Infisical for centralized, secure secret management.

**Configuration via environment variables:**
- The secret store is selected at runtime via the `HARMONY_SECRET_STORE` environment variable (`file` or `infisical`).
- Infisical integration is configured through `HARMONY_SECRET_INFISICAL_*` variables.
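A minimal usage sketch of the API described above. The secret struct is hypothetical, and the exact signatures (error types, serde bounds) may differ from the actual crate:

```rs
use harmony_secret::{Secret, SecretManager};
use serde::{Deserialize, Serialize};

// Hypothetical secret type; the derive uses the struct name ("OpnSenseApi")
// as the secret key.
#[derive(Secret, Serialize, Deserialize, Debug)]
struct OpnSenseApi {
    username: String,
    password: String,
}

// Sketch only: assumes the SecretManager calls are async and fallible.
async fn store_and_load() -> Result<(), Box<dyn std::error::Error>> {
    let creds = OpnSenseApi {
        username: "admin".into(),
        password: "change-me".into(),
    };
    SecretManager::set(&creds).await?;
    let _loaded = SecretManager::get::<OpnSenseApi>().await?;
    Ok(())
}
```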
## What's Not Done (Future Work)
Automated Infisical Setup: The initial configuration for the Infisical backend is currently manual. Developers must create a project and a Universal Auth identity in Infisical and set the corresponding environment variables to run tests or use the backend. The new test_harmony_secret_infisical.sh script serves as a clear example of the required variables.
This new secrets module provides a solid and secure foundation for managing credentials for components like OPNsense, Kubernetes, and other infrastructure services going forward. Even with the manual first-time setup for Infisical, this architecture is robust enough to serve our needs for the foreseeable future.
* define Ntfy ingress (naive implementation) based on current target
* use patched Ntfy Helm Chart
* create Ntfy main user only if needed
* add info logs
* better error bubbling
* instrument feature installations
* upgrade prometheus alerting charts if already installed
* harmony_composer params to control deployment `target` and `profile`
Co-authored-by: Ian Letourneau <letourneau.ian@gmail.com>
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Reviewed-on: #107
The MultiProgress wasn't used properly, leading to conflicting progress bars (among our own progress bars, as well as with the log wrapper).
This PR introduces a layer on top of `indicatif::MultiProgress` to properly handle sections of progress bars, where we can dynamically add/update/remove progress bars in any section.
We can see in the demo that new sections + progress bars are added on the fly and that extra logs (e.g. info logs) are appended on top of the progress bars.
Progress bars are also grouped together based on their parent score.
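For reference, a minimal sketch of the underlying `indicatif` primitives this layer wraps. The section API itself is not shown, and the messages/template are illustrative:

```rs
use indicatif::{MultiProgress, ProgressBar, ProgressStyle};

fn demo() {
    let multi = MultiProgress::new();

    // A spinner acting as a "section" header (the PR's layer manages these groups).
    let section = multi.add(ProgressBar::new_spinner());
    section.set_message("Deploying score: monitoring");

    // A task bar added under the section.
    let task = multi.add(ProgressBar::new(3));
    task.set_style(
        ProgressStyle::with_template("{prefix:>20} [{bar:30}] {pos}/{len}").unwrap(),
    );
    task.set_prefix("install prometheus");
    for _ in 0..3 {
        task.inc(1);
    }
    task.finish_with_message("done");
    section.finish_and_clear();
}
```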
Co-authored-by: Ian Letourneau <letourneau.ian@gmail.com>
Co-authored-by: johnride <jg@nationtech.io>
Reviewed-on: #101
The CI pipeline (`./check.sh`) was failing because of test errors caused by the instrumentation framework complaining that no subscribers/listeners were registered.
Instead of setting up all tests to run with a dummy subscriber, move the implementation of the instrumentation behind a feature flag so that it runs only for tests.
There's a catch though: the `#[cfg(test)]` directive works only when directly testing the crate. If a crate `A` depends on another crate `B`, `B` will be compiled as usual (i.e. not in test mode), which will not trigger the `test` flag.
So we need to introduce our own `testing` feature flag in `harmony` core and import the crate with that feature enabled (only during dev/test).
More info: https://github.com/rust-lang/rust/issues/59168
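A sketch of the approach, assuming a `testing` feature on the `harmony` crate (the function name is illustrative):

```rs
// In the `harmony` crate: code compiled only when the `testing` feature is enabled.
// `#[cfg(test)]` would not work here because dependent crates compile `harmony`
// in normal (non-test) mode.
#[cfg(feature = "testing")]
pub fn register_noop_subscriber() {
    // hypothetical: register a dummy instrumentation subscriber for tests
}

// In a dependent crate's Cargo.toml (shown as a comment, since this snippet is Rust):
//
// [dev-dependencies]
// harmony = { path = "../harmony", features = ["testing"] }
```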
Co-authored-by: Ian Letourneau <letourneau.ian@gmail.com>
Reviewed-on: https://git.nationtech.io/NationTech/harmony/pulls/102
A first step toward better orchestration of the core flow, even though it feels weird to move this logic into the `Score`. We'll refactor this as soon as we have a better solution.
Co-authored-by: Ian Letourneau <letourneau.ian@gmail.com>
Reviewed-on: #100
Introduce a way to instrument what happens within Harmony and around Harmony (e.g. in the CLI or in Composer).
The goal is to provide visual feedback to the end users and inform them of the progress of their tasks (e.g. deployment) as clearly as possible. It is important to also let them know of the outcome of their tasks (what was created, where to access stuff, etc.).
<img src="https://media.discordapp.net/attachments/1295353830300713062/1400289618636574741/demo.gif?ex=688c18d5&is=688ac755&hm=2c70884aacb08f7bd15cbb65a7562a174846906718aa15294bbb238e64febbce&=" />
## Changes
### Instrumentation architecture
Extensibility and ease of use are key here, while preserving type safety as much as possible.
The proposed API is quite simple:
```rs
// Emit an event
instrumentation::instrument(HarmonyEvent::TopologyPrepared {
    topology: "k8s-anywhere",
    outcome: Outcome::success("yay"),
});

// Consume events
instrumentation::subscribe("Harmony CLI Logger", async |event| {
    match event {
        HarmonyEvent::TopologyPrepared { topology, outcome } => todo!(),
        _ => {}
    }
});
```
#### Current limitations
* this API is not very extensible, but it could be easily changed to allow end users to define custom events in addition to Harmony core events
* we use a tokio broadcast channel behind the scenes, so only in-process communication can happen; but it could easily be changed to a more flexible communication mechanism since the implementation details are hidden
### `harmony_composer` VS `harmony_cli`
As Harmony Composer launches commands from Harmony (CLI), they live in different processes. Because of this, we cannot easily make all the logging happen in one place (Harmony Composer) and get rid of Harmony CLI. At least not without introducing additional complexity such as communication through a server, unix socket, etc.
So for the time being, it was decided to preserve both `harmony_composer` and `harmony_cli` and let them independently log their stuff and handle their own responsibilities:
* `harmony_composer`: takes care only of setting up & packaging a project, delegates everything else to `harmony_cli`
* `harmony_cli`: takes care of configuring & running Harmony
### Logging & prompts
* [indicatif](https://github.com/console-rs/indicatif) is used to create progress bars and track progress within Harmony, Harmony CLI, and Harmony Composer
* [inquire](https://github.com/mikaelmello/inquire) is preserved, but was removed from `harmony` (core) as UI concerns shouldn't go that deep
* note: for now the only prompt we had was simply deleted, we'll have to find a better way to prompt stuff in the future
## Todos
* [ ] Update/Create ADRs
* [ ] Continue instrumentation for missing branches
* [ ] Allow instrumentation to emit and subscribe to custom events
Co-authored-by: Ian Letourneau <letourneau.ian@gmail.com>
Reviewed-on: #91
Reviewed-by: johnride <jg@nationtech.io>
A Maestro was initialized with a new inventory simply to provide a
localhost topology to install K3D locally. But in practice, the K3D
installation wasn't actually using the topology or the inventory.
Directly installing K3D within the K8s Anywhere topology makes things
simpler and actually forces the topology to provide the capabilities
required to install K3D.
- Added functionality to generate a Helm chart for the application.
- Implemented chart packaging and pushing to an OCI registry.
- Utilized `helm package` and `helm push` commands.
- Included configurable registry URL and project name.
- Added tests to verify chart generation and packaging.
- Improved error handling and logging.
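A rough sketch of the underlying helm invocations behind the list above. The registry URL, project name, and packaged chart filename are placeholders; the real implementation wires these through configuration:

```rs
use std::process::Command;

// Sketch: package the generated chart and push it to an OCI registry.
fn package_and_push(chart_dir: &str, registry_url: &str, project: &str) -> std::io::Result<()> {
    // `helm package` produces a <name>-<version>.tgz archive from the chart directory.
    let status = Command::new("helm").args(["package", chart_dir]).status()?;
    assert!(status.success(), "helm package failed");

    // `helm push` uploads the packaged chart to an OCI registry.
    let remote = format!("oci://{registry_url}/{project}");
    let status = Command::new("helm")
        .args(["push", "myapp-0.1.0.tgz", remote.as_str()]) // filename is a placeholder
        .status()?;
    assert!(status.success(), "helm push failed");
    Ok(())
}
```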
Using `Command::output()` executes the command and waits for it to finish before returning the output.
However, in some cases the user might need to interact with the CLI before it can continue, which makes the command execution hang.
Instead, using `Command::spawn()` allows forwarding stdin/stdout to the parent process.
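A minimal illustration of the difference (not the actual Harmony code):

```rs
use std::process::{Command, Stdio};

// `output()` captures stdout/stderr and blocks until the child exits, so an
// interactive prompt in the child would hang forever. `spawn()` with inherited
// stdio lets the user interact with the child directly.
fn run_interactive(program: &str, args: &[&str]) -> std::io::Result<std::process::ExitStatus> {
    Command::new(program)
        .args(args)
        .stdin(Stdio::inherit())
        .stdout(Stdio::inherit())
        .stderr(Stdio::inherit())
        .spawn()?
        .wait()
}
```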
Reviewed-on: #71
Reviewed-by: johnride <jg@nationtech.io>
With this architecture, we have an extensible application module for which we can easily define new features and add them to application scores.
All this is driven by the ApplicationInterpret, which understands features and makes sure they are "installed".
The drawback of this design is that we now have three different places to launch scores within Harmony: Maestro, Topology and Interpret. This is an architectural smell and I am not sure how to deal with it at the moment.
However, all these places where execution is performed make sense semantically: an ApplicationInterpret must understand ApplicationFeatures and can very well be responsible for them. The same goes for a Topology, which provides features itself by composition (e.g. K8sAnywhereTopology implements TenantManager), so it is natural for this very implementation to know how to install itself.
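To illustrate the composition point, a purely hypothetical sketch; the trait method is illustrative and not the actual Harmony definition:

```rs
// Hypothetical capability trait that a topology provides through composition.
trait TenantManager {
    fn provision_tenant(&self, name: &str);
}

struct K8sAnywhereTopology;

impl TenantManager for K8sAnywhereTopology {
    fn provision_tenant(&self, name: &str) {
        // The topology itself knows how to install/configure whatever this
        // capability requires, which is why it makes sense for it to execute work.
        println!("provisioning tenant {name}");
    }
}
```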
Co-authored-by: Ian Letourneau <ian@noma.to>
Reviewed-on: #70
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
- Implemented a dry-run mode for K8s resource patching, displaying diffs before applying changes.
- Added the `similar` dependency for calculating and displaying text diffs.
- Enhanced K8s resource application to handle various port specifications in NetworkPolicy ingress rules.
- Added support for port ranges and lists of ports in NetworkPolicy rules.
- Updated K8s client to utilize the dry-run configuration setting.
- Added configuration option `HARMONY_DRY_RUN` to enable or disable dry-run mode.
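For the diff display described in the list above, a minimal sketch using the `similar` crate (not the actual Harmony code):

```rs
use similar::{ChangeTag, TextDiff};

// Print a unified-style +/- diff between the live resource and the desired one.
fn print_diff(current: &str, desired: &str) {
    let diff = TextDiff::from_lines(current, desired);
    for change in diff.iter_all_changes() {
        let sign = match change.tag() {
            ChangeTag::Delete => "-",
            ChangeTag::Insert => "+",
            ChangeTag::Equal => " ",
        };
        print!("{}{}", sign, change);
    }
}
```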
Adds the foundation for managing tenant credentials, including:
- `TenantCredentialScore` for scoring credential-related operations.
- `TenantCredentialManager` trait for creating users.
- `CredentialMetadata` struct to store credential information.
- `CredentialData` enum to hold credential content.
- `TenantCredentialBundle` struct to encapsulate metadata and content.
This provides a starting point for implementing credential creation, storage, and retrieval within the harmony system.
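A purely illustrative sketch of how these types might fit together; the field and variant names are hypothetical, not the actual definitions:

```rs
// Hypothetical shapes for the credential types listed above.
struct CredentialMetadata {
    name: String,
    tenant: String,
}

enum CredentialData {
    UserPassword { username: String, password: String },
    Token(String),
}

struct TenantCredentialBundle {
    metadata: CredentialMetadata,
    data: CredentialData,
}
```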
Reviewed-on: #63
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
- Added `additional_allowed_cidr_ingress` and `additional_allowed_cidr_egress` fields to `TenantNetworkPolicy` to allow specifying custom CIDR blocks for network access.
- Updated K8sTenantManager to parse and apply these CIDR rules to NetworkPolicy ingress and egress rules.
- Added `cidr` dependency to `harmony_macros` and a custom proc macro `cidrv4` to easily parse CIDR strings.
- Updated TenantConfig to default inter-tenant and internet egress to deny-all, and added default empty vectors for CIDR ingress and egress.
- Updated ResourceLimits to implement `Default`.
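A small sketch of how the new fields and the `cidrv4!` macro could be used together. It assumes `TenantNetworkPolicy` derives `Default` and that `cidrv4!` expands to a CIDR value; the exact types may differ:

```rs
// Sketch: allow one extra ingress and one extra egress CIDR for a tenant.
let policy = TenantNetworkPolicy {
    additional_allowed_cidr_ingress: vec![cidrv4!("10.100.0.0/24")],
    additional_allowed_cidr_egress: vec![cidrv4!("203.0.113.0/24")],
    ..Default::default()
};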
Reviewed-on: #60
Co-authored-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
Co-committed-by: Jean-Gabriel Gill-Couture <jg@nationtech.io>
This Id implementation is optimized for ease of use. Ids are prefixed with the unix epoch and suffixed with 7 alphanumeric characters, but an Id can also contain any String the user wants to pass in.
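A minimal sketch of the generated form, using the `rand` crate; the actual Id type, separator, and formatting may differ:

```rs
use rand::{distributions::Alphanumeric, Rng};
use std::time::{SystemTime, UNIX_EPOCH};

// Unix-epoch prefix plus a 7-character alphanumeric suffix.
fn new_id() -> String {
    let epoch = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before unix epoch")
        .as_secs();
    let suffix: String = rand::thread_rng()
        .sample_iter(&Alphanumeric)
        .take(7)
        .map(char::from)
        .collect();
    format!("{epoch}-{suffix}") // separator is illustrative
}
```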
# TODO: build ARM images and MacOS binaries (or other targets) too
- name: Update snapshot-latest tag
  run: |
    git config user.name "Gitea CI"
    git config user.email "ci@nationtech.io"
    git tag -f snapshot-latest
    git push origin snapshot-latest --force
- name: Install jq
  # The current image includes apt lists so we don't have to apt update and rm /var/lib/apt... every time. But if the image is optimized it won't work anymore
  run: apt install -y jq
- name: Create or update release
  run: |
    # First, check if release exists and delete it if it does
### Unify
- **Project Scaffolding**
- **Infrastructure Provisioning**
- **Application Deployment**
- **Day-2 operations**

All in **one strongly-typed Rust codebase**.

### Deploy anywhere
From a **developer laptop** to a **global production cluster**, a single **source of truth** drives the **full software lifecycle.**

---

This is the Harmony CLI, a minimal implementation.

`cargo run --bin example-cli -- --help`

The current help text:

```
Usage: example-cli [OPTIONS]

Options:
  -y, --yes              Run score(s) or not
  -f, --filter <FILTER>  Filter query
  -i, --interactive      Run interactive TUI or not
  -a, --all              Run all or nth, defaults to all
  -n, --number <NUMBER>  Run nth matching, zero indexed [default: 0]
  -l, --list             list scores, will also be affected by run filter
  -h, --help             Print help
  -V, --version          Print version
```

This will launch Harmony's minimalist terminal UI, which embeds a few demo scores.
Usage instructions will be displayed at the bottom of the TUI.
## 1 · The Harmony Philosophy
Infrastructure is essential, but it shouldn’t be your core business. Harmony is built on three guiding principles that make modern platforms reliable, repeatable, and easy to reason about.

| Principle | What it means |
| --- | --- |
| **Infrastructure as Resilient Code** | Replace sprawling YAML and bash scripts with type-safe Rust. Test, refactor, and version your platform just like application code. |
| **Prove It Works — Before You Deploy** | Harmony uses the compiler to verify that your application’s needs match the target environment’s capabilities at **compile-time**, eliminating an entire class of runtime outages. |
| **One Unified Model** | Software and infrastructure are a single system. Harmony models them together, enabling deep automation—from bare-metal servers to Kubernetes workloads—with zero context switching. |

These principles surface as simple, ergonomic Rust APIs that let teams focus on their product while trusting the platform underneath.

---

## 2 · Quick Start
The snippet below spins up a complete **production-grade Rust + Leptos Webapp** with monitoring. Swap it for your own scores to deploy anything from microservices to machine-learning pipelines.

## Core architecture
Two steps:
- Supporting the field in `opnsense-config-xml`
- Enabling Harmony to control the field

We'll use the `filename` field in the `dhcpd` section of the file as an example.

### Supporting the field
As type checking is enforced, every field from `config.xml` must be known by the code. Each subsection of `config.xml` has its own `.rs` file. For the `dhcpd` section, we'll modify `opnsense-config-xml/src/data/dhcpd.rs`.
When a new field appears in the XML file, an error like this will be thrown and Harmony will panic:

- **Extending Harmony** – write new Scores / Interprets, add hardware like OPNsense firewalls, or embed Harmony in your own tooling (`/docs`).
- **Community** – discussions and roadmap live in [GitLab issues](https://git.nationtech.io/nationtech/harmony/-/issues). PRs, ideas, and feedback are welcome!
---
## 6 · License
Harmony is released under the **GNU AGPL v3**.
> We choose a strong copyleft license to ensure the project—and every improvement to it—remains open and benefits the entire community. Fork it, enhance it, even out-innovate us; just keep it open.
See [LICENSE](LICENSE) for the full text.
---
_Made with ❤️ & 🦀 by NationTech and the Harmony community_
# Architecture Decision Record: Multi-Tenancy Strategy for Harmony Managed Clusters
Initial Author: Jean-Gabriel Gill-Couture
Initial Date: 2025-05-26
## Status
Proposed
## Context
Harmony manages production OKD/Kubernetes clusters that serve multiple clients with varying trust levels and operational requirements. We need a multi-tenancy strategy that provides:
1. **Strong isolation** between client workloads while maintaining operational simplicity
2. **Controlled API access** allowing clients self-service capabilities within defined boundaries
3. **Security-first approach** protecting both the cluster infrastructure and tenant data
4. **Harmony-native implementation** using our Score/Interpret pattern for automated tenant provisioning
5. **Scalable management** supporting both small trusted clients and larger enterprise customers
The official Kubernetes multi-tenancy documentation identifies two primary models: namespace-based isolation and virtual control planes per tenant. Given Harmony's focus on operational simplicity, provider-agnostic abstractions (ADR-003), and hexagonal architecture (ADR-002), we must choose an approach that balances security, usability, and maintainability.
Our clients represent a hybrid tenancy model:
- **Customer multi-tenancy**: Each client operates independently with no cross-tenant trust
- **Team multi-tenancy**: Individual clients may have multiple team members requiring coordinated access
- **API access requirement**: Unlike pure SaaS scenarios, clients need controlled Kubernetes API access for self-service operations
The official Kubernetes documentation on multi-tenancy heavily inspired this ADR: https://kubernetes.io/docs/concepts/security/multi-tenancy/
## Decision
Implement **namespace-based multi-tenancy** with the following architecture:
### 1. Network Security Model
- **Private cluster access**: Kubernetes API and OpenShift console accessible only via WireGuard VPN
- **No public exposure**: Control plane endpoints remain internal to prevent unauthorized access attempts
- **VPN-based authentication**: Initial access control through WireGuard client certificates
### 2. Tenant Isolation Strategy
- **Dedicated namespace per tenant**: Each client receives an isolated namespace with access limited only to the required resources and operations
- **Complete network isolation**: NetworkPolicies prevent cross-namespace communication while allowing full egress to public internet
- **Resource governance**: ResourceQuotas and LimitRanges enforce CPU, memory, and storage consumption limits
- **Storage access control**: Clients can create PersistentVolumeClaims but cannot directly manipulate PersistentVolumes or access other tenants' storage
### 3. Access Control Framework
- **Principle of Least Privilege**: RBAC grants only necessary permissions within tenant namespace scope
- **Namespace-scoped**: Clients can create/modify/delete resources within their namespace
- **Cluster-level restrictions**: No access to cluster-wide resources, other namespaces, or sensitive cluster operations
- **Whitelisted operations**: Controlled self-service capabilities for ingress, secrets, configmaps, and workload management
### 4. Identity Management Evolution
- **Phase 1**: Manual provisioning of VPN access and Kubernetes ServiceAccounts/Users
- **Phase 2**: Migration to Keycloak-based identity management (aligning with ADR-006) for centralized authentication and lifecycle management
### 5. Harmony Integration
- **TenantScore implementation**: Declarative tenant provisioning using Harmony's Score/Interpret pattern
This ADR establishes the foundation for secure, scalable multi-tenancy in Harmony-managed clusters while maintaining operational simplicity and cost effectiveness. A follow-up ADR will detail the Tenant abstraction and user management mechanisms within the Harmony framework.
As Harmony's goal is to make software delivery easier, we must provide an easy way for developers to express their app's semantics and dependencies with great abstractions, in a similar fashion to what the score.dev project is doing.
Thus, we started working on ways to package common types of applications such as LAMP, beginning with `LAMPScore`.
Now it is time for the next step: we want to pave the way towards complete lifecycle automation. To do this, we will start with a way to execute Harmony's modules easily from anywhere, starting with locally and in CI environments.
## Decision
To achieve easy, portable execution of Harmony, we will follow this architecture:
- Host a basic Harmony release that is compiled with the CLI by our Gitea/GitHub server
- This binary will do the following: check if there is a `harmony` folder in the current path
  - If yes
    - Check if cargo is available locally and compile the harmony binary, or compile it using a Rust docker container; if neither cargo nor a container runtime is available, output a message explaining the situation
    - Run the newly compiled binary. (Ideally using PID handoff like `exec` does, but some research around this should be done. I think handing off the process helps with OS interaction such as terminal apps, signals, exit codes, process handling, etc., but there might be some side effects.)
  - If not
    - Suggest initializing a project by auto-detecting what the project looks like
    - When the project type cannot be auto-detected, provide links to Harmony's documentation on how to set up a project, a link to the examples folder, and ask the user whether they want to initialize an empty Harmony project in the current folder, containing:
      - harmony/Cargo.toml with dependencies set
      - harmony/src/main.rs with an example LAMPScore setup, ready to run
- This same binary can be used in a CI environment to run the target project's Harmony module. By default, we provide these opinionated steps:
  1. **An empty check step.** The purpose of this step is to run all tests and checks against the codebase. For complex projects this could involve a very complex pipeline of test environment setup and execution, but this is out of scope for now. This is not handled by Harmony. For projects with automatic setup, we can fill this step with something like `cargo fmt --check; cargo test; cargo build`, but Harmony is not directly involved in the execution of this step.
  2. **Package and publish.** Once all checks have passed, the production-ready container is built and pushed to a registry. This is done by Harmony.
  3. **Deploy to staging automatically.**
  4. **Run a sanity check on staging.** As Harmony is responsible for deploying, Harmony should have all the knowledge of how to perform a sanity check on the staging environment. This will, most of the time, be a simple verification of the Kubernetes health of all deployed components, and a poke on the public endpoint when there is one.
  5. **Deploy to production automatically.** Many projects will require manual approval here; this can easily be set up in the CI afterwards, but our opinion is that automatic deployment should be the default.
  6. **Run a sanity check on production.** Same check as staging, but on production.

*Note on providing a base pipeline:* Having a complete pipeline set up automatically will encourage development teams to build upon it by adding tests where they belong. The goal here is to provide an opinionated solution that works for most small and large projects. Of course, many organizations will need to add steps such as deploying to sandbox environments, requiring more advanced approvals, or more complex publication and coordination with other projects. But this encompasses the basics required to build and deploy software reliably at any scale.
### Environment setup
TBD: For now, environments (tenants) will be set up and configured manually. Harmony will rely on the kubeconfig provided in the environment where it is running to deploy in the namespace.
As for the CD tool (such as Argo or Flux), it will be activated by default by Harmony when using application-level Scores such as LAMPScore, in a similar way to how the container is automatically built. Then, CI deployment steps will notify the CD tool of the new release to deploy using its API.
## Rationale
Reasoning behind the decision
## Consequences
Pros/Cons of chosen solution
## Alternatives considered
Pros/Cons of various proposed solutions considered
We need to send notifications (typically from AlertManager/Prometheus) and we need to receive said notifications on mobile devices in some way, whether it's push messages, SMS, phone calls, email, etc., or all of the above.
## Decision
We should go with https://ntfy.sh, but host it ourselves.
`ntfy` is an open source solution written in Go that has the features we need.
## Rationale
`ntfy` has pretty much everything we need (push notifications, email forwarding, receives via webhook), and nothing/not much we don't. Good fit, lightweight.
## Consequences
Pros:
- topics, with ACLs
- lightweight
- reliable
- easy to configure
- mobile app
- the mobile app can listen via websocket, poll, or receive via Firebase/GCM on Android, or similar on iOS.
- Forward to email
- Text-to-Speech phone call messages using Twilio integration
- Operates based on simple HTTP requests/Webhooks, easily usable via AlertManager
Cons:
- No SMS pushes
- SQLite DB, which makes it harder to run highly available / scale
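Since publishing to ntfy is just an HTTP POST to a topic URL, anything that can emit a webhook (e.g. Alertmanager) or a few lines of code can push to it. A minimal sketch using `reqwest`; the server URL and topic are placeholders:

```rs
// Sketch: publish a message to a self-hosted ntfy topic.
async fn notify(message: &str) -> Result<(), reqwest::Error> {
    reqwest::Client::new()
        .post("https://ntfy.example.com/harmony-alerts")
        .body(message.to_owned())
        .send()
        .await?
        .error_for_status()?;
    Ok(())
}
```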
## Alternatives considered
[AWS SNS](https://aws.amazon.com/sns/):
Pros:
- highly reliable
- no hosting needed
Cons:
- no control, not self hosted
- costs (per usage)
[Apprise](https://github.com/caronc/apprise):
Pros:
- Way more ways of sending notifications
- Can use ntfy as one of the backends/ways of sending
Cons:
- Way too overkill for what we need in terms of features
- **Visual:** Clean and simple. Your company logo (NationTech) and the Harmony logo.
---
#### **Slide 2: The YAML Labyrinth**
**Goal:** Get every head in the room nodding in agreement. Start with their world, not yours.
- **Visual:**
- Option A: "The Pull Request from Hell". A screenshot of a GitHub pull request for a seemingly minor change that touches dozens of YAML files across multiple directories. A sea of red and green diffs that is visually overwhelming.
- Option B: A complex flowchart connecting dozens of logos: Terraform, Ansible, K8s, Helm, etc.
- **Narration:**\
[...ADD SOMETHING FOR INTRODUCTION...]\
"We love the power that tools like Kubernetes and the CNCF landscape have given us. But let's be honest... when did our infrastructure code start looking like _this_?"\
"We have GitOps, which is great. But it often means we're managing this fragile cathedral of YAML, Helm charts, and brittle scripts. We spend more time debugging indentation and tracing variables than we do building truly resilient systems."
---
#### **Slide 3: The Real Cost of Infrastructure**
- **Visual:** "The Jenga Tower of Tools". A tall, precarious Jenga tower where each block is the logo of a different tool (Terraform, K8s, Helm, Ansible, Prometheus, ArgoCD, etc.). One block near the bottom is being nervously pulled out.
- **Narration:**
"The real cost isn't just complexity; it's the constant need to choose, learn, integrate, and operate a dozen different tools, each with its own syntax and failure modes. It's the nagging fear that a tiny typo in a config file could bring everything down. Click-ops isn't the answer, but the current state of IaC feels like we've traded one problem for another."
---
#### **Slide 4: The Broken Promise of "Code"**
**Goal:** Introduce the core idea before introducing the product. This makes the solution feel inevitable.
- **(Initial Visual):** A two-panel slide.
- **Left Panel Title: "The Plan"** - A terminal showing a green, successful `terraform plan` output.
- **Right Panel Title: "The Reality"** - The _next_ screen in the terminal, showing the `terraform apply` failing with a cascade of red error text.
- **Narration:**
"We call our discipline **Infrastructure as Code**. And we've all been here. Our 'compiler' is a `terraform plan` that says everything looks perfect. We get the green light."
(Pause for a beat)
"And then we `apply`, and reality hits. It fails halfway through, at runtime, when it's most expensive and painful to fix."
**(Click to transition the slide)**
- **(New Visual):** The entire slide is replaced by a clean screenshot of a code editor (like nvim 😉) showing Harmony's Rust DSL. A red squiggly line is under a config line. The error message is clear in the "Problems" panel: `error: Incompatible deployment. Production target 'gcp-prod-cluster' requires a StorageClass with 'snapshots' capability, but 'standard-sc' does not provide it.`
- **Narration (continued):**
"In software development, we solved these problems years ago. We don't accept 'it compiled, but crashed on startup'. We have real tools, type systems, compilers, test frameworks, and IDEs that catch our mistakes before they ever reach production. **So, what if we could treat our entire infrastructure... like a modern, compiled application?**"
"What if your infrastructure code could get compile-time checks, straight into the editor... instead of runtime panics and failures at 3 AM in production?"
---
#### **Slide 5: Introducing Harmony**
**Goal:** Introduce Harmony as the answer to the "What If?" question.
- **Visual:** The Harmony logo, large and centered.
- **Tagline:** `Infrastructure in type-safe Rust. No YAML required.`
- **Narration:**
"This is Harmony. It's an open-source orchestrator that lets you define your entire stack — from a dev laptop to a multi-site bare-metal cluster—in a single, type-safe Rust codebase."
---
#### **Slide 6: Before & After**
- **Visual:** A side-by-side comparison. Left side: A screen full of complex, nested YAML. Right side: 10-15 lines of clean, readable Harmony Rust DSL that accomplishes the same thing.
- **Narration:**
"This is the difference. On the left, the fragile world of strings and templates. On the right, a portable, verifiable program that describes your apps, your infra, and your operations. We unify scaffolding, provisioning, and Day-2 ops, all verified by the Rust compiler. But enough slides... let's see it in action."
---
#### **Slide 7: Live Demo: Zero to Monitored App**
**Goal:** Show, don't just tell. Make it look effortless. This is where you build the "dream."
- **Visual:** Your terminal/IDE, ready to go.
- **Narration Guide:**
"Okay, for this demo, we're going to take a standard web app from GitHub. Nothing special about it."
_(Show the repo)_
"Now, let's bring it into Harmony. This is the entire definition we need to describe the application and its needs."
_(Show the Rust DSL)_
"First, let's run it locally on k3d. The exact same definition for dev as for prod."
_(Deploy locally, show it works)_
"Cool. But a real app needs monitoring. In Harmony, that's just adding a feature to our code."
_(Uncomment one line: `.with_feature(Monitoring)` and redeploy)_
"And just like that, we have a fully configured Prometheus and Grafana stack, scraping our app. No YAML, no extra config."
"Finally, let's push this to our production staging cluster. We just change the target and specify our multi-site Ceph storage."
_(Deploy to the remote cluster)_
"And there it is. We've gone from a simple web app to a monitored, enterprise-grade service in minutes."
---
#### **Slide 8: Live Demo: Embracing Chaos**
**Goal:** Prove the "predictable" and "resilient" claims in the most dramatic way possible.
- **Visual:** A slide showing a map or diagram of your distributed infrastructure (the different data centers). Then switch back to your terminal.
- **Narration Guide:**
"This is great when things are sunny. But production is chaos. So... let's break things. On purpose."
"First, a network failure." _(Kill a switch/link, show app is still up)_
"Now, let's power off a storage server." _(Force off a server, show Ceph healing and the app is unaffected)_
"How about a control plane node?" _(Force off a k8s control plane, show the cluster is still running)_
"Okay, for the grand finale. What if we have a cascading failure? I'm going to kill _another_ storage server. This should cause a total failure in this data center."
_(Force off the second server, narrate what's happening)_
"And there it is... Ceph has lost quorum in this site... and Harmony has automatically failed everything over to our other datacenter. The app is still running."
---
#### **Slide 9: The New Reality**
**Goal:** Summarize the dream and tell the audience what you want them to do.
- **Visual:** The clean, simple Harmony Rust DSL code from Slide 6. A summary of what was just accomplished is listed next to it: `✓ GitHub to Prod in minutes`, `✓ Type-Safe Validation`, `✓ Built-in Monitoring`, `✓ Automated Multi-Site Failover`.
- **Narration:**
"So, in just a few minutes, we went from a simple web app to a multi-site, monitored, and chaos-proof production deployment. We did it with a small amount of code that is easy to read, easy to verify, and completely portable. This is our vision: to offload the complexity, and make infrastructure simple, predictable, and even fun again."
---
#### **Slide 10: Join Us**
- **Visual:** A clean, final slide with QR codes and links.
- GitHub Repo (`github.com/nation-tech/harmony`)
- Website (`harmony.sh` or similar)
- Your contact info (`jg@nation.tech` / LinkedIn / Twitter)
- **Narration:**
"Harmony is open-source, AGPLv3. We believe this is the future, but we're just getting started. We know this crowd has great infrastructure minds out there, and we need your feedback. Please, check out the project on GitHub. Star it if you like what you see. Tell us what's missing. Let's build this future together. Thank you."