Compare commits

..

9 Commits

SHA1 Message Date
bf84bffd57 wip: config + secret merge with e2e sso examples incoming 2026-03-23 23:26:42 -04:00
d4613e42d3 wip: openbao + zitadel e2e setup and test for harmony_config 2026-03-22 21:27:06 -04:00
6a57361356 chore: Update config roadmap (checks failed: Run Check Script / check (pull_request), failing after 12s) 2026-03-22 19:04:16 -04:00
d0d4f15122 feat(config): Example prompting (checks failed: Run Check Script / check (pull_request), failing after 14s) 2026-03-22 18:18:57 -04:00
93b83b8161 feat(config): Sqlite storage and example 2026-03-22 17:43:12 -04:00
6ca8663422 wip: Roadmap for config 2026-03-22 16:57:36 -04:00
f6ce0c6d4f chore: Harmony short term roadmap 2026-03-22 11:43:43 -04:00
8a1eca21f7 Merge branch 'feat/harmony_assets' into feature/kvm-module 2026-03-22 11:26:04 -04:00
ccc26e07eb feat: harmony_asset crate to manage assets, local, s3, http urls, etc (checks failed: Run Check Script / check (pull_request), failing after 17s) 2026-03-21 11:10:51 -04:00
46 changed files with 13819 additions and 188 deletions

Cargo.lock — generated, new file (9,210 lines; diff suppressed because it is too large)

@@ -25,6 +25,7 @@ members = [
"harmony_agent/deploy",
"harmony_node_readiness",
"harmony-k8s",
"harmony_assets",
]
[workspace.package]
@@ -39,6 +40,7 @@ derive-new = "0.7"
async-trait = "0.1"
tokio = { version = "1.40", features = [
"io-std",
"io-util",
"fs",
"macros",
"rt-multi-thread",
@@ -75,6 +77,7 @@ base64 = "0.22.1"
tar = "0.4.44"
lazy_static = "1.5.0"
directories = "6.0.0"
futures-util = "0.3"
thiserror = "2.0.14"
serde = { version = "1.0.209", features = ["derive", "rc"] }
serde_json = "1.0.127"
@@ -88,3 +91,5 @@ reqwest = { version = "0.12", features = [
"json",
], default-features = false }
assertor = "0.0.4"
tokio-test = "0.4"
anyhow = "1.0"

ROADMAP.md — new file (29 lines)

@@ -0,0 +1,29 @@
# Harmony Roadmap
Six phases to take Harmony from a working prototype to a production-ready open-source project.
| # | Phase | Status | Depends On | Detail |
|---|-------|--------|------------|--------|
| 1 | [Harden `harmony_config`](ROADMAP/01-config-crate.md) | Not started | — | Test every source, add SQLite backend, wire Zitadel + OpenBao, validate zero-setup UX |
| 2 | [Migrate to `harmony_config`](ROADMAP/02-refactor-harmony-config.md) | Not started | 1 | Replace all 19 `SecretManager` call sites, deprecate direct `harmony_secret` usage |
| 3 | [Complete `harmony_assets`](ROADMAP/03-assets-crate.md) | Not started | 1, 2 | Test, refactor k3d and OKD to use it, implement `Url::Url`, remove LFS |
| 4 | [Publish to GitHub](ROADMAP/04-publish-github.md) | Not started | 3 | Clean history, set up GitHub as community hub, CI on self-hosted runners |
| 5 | [E2E tests: PostgreSQL & RustFS](ROADMAP/05-e2e-tests-simple.md) | Not started | 1 | k3d-based test harness, two passing E2E tests, CI job |
| 6 | [E2E tests: OKD HA on KVM](ROADMAP/06-e2e-tests-kvm.md) | Not started | 5 | KVM test infrastructure, full OKD installation test, nightly CI |
## Current State (as of branch `feature/kvm-module`)
- `harmony_config` crate exists with `EnvSource`, `LocalFileSource`, `PromptSource`, `StoreSource`. 12 unit tests. **Zero consumers** in workspace — everything still uses `harmony_secret::SecretManager` directly (19 call sites).
- `harmony_assets` crate exists with `Asset`, `LocalCache`, `LocalStore`, `S3Store`. **No tests. Zero consumers.** The `k3d` crate has its own `DownloadableAsset` with identical functionality and full test coverage.
- `harmony_secret` has `LocalFileSecretStore`, `OpenbaoSecretStore` (token/userpass only), `InfisicalSecretStore`. Works but no Zitadel OIDC integration.
- KVM module exists on this branch with `KvmExecutor`, VM lifecycle, ISO download, two examples (`example_linux_vm`, `kvm_okd_ha_cluster`).
- RustFS module exists on `feat/rustfs` branch (2 commits ahead of master).
- 39 example crates, **zero E2E tests**. Unit tests pass across workspace (~240 tests).
- CI runs `cargo check`, `fmt`, `clippy`, `test` on Gitea. No E2E job.
## Guiding Principles
- **Zero-setup first**: A new user clones, runs `cargo run`, gets prompted for config, values persist to local SQLite. No env vars, no external services required.
- **Progressive disclosure**: Local SQLite → OpenBao → Zitadel SSO. Each layer is opt-in.
- **Test what ships**: Every example that works should have an E2E test proving it works.
- **Community over infrastructure**: GitHub for engagement, self-hosted runners for CI.

ROADMAP/01-config-crate.md — new file (623 lines)

@@ -0,0 +1,623 @@
# Phase 1: Harden `harmony_config`, Validate UX, Zero-Setup Starting Point
## Goal
Make `harmony_config` production-ready with a seamless first-run experience: clone, run, get prompted, values persist locally. Then progressively add team-scale backends (OpenBao, Zitadel SSO) without changing any calling code.
## Current State
`harmony_config` now has:
- `Config` trait + `#[derive(Config)]` macro
- `ConfigManager` with ordered source chain
- Five `ConfigSource` implementations:
- `EnvSource` — reads `HARMONY_CONFIG_{KEY}` env vars
- `LocalFileSource` — reads/writes `{key}.json` files from a directory
- `SqliteSource` — **NEW**: reads/writes a SQLite database
- `PromptSource` — returns `None` / no-op on set (placeholder for TUI integration)
- `StoreSource<S: SecretStore>` — wraps any `harmony_secret::SecretStore` backend
- 26 unit tests (mock source, env, local file, sqlite, prompt, integration, store graceful fallback)
- Global `CONFIG_MANAGER` static with `init()`, `get()`, `get_or_prompt()`, `set()`
- Two examples: `basic` and `prompting` in `harmony_config/examples/`
- **Zero workspace consumers** — nothing calls `harmony_config` yet
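The ordered resolution the chain performs (first source that answers wins, with graceful fall-through) can be sketched std-only. `Source`, `Fixed`, and `resolve` below are illustrative stand-ins, not the real `ConfigSource` API, which is async and JSON-typed:

```rust
// Std-only sketch of ordered source resolution (env > sqlite > store).
trait Source {
    fn get(&self, key: &str) -> Option<String>;
}

/// A source holding a single fixed entry, standing in for env/sqlite/store.
struct Fixed(&'static str, &'static str);

impl Source for Fixed {
    fn get(&self, key: &str) -> Option<String> {
        (key == self.0).then(|| self.1.to_string())
    }
}

/// Mirrors ConfigManager: walk sources in order, first answer wins.
fn resolve(sources: &[&dyn Source], key: &str) -> Option<String> {
    sources.iter().find_map(|s| s.get(key))
}

fn main() {
    let env = Fixed("Other", "{}"); // env has nothing for AppConfig
    let sqlite = Fixed("AppConfig", r#"{"port":8080}"#);
    // Higher-priority source empty for this key: falls through to sqlite.
    assert_eq!(
        resolve(&[&env, &sqlite], "AppConfig").as_deref(),
        Some(r#"{"port":8080}"#)
    );
    // Higher-priority source set: it shadows the lower one.
    let env2 = Fixed("AppConfig", r#"{"port":9090}"#);
    assert_eq!(
        resolve(&[&env2, &sqlite], "AppConfig").as_deref(),
        Some(r#"{"port":9090}"#)
    );
}
```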
## Tasks
### 1.1 Add `SqliteSource` as the default zero-setup backend ✅
**Status**: Implemented
**Implementation Details**:
- Database location: `~/.local/share/harmony/config/config.db` (directory is auto-created)
- Schema: `config(key TEXT PRIMARY KEY, value TEXT NOT NULL, updated_at TEXT NOT NULL DEFAULT (datetime('now')))`
- Uses `sqlx` with SQLite runtime
- `SqliteSource::open(path)` - opens/creates database at given path
- `SqliteSource::default()` - uses default Harmony data directory
**Files**:
- `harmony_config/src/source/sqlite.rs` - new file
- `harmony_config/Cargo.toml` - added `sqlx = { workspace = true, features = ["runtime-tokio", "sqlite"] }`
- `Cargo.toml` - added `anyhow = "1.0"` to workspace dependencies
**Tests** (all passing):
- `test_sqlite_set_and_get` — round-trip a `TestConfig` struct
- `test_sqlite_get_returns_none_when_missing` — key not in DB
- `test_sqlite_overwrites_on_set` — set twice, get returns latest
- `test_sqlite_concurrent_access` — two tasks writing different keys simultaneously
### 1.1.1 Add Config example to show exact DX and confirm functionality ✅
**Status**: Implemented
**Examples created**:
1. `harmony_config/examples/basic.rs` - demonstrates:
- Zero-setup SQLite backend (auto-creates directory)
- Using the `#[derive(Config)]` macro
- Environment variable override (`HARMONY_CONFIG_TestConfig` overrides SQLite)
- Direct set/get operations
- Persistence verification
2. `harmony_config/examples/prompting.rs` - demonstrates:
- Config with no defaults (requires user input via `inquire`)
- `get()` flow: env > sqlite > prompt fallback
- `get_or_prompt()` for interactive configuration
- Full resolution chain
- Persistence of prompted values
### 1.2 Make `PromptSource` functional ✅
**Status**: Implemented with design improvement
**Key Finding - Bug Fixed During Implementation**:
The original design had a critical bug in `get_or_prompt()`:
```rust
// OLD (BUGGY) - breaks on first source where set() returns Ok(())
for source in &self.sources {
if source.set(T::KEY, &value).await.is_ok() {
break;
}
}
```
Since `EnvSource.set()` returns `Ok(())` (successfully sets env var), the loop would break immediately and never write to `SqliteSource`. Prompted values were never persisted!
**Solution - Added `should_persist()` method to ConfigSource trait**:
```rust
#[async_trait]
pub trait ConfigSource: Send + Sync {
async fn get(&self, key: &str) -> Result<Option<serde_json::Value>, ConfigError>;
async fn set(&self, key: &str, value: &serde_json::Value) -> Result<(), ConfigError>;
fn should_persist(&self) -> bool {
true
}
}
```
- `EnvSource::should_persist()` returns `false` - shouldn't persist prompted values to env vars
- `PromptSource::should_persist()` returns `false` - doesn't persist anyway
- `get_or_prompt()` now skips sources where `should_persist()` is `false`
**Updated `get_or_prompt()`**:
```rust
for source in &self.sources {
if !source.should_persist() {
continue;
}
if source.set(T::KEY, &value).await.is_ok() {
break;
}
}
```
**Tests**:
- `test_prompt_source_always_returns_none`
- `test_prompt_source_set_is_noop`
- `test_prompt_source_does_not_persist`
- `test_full_chain_with_prompt_source_falls_through_to_prompt`
### 1.3 Integration test: full resolution chain ✅
**Status**: Implemented
**Tests**:
- `test_full_resolution_chain_sqlite_fallback` — env not set, sqlite has value, get() returns sqlite
- `test_full_resolution_chain_env_overrides_sqlite` — env set, sqlite has value, get() returns env
- `test_branch_switching_scenario_deserialization_error` — old struct shape in sqlite returns Deserialization error
### 1.4 Validate Zitadel + OpenBao integration path ⏳
**Status**: Planning phase - detailed execution plan below
**Background**: ADR 020-1 documents the target architecture for Zitadel OIDC + OpenBao integration. This task validates the full chain by deploying Zitadel and OpenBao on a local k3d cluster and demonstrating an end-to-end example.
**Architecture Overview**:
```
┌─────────────────────────────────────────────────────────────────────┐
│ Harmony CLI / App │
│ │
│ ConfigManager: │
│ 1. EnvSource ← HARMONY_CONFIG_* env vars (highest priority) │
│ 2. SqliteSource ← ~/.local/share/harmony/config/config.db │
│ 3. StoreSource ← OpenBao (team-scale, via Zitadel OIDC) │
│ │
│ When StoreSource fails (OpenBao unreachable): │
│ → returns Ok(None), chain falls through to SqliteSource │
└─────────────────────────────────────────────────────────────────────┘
┌──────────────────┐ ┌──────────────────┐
│ Zitadel │ │ OpenBao │
│ (IdP + OIDC) │ │ (Secret Store) │
│ │ │ │
│ Device Auth │────JWT──▶│ JWT Auth │
│ Flow (RFC 8628)│ │ Method │
└──────────────────┘ └──────────────────┘
```
**Prerequisites**:
- Docker running (for k3d)
- Rust toolchain (edition 2024)
- Network access to download Helm charts
- `kubectl` (installed automatically with k3d, or pre-installed)
**Step-by-Step Execution Plan**:
#### Step 1: Create k3d cluster for local development
When you run `cargo run -p example-zitadel` (or any example using `K8sAnywhereTopology::from_env()`), Harmony automatically provisions a k3d cluster if one does not exist. By default:
- `use_local_k3d = true` (env: `HARMONY_USE_LOCAL_K3D`, default `true`)
- `autoinstall = true` (env: `HARMONY_AUTOINSTALL`, default `true`)
- Cluster name: **`harmony`** (hardcoded in `K3DInstallationScore::default()`)
- k3d binary is downloaded to `~/.local/share/harmony/k3d/`
- Kubeconfig is merged into `~/.kube/config`, context set to `k3d-harmony`
No manual `k3d cluster create` is needed. If you want to create the cluster manually first:
```bash
# Install k3d (requires sudo or install to user path)
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
# Create the cluster with the same name Harmony expects
k3d cluster create harmony
kubectl cluster-info --context k3d-harmony
```
**Validation**: `kubectl get nodes --context k3d-harmony` shows 1 server node (k3d default)
**Note**: The existing examples use hardcoded external hostnames (e.g., `sso.sto1.nationtech.io`) for ingress. On a local k3d cluster, these hostnames are not routable. For local development you must either:
- Use `kubectl port-forward` to access services directly
- Configure `/etc/hosts` entries pointing to `127.0.0.1`
- Use a k3d loadbalancer with `--port` mappings
#### Step 2: Deploy Zitadel
Zitadel requires the topology to implement `Topology + K8sclient + HelmCommand + PostgreSQL`. The `K8sAnywhereTopology` satisfies all four.
```bash
cargo run -p example-zitadel
```
**What happens internally** (see `harmony/src/modules/zitadel/mod.rs`):
1. Creates `zitadel` namespace via `K8sResourceScore`
2. Deploys a CNPG PostgreSQL cluster:
- Name: `zitadel-pg`
- Instances: **2** (not 1)
- Storage: 10Gi
- Namespace: `zitadel`
3. Resolves the internal DB endpoint (`host:port`) from the CNPG cluster
4. Generates a 32-byte alphanumeric masterkey, stores it as Kubernetes Secret `zitadel-masterkey` (idempotent: skips if it already exists)
5. Generates a 16-char admin password (guaranteed to contain at least one uppercase letter, one lowercase letter, one digit, and one symbol)
6. Deploys Zitadel Helm chart (`zitadel/zitadel` from `https://charts.zitadel.com`):
- `chart_version: None` -- **uses latest chart version** (not pinned)
- No `--wait` flag -- returns before pods are ready
- Ingress annotations are **OpenShift-oriented** (`route.openshift.io/termination: edge`, `cert-manager.io/cluster-issuer: letsencrypt-prod`). On k3d these annotations are silently ignored.
- Ingress includes TLS config with `secretName: "{host}-tls"`, which requires cert-manager. Without cert-manager, TLS termination does not happen at the ingress level.
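The four-character-class password guarantee from step 5 amounts to a small predicate. A sketch; `has_all_classes` is an illustrative name, not the Zitadel module's actual code:

```rust
// Checks the property the generated admin password must satisfy:
// at least one uppercase, one lowercase, one digit, one symbol.
fn has_all_classes(p: &str) -> bool {
    p.chars().any(|c| c.is_ascii_uppercase())
        && p.chars().any(|c| c.is_ascii_lowercase())
        && p.chars().any(|c| c.is_ascii_digit())
        && p.chars().any(|c| c.is_ascii_punctuation())
}

fn main() {
    assert!(has_all_classes("Aa1!xxxxxxxxxxxx")); // 16 chars, all classes present
    assert!(!has_all_classes("alllowercase1!")); // no uppercase
    assert!(!has_all_classes("NoSymbolsHere123")); // no punctuation
}
```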
**Key Helm values set by ZitadelScore**:
- `zitadel.configmapConfig.ExternalDomain`: the `host` field (e.g., `sso.sto1.nationtech.io`)
- `zitadel.configmapConfig.ExternalSecure: true`
- `zitadel.configmapConfig.TLS.Enabled: false` (TLS at ingress, not in Zitadel)
- Admin user: `UserName: "admin"`, Email: **`admin@zitadel.example.com`** (hardcoded, not derived from host)
- Database credentials: injected via `env[].valueFrom.secretKeyRef` from secret `zitadel-pg-superuser` (both user and admin use the same superuser -- there is a TODO to fix this)
**Expected output**:
```
===== ZITADEL DEPLOYMENT COMPLETE =====
Login URL: https://sso.sto1.nationtech.io
Username: admin@zitadel.sso.sto1.nationtech.io
Password: <generated 16-char password>
```
**Note on the success message**: The printed username `admin@zitadel.{host}` does not match the actual configured email `admin@zitadel.example.com`. The actual login username in Zitadel is `admin` (the `UserName` field). This discrepancy exists in the current code.
**Validation on k3d**:
```bash
# Wait for pods to be ready (Helm returns before readiness)
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=zitadel -n zitadel --timeout=300s
# Port-forward to access Zitadel (ingress won't work without proper DNS/TLS on k3d)
kubectl port-forward svc/zitadel -n zitadel 8080:8080
# Access at http://localhost:8080 (note: ExternalSecure=true may cause redirect issues)
```
**Known issues for k3d deployment**:
- `ExternalSecure: true` tells Zitadel to expect HTTPS, but k3d port-forward is HTTP. This may cause redirect loops. Workaround: modify the example to set `ExternalSecure: false` for local dev.
- The CNPG operator must be installed on the cluster. `K8sAnywhereTopology` handles this via the `PostgreSQL` trait implementation, which deploys the operator first.
#### Step 3: Deploy OpenBao
OpenBao requires only `Topology + K8sclient + HelmCommand` (no PostgreSQL dependency).
```bash
cargo run -p example-openbao
```
**What happens internally** (see `harmony/src/modules/openbao/mod.rs`):
1. `OpenbaoScore` directly delegates to `HelmChartScore.create_interpret()` -- there is no custom `execute()` logic, no namespace creation step, no secret generation
2. Deploys OpenBao Helm chart (`openbao/openbao` from `https://openbao.github.io/openbao-helm`):
- `chart_version: None` -- **uses latest chart version** (not pinned)
- `create_namespace: true` -- the `openbao` namespace is created by Helm
- `install_only: false` -- uses `helm upgrade --install`
**Exact Helm values set by OpenbaoScore**:
```yaml
global:
openshift: true # <-- PROBLEM: hardcoded, see below
server:
standalone:
enabled: true
config: |
ui = true
listener "tcp" {
tls_disable = true
address = "[::]:8200"
cluster_address = "[::]:8201"
}
storage "file" {
path = "/openbao/data"
}
service:
enabled: true
ingress:
enabled: true
hosts:
- host: <host field> # e.g., openbao.sebastien.sto1.nationtech.io
dataStorage:
enabled: true
size: 10Gi
storageClass: null # uses cluster default
accessMode: ReadWriteOnce
auditStorage:
enabled: true
size: 10Gi
storageClass: null
accessMode: ReadWriteOnce
ui:
enabled: true
```
**Critical issue: `global.openshift: true` is hardcoded.** The OpenBao Helm chart default is `global.openshift: false`. When set to `true`, the chart adjusts security contexts and may create OpenShift Routes instead of standard Kubernetes Ingress resources. **On k3d (vanilla k8s), this will produce resources that may not work correctly.** Before deploying on k3d, this must be overridden.
**Fix required for k3d**: Either:
1. Modify `OpenbaoScore` to accept an `openshift: bool` field (preferred long-term fix)
2. Or for this example, create a custom example that passes `values_overrides` with `global.openshift=false`
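A sketch of what the preferred fix (option 1) might look like; the struct shape and the `helm_values` helper are assumptions for illustration, not the real `OpenbaoScore`:

```rust
// Hypothetical OpenbaoScore with an explicit openshift flag that flows
// into the Helm values instead of being hardcoded to true.
struct OpenbaoScore {
    host: String,
    openshift: bool, // defaults would come from the topology, false for k3d
}

impl OpenbaoScore {
    fn helm_values(&self) -> String {
        format!(
            "global:\n  openshift: {}\nserver:\n  ingress:\n    hosts:\n      - host: {}\n",
            self.openshift, self.host
        )
    }
}

fn main() {
    let k3d = OpenbaoScore {
        host: "openbao.local".to_string(),
        openshift: false, // vanilla k8s: plain Ingress, default security contexts
    };
    assert!(k3d.helm_values().contains("openshift: false"));
    assert!(k3d.helm_values().contains("host: openbao.local"));
}
```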
**Post-deployment initialization** (manual -- the TODO in `mod.rs` acknowledges this is not automated):
OpenBao starts in a sealed state. You must initialize and unseal it manually. See https://openbao.org/docs/platform/k8s/helm/run/
```bash
# Initialize OpenBao (generates unseal keys + root token)
kubectl exec -n openbao openbao-0 -- bao operator init
# Save the output! It contains 5 unseal keys and the root token.
# Example output:
# Unseal Key 1: abc123...
# Unseal Key 2: def456...
# ...
# Initial Root Token: hvs.xxxxx
# Unseal (requires 3 of 5 keys by default)
kubectl exec -n openbao openbao-0 -- bao operator unseal <key1>
kubectl exec -n openbao openbao-0 -- bao operator unseal <key2>
kubectl exec -n openbao openbao-0 -- bao operator unseal <key3>
```
**Validation**:
```bash
kubectl exec -n openbao openbao-0 -- bao status
# Should show "Sealed: false"
```
**Note**: The ingress has **no TLS configuration** (unlike Zitadel's ingress). Access is HTTP-only unless you configure TLS separately.
#### Step 4: Configure OpenBao for Harmony
Two paths are available depending on the authentication method:
##### Path A: Userpass auth (simpler, for local dev)
The current `OpenbaoSecretStore` supports **token** and **userpass** authentication. It does NOT yet implement the JWT/OIDC device flow described in ADR 020-1.
```bash
# Port-forward to access OpenBao API
kubectl port-forward svc/openbao -n openbao 8200:8200 &
export BAO_ADDR="http://127.0.0.1:8200"
export BAO_TOKEN="<root token from init>"
# Enable KV v2 secrets engine (default mount "secret")
bao secrets enable -path=secret kv-v2
# Enable userpass auth method
bao auth enable userpass
# Create policy granting read/write on harmony/* paths
cat <<'EOF' | bao policy write harmony-dev -
path "secret/data/harmony/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
path "secret/metadata/harmony/*" {
capabilities = ["list", "read", "delete"]
}
EOF
# Create the user with the policy attached
bao write auth/userpass/users/harmony \
password="harmony-dev-password" \
policies="harmony-dev"
```
**Bug in `OpenbaoSecretStore::authenticate_userpass()`**: The `kv_mount` parameter (default `"secret"`) is passed to `vaultrs::auth::userpass::login()` as the auth mount path. This means it calls `POST /v1/auth/secret/login/{username}` instead of the correct `POST /v1/auth/userpass/login/{username}`. **The auth mount and KV mount are conflated into one parameter.**
**Workaround**: Set `OPENBAO_KV_MOUNT=userpass` so the auth call hits the correct mount path. But then KV operations would use mount `userpass` instead of `secret`, which is wrong.
**Proper fix needed**: Split `kv_mount` into two separate parameters: one for the KV v2 engine mount (`secret`) and one for the auth mount (`userpass`). This is a bug in `harmony_secret/src/store/openbao.rs:234`.
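The conflation is easiest to see in the HTTP paths involved (per the OpenBao/Vault HTTP API): userpass login goes through the *auth* mount, while KV v2 reads go through the *KV* mount. A sketch with illustrative helper names:

```rust
// Userpass login endpoint: POST /v1/auth/{auth_mount}/login/{username}
fn login_path(auth_mount: &str, user: &str) -> String {
    format!("/v1/auth/{auth_mount}/login/{user}")
}

// KV v2 read endpoint: GET /v1/{kv_mount}/data/{path}
fn kv_read_path(kv_mount: &str, secret: &str) -> String {
    format!("/v1/{kv_mount}/data/{secret}")
}

fn main() {
    // Buggy behavior: kv_mount ("secret") passed where the auth mount belongs.
    assert_eq!(login_path("secret", "harmony"), "/v1/auth/secret/login/harmony");
    // Correct behavior once the two mounts are separate parameters.
    assert_eq!(login_path("userpass", "harmony"), "/v1/auth/userpass/login/harmony");
    assert_eq!(kv_read_path("secret", "harmony/app"), "/v1/secret/data/harmony/app");
}
```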
**For this example**: Use **token auth** instead of userpass to sidestep the bug:
```bash
# Set env vars for the example
export OPENBAO_URL="http://127.0.0.1:8200"
export OPENBAO_TOKEN="<root token from init>"
export OPENBAO_KV_MOUNT="secret"
```
##### Path B: JWT auth with Zitadel (target architecture, per ADR 020-1)
This is the production path described in the ADR. It requires the device flow code that is **not yet implemented** in `OpenbaoSecretStore`. The current code only supports token and userpass.
When implemented, the flow will be:
1. Enable JWT auth method in OpenBao
2. Configure it to trust Zitadel's OIDC discovery URL
3. Create a role that maps Zitadel JWT claims to OpenBao policies
```bash
# Enable JWT auth
bao auth enable jwt
# Configure JWT auth to trust Zitadel
bao write auth/jwt/config \
oidc_discovery_url="https://<zitadel-host>" \
bound_issuer="https://<zitadel-host>"
# Create role for Harmony developers
bao write auth/jwt/role/harmony-developer \
role_type="jwt" \
bound_audiences="<harmony_client_id>" \
user_claim="email" \
groups_claim="urn:zitadel:iam:org:project:roles" \
policies="harmony-dev" \
ttl="4h" \
max_ttl="24h" \
token_type="service"
```
**Zitadel application setup** (in Zitadel console):
1. Create project: `Harmony`
2. Add application: `Harmony CLI` (Native app type)
3. Enable Device Authorization grant type
4. Set scopes: `openid email profile offline_access`
5. Note the `client_id`
This path is deferred until the device flow is implemented in `OpenbaoSecretStore`.
#### Step 5: Write end-to-end example
The example uses `StoreSource<OpenbaoSecretStore>` with token auth to avoid the userpass mount bug.
**Environment variables required** (from `harmony_secret/src/config.rs`):
| Variable | Required | Default | Notes |
|---|---|---|---|
| `OPENBAO_URL` | Yes | None | Falls back to `VAULT_ADDR` |
| `OPENBAO_TOKEN` | For token auth | None | Root or user token |
| `OPENBAO_USERNAME` | For userpass | None | Requires `OPENBAO_PASSWORD` too |
| `OPENBAO_PASSWORD` | For userpass | None | |
| `OPENBAO_KV_MOUNT` | No | `"secret"` | KV v2 engine mount path. **Also used as userpass auth mount -- this is a bug.** |
| `OPENBAO_SKIP_TLS` | No | `false` | Set `"true"` to disable TLS verification |
**Note**: `OpenbaoSecretStore::new()` is `async` and **requires a running OpenBao** at construction time (it validates the token if using cached auth). If OpenBao is unreachable during construction, the call will fail. The graceful fallback only applies to `StoreSource::get()` calls after construction -- the `ConfigManager` must be built with a live store, or the store must be wrapped in a lazy initialization pattern.
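The lazy-initialization pattern mentioned above can be sketched with `std::sync::OnceLock`: construction is deferred to first use, and a failed connection is cached as `None` so the chain falls through instead of failing at startup. All names here are illustrative stand-ins, not the real store types:

```rust
use std::sync::OnceLock;

// Stand-in for OpenbaoSecretStore::new(): simulate the backend being down.
fn connect() -> Result<String, &'static str> {
    Err("connection refused")
}

struct LazyStore {
    handle: OnceLock<Option<String>>, // None = connection attempt failed
}

impl LazyStore {
    fn new() -> Self {
        Self { handle: OnceLock::new() }
    }

    // First call attempts the connection; failure is cached as None so
    // subsequent gets return None immediately and the chain continues.
    fn get(&self, _key: &str) -> Option<String> {
        self.handle.get_or_init(|| connect().ok()).clone()
    }
}

fn main() {
    let store = LazyStore::new(); // does NOT require a live backend
    assert_eq!(store.get("AppConfig"), None); // backend down -> fall through
    assert_eq!(store.get("AppConfig"), None); // cached, no retry storm
}
```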
```rust
// harmony_config/examples/openbao_chain.rs
use harmony_config::{ConfigManager, EnvSource, SqliteSource, StoreSource};
use harmony_secret::OpenbaoSecretStore;
use serde::{Deserialize, Serialize};
use std::sync::Arc;
#[derive(Debug, Clone, Serialize, Deserialize, schemars::JsonSchema, PartialEq)]
struct AppConfig {
host: String,
port: u16,
}
impl harmony_config::Config for AppConfig {
const KEY: &'static str = "AppConfig";
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
env_logger::init();
// Build the source chain
let env_source: Arc<dyn harmony_config::ConfigSource> = Arc::new(EnvSource);
let sqlite = Arc::new(
SqliteSource::default()
.await
.expect("Failed to open SQLite"),
);
// OpenBao store -- requires OPENBAO_URL and OPENBAO_TOKEN env vars
// Falls back gracefully if OpenBao is unreachable at query time
let openbao_url = std::env::var("OPENBAO_URL")
.or(std::env::var("VAULT_ADDR"))
.ok();
let sources: Vec<Arc<dyn harmony_config::ConfigSource>> = if let Some(url) = openbao_url {
let kv_mount = std::env::var("OPENBAO_KV_MOUNT")
.unwrap_or_else(|_| "secret".to_string());
let skip_tls = std::env::var("OPENBAO_SKIP_TLS")
.map(|v| v == "true")
.unwrap_or(false);
match OpenbaoSecretStore::new(
url,
kv_mount,
skip_tls,
std::env::var("OPENBAO_TOKEN").ok(),
std::env::var("OPENBAO_USERNAME").ok(),
std::env::var("OPENBAO_PASSWORD").ok(),
)
.await
{
Ok(store) => {
let store_source = Arc::new(StoreSource::new("harmony".to_string(), store));
vec![env_source, Arc::clone(&sqlite) as _, store_source]
}
Err(e) => {
eprintln!("Warning: OpenBao unavailable ({e}), using local sources only");
vec![env_source, sqlite]
}
}
} else {
println!("No OPENBAO_URL set, using local sources only");
vec![env_source, sqlite]
};
let manager = ConfigManager::new(sources);
// Scenario 1: get() with nothing stored -- returns NotFound
let result = manager.get::<AppConfig>().await;
println!("Get (empty): {:?}", result);
// Scenario 2: set() then get()
let config = AppConfig {
host: "production.example.com".to_string(),
port: 443,
};
manager.set(&config).await?;
println!("Set: {:?}", config);
let retrieved = manager.get::<AppConfig>().await?;
println!("Get (after set): {:?}", retrieved);
assert_eq!(config, retrieved);
println!("End-to-end chain validated!");
Ok(())
}
```
**Key behaviors demonstrated**:
1. **Graceful construction fallback**: If `OPENBAO_URL` is not set or OpenBao is unreachable at startup, the chain is built without it
2. **Graceful query fallback**: `StoreSource::get()` returns `Ok(None)` on any error, so the chain continues to SQLite
3. **Environment override**: `HARMONY_CONFIG_AppConfig='{"host":"env-host","port":9090}'` bypasses all backends
#### Step 6: Validate graceful fallback
Already validated via unit tests (26 tests pass):
- `test_store_source_error_falls_through_to_sqlite` -- `StoreSource` with `AlwaysErrorStore` returns connection error, chain falls through to `SqliteSource`
- `test_store_source_not_found_falls_through_to_sqlite` -- `StoreSource` returns `NotFound`, chain falls through to `SqliteSource`
**Code path (FIXED in `harmony_config/src/source/store.rs`)**:
```rust
// StoreSource::get() -- returns Ok(None) on ANY error, allowing chain to continue
match self.store.get_raw(&self.namespace, key).await {
Ok(bytes) => { /* deserialize and return */ Ok(Some(value)) }
Err(SecretStoreError::NotFound { .. }) => Ok(None),
Err(_) => Ok(None), // Connection errors, timeouts, etc.
}
```
#### Step 7: Known issues and blockers
| Issue | Location | Severity | Status |
|---|---|---|---|
| `global.openshift: true` hardcoded | `harmony/src/modules/openbao/mod.rs:32` | **Blocker for k3d** | ✅ Fixed: Added `openshift: bool` field to `OpenbaoScore` (defaults to `false`) |
| `kv_mount` used as auth mount path | `harmony_secret/src/store/openbao.rs:234` | **Bug** | ✅ Fixed: Added separate `auth_mount` parameter; added `OPENBAO_AUTH_MOUNT` env var |
| Admin email hardcoded `admin@zitadel.example.com` | `harmony/src/modules/zitadel/mod.rs:314` | Minor | Cosmetic mismatch with success message |
| `ExternalSecure: true` hardcoded | `harmony/src/modules/zitadel/mod.rs:306` | **Issue for k3d** | ✅ Fixed: Zitadel now detects Kubernetes distribution and uses appropriate settings (OpenShift = TLS + cert-manager annotations, k3d = plain nginx ingress without TLS) |
| No Helm chart version pinning | Both modules | Risk | Non-deterministic deploys |
| No `--wait` on Helm install | `harmony/src/modules/helm/chart.rs` | UX | Must manually wait for readiness |
| `get_version()`/`get_status()` are `todo!()` | Both modules | Panic risk | Do not call these methods |
| JWT/OIDC device flow not implemented | `harmony_secret/src/store/openbao.rs` | **Gap** | ✅ Implemented: `ZitadelOidcAuth` in `harmony_secret/src/store/zitadel.rs` |
| `HARMONY_SECRET_NAMESPACE` panics if not set | `harmony_secret/src/config.rs:5` | Runtime panic | Only affects `SecretManager`, not `StoreSource` directly |
**Remaining work**:
- [x] `StoreSource<OpenbaoSecretStore>` integration validates compilation
- [x] StoreSource returns `Ok(None)` on connection error (not `Err`)
- [x] Graceful fallback tests pass when OpenBao is unreachable (2 new tests)
- [x] Fix `global.openshift: true` in `OpenbaoScore` for k3d compatibility
- [x] Fix `kv_mount` / auth mount conflation bug in `OpenbaoSecretStore`
- [x] Create and test `harmony_config/examples/openbao_chain.rs` against real k3d deployment
- [x] Implement JWT/OIDC device flow in `OpenbaoSecretStore` (ADR 020-1) — `ZitadelOidcAuth` implemented and wired into `OpenbaoSecretStore::new()` auth chain
- [x] Fix Zitadel distribution detection — Zitadel now uses `k8s_client.get_k8s_distribution()` to detect OpenShift vs k3d and applies appropriate Helm values (TLS + cert-manager for OpenShift, plain nginx for k3d)
### 1.5 UX validation checklist ⏳
**Status**: Partially complete - manual verification needed
- [ ] `cargo run --example postgresql` with no env vars → completes without prompting for anything
- [ ] An example that uses `SecretManager` today (e.g., `brocade_snmp_server`) → when migrated to `harmony_config`, first run prompts, second run reads from SQLite
- [ ] Setting `HARMONY_CONFIG_BrocadeSwitchAuth='{"host":"...","user":"...","password":"..."}'` → skips prompt, uses env value
- [ ] Deleting `~/.local/share/harmony/config/` directory → re-prompts on next run
## Deliverables
- [x] `SqliteSource` implementation with tests
- [x] Functional `PromptSource` with `should_persist()` design
- [x] Fix `get_or_prompt` to persist to first writable source (via `should_persist()`), not all sources
- [x] Integration tests for full resolution chain
- [x] Branch-switching deserialization failure test
- [x] `StoreSource<OpenbaoSecretStore>` integration validated (compiles, graceful fallback)
- [x] ADR for Zitadel OIDC target architecture
- [ ] Update docs to reflect final implementation and behavior
## Key Implementation Notes
1. **SQLite path**: `~/.local/share/harmony/config/config.db` (not `~/.local/share/harmony/config.db`)
2. **Auto-create directory**: `SqliteSource::open()` creates parent directories if they don't exist
3. **Default path**: `SqliteSource::default()` uses `directories::ProjectDirs` to find the correct data directory
4. **Env var precedence**: Environment variables always take precedence over SQLite in the resolution chain
5. **Testing**: All tests use `tempfile::NamedTempFile` for temporary database paths, ensuring test isolation
6. **Graceful fallback**: `StoreSource::get()` returns `Ok(None)` on any error (connection refused, timeout, etc.), allowing the chain to fall through to the next source. This ensures OpenBao unavailability doesn't break the config chain.
7. **StoreSource errors don't block chain**: When OpenBao is unreachable, `StoreSource::get()` returns `Ok(None)` and the `ConfigManager` continues to the next source (typically `SqliteSource`). This is validated by `test_store_source_error_falls_through_to_sqlite` and `test_store_source_not_found_falls_through_to_sqlite`.


@@ -0,0 +1,112 @@
# Phase 2: Migrate Workspace to `harmony_config`
## Goal
Replace every direct `harmony_secret::SecretManager` call with `harmony_config` equivalents. After this phase, modules and examples depend only on `harmony_config`. `harmony_secret` becomes an internal implementation detail behind `StoreSource`.
## Current State
19 call sites use `SecretManager::get_or_prompt::<T>()` across:
| Location | Secret Types | Call Sites |
|----------|-------------|------------|
| `harmony/src/modules/brocade/brocade_snmp.rs` | `BrocadeSnmpAuth`, `BrocadeSwitchAuth` | 2 |
| `harmony/src/modules/nats/score_nats_k8s.rs` | `NatsAdmin` | 1 |
| `harmony/src/modules/okd/bootstrap_02_bootstrap.rs` | `RedhatSecret`, `SshKeyPair` | 2 |
| `harmony/src/modules/application/features/monitoring.rs` | `NtfyAuth` | 1 |
| `brocade/examples/main.rs` | `BrocadeSwitchAuth` | 1 |
| `examples/okd_installation/src/main.rs` + `topology.rs` | `SshKeyPair`, `BrocadeSwitchAuth`, `OPNSenseFirewallConfig` | 3 |
| `examples/okd_pxe/src/main.rs` + `topology.rs` | `SshKeyPair`, `BrocadeSwitchAuth`, `OPNSenseFirewallCredentials` | 3 |
| `examples/opnsense/src/main.rs` | `OPNSenseFirewallCredentials` | 1 |
| `examples/sttest/src/main.rs` + `topology.rs` | `SshKeyPair`, `OPNSenseFirewallConfig` | 2 |
| `examples/opnsense_node_exporter/` | (has dep but unclear usage) | ~1 |
| `examples/okd_cluster_alerts/` | (has dep but unclear usage) | ~1 |
| `examples/brocade_snmp_server/` | (has dep but unclear usage) | ~1 |
## Tasks
### 2.1 Bootstrap `harmony_config` in CLI and TUI entry points
Add `harmony_config::init()` as the first thing that happens in `harmony_cli::run()` and `harmony_tui::run()`.
```rust
// harmony_cli/src/lib.rs — inside run()
pub async fn run<T: Topology + Send + Sync + 'static>(
inventory: Inventory,
topology: T,
scores: Vec<Box<dyn Score<T>>>,
args_struct: Option<Args>,
) -> Result<(), Box<dyn std::error::Error>> {
// Initialize config system with default source chain
let sqlite = Arc::new(SqliteSource::default().await?);
let env = Arc::new(EnvSource);
harmony_config::init(vec![env, sqlite]).await;
// ... rest of run()
}
```
This replaces the implicit `SecretManager` lazy initialization that currently happens on first `get_or_prompt` call.
### 2.2 Migrate each secret type from `Secret` to `Config`
For each secret struct, change:
```rust
// Before
use harmony_secret::Secret;
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, InteractiveParse, Secret)]
struct BrocadeSwitchAuth { ... }
// After
use harmony_config::Config;
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, InteractiveParse, Config)]
struct BrocadeSwitchAuth { ... }
```
At each call site, change:
```rust
// Before
let config = SecretManager::get_or_prompt::<BrocadeSwitchAuth>().await.unwrap();
// After
let config = harmony_config::get_or_prompt::<BrocadeSwitchAuth>().await.unwrap();
```
### 2.3 Migration order (low risk to high risk)
1. **`brocade/examples/main.rs`** — 1 call site, isolated example, easy to test manually
2. **`examples/opnsense/src/main.rs`** — 1 call site, isolated
3. **`harmony/src/modules/brocade/brocade_snmp.rs`** — 2 call sites, core module but straightforward
4. **`harmony/src/modules/nats/score_nats_k8s.rs`** — 1 call site
5. **`harmony/src/modules/application/features/monitoring.rs`** — 1 call site
6. **`examples/sttest/`** — 2 call sites, has both main.rs and topology.rs patterns
7. **`examples/okd_installation/`** — 3 call sites, complex topology setup
8. **`examples/okd_pxe/`** — 3 call sites, similar to okd_installation
9. **`harmony/src/modules/okd/bootstrap_02_bootstrap.rs`** — 2 call sites, critical OKD bootstrap path
### 2.4 Remove `harmony_secret` from direct dependencies
After all call sites are migrated:
1. Remove `harmony_secret` from `Cargo.toml` of: `harmony`, `brocade`, and all examples that had it
2. `harmony_config` keeps `harmony_secret` as a dependency (for `StoreSource`)
3. The `Secret` trait and `SecretManager` remain in `harmony_secret` but are not used directly anymore
### 2.5 Backward compatibility for existing local secrets
Users who already have secrets stored via `LocalFileSecretStore` (JSON files in `~/.local/share/harmony/secrets/`) need a migration path:
- On first run after upgrade, if SQLite has no entry for a key but the old JSON file exists, read from JSON and write to SQLite
- Or: add `LocalFileSource` as a fallback source at the end of the chain (read-only) for one release cycle
- Log a deprecation warning when reading from old JSON files
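A minimal sketch of that read-through migration, with in-memory maps standing in for the two stores (the function and store names are illustrative, not the real API):

```rust
use std::collections::HashMap;

/// On a SQLite miss, fall back to the legacy JSON store, log a deprecation
/// warning, and write the value through so the next read hits SQLite.
fn get_with_legacy_fallback(
    key: &str,
    sqlite: &mut HashMap<String, String>,
    legacy_json: &HashMap<String, String>,
) -> Option<String> {
    if let Some(v) = sqlite.get(key) {
        return Some(v.clone());
    }
    let v = legacy_json.get(key)?;
    eprintln!("warning: read `{key}` from deprecated JSON secret store; migrating to SQLite");
    sqlite.insert(key.to_string(), v.clone()); // write-through migration
    Some(v.clone())
}

fn main() {
    let mut sqlite = HashMap::new();
    let mut legacy = HashMap::new();
    legacy.insert("BrocadeSwitchAuth".to_string(), "{\"username\":\"admin\"}".to_string());
    // First read migrates from JSON; subsequent reads are served by SQLite.
    let v = get_with_legacy_fallback("BrocadeSwitchAuth", &mut sqlite, &legacy);
    assert!(v.is_some() && sqlite.contains_key("BrocadeSwitchAuth"));
}
```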
## Deliverables
- [ ] `harmony_config::init()` called in `harmony_cli::run()` and `harmony_tui::run()`
- [ ] All 19 call sites migrated from `SecretManager` to `harmony_config`
- [ ] `harmony_secret` removed from direct dependencies of `harmony`, `brocade`, and all examples
- [ ] Backward compatibility for existing local JSON secrets
- [ ] All existing unit tests still pass
- [ ] Manual verification: one migrated example works end-to-end (prompt → persist → read)

ROADMAP/03-assets-crate.md Normal file

@@ -0,0 +1,141 @@
# Phase 3: Complete `harmony_assets`, Refactor Consumers
## Goal
Make `harmony_assets` the single way to manage downloadable binaries and images across Harmony. Eliminate `k3d::DownloadableAsset` duplication, implement `Url::Url` in OPNsense infra, remove LFS-tracked files from git.
## Current State
- `harmony_assets` exists with `Asset`, `LocalCache`, `LocalStore`, `S3Store` (behind feature flag). CLI with `upload`, `download`, `checksum`, `verify` commands. **No tests. Zero consumers.**
- `k3d/src/downloadable_asset.rs` has the same functionality with full test coverage (httptest mock server, checksum verification, cache hit, 404 handling, checksum failure).
- `Url::Url` variant in `harmony_types/src/net.rs` exists but is `todo!()` in OPNsense TFTP and HTTP infra layers.
- OKD modules hardcode `./data/...` paths (`bootstrap_02_bootstrap.rs:84-88`, `ipxe.rs:73`).
- `data/` directory contains ~3GB of LFS-tracked files (OKD binaries, PXE images, SCOS images).
## Tasks
### 3.1 Port k3d tests to `harmony_assets`
The k3d crate has 5 well-written tests in `downloadable_asset.rs`. Port them to test `harmony_assets::LocalStore`:
```rust
// harmony_assets/tests/local_store.rs (or in src/ as unit tests)
#[tokio::test]
async fn test_fetch_downloads_and_verifies_checksum() {
// Start httptest server serving a known file
// Create Asset with URL pointing to mock server
// Fetch via LocalStore
// Assert file exists at expected cache path
// Assert checksum matches
}
#[tokio::test]
async fn test_fetch_returns_cached_file_when_present() {
// Pre-populate cache with correct file
// Fetch — assert no HTTP request made (mock server not hit)
}
#[tokio::test]
async fn test_fetch_fails_on_404() { ... }
#[tokio::test]
async fn test_fetch_fails_on_checksum_mismatch() { ... }
#[tokio::test]
async fn test_fetch_with_progress_callback() {
// Assert progress callback is called with (bytes_received, total_size)
}
```
Add `httptest` to `[dev-dependencies]` of `harmony_assets`.
### 3.2 Refactor `k3d` to use `harmony_assets`
Replace `k3d/src/downloadable_asset.rs` with calls to `harmony_assets`:
```rust
// k3d/src/lib.rs — in download_latest_release()
use harmony_assets::{Asset, LocalCache, LocalStore, ChecksumAlgo};
let asset = Asset::new(
binary_url,
checksum,
ChecksumAlgo::SHA256,
K3D_BIN_FILE_NAME.to_string(),
);
let cache = LocalCache::new(self.base_dir.clone());
let store = LocalStore::new();
let path = store.fetch(&asset, &cache, None).await
.map_err(|e| format!("Failed to download k3d: {}", e))?;
```
Delete `k3d/src/downloadable_asset.rs`. Update k3d's `Cargo.toml` to depend on `harmony_assets`.
### 3.3 Define asset metadata as config structs
Following `plan.md` Phase 2, create typed config for OKD assets using `harmony_config`:
```rust
// harmony/src/modules/okd/config.rs
#[derive(Config, Serialize, Deserialize, JsonSchema, InteractiveParse)]
struct OkdInstallerConfig {
pub openshift_install_url: String,
pub openshift_install_sha256: String,
pub scos_kernel_url: String,
pub scos_kernel_sha256: String,
pub scos_initramfs_url: String,
pub scos_initramfs_sha256: String,
pub scos_rootfs_url: String,
pub scos_rootfs_sha256: String,
}
```
First run prompts for URLs/checksums (or uses compiled-in defaults). Values persist to SQLite. Can be overridden via env vars or OpenBao.
### 3.4 Implement `Url::Url` in OPNsense infra layer
In `harmony/src/infra/opnsense/http.rs` and `tftp.rs`, implement the `Url::Url(url)` match arm:
```rust
// Instead of SCP-ing files to OPNsense:
// SSH into OPNsense, run: fetch -o /usr/local/http/{path} {url}
// (FreeBSD-native HTTP client, no extra deps on OPNsense)
```
This eliminates the manual `scp` workaround and the `inquire::Confirm` prompts in `ipxe.rs:126` and `bootstrap_02_bootstrap.rs:230`.
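A sketch of what the match arm could produce, assuming the existing `Url` enum shape from `harmony_types` and some command runner on the OPNsense SSH session (the helper name and quoting are illustrative):

```rust
// Minimal stand-in for harmony_types::net::Url; only the shape matters here.
#[allow(dead_code)]
enum Url {
    LocalFolder(String),
    Url(String),
}

/// Build the shell command to materialize `source` at `dest` on OPNsense.
/// Returns None for LocalFolder, which keeps using the existing SCP path.
fn fetch_command(dest: &str, source: &Url) -> Option<String> {
    match source {
        // FreeBSD's built-in `fetch` downloads straight onto the firewall,
        // so the file never transits the operator's machine.
        Url::Url(url) => Some(format!("fetch -o '{dest}' '{url}'")),
        Url::LocalFolder(_) => None,
    }
}

fn main() {
    let cmd = fetch_command(
        "/usr/local/http/scos/rootfs.img",
        &Url::Url("https://assets.example/scos/rootfs.img".to_string()),
    );
    assert_eq!(
        cmd.as_deref(),
        Some("fetch -o '/usr/local/http/scos/rootfs.img' 'https://assets.example/scos/rootfs.img'")
    );
}
```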
### 3.5 Refactor OKD modules to use assets + config
In `bootstrap_02_bootstrap.rs`:
- `openshift-install`: Resolve `OkdInstallerConfig` from `harmony_config`, download via `harmony_assets`, invoke from cache.
- SCOS images: Pass `Url::Url(scos_kernel_url)` etc. to `StaticFilesHttpScore`. OPNsense fetches from S3 directly.
- Remove `oc` and `kubectl` from `data/okd/bin/` (never used by code).
In `ipxe.rs`:
- Replace the folder-to-serve SCP workaround with individual `Url::Url` entries.
- Remove the `inquire::Confirm` SCP prompts.
### 3.6 Upload assets to S3
- Upload all current `data/` binaries to Ceph S3 bucket with path scheme: `harmony-assets/okd/v{version}/openshift-install`, `harmony-assets/pxe/centos-stream-9/install.img`, etc.
- Set public-read ACL or configure presigned URL generation.
- Record S3 URLs and SHA256 checksums as defaults in the config structs.
### 3.7 Remove LFS, clean git
- Remove all LFS-tracked files from the repo.
- Update `.gitattributes` to remove LFS filters.
- Keep `data/` in `.gitignore` (it becomes a local cache directory).
- Optionally use `git filter-repo` or BFG to strip LFS objects from history (required before Phase 4 GitHub publish).
## Deliverables
- [ ] `harmony_assets` has tests ported from k3d pattern (5+ tests with httptest)
- [ ] `k3d::DownloadableAsset` replaced by `harmony_assets` usage
- [ ] `OkdInstallerConfig` struct using `harmony_config`
- [ ] `Url::Url` implemented in OPNsense HTTP and TFTP infra
- [ ] OKD bootstrap refactored to use lazy-download pattern
- [ ] Assets uploaded to S3 with documented URLs/checksums
- [ ] LFS removed, git history cleaned
- [ ] Repo size small enough for GitHub (~code + templates only)


@@ -0,0 +1,110 @@
# Phase 4: Publish to GitHub
## Goal
Make Harmony publicly available on GitHub as the primary community hub for issues, pull requests, and discussions. CI runs on self-hosted runners.
## Prerequisites
- Phase 3 complete: LFS removed, git history cleaned, repo is small
- README polished with quick-start, architecture overview, examples
- All existing tests pass
## Tasks
### 4.1 Clean git history
```bash
# Option A: git filter-repo (preferred)
git filter-repo --strip-blobs-bigger-than 10M
# Option B: BFG Repo Cleaner
bfg --strip-blobs-bigger-than 10M
git reflog expire --expire=now --all
git gc --prune=now --aggressive
```
Verify final repo size is reasonable (target: <50MB including all code, docs, templates).
### 4.2 Create GitHub repository
- Create `NationTech/harmony` (or chosen org/name) on GitHub
- Push cleaned repo as initial commit
- Set default branch to `main` (rename from `master` if desired)
### 4.3 Set up CI on self-hosted runners
GitHub is the community hub, but CI runs on your own infrastructure. Options:
**Option A: GitHub Actions with self-hosted runners**
- Register your Gitea runner machines as GitHub Actions self-hosted runners
- Port `.gitea/workflows/check.yml` to `.github/workflows/check.yml`
- Same Docker image (`hub.nationtech.io/harmony/harmony_composer:latest`), same commands
- Pro: native GitHub PR checks, no external service needed
- Con: runners need outbound access to GitHub API
**Option B: External CI (Woodpecker, Drone, Jenkins)**
- Use any CI that supports webhooks from GitHub
- Report status back to GitHub via commit status API / checks API
- Pro: fully self-hosted, no GitHub dependency for builds
- Con: extra integration work
**Option C: Keep Gitea CI, mirror from GitHub**
- GitHub repo has a webhook that triggers Gitea CI on push
- Gitea reports back to GitHub via commit status API
- Pro: no migration of CI config
- Con: fragile webhook chain
**Recommendation**: Option A. GitHub Actions self-hosted runners are straightforward and give the best contributor UX (native PR checks). The workflow files are nearly identical to Gitea workflows.
```yaml
# .github/workflows/check.yml
name: Check
on: [push, pull_request]
jobs:
check:
runs-on: self-hosted
container:
image: hub.nationtech.io/harmony/harmony_composer:latest
steps:
- uses: actions/checkout@v4
- run: bash build/check.sh
```
### 4.4 Polish documentation
- **README.md**: Quick-start (clone → run → get prompted → see result), architecture diagram (Score → Interpret → Topology), link to docs and examples
- **CONTRIBUTING.md**: Already exists. Review for GitHub-specific guidance (fork workflow, PR template)
- **docs/**: Already comprehensive. Verify links work on GitHub rendering
- **Examples**: Ensure each example has a one-line description in its `Cargo.toml` and a comment block in `main.rs`
### 4.5 License and legal
- Verify workspace `license` field in root `Cargo.toml` is set correctly
- Add `LICENSE` file at repo root if not present
- Scan for any proprietary dependencies or hardcoded internal URLs
### 4.6 GitHub repository configuration
- Branch protection on `main`: require PR review, require CI to pass
- Issue templates: bug report, feature request
- PR template: checklist (tests pass, docs updated, etc.)
- Topics/tags: `rust`, `infrastructure-as-code`, `kubernetes`, `orchestration`, `bare-metal`
- Repository description: "Infrastructure orchestration framework. Declare what you want (Score), describe your infrastructure (Topology), let Harmony figure out how."
### 4.7 Gitea as internal mirror
- Set up Gitea to mirror from GitHub (pull mirror)
- Internal CI can continue running on Gitea for private/experimental branches
- Public contributions flow through GitHub
## Deliverables
- [ ] Git history cleaned, repo size <50MB
- [ ] Public GitHub repository created
- [ ] CI running on self-hosted runners with GitHub Actions
- [ ] Branch protection enabled
- [ ] README polished with quick-start guide
- [ ] Issue and PR templates created
- [ ] LICENSE file present
- [ ] Gitea configured as mirror


@@ -0,0 +1,255 @@
# Phase 5: E2E Tests for PostgreSQL & RustFS
## Goal
Establish an automated E2E test pipeline that proves working examples actually work. Start with the two simplest k8s-based examples: PostgreSQL and RustFS.
## Prerequisites
- Phase 1 complete (config crate works, bootstrap is clean)
- `feat/rustfs` branch merged
## Architecture
### Test harness: `tests/e2e/`
A dedicated workspace member crate at `tests/e2e/` that contains:
1. **Shared k3d utilities** — create/destroy clusters, wait for readiness
2. **Per-example test modules** — each example gets a `#[tokio::test]` function
3. **Assertion helpers** — wait for pods, check CRDs exist, verify services
```
tests/
e2e/
Cargo.toml
src/
lib.rs # Shared test utilities
k3d.rs # k3d cluster lifecycle
k8s_assert.rs # K8s assertion helpers
tests/
postgresql.rs # PostgreSQL E2E test
rustfs.rs # RustFS E2E test
```
### k3d cluster lifecycle
```rust
// tests/e2e/src/k3d.rs
use k3d_rs::K3d;
pub struct TestCluster {
pub name: String,
pub k3d: K3d,
pub client: kube::Client,
reuse: bool,
}
impl TestCluster {
/// Creates a k3d cluster for testing.
/// If HARMONY_E2E_REUSE_CLUSTER=1, reuses existing cluster.
pub async fn ensure(name: &str) -> Result<Self, String> {
let reuse = std::env::var("HARMONY_E2E_REUSE_CLUSTER")
.map(|v| v == "1")
.unwrap_or(false);
let base_dir = PathBuf::from("/tmp/harmony-e2e");
let k3d = K3d::new(base_dir, Some(name.to_string()));
let client = k3d.ensure_installed().await?;
Ok(Self { name: name.to_string(), k3d, client, reuse })
}
/// Returns the kubeconfig path for this cluster.
pub fn kubeconfig_path(&self) -> String { ... }
}
impl Drop for TestCluster {
fn drop(&mut self) {
if !self.reuse {
// Best-effort cleanup
let _ = self.k3d.run_k3d_command(["cluster", "delete", &self.name]);
}
}
}
```
### K8s assertion helpers
```rust
// tests/e2e/src/k8s_assert.rs
/// Wait until a pod matching the label selector is Running in the namespace.
/// Times out after `timeout` duration.
pub async fn wait_for_pod_running(
client: &kube::Client,
namespace: &str,
label_selector: &str,
timeout: Duration,
) -> Result<(), String>
/// Assert a CRD instance exists.
pub async fn assert_resource_exists<K: kube::Resource>(
client: &kube::Client,
name: &str,
namespace: Option<&str>,
) -> Result<(), String>
/// Install a Helm chart. Returns when all pods in the release are running.
pub async fn helm_install(
release_name: &str,
chart: &str,
namespace: &str,
repo_url: Option<&str>,
timeout: Duration,
) -> Result<(), String>
```
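All three helpers share the same core shape: poll until a condition holds or a deadline passes. A std-only, synchronous sketch of that core follows; the real helpers would be async and wrap kube API calls in the probe closure (using `tokio::time::sleep` instead of `thread::sleep`):

```rust
use std::time::{Duration, Instant};

/// Retry `probe` every `interval` until it returns true or `timeout` elapses.
fn wait_until(
    mut probe: impl FnMut() -> bool,
    timeout: Duration,
    interval: Duration,
) -> Result<(), String> {
    let deadline = Instant::now() + timeout;
    loop {
        if probe() {
            return Ok(());
        }
        if Instant::now() >= deadline {
            return Err(format!("condition not met within {timeout:?}"));
        }
        std::thread::sleep(interval);
    }
}

fn main() {
    // Succeeds on the third attempt, well inside the deadline.
    let mut calls = 0;
    let result = wait_until(
        || {
            calls += 1;
            calls >= 3
        },
        Duration::from_secs(1),
        Duration::from_millis(1),
    );
    assert!(result.is_ok());
}
```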
## Tasks
### 5.1 Create the `tests/e2e/` crate
Add to workspace `Cargo.toml`:
```toml
[workspace]
members = [
# ... existing members
"tests/e2e",
]
```
`tests/e2e/Cargo.toml`:
```toml
[package]
name = "harmony-e2e-tests"
edition = "2024"
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
k3d_rs = { path = "../../k3d", package = "k3d_rs" }
kube = { workspace = true }
k8s-openapi = { workspace = true }
tokio = { workspace = true }
log = { workspace = true }
env_logger = { workspace = true }
[dev-dependencies]
pretty_assertions = { workspace = true }
```
### 5.2 PostgreSQL E2E test
```rust
// tests/e2e/tests/postgresql.rs
use harmony::modules::postgresql::{PostgreSQLScore, capability::PostgreSQLConfig};
use harmony::topology::K8sAnywhereTopology;
use harmony::inventory::Inventory;
use harmony::maestro::Maestro;
#[tokio::test]
async fn test_postgresql_deploys_on_k3d() {
let cluster = TestCluster::ensure("harmony-e2e-pg").await.unwrap();
// Install CNPG operator via Helm
// (K8sAnywhereTopology::ensure_ready() now handles this since
// commit e1183ef "K8s postgresql score now ensures cnpg is installed")
// But we may need the Helm chart for non-OKD:
helm_install(
"cnpg",
"cloudnative-pg",
"cnpg-system",
Some("https://cloudnative-pg.github.io/charts"),
Duration::from_secs(120),
).await.unwrap();
// Configure topology pointing to test cluster
let config = K8sAnywhereConfig {
kubeconfig: Some(cluster.kubeconfig_path()),
use_local_k3d: false,
autoinstall: false,
use_system_kubeconfig: false,
harmony_profile: "dev".to_string(),
k8s_context: None,
};
let topology = K8sAnywhereTopology::with_config(config);
// Create and run the score
let score = PostgreSQLScore {
config: PostgreSQLConfig {
cluster_name: "e2e-test-pg".to_string(),
namespace: "e2e-pg-test".to_string(),
..Default::default()
},
};
let mut maestro = Maestro::initialize(Inventory::autoload(), topology).await.unwrap();
maestro.register_all(vec![Box::new(score)]);
let score = maestro.scores().read().unwrap().first().unwrap().clone_box();
let result = maestro.interpret(score).await;
assert!(result.is_ok(), "PostgreSQL score failed: {:?}", result.err());
// Assert: CNPG Cluster resource exists
// (the Cluster CRD is applied — pod readiness may take longer)
let client = cluster.client.clone();
// ... assert Cluster CRD exists in e2e-pg-test namespace
}
```
### 5.3 RustFS E2E test
Similar structure. Details depend on what the RustFS score deploys (likely a Helm chart or k8s resources for MinIO/RustFS).
```rust
#[tokio::test]
async fn test_rustfs_deploys_on_k3d() {
let cluster = TestCluster::ensure("harmony-e2e-rustfs").await.unwrap();
// ... similar pattern: configure topology, create score, interpret, assert
}
```
### 5.4 CI job for E2E tests
New workflow file (Gitea or GitHub Actions):
```yaml
# .gitea/workflows/e2e.yml (or .github/workflows/e2e.yml)
name: E2E Tests
on:
push:
branches: [master, main]
# Don't run on every PR — too slow. Run on label or manual trigger.
workflow_dispatch:
jobs:
e2e:
runs-on: self-hosted # Must have Docker available for k3d
timeout-minutes: 15
steps:
- uses: actions/checkout@v4
- name: Install k3d
run: curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
- name: Run E2E tests
run: cargo test -p harmony-e2e-tests -- --test-threads=1
env:
RUST_LOG: info
```
Note `--test-threads=1`: E2E tests create k3d clusters and should not run in parallel (port conflicts, resource contention).
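If serializing everything through `--test-threads=1` ever becomes too coarse, a process-wide lock around cluster operations gives the same exclusion while letting unrelated tests run in parallel. A std-only sketch (names are illustrative):

```rust
use std::sync::{Mutex, MutexGuard, OnceLock};

static CLUSTER_LOCK: OnceLock<Mutex<()>> = OnceLock::new();

/// Hold the returned guard for the duration of any test that touches k3d.
fn cluster_guard() -> MutexGuard<'static, ()> {
    CLUSTER_LOCK
        .get_or_init(|| Mutex::new(()))
        .lock()
        // A test that panicked while holding the lock poisons it; the cluster
        // may be dirty, but the next test can still proceed.
        .unwrap_or_else(|poisoned| poisoned.into_inner())
}

fn main() {
    let guard = cluster_guard();
    // ...create cluster, run test body...
    drop(guard);
    // Re-acquirable once released; a second acquisition would deadlock
    // if the first guard were still held on this thread.
    let _guard = cluster_guard();
}
```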
## Deliverables
- [ ] `tests/e2e/` crate added to workspace
- [ ] Shared test utilities: `TestCluster`, `wait_for_pod_running`, `helm_install`
- [ ] PostgreSQL E2E test passing
- [ ] RustFS E2E test passing (after `feat/rustfs` merge)
- [ ] CI job running E2E tests on push to main
- [ ] `HARMONY_E2E_REUSE_CLUSTER=1` for fast local iteration

ROADMAP/06-e2e-tests-kvm.md Normal file

@@ -0,0 +1,214 @@
# Phase 6: E2E Tests for OKD HA Cluster on KVM
## Goal
Prove the full OKD bare-metal installation flow works end-to-end using KVM virtual machines. This is the ultimate validation of Harmony's core value proposition: declare an OKD cluster, point it at infrastructure, watch it materialize.
## Prerequisites
- Phase 5 complete (test harness exists, k3d tests passing)
- `feature/kvm-module` merged to main
- A CI runner with libvirt/KVM access and nested virtualization support
## Architecture
The KVM branch already has a `kvm_okd_ha_cluster` example that creates:
```
Host bridge (WAN)
|
+--------------------+
| OPNsense | 192.168.100.1
| gateway + PXE |
+--------+-----------+
|
harmonylan (192.168.100.0/24)
+---------+---------+---------+---------+
| | | | |
+----+---+ +---+---+ +---+---+ +---+---+ +--+----+
| cp0 | | cp1 | | cp2 | |worker0| |worker1|
| .10 | | .11 | | .12 | | .20 | | .21 |
+--------+ +-------+ +-------+ +-------+ +---+---+
|
+-----+----+
| worker2 |
| .22 |
+----------+
```
The test needs to orchestrate this entire setup, wait for OKD to converge, and assert the cluster is healthy.
## Tasks
### 6.1 Start with `example_linux_vm` — the simplest KVM test
Before tackling the full OKD stack, validate the KVM module itself with the simplest possible test:
```rust
// tests/e2e/tests/kvm_linux_vm.rs
#[tokio::test]
#[ignore] // Requires libvirt access — run with: cargo test -- --ignored
async fn test_linux_vm_boots_from_iso() {
let executor = KvmExecutor::from_env().unwrap();
// Create isolated network
let network = NetworkConfig {
name: "e2e-test-net".to_string(),
bridge: "virbr200".to_string(),
// ...
};
executor.ensure_network(&network).await.unwrap();
// Define and start VM
let vm_config = VmConfig::builder("e2e-linux-test")
.vcpus(1)
.memory_gb(1)
.disk(5)
.network(NetworkRef::named("e2e-test-net"))
.cdrom("https://releases.ubuntu.com/24.04/ubuntu-24.04-live-server-amd64.iso")
.boot_order([BootDevice::Cdrom, BootDevice::Disk])
.build();
executor.ensure_vm(&vm_config).await.unwrap();
executor.start_vm("e2e-linux-test").await.unwrap();
// Assert VM is running
let status = executor.vm_status("e2e-linux-test").await.unwrap();
assert_eq!(status, VmStatus::Running);
// Cleanup
executor.destroy_vm("e2e-linux-test").await.unwrap();
executor.undefine_vm("e2e-linux-test").await.unwrap();
executor.delete_network("e2e-test-net").await.unwrap();
}
```
This test validates:
- ISO download works (via `harmony_assets` if refactored, or built-in KVM module download)
- libvirt XML generation is correct
- VM lifecycle (define → start → status → destroy → undefine)
- Network creation/deletion
### 6.2 OKD HA Cluster E2E test
The full integration test. This is long-running (30-60 minutes) and should only run nightly or on-demand.
```rust
// tests/e2e/tests/kvm_okd_ha.rs
#[tokio::test]
#[ignore] // Requires KVM + significant resources. Run nightly.
async fn test_okd_ha_cluster_on_kvm() {
// 1. Create virtual infrastructure
// - OPNsense gateway VM
// - 3 control plane VMs
// - 3 worker VMs
// - Virtual network (harmonylan)
// 2. Run OKD installation scores
// (the kvm_okd_ha_cluster example, but as a test)
// 3. Wait for OKD API server to become reachable
// - Poll https://api.okd.harmonylan:6443 until it responds
// - Timeout: 30 minutes
// 4. Assert cluster health
// - All nodes in Ready state
// - ClusterVersion reports Available=True
// - Sample workload (nginx) deploys and pod reaches Running
// 5. Cleanup
// - Destroy all VMs
// - Delete virtual networks
// - Clean up disk images
}
```
### 6.3 CI runner requirements
The KVM E2E test needs a runner with:
- **Hardware**: 32GB+ RAM, 8+ CPU cores, 100GB+ disk
- **Software**: libvirt, QEMU/KVM, `virsh`, nested virtualization enabled
- **Network**: Outbound internet access (to download ISOs, OKD images)
- **Permissions**: User in `libvirt` group, or root access
Options:
- **Dedicated bare-metal machine** registered as a self-hosted GitHub Actions runner
- **Cloud VM with nested virt** (e.g., GCP n2-standard-8 with `--enable-nested-virtualization`)
- **Manual trigger only** — developer runs locally, CI just tracks pass/fail
### 6.4 Nightly CI job
```yaml
# .github/workflows/e2e-kvm.yml
name: E2E KVM Tests
on:
schedule:
- cron: '0 2 * * *' # 2 AM daily
workflow_dispatch: # Manual trigger
jobs:
kvm-tests:
runs-on: [self-hosted, kvm] # Label for KVM-capable runners
timeout-minutes: 90
steps:
- uses: actions/checkout@v4
- name: Run KVM E2E tests
run: cargo test -p harmony-e2e-tests -- --ignored --test-threads=1
env:
RUST_LOG: info
HARMONY_KVM_URI: qemu:///system
- name: Cleanup VMs on failure
if: failure()
run: |
virsh list --all --name | grep e2e | xargs -I {} virsh destroy {} || true
virsh list --all --name | grep e2e | xargs -I {} virsh undefine {} --remove-all-storage || true
```
### 6.5 Test resource management
KVM tests create real resources that must be cleaned up even on failure. Implement a test fixture pattern:
```rust
struct KvmTestFixture {
executor: KvmExecutor,
vms: Vec<String>,
networks: Vec<String>,
}
impl KvmTestFixture {
fn track_vm(&mut self, name: &str) { self.vms.push(name.to_string()); }
fn track_network(&mut self, name: &str) { self.networks.push(name.to_string()); }
}
impl Drop for KvmTestFixture {
fn drop(&mut self) {
// Best-effort cleanup of all tracked resources
for vm in &self.vms {
let _ = std::process::Command::new("virsh")
.args(["destroy", vm]).output();
let _ = std::process::Command::new("virsh")
.args(["undefine", vm, "--remove-all-storage"]).output();
}
for net in &self.networks {
let _ = std::process::Command::new("virsh")
.args(["net-destroy", net]).output();
let _ = std::process::Command::new("virsh")
.args(["net-undefine", net]).output();
}
}
}
```
## Deliverables
- [ ] `test_linux_vm_boots_from_iso` — passing KVM smoke test
- [ ] `test_okd_ha_cluster_on_kvm` — full OKD installation test
- [ ] `KvmTestFixture` with resource cleanup on test failure
- [ ] Nightly CI job on KVM-capable runner
- [ ] Force-cleanup script for leaked VMs/networks
- [ ] Documentation: how to set up a KVM runner for E2E tests


@@ -0,0 +1,25 @@
[package]
name = "example-harmony-sso"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_config = { path = "../../harmony_config" }
harmony_macros = { path = "../../harmony_macros" }
harmony_secret = { path = "../../harmony_secret" }
harmony_types = { path = "../../harmony_types" }
k3d-rs = { path = "../../k3d" }
kube.workspace = true
tokio.workspace = true
url.workspace = true
log.workspace = true
env_logger.workspace = true
serde.workspace = true
serde_json.workspace = true
anyhow.workspace = true
reqwest.workspace = true
directories = "6.0.0"


@@ -0,0 +1,395 @@
use anyhow::Context;
use harmony::inventory::Inventory;
use harmony::modules::openbao::OpenbaoScore;
use harmony::score::Score;
use harmony::topology::Topology;
use k3d_rs::{K3d, PortMapping};
use log::info;
use serde::Deserialize;
use serde::Serialize;
use std::path::PathBuf;
use std::process::Command;
const CLUSTER_NAME: &str = "harmony-example";
const ZITADEL_HOST: &str = "sso.harmony.local";
const OPENBAO_HOST: &str = "bao.harmony.local";
const ZITADEL_PORT: u32 = 8080;
const OPENBAO_PORT: u32 = 8200;
fn get_k3d_binary_path() -> PathBuf {
directories::BaseDirs::new()
.map(|dirs| dirs.data_dir().join("harmony").join("k3d"))
.unwrap_or_else(|| PathBuf::from("/tmp/harmony-k3d"))
}
fn get_openbao_data_path() -> PathBuf {
directories::BaseDirs::new()
.map(|dirs| dirs.data_dir().join("harmony").join("openbao"))
.unwrap_or_else(|| PathBuf::from("/tmp/harmony-openbao"))
}
async fn ensure_k3d_cluster() -> anyhow::Result<()> {
let base_dir = get_k3d_binary_path();
std::fs::create_dir_all(&base_dir).context("Failed to create k3d data directory")?;
info!(
"Ensuring k3d cluster '{}' is running with port mappings",
CLUSTER_NAME
);
let k3d = K3d::new(base_dir.clone(), Some(CLUSTER_NAME.to_string())).with_port_mappings(vec![
PortMapping::new(ZITADEL_PORT, 80),
PortMapping::new(OPENBAO_PORT, 8200),
]);
k3d.ensure_installed()
.await
.map_err(|e| anyhow::anyhow!("Failed to ensure k3d installed: {}", e))?;
info!("k3d cluster '{}' is ready", CLUSTER_NAME);
Ok(())
}
fn create_topology() -> harmony::topology::K8sAnywhereTopology {
unsafe {
std::env::set_var("HARMONY_USE_LOCAL_K3D", "false");
std::env::set_var("HARMONY_AUTOINSTALL", "false");
std::env::set_var("HARMONY_K8S_CONTEXT", "k3d-harmony-example");
}
harmony::topology::K8sAnywhereTopology::from_env()
}
async fn cleanup_openbao_webhook() -> anyhow::Result<()> {
let output = Command::new("kubectl")
.args([
"--context",
"k3d-harmony-example",
"get",
"mutatingwebhookconfigurations",
])
.output()
.context("Failed to check webhooks")?;
if String::from_utf8_lossy(&output.stdout).contains("openbao-agent-injector-cfg") {
info!("Deleting conflicting OpenBao webhook...");
let _ = Command::new("kubectl")
.args([
"--context",
"k3d-harmony-example",
"delete",
"mutatingwebhookconfiguration",
"openbao-agent-injector-cfg",
"--ignore-not-found=true",
])
.output();
}
Ok(())
}
async fn deploy_openbao(topology: &harmony::topology::K8sAnywhereTopology) -> anyhow::Result<()> {
info!("Deploying OpenBao...");
let openbao = OpenbaoScore {
host: OPENBAO_HOST.to_string(),
openshift: false,
};
let inventory = Inventory::autoload();
openbao
.interpret(&inventory, topology)
.await
.context("OpenBao deployment failed")?;
info!("OpenBao deployed successfully");
Ok(())
}
async fn wait_for_openbao_running() -> anyhow::Result<()> {
info!("Waiting for OpenBao pods to be running...");
let output = Command::new("kubectl")
.args([
"--context",
"k3d-harmony-example",
"wait",
"-n",
"openbao",
"--for=condition=Initialized",
"pod/openbao-0",
"--timeout=120s",
])
.output()
.context("Failed to wait for OpenBao pod")?;
if !output.status.success() {
let stderr = String::from_utf8_lossy(&output.stderr);
info!(
"Wait for Initialized condition failed, falling back to a fixed sleep: {}",
stderr
);
}
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
info!("OpenBao pod is running (may be sealed)");
Ok(())
}
#[derive(Debug, Serialize, Deserialize)]
struct OpenBaoInitOutput {
#[serde(rename = "unseal_keys_b64")]
keys: Vec<String>,
root_token: String,
}
async fn init_openbao() -> anyhow::Result<String> {
let data_path = get_openbao_data_path();
std::fs::create_dir_all(&data_path).context("Failed to create openbao data directory")?;
let keys_file = data_path.join("unseal-keys.json");
if keys_file.exists() {
info!("OpenBao already initialized, loading existing keys");
let content = std::fs::read_to_string(&keys_file)?;
let init_output: OpenBaoInitOutput = serde_json::from_str(&content)?;
return Ok(init_output.root_token);
}
info!("Initializing OpenBao...");
let output = Command::new("kubectl")
.args([
"--context",
"k3d-harmony-example",
"exec",
"-n",
"openbao",
"openbao-0",
"--",
"bao",
"operator",
"init",
"-format=json",
])
.output()
.context("Failed to initialize OpenBao")?;
let stderr = String::from_utf8_lossy(&output.stderr);
let stdout = String::from_utf8_lossy(&output.stdout);
if stderr.contains("already initialized") {
info!("OpenBao is already initialized");
return Err(anyhow::anyhow!(
"OpenBao is already initialized but no keys file found. \
Please delete the cluster and try again: k3d cluster delete harmony-example"
));
}
if !output.status.success() {
return Err(anyhow::anyhow!(
"OpenBao init failed with status {}: {}",
output.status,
stderr
));
}
if stdout.trim().is_empty() {
return Err(anyhow::anyhow!(
"OpenBao init returned empty output. stderr: {}",
stderr
));
}
let init_output: OpenBaoInitOutput = serde_json::from_str(&stdout)?;
std::fs::write(&keys_file, serde_json::to_string_pretty(&init_output)?)?;
info!("OpenBao initialized successfully");
info!("Unseal keys saved to {:?}", keys_file);
Ok(init_output.root_token)
}
async fn unseal_openbao(_root_token: &str) -> anyhow::Result<()> {
info!("Unsealing OpenBao...");
let status_output = Command::new("kubectl")
.args([
"--context",
"k3d-harmony-example",
"exec",
"-n",
"openbao",
"openbao-0",
"--",
"bao",
"status",
"-format=json",
])
.output()
.context("Failed to get OpenBao status")?;
#[derive(Deserialize)]
struct StatusOutput {
sealed: bool,
}
if status_output.status.success() {
if let Ok(status) =
serde_json::from_str::<StatusOutput>(&String::from_utf8_lossy(&status_output.stdout))
{
if !status.sealed {
info!("OpenBao is already unsealed");
return Ok(());
}
}
}
let data_path = get_openbao_data_path();
let keys_file = data_path.join("unseal-keys.json");
let content = std::fs::read_to_string(&keys_file)?;
let init_output: OpenBaoInitOutput = serde_json::from_str(&content)?;
for key in &init_output.keys[0..3] {
let output = Command::new("kubectl")
.args([
"--context",
"k3d-harmony-example",
"exec",
"-n",
"openbao",
"openbao-0",
"--",
"bao",
"operator",
"unseal",
key,
])
.output()
.context("Failed to unseal OpenBao")?;
if !output.status.success() {
return Err(anyhow::anyhow!(
"Unseal failed: {}",
String::from_utf8_lossy(&output.stderr)
));
}
}
info!("OpenBao unsealed successfully");
Ok(())
}
async fn run_bao_command(root_token: &str, args: &[&str]) -> anyhow::Result<String> {
let command = args.join(" ");
let shell_command = format!("VAULT_TOKEN={} {}", root_token, command);
let output = Command::new("kubectl")
.args([
"--context",
"k3d-harmony-example",
"exec",
"-n",
"openbao",
"openbao-0",
"--",
"sh",
"-c",
&shell_command,
])
.output()
.context("Failed to run bao command")?;
let stdout = String::from_utf8_lossy(&output.stdout);
let stderr = String::from_utf8_lossy(&output.stderr);
if !output.status.success() {
return Err(anyhow::anyhow!("bao command failed: {}", stderr));
}
Ok(stdout.to_string())
}
async fn configure_openbao_admin_user(root_token: &str) -> anyhow::Result<()> {
info!("Configuring OpenBao with userpass auth...");
	// These two calls are idempotent; ignore "path is already in use" errors on re-runs.
	let _ = run_bao_command(root_token, &["bao", "auth", "enable", "userpass"]).await;
	let _ = run_bao_command(
		root_token,
		&["bao", "secrets", "enable", "-path=secret", "kv-v2"],
	)
	.await;
run_bao_command(
root_token,
&[
"bao",
"write",
"auth/userpass/users/harmony",
"password=harmony-dev-password",
"policies=default",
],
)
.await?;
info!("OpenBao configured with userpass auth");
info!(" Username: harmony");
info!(" Password: harmony-dev-password");
info!(" Root token: {}", root_token);
Ok(())
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).init();
info!("===========================================");
info!("Harmony SSO Example");
info!("Deploys Zitadel + OpenBao on k3d");
info!("===========================================");
ensure_k3d_cluster().await?;
info!("===========================================");
info!("Cluster '{}' is ready", CLUSTER_NAME);
info!(
"Zitadel will be available at: http://{}:{}",
ZITADEL_HOST, ZITADEL_PORT
);
info!(
"OpenBao will be available at: http://{}:{}",
OPENBAO_HOST, OPENBAO_PORT
);
info!("===========================================");
let topology = create_topology();
topology
.ensure_ready()
.await
.context("Failed to initialize topology")?;
cleanup_openbao_webhook().await?;
deploy_openbao(&topology).await?;
wait_for_openbao_running().await?;
let root_token = init_openbao().await?;
unseal_openbao(&root_token).await?;
configure_openbao_admin_user(&root_token).await?;
info!("===========================================");
info!("OpenBao initialized and configured!");
info!("===========================================");
info!("Zitadel: http://{}:{}", ZITADEL_HOST, ZITADEL_PORT);
info!("OpenBao: http://{}:{}", OPENBAO_HOST, OPENBAO_PORT);
info!("===========================================");
info!("OpenBao credentials:");
info!(" Username: harmony");
info!(" Password: harmony-dev-password");
info!("===========================================");
Ok(())
}


@@ -6,6 +6,7 @@ use harmony::{
async fn main() {
let openbao = OpenbaoScore {
host: "openbao.sebastien.sto1.nationtech.io".to_string(),
openshift: false,
};
harmony_cli::run(


@@ -7,6 +7,7 @@ async fn main() {
let zitadel = ZitadelScore {
host: "sso.sto1.nationtech.io".to_string(),
zitadel_version: "v4.12.1".to_string(),
external_secure: true,
};
harmony_cli::run(


@@ -15,6 +15,9 @@ use crate::{
pub struct OpenbaoScore {
/// Host used for external access (ingress)
pub host: String,
/// Set to true when deploying to OpenShift. Defaults to false for k3d/Kubernetes.
#[serde(default)]
pub openshift: bool,
}
impl<T: Topology + K8sclient + HelmCommand> Score<T> for OpenbaoScore {
@@ -24,12 +27,12 @@ impl<T: Topology + K8sclient + HelmCommand> Score<T> for OpenbaoScore {
#[doc(hidden)]
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
// TODO exec pod commands to initialize secret store if not already done
let host = &self.host;
let openshift = self.openshift;
let values_yaml = Some(format!(
r#"global:
openshift: true
openshift: {openshift}
server:
standalone:
enabled: true


@@ -1,3 +1,4 @@
use harmony_k8s::KubernetesDistribution;
use k8s_openapi::api::core::v1::Namespace;
use k8s_openapi::apimachinery::pkg::apis::meta::v1::ObjectMeta;
use k8s_openapi::{ByteString, api::core::v1::Secret};
@@ -63,6 +64,20 @@ pub struct ZitadelScore {
/// External domain (e.g. `"auth.example.com"`).
pub host: String,
pub zitadel_version: String,
/// Set to false for local k3d development (uses HTTP instead of HTTPS).
/// Defaults to true for production deployments.
#[serde(default)]
pub external_secure: bool,
}
impl Default for ZitadelScore {
fn default() -> Self {
Self {
host: Default::default(),
zitadel_version: "v4.12.1".to_string(),
external_secure: true,
}
}
}
impl<T: Topology + K8sclient + HelmCommand + PostgreSQL> Score<T> for ZitadelScore {
@@ -75,6 +90,7 @@ impl<T: Topology + K8sclient + HelmCommand + PostgreSQL> Score<T> for ZitadelSco
Box::new(ZitadelInterpret {
host: self.host.clone(),
zitadel_version: self.zitadel_version.clone(),
external_secure: self.external_secure,
})
}
}
@@ -85,6 +101,7 @@ impl<T: Topology + K8sclient + HelmCommand + PostgreSQL> Score<T> for ZitadelSco
struct ZitadelInterpret {
host: String,
zitadel_version: String,
external_secure: bool,
}
#[async_trait]
@@ -222,7 +239,20 @@ impl<T: Topology + K8sclient + HelmCommand + PostgreSQL> Interpret<T> for Zitade
let admin_password = generate_secure_password(16);
// --- Step 3: Create masterkey secret ------------------------------------
// --- Step 3: Get k8s client and detect distribution -------------------
let k8s_client = topology
.k8s_client()
.await
.map_err(|e| InterpretError::new(format!("Failed to get k8s client: {e}")))?;
let distro = k8s_client.get_k8s_distribution().await.map_err(|e| {
InterpretError::new(format!("Failed to detect k8s distribution: {}", e))
})?;
info!("[Zitadel] Detected Kubernetes distribution: {:?}", distro);
// --- Step 4: Create masterkey secret ------------------------------------
debug!(
"[Zitadel] Creating masterkey secret '{}' in namespace '{}'",
@@ -254,13 +284,7 @@ impl<T: Topology + K8sclient + HelmCommand + PostgreSQL> Interpret<T> for Zitade
..Secret::default()
};
match topology
.k8s_client()
.await
.map_err(|e| InterpretError::new(format!("Failed to get k8s client : {e}")))?
.create(&masterkey_secret, Some(NAMESPACE))
.await
{
match k8s_client.create(&masterkey_secret, Some(NAMESPACE)).await {
Ok(_) => {
info!(
"[Zitadel] Masterkey secret '{}' created",
@@ -288,16 +312,15 @@ impl<T: Topology + K8sclient + HelmCommand + PostgreSQL> Interpret<T> for Zitade
MASTERKEY_SECRET_NAME
);
// --- Step 4: Build Helm values ------------------------------------
// --- Step 5: Build Helm values ------------------------------------
warn!(
"[Zitadel] Applying TLS-enabled ingress defaults for OKD/OpenShift. \
cert-manager annotations are included as optional hints and are \
ignored on clusters without cert-manager."
);
let values_yaml = format!(
r#"image:
let values_yaml = match distro {
KubernetesDistribution::OpenshiftFamily => {
warn!(
"[Zitadel] Applying OpenShift-specific ingress with TLS and cert-manager annotations."
);
format!(
r#"image:
tag: {zitadel_version}
zitadel:
masterkeySecretName: "{MASTERKEY_SECRET_NAME}"
@@ -330,8 +353,6 @@ zitadel:
Username: postgres
SSL:
Mode: require
# Directly import credentials from the postgres secret
# TODO : use a less privileged postgres user
env:
- name: ZITADEL_DATABASE_POSTGRES_USER_USERNAME
valueFrom:
@@ -353,7 +374,6 @@ env:
secretKeyRef:
name: "{pg_superuser_secret}"
key: password
# Security context for OpenShift restricted PSA compliance
podSecurityContext:
runAsNonRoot: true
runAsUser: null
@@ -370,7 +390,6 @@ securityContext:
fsGroup: null
seccompProfile:
type: RuntimeDefault
# Init job security context (runs before main deployment)
initJob:
podSecurityContext:
runAsNonRoot: true
@@ -388,7 +407,6 @@ initJob:
fsGroup: null
seccompProfile:
type: RuntimeDefault
# Setup job security context
setupJob:
podSecurityContext:
runAsNonRoot: true
@@ -417,10 +435,9 @@ ingress:
- path: /
pathType: Prefix
tls:
- hosts:
- secretName: zitadel-tls
hosts:
- "{host}"
secretName: "{host}-tls"
login:
enabled: true
podSecurityContext:
@@ -450,15 +467,177 @@ login:
- path: /ui/v2/login
pathType: Prefix
tls:
- hosts:
- "{host}"
secretName: "{host}-tls""#,
zitadel_version = self.zitadel_version
);
- secretName: zitadel-login-tls
hosts:
- "{host}""#,
zitadel_version = self.zitadel_version,
host = host,
db_host = db_host,
db_port = db_port,
admin_password = admin_password,
pg_superuser_secret = pg_superuser_secret
)
}
KubernetesDistribution::K3sFamily | KubernetesDistribution::Default => {
warn!("[Zitadel] Applying k3d/generic ingress without TLS (HTTP only).");
format!(
r#"image:
tag: {zitadel_version}
zitadel:
masterkeySecretName: "{MASTERKEY_SECRET_NAME}"
configmapConfig:
ExternalDomain: "{host}"
ExternalSecure: false
FirstInstance:
Org:
Human:
UserName: "admin"
Password: "{admin_password}"
FirstName: "Zitadel"
LastName: "Admin"
Email: "admin@zitadel.example.com"
PasswordChangeRequired: true
TLS:
Enabled: false
Database:
Postgres:
Host: "{db_host}"
Port: {db_port}
Database: zitadel
MaxOpenConns: 20
MaxIdleConns: 10
User:
Username: postgres
SSL:
Mode: require
Admin:
Username: postgres
SSL:
Mode: require
env:
- name: ZITADEL_DATABASE_POSTGRES_USER_USERNAME
valueFrom:
secretKeyRef:
name: "{pg_superuser_secret}"
key: user
- name: ZITADEL_DATABASE_POSTGRES_USER_PASSWORD
valueFrom:
secretKeyRef:
name: "{pg_superuser_secret}"
key: password
- name: ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME
valueFrom:
secretKeyRef:
name: "{pg_superuser_secret}"
key: user
- name: ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: "{pg_superuser_secret}"
key: password
podSecurityContext:
runAsNonRoot: true
runAsUser: null
fsGroup: null
seccompProfile:
type: RuntimeDefault
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: null
fsGroup: null
seccompProfile:
type: RuntimeDefault
initJob:
podSecurityContext:
runAsNonRoot: true
runAsUser: null
fsGroup: null
seccompProfile:
type: RuntimeDefault
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: null
fsGroup: null
seccompProfile:
type: RuntimeDefault
setupJob:
podSecurityContext:
runAsNonRoot: true
runAsUser: null
fsGroup: null
seccompProfile:
type: RuntimeDefault
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: null
fsGroup: null
seccompProfile:
type: RuntimeDefault
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
hosts:
- host: "{host}"
paths:
- path: /
pathType: Prefix
tls: []
login:
enabled: true
podSecurityContext:
runAsNonRoot: true
runAsUser: null
fsGroup: null
seccompProfile:
type: RuntimeDefault
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: null
fsGroup: null
seccompProfile:
type: RuntimeDefault
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
hosts:
- host: "{host}"
paths:
- path: /ui/v2/login
pathType: Prefix
tls: []"#,
zitadel_version = self.zitadel_version,
host = host,
db_host = db_host,
db_port = db_port,
admin_password = admin_password,
pg_superuser_secret = pg_superuser_secret
)
}
};
trace!("[Zitadel] Helm values YAML:\n{values_yaml}");
// --- Step 5: Deploy Helm chart ------------------------------------
// --- Step 6: Deploy Helm chart ------------------------------------
info!(
"[Zitadel] Deploying Helm chart 'zitadel/zitadel' as release 'zitadel' in namespace '{NAMESPACE}'"
@@ -482,17 +661,25 @@ login:
.interpret(inventory, topology)
.await;
let protocol = if self.external_secure {
"https"
} else {
"http"
};
match &result {
Ok(_) => info!(
"[Zitadel] Helm chart deployed successfully\n\n\
===== ZITADEL DEPLOYMENT COMPLETE =====\n\
Login URL: https://{host}\n\
Username: admin@zitadel.{host}\n\
Login URL: {protocol}://{host}\n\
Username: admin\n\
Password: {admin_password}\n\n\
IMPORTANT: The password is saved in ConfigMap 'zitadel-config-yaml'\n\
and must be changed on first login. Save the credentials in a\n\
secure location after changing them.\n\
========================================="
=========================================",
protocol = protocol,
host = self.host,
admin_password = admin_password
),
Err(e) => error!("[Zitadel] Helm chart deployment failed: {e}"),
}

harmony_assets/Cargo.toml Normal file

@@ -0,0 +1,56 @@
[package]
name = "harmony_assets"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[lib]
name = "harmony_assets"
[[bin]]
name = "harmony_assets"
path = "src/cli/mod.rs"
required-features = ["cli"]
[features]
default = ["blake3"]
sha256 = ["dep:sha2"]
blake3 = ["dep:blake3"]
s3 = [
"dep:aws-sdk-s3",
"dep:aws-config",
]
cli = [
"dep:clap",
"dep:indicatif",
"dep:inquire",
]
reqwest = ["dep:reqwest"]
[dependencies]
log.workspace = true
tokio.workspace = true
thiserror.workspace = true
directories.workspace = true
sha2 = { version = "0.10", optional = true }
blake3 = { version = "1.5", optional = true }
reqwest = { version = "0.12", optional = true, default-features = false, features = ["stream", "rustls-tls"] }
futures-util.workspace = true
async-trait.workspace = true
url.workspace = true
# CLI only
clap = { version = "4.5", features = ["derive"], optional = true }
indicatif = { version = "0.18", optional = true }
inquire = { version = "0.7", optional = true }
# S3 only
aws-sdk-s3 = { version = "1", optional = true }
aws-config = { version = "1", optional = true }
[dev-dependencies]
tempfile.workspace = true
httptest = "0.16"
pretty_assertions.workspace = true
tokio-test.workspace = true


@@ -0,0 +1,80 @@
use crate::hash::ChecksumAlgo;
use std::path::PathBuf;
use url::Url;
#[derive(Debug, Clone)]
pub struct Asset {
pub url: Url,
pub checksum: String,
pub checksum_algo: ChecksumAlgo,
pub file_name: String,
pub size: Option<u64>,
}
impl Asset {
pub fn new(url: Url, checksum: String, checksum_algo: ChecksumAlgo, file_name: String) -> Self {
Self {
url,
checksum,
checksum_algo,
file_name,
size: None,
}
}
pub fn with_size(mut self, size: u64) -> Self {
self.size = Some(size);
self
}
pub fn formatted_checksum(&self) -> String {
crate::hash::format_checksum(&self.checksum, self.checksum_algo.clone())
}
}
#[derive(Debug, Clone)]
pub struct LocalCache {
pub base_dir: PathBuf,
}
impl LocalCache {
pub fn new(base_dir: PathBuf) -> Self {
Self { base_dir }
}
pub fn path_for(&self, asset: &Asset) -> PathBuf {
let prefix = &asset.checksum[..16.min(asset.checksum.len())];
self.base_dir.join(prefix).join(&asset.file_name)
}
pub fn cache_key_dir(&self, asset: &Asset) -> PathBuf {
let prefix = &asset.checksum[..16.min(asset.checksum.len())];
self.base_dir.join(prefix)
}
pub async fn ensure_dir(&self, asset: &Asset) -> Result<(), crate::errors::AssetError> {
let dir = self.cache_key_dir(asset);
tokio::fs::create_dir_all(&dir)
.await
.map_err(|e| crate::errors::AssetError::IoError(e))?;
Ok(())
}
}
impl Default for LocalCache {
fn default() -> Self {
let base_dir = directories::ProjectDirs::from("io", "NationTech", "Harmony")
.map(|dirs| dirs.cache_dir().join("assets"))
.unwrap_or_else(|| PathBuf::from("/tmp/harmony_assets"));
Self::new(base_dir)
}
}
#[derive(Debug, Clone)]
pub struct StoredAsset {
pub url: Url,
pub checksum: String,
pub checksum_algo: ChecksumAlgo,
pub size: u64,
pub key: String,
}


@@ -0,0 +1,25 @@
use clap::Parser;
#[derive(Parser, Debug)]
pub struct ChecksumArgs {
pub path: String,
#[arg(short, long, default_value = "blake3")]
pub algo: String,
}
pub async fn execute(args: ChecksumArgs) -> Result<(), Box<dyn std::error::Error>> {
use harmony_assets::{ChecksumAlgo, checksum_for_path};
let path = std::path::Path::new(&args.path);
if !path.exists() {
eprintln!("Error: File not found: {}", args.path);
std::process::exit(1);
}
let algo = ChecksumAlgo::from_str(&args.algo)?;
let checksum = checksum_for_path(path, algo.clone()).await?;
println!("{}:{} {}", algo.name(), checksum, args.path);
Ok(())
}


@@ -0,0 +1,82 @@
use clap::Parser;
#[derive(Parser, Debug)]
pub struct DownloadArgs {
pub url: String,
pub checksum: String,
#[arg(short, long)]
pub output: Option<String>,
#[arg(short, long, default_value = "blake3")]
pub algo: String,
}
pub async fn execute(args: DownloadArgs) -> Result<(), Box<dyn std::error::Error>> {
use harmony_assets::{
Asset, AssetStore, ChecksumAlgo, LocalCache, LocalStore, verify_checksum,
};
use indicatif::{ProgressBar, ProgressStyle};
use url::Url;
let url = Url::parse(&args.url).map_err(|e| format!("Invalid URL: {}", e))?;
    let file_name = args
        .output
        .or_else(|| {
            // Derive the name from the URL path so query strings don't end up in the filename.
            url.path_segments()
                .and_then(|segments| segments.last().map(|s| s.to_string()))
                .filter(|name| !name.is_empty())
        })
        .unwrap_or_else(|| "download".to_string());
let algo = ChecksumAlgo::from_str(&args.algo)?;
let asset = Asset::new(url, args.checksum.clone(), algo.clone(), file_name);
let cache = LocalCache::default();
println!("Downloading: {}", asset.url);
println!("Checksum: {}:{}", algo.name(), args.checksum);
println!("Cache dir: {:?}", cache.base_dir);
    // `asset.size` is never populated on this path, so start the bar at the size
    // hint (usually 0) and adopt the server-reported Content-Length once the
    // progress callback provides it.
    let pb = ProgressBar::new(asset.size.unwrap_or(0));
    pb.set_style(
        ProgressStyle::default_bar()
            .template("{spinner:.green} [{elapsed_precise}] [{bar:40}] {bytes}/{total_bytes} ({bytes_per_sec})")?
            .progress_chars("=>-"),
    );
    let progress_fn: Box<dyn Fn(u64, Option<u64>) + Send> = Box::new({
        let pb = pb.clone();
        move |bytes, total| {
            if let Some(total) = total {
                pb.set_length(total);
            }
            pb.set_position(bytes);
        }
    });
    let store = LocalStore::default();
    let result = store.fetch(&asset, &cache, Some(progress_fn)).await;
    pb.finish();
match result {
Ok(path) => {
verify_checksum(&path, &args.checksum, algo).await?;
println!("\nDownloaded to: {:?}", path);
println!("Checksum verified OK");
Ok(())
}
Err(e) => {
eprintln!("Download failed: {}", e);
std::process::exit(1);
}
}
}


@@ -0,0 +1,49 @@
pub mod checksum;
pub mod download;
pub mod upload;
pub mod verify;
use clap::{Parser, Subcommand};
#[derive(Parser, Debug)]
#[command(
name = "harmony_assets",
version,
about = "Asset management CLI for downloading, uploading, and verifying large binary assets"
)]
pub struct Cli {
#[command(subcommand)]
pub command: Commands,
}
#[derive(Subcommand, Debug)]
pub enum Commands {
Upload(upload::UploadArgs),
Download(download::DownloadArgs),
Checksum(checksum::ChecksumArgs),
Verify(verify::VerifyArgs),
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
log::info!("Starting harmony_assets CLI");
let cli = Cli::parse();
match cli.command {
Commands::Upload(args) => {
upload::execute(args).await?;
}
Commands::Download(args) => {
download::execute(args).await?;
}
Commands::Checksum(args) => {
checksum::execute(args).await?;
}
Commands::Verify(args) => {
verify::execute(args).await?;
}
}
Ok(())
}


@@ -0,0 +1,166 @@
use clap::Parser;
use harmony_assets::{S3Config, S3Store, checksum_for_path_with_progress};
use indicatif::{ProgressBar, ProgressStyle};
use std::path::Path;
#[derive(Parser, Debug)]
pub struct UploadArgs {
pub source: String,
pub key: Option<String>,
#[arg(short, long)]
pub content_type: Option<String>,
    // Value-taking so the default can actually be disabled: `--public-read false`.
    #[arg(short, long, default_value_t = true, action = clap::ArgAction::Set)]
    pub public_read: bool,
#[arg(short, long)]
pub endpoint: Option<String>,
#[arg(short, long)]
pub bucket: Option<String>,
#[arg(short, long)]
pub region: Option<String>,
    // Long-only: a derived short `-a` would collide with `--algo`'s `-a`.
    #[arg(long)]
    pub access_key_id: Option<String>,
    #[arg(long)]
    pub secret_access_key: Option<String>,
#[arg(short, long, default_value = "blake3")]
pub algo: String,
#[arg(short, long, default_value_t = false)]
pub yes: bool,
}
pub async fn execute(args: UploadArgs) -> Result<(), Box<dyn std::error::Error>> {
let source_path = Path::new(&args.source);
if !source_path.exists() {
eprintln!("Error: File not found: {}", args.source);
std::process::exit(1);
}
let key = args.key.unwrap_or_else(|| {
source_path
.file_name()
.and_then(|n| n.to_str())
.unwrap_or("upload")
.to_string()
});
let metadata = tokio::fs::metadata(source_path)
.await
.map_err(|e| format!("Failed to read file metadata: {}", e))?;
let total_size = metadata.len();
let endpoint = args
.endpoint
.or_else(|| std::env::var("S3_ENDPOINT").ok())
.unwrap_or_default();
let bucket = args
.bucket
.or_else(|| std::env::var("S3_BUCKET").ok())
.unwrap_or_else(|| {
inquire::Text::new("S3 Bucket name:")
.with_default("harmony-assets")
.prompt()
.unwrap()
});
let region = args
.region
.or_else(|| std::env::var("S3_REGION").ok())
.unwrap_or_else(|| {
inquire::Text::new("S3 Region:")
.with_default("us-east-1")
.prompt()
.unwrap()
});
let access_key_id = args
.access_key_id
.or_else(|| std::env::var("AWS_ACCESS_KEY_ID").ok());
let secret_access_key = args
.secret_access_key
.or_else(|| std::env::var("AWS_SECRET_ACCESS_KEY").ok());
let config = S3Config {
endpoint: if endpoint.is_empty() {
None
} else {
Some(endpoint)
},
bucket: bucket.clone(),
region: region.clone(),
access_key_id,
secret_access_key,
public_read: args.public_read,
};
println!("Upload Configuration:");
println!(" Source: {}", args.source);
println!(" S3 Key: {}", key);
println!(" Bucket: {}", bucket);
println!(" Region: {}", region);
println!(
" Size: {} bytes ({} MB)",
total_size,
total_size as f64 / 1024.0 / 1024.0
);
println!();
if !args.yes {
let confirm = inquire::Confirm::new("Proceed with upload?")
.with_default(true)
.prompt()?;
if !confirm {
println!("Upload cancelled.");
return Ok(());
}
}
let store = S3Store::new(config)
.await
.map_err(|e| format!("Failed to initialize S3 client: {}", e))?;
println!("Computing checksum while uploading...\n");
let pb = ProgressBar::new(total_size);
pb.set_style(
ProgressStyle::default_bar()
.template("{spinner:.green} [{elapsed_precise}] [{bar:40}] {bytes}/{total_bytes} ({bytes_per_sec})")?
.progress_chars("=>-"),
);
    {
        let algo = harmony_assets::ChecksumAlgo::from_str(&args.algo)?;
        let pb_clone = pb.clone();
        // `Handle::block_on` panics when called from within a running Tokio
        // runtime; await the future directly instead.
        let _checksum = checksum_for_path_with_progress(source_path, algo, |read, _total| {
            pb_clone.set_position(read);
        })
        .await?;
    }
pb.set_position(total_size);
let result = store
.store(source_path, &key, args.content_type.as_deref())
.await;
pb.finish();
match result {
Ok(asset) => {
println!("\nUpload complete!");
println!(" URL: {}", asset.url);
println!(
" Checksum: {}:{}",
asset.checksum_algo.name(),
asset.checksum
);
println!(" Size: {} bytes", asset.size);
println!(" Key: {}", asset.key);
Ok(())
}
Err(e) => {
eprintln!("Upload failed: {}", e);
std::process::exit(1);
}
}
}


@@ -0,0 +1,32 @@
use clap::Parser;
#[derive(Parser, Debug)]
pub struct VerifyArgs {
pub path: String,
pub expected: String,
#[arg(short, long, default_value = "blake3")]
pub algo: String,
}
pub async fn execute(args: VerifyArgs) -> Result<(), Box<dyn std::error::Error>> {
use harmony_assets::{ChecksumAlgo, verify_checksum};
let path = std::path::Path::new(&args.path);
if !path.exists() {
eprintln!("Error: File not found: {}", args.path);
std::process::exit(1);
}
let algo = ChecksumAlgo::from_str(&args.algo)?;
match verify_checksum(path, &args.expected, algo).await {
Ok(()) => {
println!("Checksum verified OK");
Ok(())
}
Err(e) => {
eprintln!("Verification FAILED: {}", e);
std::process::exit(1);
}
}
}


@@ -0,0 +1,37 @@
use std::path::PathBuf;
use thiserror::Error;
#[derive(Debug, Error)]
pub enum AssetError {
#[error("File not found: {0}")]
FileNotFound(PathBuf),
#[error("Checksum mismatch for '{path}': expected {expected}, got {actual}")]
ChecksumMismatch {
path: PathBuf,
expected: String,
actual: String,
},
#[error("Checksum algorithm not available: {0}. Enable the corresponding feature flag.")]
ChecksumAlgoNotAvailable(String),
#[error("Download failed: {0}")]
DownloadFailed(String),
#[error("S3 error: {0}")]
S3Error(String),
#[error("IO error: {0}")]
IoError(#[from] std::io::Error),
#[cfg(feature = "reqwest")]
#[error("HTTP error: {0}")]
HttpError(#[from] reqwest::Error),
#[error("Store error: {0}")]
StoreError(String),
#[error("Configuration error: {0}")]
ConfigError(String),
}

harmony_assets/src/hash.rs Normal file

@@ -0,0 +1,233 @@
use crate::errors::AssetError;
use std::path::Path;
#[cfg(feature = "blake3")]
use blake3::Hasher as B3Hasher;
#[cfg(feature = "sha256")]
use sha2::{Digest, Sha256};
#[derive(Debug, Clone)]
pub enum ChecksumAlgo {
BLAKE3,
SHA256,
}
impl Default for ChecksumAlgo {
fn default() -> Self {
#[cfg(feature = "blake3")]
return ChecksumAlgo::BLAKE3;
#[cfg(not(feature = "blake3"))]
return ChecksumAlgo::SHA256;
}
}
impl ChecksumAlgo {
pub fn name(&self) -> &'static str {
match self {
ChecksumAlgo::BLAKE3 => "blake3",
ChecksumAlgo::SHA256 => "sha256",
}
}
pub fn from_str(s: &str) -> Result<Self, AssetError> {
match s.to_lowercase().as_str() {
"blake3" | "b3" => Ok(ChecksumAlgo::BLAKE3),
"sha256" | "sha-256" => Ok(ChecksumAlgo::SHA256),
_ => Err(AssetError::ChecksumAlgoNotAvailable(s.to_string())),
}
}
}
impl std::fmt::Display for ChecksumAlgo {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.name())
}
}
pub async fn checksum_for_file<R>(reader: R, algo: ChecksumAlgo) -> Result<String, AssetError>
where
R: tokio::io::AsyncRead + Unpin,
{
match algo {
#[cfg(feature = "blake3")]
ChecksumAlgo::BLAKE3 => {
let mut hasher = B3Hasher::new();
let mut reader = reader;
let mut buf = vec![0u8; 65536];
loop {
let n = tokio::io::AsyncReadExt::read(&mut reader, &mut buf).await?;
if n == 0 {
break;
}
hasher.update(&buf[..n]);
}
Ok(hasher.finalize().to_hex().to_string())
}
#[cfg(not(feature = "blake3"))]
ChecksumAlgo::BLAKE3 => Err(AssetError::ChecksumAlgoNotAvailable("blake3".to_string())),
#[cfg(feature = "sha256")]
ChecksumAlgo::SHA256 => {
let mut hasher = Sha256::new();
let mut reader = reader;
let mut buf = vec![0u8; 65536];
loop {
let n = tokio::io::AsyncReadExt::read(&mut reader, &mut buf).await?;
if n == 0 {
break;
}
hasher.update(&buf[..n]);
}
Ok(format!("{:x}", hasher.finalize()))
}
#[cfg(not(feature = "sha256"))]
ChecksumAlgo::SHA256 => Err(AssetError::ChecksumAlgoNotAvailable("sha256".to_string())),
}
}
pub async fn checksum_for_path(path: &Path, algo: ChecksumAlgo) -> Result<String, AssetError> {
let file = tokio::fs::File::open(path)
.await
.map_err(|e| AssetError::IoError(e))?;
let reader = tokio::io::BufReader::with_capacity(65536, file);
checksum_for_file(reader, algo).await
}
pub async fn checksum_for_path_with_progress<F>(
path: &Path,
algo: ChecksumAlgo,
mut progress: F,
) -> Result<String, AssetError>
where
F: FnMut(u64, Option<u64>) + Send,
{
let file = tokio::fs::File::open(path)
.await
.map_err(|e| AssetError::IoError(e))?;
let metadata = file.metadata().await.map_err(|e| AssetError::IoError(e))?;
let total = Some(metadata.len());
let reader = tokio::io::BufReader::with_capacity(65536, file);
match algo {
#[cfg(feature = "blake3")]
ChecksumAlgo::BLAKE3 => {
let mut hasher = B3Hasher::new();
let mut reader = reader;
let mut buf = vec![0u8; 65536];
let mut read: u64 = 0;
loop {
let n = tokio::io::AsyncReadExt::read(&mut reader, &mut buf).await?;
if n == 0 {
break;
}
hasher.update(&buf[..n]);
read += n as u64;
progress(read, total);
}
Ok(hasher.finalize().to_hex().to_string())
}
#[cfg(not(feature = "blake3"))]
ChecksumAlgo::BLAKE3 => Err(AssetError::ChecksumAlgoNotAvailable("blake3".to_string())),
#[cfg(feature = "sha256")]
ChecksumAlgo::SHA256 => {
let mut hasher = Sha256::new();
let mut reader = reader;
let mut buf = vec![0u8; 65536];
let mut read: u64 = 0;
loop {
let n = tokio::io::AsyncReadExt::read(&mut reader, &mut buf).await?;
if n == 0 {
break;
}
hasher.update(&buf[..n]);
read += n as u64;
progress(read, total);
}
Ok(format!("{:x}", hasher.finalize()))
}
#[cfg(not(feature = "sha256"))]
ChecksumAlgo::SHA256 => Err(AssetError::ChecksumAlgoNotAvailable("sha256".to_string())),
}
}
pub async fn verify_checksum(
path: &Path,
expected: &str,
algo: ChecksumAlgo,
) -> Result<(), AssetError> {
let actual = checksum_for_path(path, algo).await?;
let expected_clean = expected
.trim_start_matches("blake3:")
.trim_start_matches("sha256:")
.trim_start_matches("b3:")
.trim_start_matches("sha-256:");
if actual != expected_clean {
return Err(AssetError::ChecksumMismatch {
path: path.to_path_buf(),
expected: expected_clean.to_string(),
actual,
});
}
Ok(())
}
pub fn format_checksum(checksum: &str, algo: ChecksumAlgo) -> String {
format!("{}:{}", algo.name(), checksum)
}
#[cfg(test)]
mod tests {
use super::*;
use std::io::Write;
use tempfile::NamedTempFile;
async fn create_temp_file(content: &[u8]) -> NamedTempFile {
let mut file = NamedTempFile::new().unwrap();
file.write_all(content).unwrap();
file.flush().unwrap();
file
}
#[tokio::test]
async fn test_checksum_blake3() {
let file = create_temp_file(b"hello world").await;
let checksum = checksum_for_path(file.path(), ChecksumAlgo::BLAKE3)
.await
.unwrap();
assert_eq!(
checksum,
"d74981efa70a0c880b8d8c1985d075dbcbf679b99a5f9914e5aaf96b831a9e24"
);
}
#[tokio::test]
async fn test_verify_checksum_success() {
let file = create_temp_file(b"hello world").await;
let checksum = checksum_for_path(file.path(), ChecksumAlgo::BLAKE3)
.await
.unwrap();
let result = verify_checksum(file.path(), &checksum, ChecksumAlgo::BLAKE3).await;
assert!(result.is_ok());
}
#[tokio::test]
async fn test_verify_checksum_failure() {
let file = create_temp_file(b"hello world").await;
let result = verify_checksum(
file.path(),
"blake3:0000000000000000000000000000000000000000000000000000000000000000",
ChecksumAlgo::BLAKE3,
)
.await;
assert!(matches!(result, Err(AssetError::ChecksumMismatch { .. })));
}
#[tokio::test]
async fn test_checksum_with_prefix() {
let file = create_temp_file(b"hello world").await;
let checksum = checksum_for_path(file.path(), ChecksumAlgo::BLAKE3)
.await
.unwrap();
let formatted = format_checksum(&checksum, ChecksumAlgo::BLAKE3);
assert!(formatted.starts_with("blake3:"));
}
}

harmony_assets/src/lib.rs Normal file

@@ -0,0 +1,14 @@
pub mod asset;
pub mod errors;
pub mod hash;
pub mod store;
pub use asset::{Asset, LocalCache, StoredAsset};
pub use errors::AssetError;
pub use hash::{ChecksumAlgo, checksum_for_path, checksum_for_path_with_progress, verify_checksum};
pub use store::AssetStore;
#[cfg(feature = "s3")]
pub use store::{S3Config, S3Store};
pub use store::local::LocalStore;


@@ -0,0 +1,137 @@
use crate::asset::{Asset, LocalCache};
use crate::errors::AssetError;
use crate::store::AssetStore;
use async_trait::async_trait;
use std::path::PathBuf;
use url::Url;
#[cfg(feature = "reqwest")]
use crate::hash::verify_checksum;
#[derive(Debug, Clone)]
pub struct LocalStore {
base_dir: PathBuf,
}
impl LocalStore {
pub fn new(base_dir: PathBuf) -> Self {
Self { base_dir }
}
pub fn with_cache(cache: LocalCache) -> Self {
Self {
base_dir: cache.base_dir.clone(),
}
}
pub fn base_dir(&self) -> &PathBuf {
&self.base_dir
}
}
impl Default for LocalStore {
fn default() -> Self {
Self::new(LocalCache::default().base_dir)
}
}
#[async_trait]
impl AssetStore for LocalStore {
#[cfg(feature = "reqwest")]
async fn fetch(
&self,
asset: &Asset,
cache: &LocalCache,
progress: Option<Box<dyn Fn(u64, Option<u64>) + Send>>,
) -> Result<PathBuf, AssetError> {
use futures_util::StreamExt;
let dest_path = cache.path_for(asset);
if dest_path.exists() {
let verification =
verify_checksum(&dest_path, &asset.checksum, asset.checksum_algo.clone()).await;
if verification.is_ok() {
log::debug!("Asset already cached at {:?}", dest_path);
return Ok(dest_path);
} else {
log::warn!("Cached file failed checksum verification, re-downloading");
tokio::fs::remove_file(&dest_path)
.await
.map_err(|e| AssetError::IoError(e))?;
}
}
cache.ensure_dir(asset).await?;
log::info!("Downloading asset from {}", asset.url);
let client = reqwest::Client::new();
let response = client
.get(asset.url.as_str())
.send()
.await
.map_err(|e| AssetError::DownloadFailed(e.to_string()))?;
if !response.status().is_success() {
return Err(AssetError::DownloadFailed(format!(
"HTTP {}: {}",
response.status(),
asset.url
)));
}
let total_size = response.content_length();
let mut file = tokio::fs::File::create(&dest_path)
.await
.map_err(|e| AssetError::IoError(e))?;
let mut stream = response.bytes_stream();
let mut downloaded: u64 = 0;
while let Some(chunk_result) = stream.next().await {
let chunk = chunk_result.map_err(|e| AssetError::DownloadFailed(e.to_string()))?;
tokio::io::AsyncWriteExt::write_all(&mut file, &chunk)
.await
.map_err(|e| AssetError::IoError(e))?;
downloaded += chunk.len() as u64;
if let Some(ref p) = progress {
p(downloaded, total_size);
}
}
tokio::io::AsyncWriteExt::flush(&mut file)
.await
.map_err(|e| AssetError::IoError(e))?;
drop(file);
verify_checksum(&dest_path, &asset.checksum, asset.checksum_algo.clone()).await?;
log::info!("Asset downloaded and verified: {:?}", dest_path);
Ok(dest_path)
}
#[cfg(not(feature = "reqwest"))]
async fn fetch(
&self,
_asset: &Asset,
_cache: &LocalCache,
_progress: Option<Box<dyn Fn(u64, Option<u64>) + Send>>,
) -> Result<PathBuf, AssetError> {
Err(AssetError::DownloadFailed(
"HTTP downloads not available. Enable the 'reqwest' feature.".to_string(),
))
}
async fn exists(&self, key: &str) -> Result<bool, AssetError> {
let path = self.base_dir.join(key);
Ok(path.exists())
}
fn url_for(&self, key: &str) -> Result<Url, AssetError> {
let path = self.base_dir.join(key);
Url::from_file_path(&path)
.map_err(|_| AssetError::StoreError("Could not convert path to file URL".to_string()))
}
}


@@ -0,0 +1,27 @@
use crate::asset::{Asset, LocalCache};
use crate::errors::AssetError;
use async_trait::async_trait;
use std::path::PathBuf;
use url::Url;
pub mod local;
#[cfg(feature = "s3")]
pub mod s3;
#[async_trait]
pub trait AssetStore: Send + Sync {
async fn fetch(
&self,
asset: &Asset,
cache: &LocalCache,
progress: Option<Box<dyn Fn(u64, Option<u64>) + Send>>,
) -> Result<PathBuf, AssetError>;
async fn exists(&self, key: &str) -> Result<bool, AssetError>;
fn url_for(&self, key: &str) -> Result<Url, AssetError>;
}
#[cfg(feature = "s3")]
pub use s3::{S3Config, S3Store};


@@ -0,0 +1,235 @@
use crate::asset::StoredAsset;
use crate::errors::AssetError;
use crate::hash::ChecksumAlgo;
use async_trait::async_trait;
use aws_sdk_s3::Client as S3Client;
use aws_sdk_s3::primitives::ByteStream;
use aws_sdk_s3::types::ObjectCannedAcl;
use std::path::Path;
use url::Url;
#[derive(Debug, Clone)]
pub struct S3Config {
pub endpoint: Option<String>,
pub bucket: String,
pub region: String,
pub access_key_id: Option<String>,
pub secret_access_key: Option<String>,
pub public_read: bool,
}
impl Default for S3Config {
fn default() -> Self {
Self {
endpoint: None,
bucket: String::new(),
region: String::from("us-east-1"),
access_key_id: None,
secret_access_key: None,
public_read: true,
}
}
}
#[derive(Debug, Clone)]
pub struct S3Store {
client: S3Client,
config: S3Config,
}
impl S3Store {
pub async fn new(config: S3Config) -> Result<Self, AssetError> {
let mut cfg_builder = aws_config::defaults(aws_config::BehaviorVersion::latest());
if let Some(ref endpoint) = config.endpoint {
cfg_builder = cfg_builder.endpoint_url(endpoint);
}
let cfg = cfg_builder.load().await;
let client = S3Client::new(&cfg);
Ok(Self { client, config })
}
pub fn config(&self) -> &S3Config {
&self.config
}
fn public_url(&self, key: &str) -> Result<Url, AssetError> {
let url_str = if let Some(ref endpoint) = self.config.endpoint {
format!(
"{}/{}/{}",
endpoint.trim_end_matches('/'),
self.config.bucket,
key
)
} else {
format!(
"https://{}.s3.{}.amazonaws.com/{}",
self.config.bucket, self.config.region, key
)
};
Url::parse(&url_str).map_err(|e| AssetError::S3Error(e.to_string()))
}
pub async fn store(
&self,
source: &Path,
key: &str,
content_type: Option<&str>,
) -> Result<StoredAsset, AssetError> {
let metadata = tokio::fs::metadata(source)
.await
.map_err(|e| AssetError::IoError(e))?;
let size = metadata.len();
let checksum = crate::checksum_for_path(source, ChecksumAlgo::default())
.await
.map_err(|e| AssetError::StoreError(e.to_string()))?;
let body = ByteStream::from_path(source).await.map_err(|e| {
AssetError::IoError(std::io::Error::new(
std::io::ErrorKind::Other,
e.to_string(),
))
})?;
let mut put_builder = self
.client
.put_object()
.bucket(&self.config.bucket)
.key(key)
.body(body)
.content_length(size as i64)
.metadata("checksum", &checksum);
if self.config.public_read {
put_builder = put_builder.acl(ObjectCannedAcl::PublicRead);
}
if let Some(ct) = content_type {
put_builder = put_builder.content_type(ct);
}
put_builder
.send()
.await
.map_err(|e| AssetError::S3Error(e.to_string()))?;
Ok(StoredAsset {
url: self.public_url(key)?,
checksum,
checksum_algo: ChecksumAlgo::default(),
size,
key: key.to_string(),
})
}
}
use crate::store::AssetStore;
use crate::{Asset, LocalCache};
#[async_trait]
impl AssetStore for S3Store {
async fn fetch(
&self,
asset: &Asset,
cache: &LocalCache,
progress: Option<Box<dyn Fn(u64, Option<u64>) + Send>>,
) -> Result<std::path::PathBuf, AssetError> {
let dest_path = cache.path_for(asset);
if dest_path.exists() {
let verification =
crate::verify_checksum(&dest_path, &asset.checksum, asset.checksum_algo.clone())
.await;
if verification.is_ok() {
log::debug!("Asset already cached at {:?}", dest_path);
return Ok(dest_path);
}
}
cache.ensure_dir(asset).await?;
let key = extract_s3_key(&asset.url, &self.config.bucket)?;
log::info!(
"Downloading asset from s3://{}/{}",
self.config.bucket,
key
);
let obj = self
.client
.get_object()
.bucket(&self.config.bucket)
.key(&key)
.send()
.await
.map_err(|e| AssetError::S3Error(e.to_string()))?;
let total_size = obj.content_length.map(|len| len as u64);
let mut file = tokio::fs::File::create(&dest_path)
.await
.map_err(|e| AssetError::IoError(e))?;
let mut stream = obj.body;
let mut downloaded: u64 = 0;
while let Some(chunk_result) = stream.next().await {
let chunk = chunk_result.map_err(|e| AssetError::S3Error(e.to_string()))?;
tokio::io::AsyncWriteExt::write_all(&mut file, &chunk)
.await
.map_err(|e| AssetError::IoError(e))?;
downloaded += chunk.len() as u64;
if let Some(ref p) = progress {
p(downloaded, total_size);
}
}
tokio::io::AsyncWriteExt::flush(&mut file)
.await
.map_err(|e| AssetError::IoError(e))?;
drop(file);
crate::verify_checksum(&dest_path, &asset.checksum, asset.checksum_algo.clone()).await?;
Ok(dest_path)
}
async fn exists(&self, key: &str) -> Result<bool, AssetError> {
match self
.client
.head_object()
.bucket(&self.config.bucket)
.key(key)
.send()
.await
{
Ok(_) => Ok(true),
Err(e) => {
let err_str = e.to_string();
if err_str.contains("NoSuchKey") || err_str.contains("NotFound") {
Ok(false)
} else {
Err(AssetError::S3Error(err_str))
}
}
}
}
fn url_for(&self, key: &str) -> Result<Url, AssetError> {
self.public_url(key)
}
}
fn extract_s3_key(url: &Url, bucket: &str) -> Result<String, AssetError> {
let path = url.path().trim_start_matches('/');
if let Some(stripped) = path.strip_prefix(&format!("{}/", bucket)) {
Ok(stripped.to_string())
} else if path == bucket {
Ok(String::new())
} else {
Ok(path.to_string())
}
}


@@ -18,6 +18,9 @@ interactive-parse = "0.1.5"
log.workspace = true
directories.workspace = true
inquire.workspace = true
sqlx = { workspace = true, features = ["runtime-tokio", "sqlite"] }
anyhow.workspace = true
env_logger.workspace = true
[dev-dependencies]
pretty_assertions.workspace = true


@@ -0,0 +1,88 @@
//! Basic example showing harmony_config with SQLite backend
//!
//! This example demonstrates:
//! - Zero-setup SQLite backend (no configuration needed)
//! - Using the `#[derive(Config)]` macro
//! - Environment variable override (HARMONY_CONFIG_TestConfig overrides SQLite)
//! - Direct set/get operations (prompting requires a TTY)
//!
//! Run with:
//! - `cargo run --example basic` - creates/reads config from SQLite
//! - `HARMONY_CONFIG_TestConfig='{"name":"from_env","count":42}' cargo run --example basic` - uses env var
use std::sync::Arc;
use harmony_config::{Config, ConfigManager, EnvSource, SqliteSource};
use log::info;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, Config)]
struct TestConfig {
name: String,
count: u32,
}
impl Default for TestConfig {
fn default() -> Self {
Self {
name: "default_name".to_string(),
count: 0,
}
}
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
env_logger::init();
let sqlite = SqliteSource::default().await?;
let manager = ConfigManager::new(vec![Arc::new(EnvSource), Arc::new(sqlite)]);
info!("1. Attempting to get TestConfig (expect NotFound on first run)...");
match manager.get::<TestConfig>().await {
Ok(config) => {
info!(" Found config: {:?}", config);
}
Err(harmony_config::ConfigError::NotFound { .. }) => {
info!(" NotFound - as expected on first run");
}
Err(e) => {
info!(" Error: {:?}", e);
}
}
info!("\n2. Setting config directly...");
let config = TestConfig {
name: "from_code".to_string(),
count: 42,
};
manager.set(&config).await?;
info!(" Set config: {:?}", config);
info!("\n3. Getting config back from SQLite...");
let retrieved: TestConfig = manager.get().await?;
info!(" Retrieved: {:?}", retrieved);
info!("\n4. Using env override...");
info!(" Env var HARMONY_CONFIG_TestConfig overrides SQLite");
let env_config = TestConfig {
name: "from_env".to_string(),
count: 99,
};
unsafe {
std::env::set_var(
"HARMONY_CONFIG_TestConfig",
serde_json::to_string(&env_config)?,
);
}
let from_env: TestConfig = manager.get().await?;
info!(" Got from env: {:?}", from_env);
unsafe {
std::env::remove_var("HARMONY_CONFIG_TestConfig");
}
info!("\nDone! Config persisted at ~/.local/share/harmony/config/config.db");
Ok(())
}


@@ -0,0 +1,190 @@
//! End-to-end example: harmony_config with OpenBao as a ConfigSource
//!
//! This example demonstrates the full config resolution chain:
//! EnvSource → SqliteSource → StoreSource<OpenbaoSecretStore>
//!
//! When OpenBao is unreachable, the chain gracefully falls through to SQLite.
//!
//! **Prerequisites**:
//! - OpenBao must be initialized and unsealed
//! - KV v2 engine must be enabled at the `OPENBAO_KV_MOUNT` path (default: `secret`)
//! - Auth method must be enabled at the `OPENBAO_AUTH_MOUNT` path (default: `userpass`)
//!
//! **Environment variables**:
//! - `OPENBAO_URL` (required for OpenBao): URL of the OpenBao server
//! - `OPENBAO_TOKEN` (optional): Use token auth instead of userpass
//! - `OPENBAO_USERNAME` + `OPENBAO_PASSWORD` (optional): Userpass auth
//! - `OPENBAO_KV_MOUNT` (default: `secret`): KV v2 engine mount path
//! - `OPENBAO_AUTH_MOUNT` (default: `userpass`): Auth method mount path
//! - `OPENBAO_SKIP_TLS` (default: `false`): Skip TLS verification
//! - `HARMONY_SSO_URL` + `HARMONY_SSO_CLIENT_ID` (optional): Zitadel OIDC device flow (RFC 8628)
//!
//! **Run**:
//! ```bash
//! # Without OpenBao (SqliteSource only):
//! cargo run --example openbao_chain
//!
//! # With OpenBao (full chain):
//! export OPENBAO_URL="http://127.0.0.1:8200"
//! export OPENBAO_TOKEN="<your-token>"
//! cargo run --example openbao_chain
//! ```
//!
//! **Setup OpenBao** (if needed):
//! ```bash
//! # Port-forward to local OpenBao
//! kubectl port-forward svc/openbao -n openbao 8200:8200 &
//!
//! # Initialize (one-time)
//! kubectl exec -n openbao openbao-0 -- bao operator init
//!
//! # Enable KV and userpass (one-time)
//! kubectl exec -n openbao openbao-0 -- bao secrets enable -path=secret kv-v2
//! kubectl exec -n openbao openbao-0 -- bao auth enable userpass
//!
//! # Create test user
//! kubectl exec -n openbao openbao-0 -- bao write auth/userpass/users/testuser \
//! password="testpass" policies="default"
//! ```
use std::sync::Arc;
use harmony_config::{Config, ConfigManager, ConfigSource, EnvSource, SqliteSource, StoreSource};
use harmony_secret::OpenbaoSecretStore;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, PartialEq)]
struct AppConfig {
host: String,
port: u16,
}
impl Default for AppConfig {
fn default() -> Self {
Self {
host: "localhost".to_string(),
port: 8080,
}
}
}
impl Config for AppConfig {
const KEY: &'static str = "AppConfig";
}
async fn build_manager() -> ConfigManager {
let sqlite = Arc::new(
SqliteSource::default()
.await
.expect("Failed to open SQLite database"),
);
let env_source: Arc<dyn ConfigSource> = Arc::new(EnvSource);
let openbao_url = std::env::var("OPENBAO_URL")
.or_else(|_| std::env::var("VAULT_ADDR"))
.ok();
match openbao_url {
Some(url) => {
let kv_mount =
std::env::var("OPENBAO_KV_MOUNT").unwrap_or_else(|_| "secret".to_string());
let auth_mount =
std::env::var("OPENBAO_AUTH_MOUNT").unwrap_or_else(|_| "userpass".to_string());
let skip_tls = std::env::var("OPENBAO_SKIP_TLS")
.map(|v| v == "true")
.unwrap_or(false);
match OpenbaoSecretStore::new(
url,
kv_mount,
auth_mount,
skip_tls,
std::env::var("OPENBAO_TOKEN").ok(),
std::env::var("OPENBAO_USERNAME").ok(),
std::env::var("OPENBAO_PASSWORD").ok(),
std::env::var("HARMONY_SSO_URL").ok(),
std::env::var("HARMONY_SSO_CLIENT_ID").ok(),
)
.await
{
Ok(store) => {
let store_source: Arc<dyn ConfigSource> =
Arc::new(StoreSource::new("harmony".to_string(), store));
println!("OpenBao connected. Full chain: env → sqlite → openbao");
ConfigManager::new(vec![env_source, Arc::clone(&sqlite) as _, store_source])
}
Err(e) => {
eprintln!(
"Warning: OpenBao unavailable ({e}), using local chain: env → sqlite"
);
ConfigManager::new(vec![env_source, sqlite])
}
}
}
None => {
println!("OPENBAO_URL not set. Using local chain: env → sqlite");
ConfigManager::new(vec![env_source, sqlite])
}
}
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
env_logger::init();
let manager = build_manager().await;
println!("\n=== harmony_config OpenBao Chain Demo ===\n");
println!("1. Attempting to get AppConfig (expect NotFound on first run)...");
match manager.get::<AppConfig>().await {
Ok(config) => {
println!(" Found: {:?}", config);
}
Err(harmony_config::ConfigError::NotFound { .. }) => {
println!(" NotFound - no config stored yet");
}
Err(e) => {
println!(" Error: {:?}", e);
}
}
println!("\n2. Setting AppConfig via set()...");
let config = AppConfig {
host: "production.example.com".to_string(),
port: 443,
};
manager.set(&config).await?;
println!(" Set: {:?}", config);
println!("\n3. Getting AppConfig back...");
let retrieved: AppConfig = manager.get().await?;
println!(" Retrieved: {:?}", retrieved);
assert_eq!(config, retrieved);
println!("\n4. Demonstrating env override...");
println!(" HARMONY_CONFIG_AppConfig env var overrides all backends");
let env_config = AppConfig {
host: "env-override.example.com".to_string(),
port: 9090,
};
unsafe {
std::env::set_var(
"HARMONY_CONFIG_AppConfig",
serde_json::to_string(&env_config)?,
);
}
let from_env: AppConfig = manager.get().await?;
println!(" Got from env: {:?}", from_env);
assert_eq!(from_env.host, "env-override.example.com");
unsafe {
std::env::remove_var("HARMONY_CONFIG_AppConfig");
}
println!("\n=== Done! ===");
println!("Config persisted at ~/.local/share/harmony/config/config.db");
Ok(())
}


@@ -0,0 +1,69 @@
//! Example demonstrating configuration prompting with harmony_config
//!
//! This example shows how to use `get_or_prompt()` to interactively
//! ask the user for configuration values when none are found.
//!
//! **Note**: This example requires a TTY to work properly since it uses
//! interactive prompting via `inquire`. Run in a terminal.
//!
//! Run with:
//! - `cargo run --example prompting` - will prompt for values interactively
//! - If config exists in SQLite, it will be used directly without prompting
use std::sync::Arc;
use harmony_config::{Config, ConfigManager, EnvSource, PromptSource, SqliteSource};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, Config)]
struct UserConfig {
username: String,
email: String,
theme: String,
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
env_logger::init();
let sqlite = SqliteSource::default().await?;
let manager = ConfigManager::new(vec![
Arc::new(EnvSource),
Arc::new(sqlite),
Arc::new(PromptSource::new()),
]);
println!("UserConfig Setup");
println!("=================\n");
println!("Attempting to get UserConfig (env > sqlite > prompt)...\n");
match manager.get::<UserConfig>().await {
Ok(config) => {
println!("Found existing config:");
println!(" Username: {}", config.username);
println!(" Email: {}", config.email);
println!(" Theme: {}", config.theme);
println!("\nNo prompting needed - using stored config.");
}
Err(harmony_config::ConfigError::NotFound { .. }) => {
println!("No config found in env or SQLite.");
println!("Calling get_or_prompt() to interactively request config...\n");
let config: UserConfig = manager.get_or_prompt().await?;
println!("\nConfig received and saved to SQLite:");
println!(" Username: {}", config.username);
println!(" Email: {}", config.email);
println!(" Theme: {}", config.theme);
}
Err(e) => {
println!("Error: {:?}", e);
}
}
println!("\nConfig is persisted at ~/.local/share/harmony/config/config.db");
println!("On next run, the stored config will be used without prompting.");
Ok(())
}


@@ -15,6 +15,7 @@ pub use harmony_config_derive::Config;
pub use source::env::EnvSource;
pub use source::local_file::LocalFileSource;
pub use source::prompt::PromptSource;
pub use source::sqlite::SqliteSource;
pub use source::store::StoreSource;
#[derive(Debug, Error)]
@@ -51,6 +52,9 @@ pub enum ConfigError {
#[error("I/O error: {0}")]
IoError(#[from] std::io::Error),
#[error("SQLite error: {0}")]
SqliteError(String),
}
pub trait Config: Serialize + DeserializeOwned + JsonSchema + InteractiveParseObj + Sized {
@@ -62,6 +66,10 @@ pub trait ConfigSource: Send + Sync {
async fn get(&self, key: &str) -> Result<Option<serde_json::Value>, ConfigError>;
async fn set(&self, key: &str, value: &serde_json::Value) -> Result<(), ConfigError>;
fn should_persist(&self) -> bool {
true
}
}
pub struct ConfigManager {
@@ -97,20 +105,18 @@ impl ConfigManager {
let config =
T::parse_to_obj().map_err(|e| ConfigError::PromptError(e.to_string()))?;
let value =
serde_json::to_value(&config).map_err(|e| ConfigError::Serialization {
key: T::KEY.to_string(),
source: e,
})?;
for source in &self.sources {
if !source.should_persist() {
continue;
}
if source.set(T::KEY, &value).await.is_ok() {
break;
}
}
@@ -175,6 +181,7 @@ pub fn default_config_dir() -> Option<PathBuf> {
#[cfg(test)]
mod tests {
use super::*;
use harmony_secret::{SecretStore, SecretStoreError};
use pretty_assertions::assert_eq;
use serde::{Deserialize, Serialize};
use std::sync::Mutex;
@@ -216,15 +223,6 @@ mod tests {
const KEY: &'static str = "TestConfig";
}
struct MockSource {
data: std::sync::Mutex<std::collections::HashMap<String, serde_json::Value>>,
get_count: AtomicUsize,
@@ -469,4 +467,354 @@ mod tests {
assert_eq!(parsed, config);
}
#[tokio::test]
async fn test_sqlite_set_and_get() {
use tempfile::NamedTempFile;
let temp_file = NamedTempFile::new().unwrap();
let path = temp_file.path().to_path_buf();
let source = SqliteSource::open(path).await.unwrap();
let config = TestConfig {
name: "sqlite_test".to_string(),
count: 42,
};
source
.set("TestConfig", &serde_json::to_value(&config).unwrap())
.await
.unwrap();
let result = source.get("TestConfig").await.unwrap().unwrap();
let parsed: TestConfig = serde_json::from_value(result).unwrap();
assert_eq!(parsed, config);
}
#[tokio::test]
async fn test_sqlite_get_returns_none_when_missing() {
use tempfile::NamedTempFile;
let temp_file = NamedTempFile::new().unwrap();
let path = temp_file.path().to_path_buf();
let source = SqliteSource::open(path).await.unwrap();
let result = source.get("NonExistentConfig").await.unwrap();
assert!(result.is_none());
}
#[tokio::test]
async fn test_sqlite_overwrites_on_set() {
use tempfile::NamedTempFile;
let temp_file = NamedTempFile::new().unwrap();
let path = temp_file.path().to_path_buf();
let source = SqliteSource::open(path).await.unwrap();
let config1 = TestConfig {
name: "first".to_string(),
count: 1,
};
let config2 = TestConfig {
name: "second".to_string(),
count: 2,
};
source
.set("TestConfig", &serde_json::to_value(&config1).unwrap())
.await
.unwrap();
source
.set("TestConfig", &serde_json::to_value(&config2).unwrap())
.await
.unwrap();
let result = source.get("TestConfig").await.unwrap().unwrap();
let parsed: TestConfig = serde_json::from_value(result).unwrap();
assert_eq!(parsed, config2);
}
#[tokio::test]
async fn test_sqlite_concurrent_access() {
use tempfile::NamedTempFile;
let temp_file = NamedTempFile::new().unwrap();
let path = temp_file.path().to_path_buf();
let source = SqliteSource::open(path).await.unwrap();
let source = Arc::new(source);
let config1 = TestConfig {
name: "task1".to_string(),
count: 100,
};
let config2 = TestConfig {
name: "task2".to_string(),
count: 200,
};
let (r1, r2) = tokio::join!(
async {
source
.set("key1", &serde_json::to_value(&config1).unwrap())
.await
.unwrap();
source.get("key1").await.unwrap().unwrap()
},
async {
source
.set("key2", &serde_json::to_value(&config2).unwrap())
.await
.unwrap();
source.get("key2").await.unwrap().unwrap()
}
);
let parsed1: TestConfig = serde_json::from_value(r1).unwrap();
let parsed2: TestConfig = serde_json::from_value(r2).unwrap();
assert_eq!(parsed1, config1);
assert_eq!(parsed2, config2);
}
#[tokio::test]
async fn test_get_or_prompt_persists_to_first_writable_source() {
let source1 = Arc::new(MockSource::new());
let source2 = Arc::new(MockSource::new());
let manager = ConfigManager::new(vec![source1.clone(), source2.clone()]);
let result: Result<TestConfig, ConfigError> = manager.get_or_prompt().await;
assert!(result.is_err());
assert_eq!(source1.set_call_count(), 0);
assert_eq!(source2.set_call_count(), 0);
}
#[tokio::test]
async fn test_full_resolution_chain_sqlite_fallback() {
use tempfile::NamedTempFile;
let temp_file = NamedTempFile::new().unwrap();
let sqlite = SqliteSource::open(temp_file.path().to_path_buf())
.await
.unwrap();
let sqlite = Arc::new(sqlite);
let manager = ConfigManager::new(vec![sqlite.clone()]);
let config = TestConfig {
name: "from_sqlite".to_string(),
count: 42,
};
sqlite
.set("TestConfig", &serde_json::to_value(&config).unwrap())
.await
.unwrap();
let result: TestConfig = manager.get().await.unwrap();
assert_eq!(result, config);
}
#[tokio::test]
async fn test_full_resolution_chain_env_overrides_sqlite() {
use tempfile::NamedTempFile;
let temp_file = NamedTempFile::new().unwrap();
let sqlite = SqliteSource::open(temp_file.path().to_path_buf())
.await
.unwrap();
let sqlite = Arc::new(sqlite);
let env_source = Arc::new(EnvSource);
let manager = ConfigManager::new(vec![env_source.clone(), sqlite.clone()]);
let sqlite_config = TestConfig {
name: "from_sqlite".to_string(),
count: 42,
};
let env_config = TestConfig {
name: "from_env".to_string(),
count: 99,
};
sqlite
.set("TestConfig", &serde_json::to_value(&sqlite_config).unwrap())
.await
.unwrap();
let env_key = format!("HARMONY_CONFIG_{}", "TestConfig");
unsafe {
std::env::set_var(&env_key, serde_json::to_string(&env_config).unwrap());
}
let result: TestConfig = manager.get().await.unwrap();
assert_eq!(result.name, "from_env");
assert_eq!(result.count, 99);
unsafe {
std::env::remove_var(&env_key);
}
}
#[tokio::test]
async fn test_branch_switching_scenario_deserialization_error() {
use tempfile::NamedTempFile;
let temp_file = NamedTempFile::new().unwrap();
let sqlite = SqliteSource::open(temp_file.path().to_path_buf())
.await
.unwrap();
let sqlite = Arc::new(sqlite);
let manager = ConfigManager::new(vec![sqlite.clone()]);
let old_config = serde_json::json!({
"name": "old_config",
"count": "not_a_number"
});
sqlite.set("TestConfig", &old_config).await.unwrap();
let result: Result<TestConfig, ConfigError> = manager.get().await;
assert!(matches!(result, Err(ConfigError::Deserialization { .. })));
}
#[tokio::test]
async fn test_prompt_source_always_returns_none() {
let source = PromptSource::new();
let result = source.get("AnyKey").await.unwrap();
assert!(result.is_none());
}
#[tokio::test]
async fn test_prompt_source_set_is_noop() {
let source = PromptSource::new();
let result = source
.set("AnyKey", &serde_json::json!({"test": "value"}))
.await;
assert!(result.is_ok());
}
#[tokio::test]
async fn test_prompt_source_does_not_persist() {
let source = PromptSource::new();
source
.set(
"TestConfig",
&serde_json::json!({"name": "test", "count": 42}),
)
.await
.unwrap();
let result = source.get("TestConfig").await.unwrap();
assert!(result.is_none());
}
#[tokio::test]
async fn test_full_chain_with_prompt_source_falls_through_to_prompt() {
use tempfile::NamedTempFile;
let temp_file = NamedTempFile::new().unwrap();
let sqlite = SqliteSource::open(temp_file.path().to_path_buf())
.await
.unwrap();
let sqlite = Arc::new(sqlite);
let source1 = Arc::new(MockSource::new());
let prompt_source = Arc::new(PromptSource::new());
let manager =
ConfigManager::new(vec![source1.clone(), sqlite.clone(), prompt_source.clone()]);
let result: Result<TestConfig, ConfigError> = manager.get().await;
assert!(matches!(result, Err(ConfigError::NotFound { .. })));
}
#[derive(Debug)]
struct AlwaysErrorStore;
#[async_trait]
impl SecretStore for AlwaysErrorStore {
async fn get_raw(&self, _: &str, _: &str) -> Result<Vec<u8>, SecretStoreError> {
Err(SecretStoreError::Store("Connection refused".into()))
}
async fn set_raw(&self, _: &str, _: &str, _: &[u8]) -> Result<(), SecretStoreError> {
Err(SecretStoreError::Store("Connection refused".into()))
}
}
#[tokio::test]
async fn test_store_source_error_falls_through_to_sqlite() {
use tempfile::NamedTempFile;
let temp_file = NamedTempFile::new().unwrap();
let sqlite = SqliteSource::open(temp_file.path().to_path_buf())
.await
.unwrap();
let sqlite = Arc::new(sqlite);
let store_source = Arc::new(StoreSource::new("test".to_string(), AlwaysErrorStore));
let manager = ConfigManager::new(vec![store_source.clone(), sqlite.clone()]);
let config = TestConfig {
name: "from_sqlite".to_string(),
count: 42,
};
sqlite
.set("TestConfig", &serde_json::to_value(&config).unwrap())
.await
.unwrap();
let result: TestConfig = manager.get().await.unwrap();
assert_eq!(result.name, "from_sqlite");
assert_eq!(result.count, 42);
}
#[derive(Debug)]
struct NeverFindsStore;
#[async_trait]
impl SecretStore for NeverFindsStore {
async fn get_raw(&self, _: &str, _: &str) -> Result<Vec<u8>, SecretStoreError> {
Err(SecretStoreError::NotFound {
namespace: "test".to_string(),
key: "test".to_string(),
})
}
async fn set_raw(&self, _: &str, _: &str, _: &[u8]) -> Result<(), SecretStoreError> {
Ok(())
}
}
#[tokio::test]
async fn test_store_source_not_found_falls_through_to_sqlite() {
use tempfile::NamedTempFile;
let temp_file = NamedTempFile::new().unwrap();
let sqlite = SqliteSource::open(temp_file.path().to_path_buf())
.await
.unwrap();
let sqlite = Arc::new(sqlite);
let store_source = Arc::new(StoreSource::new("test".to_string(), NeverFindsStore));
let manager = ConfigManager::new(vec![store_source.clone(), sqlite.clone()]);
let config = TestConfig {
name: "from_sqlite".to_string(),
count: 99,
};
sqlite
.set("TestConfig", &serde_json::to_value(&config).unwrap())
.await
.unwrap();
let result: TestConfig = manager.get().await.unwrap();
assert_eq!(result.name, "from_sqlite");
assert_eq!(result.count, 99);
}
}


@@ -42,4 +42,8 @@ impl ConfigSource for EnvSource {
}
Ok(())
}
fn should_persist(&self) -> bool {
false
}
}


@@ -1,4 +1,5 @@
pub mod env;
pub mod local_file;
pub mod prompt;
pub mod sqlite;
pub mod store;


@@ -1,11 +1,8 @@
use async_trait::async_trait;
use std::sync::Arc;
use crate::{ConfigError, ConfigSource};
pub struct PromptSource {
#[allow(dead_code)]
writer: Option<Arc<dyn std::io::Write + Send + Sync>>,
@@ -39,12 +36,8 @@ impl ConfigSource for PromptSource {
async fn set(&self, _key: &str, _value: &serde_json::Value) -> Result<(), ConfigError> {
Ok(())
}
fn should_persist(&self) -> bool {
false
}
}


@@ -0,0 +1,91 @@
use async_trait::async_trait;
use sqlx::{SqlitePool, sqlite::SqlitePoolOptions};
use std::path::PathBuf;
use tokio::fs;
use crate::{ConfigError, ConfigSource};
pub struct SqliteSource {
pool: SqlitePool,
}
impl SqliteSource {
pub async fn open(path: PathBuf) -> Result<Self, ConfigError> {
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await.map_err(|e| {
ConfigError::SqliteError(format!("Failed to create config directory: {}", e))
})?;
}
let database_url = format!("sqlite:{}?mode=rwc", path.display());
let pool = SqlitePoolOptions::new()
.connect(&database_url)
.await
.map_err(|e| ConfigError::SqliteError(format!("Failed to open database: {}", e)))?;
sqlx::query(
"CREATE TABLE IF NOT EXISTS config (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
updated_at TEXT NOT NULL DEFAULT (datetime('now'))
)",
)
.execute(&pool)
.await
.map_err(|e| ConfigError::SqliteError(format!("Failed to create table: {}", e)))?;
Ok(Self { pool })
}
pub async fn default() -> Result<Self, ConfigError> {
let path = crate::default_config_dir()
.ok_or_else(|| {
ConfigError::SqliteError("Could not determine default config directory".into())
})?
.join("config.db");
Self::open(path).await
}
}
#[async_trait]
impl ConfigSource for SqliteSource {
async fn get(&self, key: &str) -> Result<Option<serde_json::Value>, ConfigError> {
let row: Option<(String,)> = sqlx::query_as("SELECT value FROM config WHERE key = ?")
.bind(key)
.fetch_optional(&self.pool)
.await
.map_err(|e| ConfigError::SqliteError(format!("Failed to query database: {}", e)))?;
match row {
Some((value,)) => {
let json_value: serde_json::Value =
serde_json::from_str(&value).map_err(|e| ConfigError::Deserialization {
key: key.to_string(),
source: e,
})?;
Ok(Some(json_value))
}
None => Ok(None),
}
}
async fn set(&self, key: &str, value: &serde_json::Value) -> Result<(), ConfigError> {
let json_string = serde_json::to_string(value).map_err(|e| ConfigError::Serialization {
key: key.to_string(),
source: e,
})?;
sqlx::query(
"INSERT OR REPLACE INTO config (key, value, updated_at) VALUES (?, ?, datetime('now'))",
)
.bind(key)
.bind(&json_string)
.execute(&self.pool)
.await
.map_err(|e| {
ConfigError::SqliteError(format!("Failed to insert/update database: {}", e))
})?;
Ok(())
}
}

View File

@@ -27,7 +27,7 @@ impl<S: SecretStore + 'static> ConfigSource for StoreSource<S> {
Ok(Some(value))
}
Err(harmony_secret::SecretStoreError::NotFound { .. }) => Ok(None),
Err(e) => Err(ConfigError::StoreError(e)),
Err(_) => Ok(None),
}
}

View File

@@ -22,6 +22,8 @@ inquire.workspace = true
interactive-parse = "0.1.5"
schemars = "0.8"
vaultrs = "0.7.4"
reqwest = { workspace = true, features = ["json"] }
url.workspace = true
[dev-dependencies]
pretty_assertions.workspace = true

View File

@@ -29,4 +29,18 @@ lazy_static! {
env::var("OPENBAO_SKIP_TLS").map(|v| v == "true").unwrap_or(false);
pub static ref OPENBAO_KV_MOUNT: String =
env::var("OPENBAO_KV_MOUNT").unwrap_or_else(|_| "secret".to_string());
pub static ref OPENBAO_AUTH_MOUNT: String =
env::var("OPENBAO_AUTH_MOUNT").unwrap_or_else(|_| "userpass".to_string());
}
lazy_static! {
// Zitadel OIDC configuration (for JWT auth flow)
pub static ref HARMONY_SSO_URL: Option<String> =
env::var("HARMONY_SSO_URL").ok();
pub static ref HARMONY_SSO_CLIENT_ID: Option<String> =
env::var("HARMONY_SSO_CLIENT_ID").ok();
pub static ref HARMONY_SSO_CLIENT_SECRET: Option<String> =
env::var("HARMONY_SSO_CLIENT_SECRET").ok();
pub static ref HARMONY_SECRETS_URL: Option<String> =
env::var("HARMONY_SECRETS_URL").ok();
}

View File

@@ -1,13 +1,16 @@
pub mod config;
mod store;
pub mod store;
use crate::config::SECRET_NAMESPACE;
use async_trait::async_trait;
use config::HARMONY_SSO_CLIENT_ID;
use config::HARMONY_SSO_URL;
use config::INFISICAL_CLIENT_ID;
use config::INFISICAL_CLIENT_SECRET;
use config::INFISICAL_ENVIRONMENT;
use config::INFISICAL_PROJECT_ID;
use config::INFISICAL_URL;
use config::OPENBAO_AUTH_MOUNT;
use config::OPENBAO_KV_MOUNT;
use config::OPENBAO_PASSWORD;
use config::OPENBAO_SKIP_TLS;
@@ -23,7 +26,7 @@ use serde::{Serialize, de::DeserializeOwned};
use std::fmt;
use store::InfisicalSecretStore;
use store::LocalFileSecretStore;
use store::OpenbaoSecretStore;
pub use store::OpenbaoSecretStore;
use thiserror::Error;
use tokio::sync::OnceCell;
@@ -85,10 +88,13 @@ async fn init_secret_manager() -> SecretManager {
let store = OpenbaoSecretStore::new(
OPENBAO_URL.clone().expect("Openbao/Vault URL must be set, see harmony_secret config for ways to provide it. You can try with OPENBAO_URL or VAULT_ADDR"),
OPENBAO_KV_MOUNT.clone(),
OPENBAO_AUTH_MOUNT.clone(),
*OPENBAO_SKIP_TLS,
OPENBAO_TOKEN.clone(),
OPENBAO_USERNAME.clone(),
OPENBAO_PASSWORD.clone(),
HARMONY_SSO_URL.clone(),
HARMONY_SSO_CLIENT_ID.clone(),
)
.await
.expect("Failed to initialize Openbao/Vault secret store");

View File

@@ -1,9 +1,8 @@
mod infisical;
mod local_file;
mod openbao;
pub mod zitadel;
pub use infisical::InfisicalSecretStore;
pub use infisical::*;
pub use local_file::LocalFileSecretStore;
pub use local_file::*;
pub use openbao::OpenbaoSecretStore;

View File

@@ -6,14 +6,10 @@ use std::fmt::Debug;
use std::fs;
use std::path::PathBuf;
use vaultrs::auth;
use vaultrs::client::{Client, VaultClient, VaultClientSettingsBuilder};
use vaultrs::client::{VaultClient, VaultClientSettingsBuilder};
use vaultrs::kv2;
/// Token response from Vault/Openbao auth endpoints
#[derive(Debug, Deserialize)]
struct TokenResponse {
auth: AuthInfo,
}
use super::zitadel::ZitadelOidcAuth;
#[derive(Debug, Serialize, Deserialize)]
struct AuthInfo {
@@ -36,6 +32,7 @@ impl From<vaultrs::api::AuthInfo> for AuthInfo {
pub struct OpenbaoSecretStore {
client: VaultClient,
kv_mount: String,
auth_mount: String,
}
impl Debug for OpenbaoSecretStore {
@@ -43,26 +40,35 @@ impl Debug for OpenbaoSecretStore {
f.debug_struct("OpenbaoSecretStore")
.field("client", &self.client.settings)
.field("kv_mount", &self.kv_mount)
.field("auth_mount", &self.auth_mount)
.finish()
}
}
impl OpenbaoSecretStore {
/// Creates a new Openbao/Vault secret store with authentication
/// Creates a new Openbao/Vault secret store with authentication.
///
/// - `kv_mount`: The KV v2 engine mount path (e.g., `"secret"`). Used for secret storage.
/// - `auth_mount`: The auth method mount path (e.g., `"userpass"`). Used for userpass authentication.
/// - `zitadel_sso_url`: Zitadel OIDC server URL (e.g., `"https://sso.example.com"`). If provided along with `zitadel_client_id`, Zitadel OIDC device flow will be attempted.
/// - `zitadel_client_id`: OIDC client ID registered in Zitadel.
pub async fn new(
base_url: String,
kv_mount: String,
auth_mount: String,
skip_tls: bool,
token: Option<String>,
username: Option<String>,
password: Option<String>,
zitadel_sso_url: Option<String>,
zitadel_client_id: Option<String>,
) -> Result<Self, SecretStoreError> {
info!("OPENBAO_STORE: Initializing client for URL: {base_url}");
// 1. If token is provided via env var, use it directly
if let Some(t) = token {
debug!("OPENBAO_STORE: Using token from environment variable");
return Self::with_token(&base_url, skip_tls, &t, &kv_mount);
return Self::with_token(&base_url, skip_tls, &t, &kv_mount, &auth_mount);
}
// 2. Try to load cached token
@@ -76,32 +82,81 @@ impl OpenbaoSecretStore {
skip_tls,
&cached_token.client_token,
&kv_mount,
&auth_mount,
);
}
warn!("OPENBAO_STORE: Cached token is invalid or expired");
}
// 3. Authenticate with username/password
// 3. Try Zitadel OIDC device flow if configured
if let (Some(sso_url), Some(client_id)) = (zitadel_sso_url, zitadel_client_id) {
info!("OPENBAO_STORE: Attempting Zitadel OIDC device flow");
match Self::authenticate_zitadel_oidc(&sso_url, &client_id, skip_tls).await {
Ok(oidc_session) => {
info!("OPENBAO_STORE: Zitadel OIDC authentication successful");
// Cache the OIDC session token
let auth_info = AuthInfo {
client_token: oidc_session.openbao_token,
lease_duration: Some(oidc_session.openbao_token_ttl),
token_type: "oidc".to_string(),
};
if let Err(e) = Self::cache_token(&cache_path, &auth_info) {
warn!("OPENBAO_STORE: Failed to cache OIDC token: {e}");
}
return Self::with_token(
&base_url,
skip_tls,
&auth_info.client_token,
&kv_mount,
&auth_mount,
);
}
Err(e) => {
warn!("OPENBAO_STORE: Zitadel OIDC failed: {e}, trying other auth methods");
}
}
}
// 4. Authenticate with username/password
let (user, pass) = match (username, password) {
(Some(u), Some(p)) => (u, p),
_ => {
return Err(SecretStoreError::Store(
"No valid token found and username/password not provided. \
Set OPENBAO_TOKEN or OPENBAO_USERNAME/OPENBAO_PASSWORD environment variables."
Set OPENBAO_TOKEN, OPENBAO_USERNAME/OPENBAO_PASSWORD, or \
HARMONY_SSO_URL/HARMONY_SSO_CLIENT_ID for Zitadel OIDC."
.into(),
));
}
};
let token =
Self::authenticate_userpass(&base_url, &kv_mount, skip_tls, &user, &pass).await?;
Self::authenticate_userpass(&base_url, &auth_mount, skip_tls, &user, &pass).await?;
// Cache the token
if let Err(e) = Self::cache_token(&cache_path, &token) {
warn!("OPENBAO_STORE: Failed to cache token: {e}");
}
Self::with_token(&base_url, skip_tls, &token.client_token, &kv_mount)
Self::with_token(
&base_url,
skip_tls,
&token.client_token,
&kv_mount,
&auth_mount,
)
}
async fn authenticate_zitadel_oidc(
sso_url: &str,
client_id: &str,
skip_tls: bool,
) -> Result<super::zitadel::OidcSession, SecretStoreError> {
let oidc_auth = ZitadelOidcAuth::new(sso_url.to_string(), client_id.to_string(), skip_tls);
oidc_auth
.authenticate()
.await
.map_err(|e| SecretStoreError::Store(e.into()))
}
/// Create a client with an existing token
@@ -110,6 +165,7 @@ impl OpenbaoSecretStore {
skip_tls: bool,
token: &str,
kv_mount: &str,
auth_mount: &str,
) -> Result<Self, SecretStoreError> {
let mut settings = VaultClientSettingsBuilder::default();
settings.address(base_url).token(token);
@@ -129,6 +185,7 @@ impl OpenbaoSecretStore {
Ok(Self {
client,
kv_mount: kv_mount.to_string(),
auth_mount: auth_mount.to_string(),
})
}
@@ -168,7 +225,6 @@ impl OpenbaoSecretStore {
if let Some(parent) = path.parent() {
fs::create_dir_all(parent)?;
}
// Set file permissions to 0600 (owner read/write only)
#[cfg(unix)]
{
use std::os::unix::fs::OpenOptionsExt;
@@ -209,14 +265,13 @@ impl OpenbaoSecretStore {
/// Authenticate using username/password (userpass auth method)
async fn authenticate_userpass(
base_url: &str,
kv_mount: &str,
auth_mount: &str,
skip_tls: bool,
username: &str,
password: &str,
) -> Result<AuthInfo, SecretStoreError> {
info!("OPENBAO_STORE: Authenticating with username/password");
// Create a client without a token for authentication
let mut settings = VaultClientSettingsBuilder::default();
settings.address(base_url);
if skip_tls {
@@ -230,8 +285,7 @@ impl OpenbaoSecretStore {
)
.map_err(|e| SecretStoreError::Store(Box::new(e)))?;
// Authenticate using userpass method
let token = auth::userpass::login(&client, kv_mount, username, password)
let token = auth::userpass::login(&client, auth_mount, username, password)
.await
.map_err(|e| SecretStoreError::Store(Box::new(e)))?;
@@ -249,7 +303,6 @@ impl SecretStore for OpenbaoSecretStore {
let data: serde_json::Value = kv2::read(&self.client, &self.kv_mount, &path)
.await
.map_err(|e| {
// Check for not found error
if e.to_string().contains("does not exist") || e.to_string().contains("404") {
SecretStoreError::NotFound {
namespace: namespace.to_string(),
@@ -260,7 +313,6 @@ impl SecretStore for OpenbaoSecretStore {
}
})?;
// Extract the actual secret value stored under the "value" key
let value = data.get("value").and_then(|v| v.as_str()).ok_or_else(|| {
SecretStoreError::Store("Secret does not contain expected 'value' field".into())
})?;
@@ -281,7 +333,6 @@ impl SecretStore for OpenbaoSecretStore {
let value_str =
String::from_utf8(val.to_vec()).map_err(|e| SecretStoreError::Store(Box::new(e)))?;
// Create the data structure expected by our format
let data = serde_json::json!({
"value": value_str
});

View File

@@ -0,0 +1,335 @@
use log::{debug, info};
use serde::{Deserialize, Serialize};
use std::fs;
use std::path::PathBuf;
use std::time::Duration;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OidcSession {
pub openbao_token: String,
pub openbao_token_ttl: u64,
pub openbao_renewable: bool,
pub refresh_token: Option<String>,
pub id_token: Option<String>,
pub expires_at: Option<i64>,
}
impl OidcSession {
pub fn is_expired(&self) -> bool {
if let Some(expires_at) = self.expires_at {
let now = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.map(|d| d.as_secs() as i64)
.unwrap_or(0);
expires_at <= now
} else {
false
}
}
pub fn is_openbao_token_expired(&self, _ttl: u64) -> bool {
if let Some(expires_at) = self.expires_at {
let now = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.map(|d| d.as_secs() as i64)
.unwrap_or(0);
expires_at <= now
} else {
false
}
}
}
#[derive(Debug, Deserialize)]
struct DeviceAuthorizationResponse {
device_code: String,
user_code: String,
verification_uri: String,
#[serde(rename = "verification_uri_complete")]
verification_uri_complete: Option<String>,
expires_in: u64,
interval: u64,
}
#[derive(Debug, Deserialize)]
struct TokenResponse {
access_token: String,
#[serde(rename = "expires_in", default)]
expires_in: Option<u64>,
#[serde(rename = "refresh_token", default)]
refresh_token: Option<String>,
#[serde(rename = "id_token", default)]
id_token: Option<String>,
}
#[derive(Debug, Deserialize)]
struct TokenErrorResponse {
error: String,
#[serde(rename = "error_description")]
error_description: Option<String>,
}
fn get_session_cache_path() -> PathBuf {
let hash = {
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
let mut hasher = DefaultHasher::new();
"zitadel-oidc".hash(&mut hasher);
format!("{:016x}", hasher.finish())
};
directories::BaseDirs::new()
.map(|dirs| {
dirs.data_dir()
.join("harmony")
.join("secrets")
.join(format!("oidc_session_{hash}"))
})
.unwrap_or_else(|| PathBuf::from(format!("/tmp/oidc_session_{hash}")))
}
fn load_session() -> Result<OidcSession, String> {
let path = get_session_cache_path();
serde_json::from_str(
&fs::read_to_string(&path)
.map_err(|e| format!("Could not load session from {path:?}: {e}"))?,
)
.map_err(|e| format!("Could not deserialize session from {path:?}: {e}"))
}
fn save_session(session: &OidcSession) -> Result<(), String> {
let path = get_session_cache_path();
if let Some(parent) = path.parent() {
fs::create_dir_all(parent)
.map_err(|e| format!("Could not create session directory: {e}"))?;
}
#[cfg(unix)]
{
use std::os::unix::fs::OpenOptionsExt;
let mut file = fs::OpenOptions::new()
.write(true)
.create(true)
.truncate(true)
.mode(0o600)
.open(&path)
.map_err(|e| format!("Could not open session file: {e}"))?;
use std::io::Write;
file.write_all(
serde_json::to_string_pretty(session)
.map_err(|e| format!("Could not serialize session: {e}"))?
.as_bytes(),
)
.map_err(|e| format!("Could not write session file: {e}"))?;
}
#[cfg(not(unix))]
{
fs::write(
&path,
serde_json::to_string_pretty(session).map_err(|e| e.to_string())?,
)
.map_err(|e| format!("Could not write session file: {e}"))?;
}
Ok(())
}
pub struct ZitadelOidcAuth {
sso_url: String,
client_id: String,
skip_tls: bool,
}
impl ZitadelOidcAuth {
pub fn new(sso_url: String, client_id: String, skip_tls: bool) -> Self {
Self {
sso_url,
client_id,
skip_tls,
}
}
pub async fn authenticate(&self) -> Result<OidcSession, String> {
if let Ok(session) = load_session() {
if !session.is_expired() {
info!("ZITADEL_OIDC: Using cached session");
return Ok(session);
}
}
info!("ZITADEL_OIDC: Starting device authorization flow");
let device_code = self.request_device_code().await?;
self.print_verification_instructions(&device_code);
let token_response = self
.poll_for_token(&device_code, device_code.interval)
.await?;
let session = self.process_token_response(token_response).await?;
let _ = save_session(&session);
Ok(session)
}
fn http_client(&self) -> Result<reqwest::Client, String> {
let mut builder = reqwest::Client::builder();
if self.skip_tls {
builder = builder.danger_accept_invalid_certs(true);
}
builder
.build()
.map_err(|e| format!("Failed to build HTTP client: {e}"))
}
async fn request_device_code(&self) -> Result<DeviceAuthorizationResponse, String> {
let client = self.http_client()?;
let params = [
("client_id", self.client_id.as_str()),
("scope", "openid email profile offline_access"),
];
let response = client
.post(format!("{}/oauth/v2/device_authorization", self.sso_url))
.form(&params)
.send()
.await
.map_err(|e| format!("Device authorization request failed: {e}"))?;
response
.json::<DeviceAuthorizationResponse>()
.await
.map_err(|e| format!("Failed to parse device authorization response: {e}"))
}
fn print_verification_instructions(&self, code: &DeviceAuthorizationResponse) {
println!();
println!("=================================================");
println!("[Harmony] To authenticate with Zitadel, open your browser:");
println!(" {}", code.verification_uri);
println!();
println!(" and enter code: {}", code.user_code);
if let Some(ref complete_url) = code.verification_uri_complete {
println!();
println!(" Or visit this direct link (code is pre-filled):");
println!(" {}", complete_url);
}
println!("=================================================");
println!();
}
async fn poll_for_token(
&self,
code: &DeviceAuthorizationResponse,
interval_secs: u64,
) -> Result<TokenResponse, String> {
let client = self.http_client()?;
let params = [
("grant_type", "urn:ietf:params:oauth:grant-type:device_code"),
("device_code", code.device_code.as_str()),
("client_id", self.client_id.as_str()),
];
        let interval = Duration::from_secs(interval_secs.max(5));
        // Guard against a zero polling interval from the server when computing the attempt budget.
        let max_attempts = (code.expires_in / interval_secs.max(1)).max(60) as usize;
for attempt in 0..max_attempts {
tokio::time::sleep(interval).await;
let response = client
.post(format!("{}/oauth/v2/token", self.sso_url))
.form(&params)
.send()
.await
.map_err(|e| format!("Token request failed: {e}"))?;
let status = response.status();
let body = response
.text()
.await
.map_err(|e| format!("Failed to read response body: {e}"))?;
if status == 400 {
if let Ok(error) = serde_json::from_str::<TokenErrorResponse>(&body) {
match error.error.as_str() {
"authorization_pending" => {
debug!("ZITADEL_OIDC: authorization_pending (attempt {})", attempt);
continue;
}
"slow_down" => {
debug!("ZITADEL_OIDC: slow_down, increasing interval");
tokio::time::sleep(Duration::from_secs(5)).await;
continue;
}
"expired_token" => {
return Err(
"Device code expired. Please restart authentication.".to_string()
);
}
"access_denied" => {
return Err("Access denied by user.".to_string());
}
_ => {
return Err(format!(
"OAuth error: {} - {}",
error.error,
error.error_description.unwrap_or_default()
));
}
}
}
}
return serde_json::from_str(&body)
.map_err(|e| format!("Failed to parse token response: {e}"));
}
Err("Token polling timed out".to_string())
}
async fn process_token_response(&self, response: TokenResponse) -> Result<OidcSession, String> {
let expires_at = response.expires_in.map(|ttl| {
let now = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.map(|d| d.as_secs() as i64)
.unwrap_or(0);
now + ttl as i64
});
Ok(OidcSession {
openbao_token: response.access_token,
openbao_token_ttl: response.expires_in.unwrap_or(3600),
openbao_renewable: true,
refresh_token: response.refresh_token,
id_token: response.id_token,
expires_at,
})
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_oidc_session_is_expired() {
let session = OidcSession {
openbao_token: "test".to_string(),
openbao_token_ttl: 3600,
openbao_renewable: true,
refresh_token: None,
id_token: None,
expires_at: Some(0),
};
assert!(session.is_expired());
let future = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.map(|d| d.as_secs() as i64)
.unwrap_or(0)
+ 3600;
let session2 = OidcSession {
openbao_token: "test".to_string(),
openbao_token_ttl: 3600,
openbao_renewable: true,
refresh_token: None,
id_token: None,
expires_at: Some(future),
};
assert!(!session2.is_expired());
}
}

View File

@@ -10,6 +10,31 @@ const K3D_BIN_FILE_NAME: &str = "k3d";
pub struct K3d {
base_dir: PathBuf,
cluster_name: Option<String>,
port_mappings: Vec<PortMapping>,
}
#[derive(Debug, Clone)]
pub struct PortMapping {
pub host_port: u32,
pub container_port: u32,
pub loadbalancer: String,
}
impl PortMapping {
pub fn new(host_port: u32, container_port: u32) -> Self {
Self {
host_port,
container_port,
loadbalancer: "loadbalancer".to_string(),
}
}
pub fn to_arg(&self) -> String {
format!(
"{}:{}@{}",
self.host_port, self.container_port, self.loadbalancer
)
}
}
impl K3d {
@@ -17,9 +42,15 @@ impl K3d {
Self {
base_dir,
cluster_name,
port_mappings: Vec::new(),
}
}
pub fn with_port_mappings(mut self, mappings: Vec<PortMapping>) -> Self {
self.port_mappings = mappings;
self
}
async fn get_binary_for_current_platform(
&self,
latest_release: octocrab::models::repos::Release,
@@ -329,14 +360,29 @@ impl K3d {
}
fn create_cluster(&self, cluster_name: &str) -> Result<(), String> {
let output = self.run_k3d_command(["cluster", "create", cluster_name])?;
let mut args = vec![
"cluster".to_string(),
"create".to_string(),
cluster_name.to_string(),
];
for mapping in &self.port_mappings {
args.push("-p".to_string());
args.push(mapping.to_arg());
}
let output = self.run_k3d_command(args)?;
if !output.status.success() {
let stderr = String::from_utf8_lossy(&output.stderr);
return Err(format!("Failed to create cluster: {}", stderr));
}
info!("Successfully created k3d cluster '{}'", cluster_name);
info!(
"Successfully created k3d cluster '{}' with {} port mappings",
cluster_name,
self.port_mappings.len()
);
Ok(())
}

93
plan.md
View File

@@ -1,93 +0,0 @@
# Final Plan: S3-Backed Asset Management for Harmony

## Context Summary

Harmony is an infrastructure-as-code framework where Scores (desired state) are interpreted against Topologies (infrastructure capabilities). The existing `Url` enum (harmony_types/src/net.rs:96) already has `LocalFolder(String)` and `Url(url::Url)` variants, but the `Url` variant is unimplemented (`todo!()`) in both OPNsense TFTP and HTTP infra layers. Configuration in Harmony follows a "schema in Git, state in the store" pattern via harmony_config -- compile-time structs with values resolved from environment, secret store, or interactive prompt.

## Findings
1. openshift-install is the only OKD binary actually invoked from Rust code (bootstrap_02_bootstrap.rs:139,162). oc and kubectl in data/okd/bin/ are never used by any code path.
2. The Url::Url variant is the designed extension point. The architecture explicitly anticipated remote URL sources but left them as todo!().
3. The k3d crate has a working lazy-download pattern (DownloadableAsset with SHA256 checksum verification, local caching, and HTTP download). This should be generalized.
4. The manual SCP workaround (ipxe.rs:126, bootstrap_02_bootstrap.rs:230) exists because russh is too slow for large file transfers. The S3 approach eliminates this entirely -- the OPNsense box pulls from S3 over HTTP instead.
5. All data/ paths are hardcoded as ./data/... in bootstrap_02_bootstrap.rs:84-88 and ipxe.rs:73.
---
## Phase 1: Create a shared DownloadableAsset crate

Goal: Generalize the k3d download pattern into a reusable crate.
- Extract k3d/src/downloadable_asset.rs into a new shared crate (e.g., harmony_asset or add to harmony_types)
- The struct stays simple: { url, file_name, checksum, local_cache_path }
- Behavior: check local cache first (by checksum), download if missing, verify checksum after download
- The k3d crate becomes a consumer of this shared code
This is a straightforward refactor of ~160 lines of existing, tested code.
## Phase 2: Define asset metadata as compile-time configuration

Goal: Replace hardcoded `./data/...` paths with typed configuration, following the harmony_config pattern.

Create config structs for each asset group:
```rust
#[derive(Config, Serialize, Deserialize, JsonSchema, InteractiveParse)]
struct OkdInstallerConfig {
pub openshift_install_url: String,
pub openshift_install_sha256: String,
pub scos_kernel_url: String,
pub scos_kernel_sha256: String,
pub scos_initramfs_url: String,
pub scos_initramfs_sha256: String,
pub scos_rootfs_url: String,
pub scos_rootfs_sha256: String,
}
#[derive(Config, Serialize, Deserialize, JsonSchema, InteractiveParse)]
struct PxeAssetsConfig {
pub centos_install_img_url: String,
pub centos_install_img_sha256: String,
// ... etc
}
```
These structs live in the OKD module. On first run, harmony_config::get_or_prompt will prompt for the S3 URLs and checksums; after that, the values are persisted in the config store (OpenBao or local file). This means:
- No manifest file to maintain separately
- URLs/checksums can be updated per-team/per-environment without code changes
- Defaults can be compiled in for convenience
## Phase 3: Implement Url::Url in OPNsense infra layer

Goal: Make the OPNsense TFTP/HTTP server pull files from remote URLs.
In harmony/src/infra/opnsense/http.rs and tftp.rs, implement the Url::Url(url) match arm:
- SSH into the OPNsense box
- Run fetch -o /usr/local/http/{path} {url} (FreeBSD/OPNsense native) or curl -o ...
- This completely replaces the manual SCP workaround for internet-connected environments
For serve_files with a folder of remote assets: the Score would pass individual Url::Url entries rather than a single Url::LocalFolder. This may require the trait to accept a list of URLs or an iterator pattern.
## Phase 4: Refactor OKD modules

Goal: Wire up the new patterns in the OKD bootstrap flow.
In bootstrap_02_bootstrap.rs:
- openshift-install: Use the lazy-download pattern (like k3d). On execute(), resolve OkdInstallerConfig from harmony_config, download openshift-install to a local cache, invoke it.
- SCOS images: Pass Url::Url(scos_kernel_url) etc. to the StaticFilesHttpScore, which triggers the OPNsense box to fetch them from S3 directly. No more SCP.
- Remove oc and kubectl from data/okd/bin/ (they are unused).
In ipxe.rs:
- TFTP boot files (ipxe.efi, undionly.kpxe): These are small (~1MB). Either keep them in git (they're not the size problem) or move to S3 and lazy-download.
- HTTP files folder: Replace the folder_to_serve: None / SCP workaround with individual Url::Url entries for each asset.
- Remove the inquire::Confirm SCP prompts.
## Phase 5: Upload assets to Ceph S3

Goal: Populate the S3 bucket and configure defaults.
- Upload all current data/ binaries to your Ceph S3 with a clear path scheme: harmony-assets/okd/v{version}/openshift-install, harmony-assets/pxe/centos-stream-9/install.img, etc.
- Set public-read ACL (or document presigned URL generation)
- Record the S3 URLs and SHA256 checksums
- These become the default values for the config structs (can be hardcoded as defaults or documented)
## Phase 6: Remove LFS, clean up git history, publish to GitHub

Goal: Make the repo publishable.
- Remove all LFS-tracked files from the repo
- Update .gitattributes to remove LFS filters
- Keep data/ in .gitignore (it becomes a local cache directory)
- Optionally use git filter-repo or BFG to clean LFS objects from history
- The repo is now small enough for GitHub (code + templates + small configs only)
- Document the setup: "after clone, run the program and it will prompt for asset URLs or download from defaults"
---
## Risks and Mitigations

| Risk | Mitigation |
| --- | --- |
| OPNsense can't reach S3 (network issues) | `Url::LocalFolder` remains as fallback; populate local `data/` manually for air-gapped environments |
| S3 bucket permissions misconfigured | Test with curl from OPNsense before wiring into code |
| Large download times during bootstrap | Progress reporting in the fetch/curl command; files are cached after first download |
| Breaking change to existing workflows | Phase the rollout; keep `LocalFolder` working throughout |
## What About Upstream URL Resilience?
You mentioned upstream repos sometimes get cleaned up. The S3 bucket is your durable mirror. The config structs could optionally include upstream_url as a fallback source, but the primary retrieval should always be from your S3. Periodically re-uploading new versions to S3 (when upstream releases new images) is a manual but infrequent operation.
---
## Order of Execution
I'd suggest this order:
1. Phase 5 first (upload to S3) -- this is independent of code and gives you the URLs to work with
2. Phase 1 (shared DownloadableAsset crate) -- small, testable refactor
3. Phase 2 (config structs) -- define the schema
4. Phase 3 (Url::Url implementation) -- the core infra change
5. Phase 4 (OKD module refactor) -- wire it all together
6. Phase 6 (LFS removal + GitHub) -- final cleanup
Does this plan align with your vision? Any aspect you'd like me to adjust or elaborate on before implementation?