harmony/ROADMAP/01-config-crate.md

Phase 1: Harden harmony_config, Validate UX, Zero-Setup Starting Point

Goal

Make harmony_config production-ready with a seamless first-run experience: clone, run, get prompted, values persist locally. Then progressively add team-scale backends (OpenBao, Zitadel SSO) without changing any calling code.

Current State

harmony_config now has:

  • Config trait + #[derive(Config)] macro
  • ConfigManager with ordered source chain
  • Five ConfigSource implementations:
    • EnvSource — reads HARMONY_CONFIG_{KEY} env vars
    • LocalFileSource — reads/writes {key}.json files from a directory
    • SqliteSource — reads/writes to a SQLite database (new)
    • PromptSource — returns None / no-op on set (placeholder for TUI integration)
    • StoreSource<S: SecretStore> — wraps any harmony_secret::SecretStore backend
  • 26 unit tests (mock source, env, local file, sqlite, prompt, integration, store graceful fallback)
  • Global CONFIG_MANAGER static with init(), get(), get_or_prompt(), set()
  • Two examples: basic and prompting in harmony_config/examples/
  • Zero workspace consumers — nothing calls harmony_config yet
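For orientation, the chain semantics can be sketched with a synchronous, stdlib-only mock: the first source to return a value wins. The real ConfigSource trait is async and richer; the names below are illustrative stand-ins, not the crate's API.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Simplified, synchronous stand-in for harmony_config's ConfigSource trait.
trait Source {
    fn get(&self, key: &str) -> Option<String>;
    fn set(&self, key: &str, value: &str) -> bool;
}

// In-memory source playing the role of SqliteSource.
struct MemSource(Mutex<HashMap<String, String>>);

impl Source for MemSource {
    fn get(&self, key: &str) -> Option<String> {
        self.0.lock().unwrap().get(key).cloned()
    }
    fn set(&self, key: &str, value: &str) -> bool {
        self.0.lock().unwrap().insert(key.to_string(), value.to_string());
        true
    }
}

// Read-only source playing the role of EnvSource with nothing set.
struct EmptySource;
impl Source for EmptySource {
    fn get(&self, _key: &str) -> Option<String> { None }
    fn set(&self, _key: &str, _value: &str) -> bool { false }
}

// Mirrors ConfigManager::get(): the first source returning Some wins.
fn resolve(sources: &[&dyn Source], key: &str) -> Option<String> {
    sources.iter().find_map(|s| s.get(key))
}

fn main() {
    let mem = MemSource(Mutex::new(HashMap::new()));
    mem.set("AppConfig", r#"{"port":443}"#);
    let chain: Vec<&dyn Source> = vec![&EmptySource, &mem];
    // The env-like source is empty, so resolution falls through.
    assert_eq!(resolve(&chain, "AppConfig").as_deref(), Some(r#"{"port":443}"#));
    assert_eq!(resolve(&chain, "Missing"), None);
    println!("resolved from the second source");
}
```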

Tasks

1.1 Add SqliteSource as the default zero-setup backend

Status: Implemented

Implementation Details:

  • Database location: ~/.local/share/harmony/config/config.db (directory is auto-created)
  • Schema: config(key TEXT PRIMARY KEY, value TEXT NOT NULL, updated_at TEXT NOT NULL DEFAULT (datetime('now')))
  • Uses sqlx with SQLite runtime
  • SqliteSource::open(path) - opens/creates database at given path
  • SqliteSource::default() - uses default Harmony data directory
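The overwrite-on-set behavior maps naturally onto a SQLite upsert against the schema above. A plausible statement (the actual query in sqlite.rs may differ):

```sql
INSERT INTO config (key, value, updated_at)
VALUES (?1, ?2, datetime('now'))
ON CONFLICT(key) DO UPDATE SET
    value = excluded.value,
    updated_at = datetime('now');
```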

Files:

  • harmony_config/src/source/sqlite.rs - new file
  • harmony_config/Cargo.toml - added sqlx = { workspace = true, features = ["runtime-tokio", "sqlite"] }
  • Cargo.toml - added anyhow = "1.0" to workspace dependencies

Tests (all passing):

  • test_sqlite_set_and_get — round-trip a TestConfig struct
  • test_sqlite_get_returns_none_when_missing — key not in DB
  • test_sqlite_overwrites_on_set — set twice, get returns latest
  • test_sqlite_concurrent_access — two tasks writing different keys simultaneously

1.1.1 Add Config example to show exact DX and confirm functionality

Status: Implemented

Examples created:

  1. harmony_config/examples/basic.rs - demonstrates:

    • Zero-setup SQLite backend (auto-creates directory)
    • Using the #[derive(Config)] macro
    • Environment variable override (HARMONY_CONFIG_TestConfig overrides SQLite)
    • Direct set/get operations
    • Persistence verification
  2. harmony_config/examples/prompting.rs - demonstrates:

    • Config with no defaults (requires user input via inquire)
    • get() flow: env > sqlite > prompt fallback
    • get_or_prompt() for interactive configuration
    • Full resolution chain
    • Persistence of prompted values

1.2 Make PromptSource functional

Status: Implemented with design improvement

Key Finding - Bug Fixed During Implementation:

The original design had a critical bug in get_or_prompt():

// OLD (BUGGY) - breaks on first source where set() returns Ok(())
for source in &self.sources {
    if source.set(T::KEY, &value).await.is_ok() {
        break;
    }
}

Since EnvSource.set() returns Ok(()) (successfully sets env var), the loop would break immediately and never write to SqliteSource. Prompted values were never persisted!

Solution - Added should_persist() method to ConfigSource trait:

#[async_trait]
pub trait ConfigSource: Send + Sync {
    async fn get(&self, key: &str) -> Result<Option<serde_json::Value>, ConfigError>;
    async fn set(&self, key: &str, value: &serde_json::Value) -> Result<(), ConfigError>;
    fn should_persist(&self) -> bool {
        true
    }
}

  • EnvSource::should_persist() returns false - shouldn't persist prompted values to env vars
  • PromptSource::should_persist() returns false - doesn't persist anyway
  • get_or_prompt() now skips sources where should_persist() is false

Updated get_or_prompt():

for source in &self.sources {
    if !source.should_persist() {
        continue;
    }
    if source.set(T::KEY, &value).await.is_ok() {
        break;
    }
}
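The skip-then-persist loop is easy to exercise with synchronous stand-ins (illustrative names; the real trait is async):

```rust
use std::cell::RefCell;

// Minimal stand-in for a ConfigSource that records whether set() ran.
struct MockSource {
    persist: bool,
    wrote: RefCell<bool>,
}

impl MockSource {
    fn should_persist(&self) -> bool {
        self.persist
    }
    fn set(&self, _key: &str, _value: &str) -> Result<(), ()> {
        *self.wrote.borrow_mut() = true;
        Ok(())
    }
}

// Mirrors the fixed get_or_prompt() loop: skip non-persisting sources,
// stop at the first successful write.
fn persist_first(sources: &[MockSource], key: &str, value: &str) {
    for source in sources {
        if !source.should_persist() {
            continue;
        }
        if source.set(key, value).is_ok() {
            break;
        }
    }
}

fn main() {
    let env = MockSource { persist: false, wrote: RefCell::new(false) }; // EnvSource-like
    let sqlite = MockSource { persist: true, wrote: RefCell::new(false) }; // SqliteSource-like
    let sources = [env, sqlite];
    persist_first(&sources, "AppConfig", "{}");
    assert!(!*sources[0].wrote.borrow()); // env skipped: should_persist() == false
    assert!(*sources[1].wrote.borrow()); // sqlite received the write
    println!("only the persisting source was written");
}
```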

Tests:

  • test_prompt_source_always_returns_none
  • test_prompt_source_set_is_noop
  • test_prompt_source_does_not_persist
  • test_full_chain_with_prompt_source_falls_through_to_prompt

1.3 Integration test: full resolution chain

Status: Implemented

Tests:

  • test_full_resolution_chain_sqlite_fallback — env not set, sqlite has value, get() returns sqlite
  • test_full_resolution_chain_env_overrides_sqlite — env set, sqlite has value, get() returns env
  • test_branch_switching_scenario_deserialization_error — old struct shape in sqlite returns Deserialization error

1.4 Validate Zitadel + OpenBao integration path

Status: Planning phase - detailed execution plan below

Background: ADR 020-1 documents the target architecture for Zitadel OIDC + OpenBao integration. This task validates the full chain by deploying Zitadel and OpenBao on a local k3d cluster and demonstrating an end-to-end example.

Architecture Overview:

┌─────────────────────────────────────────────────────────────────────┐
│                         Harmony CLI / App                           │
│                                                                     │
│  ConfigManager:                                                     │
│    1. EnvSource      ← HARMONY_CONFIG_* env vars (highest priority) │
│    2. SqliteSource   ← ~/.local/share/harmony/config/config.db      │
│    3. StoreSource    ← OpenBao (team-scale, via Zitadel OIDC)       │
│                                                                     │
│  When StoreSource fails (OpenBao unreachable):                      │
│    → returns Ok(None), chain falls through to SqliteSource          │
└─────────────────────────────────────────────────────────────────────┘

┌──────────────────┐          ┌──────────────────┐
│     Zitadel      │          │     OpenBao      │
│   (IdP + OIDC)   │          │  (Secret Store)  │
│                  │          │                  │
│  Device Auth     │───JWT───▶│  JWT Auth        │
│  Flow (RFC 8628) │          │  Method          │
└──────────────────┘          └──────────────────┘

Prerequisites:

  • Docker running (for k3d)
  • Rust toolchain (edition 2024)
  • Network access to download Helm charts
  • kubectl (installed automatically with k3d, or pre-installed)

Step-by-Step Execution Plan:

Step 1: Create k3d cluster for local development

When you run cargo run -p example-zitadel (or any example using K8sAnywhereTopology::from_env()), Harmony automatically provisions a k3d cluster if one does not exist. By default:

  • use_local_k3d = true (env: HARMONY_USE_LOCAL_K3D, default true)
  • autoinstall = true (env: HARMONY_AUTOINSTALL, default true)
  • Cluster name: harmony (hardcoded in K3DInstallationScore::default())
  • k3d binary is downloaded to ~/.local/share/harmony/k3d/
  • Kubeconfig is merged into ~/.kube/config, context set to k3d-harmony

No manual k3d cluster create is needed. If you want to create the cluster manually first:

# Install k3d (requires sudo or install to user path)
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

# Create the cluster with the same name Harmony expects
k3d cluster create harmony
kubectl cluster-info --context k3d-harmony

Validation: kubectl get nodes --context k3d-harmony shows 1 server node (k3d default)

Note: The existing examples use hardcoded external hostnames (e.g., sso.sto1.nationtech.io) for ingress. On a local k3d cluster, these hostnames are not routable. For local development you must either:

  • Use kubectl port-forward to access services directly
  • Configure /etc/hosts entries pointing to 127.0.0.1
  • Use a k3d loadbalancer with --port mappings

Step 2: Deploy Zitadel

Zitadel requires the topology to implement Topology + K8sclient + HelmCommand + PostgreSQL. The K8sAnywhereTopology satisfies all four.

cargo run -p example-zitadel

What happens internally (see harmony/src/modules/zitadel/mod.rs):

  1. Creates zitadel namespace via K8sResourceScore
  2. Deploys a CNPG PostgreSQL cluster:
    • Name: zitadel-pg
    • Instances: 2 (not 1)
    • Storage: 10Gi
    • Namespace: zitadel
  3. Resolves the internal DB endpoint (host:port) from the CNPG cluster
  4. Generates a 32-byte alphanumeric masterkey, stores it as Kubernetes Secret zitadel-masterkey (idempotent: skips if it already exists)
  5. Generates a 16-char admin password (guaranteed 1+ uppercase, lowercase, digit, symbol)
  6. Deploys Zitadel Helm chart (zitadel/zitadel from https://charts.zitadel.com):
    • chart_version: None -- uses latest chart version (not pinned)
    • No --wait flag -- returns before pods are ready
    • Ingress annotations are OpenShift-oriented (route.openshift.io/termination: edge, cert-manager.io/cluster-issuer: letsencrypt-prod). On k3d these annotations are silently ignored.
    • Ingress includes TLS config with secretName: "{host}-tls", which requires cert-manager. Without cert-manager, TLS termination does not happen at the ingress level.

Key Helm values set by ZitadelScore:

  • zitadel.configmapConfig.ExternalDomain: the host field (e.g., sso.sto1.nationtech.io)
  • zitadel.configmapConfig.ExternalSecure: true
  • zitadel.configmapConfig.TLS.Enabled: false (TLS at ingress, not in Zitadel)
  • Admin user: UserName: "admin", Email: admin@zitadel.example.com (hardcoded, not derived from host)
  • Database credentials: injected via env[].valueFrom.secretKeyRef from secret zitadel-pg-superuser (both user and admin use the same superuser -- there is a TODO to fix this)
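Collected into a values-file shape, the bullets above correspond roughly to the following sketch. This is an approximation: the FirstInstance placement and the env var name follow upstream Zitadel chart conventions and are assumptions, not a dump of what ZitadelScore actually emits.

```yaml
zitadel:
  configmapConfig:
    ExternalDomain: sso.sto1.nationtech.io   # the `host` field
    ExternalSecure: true
    TLS:
      Enabled: false
    FirstInstance:
      Org:
        Human:
          UserName: admin
          Email: admin@zitadel.example.com
  env:
    # Credentials come from the CNPG-generated superuser secret
    # (exact env var name is illustrative)
    - name: ZITADEL_DATABASE_POSTGRES_USER_PASSWORD
      valueFrom:
        secretKeyRef:
          name: zitadel-pg-superuser
          key: password
```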

Expected output:

===== ZITADEL DEPLOYMENT COMPLETE =====
Login URL: https://sso.sto1.nationtech.io
Username: admin@zitadel.sso.sto1.nationtech.io
Password: <generated 16-char password>

Note on the success message: The printed username admin@zitadel.{host} does not match the actual configured email admin@zitadel.example.com. The actual login username in Zitadel is admin (the UserName field). This discrepancy exists in the current code.

Validation on k3d:

# Wait for pods to be ready (Helm returns before readiness)
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=zitadel -n zitadel --timeout=300s

# Port-forward to access Zitadel (ingress won't work without proper DNS/TLS on k3d)
kubectl port-forward svc/zitadel -n zitadel 8080:8080

# Access at http://localhost:8080 (note: ExternalSecure=true may cause redirect issues)

Known issues for k3d deployment:

  • ExternalSecure: true tells Zitadel to expect HTTPS, but k3d port-forward serves plain HTTP, which may cause redirect loops. Workaround: modify the example to set ExternalSecure: false for local dev.
  • The CNPG operator must be installed on the cluster. K8sAnywhereTopology handles this via the PostgreSQL trait implementation, which deploys the operator first.

Step 3: Deploy OpenBao

OpenBao requires only Topology + K8sclient + HelmCommand (no PostgreSQL dependency).

cargo run -p example-openbao

What happens internally (see harmony/src/modules/openbao/mod.rs):

  1. OpenbaoScore directly delegates to HelmChartScore.create_interpret() -- there is no custom execute() logic, no namespace creation step, no secret generation
  2. Deploys OpenBao Helm chart (openbao/openbao from https://openbao.github.io/openbao-helm):
    • chart_version: None -- uses latest chart version (not pinned)
    • create_namespace: true -- the openbao namespace is created by Helm
    • install_only: false -- uses helm upgrade --install

Exact Helm values set by OpenbaoScore:

global:
  openshift: true          # <-- PROBLEM: hardcoded, see below
server:
  standalone:
    enabled: true
    config: |
      ui = true
      listener "tcp" {
        tls_disable = true
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "file" {
        path = "/openbao/data"
      }
  service:
    enabled: true
  ingress:
    enabled: true
    hosts:
      - host: <host field>   # e.g., openbao.sebastien.sto1.nationtech.io
  dataStorage:
    enabled: true
    size: 10Gi
    storageClass: null       # uses cluster default
    accessMode: ReadWriteOnce
  auditStorage:
    enabled: true
    size: 10Gi
    storageClass: null
    accessMode: ReadWriteOnce
ui:
  enabled: true

Critical issue: global.openshift: true is hardcoded. The OpenBao Helm chart default is global.openshift: false. When set to true, the chart adjusts security contexts and may create OpenShift Routes instead of standard Kubernetes Ingress resources. On k3d (vanilla k8s), this will produce resources that may not work correctly. Before deploying on k3d, this must be overridden.

Fix required for k3d: Either:

  1. Modify OpenbaoScore to accept an openshift: bool field (preferred long-term fix)
  2. Or, for this validation, create a custom example that passes values_overrides with global.openshift=false

Post-deployment initialization (manual -- the TODO in mod.rs acknowledges this is not automated):

OpenBao starts in a sealed state. You must initialize and unseal it manually. See https://openbao.org/docs/platform/k8s/helm/run/

# Initialize OpenBao (generates unseal keys + root token)
kubectl exec -n openbao openbao-0 -- bao operator init

# Save the output! It contains 5 unseal keys and the root token.
# Example output:
# Unseal Key 1: abc123...
# Unseal Key 2: def456...
# ...
# Initial Root Token: hvs.xxxxx

# Unseal (requires 3 of 5 keys by default)
kubectl exec -n openbao openbao-0 -- bao operator unseal <key1>
kubectl exec -n openbao openbao-0 -- bao operator unseal <key2>
kubectl exec -n openbao openbao-0 -- bao operator unseal <key3>

Validation:

kubectl exec -n openbao openbao-0 -- bao status
# Should show "Sealed: false"

Note: The ingress has no TLS configuration (unlike Zitadel's ingress). Access is HTTP-only unless you configure TLS separately.

Step 4: Configure OpenBao for Harmony

Two paths are available depending on the authentication method:

Path A: Userpass auth (simpler, for local dev)

The current OpenbaoSecretStore supports token and userpass authentication. It does NOT yet implement the JWT/OIDC device flow described in ADR 020-1.

# Port-forward to access OpenBao API
kubectl port-forward svc/openbao -n openbao 8200:8200 &

export BAO_ADDR="http://127.0.0.1:8200"
export BAO_TOKEN="<root token from init>"

# Enable KV v2 secrets engine (default mount "secret")
bao secrets enable -path=secret kv-v2

# Enable userpass auth method
bao auth enable userpass


# Create policy granting read/write on harmony/* paths
cat <<'EOF' | bao policy write harmony-dev -
path "secret/data/harmony/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "secret/metadata/harmony/*" {
  capabilities = ["list", "read", "delete"]
}
EOF

# Create the user with the policy attached
bao write auth/userpass/users/harmony \
    password="harmony-dev-password" \
    policies="harmony-dev"

Bug in OpenbaoSecretStore::authenticate_userpass(): The kv_mount parameter (default "secret") is passed to vaultrs::auth::userpass::login() as the auth mount path. This means it calls POST /v1/auth/secret/login/{username} instead of the correct POST /v1/auth/userpass/login/{username}. The auth mount and KV mount are conflated into one parameter.

Workaround: Set OPENBAO_KV_MOUNT=userpass so the auth call hits the correct mount path. But then KV operations would use mount userpass instead of secret, which is wrong.

Proper fix needed: Split kv_mount into two separate parameters: one for the KV v2 engine mount (secret) and one for the auth mount (userpass). This is a bug in harmony_secret/src/store/openbao.rs:234.

For this example: Use token auth instead of userpass to sidestep the bug:

# Set env vars for the example
export OPENBAO_URL="http://127.0.0.1:8200"
export OPENBAO_TOKEN="<root token from init>"
export OPENBAO_KV_MOUNT="secret"

Path B: JWT auth with Zitadel (target architecture, per ADR 020-1)

This is the production path described in the ADR. It requires the device flow code that is not yet implemented in OpenbaoSecretStore. The current code only supports token and userpass.

When implemented, the flow will be:

  1. Enable JWT auth method in OpenBao
  2. Configure it to trust Zitadel's OIDC discovery URL
  3. Create a role that maps Zitadel JWT claims to OpenBao policies

# Enable JWT auth
bao auth enable jwt

# Configure JWT auth to trust Zitadel
bao write auth/jwt/config \
    oidc_discovery_url="https://<zitadel-host>" \
    bound_issuer="https://<zitadel-host>"

# Create role for Harmony developers
bao write auth/jwt/role/harmony-developer \
    role_type="jwt" \
    bound_audiences="<harmony_client_id>" \
    user_claim="email" \
    groups_claim="urn:zitadel:iam:org:project:roles" \
    policies="harmony-dev" \
    ttl="4h" \
    max_ttl="24h" \
    token_type="service"

Zitadel application setup (in Zitadel console):

  1. Create project: Harmony
  2. Add application: Harmony CLI (Native app type)
  3. Enable Device Authorization grant type
  4. Set scopes: openid email profile offline_access
  5. Note the client_id

This path is deferred until the device flow is implemented in OpenbaoSecretStore.

Step 5: Write end-to-end example

The example uses StoreSource<OpenbaoSecretStore> with token auth to avoid the userpass mount bug.

Environment variables required (from harmony_secret/src/config.rs):

| Variable | Required | Default | Notes |
|---|---|---|---|
| OPENBAO_URL | Yes | None | Falls back to VAULT_ADDR |
| OPENBAO_TOKEN | For token auth | None | Root or user token |
| OPENBAO_USERNAME | For userpass | None | Requires OPENBAO_PASSWORD too |
| OPENBAO_PASSWORD | For userpass | None | |
| OPENBAO_KV_MOUNT | No | "secret" | KV v2 engine mount path. Also used as userpass auth mount -- this is a bug. |
| OPENBAO_SKIP_TLS | No | false | Set "true" to disable TLS verification |

Note: OpenbaoSecretStore::new() is async and requires a running OpenBao at construction time (it validates the token if using cached auth). If OpenBao is unreachable during construction, the call will fail. The graceful fallback only applies to StoreSource::get() calls after construction -- the ConfigManager must be built with a live store, or the store must be wrapped in a lazy initialization pattern.
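One way to remove the construction-time dependency is a lazy wrapper that defers the connection to first use. A stdlib-only sketch of the pattern, using a placeholder connect function rather than the real OpenbaoSecretStore API:

```rust
use std::sync::OnceLock;

// Placeholder for an expensive, fallible backend connection. The real
// OpenbaoSecretStore::new() is async and validates auth against the server.
fn connect(url: &str) -> Result<String, String> {
    if url.is_empty() {
        Err("connection refused".to_string())
    } else {
        Ok(format!("client for {url}"))
    }
}

// Defers the connection until first use instead of ConfigManager build time.
struct LazyStore {
    url: String,
    client: OnceLock<Option<String>>,
}

impl LazyStore {
    fn new(url: &str) -> Self {
        Self { url: url.to_string(), client: OnceLock::new() }
    }

    // Returns None on failure so a config chain can fall through.
    fn get(&self, _key: &str) -> Option<&str> {
        self.client.get_or_init(|| connect(&self.url).ok()).as_deref()
    }
}

fn main() {
    let reachable = LazyStore::new("http://127.0.0.1:8200");
    let unreachable = LazyStore::new("");
    assert!(reachable.get("AppConfig").is_some());
    assert!(unreachable.get("AppConfig").is_none());
    println!("lazy store connects only on first get()");
}
```

One caveat: OnceLock caches the first outcome, so a failed connection is never retried; a retrying variant would need a different interior-mutability scheme.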

// harmony_config/examples/openbao_chain.rs
use harmony_config::{ConfigManager, EnvSource, SqliteSource, StoreSource};
use harmony_secret::OpenbaoSecretStore;
use serde::{Deserialize, Serialize};
use std::sync::Arc;

#[derive(Debug, Clone, Serialize, Deserialize, schemars::JsonSchema, PartialEq)]
struct AppConfig {
    host: String,
    port: u16,
}

impl harmony_config::Config for AppConfig {
    const KEY: &'static str = "AppConfig";
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    env_logger::init();

    // Build the source chain
    let env_source: Arc<dyn harmony_config::ConfigSource> = Arc::new(EnvSource);

    let sqlite = Arc::new(
        SqliteSource::default()
            .await
            .expect("Failed to open SQLite"),
    );

    // OpenBao store -- requires OPENBAO_URL and OPENBAO_TOKEN env vars
    // Falls back gracefully if OpenBao is unreachable at query time
    let openbao_url = std::env::var("OPENBAO_URL")
        .or(std::env::var("VAULT_ADDR"))
        .ok();

    let sources: Vec<Arc<dyn harmony_config::ConfigSource>> = if let Some(url) = openbao_url {
        let kv_mount = std::env::var("OPENBAO_KV_MOUNT")
            .unwrap_or_else(|_| "secret".to_string());
        let skip_tls = std::env::var("OPENBAO_SKIP_TLS")
            .map(|v| v == "true")
            .unwrap_or(false);

        match OpenbaoSecretStore::new(
            url,
            kv_mount,
            skip_tls,
            std::env::var("OPENBAO_TOKEN").ok(),
            std::env::var("OPENBAO_USERNAME").ok(),
            std::env::var("OPENBAO_PASSWORD").ok(),
        )
        .await
        {
            Ok(store) => {
                let store_source = Arc::new(StoreSource::new("harmony".to_string(), store));
                vec![env_source, Arc::clone(&sqlite) as _, store_source]
            }
            Err(e) => {
                eprintln!("Warning: OpenBao unavailable ({e}), using local sources only");
                vec![env_source, sqlite]
            }
        }
    } else {
        println!("No OPENBAO_URL set, using local sources only");
        vec![env_source, sqlite]
    };

    let manager = ConfigManager::new(sources);

    // Scenario 1: get() with nothing stored -- returns NotFound
    let result = manager.get::<AppConfig>().await;
    println!("Get (empty): {:?}", result);

    // Scenario 2: set() then get()
    let config = AppConfig {
        host: "production.example.com".to_string(),
        port: 443,
    };
    manager.set(&config).await?;
    println!("Set: {:?}", config);

    let retrieved = manager.get::<AppConfig>().await?;
    println!("Get (after set): {:?}", retrieved);
    assert_eq!(config, retrieved);

    println!("End-to-end chain validated!");
    Ok(())
}

Key behaviors demonstrated:

  1. Graceful construction fallback: If OPENBAO_URL is not set or OpenBao is unreachable at startup, the chain is built without it
  2. Graceful query fallback: StoreSource::get() returns Ok(None) on any error, so the chain continues to SQLite
  3. Environment override: HARMONY_CONFIG_AppConfig='{"host":"env-host","port":9090}' bypasses all backends
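The override naming convention from behavior 3 can be checked with std::env alone. JSON parsing is elided, and env_var_name is an illustrative helper, not the crate's API:

```rust
use std::env;

// Builds the variable name EnvSource reads for a given Config::KEY.
fn env_var_name(key: &str) -> String {
    format!("HARMONY_CONFIG_{key}")
}

// Returns the raw JSON payload if an override is present.
fn env_override(key: &str) -> Option<String> {
    env::var(env_var_name(key)).ok()
}

fn main() {
    // Simulate: HARMONY_CONFIG_AppConfig='{"host":"env-host","port":9090}'
    // (set_var is unsafe as of edition 2024)
    unsafe { env::set_var("HARMONY_CONFIG_AppConfig", r#"{"host":"env-host","port":9090}"#) };
    assert_eq!(env_var_name("AppConfig"), "HARMONY_CONFIG_AppConfig");
    assert!(env_override("AppConfig").unwrap().contains("env-host"));
    assert!(env_override("SomethingUnset").is_none());
    println!("env override takes effect");
}
```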

Step 6: Validate graceful fallback

Already validated via unit tests (26 tests pass):

  • test_store_source_error_falls_through_to_sqlite -- StoreSource with AlwaysErrorStore returns connection error, chain falls through to SqliteSource
  • test_store_source_not_found_falls_through_to_sqlite -- StoreSource returns NotFound, chain falls through to SqliteSource

Code path (FIXED in harmony_config/src/source/store.rs):

// StoreSource::get() -- returns Ok(None) on ANY error, allowing chain to continue
match self.store.get_raw(&self.namespace, key).await {
    Ok(bytes) => { /* deserialize and return */ Ok(Some(value)) }
    Err(SecretStoreError::NotFound { .. }) => Ok(None),
    Err(_) => Ok(None),  // Connection errors, timeouts, etc.
}

Step 7: Known issues and blockers

| Issue | Location | Severity | Status |
|---|---|---|---|
| global.openshift: true hardcoded | harmony/src/modules/openbao/mod.rs:32 | Blocker for k3d | Fixed: Added openshift: bool field to OpenbaoScore (defaults to false) |
| kv_mount used as auth mount path | harmony_secret/src/store/openbao.rs:234 | Bug | Fixed: Added separate auth_mount parameter; added OPENBAO_AUTH_MOUNT env var |
| Admin email hardcoded admin@zitadel.example.com | harmony/src/modules/zitadel/mod.rs:314 | Minor | Cosmetic mismatch with success message |
| ExternalSecure: true hardcoded | harmony/src/modules/zitadel/mod.rs:306 | Issue for k3d | Fixed: Zitadel now detects the Kubernetes distribution and applies appropriate settings (OpenShift = TLS + cert-manager annotations; k3d = plain nginx ingress without TLS) |
| No Helm chart version pinning | Both modules | Risk | Non-deterministic deploys |
| No --wait on Helm install | harmony/src/modules/helm/chart.rs | UX | Must manually wait for readiness |
| get_version()/get_status() are todo!() | Both modules | Panic risk | Do not call these methods |
| JWT/OIDC device flow not implemented | harmony_secret/src/store/openbao.rs | Gap | Implemented: ZitadelOidcAuth in harmony_secret/src/store/zitadel.rs |
| HARMONY_SECRET_NAMESPACE panics if not set | harmony_secret/src/config.rs:5 | Runtime panic | Only affects SecretManager, not StoreSource directly |

Remaining work:

  • StoreSource<OpenbaoSecretStore> integration validated (compiles)
  • StoreSource returns Ok(None) on connection error (not Err)
  • Graceful fallback tests pass when OpenBao is unreachable (2 new tests)
  • Fix global.openshift: true in OpenbaoScore for k3d compatibility — done (see issues table above)
  • Fix kv_mount / auth mount conflation bug in OpenbaoSecretStore — done (see issues table above)
  • Create and test harmony_config/examples/openbao_chain.rs against a real k3d deployment
  • Implement JWT/OIDC device flow in OpenbaoSecretStore (ADR 020-1) — ZitadelOidcAuth implemented and wired into OpenbaoSecretStore::new() auth chain
  • Fix Zitadel distribution detection — Zitadel now uses k8s_client.get_k8s_distribution() to detect OpenShift vs k3d and applies appropriate Helm values (TLS + cert-manager for OpenShift, plain nginx for k3d)

1.5 UX validation checklist

Status: Partially complete - manual verification needed

  • cargo run --example postgresql with no env vars → prompts for nothing
  • An example that uses SecretManager today (e.g., brocade_snmp_server) → when migrated to harmony_config, first run prompts, second run reads from SQLite
  • Setting HARMONY_CONFIG_BrocadeSwitchAuth='{"host":"...","user":"...","password":"..."}' → skips prompt, uses env value
  • Deleting ~/.local/share/harmony/config/ directory → re-prompts on next run

Deliverables

  • SqliteSource implementation with tests
  • Functional PromptSource with should_persist() design
  • Fix get_or_prompt to persist to first writable source (via should_persist()), not all sources
  • Integration tests for full resolution chain
  • Branch-switching deserialization failure test
  • StoreSource<OpenbaoSecretStore> integration validated (compiles, graceful fallback)
  • ADR for Zitadel OIDC target architecture
  • Update docs to reflect final implementation and behavior

Key Implementation Notes

  1. SQLite path: ~/.local/share/harmony/config/config.db (not ~/.local/share/harmony/config.db)

  2. Auto-create directory: SqliteSource::open() creates parent directories if they don't exist

  3. Default path: SqliteSource::default() uses directories::ProjectDirs to find the correct data directory

  4. Env var precedence: Environment variables always take precedence over SQLite in the resolution chain

  5. Testing: All tests use tempfile::NamedTempFile for temporary database paths, ensuring test isolation

  6. Graceful fallback: StoreSource::get() returns Ok(None) on any error (connection refused, timeout, etc.), allowing the chain to fall through to the next source. This ensures OpenBao unavailability doesn't break the config chain.

  7. StoreSource errors don't block chain: When OpenBao is unreachable, StoreSource::get() returns Ok(None) and the ConfigManager continues to the next source (typically SqliteSource). This is validated by test_store_source_error_falls_through_to_sqlite and test_store_source_not_found_falls_through_to_sqlite.