DeploymentSpec.target_devices (flat string list) is gone. In its
place, DeploymentSpec.target_selector is a minimal
LabelSelector-shaped struct (matchLabels only for now; matchExpressions
is deferred until there's a real need). Devices publish a labels map
in every AgentStatus heartbeat; the operator resolves the selector
against the current fleet snapshot on each reconcile and aggregator
tick.
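A minimal sketch of the matchLabels-only shape described above — type and field names here are illustrative, not the actual Harmony definitions:

```rust
use std::collections::BTreeMap;

/// Minimal LabelSelector-shaped selector: matchLabels only,
/// matchExpressions deferred. (Names are illustrative.)
#[derive(Debug, Default)]
struct TargetSelector {
    match_labels: BTreeMap<String, String>,
}

impl TargetSelector {
    /// A device matches when every selector key/value pair appears in
    /// the labels map from its AgentStatus heartbeat. An empty selector
    /// matches every device (standard Kubernetes semantics).
    fn matches(&self, device_labels: &BTreeMap<String, String>) -> bool {
        self.match_labels
            .iter()
            .all(|(k, v)| device_labels.get(k) == Some(v))
    }
}

fn main() {
    let mut selector = TargetSelector::default();
    selector.match_labels.insert("group".into(), "edge".into());

    let mut labels = BTreeMap::new();
    labels.insert("group".to_string(), "edge".to_string());
    labels.insert("device".to_string(), "dev-01".to_string());

    assert!(selector.matches(&labels));
    labels.insert("group".to_string(), "lab".to_string());
    assert!(!selector.matches(&labels));
}
```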
No legacy shim — the CRD is v1alpha1 and not yet deployed in the wild.
Aggregator consequences:
- controller and aggregator now share a StatusSnapshots map so
selector resolution sees the same data on both sides.
- unreported is dropped: a device that has never heartbeated is
invisible to the selector machinery, so the field no longer
has clean semantics. "device went dark" can come back as a
staleness metric later if needed.
- controller's MissingTargets error is gone: zero matches is a
legitimate state (devices may not have joined yet). The
controller logs and fast-requeues (15s/30s) so a just-joining
device picks the deployment up without needing a
cross-task subscription.
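The consequences above can be sketched as a resolution pass over the shared snapshot — function and type names are assumptions, not the real controller code. Devices that never heartbeated simply have no entry, and zero matches returns an empty set rather than an error:

```rust
use std::collections::BTreeMap;

type Labels = BTreeMap<String, String>;

/// Hypothetical fleet snapshot: device id -> labels from its last
/// heartbeat. A device that never heartbeated has no entry at all,
/// which is why `unreported` no longer has clean semantics.
fn resolve_targets<'a>(
    snapshot: &'a BTreeMap<String, Labels>,
    match_labels: &Labels,
) -> Vec<&'a str> {
    snapshot
        .iter()
        .filter(|(_, labels)| {
            match_labels.iter().all(|(k, v)| labels.get(k) == Some(v))
        })
        .map(|(id, _)| id.as_str())
        .collect()
}

fn main() {
    let mut snapshot = BTreeMap::new();
    snapshot.insert(
        "dev-01".to_string(),
        BTreeMap::from([("group".to_string(), "edge".to_string())]),
    );

    // Zero matches is a legitimate state, not an error: the controller
    // logs and fast-requeues so late-joining devices get picked up.
    let sel = BTreeMap::from([("group".to_string(), "lab".to_string())]);
    assert!(resolve_targets(&snapshot, &sel).is_empty());
}
```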
Agent + setup Score:
- Agent config grows a [labels] section (BTreeMap); the flat
[agent].group field is gone. group becomes just one label.
- IotDeviceSetupConfig takes a BTreeMap<String, String> instead
of a String group. TOML render iterates the BTreeMap (ordered)
so idempotent change detection still works cleanly.
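Why the BTreeMap matters for idempotence: its iteration is key-ordered, so the same label set always renders to byte-identical text regardless of insertion order, and change detection stays a plain string comparison. A hand-rolled sketch (the real render presumably goes through a TOML serializer; escaping is ignored here):

```rust
use std::collections::BTreeMap;

/// Render a [labels] TOML section by iterating the BTreeMap.
/// BTreeMap iteration is key-ordered, so equal maps always produce
/// byte-identical output. (Illustrative; values are assumed not to
/// need escaping.)
fn render_labels(labels: &BTreeMap<String, String>) -> String {
    let mut out = String::from("[labels]\n");
    for (k, v) in labels {
        out.push_str(&format!("{k} = \"{v}\"\n"));
    }
    out
}

fn main() {
    let a = BTreeMap::from([
        ("group".to_string(), "edge".to_string()),
        ("device".to_string(), "dev-01".to_string()),
    ]);
    // Same entries inserted in the opposite order: render is identical,
    // so "did the config change?" is a simple string comparison.
    let b = BTreeMap::from([
        ("device".to_string(), "dev-01".to_string()),
        ("group".to_string(), "edge".to_string()),
    ]);
    assert_eq!(render_labels(&a), render_labels(&b));
}
```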
CLI-facing:
- example_iot_apply_deployment: --target-device -> --to, accepts
comma-separated key=value pairs.
- example_iot_vm_setup: --group -> --labels, same grammar.
- smoke-a4.sh: VM publishes group=$GROUP,device=$DEVICE_ID;
deploys target --to device=$DEVICE_ID so single-device smoke
behavior is preserved while exercising the selector path.
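The shared `--to`/`--labels` grammar (comma-separated key=value pairs) can be parsed with a few lines — a sketch, not the actual CLI code, with simplified error handling:

```rust
use std::collections::BTreeMap;

/// Parse the comma-separated key=value grammar shared by `--to` and
/// `--labels`, e.g. "group=edge,device=dev-01". (Illustrative sketch;
/// quoting and escaping are not handled.)
fn parse_labels(input: &str) -> Result<BTreeMap<String, String>, String> {
    input
        .split(',')
        .map(|pair| {
            pair.split_once('=')
                .map(|(k, v)| (k.trim().to_string(), v.trim().to_string()))
                .ok_or_else(|| format!("expected key=value, got {pair:?}"))
        })
        .collect()
}

fn main() {
    let labels = parse_labels("group=edge,device=dev-01").unwrap();
    assert_eq!(labels.get("device").map(String::as_str), Some("dev-01"));
    assert!(parse_labels("no-equals-sign").is_err());
}
```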
CRD regenerated via chart/regen-crd.sh. 7 contract tests + 6
operator tests pass.
Examples
This directory contains runnable examples demonstrating Harmony's capabilities. Each example is a self-contained program that can be run with `cargo run -p example-<name>`.
Quick Reference
| Example | Description | Local K3D | Existing Cluster | Hardware Needed |
|---|---|---|---|---|
| `postgresql` | Deploy a PostgreSQL cluster | ✅ | ✅ | — |
| `ntfy` | Deploy ntfy notification server | ✅ | ✅ | — |
| `tenant` | Create a multi-tenant namespace | ✅ | ✅ | — |
| `cert_manager` | Provision TLS certificates | ✅ | ✅ | — |
| `node_health` | Check Kubernetes node health | ✅ | ✅ | — |
| `monitoring` | Deploy Prometheus alerting | ✅ | ✅ | — |
| `monitoring_with_tenant` | Monitoring + tenant isolation | ✅ | ✅ | — |
| `operatorhub_catalog` | Install OperatorHub catalog | ✅ | ✅ | — |
| `validate_ceph_cluster_health` | Verify Ceph cluster health | — | ✅ | Rook/Ceph |
| `remove_rook_osd` | Remove a Rook OSD | — | ✅ | Rook/Ceph |
| `brocade_snmp_server` | Configure Brocade switch SNMP | — | ✅ | Brocade switch |
| `opnsense_node_exporter` | Node exporter on OPNsense | — | ✅ | OPNsense firewall |
| `opnsense_vm_integration` | Full OPNsense firewall automation (11 Scores) | ✅ | — | KVM/libvirt |
| `opnsense_pair_integration` | OPNsense HA pair with CARP failover | ✅ | — | KVM/libvirt |
| `okd_pxe` | PXE boot configuration for OKD | — | — | ✅ |
| `okd_installation` | Full OKD bare-metal install | — | — | ✅ |
| `okd_cluster_alerts` | OKD cluster monitoring alerts | — | ✅ | OKD cluster |
| `multisite_postgres` | Multi-site PostgreSQL failover | — | ✅ | Multi-cluster |
| `nats` | Deploy NATS messaging | — | ✅ | Multi-cluster |
| `nats-supercluster` | NATS supercluster across sites | — | ✅ | Multi-cluster |
| `lamp` | LAMP stack deployment | ✅ | ✅ | — |
| `openbao` | Deploy OpenBao vault | ✅ | ✅ | — |
| `zitadel` | Deploy Zitadel identity provider | ✅ | ✅ | — |
| `try_rust_webapp` | Rust webapp with packaging | ✅ | ✅ | Submodule |
| `rust` | Rust webapp with full monitoring | ✅ | ✅ | — |
| `rhob_application_monitoring` | RHOB monitoring setup | ✅ | ✅ | — |
| `sttest` | Full OKD stack test | — | — | ✅ |
| `application_monitoring_with_tenant` | App monitoring + tenant | — | ✅ | OKD cluster |
| `kube-rs` | Direct kube-rs client usage | ✅ | ✅ | — |
| `k8s_drain_node` | Drain a Kubernetes node | ✅ | ✅ | — |
| `k8s_write_file_on_node` | Write files to K8s nodes | ✅ | ✅ | — |
| `harmony_inventory_builder` | Discover hosts via subnet scan | ✅ | — | — |
| `cli` | CLI tool with inventory discovery | ✅ | — | — |
| `tui` | Terminal UI demonstration | ✅ | — | — |
Status Legend
| Symbol | Meaning |
|---|---|
| ✅ | Works out-of-the-box |
| — | Not applicable or requires specific setup |
By Category
Data Services
- `postgresql` — Deploy a PostgreSQL cluster via CloudNativePG
- `multisite_postgres` — Multi-site PostgreSQL with failover
- `public_postgres` — Public-facing PostgreSQL (⚠️ uses NationTech DNS)
Kubernetes Utilities
- `node_health` — Check node health in a cluster
- `k8s_drain_node` — Drain and reboot a node
- `k8s_write_file_on_node` — Write files to nodes
- `validate_ceph_cluster_health` — Verify Ceph/Rook cluster health
- `remove_rook_osd` — Remove an OSD from Rook/Ceph
- `kube-rs` — Direct Kubernetes client usage demo
Monitoring & Alerting
- `monitoring` — Deploy Prometheus alerting with Discord webhooks
- `monitoring_with_tenant` — Monitoring with tenant isolation
- `ntfy` — Deploy ntfy notification server
- `okd_cluster_alerts` — OKD-specific cluster alerts
Application Deployment
- `try_rust_webapp` — Deploy a Rust webapp with packaging (⚠️ requires the tryrust.org submodule)
- `rust` — Rust webapp with full monitoring features
- `rhob_application_monitoring` — Red Hat Observability Stack monitoring
- `lamp` — LAMP stack deployment (⚠️ uses NationTech DNS)
- `application_monitoring_with_tenant` — App monitoring with tenant isolation
Infrastructure & Bare Metal
- `opnsense_vm_integration` — Recommended demo. Boot an OPNsense VM and configure it with 11 Scores (load balancer, DHCP, TFTP, VLANs, firewall rules, NAT, VIPs, LAGG). Fully automated, requires only KVM. See the detailed guide.
- `opnsense_pair_integration` — Boot two OPNsense VMs and configure a CARP HA firewall pair with `FirewallPairTopology` and `CarpVipScore`. Demonstrates NIC link control for sequential bootstrap.
- `okd_installation` — Full OKD cluster from scratch
- `okd_pxe` — PXE boot configuration for OKD
- `sttest` — Full OKD stack test with specific hardware
- `brocade_snmp_server` — Configure Brocade switch via SNMP
- `opnsense_node_exporter` — Node exporter on OPNsense firewall
Multi-Cluster
- `nats` — NATS deployment on a cluster
- `nats-supercluster` — NATS supercluster across multiple sites
- `multisite_postgres` — PostgreSQL with multi-site failover
Identity & Secrets
- `openbao` — Deploy OpenBao vault (⚠️ uses NationTech DNS)
- `zitadel` — Deploy Zitadel identity provider (⚠️ uses NationTech DNS)
Cluster Services
- `cert_manager` — Provision TLS certificates
- `tenant` — Create a multi-tenant namespace
- `operatorhub_catalog` — Install OperatorHub catalog sources
Development & Testing
- `cli` — CLI tool with inventory discovery
- `tui` — Terminal UI demonstration
- `harmony_inventory_builder` — Host discovery via subnet scan
Running Examples
```shell
# Build first
cargo build --release

# Run any example
cargo run -p example-postgresql
cargo run -p example-ntfy
cargo run -p example-tenant
```
For examples that need an existing Kubernetes cluster:
```shell
export KUBECONFIG=/path/to/your/kubeconfig
export HARMONY_USE_LOCAL_K3D=false
export HARMONY_AUTOINSTALL=false

cargo run -p example-monitoring
```
Notes on Private Infrastructure
Some examples use NationTech-hosted infrastructure by default (DNS domains like `*.nationtech.io`, `*.harmony.mcd`). These are not suitable for public use without modification. See the Getting Started Guide for the recommended public examples.