harmony/examples — latest commit 92150da12a by Jean-Gabriel Gill-Couture (2026-04-22):

feat(iot): label-selector targeting (replace target_devices with targetSelector)
DeploymentSpec.target_devices (flat string list) is gone. In its
place, DeploymentSpec.target_selector is a minimal
LabelSelector-shaped struct (matchLabels only for now, matchExpressions
deferred until there's a real need). Devices publish a labels map
in every AgentStatus heartbeat; operator resolves the selector
against the current fleet snapshot on each reconcile + aggregator
tick.

No legacy shim — the CRD is v1alpha1 and not yet deployed in the wild.

Aggregator consequences:
  - controller and aggregator now share a StatusSnapshots map so
    selector resolution sees the same data on both sides.
  - unreported is dropped: a device that has never heartbeated is
    invisible to the selector machinery, so the field no longer
    has clean semantics. "device went dark" can come back as a
    staleness metric later if needed.
  - controller's MissingTargets error is gone: zero matches is a
    legitimate state (devices may not have joined yet). The
    controller logs and fast-requeues (15s/30s) so a just-joining
    device picks the deployment up without needing a
    cross-task subscription.

Agent + setup Score:
  - Agent config grows a [labels] section (BTreeMap); the flat
    [agent].group field is gone. group becomes just one label.
  - IotDeviceSetupConfig takes a BTreeMap<String, String> instead
    of a String group. TOML render iterates the BTreeMap (ordered)
    so idempotent change detection still works cleanly.

CLI-facing:
  - example_iot_apply_deployment: --target-device -> --to, accepts
    comma-separated key=value pairs.
  - example_iot_vm_setup: --group -> --labels, same grammar.
  - smoke-a4.sh: VM publishes group=$GROUP,device=$DEVICE_ID;
    deploys target --to device=$DEVICE_ID so single-device smoke
    behavior is preserved while exercising the selector path.

CRD regenerated via chart/regen-crd.sh. 7 contract tests + 6
operator tests pass.

Examples

This directory contains runnable examples demonstrating Harmony's capabilities. Each example is a self-contained program that can be run with `cargo run -p example-<name>`.

Quick Reference

| Example | Description | Local K3D | Existing Cluster | Hardware Needed |
|---|---|---|---|---|
| postgresql | Deploy a PostgreSQL cluster | | | |
| ntfy | Deploy ntfy notification server | | | |
| tenant | Create a multi-tenant namespace | | | |
| cert_manager | Provision TLS certificates | | | |
| node_health | Check Kubernetes node health | | | |
| monitoring | Deploy Prometheus alerting | | | |
| monitoring_with_tenant | Monitoring + tenant isolation | | | |
| operatorhub_catalog | Install OperatorHub catalog | | | |
| validate_ceph_cluster_health | Verify Ceph cluster health | | | Rook/Ceph |
| remove_rook_osd | Remove a Rook OSD | | | Rook/Ceph |
| brocade_snmp_server | Configure Brocade switch SNMP | | | Brocade switch |
| opnsense_node_exporter | Node exporter on OPNsense | | | OPNsense firewall |
| opnsense_vm_integration | Full OPNsense firewall automation (11 Scores) | | | KVM/libvirt |
| opnsense_pair_integration | OPNsense HA pair with CARP failover | | | KVM/libvirt |
| okd_pxe | PXE boot configuration for OKD | | | |
| okd_installation | Full OKD bare-metal install | | | |
| okd_cluster_alerts | OKD cluster monitoring alerts | | | OKD cluster |
| multisite_postgres | Multi-site PostgreSQL failover | | | Multi-cluster |
| nats | Deploy NATS messaging | | | Multi-cluster |
| nats-supercluster | NATS supercluster across sites | | | Multi-cluster |
| lamp | LAMP stack deployment | | | |
| openbao | Deploy OpenBao vault | | | |
| zitadel | Deploy Zitadel identity provider | | | |
| try_rust_webapp | Rust webapp with packaging | | | Submodule |
| rust | Rust webapp with full monitoring | | | |
| rhob_application_monitoring | RHOB monitoring setup | | | |
| sttest | Full OKD stack test | | | |
| application_monitoring_with_tenant | App monitoring + tenant | | | OKD cluster |
| kube-rs | Direct kube-rs client usage | | | |
| k8s_drain_node | Drain a Kubernetes node | | | |
| k8s_write_file_on_node | Write files to K8s nodes | | | |
| harmony_inventory_builder | Discover hosts via subnet scan | | | |
| cli | CLI tool with inventory discovery | | | |
| tui | Terminal UI demonstration | | | |

Status Legend

| Symbol | Meaning |
|---|---|
| | Works out-of-the-box |
| | Not applicable or requires specific setup |

By Category

Data Services

  • postgresql — Deploy a PostgreSQL cluster via CloudNativePG
  • multisite_postgres — Multi-site PostgreSQL with failover
  • public_postgres — Public-facing PostgreSQL (⚠️ uses NationTech DNS)

Kubernetes Utilities

  • node_health — Check node health in a cluster
  • k8s_drain_node — Drain and reboot a node
  • k8s_write_file_on_node — Write files to nodes
  • validate_ceph_cluster_health — Verify Ceph/Rook cluster health
  • remove_rook_osd — Remove an OSD from Rook/Ceph
  • kube-rs — Direct Kubernetes client usage demo

Monitoring & Alerting

  • monitoring — Deploy Prometheus alerting with Discord webhooks
  • monitoring_with_tenant — Monitoring with tenant isolation
  • ntfy — Deploy ntfy notification server
  • okd_cluster_alerts — OKD-specific cluster alerts

Application Deployment

  • try_rust_webapp — Deploy a Rust webapp with packaging (⚠️ requires tryrust.org submodule)
  • rust — Rust webapp with full monitoring features
  • rhob_application_monitoring — Red Hat Observability Stack monitoring
  • lamp — LAMP stack deployment (⚠️ uses NationTech DNS)
  • application_monitoring_with_tenant — App monitoring with tenant isolation

Infrastructure & Bare Metal

  • opnsense_vm_integration — Recommended demo. Boot an OPNsense VM and configure it with 11 Scores (load balancer, DHCP, TFTP, VLANs, firewall rules, NAT, VIPs, LAGG). Fully automated; requires only KVM. See the detailed guide.
  • opnsense_pair_integration — Boot two OPNsense VMs and configure a CARP HA firewall pair with FirewallPairTopology and CarpVipScore. Demonstrates NIC link control for sequential bootstrap.
  • okd_installation — Full OKD cluster from scratch
  • okd_pxe — PXE boot configuration for OKD
  • sttest — Full OKD stack test with specific hardware
  • brocade_snmp_server — Configure Brocade switch via SNMP
  • opnsense_node_exporter — Node exporter on OPNsense firewall

Multi-Cluster

  • nats — NATS deployment on a cluster
  • nats-supercluster — NATS supercluster across multiple sites
  • multisite_postgres — PostgreSQL with multi-site failover

Identity & Secrets

  • openbao — Deploy OpenBao vault (⚠️ uses NationTech DNS)
  • zitadel — Deploy Zitadel identity provider (⚠️ uses NationTech DNS)

Cluster Services

  • cert_manager — Provision TLS certificates
  • tenant — Create a multi-tenant namespace
  • operatorhub_catalog — Install OperatorHub catalog sources

Development & Testing

  • cli — CLI tool with inventory discovery
  • tui — Terminal UI demonstration
  • harmony_inventory_builder — Host discovery via subnet scan

Running Examples

```sh
# Build first
cargo build --release

# Run any example
cargo run -p example-postgresql
cargo run -p example-ntfy
cargo run -p example-tenant
```

For examples that need an existing Kubernetes cluster:

```sh
export KUBECONFIG=/path/to/your/kubeconfig
export HARMONY_USE_LOCAL_K3D=false
export HARMONY_AUTOINSTALL=false

cargo run -p example-monitoring
```

Notes on Private Infrastructure

Some examples use NationTech-hosted infrastructure by default (DNS domains like `*.nationtech.io`, `*.harmony.mcd`). These are not suitable for public use without modification. See the Getting Started Guide for the recommended public examples.