The InstallTopology in iot/iot-operator-v0/src/install.rs is
architecturally a workaround: harmony's existing opinionated
topologies (K8sAnywhereTopology, HAClusterTopology) have accumulated
product-level side effects in ensure_ready that make them unfit for
narrow actions like "apply a CRD," so the module vendored its own
tiny Topology impl. If this pattern proliferates, the topology
ecosystem drifts toward "one bespoke topology per Score," which is
exactly what harmony's design was meant to prevent.
Two documentation changes, no code/behavior change:
- **Inline:** doc comment on `InstallTopology` flagging it as a
smell, explaining the root cause, and pointing at the roadmap
entry below. Anyone finding this code later (or tempted to copy
the pattern) reads the warning first.
- **Roadmap §12.6** (new): "Topology proliferation — opinionated
topologies leaking into narrow use cases." Captures the
architectural direction (minimal `K8sBareTopology` in harmony,
unbundle product setup from `ensure_ready`) without prescribing
an implementation. Includes an explicit done-check: the smoke
test for "this roadmap item is fixed" is that install.rs can
delete its inline Topology and replace it with a one-liner against
the shared type, as sketched below.
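To make that done-check concrete, a hypothetical version of the
one-liner. `K8sBareTopology` and its constructor are names taken from
the roadmap direction above, not an existing harmony API:

```rust
// Hypothetical end state: the ~30-line inline Topology in install.rs
// collapses to one construction of a shared, side-effect-free type.
// Name and constructor are assumptions, not current harmony API.
let topology = K8sBareTopology::from_kubeconfig().await?;
```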
Review feedback: writing yaml and shelling out to kubectl is the
exact anti-pattern harmony exists to eliminate. The operator already
has typed Rust for its CRD (`#[derive(CustomResource)]`), and
harmony-k8s already has a typed apply path. So the "install" step
should be a Score, not `cargo run -- gen-crd | kubectl apply -f -`.
Changes:
- **New** `iot/iot-operator-v0/src/install.rs` — `install_crds()`
builds `Deployment::crd()` via `kube::CustomResourceExt`, wraps it
in `harmony::modules::k8s::resource::K8sResourceScore`, and
executes the Score against a tiny local `InstallTopology` that
just carries a `K8sClient` loaded from `KUBECONFIG`.
The local topology exists because `K8sAnywhereTopology::ensure_ready`
does a lot of product-level setup (cert-manager, tenant manager,
helm probes) that isn't appropriate for a narrow "apply a CRD"
action. A 30-line inline topology that implements `K8sclient` +
a noop `ensure_ready` is the right-sized abstraction for now
(sketched after this list).
When a larger "install the operator in-cluster" Score lands
(Deployment + SA + RBAC + ClusterRoleBinding), that may justify
promoting the topology to a shared crate.
- **Renamed subcommand** `gen-crd` → `install`. Old path: print yaml
to stdout for kubectl to consume. New path: apply the CRD directly
via the Score, using whatever `KUBECONFIG` points at (clap sketch
after this list).
- **Deleted** `iot/iot-operator-v0/deploy/crd.yaml` and
`deploy/operator.yaml`. The CRD yaml was derived from Rust and
committed alongside the source — a drift hazard (nothing guaranteed
they stayed in sync). `operator.yaml` was never actually applied by
any smoke script; it existed only for documentation. Both go.
- **Rewired** `iot/scripts/smoke-a1.sh` phase 2 to call the `install`
subcommand instead of piping yaml to kubectl. Everything downstream
(kubectl wait for Established, apiserver CEL rejection check,
operator + agent + container lifecycle) unchanged.
- **Dropped** `serde_yaml` from the operator's `Cargo.toml` — it was
only used to print the CRD as yaml. Added `harmony`, `harmony-k8s`,
and `async-trait` deps.
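For reference, a condensed sketch of the shape install.rs takes. Only
`kube::CustomResourceExt::crd()` is a verified API here; the harmony
trait shapes (`Topology`, `K8sclient`), the `K8sResourceScore`
constructor, the execution entry point, and the client constructor
are assumptions standing in for the real signatures:

```rust
use std::sync::Arc;

use async_trait::async_trait;
use kube::CustomResourceExt; // verified: provides Deployment::crd()

// Import paths below mirror the names used in this change; the exact
// paths and trait signatures in harmony may differ (assumptions).
use harmony::modules::k8s::resource::K8sResourceScore;
use harmony::topology::{K8sclient, Topology}; // assumed paths
use harmony_k8s::K8sClient; // assumed path

use crate::crd::Deployment; // the #[derive(CustomResource)] type (path assumed)

/// The tiny local topology: it carries a client and nothing else.
struct InstallTopology {
    client: Arc<K8sClient>,
}

#[async_trait]
impl Topology for InstallTopology {
    fn name(&self) -> &str {
        "iot-operator-install"
    }

    // Deliberately a noop: no cert-manager, no tenant manager, no
    // helm probes. This topology only hands the Score a client.
    async fn ensure_ready(&self) -> Result<(), String> {
        Ok(())
    }
}

#[async_trait]
impl K8sclient for InstallTopology {
    async fn k8s_client(&self) -> Result<Arc<K8sClient>, String> {
        Ok(self.client.clone())
    }
}

pub async fn install_crds() -> Result<(), String> {
    // Typed CRD straight from the derive: the yaml intermediary and
    // its drift hazard disappear entirely.
    let crd = Deployment::crd();
    let topology = InstallTopology {
        client: Arc::new(K8sClient::from_kubeconfig().await?), // assumed ctor
    };
    let score = K8sResourceScore::new(vec![crd]); // assumed constructor
    score.interpret(&topology).await // assumed execution entry point
}
```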
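And a hypothetical shape for the renamed subcommand. This is not the
operator's actual CLI definition; it only shows clap mapping an
`Install` variant to the `install` subcommand that calls the Score
path instead of printing yaml:

```rust
use clap::{Parser, Subcommand};

#[derive(Parser)]
#[command(name = "iot-operator")]
struct Cli {
    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Was `gen-crd` (print yaml for `kubectl apply -f -`); now
    /// applies the CRD directly via the Score, against whatever
    /// KUBECONFIG points at.
    Install,
    // ... the operator's other subcommands elided from this sketch.
}

#[tokio::main]
async fn main() -> Result<(), String> {
    match Cli::parse().command {
        Command::Install => install_crds().await,
    }
}
```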
Verification — `smoke-a1.sh` PASSes end-to-end on x86_64 k3d:
k3d cluster → install CRD via Score → apiserver rejects bad
score.type (CEL still works through the Score-applied CRD) →
operator → agent → nginx container up → curl 200 → delete CR →
KV + container removed.
Out of scope / follow-up: a proper "install operator in-cluster"
Score that also applies Namespace + SA + ClusterRole +
ClusterRoleBinding + Deployment (the manifests that used to live in
the deleted operator.yaml). Smoke-a1 currently runs the operator
as a host-side process, so that Score isn't on the test path today.