refactor(operator): replace gen-crd yaml pipeline with a harmony Score #271

Merged
johnride merged 3 commits from feat/install-reconcile-operator-score into feat/iot-walking-skeleton 2026-04-21 20:53:32 +00:00
Owner

Review feedback: writing yaml and shelling out to kubectl is the
exact anti-pattern harmony exists to eliminate. The operator already
has typed Rust for its CRD (`#[derive(CustomResource)]`), and
harmony-k8s already has a typed apply path. So the "install" step
should be a Score, not `cargo run -- gen-crd | kubectl apply -f -`.

Changes:

- **New** `iot/iot-operator-v0/src/install.rs` — `install_crds()`
  builds `Deployment::crd()` via `kube::CustomResourceExt`, wraps it
  in `harmony::modules::k8s::resource::K8sResourceScore`, and
  executes the Score against a tiny local `InstallTopology` that
  just carries a `K8sClient` loaded from `KUBECONFIG` (see the
  sketch after this list).

  The local topology exists because `K8sAnywhereTopology::ensure_ready`
  does a lot of product-level setup (cert-manager, tenant manager,
  helm probes) that isn't appropriate for a narrow "apply a CRD"
  action. A 30-line inline topology that implements `K8sclient` +
  a noop `ensure_ready` is the right-sized abstraction for now.
  When a larger "install the operator in-cluster" Score lands
  (Deployment + SA + RBAC + ClusterRoleBinding), that may justify
  promoting the topology to a shared crate.

- **Renamed subcommand** `gen-crd` → `install`. Old path: print yaml
  to stdout for kubectl to consume. New path: apply the CRD directly
  via the Score, using whatever `KUBECONFIG` points at.

- **Deleted** `iot/iot-operator-v0/deploy/crd.yaml` and
  `deploy/operator.yaml`. The CRD yaml was derived from Rust and
  committed alongside the source — a drift hazard (nothing guaranteed
  they stayed in sync). `operator.yaml` was never actually applied by
  any smoke script; it existed only for documentation. Both go.

- **Rewired** `iot/scripts/smoke-a1.sh` phase 2 to call the `install`
  subcommand instead of piping yaml to kubectl. Everything downstream
  (kubectl wait for Established, apiserver CEL rejection check,
  operator + agent + container lifecycle) unchanged.

- **Dropped** `serde_yaml` from the operator's `Cargo.toml` — it was
  only used to print the CRD as yaml. Added `harmony`, `harmony-k8s`,
  and `async-trait` deps.
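
A condensed sketch of the new install path, for orientation. The
kube-rs pieces are real (`CustomResourceExt::crd()` is generated by
`#[derive(CustomResource)]`, and `kube::Client::try_default()` reads
`KUBECONFIG`); the harmony-side names and signatures shown here
(`K8sResourceScore::single`, `interpret`, `InstallTopology::new`)
are assumptions for illustration, not copied from the crate:

```rust
use kube::{Client, CustomResourceExt};

use crate::crd::Deployment; // the #[derive(CustomResource)] type

pub async fn install_crds() -> anyhow::Result<()> {
    // Typed CRD straight from the Rust definition: no yaml on disk,
    // nothing to drift out of sync with the source.
    let crd = Deployment::crd();

    // Client comes from whatever KUBECONFIG points at.
    let client = Client::try_default().await?;
    // Assumed helper: wraps the client in harmony's K8sClient.
    let topology = InstallTopology::new(client);

    // Hypothetical Score constructor and execution entry point.
    let score = K8sResourceScore::single(crd);
    score.interpret(&topology).await?;
    Ok(())
}
```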

Verification — `smoke-a1.sh` PASSes end-to-end on x86_64 k3d:
k3d cluster → install CRD via Score → apiserver rejects bad
score.type (CEL still works through the Score-applied CRD) →
operator → agent → nginx container up → curl 200 → delete CR →
KV + container removed.

Out of scope / follow-up: a proper "install operator in-cluster"
Score that also applies Namespace + SA + ClusterRole +
ClusterRoleBinding + Deployment (the manifests that used to live in
the deleted operator.yaml). Smoke-a1 currently runs the operator
as a host-side process, so that Score isn't on the test path today.

johnride added 1 commit 2026-04-21 20:38:14 +00:00
refactor(operator): replace gen-crd yaml pipeline with a harmony Score
All checks were successful
Run Check Script / check (pull_request) Successful in 2m12s
588afb9ab9
johnride reviewed 2026-04-21 20:47:30 +00:00
@@ -0,0 +32,4 @@
}
#[async_trait]
impl Topology for InstallTopology {
Author
Owner

InstallTopology feels very unnatural.

Review what topologies are. Maybe it would make sense to create a minimal k8s topology for ad-hoc k8s execution like that without all the fuss of k8sanywhere. We also know we have a design problem with topologies accumulating too many opinions (k8sanywhere being the prime example along with haclustertopology). Let's move forward for now but note that in the code and add it to the topology evolution roadmap item.

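For concreteness, the minimal shared topology the comment asks for
could look roughly like this. This is a sketch only: it assumes
harmony's `Topology` trait exposes an async `ensure_ready` and that
a `K8sclient` trait hands out a `kube::Client`; neither signature is
taken from the actual crate.

```rust
use async_trait::async_trait;
use kube::Client;

/// Hypothetical shared "bare" k8s topology: a client and nothing
/// else, so ad-hoc Scores can run without product-level setup.
pub struct K8sBareTopology {
    client: Client,
}

impl K8sBareTopology {
    /// Build from whatever KUBECONFIG (or in-cluster config) resolves to.
    pub async fn from_env() -> anyhow::Result<Self> {
        Ok(Self { client: Client::try_default().await? })
    }
}

#[async_trait]
impl Topology for K8sBareTopology {
    // Deliberately a noop: no cert-manager, no tenant manager, no
    // helm probes. Callers get exactly the cluster they pointed at.
    async fn ensure_ready(&self) -> anyhow::Result<()> {
        Ok(())
    }
}

#[async_trait]
impl K8sclient for K8sBareTopology {
    async fn k8s_client(&self) -> anyhow::Result<Client> {
        Ok(self.client.clone())
    }
}
```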
johnride added 1 commit 2026-04-21 20:52:40 +00:00
docs(topology): flag InstallTopology smell + add roadmap §12.6
All checks were successful
Run Check Script / check (pull_request) Successful in 2m14s
b8db8241d1
The InstallTopology in iot/iot-operator-v0/src/install.rs is
architecturally a workaround: harmony's existing opinionated
topologies (K8sAnywhereTopology, HAClusterTopology) have accumulated
product-level side effects in ensure_ready that make them unfit for
narrow actions like "apply a CRD," so the module vendored its own
tiny Topology impl. If this pattern proliferates, the topology
ecosystem drifts toward "one bespoke topology per Score," which is
exactly the proliferation harmony's design was meant to prevent.

Two documentation changes, no code/behavior change:

- **Inline:** doc comment on `InstallTopology` flagging it as a
  smell, explaining the root cause, and pointing at the roadmap
  entry below. Anyone finding this code later (or tempted to copy
  the pattern) reads the warning before they do.

- **Roadmap §12.6** (new): "Topology proliferation — opinionated
  topologies leaking into narrow use cases." Captures the
  architectural direction (minimal `K8sBareTopology` in harmony,
  unbundle product setup from `ensure_ready`) without prescribing
  an implementation. Includes an explicit done-check: the smoke
  test for "this roadmap item is fixed" is that install.rs can
  delete its inline Topology and replace it with a one-liner
  against the shared type.
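
The inline warning presumably reads along these lines (a paraphrase
of the description above, not the literal comment from install.rs):

```rust
/// ARCHITECTURAL SMELL -- deliberate, tracked as roadmap §12.6.
///
/// This inline Topology exists only because harmony's opinionated
/// topologies (K8sAnywhereTopology, HAClusterTopology) bundle
/// product-level setup into ensure_ready, which is unfit for a
/// narrow action like "apply a CRD". Don't copy this pattern: if
/// you need ad-hoc k8s execution, first check whether the minimal
/// shared topology from §12.6 has landed, then delete this impl
/// and use that instead.
struct InstallTopology {
    client: K8sClient,
}
```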
johnride added 1 commit 2026-04-21 20:53:09 +00:00
Merge branch 'feat/iot-walking-skeleton' into feat/install-reconcile-operator-score
All checks were successful
Run Check Script / check (pull_request) Successful in 2m13s
6676023aa8
johnride merged commit a79c835b08 into feat/iot-walking-skeleton 2026-04-21 20:53:32 +00:00
johnride deleted branch feat/install-reconcile-operator-score 2026-04-21 20:53:33 +00:00
Reference: NationTech/harmony#271