refactor(operator): replace gen-crd yaml pipeline with a harmony Score #271
Review feedback: writing yaml and shelling out to kubectl is the exact anti-pattern harmony exists to eliminate. The operator already has typed Rust for its CRD (`#[derive(CustomResource)]`), and harmony-k8s already has a typed apply path. So the "install" step should be a Score, not `cargo run -- gen-crd | kubectl apply -f -`.

Changes:
New `iot/iot-operator-v0/src/install.rs`: `install_crds()` builds `Deployment::crd()` via `kube::CustomResourceExt`, wraps it in `harmony::modules::k8s::resource::K8sResourceScore`, and executes the Score against a tiny local `InstallTopology` that just carries a `K8sClient` loaded from `KUBECONFIG`.

The local topology exists because `K8sAnywhereTopology::ensure_ready` does a lot of product-level setup (cert-manager, tenant manager, helm probes) that isn't appropriate for a narrow "apply a CRD" action. A 30-line inline topology that implements `K8sclient` plus a noop `ensure_ready` is the right-sized abstraction for now. When a larger "install the operator in-cluster" Score lands (Deployment + SA + RBAC + ClusterRoleBinding), that may justify promoting the topology to a shared crate.
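For orientation, a minimal sketch of that shape. Only kube's `CustomResourceExt::crd()` is a documented API below; the harmony-side names (`K8sResourceScore`, the `Topology` trait path, the `K8sClient` constructor) are assumptions inferred from this description, not verified signatures:

```rust
// install.rs: sketch only. Harmony paths and signatures below are
// assumed from the PR description, not checked against the crates.
use async_trait::async_trait;
use kube::CustomResourceExt; // real kube-rs trait: provides Deployment::crd()

use crate::crd::Deployment; // the #[derive(CustomResource)] type (assumed path)

/// Tiny local topology: carries a K8sClient loaded from KUBECONFIG and
/// deliberately skips the product-level setup that
/// K8sAnywhereTopology::ensure_ready performs.
struct InstallTopology {
    client: harmony_k8s::K8sClient, // assumed harmony-k8s client type
}

#[async_trait]
impl harmony::topology::Topology for InstallTopology { // assumed trait path
    async fn ensure_ready(&self) -> anyhow::Result<()> {
        Ok(()) // noop: applying a CRD needs no cluster bootstrap
    }
    // Per the description, the real InstallTopology also implements
    // K8sclient so the Score can reach the cluster; elided here.
}

pub async fn install_crds() -> anyhow::Result<()> {
    // Derive the CRD from the typed Rust definition.
    let crd = Deployment::crd();

    // Wrap it in a Score and run it against the local topology
    // (assumed harmony API shape).
    let score = harmony::modules::k8s::resource::K8sResourceScore::new(crd);
    let topology = InstallTopology {
        client: harmony_k8s::K8sClient::from_kubeconfig().await?, // assumed ctor
    };
    score.execute(&topology).await
}
```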
Renamed subcommand `gen-crd` → `install`. Old path: print yaml to stdout for kubectl to consume. New path: apply the CRD directly via the Score, using whatever `KUBECONFIG` points at.
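The CLI wiring isn't shown in this description; purely as illustration, the rename might look like this with clap's derive API (the module layout is hypothetical):

```rust
// main.rs: hypothetical wiring for the renamed subcommand.
use clap::{Parser, Subcommand};

mod install; // the new install.rs sketched above

#[derive(Parser)]
struct Cli {
    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Apply the CRD directly via the Score (replaces `gen-crd`, which
    /// printed yaml for `kubectl apply -f -` to consume).
    Install,
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    match Cli::parse().command {
        Command::Install => install::install_crds().await,
    }
}
```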
Deleted `iot/iot-operator-v0/deploy/crd.yaml` and `deploy/operator.yaml`. The CRD yaml was derived from Rust and committed alongside the source, a drift hazard (nothing guaranteed they stayed in sync). `operator.yaml` was never actually applied by any smoke script; it existed only for documentation. Both go.
Rewired `iot/scripts/smoke-a1.sh` phase 2 to call the `install` subcommand instead of piping yaml to kubectl. Everything downstream (kubectl wait for Established, apiserver CEL rejection check, operator + agent + container lifecycle) unchanged.
Dropped `serde_yaml` from the operator's `Cargo.toml`; it was only used to print the CRD as yaml. Added `harmony`, `harmony-k8s`, and `async-trait` deps.

Verification: `smoke-a1.sh` PASSes end-to-end on x86_64 k3d. Flow: k3d cluster → install CRD via Score → apiserver rejects bad `score.type` (CEL still works through the Score-applied CRD) → operator → agent → nginx container up → curl 200 → delete CR → KV + container removed.
Out of scope / follow-up: a proper "install operator in-cluster"
Score that also applies Namespace + SA + ClusterRole +
ClusterRoleBinding + Deployment (the manifests that used to live in
the deleted `operator.yaml`). Smoke-a1 currently runs the operator
as a host-side process, so that Score isn't on the test path today.
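For concreteness, a rough sketch of what that follow-up Score might bundle, using k8s-openapi types. The `from_resources` constructor and the `.into()` conversions are hypothetical, not harmony's actual API:

```rust
// Sketch of the follow-up "install operator in-cluster" Score (not in
// this PR). Resource bodies are elided; only the shape is shown.
use harmony::modules::k8s::resource::K8sResourceScore; // assumed path
use k8s_openapi::api::apps::v1::Deployment;
use k8s_openapi::api::core::v1::{Namespace, ServiceAccount};
use k8s_openapi::api::rbac::v1::{ClusterRole, ClusterRoleBinding};

pub fn operator_install_score() -> K8sResourceScore {
    // Typed equivalents of the manifests that used to live in the
    // deleted deploy/operator.yaml.
    let namespace = Namespace::default();            // metadata elided
    let service_account = ServiceAccount::default(); // metadata elided
    let cluster_role = ClusterRole::default();       // rules elided
    let binding = ClusterRoleBinding::default();     // subjects elided
    let deployment = Deployment::default();          // pod spec elided

    // Hypothetical constructor: a Score that applies a heterogeneous
    // set of resources in order.
    K8sResourceScore::from_resources(vec![
        namespace.into(),
        service_account.into(),
        cluster_role.into(),
        binding.into(),
        deployment.into(),
    ])
}
```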
Review comment on `src/install.rs`, at `#[async_trait] impl Topology for InstallTopology {`:

InstallTopology feels very unnatural. Review what topologies are. Maybe it would make sense to create a minimal k8s topology for ad-hoc k8s execution like that, without all the fuss of K8sAnywhere. We also know we have a design problem with topologies accumulating too many opinions (K8sAnywhereTopology being the prime example, along with HaClusterTopology). Let's move forward for now, but note that in the code and add it to the topology evolution roadmap item.
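Since the comment asks for a note in the code, something like this marker in `install.rs` would capture it (wording illustrative):

```rust
// NOTE(topology-evolution): InstallTopology is deliberately local and
// minimal. K8sAnywhereTopology::ensure_ready carries product-level
// opinions (cert-manager, tenant manager, helm probes) that don't
// belong in a narrow "apply a CRD" action. If more ad-hoc k8s
// execution shows up, consider promoting a minimal shared k8s topology
// instead of growing this one; tracked in the topology evolution
// roadmap item.
struct InstallTopology {
    // ...
}
```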