Compare commits

3 commits: `feat/multi` ... `master`

| Author | SHA1 | Date |
|---|---|---|
| | bfdb11b217 | |
| | d5fadf4f44 | |
| | 50bd5c5bba | |

Cargo.lock (generated)
```diff
@@ -1835,21 +1835,6 @@ dependencies = [
  "url",
 ]
 
-[[package]]
-name = "example-operatorhub-catalogsource"
-version = "0.1.0"
-dependencies = [
- "cidr",
- "env_logger",
- "harmony",
- "harmony_cli",
- "harmony_macros",
- "harmony_types",
- "log",
- "tokio",
- "url",
-]
-
 [[package]]
 name = "example-opnsense"
 version = "0.1.0"
@@ -6064,6 +6049,21 @@ version = "0.5.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "8f50febec83f5ee1df3015341d8bd429f2d1cc62bcba7ea2076759d315084683"
 
+[[package]]
+name = "test-score"
+version = "0.1.0"
+dependencies = [
+ "base64 0.22.1",
+ "env_logger",
+ "harmony",
+ "harmony_cli",
+ "harmony_macros",
+ "harmony_types",
+ "log",
+ "tokio",
+ "url",
+]
+
 [[package]]
 name = "thiserror"
 version = "1.0.69"
```
@@ -1,105 +0,0 @@ (deleted file)

# Design Document: Harmony PostgreSQL Module

**Status:** Draft
**Last Updated:** 2025-12-01
**Context:** Multi-site Data Replication & Orchestration

## 1. Overview

The Harmony PostgreSQL Module provides a high-level abstraction for deploying and managing high-availability PostgreSQL clusters across geographically distributed Kubernetes/OKD sites.

Instead of manually configuring complex replication slots, firewalls, and operator settings on each cluster, users define a single intent (a **Score**), and Harmony orchestrates the underlying infrastructure (the **Arrangement**) to establish a Primary-Replica architecture.

Currently, the implementation relies on the **CloudNativePG (CNPG)** operator as the backing engine.

## 2. Architecture

### 2.1 The Abstraction Model

Following **ADR 003 (Infrastructure Abstraction)**, Harmony separates the *intent* from the *implementation*.

1. **The Score (Intent):** The user defines a `MultisitePostgreSQL` resource. This describes *what* is needed (e.g., "a Postgres 15 cluster with 10GB storage, Primary on Site A, Replica on Site B").
2. **The Interpret (Action):** Harmony's `MultisitePostgreSQLInterpret` processes this Score and orchestrates the deployment on both sites to reach the state the Score defines.
3. **The Capability (Implementation):** The PostgreSQL Capability is implemented by the `K8sTopology`; the Interpret uses it to deploy and configure the cluster and fetch information about it. The concrete implementation relies on the mature CloudNativePG operator to manage all the required Kubernetes resources.
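The Score/Interpret split above can be sketched as a plain data type for the intent side. All names here are illustrative, not the actual Harmony API:

```rust
// Hypothetical sketch of a Score (intent) for a multi-site PostgreSQL cluster.
// `MultisitePostgreSqlScore` and its fields are illustrative names only;
// the real Harmony types differ.
#[derive(Debug, Clone, PartialEq)]
pub struct MultisitePostgreSqlScore {
    pub version: String,            // PostgreSQL major version, e.g. "15"
    pub storage: String,            // requested volume size, e.g. "10Gi"
    pub primary_site: String,       // Harmony cluster name hosting the primary
    pub replica_sites: Vec<String>, // Harmony clusters hosting streaming replicas
}

fn main() {
    // The user declares *what* they want; the Interpret decides *how*
    // to reach that state on each site.
    let score = MultisitePostgreSqlScore {
        version: "15".to_string(),
        storage: "10Gi".to_string(),
        primary_site: "site-paris".to_string(),
        replica_sites: vec!["site-newyork".to_string()],
    };
    assert_eq!(score.replica_sites.len(), 1);
    println!("primary goes to {}", score.primary_site);
}
```

The point of the split is that this struct carries no Kubernetes details at all; everything operator-specific lives behind the Capability.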
### 2.2 Network Connectivity (TLS Passthrough)

One of the critical challenges in multi-site orchestration is secure connectivity between clusters that may have dynamic IPs or strict firewalls.

To solve this, we utilize **OKD/OpenShift Routes with TLS Passthrough**.

* **Mechanism:** The Primary site exposes a `Route` configured with `termination: passthrough`.
* **Routing:** The OpenShift HAProxy router inspects the **SNI (Server Name Indication)** field of the incoming TLS handshake to route traffic to the correct PostgreSQL Pod.
* **Security:** TLS is **not** terminated at the ingress router. The encrypted stream is passed directly to the PostgreSQL instance. Mutual TLS (mTLS) authentication is handled natively by CNPG between the Primary and Replica instances.
* **Dynamic IPs:** Because connections are established via DNS hostnames (the Route URL), this architecture is resilient to dynamic IP changes at the Primary site.
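A Route with passthrough termination could look roughly like the following sketch. The resource name and the backing Service name are assumptions for illustration, not the objects Harmony actually creates (CNPG conventionally exposes the primary through a `<cluster>-rw` Service):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: postgres-finance-db          # illustrative name
  namespace: tenant-a
spec:
  host: postgres-finance-db.apps.site-paris.example.com
  to:
    kind: Service
    name: finance-db-rw              # CNPG read-write service (assumed name)
  port:
    targetPort: 5432
  tls:
    termination: passthrough         # do not decrypt at the router
```

Because `termination: passthrough` forwards raw TLS bytes, the PostgreSQL server certificate and CNPG's mTLS handshake stay end-to-end between the two clusters.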
#### Traffic Flow Diagram

```text
[ Site B: Replica ]                     [ Site A: Primary ]
        |                                       |
 (CNPG Instance) ---[Encrypted TCP]---> (OKD HAProxy Router)
        |                                  (Port 443)
        |                                       |
        |                               [SNI Inspection]
        |                                       |
        |                                       v
        |                             (PostgreSQL Primary Pod)
        |                                  (Port 5432)
```
## 3. Design Decisions

### Why CloudNativePG?

We selected CloudNativePG because it relies exclusively on standard Kubernetes primitives and uses the native PostgreSQL replication protocol (WAL shipping/streaming). This aligns with Harmony's goal of being "K8s native."

### Why TLS Passthrough instead of VPN/NodePort?

* **NodePort:** Requires static IPs and opening non-standard ports on the firewall, which violates our security constraints.
* **VPN (e.g., WireGuard/Tailscale):** While secure, it introduces significant complexity (sidecars, key management) and external dependencies.
* **TLS Passthrough:** Leverages the Ingress/Router infrastructure already present in OKD. It requires zero additional software and respects multi-tenancy (Routes are namespaced).

### Configuration Philosophy (YAGNI)

The current design exposes a **generic configuration surface**. Users can configure standard parameters (storage size, CPU/memory requests, Postgres version).

**We explicitly do not expose advanced CNPG or PostgreSQL configuration at this stage.**

* **Reasoning:** We aim to keep the API surface small and manageable.
* **Future Path:** We plan to implement a "pass-through" mechanism to allow sending raw config maps or custom parameters to the underlying engine (CNPG) *only when a concrete use case arises*. Until then, we adhere to the **YAGNI (You Ain't Gonna Need It)** principle to avoid premature optimization and API bloat.
## 4. Usage Guide

To deploy a multi-site cluster, apply the `MultisitePostgreSQL` resource to the Harmony Control Plane.

### Example Manifest

```yaml
apiVersion: harmony.io/v1alpha1
kind: MultisitePostgreSQL
metadata:
  name: finance-db
  namespace: tenant-a
spec:
  version: "15"
  storage: "10Gi"
  resources:
    requests:
      cpu: "500m"
      memory: "1Gi"

  # Topology Definition
  topology:
    primary:
      site: "site-paris" # The name of the cluster in Harmony
    replicas:
      - site: "site-newyork"
```
### What happens next?

1. Harmony detects the CR.
2. **On Site Paris:** It deploys a CNPG Cluster (Primary) and creates a passthrough Route `postgres-finance-db.apps.site-paris.example.com`.
3. **On Site New York:** It deploys a CNPG Cluster (Replica) configured with `externalClusters` pointing to the Paris Route.
4. Data begins replicating immediately over the encrypted channel.
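The `externalClusters` wiring on the replica site could look roughly like this sketch of a CNPG `Cluster`. Secret names and the source-cluster name are assumptions for illustration, and the exact fields depend on the CNPG version in use:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: finance-db
  namespace: tenant-a
spec:
  instances: 1
  bootstrap:
    pg_basebackup:
      source: site-paris-primary       # assumed external cluster name
  replica:
    enabled: true
    source: site-paris-primary
  externalClusters:
    - name: site-paris-primary
      connectionParameters:
        host: postgres-finance-db.apps.site-paris.example.com
        port: "443"                    # router port; SNI selects the backend
        user: streaming_replica
        sslmode: verify-full
      sslKey:
        name: finance-db-replication   # secrets propagated by Harmony (assumed names)
        key: tls.key
      sslCert:
        name: finance-db-replication
        key: tls.crt
      sslRootCert:
        name: finance-db-ca
        key: ca.crt
```

Note that the connection targets port 443 on the router, not 5432: the passthrough Route relays the TLS stream to the primary Pod's PostgreSQL port.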
## 5. Troubleshooting

* **Connection refused:** Ensure the Primary site's Route has been successfully admitted by the Ingress Controller.
* **Certificate errors:** CNPG manages mTLS automatically. If errors persist, ensure the CA secrets were correctly propagated by Harmony from the Primary to the Replica namespaces.
@@ -1,18 +0,0 @@ (deleted file)

```toml
[package]
name = "example-operatorhub-catalogsource"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false

[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }
```
@@ -1,22 +0,0 @@ (deleted file)

```rust
use std::str::FromStr;

use harmony::{
    inventory::Inventory,
    modules::{k8s::apps::OperatorHubCatalogSourceScore, tenant::TenantScore},
    topology::{K8sAnywhereTopology, tenant::TenantConfig},
};
use harmony_types::id::Id;

#[tokio::main]
async fn main() {
    let operatorhub_catalog = OperatorHubCatalogSourceScore::default();

    harmony_cli::run(
        Inventory::autoload(),
        K8sAnywhereTopology::from_env(),
        vec![Box::new(operatorhub_catalog)],
        None,
    )
    .await
    .unwrap();
}
```
```diff
@@ -1,4 +1,6 @@
 mod repository;
 
+use std::fmt;
+
 pub use repository::*;
 
 #[derive(Debug, new, Clone)]
@@ -69,5 +71,14 @@ pub enum HostRole {
     Bootstrap,
     ControlPlane,
     Worker,
-    Storage,
 }
+
+impl fmt::Display for HostRole {
+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+        match self {
+            HostRole::Bootstrap => write!(f, "Bootstrap"),
+            HostRole::ControlPlane => write!(f, "ControlPlane"),
+            HostRole::Worker => write!(f, "Worker"),
+        }
+    }
+}
```
@@ -1,19 +0,0 @@ (deleted file)

```rust
use async_trait::async_trait;

use crate::topology::{PreparationError, PreparationOutcome, Topology};

pub struct FailoverTopology<T> {
    pub primary: T,
    pub replica: T,
}

#[async_trait]
impl<T: Send + Sync> Topology for FailoverTopology<T> {
    fn name(&self) -> &str {
        "FailoverTopology"
    }

    async fn ensure_ready(&self) -> Result<PreparationOutcome, PreparationError> {
        todo!()
    }
}
```
```diff
@@ -1,7 +1,5 @@
-mod failover;
 mod ha_cluster;
 pub mod ingress;
-pub use failover::*;
 use harmony_types::net::IpAddress;
 mod host_binding;
 mod http;
```
@@ -1,157 +0,0 @@ (deleted file)

```rust
use std::collections::BTreeMap;

use k8s_openapi::{
    api::core::v1::{Affinity, Toleration},
    apimachinery::pkg::apis::meta::v1::ObjectMeta,
};
use kube::CustomResource;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use serde_json::Value;

#[derive(CustomResource, Deserialize, Serialize, Clone, Debug)]
#[kube(
    group = "operators.coreos.com",
    version = "v1alpha1",
    kind = "CatalogSource",
    plural = "catalogsources",
    namespaced = true,
    schema = "disabled"
)]
#[serde(rename_all = "camelCase")]
pub struct CatalogSourceSpec {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub address: Option<String>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub config_map: Option<String>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub description: Option<String>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub display_name: Option<String>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub grpc_pod_config: Option<GrpcPodConfig>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub icon: Option<Icon>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub image: Option<String>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub priority: Option<i64>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub publisher: Option<String>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub run_as_root: Option<bool>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub secrets: Option<Vec<String>>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub source_type: Option<String>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub update_strategy: Option<UpdateStrategy>,
}

#[derive(Deserialize, Serialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct GrpcPodConfig {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub affinity: Option<Affinity>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub extract_content: Option<ExtractContent>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub memory_target: Option<Value>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub node_selector: Option<BTreeMap<String, String>>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub priority_class_name: Option<String>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub security_context_config: Option<String>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub tolerations: Option<Vec<Toleration>>,
}

#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct ExtractContent {
    pub cache_dir: String,
    pub catalog_dir: String,
}

#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct Icon {
    pub base64data: String,
    pub mediatype: String,
}

#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct UpdateStrategy {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub registry_poll: Option<RegistryPoll>,
}

#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct RegistryPoll {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub interval: Option<String>,
}

impl Default for CatalogSource {
    fn default() -> Self {
        Self {
            metadata: ObjectMeta::default(),
            spec: CatalogSourceSpec::default(),
        }
    }
}

impl Default for CatalogSourceSpec {
    fn default() -> Self {
        Self {
            address: None,
            config_map: None,
            description: None,
            display_name: None,
            grpc_pod_config: None,
            icon: None,
            image: None,
            priority: None,
            publisher: None,
            run_as_root: None,
            secrets: None,
            source_type: None,
            update_strategy: None,
        }
    }
}
```
@@ -1,2 +0,0 @@ (deleted file)

```rust
mod catalogsources_operators_coreos_com;
pub use catalogsources_operators_coreos_com::*;
```
@@ -1,3 +0,0 @@ (deleted file)

```rust
mod operatorhub;
pub use operatorhub::*;
pub mod crd;
```
@@ -1,107 +0,0 @@ (deleted file)

```rust
// Write operatorhub catalog score.
// For now this only supports OKD with the default catalog and OperatorHub setup and does not
// verify OLM state or anything else. Very opinionated and bare-bones to start.

use k8s_openapi::apimachinery::pkg::apis::meta::v1::ObjectMeta;
use serde::Serialize;

use crate::interpret::Interpret;
use crate::modules::k8s::apps::crd::{
    CatalogSource, CatalogSourceSpec, RegistryPoll, UpdateStrategy,
};
use crate::modules::k8s::resource::K8sResourceScore;
use crate::score::Score;
use crate::topology::{K8sclient, Topology};

/// Installs the CatalogSource in a cluster which already has the required services and CRDs installed.
///
/// ```rust
/// use harmony::modules::k8s::apps::OperatorHubCatalogSourceScore;
///
/// let score = OperatorHubCatalogSourceScore::default();
/// ```
///
/// Required services:
/// - catalog-operator
/// - olm-operator
///
/// They are installed by default with OKD/OpenShift.
///
/// **Warning**: this initial implementation does not manage the dependencies. They must already
/// exist in the cluster.
#[derive(Debug, Clone, Serialize)]
pub struct OperatorHubCatalogSourceScore {
    pub name: String,
    pub namespace: String,
    pub image: String,
}

impl OperatorHubCatalogSourceScore {
    pub fn new(name: &str, namespace: &str, image: &str) -> Self {
        Self {
            name: name.to_string(),
            namespace: namespace.to_string(),
            image: image.to_string(),
        }
    }
}

impl Default for OperatorHubCatalogSourceScore {
    /// This default implementation will create this k8s resource:
    ///
    /// ```yaml
    /// apiVersion: operators.coreos.com/v1alpha1
    /// kind: CatalogSource
    /// metadata:
    ///   name: operatorhubio-catalog
    ///   namespace: openshift-marketplace
    /// spec:
    ///   sourceType: grpc
    ///   image: quay.io/operatorhubio/catalog:latest
    ///   displayName: Operatorhub Operators
    ///   publisher: OperatorHub.io
    ///   updateStrategy:
    ///     registryPoll:
    ///       interval: 60m
    /// ```
    fn default() -> Self {
        OperatorHubCatalogSourceScore {
            name: "operatorhubio-catalog".to_string(),
            namespace: "openshift-marketplace".to_string(),
            image: "quay.io/operatorhubio/catalog:latest".to_string(),
        }
    }
}

impl<T: Topology + K8sclient> Score<T> for OperatorHubCatalogSourceScore {
    fn create_interpret(&self) -> Box<dyn Interpret<T>> {
        let metadata = ObjectMeta {
            name: Some(self.name.clone()),
            namespace: Some(self.namespace.clone()),
            ..ObjectMeta::default()
        };

        let spec = CatalogSourceSpec {
            source_type: Some("grpc".to_string()),
            image: Some(self.image.clone()),
            display_name: Some("Operatorhub Operators".to_string()),
            publisher: Some("OperatorHub.io".to_string()),
            update_strategy: Some(UpdateStrategy {
                registry_poll: Some(RegistryPoll {
                    interval: Some("60m".to_string()),
                }),
            }),
            ..CatalogSourceSpec::default()
        };

        let catalog_source = CatalogSource { metadata, spec };

        K8sResourceScore::single(catalog_source, Some(self.namespace.clone())).create_interpret()
    }

    fn name(&self) -> String {
        format!("OperatorHubCatalogSourceScore({})", self.name)
    }
}
```
```diff
@@ -1,4 +1,3 @@
-pub mod apps;
 pub mod deployment;
 pub mod ingress;
 pub mod namespace;
```
```diff
@@ -13,7 +13,6 @@ pub mod load_balancer;
 pub mod monitoring;
 pub mod okd;
 pub mod opnsense;
-pub mod postgresql;
 pub mod prometheus;
 pub mod storage;
 pub mod tenant;
```
```diff
@@ -1,20 +1,8 @@
 use crate::{
-    data::Version,
-    hardware::PhysicalHost,
-    infra::inventory::InventoryRepositoryFactory,
-    interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
-    inventory::{HostRole, Inventory},
-    modules::{
-        dhcp::DhcpHostBindingScore, http::IPxeMacBootFileScore,
-        inventory::DiscoverHostForRoleScore, okd::templates::BootstrapIpxeTpl,
-    },
-    score::Score,
-    topology::{HAClusterTopology, HostBinding},
+    interpret::Interpret, inventory::HostRole, modules::okd::bootstrap_okd_node::OKDNodeInterpret,
+    score::Score, topology::HAClusterTopology,
 };
-use async_trait::async_trait;
 use derive_new::new;
-use harmony_types::id::Id;
-use log::{debug, info};
 use serde::Serialize;
 
 // -------------------------------------------------------------------------------------------------
@@ -28,226 +16,13 @@ pub struct OKDSetup03ControlPlaneScore {}
 
 impl Score<HAClusterTopology> for OKDSetup03ControlPlaneScore {
     fn create_interpret(&self) -> Box<dyn Interpret<HAClusterTopology>> {
-        Box::new(OKDSetup03ControlPlaneInterpret::new())
+        // TODO: Implement a step to wait for the control plane nodes to join the cluster
+        // and for the cluster operators to become available. This would be similar to
+        // the `wait-for bootstrap-complete` command.
+        Box::new(OKDNodeInterpret::new(HostRole::ControlPlane))
     }
 
     fn name(&self) -> String {
         "OKDSetup03ControlPlaneScore".to_string()
     }
 }
 
-#[derive(Debug, Clone)]
-pub struct OKDSetup03ControlPlaneInterpret {
-    version: Version,
-    status: InterpretStatus,
-}
-
-impl OKDSetup03ControlPlaneInterpret {
-    pub fn new() -> Self {
-        let version = Version::from("1.0.0").unwrap();
-        Self {
-            version,
-            status: InterpretStatus::QUEUED,
-        }
-    }
-
-    /// Ensures that three physical hosts are discovered and available for the ControlPlane role.
-    /// It will trigger discovery if not enough hosts are found.
-    async fn get_nodes(
-        &self,
-        inventory: &Inventory,
-        topology: &HAClusterTopology,
-    ) -> Result<Vec<PhysicalHost>, InterpretError> {
-        const REQUIRED_HOSTS: usize = 3;
-        let repo = InventoryRepositoryFactory::build().await?;
-        let mut control_plane_hosts = repo.get_host_for_role(&HostRole::ControlPlane).await?;
-
-        while control_plane_hosts.len() < REQUIRED_HOSTS {
-            info!(
-                "Discovery of {} control plane hosts in progress, current number {}",
-                REQUIRED_HOSTS,
-                control_plane_hosts.len()
-            );
-            // This score triggers the discovery agent for a specific role.
-            DiscoverHostForRoleScore {
-                role: HostRole::ControlPlane,
-            }
-            .interpret(inventory, topology)
-            .await?;
-            control_plane_hosts = repo.get_host_for_role(&HostRole::ControlPlane).await?;
-        }
-
-        if control_plane_hosts.len() < REQUIRED_HOSTS {
-            Err(InterpretError::new(format!(
-                "OKD Requires at least {} control plane hosts, but only found {}. Cannot proceed.",
-                REQUIRED_HOSTS,
-                control_plane_hosts.len()
-            )))
-        } else {
-            // Take exactly the number of required hosts to ensure consistency.
-            Ok(control_plane_hosts
-                .into_iter()
-                .take(REQUIRED_HOSTS)
-                .collect())
-        }
-    }
-
-    /// Configures DHCP host bindings for all control plane nodes.
-    async fn configure_host_binding(
-        &self,
-        inventory: &Inventory,
-        topology: &HAClusterTopology,
-        nodes: &Vec<PhysicalHost>,
-    ) -> Result<(), InterpretError> {
-        info!("[ControlPlane] Configuring host bindings for control plane nodes.");
-
-        // Ensure the topology definition matches the number of physical nodes found.
-        if topology.control_plane.len() != nodes.len() {
-            return Err(InterpretError::new(format!(
-                "Mismatch between logical control plane hosts defined in topology ({}) and physical nodes found ({}).",
-                topology.control_plane.len(),
-                nodes.len()
-            )));
-        }
-
-        // Create a binding for each physical host to its corresponding logical host.
-        let bindings: Vec<HostBinding> = topology
-            .control_plane
-            .iter()
-            .zip(nodes.iter())
-            .map(|(logical_host, physical_host)| {
-                info!(
-                    "Creating binding: Logical Host '{}' -> Physical Host ID '{}'",
-                    logical_host.name, physical_host.id
-                );
-                HostBinding {
-                    logical_host: logical_host.clone(),
-                    physical_host: physical_host.clone(),
-                }
-            })
-            .collect();
-
-        DhcpHostBindingScore {
-            host_binding: bindings,
-            domain: Some(topology.domain_name.clone()),
-        }
-        .interpret(inventory, topology)
-        .await?;
-
-        Ok(())
-    }
-
-    /// Renders and deploys a per-MAC iPXE boot file for each control plane node.
-    async fn configure_ipxe(
-        &self,
-        inventory: &Inventory,
-        topology: &HAClusterTopology,
-        nodes: &Vec<PhysicalHost>,
-    ) -> Result<(), InterpretError> {
-        info!("[ControlPlane] Rendering per-MAC iPXE configurations.");
-
-        // The iPXE script content is the same for all control plane nodes,
-        // pointing to the 'master.ign' ignition file.
-        let content = BootstrapIpxeTpl {
-            http_ip: &topology.http_server.get_ip().to_string(),
-            scos_path: "scos",
-            ignition_http_path: "okd_ignition_files",
-            installation_device: "/dev/sda", // This might need to be configurable per-host in the future
-            ignition_file_name: "master.ign", // Control plane nodes use the master ignition file
-        }
-        .to_string();
-
-        debug!("[ControlPlane] iPXE content template:\n{content}");
-
-        // Create and apply an iPXE boot file for each node.
-        for node in nodes {
-            let mac_address = node.get_mac_address();
-            if mac_address.is_empty() {
-                return Err(InterpretError::new(format!(
-                    "Physical host with ID '{}' has no MAC addresses defined.",
-                    node.id
-                )));
-            }
-            info!(
-                "[ControlPlane] Applying iPXE config for node ID '{}' with MACs: {:?}",
-                node.id, mac_address
-            );
-
-            IPxeMacBootFileScore {
-                mac_address,
-                content: content.clone(),
-            }
-            .interpret(inventory, topology)
-            .await?;
-        }
-
-        Ok(())
-    }
-
-    /// Prompts the user to reboot the target control plane nodes.
-    async fn reboot_targets(&self, nodes: &Vec<PhysicalHost>) -> Result<(), InterpretError> {
-        let node_ids: Vec<String> = nodes.iter().map(|n| n.id.to_string()).collect();
-        info!("[ControlPlane] Requesting reboot for control plane nodes: {node_ids:?}");
-
-        let confirmation = inquire::Confirm::new(
-            &format!("Please reboot the {} control plane nodes ({}) to apply their PXE configuration. Press enter when ready.", nodes.len(), node_ids.join(", ")),
-        )
-        .prompt()
-        .map_err(|e| InterpretError::new(format!("User prompt failed: {e}")))?;
-
-        if !confirmation {
-            return Err(InterpretError::new(
-                "User aborted the operation.".to_string(),
-            ));
-        }
-
-        Ok(())
-    }
-}
-
-#[async_trait]
-impl Interpret<HAClusterTopology> for OKDSetup03ControlPlaneInterpret {
-    fn get_name(&self) -> InterpretName {
-        InterpretName::Custom("OKDSetup03ControlPlane")
-    }
-
-    fn get_version(&self) -> Version {
-        self.version.clone()
-    }
-
-    fn get_status(&self) -> InterpretStatus {
-        self.status.clone()
-    }
-
-    fn get_children(&self) -> Vec<Id> {
-        vec![]
-    }
-
-    async fn execute(
-        &self,
-        inventory: &Inventory,
-        topology: &HAClusterTopology,
-    ) -> Result<Outcome, InterpretError> {
-        // 1. Ensure we have 3 physical hosts for the control plane.
-        let nodes = self.get_nodes(inventory, topology).await?;
-
-        // 2. Create DHCP reservations for the control plane nodes.
-        self.configure_host_binding(inventory, topology, &nodes)
-            .await?;
-
-        // 3. Create iPXE files for each control plane node to boot from the master ignition.
-        self.configure_ipxe(inventory, topology, &nodes).await?;
-
-        // 4. Reboot the nodes to start the OS installation.
-        self.reboot_targets(&nodes).await?;
-
-        // TODO: Implement a step to wait for the control plane nodes to join the cluster
-        // and for the cluster operators to become available. This would be similar to
-        // the `wait-for bootstrap-complete` command.
-        info!("[ControlPlane] Provisioning initiated. Monitor the cluster convergence manually.");
-
-        Ok(Outcome::success(
-            "Control plane provisioning has been successfully initiated.".into(),
-        ))
-    }
-}
```
@@ -1,15 +1,9 @@
-use async_trait::async_trait;
 use derive_new::new;
-use harmony_types::id::Id;
-use log::info;
 use serde::Serialize;
 
 use crate::{
-    data::Version,
-    interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
-    inventory::Inventory,
-    score::Score,
-    topology::HAClusterTopology,
+    interpret::Interpret, inventory::HostRole, modules::okd::bootstrap_okd_node::OKDNodeInterpret,
+    score::Score, topology::HAClusterTopology,
 };
 
 // -------------------------------------------------------------------------------------------------
@@ -23,61 +17,10 @@ pub struct OKDSetup04WorkersScore {}
 
 impl Score<HAClusterTopology> for OKDSetup04WorkersScore {
     fn create_interpret(&self) -> Box<dyn Interpret<HAClusterTopology>> {
-        Box::new(OKDSetup04WorkersInterpret::new(self.clone()))
+        Box::new(OKDNodeInterpret::new(HostRole::Worker))
     }
 
     fn name(&self) -> String {
         "OKDSetup04WorkersScore".to_string()
     }
 }
-
-#[derive(Debug, Clone)]
-pub struct OKDSetup04WorkersInterpret {
-    score: OKDSetup04WorkersScore,
-    version: Version,
-    status: InterpretStatus,
-}
-
-impl OKDSetup04WorkersInterpret {
-    pub fn new(score: OKDSetup04WorkersScore) -> Self {
-        let version = Version::from("1.0.0").unwrap();
-        Self {
-            version,
-            score,
-            status: InterpretStatus::QUEUED,
-        }
-    }
-
-    async fn render_and_reboot(&self) -> Result<(), InterpretError> {
-        info!("[Workers] Rendering per-MAC PXE for workers and rebooting");
-        Ok(())
-    }
-}
-
-#[async_trait]
-impl Interpret<HAClusterTopology> for OKDSetup04WorkersInterpret {
-    fn get_name(&self) -> InterpretName {
-        InterpretName::Custom("OKDSetup04Workers")
-    }
-
-    fn get_version(&self) -> Version {
-        self.version.clone()
-    }
-
-    fn get_status(&self) -> InterpretStatus {
-        self.status.clone()
-    }
-
-    fn get_children(&self) -> Vec<Id> {
-        vec![]
-    }
-
-    async fn execute(
-        &self,
-        _inventory: &Inventory,
-        _topology: &HAClusterTopology,
-    ) -> Result<Outcome, InterpretError> {
-        self.render_and_reboot().await?;
-        Ok(Outcome::success("Workers provisioned".into()))
-    }
-}
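The refactor above reduces the per-role Score to a thin factory over a single, role-parameterized interpret. A minimal, self-contained sketch of that delegation pattern (the names below are simplified stand-ins, not the real Harmony types):

```rust
// Simplified stand-ins for HostRole / OKDNodeInterpret / the workers Score;
// only the delegation shape is illustrated here.
#[derive(Clone, Debug, PartialEq)]
enum HostRole {
    ControlPlane,
    Worker,
}

struct NodeInterpret {
    role: HostRole,
}

impl NodeInterpret {
    fn new(role: HostRole) -> Self {
        Self { role }
    }
}

struct WorkersScore;

impl WorkersScore {
    // Mirrors create_interpret(): the Score no longer owns any provisioning
    // logic, it only picks the role the shared interpret should run with.
    fn create_interpret(&self) -> NodeInterpret {
        NodeInterpret::new(HostRole::Worker)
    }
}

fn main() {
    let interpret = WorkersScore.create_interpret();
    assert_eq!(interpret.role, HostRole::Worker);
    println!("delegation ok");
}
```

The same pattern would let the bootstrap and control-plane scores collapse into one-line factories as well.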
303 harmony/src/modules/okd/bootstrap_okd_node.rs Normal file
@@ -0,0 +1,303 @@
use async_trait::async_trait;
use derive_new::new;
use harmony_types::id::Id;
use log::{debug, info};
use serde::Serialize;

use crate::{
    data::Version,
    hardware::PhysicalHost,
    infra::inventory::InventoryRepositoryFactory,
    interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
    inventory::{HostRole, Inventory},
    modules::{
        dhcp::DhcpHostBindingScore,
        http::IPxeMacBootFileScore,
        inventory::DiscoverHostForRoleScore,
        okd::{
            okd_node::{
                BootstrapRole, ControlPlaneRole, OKDRoleProperties, StorageRole, WorkerRole,
            },
            templates::BootstrapIpxeTpl,
        },
    },
    score::Score,
    topology::{HAClusterTopology, HostBinding, LogicalHost},
};

#[derive(Debug, Clone, Serialize, new)]
pub struct OKDNodeInstallationScore {
    host_role: HostRole,
}

impl Score<HAClusterTopology> for OKDNodeInstallationScore {
    fn name(&self) -> String {
        "OKDNodeScore".to_string()
    }

    fn create_interpret(&self) -> Box<dyn Interpret<HAClusterTopology>> {
        Box::new(OKDNodeInterpret::new(self.host_role.clone()))
    }
}

#[derive(Debug, Clone)]
pub struct OKDNodeInterpret {
    host_role: HostRole,
}

impl OKDNodeInterpret {
    pub fn new(host_role: HostRole) -> Self {
        Self { host_role }
    }

    fn okd_role_properties(&self, role: &HostRole) -> &'static dyn OKDRoleProperties {
        match role {
            HostRole::Bootstrap => &BootstrapRole,
            HostRole::ControlPlane => &ControlPlaneRole,
            HostRole::Worker => &WorkerRole,
        }
    }

    async fn get_nodes(
        &self,
        inventory: &Inventory,
        topology: &HAClusterTopology,
    ) -> Result<Vec<PhysicalHost>, InterpretError> {
        let repo = InventoryRepositoryFactory::build().await?;

        let mut hosts = repo.get_host_for_role(&self.host_role).await?;

        let okd_host_properties = self.okd_role_properties(&self.host_role);

        let required_hosts: usize = okd_host_properties.required_hosts();

        while hosts.len() < required_hosts {
            info!(
                "Discovery of {} {} hosts in progress, current number {}",
                required_hosts,
                self.host_role,
                hosts.len()
            );
            // This score triggers the discovery agent for a specific role.
            DiscoverHostForRoleScore {
                role: self.host_role.clone(),
            }
            .interpret(inventory, topology)
            .await?;
            hosts = repo.get_host_for_role(&self.host_role).await?;
        }

        if hosts.len() < required_hosts {
            Err(InterpretError::new(format!(
                "OKD requires at least {} {} hosts, but only found {}. Cannot proceed.",
                required_hosts,
                self.host_role,
                hosts.len()
            )))
        } else {
            // Take exactly the number of required hosts to ensure consistency.
            Ok(hosts.into_iter().take(required_hosts).collect())
        }
    }

    /// Configures DHCP host bindings for all nodes.
    async fn configure_host_binding(
        &self,
        inventory: &Inventory,
        topology: &HAClusterTopology,
        nodes: &Vec<PhysicalHost>,
    ) -> Result<(), InterpretError> {
        info!(
            "[{}] Configuring host bindings for {} nodes.",
            self.host_role, self.host_role,
        );

        let host_properties = self.okd_role_properties(&self.host_role);

        self.validate_host_node_match(nodes, host_properties.logical_hosts(topology))?;

        let bindings: Vec<HostBinding> =
            self.host_bindings(nodes, host_properties.logical_hosts(topology));

        DhcpHostBindingScore {
            host_binding: bindings,
            domain: Some(topology.domain_name.clone()),
        }
        .interpret(inventory, topology)
        .await?;

        Ok(())
    }

    // Ensure the topology definition matches the number of physical nodes found.
    fn validate_host_node_match(
        &self,
        nodes: &Vec<PhysicalHost>,
        hosts: &Vec<LogicalHost>,
    ) -> Result<(), InterpretError> {
        if hosts.len() != nodes.len() {
            return Err(InterpretError::new(format!(
                "Mismatch between logical hosts defined in topology ({}) and physical nodes found ({}).",
                hosts.len(),
                nodes.len()
            )));
        }
        Ok(())
    }

    // Create a binding for each physical host to its corresponding logical host.
    fn host_bindings(
        &self,
        nodes: &Vec<PhysicalHost>,
        hosts: &Vec<LogicalHost>,
    ) -> Vec<HostBinding> {
        hosts
            .iter()
            .zip(nodes.iter())
            .map(|(logical_host, physical_host)| {
                info!(
                    "Creating binding: Logical Host '{}' -> Physical Host ID '{}'",
                    logical_host.name, physical_host.id
                );
                HostBinding {
                    logical_host: logical_host.clone(),
                    physical_host: physical_host.clone(),
                }
            })
            .collect()
    }

    /// Renders and deploys a per-MAC iPXE boot file for each node.
    async fn configure_ipxe(
        &self,
        inventory: &Inventory,
        topology: &HAClusterTopology,
        nodes: &Vec<PhysicalHost>,
    ) -> Result<(), InterpretError> {
        info!(
            "[{}] Rendering per-MAC iPXE configurations.",
            self.host_role
        );

        let okd_role_properties = self.okd_role_properties(&self.host_role);
        // The iPXE script content is the same for all nodes of a given role,
        // pointing to that role's ignition file.
        let content = BootstrapIpxeTpl {
            http_ip: &topology.http_server.get_ip().to_string(),
            scos_path: "scos",
            ignition_http_path: "okd_ignition_files",
            // TODO: must be refactored to not only use /dev/sda
            installation_device: "/dev/sda", // This might need to be configurable per-host in the future
            ignition_file_name: okd_role_properties.ignition_file(),
        }
        .to_string();

        debug!("[{}] iPXE content template:\n{content}", self.host_role);

        // Create and apply an iPXE boot file for each node.
        for node in nodes {
            let mac_address = node.get_mac_address();
            if mac_address.is_empty() {
                return Err(InterpretError::new(format!(
                    "Physical host with ID '{}' has no MAC addresses defined.",
                    node.id
                )));
            }
            info!(
                "[{}] Applying iPXE config for node ID '{}' with MACs: {:?}",
                self.host_role, node.id, mac_address
            );

            IPxeMacBootFileScore {
                mac_address,
                content: content.clone(),
            }
            .interpret(inventory, topology)
            .await?;
        }

        Ok(())
    }

    /// Prompts the user to reboot the target nodes.
    async fn reboot_targets(&self, nodes: &Vec<PhysicalHost>) -> Result<(), InterpretError> {
        let node_ids: Vec<String> = nodes.iter().map(|n| n.id.to_string()).collect();
        info!(
            "[{}] Requesting reboot for nodes: {node_ids:?}",
            self.host_role
        );

        let confirmation = inquire::Confirm::new(
            &format!("Please reboot the {} {} nodes ({}) to apply their PXE configuration. Press enter when ready.", nodes.len(), self.host_role, node_ids.join(", ")),
        )
        .prompt()
        .map_err(|e| InterpretError::new(format!("User prompt failed: {e}")))?;

        if !confirmation {
            return Err(InterpretError::new(
                "User aborted the operation.".to_string(),
            ));
        }

        Ok(())
    }
}

#[async_trait]
impl Interpret<HAClusterTopology> for OKDNodeInterpret {
    async fn execute(
        &self,
        inventory: &Inventory,
        topology: &HAClusterTopology,
    ) -> Result<Outcome, InterpretError> {
        // 1. Ensure we have the specified number of physical hosts.
        let nodes = self.get_nodes(inventory, topology).await?;

        // 2. Create DHCP reservations for the nodes.
        self.configure_host_binding(inventory, topology, &nodes)
            .await?;

        // 3. Create iPXE files for each node to boot from the ignition.
        self.configure_ipxe(inventory, topology, &nodes).await?;

        // 4. Reboot the nodes to start the OS installation.
        self.reboot_targets(&nodes).await?;
        // TODO: Implement a step to validate that the installation of the nodes is
        // complete and for the cluster operators to become available.
        //
        // The OpenShift installer only provides two wait commands which currently need to be
        // run manually:
        // - `openshift-install wait-for bootstrap-complete`
        // - `openshift-install wait-for install-complete`
        //
        // There is no installer command that waits specifically for worker node
        // provisioning. Worker nodes join asynchronously (via ignition + CSR approval),
        // and the cluster becomes fully functional only once all nodes are Ready and the
        // cluster operators report Available=True.
        info!(
            "[{}] Provisioning initiated. Monitor the cluster convergence manually.",
            self.host_role
        );

        Ok(Outcome::success(format!(
            "{} provisioning has been successfully initiated.",
            self.host_role
        )))
    }

    fn get_name(&self) -> InterpretName {
        InterpretName::Custom("OKDNodeSetup")
    }

    fn get_version(&self) -> Version {
        todo!()
    }

    fn get_status(&self) -> InterpretStatus {
        todo!()
    }

    fn get_children(&self) -> Vec<Id> {
        todo!()
    }
}
@@ -6,12 +6,14 @@ mod bootstrap_05_sanity_check;
 mod bootstrap_06_installation_report;
 pub mod bootstrap_dhcp;
 pub mod bootstrap_load_balancer;
+pub mod bootstrap_okd_node;
 mod bootstrap_persist_network_bond;
 pub mod dhcp;
 pub mod dns;
 pub mod installation;
 pub mod ipxe;
 pub mod load_balancer;
+pub mod okd_node;
 pub mod templates;
 pub mod upgrade;
 pub use bootstrap_01_prepare::*;
54 harmony/src/modules/okd/okd_node.rs Normal file
@@ -0,0 +1,54 @@
use crate::topology::{HAClusterTopology, LogicalHost};

pub trait OKDRoleProperties {
    fn ignition_file(&self) -> &'static str;
    fn required_hosts(&self) -> usize;
    fn logical_hosts<'a>(&self, t: &'a HAClusterTopology) -> &'a Vec<LogicalHost>;
}

pub struct BootstrapRole;
pub struct ControlPlaneRole;
pub struct WorkerRole;
pub struct StorageRole;

impl OKDRoleProperties for BootstrapRole {
    fn ignition_file(&self) -> &'static str {
        "bootstrap.ign"
    }

    fn required_hosts(&self) -> usize {
        1
    }

    fn logical_hosts<'a>(&self, t: &'a HAClusterTopology) -> &'a Vec<LogicalHost> {
        todo!()
    }
}

impl OKDRoleProperties for ControlPlaneRole {
    fn ignition_file(&self) -> &'static str {
        "master.ign"
    }

    fn required_hosts(&self) -> usize {
        3
    }

    fn logical_hosts<'a>(&self, t: &'a HAClusterTopology) -> &'a Vec<LogicalHost> {
        &t.control_plane
    }
}

impl OKDRoleProperties for WorkerRole {
    fn ignition_file(&self) -> &'static str {
        "worker.ign"
    }

    fn required_hosts(&self) -> usize {
        2
    }

    fn logical_hosts<'a>(&self, t: &'a HAClusterTopology) -> &'a Vec<LogicalHost> {
        &t.workers
    }
}
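The trait above gives each role its ignition file, required host count, and logical-host list. A minimal, self-contained sketch of how that static trait-object dispatch behaves (simplified stand-ins, not the real Harmony types):

```rust
// Simplified version of OKDRoleProperties: each role is a zero-sized type
// answering role-specific questions behind one trait object.
trait RoleProperties {
    fn ignition_file(&self) -> &'static str;
    fn required_hosts(&self) -> usize;
}

struct ControlPlaneRole;
struct WorkerRole;

impl RoleProperties for ControlPlaneRole {
    fn ignition_file(&self) -> &'static str {
        "master.ign"
    }
    fn required_hosts(&self) -> usize {
        3 // etcd quorum needs three control plane machines
    }
}

impl RoleProperties for WorkerRole {
    fn ignition_file(&self) -> &'static str {
        "worker.ign"
    }
    fn required_hosts(&self) -> usize {
        2
    }
}

// Static dispatch table mirroring OKDNodeInterpret::okd_role_properties:
// returning &'static dyn works because the role impls are zero-sized units.
fn properties(role: &str) -> &'static dyn RoleProperties {
    match role {
        "ControlPlane" => &ControlPlaneRole,
        _ => &WorkerRole,
    }
}

fn main() {
    assert_eq!(properties("ControlPlane").ignition_file(), "master.ign");
    assert_eq!(properties("Worker").required_hosts(), 2);
    println!("dispatch ok");
}
```

Because the role impls carry no data, `&ControlPlaneRole` promotes to a `'static` reference, which is what lets the dispatch function hand out trait objects without any allocation.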
@@ -1,85 +0,0 @@
use async_trait::async_trait;
use harmony_types::storage::StorageSize;
use serde::Serialize;
use std::collections::HashMap;

#[async_trait]
pub trait PostgreSQL: Send + Sync {
    async fn deploy(&self, config: &PostgreSQLConfig) -> Result<String, String>;

    /// Extracts PostgreSQL-specific replication certs (PEM format) from a deployed primary cluster.
    /// Abstracts away storage/retrieval details (e.g., secrets, files).
    async fn get_replication_certs(&self, cluster_name: &str) -> Result<ReplicationCerts, String>;

    /// Gets the internal/private endpoint (e.g., k8s service FQDN:5432) for the cluster.
    async fn get_endpoint(&self, cluster_name: &str) -> Result<PostgreSQLEndpoint, String>;

    /// Gets the public/externally routable endpoint if configured (e.g., OKD Route:443 for TLS passthrough).
    /// Returns None if no public endpoint (internal-only cluster).
    /// UNSTABLE: This is opinionated for initial multisite use cases. Networking abstraction is complex
    /// (cf. k8s Ingress -> Gateway API evolution); may move to a higher-order Networking/PostgreSQLNetworking trait.
    async fn get_public_endpoint(
        &self,
        cluster_name: &str,
    ) -> Result<Option<PostgreSQLEndpoint>, String>;
}

#[derive(Clone, Debug, Serialize)]
pub struct PostgreSQLConfig {
    pub cluster_name: String,
    pub instances: u32,
    pub storage_size: StorageSize,
    pub role: PostgreSQLClusterRole,
}

#[derive(Clone, Debug, Serialize)]
pub enum PostgreSQLClusterRole {
    Primary,
    Replica(ReplicaConfig),
}

#[derive(Clone, Debug, Serialize)]
pub struct ReplicaConfig {
    /// Name of the primary cluster this replica will sync from
    pub primary_cluster_name: String,
    /// Certs extracted from primary via Topology::get_replication_certs()
    pub replication_certs: ReplicationCerts,
    /// Bootstrap method (e.g., pg_basebackup from primary)
    pub bootstrap: BootstrapConfig,
    /// External cluster connection details for CNPG spec.externalClusters
    pub external_cluster: ExternalClusterConfig,
}

#[derive(Clone, Debug, Serialize)]
pub struct BootstrapConfig {
    pub strategy: BootstrapStrategy,
}

#[derive(Clone, Debug, Serialize)]
pub enum BootstrapStrategy {
    PgBasebackup,
}

#[derive(Clone, Debug, Serialize)]
pub struct ExternalClusterConfig {
    /// Name used in CNPG externalClusters list
    pub name: String,
    /// Connection params (host/port set by multisite logic, sslmode='verify-ca', etc.)
    pub connection_parameters: HashMap<String, String>,
}

#[derive(Clone, Debug, Serialize)]
pub struct ReplicationCerts {
    /// PEM-encoded CA cert from primary
    pub ca_cert_pem: String,
    /// PEM-encoded streaming_replica client cert (tls.crt)
    pub streaming_replica_cert_pem: String,
    /// PEM-encoded streaming_replica client key (tls.key)
    pub streaming_replica_key_pem: String,
}

#[derive(Clone, Debug)]
pub struct PostgreSQLEndpoint {
    pub host: String,
    pub port: u16,
}
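`ExternalClusterConfig::connection_parameters` is a plain map of libpq-style key/value pairs. A self-contained sketch (the helper function is hypothetical, not part of the module) of the parameter map a replica would hand to a CNPG `externalClusters` entry:

```rust
use std::collections::HashMap;

// Hypothetical builder mirroring how the failover logic fills
// ExternalClusterConfig::connection_parameters for a replica:
// libpq-style keys, with certificate verification enforced via sslmode.
fn replica_connection_parameters(host: &str, port: u16) -> HashMap<String, String> {
    let mut params = HashMap::new();
    params.insert("host".to_string(), host.to_string());
    params.insert("port".to_string(), port.to_string());
    params.insert("dbname".to_string(), "postgres".to_string());
    params.insert("user".to_string(), "streaming_replica".to_string());
    params.insert("sslmode".to_string(), "verify-ca".to_string());
    params
}

fn main() {
    let params = replica_connection_parameters("pg-primary.example.com", 443);
    assert_eq!(params["user"], "streaming_replica");
    assert_eq!(params["sslmode"], "verify-ca");
    println!("params ok");
}
```

Keeping the parameters as an untyped map matches CNPG's own spec, where `externalClusters[].connectionParameters` is an arbitrary string map.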
@@ -1,125 +0,0 @@
use async_trait::async_trait;
use log::debug;
use log::info;
use std::collections::HashMap;

use crate::{
    modules::postgresql::capability::{
        BootstrapConfig, BootstrapStrategy, ExternalClusterConfig, PostgreSQL,
        PostgreSQLClusterRole, PostgreSQLConfig, PostgreSQLEndpoint, ReplicaConfig,
        ReplicationCerts,
    },
    topology::FailoverTopology,
};

#[async_trait]
impl<T: PostgreSQL> PostgreSQL for FailoverTopology<T> {
    async fn deploy(&self, config: &PostgreSQLConfig) -> Result<String, String> {
        info!(
            "Starting deployment of failover topology '{}'",
            config.cluster_name
        );

        let primary_config = PostgreSQLConfig {
            cluster_name: config.cluster_name.clone(),
            instances: config.instances,
            storage_size: config.storage_size.clone(),
            role: PostgreSQLClusterRole::Primary,
        };

        info!(
            "Deploying primary cluster '{}' ({} instances, {:?} storage)",
            primary_config.cluster_name, primary_config.instances, primary_config.storage_size
        );

        let primary_cluster_name = self.primary.deploy(&primary_config).await?;

        info!("Primary cluster '{primary_cluster_name}' deployed successfully");

        info!("Retrieving replication certificates for primary '{primary_cluster_name}'");

        let certs = self
            .primary
            .get_replication_certs(&primary_cluster_name)
            .await?;

        info!("Replication certificates retrieved successfully");

        info!("Retrieving public endpoint for primary '{primary_cluster_name}'");

        let endpoint = self
            .primary
            .get_public_endpoint(&primary_cluster_name)
            .await?
            .ok_or_else(|| "No public endpoint configured on primary cluster".to_string())?;

        info!(
            "Public endpoint '{}:{}' retrieved for primary",
            endpoint.host, endpoint.port
        );

        info!("Configuring replica connection parameters and bootstrap");

        let mut connection_parameters = HashMap::new();
        connection_parameters.insert("host".to_string(), endpoint.host);
        connection_parameters.insert("port".to_string(), endpoint.port.to_string());
        connection_parameters.insert("dbname".to_string(), "postgres".to_string());
        connection_parameters.insert("user".to_string(), "streaming_replica".to_string());
        connection_parameters.insert("sslmode".to_string(), "verify-ca".to_string());
        connection_parameters.insert("sslnegotiation".to_string(), "direct".to_string());

        debug!("Replica connection parameters: {:?}", connection_parameters);

        let external_cluster = ExternalClusterConfig {
            name: primary_cluster_name.clone(),
            connection_parameters,
        };

        let bootstrap_config = BootstrapConfig {
            strategy: BootstrapStrategy::PgBasebackup,
        };

        let replica_cluster_config = ReplicaConfig {
            primary_cluster_name: primary_cluster_name.clone(),
            replication_certs: certs,
            bootstrap: bootstrap_config,
            external_cluster,
        };

        let replica_config = PostgreSQLConfig {
            cluster_name: format!("{}-replica", primary_cluster_name),
            instances: config.instances,
            storage_size: config.storage_size.clone(),
            role: PostgreSQLClusterRole::Replica(replica_cluster_config),
        };

        info!(
            "Deploying replica cluster '{}' ({} instances, {:?} storage) on replica topology",
            replica_config.cluster_name, replica_config.instances, replica_config.storage_size
        );

        self.replica.deploy(&replica_config).await?;

        info!(
            "Replica cluster '{}' deployed successfully; failover topology '{}' ready",
            replica_config.cluster_name, config.cluster_name
        );

        Ok(primary_cluster_name)
    }

    async fn get_replication_certs(&self, cluster_name: &str) -> Result<ReplicationCerts, String> {
        self.primary.get_replication_certs(cluster_name).await
    }

    async fn get_endpoint(&self, cluster_name: &str) -> Result<PostgreSQLEndpoint, String> {
        self.primary.get_endpoint(cluster_name).await
    }

    async fn get_public_endpoint(
        &self,
        cluster_name: &str,
    ) -> Result<Option<PostgreSQLEndpoint>, String> {
        self.primary.get_public_endpoint(cluster_name).await
    }
}
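The `deploy` implementation above is essentially an ordering contract: the replica can only be configured after the primary exists and has yielded its certs and public endpoint. A mock-only sketch of that sequence (no real topology types involved):

```rust
// Mock of the orchestration order FailoverTopology::deploy enforces.
// Each step appends an event so the required sequence can be asserted.
fn deploy_failover() -> Vec<&'static str> {
    let mut events = Vec::new();
    events.push("deploy primary");
    events.push("get_replication_certs"); // needs the primary to exist
    events.push("get_public_endpoint"); // replica connects through this
    events.push("deploy replica"); // consumes certs + endpoint
    events
}

fn main() {
    let events = deploy_failover();
    assert_eq!(events.first(), Some(&"deploy primary"));
    assert_eq!(events.last(), Some(&"deploy replica"));
    println!("ordering ok");
}
```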
@@ -1,4 +0,0 @@
pub mod capability;
mod score;

pub mod failover;
@@ -1,88 +0,0 @@
use crate::{
    domain::{data::Version, interpret::InterpretStatus},
    interpret::{Interpret, InterpretError, InterpretName, Outcome},
    inventory::Inventory,
    modules::postgresql::capability::PostgreSQL,
    score::Score,
    topology::Topology,
};

use super::capability::*;

use harmony_types::id::Id;

use async_trait::async_trait;
use log::info;
use serde::Serialize;

#[derive(Clone, Debug, Serialize)]
pub struct PostgreSQLScore {
    config: PostgreSQLConfig,
}

#[derive(Debug, Clone)]
pub struct PostgreSQLInterpret {
    config: PostgreSQLConfig,
    version: Version,
    status: InterpretStatus,
}

impl PostgreSQLInterpret {
    pub fn new(config: PostgreSQLConfig) -> Self {
        let version = Version::from("1.0.0").expect("Version should be valid");
        Self {
            config,
            version,
            status: InterpretStatus::QUEUED,
        }
    }
}

impl<T: Topology + PostgreSQL> Score<T> for PostgreSQLScore {
    fn name(&self) -> String {
        "PostgreSQLScore".to_string()
    }

    fn create_interpret(&self) -> Box<dyn Interpret<T>> {
        Box::new(PostgreSQLInterpret::new(self.config.clone()))
    }
}

#[async_trait]
impl<T: Topology + PostgreSQL> Interpret<T> for PostgreSQLInterpret {
    fn get_name(&self) -> InterpretName {
        InterpretName::Custom("PostgreSQLInterpret")
    }

    fn get_version(&self) -> crate::domain::data::Version {
        self.version.clone()
    }

    fn get_status(&self) -> InterpretStatus {
        self.status.clone()
    }

    fn get_children(&self) -> Vec<Id> {
        todo!()
    }

    async fn execute(
        &self,
        _inventory: &Inventory,
        topology: &T,
    ) -> Result<Outcome, InterpretError> {
        info!(
            "Executing PostgreSQLInterpret with config {:?}",
            self.config
        );

        let cluster_name = topology
            .deploy(&self.config)
            .await
            .map_err(|e| InterpretError::from(e))?;

        Ok(Outcome::success(format!(
            "Deployed PostgreSQL cluster `{cluster_name}`"
        )))
    }
}
@@ -1,4 +1,3 @@
 pub mod id;
 pub mod net;
-pub mod storage;
 pub mod switch;
@@ -1,6 +0,0 @@
use serde::{Deserialize, Serialize};

#[derive(Copy, Clone, PartialEq, Eq, Hash, Serialize, Deserialize, PartialOrd, Ord, Debug)]
pub struct StorageSize {
    size_bytes: u64,
}