Compare commits

...

54 Commits

Author SHA1 Message Date
a9f8cd16ea Merge remote-tracking branch 'origin/master' into doc/pxe_test_setup
All checks were successful
Run Check Script / check (pull_request) Successful in 1m19s
2025-08-29 12:21:56 -04:00
c542a935e3 feat: Update harmony_inventory_agent binary in pxe http files
All checks were successful
Run Check Script / check (pull_request) Successful in 1m14s
2025-08-29 11:27:19 -04:00
0395d11e98 fix(doctest): Import harmony instrumentation properly in doc tests
All checks were successful
Run Check Script / check (pull_request) Successful in 1m15s
2025-08-29 11:23:11 -04:00
05e7b8075c feat(inventory agent): Local presence advertisement and discovery now work! Must be within the same LAN to share the multicast address though 2025-08-29 11:22:44 -04:00
b857412151 extract related logic into an OkdIpxeScore
Some checks failed
Run Check Script / check (pull_request) Failing after 33s
2025-08-29 09:52:11 -04:00
7bb3602ab8 make instrumentation sync instead of async to avoid concurrency issues 2025-08-29 06:03:59 -04:00
78b80c2169 fix typo in service type
Some checks failed
Run Check Script / check (pull_request) Failing after 34s
2025-08-29 04:42:25 -04:00
0876f4e4f0 Merge remote-tracking branch 'origin/doc/pxe_test_setup' into doc/pxe_test_setup
Some checks failed
Run Check Script / check (pull_request) Failing after 34s
2025-08-29 01:15:00 -04:00
6ac0e095a3 wip(inventory-agent): local presence advertisement and discovery using mdns almost working 2025-08-29 01:10:43 -04:00
ff2efc0a66 wip: mark DhcpRange fields as optional (to better support OPNSense possible configs)
All checks were successful
Run Check Script / check (pull_request) Successful in 1m14s
2025-08-28 16:21:18 -04:00
Ian Letourneau
f180cc4c80 wip: rename harmony-secret* to harmony_secret*
All checks were successful
Run Check Script / check (pull_request) Successful in 1m14s
2025-08-28 14:29:24 -04:00
3ca31179d0 Merge pull request 'feat/ceph_validate_health' (#121) from feat/ceph_validate_health into master
All checks were successful
Run Check Script / check (push) Successful in 1m4s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 5m52s
Reviewed-on: #121
Reviewed-by: johnride <jg@nationtech.io>
2025-08-25 19:32:42 +00:00
a9fe4ab267 fix: cargo fmt
All checks were successful
Run Check Script / check (pull_request) Successful in 1m0s
2025-08-25 13:33:36 -04:00
65cc9befeb mod.rs
Some checks failed
Run Check Script / check (pull_request) Failing after 20s
2025-08-25 13:31:39 -04:00
d456a1f9ee feat: score to validate whether the ceph cluster is healthy 2025-08-25 13:30:32 -04:00
5895f867cf feat: Bump harmony_composer rust version to 1.89
Some checks failed
Run Check Script / check (push) Failing after 24s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 7m52s
2025-08-23 16:27:04 -04:00
8cc7adf196 chore: Cleanup warnings and unused functions
All checks were successful
Run Check Script / check (pull_request) Successful in 1m20s
2025-08-23 16:26:29 -04:00
a1ab5d40fb chore: cargo fix
Some checks failed
Run Check Script / check (pull_request) Failing after 36s
2025-08-23 15:52:09 -04:00
6c92dd24f7 chore: cargo fmt
Some checks failed
Run Check Script / check (pull_request) Failing after 37s
2025-08-23 15:48:21 -04:00
c805d7e018 fix: Update prebuilt inventory_agent binary
Some checks failed
Run Check Script / check (pull_request) Failing after 35s
2025-08-23 15:33:12 -04:00
b33615b969 fix(opnsense-xml): dnsmasq force is now optional
Some checks failed
Run Check Script / check (pull_request) Failing after 38s
2025-08-23 15:31:14 -04:00
0f59f29ac4 fix(inventory_agent): Inventory agent now falls back to error messages when it can't find values
Some checks failed
Run Check Script / check (pull_request) Failing after 38s
2025-08-22 11:52:51 -04:00
361f240762 feat: PXE setup now fully functional for inventory agent
The process will set up dnsmasq DHCP on OPNsense to boot the correct iPXE file depending on the architecture
Then iPXE will chainload to either a MAC-specific iPXE boot file or the fallback inventory boot file
Then a kickstart pre script will set up the cluster SSH key to allow SSH connections to the machine, and also set up and start harmony_inventory_agent so the host can be scraped

Note: there is currently a bug in the inventory agent; it cannot find lsmod on CentOS Stream 9, will fix this soon
2025-08-22 10:48:43 -04:00
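For reference, a minimal sketch of how this flow is driven from Harmony, modeled on the `examples/pxe` crate added later in this diff; the filenames match the values used there, and `get_inventory`/`get_topology` come from that example's local `topology` module, not from the harmony crate itself:

```rust
use harmony::modules::okd::ipxe::OkdIpxeScore;

// `topology` is the example crate's local module (see examples/pxe/src/topology.rs in this diff);
// it builds an HAClusterTopology backed by the OPNsense firewall.
mod topology;
use crate::topology::{get_inventory, get_topology};

#[tokio::main]
async fn main() {
    let inventory = get_inventory();
    let topology = get_topology().await;

    // OkdIpxeScore bundles the PXE/iPXE setup described in the commit above;
    // the field values below mirror the example crate.
    let ipxe_score = OkdIpxeScore {
        kickstart_filename: "inventory.kickstart".to_string(),
        harmony_inventory_agent: "harmony_inventory_agent".to_string(),
        cluster_pubkey_filename: "cluster_ssh_key.pub".to_string(),
    };

    harmony_cli::run(inventory, topology, vec![Box::new(ipxe_score)], None)
        .await
        .unwrap();
}
```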
57c3b01e66 chore: refactor pxe templates to jinja templates rendered by askama
Some checks failed
Run Check Script / check (pull_request) Failing after 36s
2025-08-22 09:05:18 -04:00
94ddf027dd feat(pxe): chainloading works, kickstart for inventory still wip 2025-08-22 07:22:12 -04:00
06a2be4496 doc: Add README explaining how to build harmony_inventory_agent statically with musl target
Some checks failed
Run Check Script / check (pull_request) Failing after 35s
2025-08-21 21:58:35 -04:00
e2a09efdee Merge remote-tracking branch 'origin/master' into doc/pxe_test_setup 2025-08-21 21:56:09 -04:00
d36c574590 Merge pull request 'feat/inventory_agent' (#119) from feat/inventory_agent into master
Some checks failed
Run Check Script / check (push) Failing after 38s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 5m48s
Reviewed-on: #119
2025-08-22 01:55:52 +00:00
2618441de3 fix: Make sure directory exists before uploading file in opnsense http
Some checks failed
Run Check Script / check (pull_request) Failing after 31s
2025-08-21 17:31:43 -04:00
da6610c625 wip: PXE setup for ipxe and okd files in progress
Some checks failed
Run Check Script / check (pull_request) Failing after 36s
2025-08-21 17:28:17 -04:00
e956772593 feat: Add pxe example and new data files structure 2025-08-20 22:00:56 -04:00
27c51e0ec5 feat(wip): Support opnsense 25.7 which defaults to dnsmasq instead of isc dhcp 2025-08-20 21:54:46 -04:00
bfca9cf163 Merge pull request 'feat/ceph-osd-score' (#116) from feat/ceph-osd-score into master
Some checks failed
Run Check Script / check (push) Failing after 36s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 15m5s
Reviewed-on: #116
Reviewed-by: johnride <jg@nationtech.io>
2025-08-20 18:19:42 +00:00
597dcbc848 doc: PXE test setup script and README file to explain what it does and how to use it
Some checks failed
Run Check Script / check (pull_request) Failing after 40s
2025-08-20 13:14:00 -04:00
a53e8552e9 wip: pxe test setup still has a few kinks with serial console 2025-08-20 12:14:17 -04:00
72fb05b5cc fix(inventory_agent): Agent now retrieves correct dmidecode fields, fixed UUID generation (which was unacceptable), fixed storage drive parsing, much better error handling, much stricter behavior which also leads to more complete output as missing fields will raise errors unless explicitly optional 2025-08-19 17:56:06 -04:00
6685b05cc5 wip(inventory_agent): Refactoring for better error handling in progress 2025-08-19 17:05:23 -04:00
07116eb8a6 Merge pull request 'feat: Harmony inventory agent crate that exposes an endpoint listing the host hardware. Has to be reviewed, generated 99% by GLM-4.5' (#115) from feat/inventory_agent into master
Some checks failed
Run Check Script / check (push) Failing after 27s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 5m34s
Reviewed-on: #115
2025-08-19 16:58:00 +00:00
3f34f868eb Merge remote-tracking branch 'origin/master' into feat/inventory_agent
Some checks failed
Run Check Script / check (pull_request) Failing after 29s
2025-08-19 12:56:10 -04:00
bc6f7336d2 feat(inventory_agent): use HARMONY_INVENTORY_AGENT_PORT as environment variable to set port
Some checks failed
Run Check Script / check (pull_request) Failing after 25s
2025-08-19 12:55:03 -04:00
01da8631da chore(inventory_agent): Cargo fmt
Some checks failed
Run Check Script / check (pull_request) Failing after 24s
2025-08-19 12:44:49 -04:00
67b5c2df07 Merge pull request 'feat: Add iobench project and python dashboard' (#112) from feat/iobench into master
All checks were successful
Run Check Script / check (push) Successful in 1m11s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 5m41s
Reviewed-on: #112
2025-08-19 16:24:31 +00:00
1eaf63417b Merge pull request 'feat/secrets' (#111) from feat/secrets into master
Some checks failed
Compile and package harmony_composer / package_harmony_composer (push) Waiting to run
Run Check Script / check (push) Has been cancelled
Reviewed-on: #111

This pull request introduces a comprehensive and ergonomic secret management system via a new harmony-secret crate.
What's Done

    New harmony-secret Crate:
        A new crate dedicated to secret management, providing a clean, static API: SecretManager::get::<MySecret>() and SecretManager::set(&my_secret).
        A #[derive(Secret)] procedural macro that automatically uses the struct's name as the secret key, simplifying usage.
        An async SecretStore trait to support various backend implementations.

    Two Secret Store Implementations:
        LocalFileSecretStore: A simple file-based store that saves secrets as JSON in the user's data directory. Ideal for local development and testing.
        InfisicalSecretStore: A production-ready implementation that integrates with Infisical for centralized, secure secret management.

    Configuration via Environment Variables:
        The secret store is selected at runtime via the HARMONY_SECRET_STORE environment variable (file or infisical).
        Infisical integration is configured through HARMONY_SECRET_INFISICAL_* variables.

What's Not Done (Future Work)

    Automated Infisical Setup: The initial configuration for the Infisical backend is currently manual. Developers must create a project and a Universal Auth identity in Infisical and set the corresponding environment variables to run tests or use the backend. The new test_harmony_secret_infisical.sh script serves as a clear example of the required variables.

This new secrets module provides a solid and secure foundation for managing credentials for components like OPNsense, Kubernetes, and other infrastructure services going forward. Even with the manual first-time setup for Infisical, this architecture is robust enough to serve our needs for the foreseeable future.
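For illustration, a minimal usage sketch of the API described above; the `OPNSenseFirewallConfig` secret mirrors the one used by the new example-pxe crate, the credential values are the lab defaults from the PXE test README, and `SecretManager::set` is assumed here to be async and fallible like `get` (the PR text only shows the call shape):

```rust
use harmony_secret::{Secret, SecretManager};
use serde::{Deserialize, Serialize};

// #[derive(Secret)] uses the struct's name as the secret key.
// The backend is selected at runtime via HARMONY_SECRET_STORE=file|infisical;
// the Infisical backend additionally needs the HARMONY_SECRET_INFISICAL_* variables.
#[derive(Secret, Serialize, Deserialize, Debug, PartialEq)]
struct OPNSenseFirewallConfig {
    username: String,
    password: String,
}

#[tokio::main]
async fn main() {
    let creds = OPNSenseFirewallConfig {
        username: "root".to_string(),
        password: "opnsense".to_string(),
    };
    // Assumed async/fallible, mirroring SecretManager::get.
    SecretManager::set(&creds).await.unwrap();

    let loaded = SecretManager::get::<OPNSenseFirewallConfig>().await.unwrap();
    assert_eq!(loaded, creds);
}
```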
2025-08-19 16:23:45 +00:00
5e7803d2ba chore(iobench-dash): Delete older revisions and rename to iobench-dash.py for clarity
All checks were successful
Run Check Script / check (pull_request) Successful in 1m3s
2025-08-19 12:21:42 -04:00
9a610661c7 chore: Add description and license fields to Cargo.toml to allow publishing the crate
All checks were successful
Run Check Script / check (pull_request) Successful in 1m1s
2025-08-19 12:12:41 -04:00
70a65ed5d0 Merge remote-tracking branch 'origin/master' into feat/secrets
All checks were successful
Run Check Script / check (pull_request) Successful in 1m9s
2025-08-19 12:00:19 -04:00
26e8e386b9 feat: Secret module works with infisical and local file storage backends
All checks were successful
Run Check Script / check (pull_request) Successful in 1m9s
2025-08-19 11:59:21 -04:00
19cb7f73bc feat: Harmony inventory agent crate that exposes an endpoint listing the host hardware. Has to be reviewed, generated 99% by GLM-4.5
Some checks failed
Run Check Script / check (pull_request) Failing after 29s
2025-08-19 11:24:20 -04:00
84f38974b1 Merge pull request 'fix: bring back the TUI' (#110) from fix-tui into master
All checks were successful
Run Check Script / check (push) Successful in 1m15s
Compile and package harmony_composer / package_harmony_composer (push) Successful in 5m34s
Reviewed-on: #110
2025-08-15 20:01:59 +00:00
7d027bcfc4 Merge pull request 'fix: remove indicatif in harmony_cli to simplify logging and fixing interactions' (#109) from rip-indicatif into master
Some checks failed
Compile and package harmony_composer / package_harmony_composer (push) Waiting to run
Run Check Script / check (push) Has been cancelled
Reviewed-on: #109
2025-08-15 20:01:13 +00:00
2a6a233fb2 feat: WIP add secrets module and macro crate 2025-08-15 14:40:39 -04:00
Ian Letourneau
610ce84280 fix: bring back the TUI
All checks were successful
Run Check Script / check (pull_request) Successful in 1m20s
2025-08-15 12:47:36 -04:00
Ian Letourneau
8bb4a9d3f6 fix: remove indicatif in harmony_cli to simplify logging and fixing interactions
All checks were successful
Run Check Script / check (pull_request) Successful in 1m7s
2025-08-15 11:26:54 -04:00
fd8f643a8f feat: Add iobench project and python dashboard
All checks were successful
Run Check Script / check (pull_request) Successful in 1m3s
2025-08-14 10:37:30 -04:00
110 changed files with 10693 additions and 898 deletions

1071
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -12,6 +12,9 @@ members = [
"harmony_cli",
"k3d",
"harmony_composer",
"harmony_inventory_agent",
"harmony_secret_derive",
"harmony_secret", "adr/agent_discovery/mdns",
]
[workspace.package]
@@ -20,7 +23,7 @@ readme = "README.md"
license = "GNU AGPL v3"
[workspace.dependencies]
log = "0.4"
log = { version = "0.4", features = ["kv"] }
env_logger = "0.11"
derive-new = "0.7"
async-trait = "0.1"
@@ -53,6 +56,12 @@ chrono = "0.4"
similar = "2"
uuid = { version = "1.11", features = ["v4", "fast-rng", "macro-diagnostics"] }
pretty_assertions = "1.4.1"
tempfile = "3.20.0"
bollard = "0.19.1"
base64 = "0.22.1"
tar = "0.4.44"
lazy_static = "1.5.0"
directories = "6.0.0"
thiserror = "2.0.14"
serde = { version = "1.0.209", features = ["derive", "rc"] }
serde_json = "1.0.127"

View File

@@ -1,4 +1,4 @@
FROM docker.io/rust:1.87.0 AS build
FROM docker.io/rust:1.89.0 AS build
WORKDIR /app
@@ -6,7 +6,7 @@ COPY . .
RUN cargo build --release --bin harmony_composer
FROM docker.io/rust:1.87.0
FROM docker.io/rust:1.89.0
WORKDIR /app

View File

@@ -0,0 +1,17 @@
[package]
name = "mdns"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
mdns-sd = "0.14"
tokio = { version = "1", features = ["full"] }
futures = "0.3"
dmidecode = "0.2" # For getting the motherboard ID on the agent
log.workspace=true
env_logger.workspace=true
clap = { version = "4.5.46", features = ["derive"] }
get_if_addrs = "0.5.3"
local-ip-address = "0.6.5"

View File

@@ -0,0 +1,60 @@
// harmony-agent/src/main.rs
use log::info;
use mdns_sd::{ServiceDaemon, ServiceInfo};
use std::collections::HashMap;
use crate::SERVICE_TYPE;
// The service we are advertising.
const SERVICE_PORT: u16 = 43210; // A port for the service. It needs one, even if unused.
pub async fn advertise() {
info!("Starting Harmony Agent...");
// Get a unique ID for this machine.
let motherboard_id = "some motherboard id";
let instance_name = format!("harmony-agent-{}", motherboard_id);
info!("This agent's instance name: {}", instance_name);
info!("Advertising with ID: {}", motherboard_id);
// Create a new mDNS daemon.
let mdns = ServiceDaemon::new().expect("Failed to create mDNS daemon");
// Create a TXT record HashMap to hold our metadata.
let mut properties = HashMap::new();
properties.insert("id".to_string(), motherboard_id.to_string());
properties.insert("version".to_string(), "1.0".to_string());
// Create the service information.
// The instance name should be unique on the network.
let local_ip = local_ip_address::local_ip().unwrap();
let service_info = ServiceInfo::new(
SERVICE_TYPE,
&instance_name,
"harmony-host.local.", // A hostname for the service
local_ip,
// "0.0.0.0",
SERVICE_PORT,
Some(properties),
)
.expect("Failed to create service info");
// Register our service with the daemon.
mdns.register(service_info)
.expect("Failed to register service");
info!(
"Service '{}' registered and now being advertised.",
instance_name
);
info!("Agent is running. Press Ctrl+C to exit.");
for iface in get_if_addrs::get_if_addrs().unwrap() {
println!("{:#?}", iface);
}
// Keep the agent running indefinitely.
tokio::signal::ctrl_c().await.unwrap();
info!("Shutting down agent.");
}

View File

@@ -0,0 +1,110 @@
use log::debug;
use mdns_sd::{ServiceDaemon, ServiceEvent};
use crate::SERVICE_TYPE;
pub async fn discover() {
println!("Starting Harmony Master and browsing for agents...");
// Create a new mDNS daemon.
let mdns = ServiceDaemon::new().expect("Failed to create mDNS daemon");
// Start browsing for the service type.
// The receiver will be a stream of events.
let receiver = mdns.browse(SERVICE_TYPE).expect("Failed to browse");
println!(
"Listening for mDNS events for '{}'. Press Ctrl+C to exit.",
SERVICE_TYPE
);
std::thread::spawn(move || {
while let Ok(event) = receiver.recv() {
match event {
ServiceEvent::ServiceData(resolved) => {
println!("Resolved a new service: {}", resolved.fullname);
}
other_event => {
println!("Received other event: {:?}", &other_event);
}
}
}
});
// Gracefully shutdown the daemon.
std::thread::sleep(std::time::Duration::from_secs(1000000));
mdns.shutdown().unwrap();
// Process events as they come in.
// while let Ok(event) = receiver.recv_async().await {
// debug!("Received event {event:?}");
// // match event {
// // ServiceEvent::ServiceFound(svc_type, fullname) => {
// // println!("\n--- Agent Discovered ---");
// // println!(" Service Name: {}", fullname());
// // // You can now resolve this service to get its IP, port, and TXT records
// // // The resolve operation is a separate network call.
// // let receiver = mdns.browse(info.get_fullname()).unwrap();
// // if let Ok(resolve_event) = receiver.recv_timeout(Duration::from_secs(2)) {
// // if let ServiceEvent::ServiceResolved(info) = resolve_event {
// // let ip = info.get_addresses().iter().next().unwrap();
// // let port = info.get_port();
// // let motherboard_id = info.get_property("id").map_or("N/A", |v| v.val_str());
// //
// // println!(" IP: {}:{}", ip, port);
// // println!(" Motherboard ID: {}", motherboard_id);
// // println!("------------------------");
// //
// // // TODO: Add this agent to your central list of discovered hosts.
// // }
// // } else {
// // println!("Could not resolve service '{}' in time.", info.get_fullname());
// // }
// // }
// // ServiceEvent::ServiceRemoved(info) => {
// // println!("\n--- Agent Removed ---");
// // println!(" Service Name: {}", info.get_fullname());
// // println!("---------------------");
// // // TODO: Remove this agent from your list.
// // }
// // _ => {
// // // We don't care about other event types for this example
// // }
// // }
// }
}
async fn discover_example() {
use mdns_sd::{ServiceDaemon, ServiceEvent};
// Create a daemon
let mdns = ServiceDaemon::new().expect("Failed to create daemon");
// Use recently added `ServiceEvent::ServiceData`.
mdns.use_service_data(true)
.expect("Failed to use ServiceData");
// Browse for a service type.
let service_type = "_mdns-sd-my-test._udp.local.";
let receiver = mdns.browse(service_type).expect("Failed to browse");
// Receive the browse events in sync or async. Here is
// an example of using a thread. Users can call `receiver.recv_async().await`
// if running in async environment.
std::thread::spawn(move || {
while let Ok(event) = receiver.recv() {
match event {
ServiceEvent::ServiceData(resolved) => {
println!("Resolved a new service: {}", resolved.fullname);
}
other_event => {
println!("Received other event: {:?}", &other_event);
}
}
}
});
// Gracefully shutdown the daemon.
std::thread::sleep(std::time::Duration::from_secs(1));
mdns.shutdown().unwrap();
}

View File

@@ -0,0 +1,31 @@
use clap::{Parser, ValueEnum};
mod advertise;
mod discover;
#[derive(Parser, Debug)]
#[command(version, about, long_about = None)]
struct Args {
#[arg(value_enum)]
profile: Profiles,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord, ValueEnum)]
enum Profiles {
Advertise,
Discover,
}
// The service type we are looking for.
const SERVICE_TYPE: &str = "_harmony._tcp.local.";
#[tokio::main]
async fn main() {
env_logger::init();
let args = Args::parse();
match args.profile {
Profiles::Advertise => advertise::advertise().await,
Profiles::Discover => discover::discover().await,
}
}

View File

@@ -1,6 +1,7 @@
#!/bin/sh
set -e
rustc --version
cargo check --all-targets --all-features --keep-going
cargo fmt --check
cargo clippy

8
data/pxe/okd/README.md Normal file
View File

@@ -0,0 +1,8 @@
Here lie all the data files required for an OKD cluster PXE boot setup.
This includes ISO files, binary boot files, iPXE, etc.
TODO as of August 2025:
- `harmony_inventory_agent` should be downloaded from official releases; this embedded version is practical for now though
- The cluster ssh key should be generated and handled by harmony with the private key saved in a secret store

View File

@@ -0,0 +1,9 @@
harmony_inventory_agent filter=lfs diff=lfs merge=lfs -text
os filter=lfs diff=lfs merge=lfs -text
os/centos-stream-9 filter=lfs diff=lfs merge=lfs -text
os/centos-stream-9/images filter=lfs diff=lfs merge=lfs -text
os/centos-stream-9/initrd.img filter=lfs diff=lfs merge=lfs -text
os/centos-stream-9/vmlinuz filter=lfs diff=lfs merge=lfs -text
os/centos-stream-9/images/efiboot.img filter=lfs diff=lfs merge=lfs -text
os/centos-stream-9/images/install.img filter=lfs diff=lfs merge=lfs -text
os/centos-stream-9/images/pxeboot filter=lfs diff=lfs merge=lfs -text

View File

@@ -0,0 +1 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBx6bDylvC68cVpjKfEFtLQJ/dOFi6PVS2vsIOqPDJIc jeangab@liliane2

BIN
data/pxe/okd/http_files/harmony_inventory_agent (Stored with Git LFS) Executable file

Binary file not shown.

Binary file not shown.

Binary file not shown.

BIN
data/pxe/okd/http_files/os/centos-stream-9/initrd.img (Stored with Git LFS) Normal file

Binary file not shown.

BIN
data/pxe/okd/http_files/os/centos-stream-9/vmlinuz (Stored with Git LFS) Executable file

Binary file not shown.

Binary file not shown.

Binary file not shown.

108
docs/pxe_test/README.md Normal file
View File

@@ -0,0 +1,108 @@
# OPNsense PXE Lab Environment
This project contains a script to automatically set up a virtual lab environment for testing PXE boot services managed by an OPNsense firewall.
## Overview
The `pxe_vm_lab_setup.sh` script will create the following resources using libvirt/KVM:
1. **A Virtual Network**: An isolated network named `harmonylan` (`virbr1`) for the lab.
2. **Two Virtual Machines**:
* `opnsense-pxe`: A firewall VM that will act as the gateway and PXE server.
* `pxe-node-1`: A client VM configured to boot from the network.
## Prerequisites
Ensure you have the following software installed on your Arch Linux host:
* `libvirt`
* `qemu`
* `virt-install` (from the `virt-install` package)
* `curl`
* `bzip2`
## Usage
### 1. Create the Environment
Run the `up` command to download the necessary images and create the network and VMs.
```bash
sudo ./pxe_vm_lab_setup.sh up
```
### 2. Install and Configure OPNsense
The OPNsense VM is created but the OS needs to be installed manually via the console.
1. **Connect to the VM console**:
```bash
sudo virsh console opnsense-pxe
```
2. **Log in as the installer**:
* Username: `installer`
* Password: `opnsense`
3. **Follow the on-screen installation wizard**. When prompted to assign network interfaces (`WAN` and `LAN`):
* Find the MAC address for the `harmonylan` interface by running this command in another terminal:
```bash
virsh domiflist opnsense-pxe
# Example output:
# Interface Type Source Model MAC
# ---------------------------------------------------------
# vnet18 network default virtio 52:54:00:b5:c4:6d
# vnet19 network harmonylan virtio 52:54:00:21:f9:ba
```
* Assign the interface connected to `harmonylan` (e.g., `vtnet1` with MAC `52:54:00:21:f9:ba`) as your **LAN**.
* Assign the other interface as your **WAN**.
4. After the installation is complete, **shut down** the VM from the console menu.
5. **Detach the installation media** by editing the VM's configuration:
```bash
sudo virsh edit opnsense-pxe
```
Find and **delete** the entire `<disk>` block corresponding to the `.img` file (the one with `<target ... bus='usb'/>`).
6. **Start the VM** to boot into the newly installed system:
```bash
sudo virsh start opnsense-pxe
```
### 3. Connect to OPNsense from Your Host
To configure OPNsense, you need to connect your host to the `harmonylan` network.
1. By default, OPNsense configures its LAN interface with the IP `192.168.1.1`.
2. Assign a compatible IP address to your host's `virbr1` bridge interface:
```bash
sudo ip addr add 192.168.1.5/24 dev virbr1
```
3. You can now access the OPNsense VM from your host:
* **SSH**: `ssh root@192.168.1.1` (password: `opnsense`)
* **Web UI**: `https://192.168.1.1`
### 4. Configure PXE Services with Harmony
With connectivity established, you can now use Harmony to configure the OPNsense firewall for PXE booting. Point your Harmony OPNsense scores to the firewall using these details:
* **Hostname/IP**: `192.168.1.1`
* **Credentials**: `root` / `opnsense`
### 5. Boot the PXE Client
Once your Harmony configuration has been applied and OPNsense is serving DHCP/TFTP, start the client VM. It will automatically attempt to boot from the network.
```bash
sudo virsh start pxe-node-1
sudo virsh console pxe-node-1
```
## Cleanup
To destroy all VMs and networks created by the script, run the `clean` command:
```bash
sudo ./pxe_vm_lab_setup.sh clean
```

191
docs/pxe_test/pxe_vm_lab_setup.sh Executable file
View File

@@ -0,0 +1,191 @@
#!/usr/bin/env bash
set -euo pipefail
# --- Configuration ---
LAB_DIR="/var/lib/harmony_pxe_test"
IMG_DIR="${LAB_DIR}/images"
STATE_DIR="${LAB_DIR}/state"
VM_OPN="opnsense-pxe"
VM_PXE="pxe-node-1"
NET_HARMONYLAN="harmonylan"
# Network settings for the isolated LAN
VLAN_CIDR="192.168.150.0/24"
VLAN_GW="192.168.150.1"
VLAN_MASK="255.255.255.0"
# VM Specifications
RAM_OPN="2048"
VCPUS_OPN="2"
DISK_OPN_GB="10"
OS_VARIANT_OPN="freebsd14.0" # Updated to a more recent FreeBSD variant
RAM_PXE="4096"
VCPUS_PXE="2"
DISK_PXE_GB="40"
OS_VARIANT_LINUX="centos-stream9"
OPN_IMG_URL="https://mirror.ams1.nl.leaseweb.net/opnsense/releases/25.7/OPNsense-25.7-serial-amd64.img.bz2"
OPN_IMG_PATH="${IMG_DIR}/OPNsense-25.7-serial-amd64.img"
CENTOS_ISO_URL="https://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/images/boot.iso"
CENTOS_ISO_PATH="${IMG_DIR}/CentOS-Stream-9-latest-boot.iso"
CONNECT_URI="qemu:///system"
download_if_missing() {
local url="$1"
local dest="$2"
if [[ ! -f "$dest" ]]; then
echo "Downloading $url to $dest"
mkdir -p "$(dirname "$dest")"
local tmp
tmp="$(mktemp)"
curl -L --progress-bar "$url" -o "$tmp"
case "$url" in
*.bz2) bunzip2 -c "$tmp" > "$dest" && rm -f "$tmp" ;;
*) mv "$tmp" "$dest" ;;
esac
else
echo "Already present: $dest"
fi
}
# Ensures a libvirt network is defined and active
ensure_network() {
local net_name="$1"
local net_xml_path="$2"
if virsh --connect "${CONNECT_URI}" net-info "${net_name}" >/dev/null 2>&1; then
echo "Network ${net_name} already exists."
else
echo "Defining network ${net_name} from ${net_xml_path}"
virsh --connect "${CONNECT_URI}" net-define "${net_xml_path}"
fi
if ! virsh --connect "${CONNECT_URI}" net-info "${net_name}" | grep "Active: *yes"; then
echo "Starting network ${net_name}..."
virsh --connect "${CONNECT_URI}" net-start "${net_name}"
virsh --connect "${CONNECT_URI}" net-autostart "${net_name}"
fi
}
# Destroys a VM completely
destroy_vm() {
local vm_name="$1"
if virsh --connect "${CONNECT_URI}" dominfo "$vm_name" >/dev/null 2>&1; then
echo "Destroying and undefining VM: ${vm_name}"
virsh --connect "${CONNECT_URI}" destroy "$vm_name" || true
virsh --connect "${CONNECT_URI}" undefine "$vm_name" --nvram
fi
}
# Destroys a libvirt network
destroy_network() {
local net_name="$1"
if virsh --connect "${CONNECT_URI}" net-info "$net_name" >/dev/null 2>&1; then
echo "Destroying and undefining network: ${net_name}"
virsh --connect "${CONNECT_URI}" net-destroy "$net_name" || true
virsh --connect "${CONNECT_URI}" net-undefine "$net_name"
fi
}
# --- Main Logic ---
create_lab_environment() {
# Create network definition files
cat > "${STATE_DIR}/default.xml" <<EOF
<network>
<name>default</name>
<forward mode='nat'/>
<bridge name='virbr0' stp='on' delay='0'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.100' end='192.168.122.200'/>
</dhcp>
</ip>
</network>
EOF
cat > "${STATE_DIR}/${NET_HARMONYLAN}.xml" <<EOF
<network>
<name>${NET_HARMONYLAN}</name>
<bridge name='virbr1' stp='on' delay='0'/>
</network>
EOF
# Ensure both networks exist and are active
ensure_network "default" "${STATE_DIR}/default.xml"
ensure_network "${NET_HARMONYLAN}" "${STATE_DIR}/${NET_HARMONYLAN}.xml"
# --- Create OPNsense VM (MODIFIED SECTION) ---
local disk_opn="${IMG_DIR}/${VM_OPN}.qcow2"
if [[ ! -f "$disk_opn" ]]; then
qemu-img create -f qcow2 "$disk_opn" "${DISK_OPN_GB}G"
fi
echo "Creating OPNsense VM using serial image..."
virt-install \
--connect "${CONNECT_URI}" \
--name "${VM_OPN}" \
--ram "${RAM_OPN}" \
--vcpus "${VCPUS_OPN}" \
--cpu host-passthrough \
--os-variant "${OS_VARIANT_OPN}" \
--graphics none \
--noautoconsole \
--disk path="${disk_opn}",device=disk,bus=virtio,boot.order=1 \
--disk path="${OPN_IMG_PATH}",device=disk,bus=usb,readonly=on,boot.order=2 \
--network network=default,model=virtio \
--network network="${NET_HARMONYLAN}",model=virtio \
--boot uefi,menu=on
echo "OPNsense VM created. Connect with: sudo virsh console ${VM_OPN}"
echo "The VM will boot from the serial installation image."
echo "Login with user 'installer' and password 'opnsense' to start the installation."
echo "Install onto the VirtIO disk (vtbd0)."
echo "After installation, shutdown the VM, then run 'sudo virsh edit ${VM_OPN}' and remove the USB disk block to boot from the installed system."
# --- Create PXE Client VM ---
local disk_pxe="${IMG_DIR}/${VM_PXE}.qcow2"
if [[ ! -f "$disk_pxe" ]]; then
qemu-img create -f qcow2 "$disk_pxe" "${DISK_PXE_GB}G"
fi
echo "Creating PXE client VM..."
virt-install \
--connect "${CONNECT_URI}" \
--name "${VM_PXE}" \
--ram "${RAM_PXE}" \
--vcpus "${VCPUS_PXE}" \
--cpu host-passthrough \
--os-variant "${OS_VARIANT_LINUX}" \
--graphics none \
--noautoconsole \
--disk path="${disk_pxe}",format=qcow2,bus=virtio \
--network network="${NET_HARMONYLAN}",model=virtio \
--pxe \
--boot uefi,menu=on
echo "PXE VM created. It will attempt to netboot on ${NET_HARMONYLAN}."
}
# --- Script Entrypoint ---
case "${1:-}" in
up)
mkdir -p "${IMG_DIR}" "${STATE_DIR}"
download_if_missing "$OPN_IMG_URL" "$OPN_IMG_PATH"
download_if_missing "$CENTOS_ISO_URL" "$CENTOS_ISO_PATH"
create_lab_environment
echo "Lab setup complete. Use 'sudo virsh list --all' to see VMs."
;;
clean)
destroy_vm "${VM_PXE}"
destroy_vm "${VM_OPN}"
destroy_network "${NET_HARMONYLAN}"
# Optionally destroy the default network if you want a full reset
# destroy_network "default"
echo "Cleanup complete."
;;
*)
echo "Usage: sudo $0 {up|clean}"
exit 1
;;
esac

Binary file not shown.

View File

@@ -1,6 +1,9 @@
use harmony::{
inventory::Inventory,
modules::dummy::{ErrorScore, PanicScore, SuccessScore},
modules::{
dummy::{ErrorScore, PanicScore, SuccessScore},
inventory::DiscoverInventoryAgentScore,
},
topology::LocalhostTopology,
};
@@ -13,6 +16,9 @@ async fn main() {
Box::new(SuccessScore {}),
Box::new(ErrorScore {}),
Box::new(PanicScore {}),
Box::new(DiscoverInventoryAgentScore {
discovery_timeout: Some(10),
}),
],
None,
)

View File

@@ -8,7 +8,6 @@ use harmony::{
hardware::{FirewallGroup, HostCategory, Location, PhysicalHost, SwitchGroup},
infra::opnsense::OPNSenseManagementInterface,
inventory::Inventory,
maestro::Maestro,
modules::{
http::StaticFilesHttpScore,
ipxe::IpxeScore,
@@ -126,20 +125,28 @@ async fn main() {
harmony::modules::okd::load_balancer::OKDLoadBalancerScore::new(&topology);
let tftp_score = TftpScore::new(Url::LocalFolder("./data/watchguard/tftpboot".to_string()));
let http_score = StaticFilesHttpScore::new(Url::LocalFolder(
"./data/watchguard/pxe-http-files".to_string(),
));
let http_score = StaticFilesHttpScore {
folder_to_serve: Some(Url::LocalFolder(
"./data/watchguard/pxe-http-files".to_string(),
)),
files: vec![],
};
let ipxe_score = IpxeScore::new();
let mut maestro = Maestro::initialize(inventory, topology).await.unwrap();
maestro.register_all(vec![
Box::new(dns_score),
Box::new(bootstrap_dhcp_score),
Box::new(bootstrap_load_balancer_score),
Box::new(load_balancer_score),
Box::new(tftp_score),
Box::new(http_score),
Box::new(ipxe_score),
Box::new(dhcp_score),
]);
harmony_tui::init(maestro).await.unwrap();
harmony_tui::run(
inventory,
topology,
vec![
Box::new(dns_score),
Box::new(bootstrap_dhcp_score),
Box::new(bootstrap_load_balancer_score),
Box::new(load_balancer_score),
Box::new(tftp_score),
Box::new(http_score),
Box::new(ipxe_score),
Box::new(dhcp_score),
],
)
.await
.unwrap();
}

View File

@@ -0,0 +1,21 @@
[package]
name = "example-pxe"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
publish = false
[dependencies]
harmony = { path = "../../harmony" }
harmony_cli = { path = "../../harmony_cli" }
harmony_types = { path = "../../harmony_types" }
harmony_secret = { path = "../../harmony_secret" }
harmony_secret_derive = { path = "../../harmony_secret_derive" }
cidr = { workspace = true }
tokio = { workspace = true }
harmony_macros = { path = "../../harmony_macros" }
log = { workspace = true }
env_logger = { workspace = true }
url = { workspace = true }
serde.workspace = true

View File

@@ -0,0 +1,24 @@
mod topology;
use crate::topology::{get_inventory, get_topology};
use harmony::modules::okd::ipxe::OkdIpxeScore;
#[tokio::main]
async fn main() {
let inventory = get_inventory();
let topology = get_topology().await;
let kickstart_filename = "inventory.kickstart".to_string();
let cluster_pubkey_filename = "cluster_ssh_key.pub".to_string();
let harmony_inventory_agent = "harmony_inventory_agent".to_string();
let ipxe_score = OkdIpxeScore {
kickstart_filename,
harmony_inventory_agent,
cluster_pubkey_filename,
};
harmony_cli::run(inventory, topology, vec![Box::new(ipxe_score)], None)
.await
.unwrap();
}

View File

@@ -0,0 +1,78 @@
use cidr::Ipv4Cidr;
use harmony::{
hardware::{FirewallGroup, HostCategory, Location, PhysicalHost, SwitchGroup},
infra::opnsense::OPNSenseManagementInterface,
inventory::Inventory,
topology::{HAClusterTopology, LogicalHost, UnmanagedRouter},
};
use harmony_macros::{ip, ipv4};
use harmony_secret::{Secret, SecretManager};
use serde::{Deserialize, Serialize};
use std::{net::IpAddr, sync::Arc};
#[derive(Secret, Serialize, Deserialize, Debug, PartialEq)]
struct OPNSenseFirewallConfig {
username: String,
password: String,
}
pub async fn get_topology() -> HAClusterTopology {
let firewall = harmony::topology::LogicalHost {
ip: ip!("192.168.1.1"),
name: String::from("opnsense-1"),
};
let config = SecretManager::get::<OPNSenseFirewallConfig>().await;
let config = config.unwrap();
let opnsense = Arc::new(
harmony::infra::opnsense::OPNSenseFirewall::new(
firewall,
None,
&config.username,
&config.password,
)
.await,
);
let lan_subnet = ipv4!("192.168.1.0");
let gateway_ipv4 = ipv4!("192.168.1.1");
let gateway_ip = IpAddr::V4(gateway_ipv4);
harmony::topology::HAClusterTopology {
domain_name: "demo.harmony.mcd".to_string(),
router: Arc::new(UnmanagedRouter::new(
gateway_ip,
Ipv4Cidr::new(lan_subnet, 24).unwrap(),
)),
load_balancer: opnsense.clone(),
firewall: opnsense.clone(),
tftp_server: opnsense.clone(),
http_server: opnsense.clone(),
dhcp_server: opnsense.clone(),
dns_server: opnsense.clone(),
control_plane: vec![LogicalHost {
ip: ip!("10.100.8.20"),
name: "cp0".to_string(),
}],
bootstrap_host: LogicalHost {
ip: ip!("10.100.8.20"),
name: "cp0".to_string(),
},
workers: vec![],
switch: vec![],
}
}
pub fn get_inventory() -> Inventory {
Inventory {
location: Location::new(
"Some virtual machine or maybe a physical machine if you're cool".to_string(),
"testopnsense".to_string(),
),
switch: SwitchGroup::from([]),
firewall: FirewallGroup::from([PhysicalHost::empty(HostCategory::Firewall)
.management(Arc::new(OPNSenseManagementInterface::new()))]),
storage_host: vec![],
worker_host: vec![],
control_plane_host: vec![],
}
}

View File

@@ -0,0 +1,7 @@
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACAcemw8pbwuvHFaYynxBbS0Cf3ThYuj1Utr7CDqjwySHAAAAJikacCNpGnA
jQAAAAtzc2gtZWQyNTUxOQAAACAcemw8pbwuvHFaYynxBbS0Cf3ThYuj1Utr7CDqjwySHA
AAAECiiKk4V6Q5cVs6axDM4sjAzZn/QCZLQekmYQXS9XbEYxx6bDylvC68cVpjKfEFtLQJ
/dOFi6PVS2vsIOqPDJIcAAAAEGplYW5nYWJAbGlsaWFuZTIBAgMEBQ==
-----END OPENSSH PRIVATE KEY-----

View File

@@ -0,0 +1 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBx6bDylvC68cVpjKfEFtLQJ/dOFi6PVS2vsIOqPDJIc jeangab@liliane2

View File

@@ -8,7 +8,6 @@ use harmony::{
hardware::{FirewallGroup, HostCategory, Location, PhysicalHost, SwitchGroup},
infra::opnsense::OPNSenseManagementInterface,
inventory::Inventory,
maestro::Maestro,
modules::{
dummy::{ErrorScore, PanicScore, SuccessScore},
http::StaticFilesHttpScore,
@@ -81,23 +80,31 @@ async fn main() {
let load_balancer_score = OKDLoadBalancerScore::new(&topology);
let tftp_score = TftpScore::new(Url::LocalFolder("./data/watchguard/tftpboot".to_string()));
let http_score = StaticFilesHttpScore::new(Url::LocalFolder(
"./data/watchguard/pxe-http-files".to_string(),
));
let mut maestro = Maestro::initialize(inventory, topology).await.unwrap();
maestro.register_all(vec![
Box::new(dns_score),
Box::new(dhcp_score),
Box::new(load_balancer_score),
Box::new(tftp_score),
Box::new(http_score),
Box::new(OPNsenseShellCommandScore {
opnsense: opnsense.get_opnsense_config(),
command: "touch /tmp/helloharmonytouching".to_string(),
}),
Box::new(SuccessScore {}),
Box::new(ErrorScore {}),
Box::new(PanicScore {}),
]);
harmony_tui::init(maestro).await.unwrap();
let http_score = StaticFilesHttpScore {
folder_to_serve: Some(Url::LocalFolder(
"./data/watchguard/pxe-http-files".to_string(),
)),
files: vec![],
};
harmony_tui::run(
inventory,
topology,
vec![
Box::new(dns_score),
Box::new(dhcp_score),
Box::new(load_balancer_score),
Box::new(tftp_score),
Box::new(http_score),
Box::new(OPNsenseShellCommandScore {
opnsense: opnsense.get_opnsense_config(),
command: "touch /tmp/helloharmonytouching".to_string(),
}),
Box::new(SuccessScore {}),
Box::new(ErrorScore {}),
Box::new(PanicScore {}),
],
)
.await
.unwrap();
}

View File

@@ -2,7 +2,6 @@ use std::net::{SocketAddr, SocketAddrV4};
use harmony::{
inventory::Inventory,
maestro::Maestro,
modules::{
dns::DnsScore,
dummy::{ErrorScore, PanicScore, SuccessScore},
@@ -16,18 +15,19 @@ use harmony_macros::ipv4;
#[tokio::main]
async fn main() {
let inventory = Inventory::autoload();
let topology = DummyInfra {};
let mut maestro = Maestro::initialize(inventory, topology).await.unwrap();
maestro.register_all(vec![
Box::new(SuccessScore {}),
Box::new(ErrorScore {}),
Box::new(PanicScore {}),
Box::new(DnsScore::new(vec![], None)),
Box::new(build_large_score()),
]);
harmony_tui::init(maestro).await.unwrap();
harmony_tui::run(
Inventory::autoload(),
DummyInfra {},
vec![
Box::new(SuccessScore {}),
Box::new(ErrorScore {}),
Box::new(PanicScore {}),
Box::new(DnsScore::new(vec![], None)),
Box::new(build_large_score()),
],
)
.await
.unwrap();
}
fn build_large_score() -> LoadBalancerScore {

View File

@@ -0,0 +1,11 @@
[package]
name = "example_validate_ceph_cluster_health"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony = { version = "0.1.0", path = "../../harmony" }
harmony_cli = { version = "0.1.0", path = "../../harmony_cli" }
tokio.workspace = true

View File

@@ -0,0 +1,18 @@
use harmony::{
inventory::Inventory,
modules::storage::ceph::ceph_validate_health_score::CephVerifyClusterHealth,
topology::K8sAnywhereTopology,
};
#[tokio::main]
async fn main() {
let ceph_health_score = CephVerifyClusterHealth {
rook_ceph_namespace: "rook-ceph".to_string(),
};
let topology = K8sAnywhereTopology::from_env();
let inventory = Inventory::autoload();
harmony_cli::run(inventory, topology, vec![Box::new(ceph_health_score)], None)
.await
.unwrap();
}

View File

@@ -16,8 +16,8 @@ reqwest = { version = "0.11", features = ["blocking", "json"] }
russh = "0.45.0"
rust-ipmi = "0.1.1"
semver = "1.0.23"
serde = { version = "1.0.209", features = ["derive", "rc"] }
serde_json = "1.0.127"
serde.workspace = true
serde_json.workspace = true
tokio.workspace = true
derive-new.workspace = true
log.workspace = true
@@ -38,8 +38,8 @@ serde-value.workspace = true
helm-wrapper-rs = "0.4.0"
non-blank-string-rs = "1.0.4"
k3d-rs = { path = "../k3d" }
directories = "6.0.0"
lazy_static = "1.5.0"
directories.workspace = true
lazy_static.workspace = true
dockerfile_builder = "0.1.5"
temp-file = "0.1.9"
convert_case.workspace = true
@@ -59,7 +59,7 @@ similar.workspace = true
futures-util = "0.3.31"
tokio-util = "0.7.15"
strum = { version = "0.27.1", features = ["derive"] }
tempfile = "3.20.0"
tempfile.workspace = true
serde_with = "3.14.0"
schemars = "0.8.22"
kube-derive = "1.1.0"
@@ -67,6 +67,9 @@ bollard.workspace = true
tar.workspace = true
base64.workspace = true
once_cell = "1.21.3"
harmony_inventory_agent = { path = "../harmony_inventory_agent" }
harmony_secret_derive = { version = "0.1.0", path = "../harmony_secret_derive" }
askama = "0.14.0"
[dev-dependencies]
pretty_assertions.workspace = true

BIN
harmony/harmony.rlib Normal file

Binary file not shown.

View File

@@ -0,0 +1,22 @@
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct FileContent {
pub path: FilePath,
pub content: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum FilePath {
Relative(String),
Absolute(String),
}
impl std::fmt::Display for FilePath {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
FilePath::Relative(path) => f.write_fmt(format_args!("./{path}")),
FilePath::Absolute(path) => f.write_fmt(format_args!("/{path}")),
}
}
}

View File

@@ -1,4 +1,6 @@
mod file;
mod id;
mod version;
pub use file::*;
pub use id::*;
pub use version::*;

View File

@@ -1,6 +1,5 @@
use log::debug;
use once_cell::sync::Lazy;
use tokio::sync::broadcast;
use std::{collections::HashMap, sync::Mutex};
use crate::modules::application::ApplicationFeatureStatus;
@@ -40,43 +39,46 @@ pub enum HarmonyEvent {
},
}
static HARMONY_EVENT_BUS: Lazy<broadcast::Sender<HarmonyEvent>> = Lazy::new(|| {
// TODO: Adjust channel capacity
let (tx, _rx) = broadcast::channel(100);
tx
});
type Subscriber = Box<dyn Fn(&HarmonyEvent) + Send + Sync>;
pub fn instrument(event: HarmonyEvent) -> Result<(), &'static str> {
if cfg!(any(test, feature = "testing")) {
let _ = event; // Suppress the "unused variable" warning for `event`
Ok(())
} else {
match HARMONY_EVENT_BUS.send(event) {
Ok(_) => Ok(()),
Err(_) => Err("send error: no subscribers"),
}
}
}
static SUBSCRIBERS: Lazy<Mutex<HashMap<String, Subscriber>>> =
Lazy::new(|| Mutex::new(HashMap::new()));
pub async fn subscribe<F, Fut>(name: &str, mut handler: F)
/// Subscribes a listener to all instrumentation events.
///
/// Simply provide a unique name and a closure to run when an event happens.
///
/// # Example
/// ```
/// use harmony::instrumentation;
/// instrumentation::subscribe("my_logger", |event| {
/// println!("Event occurred: {:?}", event);
/// });
/// ```
pub fn subscribe<F>(name: &str, callback: F)
where
F: FnMut(HarmonyEvent) -> Fut + Send + 'static,
Fut: Future<Output = bool> + Send,
F: Fn(&HarmonyEvent) + Send + Sync + 'static,
{
let mut rx = HARMONY_EVENT_BUS.subscribe();
debug!("[{name}] Service started. Listening for events...");
loop {
match rx.recv().await {
Ok(event) => {
if !handler(event).await {
debug!("[{name}] Handler requested exit.");
break;
}
}
Err(broadcast::error::RecvError::Lagged(n)) => {
debug!("[{name}] Lagged behind by {n} messages.");
}
Err(_) => break,
}
}
let mut subs = SUBSCRIBERS.lock().unwrap();
subs.insert(name.to_string(), Box::new(callback));
}
/// Instruments an event, notifying all subscribers.
///
/// This will call every closure that was registered with `subscribe`.
///
/// # Example
/// ```
/// use harmony::instrumentation;
/// use harmony::instrumentation::HarmonyEvent;
/// instrumentation::instrument(HarmonyEvent::HarmonyStarted);
/// ```
pub fn instrument(event: HarmonyEvent) -> Result<(), &'static str> {
let subs = SUBSCRIBERS.lock().unwrap();
for callback in subs.values() {
callback(&event);
}
Ok(())
}

View File

@@ -32,6 +32,8 @@ pub enum InterpretName {
Lamp,
ApplicationMonitoring,
K8sPrometheusCrdAlerting,
DiscoverInventoryAgent,
CephClusterHealth,
}
impl std::fmt::Display for InterpretName {
@@ -58,6 +60,8 @@ impl std::fmt::Display for InterpretName {
InterpretName::Lamp => f.write_str("LAMP"),
InterpretName::ApplicationMonitoring => f.write_str("ApplicationMonitoring"),
InterpretName::K8sPrometheusCrdAlerting => f.write_str("K8sPrometheusCrdAlerting"),
InterpretName::DiscoverInventoryAgent => f.write_str("DiscoverInventoryAgent"),
InterpretName::CephClusterHealth => f.write_str("CephClusterHealth"),
}
}
}

View File

@@ -74,6 +74,7 @@ impl<T: Topology> Maestro<T> {
fn is_topology_initialized(&self) -> bool {
self.topology_state.status == TopologyStatus::Success
|| self.topology_state.status == TopologyStatus::Noop
}
pub async fn interpret(&self, score: Box<dyn Score<T>>) -> Result<Outcome, InterpretError> {

View File

@@ -1,9 +1,12 @@
use async_trait::async_trait;
use harmony_macros::ip;
use harmony_types::net::MacAddress;
use log::debug;
use log::info;
use crate::data::FileContent;
use crate::executors::ExecutorError;
use crate::topology::PxeOptions;
use super::DHCPStaticEntry;
use super::DhcpServer;
@@ -49,9 +52,10 @@ impl Topology for HAClusterTopology {
"HAClusterTopology"
}
async fn ensure_ready(&self) -> Result<PreparationOutcome, PreparationError> {
todo!(
debug!(
"ensure_ready, not entirely sure what it should do here, probably something like verify that the hosts are reachable and all services are up and ready."
)
);
Ok(PreparationOutcome::Noop)
}
}
@@ -153,12 +157,10 @@ impl DhcpServer for HAClusterTopology {
async fn list_static_mappings(&self) -> Vec<(MacAddress, IpAddress)> {
self.dhcp_server.list_static_mappings().await
}
async fn set_next_server(&self, ip: IpAddress) -> Result<(), ExecutorError> {
self.dhcp_server.set_next_server(ip).await
}
async fn set_boot_filename(&self, boot_filename: &str) -> Result<(), ExecutorError> {
self.dhcp_server.set_boot_filename(boot_filename).await
async fn set_pxe_options(&self, options: PxeOptions) -> Result<(), ExecutorError> {
self.dhcp_server.set_pxe_options(options).await
}
fn get_ip(&self) -> IpAddress {
self.dhcp_server.get_ip()
}
@@ -168,16 +170,6 @@ impl DhcpServer for HAClusterTopology {
async fn commit_config(&self) -> Result<(), ExecutorError> {
self.dhcp_server.commit_config().await
}
async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError> {
self.dhcp_server.set_filename(filename).await
}
async fn set_filename64(&self, filename64: &str) -> Result<(), ExecutorError> {
self.dhcp_server.set_filename64(filename64).await
}
async fn set_filenameipxe(&self, filenameipxe: &str) -> Result<(), ExecutorError> {
self.dhcp_server.set_filenameipxe(filenameipxe).await
}
}
#[async_trait]
@@ -221,17 +213,21 @@ impl HttpServer for HAClusterTopology {
self.http_server.serve_files(url).await
}
async fn serve_file_content(&self, file: &FileContent) -> Result<(), ExecutorError> {
self.http_server.serve_file_content(file).await
}
fn get_ip(&self) -> IpAddress {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
self.http_server.get_ip()
}
async fn ensure_initialized(&self) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
self.http_server.ensure_initialized().await
}
async fn commit_config(&self) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
self.http_server.commit_config().await
}
async fn reload_restart(&self) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
self.http_server.reload_restart().await
}
}
@@ -241,7 +237,7 @@ pub struct DummyInfra;
#[async_trait]
impl Topology for DummyInfra {
fn name(&self) -> &str {
todo!()
"DummyInfra"
}
async fn ensure_ready(&self) -> Result<PreparationOutcome, PreparationError> {
@@ -299,19 +295,7 @@ impl DhcpServer for DummyInfra {
async fn list_static_mappings(&self) -> Vec<(MacAddress, IpAddress)> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn set_next_server(&self, _ip: IpAddress) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn set_boot_filename(&self, _boot_filename: &str) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn set_filename(&self, _filename: &str) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn set_filename64(&self, _filename: &str) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn set_filenameipxe(&self, _filenameipxe: &str) -> Result<(), ExecutorError> {
async fn set_pxe_options(&self, _options: PxeOptions) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
fn get_ip(&self) -> IpAddress {
@@ -381,6 +365,9 @@ impl HttpServer for DummyInfra {
async fn serve_files(&self, _url: &Url) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
async fn serve_file_content(&self, _file: &FileContent) -> Result<(), ExecutorError> {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}
fn get_ip(&self) -> IpAddress {
unimplemented!("{}", UNIMPLEMENTED_DUMMY_INFRA)
}

View File

@@ -1,4 +1,4 @@
use crate::executors::ExecutorError;
use crate::{data::FileContent, executors::ExecutorError};
use async_trait::async_trait;
use super::{IpAddress, Url};
@@ -6,6 +6,7 @@ use super::{IpAddress, Url};
#[async_trait]
pub trait HttpServer: Send + Sync {
async fn serve_files(&self, url: &Url) -> Result<(), ExecutorError>;
async fn serve_file_content(&self, file: &FileContent) -> Result<(), ExecutorError>;
fn get_ip(&self) -> IpAddress;
// async fn set_ip(&self, ip: IpAddress) -> Result<(), ExecutorError>;

View File

@@ -185,7 +185,10 @@ impl K8sClient {
if let Some(s) = status.status {
let mut stdout_buf = String::new();
if let Some(mut stdout) = process.stdout().take() {
stdout.read_to_string(&mut stdout_buf).await;
stdout
.read_to_string(&mut stdout_buf)
.await
.map_err(|e| format!("Failed to get status stdout {e}"))?;
}
debug!("Status: {} - {:?}", s, status.details);
if s == "Success" {

View File

@@ -46,16 +46,19 @@ pub trait K8sclient: Send + Sync {
async fn k8s_client(&self) -> Result<Arc<K8sClient>, String>;
}
pub struct PxeOptions {
pub ipxe_filename: String,
pub bios_filename: String,
pub efi_filename: String,
pub tftp_ip: Option<IpAddress>,
}
#[async_trait]
pub trait DhcpServer: Send + Sync + std::fmt::Debug {
async fn add_static_mapping(&self, entry: &DHCPStaticEntry) -> Result<(), ExecutorError>;
async fn remove_static_mapping(&self, mac: &MacAddress) -> Result<(), ExecutorError>;
async fn list_static_mappings(&self) -> Vec<(MacAddress, IpAddress)>;
async fn set_next_server(&self, ip: IpAddress) -> Result<(), ExecutorError>;
async fn set_boot_filename(&self, boot_filename: &str) -> Result<(), ExecutorError>;
async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError>;
async fn set_filename64(&self, filename64: &str) -> Result<(), ExecutorError>;
async fn set_filenameipxe(&self, filenameipxe: &str) -> Result<(), ExecutorError>;
async fn set_pxe_options(&self, pxe_options: PxeOptions) -> Result<(), ExecutorError>;
fn get_ip(&self) -> IpAddress;
fn get_host(&self) -> LogicalHost;
async fn commit_config(&self) -> Result<(), ExecutorError>;

View File

@@ -1,10 +1,10 @@
use async_trait::async_trait;
use harmony_types::net::MacAddress;
use log::debug;
use log::info;
use crate::{
executors::ExecutorError,
topology::{DHCPStaticEntry, DhcpServer, IpAddress, LogicalHost},
topology::{DHCPStaticEntry, DhcpServer, IpAddress, LogicalHost, PxeOptions},
};
use super::OPNSenseFirewall;
@@ -26,7 +26,7 @@ impl DhcpServer for OPNSenseFirewall {
.unwrap();
}
debug!("Registered {:?}", entry);
info!("Registered {:?}", entry);
Ok(())
}
@@ -46,57 +46,25 @@ impl DhcpServer for OPNSenseFirewall {
self.host.clone()
}
async fn set_next_server(&self, ip: IpAddress) -> Result<(), ExecutorError> {
let ipv4 = match ip {
std::net::IpAddr::V4(ipv4_addr) => ipv4_addr,
std::net::IpAddr::V6(_) => todo!("ipv6 not supported yet"),
};
{
let mut writable_opnsense = self.opnsense_config.write().await;
writable_opnsense.dhcp().set_next_server(ipv4);
debug!("OPNsense dhcp server set next server {ipv4}");
}
Ok(())
}
async fn set_boot_filename(&self, boot_filename: &str) -> Result<(), ExecutorError> {
{
let mut writable_opnsense = self.opnsense_config.write().await;
writable_opnsense.dhcp().set_boot_filename(boot_filename);
debug!("OPNsense dhcp server set boot filename {boot_filename}");
}
Ok(())
}
async fn set_filename(&self, filename: &str) -> Result<(), ExecutorError> {
{
let mut writable_opnsense = self.opnsense_config.write().await;
writable_opnsense.dhcp().set_filename(filename);
debug!("OPNsense dhcp server set filename {filename}");
}
Ok(())
}
async fn set_filename64(&self, filename: &str) -> Result<(), ExecutorError> {
{
let mut writable_opnsense = self.opnsense_config.write().await;
writable_opnsense.dhcp().set_filename64(filename);
debug!("OPNsense dhcp server set filename {filename}");
}
Ok(())
}
async fn set_filenameipxe(&self, filenameipxe: &str) -> Result<(), ExecutorError> {
{
let mut writable_opnsense = self.opnsense_config.write().await;
writable_opnsense.dhcp().set_filenameipxe(filenameipxe);
debug!("OPNsense dhcp server set filenameipxe {filenameipxe}");
}
Ok(())
async fn set_pxe_options(&self, options: PxeOptions) -> Result<(), ExecutorError> {
let mut writable_opnsense = self.opnsense_config.write().await;
let PxeOptions {
ipxe_filename,
bios_filename,
efi_filename,
tftp_ip,
} = options;
writable_opnsense
.dhcp()
.set_pxe_options(
tftp_ip.map(|i| i.to_string()),
bios_filename,
efi_filename,
ipxe_filename,
)
.await
.map_err(|dhcp_error| {
ExecutorError::UnexpectedError(format!("Failed to set_pxe_options : {dhcp_error}"))
})
}
}

View File

@@ -2,23 +2,23 @@ use async_trait::async_trait;
use log::info;
use crate::{
data::FileContent,
executors::ExecutorError,
topology::{HttpServer, IpAddress, Url},
};
use super::OPNSenseFirewall;
const OPNSENSE_HTTP_ROOT_PATH: &str = "/usr/local/http";
#[async_trait]
impl HttpServer for OPNSenseFirewall {
async fn serve_files(&self, url: &Url) -> Result<(), ExecutorError> {
let http_root_path = "/usr/local/http";
let config = self.opnsense_config.read().await;
info!("Uploading files from url {url} to {http_root_path}");
info!("Uploading files from url {url} to {OPNSENSE_HTTP_ROOT_PATH}");
match url {
Url::LocalFolder(path) => {
config
.upload_files(path, http_root_path)
.upload_files(path, OPNSENSE_HTTP_ROOT_PATH)
.await
.map_err(|e| ExecutorError::UnexpectedError(e.to_string()))?;
}
@@ -27,8 +27,29 @@ impl HttpServer for OPNSenseFirewall {
Ok(())
}
async fn serve_file_content(&self, file: &FileContent) -> Result<(), ExecutorError> {
let path = match &file.path {
crate::data::FilePath::Relative(path) => {
format!("{OPNSENSE_HTTP_ROOT_PATH}/{}", path.to_string())
}
crate::data::FilePath::Absolute(path) => {
return Err(ExecutorError::ConfigurationError(format!(
"Cannot serve file from http server with absolute path : {path}"
)));
}
};
let config = self.opnsense_config.read().await;
info!("Uploading file content to {}", path);
config
.upload_file_content(&path, &file.content)
.await
.map_err(|e| ExecutorError::UnexpectedError(e.to_string()))?;
Ok(())
}
fn get_ip(&self) -> IpAddress {
todo!();
OPNSenseFirewall::get_ip(self)
}
async fn commit_config(&self) -> Result<(), ExecutorError> {

View File

@@ -28,7 +28,7 @@ impl TftpServer for OPNSenseFirewall {
}
fn get_ip(&self) -> IpAddress {
todo!()
OPNSenseFirewall::get_ip(self)
}
async fn set_ip(&self, ip: IpAddress) -> Result<(), ExecutorError> {

View File

@@ -7,7 +7,7 @@ use crate::{
domain::{data::Version, interpret::InterpretStatus},
interpret::{Interpret, InterpretError, InterpretName, Outcome},
inventory::Inventory,
topology::{DHCPStaticEntry, DhcpServer, HostBinding, IpAddress, Topology},
topology::{DHCPStaticEntry, DhcpServer, HostBinding, IpAddress, PxeOptions, Topology},
};
use crate::domain::score::Score;
@@ -98,69 +98,14 @@ impl DhcpInterpret {
_inventory: &Inventory,
dhcp_server: &D,
) -> Result<Outcome, InterpretError> {
let next_server_outcome = match self.score.next_server {
Some(next_server) => {
dhcp_server.set_next_server(next_server).await?;
Outcome::new(
InterpretStatus::SUCCESS,
format!("Dhcp Interpret Set next boot to {next_server}"),
)
}
None => Outcome::noop(),
let pxe_options = PxeOptions {
ipxe_filename: self.score.filenameipxe.clone().unwrap_or_default(),
bios_filename: self.score.filename.clone().unwrap_or_default(),
efi_filename: self.score.filename64.clone().unwrap_or_default(),
tftp_ip: self.score.next_server,
};
let boot_filename_outcome = match &self.score.boot_filename {
Some(boot_filename) => {
dhcp_server.set_boot_filename(boot_filename).await?;
Outcome::new(
InterpretStatus::SUCCESS,
format!("Dhcp Interpret Set boot filename to {boot_filename}"),
)
}
None => Outcome::noop(),
};
let filename_outcome = match &self.score.filename {
Some(filename) => {
dhcp_server.set_filename(filename).await?;
Outcome::new(
InterpretStatus::SUCCESS,
format!("Dhcp Interpret Set filename to {filename}"),
)
}
None => Outcome::noop(),
};
let filename64_outcome = match &self.score.filename64 {
Some(filename64) => {
dhcp_server.set_filename64(filename64).await?;
Outcome::new(
InterpretStatus::SUCCESS,
format!("Dhcp Interpret Set filename64 to {filename64}"),
)
}
None => Outcome::noop(),
};
let filenameipxe_outcome = match &self.score.filenameipxe {
Some(filenameipxe) => {
dhcp_server.set_filenameipxe(filenameipxe).await?;
Outcome::new(
InterpretStatus::SUCCESS,
format!("Dhcp Interpret Set filenameipxe to {filenameipxe}"),
)
}
None => Outcome::noop(),
};
if next_server_outcome.status == InterpretStatus::NOOP
&& boot_filename_outcome.status == InterpretStatus::NOOP
&& filename_outcome.status == InterpretStatus::NOOP
&& filename64_outcome.status == InterpretStatus::NOOP
&& filenameipxe_outcome.status == InterpretStatus::NOOP
{
return Ok(Outcome::noop());
}
dhcp_server.set_pxe_options(pxe_options).await?;
Ok(Outcome::new(
InterpretStatus::SUCCESS,

View File

@@ -3,7 +3,7 @@ use derive_new::new;
use serde::Serialize;
use crate::{
data::{Id, Version},
data::{FileContent, Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
score::Score,
@@ -23,7 +23,8 @@ use crate::{
/// ```
#[derive(Debug, new, Clone, Serialize)]
pub struct StaticFilesHttpScore {
files_to_serve: Url,
pub folder_to_serve: Option<Url>,
pub files: Vec<FileContent>,
}
impl<T: Topology + HttpServer> Score<T> for StaticFilesHttpScore {
@@ -50,12 +51,25 @@ impl<T: Topology + HttpServer> Interpret<T> for StaticFilesHttpInterpret {
) -> Result<Outcome, InterpretError> {
http_server.ensure_initialized().await?;
// http_server.set_ip(topology.router.get_gateway()).await?;
http_server.serve_files(&self.score.files_to_serve).await?;
if let Some(folder) = self.score.folder_to_serve.as_ref() {
http_server.serve_files(folder).await?;
}
for f in self.score.files.iter() {
http_server.serve_file_content(&f).await?
}
http_server.commit_config().await?;
http_server.reload_restart().await?;
Ok(Outcome::success(format!(
"Http Server running and serving files from {}",
self.score.files_to_serve
"Http Server running and serving files from folder {:?} and content for {}",
self.score.folder_to_serve,
self.score
.files
.iter()
.map(|f| f.path.to_string())
.collect::<Vec<String>>()
.join(",")
)))
}

View File

@@ -0,0 +1,72 @@
use async_trait::async_trait;
use harmony_inventory_agent::local_presence::DiscoveryEvent;
use log::info;
use serde::{Deserialize, Serialize};
use crate::{
data::{Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
score::Score,
topology::Topology,
};
/// Launches a harmony_inventory_agent discovery process.
/// This allows hosts running harmony_inventory_agent on the local LAN to be
/// registered or updated in the Harmony inventory.
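/// A minimal construction sketch (illustrative values; the timeout unit is whatever
/// discover_agents expects, which is an assumption here, and the import path assumes
/// the module layout shown in this change):
/// ```ignore
/// use harmony::modules::inventory::DiscoverInventoryAgentScore;
///
/// let score = DiscoverInventoryAgentScore {
///     discovery_timeout: Some(30),
/// };
/// // The score is then registered and interpreted like any other Harmony score.
/// ```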
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DiscoverInventoryAgentScore {
pub discovery_timeout: Option<u64>,
}
impl<T: Topology> Score<T> for DiscoverInventoryAgentScore {
fn name(&self) -> String {
"DiscoverInventoryAgentScore".to_string()
}
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(DiscoverInventoryAgentInterpret {
score: self.clone(),
})
}
}
#[derive(Debug)]
struct DiscoverInventoryAgentInterpret {
score: DiscoverInventoryAgentScore,
}
#[async_trait]
impl<T: Topology> Interpret<T> for DiscoverInventoryAgentInterpret {
async fn execute(
&self,
inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
harmony_inventory_agent::local_presence::discover_agents(
self.score.discovery_timeout,
on_discover_event,
);
todo!()
}
fn get_name(&self) -> InterpretName {
InterpretName::DiscoverInventoryAgent
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
fn on_discover_event(event: &DiscoveryEvent) {
info!("got discovery event {event:?}");
}

View File

@@ -5,6 +5,7 @@ pub mod dns;
pub mod dummy;
pub mod helm;
pub mod http;
pub mod inventory;
pub mod ipxe;
pub mod k3d;
pub mod k8s;

View File

@@ -37,18 +37,6 @@ pub struct NtfyInterpret {
pub score: NtfyScore,
}
#[derive(Debug, EnumString, Display)]
enum NtfyAccessMode {
#[strum(serialize = "read-write", serialize = "rw")]
ReadWrite,
#[strum(serialize = "read-only", serialize = "ro", serialize = "read")]
ReadOnly,
#[strum(serialize = "write-only", serialize = "wo", serialize = "write")]
WriteOnly,
#[strum(serialize = "deny", serialize = "none")]
Deny,
}
#[derive(Debug, EnumString, Display)]
enum NtfyRole {
#[strum(serialize = "user")]

View File

@@ -0,0 +1,148 @@
use askama::Template;
use async_trait::async_trait;
use derive_new::new;
use serde::Serialize;
use std::net::IpAddr;
use crate::{
data::{FileContent, FilePath, Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
modules::{dhcp::DhcpScore, http::StaticFilesHttpScore, tftp::TftpScore},
score::Score,
topology::{DhcpServer, HttpServer, Router, TftpServer, Topology, Url},
};
#[derive(Debug, new, Clone, Serialize)]
pub struct OkdIpxeScore {
pub kickstart_filename: String,
pub harmony_inventory_agent: String,
pub cluster_pubkey_filename: String,
}
impl<T: Topology + DhcpServer + TftpServer + HttpServer + Router> Score<T> for OkdIpxeScore {
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(IpxeInterpret::new(self.clone()))
}
fn name(&self) -> String {
"OkdIpxeScore".to_string()
}
}
#[derive(Debug, new, Clone)]
pub struct IpxeInterpret {
score: OkdIpxeScore,
}
#[async_trait]
impl<T: Topology + DhcpServer + TftpServer + HttpServer + Router> Interpret<T> for IpxeInterpret {
async fn execute(
&self,
inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
let gateway_ip = topology.get_gateway();
let scores: Vec<Box<dyn Score<T>>> = vec![
Box::new(DhcpScore {
host_binding: vec![],
next_server: Some(topology.get_gateway()),
boot_filename: None,
filename: Some("undionly.kpxe".to_string()),
filename64: Some("ipxe.efi".to_string()),
filenameipxe: Some(format!("http://{gateway_ip}:8080/boot.ipxe")),
}),
Box::new(TftpScore {
files_to_serve: Url::LocalFolder("./data/pxe/okd/tftpboot/".to_string()),
}),
Box::new(StaticFilesHttpScore {
// TODO The current russh based copy is way too slow, check for a lib update or use scp
// when available
//
// For now just run :
// scp -r data/pxe/okd/http_files/* root@192.168.1.1:/usr/local/http/
//
folder_to_serve: None,
// folder_to_serve: Some(Url::LocalFolder("./data/pxe/okd/http_files/".to_string())),
files: vec![
FileContent {
path: FilePath::Relative("boot.ipxe".to_string()),
content: BootIpxeTpl {
gateway_ip: &gateway_ip,
}
.to_string(),
},
FileContent {
path: FilePath::Relative(self.score.kickstart_filename.clone()),
content: InventoryKickstartTpl {
gateway_ip: &gateway_ip,
harmony_inventory_agent: &self.score.harmony_inventory_agent,
cluster_pubkey_filename: &self.score.cluster_pubkey_filename,
}
.to_string(),
},
FileContent {
path: FilePath::Relative("fallback.ipxe".to_string()),
content: FallbackIpxeTpl {
gateway_ip: &gateway_ip,
kickstart_filename: &self.score.kickstart_filename,
}
.to_string(),
},
],
}),
];
for score in scores {
let result = score.interpret(inventory, topology).await;
match result {
Ok(outcome) => match outcome.status {
InterpretStatus::SUCCESS => continue,
InterpretStatus::NOOP => continue,
_ => return Err(InterpretError::new(outcome.message)),
},
Err(e) => return Err(e),
};
}
Ok(Outcome::success("Ipxe installed".to_string()))
}
fn get_name(&self) -> InterpretName {
InterpretName::Ipxe
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
#[derive(Template)]
#[template(path = "boot.ipxe.j2")]
struct BootIpxeTpl<'a> {
gateway_ip: &'a IpAddr,
}
#[derive(Template)]
#[template(path = "fallback.ipxe.j2")]
struct FallbackIpxeTpl<'a> {
gateway_ip: &'a IpAddr,
kickstart_filename: &'a str,
}
#[derive(Template)]
#[template(path = "inventory.kickstart.j2")]
struct InventoryKickstartTpl<'a> {
gateway_ip: &'a IpAddr,
cluster_pubkey_filename: &'a str,
harmony_inventory_agent: &'a str,
}

View File

@@ -2,5 +2,6 @@ pub mod bootstrap_dhcp;
pub mod bootstrap_load_balancer;
pub mod dhcp;
pub mod dns;
pub mod ipxe;
pub mod load_balancer;
pub mod upgrade;

View File

@@ -1,5 +1,4 @@
use std::{
process::Command,
sync::Arc,
time::{Duration, Instant},
};

View File

@@ -0,0 +1,136 @@
use std::{sync::Arc, time::Duration};
use async_trait::async_trait;
use log::debug;
use serde::Serialize;
use tokio::time::Instant;
use crate::{
data::{Id, Version},
interpret::{Interpret, InterpretError, InterpretName, InterpretStatus, Outcome},
inventory::Inventory,
score::Score,
topology::{K8sclient, Topology, k8s::K8sClient},
};
#[derive(Clone, Debug, Serialize)]
pub struct CephVerifyClusterHealth {
pub rook_ceph_namespace: String,
}
impl<T: Topology + K8sclient> Score<T> for CephVerifyClusterHealth {
fn name(&self) -> String {
format!("CephValidateClusterHealth")
}
fn create_interpret(&self) -> Box<dyn Interpret<T>> {
Box::new(CephVerifyClusterHealthInterpret {
score: self.clone(),
})
}
}
#[derive(Clone, Debug)]
pub struct CephVerifyClusterHealthInterpret {
score: CephVerifyClusterHealth,
}
#[async_trait]
impl<T: Topology + K8sclient> Interpret<T> for CephVerifyClusterHealthInterpret {
async fn execute(
&self,
_inventory: &Inventory,
topology: &T,
) -> Result<Outcome, InterpretError> {
let client = topology.k8s_client().await.unwrap();
self.verify_ceph_toolbox_exists(client.clone()).await?;
self.validate_ceph_cluster_health(client.clone()).await?;
Ok(Outcome::success("Ceph cluster healthy".to_string()))
}
fn get_name(&self) -> InterpretName {
InterpretName::CephClusterHealth
}
fn get_version(&self) -> Version {
todo!()
}
fn get_status(&self) -> InterpretStatus {
todo!()
}
fn get_children(&self) -> Vec<Id> {
todo!()
}
}
impl CephVerifyClusterHealthInterpret {
pub async fn verify_ceph_toolbox_exists(
&self,
client: Arc<K8sClient>,
) -> Result<Outcome, InterpretError> {
let toolbox_dep = "rook-ceph-tools".to_string();
match client
.get_deployment(&toolbox_dep, Some(&self.score.rook_ceph_namespace))
.await
{
Ok(Some(deployment)) => {
if let Some(status) = deployment.status {
let ready_count = status.ready_replicas.unwrap_or(0);
if ready_count >= 1 {
return Ok(Outcome::success(format!(
"'{}' is ready with {} replica(s).",
&toolbox_dep, ready_count
)));
} else {
return Err(InterpretError::new(
"ceph-tool-box not ready in cluster".to_string(),
));
}
} else {
Err(InterpretError::new(format!(
"failed to get deployment status {}",
&toolbox_dep
)))
}
}
Ok(None) => Err(InterpretError::new(format!(
"Deployment '{}' not found in namespace '{}'.",
&toolbox_dep, self.score.rook_ceph_namespace
))),
Err(e) => Err(InterpretError::new(format!(
"Failed to query for deployment '{}': {}",
&toolbox_dep, e
))),
}
}
pub async fn validate_ceph_cluster_health(
&self,
client: Arc<K8sClient>,
) -> Result<Outcome, InterpretError> {
debug!("Verifying ceph cluster is in healthy state");
let health = client
.exec_app_capture_output(
"rook-ceph-tools".to_string(),
"app".to_string(),
Some(&self.score.rook_ceph_namespace),
vec!["sh", "-c", "ceph health"],
)
.await?;
if health.contains("HEALTH_OK") {
return Ok(Outcome::success(
"Ceph Cluster in healthy state".to_string(),
));
} else {
Err(InterpretError::new(format!(
"Ceph cluster unhealthy {}",
health
)))
}
}
}

View File

@@ -1 +1,2 @@
pub mod ceph_osd_replacement_score;
pub mod ceph_validate_health_score;

View File

@@ -12,7 +12,7 @@ use crate::{
#[derive(Debug, new, Clone, Serialize)]
pub struct TftpScore {
files_to_serve: Url,
pub files_to_serve: Url,
}
impl<T: Topology + TftpServer + Router> Score<T> for TftpScore {

View File

@@ -0,0 +1,6 @@
#!ipxe
set base-url http://{{ gateway_ip }}:8080
set hostfile ${base-url}/byMAC/01-${mac:hexhyp}.ipxe
chain ${hostfile} || chain ${base-url}/fallback.ipxe

View File

@@ -0,0 +1,40 @@
#!ipxe
# =================================================================
# Harmony Discovery Agent - Fallback Boot Script (fallback.ipxe)
# =================================================================
#
# This script boots the CentOS Stream live environment for the
# purpose of hardware inventory. It loads the kernel and initramfs
# directly and passes a Kickstart URL for full automation.
#
# --- Configuration
# Set the base URL for where the CentOS kernel/initrd are hosted.
set os_base_url http://{{gateway_ip}}:8080/os/centos-stream-9
# Set the URL for the Kickstart file.
set ks_url http://{{ gateway_ip }}:8080/{{ kickstart_filename }}
# --- Boot Process
echo "Harmony: Starting automated node discovery..."
echo "Fetching kernel from ${os_base_url}/vmlinuz..."
kernel ${os_base_url}/vmlinuz
echo "Fetching initramfs from ${os_base_url}/initrd.img..."
initrd ${os_base_url}/initrd.img
echo "Configuring kernel boot arguments..."
# Kernel Arguments Explained:
# - initrd=initrd.img: Specifies the initial ramdisk to use.
# - inst.stage2: Points to the OS source. For a live boot, the base URL is sufficient.
# - inst.ks: CRITICAL: Points to our Kickstart file for automation.
# - ip=dhcp: Ensures the live environment configures its network.
# - console=...: Provides boot output on both serial and graphical consoles for debugging.
imgargs vmlinuz initrd=initrd.img inst.sshd inst.stage2=${os_base_url} inst.ks=${ks_url} ip=dhcp console=ttyS0,115200 console=tty1
echo "Booting into CentOS Stream 9 live environment..."
boot || goto failed
:failed
echo "Boot failed. Dropping to iPXE shell."
shell

View File

@@ -0,0 +1,127 @@
# --- Pre-Boot Scripting (The Main Goal) ---
# This section runs after the live environment has booted into RAM.
# It sets up SSH and downloads/runs the harmony-inventory-agent.
%pre --log=/root/ks-pre.log
echo "Harmony Kickstart: Pre-boot script started."
# 1. Configure SSH Access for Root
# Create the .ssh directory and set correct permissions.
echo " - Setting up SSH authorized_keys for root..."
mkdir -p /root/.ssh
chmod 700 /root/.ssh
# Download the public key from the provisioning server.
# The -v flag makes curl verbose, -S prints errors on failure, and -L follows redirects.
curl -vSL "http://{{ gateway_ip }}:8080/{{ cluster_pubkey_filename }}" -o /root/.ssh/authorized_keys
if [ $? -ne 0 ]; then
echo " - ERROR: Failed to download SSH public key."
else
echo " - SSH key downloaded successfully."
chmod 600 /root/.ssh/authorized_keys
fi
# 2. Download the Harmony Inventory Agent
echo " - Downloading harmony-inventory-agent..."
curl -vSL "http://{{ gateway_ip }}:8080/{{ harmony_inventory_agent }}" -o /usr/bin/harmony-inventory-agent
if [ $? -ne 0 ]; then
echo " - ERROR: Failed to download harmony_inventory_agent."
else
echo " - Agent binary downloaded successfully."
chmod +x /usr/bin/harmony-inventory-agent
fi
# 3. Create a systemd service to run the agent persistently.
# This is the most robust method to ensure the agent stays running.
echo " - Creating systemd service for the agent..."
cat > /etc/systemd/system/harmony-agent.service << EOF
[Unit]
Description=Harmony Inventory Agent
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/usr/bin/harmony-inventory-agent
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
# 4. Enable and start the service
# The 'systemctl' commands run against the live installer environment; %pre executes
# there directly (outside any installation chroot), so the service starts immediately.
echo " - Enabling and starting harmony-agent.service..."
systemctl daemon-reload
systemctl enable --now harmony-agent.service
# Check if the service started correctly
systemctl is-active --quiet harmony-agent.service
if [ $? -eq 0 ]; then
echo " - Harmony Inventory Agent service is now running."
else
echo " - ERROR: Harmony Inventory Agent service failed to start."
fi
echo "Harmony Kickstart: Pre-boot script finished. The machine is ready for inventory."
echo "Running cat - to pause system indefinitely"
cat -
%end
# =================================================================
# Harmony Discovery Agent - Kickstart File (NON-INSTALL, LIVE BOOT)
# =================================================================
#
# This file achieves a fully automated, non-interactive boot into a
# live CentOS environment. It does NOT install to disk.
#
# --- Automation and Interaction Control ---
# Perform the installation in command-line mode. This is critical for
# preventing Anaconda from starting a UI and halting for input.
cmdline
# Accept the End User License Agreement to prevent a prompt.
eula --agreed
# --- Core System Configuration (Required by Anaconda) ---
# Set keyboard and language. These are mandatory.
keyboard --vckeymap=us --xlayouts='us'
lang en_US.UTF-8
# Configure networking. This is essential for the %post script to work.
# The --activate flag ensures this device is brought up in the installer environment.
network --bootproto=dhcp --device=link --activate
# Set a locked root password. This is a mandatory command.
rootpw --lock
# Set the timezone. This is a mandatory command.
timezone UTC
# --- Disable Installation-Specific Features ---
# CRITICAL: Do not install a bootloader. The --disabled flag prevents
# this step and avoids errors about where to install it.
bootloader --disabled
# CRITICAL: Ignore all disks. This prevents Anaconda from stopping at the
# "Installation Destination" screen asking where to install.
# ignoredisk --drives /dev/sda
# Do not run the Initial Setup wizard on first boot.
firstboot --disable
# --- Package Selection ---
# We are not installing, so this section can be minimal.
# An empty %packages section is valid and ensures no time is wasted
# resolving dependencies for an installation that will not happen.
%packages
%end
# IMPORTANT: Do not include a final action command like 'reboot' or 'poweroff'.
# The default action is 'halt', which in cmdline mode will leave the system
# running in the live environment with the agent active, which is the desired state.

View File

@@ -22,6 +22,7 @@ indicatif = "0.18.0"
lazy_static = "1.5.0"
log.workspace = true
indicatif-log-bridge = "0.2.3"
chrono.workspace = true
[dev-dependencies]
harmony = { path = "../harmony", features = ["testing"] }

View File

@@ -1,218 +1,191 @@
use chrono::Local;
use console::style;
use harmony::{
instrumentation::{self, HarmonyEvent},
modules::application::ApplicationFeatureStatus,
topology::TopologyStatus,
};
use indicatif::MultiProgress;
use indicatif_log_bridge::LogWrapper;
use log::error;
use std::{
sync::{Arc, Mutex},
thread,
time::Duration,
};
use log::{error, info, log_enabled};
use std::io::Write;
use std::sync::Mutex;
use crate::progress::{IndicatifProgressTracker, ProgressTracker};
pub fn init() -> tokio::task::JoinHandle<()> {
let base_progress = configure_logger();
let handle = tokio::spawn(handle_events(base_progress));
loop {
if instrumentation::instrument(HarmonyEvent::HarmonyStarted).is_ok() {
break;
}
}
handle
pub fn init() {
configure_logger();
handle_events();
}
fn configure_logger() -> MultiProgress {
let logger =
env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).build();
let level = logger.filter();
let progress = MultiProgress::new();
fn configure_logger() {
env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info"))
.format(|buf, record| {
let debug_mode = log_enabled!(log::Level::Debug);
let timestamp = Local::now().format("%Y-%m-%d %H:%M:%S");
LogWrapper::new(progress.clone(), logger)
.try_init()
.unwrap();
log::set_max_level(level);
progress
let level = match record.level() {
log::Level::Error => style("ERROR").red(),
log::Level::Warn => style("WARN").yellow(),
log::Level::Info => style("INFO").green(),
log::Level::Debug => style("DEBUG").blue(),
log::Level::Trace => style("TRACE").magenta(),
};
if let Some(status) = record.key_values().get(log::kv::Key::from("status")) {
let status = status.to_borrowed_str().unwrap();
let emoji = match status {
"finished" => style(crate::theme::EMOJI_SUCCESS.to_string()).green(),
"skipped" => style(crate::theme::EMOJI_SKIP.to_string()).yellow(),
"failed" => style(crate::theme::EMOJI_ERROR.to_string()).red(),
_ => style("".into()),
};
if debug_mode {
writeln!(
buf,
"[{} {:<5} {}] {} {}",
timestamp,
level,
record.target(),
emoji,
record.args()
)
} else {
writeln!(buf, "[{:<5}] {} {}", level, emoji, record.args())
}
} else if let Some(emoji) = record.key_values().get(log::kv::Key::from("emoji")) {
if debug_mode {
writeln!(
buf,
"[{} {:<5} {}] {} {}",
timestamp,
level,
record.target(),
emoji,
record.args()
)
} else {
writeln!(buf, "[{:<5}] {} {}", level, emoji, record.args())
}
} else if debug_mode {
writeln!(
buf,
"[{} {:<5} {}] {}",
timestamp,
level,
record.target(),
record.args()
)
} else {
writeln!(buf, "[{:<5}] {}", level, record.args())
}
})
.init();
}
async fn handle_events(base_progress: MultiProgress) {
let progress_tracker = Arc::new(IndicatifProgressTracker::new(base_progress.clone()));
let preparing_topology = Arc::new(Mutex::new(false));
let current_score: Arc<Mutex<Option<String>>> = Arc::new(Mutex::new(None));
fn handle_events() {
let preparing_topology = Mutex::new(false);
let current_score: Mutex<Option<String>> = Mutex::new(None);
instrumentation::subscribe("Harmony CLI Logger", {
move |event| {
let progress_tracker = Arc::clone(&progress_tracker);
let preparing_topology = Arc::clone(&preparing_topology);
let current_score = Arc::clone(&current_score);
let mut preparing_topology = preparing_topology.lock().unwrap();
let mut current_score = current_score.lock().unwrap();
async move {
let mut preparing_topology = preparing_topology.lock().unwrap();
let mut current_score = current_score.lock().unwrap();
match event {
HarmonyEvent::HarmonyStarted => {}
HarmonyEvent::HarmonyFinished => {
progress_tracker.add_section(
"harmony-summary",
&format!("\n{} Harmony completed\n\n", crate::theme::EMOJI_HARMONY),
match event {
HarmonyEvent::HarmonyStarted => {}
HarmonyEvent::HarmonyFinished => {
let emoji = crate::theme::EMOJI_HARMONY.to_string();
info!(emoji = emoji.as_str(); "Harmony completed");
}
HarmonyEvent::TopologyStateChanged {
topology,
status,
message,
} => match status {
TopologyStatus::Queued => {}
TopologyStatus::Preparing => {
let emoji = format!(
"{}",
style(crate::theme::EMOJI_TOPOLOGY.to_string()).yellow()
);
progress_tracker.add_section("harmony-finished", "\n\n");
thread::sleep(Duration::from_millis(200));
return false;
info!(emoji = emoji.as_str(); "Preparing environment: {topology}...");
(*preparing_topology) = true;
}
HarmonyEvent::TopologyStateChanged {
topology,
status,
message,
} => {
let section_key = topology_key(&topology);
match status {
TopologyStatus::Queued => {}
TopologyStatus::Preparing => {
progress_tracker.add_section(
&section_key,
&format!(
"\n{} Preparing environment: {topology}...",
crate::theme::EMOJI_TOPOLOGY
),
);
(*preparing_topology) = true;
}
TopologyStatus::Success => {
(*preparing_topology) = false;
progress_tracker.add_task(&section_key, "topology-success", "");
progress_tracker
.finish_task("topology-success", &message.unwrap_or("".into()));
}
TopologyStatus::Noop => {
(*preparing_topology) = false;
progress_tracker.add_task(&section_key, "topology-skip", "");
progress_tracker
.skip_task("topology-skip", &message.unwrap_or("".into()));
}
TopologyStatus::Error => {
progress_tracker.add_task(&section_key, "topology-error", "");
(*preparing_topology) = false;
progress_tracker
.fail_task("topology-error", &message.unwrap_or("".into()));
}
TopologyStatus::Success => {
(*preparing_topology) = false;
if let Some(message) = message {
info!(status = "finished"; "{message}");
}
}
HarmonyEvent::InterpretExecutionStarted {
execution_id: task_key,
topology,
interpret: _,
score,
message,
} => {
let is_key_topology = (*preparing_topology)
&& progress_tracker.contains_section(&topology_key(&topology));
let is_key_current_score = current_score.is_some()
&& progress_tracker
.contains_section(&score_key(&current_score.clone().unwrap()));
let is_key_score = progress_tracker.contains_section(&score_key(&score));
let section_key = if is_key_topology {
topology_key(&topology)
} else if is_key_current_score {
score_key(&current_score.clone().unwrap())
} else if is_key_score {
score_key(&score)
} else {
(*current_score) = Some(score.clone());
let key = score_key(&score);
progress_tracker.add_section(
&key,
&format!(
"{} Interpreting score: {score}...",
crate::theme::EMOJI_SCORE
),
);
key
};
progress_tracker.add_task(&section_key, &task_key, &message);
}
HarmonyEvent::InterpretExecutionFinished {
execution_id: task_key,
topology: _,
interpret: _,
score,
outcome,
} => {
if current_score.is_some() && current_score.clone().unwrap() == score {
(*current_score) = None;
}
match outcome {
Ok(outcome) => match outcome.status {
harmony::interpret::InterpretStatus::SUCCESS => {
progress_tracker.finish_task(&task_key, &outcome.message);
}
harmony::interpret::InterpretStatus::NOOP => {
progress_tracker.skip_task(&task_key, &outcome.message);
}
_ => progress_tracker.fail_task(&task_key, &outcome.message),
},
Err(err) => {
error!("Interpret error: {err}");
progress_tracker.fail_task(&task_key, &err.to_string());
}
TopologyStatus::Noop => {
(*preparing_topology) = false;
if let Some(message) = message {
info!(status = "skipped"; "{message}");
}
}
HarmonyEvent::ApplicationFeatureStateChanged {
topology: _,
application,
feature,
status,
} => {
if let Some(score) = &(*current_score) {
let section_key = score_key(score);
let task_key = app_feature_key(&application, &feature);
TopologyStatus::Error => {
(*preparing_topology) = false;
if let Some(message) = message {
error!(status = "failed"; "{message}");
}
}
},
HarmonyEvent::InterpretExecutionStarted {
execution_id: _,
topology: _,
interpret: _,
score,
message,
} => {
if *preparing_topology || current_score.is_some() {
info!("{message}");
} else {
(*current_score) = Some(score.clone());
let emoji = format!("{}", style(crate::theme::EMOJI_SCORE).blue());
info!(emoji = emoji.as_str(); "Interpreting score: {score}...");
}
}
HarmonyEvent::InterpretExecutionFinished {
execution_id: _,
topology: _,
interpret: _,
score,
outcome,
} => {
if current_score.is_some() && &current_score.clone().unwrap() == score {
(*current_score) = None;
}
match status {
ApplicationFeatureStatus::Installing => {
let message = format!("Feature '{}' installing...", feature);
progress_tracker.add_task(&section_key, &task_key, &message);
}
ApplicationFeatureStatus::Installed => {
let message = format!("Feature '{}' installed", feature);
progress_tracker.finish_task(&task_key, &message);
}
ApplicationFeatureStatus::Failed { details } => {
let message = format!(
"Feature '{}' installation failed: {}",
feature, details
);
progress_tracker.fail_task(&task_key, &message);
}
match outcome {
Ok(outcome) => match outcome.status {
harmony::interpret::InterpretStatus::SUCCESS => {
info!(status = "finished"; "{}", outcome.message);
}
harmony::interpret::InterpretStatus::NOOP => {
info!(status = "skipped"; "{}", outcome.message);
}
_ => {
error!(status = "failed"; "{}", outcome.message);
}
},
Err(err) => {
error!(status = "failed"; "{err}");
}
}
}
true
HarmonyEvent::ApplicationFeatureStateChanged {
topology: _,
application,
feature,
status,
} => match status {
ApplicationFeatureStatus::Installing => {
info!("Installing feature '{feature}' for '{application}'...");
}
ApplicationFeatureStatus::Installed => {
info!(status = "finished"; "Feature '{feature}' installed");
}
ApplicationFeatureStatus::Failed { details } => {
error!(status = "failed"; "Feature '{feature}' installation failed: {details}");
}
},
}
}
})
.await;
}
fn topology_key(topology: &str) -> String {
format!("topology-{topology}")
}
fn score_key(score: &str) -> String {
format!("score-{score}")
}
fn app_feature_key(application: &str, feature: &str) -> String {
format!("app-{application}-{feature}")
});
}

View File

@@ -90,38 +90,46 @@ pub async fn run<T: Topology + Send + Sync + 'static>(
topology: T,
scores: Vec<Box<dyn Score<T>>>,
args_struct: Option<Args>,
) -> Result<(), Box<dyn std::error::Error>> {
let cli_logger_handle = cli_logger::init();
let mut maestro = Maestro::initialize(inventory, topology).await.unwrap();
maestro.register_all(scores);
let result = init(maestro, args_struct).await;
instrumentation::instrument(instrumentation::HarmonyEvent::HarmonyFinished).unwrap();
let _ = tokio::try_join!(cli_logger_handle);
result
}
async fn init<T: Topology + Send + Sync + 'static>(
maestro: harmony::maestro::Maestro<T>,
args_struct: Option<Args>,
) -> Result<(), Box<dyn std::error::Error>> {
let args = match args_struct {
Some(args) => args,
None => Args::parse(),
};
#[cfg(feature = "tui")]
if args.interactive {
return harmony_tui::init(maestro).await;
}
#[cfg(not(feature = "tui"))]
if args.interactive {
return Err("Not compiled with interactive support".into());
}
#[cfg(feature = "tui")]
if args.interactive {
return harmony_tui::run(inventory, topology, scores).await;
}
run_cli(inventory, topology, scores, args).await
}
pub async fn run_cli<T: Topology + Send + Sync + 'static>(
inventory: Inventory,
topology: T,
scores: Vec<Box<dyn Score<T>>>,
args: Args,
) -> Result<(), Box<dyn std::error::Error>> {
cli_logger::init();
let mut maestro = Maestro::initialize(inventory, topology).await.unwrap();
maestro.register_all(scores);
let result = init(maestro, args).await;
instrumentation::instrument(instrumentation::HarmonyEvent::HarmonyFinished).unwrap();
result
}
async fn init<T: Topology + Send + Sync + 'static>(
maestro: harmony::maestro::Maestro<T>,
args: Args,
) -> Result<(), Box<dyn std::error::Error>> {
let _ = env_logger::builder().try_init();
let scores_vec = maestro_scores_filter(&maestro, args.all, args.filter, args.number);
@@ -193,14 +201,14 @@ mod tests {
let maestro = init_test_maestro();
let res = crate::init(
maestro,
Some(crate::Args {
crate::Args {
yes: true,
filter: Some("SuccessScore".to_owned()),
interactive: false,
all: true,
number: 0,
list: false,
}),
},
)
.await;
@@ -213,14 +221,14 @@ mod tests {
let res = crate::init(
maestro,
Some(crate::Args {
crate::Args {
yes: true,
filter: Some("ErrorScore".to_owned()),
interactive: false,
all: true,
number: 0,
list: false,
}),
},
)
.await;
@@ -233,14 +241,14 @@ mod tests {
let res = crate::init(
maestro,
Some(crate::Args {
crate::Args {
yes: true,
filter: None,
interactive: false,
all: false,
number: 0,
list: false,
}),
},
)
.await;

View File

@@ -1,82 +1,66 @@
use harmony_cli::progress::{IndicatifProgressTracker, ProgressTracker};
use indicatif::MultiProgress;
use std::sync::Arc;
use crate::instrumentation::{self, HarmonyComposerEvent};
pub fn init() -> tokio::task::JoinHandle<()> {
pub fn init() {
configure_logger();
let handle = tokio::spawn(handle_events());
loop {
if instrumentation::instrument(HarmonyComposerEvent::HarmonyComposerStarted).is_ok() {
break;
}
}
handle
handle_events();
}
fn configure_logger() {
env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).build();
}
pub async fn handle_events() {
let progress_tracker = Arc::new(IndicatifProgressTracker::new(MultiProgress::new()));
pub fn handle_events() {
let progress_tracker = IndicatifProgressTracker::new(MultiProgress::new());
const SETUP_SECTION: &str = "project-initialization";
const COMPILTATION_TASK: &str = "compilation";
const PROGRESS_DEPLOYMENT: &str = "deployment";
instrumentation::subscribe("Harmony Composer Logger", {
move |event| {
let progress_tracker = Arc::clone(&progress_tracker);
async move {
match event {
HarmonyComposerEvent::HarmonyComposerStarted => {}
HarmonyComposerEvent::ProjectInitializationStarted => {
progress_tracker.add_section(
SETUP_SECTION,
&format!(
"{} Initializing Harmony project...",
harmony_cli::theme::EMOJI_HARMONY,
),
);
}
HarmonyComposerEvent::ProjectInitialized => {}
HarmonyComposerEvent::ProjectCompilationStarted { details } => {
progress_tracker.add_task(SETUP_SECTION, COMPILTATION_TASK, &details);
}
HarmonyComposerEvent::ProjectCompiled => {
progress_tracker.finish_task(COMPILTATION_TASK, "project compiled");
}
HarmonyComposerEvent::ProjectCompilationFailed { details } => {
progress_tracker.fail_task(COMPILTATION_TASK, &format!("failed to compile project:\n{details}"));
}
HarmonyComposerEvent::DeploymentStarted { target, profile } => {
progress_tracker.add_section(
PROGRESS_DEPLOYMENT,
&format!(
"\n{} Deploying project on target '{target}' with profile '{profile}'...\n",
harmony_cli::theme::EMOJI_DEPLOY,
),
);
}
HarmonyComposerEvent::DeploymentCompleted => {
progress_tracker.clear();
}
HarmonyComposerEvent::DeploymentFailed { details } => {
progress_tracker.add_task(PROGRESS_DEPLOYMENT, "deployment-failed", "");
progress_tracker.fail_task("deployment-failed", &details);
},
HarmonyComposerEvent::Shutdown => {
return false;
}
}
true
move |event| match event {
HarmonyComposerEvent::HarmonyComposerStarted => {}
HarmonyComposerEvent::ProjectInitializationStarted => {
progress_tracker.add_section(
SETUP_SECTION,
&format!(
"{} Initializing Harmony project...",
harmony_cli::theme::EMOJI_HARMONY,
),
);
}
HarmonyComposerEvent::ProjectInitialized => {}
HarmonyComposerEvent::ProjectCompilationStarted { details } => {
progress_tracker.add_task(SETUP_SECTION, COMPILTATION_TASK, details);
}
HarmonyComposerEvent::ProjectCompiled => {
progress_tracker.finish_task(COMPILTATION_TASK, "project compiled");
}
HarmonyComposerEvent::ProjectCompilationFailed { details } => {
progress_tracker.fail_task(
COMPILTATION_TASK,
&format!("failed to compile project:\n{details}"),
);
}
HarmonyComposerEvent::DeploymentStarted { target, profile } => {
progress_tracker.add_section(
PROGRESS_DEPLOYMENT,
&format!(
"\n{} Deploying project on target '{target}' with profile '{profile}'...\n",
harmony_cli::theme::EMOJI_DEPLOY,
),
);
}
HarmonyComposerEvent::DeploymentCompleted => {
progress_tracker.clear();
}
HarmonyComposerEvent::DeploymentFailed { details } => {
progress_tracker.add_task(PROGRESS_DEPLOYMENT, "deployment-failed", "");
progress_tracker.fail_task("deployment-failed", details);
}
HarmonyComposerEvent::Shutdown => {}
}
})
.await
}

View File

@@ -1,6 +1,5 @@
use log::debug;
use once_cell::sync::Lazy;
use tokio::sync::broadcast;
use std::{collections::HashMap, sync::Mutex};
use crate::{HarmonyProfile, HarmonyTarget};
@@ -27,48 +26,43 @@ pub enum HarmonyComposerEvent {
Shutdown,
}
static HARMONY_COMPOSER_EVENT_BUS: Lazy<broadcast::Sender<HarmonyComposerEvent>> =
Lazy::new(|| {
// TODO: Adjust channel capacity
let (tx, _rx) = broadcast::channel(16);
tx
});
type Subscriber = Box<dyn Fn(&HarmonyComposerEvent) + Send + Sync>;
pub fn instrument(event: HarmonyComposerEvent) -> Result<(), &'static str> {
#[cfg(not(test))]
{
match HARMONY_COMPOSER_EVENT_BUS.send(event) {
Ok(_) => Ok(()),
Err(_) => Err("send error: no subscribers"),
}
}
static SUBSCRIBERS: Lazy<Mutex<HashMap<String, Subscriber>>> =
Lazy::new(|| Mutex::new(HashMap::new()));
#[cfg(test)]
{
let _ = event; // Suppress the "unused variable" warning for `event`
Ok(())
}
}
pub async fn subscribe<F, Fut>(name: &str, mut handler: F)
/// Subscribes a listener to all instrumentation events.
///
/// Simply provide a unique name and a closure to run when an event happens.
///
/// # Example
/// ```
/// instrumentation::subscribe("my_logger", |event| {
/// println!("Event occurred: {:?}", event);
/// });
/// ```
pub fn subscribe<F>(name: &str, callback: F)
where
F: FnMut(HarmonyComposerEvent) -> Fut + Send + 'static,
Fut: Future<Output = bool> + Send,
F: Fn(&HarmonyComposerEvent) + Send + Sync + 'static,
{
let mut rx = HARMONY_COMPOSER_EVENT_BUS.subscribe();
debug!("[{name}] Service started. Listening for events...");
loop {
match rx.recv().await {
Ok(event) => {
if !handler(event).await {
debug!("[{name}] Handler requested exit.");
break;
}
}
Err(broadcast::error::RecvError::Lagged(n)) => {
debug!("[{name}] Lagged behind by {n} messages.");
}
Err(_) => break,
}
}
let mut subs = SUBSCRIBERS.lock().unwrap();
subs.insert(name.to_string(), Box::new(callback));
}
/// Instruments an event, notifying all subscribers.
///
/// This will call every closure that was registered with `subscribe`.
///
/// # Example
/// ```
/// instrumentation::instrument(HarmonyEvent::HarmonyStarted);
/// ```
pub fn instrument(event: HarmonyComposerEvent) -> Result<(), &'static str> {
let subs = SUBSCRIBERS.lock().unwrap();
for callback in subs.values() {
callback(&event);
}
Ok(())
}

View File

@@ -99,7 +99,7 @@ impl std::fmt::Display for HarmonyProfile {
#[tokio::main]
async fn main() {
let hc_logger_handle = harmony_composer_logger::init();
harmony_composer_logger::init();
let cli_args = GlobalArgs::parse();
let harmony_path = Path::new(&cli_args.harmony_path)
@@ -199,8 +199,6 @@ async fn main() {
}
instrumentation::instrument(HarmonyComposerEvent::Shutdown).unwrap();
let _ = tokio::try_join!(hc_logger_handle);
}
#[derive(Clone, Debug, clap::ValueEnum)]

View File

@@ -0,0 +1,17 @@
[package]
name = "harmony_inventory_agent"
version = "0.1.0"
edition = "2024"
[dependencies]
actix-web = "4.4"
sysinfo = "0.30"
serde.workspace = true
serde_json.workspace = true
log.workspace = true
env_logger.workspace = true
tokio.workspace = true
thiserror.workspace = true
# mdns-sd = "0.14.1"
mdns-sd = { git = "https://github.com/jggc/mdns-sd.git", branch = "patch-1" }
local-ip-address = "0.6.5"

View File

@@ -0,0 +1,22 @@
## Compiling
```bash
cargo build -p harmony_inventory_agent --release --target x86_64-unknown-linux-musl
```
This produces a statically linked binary that can run on virtually any x86_64 Linux system.
Building for this target requires adding it with rustup:
```bash
rustup target add x86_64-unknown-linux-musl
```
It also requires the musl tools. On Arch Linux, they can be installed with:
```bash
pacman -S musl
```
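Once built, the binary can be checked and copied next to the other PXE HTTP files. This is
only a sketch: the destination below matches the `data/pxe/okd/http_files/` folder referenced
by `OkdIpxeScore`, and may differ in your environment.
```bash
# Optional sanity check: the output should mention "statically linked".
file target/x86_64-unknown-linux-musl/release/harmony_inventory_agent

# Copy the agent next to the other PXE HTTP files (destination path is an assumption).
cp target/x86_64-unknown-linux-musl/release/harmony_inventory_agent data/pxe/okd/http_files/
```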

View File

@@ -0,0 +1,840 @@
use log::{debug, warn};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::fs;
use std::path::Path;
use std::process::Command;
use sysinfo::System;
#[derive(Serialize, Deserialize, Debug)]
pub struct PhysicalHost {
pub storage_drives: Vec<StorageDrive>,
pub storage_controller: StorageController,
pub memory_modules: Vec<MemoryModule>,
pub cpus: Vec<CPU>,
pub chipset: Chipset,
pub network_interfaces: Vec<NetworkInterface>,
pub management_interface: Option<ManagementInterface>,
pub host_uuid: String,
}
#[derive(Serialize, Deserialize, Debug)]
pub struct StorageDrive {
pub name: String,
pub model: String,
pub serial: String,
pub size_bytes: u64,
pub logical_block_size: u32,
pub physical_block_size: u32,
pub rotational: bool,
pub wwn: Option<String>,
pub interface_type: String,
pub smart_status: Option<String>,
}
#[derive(Serialize, Deserialize, Debug)]
pub struct StorageController {
pub name: String,
pub driver: String,
}
#[derive(Serialize, Deserialize, Debug)]
pub struct MemoryModule {
pub size_bytes: u64,
pub speed_mhz: Option<u32>,
pub manufacturer: Option<String>,
pub part_number: Option<String>,
pub serial_number: Option<String>,
pub rank: Option<u8>,
}
#[derive(Serialize, Deserialize, Debug)]
pub struct CPU {
pub model: String,
pub vendor: String,
pub cores: u32,
pub threads: u32,
pub frequency_mhz: u64,
}
#[derive(Serialize, Deserialize, Debug)]
pub struct Chipset {
pub name: String,
pub vendor: String,
}
#[derive(Serialize, Deserialize, Debug)]
pub struct NetworkInterface {
pub name: String,
pub mac_address: String,
pub speed_mbps: Option<u32>,
pub is_up: bool,
pub mtu: u32,
pub ipv4_addresses: Vec<String>,
pub ipv6_addresses: Vec<String>,
pub driver: String,
pub firmware_version: Option<String>,
}
#[derive(Serialize, Deserialize, Debug)]
pub struct ManagementInterface {
pub kind: String,
pub address: Option<String>,
pub firmware: Option<String>,
}
impl PhysicalHost {
pub fn gather() -> Result<Self, String> {
let mut sys = System::new_all();
sys.refresh_all();
Self::all_tools_available()?;
Ok(Self {
storage_drives: Self::gather_storage_drives()?,
storage_controller: Self::gather_storage_controller()?,
memory_modules: Self::gather_memory_modules()?,
cpus: Self::gather_cpus(&sys)?,
chipset: Self::gather_chipset()?,
network_interfaces: Self::gather_network_interfaces()?,
management_interface: Self::gather_management_interface()?,
host_uuid: Self::get_host_uuid()?,
})
}
fn all_tools_available() -> Result<(), String> {
let required_tools = [
("lsblk", Some("--version")),
("lspci", Some("--version")),
("lsmod", None),
("dmidecode", Some("--version")),
("smartctl", Some("--version")),
("ip", Some("route")), // No version flag available
];
let mut missing_tools = Vec::new();
debug!("Looking for required_tools {required_tools:?}");
for (tool, tool_arg) in required_tools.iter() {
// First check if tool exists in PATH using which(1)
let mut exists = if let Ok(output) = Command::new("which").arg(tool).output() {
output.status.success()
} else {
false
};
if !exists {
// Fallback: manual PATH search if which(1) is unavailable
debug!("Looking for {tool} in path");
if let Ok(path_var) = std::env::var("PATH") {
debug!("PATH is {path_var}");
exists = path_var.split(':').any(|dir| {
let tool_path = std::path::Path::new(dir).join(tool);
tool_path.exists() && Self::is_executable(&tool_path)
})
}
}
if !exists {
warn!("Unable to find tool {tool} from PATH");
missing_tools.push(*tool);
continue;
}
// Verify tool is functional by checking version/help output
let mut cmd = Command::new(tool);
if let Some(tool_arg) = tool_arg {
cmd.arg(tool_arg);
}
cmd.stdout(std::process::Stdio::null());
cmd.stderr(std::process::Stdio::null());
if let Ok(status) = cmd.status() {
if !status.success() {
warn!("Unable to test {tool} status failed");
missing_tools.push(*tool);
}
} else {
warn!("Unable to test {tool}");
missing_tools.push(*tool);
}
}
if !missing_tools.is_empty() {
let missing_str = missing_tools
.iter()
.map(|s| s.to_string())
.collect::<Vec<String>>()
.join(", ");
return Err(format!(
"The following required tools are not available: {}. Please install these tools to use PhysicalHost::gather()",
missing_str
));
}
Ok(())
}
#[cfg(unix)]
fn is_executable(path: &std::path::Path) -> bool {
debug!("Checking if {} is executable", path.to_string_lossy());
use std::os::unix::fs::PermissionsExt;
match std::fs::metadata(path) {
Ok(meta) => meta.permissions().mode() & 0o111 != 0,
Err(_) => false,
}
}
#[cfg(not(unix))]
fn is_executable(_path: &std::path::Path) -> bool {
// On non-Unix systems, we assume existence implies executability
true
}
fn gather_storage_drives() -> Result<Vec<StorageDrive>, String> {
let mut drives = Vec::new();
// Use lsblk with JSON output for robust parsing
let output = Command::new("lsblk")
.args([
"-d",
"-o",
"NAME,MODEL,SERIAL,SIZE,ROTA,WWN",
"-n",
"-e",
"7",
"--json",
])
.output()
.map_err(|e| format!("Failed to execute lsblk: {}", e))?;
if !output.status.success() {
return Err(format!(
"lsblk command failed: {}",
String::from_utf8_lossy(&output.stderr)
));
}
let json: Value = serde_json::from_slice(&output.stdout)
.map_err(|e| format!("Failed to parse lsblk JSON output: {}", e))?;
let blockdevices = json
.get("blockdevices")
.and_then(|v| v.as_array())
.ok_or("Invalid lsblk JSON: missing 'blockdevices' array")?;
for device in blockdevices {
let name = device
.get("name")
.and_then(|v| v.as_str())
.ok_or("Missing 'name' in lsblk device")?
.to_string();
if name.is_empty() {
continue;
}
let model = device
.get("model")
.and_then(|v| v.as_str())
.map(|s| s.trim().to_string())
.unwrap_or_default();
let serial = device
.get("serial")
.and_then(|v| v.as_str())
.map(|s| s.trim().to_string())
.unwrap_or_default();
let size_str = device
.get("size")
.and_then(|v| v.as_str())
.ok_or("Missing 'size' in lsblk device")?;
let size_bytes = Self::parse_size(size_str)?;
let rotational = device
.get("rota")
.and_then(|v| v.as_bool())
.ok_or("Missing 'rota' in lsblk device")?;
let wwn = device
.get("wwn")
.and_then(|v| v.as_str())
.map(|s| s.trim().to_string())
.filter(|s| !s.is_empty() && s != "null");
let device_path = Path::new("/sys/block").join(&name);
let logical_block_size = Self::read_sysfs_u32(
&device_path.join("queue/logical_block_size"),
)
.map_err(|e| format!("Failed to read logical block size for {}: {}", name, e))?;
let physical_block_size = Self::read_sysfs_u32(
&device_path.join("queue/physical_block_size"),
)
.map_err(|e| format!("Failed to read physical block size for {}: {}", name, e))?;
let interface_type = Self::get_interface_type(&name, &device_path)?;
let smart_status = Self::get_smart_status(&name)?;
let mut drive = StorageDrive {
name: name.clone(),
model,
serial,
size_bytes,
logical_block_size,
physical_block_size,
rotational,
wwn,
interface_type,
smart_status,
};
// Enhance with additional sysfs info if available
if device_path.exists() {
if drive.model.is_empty() {
drive.model = Self::read_sysfs_string(&device_path.join("device/model"))
.unwrap_or(format!("Failed to read model for {}", name));
}
if drive.serial.is_empty() {
drive.serial = Self::read_sysfs_string(&device_path.join("device/serial"))
.unwrap_or(format!("Failed to read serial for {}", name));
}
}
drives.push(drive);
}
Ok(drives)
}
fn gather_storage_controller() -> Result<StorageController, String> {
let mut controller = StorageController {
name: "Unknown".to_string(),
driver: "Unknown".to_string(),
};
// Use lspci with JSON output if available
let output = Command::new("lspci")
.args(["-nn", "-d", "::0100", "-J"]) // Storage controllers class with JSON
.output()
.map_err(|e| format!("Failed to execute lspci: {}", e))?;
if output.status.success() {
let json: Value = serde_json::from_slice(&output.stdout)
.map_err(|e| format!("Failed to parse lspci JSON output: {}", e))?;
if let Some(devices) = json.as_array() {
for device in devices {
if let Some(device_info) = device.as_object()
&& let Some(name) = device_info
.get("device")
.and_then(|v| v.as_object())
.and_then(|v| v.get("name"))
.and_then(|v| v.as_str())
{
controller.name = name.to_string();
break;
}
}
}
}
// Fallback to text output if JSON fails or no device found
if controller.name == "Unknown" {
let output = Command::new("lspci")
.args(["-nn", "-d", "::0100"]) // Storage controllers class
.output()
.map_err(|e| format!("Failed to execute lspci (fallback): {}", e))?;
if output.status.success() {
let output_str = String::from_utf8_lossy(&output.stdout);
if let Some(line) = output_str.lines().next() {
let parts: Vec<&str> = line.split(':').collect();
if parts.len() > 2 {
controller.name = parts[2].trim().to_string();
}
}
}
}
// Try to get driver info from lsmod
let output = Command::new("lsmod")
.output()
.map_err(|e| format!("Failed to execute lsmod: {}", e))?;
if output.status.success() {
let output_str = String::from_utf8_lossy(&output.stdout);
for line in output_str.lines() {
if line.contains("ahci")
|| line.contains("nvme")
|| line.contains("megaraid")
|| line.contains("mpt3sas")
{
let parts: Vec<&str> = line.split_whitespace().collect();
if !parts.is_empty() {
controller.driver = parts[0].to_string();
break;
}
}
}
}
Ok(controller)
}
fn gather_memory_modules() -> Result<Vec<MemoryModule>, String> {
let mut modules = Vec::new();
let output = Command::new("dmidecode")
.arg("--type")
.arg("17")
.output()
.map_err(|e| format!("Failed to execute dmidecode: {}", e))?;
if !output.status.success() {
return Err(format!(
"dmidecode command failed: {}",
String::from_utf8_lossy(&output.stderr)
));
}
let output_str = String::from_utf8(output.stdout)
.map_err(|e| format!("Failed to parse dmidecode output: {}", e))?;
let sections: Vec<&str> = output_str.split("Memory Device").collect();
for section in sections.into_iter().skip(1) {
let mut module = MemoryModule {
size_bytes: 0,
speed_mhz: None,
manufacturer: None,
part_number: None,
serial_number: None,
rank: None,
};
for line in section.lines() {
let line = line.trim();
if let Some(size_str) = line.strip_prefix("Size: ") {
if size_str != "No Module Installed"
&& let Some((num, unit)) = size_str.split_once(' ')
&& let Ok(num) = num.parse::<u64>()
{
module.size_bytes = match unit {
"MB" => num * 1024 * 1024,
"GB" => num * 1024 * 1024 * 1024,
"KB" => num * 1024,
_ => 0,
};
}
} else if let Some(speed_str) = line.strip_prefix("Speed: ") {
if let Some((num, _unit)) = speed_str.split_once(' ') {
module.speed_mhz = num.parse().ok();
}
} else if let Some(man) = line.strip_prefix("Manufacturer: ") {
module.manufacturer = Some(man.to_string());
} else if let Some(part) = line.strip_prefix("Part Number: ") {
module.part_number = Some(part.to_string());
} else if let Some(serial) = line.strip_prefix("Serial Number: ") {
module.serial_number = Some(serial.to_string());
} else if let Some(rank) = line.strip_prefix("Rank: ") {
module.rank = rank.parse().ok();
}
}
if module.size_bytes > 0 {
modules.push(module);
}
}
Ok(modules)
}
fn gather_cpus(sys: &System) -> Result<Vec<CPU>, String> {
let mut cpus = Vec::new();
let global_cpu = sys.global_cpu_info();
cpus.push(CPU {
model: global_cpu.brand().to_string(),
vendor: global_cpu.vendor_id().to_string(),
cores: sys.physical_core_count().unwrap_or(1) as u32,
threads: sys.cpus().len() as u32,
frequency_mhz: global_cpu.frequency(),
});
Ok(cpus)
}
fn gather_chipset() -> Result<Chipset, String> {
Ok(Chipset {
name: Self::read_dmi("baseboard-product-name")?,
vendor: Self::read_dmi("baseboard-manufacturer")?,
})
}
fn gather_network_interfaces() -> Result<Vec<NetworkInterface>, String> {
let mut interfaces = Vec::new();
let sys_net_path = Path::new("/sys/class/net");
let entries = fs::read_dir(sys_net_path)
.map_err(|e| format!("Failed to read /sys/class/net: {}", e))?;
for entry in entries {
let entry = entry.map_err(|e| format!("Failed to read directory entry: {}", e))?;
let iface_name = entry
.file_name()
.into_string()
.map_err(|_| "Invalid UTF-8 in interface name")?;
let iface_path = entry.path();
// Skip virtual interfaces
if iface_name.starts_with("lo")
|| iface_name.starts_with("docker")
|| iface_name.starts_with("virbr")
|| iface_name.starts_with("veth")
|| iface_name.starts_with("br-")
|| iface_name.starts_with("tun")
|| iface_name.starts_with("wg")
{
continue;
}
// Check if it's a physical interface by looking for device directory
if !iface_path.join("device").exists() {
continue;
}
let mac_address = Self::read_sysfs_string(&iface_path.join("address"))
.map_err(|e| format!("Failed to read MAC address for {}: {}", iface_name, e))?;
let speed_mbps = if iface_path.join("speed").exists() {
match Self::read_sysfs_u32(&iface_path.join("speed")) {
Ok(speed) => Some(speed),
Err(e) => {
debug!(
"Failed to read speed for {}: {} . This is expected to fail on wifi interfaces.",
iface_name, e
);
None
}
}
} else {
None
};
let operstate = Self::read_sysfs_string(&iface_path.join("operstate"))
.map_err(|e| format!("Failed to read operstate for {}: {}", iface_name, e))?;
let mtu = Self::read_sysfs_u32(&iface_path.join("mtu"))
.map_err(|e| format!("Failed to read MTU for {}: {}", iface_name, e))?;
let driver =
Self::read_sysfs_symlink_basename(&iface_path.join("device/driver/module"))
.map_err(|e| format!("Failed to read driver for {}: {}", iface_name, e))?;
let firmware_version = Self::read_sysfs_opt_string(
&iface_path.join("device/firmware_version"),
)
.map_err(|e| format!("Failed to read firmware version for {}: {}", iface_name, e))?;
// Get IP addresses using ip command with JSON output
let (ipv4_addresses, ipv6_addresses) = Self::get_interface_ips_json(&iface_name)
.map_err(|e| format!("Failed to get IP addresses for {}: {}", iface_name, e))?;
interfaces.push(NetworkInterface {
name: iface_name,
mac_address,
speed_mbps,
is_up: operstate == "up",
mtu,
ipv4_addresses,
ipv6_addresses,
driver,
firmware_version,
});
}
Ok(interfaces)
}
fn gather_management_interface() -> Result<Option<ManagementInterface>, String> {
if Path::new("/dev/ipmi0").exists() {
Ok(Some(ManagementInterface {
kind: "IPMI".to_string(),
address: None,
firmware: Some(Self::read_dmi("bios-version")?),
}))
} else if Path::new("/sys/class/misc/mei").exists() {
Ok(Some(ManagementInterface {
kind: "Intel ME".to_string(),
address: None,
firmware: None,
}))
} else {
Ok(None)
}
}
fn get_host_uuid() -> Result<String, String> {
Self::read_dmi("system-uuid")
}
// Helper methods
fn read_sysfs_string(path: &Path) -> Result<String, String> {
fs::read_to_string(path)
.map(|s| s.trim().to_string())
.map_err(|e| format!("Failed to read {}: {}", path.display(), e))
}
fn read_sysfs_opt_string(path: &Path) -> Result<Option<String>, String> {
match fs::read_to_string(path) {
Ok(s) => {
let s = s.trim().to_string();
Ok(if s.is_empty() { None } else { Some(s) })
}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => Ok(None),
Err(e) => Err(format!("Failed to read {}: {}", path.display(), e)),
}
}
fn read_sysfs_u32(path: &Path) -> Result<u32, String> {
fs::read_to_string(path)
.map_err(|e| format!("Failed to read {}: {}", path.display(), e))?
.trim()
.parse()
.map_err(|e| format!("Failed to parse {}: {}", path.display(), e))
}
fn read_sysfs_symlink_basename(path: &Path) -> Result<String, String> {
match fs::read_link(path) {
Ok(target_path) => match target_path.file_name() {
Some(name_osstr) => match name_osstr.to_str() {
Some(name_str) => Ok(name_str.to_string()),
None => Err(format!(
"Symlink target basename is not valid UTF-8: {}",
target_path.display()
)),
},
None => Err(format!(
"Symlink target has no basename: {} -> {}",
path.display(),
target_path.display()
)),
},
Err(e) if e.kind() == std::io::ErrorKind::NotFound => Err(format!(
"Could not resolve symlink for path : {}",
path.display()
)),
Err(e) => Err(format!("Failed to read symlink {}: {}", path.display(), e)),
}
}
fn read_dmi(field: &str) -> Result<String, String> {
let output = Command::new("dmidecode")
.arg("-s")
.arg(field)
.output()
.map_err(|e| format!("Failed to execute dmidecode for field {}: {}", field, e))?;
if !output.status.success() {
return Err(format!(
"dmidecode command failed for field {}: {}",
field,
String::from_utf8_lossy(&output.stderr)
));
}
String::from_utf8(output.stdout)
.map(|s| s.trim().to_string())
.map_err(|e| {
format!(
"Failed to parse dmidecode output for field {}: {}",
field, e
)
})
}
fn get_interface_type(device_name: &str, device_path: &Path) -> Result<String, String> {
if device_name.starts_with("nvme") {
Ok("NVMe".to_string())
} else if device_name.starts_with("sd") {
Ok("SATA".to_string())
} else if device_name.starts_with("hd") {
Ok("IDE".to_string())
} else if device_name.starts_with("vd") {
Ok("VirtIO".to_string())
} else if device_name.starts_with("sr") {
Ok("CDROM".to_string())
} else if device_name.starts_with("zram") {
Ok("Ramdisk".to_string())
} else {
// Try to determine from device path
let subsystem = Self::read_sysfs_string(&device_path.join("device/subsystem"))?;
Ok(subsystem
.split('/')
.next_back()
.unwrap_or("Unknown")
.to_string())
}
}
fn get_smart_status(device_name: &str) -> Result<Option<String>, String> {
let output = Command::new("smartctl")
.arg("-H")
.arg(format!("/dev/{}", device_name))
.output()
.map_err(|e| format!("Failed to execute smartctl for {}: {}", device_name, e))?;
if !output.status.success() {
return Ok(None);
}
let stdout = String::from_utf8(output.stdout)
.map_err(|e| format!("Failed to parse smartctl output for {}: {}", device_name, e))?;
for line in stdout.lines() {
if line.contains("SMART overall-health self-assessment") {
if let Some(status) = line.split(':').nth(1) {
return Ok(Some(status.trim().to_string()));
}
}
}
Ok(None)
}
fn parse_size(size_str: &str) -> Result<u64, String> {
debug!("Parsing size_str '{size_str}'");
let size = if size_str.ends_with('T') {
size_str[..size_str.len() - 1]
.parse::<f64>()
.map(|t| t * 1024.0 * 1024.0 * 1024.0 * 1024.0)
.map_err(|e| format!("Failed to parse T size '{}': {}", size_str, e))
} else if size_str.ends_with('G') {
size_str[..size_str.len() - 1]
.parse::<f64>()
.map(|g| g * 1024.0 * 1024.0 * 1024.0)
.map_err(|e| format!("Failed to parse G size '{}': {}", size_str, e))
} else if size_str.ends_with('M') {
size_str[..size_str.len() - 1]
.parse::<f64>()
.map(|m| m * 1024.0 * 1024.0)
.map_err(|e| format!("Failed to parse M size '{}': {}", size_str, e))
} else if size_str.ends_with('K') {
size_str[..size_str.len() - 1]
.parse::<f64>()
.map(|k| k * 1024.0)
.map_err(|e| format!("Failed to parse K size '{}': {}", size_str, e))
} else if size_str.ends_with('B') {
size_str[..size_str.len() - 1]
.parse::<f64>()
.map_err(|e| format!("Failed to parse B size '{}': {}", size_str, e))
} else {
size_str
.parse::<f64>()
.map_err(|e| format!("Failed to parse size '{}': {}", size_str, e))
};
size.map(|s| s as u64)
}
fn get_interface_ips_json(iface_name: &str) -> Result<(Vec<String>, Vec<String>), String> {
let mut ipv4 = Vec::new();
let mut ipv6 = Vec::new();
// Get IPv4 addresses using JSON output
let output = Command::new("ip")
.args(["-j", "-4", "addr", "show", iface_name])
.output()
.map_err(|e| {
format!(
"Failed to execute ip command for IPv4 on {}: {}",
iface_name, e
)
})?;
if !output.status.success() {
return Err(format!(
"ip command for IPv4 on {} failed: {}",
iface_name,
String::from_utf8_lossy(&output.stderr)
));
}
let json: Value = serde_json::from_slice(&output.stdout).map_err(|e| {
format!(
"Failed to parse ip JSON output for IPv4 on {}: {}",
iface_name, e
)
})?;
if let Some(addrs) = json.as_array() {
for addr_info in addrs {
if let Some(addr_info_obj) = addr_info.as_object()
&& let Some(addr_info) =
addr_info_obj.get("addr_info").and_then(|v| v.as_array())
{
for addr in addr_info {
if let Some(addr_obj) = addr.as_object()
&& let Some(ip) = addr_obj.get("local").and_then(|v| v.as_str())
{
ipv4.push(ip.to_string());
}
}
}
}
}
// Get IPv6 addresses using JSON output
let output = Command::new("ip")
.args(["-j", "-6", "addr", "show", iface_name])
.output()
.map_err(|e| {
format!(
"Failed to execute ip command for IPv6 on {}: {}",
iface_name, e
)
})?;
if !output.status.success() {
return Err(format!(
"ip command for IPv6 on {} failed: {}",
iface_name,
String::from_utf8_lossy(&output.stderr)
));
}
let json: Value = serde_json::from_slice(&output.stdout).map_err(|e| {
format!(
"Failed to parse ip JSON output for IPv6 on {}: {}",
iface_name, e
)
})?;
if let Some(addrs) = json.as_array() {
for addr_info in addrs {
if let Some(addr_info_obj) = addr_info.as_object()
&& let Some(addr_info) =
addr_info_obj.get("addr_info").and_then(|v| v.as_array())
{
for addr in addr_info {
if let Some(addr_obj) = addr.as_object()
&& let Some(ip) = addr_obj.get("local").and_then(|v| v.as_str())
{
// Skip link-local addresses
if !ip.starts_with("fe80::") {
ipv6.push(ip.to_string());
}
}
}
}
}
}
Ok((ipv4, ipv6))
}
}

View File

@@ -0,0 +1,2 @@
mod hwinfo;
pub mod local_presence;

View File

@@ -0,0 +1,90 @@
use log::{debug, error, info, warn};
use mdns_sd::{ServiceDaemon, ServiceInfo};
use std::collections::HashMap;
use crate::{
hwinfo::PhysicalHost,
local_presence::{PresenceError, SERVICE_NAME, VERSION},
};
/// Advertises the agent's presence on the local network.
///
/// This function is synchronous and non-blocking: it spawns a background Tokio task
/// that handles the mDNS advertisement for the lifetime of the application, so it
/// must be called from within a running Tokio runtime.
pub fn advertise(service_port: u16) -> Result<(), PresenceError> {
let host_id = match PhysicalHost::gather() {
Ok(host) => Some(host.host_uuid),
Err(e) => {
error!("Could not build physical host, harmony presence id will be unavailable : {e}");
None
}
};
let instance_name = format!(
"inventory-agent-{}",
host_id.clone().unwrap_or("unknown".to_string())
);
let spawned_msg = format!("Spawned local presence advertisement task for '{instance_name}'.");
tokio::spawn(async move {
info!(
"Local presence task started. Advertising as '{}'.",
instance_name
);
// The ServiceDaemon must live for the entire duration of the advertisement.
// If it's dropped, the advertisement stops.
let mdns = match ServiceDaemon::new() {
Ok(daemon) => daemon,
Err(e) => {
warn!("Failed to create mDNS daemon: {}. Task shutting down.", e);
return;
}
};
let mut props = HashMap::new();
if let Some(host_id) = host_id {
props.insert("id".to_string(), host_id);
}
props.insert("version".to_string(), VERSION.to_string());
let local_ip: Box<dyn mdns_sd::AsIpAddrs> = match local_ip_address::local_ip() {
Ok(ip) => Box::new(ip),
Err(e) => {
error!(
"Could not figure out local ip, mdns will have to try to figure it out by itself : {e}"
);
Box::new(())
}
};
debug!("Using local ip {:?}", local_ip.as_ip_addrs());
let service_info = ServiceInfo::new(
SERVICE_NAME,
&instance_name,
&format!("{}.local.", instance_name),
local_ip,
service_port,
Some(props),
)
.expect("ServiceInfo creation should not fail with valid inputs");
// The registration handle must also be kept alive.
let _registration_handle = match mdns.register(service_info) {
Ok(handle) => {
info!("Service successfully registered on the local network.");
handle
}
Err(e) => {
warn!("Failed to register service: {}. Task shutting down.", e);
return;
}
};
});
info!("{spawned_msg}");
Ok(())
}
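
Because `advertise` relies on `tokio::spawn`, it only works when a Tokio runtime is already running; the agent's `main.rs` further down in this diff calls it from inside the actix-web runtime. A minimal standalone sketch (the port value and the keep-alive strategy are illustrative, and it assumes the `local_presence` module is in scope):

```rust
// Hypothetical standalone caller, not the agent's actual entry point.
#[tokio::main]
async fn main() {
    if let Err(e) = local_presence::advertise(8080) {
        eprintln!("could not start local presence advertisement: {e}");
    }
    // The mDNS ServiceDaemon lives inside the spawned task, so the process
    // only needs to stay alive for the advertisement to keep running.
    std::future::pending::<()>().await;
}
```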

View File

@@ -0,0 +1,34 @@
use mdns_sd::{ServiceDaemon, ServiceEvent};
use crate::local_presence::SERVICE_NAME;
pub type DiscoveryEvent = ServiceEvent;
pub fn discover_agents(timeout: Option<u64>, on_event: fn(&DiscoveryEvent)) {
// Create a new mDNS daemon.
let mdns = ServiceDaemon::new().expect("Failed to create mDNS daemon");
// Start browsing for the service type.
// The receiver will be a stream of events.
let receiver = mdns.browse(SERVICE_NAME).expect("Failed to browse");
std::thread::spawn(move || {
while let Ok(event) = receiver.recv() {
on_event(&event);
match event {
ServiceEvent::ServiceResolved(resolved) => {
println!("Resolved a new service: {}", resolved.fullname);
}
other_event => {
println!("Received other event: {:?}", &other_event);
}
}
}
});
if let Some(timeout) = timeout {
// Wait for the requested timeout, then gracefully shut down the daemon.
std::thread::sleep(std::time::Duration::from_secs(timeout));
mdns.shutdown().unwrap();
}
}
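
For the consumer side, a minimal sketch of `discover_agents` (the five-second timeout is illustrative; since `on_event` is a plain `fn` pointer, the callback must be a free function or a non-capturing closure), mirroring the event handling done in the spawned thread above:

```rust
use mdns_sd::ServiceEvent;

// Free function so it coerces to the `fn(&DiscoveryEvent)` pointer expected by discover_agents.
fn handle_event(event: &local_presence::DiscoveryEvent) {
    if let ServiceEvent::ServiceResolved(resolved) = event {
        println!("Found inventory agent: {}", resolved.fullname);
    }
}

fn main() {
    // Browse for five seconds; discover_agents then shuts the daemon down.
    local_presence::discover_agents(Some(5), handle_event);
}
```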

View File

@@ -0,0 +1,16 @@
mod discover;
pub use discover::*;
mod advertise;
pub use advertise::*;
pub const SERVICE_NAME: &str = "_harmony._tcp.local.";
const VERSION: &str = env!("CARGO_PKG_VERSION");
// A specific error type for our module enhances clarity and usability.
#[derive(thiserror::Error, Debug)]
pub enum PresenceError {
#[error("Failed to create mDNS daemon")]
DaemonCreationFailed(#[from] mdns_sd::Error),
#[error("The shutdown signal has already been sent")]
ShutdownFailed,
}

View File

@@ -0,0 +1,47 @@
// src/main.rs
use actix_web::{App, HttpServer, Responder, get};
use log::error;
use std::env;
use crate::hwinfo::PhysicalHost;
mod hwinfo;
mod local_presence;
#[get("/inventory")]
async fn inventory() -> impl Responder {
log::info!("Received inventory request");
let host = PhysicalHost::gather();
match host {
Ok(host) => {
log::info!("Inventory data gathered successfully");
actix_web::HttpResponse::Ok().json(host)
}
Err(error) => {
log::error!("Inventory data gathering FAILED");
actix_web::HttpResponse::InternalServerError().json(error)
}
}
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
env_logger::init();
let port = env::var("HARMONY_INVENTORY_AGENT_PORT").unwrap_or_else(|_| "8080".to_string());
let port = port
.parse::<u16>()
.expect(&format!("Invalid port number, cannot parse to u16 {port}"));
let bind_addr = format!("0.0.0.0:{}", port);
log::info!("Starting inventory agent on {}", bind_addr);
if let Err(e) = local_presence::advertise(port) {
error!("Could not start advertise local presence : {e}");
}
HttpServer::new(|| App::new().service(inventory))
.bind(&bind_addr)?
.run()
.await
}

harmony_secret/Cargo.toml (23 lines) Normal file
View File

@@ -0,0 +1,23 @@
[package]
name = "harmony_secret"
edition = "2024"
version.workspace = true
readme.workspace = true
license.workspace = true
[dependencies]
harmony_secret_derive = { version = "0.1.0", path = "../harmony_secret_derive" }
serde = { version = "1.0.209", features = ["derive", "rc"] }
serde_json = "1.0.127"
thiserror.workspace = true
lazy_static.workspace = true
directories.workspace = true
log.workspace = true
infisical = "0.0.2"
tokio.workspace = true
async-trait.workspace = true
http.workspace = true
[dev-dependencies]
pretty_assertions.workspace = true
tempfile.workspace = true

View File

@@ -0,0 +1,18 @@
use lazy_static::lazy_static;
lazy_static! {
pub static ref SECRET_NAMESPACE: String =
std::env::var("HARMONY_SECRET_NAMESPACE").expect("HARMONY_SECRET_NAMESPACE environment variable is required, it should contain the name of the project you are working on to access its secrets");
pub static ref SECRET_STORE: Option<String> =
std::env::var("HARMONY_SECRET_STORE").ok();
pub static ref INFISICAL_URL: Option<String> =
std::env::var("HARMONY_SECRET_INFISICAL_URL").ok();
pub static ref INFISICAL_PROJECT_ID: Option<String> =
std::env::var("HARMONY_SECRET_INFISICAL_PROJECT_ID").ok();
pub static ref INFISICAL_ENVIRONMENT: Option<String> =
std::env::var("HARMONY_SECRET_INFISICAL_ENVIRONMENT").ok();
pub static ref INFISICAL_CLIENT_ID: Option<String> =
std::env::var("HARMONY_SECRET_INFISICAL_CLIENT_ID").ok();
pub static ref INFISICAL_CLIENT_SECRET: Option<String> =
std::env::var("HARMONY_SECRET_INFISICAL_CLIENT_SECRET").ok();
}

harmony_secret/src/lib.rs (167 lines) Normal file
View File

@@ -0,0 +1,167 @@
pub mod config;
mod store;
use crate::config::SECRET_NAMESPACE;
use async_trait::async_trait;
use config::INFISICAL_CLIENT_ID;
use config::INFISICAL_CLIENT_SECRET;
use config::INFISICAL_ENVIRONMENT;
use config::INFISICAL_PROJECT_ID;
use config::INFISICAL_URL;
use config::SECRET_STORE;
use serde::{Serialize, de::DeserializeOwned};
use std::fmt;
use store::InfisicalSecretStore;
use store::LocalFileSecretStore;
use thiserror::Error;
use tokio::sync::OnceCell;
pub use harmony_secret_derive::Secret;
// The Secret trait remains the same.
pub trait Secret: Serialize + DeserializeOwned + Sized {
const KEY: &'static str;
}
// The error enum remains the same.
#[derive(Debug, Error)]
pub enum SecretStoreError {
#[error("Secret not found for key '{key}' in namespace '{namespace}'")]
NotFound { namespace: String, key: String },
#[error("Failed to deserialize secret for key '{key}': {source}")]
Deserialization {
key: String,
source: serde_json::Error,
},
#[error("Failed to serialize secret for key '{key}': {source}")]
Serialization {
key: String,
source: serde_json::Error,
},
#[error("Underlying storage error: {0}")]
Store(#[from] Box<dyn std::error::Error + Send + Sync>),
}
// The trait is now async!
#[async_trait]
pub trait SecretStore: fmt::Debug + Send + Sync {
async fn get_raw(&self, namespace: &str, key: &str) -> Result<Vec<u8>, SecretStoreError>;
async fn set_raw(
&self,
namespace: &str,
key: &str,
value: &[u8],
) -> Result<(), SecretStoreError>;
}
// Use OnceCell for async-friendly, one-time initialization.
static SECRET_MANAGER: OnceCell<SecretManager> = OnceCell::const_new();
/// Initializes and returns a reference to the global SecretManager.
async fn get_secret_manager() -> &'static SecretManager {
SECRET_MANAGER.get_or_init(init_secret_manager).await
}
/// The async initialization function for the SecretManager.
async fn init_secret_manager() -> SecretManager {
let default_secret_store = "infisical".to_string();
let store_type = SECRET_STORE.as_ref().unwrap_or(&default_secret_store);
let store: Box<dyn SecretStore> = match store_type.as_str() {
"file" => Box::new(LocalFileSecretStore::default()),
"infisical" | _ => {
let store = InfisicalSecretStore::new(
INFISICAL_URL.clone().expect("Infisical url must be set, see harmony_secret config for ways to provide it. You can try with HARMONY_SECRET_INFISICAL_URL"),
INFISICAL_PROJECT_ID.clone().expect("Infisical project id must be set, see harmony_secret config for ways to provide it. You can try with HARMONY_SECRET_INFISICAL_PROJECT_ID"),
INFISICAL_ENVIRONMENT.clone().expect("Infisical environment must be set, see harmony_secret config for ways to provide it. You can try with HARMONY_SECRET_INFISICAL_ENVIRONMENT"),
INFISICAL_CLIENT_ID.clone().expect("Infisical client id must be set, see harmony_secret config for ways to provide it. You can try with HARMONY_SECRET_INFISICAL_CLIENT_ID"),
INFISICAL_CLIENT_SECRET.clone().expect("Infisical client secret must be set, see harmony_secret config for ways to provide it. You can try with HARMONY_SECRET_INFISICAL_CLIENT_SECRET"),
)
.await
.expect("Failed to initialize Infisical secret store");
Box::new(store)
}
};
SecretManager::new(SECRET_NAMESPACE.clone(), store)
}
/// Manages the lifecycle of secrets, providing a simple static API.
#[derive(Debug)]
pub struct SecretManager {
namespace: String,
store: Box<dyn SecretStore>,
}
impl SecretManager {
fn new(namespace: String, store: Box<dyn SecretStore>) -> Self {
Self { namespace, store }
}
/// Retrieves and deserializes a secret.
pub async fn get<T: Secret>() -> Result<T, SecretStoreError> {
let manager = get_secret_manager().await;
let raw_value = manager.store.get_raw(&manager.namespace, T::KEY).await?;
serde_json::from_slice(&raw_value).map_err(|e| SecretStoreError::Deserialization {
key: T::KEY.to_string(),
source: e,
})
}
/// Serializes and stores a secret.
pub async fn set<T: Secret>(secret: &T) -> Result<(), SecretStoreError> {
let manager = get_secret_manager().await;
let raw_value =
serde_json::to_vec(secret).map_err(|e| SecretStoreError::Serialization {
key: T::KEY.to_string(),
source: e,
})?;
manager
.store
.set_raw(&manager.namespace, T::KEY, &raw_value)
.await
}
}
#[cfg(test)]
mod test {
use super::*;
#[cfg(secrete2etest)]
use pretty_assertions::assert_eq;
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct TestUserMeta {
labels: Vec<String>,
}
#[derive(Secret, Serialize, Deserialize, Debug, PartialEq)]
struct TestSecret {
user: String,
password: String,
metadata: TestUserMeta,
}
#[cfg(secrete2etest)]
#[tokio::test]
async fn set_and_retrieve_secret() {
let secret = TestSecret {
user: String::from("user"),
password: String::from("password"),
metadata: TestUserMeta {
labels: vec![
String::from("label1"),
String::from("label2"),
String::from(
"some longet label with \" special @#%$)(udiojcia[]]] \"'asdij'' characters Nдs はにほへとちり าฟันพัฒนา yağız şoföre ç <20> <20> <20> <20> <20> <20> <20> <20> <20> <20> <20> <20> <20> 👩‍👩‍👧‍👦 /span> 👩‍👧‍👦 and why not emojis ",
),
],
},
};
SecretManager::set(&secret).await.unwrap();
let value = SecretManager::get::<TestSecret>().await.unwrap();
assert_eq!(value, secret);
}
}
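
Taken together, a minimal consumer sketch of this API (the struct name and values are illustrative; which backing store is used depends on the environment variables read in `config.rs`):

```rust
use harmony_secret::{Secret, SecretManager, SecretStoreError};
use serde::{Deserialize, Serialize};

// The derive uses the struct name as the key, so this value is stored under
// "RouterAdminSecret" inside the configured HARMONY_SECRET_NAMESPACE.
#[derive(Secret, Serialize, Deserialize, Debug)]
struct RouterAdminSecret {
    username: String,
    password: String,
}

async fn example() -> Result<(), SecretStoreError> {
    SecretManager::set(&RouterAdminSecret {
        username: "admin".into(),
        password: "correct horse battery staple".into(),
    })
    .await?;

    let fetched = SecretManager::get::<RouterAdminSecret>().await?;
    println!("loaded credentials for {}", fetched.username);
    Ok(())
}
```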

View File

@@ -0,0 +1,129 @@
use crate::{SecretStore, SecretStoreError};
use async_trait::async_trait;
use infisical::{
AuthMethod, InfisicalError,
client::Client,
secrets::{CreateSecretRequest, GetSecretRequest, UpdateSecretRequest},
};
use log::{info, warn};
#[derive(Debug)]
pub struct InfisicalSecretStore {
client: Client,
project_id: String,
environment: String,
}
impl InfisicalSecretStore {
/// Creates a new, authenticated Infisical client.
pub async fn new(
base_url: String,
project_id: String,
environment: String,
client_id: String,
client_secret: String,
) -> Result<Self, InfisicalError> {
info!("INFISICAL_STORE: Initializing client for URL: {base_url}");
// The builder and login logic remains the same.
let mut client = Client::builder().base_url(base_url).build().await?;
let auth_method = AuthMethod::new_universal_auth(client_id, client_secret);
client.login(auth_method).await?;
info!("INFISICAL_STORE: Client authenticated successfully.");
Ok(Self {
client,
project_id,
environment,
})
}
}
#[async_trait]
impl SecretStore for InfisicalSecretStore {
async fn get_raw(&self, _environment: &str, key: &str) -> Result<Vec<u8>, SecretStoreError> {
let environment = &self.environment;
info!("INFISICAL_STORE: Getting key '{key}' from environment '{environment}'");
let request = GetSecretRequest::builder(key, &self.project_id, environment).build();
match self.client.secrets().get(request).await {
Ok(secret) => Ok(secret.secret_value.into_bytes()),
Err(e) => {
// Correctly match against the actual InfisicalError enum.
match e {
// The specific case for a 404 Not Found error.
InfisicalError::HttpError { status, .. }
if status == http::StatusCode::NOT_FOUND =>
{
Err(SecretStoreError::NotFound {
namespace: environment.to_string(),
key: key.to_string(),
})
}
// For all other errors, wrap them in our generic Store error.
_ => Err(SecretStoreError::Store(Box::new(e))),
}
}
}
}
async fn set_raw(
&self,
_environment: &str,
key: &str,
val: &[u8],
) -> Result<(), SecretStoreError> {
info!(
"INFISICAL_STORE: Setting key '{key}' in environment '{}'",
self.environment
);
let value_str =
String::from_utf8(val.to_vec()).map_err(|e| SecretStoreError::Store(Box::new(e)))?;
// --- Upsert Logic ---
// First, attempt to update the secret.
let update_req = UpdateSecretRequest::builder(key, &self.project_id, &self.environment)
.secret_value(&value_str)
.build();
match self.client.secrets().update(update_req).await {
Ok(_) => {
info!("INFISICAL_STORE: Successfully updated secret '{key}'.");
Ok(())
}
Err(e) => {
// If the update failed, check if it was because the secret doesn't exist.
match e {
InfisicalError::HttpError { status, .. }
if status == http::StatusCode::NOT_FOUND =>
{
// The secret was not found, so we create it instead.
warn!(
"INFISICAL_STORE: Secret '{key}' not found for update, attempting to create it."
);
let create_req = CreateSecretRequest::builder(
key,
&value_str,
&self.project_id,
&self.environment,
)
.build();
// Handle potential errors during creation.
self.client
.secrets()
.create(create_req)
.await
.map_err(|create_err| SecretStoreError::Store(Box::new(create_err)))?;
info!("INFISICAL_STORE: Successfully created secret '{key}'.");
Ok(())
}
// Any other error during update is a genuine failure.
_ => Err(SecretStoreError::Store(Box::new(e))),
}
}
}
}
}

View File

@@ -0,0 +1,104 @@
use async_trait::async_trait;
use log::info;
use std::path::{Path, PathBuf};
use crate::{SecretStore, SecretStoreError};
#[derive(Debug, Default)]
pub struct LocalFileSecretStore;
impl LocalFileSecretStore {
/// Helper to consistently generate the secret file path.
fn get_file_path(base_dir: &Path, ns: &str, key: &str) -> PathBuf {
base_dir.join(format!("{ns}_{key}.json"))
}
}
#[async_trait]
impl SecretStore for LocalFileSecretStore {
async fn get_raw(&self, ns: &str, key: &str) -> Result<Vec<u8>, SecretStoreError> {
let data_dir = directories::BaseDirs::new()
.expect("Could not find a valid home directory")
.data_dir()
.join("harmony")
.join("secrets");
let file_path = Self::get_file_path(&data_dir, ns, key);
info!(
"LOCAL_STORE: Getting key '{key}' from namespace '{ns}' at {}",
file_path.display()
);
tokio::fs::read(&file_path)
.await
.map_err(|_| SecretStoreError::NotFound {
namespace: ns.to_string(),
key: key.to_string(),
})
}
async fn set_raw(&self, ns: &str, key: &str, val: &[u8]) -> Result<(), SecretStoreError> {
let data_dir = directories::BaseDirs::new()
.expect("Could not find a valid home directory")
.data_dir()
.join("harmony")
.join("secrets");
let file_path = Self::get_file_path(&data_dir, ns, key);
info!(
"LOCAL_STORE: Setting key '{key}' in namespace '{ns}' at {}",
file_path.display()
);
if let Some(parent_dir) = file_path.parent() {
tokio::fs::create_dir_all(parent_dir)
.await
.map_err(|e| SecretStoreError::Store(Box::new(e)))?;
}
tokio::fs::write(&file_path, val)
.await
.map_err(|e| SecretStoreError::Store(Box::new(e)))
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::tempdir;
#[tokio::test]
async fn test_set_and_get_raw_successfully() {
let dir = tempdir().unwrap();
let ns = "test-ns";
let key = "test-key";
let value = b"{\"data\":\"test-value\"}";
// To test the store directly, we override the base directory logic.
// For this test, we'll manually construct the path within our temp dir.
let file_path = LocalFileSecretStore::get_file_path(dir.path(), ns, key);
// Manually write to the temp path to simulate the store's behavior
tokio::fs::create_dir_all(file_path.parent().unwrap())
.await
.unwrap();
tokio::fs::write(&file_path, value).await.unwrap();
// Read the value back from the same temp path; this checks the path convention rather than calling get_raw (which always uses the real data dir).
let retrieved_value = tokio::fs::read(&file_path).await.unwrap();
assert_eq!(retrieved_value, value);
}
#[tokio::test]
async fn test_get_raw_not_found() {
let dir = tempdir().unwrap();
let ns = "test-ns";
let key = "non-existent-key";
// We need to check if reading a non-existent file gives the correct error
let file_path = LocalFileSecretStore::get_file_path(dir.path(), ns, key);
let result = tokio::fs::read(&file_path).await;
assert!(matches!(result, Err(_)));
}
}

View File

@@ -0,0 +1,4 @@
mod infisical;
mod local_file;
pub use infisical::*;
pub use local_file::*;

View File

@@ -0,0 +1,8 @@
export HARMONY_SECRET_NAMESPACE=harmony_test_secrets
export HARMONY_SECRET_INFISICAL_URL=http://localhost
export HARMONY_SECRET_INFISICAL_PROJECT_ID=eb4723dc-eede-44d7-98cc-c8e0caf29ccb
export HARMONY_SECRET_INFISICAL_ENVIRONMENT=dev
export HARMONY_SECRET_INFISICAL_CLIENT_ID=dd16b07f-0e38-4090-a1d0-922de9f44d91
export HARMONY_SECRET_INFISICAL_CLIENT_SECRET=bd2ae054e7759b11ca2e908494196337cc800bab138cb1f59e8d9b15ca3f286f
cargo test

View File

@@ -0,0 +1,13 @@
[package]
name = "harmony_secret_derive"
version = "0.1.0"
edition = "2024"
[lib]
proc-macro = true
[dependencies]
quote = "1.0"
proc-macro2 = "1.0"
proc-macro-crate = "3.3"
syn = "2.0"

View File

@@ -0,0 +1,38 @@
use proc_macro::TokenStream;
use proc_macro_crate::{FoundCrate, crate_name};
use quote::quote;
use syn::{DeriveInput, Ident, parse_macro_input};
#[proc_macro_derive(Secret)]
pub fn derive_secret(input: TokenStream) -> TokenStream {
let input = parse_macro_input!(input as DeriveInput);
let struct_ident = &input.ident;
// The key for the secret will be the stringified name of the struct itself.
// e.g., `struct OKDClusterSecret` becomes key `"OKDClusterSecret"`.
let key = struct_ident.to_string(); // TODO: Use the struct's fully qualified path as the key
// Find the path to the `harmony_secret` crate.
let secret_crate_path = match crate_name("harmony_secret") {
Ok(FoundCrate::Itself) => quote!(crate),
Ok(FoundCrate::Name(name)) => {
let ident = Ident::new(&name, proc_macro2::Span::call_site());
quote!(::#ident)
}
Err(e) => {
return syn::Error::new(proc_macro2::Span::call_site(), e.to_string())
.to_compile_error()
.into();
}
};
// The generated code now implements `Secret` for the struct itself.
// The struct must also derive `Serialize` and `Deserialize` for this to be useful.
let expanded = quote! {
impl #secret_crate_path::Secret for #struct_ident {
const KEY: &'static str = #key;
}
};
TokenStream::from(expanded)
}
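
In practice, for a struct named `OKDClusterSecret` (the example from the comment above) the derive should expand to roughly this hand-written equivalent; only the bare struct name is used as the key until the TODO above is addressed:

```rust
// Approximate expansion of `#[derive(Secret)]` when used from another crate.
impl ::harmony_secret::Secret for OKDClusterSecret {
    const KEY: &'static str = "OKDClusterSecret";
}
```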

View File

@@ -9,7 +9,13 @@ use widget::{help::HelpWidget, score::ScoreListWidget};
use std::{panic, sync::Arc, time::Duration};
use crossterm::event::{Event, EventStream, KeyCode, KeyEventKind};
use harmony::{maestro::Maestro, score::Score, topology::Topology};
use harmony::{
instrumentation::{self, HarmonyEvent},
inventory::Inventory,
maestro::Maestro,
score::Score,
topology::Topology,
};
use ratatui::{
self, Frame,
layout::{Constraint, Layout, Position},
@@ -39,22 +45,58 @@ pub mod tui {
///
/// #[tokio::main]
/// async fn main() {
/// let inventory = Inventory::autoload();
/// let topology = HAClusterTopology::autoload();
/// let mut maestro = Maestro::new_without_initialization(inventory, topology);
///
/// maestro.register_all(vec![
/// Box::new(SuccessScore {}),
/// Box::new(ErrorScore {}),
/// Box::new(PanicScore {}),
/// ]);
/// harmony_tui::init(maestro).await.unwrap();
/// harmony_tui::run(
/// Inventory::autoload(),
/// HAClusterTopology::autoload(),
/// vec![
/// Box::new(SuccessScore {}),
/// Box::new(ErrorScore {}),
/// Box::new(PanicScore {}),
/// ]
/// ).await.unwrap();
/// }
/// ```
pub async fn init<T: Topology + Send + Sync + 'static>(
pub async fn run<T: Topology + Send + Sync + 'static>(
inventory: Inventory,
topology: T,
scores: Vec<Box<dyn Score<T>>>,
) -> Result<(), Box<dyn std::error::Error>> {
let handle = init_instrumentation().await;
let mut maestro = Maestro::initialize(inventory, topology).await.unwrap();
maestro.register_all(scores);
let result = init(maestro).await;
let _ = tokio::try_join!(handle);
result
}
async fn init<T: Topology + Send + Sync + 'static>(
maestro: Maestro<T>,
) -> Result<(), Box<dyn std::error::Error>> {
HarmonyTUI::new(maestro).init().await
let result = HarmonyTUI::new(maestro).init().await;
instrumentation::instrument(HarmonyEvent::HarmonyFinished).unwrap();
result
}
async fn init_instrumentation() -> tokio::task::JoinHandle<()> {
let handle = tokio::spawn(handle_harmony_events());
loop {
if instrumentation::instrument(HarmonyEvent::HarmonyStarted).is_ok() {
break;
}
}
handle
}
async fn handle_harmony_events() {
instrumentation::subscribe("Harmony TUI Logger", |_| {
// TODO: Display events in the TUI
});
}
pub struct HarmonyTUI<T: Topology> {

iobench/Cargo.toml (17 lines) Normal file
View File

@@ -0,0 +1,17 @@
[package]
name = "iobench"
edition = "2024"
version = "1.0.0"
license = "AGPL-3.0-or-later"
description = "A small command line utility to run fio benchmarks on localhost or remote ssh or kubernetes host. Was born out of a need to benchmark various ceph configurations!"
[dependencies]
clap = { version = "4.0", features = ["derive"] }
chrono = "0.4"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
csv = "1.1"
num_cpus = "1.13"
[workspace]

iobench/dash/README.md (10 lines) Normal file
View File

@@ -0,0 +1,10 @@
This project was generated mostly by Gemini but it works so... :)
## To run iobench dashboard
```bash
virtualenv venv
source venv/bin/activate
pip install -r requirements_freeze.txt
python iobench-dash-v4.py
```

View File

@@ -0,0 +1,229 @@
import dash
from dash import dcc, html, Input, Output, State, clientside_callback, ClientsideFunction
import plotly.express as px
import pandas as pd
import dash_bootstrap_components as dbc
import io
# --- Data Loading and Preparation ---
# csv_data = """label,test_name,iops,bandwidth_kibps,latency_mean_ms,latency_stddev_ms
# Ceph HDD Only,read-4k-sync-test,1474.302,5897,0.673,0.591
# Ceph HDD Only,write-4k-sync-test,14.126,56,27.074,7.046
# Ceph HDD Only,randread-4k-sync-test,225.140,900,4.436,6.918
# Ceph HDD Only,randwrite-4k-sync-test,13.129,52,34.891,10.859
# Ceph HDD Only,multiread-4k-sync-test,6873.675,27494,0.578,0.764
# Ceph HDD Only,multiwrite-4k-sync-test,57.135,228,38.660,11.293
# Ceph HDD Only,multirandread-4k-sync-test,2451.376,9805,1.626,2.515
# Ceph HDD Only,multirandwrite-4k-sync-test,54.642,218,33.492,13.111
# Ceph 2 Hosts WAL+DB SSD and 1 Host HDD,read-4k-sync-test,1495.700,5982,0.664,1.701
# Ceph 2 Hosts WAL+DB SSD and 1 Host HDD,write-4k-sync-test,16.990,67,17.502,9.908
# Ceph 2 Hosts WAL+DB SSD and 1 Host HDD,randread-4k-sync-test,159.256,637,6.274,9.232
# Ceph 2 Hosts WAL+DB SSD and 1 Host HDD,randwrite-4k-sync-test,16.693,66,24.094,16.099
# Ceph 2 Hosts WAL+DB SSD and 1 Host HDD,multiread-4k-sync-test,7305.559,29222,0.544,1.338
# Ceph 2 Hosts WAL+DB SSD and 1 Host HDD,multiwrite-4k-sync-test,52.260,209,34.891,17.576
# Ceph 2 Hosts WAL+DB SSD and 1 Host HDD,multirandread-4k-sync-test,700.606,2802,5.700,10.429
# Ceph 2 Hosts WAL+DB SSD and 1 Host HDD,multirandwrite-4k-sync-test,52.723,210,29.709,25.829
# Ceph 2 Hosts WAL+DB SSD Only,randwrite-4k-sync-test,90.037,360,3.617,8.321
# Ceph WAL+DB SSD During Rebuild,randwrite-4k-sync-test,41.008,164,10.138,19.333
# Ceph WAL+DB SSD OSD HDD,read-4k-sync-test,1520.299,6081,0.654,1.539
# Ceph WAL+DB SSD OSD HDD,write-4k-sync-test,78.528,314,4.074,9.101
# Ceph WAL+DB SSD OSD HDD,randread-4k-sync-test,153.303,613,6.518,9.036
# Ceph WAL+DB SSD OSD HDD,randwrite-4k-sync-test,48.677,194,8.785,20.356
# Ceph WAL+DB SSD OSD HDD,multiread-4k-sync-test,6804.880,27219,0.584,1.422
# Ceph WAL+DB SSD OSD HDD,multiwrite-4k-sync-test,311.513,1246,4.978,9.458
# Ceph WAL+DB SSD OSD HDD,multirandread-4k-sync-test,581.756,2327,6.869,10.204
# Ceph WAL+DB SSD OSD HDD,multirandwrite-4k-sync-test,120.556,482,13.463,25.440
# """
#
# df = pd.read_csv(io.StringIO(csv_data))
df = pd.read_csv("iobench.csv") # Replace with the actual file path
df['bandwidth_mbps'] = df['bandwidth_kibps'] / 1024
# --- App Initialization and Global Settings ---
app = dash.Dash(__name__, external_stylesheets=[dbc.themes.FLATLY])
# Create master lists of options for checklists
unique_labels = sorted(df['label'].unique())
unique_tests = sorted(df['test_name'].unique())
# Create a consistent color map for each unique label
color_map = {label: color for label, color in zip(unique_labels, px.colors.qualitative.Plotly)}
# --- App Layout ---
app.layout = dbc.Container([
# Header
dbc.Row(dbc.Col(html.H1("Ceph iobench Performance Dashboard", className="text-primary"),), className="my-4 text-center"),
# Controls and Graphs Row
dbc.Row([
# Control Panel Column
dbc.Col([
dbc.Card([
dbc.CardBody([
html.H4("Control Panel", className="card-title"),
html.Hr(),
# Metric Selection
dbc.Label("1. Select Metrics to Display:", html_for="metric-checklist", className="fw-bold"),
dcc.Checklist(
id='metric-checklist',
options=[
{'label': 'IOPS', 'value': 'iops'},
{'label': 'Latency (ms)', 'value': 'latency_mean_ms'},
{'label': 'Bandwidth (MB/s)', 'value': 'bandwidth_mbps'}
],
value=['iops', 'latency_mean_ms', 'bandwidth_mbps'], # Default selection
labelClassName="d-block"
),
html.Hr(),
# Configuration Selection
dbc.Label("2. Select Configurations:", html_for="config-checklist", className="fw-bold"),
dbc.ButtonGroup([
dbc.Button("All", id="config-select-all", n_clicks=0, color="primary", outline=True, size="sm"),
dbc.Button("None", id="config-select-none", n_clicks=0, color="primary", outline=True, size="sm"),
], className="mb-2"),
dcc.Checklist(
id='config-checklist',
options=[{'label': label, 'value': label} for label in unique_labels],
value=unique_labels, # Select all by default
labelClassName="d-block"
),
html.Hr(),
# Test Name Selection
dbc.Label("3. Select Tests:", html_for="test-checklist", className="fw-bold"),
dbc.ButtonGroup([
dbc.Button("All", id="test-select-all", n_clicks=0, color="primary", outline=True, size="sm"),
dbc.Button("None", id="test-select-none", n_clicks=0, color="primary", outline=True, size="sm"),
], className="mb-2"),
dcc.Checklist(
id='test-checklist',
options=[{'label': test, 'value': test} for test in unique_tests],
value=unique_tests, # Select all by default
labelClassName="d-block"
),
])
], className="mb-4")
], width=12, lg=4),
# Graph Display Column
dbc.Col(id='graph-container', width=12, lg=8)
])
], fluid=True)
# --- Callbacks ---
# Callback to handle "Select All" / "Select None" for configurations
@app.callback(
Output('config-checklist', 'value'),
Input('config-select-all', 'n_clicks'),
Input('config-select-none', 'n_clicks'),
prevent_initial_call=True
)
def select_all_none_configs(all_clicks, none_clicks):
ctx = dash.callback_context
if not ctx.triggered:
return dash.no_update
button_id = ctx.triggered[0]['prop_id'].split('.')[0]
if button_id == 'config-select-all':
return unique_labels
elif button_id == 'config-select-none':
return []
return dash.no_update
# Callback to handle "Select All" / "Select None" for tests
@app.callback(
Output('test-checklist', 'value'),
Input('test-select-all', 'n_clicks'),
Input('test-select-none', 'n_clicks'),
prevent_initial_call=True
)
def select_all_none_tests(all_clicks, none_clicks):
ctx = dash.callback_context
if not ctx.triggered:
return dash.no_update
button_id = ctx.triggered[0]['prop_id'].split('.')[0]
if button_id == 'test-select-all':
return unique_tests
elif button_id == 'test-select-none':
return []
return dash.no_update
# Main callback to update graphs based on all selections
@app.callback(
Output('graph-container', 'children'),
[Input('metric-checklist', 'value'),
Input('config-checklist', 'value'),
Input('test-checklist', 'value')]
)
def update_graphs(selected_metrics, selected_configs, selected_tests):
"""
This function is triggered when any control's value changes.
It generates and returns a list of graphs based on all user selections.
"""
# Handle cases where no selection is made to prevent errors and show a helpful message
if not all([selected_metrics, selected_configs, selected_tests]):
return dbc.Alert(
"Please select at least one item from each category (Metric, Configuration, and Test) to view data.",
color="info",
className="mt-4"
)
# Filter the DataFrame based on all selected criteria
filtered_df = df[df['label'].isin(selected_configs) & df['test_name'].isin(selected_tests)]
# If the filtered data is empty after selection, inform the user
if filtered_df.empty:
return dbc.Alert("No data available for the current selection.", color="warning", className="mt-4")
graph_list = []
metric_titles = {
'iops': 'IOPS Comparison (Higher is Better)',
'latency_mean_ms': 'Mean Latency (ms) Comparison (Lower is Better)',
'bandwidth_mbps': 'Bandwidth (MB/s) Comparison (Higher is Better)'
}
for metric in selected_metrics:
sort_order = 'total ascending' if metric == 'latency_mean_ms' else 'total descending'
error_y_param = 'latency_stddev_ms' if metric == 'latency_mean_ms' else None
fig = px.bar(
filtered_df,
x='test_name',
y=metric,
color='label',
barmode='group',
color_discrete_map=color_map,
error_y=error_y_param,
title=metric_titles.get(metric, metric),
labels={
"test_name": "Benchmark Test Name",
"iops": "IOPS",
"latency_mean_ms": "Mean Latency (ms)",
"bandwidth_mbps": "Bandwidth (MB/s)",
"label": "Cluster Configuration"
}
)
fig.update_layout(
height=500,
xaxis_title=None,
legend_title="Configuration",
title_x=0.5,
xaxis={'categoryorder': sort_order},
xaxis_tickangle=-45,
margin=dict(b=120) # Add bottom margin to prevent tick labels from being cut off
)
graph_list.append(dbc.Row(dbc.Col(dcc.Graph(figure=fig)), className="mb-4"))
return graph_list
# --- Run the App ---
if __name__ == '__main__':
app.run(debug=True)

View File

@@ -0,0 +1,29 @@
blinker==1.9.0
certifi==2025.7.14
charset-normalizer==3.4.2
click==8.2.1
dash==3.2.0
dash-bootstrap-components==2.0.3
Flask==3.1.1
idna==3.10
importlib_metadata==8.7.0
itsdangerous==2.2.0
Jinja2==3.1.6
MarkupSafe==3.0.2
narwhals==2.0.1
nest-asyncio==1.6.0
numpy==2.3.2
packaging==25.0
pandas==2.3.1
plotly==6.2.0
python-dateutil==2.9.0.post0
pytz==2025.2
requests==2.32.4
retrying==1.4.1
setuptools==80.9.0
six==1.17.0
typing_extensions==4.14.1
tzdata==2025.2
urllib3==2.5.0
Werkzeug==3.1.3
zipp==3.23.0

iobench/deployment.yaml (41 lines) Normal file
View File

@@ -0,0 +1,41 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: iobench
labels:
app: iobench
spec:
replicas: 1
selector:
matchLabels:
app: iobench
template:
metadata:
labels:
app: iobench
spec:
containers:
- name: fio
image: juicedata/fio:latest # Replace with your preferred fio image
imagePullPolicy: IfNotPresent
command: [ "sleep", "infinity" ] # Keeps the container running for kubectl exec
volumeMounts:
- name: iobench-pvc
mountPath: /data # Mount the PVC at /data
volumes:
- name: iobench-pvc
persistentVolumeClaim:
claimName: iobench-pvc # Matches your PVC name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: iobench-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: ceph-block

iobench/src/main.rs (253 lines) Normal file
View File

@@ -0,0 +1,253 @@
use std::fs;
use std::io::{self, Write};
use std::process::{Command, Stdio};
use std::thread;
use std::time::Duration;
use chrono::Local;
use clap::Parser;
use serde::{Deserialize, Serialize};
/// A simple yet powerful I/O benchmarking tool using fio.
#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
struct Args {
/// Target for the benchmark.
/// Formats:
/// - localhost (default)
/// - ssh/{user}@{host}
/// - ssh/{user}@{host}:{port}
/// - k8s/{namespace}/{pod}
#[arg(short, long, default_value = "localhost")]
target: String,
#[arg(short, long, default_value = ".")]
benchmark_dir: String,
/// Comma-separated list of tests to run.
/// Available tests: read, write, randread, randwrite,
/// multiread, multiwrite, multirandread, multirandwrite.
#[arg(long, default_value = "read,write,randread,randwrite,multiread,multiwrite,multirandread,multirandwrite")]
tests: String,
/// Duration of each test in seconds.
#[arg(long, default_value_t = 15)]
duration: u64,
/// Output directory for results.
/// Defaults to ./iobench-{current_datetime}.
#[arg(long)]
output_dir: Option<String>,
/// The size of the test file for fio.
#[arg(long, default_value = "1G")]
size: String,
/// The block size for I/O operations.
#[arg(long, default_value = "4k")]
block_size: String,
}
#[derive(Debug, Serialize, Deserialize)]
struct FioOutput {
jobs: Vec<FioJobResult>,
}
#[derive(Debug, Serialize, Deserialize)]
struct FioJobResult {
jobname: String,
read: FioMetrics,
write: FioMetrics,
}
#[derive(Debug, Serialize, Deserialize)]
struct FioMetrics {
bw: f64,
iops: f64,
clat_ns: LatencyMetrics,
}
#[derive(Debug, Serialize, Deserialize)]
struct LatencyMetrics {
mean: f64,
stddev: f64,
}
#[derive(Debug, Serialize)]
struct BenchmarkResult {
test_name: String,
iops: f64,
bandwidth_kibps: f64,
latency_mean_ms: f64,
latency_stddev_ms: f64,
}
fn main() -> io::Result<()> {
let args = Args::parse();
let output_dir = args.output_dir.unwrap_or_else(|| {
format!("./iobench-{}", Local::now().format("%Y-%m-%d-%H%M%S"))
});
fs::create_dir_all(&output_dir)?;
let tests_to_run: Vec<&str> = args.tests.split(',').collect();
let mut results = Vec::new();
for test in tests_to_run {
println!("--------------------------------------------------");
println!("Running test: {}", test);
let (rw, numjobs) = match test {
"read" => ("read", 1),
"write" => ("write", 1),
"randread" => ("randread", 1),
"randwrite" => ("randwrite", 1),
"multiread" => ("read", 4),
"multiwrite" => ("write", 4),
"multirandread" => ("randread", 4),
"multirandwrite" => ("randwrite", 4),
_ => {
eprintln!("Unknown test: {}. Skipping.", test);
continue;
}
};
let test_name = format!("{}-{}-sync-test", test, args.block_size);
let fio_command = format!(
"fio --filename={}/iobench_testfile --direct=1 --fsync=1 --rw={} --bs={} --numjobs={} --iodepth=1 --runtime={} --time_based --group_reporting --name={} --size={} --output-format=json",
args.benchmark_dir, rw, args.block_size, numjobs, args.duration, test_name, args.size
);
println!("Executing command:\n{}\n", fio_command);
let output = match run_command(&args.target, &fio_command) {
Ok(out) => out,
Err(e) => {
eprintln!("Failed to execute command for test {}: {}", test, e);
continue;
}
};
let result = parse_fio_output(&output, &test_name, rw);
// TODO store raw fio output and print it
match result {
Ok(res) => {
results.push(res);
}
Err(e) => {
eprintln!("Error parsing fio output for test {}: {}", test, e);
eprintln!("Raw output:\n{}", output);
}
}
println!("{output}");
println!("Test {} completed.", test);
// A brief pause to let the system settle before the next test.
thread::sleep(Duration::from_secs(2));
}
// Cleanup the test file on the target
println!("--------------------------------------------------");
println!("Cleaning up test file on target...");
let cleanup_command = "rm -f ./iobench_testfile";
if let Err(e) = run_command(&args.target, cleanup_command) {
eprintln!("Warning: Failed to clean up test file on target: {}", e);
} else {
println!("Cleanup successful.");
}
if results.is_empty() {
println!("\nNo benchmark results to display.");
return Ok(());
}
// Output results to a CSV file for easy analysis
let csv_path = format!("{}/summary.csv", output_dir);
let mut wtr = csv::Writer::from_path(&csv_path)?;
for result in &results {
wtr.serialize(result)?;
}
wtr.flush()?;
println!("\nBenchmark summary saved to {}", csv_path);
println!("\n--- Benchmark Results Summary ---");
println!("{:<25} {:>10} {:>18} {:>20} {:>22}", "Test Name", "IOPS", "Bandwidth (KiB/s)", "Latency Mean (ms)", "Latency StdDev (ms)");
println!("{:-<98}", "");
for result in results {
println!("{:<25} {:>10.2} {:>18.2} {:>20.4} {:>22.4}", result.test_name, result.iops, result.bandwidth_kibps, result.latency_mean_ms, result.latency_stddev_ms);
}
Ok(())
}
fn run_command(target: &str, command: &str) -> io::Result<String> {
let (program, args) = if target == "localhost" {
("sudo", vec!["sh".to_string(), "-c".to_string(), command.to_string()])
} else if target.starts_with("ssh/") {
let target_str = target.strip_prefix("ssh/").unwrap();
let ssh_target;
let mut ssh_args = vec!["-o".to_string(), "StrictHostKeyChecking=no".to_string()];
let port_parts: Vec<&str> = target_str.split(':').collect();
if port_parts.len() == 2 {
ssh_target = port_parts[0].to_string();
ssh_args.push("-p".to_string());
ssh_args.push(port_parts[1].to_string());
} else {
ssh_target = target_str.to_string();
}
ssh_args.push(ssh_target);
ssh_args.push(format!("sudo sh -c '{}'", command));
("ssh", ssh_args)
} else if target.starts_with("k8s/") {
let parts: Vec<&str> = target.strip_prefix("k8s/").unwrap().split('/').collect();
if parts.len() != 2 {
return Err(io::Error::new(io::ErrorKind::InvalidInput, "Invalid k8s target format. Expected k8s/{namespace}/{pod}"));
}
let namespace = parts[0];
let pod = parts[1];
("kubectl", vec!["exec".to_string(), "-n".to_string(), namespace.to_string(), pod.to_string(), "--".to_string(), "sh".to_string(), "-c".to_string(), command.to_string()])
} else {
return Err(io::Error::new(io::ErrorKind::InvalidInput, "Invalid target format"));
};
let mut cmd = Command::new(program);
cmd.args(&args);
cmd.stdout(Stdio::piped()).stderr(Stdio::piped());
let child = cmd.spawn()?;
let output = child.wait_with_output()?;
if !output.status.success() {
eprintln!("Command failed with status: {}", output.status);
io::stderr().write_all(&output.stderr)?;
return Err(io::Error::new(io::ErrorKind::Other, "Command execution failed"));
}
String::from_utf8(output.stdout)
.map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
}
fn parse_fio_output(output: &str, test_name: &str, rw: &str) -> Result<BenchmarkResult, String> {
let fio_data: FioOutput = serde_json::from_str(output)
.map_err(|e| format!("Failed to deserialize fio JSON: {}", e))?;
let job_result = fio_data.jobs.iter()
.find(|j| j.jobname == test_name)
.ok_or_else(|| format!("Could not find job result for '{}' in fio output", test_name))?;
let metrics = if rw.contains("read") {
&job_result.read
} else {
&job_result.write
};
Ok(BenchmarkResult {
test_name: test_name.to_string(),
iops: metrics.iops,
bandwidth_kibps: metrics.bw,
latency_mean_ms: metrics.clat_ns.mean / 1_000_000.0,
latency_stddev_ms: metrics.clat_ns.stddev / 1_000_000.0,
})
}

View File

@@ -12,7 +12,7 @@ env_logger = { workspace = true }
yaserde = { git = "https://github.com/jggc/yaserde.git" }
yaserde_derive = { git = "https://github.com/jggc/yaserde.git" }
xml-rs = "0.8"
thiserror = "1.0"
thiserror.workspace = true
async-trait = { workspace = true }
tokio = { workspace = true }
uuid = { workspace = true }

View File

@@ -30,15 +30,15 @@ pub struct CaddyGeneral {
#[yaserde(rename = "TlsDnsApiKey")]
pub tls_dns_api_key: MaybeString,
#[yaserde(rename = "TlsDnsSecretApiKey")]
pub tls_dns_secret_api_key: MaybeString,
pub tls_dns_secret_api_key: Option<MaybeString>,
#[yaserde(rename = "TlsDnsOptionalField1")]
pub tls_dns_optional_field1: MaybeString,
pub tls_dns_optional_field1: Option<MaybeString>,
#[yaserde(rename = "TlsDnsOptionalField2")]
pub tls_dns_optional_field2: MaybeString,
pub tls_dns_optional_field2: Option<MaybeString>,
#[yaserde(rename = "TlsDnsOptionalField3")]
pub tls_dns_optional_field3: MaybeString,
pub tls_dns_optional_field3: Option<MaybeString>,
#[yaserde(rename = "TlsDnsOptionalField4")]
pub tls_dns_optional_field4: MaybeString,
pub tls_dns_optional_field4: Option<MaybeString>,
#[yaserde(rename = "TlsDnsPropagationTimeout")]
pub tls_dns_propagation_timeout: Option<MaybeString>,
#[yaserde(rename = "TlsDnsPropagationTimeoutPeriod")]
@@ -47,6 +47,8 @@ pub struct CaddyGeneral {
pub tls_dns_propagation_delay: Option<MaybeString>,
#[yaserde(rename = "TlsDnsPropagationResolvers")]
pub tls_dns_propagation_resolvers: MaybeString,
#[yaserde(rename = "TlsDnsEchDomain")]
pub tls_dns_ech_domain: Option<MaybeString>,
pub accesslist: MaybeString,
#[yaserde(rename = "DisableSuperuser")]
pub disable_superuser: Option<i32>,
@@ -56,6 +58,10 @@ pub struct CaddyGeneral {
pub http_version: Option<MaybeString>,
#[yaserde(rename = "HttpVersions")]
pub http_versions: Option<MaybeString>,
pub timeout_read_body: Option<MaybeString>,
pub timeout_read_header: Option<MaybeString>,
pub timeout_write: Option<MaybeString>,
pub timeout_idle: Option<MaybeString>,
#[yaserde(rename = "LogCredentials")]
pub log_credentials: MaybeString,
#[yaserde(rename = "LogAccessPlain")]

View File

@@ -0,0 +1,113 @@
use yaserde::{MaybeString, RawXml};
use yaserde_derive::{YaDeserialize, YaSerialize};
// This is the top-level struct that represents the entire <dnsmasq> element.
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]
pub struct DnsMasq {
#[yaserde(attribute = true)]
pub version: String,
#[yaserde(attribute = true)]
pub persisted_at: Option<String>,
pub enable: u8,
pub regdhcp: u8,
pub regdhcpstatic: u8,
pub dhcpfirst: u8,
pub strict_order: u8,
pub domain_needed: u8,
pub no_private_reverse: u8,
pub no_resolv: Option<u8>,
pub log_queries: u8,
pub no_hosts: u8,
pub strictbind: u8,
pub dnssec: u8,
pub regdhcpdomain: MaybeString,
pub interface: Option<String>,
pub port: Option<u32>,
pub dns_forward_max: MaybeString,
pub cache_size: MaybeString,
pub local_ttl: MaybeString,
pub add_mac: Option<MaybeString>,
pub add_subnet: Option<u8>,
pub strip_subnet: Option<u8>,
pub no_ident: Option<u8>,
pub dhcp: Option<Dhcp>,
pub dhcp_ranges: Vec<DhcpRange>,
pub dhcp_options: Vec<DhcpOptions>,
pub dhcp_boot: Vec<DhcpBoot>,
pub dhcp_tags: Vec<RawXml>,
}
// Represents the <dhcp> element and its nested fields.
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]
#[yaserde(rename = "dhcp")]
pub struct Dhcp {
pub no_interface: MaybeString,
pub fqdn: u8,
pub domain: MaybeString,
pub local: Option<MaybeString>,
pub lease_max: MaybeString,
pub authoritative: u8,
pub default_fw_rules: u8,
pub reply_delay: MaybeString,
pub enable_ra: u8,
pub nosync: u8,
}
// Represents a single <dhcp_ranges> element.
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]
#[yaserde(rename = "dhcp_ranges")]
pub struct DhcpRange {
#[yaserde(attribute = true)]
pub uuid: Option<String>,
pub interface: Option<String>,
pub set_tag: Option<MaybeString>,
pub start_addr: Option<String>,
pub end_addr: Option<String>,
pub subnet_mask: Option<MaybeString>,
pub constructor: Option<MaybeString>,
pub mode: Option<MaybeString>,
pub prefix_len: Option<MaybeString>,
pub lease_time: Option<MaybeString>,
pub domain_type: Option<String>,
pub domain: Option<MaybeString>,
pub nosync: Option<u8>,
pub ra_mode: Option<MaybeString>,
pub ra_priority: Option<MaybeString>,
pub ra_mtu: Option<MaybeString>,
pub ra_interval: Option<MaybeString>,
pub ra_router_lifetime: Option<MaybeString>,
pub description: Option<MaybeString>,
}
// Represents a single <dhcp_boot> element.
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]
#[yaserde(rename = "dhcp_boot")]
pub struct DhcpBoot {
#[yaserde(attribute = true)]
pub uuid: String,
pub interface: MaybeString,
pub tag: MaybeString,
pub filename: Option<String>,
pub servername: String,
pub address: String,
pub description: Option<String>,
}
// Represents a single <dhcp_options> element.
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]
#[yaserde(rename = "dhcp_options")]
pub struct DhcpOptions {
#[yaserde(attribute = true)]
pub uuid: String,
#[yaserde(rename = "type")]
pub _type: String,
pub option: MaybeString,
pub option6: MaybeString,
pub interface: MaybeString,
pub tag: MaybeString,
pub set_tag: MaybeString,
pub value: String,
pub force: Option<u8>,
pub description: MaybeString,
}

View File

@@ -8,10 +8,12 @@ pub struct Interface {
#[yaserde(rename = "if")]
pub physical_interface_name: String,
pub descr: Option<MaybeString>,
pub mtu: Option<MaybeString>,
pub enable: MaybeString,
pub lock: Option<MaybeString>,
#[yaserde(rename = "spoofmac")]
pub spoof_mac: Option<MaybeString>,
pub mss: Option<MaybeString>,
pub ipaddr: Option<MaybeString>,
pub dhcphostname: Option<MaybeString>,
#[yaserde(rename = "alias-address")]

View File

@@ -1,5 +1,6 @@
mod caddy;
mod dhcpd;
pub mod dnsmasq;
mod haproxy;
mod interfaces;
mod opnsense;

View File

@@ -1,3 +1,4 @@
use crate::dnsmasq::DnsMasq;
use crate::HAProxy;
use crate::{data::dhcpd::DhcpInterface, xml_utils::to_xml_str};
use log::error;
@@ -22,7 +23,7 @@ pub struct OPNsense {
pub load_balancer: Option<LoadBalancer>,
pub rrd: Option<RawXml>,
pub ntpd: Ntpd,
pub widgets: Widgets,
pub widgets: Option<Widgets>,
pub revision: Revision,
#[yaserde(rename = "OPNsense")]
pub opnsense: OPNsenseXmlSection,
@@ -45,7 +46,7 @@ pub struct OPNsense {
#[yaserde(rename = "Pischem")]
pub pischem: Option<Pischem>,
pub ifgroups: Ifgroups,
pub dnsmasq: Option<RawXml>,
pub dnsmasq: Option<DnsMasq>,
}
impl From<String> for OPNsense {
@@ -165,9 +166,9 @@ pub struct Sysctl {
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]
pub struct SysctlItem {
pub descr: MaybeString,
pub tunable: String,
pub value: MaybeString,
pub descr: Option<MaybeString>,
pub tunable: Option<String>,
pub value: Option<MaybeString>,
}
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]
@@ -182,8 +183,8 @@ pub struct System {
pub domain: String,
pub group: Vec<Group>,
pub user: Vec<User>,
pub nextuid: u32,
pub nextgid: u32,
pub nextuid: Option<u32>,
pub nextgid: Option<u32>,
pub timezone: String,
pub timeservers: String,
pub webgui: WebGui,
@@ -242,6 +243,7 @@ pub struct Ssh {
pub passwordauth: u8,
pub keysig: MaybeString,
pub permitrootlogin: u8,
pub rekeylimit: Option<MaybeString>,
}
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]
@@ -271,6 +273,7 @@ pub struct Group {
pub member: Vec<u32>,
#[yaserde(rename = "priv")]
pub priv_field: String,
pub source_networks: Option<MaybeString>,
}
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]
@@ -1506,7 +1509,7 @@ pub struct Vlans {
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]
pub struct Bridges {
pub bridged: MaybeString,
pub bridged: Option<MaybeString>,
}
#[derive(Default, PartialEq, Debug, YaSerialize, YaDeserialize)]

View File

@@ -20,6 +20,7 @@ russh-sftp = "2.0.6"
serde_json = "1.0.133"
tokio-util = { version = "0.7.13", features = ["codec"] }
tokio-stream = "0.1.17"
uuid.workspace = true
[dev-dependencies]
pretty_assertions.workspace = true

View File

@@ -4,8 +4,8 @@ use crate::{
config::{SshConfigManager, SshCredentials, SshOPNSenseShell},
error::Error,
modules::{
caddy::CaddyConfig, dhcp::DhcpConfig, dns::DnsConfig, load_balancer::LoadBalancerConfig,
tftp::TftpConfig,
caddy::CaddyConfig, dhcp_legacy::DhcpConfigLegacyISC, dns::DnsConfig,
dnsmasq::DhcpConfigDnsMasq, load_balancer::LoadBalancerConfig, tftp::TftpConfig,
},
};
use log::{debug, info, trace, warn};
@@ -43,23 +43,27 @@ impl Config {
})
}
pub fn dhcp(&mut self) -> DhcpConfig {
DhcpConfig::new(&mut self.opnsense, self.shell.clone())
pub fn dhcp_legacy_isc(&mut self) -> DhcpConfigLegacyISC<'_> {
DhcpConfigLegacyISC::new(&mut self.opnsense, self.shell.clone())
}
pub fn dns(&mut self) -> DnsConfig {
pub fn dhcp(&mut self) -> DhcpConfigDnsMasq<'_> {
DhcpConfigDnsMasq::new(&mut self.opnsense, self.shell.clone())
}
pub fn dns(&mut self) -> DnsConfig<'_> {
DnsConfig::new(&mut self.opnsense)
}
pub fn tftp(&mut self) -> TftpConfig {
pub fn tftp(&mut self) -> TftpConfig<'_> {
TftpConfig::new(&mut self.opnsense, self.shell.clone())
}
pub fn caddy(&mut self) -> CaddyConfig {
pub fn caddy(&mut self) -> CaddyConfig<'_> {
CaddyConfig::new(&mut self.opnsense, self.shell.clone())
}
pub fn load_balancer(&mut self) -> LoadBalancerConfig {
pub fn load_balancer(&mut self) -> LoadBalancerConfig<'_> {
LoadBalancerConfig::new(&mut self.opnsense, self.shell.clone())
}
@@ -67,6 +71,10 @@ impl Config {
self.shell.upload_folder(source, destination).await
}
pub async fn upload_file_content(&self, path: &str, content: &str) -> Result<String, Error> {
self.shell.write_content_to_file(content, path).await
}
/// Checks in config file if system.firmware.plugins csv field contains the specified package
/// name.
///
@@ -200,7 +208,7 @@ impl Config {
#[cfg(test)]
mod tests {
use crate::config::{DummyOPNSenseShell, LocalFileConfigManager};
use crate::modules::dhcp::DhcpConfig;
use crate::modules::dhcp_legacy::DhcpConfigLegacyISC;
use std::fs;
use std::net::Ipv4Addr;
@@ -215,6 +223,9 @@ mod tests {
"src/tests/data/config-vm-test.xml",
"src/tests/data/config-structure.xml",
"src/tests/data/config-full-1.xml",
"src/tests/data/config-full-ncd0.xml",
"src/tests/data/config-full-25.7.xml",
"src/tests/data/config-full-25.7-dummy-dnsmasq-options.xml",
] {
let mut test_file_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
test_file_path.push(path);
@@ -257,7 +268,7 @@ mod tests {
println!("Config {:?}", config);
let mut dhcp_config = DhcpConfig::new(&mut config.opnsense, shell);
let mut dhcp_config = DhcpConfigLegacyISC::new(&mut config.opnsense, shell);
dhcp_config
.add_static_mapping(
"00:00:00:00:00:00",

Some files were not shown because too many files have changed in this diff Show More