fix: added securityContext.runAsUser: null to the argo-cd Helm chart so that on OKD the user ID is randomly assigned within the UID range of the designated namespace #156

Merged
wjro merged 2 commits from fix/argo-cd-redis into master 2025-09-12 13:54:03 +00:00
2 changed files with 11 additions and 10 deletions
Showing only changes of commit c15bd53331


@@ -160,6 +160,9 @@ global:
## Used for ingresses, certificates, SSO, notifications, etc.
domain: {domain}
securityContext:
runAsUser: null
# -- Runtime class name for all components
runtimeClassName: ""
@@ -471,6 +474,13 @@ redis:
# -- Redis name
name: redis
serviceAccount:
create: true
securityContext:
runAsUser: null
## Redis image
image:
# -- Redis repository

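For reference, on OKD the restricted SCC assigns pod UIDs from the namespace's `openshift.io/sa.scc.uid-range` annotation whenever the pod spec does not pin `runAsUser`, so nulling the chart defaults lets admission pick a UID that is valid for the namespace. A minimal sketch of that behaviour, with hypothetical namespace, range, and image values (none of these are taken from this PR):

```yaml
# Illustrative only: namespace name, UID range, and image are examples,
# not values from this PR.
apiVersion: v1
kind: Namespace
metadata:
  name: argocd
  annotations:
    # OKD allocates a distinct range per namespace; pods that do not pin
    # runAsUser get a UID from this range at admission time.
    openshift.io/sa.scc.uid-range: "1000650000/10000"
---
apiVersion: v1
kind: Pod
metadata:
  name: argocd-redis
  namespace: argocd
spec:
  containers:
    - name: redis
      image: redis:7
      # No runAsUser here because the chart value is null; the restricted SCC
      # injects one from the namespace range, e.g. runAsUser: 1000650000.
      securityContext:
        allowPrivilegeEscalation: false
```

Keeping the upstream chart default (for example Redis's `runAsUser: 999`) would presumably fail SCC validation, since that UID falls outside the per-namespace range.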

@@ -12,9 +12,6 @@ use std::process::Command;
use crate::modules::k8s::ingress::{K8sIngressScore, PathType};
use crate::modules::monitoring::kube_prometheus::crd::grafana_default_dashboard::build_default_dashboard;
use crate::modules::monitoring::kube_prometheus::crd::rhob_alertmanager_config::RHOBObservability;
use crate::modules::monitoring::kube_prometheus::crd::rhob_alertmanagers::{
Alertmanager, AlertmanagerSpec,
};
use crate::modules::monitoring::kube_prometheus::crd::rhob_grafana::{
Grafana, GrafanaDashboard, GrafanaDashboardSpec, GrafanaDatasource, GrafanaDatasourceConfig,
GrafanaDatasourceSpec, GrafanaSpec,
@@ -25,13 +22,8 @@ use crate::modules::monitoring::kube_prometheus::crd::rhob_monitoring_stack::{
use crate::modules::monitoring::kube_prometheus::crd::rhob_prometheus_rules::{
PrometheusRule, PrometheusRuleSpec, RuleGroup,
};
use crate::modules::monitoring::kube_prometheus::crd::rhob_prometheuses::{
AlertmanagerEndpoints, LabelSelector, PrometheusSpec, PrometheusSpecAlerting,
};
use crate::modules::monitoring::kube_prometheus::crd::rhob_prometheuses::LabelSelector;
use crate::modules::monitoring::kube_prometheus::crd::rhob_role::{
build_prom_role, build_prom_rolebinding, build_prom_service_account,
};
use crate::modules::monitoring::kube_prometheus::crd::rhob_service_monitor::{
ServiceMonitor, ServiceMonitorSpec,
};
@@ -94,7 +86,6 @@ impl<T: Topology + K8sclient + Ingress + PrometheusApplicationMonitoring<RHOBObs
self.ensure_grafana_operator().await?;
self.install_prometheus(inventory, topology, &client)
.await?;
self.install_client_kube_metrics().await?;

Is this deletion expected? Did it have any impact on the issue? Is it leaving any dead code behind (is `install_client_kube_metrics` still used)?
Review

This deletion was not expected; I was testing whether the namespaced kube-metrics deployment was necessary while using the cluster observability operator. With our current setup it is, since we deploy a ServiceMonitor that watches kube-metrics rather than a PodMonitor (or similar) for each deployment/service that is created.

self.install_grafana(inventory, topology, &client).await?;
self.install_receivers(&self.sender, &self.receivers)
.await?;
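To make the review point above concrete: the monitoring stack scrapes one kube-metrics Service through a single ServiceMonitor rather than creating a PodMonitor per workload, which is why the namespaced kube-metrics deployment is still required. A hedged sketch of what such a ServiceMonitor could look like (names, labels, port, and namespace are illustrative assumptions, not read from this repository; with the cluster observability operator the API group is `monitoring.rhobs/v1`, while upstream prometheus-operator uses `monitoring.coreos.com/v1`):

```yaml
# Hypothetical sketch: metadata, labels, and port names are illustrative.
apiVersion: monitoring.rhobs/v1
kind: ServiceMonitor
metadata:
  name: kube-metrics
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: kube-metrics        # matches the Service fronting the kube-metrics deployment
  endpoints:
    - port: http-metrics       # scrape the metrics port exposed by that Service
      interval: 30s
```

A PodMonitor-per-workload approach would remove the dependency on the kube-metrics Service, but at the cost of generating one monitor for every deployment/service the stack creates.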