Kubernetes is a CNCF-graduated open-source container orchestration system for automating the deployment, scaling, and management of containerized applications, including across hybrid-cloud environments.
`k` - alias for `kubectl`.

```shell
alias k=kubectl
echo 'alias k=kubectl' >> ~/.bashrc
```

If you prefer to run kubectl through sudo:

```shell
alias k='sudo kubectl'
echo "alias k='sudo kubectl'" >> ~/.bashrc
```

Replace `.bashrc` with `.zshrc` if you use zsh.

Cluster:
Abbreviated k / k8s / kubernetes; the set of nodes and the control plane that exposes apiservices.

Node:

- Control-plane node - the Kubernetes node that controls the cluster.
- Worker node - a Kubernetes node that runs specific workloads within the cluster.

Pod:

A group of containers and volumes that operate under an isolated network namespace.
Deployed by an operator (Portainer, Rancher, or a user) via a YAML manifest.
Example:

```shell
sudo kubectl apply -f ./kbve-manifest.yml
```

Replace `./kbve-manifest.yml` with your manifest's file name.

Labels are operator-defined key:value pairs associated with the pod.
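To make labels concrete, here is a minimal sketch of a Pod manifest; the name, label keys, and image are illustrative only, not taken from any real KBVE deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kbve-web            # hypothetical pod name
  labels:
    app: kbve-web           # operator-defined key:value pairs
    tier: frontend
spec:
  containers:
    - name: web
      image: nginx:1.25
```

Pods can then be selected by label, e.g. `sudo kubectl get pods -l app=kbve-web`.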
Install k3s
Note: We are using Ubuntu as the host operating system for k3s.
Update & Upgrade Ubuntu - Linux
```shell
apt-get update
apt-get upgrade -y
```

We recommend using the official k3s install script:

```shell
curl -sfL https://get.k3s.io | sh -
```

Optional: set up the kubectl alias/config to work with k3s by default.
```shell
cd ~
mkdir -p $HOME/.kube
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

`sudo kubectl get svc --all-namespaces` should return the generic k3s services running within the cluster.

To scan which ports are open locally:

```shell
sudo nmap -sU -sT -p0-65535 127.0.0.1
```

If nmap is missing, install it with `sudo apt-get install nmap`, then confirm.

Verification
k3s keeps its state under `/var/lib/rancher/k3s`. The bundled manifests, including `traefik.yaml`, live in:

```shell
cd /var/lib/rancher/k3s/server/manifests/
```

Access might require root.
```shell
sudo kubectl -h || k -h
```

Cluster:
```shell
sudo kubectl cluster-info
```

View full config minified:

```shell
sudo kubectl config view --minify
```

List namespaces:

```shell
sudo kubectl get namespace
```

Create a namespace, replacing $name with the string that defines the namespace:

```shell
sudo kubectl create namespace $name
```

Set namespace preference/default for the session:

```shell
sudo kubectl config set-context --current --namespace=$namespace-name
```

Validate the current namespace:

```shell
sudo kubectl config view --minify | grep namespace:
```

Get everything running in Kubernetes.

In all namespaces:

```shell
sudo kubectl get all --all-namespaces
```

In the current namespace (`default` by default):

```shell
sudo kubectl get all
```

Get services running in Kubernetes.

In all namespaces:

```shell
sudo kubectl get svc --all-namespaces
```

In the current namespace (`default` by default):

```shell
sudo kubectl get svc
```

Delete a service via $name:

```shell
sudo kubectl delete svc $name
```

Delete a deployment via $name:

```shell
sudo kubectl delete deployment.apps/$name
```

Delete a namespace, defined by $name:

```shell
sudo kubectl delete namespace $name
```

Get storage classes:

```shell
sudo kubectl get storageclasses
```

Over time, Kubernetes clusters accumulate stale namespaces, leftovers from decommissioned apps, experiments, or renamed projects.
These orphaned namespaces cause real problems: controllers like external-secrets continuously attempt to reconcile resources in them, generating errors such as "client is not allowed to get secrets" and polluting logs with noise that masks actual incidents.
```mermaid
flowchart LR
    A["App Decommissioned"] --> B["Namespace Left Behind"]
    B --> C["Stale SecretStores"]
    B --> D["Stale Ingresses"]
    B --> E["Stale Secrets"]
    C --> F["external-secrets\nreconciler errors"]
    D --> G["ingress-nginx\nsync churn"]
    E --> H["Unrotated credentials\nin cluster"]
    F --> I["Log noise masks\nreal incidents"]
    G --> I
    H --> J["Security risk"]
```
When a namespace is removed from Git but not from the cluster, its resources become orphans. Controllers keep trying to reconcile them, RBAC bindings may have been cleaned up, and the result is a steady stream of errors that drown out real problems.
Controller Errors
Operators like external-secrets and cert-manager continuously retry failed reconciliation on orphaned resources, wasting cycles and flooding logs.
Ingress Sync Churn
Stale Ingress resources trigger the ingress controller to re-sync on every loop, even though no backend exists to serve traffic.
Security Risk
Orphaned Secret resources may contain database credentials, API keys, or TLS certs that are never rotated and remain accessible to anyone with namespace-level RBAC.
Resource Waste
Orphaned Services of type LoadBalancer can hold cloud provider IPs, incurring cost for endpoints that serve nothing.
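One way to spot those costly orphans is to filter a service listing by type. The snippet below is a sketch: the sample text stands in for real `kubectl get svc --all-namespaces --no-headers` output (the namespaces, names, and IPs are invented), and in practice you would pipe kubectl output into the same awk filter.

```shell
#!/usr/bin/env sh
# Invented sample rows mimicking `kubectl get svc --all-namespaces --no-headers`:
# NAMESPACE  NAME       TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
sample='bugwars   web-lb     LoadBalancer   10.43.0.7    203.0.113.9   80/TCP
kilobase  api        ClusterIP      10.43.0.12   <none>        8080/TCP
old-lab   ingest-lb  LoadBalancer   10.43.0.20   203.0.113.10  9000/TCP'

# Column 3 is the service type; print namespace/name for each LoadBalancer,
# since those are the ones that may be holding a paid cloud IP.
echo "$sample" | awk '$3 == "LoadBalancer" { print $1 "/" $2 }'
```

Against a real cluster, replace the sample with `kubectl get svc --all-namespaces --no-headers`.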
```mermaid
flowchart TD
    Start["kubectl get ns"] --> Loop["For each namespace"]
    Loop --> Protected{"System\nnamespace?"}
    Protected -->|"Yes (kube-system, default, ...)"| Skip["Skip"]
    Protected -->|No| Count["Count workload resources"]
    Count --> Check{"pods + deploys +\nsts + jobs > 0?"}
    Check -->|Yes| Active["ACTIVE"]
    Check -->|No| Empty["EMPTY"]
    Empty --> Linger["Report lingering resources:\nsvcs, secrets, ingress,\nextSecrets, secretStores"]
    Active --> Report["Report workload counts"]
    Linger --> Next["Next namespace"]
    Report --> Next
    Skip --> Next
    Next --> Loop
```
The script classifies each namespace by checking for active workloads and lingering resources:
Active workload indicators — if any of these exist, the namespace is marked ACTIVE:
| Resource | Why it indicates activity |
|---|---|
| Pods | Running containers mean the namespace is in use |
| Deployments | Declared workloads, even if scaled to zero |
| StatefulSets | Stateful apps like databases or message queues |
| Jobs | Batch processing, migrations, or CronJob children |
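The four indicators in the table feed a single sum: if pods + deployments + statefulsets + jobs is zero, the namespace is EMPTY. That decision rule can be isolated as a tiny shell function (the counts passed in below are illustrative):

```shell
#!/usr/bin/env sh
# Sketch of the ACTIVE/EMPTY rule: any workload-bearing resource
# (pods, deployments, statefulsets, jobs) marks the namespace ACTIVE.
classify() {
  pods=$1; deploys=$2; sts=$3; jobs=$4
  total=$((pods + deploys + sts + jobs))
  if [ "$total" -eq 0 ]; then
    echo "EMPTY"
  else
    echo "ACTIVE"
  fi
}

classify 12 4 2 0   # a busy namespace
classify 0 0 0 0    # nothing declared: a cleanup candidate
```

Note that a single Deployment scaled to zero replicas still yields ACTIVE, which matches the "even if scaled to zero" row above.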
Lingering resource indicators — reported for EMPTY namespaces to help prioritize cleanup:
| Resource | Risk if orphaned |
|---|---|
| Services | LoadBalancer types hold cloud IPs, incur cost |
| Secrets | Unrotated credentials remain accessible |
| Ingresses | Trigger ingress controller sync loops |
| ExternalSecrets | Cause reconciler errors when SecretStore is broken |
| SecretStores | Fail RBAC validation, generate continuous error logs |
System namespaces (default, kube-system, kube-public, kube-node-lease) are automatically skipped.
The full script is located at `scripts/kubectl-namespace-orphan-check.sh` and can be run directly against any cluster with kubectl access.
```shell
#!/usr/bin/env bash
# kubectl-namespace-orphan-check.sh — Identify namespaces with no active workloads
# Safe: read-only, never deletes anything

set -euo pipefail

PROTECTED="default|kube-system|kube-public|kube-node-lease"

echo "=== Namespace Audit ==="
echo ""

for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  # Skip protected system namespaces
  if echo "$ns" | grep -qE "^($PROTECTED)$"; then
    continue
  fi

  pods=$(kubectl get pods -n "$ns" --no-headers 2>/dev/null | wc -l | tr -d ' ')
  deploys=$(kubectl get deployments -n "$ns" --no-headers 2>/dev/null | wc -l | tr -d ' ')
  statefulsets=$(kubectl get statefulsets -n "$ns" --no-headers 2>/dev/null | wc -l | tr -d ' ')
  jobs=$(kubectl get jobs -n "$ns" --no-headers 2>/dev/null | wc -l | tr -d ' ')
  svcs=$(kubectl get svc -n "$ns" --no-headers 2>/dev/null | wc -l | tr -d ' ')
  secrets=$(kubectl get secrets -n "$ns" --no-headers 2>/dev/null | wc -l | tr -d ' ')
  ingresses=$(kubectl get ingress -n "$ns" --no-headers 2>/dev/null | wc -l | tr -d ' ')
  externalsecrets=$(kubectl get externalsecrets -n "$ns" --no-headers 2>/dev/null | wc -l | tr -d ' ')
  secretstores=$(kubectl get secretstores -n "$ns" --no-headers 2>/dev/null | wc -l | tr -d ' ')

  total=$((pods + deploys + statefulsets + jobs))

  if [ "$total" -eq 0 ]; then
    echo "EMPTY $ns (svcs=$svcs secrets=$secrets ingress=$ingresses extSecrets=$externalsecrets secretStores=$secretstores)"
  else
    echo "ACTIVE $ns (pods=$pods deploy=$deploys sts=$statefulsets jobs=$jobs)"
  fi
done
```

Run the script from the repository root:
```shell
./scripts/kubectl-namespace-orphan-check.sh
```

Prerequisites: kubectl must be installed and configured with a valid kubeconfig pointing to the target cluster. The externalsecrets and secretstores resource checks require the External Secrets Operator CRDs; if they are not installed, those counts will silently return 0.
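That "silently return 0" behavior falls out of the `2>/dev/null | wc -l` pattern: when the resource type is unknown, kubectl writes an error to stderr and nothing to stdout, so the pipeline counts an empty stream. A standalone illustration, using a deliberately nonexistent command name in place of kubectl:

```shell
#!/usr/bin/env sh
# A failing command emits only stderr; with stderr discarded, wc -l sees an
# empty stream and reports 0 -- indistinguishable from a namespace that
# genuinely has zero of that resource.
count=$(definitely-not-kubectl get externalsecrets 2>/dev/null | wc -l | tr -d ' ')
echo "count=$count"
```

A real guard could first confirm the CRD exists, e.g. `kubectl api-resources | grep -q externalsecrets`, before trusting the count.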
Example output:

```
=== Namespace Audit ===

ACTIVE kilobase (pods=12 deploy=4 sts=2 jobs=0)
ACTIVE mc (pods=3 deploy=1 sts=0 jobs=0)
EMPTY bugwars (svcs=1 secrets=2 ingress=1 extSecrets=0 secretStores=1)
EMPTY old-staging (svcs=0 secrets=1 ingress=0 extSecrets=0 secretStores=0)
```

Run the audit to identify empty namespaces:
```shell
./scripts/kubectl-namespace-orphan-check.sh
```

Inspect lingering resources in each empty namespace before deleting:

```shell
kubectl get all,secrets,ingress,externalsecrets,secretstores -n bugwars
```

Confirm with your team that the namespace is truly decommissioned and no longer referenced by any active service, ArgoCD Application, or CI pipeline.

Delete the namespace (this removes all resources within it):

```shell
kubectl delete namespace bugwars
```

Verify the errors are gone by checking your observability pipeline (ClickHouse logs, Grafana dashboards) for reconciler errors referencing the deleted namespace.
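The verification step can be as simple as grepping recent controller logs for the deleted namespace's name. The log lines below are invented for illustration; in practice they would come from your log pipeline (kubectl logs, ClickHouse, Loki, etc.).

```shell
#!/usr/bin/env sh
# Invented sample of external-secrets controller log lines.
logs='{"level":"error","msg":"client is not allowed to get secrets","namespace":"bugwars"}
{"level":"info","msg":"reconciled SecretStore","namespace":"kilobase"}'

# Before deletion this finds the error; after `kubectl delete namespace bugwars`,
# NEW log lines should no longer match.
echo "$logs" | grep -c '"namespace":"bugwars"'
```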
This script is a proof of concept with known limitations:
- A namespace containing only CronJob resources (no running Job pods) will be flagged as EMPTY, even though it is still actively scheduled. The script checks Jobs, not CronJobs.
- A namespace containing only DaemonSet workloads (no Deployments, StatefulSets, or standalone Pods) will appear as EMPTY.
- A Deployment scaled to `replicas: 0` still counts as active (the Deployment object exists), but if it was deleted and only Pods were expected, the namespace may appear empty.
- Custom resources (e.g. KafkaTopic, PostgresCluster, VirtualService) are not checked. Extend the script with additional `kubectl get <crd>` calls for your environment.
- The script runs several `kubectl` calls per namespace sequentially. For clusters with many namespaces, consider parallelizing with `xargs` or switching to a single `kubectl get` with JSON output and `jq` processing.

```mermaid
flowchart LR
    subgraph "Detected"
        Pods
        Deployments
        StatefulSets
        Jobs
    end
    subgraph "Not Detected"
        CronJobs
        DaemonSets
        CustomResources["Custom CRDs"]
    end
    subgraph "Reported as Lingering"
        Services
        Secrets
        Ingresses
        ExternalSecrets
        SecretStores
    end
```

Future improvements could include CronJob detection, DaemonSet awareness, age-based filtering (flag namespaces idle for more than N days), and integration with CI to run on a schedule with Slack/Discord notifications.
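The sequential-calls limitation noted above can be addressed with xargs fan-out. The sketch below uses a placeholder `audit_ns` function (a name invented here) where the real per-namespace kubectl counting would go:

```shell
#!/usr/bin/env bash
# Sketch: run the per-namespace audit in parallel with xargs -P instead of a
# sequential loop. audit_ns is a stand-in for the real kubectl counting logic.
audit_ns() {
  echo "audited $1"
}
export -f audit_ns   # bash-specific: make the function visible to child shells

# In the real script the namespace list would come from:
#   kubectl get ns -o jsonpath='{.items[*].metadata.name}'
printf '%s\n' bugwars kilobase old-staging |
  xargs -n1 -P4 bash -c 'audit_ns "$0"'
```

Output order is nondeterministic under `-P4`; sort the results if stable output matters.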
Patching an existing service

Generic command:

```shell
sudo kubectl patch
```

Example of patching a NodePort service to pass client IPs through to the backend services:

```shell
sudo kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```

Example of patching a NodePort service to load-balance across the cluster:

```shell
sudo kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
```

We recommend double-checking our Portainer notes for additional information; since it applies to both topics, we reference it in both locations.
Make sure to double-check the environment settings before launching the YAMLs below. If there is a custom AGENT_SECRET from Portainer for the k8s/k3s instance, then set it via:

```yaml
environment:
  - AGENT_SECRET: yourSecret
```

Setup Portainer Agent
Load Balancer lb:

```shell
sudo kubectl apply -f https://downloads.portainer.io/ce2-16/portainer-agent-k8s-lb.yaml
```

Node Port nodeport:

```shell
sudo kubectl apply -f https://downloads.portainer.io/ce2-16/portainer-agent-k8s-nodeport.yaml
```

Add the Kubernetes cluster location via https://$/wizard/endpoints/create?envType=kubernetes - be sure to replace $ with your Portainer location.
- $nameString - The name for the Kubernetes cluster, e.g. k8scluster007
- $addrString:$ipInt32 - The location for the Kubernetes cluster, e.g. k8scluster007.kbve.com:9001
Advanced Optional Settings

- $groupString - The name of the group for the cluster.
- $tagsMap - Drop-down to select the tags for the cluster.

As of 11/18/2022 - There have been some updates to Portainer! They now have better ingress support!
External Provider NFS SubDir
CSI-Driver-NFS CSI Driver
Requirements according to the official notes:
- kubectl - check via `kubectl version`
- helm v3 - check with `helm version`
- a working kube-context with access to a Kubernetes cluster - check with `kubectl get namespaces`
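The three requirement checks above can be rolled into one preflight snippet; `require` is a hypothetical helper name introduced here, not part of any official tooling:

```shell
#!/usr/bin/env sh
# Preflight: verify each required CLI is on PATH before proceeding.
require() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1" >&2
    return 1
  fi
}

require sh                                                # always present on POSIX systems
require kubectl || echo "install kubectl before continuing" >&2
require helm    || echo "install helm v3 before continuing" >&2
```

This only confirms the binaries exist; the kube-context check still needs `kubectl get namespaces` against the target cluster.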
Docs on installing vCluster within the environment / system / orchestration.
vcluster is officially supported for:
Mac Intel/AMD - install by running:

```shell
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin
```

Mac Silicon/ARM - install on the M1 series with:

```shell
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin
```

Linux Intel/AMD - install vcluster on generic x86 Linux:

```shell
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin
```

Linux ARM - for a Linux instance running on ARM:

```shell
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin
```

PowerShell - instructions are still a work in progress.
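The four download commands above differ only in the OS/arch suffix of the release artifact, so selection can be automated. The mapping below mirrors the darwin/linux and amd64/arm64 names used in those URLs; taking OS and machine as arguments keeps the logic testable, and real usage would be `vcluster_asset "$(uname -s)" "$(uname -m)"`.

```shell
#!/usr/bin/env sh
# Map uname-style OS and machine strings to the vcluster release artifact name.
vcluster_asset() {
  case "$1" in
    Darwin) os=darwin ;;
    Linux)  os=linux ;;
    *) echo "unsupported OS: $1" >&2; return 1 ;;
  esac
  case "$2" in
    x86_64)        arch=amd64 ;;
    arm64|aarch64) arch=arm64 ;;
    *) echo "unsupported arch: $2" >&2; return 1 ;;
  esac
  echo "vcluster-$os-$arch"
}

vcluster_asset Linux x86_64    # -> vcluster-linux-amd64
vcluster_asset Darwin arm64    # -> vcluster-darwin-arm64
```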
Note: You may have to double-check that %APPDATA%\vcluster was installed successfully.

Confirm: run `vcluster --version` to verify that the install was successful.
Here is the quick command for encoding a seal:

```shell
./kbve.sh -nx kilobase:seal --namespace=$NAMESPACE --keyName=$KEY --secrets="SECRET="
```

For decoding the sealed secret, use this quick command:

```shell
kubectl get secret $KEY -n $NAMESPACE -o jsonpath="{.data.$SECRET}" | base64 --decode
```
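The decode step is needed because Kubernetes stores Secret `.data` values base64-encoded, not encrypted. A round-trip illustration (the value `hunter2` is made up):

```shell
#!/usr/bin/env sh
# base64 is an encoding, not encryption: anyone who can read the Secret
# object can reverse it, which is why sealed secrets exist in the first place.
encoded=$(printf '%s' 'hunter2' | base64)
echo "encoded: $encoded"   # prints: encoded: aHVudGVyMg==
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "decoded: $decoded"   # prints: decoded: hunter2
```

Note that `--decode` is the GNU coreutils flag; on macOS use `base64 -D` (or `-d` on newer versions).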