Kubernetes

Kubernetes is an open-source container orchestration system, hosted by the CNCF, for automating the deployment, scaling, and management of containerized applications across cloud and hybrid environments.

  • Generic k alias for kubectl.
    • without sudo
      • Run the following two commands to set up k:
        • alias k=kubectl
        • echo 'alias k=kubectl' >>~/.bashrc
    • with sudo
      • Run the following two commands to set up k:
        • alias 'k=sudo kubectl'
        • echo "alias k='sudo kubectl'" >>~/.bashrc
    • If you use Oh My Zsh, replace .bashrc with .zshrc
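Note that aliases are not expanded in non-interactive shells, so the alias above will not work inside scripts. A small shell function is a drop-in alternative (a sketch of our own, not from the original notes):

```shell
# Behaves like `alias k=kubectl`, but also works in scripts;
# "$@" forwards all arguments unchanged.
k() {
  kubectl "$@"
}
```

Append the function to ~/.bashrc (or ~/.zshrc) the same way as the alias.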
  • Cluster:

    • A group of machines (nodes) that Kubernetes (k / k8s) orchestrates as a single system.
      • APIService : apiservices
  • Node:

    • Master:
      • k - The node that runs the control plane and controls the cluster.
    • Slave / Worker:
      • k - Nodes that run the actual workloads within the cluster.
  • Pods pod:

    • Group of k - containers and volumes that operate within an isolated network namespace.

    • Deployed by an Operator (Portainer/Rancher/User) via a YAML manifest.

      • Example:

        Terminal window
        sudo kubectl apply -f ./kbve-manifest.yml
        • Replace ./kbve-manifest.yml with your manifest's file name.
    • Labels are Operator-defined Key:Value pairs associated with the pod.
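For reference, a manifest of the kind applied above might look like this minimal pod definition; all names, labels, and the image here are illustrative, not from the original notes:

```yaml
# Minimal pod manifest with operator-defined labels (Key:Value pairs)
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
  namespace: default
  labels:
    app: example           # operator-defined label
    tier: frontend
spec:
  containers:
    - name: web
      image: nginx:1.25    # any container image
      ports:
        - containerPort: 80
```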


  • Install k3s

    • Note: We are using Ubuntu as the host operating system for k3s.

      • Update & Upgrade Ubuntu - Linux

        • Terminal window
          sudo apt-get update
          sudo apt-get upgrade -y
    • We recommend using their official script:

      • Terminal window
        curl -sfL https://get.k3s.io | sh -
    • Optional: Setting up kubectl alias to work with k3s by default.

      • Terminal window
        cd ~
        mkdir -p $HOME/.kube
        sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
        • Create directory: mkdir -p $HOME/.kube
        • Copy the k3s config: sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
        • Permissions: sudo chown $(id -u):$(id -g) $HOME/.kube/config
        • Test: sudo kubectl get svc --all-namespaces - should return the default k3s services running within the cluster.
        • Verify: sudo nmap -sU -sT -p0-65535 127.0.0.1
          • To install nmap, run sudo apt-get install nmap and then confirm.
    • Verification

      • Location for k3s after install
        • Default location: /var/lib/rancher/k3s
      • Ingress: The default ingress controller is Traefik, and its YAML is located at:
Terminal window
cat /var/lib/rancher/k3s/server/manifests/traefik.yaml

Access might require root.

  • The k3s agent will be important when setting up a k3s cluster, as it is used by workers to communicate with the master.
    • Master Token
      • Before the agents can connect, they will need a token from the master, which can be read (as root) at /var/lib/rancher/k3s/server/node-token.
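A sketch of the agent join flow, assuming the standard k3s install script; the master address below is a placeholder you must replace with your own:

```shell
# On the master, the join token lives here (root-readable):
#   sudo cat /var/lib/rancher/k3s/server/node-token

# On each worker, build the server URL and run the installer with
# K3S_URL / K3S_TOKEN set. 203.0.113.10 is a placeholder IP.
MASTER_IP="203.0.113.10"
K3S_URL="https://${MASTER_IP}:6443"   # 6443 is the default API port

# Actual join (commented out; requires a live master and real token):
# curl -sfL https://get.k3s.io | K3S_URL="$K3S_URL" K3S_TOKEN="<token>" sh -
echo "$K3S_URL"
```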

  • Kubectl Help
    • sudo kubectl -h || k -h

  • Cluster:

    • Terminal window
      sudo kubectl cluster-info
  • View full config minified

    • Terminal window
      sudo kubectl config view --minify
  • List namespaces

    • Terminal window
      sudo kubectl get namespace
  • Create namespace by replacing $name with the string that defines the namespace.

    • Terminal window
      sudo kubectl create namespace $name
  • Set namespace preference/default for session

    • Terminal window
      sudo kubectl config set-context --current --namespace=$namespace-name
  • Validate current namespace

    • Terminal window
      sudo kubectl config view --minify | grep namespace:
  • Get everything running in kubernetes

    • In all namespaces

      • Terminal window
        sudo kubectl get all --all-namespaces
    • In the current namespace (default, unless changed)

      • Terminal window
        sudo kubectl get all
  • Get services running in kubernetes

    • In all namespaces

      • Terminal window
        sudo kubectl get svc --all-namespaces
    • In the current namespace (default, unless changed)

      • Terminal window
        sudo kubectl get svc
  • Delete services via $name

    • Terminal window
      sudo kubectl delete svc $name
  • Delete deployment via $name

    • Terminal window
      sudo kubectl delete deployment.apps/$name
  • Delete namespace , defined by $name

    • Terminal window
      sudo kubectl delete namespace $name
      • std out: namespace “$name” deleted - Successful.
  • Get classes for storage

    • Terminal window
      sudo kubectl get storageclasses
      • std out: storage provisioners.
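On k3s, the bundled storage class is local-path; a PersistentVolumeClaim against it might look like the following sketch (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc              # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path   # k3s's bundled provisioner
  resources:
    requests:
      storage: 1Gi
```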

Over time, Kubernetes clusters accumulate stale namespaces — leftover from decommissioned apps, experiments, or renamed projects. These orphaned namespaces cause real problems: controllers like external-secrets continuously attempt to reconcile resources in them, generating errors such as "client is not allowed to get secrets" and polluting logs with noise that masks actual incidents.

flowchart LR
    A["App Decommissioned"] --> B["Namespace Left Behind"]
    B --> C["Stale SecretStores"]
    B --> D["Stale Ingresses"]
    B --> E["Stale Secrets"]
    C --> F["external-secrets\nreconciler errors"]
    D --> G["ingress-nginx\nsync churn"]
    E --> H["Unrotated credentials\nin cluster"]
    F --> I["Log noise masks\nreal incidents"]
    G --> I
    H --> J["Security risk"]

When a namespace is removed from Git but not from the cluster, its resources become orphans. Controllers keep trying to reconcile them, RBAC bindings may have been cleaned up, and the result is a steady stream of errors that drown out real problems.

Controller Errors

Operators like external-secrets and cert-manager continuously retry failed reconciliation on orphaned resources, wasting cycles and flooding logs.

Ingress Sync Churn

Stale Ingress resources trigger the ingress controller to re-sync on every loop, even though no backend exists to serve traffic.

Security Risk

Orphaned Secret resources may contain database credentials, API keys, or TLS certs that are never rotated and remain accessible to anyone with namespace-level RBAC.

Resource Waste

Orphaned Services of type LoadBalancer can hold cloud provider IPs, incurring cost for endpoints that serve nothing.

flowchart TD
    Start["kubectl get ns"] --> Loop["For each namespace"]
    Loop --> Protected{"System\nnamespace?"}
    Protected -->|"Yes (kube-system, default, ...)"| Skip["Skip"]
    Protected -->|No| Count["Count workload resources"]
    Count --> Check{"pods + deploys +\nsts + jobs > 0?"}
    Check -->|Yes| Active["ACTIVE"]
    Check -->|No| Empty["EMPTY"]
    Empty --> Linger["Report lingering resources:\nsvcs, secrets, ingress,\nextSecrets, secretStores"]
    Active --> Report["Report workload counts"]
    Linger --> Next["Next namespace"]
    Report --> Next
    Skip --> Next
    Next --> Loop

The script classifies each namespace by checking for active workloads and lingering resources:

Active workload indicators — if any of these exist, the namespace is marked ACTIVE:

| Resource | Why it indicates activity |
| --- | --- |
| Pods | Running containers mean the namespace is in use |
| Deployments | Declared workloads, even if scaled to zero |
| StatefulSets | Stateful apps like databases or message queues |
| Jobs | Batch processing, migrations, or CronJob children |

Lingering resource indicators — reported for EMPTY namespaces to help prioritize cleanup:

| Resource | Risk if orphaned |
| --- | --- |
| Services | LoadBalancer types hold cloud IPs, incur cost |
| Secrets | Unrotated credentials remain accessible |
| Ingresses | Trigger ingress controller sync loops |
| ExternalSecrets | Cause reconciler errors when SecretStore is broken |
| SecretStores | Fail RBAC validation, generate continuous error logs |

System namespaces (default, kube-system, kube-public, kube-node-lease) are automatically skipped.
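The classification logic above can be sketched in shell. The kubectl counting is shown as comments because it needs a live cluster; classify_ns captures the ACTIVE/EMPTY decision (the function name is our own, not from the script):

```shell
# Decide ACTIVE vs EMPTY from a total workload count
# (pods + deployments + statefulsets + jobs).
classify_ns() {
  count="$1"
  if [ "$count" -gt 0 ]; then
    echo "ACTIVE"
  else
    echo "EMPTY"
  fi
}

# Against a real cluster, the per-namespace count could be gathered with:
#   pods=$(kubectl get pods -n "$ns" --no-headers 2>/dev/null | wc -l)
#   deploys=$(kubectl get deploy -n "$ns" --no-headers 2>/dev/null | wc -l)
#   ...then: classify_ns $((pods + deploys + sts + jobs))
```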


  • Kube Patches
  • Patching an existing service

    • Generic Command:

      Terminal window
      sudo kubectl patch
  • Example of patching a nodeport to pass along client IPs to micro servers.

    • Terminal window
      sudo kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
    • Example of patching a nodeport to load balance.

      • Terminal window
        sudo kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'

We recommend double-checking our Portainer Notes for additional information. We are not sure where this information belongs best, so we reference it in both locations.

Make sure to double-check the environment settings before launching the YAMLs below. If there is a custom AGENT_SECRET from Portainer for the k8s/k3s instance, then set it via:

env:
  - name: AGENT_SECRET
    value: yourSecret
  • Setup Portainer Agent

    • Load Balancer lb

      • LB Command:
      Terminal window
      sudo kubectl apply -f https://downloads.portainer.io/ce2-16/portainer-agent-k8s-lb.yaml
      • Agent 2.16 as of 11/17/2022. Previously the revision was 2.15 as of 09/30/2022.
    • Node Port nodeport

      • NodePort Command:
      Terminal window
      sudo kubectl apply -f https://downloads.portainer.io/ce2-16/portainer-agent-k8s-nodeport.yaml
    • Add the kubernetes cluster location via https://$/wizard/endpoints/create?envType=kubernetes - Be sure to replace $ with your Portainer location.

      • Name: $nameString - The name for the kubernetes cluster. i.e k8scluster007
      • Environment Address: $addrString:$ipInt32 - The location for the kubernetes cluster. i.e k8scluster007.kbve.com:9001
        • Note: Make sure the port 9001 is open for communication between the cluster and Portainer.
    • Advanced Optional Settings

      • Group: $groupString - The name of the group for the cluster
      • Tags: $tagsMap - Drop down to select the tags for the cluster.
    • As of 11/18/2022 - There have been some updates to Portainer! They now have better ingress support!


  • Collection of hardened manifests by the DoD

  • A major component of Kubernetes clusters is how they handle storage and volumes.

External Provider NFS SubDir

CSI-Driver-NFS CSI Driver
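As an illustration of how an external provider plugs in, a StorageClass backed by the NFS subdir external provisioner might look like the sketch below; the provisioner string depends on how the chart was deployed, so treat every value here as an assumption:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client               # hypothetical name
provisioner: cluster.local/nfs-subdir-external-provisioner  # varies per install
parameters:
  archiveOnDelete: "false"       # "false" discards data when the PVC is removed
reclaimPolicy: Delete
```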


  • OKD
  • OKD Notes still need to be worked on.

Requirements according to the official notes:

  • kubectl - check via kubectl version
  • helm v3 - check with helm version
  • a working kube-context with access to a Kubernetes cluster - check with kubectl get namespaces
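The tool checks above can be scripted; check_tool is our own helper name, not part of vCluster:

```shell
# Report whether a required binary is on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "OK"
  else
    echo "MISSING"
  fi
}

# Preflight for the vcluster requirements:
check_tool kubectl
check_tool helm
# A working kube-context still needs a live check: kubectl get namespaces
```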

Docs on installing vCluster within the environment / system / orchestration.

vcluster is officially supported for:

Mac Intel/AMD Install by running the following command:

Terminal window
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin

Mac Silicon/ARM Install on the M1 series by the command below:

Terminal window
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin

Linux Intel/AMD Install vcluster on generic Unix x86

Terminal window
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin

Linux ARM Install vcluster on a Unix instance running on ARM:

Terminal window
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin

PowerShell - Notes still need to be worked on.

Note: You may have to double-check that %APPDATA%\vcluster was installed successfully.

Confirm -> Run vcluster --version to confirm that the install was successful.


Here is the quick command for encoding a seal.

Terminal window
./kbve.sh -nx kilobase:seal --namespace=$NAMESPACE --keyName=$KEY --secrets="SECRET="

For decoding the sealed secret, use this quick command:

Terminal window
kubectl get secret $KEY -n $NAMESPACE -o jsonpath="{.data.$SECRET}" | base64 --decode
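The base64 --decode step exists because Kubernetes stores Secret data base64-encoded in .data. A quick local illustration with a made-up value (no cluster required):

```shell
# Simulate what kubectl returns in .data.<key>: a base64-encoded value.
ENCODED=$(printf 'hunter2' | base64)             # hypothetical secret value
# The decode step from the command above:
DECODED=$(printf '%s' "$ENCODED" | base64 --decode)
echo "$DECODED"
```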