Daily Post Image from Unsplash

September 22nd

2024

Fleet

4am, fleet, fleet, fleet, mental beats. Okay, after dunking my head deep into the water, some quick waterboarding and we can move forward. I believe I need to add the paths to the additional folders; hopefully that should resolve some of the sealed secret issues. Okay, so we ended up just moving it into the templates folder, what a terrible waste of a night. Anyhow, next we need to adjust the redis labels so that the data gets exported out. The redis exporter also might need a password when it's exporting the data, so let's try to include that here too. Now that I am looking back at the issue, it might be within the exporter configuration; since we are using bitnami charts, it is probably under the metrics key (a rough sketch of what I mean is below). Let us go ahead with updating that part and see if it deploys! A decent chunk of these errors are just a mismatch between the older docker swarm stacks and the latest charts. I believe this will get even more complex once we start importing more 3rd party charts.
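
Here is roughly the shape of the values I am thinking about for the bitnami redis chart. The exact keys and the secret name are from memory, so treat them as assumptions to double check against the chart's own values.yaml:


# values.yaml for the bitnami redis chart (keys assumed from memory, verify against the chart)
auth:
  enabled: true
  existingSecret: redis-auth            # assumed: sealed secret holding the redis password
  existingSecretPasswordKey: redis-password
metrics:
  enabled: true                         # runs the redis exporter sidecar
  serviceMonitor:
    enabled: true                       # has the chart create the ServiceMonitor for us
    namespace: armada
    additionalLabels:
      release: rancher-monitoring       # assumed: label the Prometheus instance selects on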

Redis

Our goal for now is to make sure that the data for the redis instance is displayed inside the native grafana. Changing the name to name: redis-armada-monitor so that it will be easier to double check the logs. After getting a nice fresh start to the day, I am back at the bench, doing more of the redis metrics memes. The next update will be around redis-service-monitor.yaml; I believe the issue might be in the relabel_configs, or the matchLabels? Well, the only way to find out is by continuing to mess around with the fleet, keep on learning! A rough sketch of the ServiceMonitor I am poking at is below.
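
For reference, this is roughly what I am aiming for in redis-service-monitor.yaml. The selector labels and the port name are assumptions based on how the bitnami chart usually names its metrics service, not copied from the real file:


apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-armada-monitor
  namespace: armada
  labels:
    release: rancher-monitoring          # assumed: label the Prometheus instance selects ServiceMonitors by
spec:
  namespaceSelector:
    matchNames:
      - armada
  selector:
    matchLabels:
      app.kubernetes.io/name: redis      # assumed: label on the redis metrics service
  endpoints:
    - port: http-metrics                 # assumed: port name on the metrics service
      interval: 30s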

For the redisLegendInput: 'Redis Input' and redisLegendOutput: 'Redis Output' values, I think it would make sense to just strip them out and move them directly into the configmap. Next, maybe we can pass redis.addr: redis-master:6379 through as an env variable and see if that helps with the values.yaml? A possible shape for that is sketched below.
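
If we go the env variable route, the standalone redis exporter reads REDIS_ADDR and REDIS_PASSWORD from its environment, so the container spec could look something like this. The secret and key names here are placeholders, not the real ones:


# snippet of a redis exporter container spec (secret/key names are placeholders)
containers:
  - name: redis-exporter
    image: oliver006/redis_exporter:latest
    env:
      - name: REDIS_ADDR
        value: "redis://redis-master:6379"
      - name: REDIS_PASSWORD
        valueFrom:
          secretKeyRef:
            name: redis-auth             # placeholder secret name
            key: redis-password
    ports:
      - containerPort: 9121
        name: http-metrics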

We know the metrics are running because we ran this command:


kubectl port-forward svc/redis-metrics 9121:9121 -n armada
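
With the port-forward up, a quick curl against the exporter should confirm it is actually talking to the instance; if the exporter can authenticate, I would expect redis_up to show 1:


curl -s http://localhost:9121/metrics | grep redis_up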

RBAC

Maybe the issue was not with redis or prom, but rather with the rbac? Let us look at the options for adding it into the fleet stack. These are the base YAML configurations for the RBAC that we are looking at:


apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: armada
  name: prometheus-scrape
rules:
- apiGroups: ["monitoring.coreos.com"]
  resources: ["servicemonitors"]
  verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-scrape-binding
  namespace: armada
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-scrape
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: cattle-monitoring-system

All of this was overkill; it came down to a simple 2-3 line change and we are now good to go!
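
For next time, a quick way to sanity-check whether the Prometheus service account can actually read the ServiceMonitors (before writing a pile of RBAC YAML) is something like this, using the service account name from the RoleBinding above:


kubectl auth can-i list servicemonitors.monitoring.coreos.com \
  -n armada \
  --as=system:serviceaccount:cattle-monitoring-system:prometheus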

2023

MIA.

Quote

People may doubt what you say, but they will believe what you do. — Lewis Cass


Tasks

  • [ ]