Setting Up Metrics Server With Helm Charts In Kubernetes
August 6, 2024
On one of my personal clusters, I am trying to set up metrics-server in order to gather resource usage metrics for the nodes and pods. The Kubernetes cluster is set up with kubeadm. I can't recall whether K3d or Minikube comes pre-installed with metrics-server, but this should work for any Kubernetes distribution.
Install Metrics Server Via Helm Charts
First, we need to add the metrics-server Helm charts repo.
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
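After adding the repo, it's usually a good idea to refresh your local chart index so Helm picks up the latest chart version:
helm repo update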
After adding the Helm charts repo, we need to create an override values file. If you proceed with installing the chart using the default values, you might get an error relating to TLS certificate verification, like this one:
E0806 17:40:24.324644 1 scraper.go:149] "Failed to scrape node" err="Get \"https://10.10.50.21:10250/metrics/resource\": tls: failed to verify certificate: x509: cannot validate certificate for 10.10.50.21 because it doesn't contain any IP SANs" node="k8s-prod-worker-node-one"
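If you have already installed the chart with the defaults, this error shows up in the metrics-server pod logs, which you can tail with something like the command below (assuming the release lives in a namespace called monitoring, as we do later in this post):
kubectl -n monitoring logs deploy/metrics-server -f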
So, how do we fix this error?
We need to add --kubelet-insecure-tls to the default args. This tells metrics-server to skip verifying the kubelet's serving certificate, which is acceptable for a personal cluster. Here is an example of how the override values file will look. You can name it metrics-server-override.yaml.
# metrics-server-override.yaml
replicas: 1

defaultArgs:
  - --cert-dir=/tmp
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --kubelet-use-node-status-port
  - --metric-resolution=15s
  # Skip kubelet certificate verification to avoid the x509 "no IP SANs" error
  - --kubelet-insecure-tls

metrics:
  enabled: true
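If you are unsure which values the chart exposes, you can dump the chart's full default values and compare them with the override above; something like this should work:
helm show values metrics-server/metrics-server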
From the above override file, you can change the replicas value if you want more than one replica, but this is what I need. Now we can proceed to installing the chart.
helm install metrics-server -n monitoring -f metrics-server-override.yaml metrics-server/metrics-server
Don't forget to specify the namespace you want to deploy metrics-server into.
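If the monitoring namespace doesn't exist yet, you can either create it first or let Helm create it for you, for example:
kubectl create namespace monitoring
# or, in one go:
helm install metrics-server -n monitoring --create-namespace -f metrics-server-override.yaml metrics-server/metrics-server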
kubectl -n monitoring get po
NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-5b6488dddf-9m5cw   1/1     Running   0          3h56m
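Besides checking the pod, you can also confirm that the metrics API is registered with the cluster; v1beta1.metrics.k8s.io is the APIService that metrics-server installs, and its AVAILABLE column should show True once it is serving metrics:
kubectl get apiservice v1beta1.metrics.k8s.io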
As you can see, the pod is running. Give it a minute or more for metrics-server to gather the cluster metrics. After that, you can check:
kubectl top no
NAME                       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-prod-master-node       238m         1%     2011Mi          12%
k8s-prod-worker-node-one   1306m        6%     10210Mi         16%
k8s-prod-worker-node-two   215m         1%     11935Mi         19%
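Node metrics are not the only thing you get; you can also check per-pod resource usage, for example across all namespaces:
kubectl top pods -A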