Shehu Awwal

Building A Kubernetes Cluster From Scratch With KubeAdm

Kubernetes

5 minutes

February 22, 2022

This is a walkthrough for anyone who wants to build a Kubernetes cluster from scratch with kubeadm, either on-premise or in a multi-cloud environment where the master node can be on AWS while the worker nodes are on Azure, GCP, DigitalOcean, and so on. I will also link the Kubernetes documentation as a reference if you are interested in reading more about it. So let’s get started. You can go with any Linux distribution of your choice, either Debian- or RHEL-based; let’s make use of Ubuntu in this case.

Firstly, you need to install a container runtime that implements the Container Runtime Interface (CRI) on each node; this is what allows Pods to run. You can go with the runtime of your choice, but I will go with containerd. There’s a story behind why the Kubernetes team decided in 2020 to deprecate Docker (the dockershim) starting from Kubernetes v1.20 in favor of CRI runtimes like containerd; the article is here.

They changed the default runtime to containerd, but you can still use Docker if you want; it’s up to you. The following container runtimes are supported:

  • containerd
  • CRI-O
  • Docker Engine
  • Mirantis Container Runtime

But before that, here are the prerequisites for the master node and the worker nodes:

  • 2GB of RAM or more per machine
  • 2 CPU cores or more per machine
  • A network connection, either private or public, over which all the servers/nodes can communicate with each other
  • Storage/hard disk size: that’s your choice

Note: Don’t go below the specifications above, so that everything can at least work fine.

My Server Specifications Are:

  • Master Node: 4GB RAM, 2 CPUs, IP: 10.11.23.30, Hostname: master-node
  • Worker Node One: 2GB RAM, 2 CPUs, IP: 10.11.23.31, Hostname: worker-node-one
  • Worker Node Two: 2GB RAM, 2 CPUs, IP: 10.11.23.32, Hostname: worker-node-two
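
If your servers don’t already have these hostnames set, you can set them and, optionally, map them in /etc/hosts so the nodes can resolve each other by name. A minimal sketch using the IPs and hostnames above; substitute your own:

# On each machine, set its own hostname (example shown for the master node)
sudo hostnamectl set-hostname master-node

# On every node, append name-to-IP mappings so the machines can find each other
cat <<EOF | sudo tee -a /etc/hosts
10.11.23.30 master-node
10.11.23.31 worker-node-one
10.11.23.32 worker-node-two
EOF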

So we will need to install containerd on all the nodes, which includes the master/control-plane node and all the worker nodes (Kubernetes minions). But before that, we will need to configure the system. You can switch to the root user for easier access; it depends on your preference.
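
One part of configuring the system that is easy to forget: the kubelet requires swap to be disabled, and kubeadm’s preflight checks will complain if it isn’t. A minimal sketch to turn it off on every node:

# Disable swap now, and comment it out of /etc/fstab so it stays off after a reboot
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab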

Installing Containerd On Ubuntu/Debian-Based Distributions

You have to install this on the master node and all the worker nodes; the steps are exactly the same on every machine.

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Setup required sysctl params, these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

Copy the script above and paste it into the terminal on each of the servers you want to use for the cluster.
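
Optionally, you can confirm that the kernel modules are loaded and the sysctl values took effect; a quick check like this on each server should do it:

# Both modules should show up in the loaded-module list
lsmod | grep -E 'overlay|br_netfilter'

# All three values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

After that, we will need to install containerd.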

sudo apt update
sudo apt install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
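
One optional tweak worth knowing about: newer kubeadm versions default the kubelet to the systemd cgroup driver, while containerd’s generated config sets SystemdCgroup to false. If you run into cgroup-driver mismatch errors later, this sketch switches containerd over:

# Flip SystemdCgroup from false to true in the generated config, then restart
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd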

After we are done with the steps above, we need to install kubelet, kubeadm, and kubectl.

Installing Kubelet, Kubeadm, and Kubectl

You have to install these on every machine, which includes the master node and all the worker nodes.

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
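
To confirm the installation worked and the packages are held (so a routine apt upgrade doesn’t move them out from under the cluster), a quick check like this should help:

# Print the installed versions
kubeadm version -o short
kubectl version --client

# All three packages should be listed as held
apt-mark showhold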

Initializing Your Control Plane With Kubeadm

The control plane manages the worker nodes; in this case it is also our master node, while the worker nodes will be hosting the Pods.

On the master node, run this with sudo:

sudo kubeadm init

It will take one to two minutes to run the preflight checks and bootstrap the master node. After it’s done, you will see a kubeadm join command with a token; copy it somewhere safe, as we will need it later on (or we can create another token).
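
One optional flag worth mentioning: if your master node has more than one network interface (likely in the multi-cloud setup described at the start), you may want to tell kubeadm which address the API server should advertise. A sketch using the master node’s IP from the specs above; substitute your own:

# Pin the API server to the master node's private IP
sudo kubeadm init --apiserver-advertise-address=10.11.23.30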

After we are done with this, we need to set up the kubectl config for a user account (or copy it to any other system) so we can interact with the Kubernetes cluster.

On the user account (not the root account, please), copy and paste this on the master node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
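
A quick sanity check that kubectl can now talk to the cluster:

# Should print the control plane endpoint and the CoreDNS address
kubectl cluster-info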

I guess you might be thinking about firewalls and the Pod networking, right? We need to allow/whitelist the following ports on the master node and the worker nodes so that they can communicate with each other:

  • Master Node / Control Plane: TCP 6443 (Kubernetes API server), TCP 2379-2380 (etcd server client API), TCP 10250 (kubelet API), TCP 10257 (kube-controller-manager), TCP 10259 (kube-scheduler)
  • Worker Nodes: TCP 10250 (kubelet API), TCP 30000-32767 (NodePort Services)
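
As an example of how opening these might look, here is a sketch assuming ufw on Ubuntu; adapt it to whatever firewall or cloud security groups you actually use. Weave Net, which we install below, additionally needs TCP 6783 and UDP 6783-6784 open between all nodes:

# On the master node / control plane
sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10257/tcp
sudo ufw allow 10259/tcp

# On the worker nodes
sudo ufw allow 10250/tcp
sudo ufw allow 30000:32767/tcp

# On every node, for Weave Net
sudo ufw allow 6783/tcp
sudo ufw allow 6783:6784/udp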

Ensure all the ports above are opened in your firewall. After that, we will need to configure the Kubernetes networking.

Kubernetes supports different networking add-ons from projects and companies like ACI, Antrea, AWS VPC CNI for Kubernetes, Calico, Cilium, Weave, and so on. You can look at what each of them offers, but I will be going with Weave.

Installing Weave On Kubernetes

On the master node of the Kubernetes cluster, ensure you are on the non-root account where we copied the kubectl config in the steps above, and run this:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
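
Before moving on, it’s worth confirming that the Weave Net pods actually come up; a check along these lines should work (name=weave-net is the label Weave’s DaemonSet uses):

# Expect one weave-net pod per node in Running state (just the master, for now)
kubectl get pods -n kube-system -l name=weave-net -o wide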

Though if you want to configure the IP ranges and other options, take a look at the Weave networking on Kubernetes documentation. After we are done with this, let’s join our worker nodes to the Kubernetes cluster.

Joining The Worker Nodes To The Master Node

If you don’t have the kubeadm join token anymore, you can create another one on the master node with:

kubeadm token create --print-join-command

Now copy the join command and paste it on each of the worker nodes, either as the root user or by adding sudo in front:

sudo kubeadm join 192.168.43.20:6443 --token 12quxl.cmdqons7udxgwtl1z --discovery-token-ca-cert-hash sha256:1e2f9b229a16f0d2eaa2fb4dsdfwfqkfslggsgg092453acd49a2d5d6c41eab9aa9954f4e4b2

Run it on every worker, and you will have them all joined to the master node.

Confirming All The Nodes In Our Cluster

Wherever you are running kubectl as a client, just run:

kubectl get node
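
If everything went well, the output should look roughly like this (an illustrative example using the hostnames from the specs above; your versions, ages, and role labels will vary with the Kubernetes version):

NAME              STATUS   ROLES                  AGE   VERSION
master-node       Ready    control-plane,master   15m   v1.23.4
worker-node-one   Ready    <none>                 5m    v1.23.4
worker-node-two   Ready    <none>                 4m    v1.23.4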

This shows all the nodes that have joined our master node. If you want to add auto-completion for kubectl in your terminal, make use of this tutorial I wrote.
