Kubernetes : Configuring Kubernetes Cluster


Introduction:

Kubernetes (commonly stylized as k8s) is an open-source container orchestration system for automating computer application deployment, scaling, and management.

It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools and runs containers in a cluster, often with images built using Docker.

In this article, we will learn how to set up a Kubernetes Cluster.

Pre-Requisites:

  1. In this demonstration, we will be using CentOS 7.
  2. We will be using 3 machines for our lab: 1 Kubernetes Master Node and 2 Worker Nodes.

192.168.33.80 kubemaster.unixlab.com
192.168.33.81 kubenode1.unixlab.com
192.168.33.82 kubenode2.unixlab.com

3. The Master Node should have at least 4 GB of memory and at least 4 CPU cores.

4. The Worker Nodes should have at least 2 GB of memory and at least 2 CPU cores.

Kubernetes Cluster setup steps:

Perform Steps 1 to 12 on all the Nodes (Master as well as Worker Nodes):

  1. Set up the hostname and update the /etc/hosts file with the node details:
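One way to perform this step, shown here for the master node (run the matching hostnamectl command on each node; the IPs and hostnames are the ones listed in the prerequisites):

```shell
# Set the hostname (use the appropriate name on each node)
hostnamectl set-hostname kubemaster.unixlab.com

# Add all three nodes to /etc/hosts so they can resolve each other
cat >> /etc/hosts <<EOF
192.168.33.80 kubemaster.unixlab.com
192.168.33.81 kubenode1.unixlab.com
192.168.33.82 kubenode2.unixlab.com
EOF
```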

2. Disable Swap and update the /etc/fstab file:
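A typical way to do this (the kubelet refuses to start while swap is enabled):

```shell
# Turn swap off immediately
swapoff -a

# Comment out any swap entries in /etc/fstab so swap stays off after a reboot
sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```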

3. Disable SELINUX:
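For example, switching SELinux to permissive mode both for the running system and persistently:

```shell
# Switch SELinux to permissive mode for the current session
setenforce 0

# Make the change persistent across reboots
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
```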

4. Enable Cluster Communication between the nodes:
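A sketch of what this step usually involves on CentOS 7: opening the Kubernetes ports in firewalld and letting iptables see bridged container traffic. The port list shown is the standard kubeadm requirement for a master node (an assumption, not from the original article); workers instead need 10250/tcp and 30000-32767/tcp:

```shell
# Open the ports kubeadm needs on the master
# (6443 API server, 2379-2380 etcd, 10250-10252 kubelet/controller/scheduler)
firewall-cmd --permanent --add-port={6443,2379-2380,10250-10252}/tcp
firewall-cmd --reload

# Let iptables see bridged container traffic
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```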

5. Install the dependent packages “yum-utils”, “device-mapper-persistent-data”, and “lvm2”:
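These can be installed in one go with yum:

```shell
# Install the packages Docker's storage drivers depend on
yum install -y yum-utils device-mapper-persistent-data lvm2
```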

6. Install Docker:

curl -fsSL get.docker.com | sh

7. Add the “vagrant” user to the “docker” group:
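One way to do this (log out and back in for the group membership to take effect):

```shell
# Add the vagrant user to the docker group so it can run docker without sudo
usermod -aG docker vagrant
```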

8. Enable the “Kubernetes Repo”. Kubernetes will be installed from this repo.

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

9. Install the “kubelet”, “kubeadm”, and “kubectl” packages:
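With the repo from the previous step in place, the packages can be installed with yum:

```shell
# Install the Kubernetes node agent, cluster bootstrapper, and CLI
yum install -y kubelet kubeadm kubectl
```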

10. Enable “Docker” and “kubelet” service:

systemctl enable docker

systemctl enable kubelet

11. Now start the “docker” service:
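For example:

```shell
systemctl start docker
```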

12. Change the cgroup driver to “systemd”. Please follow the below steps to achieve this:

  • Check the current cgroup driver with “docker info | grep -i cgroup”; by default it is “cgroupfs”.
  • Check if the “/etc/docker” directory exists. If not, create it:
  • The below command will change the “cgroup” driver from “cgroupfs” to “systemd”:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

  • Create the “/etc/systemd/system/docker.service.d” directory if it doesn’t exist:
  • Perform a “daemon-reload” and restart “Docker”:
  • Validate that the “cgroup” driver has been changed to “systemd”:
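The bullets above can be carried out like this (a sketch; it assumes the daemon.json from the previous step is already in place):

```shell
# Check the current cgroup driver (typically "cgroupfs" before the change)
docker info | grep -i 'cgroup driver'

# Create the systemd drop-in directory if it doesn't exist
mkdir -p /etc/systemd/system/docker.service.d

# Reload systemd units and restart Docker to pick up daemon.json
systemctl daemon-reload
systemctl restart docker

# Validate that the driver is now "systemd"
docker info | grep -i 'cgroup driver'
```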

(Perform the below steps, 13 to 18, on the Master Node only):

13. Run the below command on the Master server only:

kubeadm init --apiserver-advertise-address=192.168.33.80 --pod-network-cidr=10.244.0.0/16

14. Take note of this token and save it in a text file. We will need this token in order to get the other nodes to join the cluster:

kubeadm join 192.168.33.80:6443 --token jbkd5k.gy0lxq96ebcaq4ge \
    --discovery-token-ca-cert-hash sha256:629761egW11jWP5w1AMJxb6vuLUAkZgMPbAfy406c72ed46b1f12480ed09563c

15. Run the below 3 commands as the user that will manage the cluster, “vagrant” in our case:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

16. Install the Calico network plugin for the Kubernetes cluster:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

17. Run the “export KUBECONFIG=/etc/kubernetes/admin.conf” command as the “root” user if you want “root” to manage your cluster:

18. Run the “kubectl get nodes” command; it will show that the cluster is up and running (with a single node so far).

(Perform step 19 on the Worker Nodes only, kubenode1 and kubenode2):

19. Run the below command on both kubenode1 and kubenode2 to join the cluster. If everything is fine, you should see output like the below:

kubeadm join 192.168.33.80:6443 --token jbkd5k.gy0lxq96ebcaq4ge \
    --discovery-token-ca-cert-hash sha256:629761eoFH7s9pdb6tZne6BEuvvffhH2tmwfP406c72ed46b1f12480ed09563c

20. [On Master Node]: Run “kubectl get nodes”; you will see that the worker nodes (kubenode1 and kubenode2) have joined the cluster.

That’s it. Your Kubernetes Cluster is ready for use!!!

Important Tips:

  1. You can generate a new token using the following command: “kubeadm token create --print-join-command”. It generates a new token and prints the corresponding join command for the cluster.

2. You might get the following error while joining the cluster:

[root@kubenode1 ~]# kubeadm join 192.168.33.80:6443 --token jbkd5k.gy0lxq96ebcaq4ge \

> --discovery-token-ca-cert-hash sha256:629761egW11jWP5w1AMJxb6vuLUAkZgMPbAfy406c72ed46b1f12480ed09563c

[preflight] Running pre-flight checks

[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.2. Latest validated version: 19.03

error execution phase preflight: couldn't validate the identity of the API Server: expected a 32 byte SHA-256 hash, found 31 bytes

To see the stack trace of this error execute with --v=5 or higher

[root@kubenode1 ~]#

To resolve this, try joining the cluster using the “--discovery-token-unsafe-skip-ca-verification” option (not recommended, though):

kubeadm join 192.168.33.80:6443 --token regj2l.fbpm0n00binz7pp2 --discovery-token-unsafe-skip-ca-verification

Take a bow !! We have Just configured our Kubernetes Cluster !!

Happy Learning !!!

