Installing a Kubernetes Cluster on CentOS7

Kubernetes is an orchestration engine for automating the deployment, scaling, and management of containers. It is an open source project started by Google and is now hosted by the Cloud Native Computing Foundation.

In this article I will walk through the installation of a Kubernetes cluster on CentOS 7. For the purpose of demonstration I will be using three different servers running CentOS 7.6.

Environment Setup:

Kubernetes Master (k8smaster) –

Kubernetes Node 01 (k8snode01) –

Kubernetes Node 02 (k8snode02) –

The installation of a Kubernetes cluster involves three different steps:

  1. Kubernetes Installation
  2. Kubernetes Cluster Initialization
  3. Adding Kubernetes Nodes to the Kubernetes Master

Step 1 : Kubernetes Installation

Note: The following set of commands has to be run on both the Kubernetes master and the Kubernetes nodes. To run these commands you need super user privileges, so once logged into the servers escalate privileges using the sudo command.

Since I am setting this up in a lab environment for demo purposes, I will disable SELinux:

setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

The br_netfilter module is required for the Kubernetes installation, so enable it and allow iptables to see bridged traffic by running the following commands.

modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
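Note that the modprobe and echo commands above only last until the next reboot. A minimal sketch for making them persistent (the file names under /etc/modules-load.d and /etc/sysctl.d are my choice, not from the original steps):

```shell
# Load br_netfilter at every boot (run as root)
cat <<EOF > /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF

# Keep bridged traffic visible to iptables across reboots
cat <<EOF > /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the sysctl settings now
sysctl --system
```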

Disable swap for the Kubernetes installation and edit /etc/fstab to comment out the line containing the swap entry.

swapoff -a
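The fstab edit can also be done with sed. A sketch, demonstrated here against a sample file (/tmp/fstab.sample and the UUIDs are made up for illustration); on a real host, point the same sed at /etc/fstab as root:

```shell
# Create a sample fstab (illustrative entries only)
printf '%s\n' \
  'UUID=abcd-1234 /    xfs  defaults 0 0' \
  'UUID=ef56-7890 swap swap defaults 0 0' > /tmp/fstab.sample

# Comment out any line containing a swap entry
sed -i '/\sswap\s/ s/^[^#]/#&/' /tmp/fstab.sample

cat /tmp/fstab.sample
```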

Install a Container Runtime

In this example we will use Docker as the container runtime, which kubeadm supports out of the box. Other available runtimes are containerd, CRI-O, and frakti.

Install the Docker dependency packages:

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker Community Edition repository

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install Docker Community Edition

yum install -y docker-ce

Installing Kubernetes

Enable the repository required for Kubernetes:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install the Kubernetes packages kubeadm, kubelet, and kubectl:

yum install -y kubelet kubeadm kubectl

Now that the installation of Docker and Kubernetes is complete, start and enable both the docker and kubelet services.

systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet

Configure the cgroup driver used by the kubelet to match the one used by Docker. To do that, first check the cgroup driver used by Docker by running:

docker info | grep -i cgroup

Make sure Docker is using 'cgroupfs' as its cgroup driver. Once confirmed, the next step is to configure the kubelet's cgroup driver to 'cgroupfs' as well.

sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
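The sed above edits the kubelet drop-in in place. A quick demonstration against a sample file (/tmp/10-kubeadm.sample and its single line are made up to mimic the drop-in; on a real host the target is /etc/systemd/system/kubelet.service.d/10-kubeadm.conf):

```shell
# Sample line mimicking the kubelet systemd drop-in (illustrative only)
printf 'Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"\n' > /tmp/10-kubeadm.sample

# The same substitution used on the real file
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /tmp/10-kubeadm.sample

cat /tmp/10-kubeadm.sample
```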

Reload systemd and restart the docker and kubelet services:

systemctl daemon-reload
systemctl restart kubelet
systemctl restart docker

Step 2 : Kubernetes Cluster Initialization

Now that Kubernetes is installed on all the nodes in our test environment, it is time to initialize the master node of the cluster. This is done using the kubeadm init command. When kubeadm init runs, it performs prechecks to ensure the server is ready to run Kubernetes and lists any errors or warnings that may prevent the cluster from initializing successfully. Run the following command to initialize the cluster. Note: this command has to be run only on the master.

kubeadm init --pod-network-cidr=10.244.0.0/16

The IP address range specified with the --pod-network-cidr parameter is based on the pod network add-on chosen for the cluster. In this example I have chosen the Flannel network plugin, whose default pod network is 10.244.0.0/16. If you decide to choose another network provider such as Calico, Weave Net, Romana, etc., then choose the IP address range accordingly.

Once the initialization is complete, you will be presented with a success message and the next steps to be performed on the master and the nodes.

(screenshot: kubeadm init output)

To allow a non-root user to use kubectl, run the following set of commands, which are shown in the kubeadm init output.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 3 : Adding Kubernetes Nodes to the Master

The next step is to join the nodes to the Kubernetes master. To do this, run the kubeadm join command that was shown in the kubeadm init output. (If you lose that output, the join command can be regenerated on the master with kubeadm token create --print-join-command.)

kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Once the above command has been run on the nodes, check the status of the nodes on the Kubernetes master by running kubectl get nodes.

(screenshot: kubectl get nodes – nodes NotReady)

You will notice the node status is still NotReady; this is because the pod network is not yet established in the cluster. To complete it, deploy the Flannel network on the cluster using the kubectl command.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Once the Flannel network is deployed, check the node status again; this time you should notice the status is Ready.

(screenshot: kubectl get nodes – nodes Ready)

You can also verify that all the control-plane components of the Kubernetes cluster are up and running by checking the pods in all namespaces.

kubectl get pods --all-namespaces


Now the Kubernetes Cluster Installation has been successfully completed!! 🙂

