Set Up a Kubernetes Cluster in 20 Minutes

Vipul Munot
4 min read · May 4, 2019


I came across Kubernetes while researching how to scale resources up and down in near real time. I was intrigued by the concept of infrastructure as code, started digging into it, and the results were amazing.

In this article, I will demonstrate how to create a three-node Kubernetes cluster. You will need four VMs (three cluster nodes plus a host machine to drive the installation) with hardware of your choice. I would recommend a minimum of 1 core, 8 GB of RAM, and 50 GB of disk.

So, you will have the following setup:

Kubernetes 3 Node Cluster

OS Used for Kubernetes Setup

  • Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0–48-generic x86_64)

Software Required

  • ansible>=2.7.8
  • jinja2>=2.9.6
  • kubeadm
  • kubespray
  • Python

The above software must be installed on the host machine.

Switching off swap

Switch off swap on all nodes (master node, worker node I, and worker node II):

$ swapoff -a
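Note that `swapoff -a` only disables swap until the next reboot. To keep it off permanently, you can also comment out any swap entries in `/etc/fstab` on each node; one way to do that (keeping a backup of the original file) is:

```shell
# Comment out every swap line in /etc/fstab so swap stays off after a reboot;
# sed -i.bak writes a backup copy to /etc/fstab.bak before editing in place
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```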

Adding RSA Keys

Create an RSA key in the host machine

$ sudo su
$ ssh-keygen

Follow the prompts, and two files (id_rsa and id_rsa.pub) will be created in the ~/.ssh/ folder.

Log in as root on each node (master node, worker node I, and worker node II) and append the contents of the host machine's id_rsa.pub file to the authorized keys:

$ vim ~/.ssh/authorized_keys

This enables the host machine to log in to all nodes without a password.
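If `ssh-copy-id` is available on the host machine, it can append the key for you instead of pasting it by hand; a quick sketch, using the example node IPs from later in this article:

```shell
# Push the public key to every node; you will be prompted for each node's
# root password once, after which key-based login works
for node in 10.0.10.10 10.0.10.11 10.0.10.12; do
  ssh-copy-id root@"$node"
done
```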

Installing Kubespray on Host Machine

Clone the following repository:

$ git clone https://github.com/kubernetes-sigs/kubespray
$ cd kubespray

Now we are going to follow the steps written in the repo.

$ sudo pip install -r requirements.txt
$ cp -rfp inventory/sample inventory/mycluster
$ declare -a IPS=(10.0.10.10 10.0.10.11 10.0.10.12)
$ CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

Now we will replace the generated hosts file with the following:

$ rm -rf inventory/mycluster/hosts.ini
$ vim inventory/mycluster/hosts.ini
[all]
kubernetes-node1 ansible_host=10.0.10.10 ip=10.0.10.10 ansible_user=root
kubernetes-node2 ansible_host=10.0.10.11 ip=10.0.10.11 ansible_user=root
kubernetes-node3 ansible_host=10.0.10.12 ip=10.0.10.12 ansible_user=root

[kube-master]
kubernetes-node1

[etcd]
kubernetes-node1
kubernetes-node2
kubernetes-node3

[kube-node]
kubernetes-node1
kubernetes-node2
kubernetes-node3

[k8s-cluster:children]
kube-master
kube-node

[calico-rr]
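Before running the full playbook, it is worth confirming that Ansible can actually reach every node in this inventory; a quick sanity check from the host machine:

```shell
# Each node should answer with "pong"; a failure here usually points to an
# SSH-key or connectivity problem rather than a Kubespray problem
ansible -i inventory/mycluster/hosts.ini all -m ping
```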

Helm Installation (Optional)

If you also want to install Helm, add the following lines to inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml:

$ vim inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
helm_enabled: true
helm_version: "v2.13.1"

Now you have completed all prerequisites for starting the Kubernetes cluster.

Run the following command and wait until it finishes installing everything on all three VMs. It takes approximately 15 minutes to apply everything.

$ ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml --flush-cache

You will see a similar result at the end of the installation:

Installation Complete

Now SSH into the master node for the rest of the article.

Kubernetes Nodes Information

$ kubectl get nodes

Kubernetes Cluster Information

$ kubectl cluster-info
Kubernetes master is running at https://10.0.10.10:6443
coredns is running at https://10.0.10.10:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
kubernetes-dashboard is running at https://10.0.10.10:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Listing All Pods with the Admin Configuration

$ cp /etc/kubernetes/admin.conf .
$ kubectl --kubeconfig=admin.conf get pods --all-namespaces
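If you prefer not to pass `--kubeconfig` on every invocation, exporting the `KUBECONFIG` environment variable gives the same result for the rest of the session:

```shell
# Point kubectl at the copied admin config once, instead of per command
export KUBECONFIG=$PWD/admin.conf
kubectl get pods --all-namespaces
```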

Finally, the running pods are listed:

Pods Running

Kubernetes Dashboard

In Master node create file kubernetes-dashboard.yml

$ vim kubernetes-dashboard.yml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
$ kubectl apply -f kubernetes-dashboard.yml
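One way to reach the dashboard from the host machine is through `kubectl proxy`, which exposes the API server's service proxy on a local port (8001 by default); this assumes your kubeconfig on the host points at the cluster:

```shell
# Start a local proxy to the API server (add & to run it in the background);
# the dashboard is then reachable through the service-proxy URL below
kubectl proxy &
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```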

To log in to the Kubernetes dashboard with a token, use the output of the following command:

$ kubectl describe secret -n kube-system $(kubectl get secrets -n kube-system | grep dashboard-token | cut -d ' ' -f1) | grep -E '^token' | cut -f2 -d':' | tr -d '\t'
kubernetes-dashboard

That's it! You now have a working three-node Kubernetes cluster and can deploy your applications with high availability on on-prem Kubernetes.
