Kubernetes - Setting up a local development cluster - Part II

Disha Singhal
5 min read · Jul 14, 2021
Deploying an application using KIND

Introduction

Hola everyone 👋 This post is a continuation of the previous post on setting up a local Kubernetes cluster using kind. I would recommend going through the previous post for the installation steps. In this post, we'll set up some more advanced clusters and also deploy a sample application.

So, let’s get started!! 😄😄

Setting up custom clusters

As part of this section, we will set up 3 different cluster variants to understand different use cases.

FORMAT-1: 2 worker nodes and 1 control plane node. Config can be accessed here.
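The linked config isn't reproduced here, but for this layout a kind config typically looks like the sketch below (the file name is hypothetical):

```yaml
# three-node-config.yaml (hypothetical file name)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```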

kind create cluster --config <path to config yaml>
Creating a 3 node cluster using the custom config yaml
3 kindnet and 3 kube-proxy pods created, one per node; core component pods (scheduler, apiserver, etc.) only on the control plane node
Status of nodes

Hold On!! 😟 The status of the nodes shows "NotReady". Don't worry, let's try to debug this. If you are seeing all nodes as "Ready", you are good to go to the next steps 🚀

  • We saw in the last post that kind runs a local cluster by using Docker containers as "nodes". So, if you run "docker ps" to list the containers, you can see a container running for each of these nodes.
  • Try exec'ing into the worker node container using its container ID, by running the command below. If you need any help running the commands, refer to the cheatsheet here.
docker exec -it <container id> /bin/bash
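To find the right container ID, you can list the node containers first. The exec-by-name shortcut below assumes the default cluster name "kind"; kind names its node containers like "<cluster>-control-plane", "<cluster>-worker", "<cluster>-worker2":

```shell
# List the kind node containers with their IDs and names
docker ps --format 'table {{.ID}}\t{{.Names}}\t{{.Status}}'

# With the default cluster name "kind", you can also exec by name instead of ID
docker exec -it kind-worker /bin/bash
```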

CRICTL- To the rescue !! 🚁

As we already discussed, kind uses containerd as its runtime, so there is a CLI available for it: crictl.

  • It is a command-line interface for CRI-compatible container runtimes. You can use it to inspect and debug container runtimes and applications on a Kubernetes node.
  • crictl commands are pretty much the same as the docker CLI ones; the entire list of commands is available here. Use them to check the logs and status of containers and debug the issue. Since there are a lot of utilities available, I won't be able to cover all the debugging steps as part of this post.
crictl commands
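Once you are inside a node container, a handful of crictl commands covers most of the debugging you'll need (fill in the container ID placeholders from the `crictl ps` output):

```shell
crictl ps -a                  # list containers, including exited ones
crictl pods                   # list pod sandboxes running on this node
crictl images                 # list images available on the node
crictl logs <container id>    # check the logs of a specific container
crictl inspect <container id> # low-level details: state, mounts, config
```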

There can be multiple reasons for a node not being ready, but the most common one is "Insufficient resources". Tweak the resource settings under Docker preferences as per your requirements and you will observe the nodes becoming ready. Check out the detailed explanation here.

FORMAT-2: 2 worker nodes and 2 control plane nodes - an HA (High Availability) cluster. Config can be accessed here.
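As a sketch, the HA layout only differs from FORMAT-1 in the node roles; when kind sees more than one control plane node, it adds a load balancer in front of the API servers (file name again hypothetical):

```yaml
# ha-config.yaml (hypothetical file name)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: worker
- role: worker
```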

Creating a 4 node cluster using the custom config yaml
4 kindnet and 4 kube-proxy pods created, one per node; core component pods (scheduler, apiserver, etc.) on each of the two control plane nodes

Here, if you observe, since we have multiple control plane nodes, an external load balancer is created to route the traffic accordingly.

External load balancer along with all 4 nodes

FORMAT-3: 1 worker node and 1 control plane node, exposing a port. Config can be accessed here.
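For this variant, the config additionally declares an extraPortMappings entry on one of the nodes. A sketch matching the ports used later in this post (the linked config may differ in which node carries the mapping):

```yaml
# port-mapping-config.yaml (hypothetical file name)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 32321   # port on the node container
    hostPort: 80           # port on your laptop/host
    protocol: TCP
```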

Creating a cluster with 1 worker node and exposing the port using the custom config yaml
2 kindnet and 2 kube-proxy pods created, one per node; core component pods (scheduler, apiserver, etc.) only on the control plane node
Status of nodes
Mapping the host port 80 to container port 32321

Now that we have a cluster up and running, it's time to deploy our application on the local Kubernetes cluster. We'll be deploying an nginx server as part of it.

Deploying an application ⛵️

As we already discussed, the main advantage of kind is that there is no need to push images to any external image registry such as Docker Hub, ECR, etc. We can directly build and load the image inside the kind cluster and run our service. So now let's see that in action.

To start with, let's load the image into our cluster (ensure you pull the image on your laptop/host before loading it into the cluster).

kind load docker-image nginx:1.14.2
Loading the nginx image to kind cluster

Woah 😄 😄 that's it!! Let's verify that the image is available on the nodes. Again, let's exec into the node container and run the crictl utility.

Verifying the image in the node

Now since the image is loaded, let’s create a Kubernetes deployment and service. You can find the respective yaml here.
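The linked yaml isn't reproduced here, but a minimal deployment-plus-service pair consistent with the ports in this post would look roughly like this (resource names are hypothetical; the nodePort matches the container port we mapped while creating the cluster):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        imagePullPolicy: IfNotPresent   # use the image loaded into the kind cluster
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service           # hypothetical name
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32321             # the node port mapped to host port 80
```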

kubectl apply -f <path to deployment yaml>
Creating the nginx deployment and service

Time to verify whether the deployment was successful and our pods are healthy.

Verifying the pod status
Verifying the K8s service

Now that the pods are up and running, and as we can see the service is also created with the required port forwarding, let's verify the same from the browser. Just try hitting localhost:80 from your host's browser.

Nginx service is up and running !!

So, if you observe, we had created a NodePort service with the node port set to "32321", and while creating the cluster we had mapped the container port "32321" (the container here being a node of the cluster) to the host port "80" (the actual host being your laptop).

Therefore, we are able to access the nginx server deployed in our cluster through the browser on localhost.

Voila! We have successfully deployed our application. Isn't it easy?? 😌 Without the hassle of pushing your image to an external registry, you can easily make code changes in your application, build the image, and deploy it in your local Kubernetes cluster within a few minutes.

Well, that’s it for now!

Feel free to drop any questions or suggestions that you have. Till then Happy Reading!! 😄 😄
