Demystifying Kubernetes: Part 1

Kamna Garg
7 min read · Jul 22, 2023


In the last blog, we discussed Docker, which lets you package your application along with all its dependencies as an image and run that image directly as a container.

But running containers is not everything: you need a tool to scale them up and down and to monitor them. In other words, we need a tool to harness the power of containers, and that’s where Kubernetes does its magic.

Kubernetes was first developed by a team at Google and later donated to the Cloud Native Computing Foundation (CNCF). It is an orchestrator for containerized applications.

The next question is: what is an orchestrator?

An orchestrator, despite the fancy name, is simply a system that takes care of deploying and managing multiple applications in a distributed environment.

Docker is the low-level technology that provides the container runtime, while k8s is the high-level technology that takes care of everything else: monitoring, self-healing, scaling up and down, and so on.

Benefits of k8s:

  1. It makes sure that resources are used efficiently and within the constraints defined by the user.
  2. It scales resources up and down based on demand.
  3. It provides high availability and zero-downtime rolling code changes.
  4. It offers self-healing and service discovery.

and a lot more…

Before delving into practical examples, let’s discuss k8s in detail first.

Major K8s components:

You package your application as a container and deploy it on a Kubernetes cluster; this is the standard way of managing scalable, containerized applications. Kubernetes uses a master-worker architecture, where the control plane nodes handle cluster management and coordination, while the worker nodes run the application containers. Don’t worry if these names sound fancy to you; you will learn about them as we progress through the blog.

Let’s dig deeper into each of the components:

Control Plane:

Imagine the Control Plane in Kubernetes as the “brain” of the whole operation. It’s like the central command center that manages and coordinates everything that happens in the Kubernetes cluster.

  • Desired State Management: The Control Plane stores and manages the desired state of the cluster. Users define that desired state in YAML files, specifying configurations for applications and resources.
  • Declarative Approach: Kubernetes follows a declarative approach, where users declare what they want, and Kubernetes handles the implementation details.
  • Continuous Monitoring: The Control Plane continuously monitors the cluster’s actual state. It compares the desired state (defined in the YAML files) with the current state and takes corrective action for any discrepancies.
  • Self-Healing and Resilience: The Control Plane automatically manages the cluster, responding to changes and recovering from failures.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx-container
          image: nginx:latest
          ports:
            - containerPort: 80

In this example, the YAML file defines a Kubernetes Deployment with three replicas of the Nginx web server. The Control Plane ensures the desired state is maintained by creating and managing the specified pods. Don’t worry about the individual fields in the YAML file for now.

Let’s take a quick look at the different components making up the control plane.

  1. API Server
  • The API Server is like the main communication hub in the cluster.
  • It exposes the Kubernetes API, which is like a set of rules and instructions for interacting with the cluster.
  • You can use commands or tools to talk to the API Server and ask it to create, update, or delete resources like pods, services, and deployments.

The API Server is the frontend of the control plane: it exposes a set of RESTful endpoints and handles all communication with the cluster.
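
To make the idea of RESTful endpoints a little more concrete, here is a minimal sketch (the file name and Pod name are made up for illustration). Every kubectl command you run is ultimately translated into an HTTP request against one of these endpoints.

# Illustrative Pod manifest, e.g. saved as nginx-pod.yaml (hypothetical name).
# Running `kubectl apply -f nginx-pod.yaml` hands this object to the API Server,
# which (roughly speaking) handles it as a REST call such as
# POST /api/v1/namespaces/default/pods and then persists it in etcd.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: default
spec:
  containers:
    - name: nginx-container
      image: nginx:latest
      ports:
        - containerPort: 80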

2. etcd:

  • etcd is a popular distributed key-value store that acts as the memory of the cluster and is the single source of truth for it.
  • It keeps track of the configuration data and the current state of the cluster.
  • It’s highly reliable and ensures that even if a part of the cluster fails, the information is still safe and available.

etcd is the cluster’s distributed “memory”, storing configuration data and current state.
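
As a rough sketch of what “single source of truth” means in practice: every object you create through the API Server is serialized and stored under a key in etcd. The key layout below is only illustrative; it is an internal detail of Kubernetes and varies between versions.

# Conceptual sketch of etcd keys under the /registry prefix (not a public API).
/registry/deployments/default/nginx-deployment: "<serialized Deployment object>"
/registry/pods/default/nginx-deployment-7d9c5b5df-abc12: "<serialized Pod object>"
/registry/services/specs/default/nginx-service: "<serialized Service object>"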

3. Controller Manager:

The Controller Manager works through control loops. A control loop continuously watches the cluster’s current state, compares it with the desired state, and corrects any discrepancies between the two. It is just like a thermostat in your home that continuously monitors the temperature and adjusts the heating or cooling to keep it close to the desired setting. The Controller Manager works similarly to ensure the cluster stays in the desired state.

  • The Controller Manager is like a watchful supervisor for the cluster.
  • It constantly looks at the cluster’s current state and compares it to the desired state you’ve specified.
  • If there’s any difference between the two states, the Controller Manager takes action to bring everything back in line with what you want.

Controller Manager is a watchful manager, always checking if the cluster is as you desire and making adjustments when needed.
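
To see this desired-versus-actual comparison in concrete form, here is a trimmed, hypothetical view of the Deployment from earlier (roughly what `kubectl get deployment nginx-deployment -o yaml` would show). The spec section is what you asked for, the status section is what currently exists, and the relevant controllers keep acting until the two agree.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3          # desired state: what you asked for
status:
  replicas: 2          # actual state: only 2 pods exist right now
  readyReplicas: 2
  # The Deployment and ReplicaSet controllers keep creating pods
  # until the status catches up with the spec (3 ready replicas).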

4. Scheduler:

  • Imagine the Scheduler as a smart “matchmaker” for your app containers and worker nodes.
  • When you want to run a new container or scale your app, the Scheduler finds the best place (worker node) to put it using some complex logic behind the scenes.
  • It looks at the available resources on each node, like memory and CPU, and ensures the containers are placed efficiently, making the best use of the cluster’s capacity.

Scheduler is like a wise organizer: it assigns the right tasks (pods) to the best workers (nodes) in the cluster.
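
As a small, hypothetical example of the information the Scheduler works with, the Pod below declares resource requests and a node label constraint (the disktype label is made up). The Scheduler only considers nodes that have enough unreserved CPU and memory to cover the requests and that carry the matching label.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  nodeSelector:
    disktype: ssd           # only schedule onto nodes labelled disktype=ssd (example label)
  containers:
    - name: nginx-container
      image: nginx:latest
      resources:
        requests:
          cpu: "250m"       # the Scheduler reserves this much CPU on the chosen node
          memory: "128Mi"   # and this much memory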

Control Plane and Worker Nodes

Worker Nodes

In a Kubernetes cluster, worker nodes are like the hands-on workers that do the actual job of running your applications. They are responsible for executing the containers that make up your software. The worker nodes operate together as part of the cluster, with the control plane overseeing and guiding their tasks.

The Role of Worker Nodes:

  • Running Applications: Worker nodes handle the real work by running your applications and services in containers.
  • Cluster Members: They are active members of the Kubernetes cluster, collaborating with the control plane to keep everything organized.
  • Physical or Virtual Machines: Worker nodes can be either physical machines or virtual machines (VMs) depending on how your cluster is set up.
  • Scalability and Load Sharing: More worker nodes mean the cluster can handle bigger tasks and share the workload among different machines.
  • Redundancy for Resilience: Having multiple worker nodes adds redundancy, making sure that if one worker node has a problem, the others can keep the applications running smoothly.

In short, worker nodes are the heart of the Kubernetes cluster, working hard to run your applications while following the guidance of the control plane. They provide the processing power and resources needed to keep your software running efficiently and reliably.

Let’s take a quick look at the different components making up the worker nodes:

Components of Worker Nodes in Kubernetes:

  1. Container Runtime (e.g., Docker, containerd):
  • Efficient Execution: The container runtime provides an efficient way to execute your applications. It uses container technology, which allows your apps to run in isolated environments, ensuring that they don’t interfere with each other.
  • Easy Management: Managing applications becomes easier with container runtime. You can start, stop, or update containers quickly without affecting other parts of your cluster.
  • Portability and Consistency: Containers created by the runtime are consistent across different environments. This portability allows you to develop locally and then seamlessly move your applications to the cloud or other Kubernetes clusters.

2. Kubelet:

  • Obedient Worker: The kubelet is like a diligent worker on each node, taking instructions from the control plane. It ensures that the containers specified in the YAML files are running as desired.
  • Health Monitor: The kubelet continuously monitors the health of containers. If a container crashes or becomes unresponsive, the kubelet automatically restarts it to maintain the desired state.
  • Resource Allocator: With its resource management capabilities, the kubelet allocates the right amount of CPU, memory, and other resources to each container. This ensures that containers run efficiently without overloading the node.
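
The snippet below is a hypothetical fragment of a pod template showing both behaviours described above: the liveness probe is what the kubelet uses to decide that a container is unhealthy and must be restarted, and the resources block is what it uses to allocate and cap CPU and memory.

containers:
  - name: nginx-container
    image: nginx:latest
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10     # the kubelet probes every 10 seconds and restarts the container on repeated failures
    resources:
      requests:
        cpu: "100m"
        memory: "64Mi"
      limits:
        cpu: "500m"         # the kubelet and container runtime enforce these upper bounds
        memory: "256Mi"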

3. Kube-proxy:

  • Networking Magician: Kube-proxy is like a networking magician, managing the necessary network rules to enable communication between your containers and services within the Kubernetes cluster. It sets up network routes and connections so that containers can find and talk to each other.
  • Load Balancer: When you have multiple replicas of an application, kube-proxy acts as a load balancer, distributing incoming traffic among those replicas. This load balancing ensures that your application can handle more users and traffic while maintaining performance and availability.
  • High Availability: Kube-proxy ensures high availability for your applications. If one container or node fails, kube-proxy automatically redirects the traffic to healthy instances, providing fault tolerance and preventing disruptions.
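
To connect this back to the Deployment example from earlier, here is a minimal, illustrative Service (the name is made up). kube-proxy programs the node’s network rules so that traffic sent to this Service’s cluster IP on port 80 is spread across all ready pods carrying the app: nginx label.

apiVersion: v1
kind: Service
metadata:
  name: nginx-service       # hypothetical name
spec:
  selector:
    app: nginx              # matches the pods created by the Deployment above
  ports:
    - port: 80              # the port clients use on the Service's cluster IP
      targetPort: 80        # the port the nginx containers listen on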

Interaction between Control Plane and Worker Nodes:

The control plane and worker nodes in Kubernetes work together like a well-coordinated team to make sure everything runs smoothly. The control plane, which acts as the cluster’s “brain,” keeps track of how things should be according to the plans you provide in the YAML files. It keeps an eye on what’s happening in the cluster all the time. The worker nodes, like hardworking helpers, follow the control plane’s instructions. They run your applications and tell the control plane how things are going. The control plane then guides the worker nodes, telling them what to do, like creating or removing containers. The worker nodes also tell the control plane about available resources and how healthy things are. This back-and-forth communication helps the cluster stay organized and ensures that your apps run just the way you want them to.

Summary

In summary, Kubernetes is a powerful system that manages applications in containers. We’ve covered the main parts, like the control plane and worker nodes, that work together to make sure everything runs smoothly. But there’s a lot more to explore! In the next part of this series, we’ll dive deeper into Kubernetes with practical examples. We’ll learn how to deploy applications, manage resources, and handle networking. By the end, you’ll have a better understanding of Kubernetes and how to use it effectively for your applications. Stay tuned for the next part, where we’ll take this journey together!
