
Kubernetes: The Solution for Microservices

  • Author: Administrator
  • Published On: 13 Apr 2025
  • Category: Java

Once Upon a Time... Monolithic Applications and Their Limitations

In the early days of software development, applications were often built with a monolithic architecture: all the source code, from the user interface and business logic to database access, was packaged and run as a single process. This worked fine for small applications, but as an application grew, it began to show serious problems:

  • Difficult to scale: To accommodate increased traffic, you are forced to duplicate the entire application, even if only a small part needs more resources. This is wasteful of resources and inefficient.
  • Difficult to maintain and upgrade: Any change, no matter how small, requires redeployment of the entire application, posing risks and service disruptions.
  • Technology constraints: Technology choices for the entire application are limited, making it difficult to adopt new technologies suitable for each part.
  • Poor stability: A bug anywhere in the application can bring down the entire system.

The Rise of Microservices and the Problem of Managing Container "Zoos"

To overcome the limitations of monolithic architecture, the Microservices architecture was born. In this architecture, the application is split into small, independent services that communicate with each other over the network. Each microservice can be developed, deployed, and scaled independently, using the technologies best suited to it.

Microservices bring many benefits, but they also create a new challenge: managing a large number of containers. Each microservice is usually packaged into one or more containers (most often Docker containers). Managing dozens or even hundreds of containers by hand becomes extremely complex and error-prone, especially in the following areas:

  • Scaling: How to automatically increase or decrease the number of containers as load increases or decreases?
  • Self-healing: How to automatically restart failed containers?
  • Deployment: How to deploy new versions of containers safely and efficiently?
  • Networking: How can containers find and communicate with each other in a stable way?
  • Resource Management: How to allocate resources (CPU, memory) appropriately to containers?

Container Orchestration: "Conductor" of the container world

It is in this context that Container Orchestration emerges as a natural solution. Simply put, Container Orchestration is the process of automating the deployment, management, scaling, and connection of containers. It is like a "conductor" coordinating an orchestra (containers) to ensure that all instruments work in harmony and create a complete symphony (the application operates smoothly).

So, what is Kubernetes?

Kubernetes (K8s) is a powerful open source system for automating the deployment, scaling, and management of containerized applications.

A little history: Kubernetes was originally developed at Google, drawing on more than a decade of experience with Borg, an internal system used to manage billions of containers per week. Google open-sourced the project in 2014. Today, Kubernetes is governed by the Cloud Native Computing Foundation (CNCF), a non-profit organization under the Linux Foundation that ensures the project's continued development and neutrality.

Why is Kubernetes important? Kubernetes provides a powerful platform for building and operating distributed applications, especially those based on Microservices architecture. It solves the challenges of managing containers at scale, allowing developers to focus on writing code instead of worrying about infrastructure.

The "golden" benefits that Kubernetes brings:

  • Horizontal Pod Autoscaling: Kubernetes can automatically adjust the number of replicas (pods — we’ll talk about this later) of your application based on metrics like CPU load or other custom metrics. When traffic increases, Kubernetes will automatically increase the number of pods to accommodate, and vice versa when traffic decreases.
  • Self-healing: Kubernetes continuously monitors the health of containers. If a container fails or becomes unresponsive, Kubernetes automatically restarts or replaces it with a new container, ensuring application availability.
  • Rolling updates, Rollbacks: Kubernetes allows you to seamlessly update your application to a new version or roll back to an older one without disrupting service to your users. Rolling updates gradually replace old containers with new ones, while rollbacks let you easily return to a previous version if something goes wrong.
  • Efficient resource management: Kubernetes allows you to define the resources (CPU, memory) that each container needs and limit the maximum resources it can use. This helps optimize the resource usage of the cluster and prevents one container from "eating" all the resources of other containers.
  • Service discovery and Load balancing: Kubernetes provides built-in mechanisms for microservices to automatically discover and communicate with each other via names (DNS). It also has built-in load balancing capabilities to evenly distribute traffic to instances of a service, ensuring performance and stability.
  • Configuration and secret management: Kubernetes provides objects like ConfigMaps to manage configuration files and Secrets to manage sensitive information (e.g. passwords, API keys) securely and conveniently.
  • Storage orchestration: Kubernetes allows you to dynamically manage the allocation and mounting of storage systems (e.g., hard drives, network storage) for containers.
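Several of these benefits are configured declaratively in YAML. As a minimal sketch (all names here, such as demo-app and the nginx image, are hypothetical stand-ins), a Deployment with explicit resource requests and limits, paired with a HorizontalPodAutoscaler, might look like this:

```yaml
# Hypothetical Deployment: 2 replicas, each with resource requests and limits.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25        # stand-in image for illustration
          resources:
            requests:              # guaranteed minimum for scheduling
              cpu: 100m
              memory: 128Mi
            limits:                # hard cap so one container can't starve others
              cpu: 500m
              memory: 256Mi
---
# Autoscale between 2 and 10 pods, targeting ~70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Applied with kubectl, the first object gives you self-healing and resource management; the second gives you horizontal autoscaling, with Kubernetes adjusting the replica count between the min and max based on observed CPU load.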

Kubernetes Architecture Overview (High-level):

A Kubernetes cluster consists of two main components: the Control Plane (the brain of the cluster) and Worker Nodes (where the applications actually run).

1. Control Plane (Master Nodes): Responsible for managing the entire cluster. It includes the following important components:

  • API Server: This is the main “communication gateway” of Kubernetes. All interactions with the cluster (from users, command line tools, to internal components) go through the API Server. It provides a RESTful API for managing Kubernetes objects.
  • etcd: A distributed, consistent, and highly available key-value store that holds all cluster configuration and state data. It is like the "heart" that holds the state of the entire system.
  • Scheduler: This component is responsible for deciding which pod (the smallest unit of deployment in Kubernetes) will run on which node based on resource requirements, constraints, and policies. It is like a "dispatcher" that assigns work to worker nodes.
  • Controller Manager: A collection of different controllers, each responsible for a specific aspect of the cluster. For example:
    • Node Controller: Monitors the status of worker nodes and responds when a node goes down or needs to be removed.
    • Deployment Controller: Ensures the desired number of pods of a deployment is always maintained.
    • ReplicaSet Controller: Maintains the number of replicas of a pod.
    • Service Controller: Manages Service objects and provides service discovery and load balancing capabilities.
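The Scheduler makes its placement decisions from fields in the pod's own spec. As a small, hypothetical illustration (the pod name, label, and image are made up), here are the two inputs mentioned above, resource requirements and constraints, in a Pod manifest:

```yaml
# Hypothetical Pod spec: the fields the Scheduler reads when placing it.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  nodeSelector:
    disktype: ssd        # constraint: only schedule onto nodes labeled disktype=ssd
  containers:
    - name: app
      image: nginx:1.25  # stand-in image
      resources:
        requests:        # the Scheduler picks a node with this much spare capacity
          cpu: 250m
          memory: 64Mi
```

If no node satisfies both the label constraint and the free-capacity requirement, the pod stays Pending until one does.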

2. Worker Nodes: These are the servers (virtual or physical) where your containers actually run. Each worker node consists of the following components:

  • Kubelet: An agent running on each worker node, responsible for communicating with the Control Plane and managing the pods and containers on that node. It receives instructions from the API Server and ensures that containers are run as required.
  • Kube-proxy: A network proxy running on each worker node, responsible for implementing network rules to allow communication between pods and from outside to services. It performs load balancing at the network layer.
  • Container Runtime: The software responsible for actually running containers. Popular runtimes include containerd and CRI-O; Docker Engine was historically common, but its built-in support (dockershim) was removed in Kubernetes 1.24.

Interact with Kubernetes using kubectl:

kubectl is the primary command line tool for interacting with a Kubernetes cluster. You can use kubectl to create, manage, and inspect Kubernetes resources.

Install kubectl:

Installing kubectl depends on your operating system. You can find detailed instructions on the Kubernetes homepage: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Set up a local environment for "playing around":

To get started with Kubernetes, you don’t need a large cluster. There are several tools that can help you create a simple local Kubernetes environment on your computer:

  • Minikube: A lightweight tool for running a single-node Kubernetes cluster in a virtual machine or container on your computer, depending on the driver you choose. Great for learning and development. You can download and install Minikube from: https://minikube.sigs.k8s.io/docs/start/
  • Kind (Kubernetes in Docker): A tool that allows you to run local Kubernetes clusters using Docker containers as "nodes". Very fast and lightweight. See instructions at: https://kind.sigs.k8s.io/docs/user/quick-start/
  • Docker Desktop: If you have Docker Desktop installed, it usually comes with a single-node Kubernetes cluster that you can enable in the settings.
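With one of these tools installed, getting a playground cluster up is typically a couple of commands. A sketch using Minikube (this assumes Minikube and kubectl are already installed, and the first run will be slow while images download):

```shell
minikube start                  # boot a single-node local cluster
kubectl config current-context  # kubectl should now point at "minikube"
kubectl get nodes               # the single node should report STATUS "Ready"
minikube stop                   # shut the cluster down when you're done
```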

Run some basic kubectl commands:

After installing kubectl and setting up your local environment, try some basic commands to get familiar:

  • kubectl version: Displays information about the kubectl version and the Kubernetes server (if connected).
  • kubectl cluster-info: Displays information about the cluster you are connected to.
  • kubectl get nodes: Lists the nodes (servers) in your cluster.
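Once those work, you can try a first end-to-end workload. This is a hedged sketch, not a production recipe: it assumes a running local cluster, and the name hello and the nginx image are arbitrary stand-ins:

```shell
kubectl create deployment hello --image=nginx:1.25  # create a Deployment with 1 pod
kubectl get pods                                    # watch the pod come up
kubectl scale deployment hello --replicas=3         # manual horizontal scaling
kubectl expose deployment hello --port=80           # put a Service in front of the pods
kubectl get services                                # see the Service's cluster IP
kubectl delete service hello                        # clean up
kubectl delete deployment hello
```

Along the way you can see self-healing for yourself: delete one of the pods with kubectl delete pod and the Deployment immediately creates a replacement.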

Conclusion:

Kubernetes has become an indispensable platform for building and operating modern applications, especially those based on Microservices architecture. It solves the complex challenges of managing containers at scale, bringing many important benefits such as automatic scaling, self-healing, non-disruptive deployment, and efficient resource management.

This is just an overview of Kubernetes. In future posts, we will dive deeper into core concepts like Pods, Deployments, Services, Namespaces and how you can use kubectl to manage them. Get ready to explore the powerful world of Kubernetes!
