Kubernetes: Where Pods Rule and Containers Drool

MODULE - 1

Introduction:

Welcome to the world of Kubernetes! In this short and captivating blog, we'll explore the power of container orchestration. Whether you're a developer or simply curious about modern application management, join us on this exciting journey into the heart of Kubernetes.

Discover how Kubernetes automates the complex process of managing containers, ensuring seamless coordination, scalability, and reliability. Let's unravel the core concepts and architecture behind this revolutionary open-source platform and witness the magic of container orchestration in action.


Index:

  • Container Orchestration

  • Kubernetes

  • Kubernetes Architecture

  • Deployments and Pods

  • Services and Networking

  • Scaling and Autoscaling

  • Configuration and Secrets

  • Persistent Storage

  • Logging and Monitoring


Container Orchestration:

We already covered containers in our previous Docker blog. If you don't have any knowledge about containers, I highly recommend checking out the Docker blog first before moving forward with this one. Anyway, let's now see what orchestration is:

So, container orchestration is the process of deploying and managing multiple containers across a distributed system. As we already know, containers are the most efficient way to package software applications and services for deployment, and orchestration automates the deployment and management of those containers.

Container orchestration platforms:

The two most popular container orchestration platforms are:

  • Kubernetes

  • Docker Swarm

In this blog, we will focus on Kubernetes. Although the concepts are quite similar to Swarm's, there are a few reasons why we are learning Kubernetes. Let's look at some key differences between them:

  1. Complexity: Kubernetes is more complex and powerful, while Docker Swarm is simpler and easier to understand.

  2. Scalability: Kubernetes is highly scalable, designed to handle large-scale deployments with thousands of containers. Docker Swarm is better suited for smaller or medium-sized deployments.

  3. Flexibility: Kubernetes offers more configuration options and flexibility, allowing fine-grained control over deployments. Docker Swarm provides a more straightforward approach with fewer configuration options.

  4. Adoption: Kubernetes has gained widespread adoption and is considered the industry standard for container orchestration. Docker Swarm has a smaller user base but is still used by organizations already invested in Docker.

Okay! But why do we need them? I still don't understand:

Let's try to understand orchestration platforms in a simpler way:

Imagine Hotstar as a stadium hosting an IPL match. The stadium has a limited number of seats (representing server capacity) to accommodate the fans (users) who want to watch the game. Now, during a highly anticipated match, there is a sudden surge in fans wanting to enter the stadium.

This is where Kubernetes or Swarm comes into the picture, acting as an intelligent stadium management system: dynamically scaling up capacity, balancing the crowd, handling any faults, and optimizing resource allocation. It ensures a smooth and enjoyable experience for every fan during high-traffic IPL matches, just as Kubernetes does for Hotstar's streaming services.


Kubernetes:

Now, I hope you all have a basic understanding of what Kubernetes is, as we have already discussed it in some detail. But let's explore it a little more in-depth.

Kubernetes, also known as K8s, is an open-source container orchestration platform developed by Google and open-sourced in 2014. It builds on Google's experience running its internal cluster manager, known as "Borg", and reached its 1.0 release in 2015. It has since become one of the most popular container orchestration systems in the world.

Kubernetes has been adopted by many of the top tech companies in the world, including Amazon, Spotify, Microsoft, and Pinterest.

The official definition of Kubernetes: "Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications."


Kubernetes Architecture:

NOTE: The image used here is for educational purposes to enhance understanding and is used fairly and respectfully without infringing on any copyright.

The Kubernetes architecture consists of several components. Let's go through them one by one and build a high-level understanding of each.

  1. Master Node Components:

    • API Server: The API server acts as the front end for all administrative and management operations in Kubernetes. It exposes the Kubernetes API, which allows users and other components to interact with the cluster. It validates and processes API requests, enforces authentication and authorization, and performs API admission control.

    • etcd: etcd is a highly available and distributed key-value store used to store the cluster's configuration data and state. It serves as the authoritative source of truth for the entire cluster, ensuring consistency and durability of data.

    • Scheduler: The scheduler is responsible for assigning pods to worker nodes based on various factors like resource requirements, quality of service, and scheduling policies. It takes into account the availability of resources and constraints specified in the pod's configuration, distributing workloads efficiently across the cluster.

    • Controller Manager: The controller manager runs various controllers that are responsible for maintaining the desired state of the cluster. Each controller focuses on a specific aspect, such as the Node Controller for managing nodes, the Replication Controller for managing replicas of pods, and the Service Controller for managing services.

  2. Worker Node Components:

    • Kubelet: The kubelet is an agent that runs on each worker node and interacts with the API server. It is responsible for managing the pods assigned to its node. It ensures that the containers specified in the pod specifications are running and healthy, and it reports the node's status back to the master.

    • Container Runtime: The container runtime, such as Docker or containerd, is responsible for pulling container images, creating and running containers, and managing their lifecycle. It provides the necessary infrastructure to run containers on the worker nodes.

    • kube-proxy: The kube-proxy is responsible for network communication within the cluster. It maintains network rules to enable connectivity between pods and services. It also performs load balancing for services and handles routing of network traffic.

  3. Networking:

    • Pod Networking: Each pod in Kubernetes has its own unique IP address and can communicate directly with other pods within the cluster. Pods can be on the same or different nodes.

    • Service Networking: Kubernetes assigns a stable IP address to services, which act as an abstraction layer for accessing pods. Services enable load balancing and provide a single entry point for accessing a group of pods.

    • Network Plugins: Kubernetes supports various network plugins that implement networking and connectivity between pods and services. Plugins like Calico, Flannel, and Weave handle tasks like overlay network creation, routing, and network policies.

  4. Persistent Storage:

    • Volumes: Volumes provide persistent storage for containers in Kubernetes. They allow data to persist even if the container is restarted or moved to a different node. Kubernetes supports various types of volumes, including local storage, network-attached storage, and cloud-based storage solutions.

    • Persistent Volumes (PV) and Persistent Volume Claims (PVC): PVs represent physical storage resources in the cluster, while PVCs are requests for specific amounts and types of storage. PVs and PVCs enable dynamic provisioning and management of persistent storage.
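To make the PV/PVC flow a bit more concrete, here is a minimal sketch of a PersistentVolumeClaim manifest. The claim name and storage size (`data-claim`, `1Gi`) are made up for illustration:

```yaml
# A hypothetical PersistentVolumeClaim requesting 1Gi of storage.
# Kubernetes binds this claim to a matching PersistentVolume, or
# dynamically provisions one if a StorageClass supports it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce        # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi         # the amount of storage being requested
```

A Pod can then reference `data-claim` in its volumes section to mount the bound storage, and the data survives Pod restarts.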


Deployments and Pods:

Deployments: A Deployment is like a project manager overseeing a team of workers. It defines and manages the desired state of your application, ensuring that the right number of instances (called Pods) are running at all times.

We can think of a Deployment as a higher-level abstraction that provides declarative updates and lifecycle management for Pods. A Deployment defines the desired state of a set of identical Pods and manages their creation, scaling, and updates. Deployments enable easy rolling updates, rollback capabilities, and scaling operations for application deployments. They ensure that the desired number of Pods is always running, and they handle the creation or termination of Pods based on the defined configuration.
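Here is a minimal Deployment manifest sketch showing the desired-state idea in practice; the names and image (`web`, `nginx:1.25`) are placeholders for the example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired number of identical Pods
  selector:
    matchLabels:
      app: web
  template:                    # the Pod template the Deployment manages
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Changing `replicas` or `image` and re-applying the manifest makes Kubernetes reconcile the running Pods toward the new desired state; an image change triggers a rolling update.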

You must be wondering what a ReplicaSet is.

Here's a high-level overview of a ReplicaSet:

So, a ReplicaSet in Kubernetes is a controller that ensures a specified number of identical copies (replicas) of Pods are running. It monitors and maintains the desired replica count, automatically replacing failed or terminated Pods.

What happens if we don't have a ReplicaSet?

Without a ReplicaSet in Kubernetes, there will be no automated management of Pod replicas. This means there will be no automatic replacement of failed Pods, reduced high availability, and the need for manual scaling and self-healing processes.
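A ReplicaSet can also be defined directly, although in practice a Deployment usually creates and manages one for you behind the scenes. A minimal sketch (all names are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3                  # the count the controller keeps alive
  selector:
    matchLabels:               # Pods matching these labels are counted
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

If one of the three Pods fails, the ReplicaSet controller notices the replica count has dropped and creates a replacement automatically.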

Pods: A Pod is the smallest and simplest unit in the Kubernetes object model. It represents a single instance of a running process or a group of co-located processes within the cluster. Pods are ephemeral and can be created, scheduled, and destroyed by Kubernetes. They encapsulate one or more containers and share the same network namespace, allowing them to communicate with each other using localhost. Pods provide a way to manage and scale individual application components.

In the image above, Pod 1 and Pod 2 each have three different containers that are working together to achieve a certain task.

To put it simply, think of a Pod as a package that contains all the necessary ingredients to run an application. Just like a lunchbox with different compartments for food items, a Pod consists of one or more containers that work together to deliver the application's functionality.
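Continuing the lunchbox analogy, a multi-container Pod manifest might look like the sketch below; the container names and images are made up for the example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lunchbox
spec:
  containers:                  # the "compartments" of the lunchbox
    - name: app
      image: nginx:1.25
    - name: log-helper         # a hypothetical sidecar container
      image: busybox:1.36
      command: ["sh", "-c", "sleep infinity"]
```

Both containers share the Pod's network namespace, so they can reach each other over localhost, just as described above.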


Conclusion:

In this first part of our exploration into Container Orchestration, we've covered the basics of Kubernetes architecture, focusing on Deployments and Pods. But our journey continues! In the next part, we'll dive into crucial topics like Services, Scaling, Configuration, Persistent Storage, and Logging.

Stay tuned to discover how these elements contribute to building resilient and scalable containerized applications.
