Kubernetes Architecture and Its Components
In my previous blog, I walked you through how to
provision a Kubernetes cluster using Oracle Kubernetes Engine
(OKE)
on Oracle Cloud. But before we go deeper into working with Kubernetes,
it’s important to understand
what Kubernetes actually is
and how it works behind the scenes.
In this blog, we’ll explore the
core architecture of Kubernetes, break down the
master and worker node components, and build a strong foundation to help you confidently move forward with
hands-on deployments.
What is a Kubernetes Cluster?
A Kubernetes cluster is
a set of nodes (machines)
that run containerized applications. It is the foundation on which your
Kubernetes architecture
operates. The cluster is responsible for the deployment, scaling, and
management of your containers across a network of machines.
Key Elements of a Kubernetes Cluster:
- Master Node: This is the control plane of the cluster. It manages the cluster’s overall state, including scheduling, scaling, and deploying applications. (More on this in the next section!)
- Worker Nodes: These are the machines where your applications run. Each worker node contains the necessary components to run containers, including the kubelet, kube-proxy, and container runtime.
A Kubernetes cluster operates in a
highly automated manner, ensuring
resilience and
scalability by managing
application lifecycles and resources effectively.
What is Kubernetes?
Kubernetes, often abbreviated as K8s, is an open-source platform designed
to automate the deployment, scaling, and management of containerized
applications.
Originally developed by Google and now maintained by the Cloud Native
Computing Foundation (CNCF), Kubernetes has become the de facto standard for
orchestrating containers in production environments.
Why Kubernetes?
- Automated deployment and scaling of containers
- Self-healing: restarts failed containers, replaces and reschedules them when nodes die
- Service discovery and load balancing
- Infrastructure abstraction: run apps the same way whether on-prem, in the cloud, or hybrid
- Rollouts and rollbacks for updates with minimal downtime
Understanding the architecture of Kubernetes is fundamental before jumping
into cluster setup or workload management. Kubernetes uses a master-worker
model
to orchestrate containerized applications at scale.
High-Level Architecture Overview
A Kubernetes cluster consists of:
- Master Node (Control Plane): Controls and manages the cluster.
- Worker Nodes: Run the actual application workloads in containers.
Think of the control plane as the brain 🧠 and the worker nodes as the muscles 💪 executing tasks.
Master Node (Control Plane) Components
These components make decisions about the cluster (scheduling, responding
to events, etc.).
✅ kube-apiserver: Acts as the front door to the Kubernetes cluster. It handles all external and internal requests and is the central communication hub for all other components.
- Frontend of the control plane
- Receives REST API calls (via kubectl or CI/CD pipelines)
- Authenticates, validates, and processes requests
✅ etcd: A key-value store that holds the cluster’s entire state, like a database for Kubernetes configuration, secrets, and metadata. It ensures consistency and persistence.
- Consistent, distributed key-value store
- Stores all cluster data (config, state, secrets, etc.)
✅ kube-scheduler: Assigns newly created pods to the most suitable node based on available resources, constraints, and scheduling rules.
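To see what the scheduler looks at, here is a minimal pod manifest sketch with a resource request and a node selector; the scheduler will only place this pod on a node that carries the `disktype: ssd` label and has enough unreserved CPU and memory. (The name, label, image, and request values are illustrative assumptions, not from a real cluster.)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                # illustrative name
spec:
  nodeSelector:
    disktype: ssd          # only nodes with this label are candidates
  containers:
    - name: web
      image: nginx:1.25    # illustrative image
      resources:
        requests:
          cpu: "250m"      # the scheduler picks a node with at least
          memory: "128Mi"  # this much unreserved CPU and memory
```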
✅ kube-controller-manager: Runs background processes (controllers) that continuously check the desired state vs. the current state and take action to keep things in sync. It runs various controllers:
- Node controller: notices and responds when nodes go down
- Replication controller: ensures the desired number of pod replicas
- Endpoint controller, namespace controller, etc.
- cloud-controller-manager (optional, in cloud setups): integrates with cloud APIs for managing load balancers, storage, etc.
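The desired-vs-current-state loop is easiest to see with replicas: a Deployment declares how many pods should exist, and if one dies (or its node goes down), the controllers notice the gap and create a replacement. A minimal sketch, with placeholder name and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello              # placeholder name
spec:
  replicas: 3              # desired state: three pods at all times
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25   # placeholder image
```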
Worker Node Components
These components actually run your containers (apps, services,
workloads).
✅ kubelet: An agent that runs on every worker node. It receives instructions from the API server and ensures that the specified containers are running and healthy.
- Agent that runs on each node
- Registers the node with the API server
- Ensures containers are running as expected
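The “healthy” part comes from probes defined in the pod spec: the kubelet runs them on its node and restarts containers that keep failing. A sketch, assuming the app exposes a health endpoint on port 80 (the name, image, endpoint path, and timings are all illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app         # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25    # illustrative image
      livenessProbe:       # checked by the kubelet on this node
        httpGet:
          path: /healthz   # assumed health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10  # repeated failures trigger a container restart
```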
✅ kube-proxy: Handles network communication and routing within the cluster. It manages access to services and ensures that traffic is directed to the right pods.
- Maintains network rules on nodes
- Forwards traffic to the right pod using iptables/IPVS
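Concretely, when you create a Service, kube-proxy on every node programs the iptables/IPVS rules that map the Service’s stable virtual IP and port to the pods matching its selector. A minimal sketch (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc            # illustrative name
spec:
  selector:
    app: web               # traffic is forwarded to pods with this label
  ports:
    - port: 80             # the Service's stable port
      targetPort: 8080     # assumed container port on the pods
```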
✅ Container runtime: Responsible for pulling container images and starting/stopping containers on the node. Kubernetes supports multiple container runtimes.
- Software that runs containers (e.g., containerd, CRI-O, Docker)
How They Work Together
Here’s a typical flow:
1. You run kubectl apply -f pod.yaml.
2. kube-apiserver receives the request and stores the desired state in etcd.
3. kube-scheduler finds the right node.
4. kubelet on the selected worker node pulls the container image and starts the pod.
5. kube-proxy ensures it can receive traffic if needed.
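The pod.yaml in step 1 can be as simple as the following sketch (name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo               # placeholder name
spec:
  containers:
    - name: demo
      image: nginx:1.25    # placeholder image
```

Running kubectl apply -f pod.yaml with a file like this kicks off the rest of the flow automatically.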
Summary Table
The table below describes each component and where it runs.

| Component | Node | Role |
| --- | --- | --- |
| kube-apiserver | Master | Front door of the cluster; handles all API requests |
| etcd | Master | Distributed key-value store for cluster state |
| kube-scheduler | Master | Assigns pods to suitable nodes |
| kube-controller-manager | Master | Reconciles desired vs. current state |
| kubelet | Worker | Ensures containers are running and healthy |
| kube-proxy | Worker | Routes traffic to the right pods |
| Container runtime | Worker | Pulls images and runs containers |
Conclusion
Understanding the
Kubernetes architecture
is the first step toward mastering how container orchestration works. In
this post, we covered what Kubernetes is, what a cluster looks like, and
the key roles played by the
control plane (master node)
and worker nodes. These
core components work together to keep your containerized applications
running reliably and at scale.
In the next part of this series, we’ll dive into
core Kubernetes concepts
like
pods, deployments, namespaces, and labels.