In my previous blog, I walked you through how to provision a Kubernetes cluster using Oracle Kubernetes Engine (OKE) on Oracle Cloud. But before we go deeper into working with Kubernetes, it’s important to understand what Kubernetes actually is and how it works behind the scenes.
In this blog, we’ll explore the core architecture of Kubernetes, break down the master and worker node components, and build a strong foundation to help you confidently move forward with hands-on deployments.
🌐 What is a Kubernetes Cluster?
A Kubernetes cluster is a set of nodes (machines) that run containerized applications. It is the foundation on which your Kubernetes architecture operates. The cluster is responsible for the deployment, scaling, and management of your containers across a network of machines.
Key Elements of a Kubernetes Cluster:
- Master Node: This is the control plane of the cluster. It manages the cluster’s overall state, including scheduling, scaling, and deploying applications. (More on this in the next section!)
- Worker Nodes: These are the machines where your applications run. Each worker node contains the necessary components to run containers, including the kubelet, kube-proxy, and container runtime.
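If your kubeconfig is pointing at the OKE cluster from the previous post (or any other cluster), you can see these nodes for yourself. The output below is only an illustrative sketch; node names, roles, and versions will differ, and on managed services like OKE the control plane nodes are hidden, so you will typically see only workers:

```bash
# List every node the cluster knows about, with role and status
kubectl get nodes -o wide

# Illustrative output (self-managed cluster; managed clusters usually show only workers):
# NAME              STATUS   ROLES           AGE   VERSION
# control-plane-1   Ready    control-plane   12d   v1.29.1
# worker-node-1     Ready    <none>          12d   v1.29.1
# worker-node-2     Ready    <none>          12d   v1.29.1
```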
✅ Why Kubernetes?
A Kubernetes cluster operates in a highly automated manner, ensuring resilience and scalability by managing application lifecycles and resources for you. Out of the box, Kubernetes gives you:
- Automated deployment and scaling of containers
- Self-healing: Restarts failed containers, replaces and reschedules them when nodes die
- Service discovery and load balancing
- Infrastructure abstraction: Run apps the same way whether on-prem, in the cloud, or hybrid
- Rollouts and rollbacks for updates with minimal downtime
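Here is a quick, hedged sketch of a few of these capabilities using kubectl. The deployment name web and the nginx image are placeholders for illustration, not something set up earlier in this series:

```bash
# Automated deployment and scaling (names and image are placeholders)
kubectl create deployment web --image=nginx:1.25 --replicas=3
kubectl scale deployment web --replicas=5   # Kubernetes schedules the extra pods for you

# Rollout of a new version, and a rollback if something goes wrong
kubectl set image deployment/web nginx=nginx:1.26
kubectl rollout status deployment/web
kubectl rollout undo deployment/web
```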
🏗️ Kubernetes Architecture: The Two Halves of a Cluster
At a high level, every Kubernetes cluster is split into two parts:
- Master Node (Control Plane): Controls and manages the cluster.
- Worker Nodes: Run the actual application workloads in containers.
🧠 Master Node (Control Plane) Components
1. kube-apiserver
- Frontend of the control plane
- Receives REST API calls (via kubectl or CI/CD pipelines)
- Authenticates, validates, and processes requests
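A simple way to convince yourself that everything goes through the API server: kubectl is just a REST client. The commands below are a small sketch; -v=8 raises the client verbosity so the underlying HTTP calls are printed:

```bash
# Show the HTTP requests kubectl sends to the kube-apiserver
kubectl get pods -v=8

# Call the REST API directly through kubectl's authenticated transport
kubectl get --raw /api/v1/namespaces/default/pods
```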
2. etcd
- Consistent, distributed key-value store
- Stores all cluster data (config, state, secrets, etc.)
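You normally never touch etcd directly, and on managed offerings like OKE it is not exposed at all. On a self-managed (kubeadm-style) cluster, though, you can peek at the keys it stores; the pod name and certificate paths below are the kubeadm defaults and may differ in your setup:

```bash
# etcd usually runs as a static pod on the control plane node
kubectl get pods -n kube-system -l component=etcd

# Every Kubernetes object is stored as a key under /registry
kubectl exec -n kube-system etcd-<control-plane-node> -- etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only | head
```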
3. kube-controller-manager
Runs the controller loops that keep the cluster in its desired state:
- Node controller (notices and responds when nodes go down)
- Replication controller (ensures the desired number of pod replicas)
- Endpoint controller, namespace controller, etc.
4. cloud-controller-manager (optional, in cloud setups)
- Integrates with cloud provider APIs for managing load balancers, storage, etc.
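You can watch a controller’s reconciliation loop in action. Assuming the hypothetical web deployment sketched earlier, delete one of its pods and the ReplicaSet controller immediately creates a replacement to restore the desired replica count:

```bash
# Delete one pod belonging to the "web" deployment (placeholder name)
kubectl delete pod $(kubectl get pods -l app=web -o jsonpath='{.items[0].metadata.name}')

# Watch a new pod appear to bring the count back to the desired replicas
kubectl get pods -l app=web -w
```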
⚙️ Worker Node Components
1. kubelet
- Agent that runs on each node
- Registers the node with the API server
- Ensures containers are running as expected
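The node information you see through kubectl is largely what each node’s kubelet reports back to the API server. A small sketch (the node name is a placeholder; the node-level command assumes you can SSH to a node you manage yourself):

```bash
# Conditions, capacity, and runtime info published by the kubelet
kubectl describe node <worker-node-name>

# On a node you manage, the kubelet typically runs as a systemd service:
# sudo systemctl status kubelet
```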
2. kube-proxy
- Maintains network rules on nodes
- Forwards traffic to the right pod using iptables/IPVS
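In most clusters kube-proxy runs as a DaemonSet, so there is one copy on every node; the iptables check below assumes root access to a worker node and is purely illustrative:

```bash
# One kube-proxy pod per node, managed by a DaemonSet in kube-system
kubectl get daemonset kube-proxy -n kube-system
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide

# On the node itself, kube-proxy programs iptables/IPVS rules for Services:
# sudo iptables-save | grep KUBE-SERVICES | head
```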
Understanding the Kubernetes architecture is the first step toward mastering how container orchestration works. In this post, we covered what Kubernetes is, what a cluster looks like, and the key roles played by the control plane (master node) and worker nodes. These core components work together to keep your containerized applications running reliably and at scale.
In the next part of this series, we’ll dive into core Kubernetes concepts like pods, deployments, namespaces, and labels.