┌──────────────────────────────────────────────────────────────────────────────────────────┐
│ Kubernetes — Control Plane and Data Plane Architecture │
│ │
│ ┌──────────────────────────────────────────────────────────────────────────────────┐ │
│ │ CONTROL PLANE (cluster brain — runs on master nodes) │ │
│ │ │ │
│ │ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐ │ │
│ │ │ api-server │ │ scheduler │ │ controller │ │ etcd │ │ cloud-ctrl │ │ │
│ │ └────────────┘ └────────────┘ └────────────┘ └────────────┘ └────────────┘ │ │
│ │ │ │
│ └──────────────────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ kubelet polls api-server, reports node state │
│ ┌──────────────────────────────────────────────────────────────────────────────────┐ │
│ │ WORKER NODE (one per machine — runs scheduled pods) │ │
│ │ │ │
│ │ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ │ │
│ │ │ kubelet │ │ kube-proxy │ │ container-rt │ │ │
│ │ └────────────────┘ └────────────────┘ └────────────────┘ │ │
│ │ │ │
│ └──────────────────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ kubelet starts containers per Pod spec │
│ ┌──────────────────────────────────────────────────────────────────────────────────┐ │
│ │ PODS & CONTAINERS (your application workloads) │ │
│ │ │ │
│ │ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐ │ │
│ │ │ Pod A │ │ Pod B │ │ Pod C │ │ Pod D │ │ │
│ │ └────────────┘ └────────────┘ └────────────┘ └────────────┘ │ │
│ │ │ │
│ └──────────────────────────────────────────────────────────────────────────────────┘ │
│ │
│ kubectl talks to api-server. api-server is the only component that reads/writes etcd. │
│ scheduler decides which node a new Pod runs on; controller-manager reconciles state. │
└──────────────────────────────────────────────────────────────────────────────────────────┘
Three layers: control plane (cluster brain) → worker nodes → pods running your containers.
Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).
Kubernetes is used to manage clusters of containerized applications. It provides the infrastructure needed to deploy and run applications in a cloud-native environment, allowing for easy scaling, load balancing, and self-healing. Kubernetes is especially powerful for managing complex, microservices-based architectures that require automated deployment and scaling.
Question: What are the key components of Kubernetes architecture?
Answer: The key components include the control plane (historically run on a master node), which contains the API Server, Scheduler, Controller Manager, and etcd (a distributed key-value store), and the worker nodes, which run the containerized applications in Pods. Each worker node has a kubelet, a container runtime (such as containerd or Docker), and kube-proxy for network routing.
Question: How does Kubernetes handle scaling?
Answer: Kubernetes handles scaling through the Horizontal Pod Autoscaler (HPA). The HPA automatically adjusts the number of Pods in a Deployment based on observed CPU utilization or other selected metrics.
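As a sketch, an HPA that scales a Deployment on CPU utilization might look like the following (the name nginx-hpa and the replica bounds are illustrative; nginx-deployment matches the example later in this document):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment   # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add Pods when average CPU exceeds 70% of requests
```

Note that the HPA needs CPU requests set on the target Pods, since utilization is computed relative to the requested amount.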
Question: What is a Pod in Kubernetes?
Answer: A Pod is the smallest and simplest Kubernetes object. It represents a single instance of a running process in the cluster. A Pod can contain one or more containers that share storage and network, along with a specification for how to run them.
Question: How do you perform rolling updates in Kubernetes?
Answer: A rolling update is triggered by changing a Deployment's Pod template, for example with kubectl set image or by applying an updated manifest; the kubectl rollout command then lets you monitor, pause, or undo it. This updates the application without downtime by incrementally replacing old Pods with new ones.
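A typical rolling-update workflow, using the nginx-deployment example from this document (the new image tag is illustrative):

```shell
# Trigger a rolling update by changing the container image in the Pod template
kubectl set image deployment/nginx-deployment nginx=nginx:1.22.0

# Watch the rollout until all new Pods are ready
kubectl rollout status deployment/nginx-deployment

# Undo the rollout if the new version misbehaves
kubectl rollout undo deployment/nginx-deployment
```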
Question: What are Namespaces used for in Kubernetes?
Answer: Namespaces provide a way to divide cluster resources between multiple users or teams. They let you manage different environments (e.g., dev, staging, production) within the same cluster while keeping their resources isolated.
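For example, environments can be separated with namespaces like so (the namespace names are illustrative):

```shell
# Create one namespace per environment
kubectl create namespace dev
kubectl create namespace staging

# Deploy the same manifest into a specific environment
kubectl apply -f deployment.yaml --namespace=dev

# List resources in one environment only
kubectl get pods --namespace=dev
```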
Question: How can security be enhanced in a Kubernetes cluster?
Answer: Security in Kubernetes can be enhanced by implementing Role-Based Access Control (RBAC), using network policies to control traffic between Pods, securing the API server with TLS certificates, and regularly updating the cluster to address vulnerabilities.
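As a minimal RBAC sketch (the role name pod-reader, the namespace dev, and the user jane are all illustrative), a Role grants read access to Pods and a RoleBinding assigns that Role to a user:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader             # illustrative name
rules:
- apiGroups: [""]              # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane                   # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```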
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.6
        ports:
        - containerPort: 80
This example defines a Deployment named nginx-deployment with three replicas of the nginx container running version 1.21.6. The containers listen on port 80.
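The manifest above can be applied and verified with kubectl (assuming it is saved as deployment.yaml):

```shell
# Create or update the Deployment from the manifest
kubectl apply -f deployment.yaml

# Confirm that all three replicas are available
kubectl get deployment nginx-deployment
kubectl get pods -l app=nginx
```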
Containers are the application units that run inside Pods.
Pods are the deployment units that encapsulate one or more containers, running on Nodes.
Nodes are the infrastructure units that provide the necessary resources to run Pods and manage the execution of containers within those Pods.
Pods are the smallest deployable units in Kubernetes, designed to host one or more containers that share the same environment and network. They are ephemeral and run the application workloads.
Nodes are the machines (physical or virtual) that make up the Kubernetes cluster. They provide the computational resources needed to run the Pods and ensure that the containers within them are running correctly. Nodes fall into two roles: control plane nodes (historically called master nodes), which manage the cluster, and worker nodes, which run the application workloads.
Containers are lightweight, portable units that package an application and its dependencies. They run inside pods and provide isolation and resource efficiency, making them ideal for running applications in distributed environments like Kubernetes.
A Kubernetes Deployment is a resource object in Kubernetes that provides declarative updates to applications. Deployments manage the creation and scaling of a set of Pods and ensure that the desired number of Pods are running at any given time. They provide a way to manage the rollout of new versions of an application, rollback to previous versions, and scale the application up or down.
Kubernetes Deployments are used to automate the management of the application lifecycle, including:
- Rolling out new versions of an application without downtime
- Rolling back to a previous revision when an update goes wrong
- Scaling the number of replicas up or down
- Maintaining the desired number of Pods by replacing any that fail
Question: How do you create a Kubernetes Deployment?
Answer: A Deployment can be created using a YAML file or with the kubectl command. A basic Deployment YAML includes metadata (such as name and labels), the desired number of replicas, and a template for the Pods (container image, ports, and other settings). You apply it with the kubectl apply -f deployment.yaml command.
Question: What is a rolling update?
Answer: A rolling update is a deployment strategy in which Kubernetes gradually replaces old Pods with new ones. The update proceeds incrementally, ensuring that a specified number of Pods stay running throughout, which prevents downtime during application updates.
Question: How do you roll back a Deployment?
Answer: You can roll back a Deployment using the kubectl rollout undo command. By default it rolls back to the previous revision, but you can also target a particular revision with the --to-revision flag.
Question: How do you scale a Deployment?
Answer: You can scale a Deployment by changing the number of replicas in its YAML file and reapplying it, or by using the kubectl scale command, for example kubectl scale deployment my-deployment --replicas=5.
Question: When would you use a Deployment versus a StatefulSet?
Answer: Deployments are used for stateless applications where the identity of individual Pods does not matter. StatefulSets are used for stateful applications where each Pod needs a stable identity and persistent storage, such as databases whose state must survive Pod restarts.
Question: How does Kubernetes apply updates to a Deployment?
Answer: Kubernetes creates a new ReplicaSet for the updated Pods and gradually replaces the old Pods with new ones. The update can be configured to proceed at a specified rate, and its progress can be monitored with the kubectl rollout status command.
Question: What is a ReplicaSet, and how does it relate to a Deployment?
Answer: A ReplicaSet ensures that a specified number of Pods are running at any given time. A Deployment manages one or more ReplicaSets to orchestrate rolling updates, rollbacks, and scaling. While you can create and manage ReplicaSets directly, it is more common to let Deployments manage them for you.
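The ReplicaSets behind a Deployment can be inspected directly; after a rolling update you will typically see the old ReplicaSet scaled down to 0 alongside the new one (the label matches the nginx example in this document):

```shell
# List the ReplicaSets created and managed by the Deployment
kubectl get replicasets -l app=nginx

# The Deployment's details name its current and old ReplicaSets
kubectl describe deployment nginx-deployment
```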
Kubernetes Deployments are a powerful tool for managing the lifecycle of applications in a Kubernetes cluster. Understanding how to create, update, scale, and roll back Deployments is essential for maintaining reliable and scalable applications. In interviews, be prepared to discuss your experience with Kubernetes Deployments, focusing on how you've used them to manage application updates, ensure high availability, and handle scaling.
Yes, a Kubernetes Pod can use more than 1 CPU, and it is configurable.
CPU Request: This is the amount of CPU that a Pod is guaranteed to have. Kubernetes uses this value to schedule Pods on nodes that have sufficient resources. For example, if a Pod requests 1 CPU, Kubernetes will ensure that the node where the Pod is scheduled has at least 1 CPU available for that Pod.
CPU Limit: This is the maximum amount of CPU that a Pod can use. If a Pod's process tries to exceed this limit, Kubernetes will throttle the CPU usage, ensuring that the Pod does not consume more than the specified amount.
You can configure the CPU request and limit in the Pod or container definition using the resources field in the Pod's YAML file. Here's an example:
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limits-pod
spec:
  containers:
  - name: my-container
    image: my-image
    resources:
      requests:
        cpu: "0.5"   # Requesting 0.5 CPU (500 millicores)
      limits:
        cpu: "2"     # Limiting to 2 CPUs
CPU units in Kubernetes:
- 1 CPU equals one physical core or one virtual core.
- Fractional values are allowed, e.g. 0.5 (which means 500 millicores, or half of a CPU).
- Millicores: CPU resources can also be specified in millicores, where 1000m equals 1 CPU, so 500m is equivalent to 0.5 CPU.
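The millicore arithmetic above can be sketched in Python; cpu_to_cores is a hypothetical helper for illustration, not part of any Kubernetes library:

```python
def cpu_to_cores(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity string to a number of cores.

    "1" -> 1.0, "500m" -> 0.5 (1000m == 1 CPU), "2" -> 2.0
    """
    quantity = quantity.strip()
    if quantity.endswith("m"):  # millicore notation
        return int(quantity[:-1]) / 1000.0
    return float(quantity)     # plain (possibly fractional) core count

print(cpu_to_cores("500m"))  # 0.5
print(cpu_to_cores("2"))     # 2.0
```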
If you want a Pod to use more than 1 CPU, you would set the cpu limit to a value greater than 1. For example, setting cpu: "2" means the Pod can use up to 2 CPUs.
apiVersion: v1
kind: Pod
metadata:
  name: multi-cpu-pod
spec:
  containers:
  - name: my-container
    image: my-image
    resources:
      requests:
        cpu: "1"   # Requesting 1 CPU
      limits:
        cpu: "4"   # Limiting to 4 CPUs
In this example:
- The Pod is guaranteed 1 CPU (requests.cpu: "1").
- The Pod can use up to 4 CPUs (limits.cpu: "4"), but no more than that.

In summary, a Kubernetes Pod can use more than 1 CPU, and this is configurable through the resources field in the Pod's specification. By setting appropriate requests and limits, you control how much CPU a Pod can use, ensuring it gets the resources it needs while preventing it from overconsuming.