Kubernetes: Running Containers at Scale

Kubernetes is a system for running lots of containers without losing your mind.

At the simplest level, it answers three questions that show up the moment containers stop being toys:

  • How do I keep my apps running if something crashes?
  • How do I run many copies of the same thing?
  • How do I update them without knocking users offline?

Kubernetes (often shortened to k8s) is the conductor of this container orchestra, which is why the whole practice is called container orchestration.

Core Concepts

A Container

A packaged app plus everything it needs to run. Docker builds them; Kubernetes runs them at scale.

A Pod

The smallest thing Kubernetes manages. Think of it as a wrapper around one or more tightly related containers. Pods are mortal. They come and go.

A Node

A machine (VM or physical) that runs pods. In Docker Desktop you only have one node, but production clusters have dozens or thousands.

The Control Plane

Kubernetes’ brain. It watches the cluster and constantly asks, “Does reality match what was declared?” If not, it fixes reality.

The Key Philosophical Twist

You don’t tell Kubernetes how to do things. You tell it what you want, and it keeps trying until that becomes true.

Example:

"I want 3 Ubuntu pods running."

If one dies, Kubernetes doesn’t panic—it calmly creates another. This loop never stops.

That’s why things feel strange at first. You don’t “start” pods or “restart” containers the way you would on a normal server. You describe a desired state, and Kubernetes enforces it like a very patient, very literal caretaker.
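
You can watch this loop work for yourself. A minimal sketch, assuming the myubuntu deployment created in the commands further down this post (three ubuntu replicas kept alive with sleep infinity):

kubectl get pods                          # note one of the myubuntu pod names
kubectl delete pod <one-myubuntu-pod>     # break the declared state on purpose
kubectl get pods -w                       # watch a replacement pod appear within seconds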

Why People Use Kubernetes

  • Self-healing: crashed containers get replaced
  • Scaling: add or remove replicas with one number (see the commands below)
  • Rolling updates: change versions without downtime (also sketched below)
  • Portability: run the same setup on a laptop or a data center
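
A minimal sketch of those two levers, assuming a Deployment named myapp with a container named myapp-container (the names used in the manifest later in this post):

kubectl scale deployment myapp --replicas=5                    # scaling: change one number
kubectl set image deployment/myapp myapp-container=myapp:v2    # rolling update: new image, no downtime
kubectl rollout status deployment/myapp                        # watch the rollout progress
kubectl rollout undo deployment/myapp                          # roll back if something breaks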

Why It Feels Intimidating

  • New vocabulary
  • Indirection everywhere
  • You stop touching individual machines

Once the mental model clicks, Kubernetes stops feeling like chaos and starts feeling like physics: declare forces, observe equilibrium.

You’ve already crossed the hard part—running pods, deployments, and metrics. From here, everything else is just learning which levers bend reality in useful ways.


Common Commands

Create Deployment

kubectl create deployment <deployment-name> --image=<image-name> -- sleep infinity

Run a Pod

kubectl run myubuntu --image=ubuntu:latest --command -- sleep infinity
kubectl exec -it myubuntu -- bash

Delete Deployment

kubectl delete deployment <deployment-name>

Create Replicas

kubectl create deployment myubuntu --image=ubuntu:latest --replicas=3 -- sleep infinity

Get Stats

kubectl get pods
kubectl get nodes
kubectl get deployments

Example output:

NAME                            READY   STATUS             RESTARTS        AGE
myubuntu-dd8d76d8-82qng         0/1     CrashLoopBackOff   3 (11s ago)     110s
myubuntu-dd8d76d8-k4r25         0/1     CrashLoopBackOff   3 (19s ago)     110s
myubuntu-dd8d76d8-p4x5l         0/1     CrashLoopBackOff   3 (16s ago)     110s

(CrashLoopBackOff here means the container keeps exiting right after it starts, which is exactly what a bare ubuntu image does without a long-running command. With sleep infinity in place, the pods should report Running instead.)

Renaming Pods

Kubernetes is stubborn on this topic: you can’t rename pods or the containers inside them. Pod names are generated by the ReplicaSet that the Deployment manages, and container names inside the pod are defined in the manifest. Once a pod exists, its name is immutable.

If you want individually named pods, you must abandon the Deployment and create separate Pod objects:

kubectl run myubuntu1 --image=ubuntu:latest --command -- sleep infinity
kubectl run myubuntu2 --image=ubuntu:latest --command -- sleep infinity
kubectl run myubuntu3 --image=ubuntu:latest --command -- sleep infinity

Monitoring

1. Check Resource Usage

kubectl top pods

This shows CPU and memory usage per pod. It requires the Metrics Server; if the command complains that the Metrics API is not available, see the Troubleshooting section below.

Example output:

NAME                 CPU(cores)   MEMORY(bytes)
myubuntu-abc123     2m           12Mi
myubuntu-def456     1m           10Mi

Handy for a quick temperature check.
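
A few extra kubectl top flags that make that temperature check more useful (standard options, shown here as a sketch):

kubectl top pods --containers        # per-container numbers instead of per-pod
kubectl top pods --sort-by=memory    # heaviest consumers first (cpu also works)
kubectl top pods -A                  # across all namespaces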

2. Describe Pod (Events, Restarts, Conditions)

This is Kubernetes’ gossip page: why the pod is misbehaving, what node it’s on, and what the containers are doing.

kubectl describe pod <pod-name>

Scroll for the Events section; that’s where the truth lives.

3. Logs from Each Pod

If a container prints anything, this is where it goes:

kubectl logs <pod-name>

If the pod has multiple containers:

kubectl logs <pod-name> -c <container-name>

You can follow logs live like tail -f:

kubectl logs -f <pod-name>
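
A few variations that come up constantly (standard kubectl logs flags):

kubectl logs <pod-name> --previous     # logs from the last crashed container, key for CrashLoopBackOff
kubectl logs <pod-name> --tail=100     # only the last 100 lines
kubectl logs <pod-name> --since=10m    # only the last 10 minutes
kubectl logs -l app=myapp              # logs from every pod carrying that label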

4. Kubernetes Dashboard (Optional)

If you want a GUI with graphs, install the official Dashboard:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Then create an access token and open the browser UI. The Dashboard shows CPU, RAM, restarts, logs, events—everything in a neat layout.

5. Prometheus + Grafana (Optional)

This is the grown-up observability stack. Prometheus scrapes metrics, Grafana turns them into gorgeous charts. It’s overkill for your Ubuntu pods, but perfect when you start running real services.
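
One common way to get that stack locally is the community Helm chart. A sketch, assuming Helm is installed (see the Helm section below); the release name monitoring is just a placeholder:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack   # bundles Prometheus, Grafana, and exporters
kubectl port-forward svc/monitoring-grafana 3000:80                  # Grafana service is named <release>-grafana; open localhost:3000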


Troubleshooting

Metrics API Not Available

If kubectl top nodes fails with error: Metrics API not available, install the Metrics Server:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Give it a moment, then check:

kubectl get pods -n kube-system

You should see:

metrics-server-7b5c6b8d5b-xxxxx   1/1     Running

When it flips to Running, test it:

kubectl top nodes
kubectl top pods

TLS Issues with Proxy

If your cluster is behind a validating proxy (rare on Docker Desktop, but possible), the metrics-server may complain about TLS. Fix it by patching the deployment:

kubectl patch deployment metrics-server -n kube-system \
  --type='json' \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'

Once that’s done, metrics start flowing and the top commands behave normally.

This unlocks the whole “health detective” side of Kubernetes—CPU, RAM, pod pressure, node activity—all the invisible physics that make the cluster feel alive.


Services

Services give a set of pods a single stable name and IP and load-balance traffic across them. Pods do have their own IPs, but those change every time a pod is replaced, so without a Service there is no stable address for other pods or the outside world to reach your app on.

Service Types

ClusterIP (default) – Internal communication within the cluster:

kubectl expose deployment myubuntu --type=ClusterIP --port=80 --target-port=8080

NodePort – Access pods from outside the cluster on a high-numbered port (30000–32767 by default):

kubectl expose deployment myubuntu --type=NodePort --port=80 --target-port=8080

LoadBalancer – Cloud providers assign an external IP:

kubectl expose deployment myubuntu --type=LoadBalancer --port=80 --target-port=8080

Check Service status:

kubectl get svc
kubectl describe svc <service-name>
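
The same Service expressed declaratively. A sketch that relies on the app=myubuntu label kubectl create deployment puts on its pods:

apiVersion: v1
kind: Service
metadata:
  name: myubuntu
spec:
  type: ClusterIP
  selector:
    app: myubuntu        # matches the label added by kubectl create deployment
  ports:
  - port: 80             # the Service's own port
    targetPort: 8080     # the container port traffic is forwarded to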

Namespaces

Namespaces partition a cluster into isolated, virtual clusters. Useful for separating environments, teams, or applications.

Create a Namespace

kubectl create namespace production
kubectl create namespace development

Deploy to a Specific Namespace

kubectl create deployment myapp --image=myapp:latest -n production

View Resources in a Namespace

kubectl get pods -n production
kubectl get all -n production

Set Default Namespace

Avoid typing -n repeatedly:

kubectl config set-context --current --namespace=production

List All Namespaces

kubectl get namespaces

Labels and Selectors

Labels are key-value pairs attached to resources. They enable filtering, organizing, and operating on groups of resources.

Add Labels to a Pod

kubectl run myapp --image=myapp:latest --labels="app=myapp,env=prod"

Filter Resources by Label

kubectl get pods -l app=myapp
kubectl get pods -l env=prod
kubectl get pods -l "app=myapp,env=prod"

Label an Existing Resource

kubectl label pod <pod-name> app=myapp
kubectl label deployment <deployment-name> version=v1

View Labels

kubectl get pods --show-labels

Labels are used by Services to select which pods to route traffic to, and by many other Kubernetes features.
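
One practical consequence: because a Service selects pods purely by label, removing the label takes a pod out of rotation without deleting it, which is handy for debugging a single misbehaving replica. A sketch, assuming app is the label key (note that the Deployment will start a replacement, since the relabeled pod no longer counts toward its replicas):

kubectl label pod <pod-name> app-        # the trailing dash removes the app label; the Service stops routing to it
kubectl label pod <pod-name> app=myapp   # put it back when you're done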


Health Checks (Probes)

Probes tell Kubernetes when a pod is healthy, ready for traffic, or starting up.

Liveness Probe

Restarts the container if the probe keeps failing (catches deadlocks and infinite loops that don’t crash the process):

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5

Readiness Probe

Removes the pod from load balancing if it’s not ready to serve traffic:

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 2

Startup Probe

Gives slow-starting apps time to initialize before liveness checks begin:

startupProbe:
  httpGet:
    path: /startup
    port: 8080
  failureThreshold: 30
  periodSeconds: 10

Without probes, Kubernetes only notices a container whose process exits; a hung or deadlocked app still counts as healthy.
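
Probes aren’t limited to HTTP checks; exec and TCP probes follow the same pattern. A sketch with placeholder paths and ports:

livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]   # healthy as long as the file exists
  periodSeconds: 5
readinessProbe:
  tcpSocket:
    port: 5432                         # ready once the port accepts connections
  periodSeconds: 2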


Resource Requests and Limits

Requests and limits prevent pods from monopolizing cluster resources and help Kubernetes schedule efficiently.

Define Resources in a Deployment

resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "500m"

  • Requests: the minimum guaranteed resources; the scheduler uses them to decide where a pod fits.
  • Limits: the maximum the container may use. Exceeding the memory limit gets the container OOM-killed; exceeding the CPU limit only throttles it.
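
For orientation, here is where that block lives inside a Deployment. A partial sketch using the myapp names from the manifest in the next section:

spec:
  template:
    spec:
      containers:
      - name: myapp-container
        image: myapp:latest
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"      # 100 millicores = 0.1 of a CPU core
          limits:
            memory: "256Mi"
            cpu: "500m"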

Check Resource Utilization

kubectl top pods
kubectl describe node <node-name>

Declarative Configuration (YAML Manifests)

Moving beyond kubectl create commands, manifest files allow version control, reproducibility, and complex configurations.

Simple Deployment Manifest

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: myapp:latest
        ports:
        - containerPort: 8080

Apply a Manifest

kubectl apply -f deployment.yaml
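
Two related commands worth knowing: kubectl diff previews what apply would change, and kubectl delete -f removes everything a manifest created:

kubectl diff -f deployment.yaml     # show what would change, without changing it
kubectl delete -f deployment.yaml   # tear down everything defined in the file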

Advantages

  • Version control and auditing
  • Reproducibility across environments
  • Easy scaling and updates
  • Easier to share and reuse

StatefulSets vs Deployments

Deployments – For stateless apps (web servers, APIs). Pods are interchangeable.

kubectl create deployment myapp --image=myapp:latest --replicas=3

StatefulSets – For stateful apps (databases, message queues). Each pod has a stable identity and persistent storage.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mydb
spec:
  serviceName: mydb
  replicas: 3
  selector:
    matchLabels:
      app: mydb
  template:
    metadata:
      labels:
        app: mydb
    spec:
      containers:
      - name: mydb
        image: postgres:latest

Key difference: StatefulSet pods retain their names and storage across restarts.
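
The persistent-storage half of that promise usually comes from volumeClaimTemplates, which give every replica its own PVC. A sketch to add under the StatefulSet spec above (the container would also need a matching volumeMount):

  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

Each replica gets its own claim (data-mydb-0, data-mydb-1, and so on), and that claim follows the pod name across restarts.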


ConfigMaps and Secrets

Store configuration and sensitive data separately from container images.

ConfigMap – Non-Sensitive Config

kubectl create configmap app-config --from-literal=DATABASE_HOST=db.prod.local --from-literal=CACHE_TTL=3600

Use in a Pod:

envFrom:
- configMapRef:
    name: app-config

Secret – Sensitive Data (Passwords, API Keys)

kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=secretpass

Use in a Pod:

envFrom:
- secretRef:
    name: db-credentials

View ConfigMaps and Secrets

kubectl get configmaps
kubectl get secrets
kubectl describe configmap app-config

Secrets are base64-encoded, not encrypted by default. Use encryption at rest in production.
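
Environment variables aren’t the only option; mounting a Secret as files is common too, since it keeps credentials out of process environments. A sketch of a pod spec fragment:

containers:
- name: myapp
  image: myapp:latest
  volumeMounts:
  - name: creds
    mountPath: /etc/creds
    readOnly: true
volumes:
- name: creds
  secret:
    secretName: db-credentials   # each key in the Secret becomes a file under /etc/creds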


Init Containers and Sidecars

Init Containers – Run to completion before the main container starts. Useful for setup tasks:

initContainers:
- name: setup
  image: setup-image:latest
  command: ["sh", "-c", "python /setup/init.py"]
containers:
- name: myapp
  image: myapp:latest

Sidecars – Run alongside the main container. Useful for logging, monitoring, or proxying:

containers:
- name: myapp
  image: myapp:latest
- name: logger
  image: fluent/fluent-bit:latest

All containers in a pod share the same network namespace, so they can talk to each other over localhost.
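
Sidecars usually also share files through a volume; an emptyDir lives as long as the pod and is a standard way to hand logs from the app to a logging sidecar. A sketch of a pod spec fragment:

containers:
- name: myapp
  image: myapp:latest
  volumeMounts:
  - name: logs
    mountPath: /var/log/myapp        # the app writes here
- name: logger
  image: fluent/fluent-bit:latest
  volumeMounts:
  - name: logs
    mountPath: /var/log/myapp        # the sidecar reads the same files
    readOnly: true
volumes:
- name: logs
  emptyDir: {}                       # shared scratch space, deleted with the pod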


Common Pod Issues

CrashLoopBackOff

The container keeps crashing shortly after it starts; Kubernetes restarts it with an ever-growing back-off delay. Check logs:

kubectl logs <pod-name>
kubectl describe pod <pod-name>

Common causes: Wrong command, missing dependencies, config errors.

ImagePullBackOff

Kubernetes can’t pull the image. Check if the image name and registry are correct:

kubectl describe pod <pod-name>  # Look at Events section

Pending

Pod is waiting for resources or has unsatisfied requirements:

kubectl describe pod <pod-name>  # Check Events and Conditions

If nodes are full, scale the cluster or reduce resource requests.

OOMKilled

Pod exceeded memory limit. Increase the limit or optimize the app:

kubectl edit deployment <deployment-name>
# Increase resources.limits.memory

InvalidImageName

The image reference itself is malformed (an image that simply doesn’t exist shows up as ErrImagePull / ImagePullBackOff instead). Double-check the image name and tag.
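
A quick triage loop that covers most of these cases:

kubectl get pods                                            # spot the bad STATUS
kubectl describe pod <pod-name>                             # read the Events section
kubectl logs <pod-name> --previous                          # output from the last crashed attempt
kubectl get events --sort-by=.metadata.creationTimestamp    # recent cluster events, oldest first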


Accessing Pods

Execute Commands in a Pod

kubectl exec -it <pod-name> -- bash
kubectl exec -it <pod-name> -- sh -c "ps aux"

Port Forward to a Pod

Access a pod’s port from your local machine:

kubectl port-forward <pod-name> 8080:8080

Now visit localhost:8080 in your browser.
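
Port-forwarding also works against Services and Deployments, which saves hunting for a pod name (the names here are placeholders):

kubectl port-forward svc/<service-name> 8080:80           # local 8080 -> the Service's port 80
kubectl port-forward deployment/<deployment-name> 8080:8080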

Copy Files to/from a Pod

kubectl cp <pod-name>:/path/in/pod ./local/path
kubectl cp ./local/file <pod-name>:/path/in/pod

Horizontal Pod Autoscaling (HPA)

Automatically scale deployments based on CPU usage or custom metrics.

Create an HPA

kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=80

This keeps pods between 2 and 10 replicas, scaling up when average CPU exceeds 80%.

Check HPA Status

kubectl get hpa
kubectl describe hpa myapp   # kubectl autoscale names the HPA after the deployment

HPA requires the Metrics Server to be running. For custom metrics, use Prometheus or other monitoring solutions.
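
The same autoscaler written as a manifest, which is what you would version-control. A sketch using the autoscaling/v2 API:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale up when average CPU exceeds 80% of the requests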


Helm

Helm is the package manager for Kubernetes. It bundles manifests, templates, and defaults into reusable “charts.”

Install Helm

Download from https://helm.sh/

Add a Repo and Search for Charts

helm repo add bitnami https://charts.bitnami.com/bitnami   # the repo used in the examples below
helm repo update
helm search repo postgres

Install a Chart

helm install my-postgres bitnami/postgresql

List Releases

helm list

Upgrade a Release

helm upgrade my-postgres bitnami/postgresql --set <key>=<new-value>

Delete a Release

helm uninstall my-postgres

Helm simplifies deploying complex applications like databases, message queues, and monitoring stacks.
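
Most charts are customized through a values file rather than --set flags on the command line. A sketch of that workflow:

helm show values bitnami/postgresql > values.yaml             # dump the chart's defaults
helm install my-postgres bitnami/postgresql -f values.yaml    # install with your edited copy
helm history my-postgres                                      # list past revisions
helm rollback my-postgres 1                                   # roll back to revision 1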


Ingress

Ingress manages external HTTP/HTTPS access and routing to Services. It’s the production way to expose applications, and it requires an Ingress controller (for example ingress-nginx) running in the cluster; the Ingress object itself is just a set of routing rules.

Simple Ingress Manifest

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80

Apply and Check

kubectl apply -f ingress.yaml
kubectl get ingress
kubectl describe ingress myapp-ingress

Benefits:

  • Single external IP for multiple services
  • Virtual host routing
  • SSL/TLS termination (see the sketch below)
  • Path-based routing
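
For the TLS termination mentioned above, the certificate lives in a TLS Secret and the Ingress references it. A sketch; the certificate and key files are placeholders:

kubectl create secret tls myapp-tls --cert=tls.crt --key=tls.key

Then add a tls block under the Ingress spec:

tls:
- hosts:
  - myapp.example.com
  secretName: myapp-tls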

Persistent Storage

Containers are ephemeral. PersistentVolumes (PV) and PersistentVolumeClaims (PVC) provide durable storage.

PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Use PVC in a Pod

containers:
- name: myapp
  image: myapp:latest
  volumeMounts:
  - mountPath: /data
    name: storage
volumes:
- name: storage
  persistentVolumeClaim:
    claimName: my-pvc

Check PVCs

kubectl get pvc
kubectl describe pvc my-pvc

Data in /data persists across pod restarts.


Security Basics

RBAC (Role-Based Access Control)

Define who can do what:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: User
  name: alice@example.com
  apiGroup: rbac.authorization.k8s.io
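
To check what a binding actually allows, kubectl auth can-i answers yes/no questions, optionally impersonating the user (impersonation requires admin-level rights, which you typically have on a local cluster):

kubectl auth can-i list pods --as=alice@example.com     # yes, via the pod-reader Role
kubectl auth can-i delete pods --as=alice@example.com   # no, the Role only grants get and list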

NetworkPolicies

Control traffic between pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress

This denies all incoming traffic to all pods. Then allow specific traffic with more targeted policies.
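
A follow-up policy that re-allows some traffic, as the text suggests. A sketch that lets pods labeled app=frontend reach pods labeled app=myapp on port 8080 (the labels are placeholders); note that NetworkPolicies only take effect when the cluster’s network plugin enforces them (Calico, Cilium, and similar):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-myapp
spec:
  podSelector:
    matchLabels:
      app: myapp            # the pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # who is allowed in
    ports:
    - protocol: TCP
      port: 8080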

Pod Security Policies

Run containers as non-root, disable privilege escalation, and so on. The PodSecurityPolicy API was deprecated and removed in Kubernetes 1.25; use Pod Security Standards (enforced through Pod Security Admission) instead.


Useful kubectl Tips

Aliases (Add to ~/.bashrc or ~/.zshrc)

alias k=kubectl
alias kgp="kubectl get pods"
alias kgd="kubectl get deployments"
alias kdp="kubectl describe pod"
alias kl="kubectl logs"
alias kaf="kubectl apply -f"
alias kdel="kubectl delete"

Context Switching

kubectl config get-contexts
kubectl config use-context <context-name>

Dry-Run (Preview Changes)

kubectl apply -f deployment.yaml --dry-run=client -o yaml

Watch Resources in Real-Time

kubectl get pods -w
watch kubectl top pods   # refresh resource usage every 2 seconds with the watch utility

Get Resources in YAML Format

kubectl get pod <pod-name> -o yaml
kubectl get deployment <deployment-name> -o yaml

Edit Resources Interactively

kubectl edit deployment <deployment-name>

Delete Multiple Resources

kubectl delete pod <pod1> <pod2> <pod3>
kubectl delete pods -l app=myapp
kubectl delete all --all -n dev  # Delete all workloads in a namespace (note: "all" skips ConfigMaps, Secrets, and PVCs)

Cleanup

Delete Individual Resources

kubectl delete pod <pod-name>
kubectl delete deployment <deployment-name>
kubectl delete service <service-name>

Delete by Label

kubectl delete pods -l app=myapp
kubectl delete deployments -l env=test

Delete Everything in a Namespace

kubectl delete all --all -n <namespace-name>

Delete a Namespace (and All Its Resources)

kubectl delete namespace <namespace-name>

Dry-Run Delete

Preview what will be deleted without actually deleting:

kubectl delete pod <pod-name> --dry-run=client

Be careful with delete all and delete namespace—these operations are permanent and cannot be undone.
