
kubectl Commands Cheatsheet

📍 Part of: Kubernetes → Topic 8 of 12
Essential kubectl commands — getting resources, logs, exec, port-forward, rollouts, configmaps, secrets, namespaces, and debugging techniques.
⚙️ Intermediate — basic DevOps knowledge assumed
In this tutorial, you'll learn
  • kubectl describe shows events — always check the Events section when a pod is not starting.
  • kubectl logs --previous retrieves logs from a crashed container — essential for crash debugging.
  • kubectl port-forward lets you access a pod or service without a LoadBalancer — great for development.
Quick Answer
  • kubectl get: read state (GET requests).
  • kubectl describe: read state + events (aggregated GET + events query).
  • kubectl apply: write desired state (POST/PUT/PATCH requests).
  • kubectl delete: remove state (DELETE requests).
  • The --output flag controls formatting: -o yaml, -o json, -o wide, -o custom-columns.
  • The -n flag scopes commands to a namespace. Without it, commands target the current context's namespace (usually 'default').
  • The --context flag switches between clusters (dev, staging, prod).
  • kubectl describe is the single most valuable debugging command — the Events section shows scheduler decisions, image pull failures, and probe failures.
  • kubectl logs --previous is the only way to see why a container crashed before it restarted.
  • Never run kubectl delete pod --force --grace-period=0 on a StatefulSet Pod — it destroys the Pod without letting it drain connections, potentially corrupting persistent storage.
🚨 START HERE
kubectl Triage Cheat Sheet
First-response commands for common production incidents. Scan and execute.
🟡 Pod not starting — no events visible.
Immediate Action: Check scheduler health and node capacity.
Commands
kubectl get pods -n kube-system | grep scheduler
kubectl describe nodes | grep -A 5 'Allocated resources'
Fix Now: If the scheduler is down, check kube-system logs. If there is no capacity, scale the cluster or evict low-priority Pods.
🟡 Container keeps restarting (CrashLoopBackOff).
Immediate Action: Get logs from the crashed container instance.
Commands
kubectl logs <pod> --previous --tail=50
kubectl describe pod <pod> | grep -A 5 'Last State'
Fix Now: If OOMKilled, increase the memory limit. If it is an application error, fix the code. If a probe is failing, adjust initialDelaySeconds.
🟡 Service unreachable — connection refused or timeout.
Immediate Action: Check whether the Service has endpoints.
Commands
kubectl get endpoints <service-name>
kubectl get pods -l app=<selector> -o wide
Fix Now: If there are no endpoints, the Pods are not Ready or the labels don't match. If endpoints exist, test the Pod directly with kubectl exec -- curl.
🟡 Node marked NotReady — Pods being evicted.
Immediate Action: Check node conditions and kubelet status.
Commands
kubectl describe node <node> | grep -A 10 'Conditions'
kubectl get events --field-selector involvedObject.name=<node> --sort-by='.lastTimestamp'
Fix Now: If disk pressure, clean images with crictl rmi --prune. If memory pressure, identify the leak. If the kubelet is down, SSH in and run systemctl restart kubelet.
🟡 PersistentVolumeClaim stuck in Pending.
Immediate Action: Check available PersistentVolumes and the StorageClass.
Commands
kubectl get pv
kubectl describe pvc <pvc-name>
Fix Now: If no PV is available, provision one or check the StorageClass provisioner. If a PV exists but is not binding, verify that accessModes and storageClassName match.
🟡 ImagePullBackOff — container image cannot be pulled.
Immediate Action: Verify image name, tag, and registry credentials.
Commands
kubectl describe pod <pod> | grep -A 5 'Events'
kubectl get secret <imagepullsecret-name> -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
Fix Now: If the image name is wrong, fix the Deployment spec. If it is an auth failure, recreate the imagePullSecret with correct credentials. If it is a private registry, verify network access from the node.
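The imagePullSecret decode above is itself layered: the Secret value is base64, and the "auth" field inside the decoded JSON is another base64 string of user:password. A minimal self-contained sketch of the decode pipeline, using made-up credentials and a hypothetical registry hostname:

```shell
# Build a sample .dockerconfigjson value the way Kubernetes stores it (base64).
# registry.example.com and user:pass are illustrative, not real credentials.
encoded=$(printf '{"auths":{"registry.example.com":{"auth":"dXNlcjpwYXNz"}}}' | base64 | tr -d '\n')

# First decode: recover the JSON config — check the registry hostname here.
echo "$encoded" | base64 -d; echo

# Second decode: the inner "auth" field is base64 of user:password.
printf 'dXNlcjpwYXNz' | base64 -d; echo   # prints user:pass
```

If the decoded hostname or credentials don't match the registry in the Pod spec, that mismatch is the ImagePullBackOff.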
Production Incident: The --force Delete That Corrupted a StatefulSet's Persistent Volume
A developer force-deleted a Kafka broker Pod to 'restart' it. The Pod was part of a StatefulSet with a PersistentVolumeClaim. The force delete skipped the graceful shutdown, leaving the volume attachment in a 'detaching' state. The replacement Pod could not mount the volume, and Kafka lost its replica for 45 minutes.
Symptom: New StatefulSet Pod stuck in Pending with event: 'Warning FailedScheduling ... pod has unbound immediate PersistentVolumeClaims.' The old Pod is gone, but the PVC shows 'Bound' and the PV shows 'Released' instead of 'Available'.
Assumption: The PVC is broken or the storage provisioner is down.
Root cause: kubectl delete pod kafka-2 --force --grace-period=0 bypassed the kubelet's normal termination flow. The kubelet did not unmount the volume or detach it from the node. The cloud provider's volume controller saw the attachment as active (the old node still held it) and refused to re-attach to the new node. The PV was stuck in 'Released' state because the PVC's claimRef still pointed to the old binding.
Fix:
1. Patch the PV to remove the claimRef: kubectl patch pv <pv-name> -p '{"spec":{"claimRef": null}}'.
2. Delete the stuck PVC and let the StatefulSet controller recreate it.
3. If the volume is still attached to the old node, force-detach via the cloud CLI (e.g., aws ec2 detach-volume).
4. Restart the kubelet on the old node to clear stale mount references.
5. Never use --force on StatefulSet Pods. Use kubectl delete pod <name> with the default grace period.
Key Lesson
  • --force --grace-period=0 is a last resort for stuck Pods, not a restart mechanism.
  • StatefulSet Pods have identity (ordinal index, stable network name, persistent storage). Force-deleting breaks the identity contract.
  • Always check volume attachment state when a StatefulSet Pod fails to reschedule: kubectl describe pv and kubectl get volumeattachments.
  • Use kubectl rollout restart statefulset/<name> for controlled restarts of StatefulSets.
Production Debug Guide — from symptom to root cause using only kubectl commands.
Pod stuck in Pending: 1. kubectl describe pod <name> — read the Events section. 2. If 'FailedScheduling' with 'Insufficient cpu/memory': check kubectl describe nodes | grep -A 5 'Allocated resources'. 3. If 'persistentvolumeclaim not found': kubectl get pvc to verify the PVC exists and is Bound. 4. If 'node(s) had taint': check kubectl get nodes --show-labels and the Pod's tolerations. 5. If no events at all: the scheduler may be down — kubectl get pods -n kube-system | grep scheduler.
Pod in CrashLoopBackOff: 1. kubectl logs <pod> --previous — see why the last instance crashed. 2. kubectl describe pod <pod> — check 'Last State' for OOMKilled (exit code 137). 3. If OOMKilled: increase the memory limit or fix the leak. 4. If application error: fix the code. 5. If probe failure: kubectl describe pod <pod> | grep -A 3 'Liveness\|Readiness' — check probe config against actual startup time.
Service returns 502/503: 1. kubectl describe service <name> — note the Selector and target port. 2. kubectl get pods -l app=<label> — verify Pods exist and are Ready (READY column shows 1/1). 3. kubectl exec -it <pod> -- curl localhost:<port> — test the app directly inside the Pod. 4. If Pods are Ready but kubectl get endpoints <name> is still empty: check that the Service selector matches the Pod labels exactly (case-sensitive).
kubectl commands are slow or timeout: 1. kubectl get --raw /healthz — check API server health. 2. kubectl get --raw /readyz — check if the API server is ready. 3. Check etcd latency: kubectl exec -n kube-system etcd-master -- etcdctl endpoint health. 4. If the API server is healthy but kubectl is slow: check your kubeconfig context — kubectl config current-context. You may be hitting a distant cluster over VPN.
Deployment rollout hangs — never completes: 1. kubectl rollout status deployment/<name> — see which ReplicaSet is not becoming ready. 2. kubectl describe rs <new-rs-name> — check whether Pods are Pending or CrashLoopBackOff. 3. If image pull failure: kubectl describe pod <pod> | grep 'Failed' — verify the image exists and imagePullSecrets are configured. 4. If maxUnavailable=0 and no capacity: the rollout cannot proceed because old Pods cannot be removed until new Pods are Ready. 5. Rollback: kubectl rollout undo deployment/<name>.
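The "selector matches Pod labels exactly" rule from the Service debugging steps can be sketched in plain shell: a Pod becomes an endpoint only when every selector key=value pair appears, case-sensitively, in its labels. The label sets below are made up for illustration:

```shell
# A Service matches a Pod only if EVERY selector pair is present on the Pod,
# compared exactly (case-sensitive). Labels here are illustrative.
selector="app=payment tier=backend"
pod_labels="app=payment tier=backend version=v2"

match=true
for kv in $selector; do
  case " $pod_labels " in
    *" $kv "*) ;;      # this key=value is present on the Pod
    *) match=false ;;  # missing or different value -> Pod is not an endpoint
  esac
done
echo "match=$match"    # prints match=true
```

Change `tier=backend` on the Pod to `Tier=backend` and the match fails — exactly the class of typo that produces a Service with zero endpoints.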

kubectl is not just a command-line tool — it is a Kubernetes API client. Every command you type translates into HTTP requests to the kube-apiserver. Understanding this relationship explains why certain commands are fast (cached reads), why others are slow (watch calls), and why some fail with 'connection refused' when the API server is unreachable.
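The command-to-HTTP mapping is visible with -v=6 on any kubectl command. A minimal sketch of the URL scheme for namespaced core-group resources (namespace, resource, and name values below are illustrative):

```shell
# Each kubectl verb is a REST call against the kube-apiserver, e.g.:
#   kubectl get pods           -> GET    /api/v1/namespaces/default/pods
#   kubectl delete pod myapp   -> DELETE /api/v1/namespaces/default/pods/myapp
# (non-core groups like apps/v1 live under /apis/<group>/<version>/...)
# The path is assembled from namespace + resource + name:
ns=default; resource=pods; name=myapp-pod
echo "GET /api/v1/namespaces/${ns}/${resource}/${name}"
```

Running any command with `-v=6` prints the real URL and response code, which is how you confirm which endpoint a slow or failing command is hitting.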

The common misconception is that kubectl is a deployment tool. It is a state inspection and mutation tool. Deployment is one use case. The real value is observability — seeing what the cluster is doing right now, what it did in the past, and why a specific Pod is stuck. Engineers who master kubectl's inspection commands debug production incidents in minutes. Engineers who only know apply and delete debug by redeploying and hoping.

This cheatsheet is organized by intent: what are you trying to do? Find a resource? Debug a failure? Roll back a deployment? Each section includes the commands, the output you should expect, and the production gotchas that bite when you use them at scale.

Getting Information

The get and describe commands are your primary observability tools. They read cluster state from the API server without modifying anything — safe to run in production at any time.

kubectl get lists resources in a compact table format. It is fast because the API server caches the response. kubectl describe shows the same resources with full detail including events, conditions, and annotations — this is where the debugging signal lives.

The --output flag is more powerful than most engineers realize. -o jsonpath lets you extract specific fields programmatically. -o custom-columns builds custom dashboards in your terminal. -o name outputs only resource names, perfect for scripting.

getting-information.sh · BASH
# List resources
kubectl get pods
kubectl get pods -o wide          # + node and IP
kubectl get pods --all-namespaces # all namespaces
kubectl get deployments
kubectl get services
kubectl get nodes
kubectl get all                   # pods, services, deployments in current namespace

# Get YAML/JSON of a resource
kubectl get pod myapp-pod -o yaml
kubectl get deployment myapp-deployment -o json

# Detailed view with events — essential for debugging
kubectl describe pod myapp-pod
kubectl describe node my-node

# Watch resources update in real time
kubectl get pods -w

# Custom columns output
kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase

# ── PRODUCTION-GRADE EXTRAS ──────────────────────────────────────────────

# Extract a specific field with jsonpath
kubectl get pod myapp-pod -o jsonpath='{.status.podIP}'
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'

# Get all pods with their resource requests
kubectl get pods -o custom-columns=NAME:.metadata.name,CPU_REQ:.spec.containers[0].resources.requests.cpu,MEM_REQ:.spec.containers[0].resources.requests.memory

# Find pods without resource limits (security/compliance check)
kubectl get pods -A -o json | jq -r '.items[] | select(.spec.containers[].resources.limits == null) | .metadata.namespace + "/" + .metadata.name'

# Get events sorted by time — critical for post-incident analysis
kubectl get events --sort-by='.lastTimestamp' -A

# Get events for a specific resource
kubectl get events --field-selector involvedObject.name=myapp-pod

# Check API server health
kubectl get --raw /healthz
kubectl get --raw /readyz
kubectl get --raw /livez
▶ Output
NAME READY STATUS RESTARTS AGE
myapp-pod-abc 1/1 Running 0 5m

10.244.1.45

myapp-pod-abc 10.244.1.45
myapp-pod-def 10.244.2.78

ok
Mental Model
get vs describe — When to Use Each
The Events section in describe is the single most valuable debugging output in Kubernetes.
  • Events have a TTL (default: 1 hour). If you investigate late, the events are gone. Export them early.
  • kubectl get events --sort-by='.lastTimestamp' shows the chronological story of what happened.
  • kubectl describe pod shows events scoped to that Pod. kubectl get events -A shows cluster-wide events.
📊 Production Insight
kubectl get with jsonpath is the foundation of production automation scripts. CI/CD pipelines, health checks, and alerting scripts all rely on extracting specific fields from kubectl output. The -o jsonpath syntax is powerful but fragile — field names change between API versions, so reference resources by their full group/version in scripts (apps/v1 for Deployments, not just v1). For complex extraction, pipe to jq instead of wrestling with jsonpath edge cases.
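Where jq is unavailable, a single flat field can still be pulled from a saved dump with POSIX tools. A minimal sketch — the pod.json content stands in for a `kubectl get pod -o json` snapshot, and this sed approach only works for simple, unambiguous keys (use jq for anything nested or repeated):

```shell
# Hypothetical snapshot of `kubectl get pod myapp-pod -o json` (trimmed):
cat > /tmp/pod.json <<'EOF'
{"metadata":{"name":"myapp-pod"},"status":{"phase":"Running","podIP":"10.244.1.45"}}
EOF

# Equivalent of -o jsonpath='{.status.podIP}' with plain POSIX sed:
sed -n 's/.*"podIP":"\([^"]*\)".*/\1/p' /tmp/pod.json   # prints 10.244.1.45
```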
🎯 Key Takeaway
kubectl get is for scanning; kubectl describe is for investigating. The Events section in describe is the primary debugging signal — it shows scheduler decisions, probe failures, and image pull errors. Use -o jsonpath or jq for programmatic extraction in automation scripts.
Choosing the Right Output Format
If: Quick overview of many resources.
Use: kubectl get <resource> — default table output.
If: Need a specific field for scripting.
Use: kubectl get <resource> -o jsonpath='{.status.phase}' or pipe to jq.
If: Debugging a specific resource's state.
Use: kubectl describe <resource> <name> — read the Events and Conditions sections.
If: Need the full resource definition for backup or modification.
Use: kubectl get <resource> <name> -o yaml — export, edit, and reapply.
If: Watching for changes in real time.
Use: kubectl get <resource> -w — streams updates as they happen.

Debugging

The debugging commands — logs, exec, port-forward, cp — are how you interact with running (or crashed) containers. These commands go through the kubelet on the target node, not directly to the container runtime.

kubectl logs retrieves stdout/stderr from the container. The --previous flag is critical — it retrieves logs from the container instance that just crashed, which is the only way to see why a CrashLoopBackOff Pod is failing.

kubectl exec opens a shell inside the container. This is the Kubernetes equivalent of SSH. It requires the container to be running and have a shell binary (bash or sh). Distroless containers do not have a shell — use ephemeral debug containers instead.

kubectl port-forward creates a tunnel from your local machine to a Pod or Service inside the cluster. It bypasses Ingress, LoadBalancer, and NodePort — useful for accessing services that are not exposed externally.

debugging.sh · BASH
# Container logs
kubectl logs myapp-pod
kubectl logs myapp-pod -c container-name  # specific container
kubectl logs myapp-pod --previous          # logs from crashed container
kubectl logs -f deployment/myapp           # follow deployment logs
kubectl logs myapp-pod --tail=100          # last 100 lines

# Execute commands inside a container
kubectl exec -it myapp-pod -- bash
kubectl exec -it myapp-pod -- sh           # if bash not available
kubectl exec myapp-pod -- env              # print env vars
kubectl exec myapp-pod -- cat /config/app.properties

# Port forward to access a service locally
kubectl port-forward pod/myapp-pod 8080:8000
kubectl port-forward service/myapp-svc 8080:80

# Copy files to/from a pod
kubectl cp myapp-pod:/var/log/app.log ./app.log
kubectl cp ./config.yaml myapp-pod:/app/config.yaml

# ── PRODUCTION-GRADE DEBUGGING ───────────────────────────────────────────

# All containers of a Deployment's Pod (kubectl selects one Pod for you;
# use a label selector, below, to stream from every Pod)
kubectl logs deployment/myapp --all-containers --prefix

# Logs with timestamps — essential for correlating with external events
kubectl logs myapp-pod --timestamps

# Logs from a specific time window
kubectl logs myapp-pod --since=1h
kubectl logs myapp-pod --since-time=2026-04-07T10:00:00Z

# Ephemeral debug container for distroless images (K8s 1.23+)
kubectl debug -it myapp-pod --image=busybox --target=app-container

# Debug a node by creating a privileged debug Pod
kubectl debug node/my-node -it --image=ubuntu

# Copy logs from a crashed pod before it is garbage collected
kubectl logs <pod> --previous > crash.log 2>&1

# Stream logs from multiple pods with a label selector
kubectl logs -l app=payment-service --all-containers --prefix --follow

# Check what environment variables a running pod has
kubectl exec myapp-pod -- printenv | sort

# Test network connectivity from inside a pod
kubectl exec myapp-pod -- curl -s http://other-service.default.svc.cluster.local:8080/health
kubectl exec myapp-pod -- nslookup other-service.default.svc.cluster.local

# Check mounted volumes and config
kubectl exec myapp-pod -- df -h
kubectl exec myapp-pod -- mount | grep config
▶ Output
# port-forward: http://localhost:8080 → container port 8000

# ephemeral debug container
Defaulting debug container name to debugger-xxxxx.
/ # ls /app
config.yaml app.jar lib/
Mental Model
Distroless and Debug Containers
If your production containers have bash installed, your images are too large and your attack surface is too wide.
  • kubectl debug -it <pod> --image=busybox --target=<container> shares the process namespace.
  • kubectl debug node/<node> -it --image=ubuntu creates a Pod on that node with host namespaces mounted.
  • Use --rm when kubectl debug creates a new Pod (node debugging or --copy-to) so it is cleaned up automatically; ephemeral containers attached to an existing Pod remain in its spec until the Pod is deleted.
📊 Production Insight
kubectl logs --previous is the most underused debugging command. When a Pod enters CrashLoopBackOff, the current container has no logs — it just crashed. Only --previous shows the stdout/stderr from the crashed instance. Set up your incident response runbook to always run logs --previous before investigating further. Additionally, logs are ephemeral by default — they are lost when the Pod is deleted. Use a log aggregation stack (Fluentd/Fluent Bit -> Loki/ELK) for persistent log retention. kubectl logs is for real-time debugging, not historical analysis.
🎯 Key Takeaway
kubectl logs --previous is the first command for CrashLoopBackOff debugging. kubectl exec requires a shell in the container — use ephemeral debug containers for distroless images. kubectl port-forward bypasses all networking layers for direct Pod access. Always capture logs before a Pod is garbage collected.
Debugging Tool Selection
If: Container is running and you need to inspect its state.
Use: kubectl exec -it <pod> -- sh — get a shell and investigate.
If: Container is running but has no shell (distroless).
Use: kubectl debug -it <pod> --image=busybox --target=<container> — attach an ephemeral debug container.
If: Container just crashed (CrashLoopBackOff).
Use: kubectl logs <pod> --previous — get logs from the crashed instance.
If: Need to access a service that is not exposed externally.
Use: kubectl port-forward service/<name> 8080:80 — tunnel to the service from your machine.
If: Need to extract a log file or core dump from a Pod.
Use: kubectl cp <pod>:/path/to/file ./local-file — copy files in either direction.
If: Need to debug node-level issues (networking, disk, kernel).
Use: kubectl debug node/<node> -it --image=ubuntu — launch a privileged Pod on the node.

Managing Deployments

The apply, set image, scale, and rollout commands are how you modify cluster state. These are write operations — they change what is running. Use them with the same care as database writes.

kubectl apply is the declarative entry point. It reads a YAML file, compares it with the live state, and sends a PATCH request to the API server to reconcile the difference. This is idempotent — running apply twice with the same file produces no change.

kubectl rollout undo is the most important safety net. It reverts a Deployment to the previous ReplicaSet revision. This is not a delete-and-recreate — it is a controlled rollback that respects rolling update parameters (maxUnavailable, maxSurge).
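The rolling update parameters bound what the controller may do at any instant. The arithmetic below mirrors how the bounds are derived for absolute values (with percentages, maxSurge rounds up and maxUnavailable rounds down); the replica counts are illustrative:

```shell
# Deployment rolling update bounds for replicas=5, maxSurge=1, maxUnavailable=0:
replicas=5; maxSurge=1; maxUnavailable=0

max_total=$((replicas + maxSurge))            # at most 6 Pods exist at once
min_available=$((replicas - maxUnavailable))  # at least 5 must stay available

echo "max_total=$max_total min_available=$min_available"
# With maxUnavailable=0 the rollout can only progress by surging: if the
# cluster has no room for Pod #6, no old Pod may be removed and the rollout hangs.
```

This is the same arithmetic behind the "maxUnavailable=0 and no capacity" hang described in the debug guide: the rollback via rollout undo obeys the identical bounds, which is why it is safe.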

kubectl delete is the imperative counterpart to apply. It removes resources. For Deployments, the delete cascades by default: the owned ReplicaSets and Pods are garbage-collected along with it. Pass --cascade=orphan to remove only the Deployment and leave the Pods running without an owner. Always prefer apply with a modified YAML over delete-then-create.

managing-deployments.sh · BASH
# Apply config
kubectl apply -f deployment.yaml
kubectl apply -f ./k8s/              # apply all files in directory

# Update image
kubectl set image deployment/myapp api=myapp:2.0

# Scale
kubectl scale deployment myapp --replicas=5

# Rollout management
kubectl rollout status deployment/myapp
kubectl rollout history deployment/myapp
kubectl rollout undo deployment/myapp                    # rollback one step
kubectl rollout undo deployment/myapp --to-revision=2    # rollback to specific revision

# Delete
kubectl delete pod myapp-pod
kubectl delete deployment myapp
kubectl delete -f deployment.yaml   # delete what was applied

# Force delete a stuck pod (last resort — never on StatefulSet Pods)
kubectl delete pod myapp-pod --force --grace-period=0

# ── PRODUCTION-GRADE DEPLOYMENT MANAGEMENT ───────────────────────────────

# Server-side dry run — validate against the live API server without changing anything
kubectl apply -f deployment.yaml --dry-run=server
# Show the diff between the manifest and live state
kubectl diff -f deployment.yaml

# Apply with field validation — reject invalid manifests
kubectl apply -f deployment.yaml --validate=true --server-side

# Restart all pods in a deployment (rolling restart)
kubectl rollout restart deployment/myapp

# Pause a rollout mid-way (canary testing)
kubectl rollout pause deployment/myapp
# ... verify canary pods ...
kubectl rollout resume deployment/myapp

# Check rollout revision details
kubectl rollout history deployment/myapp --revision=3

# Scale with resource-aware scripting
CURRENT=$(kubectl get deployment myapp -o jsonpath='{.spec.replicas}')
DESIRED=$((CURRENT + 2))
kubectl scale deployment myapp --replicas=$DESIRED
echo "Scaled from $CURRENT to $DESIRED replicas"

# Delete with label selector (dangerous — always dry-run first)
kubectl delete pods -l app=temp-worker --dry-run=client
kubectl delete pods -l app=temp-worker

# Apply with pruning — delete resources not in the applied set
kubectl apply -f ./k8s/ --prune -l app=myapp

# Export current state before making changes (safety net)
kubectl get deployment myapp -o yaml > myapp-backup-$(date +%Y%m%d-%H%M%S).yaml
▶ Output
deployment.apps/myapp-deployment configured
Waiting for deployment "myapp" rollout to finish: 3 out of 5 new replicas have been updated...
deployment.apps/myapp-deployment successfully rolled out
Mental Model
apply vs replace vs create
apply uses strategic merge patch by default. For lists (like container ports), it merges by key. If your list items have no merge key, apply may duplicate entries.
  • apply with --server-side (K8s 1.22+) avoids conflicts from multiple actors updating the same resource.
  • apply with --prune deletes resources that are no longer in your manifest directory — powerful but dangerous.
  • Always run --dry-run=client or --dry-run=server before applying to production.
📊 Production Insight
kubectl rollout pause is the foundation of canary deployments. Pause a rollout after the first new Pod starts, verify it is healthy (check logs, metrics, error rates), then resume. This gives you a manual gate between 'new code is running' and 'all Pods are new code.' Combine with PodDisruptionBudgets and readiness probes for zero-downtime deployments. The biggest production mistake is running kubectl apply without --dry-run first. A misconfigured manifest can delete all Pods simultaneously if the selector changes.
🎯 Key Takeaway
kubectl apply is the declarative standard — always dry-run first. kubectl rollout undo is the fastest rollback mechanism, reverting to the previous ReplicaSet in seconds. kubectl rollout pause enables canary deployments. Never use --force on StatefulSet Pods — it breaks the identity contract.
Deployment Operations Decision Tree
IfDeploying a new version from a YAML file.
Usekubectl apply -f deployment.yaml --dry-run=client (verify), then kubectl apply -f deployment.yaml.
IfUpdating only the container image tag.
Usekubectl set image deployment/<name> <container>=<image>:<tag> — faster than editing YAML.
IfNeed to restart all Pods without changing the spec.
Usekubectl rollout restart deployment/<name> — rolling restart with zero downtime.
IfNeed to test new code on a subset of Pods before full rollout.
Usekubectl rollout pause deployment/<name> after first Pod starts. Verify. Then kubectl rollout resume deployment/<name>.
IfDeployment is broken — need to go back to the previous version.
Usekubectl rollout undo deployment/<name> — instant rollback to last known good ReplicaSet.
IfNeed to roll back to a specific older revision.
Usekubectl rollout history deployment/<name> (find revision), then kubectl rollout undo deployment/<name> --to-revision=<N>.
IfA Pod is stuck in Terminating and won't die.
Usekubectl delete pod <name> --force --grace-period=0 — last resort. Not for StatefulSets.
🗂 kubectl Command Comparison
When to use each command and what it actually does behind the scenes.
Command | HTTP Verb | Use Case | Production Risk
kubectl get | GET | Read current state of resources. Fast, cached, safe. | None — read-only operation.
kubectl describe | GET + events query | Detailed view with events, conditions, annotations. Primary debugging tool. | None — read-only. Events have a 1-hour TTL; investigate promptly.
kubectl apply | PATCH (strategic merge) | Declarative state management. Idempotent. GitOps standard. | Medium — selector changes can orphan Pods. Always dry-run first.
kubectl create | POST | Create a new resource. Fails if it already exists. | Low — fails safely if the resource exists. Not idempotent.
kubectl replace | PUT | Full resource replacement. Overwrites everything. | High — replaces the entire resource. Missing fields are removed. Use with caution.
kubectl delete | DELETE | Remove a resource. Triggers graceful termination. | Medium — cascades to dependent Pods by default. --force on StatefulSets corrupts storage.
kubectl logs | GET (via kubelet proxy) | Retrieve container stdout/stderr. | None — read-only. Logs are ephemeral; aggregate externally.
kubectl exec | POST (SPDY/WebSocket) | Run commands inside a container. | Medium — can modify running state. Audit exec usage in production.
kubectl port-forward | POST (SPDY tunnel) | Tunnel a local port to a Pod/Service. | Low — bypasses network policies. Do not use as a permanent access method.
kubectl rollout undo | PATCH (rolls back to previous ReplicaSet) | Revert a Deployment to a previous revision. | Low — safest rollback method. Respects rolling update parameters.

🎯 Key Takeaways

  • kubectl describe shows events — always check the Events section when a pod is not starting.
  • kubectl logs --previous retrieves logs from a crashed container — essential for crash debugging.
  • kubectl port-forward lets you access a pod or service without a LoadBalancer — great for development.
  • kubectl rollout undo is a one-command rollback — no redeployment needed.
  • Use -n NAMESPACE for all commands if not in the default namespace.
  • Always dry-run before applying to production: kubectl apply -f file.yaml --dry-run=client.
  • kubectl debug creates ephemeral containers for debugging distroless images — the modern replacement for exec.
  • kubectl get with -o jsonpath or jq is the foundation of production automation and monitoring scripts.

⚠ Common Mistakes to Avoid

    Using kubectl delete pod --force on StatefulSet Pods
    Symptom

    replacement Pod stuck in Pending because the PersistentVolume is still attached to the old node.

    Fix

    never use --force on StatefulSet Pods. Use kubectl rollout restart statefulset/<name> for controlled restarts. If a Pod is stuck in Terminating, check the kubelet on the node before force-deleting.

    Running kubectl apply without --dry-run in production
    Symptom

    a typo in the YAML changes the Deployment selector, orphaning all existing Pods and creating new ones that cannot schedule.

    Fix

    always run kubectl apply -f file.yaml --dry-run=client first. For critical deployments, use --dry-run=server to validate against the API server.

    Using kubectl logs without --previous on CrashLoopBackOff Pods
    Symptom

    'no logs' output because the current container just crashed and has no stdout yet.

    Fix

    always use kubectl logs <pod> --previous to retrieve logs from the crashed container instance.

    Not specifying -n namespace
    Symptom

    commands target the 'default' namespace and return no results, leading to confusion about whether resources exist.

    Fix

    always specify -n <namespace> or set a default namespace: kubectl config set-context --current --namespace=<ns>.

    Running kubectl delete -f directory/ without verifying
    Symptom

    deletes all resources defined in the directory, including shared resources like ConfigMaps and Services used by other workloads.

    Fix

    always run with --dry-run=client first. Use label-based deletion for surgical removal: kubectl delete all -l app=myapp.

    Using kubectl exec to modify production state
    Symptom

    manual changes inside a container are lost on restart, creating configuration drift between the running state and the declared state.

    Fix

    never modify production state via exec. Update the ConfigMap/Secret/Deployment YAML and apply it. The reconciliation loop will bring the container in line.

Interview Questions on This Topic

  • Q: How do you debug a pod that is in CrashLoopBackOff? Walk me through the exact kubectl commands you would run, in order.
  • Q: How do you roll back a failed deployment in Kubernetes? What is the difference between rollout undo and deleting the new ReplicaSet?
  • Q: How do you access a service running in Kubernetes from your local machine without exposing it externally?
  • Q: What is the difference between kubectl apply, kubectl replace, and kubectl create? When would you use each?
  • Q: A developer ran kubectl delete pod kafka-2 --force --grace-period=0 on a StatefulSet. The replacement Pod is stuck in Pending. How do you diagnose and fix this?
  • Q: How would you find all Pods in a cluster that do not have resource limits set? Write the kubectl command.
  • Q: What does kubectl rollout pause do, and how would you use it for canary deployments?
  • Q: How do you export the current state of a Deployment before making changes, as a safety net?

Frequently Asked Questions

How do I debug a pod that is in CrashLoopBackOff?

Check the crash reason: kubectl describe pod POD_NAME — look at the 'Last State' section and 'Events'. Check logs from the crashed container: kubectl logs POD_NAME --previous. Common causes: application crash on startup (check logs for stack trace), readiness probe failing before app starts (check probe config), missing environment variables or ConfigMap, and OOMKilled (memory limit too low).
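The OOMKilled case has a recognizable signature in exit codes. Container exit codes above 128 mean "killed by signal N" where N = code − 128, so the 137 you see in 'Last State' decodes like this:

```shell
# Exit code 137 = 128 + 9, i.e. SIGKILL — for a container, this is almost
# always the kernel OOM killer enforcing the memory limit, not the app.
code=137
sig=$((code - 128))
echo "signal=$sig"   # prints signal=9 (SIGKILL)
```

By the same rule, exit code 143 decodes to SIGTERM (15): the container was gracefully terminated rather than OOM-killed.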

How do I access a ConfigMap or Secret in a running pod?

kubectl exec -it POD_NAME -- env shows all environment variables including those from ConfigMaps and Secrets. kubectl exec -it POD_NAME -- cat /etc/config/KEY shows a file-mounted ConfigMap. kubectl get configmap NAME -o yaml shows the ConfigMap contents.

What is the difference between kubectl delete pod and kubectl delete pod --force?

kubectl delete pod sends a graceful termination signal. The kubelet gives the container time to shut down (default 30 seconds), unmounts volumes, and cleans up. kubectl delete pod --force --grace-period=0 skips all of this — the Pod is removed from the API server immediately, but the container may still be running on the node. Use --force only when the Pod is stuck in Terminating and the kubelet is unreachable. Never use it on StatefulSet Pods.
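From the container's side, graceful termination means the process traps SIGTERM and drains before exiting; the kubelet waits terminationGracePeriodSeconds (default 30s) and then sends SIGKILL. A minimal sketch — the kill here simulates the kubelet's signal:

```shell
# A well-behaved server traps TERM and drains before exiting.
cleanup() { echo "draining connections"; exit 0; }
trap cleanup TERM

echo "serving"
kill -TERM $$   # simulate the kubelet's SIGTERM; --force skips this entirely
# prints "serving" then "draining connections"
```

A force delete removes the Pod from the API server without this handshake, which is why the process (and its volume mounts) can linger on the node.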

How do I see what kubectl is actually sending to the API server?

Add -v=6 (or higher) to any kubectl command to see the HTTP request and response. -v=6 shows request URLs and response codes. -v=8 shows full request and response bodies. -v=9 shows curl commands you could use to replicate the request. Example: kubectl get pods -v=6 shows the GET request to /api/v1/namespaces/default/pods.

How do I switch between multiple Kubernetes clusters?

kubectl config get-contexts shows all available contexts (clusters). kubectl config use-context <context-name> switches to a different cluster. kubectl config current-context shows which cluster you are targeting. For safety, always run kubectl config current-context before applying changes to verify you are on the right cluster. Use kubectx (a popular third-party tool) for faster context switching.

Naren — Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
