kubectl Commands Cheatsheet: The Commands That Actually Matter
Every Kubernetes cluster you'll ever work with — whether it's a three-node local dev setup or a 500-node production environment serving millions of users — is controlled through one tool: kubectl. It's not optional and it's not something you learn once and forget. It's the steering wheel of your entire infrastructure, and senior engineers use it dozens of times a day. If you're deploying applications, debugging incidents at 2am, or onboarding a new service, kubectl is always in the middle of the action.
The problem most engineers hit isn't learning the commands — it's understanding why a command exists and which one to reach for in a given situation. Should you use kubectl get or kubectl describe? When does kubectl exec save you and when does it make things worse? Why does kubectl apply behave differently from kubectl create? These are the questions that separate someone who copy-pastes from Stack Overflow from someone who actually understands their cluster.
By the end of this article you'll know the essential kubectl commands that cover 95% of real-world Kubernetes work, understand the reasoning behind each one so you can adapt when things go sideways, and have a mental model for debugging pods, managing deployments, and inspecting cluster state that you can actually use under pressure.
Navigating Your Cluster: get, describe, and Why You Need Both
kubectl get and kubectl describe look like they do the same thing, but they serve two completely different mental modes. get is your dashboard — fast, tabular, scannable. You use it to answer 'is everything running?'. describe is your investigation tool — verbose, relational, contextual. You use it to answer 'why isn't this thing running?'.
The critical difference is that describe includes the Events section at the bottom, which is where Kubernetes writes exactly what it tried to do and what went wrong. That Events block is the single most useful debugging surface in the entire platform. Nine times out of ten when a pod won't start, the answer is sitting in Events and nowhere else.
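The Events that describe shows are scoped to one resource; you can also pull recent events for a whole namespace, sorted by time, with kubectl get events. A small helper sketch — the function name is ours, but the flags are real kubectl flags:

```shell
# recent_events: show a namespace's events ordered by timestamp.
# Hypothetical helper -- the function name is illustrative.
recent_events() {
  ns="$1"
  # Events come back unsorted by default; --sort-by orders them by field path
  kubectl get events -n "$ns" --sort-by=.lastTimestamp
}

# Example (commented out -- requires a live cluster):
# recent_events payments
```

Piping the result through `tail` gives you the newest events, which is often enough to spot a failing image pull or scheduling problem without describing pods one by one.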
kubectl get pods -o wide adds node name and IP columns — invaluable when you suspect a node-level issue. kubectl get all -n payments gives you a full snapshot of every resource type in a namespace in one shot, which is the fastest way to orient yourself in an unfamiliar service. Combine -o yaml with get to dump the full live spec of any resource, which is useful for copying configs or auditing what actually got deployed versus what you intended.
```shell
# ── STEP 1: Scan all pods across every namespace at a glance ──────────────────
# -A means --all-namespaces. Use this first when you inherit a cluster.
kubectl get pods -A

# ── STEP 2: Get pods in your target namespace with node placement info ─────────
# -o wide adds NODE, IP, NOMINATED NODE, and READINESS GATES columns
# Useful when pods on a specific node keep failing
kubectl get pods -n payments -o wide

# ── STEP 3: Get EVERYTHING in a namespace (deployments, services, ingress...) ──
# 'all' is a pseudo-resource type that expands to common resource groups
kubectl get all -n payments

# ── STEP 4: Deep-dive on a specific pod that shows as CrashLoopBackOff ─────────
# 'describe' gives you Conditions, Mounts, Environment variables, AND Events
# The Events block at the bottom is usually where the real error message lives
kubectl describe pod payment-processor-7d9f6b-xk2pq -n payments

# ── STEP 5: Export the live spec of a deployment for auditing ──────────────────
# -o yaml dumps the full Kubernetes object as it exists in the API server
# Redirect to a file if you want to diff it against your source manifest
kubectl get deployment payment-processor -n payments -o yaml > live_deployment.yaml
```
```text
# Output of: kubectl get pods -n payments -o wide
NAME                             READY   STATUS             RESTARTS   AGE   IP            NODE
payment-processor-7d9f6b-xk2pq   0/1     CrashLoopBackOff   4          8m    10.244.1.23   worker-node-2
payment-processor-7d9f6b-rv9ms   1/1     Running            0          8m    10.244.2.41   worker-node-3
```
```text
# Output of: kubectl describe pod payment-processor-7d9f6b-xk2pq -n payments (Events section)
Events:
  Type     Reason     Age              From               Message
  ----     ------     ----             ----               -------
  Normal   Scheduled  9m               default-scheduler  Successfully assigned payments/payment-processor-7d9f6b-xk2pq to worker-node-2
  Normal   Pulled     8m               kubelet            Successfully pulled image
  Warning  BackOff    3m (x6 over 7m)  kubelet            Back-off restarting failed container
```
Deploying and Updating Workloads: apply vs create, and Rolling Restarts
Here's the rule that will save you confusion forever: use kubectl apply for everything, and treat kubectl create as a one-shot bootstrapping tool. The reason is that apply is declarative — it compares what you're submitting against what already exists and only changes the delta. create is imperative — it fails with an error if the resource already exists, because it only knows how to make new things.
In a real CI/CD pipeline, your deployment YAML is being submitted on every code push. You need that to work whether it's the first deploy or the fiftieth. apply handles both cases. create breaks on deploy number two.
kubectl rollout is the command that lets you manage deployments as versioned rollouts rather than raw resource updates. rollout status blocks and streams progress — extremely useful in a deploy script where you need to wait for success before proceeding. rollout undo is your emergency eject button when a bad version goes out. rollout history shows you what versions exist so you can roll back to a specific revision, not just the previous one. These three together are your entire deployment safety net.
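In a pipeline these pieces chain together naturally: apply, block on rollout status, and undo if the rollout fails. A minimal sketch assuming a POSIX shell — the function name, the 180-second timeout, and the example arguments are illustrative, not a standard:

```shell
# deploy_and_wait: apply a manifest, wait for the rollout, roll back on failure.
# Hypothetical helper -- names and timeout are our assumptions.
deploy_and_wait() {
  manifest="$1"; deploy="$2"; ns="$3"

  # Idempotent: works for the first deploy and every one after
  kubectl apply -f "$manifest" -n "$ns" || return 1

  # Blocks until the rollout succeeds; exits non-zero on failure or timeout
  if ! kubectl rollout status "deployment/$deploy" -n "$ns" --timeout=180s; then
    echo "Rollout failed, rolling back" >&2
    kubectl rollout undo "deployment/$deploy" -n "$ns"
    return 1
  fi
}

# Example (requires a live cluster):
# deploy_and_wait payment-processor-deployment.yaml payment-processor payments
```

Because rollout status propagates a non-zero exit code, the pipeline step fails fast and the undo runs before anyone has to page you.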
```shell
# ── APPLY: Safe for first-time AND subsequent deploys ─────────────────────────
# apply compares your manifest against live state and changes only the delta
# The key point: apply works idempotently. Run it 50 times, same result.
kubectl apply -f payment-processor-deployment.yaml -n payments

# ── CHECK ROLLOUT STATUS: Blocks until rollout completes or fails ──────────────
# Use this in CI/CD scripts AFTER kubectl apply so the pipeline waits for health
# Exit code is non-zero if rollout fails — so your pipeline fails fast
kubectl rollout status deployment/payment-processor -n payments

# ── VIEW ROLLOUT HISTORY: See all versions with change causes ──────────────────
# Shows REVISION number, which you need for targeted rollbacks
kubectl rollout history deployment/payment-processor -n payments

# ── ROLLBACK TO PREVIOUS VERSION: Emergency eject button ──────────────────────
# No arguments = previous revision. Fast and safe.
kubectl rollout undo deployment/payment-processor -n payments

# ── ROLLBACK TO A SPECIFIC REVISION: When you need to jump back several versions
# Use the REVISION number from 'rollout history' above
kubectl rollout undo deployment/payment-processor -n payments --to-revision=4

# ── ROLLING RESTART: Force fresh pods without changing the image ───────────────
# Use this when pods are stuck or you've rotated a secret and need pods to re-read
# It does a zero-downtime rolling replacement of all pods in the deployment
kubectl rollout restart deployment/payment-processor -n payments

# ── SCALE UP: Increase replica count for a deployment ─────────────────────────
# Faster than editing the YAML when you need to handle a traffic spike right now
kubectl scale deployment/payment-processor --replicas=6 -n payments
```
```text
# Output of: kubectl apply -f payment-processor-deployment.yaml -n payments
deployment.apps/payment-processor configured
```
```text
# Output of: kubectl rollout status deployment/payment-processor -n payments
Waiting for deployment "payment-processor" rollout to finish: 2 out of 4 new replicas have been updated...
Waiting for deployment "payment-processor" rollout to finish: 3 out of 4 new replicas have been updated...
Waiting for deployment "payment-processor" rollout to finish: 1 old replicas are pending termination...
deployment "payment-processor" successfully rolled out
```
```text
# Output of: kubectl rollout history deployment/payment-processor -n payments
deployment.apps/payment-processor
REVISION  CHANGE-CAUSE
3         <none>
4         <none>
5         <none>
```
```text
# Output of: kubectl scale deployment/payment-processor --replicas=6 -n payments
deployment.apps/payment-processor scaled
```
Debugging Live Pods: logs, exec, port-forward, and the Right Order to Use Them
When something breaks in production, you have a mental order of operations. Don't skip to exec (shelling into a container) first — it's the most invasive option and often the last resort. Start with logs, move to describe if logs are empty, then use port-forward if you need to hit the service directly, and only exec when you need to inspect the filesystem or run a command inside the container itself.
kubectl logs has two flags you'll use constantly: -f streams live (like tail -f) and --previous (or -p) fetches logs from the last container run — critical when a container is crash-looping and dying before you can read anything, because the current container's logs might be empty.
kubectl port-forward is one of the most underrated commands. It creates a secure tunnel from your local machine to a pod or service without exposing anything publicly. Use it to hit a database admin UI, test an API endpoint directly, or access a metrics endpoint — all without touching firewall rules or ingress configs. It's temporary by design; Ctrl+C ends the tunnel.
kubectl exec gives you a shell or runs a single command inside a running container. Use it to check environment variables, inspect mounted config files, or run a quick database query when something's behaving unexpectedly.
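That order of operations can be sketched as a small helper that reads current logs and automatically falls back to --previous when the container has restarted. The function name and the restart-count check are our illustration, not part of kubectl:

```shell
# crash_logs: read a pod's logs; if the container has restarted, also pull
# the previous run's logs, where a crash-looping error message usually lives.
# Hypothetical helper -- the name is ours; the kubectl flags are real.
crash_logs() {
  pod="$1"; ns="$2"

  # How many times has the first container restarted?
  restarts=$(kubectl get pod "$pod" -n "$ns" \
    -o jsonpath='{.status.containerStatuses[0].restartCount}')

  kubectl logs "$pod" -n "$ns" --tail=100

  # A restarted container means the interesting logs are in the previous run
  if [ "${restarts:-0}" -gt 0 ]; then
    echo "--- previous container run ---"
    kubectl logs "$pod" -n "$ns" --previous --tail=100
  fi
}

# Example (requires a live cluster):
# crash_logs payment-processor-7d9f6b-xk2pq payments
```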
```shell
# ── STEP 1: Read current logs (last 100 lines) ────────────────────────────────
# --tail limits output so you don't get flooded in a long-running container
# -n payments is the namespace flag — always specify it explicitly
kubectl logs payment-processor-7d9f6b-rv9ms -n payments --tail=100

# ── STEP 2: Stream logs in real-time during a deploy or incident ───────────────
# -f follows the log stream (like tail -f on a log file)
# Use this while simultaneously running kubectl rollout status in another terminal
kubectl logs -f payment-processor-7d9f6b-rv9ms -n payments

# ── STEP 3: Get logs from the PREVIOUS container run (crash-loop debugging) ────
# --previous is the lifesaver for CrashLoopBackOff — current container may be
# dead before it logs anything, but the previous run's logs are preserved
kubectl logs payment-processor-7d9f6b-xk2pq -n payments --previous

# ── STEP 4: Get aggregated logs from ALL pods matching a label ─────────────────
# -l is the label selector — matches pods that have app=payment-processor
# Essential for deployments with multiple replicas when you don't know which
# pod is handling the bad request
kubectl logs -l app=payment-processor -n payments --tail=50

# ── STEP 5: Forward local port 8080 to pod port 8080 (no firewall changes) ─────
# After running this, http://localhost:8080 hits the pod directly
# Ctrl+C ends the tunnel. Perfect for one-off debugging sessions.
kubectl port-forward pod/payment-processor-7d9f6b-rv9ms 8080:8080 -n payments

# ── STEP 6: Open an interactive shell inside a running container ───────────────
# -it = interactive tty (you want both for a shell session)
# -- /bin/sh is used when bash isn't available (minimal alpine images)
kubectl exec -it payment-processor-7d9f6b-rv9ms -n payments -- /bin/sh

# ── STEP 7: Run a one-off command inside a container without an interactive shell
# Useful in scripts — check an env var, curl an internal service, etc.
kubectl exec payment-processor-7d9f6b-rv9ms -n payments -- env | grep DB_HOST
```
```text
# Output of: kubectl logs payment-processor-7d9f6b-xk2pq -n payments --previous
2024-01-15T14:32:01Z INFO Starting payment processor service
2024-01-15T14:32:01Z INFO Connecting to database at db.payments.svc.cluster.local:5432
2024-01-15T14:32:02Z FATAL Cannot connect to database: connection refused
```
```text
# Output of: kubectl port-forward pod/payment-processor-7d9f6b-rv9ms 8080:8080 -n payments
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Handling connection for 8080
```
```text
# Output of: kubectl exec payment-processor-7d9f6b-rv9ms -n payments -- env | grep DB_HOST
DB_HOST=db.payments.svc.cluster.local
```
Context, Namespaces, and Resource Management: Working Across Clusters Safely
Most engineers who get deep into Kubernetes start managing multiple clusters — dev, staging, production, maybe multiple cloud regions. kubectl handles this through contexts, which are named configurations stored in ~/.kube/config. Each context maps a cluster + a user + a default namespace. Switching contexts is how you switch clusters.
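A stripped-down kubeconfig makes that mapping concrete. Every name below is invented for illustration; only the structure matters:

```yaml
# ~/.kube/config (simplified sketch -- cluster, user, and server are invented)
apiVersion: v1
kind: Config
clusters:
- name: staging-cluster
  cluster:
    server: https://staging.example.com:6443
users:
- name: staging-user
  user: {}
contexts:
- name: staging              # a context = cluster + user + default namespace
  context:
    cluster: staging-cluster
    user: staging-user
    namespace: payments
current-context: staging     # this is what 'kubectl config use-context' changes
```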
The danger is running a destructive command in the wrong context. It happens more than people admit. kubectl config get-contexts shows you what's available and which is active (marked with *). kubectl config use-context switches the active one. Always verify your context before any delete, scale, or apply command on production.
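That verification habit is worth automating. A sketch of a guard function that refuses to proceed when the active context looks like production — the function name and the assumption that your production contexts contain the string "prod" are ours:

```shell
# require_non_prod: abort if the active kubectl context looks like production.
# Hypothetical guard -- assumes production context names contain "prod".
require_non_prod() {
  ctx=$(kubectl config current-context)
  case "$ctx" in
    *prod*)
      echo "Refusing: active context '$ctx' looks like production" >&2
      return 1
      ;;
  esac
  echo "Context OK: $ctx"
}

# Example: guard a destructive command behind the check
# require_non_prod && kubectl delete deployment legacy-payment-v1 -n payments
```

Because the guard returns non-zero, chaining with && means the delete simply never runs in the wrong cluster.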
Namespaces are your logical isolation boundaries within a single cluster. Every team or service environment should have its own namespace. Always use -n explicitly rather than relying on the default namespace — it makes your commands unambiguous and your scripts portable. kubectl config set-context --current --namespace=payments sets a namespace as the default for your current context, which reduces repetition if you're working in one namespace for a long session.
kubectl delete and kubectl drain are the two commands to treat with the most respect: delete permanently removes a resource and everything it manages, while drain gracefully evicts all pods from a node (used before maintenance). cordon is drain's gentler sibling — it marks a node unschedulable without evicting the pods already running there.
```shell
# ── LIST ALL CONTEXTS: See clusters you have access to ────────────────────────
# The asterisk (*) marks your currently active context
# ALWAYS run this before destructive commands to confirm you're in the right cluster
kubectl config get-contexts

# ── SWITCH CONTEXT: Move from dev cluster to staging cluster ───────────────────
# Context names come from your kubeconfig — usually set by your cloud provider CLI
kubectl config use-context gke_myproject_us-east1_staging-cluster

# ── VERIFY CURRENT CONTEXT: Quick sanity check before anything destructive ─────
kubectl config current-context

# ── SET DEFAULT NAMESPACE FOR YOUR SESSION ────────────────────────────────────
# After this you don't need -n payments on every command in this session
# WARNING: revert this when done so you don't accidentally hit the wrong NS later
kubectl config set-context --current --namespace=payments

# ── LIST ALL NAMESPACES IN THE CLUSTER ────────────────────────────────────────
kubectl get namespaces

# ── CREATE A NAMESPACE: Clean team or environment separation ──────────────────
kubectl create namespace fraud-detection

# ── CHECK RESOURCE USAGE ON NODES: Spot CPU/memory pressure ───────────────────
# kubectl top requires metrics-server to be installed in the cluster
kubectl top nodes

# ── CHECK RESOURCE USAGE FOR PODS IN A NAMESPACE ──────────────────────────────
kubectl top pods -n payments

# ── CORDON A NODE: Mark it unschedulable (no new pods) without evicting existing
# Use before maintenance when you don't want to disrupt running workloads yet
kubectl cordon worker-node-2

# ── DRAIN A NODE: Evict all pods gracefully before maintenance ────────────────
# --ignore-daemonsets needed because DaemonSet pods can't be evicted from nodes
# --delete-emptydir-data needed if any pods use emptyDir volumes
kubectl drain worker-node-2 --ignore-daemonsets --delete-emptydir-data

# ── UNCORDON: Bring the node back into rotation after maintenance ─────────────
kubectl uncordon worker-node-2

# ── DELETE A RESOURCE SAFELY: Always specify namespace explicitly ─────────────
# Deleting a deployment deletes it AND all its managed pods
kubectl delete deployment legacy-payment-v1 -n payments
```
```text
# Output of: kubectl config get-contexts
CURRENT   NAME                                     CLUSTER                          AUTHINFO
          docker-desktop                           docker-desktop                   docker-desktop
          gke_myproject_us-east1_dev-cluster       gke_myproject_us-east1_dev       gke_myproject_dev
*         gke_myproject_us-east1_staging-cluster   gke_myproject_us-east1_staging   gke_myproject_staging
```
```text
# Output of: kubectl top nodes
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
worker-node-1   342m         17%    2891Mi          37%
worker-node-2   891m         44%    5102Mi          65%
worker-node-3   104m         5%     1456Mi          19%
```
```text
# Output of: kubectl drain worker-node-2 --ignore-daemonsets --delete-emptydir-data
node/worker-node-2 cordoned
evicting pod payments/payment-processor-7d9f6b-xk2pq
pod/payment-processor-7d9f6b-xk2pq evicted
node/worker-node-2 drained
```
| Aspect | kubectl apply | kubectl create |
|---|---|---|
| Approach | Declarative — merges changes into existing state | Imperative — creates a brand new resource |
| If resource already exists | Updates it with the diff — no error | Fails with AlreadyExists error |
| Use in CI/CD pipelines | Yes — safe to run on every deploy | No — breaks after first deploy |
| Tracks applied state | Yes — via the last-applied-configuration annotation (or managedFields with server-side apply) | No change tracking |
| Best use case | All deployment manifests, all environments | One-off bootstrapping (namespaces, secrets) |
| Dry run support | kubectl apply --dry-run=server -f file.yaml | kubectl create --dry-run=client -f file.yaml |
| Deletion of removed fields | Removes fields not in new manifest (with --prune) | N/A — not designed for updates |
🎯 Key Takeaways
- kubectl describe is not a verbose version of kubectl get — it's a different tool for a different job. get is for scanning state; describe is for diagnosing problems, specifically through the Events section at the bottom.
- kubectl apply is declarative and idempotent — it belongs in every automated pipeline. kubectl create is imperative and will break on the second run. Make apply your default for all manifests.
- kubectl logs --previous is the most important debugging flag for CrashLoopBackOff. Without it you're trying to read the crash report from a container that's already been replaced.
- Always verify your active context with kubectl config current-context before running destructive commands. Running kubectl delete or kubectl drain against production instead of staging is a career-defining mistake that a single habit prevents.
⚠ Common Mistakes to Avoid
- ✕ Mistake 1: Using `kubectl create -f` in automated pipelines — The pipeline succeeds the first time, then throws 'Error from server (AlreadyExists)' on every subsequent run. Replace every `kubectl create -f` in CI/CD scripts with `kubectl apply -f`, which is idempotent and handles both create and update correctly.
- ✕ Mistake 2: Reading logs from the current container on a CrashLoopBackOff pod — `kubectl logs` returns almost nothing because the container died within seconds of starting. Use `kubectl logs --previous` to read the logs from the container run that actually crashed, which is where the real error message lives.
- ✕ Mistake 3: Running destructive commands without checking the active context first — You run `kubectl delete deployment` thinking you're in dev, but you're actually in production because you switched contexts an hour ago and forgot. Always run `kubectl config current-context` before any delete, scale-to-zero, or drain command, and consider installing kubectx + kubens (open-source tools) which add visual context indicators to your prompt.
Interview Questions on This Topic
- Q: What's the difference between kubectl apply and kubectl replace, and when would you choose one over the other?
- Q: A pod is stuck in CrashLoopBackOff and kubectl logs shows nothing. Walk me through your debugging process step by step.
- Q: You need to take a node offline for maintenance without causing downtime. What kubectl commands do you run, in what order, and why does the order matter?
Frequently Asked Questions
What is the difference between kubectl get and kubectl describe?
kubectl get returns a fast, tabular summary of one or more resources — good for scanning cluster state. kubectl describe returns a detailed breakdown of a single resource including its configuration, conditions, and crucially the Events section, which logs what Kubernetes actually tried to do and any errors it encountered. For debugging, always go to describe.
How do I switch between Kubernetes clusters using kubectl?
Clusters are managed through contexts in your kubeconfig file (~/.kube/config). Run kubectl config get-contexts to list all available contexts with an asterisk marking the active one. Use kubectl config use-context to switch. For frequent switchers, the open-source kubectx tool wraps this into a single short command.
Why does kubectl delete pod not fix my crashing pod?
When a pod belongs to a Deployment, Kubernetes automatically recreates it when you delete it — same image, same broken configuration. Deleting the pod just changes its name; the underlying problem remains. To actually fix it you need to update the Deployment spec (the root cause, whether that's an image tag, environment variable, or config map) and apply the change. The Deployment will then perform a rolling update to replace pods with the corrected version.
Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.