
Docker Interview Questions: Deep-Dive Answers for DevOps Roles

📍 Part of: DevOps Interview → Topic 2 of 5
Docker interview questions with real-world answers, gotchas, and battle-tested examples.
⚙️ Intermediate — basic Docker knowledge assumed
In this tutorial, you'll learn
  • Instruction order in a Dockerfile is a performance decision — least-volatile instructions (OS packages, dependency installs) must come before most-volatile ones (source code copy) or you eliminate all caching benefit and rebuild from scratch on every code change.
  • Named volumes survive 'docker rm'; bind mounts are host filesystem paths that survive by definition; tmpfs is RAM-only and is the only Docker storage mechanism that guarantees data never touches disk — critical for ephemeral secrets handling.
  • Docker's built-in DNS lets containers on the same network resolve each other by service name, not IP address. Never hardcode container IPs — they change on every restart. Always reference services by name and let Docker handle the resolution.
Quick Answer
  • Images vs containers: immutable template vs running instance with writable layer
  • Layer caching: instruction order determines build speed
  • Volumes vs bind mounts vs tmpfs: three storage mechanisms with different lifecycle guarantees
  • Networking: bridge/host/none drivers, DNS resolution by service name
  • Multi-stage builds: separate build-time from runtime dependencies
🚨 START HERE
Docker Container Triage Cheat Sheet
First-response commands for common Docker issues in production.
🟡 Container crashed or restarting in a loop.
Immediate Action: Check logs and exit code.
Commands
docker logs --tail 50 <container>
docker inspect <container> --format='{{.State.ExitCode}} {{.State.OOMKilled}}'
Fix Now: Exit code 0 = CMD completed (wrong CMD). Exit code 1 = app error (check logs). Exit code 137 = OOM killed (--memory too low). Exit code 139 = segfault (base image mismatch).
🟡 Container running but not responding to requests.
Immediate Action: Verify port mapping and process inside container.
Commands
docker port <container>
docker exec <container> ps aux
Fix Now: If the process is missing, the CMD failed silently. If the process is running but the port is wrong, check EXPOSE vs the -p mapping. If the app binds to 127.0.0.1, change it to 0.0.0.0.
🟡 Two containers cannot communicate on the same network.
Immediate Action: Verify network membership and DNS resolution.
Commands
docker network inspect <network> | grep -A 5 Containers
docker exec <container> nslookup <target-service>
Fix Now: If DNS fails, the containers are on different networks. Use the service name (not localhost) and the container port (not the host port) in connection strings.
🟠 docker build is extremely slow.
Immediate Action: Check layer caching and build context size.
Commands
docker history <image> | head -20
du -sh . (in build context directory)
Fix Now: Move COPY requirements.txt before COPY . .; add a .dockerignore; combine RUN commands with && to reduce layers.
🟡 Secrets exposed in docker history or docker inspect.
Immediate Action: Identify which layer contains the secret.
Commands
docker history --no-trunc <image> | grep -i 'password\|secret\|key'
docker inspect <image> | grep -i ENV
Fix Now: Remove secrets from ENV/ARG. Use BuildKit --mount=type=secret. Rotate the exposed credentials immediately. Rebuild with --no-cache.
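As a minimal sketch of the BuildKit secret approach (the secret id, file names, and image tag here are placeholder assumptions; BuildKit must be enabled):

```shell
# Pass a secret at build time without baking it into any layer.
# In the Dockerfile, the corresponding step would look like:
#   RUN --mount=type=secret,id=api_key cat /run/secrets/api_key
# The file is mounted only for that RUN step and never appears in
# docker history or docker inspect output.
DOCKER_BUILDKIT=1 docker build \
  --secret id=api_key,src=./api_key.txt \
  -t io.thecodeforge/app:latest .
```

Unlike ARG or ENV, the secret is never part of the image metadata, so there is nothing to leak after the build.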
Production Incident: Production Data Loss — Bind Mount in Docker Compose Wiped Database on Server Migration
A team used a bind mount for PostgreSQL data in production Docker Compose. During a server migration, they copied the container configuration but forgot the host directory. The new server started with an empty mount, PostgreSQL initialized a fresh database, and the team deleted the old server — losing 14 months of production data.
Symptom: After the server migration, the application started successfully but all API responses returned empty results. Database queries against user tables returned 0 rows. The team initially assumed a connection-string issue pointing to the wrong database instance.
Assumption: The team assumed the DATABASE_URL was misconfigured after migration. They verified the connection string, tested DNS resolution, and confirmed the container was running. All checks passed. Second assumption: a migration script had not run. They checked migration logs — all migrations reported as already applied against the empty database.
Root cause: The docker-compose.yml used a bind mount: -v /data/postgres:/var/lib/postgresql/data. During migration, the team copied the docker-compose.yml to the new server but did not copy /data/postgres. PostgreSQL started with an empty /data/postgres directory and initialized a fresh database cluster. The old server was decommissioned and wiped 48 hours later. The data was on that server's /data/postgres directory, not in a Docker-managed volume.
Fix: 1. Replaced the bind mount with a named volume: volumes: - postgres_data:/var/lib/postgresql/data. 2. Added automated daily backups using pg_dump to an S3 bucket. 3. Added a pre-migration checklist that includes volume data verification. 4. Used docker volume inspect to verify data exists before starting the database container. 5. Added a CI check that flags bind mounts in production compose files.
Key Lesson
  • Bind mounts in production couple your data to a specific host path. When the host goes away, the data goes with it.
  • Named volumes are managed by Docker and are easier to back up, migrate, and verify independently of the host filesystem.
  • Always verify data exists in the target volume before starting a database container after migration.
  • Decommissioning a server without verifying data has been migrated is an irreversible mistake.
  • Automated backups are not optional for production databases — they are the last line of defense against data loss.
Production Debug Guide — systematic debugging paths that demonstrate production experience.
  • Container exits immediately after start with no log output — The CMD failed before writing to stdout. Run interactively: docker run -it <image> sh, then execute the CMD manually. Check whether the entrypoint script has a shebang and execute permissions. Check whether the binary exists at the expected path.
  • Two containers on the same network cannot communicate — Verify both are on the same network: docker network inspect <network>. Test DNS resolution: docker exec <container> nslookup <target-service>. Check that the target service binds to 0.0.0.0, not 127.0.0.1. Check whether a firewall or security group is blocking the container port.
  • Container gets OOM-killed repeatedly (exit code 137) — Check memory usage: docker stats <container>. Set a memory limit to prevent host-wide impact: docker run --memory=512m. Profile the application for memory leaks. Check whether the container is processing data larger than expected (large file uploads, unbounded caches).
  • Docker build reinstalls dependencies on every code change — Check Dockerfile layer ordering. Ensure COPY requirements.txt comes before COPY . . Run docker history <image> to see which layers were rebuilt. Add a .dockerignore to exclude .git, node_modules, __pycache__ from the build context.
  • Container works locally but fails in CI or on another machine — Compare base image digests: docker inspect --format='{{.Image}}' <container>. Check for architecture mismatches (ARM vs AMD64). Check for a missing .env file or environment variables. Check Docker Engine version compatibility.
  • docker stop takes 10 seconds and the container is killed — The application does not handle SIGTERM. Check whether CMD uses shell form (CMD npm start) instead of exec form (CMD ["npm", "start"]). Shell form makes /bin/sh PID 1, which does not forward signals. Implement a SIGTERM handler in the application. Increase the stop timeout with --stop-timeout 30.

Docker interview questions probe production instincts, not memorized definitions. Interviewers want to know if you have debugged a container that could not reach its database, optimized a build that took 10 minutes, or tracked down a secret leak in an image layer.

The three pillars that separate strong answers from weak ones: understanding the image/container lifecycle (immutable templates vs ephemeral instances), mastering layer caching (instruction order as a performance decision), and knowing the storage and networking trade-offs (named volumes vs bind mounts, bridge vs host networking).

Common misconceptions that fail interviews: EXPOSE publishes a port (it does not), containers are VMs (they share the host kernel), and docker stop and docker kill are the same (SIGTERM vs SIGKILL). Getting these wrong signals a lack of hands-on production experience.

Core Concepts: Images, Containers, and the Daemon — What Interviewers Really Want to Hear

Most candidates can define an image and a container. What separates a strong answer is explaining the relationship between them.

An image is an immutable, layered snapshot of a filesystem and its metadata — think of it as a read-only template. A container is a running instance of that image, plus a thin writable layer on top. When the container dies, that writable layer is gone. This is why containers are considered ephemeral by design.

The Docker daemon (dockerd) is the long-running background process that does the actual work: building images, managing container lifecycles, handling networking, and talking to registries. The Docker CLI you type commands into is just a client that sends API requests to the daemon over a Unix socket.

Interviewers love asking about layers because they reveal whether you understand caching. Every instruction in a Dockerfile creates a new layer. Layers are cached by their content hash. If layer 3 changes, every layer after it is invalidated and must be rebuilt. This is why instruction order in a Dockerfile matters enormously for build speed — put the things that change least (installing OS packages) at the top, and the things that change most (copying your app source code) near the bottom.

Copy-on-Write (CoW) internals: Docker's storage drivers (overlay2 is the default) use Copy-on-Write. When a container reads a file, it reads directly from the image layer. When it writes, the file is copied to the writable layer and modified there. This means multiple containers sharing the same image share the same read-only layers in memory — only the writable deltas are unique per container. This is why starting 50 containers from the same image is fast and memory-efficient.
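One way to see the writable layer directly is docker diff — a sketch assuming a local Docker daemon (the container name and file path are arbitrary):

```shell
# Start a throwaway container and write a file into its writable layer
docker run -d --name cow-demo alpine:3.19 sleep 300
docker exec cow-demo sh -c 'echo hello > /tmp/demo.txt'

# docker diff lists changes relative to the read-only image layers:
# 'A' = added, 'C' = changed — the new file exists only in the writable layer
docker diff cow-demo

# Removing the container discards that writable layer; the image is untouched
docker rm -f cow-demo
```

Running docker diff against a fresh container returns nothing — proof that all 50 containers started from one image begin with zero unique filesystem state.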

io/thecodeforge/OptimisedNodeApp.dockerfile · DOCKERFILE
# ─── Layer group 1: Base image + metadata (least volatile) ───
FROM node:20-alpine AS base

LABEL maintainer="engineering@thecodeforge.io"

WORKDIR /app

# ─── Layer group 2: Dependencies (layer-caching strategy) ───
# Copy only the manifests so the npm ci layer stays cached until they change
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# ─── Layer group 3: Application source (most volatile — last) ───
COPY src/ ./src/

EXPOSE 3000

# Exec form ensures the node process is PID 1 and receives SIGTERM
CMD ["node", "src/server.js"]
▶ Output
Step 5/8 : RUN npm ci --omit=dev
---> Using cache
Successfully built a1b2c3d4e5f6
Successfully tagged io.thecodeforge/node-app:latest
Mental Model
Images as Git Commits, Containers as Working Directories
Why does changing one Dockerfile instruction invalidate all subsequent layers?
  • Each layer's content hash depends on the layer below it. A changed instruction produces a different hash.
  • Docker caches by hash. If the hash changes, the cache is invalidated for that layer and all layers above.
  • This cascading invalidation is why instruction order matters — put stable instructions first.
  • The layer cache is local to the build machine. CI runners without warm caches rebuild everything.
📊 Production Insight
The Copy-on-Write mechanism has a performance implication for write-heavy containers. Every write copies the entire file from the image layer to the writable layer before modifying it. For large files that are frequently written (databases, log files), this adds latency. Use named volumes for write-heavy data — volumes bypass CoW and write directly to the host filesystem.
🎯 Key Takeaway
Images are immutable templates. Containers are ephemeral instances with a writable layer. The daemon is the engine — the CLI is just a client. Layer caching is a build speed optimization driven by instruction order. Copy-on-Write means shared image layers are memory-efficient but write-heavy workloads need volumes.
Image vs Container Decisions in Production
If: Need to deploy the same app to multiple environments
Use: Build one image, run multiple containers with different environment variables
If: Need to persist data across container restarts
Use: Named volumes — the image is immutable, volumes are mutable
If: Need to debug a running container
Use: docker exec to shell in. Never SSH into a container. Never modify the running container for permanent fixes.
If: Need to roll back a deployment
Use: Deploy the previous image tag. Containers are disposable — images are the source of truth.

Volumes vs Bind Mounts vs tmpfs — The Storage Question That Trips People Up

Data persistence is one of Docker's most misunderstood areas, and interviewers use it to separate people who've read the docs from people who've debugged production.

Containers are ephemeral. The writable layer that gets created when a container starts is destroyed when the container is removed. If you write a database file into that layer, you lose it the moment the container exits. The three storage mechanisms Docker offers each solve this differently.

A named volume is managed entirely by Docker. Docker decides where on the host filesystem the data lives (usually /var/lib/docker/volumes/). Your container just sees a directory. Volumes survive container deletion, can be shared between containers, and work across platforms. Use volumes for anything you care about keeping — databases, uploads, generated certificates.

A bind mount maps a specific host path into the container. You control the path. This is powerful for local development — you mount your source code directory into the container and edits you make on the host appear instantly inside the container, enabling hot-reload workflows. But bind mounts are tightly coupled to host filesystem layout, which makes them fragile in production.
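A typical development sketch of that hot-reload workflow (the paths, image tag, and the nodemon watcher are illustrative assumptions, not from this article):

```shell
# Bind-mount the current source directory into the container so host edits
# appear instantly inside it — no rebuild needed between changes.
docker run --rm -it \
  -v "$(pwd)":/app \
  -w /app \
  node:20-alpine \
  npx nodemon src/server.js
```

The same -v flag that makes this convenient in development is exactly what couples the container to one machine's filesystem layout in production.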

tmpfs mounts are stored in the host's memory only. The moment the container stops, the data is gone. Use tmpfs for sensitive temporary data you explicitly do not want written to disk — think secrets, session tokens, or scratch space for cryptographic operations.

Failure scenario — bind mount in production causes data loss: A team ran PostgreSQL in Docker Compose with a bind mount: -v /data/postgres:/var/lib/postgresql/data. During a server migration, they copied the container configuration but not the host directory. The new server started with an empty mount, PostgreSQL initialized a fresh database, and the team deleted the old server. All production data was lost. The fix: use named volumes (docker volume create) which are managed by Docker and can be backed up and migrated independently of the host filesystem.

Performance trade-off: On Linux, both named volumes and bind mounts bypass the CoW storage driver and write to the host filesystem directly, so their raw I/O performance is nearly identical. The gap appears on Docker Desktop (macOS and Windows), where bind mounts cross a VM file-sharing boundary and can be dramatically slower than named volumes, which live inside the VM. Either way, the portability and management benefits of named volumes make them the production choice.

io/thecodeforge/storage_setup.sh · BASH
#!/bin/bash

# Production Pattern: Create a managed volume for the database
docker volume create thecodeforge_db_data

docker run -d \
  --name forge-db \
  -v thecodeforge_db_data:/var/lib/postgresql/data \
  postgres:16-alpine

# Security Pattern: Using tmpfs for API keys in memory
docker run -d \
  --name forge-sec-processor \
  --mount type=tmpfs,destination=/app/secrets,tmpfs-size=64m \
  io.thecodeforge/processor:latest
▶ Output
$ docker volume ls
DRIVER VOLUME NAME
local thecodeforge_db_data
Mental Model
Storage Types as Lease Agreements
Why should you never use bind mounts in production?
  • Bind mounts couple the container to a specific host path — breaks portability across machines.
  • If the host directory does not exist, Docker creates it as root — permission issues on subsequent runs.
  • Host filesystem permissions can conflict with container user permissions.
  • Server migration requires manually copying the host directory — easy to forget, impossible to recover from.
  • Named volumes are managed by Docker, portable, and can be backed up with docker volume commands.
📊 Production Insight
The bind-mount-in-production anti-pattern is the most common storage mistake in Docker deployments. Teams use bind mounts during development for hot-reload convenience, then deploy the same compose file to production without changing the volume configuration. When the server is replaced during infrastructure migration, the data is left behind on the old host. Named volumes eliminate this risk by decoupling data from the host filesystem.
🎯 Key Takeaway
Named volumes for production — they survive container removal, are portable across hosts, and are managed by Docker. Bind mounts for development — they enable hot-reload but break portability. tmpfs for secrets — RAM-only, never touches disk. The bind-mount-in-production mistake is the single most common cause of data loss in Docker deployments.
Volume Type Selection by Environment
If: Production database or persistent state
Use: Named volumes: docker volume create. Back up with docker run --rm -v vol:/data -v $(pwd):/backup alpine tar czf /backup/backup.tar.gz /data
If: Development — live code reloading
Use: Bind mounts: -v $(pwd):/app. Fast iteration, no rebuild needed.
If: Sensitive temporary data (secrets, session tokens)
Use: tmpfs mounts: --tmpfs /tmp/secrets:size=10m. Data never touches disk.
If: CI test runs — need clean state every time
Use: docker compose down -v to destroy named volumes between runs.
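The backup one-liner can be extended into a full backup-and-restore path — a sketch reusing the volume name from the earlier example (the archive name and restored volume name are placeholders):

```shell
# Back up a named volume via a throwaway container that mounts both the
# volume and the current host directory
docker run --rm \
  -v thecodeforge_db_data:/data \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/db_backup.tar.gz -C /data .

# Restore into a fresh volume — e.g. on a new host after migration
docker volume create restored_db_data
docker run --rm \
  -v restored_db_data:/data \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/db_backup.tar.gz -C /data
```

Because the data lives in a Docker-managed volume rather than an arbitrary host path, this procedure is identical on every host — the failure mode from the incident above cannot occur.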

Docker Networking Deep Dive — How Containers Actually Talk to Each Other

Networking is where many Docker users hit a wall. The mental model that unlocks it: each Docker network is a private virtual switch. Containers attached to the same switch can talk to each other by container name. Containers on different switches can't reach each other unless you explicitly connect them or use a shared network.

Docker ships with three built-in network drivers. The bridge driver (default) creates a private network on the host. Containers on the same bridge network can communicate with each other using DNS — Docker has a built-in DNS server that resolves container names and service names automatically. This is how a Node.js API container can connect to a Postgres container using the hostname 'postgres' rather than an IP address that changes every restart.

The host driver removes the network namespace entirely. The container uses the host's network stack directly. This gives maximum network performance (no virtual switch overhead) but destroys isolation — the container can see all host ports.

The none driver disables networking completely. The container has only a loopback interface. Useful for running batch jobs that must be air-gapped, or for testing how your app behaves with no network access.

Connection refused debugging: When two containers cannot communicate, the most common causes are: (1) they are on different networks — verify with docker network inspect, (2) the target service binds to 127.0.0.1 instead of 0.0.0.0 — localhost inside a container is the container itself, not the host, (3) the connection string uses the host-mapped port instead of the container port — container-to-container communication uses the container port directly, (4) a healthcheck or depends_on race condition — the target service is not ready yet.
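Those four causes map onto a short triage sequence — a sketch in which the network, service, and container names are placeholders:

```shell
# 1. Are both containers attached to the same network?
docker network inspect forge_internal \
  --format '{{range .Containers}}{{.Name}} {{end}}'

# 2. Does DNS resolve the target service from the client container?
docker exec api nslookup database

# 3. Is the target bound to 0.0.0.0 rather than 127.0.0.1?
#    (use ss -tln if netstat is absent from the image)
docker exec database netstat -tln

# 4. Is the service actually ready? Check its healthcheck status.
docker inspect database --format '{{.State.Health.Status}}'
```

Working through the checks in this order rules out each cause before moving to the next, instead of guessing at connection strings.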

Performance trade-off — bridge vs host networking: Bridge networking adds a virtual Ethernet pair and iptables NAT rules for each container. This adds approximately 10-50 microseconds of latency per packet compared to host networking. For latency-sensitive workloads (high-frequency trading, real-time gaming), host networking eliminates this overhead. But it destroys port isolation — two containers cannot bind to the same port on host networking.

io/thecodeforge/docker-compose.yml · YAML
version: '3.9'

services:
  api:
    image: io.thecodeforge/api:latest
    networks:
      - forge_internal
    environment:
      - DB_HOST=database

  database:
    image: postgres:16-alpine
    networks:
      - forge_internal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

networks:
  forge_internal:
    driver: bridge
▶ Output
$ docker compose up -d
[+] Running 3/3
✔ Network forge_internal Created
✔ Container database Started
✔ Container api Started
Mental Model
Docker Networks as Virtual LANs
Why can containers resolve each other by service name without any DNS configuration?
  • Docker's embedded DNS server at 127.0.0.11 resolves container names to internal IPs.
  • Each container's /etc/resolv.conf points to this embedded DNS server automatically.
  • The DNS server is network-aware — it only resolves names for containers on the same network.
  • This is why containers on different networks cannot resolve each other — the DNS server enforces network isolation.
📊 Production Insight
Never publish database ports to the host (ports: - '5432:5432') in any environment beyond local development. Internal services should live on private networks with no published ports. Only edge services (APIs, reverse proxies) should have published ports. This is a security boundary — if the database port is published, anyone who can reach the host can attempt to connect directly, bypassing application-level access controls.
🎯 Key Takeaway
Docker networks are virtual LANs. Bridge networking provides isolation with DNS resolution by service name. Host networking eliminates overhead but destroys isolation. Never publish database ports to the host. The embedded DNS server at 127.0.0.11 is network-aware — containers on different networks cannot resolve each other.
Network Driver Selection
If: Standard application deployment with multiple services
Use: Bridge networking — isolation with DNS resolution by service name
If: Latency-sensitive workload (trading, gaming, real-time)
Use: Host networking — eliminates virtual switch overhead but loses port isolation
If: Batch job that must be air-gapped or security-scanned
Use: None networking — no network access at all
If: Multiple tiers that should not see each other (frontend, API, database)
Use: Multiple bridge networks — assign services to only the networks they need

Multi-Stage Builds and Image Size — The Optimisation Question That Defines Seniors

A production Docker image should contain only what is needed to run the application. Not the compiler. Not the test framework. Not the build tools. Most candidates understand single-stage builds. Seniors reach for multi-stage builds by default.

The idea is simple: use one stage to build your app (with all the heavyweight tools that requires), then start fresh from a minimal base image and copy only the compiled output. The final image has no knowledge of how it was built — just what needs to run.

For a Go application this is dramatic: the builder stage might pull in the entire Go toolchain (hundreds of MB), but the final stage starts from scratch (literally 'FROM scratch') and contains only the statically compiled binary — often under 10MB total image size.

Security benefit: Every tool in your production image is an attack surface. gcc, make, curl, wget — if an attacker gets shell access to your container, these tools let them compile exploits, download payloads, and pivot. A distroless or Alpine runtime image with no build tools gives an attacker almost nothing to work with.

Deployment speed impact: Container images must be pulled to every node before they can run. A 1.2GB image takes 30-60 seconds to pull over a fast network. A 12MB image pulls in under 1 second. During rolling deployments across 20 nodes, that difference is minutes of deployment time.

Why deleting files in a RUN command does not shrink the image: Docker layers are additive. If you RUN apt-get install gcc in one layer and RUN apt-get remove gcc in the next, the gcc files still exist in the first layer — the image size does not decrease. The second layer just marks them as deleted. Multi-stage builds solve this by starting a fresh stage — the runtime stage never contains build tools in any layer.
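You can verify this with docker history, which reports per-layer sizes (a sketch; the image name is a placeholder):

```shell
# Each row is one layer. A layer created by 'apt-get install -y gcc' keeps
# its full size even if a later 'apt-get remove -y gcc' layer marks the
# files deleted — the bytes are still shipped with the image.
docker history --format '{{.Size}}\t{{.CreatedBy}}' myimage:latest
```

If the install layer shows hundreds of MB, no later instruction in the same Dockerfile can reclaim it — only a fresh stage can.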

io/thecodeforge/multistage.dockerfile · DOCKERFILE
# ─── STAGE 1: Build Environment ───
FROM golang:1.22-alpine AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app/forge-binary main.go

# ─── STAGE 2: Production Distroless ───
# Using distroless for security and minimal footprint
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/forge-binary /forge-binary

USER nonroot:nonroot
ENTRYPOINT ["/forge-binary"]
▶ Output
REPOSITORY TAG IMAGE ID SIZE
io.thecodeforge/app latest d8f3e2a1b0c9 12.4MB
Mental Model
Multi-Stage Builds as a Factory Conveyor Belt
Why not just delete build tools in a RUN command at the end of a single-stage Dockerfile?
  • Docker layers are additive. A file added then deleted in a later layer still occupies space in the earlier layer.
  • RUN apt-get install gcc && apt-get remove gcc still has gcc in the install layer.
  • Multi-stage builds start fresh — the runtime stage never contains build tools in any layer.
  • This is the only way to genuinely reduce image size, not just hide files from the filesystem.
📊 Production Insight
Multi-stage builds are not optional for production. A single-stage image with build tools has a larger attack surface, slower pull times, and higher storage costs. The security benefit alone justifies the effort — every unnecessary binary in your production image is a tool an attacker can use post-compromise. Scan your images with Trivy or Docker Scout to verify your runtime image contains no build tools.
🎯 Key Takeaway
Multi-stage builds separate build-time and runtime dependencies. The builder stage compiles everything. The runtime stage copies only the artifact. This reduces image size by 80-95%, shrinks attack surface, and speeds up deployments. Deleting files in a RUN command does not shrink the image — layers are additive. Only multi-stage builds genuinely reduce image size.
Image Size Optimization Strategy
If: Compiled language (Go, Rust, Java)
Use: Multi-stage build with FROM scratch or distroless for the runtime stage
If: Interpreted language (Python, Node.js)
Use: Multi-stage build with a slim/alpine base for the runtime stage. Copy only installed packages, not build tools.
If: Image is > 500MB
Use: Audit with dive (github.com/wagoodman/dive) to find bloated layers. Use a .dockerignore. Combine RUN commands.
If: Need maximum security (minimal attack surface)
Use: Distroless images — no shell, no package manager, no utilities. Just your binary and its runtime dependencies.
🗂 Docker Storage Mechanisms Compared
Named volumes, bind mounts, and tmpfs mounts — lifecycle, portability, and use cases.
Aspect | Docker Volume | Bind Mount | tmpfs Mount
Managed by | Docker daemon | You (host path) | Docker daemon (RAM)
Data persists after container stop | Yes | Yes (it's a host file) | No — gone immediately
Data persists after docker rm | Yes | Yes (it's a host file) | N/A — already gone
Best use case | Production databases, uploads | Dev hot-reload, config injection | Secrets, session tokens, scratch space
Portability | High — works on any Docker host | Low — tied to host path structure | High — host-agnostic
Performance | Good (slight overhead) | Best (direct host I/O) | Excellent (RAM speed)
Visible to 'docker volume ls' | Yes | No | No
Works in Docker Compose | Yes — named volume syntax | Yes — relative path syntax | Yes — tmpfs key

🎯 Key Takeaways

  • Instruction order in a Dockerfile is a performance decision — least-volatile instructions (OS packages, dependency installs) must come before most-volatile ones (source code copy) or you eliminate all caching benefit and rebuild from scratch on every code change.
  • Named volumes survive 'docker rm'; bind mounts are host filesystem paths that survive by definition; tmpfs is RAM-only and is the only Docker storage mechanism that guarantees data never touches disk — critical for ephemeral secrets handling.
  • Docker's built-in DNS lets containers on the same network resolve each other by service name, not IP address. Never hardcode container IPs — they change on every restart. Always reference services by name and let Docker handle the resolution.
  • Multi-stage builds are not an optimisation you do later — they're the default pattern for any compiled language. Shipping a Go or Java app in a single-stage image that includes the compiler is equivalent to shipping a kitchen with every meal you deliver.
  • Shell form CMD wraps your process in /bin/sh which does not forward SIGTERM. Your app gets SIGKILL after 10 seconds. Always use exec form (array syntax) for production containers.
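When a wrapper script is unavoidable (environment setup, migrations), the shell exec builtin preserves exec-form behavior — a sketch of a hypothetical entrypoint.sh, not from this article:

```shell
#!/bin/sh
# entrypoint.sh — do setup work, then hand PID 1 to the app with 'exec'.
# 'exec' replaces this shell with the node process, so the SIGTERM sent by
# docker stop reaches the application directly instead of stopping at a
# /bin/sh that never forwards it.
set -e

# (any pre-start setup — config templating, migrations — would go here)

exec node src/server.js
```

Reference it from the Dockerfile as ENTRYPOINT ["./entrypoint.sh"] — still exec form, so no lingering shell ever becomes PID 1.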

⚠ Common Mistakes to Avoid

    Running containers as root unnecessarily — Your app inside the container runs as root by default, meaning a vulnerability in the app gives an attacker root access inside the container. Fix it by adding USER instructions in your Dockerfile: create a non-root user with RUN addgroup -S appgroup && adduser -S appuser -G appgroup and switch to it with USER appuser before the CMD. Most application workloads need no root privileges at runtime whatsoever.

    Using shell form for CMD and ENTRYPOINT — Writing CMD npm start instead of CMD ["npm", "start"] wraps your process in /bin/sh -c, making sh the PID 1 process. When you run docker stop, Docker sends SIGTERM to PID 1 (sh), not to your app. sh doesn't forward signals, so your app gets a hard SIGKILL after the grace period (10 seconds by default), potentially corrupting in-flight requests. Always use exec form (array syntax) so your app is PID 1 and receives signals directly.

    Copying the entire build context including node_modules or .git — Running COPY . . without a .dockerignore file copies your entire local directory, including node_modules (potentially GBs), .git history, .env files with secrets, and local log files. This bloats the image, leaks sensitive data, and slows every build because Docker hashes the entire context. Fix: create a .dockerignore file at the project root listing node_modules, .git, .env, *.log, dist, and any other non-essential paths. It works exactly like .gitignore and is not optional in real projects.

    Using bind mounts for production databases
    Symptom

    data is lost during server migration because the host directory was not copied.

    Fix

    use named volumes (docker volume create) for production data. Named volumes are managed by Docker, survive container removal, and can be backed up and migrated independently of the host filesystem.

    Publishing database ports to the host
    Symptom

    database is accessible from any machine that can reach the host, bypassing application-level access controls.

    Fix

    never use ports: - '5432:5432' for internal services. Keep databases on private Docker networks with no published ports. Only edge services should have published ports.
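    A Compose sketch of the edge-only publishing pattern (the my-api:1.0 and postgres:16 images are placeholders):

```yaml
services:
  api:
    image: my-api:1.0
    ports:
      - "8080:8080"          # only the edge service publishes a port
    networks: [backend]
  db:
    image: postgres:16
    networks: [backend]      # reachable as "db" from api via Docker DNS; no ports: block

networks:
  backend: {}
```

    The api container connects to the database at db:5432 over the private network; nothing outside the host's Docker networks can reach the database at all.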

    Using unversioned or :latest tags in FROM
    Symptom

    builds break silently when the base image updates.

    Fix

    pin exact version and OS codename: python:3.12.3-slim-bookworm, not python:3.12-slim. Consider digest pinning for maximum safety.
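    The three levels of pinning side by side (the digest value below is a placeholder, not a real hash):

```dockerfile
# Loose: the image behind this tag can change underneath you
FROM python:3.12-slim

# Pinned to an exact patch version and OS codename
FROM python:3.12.3-slim-bookworm

# Maximum safety: pin the content digest (placeholder shown)
FROM python:3.12.3-slim-bookworm@sha256:<digest>
```

    Digest pinning is immutable even if the tag is re-pushed, at the cost of requiring an explicit update step to pick up security patches.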

Interview Questions on This Topic

  • Q: Explain the difference between CMD and ENTRYPOINT in a Dockerfile. How do they interact when used together in a production-grade container?
  • Q: Given a high-traffic microservice architecture, how would you troubleshoot a 'Connection Refused' error between two containers on the same custom bridge network?
  • Q: Your CI pipeline is pulling a 1.2GB Docker image on every run and build times are unacceptable. Walk me through every optimisation you'd apply — to the image itself, the Dockerfile, and the registry setup.
  • Q: Explain the Copy-on-Write (CoW) strategy used by Docker's storage drivers and how it impacts container performance.
  • Q: What is the difference between docker stop and docker kill? Why does this matter for zero-downtime deployments?
  • Q: A container is running but you cannot reach it from the host on the mapped port. Walk me through your debugging process.
  • Q: You have a Dockerfile that installs gcc, compiles a C extension, then runs apt-get remove gcc. The image is still 800MB. Why, and how do you fix it?
  • Q: How would you handle secrets (API keys, database passwords) in a Dockerized application across local development, CI, and production?

Frequently Asked Questions

What is the difference between a Docker image and a Docker container?

An image is a read-only, layered template — think of it as a frozen snapshot of a filesystem and its metadata. A container is a live, running instance of that image with a thin writable layer added on top. You can create dozens of containers from a single image simultaneously, each isolated from the others. When a container is deleted, its writable layer is gone, but the original image is untouched.

How do you handle secrets like passwords or API keys in Docker containers?

Secrets should never be baked into Docker images. For local dev, use environment variables or .env files (in .gitignore). In production, use orchestration-level features like Docker Secrets or Kubernetes Secrets. Within a standalone container, mounting a tmpfs volume is a secure way to pass sensitive data into memory so it never persists to the host disk. BuildKit --mount=type=secret handles build-time secrets without leaving them in image layers.
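The tmpfs approach can be sketched in Compose (the my-app:1.0 image and mount path are illustrative assumptions):

```yaml
services:
  app:
    image: my-app:1.0
    tmpfs:
      - /run/app-secrets:mode=0700   # RAM-only mount; contents never touch disk
```

Anything the entrypoint writes under /run/app-secrets lives only in memory and vanishes when the container stops.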

Can you run a container without the Docker Daemon?

Yes. While the Docker engine relies on the daemon, the actual container execution is handled by lower-level runtimes like containerd or runc. Tools like Podman provide a daemonless alternative that implements the same OCI standards as Docker, allowing you to run containers as a standard user process without a background service.

What does exit code 137 mean for a Docker container?

Exit code 137 means the container was killed by signal 9 (SIGKILL), typically by the Linux OOM killer. The container exceeded its memory limit or the host ran out of memory. Debug with docker inspect to check OOMKilled status, then either increase the memory limit (--memory flag) or fix the memory leak in your application.
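The 137 value is not arbitrary: exit codes above 128 encode "killed by signal (code - 128)". A small sketch (the decode_exit_code helper is hypothetical, not part of the Docker CLI) makes the arithmetic concrete:

```python
import signal

def decode_exit_code(code: int) -> str:
    """Decode a container exit code into a human-readable cause."""
    if code > 128:
        sig = code - 128
        name = signal.Signals(sig).name  # e.g. 9 -> SIGKILL
        return f"killed by signal {sig} ({name})"
    return f"exited normally with status {code}"

print(decode_exit_code(137))  # killed by signal 9 (SIGKILL) — OOM killer's signature
print(decode_exit_code(143))  # killed by signal 15 (SIGTERM) — a graceful docker stop
```

The same rule explains exit code 143 (128 + 15), which you will see when a container shuts down cleanly in response to docker stop.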

What is the difference between docker stop and docker kill?

docker stop sends SIGTERM (graceful shutdown) and waits 10 seconds (configurable with --stop-timeout), then sends SIGKILL if the process has not exited. docker kill sends SIGKILL immediately with no grace period. In production, always use docker stop to give your application time to flush logs, close database connections, and drain in-flight requests. docker kill is for stuck containers that do not respond to SIGTERM.
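For the graceful path to work, your application must actually handle SIGTERM. A minimal POSIX-only sketch of the idea in Python (the handler body is where a real app would flush logs and drain requests):

```python
import os
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    """In a real service: flush logs, close DB connections, drain in-flight requests."""
    global shutting_down
    shutting_down = True

# Register the handler, then deliver SIGTERM to ourselves to simulate `docker stop`.
signal.signal(signal.SIGTERM, handle_sigterm)
os.kill(os.getpid(), signal.SIGTERM)

print(shutting_down)  # True — the process caught the signal instead of dying
```

An app without such a handler, or one hidden behind a shell-form CMD, never sees the SIGTERM and is eventually SIGKILLed when the grace period expires.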

Naren — Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
