
Docker Security Best Practices: Hardening Containers in Production

📍 Part of: Docker → Topic 11 of 18
Docker security best practices for production: non-root users, image scanning, secrets management, seccomp profiles, and runtime defense — deep, battle-tested guide.
🔥 Advanced — solid DevOps foundation required
In this tutorial, you'll learn
  • Docker containers run as root by default. Add USER nonroot to every production Dockerfile. Pair with --security-opt=no-new-privileges and --cap-drop=ALL.
  • The --privileged flag disables ALL container isolation. Never use it in production. Use --cap-drop=ALL and --cap-add for specific capabilities.
  • Never bake secrets into Docker image layers. Use BuildKit --mount=type=secret for build-time and secrets managers for runtime.
Quick Answer
  • Image layer: minimal base, non-root user, no secrets in layers, scanned for CVEs
  • Runtime layer: read-only filesystem, seccomp profile, no --privileged, resource limits
  • Network layer: no published DB ports, custom bridge networks, TLS everywhere
  • Secrets layer: never in ENV/ARG/COPY, use secrets managers or tmpfs mounts
🚨 START HERE
Docker Security Triage Cheat Sheet
First-response commands when a Docker security issue is suspected.
🟡 Suspected container breakout or host compromise.
Immediate Action: Check for unauthorized containers and socket access.
Commands
docker ps -a --format '{{.Names}} {{.Image}} {{.CreatedAt}}' | sort -k3
docker inspect --format='{{.Name}} Mounts={{.Mounts}}' $(docker ps -q) | grep docker.sock
Fix Now: Stop unauthorized containers. If docker.sock is mounted in an unexpected container, treat it as a breach — rotate credentials, audit the host.
🟡 Container running as root in production.
Immediate Action: Audit user configuration across all containers.
Commands
for c in $(docker ps -q); do docker inspect --format='{{.Name}} user={{.Config.User}}' $c; done
docker inspect --format='{{.Name}} Privileged={{.HostConfig.Privileged}} CapAdd={{.HostConfig.CapAdd}}' $(docker ps -q)
Fix Now: Add a USER instruction to the Dockerfile. Rebuild and redeploy. Drop all capabilities: --cap-drop=ALL --cap-add=NET_BIND_SERVICE.
🟡 Image with known vulnerabilities deployed to production.
Immediate Action: Scan the deployed image for CVEs.
Commands
trivy image --severity CRITICAL,HIGH <image>
docker history --no-trunc <image> | grep -i 'secret\|password\|key'
Fix Now: If critical CVEs exist, patch the base image or dependency. If secrets are found, rotate credentials immediately. Add trivy to the CI pipeline.
🟡 Docker daemon exposed to the network without TLS.
Immediate Action: Check if the daemon is listening on TCP port 2375.
Commands
ss -tlnp | grep -E '2375|2376'
curl -s http://<host>:2375/version
Fix Now: If port 2375 responds, block it with iptables immediately. Configure TLS on port 2376. Set "hosts": ["unix:///var/run/docker.sock"] in daemon.json to disable TCP.
🟡 Container has a writable root filesystem in production.
Immediate Action: Check if containers are using the --read-only flag.
Commands
docker inspect --format='{{.Name}} ReadOnly={{.HostConfig.ReadonlyRootfs}}' $(docker ps -q)
docker inspect --format='{{.Name}} Tmpfs={{.HostConfig.Tmpfs}}' $(docker ps -q)
Fix Now: Add --read-only to the container run command. Mount tmpfs for writable paths: --tmpfs /tmp:size=64m --tmpfs /var/run:size=16m.
🟡 Docker daemon configuration is insecure.
Immediate Action: Audit daemon.json for dangerous settings.
Commands
cat /etc/docker/daemon.json
docker info --format '{{json .SecurityOptions}}'
Fix Now: Ensure userland-proxy=false, icc=false, userns-remap enabled, and live-restore=true. Remove any insecure-registries entries for production hosts.
Production Incident: Cryptominer Deployed via Exposed Docker Daemon Socket — Full Host Compromise in 3 Minutes
A monitoring container with access to /var/run/docker.sock was compromised through an RCE vulnerability. The attacker used the socket to create a new privileged container, mounted the host filesystem, installed a cryptominer, and added an SSH key for persistence — all within 3 minutes.
Symptom: Host CPU utilization spiked to 100% overnight. The ops team noticed the spike during morning standup. Initial investigation showed a container named 'system-monitor' running at 98% CPU. The team did not recognize this container — it was not in their docker-compose.yml or deployment manifests.
Assumption: The team assumed a runaway process in one of their application containers. They ran docker stats and identified the 'system-monitor' container. They stopped it, but a new container named 'health-checker' appeared within seconds. They stopped that one too, and a third appeared. The team assumed a Docker daemon bug was causing ghost containers.
Root cause: A monitoring container (Prometheus node-exporter) was deployed with -v /var/run/docker.sock:/var/run/docker.sock to enable Docker metrics collection. The node-exporter had an unpatched RCE vulnerability (CVE-2024-XXXX). The attacker exploited the RCE to gain shell access inside the container, then used the mounted Docker socket to create new containers with --privileged and --pid=host. The privileged container mounted the host filesystem at /host, installed a cryptominer, added an SSH key to /host/root/.ssh/authorized_keys, and modified /host/etc/crontab for persistence. The 'system-monitor' container was the attacker's cryptominer.
Fix:
1. Removed the Docker socket mount from the monitoring container; switched to cAdvisor, which collects container metrics for Prometheus without socket access.
2. Rebuilt all containers with USER nonroot.
3. Enabled Docker daemon TLS with client certificate authentication.
4. Added a Falco runtime security rule that alerts on any new container creation outside of the CI/CD pipeline.
5. Rotated all SSH keys on the compromised host.
6. Implemented Pod Security Standards (restricted) in Kubernetes to prevent privileged containers.
Key Lesson
  • Never mount /var/run/docker.sock into a container unless absolutely necessary. If you must, use a socket proxy that restricts the API calls the container can make.
  • A container with Docker socket access is equivalent to root access on the host. Treat it with the same security rigor as SSH access.
  • Cryptomining is the most common payload for Docker socket exploitation — the attacker wants compute, not data.
  • Runtime security monitoring (Falco, Sysdig) detects anomalous container creation. Without it, the attack is invisible until the CPU spike is noticed.
  • Patch all containers, including monitoring and utility containers. They are part of your attack surface.
Production Debug Guide
From exposed sockets to root containers — systematic security audit paths.
Docker daemon socket is accessible from inside a container. Check if any container has /var/run/docker.sock mounted: docker inspect --format='{{.Mounts}}' <container>. If found, assess whether the container truly needs it. If yes, replace the raw socket with a socket proxy (Tecnativa/docker-socket-proxy) that restricts API access. If no, remove the mount immediately.
Containers are running as root in production. Audit all running containers: for c in $(docker ps -q); do docker inspect --format='{{.Name}} {{.Config.User}}' $c; done. Any container with an empty User field is running as root. Add USER instructions to Dockerfiles and rebuild.
Image has known CVEs that were not caught before deployment. Scan the image with Trivy: trivy image <image-name>. Review critical and high CVEs. Check whether the CVEs are in the base image (update the base) or in application dependencies (update the dependency). Integrate scanning into CI to prevent future deployments of vulnerable images.
Container has the --privileged flag or excessive capabilities. Audit all containers: docker inspect --format='{{.Name}} Privileged={{.HostConfig.Privileged}} CapAdd={{.HostConfig.CapAdd}}' $(docker ps -q). Remove --privileged. Drop all capabilities with --cap-drop=ALL and add back only what is needed with --cap-add.
Secrets found in image layers. Inspect image history: docker history --no-trunc <image>. Search for secrets: docker save <image> | tar -xO 2>/dev/null | grep -i 'password\|secret\|key\|token'. Rotate exposed credentials immediately. Rebuild with a .dockerignore excluding all secret files. Use BuildKit --mount=type=secret for build-time secrets.
Docker daemon is accessible from the network without TLS. Check if the daemon is listening on a TCP port: ss -tlnp | grep 2375. If port 2375 is open, the daemon is exposed without authentication. Immediately restrict access via firewall. Configure TLS with client certificates on port 2376. Never expose port 2375 to the public internet.
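The triage cheat sheet's user={{.Config.User}} audit loop emits one line per container; a tiny filter makes the root-user check scriptable for cron or CI. A sketch, post-processing that output (the helper name is ours):

```shell
#!/bin/sh
# Filter audit lines of the form "<name> user=<value>" down to the
# containers whose user= field is empty, i.e. those running as root.
# (Helper name is illustrative; pipe the inspect loop's output into it.)
flag_root_containers() {
  grep -E 'user=$' "$@"
}

# Canned example (container names are illustrative):
printf '/api user=appuser\n/worker user=\n/db user=postgres\n' \
  | flag_root_containers
# Prints: /worker user=
```

In production the pipeline is simply the audit loop piped into the filter, with a non-empty result failing the job.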

Docker containers share the host kernel. A misconfigured container can escape its namespace, read host secrets, or pivot laterally across your cluster. The attack surface spans the image build pipeline, the runtime configuration, the network, the daemon itself, and your secrets management. Miss one layer and the rest does not matter.

Docker's defaults are built for developer convenience, not production hardening. Containers run as root by default. The seccomp profile blocks ~44 of 300+ syscalls but allows the rest. The daemon socket has no authentication by default. Understanding these defaults and how to override them is the foundation of Docker security.

Common misconceptions: containers are not VMs (they share the kernel, so kernel vulnerabilities affect all containers), --privileged is not 'a little extra access' (it disables all isolation), and deleting a secret from a Dockerfile layer does not remove it from the image (layers are additive). Every one of these misconceptions has caused a production breach.

Non-Root Containers — The Single Most Important Security Practice

Docker containers run as root by default. This means the application process inside the container has UID 0 — the same UID as root on the host. If the container's namespace isolation is broken (via a kernel vulnerability), the attacker gains root access to the host.

Running as a non-root user does not prevent container escape, but it limits the damage. A process running as UID 1000 inside the container, even after escaping the namespace, runs as UID 1000 on the host — an unprivileged user who cannot modify system files, install packages, or access other users' data.

The fix is a two-line addition to the Dockerfile:
  • RUN useradd --create-home appuser
  • USER appuser

The USER instruction must come before CMD/ENTRYPOINT. Any RUN instructions after USER execute as the non-root user, which may cause permission errors for operations that require root (apt-get install, chown). The common pattern is to perform all root operations first, then switch to the non-root user at the end.

Failure scenario — root container exploited via RCE: A web application container running as root had an RCE vulnerability in its image upload endpoint. The attacker uploaded a webshell and gained shell access as root inside the container. Because the container ran as root, the attacker could read /etc/shadow (if mounted), install tools (curl, ncat), and attempt container escape. If the container had run as a non-root user, the attacker would have been UID 1000 — unable to install packages, read protected files, or escalate privileges on the host.

io/thecodeforge/secure-app.dockerfile · DOCKERFILE
# ── Secure Dockerfile: non-root user, minimal base, no secrets ──
FROM python:3.12-slim-bookworm

WORKDIR /app

# All root operations first
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Create non-root user
RUN groupadd --gid 1000 appgroup && \
    useradd --uid 1000 --gid appgroup --create-home appuser

# Copy application code and set ownership
COPY --chown=appuser:appgroup . .

# Switch to non-root user — everything after this runs as appuser
USER appuser

EXPOSE 8000

CMD ["python", "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
▶ Output
# Build:
docker build -f io/thecodeforge/secure-app.dockerfile -t io.thecodeforge/secure-app:v1 .

# Run with additional security flags:
docker run -d \
--name secure-app \
--read-only \
--cap-drop=ALL \
--cap-add=NET_BIND_SERVICE \
--security-opt=no-new-privileges \
--tmpfs /tmp:size=64m \
-p 8000:8000 \
io.thecodeforge/secure-app:v1

# Verify non-root:
docker exec secure-app whoami
# Output: appuser

docker exec secure-app id
# Output: uid=1000(appuser) gid=1000(appgroup) groups=1000(appgroup)
Mental Model
Non-Root as a Seatbelt, Not a Cage
Why does Docker default to root instead of a non-root user?
  • Docker was designed for developer convenience. Running as root avoids permission issues during development.
  • Many base image instructions (apt-get install, chown) require root. Non-root by default would break most Dockerfiles.
  • The USER instruction puts the responsibility on the developer — Docker provides the mechanism, not the policy.
  • Kubernetes enforces non-root via Pod Security Standards. Docker does not — you must enforce it yourself.
📊 Production Insight
The no-new-privileges flag is a critical companion to non-root users. Without it, a non-root process that executes a setuid binary (like sudo or ping) can escalate to root. The --security-opt=no-new-privileges flag prevents this escalation. Always pair it with USER nonroot in production.
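To see what no-new-privileges is defending against, you can enumerate the setuid/setgid binaries a non-root process could try to abuse. A minimal sketch (the helper name is ours); in production you would run the same find inside the container via docker exec:

```shell
#!/bin/sh
# List files with the setuid (4000) or setgid (2000) bit set under a
# path. Each hit is a potential escalation route for a non-root process
# unless no-new-privileges is set. (Helper name is illustrative.)
audit_setuid() {
  find "$1" -type f -perm /6000 2>/dev/null
}

# On a typical host this prints sudo, passwd, mount, and friends;
# a hardened container image should print nothing:
audit_setuid /usr/bin
```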
🎯 Key Takeaway
Containers run as root by default. This is the single most dangerous default in Docker. Add USER nonroot to every production Dockerfile. Pair with --security-opt=no-new-privileges and --cap-drop=ALL. Non-root does not prevent escape but limits the blast radius.
User Configuration Decisions
If: Application needs to bind to a port < 1024
Use: --cap-add=NET_BIND_SERVICE instead of running as root, or a reverse proxy that binds to the privileged port.
If: Application needs to write to specific directories
Use: Set ownership with COPY --chown=appuser:appgroup. Use --tmpfs for temporary writable paths.
If: Application needs to install packages at runtime
Use: This is an anti-pattern. Pre-install all dependencies in the Dockerfile. If dynamic installation is needed, use a separate init container.
If: Container needs to interact with the Docker daemon
Use: A socket proxy (Tecnativa/docker-socket-proxy) that restricts API access. Never mount the raw socket.

Image Scanning and Supply Chain Security

Every dependency in your Docker image is an attack surface. The base OS packages, the runtime (Python, Node, Java), the application dependencies (pip, npm, Maven packages) — each can contain known CVEs. Image scanning identifies these vulnerabilities before they reach production.

Scanning tools:
  • Trivy: open-source, fast, scans OS packages and language dependencies. Integrates with CI/CD.
  • Grype: open-source, Syft-based, good for SBOM generation.
  • Docker Scout: Docker's built-in scanner, available in Docker Desktop and Docker Hub.
  • Snyk Container: commercial, deep integration with CI/CD and container registries.

When to scan:
  • In CI/CD: scan every image build. Fail the build on critical CVEs.
  • In the registry: scan on push. Block pulls of images with critical CVEs.
  • In production: scan running images periodically. Alert on newly discovered CVEs.

Supply chain attacks: Beyond CVEs, consider supply chain attacks — malicious packages injected into public registries. Mitigate with:
  • Image signing (Docker Content Trust, cosign)
  • SBOM (Software Bill of Materials) generation
  • Base image pinning to specific digests
  • Private registries for internal images

SBOM generation: An SBOM lists every package in your image with its version. It is required for compliance (SBOM Executive Order, SOC 2) and enables rapid response when a new CVE is disclosed — you can query your SBOM database to find all affected images without rescanning.
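That rapid-response query can be as simple as a grep over stored SBOM files. A sketch, assuming one SPDX JSON SBOM per image saved under a sboms/ directory (the directory layout and function name are our assumptions):

```shell
#!/bin/sh
# Given a package name, list the SBOM files (one per image) that
# mention it — e.g. when a new openssl CVE drops. SPDX JSON records
# each component as {"name": "<package>", ...}.
find_images_with_package() {
  grep -l "\"name\": \"$1\"" sboms/*.spdx.json 2>/dev/null
}

# Usage: find_images_with_package openssl
```

A real deployment would index SBOMs in a database, but even this flat-file version answers "which images ship openssl?" in milliseconds.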

io/thecodeforge/image-scanning.sh · BASH
#!/bin/bash
# Image scanning and supply chain security

# ── Scan with Trivy ───────────────────────────────────────────────
# Install: curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh

# Scan a local image for critical and high CVEs
trivy image --severity CRITICAL,HIGH --exit-code 1 io.thecodeforge/secure-app:v1
# --exit-code 1 returns non-zero if vulnerabilities are found (fails CI)

# Scan with JSON output for CI integration
trivy image --format json --output trivy-report.json io.thecodeforge/secure-app:v1

# ── Generate SBOM with Syft ───────────────────────────────────────
# Install: curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh

syft io.thecodeforge/secure-app:v1 -o spdx-json > sbom.spdx.json
# SBOM can be queried later: 'which images contain openssl 3.0.2?'

# ── Sign image with cosign ────────────────────────────────────────
# Install: go install github.com/sigstore/cosign/v2/cmd/cosign@latest

# Generate a key pair (once)
cosign generate-key-pair

# Sign the image
cosign sign --key cosign.key io.thecodeforge/secure-app:v1

# Verify the signature
cosign verify --key cosign.pub io.thecodeforge/secure-app:v1

# ── CI/CD integration example (GitHub Actions) ────────────────────
# .github/workflows/docker-security.yml
# - name: Scan image
#   run: trivy image --severity CRITICAL,HIGH --exit-code 1 $IMAGE
# - name: Generate SBOM
#   run: syft $IMAGE -o spdx-json > sbom.spdx.json
# - name: Sign image
#   run: cosign sign --key env://COSIGN_PRIVATE_KEY $IMAGE
▶ Output
# trivy image output:
2026-04-05T12:00:00.000Z INFO Detecting OS...
2026-04-05T12:00:00.000Z INFO Detecting python-pkg...

io.thecodeforge/secure-app:v1 (debian 12.5)
===========================================
Total: 2 (CRITICAL: 0, HIGH: 2)

┌─────────┬───────────────┬──────────┬───────────┐
│ Library │ Vulnerability │ Severity │ Installed │
├─────────┼───────────────┼──────────┼───────────┤
│ openssl │ CVE-2024-XXXX │ HIGH     │ 3.0.11    │
│ libxml2 │ CVE-2024-YYYY │ HIGH     │ 2.x       │
└─────────┴───────────────┴──────────┴───────────┘
Mental Model
Image Scanning as a Health Check for Your Supply Chain
Why is SBOM generation important beyond just scanning for CVEs?
  • When a new CVE is disclosed, you can query your SBOM database to find all affected images in seconds — without rescanning every image.
  • Compliance frameworks (SOC 2, PCI-DSS, SBOM Executive Order) require an inventory of all software components.
  • SBOM enables rapid incident response — you know exactly what is in every image without forensic analysis.
  • SBOM generation is a one-time cost per build. The benefits compound over time as your image library grows.
📊 Production Insight
The most common CI/CD security gap is scanning only the application dependencies, not the base image. A team scanned their Python packages with pip-audit but missed a critical CVE in the base OS (Debian's libssl). The fix: use Trivy which scans both OS packages and language dependencies in a single pass. Integrate it into CI with --exit-code 1 to fail the build on critical CVEs.
🎯 Key Takeaway
Image scanning is not optional in production. Integrate Trivy into CI with --exit-code 1. Generate SBOMs for compliance and rapid incident response. Sign images with cosign to prevent tampering. Scan both OS packages and language dependencies — partial scanning misses critical CVEs.

Secrets Management — Never Bake Secrets Into Images

Secrets (API keys, database passwords, TLS certificates) must never be baked into Docker image layers. Three exposure vectors make this critical:

Vector 1: ENV in Dockerfile. ENV SECRET_KEY=abc123 is visible in docker inspect, docker history, and every container derived from the image. Anyone with image pull access can extract the secret.

Vector 2: ARG in Dockerfile. ARG is build-time only, but it is visible in docker history. If used in a RUN instruction that writes to a file, the secret ends up in that layer.

Vector 3: COPY secrets into the image. COPY .env /app/.env bakes the entire .env file into a layer. Even if a later RUN rm /app/.env removes it, the file still exists in the earlier layer — layers are additive.
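The layer-additivity point can be verified without Docker, since a saved image is just a tar of layer tars. This sketch mirrors the docker save | tar -xO | grep audit shown elsewhere in this guide (the helper name is ours):

```shell
#!/bin/sh
# Count lines that look like secrets anywhere in a tarball, streaming
# file contents to stdout (-O) instead of extracting to disk. Run it
# against `docker save <image>` output to audit every layer, including
# files "deleted" by a later RUN rm. (Helper name is illustrative.)
scan_tar_for_secrets() {
  tar -xOf "$1" 2>/dev/null | grep -ci 'password\|secret\|token'
}
```

A non-zero count on a layer tar means the bytes are still in the image, regardless of what later layers removed.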

The right patterns:
  • Build-time secrets: use BuildKit --mount=type=secret. The secret is available during the build but never written to any layer.
  • Runtime secrets: use Docker secrets (Swarm), Kubernetes secrets, or mount a tmpfs volume with the secret file.
  • Environment variables: acceptable for non-sensitive config. Never for secrets.

Docker Content Trust (DCT): DCT uses digital signatures to verify that an image has not been tampered with. When DOCKER_CONTENT_TRUST=1 is set, Docker only pulls signed images. This prevents supply chain attacks where a malicious image is pushed to a registry with the same tag as a legitimate image.

io/thecodeforge/secrets-safe.dockerfile · DOCKERFILE
# ── Safe secrets handling with BuildKit ──────────────────────────
# syntax=docker/dockerfile:1
# Requires BuildKit: DOCKER_BUILDKIT=1 docker build ...

FROM python:3.12-slim-bookworm

WORKDIR /app

COPY requirements.txt .

# WRONG: This bakes the secret into the image layer
# RUN pip install --no-cache-dir -r requirements.txt --index-url https://user:password@private.pypi/simple

# RIGHT: Mount the secret as a file during build, never stored in a layer
RUN --mount=type=secret,id=pypi_token \
    pip install --no-cache-dir -r requirements.txt \
    --index-url https://$(cat /run/secrets/pypi_token)@private.pypi/simple

# Build command:
# DOCKER_BUILDKIT=1 docker build \
#   --secret id=pypi_token,src=$HOME/.pypi_token \
#   -t io.thecodeforge/app:v1 .

COPY . .

RUN useradd --create-home appuser
USER appuser

CMD ["python", "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
▶ Output
# Build with secret:
DOCKER_BUILDKIT=1 docker build \
--secret id=pypi_token,src=$HOME/.pypi_token \
-t io.thecodeforge/app:v1 .

# Verify the secret is NOT in the image:
docker history --no-trunc io.thecodeforge/app:v1 | grep pypi_token
# No output — the secret was never written to a layer

docker save io.thecodeforge/app:v1 | tar -xO 2>/dev/null | grep -c 'password'
# 0 — no secrets in any layer
Mental Model
Secrets as Nuclear Launch Codes
Why does deleting a secret in a later Dockerfile layer not remove it from the image?
  • Docker layers are additive. Each layer is a filesystem diff on top of the previous one.
  • COPY .env adds the file to layer N. RUN rm .env adds a whiteout marker to layer N+1.
  • The file still exists in layer N. Anyone who extracts the layers can read it.
  • Only BuildKit --mount=type=secret avoids writing the secret to any layer.
📊 Production Insight
The .dockerignore file is your first line of defense against accidental secret exposure. Add .env, .pem, .key, credentials.json, and any file that might contain secrets. But .dockerignore is not sufficient alone — a developer might rename the secret file or pass it as an ARG. The only reliable solution is BuildKit secrets for build-time and secrets managers for runtime.
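A defensive .dockerignore along the lines of that insight might look like this (entries are illustrative; extend the list for your project's secret files):

```
.env
.env.*
*.pem
*.key
*.p12
credentials.json
id_rsa*
.git
```

Remember the caveat above: this prevents accidental COPY of these files but does nothing for secrets passed via ARG or ENV.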
🎯 Key Takeaway
Never bake secrets into Docker image layers. ENV, ARG, and COPY all expose secrets permanently. Use BuildKit --mount=type=secret for build-time secrets and secrets managers for runtime secrets. The .dockerignore file is your first line of defense but not sufficient alone. If a secret is ever baked in, rotate it immediately.
Secret Handling Strategy
If: Secret needed during docker build (private registry auth, API keys)
Use: BuildKit --mount=type=secret. Never use ARG or ENV for secrets.
If: Secret needed at runtime (database password, API key)
Use: Docker secrets (Swarm), Kubernetes secrets, or a tmpfs volume mount.
If: Secret is a TLS certificate
Use: Mount as a volume from the host or a secrets manager. Never COPY it into the image.
If: Secret accidentally baked into a pushed image
Use: Rotate the secret immediately. Delete the tag. Rebuild with BuildKit secrets. Audit all layers.

Runtime Hardening — Seccomp, AppArmor, Read-Only Filesystems, and Capabilities

Runtime hardening reduces the attack surface of a running container by restricting what the container process can do. Four mechanisms work together:

1. seccomp (Secure Computing Mode): Filters syscalls at the kernel level. Docker's default seccomp profile blocks ~44 dangerous syscalls (mount, reboot, kexec_load) but allows the rest. Custom profiles can block more syscalls for defense in depth.

2. AppArmor / SELinux: Mandatory Access Control (MAC) frameworks that restrict file access, network access, and capability usage at the process level. AppArmor is default on Ubuntu/Debian. SELinux is default on RHEL/CentOS.

3. Read-only filesystem: --read-only makes the container's root filesystem read-only. The application can only write to tmpfs mounts. This prevents an attacker from installing tools, modifying application code, or writing a backdoor to the filesystem.

4. Linux capabilities: Fine-grained privilege control. Instead of granting full root (all capabilities), grant only what is needed. --cap-drop=ALL removes all capabilities. --cap-add=NET_BIND_SERVICE adds back only the ability to bind to privileged ports.

Performance impact: seccomp adds <1% CPU overhead per syscall (the kernel checks the filter before executing the syscall). AppArmor adds similar negligible overhead. Read-only filesystems can actually improve performance by preventing unnecessary writes. There is no performance reason to skip these security measures.

Failure scenario — writable filesystem exploited: An attacker gained RCE in a web application container through a deserialization vulnerability. Because the filesystem was writable, the attacker wrote a PHP webshell to /app/uploads/shell.php and used it for persistent access. With --read-only, the write would have failed, and the attacker would have been limited to in-memory exploitation (much harder).

io/thecodeforge/runtime-hardening.sh · BASH
#!/bin/bash
# Runtime hardening flags for production containers

# ── Full hardened container run command ────────────────────────────
# Non-root user comes from the Dockerfile USER instruction.
# Comments sit above the command: a comment line inside a
# backslash-continued command would terminate it mid-flags.
#
#   --read-only                   read-only root filesystem
#   --tmpfs ...,noexec,nosuid     writable scratch space, no exec/setuid
#   --cap-drop/--cap-add          drop ALL capabilities, re-add only one
#   --security-opt no-new-privileges
#                                 prevent escalation via setuid binaries
#   --security-opt seccomp=...    custom seccomp profile (optional —
#                                 the default is good enough for most)
#   --security-opt apparmor=...   AppArmor profile (Ubuntu/Debian)
#   --memory/--cpus/--pids-limit  resource limits
#
# Note: inter-container communication (icc) is a daemon-wide setting
# ("icc": false in daemon.json), not a docker run flag.
docker run -d \
  --name secure-api \
  --read-only \
  --tmpfs /tmp:size=64m,noexec,nosuid \
  --tmpfs /var/run:size=16m,noexec,nosuid \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt=no-new-privileges:true \
  --security-opt=seccomp=/etc/docker/seccomp-profile.json \
  --security-opt=apparmor=docker-default \
  --memory=512m \
  --cpus=1.0 \
  --pids-limit=256 \
  --network app-network \
  -p 8000:8000 \
  io.thecodeforge/secure-app:v1

# ── Verify capabilities ───────────────────────────────────────────
docker exec secure-api cat /proc/1/status | grep Cap
# CapInh: 0000000000000000  (inherited — should be 0)
# CapPrm: 0000000000000400  (permitted — only NET_BIND_SERVICE)
# CapEff: 0000000000000400  (effective — only NET_BIND_SERVICE)
# CapBnd: 0000000000000400  (bounding — only NET_BIND_SERVICE)

# ── Verify seccomp profile ────────────────────────────────────────
docker exec secure-api grep Seccomp /proc/1/status
# Seccomp: 2  (filter mode — seccomp profile is active)

# ── Verify read-only filesystem ───────────────────────────────────
docker exec secure-api touch /test-file 2>&1
# touch: cannot touch '/test-file': Read-only file system
▶ Output
# All security flags applied. Container runs with:
# - Read-only filesystem
# - Dropped capabilities (only NET_BIND_SERVICE)
# - No privilege escalation
# - seccomp and AppArmor profiles
# - Resource limits (memory, CPU, PIDs)
# - Non-root user
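The Cap* values above are hex bitmaps, one bit per capability. A small sketch to decode them (the helper name is ours; the bit-to-name mapping, e.g. bit 10 = CAP_NET_BIND_SERVICE, is in capabilities(7)):

```shell
#!/bin/sh
# Decode a capability mask such as CapEff=0000000000000400 into its
# set bit positions, confirming exactly what survived --cap-drop=ALL.
decode_cap_mask() {
  mask=$((0x$1))
  bit=0
  while [ "$mask" -ne 0 ]; do
    if [ $((mask & 1)) -eq 1 ]; then
      echo "capability bit $bit"
    fi
    mask=$((mask >> 1))
    bit=$((bit + 1))
  done
}

decode_cap_mask 0000000000000400
# Prints: capability bit 10   (= CAP_NET_BIND_SERVICE)
```

In practice: docker exec secure-api grep CapEff /proc/1/status, then feed the hex value to the decoder.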
Mental Model
Runtime Hardening as Apartment Security Layers
Why should you use --cap-drop=ALL instead of just running as non-root?
  • Non-root limits the UID. Capabilities limit the privileges. They are complementary, not redundant.
  • A non-root process with CAP_NET_RAW can sniff network traffic. Dropping ALL capabilities prevents this.
  • A non-root process with CAP_SYS_PTRACE can debug other processes. Dropping ALL prevents this.
  • The principle of least privilege demands both: minimum UID AND minimum capabilities.
📊 Production Insight
The default seccomp profile is a good starting point but not sufficient for high-security environments. It allows ~280 of 300+ syscalls. For containers that do not need network access, block socket, bind, connect, and listen syscalls. For containers that do not need to spawn processes, block fork, clone, and execve. Custom seccomp profiles are generated from Docker's default profile by removing allowed syscalls.
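A custom profile uses the same JSON shape as Docker's default: a defaultAction plus an allowlist of syscalls. The sketch below is deliberately tiny and would break any real application; in practice, start from Docker's default profile and remove entries rather than writing one from scratch.

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "close", "fstat", "exit", "exit_group", "futex"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Apply it with --security-opt seccomp=/path/to/profile.json; blocked syscalls fail with EPERM-style errors rather than killing the process, because defaultAction is SCMP_ACT_ERRNO.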
🎯 Key Takeaway
Runtime hardening is layered defense: seccomp filters syscalls, AppArmor restricts access, read-only filesystem prevents writes, capabilities limit privileges. Each layer adds <1% overhead. There is no performance reason to skip them. The default seccomp profile is a starting point — customize it for high-security environments.
Runtime Hardening Decisions
If: Standard web application (API, web server)
Use: Default seccomp + --read-only + --cap-drop=ALL + --cap-add=NET_BIND_SERVICE + --no-new-privileges
If: Application needs to write to specific directories
Use: --read-only with --tmpfs for writable paths. Never make the entire filesystem writable.
If: Container needs to interact with other containers (service mesh sidecar)
Use: A custom seccomp profile that allows network syscalls but blocks filesystem syscalls.
If: High-security environment (PCI-DSS, SOC 2)
Use: All hardening measures + custom seccomp + AppArmor/SELinux + image signing + SBOM.

Docker Daemon Security — Protecting the Control Plane

The Docker daemon (dockerd) is the control plane for all container operations. If the daemon is compromised, every container on the host is compromised. Five critical daemon security practices:

1. Never expose the daemon socket without TLS. The default daemon listens on a Unix socket (/var/run/docker.sock) which requires local access. If configured to listen on TCP (port 2375), it accepts unauthenticated connections from the network. Anyone who can reach port 2375 can create, stop, and delete containers — effectively root access to the host.

2. Enable TLS with client certificate authentication. If remote daemon access is required (for CI/CD, monitoring), configure TLS on port 2376 with client certificates. Only clients with a valid certificate can connect. This is the equivalent of SSH key authentication for the Docker daemon.

3. Enable user namespace remapping. By default, UID 0 inside the container maps to UID 0 on the host. User namespace remapping maps container UIDs to unprivileged host UIDs (e.g., container UID 0 -> host UID 100000). This means even a container escape results in an unprivileged host user, not root.

4. Enable live-restore. If the Docker daemon restarts, running containers are killed by default. live-restore=true keeps containers running during daemon restarts, improving availability. This also means a daemon crash does not take down your production workloads.

5. Disable the legacy registry (v1). Docker Registry v1 is deprecated and has known security issues. Ensure the daemon only interacts with v2 registries.
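Practice 1 can be checked mechanically by parsing ss output, which makes it easy to wire into a cron job or CI step. A sketch (the function name and messages are ours):

```shell
#!/bin/sh
# Read `ss -tlnp` output on stdin and flag a plaintext Docker daemon
# listener on port 2375. Port 2376 (TLS) is considered acceptable here.
check_daemon_exposure() {
  if grep -Eq ':2375($|[^0-9])'; then
    echo 'CRITICAL: dockerd listening on 2375 without TLS'
  else
    echo 'OK: no plaintext daemon port found'
  fi
}

# Usage on a host: ss -tlnp | check_daemon_exposure
```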

/etc/docker/daemon.json · JSON
{
  "userns-remap": "default",
  "live-restore": true,
  "no-new-privileges": true,
  "userland-proxy": false,
  "icc": false,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 65536,
      "Soft": 65536
    }
  },
  "tls": true,
  "tlscacert": "/etc/docker/tls/ca.pem",
  "tlscert": "/etc/docker/tls/server-cert.pem",
  "tlskey": "/etc/docker/tls/server-key.pem",
  "tlsverify": true,
  "hosts": [
    "unix:///var/run/docker.sock",
    "tcp://0.0.0.0:2376"
  ]
}
▶ Output
# After updating daemon.json, restart Docker:
sudo systemctl restart docker

# Verify user namespace remapping:
docker info | grep -i "docker root dir"
# Docker Root Dir: /var/lib/docker/100000.100000
# The 100000.100000 indicates remapping is active

# Verify TLS:
docker --tlsverify \
--tlscacert=/etc/docker/tls/ca.pem \
--tlscert=/etc/docker/tls/client-cert.pem \
--tlskey=/etc/docker/tls/client-key.pem \
-H=tcp://localhost:2376 version
# Must use TLS flags to connect — unauthenticated connections are rejected
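A malformed daemon.json prevents dockerd from starting, which turns a config typo into an outage. A simple guard is to validate the candidate file as JSON before copying it into place; this sketch validates a local candidate with Python's stdlib json.tool (any JSON validator works).

```shell
# Sketch: validate a daemon.json candidate before installing it to /etc/docker/.
# python3 -m json.tool exits nonzero on malformed JSON, so a broken config
# never reaches the daemon. The candidate content here is an example.
cat > daemon.json.candidate <<'EOF'
{
  "userns-remap": "default",
  "live-restore": true,
  "icc": false
}
EOF

if python3 -m json.tool daemon.json.candidate > /dev/null; then
  echo "candidate is valid JSON - safe to copy to /etc/docker/daemon.json"
else
  echo "syntax error - fix before restarting dockerd" >&2
fi
```

Wire this into whatever config-management step writes daemon.json so an invalid file is rejected before `systemctl restart docker` ever runs.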
Mental Model
Docker Daemon as the Building Superintendent
Think of the daemon as a building superintendent holding a master key to every apartment (container). Anyone who can hand the superintendent instructions, whether through the Unix socket or an open TCP port, can open any door in the building. TLS client certificates are the ID check at the front desk: the superintendent only takes instructions from people who can prove who they are. User namespace remapping means that even if a tenant picks their own lock, the key they steal does not open the building's front office.
Why is user namespace remapping not enabled by default?
  • User namespace remapping breaks some workflows — file permissions between host and container become mismatched.
  • Volume mounts with specific UID/GID expectations may fail because the container UID maps to a different host UID.
  • Some applications that need to interact with host resources (Docker-in-Docker, monitoring agents) break with remapping.
  • Docker chose developer convenience over security by default. Production environments should enable it.
📊 Production Insight
The daemon.json configuration is the most impactful security hardening step because it applies globally to all containers. userns-remap, icc=false, and no-new-privileges=true apply to every container without modifying individual Dockerfiles. This is defense in depth at the infrastructure level, not the application level.
🎯 Key Takeaway
The Docker daemon is the control plane. Exposing it without TLS is equivalent to giving root access to anyone on the network. Enable TLS with client certificates, user namespace remapping, live-restore, and disable inter-container communication (icc=false). These daemon-level settings apply globally and are the highest-impact security hardening steps.
🗂 Docker Security Mechanisms Compared
Layer, overhead, and protection scope for each security mechanism.
| Mechanism | Layer | Overhead | What It Protects | Default State |
|---|---|---|---|---|
| Non-root user (USER) | Image/Runtime | None | Limits damage after container escape | Off (runs as root) |
| seccomp profile | Kernel | <1% per syscall | Blocks dangerous syscalls (mount, reboot) | On (default profile) |
| AppArmor / SELinux | Kernel | <1% per access check | Restricts file/network/capability access | On (docker-default on Ubuntu) |
| Read-only filesystem | Runtime | None (fewer writes can improve perf) | Prevents filesystem modification | Off (writable) |
| Capabilities (cap-drop) | Kernel | None | Limits kernel-level privileges | On (default keeps ~14 of ~41, drops the rest) |
| no-new-privileges | Kernel | None | Prevents setuid/setgid escalation | Off |
| User namespace remapping | Daemon | Negligible | Maps container UID 0 to unprivileged host UID | Off |
| Image scanning (Trivy) | CI/CD | Build time only | Identifies known CVEs in image | Off (must be configured) |
| Image signing (cosign) | Supply chain | Push/pull time only | Verifies image integrity and provenance | Off (must be configured) |

🎯 Key Takeaways

  • Docker containers run as root by default. Add USER nonroot to every production Dockerfile. Pair with --security-opt=no-new-privileges and --cap-drop=ALL.
  • The --privileged flag disables ALL container isolation. Never use it in production. Use --cap-drop=ALL and --cap-add for specific capabilities.
  • Never bake secrets into Docker image layers. Use BuildKit --mount=type=secret for build-time and secrets managers for runtime.
  • The Docker daemon socket (/var/run/docker.sock) is the most dangerous thing to expose. A container with socket access can take over the host.
  • Image scanning with Trivy, SBOM generation with Syft, and image signing with cosign form a complete supply chain security pipeline.
  • Runtime hardening is layered: seccomp + AppArmor + read-only filesystem + capabilities. Each adds <1% overhead. There is no reason to skip them.
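The layered runtime hardening in the takeaways above can be expressed declaratively. This is a sketch of a Compose service combining those measures; the image name, UID, and seccomp path are placeholders, and the resource-limit keys assume a non-Swarm `docker compose` deployment.

```yaml
services:
  app:
    image: registry.example.com/app:1.2.3   # placeholder image
    user: "10001:10001"              # non-root UID/GID baked into the image
    read_only: true                  # immutable root filesystem
    tmpfs:
      - /tmp                         # writable scratch space only
    cap_drop:
      - ALL                          # drop every capability...
    cap_add:
      - NET_BIND_SERVICE             # ...re-add only what is needed
    security_opt:
      - no-new-privileges:true       # block setuid/setgid escalation
      - seccomp:./seccomp.json       # custom syscall allow-list (example path)
    mem_limit: 512m                  # resource limits bound the blast radius
    pids_limit: 256                  # caps fork bombs inside the container
```

Keeping these flags in version-controlled Compose or Kubernetes manifests, rather than ad-hoc `docker run` invocations, means the hardening survives redeploys and is reviewable in pull requests.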

⚠ Common Mistakes to Avoid

    Running containers as root in production
    Symptom

    container escape vulnerability gives attacker root access to the host; Kubernetes PodSecurityStandard rejects the pod —

    Fix

    add RUN useradd --create-home appuser and USER appuser to every Dockerfile. Pair with --security-opt=no-new-privileges:true.

    Using --privileged flag
    Symptom

    all container isolation is disabled — namespaces, cgroups, seccomp, AppArmor are bypassed. The container has full access to host devices and kernel —

    Fix

    never use --privileged in production. Use --cap-drop=ALL and --cap-add only the specific capabilities needed.

    Mounting /var/run/docker.sock into containers
    Symptom

    any container with socket access can create new containers, mount the host filesystem, and take over the host —

    Fix

    use a socket proxy (Tecnativa/docker-socket-proxy) that restricts API access. If not needed, never mount the socket.

    Exposing Docker daemon on TCP port 2375 without TLS
    Symptom

    anyone who can reach the port can create, stop, and delete containers — effectively root access to the host —

    Fix

    configure TLS on port 2376 with client certificates. Set "hosts": ["unix:///var/run/docker.sock"] in daemon.json to disable TCP by default.

    Baking secrets into image layers via ENV, ARG, or COPY
    Symptom

    secrets visible in docker history, docker inspect, and layer extraction tools —

    Fix

    use BuildKit --mount=type=secret for build-time secrets. Use secrets managers for runtime secrets. Add .env, .pem, .key to .dockerignore.

    Not scanning images for CVEs before deployment
    Symptom

    known vulnerabilities in base images or dependencies are deployed to production, creating exploitable attack surface —

    Fix

    integrate Trivy into CI with --exit-code 1 to fail builds on critical CVEs. Scan both OS packages and language dependencies.

    Using the default seccomp profile without customization for high-security environments
    Symptom

    ~280 syscalls are still allowed, including syscalls the application does not need —

    Fix

    create a custom seccomp profile that blocks all syscalls except those required by the application. Generate from the default profile by removing allowed syscalls.
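The socket-proxy fix above (Tecnativa/docker-socket-proxy) can be sketched as a Compose file. The environment flag names follow that project's section-based permission model (each flag gates an API section), and the metrics-agent image is a placeholder for whatever monitoring container you run.

```yaml
services:
  docker-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1                  # allow read access to /containers endpoints
      POST: 0                        # deny all mutating (write) requests
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - monitoring

  metrics-agent:
    image: example/metrics-agent     # hypothetical monitoring container
    environment:
      DOCKER_HOST: tcp://docker-proxy:2375
    networks:
      - monitoring

networks:
  monitoring:
    internal: true                   # proxy is never reachable from outside
```

The monitoring container talks only to the proxy over an internal network; it can list containers but cannot create, start, or delete anything, so a compromised agent no longer implies a compromised host.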

Interview Questions on This Topic

  • Q: Explain why running a Docker container as root is dangerous. What is the blast radius if a root container is compromised versus a non-root container?
  • Q: What is the --privileged flag and why should it never be used in production? What alternatives provide the specific capabilities a container needs without full privilege escalation?
  • Q: A developer pushes a Docker image with a database password hardcoded as an ENV variable to a public Docker Hub repository. Walk me through the exposure vectors and the remediation steps.
  • Q: How do seccomp profiles and AppArmor work together to harden a container at runtime? What is the performance impact?
  • Q: What is user namespace remapping in Docker? Why is it not enabled by default, and when should you enable it in production?
  • Q: Your team needs to give a monitoring container access to the Docker API to collect metrics. How do you do this securely without mounting the raw Docker socket?

Frequently Asked Questions

What is the difference between seccomp and AppArmor?

seccomp filters syscalls at the kernel level — it blocks specific system calls like mount, reboot, or kexec_load. AppArmor restricts what resources a process can access — files, network sockets, capabilities. seccomp is syscall-level filtering. AppArmor is access control. They are complementary: seccomp blocks dangerous operations, AppArmor restricts access to specific resources.

Is Docker secure enough for production?

Docker's defaults are not production-secure. Containers run as root, the seccomp profile is permissive, and the daemon socket has no authentication. But with proper hardening — non-root users, custom seccomp profiles, read-only filesystems, TLS on the daemon, image scanning, and secrets management — Docker containers can meet PCI-DSS and SOC 2 compliance requirements. The hardening is your responsibility, not Docker's.

What happens if I use --privileged in production?

--privileged disables ALL container isolation: namespaces, cgroups, seccomp, AppArmor, and capability restrictions. The container can access all host devices, load kernel modules, modify the host filesystem, and create new namespaces. It is equivalent to giving the container root access to the host. A compromised privileged container IS a compromised host.

How do I scan my Docker images for vulnerabilities?

Use Trivy: trivy image --severity CRITICAL,HIGH --exit-code 1 <image>. Integrate this into your CI pipeline to fail builds on critical CVEs. Trivy scans both OS packages (apt, apk) and language dependencies (pip, npm, Maven). For continuous monitoring, scan images in your registry and alert on newly discovered CVEs.
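In CI, the same scan can gate merges. This sketch assumes GitHub Actions and the aquasecurity/trivy-action wrapper with its image-ref, severity, and exit-code inputs; the image name is a placeholder.

```yaml
name: image-scan
on: push
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: CRITICAL,HIGH
          exit-code: '1'   # nonzero exit fails the job on findings
```

Because the scan runs on every push, a newly disclosed CVE in the base image blocks the next deploy instead of silently shipping to production.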

What is the safest way to pass secrets to a Docker container?

For build-time secrets, use BuildKit --mount=type=secret. The secret is available during the build but never written to any image layer. For runtime secrets, use Docker secrets (Swarm), Kubernetes secrets, or mount a tmpfs volume. Never use ENV, ARG, or COPY to pass secrets — they are permanently visible in image layers.
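A build-time secret mount looks like this in a Dockerfile. The secret id `pip_token` and the private index URL are examples; the key point is that the secret exists only as a tmpfs file during the single RUN step.

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
# The secret is mounted at /run/secrets/pip_token for this RUN step only;
# it is never written to any image layer and never appears in docker history.
RUN --mount=type=secret,id=pip_token \
    PIP_INDEX_URL="https://user:$(cat /run/secrets/pip_token)@pypi.example.com/simple" \
    pip install -r requirements.txt
```

Build it with BuildKit, passing the secret from a local file: `docker build --secret id=pip_token,src=./token.txt .`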

🔥 Naren · Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.

Forged with 🔥 at TheCodeForge.io — Where Developers Are Forged