
Docker Compose Explained: Multi-Container Apps, Networking and Real-World Patterns

📍 Part of: Docker → Topic 9 of 18
Docker Compose lets you define and run multi-container apps with one file.
⚙️ Intermediate — basic DevOps knowledge assumed
In this tutorial, you'll learn
  • Docker Compose is a declaration of the desired state of your entire app stack — you describe it once in YAML and docker compose up handles creation, networking, and ordering every time.
  • depends_on controls start order, not start readiness — always pair it with a healthcheck and condition: service_healthy to prevent race conditions when your app starts before the database is accepting connections.
  • Containers communicate by service name on the internal Docker network — never use localhost between containers, and never use the host-mapped port for container-to-container traffic.
Quick Answer
  • You declare desired state (services, volumes, networks) in docker-compose.yml
  • Compose translates that into container create/start/stop commands via the Docker API
  • Service names become DNS entries on an auto-created bridge network
  • services: each entry maps to one container
  • volumes: named storage that survives container lifecycle
  • networks: control which services can reach each other
  • depends_on with healthcheck: startup ordering with readiness gates
🚨 START HERE
Docker Compose Triage Cheat Sheet
First-response commands when a Compose-managed stack fails. No theory — just actions.
🟡 Container exits immediately or crashes in a restart loop.
Immediate Action: Check logs and exit code.
Commands
docker compose logs --tail 100 <service>
docker compose ps -a
Fix Now: Exit code 0 = process completed normally (CMD is wrong). Exit code 1 = application error (check logs). Exit code 137 = OOM-killed (increase memory limit). Exit code 139 = segfault (check base image).
🟡 Service cannot connect to database or another service.
Immediate Action: Verify network membership and DNS resolution.
Commands
docker compose exec <service> nslookup <target-service>
docker network inspect <project>_default
Fix Now: Use the service name (not localhost) and the container port (not the host port) in connection strings. Ensure both services share at least one network.
🟡 Port already in use error on docker compose up.
Immediate Action: Find what is using the port on the host.
Commands
lsof -i :<port>
docker compose ps -a | grep <port>
Fix Now: Kill the conflicting process, change the host-side port mapping (the left side of ports: '8080:80'), or stop the other Compose project: docker compose -p <other-project> down
🟡 Environment variables not being picked up by a service.
Immediate Action: Verify the .env file exists and variables are resolved.
Commands
docker compose config | grep -A 20 <service>
docker compose exec <service> env | grep <VARIABLE>
Fix Now: Ensure .env is in the same directory as docker-compose.yml. Check for typos in ${VARIABLE_NAME}. Run docker compose config to see the fully resolved config.
🟡 Named volume data is corrupted or needs to be reset.
Immediate Action: Back up the volume before any destructive action.
Commands
docker volume inspect <project>_postgres_data
docker run --rm -v <project>_postgres_data:/data -v $(pwd):/backup alpine tar czf /backup/volume-backup.tar.gz /data
Fix Now: docker compose down -v removes all named volumes. To be selective, use docker volume rm <project>_postgres_data to remove only the database volume.
Production Incident: Staging Environment Down for 3 Hours — depends_on Without Healthcheck
A staging deployment of a three-tier application (React frontend, Node API, PostgreSQL) failed on every cold start because the API container started before PostgreSQL was ready to accept connections, causing cascading failures across all dependent services.
Symptom: docker compose up reports all containers as 'Started', but the API container exits with code 1 within 10 seconds. Logs show: Error: connect ECONNREFUSED 172.20.0.2:5432. The frontend shows a blank page because the API is unreachable. Restarting the stack sometimes works, sometimes does not — non-deterministic failures.
Assumption: The team assumed a PostgreSQL configuration error or a corrupted database volume. They destroyed the volume (docker compose down -v), recreated it, and the problem persisted. Second assumption: a networking issue between containers. They tested DNS resolution and connectivity — both worked when the DB was already running.
Root cause: The docker-compose.yml used plain depends_on: postgres_db for the API service. This tells Compose to start the postgres_db container before the api_server container, but it does not wait for PostgreSQL to finish initialization. PostgreSQL takes 2-8 seconds to start accepting connections after its container process begins. The API container started during this window, tried to connect immediately, and crashed. On fast machines (CI runners with SSDs), the timing sometimes worked. On slower machines, it always failed.
Fix: 1. Added a healthcheck block to postgres_db using pg_isready. 2. Changed api_server's depends_on from a plain service name to condition: service_healthy. 3. Added restart: unless-stopped to api_server as a safety net for transient failures. 4. Added a startup retry loop in the application code as defense in depth. 5. Added docker compose config validation to CI to catch missing healthchecks.
Key Lesson
  • depends_on controls start order, not readiness. Always pair it with a healthcheck.
  • Non-deterministic startup failures are almost always race conditions, not configuration errors.
  • Destroying the database volume to fix a startup race condition is a dangerous diagnostic step — you lose data.
  • Defense in depth: a healthcheck at the Compose level AND retry logic at the application level.
  • CI runners are often faster than developer machines, hiding timing-dependent bugs.
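The "retry loop in the application code" from the fix list can be sketched as a small POSIX-sh helper. The wait_for name and the pg_isready usage line are illustrative, not taken from the incident's actual codebase:

```shell
#!/bin/sh
# Sketch of application-level startup retry (defense in depth alongside the
# Compose healthcheck). wait_for retries an arbitrary probe command until it
# succeeds or the attempt budget is exhausted.

wait_for() {
  max_tries=$1; shift
  i=1
  while ! "$@" >/dev/null 2>&1; do
    if [ "$i" -ge "$max_tries" ]; then
      echo "gave up after $i attempts" >&2
      return 1
    fi
    echo "attempt $i failed; retrying in 1s..."
    i=$((i + 1))
    sleep 1
  done
  echo "ready after $i attempt(s)"
}

# Typical container entrypoint usage: block until Postgres answers, then exec the app.
#   wait_for 30 pg_isready -h postgres_db -U "$DB_USER" && exec node server.js
```

The probe is any command whose exit status reflects readiness, so the same helper works for pg_isready, nc, or a curl against a health endpoint.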
Production Debug Guide
Systematic debugging paths for common Compose failures.
Container exits immediately after docker compose up.
Check logs first: docker compose logs <service>. If the process exits before writing logs, run the container interactively: docker compose run --rm <service> sh and execute the CMD manually to see the error.
Containers cannot reach each other by service name.
Verify both containers are on the same network: docker network inspect <project>_<network>. Test DNS resolution: docker compose exec <service> nslookup <target-service>. If they are on different networks, add a shared network or use networks: [shared_net] on both.
Changes to code are not reflected in the running container.
Verify the bind mount is correct: docker compose exec <service> ls /app. Check that the host path is correct (use absolute paths or $(pwd)). On macOS, check Docker Desktop file sharing settings. Check that the container is not using a cached layer — rebuild with docker compose build --no-cache.
docker compose up works on one machine but fails on another.
Check for hardcoded paths in volumes. Check for a missing .env file (docker compose config shows resolved variables). Check Docker Engine version compatibility. Check for port conflicts on the host: lsof -i :<port>.
Database data lost after docker compose down.
Check whether the volume is a named volume (declared under top-level volumes:) or an anonymous volume. Named volumes survive docker compose down; only docker compose down -v removes them. If using a bind mount, check that the host directory exists and has correct permissions.
docker compose build is extremely slow.
Check the build context size: run du -sh . in the build directory. If it is large, add a .dockerignore file. Check layer caching: docker compose build should reuse cached layers. If every build reinstalls dependencies, ensure COPY package.json/requirements.txt comes before COPY . . in the Dockerfile.
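For the slow-build case, a typical .dockerignore looks like this. The entries are illustrative; tailor them to your project's language and layout:

```
# .dockerignore — shrink the build context sent to the Docker daemon
.git
node_modules
dist
*.log
.env           # keep secrets out of the build context entirely
```

Every path listed here is excluded from the context upload, which both speeds up builds and prevents accidental COPY of secrets or host-specific artifacts into image layers.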

Every real-world application is more than one moving part. Your API needs a database. That database needs a volume so data survives restarts. Your frontend needs to know the API's address. Managing each with individual docker run commands is a copy-paste nightmare that breaks when a new developer joins or you move to a different machine.

Docker Compose is a declarative orchestrator for multi-container applications on a single host. You describe the desired state in YAML — services, networks, volumes, environment variables, health checks — and Compose handles creation, networking, and lifecycle ordering. The file is the documentation. The file is the deployment. There is no gap between what you describe and what runs.

The critical misconception: Compose is not just a convenience wrapper. It manages DNS resolution between services, network isolation between tiers, volume lifecycle across container restarts, and startup ordering with readiness gates. Understanding these mechanisms is what separates a working compose file from a production-grade one.

The Anatomy of a docker-compose.yml File (and Why Every Key Matters)

A Compose file is a declaration of intent, not a script. You're telling Docker 'here is the world I want' — Compose figures out how to build it. That mental shift matters: you're not writing steps, you're describing a state.

Every Compose file has a few top-level keys. services is the main one — each entry is a container. volumes defines named storage that outlives containers. networks lets you control which services can see each other (by default Compose creates one shared network, which is great for getting started but dangerous if you want to isolate, say, your admin panel from your public API).

The depends_on key is widely misunderstood. It controls start order, not readiness. Your app container will start after the database container starts, but not after Postgres is actually ready to accept connections. That's a real gotcha we'll cover shortly. For readiness you need health checks.
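The readiness-gated pattern can be sketched minimally. Service names and the pg_isready probe here are illustrative:

```yaml
services:
  db:
    image: postgres:15-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app"]  # exits 0 only once Postgres accepts connections
      interval: 5s
      timeout: 3s
      retries: 5
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy  # gate on the healthcheck PASSING, not on the container starting
```

Without the condition, Compose treats the db container's mere existence as satisfying the dependency; with it, app is held back until the probe succeeds.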

The build key is your escape hatch from public images — point it at a directory with a Dockerfile and Compose builds the image itself before starting the container. This is how you develop locally with live code while still using Compose for orchestration.

Service configuration depth: Each service can specify image or build (mutually exclusive in intent — image pulls from a registry, build creates locally). The restart policy controls behavior on container exit: no (default), always, on-failure, unless-stopped. In production, unless-stopped is almost always correct — it restarts on crash but respects your manual docker compose stop.

Volume lifecycle: Named volumes declared under the top-level volumes: key are managed by Docker. They survive docker compose down but are destroyed by docker compose down -v. Bind mounts (host:path syntax) are managed by the host filesystem. Anonymous volumes (- /app/node_modules) are created by Docker and removed when the container is removed.
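The three mount types can be shown side by side as a sketch (paths and names are illustrative):

```yaml
services:
  api:
    volumes:
      - app_data:/var/lib/app     # named volume: survives `down`, removed only by `down -v`
      - ./src:/app/src            # bind mount: lives on the host filesystem, managed by you
      - /app/node_modules         # anonymous volume: removed together with the container

volumes:
  app_data:                       # top-level declaration makes it a Docker-managed named volume
```

The syntax alone determines the lifecycle: a bare name needs the top-level declaration, a path starting with ./ or / on the left is a bind mount, and a lone container path is anonymous.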

docker-compose.yml · YAML
# docker-compose.yml — A full three-tier web application
# Services: React frontend, Node/Express API, PostgreSQL database

version: '3.9'

services:

  # ── PostgreSQL Database ──────────────────────────────────────────
  postgres_db:
    image: postgres:15-alpine          # Use the slim Alpine variant to keep image size small
    restart: unless-stopped            # Restart on crash, but respect manual `docker compose stop`
    environment:
      POSTGRES_USER: ${DB_USER}        # Pulled from .env file — never hardcode credentials
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - postgres_data:/var/lib/postgresql/data   # Named volume: data persists across `down`/`up`
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql  # Seed script runs on first start
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]  # Actual readiness probe
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend_network   # Only the API can reach the DB — frontend cannot

  # ── Node.js / Express API ────────────────────────────────────────
  api_server:
    build:
      context: ./api          # Build from local Dockerfile in the /api directory
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - "3001:3001"           # Expose API to host for direct testing with Postman etc.
    environment:
      NODE_ENV: development
      DATABASE_URL: postgres://${DB_USER}:${DB_PASSWORD}@postgres_db:5432/${DB_NAME}
      #                                                   ^^^^^^^^^^^
      #  'postgres_db' is the SERVICE NAME; Compose's built-in DNS resolves it automatically
      JWT_SECRET: ${JWT_SECRET}
    depends_on:
      postgres_db:
        condition: service_healthy   # Wait until the healthcheck PASSES, not just starts
    volumes:
      - ./api:/app             # Bind mount for hot-reload in development
      - /app/node_modules      # Anonymous volume: keep container's node_modules, don't overwrite with host
    networks:
      - backend_network
      - frontend_network

  # ── React Frontend (served via Nginx) ────────────────────────────
  web_frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - "80:80"               # Expose port 80 to the host machine
    environment:
      REACT_APP_API_URL: http://localhost:3001   # Baked in at build time and used by the BROWSER, so use a host-reachable URL, not the service name
    depends_on:
      - api_server
    networks:
      - frontend_network      # Frontend can only see the API, never the raw database

# ── Named Volumes ────────────────────────────────────────────────────
volumes:
  postgres_data:              # Managed by Docker — survives `docker compose down`
                              # Destroyed only by `docker compose down -v`

# ── Networks ─────────────────────────────────────────────────────────
networks:
  backend_network:            # Isolated: only api_server and postgres_db
    driver: bridge
  frontend_network:           # Isolated: only web_frontend and api_server
    driver: bridge
▶ Output
[+] Running 4/4
✔ Network app_backend_network Created
✔ Network app_frontend_network Created
✔ Container app-postgres_db-1 Healthy
✔ Container app-api_server-1 Started
✔ Container app-web_frontend-1 Started
Mental Model
Compose as a Declarative State Machine
Why does Compose use a YAML declaration instead of a sequence of docker run commands?
  • Declarations are idempotent — running up twice does not create duplicate containers.
  • The file is the documentation — no gap between what is described and what runs.
  • Compose can diff desired state against actual state and converge incrementally.
  • Multiple files can merge — base config + environment overrides without duplication.
📊 Production Insight
The default Compose network (project_default) puts all services on one flat network. This is fine for getting started but creates an implicit trust boundary violation — your frontend can reach your database directly. In production, define explicit networks per tier (frontend, backend, data) and assign services to only the networks they need. This is defense in depth at the network layer.
🎯 Key Takeaway
A Compose file is a desired state declaration, not a script. services, volumes, and networks are the three pillars. The default network is flat — define explicit networks per tier for production security. depends_on without healthcheck is a race condition waiting to happen.
Compose File Structure Decisions
If: Single service, no dependencies
Use: docker run; Compose adds no value for a single container
If: 2+ services that need networking and ordering
Use: Compose with explicit networks per tier and healthcheck-gated depends_on
If: Need different configs for dev, staging, prod
Use: A base docker-compose.yml + override files (-f flag) per environment
If: Need conditional services (admin UI only in dev)
Use: profiles: [dev] on the service, start with --profile dev

Environment Variables and Secrets — The Right Way vs. The Dangerous Way

Hardcoding passwords directly in docker-compose.yml is one of the most common and dangerous mistakes in real projects. If that file ever gets committed to a public repo — even accidentally — your credentials are exposed permanently (git history remembers everything).

The correct pattern is a .env file alongside your Compose file. Docker Compose automatically reads .env and substitutes ${VARIABLE_NAME} placeholders. You commit a .env.example with dummy values to your repo, and each developer (and your CI system) fills in the real .env locally. The actual .env lives in .gitignore.

For production, environment variables should come from your hosting platform's secret manager (AWS Secrets Manager, Doppler, HashiCorp Vault) injected at runtime — never baked into an image or a file on disk.

You can also use multiple Compose files with docker compose -f docker-compose.yml -f docker-compose.prod.yml up. The second file merges and overrides the first. This is how you maintain one base config and swap out dev-specific settings (like bind mounts and debug ports) for production-hardened ones without duplicating the whole file.

Secret exposure vectors:
  • Hardcoded in docker-compose.yml → committed to git → visible in git history forever
  • ENV in Dockerfile → baked into every image layer → visible in docker history and docker inspect
  • .env file not in .gitignore → committed to git → same as hardcoded
  • docker compose config output includes resolved secrets → if captured in CI logs, credentials are exposed

The right pattern for each environment:
  • Local development: .env file, in .gitignore, .env.example committed as a template
  • CI/CD: secrets injected from CI platform secrets (GitHub Secrets, GitLab CI variables)
  • Production: secrets from a secrets manager (AWS Secrets Manager, Vault), mounted as files or injected as environment variables at deploy time

.env.example · BASH
# .env.example — commit this to git as a template
# Copy to .env and fill in real values — NEVER commit .env itself

# Database credentials
DB_USER=your_db_username
DB_PASSWORD=your_secure_password_here
DB_NAME=your_app_db

# Auth
JWT_SECRET=replace_with_a_long_random_string_at_least_32_chars

# App config
NODE_ENV=development
API_PORT=3001
▶ Output
# No output — this is a config file.
# Running `docker compose config` will show the fully resolved
# compose file with all variables substituted:
#
# $ docker compose config
# services:
#   postgres_db:
#     environment:
#       POSTGRES_USER: your_db_username
#       POSTGRES_PASSWORD: your_secure_password_here
# ...and so on
Mental Model
Secrets as a Lifecycle Problem
Why is a .env file insufficient for production secrets?
  • No rotation — changing a secret requires redeploying with a new .env file.
  • No audit trail — anyone with filesystem access can read the secret.
  • No revocation — if a secret is compromised, you cannot invalidate it without changing it everywhere.
  • No access control — the file is readable by any process running as the same user.
  • Secrets managers (Vault, AWS Secrets Manager) solve all four problems.
📊 Production Insight
The failure scenario is not theoretical. In 2023, a major data breach occurred because a developer committed a docker-compose.yml with hardcoded AWS credentials to a public GitHub repository. The credentials were valid for 6 months before detection. Git history made the fix (removing the credentials in a later commit) irrelevant — the credentials were already scraped by automated scanners. The fix: pre-commit hooks that scan for secrets (git-secrets, truffleHog), .env files in .gitignore, and secrets managers for production.
🎯 Key Takeaway
Never hardcode secrets in docker-compose.yml or Dockerfiles. Use .env files for local development (in .gitignore). Use secrets managers for production. The .env.example pattern ensures every developer knows which variables are needed without exposing actual values. Pre-commit hooks catch accidental secret commits.
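The pre-commit-hook idea can be sketched as a minimal grep-based scan. This is an illustration only; real projects should use dedicated tools like git-secrets or truffleHog, and the function name and regexes here are assumptions, not from the original text:

```shell
# Minimal pre-commit-style secret scan (a sketch, not a hardened tool).
scan_for_secrets() {
  # Flags KEY: value / KEY=value pairs with a literal (non-${VAR}) value,
  # plus AWS-style access key IDs. Prints matching lines with line numbers.
  grep -nE '(PASSWORD|SECRET|API_KEY)[[:space:]]*[:=][[:space:]]*[^[:space:]$]{4,}|AKIA[0-9A-Z]{16}' "$@"
}

# Usage in .git/hooks/pre-commit:
#   if scan_for_secrets docker-compose.yml .env 2>/dev/null; then
#     echo "Possible hardcoded secret found; commit blocked" >&2
#     exit 1
#   fi
```

Note the `[^[:space:]$]` class deliberately skips values that start with `$`, so `${DB_PASSWORD}`-style interpolation placeholders are not flagged as secrets.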
Secret Management Strategy by Environment
If: Local development
Use: A .env file (in .gitignore) with .env.example as template
If: CI/CD pipeline
Use: Inject from platform secrets (GitHub Secrets, GitLab CI variables); never log them
If: Production deployment
Use: A secrets manager (AWS Secrets Manager, Vault); inject at runtime, never bake into images
If: Docker Compose with Docker Swarm
Use: docker secret create; secrets are encrypted at rest and mounted as files in /run/secrets/

Networking Between Containers — How Compose DNS Actually Works

This is where most intermediate developers have a fuzzy mental model, and it costs them hours of debugging. When Compose starts your services, it creates a virtual network and registers each service under its own DNS name — that name is simply the service name you defined in the YAML.

So when your API container wants to connect to Postgres, it doesn't use localhost (that points to itself) and it doesn't use an IP address (those change). It uses postgres_db:5432 — the service name and the container port, not the host-mapped port. This is a huge source of confusion: ports: '5432:5432' exposes port 5432 to your laptop. Other containers don't need that — they communicate on the internal network directly.

Multiple networks let you enforce security boundaries at the network layer, not just at the application layer. In our example, the frontend container literally cannot reach the database — there's no route. Even if someone finds an XSS vulnerability in your frontend, they can't pivot directly to the database because Compose's networking won't allow it.

Use docker compose exec api_server sh to shell into a running container and test DNS resolution live with nslookup postgres_db or curl http://api_server:3001/health from the frontend container.

DNS resolution internals: Compose uses Docker's embedded DNS server at 127.0.0.11. When a container resolves a service name, the request goes to this DNS server, which maps the service name to the container's internal IP on the appropriate network. If a container is on multiple networks, it can resolve services on all of them. If a container is on network A but not network B, it cannot resolve services that are only on network B.

Port mapping confusion: The ports directive maps HOST:CONTAINER. Your database connects at localhost:5432 from your laptop (host port). Your API connects at postgres_db:5432 from inside Docker (container port). These are completely separate network paths. Using the host port from inside a container (localhost:5432) either fails or connects to the wrong thing.

network-debugging-commands.sh · BASH
# ── Inspecting Compose Networks ──────────────────────────────────────

# List all networks Compose created for this project
docker network ls
# Output includes: app_backend_network, app_frontend_network

# Inspect who is on the backend network and their internal IPs
docker network inspect app_backend_network

# Shell into the running API container to test connectivity
docker compose exec api_server sh

# Inside the container — test that DNS resolves the DB service name
nslookup postgres_db
# Expected output:
# Server:    127.0.0.11      <-- Docker's embedded DNS resolver
# Address:   127.0.0.11:53
# Name:      postgres_db
# Address:   172.20.0.2      <-- Internal IP (changes every run, that's why we use names)

# Test that the DB is reachable on its CONTAINER port (not the host-mapped one)
curl -v telnet://postgres_db:5432
# This should open a TCP connection — if it fails, check your 'networks' config

# Verify the frontend CANNOT reach postgres directly (network isolation working)
docker compose exec web_frontend sh
nslookup postgres_db
# Expected: nslookup: can't resolve 'postgres_db'
# This is CORRECT — it proves your network isolation is working
▶ Output
# From inside api_server container:
$ nslookup postgres_db
Server: 127.0.0.11
Address: 127.0.0.11:53

Name: postgres_db
Address: 172.20.0.2

# From inside web_frontend container:
$ nslookup postgres_db
nslookup: can't resolve 'postgres_db'
# Correct! Frontend is on a different network segment.
Mental Model
Compose Networks as Virtual LANs
Why can containers reach each other by service name without any DNS configuration?
  • Compose creates a bridge network for the project and registers each service with Docker's embedded DNS server.
  • Each container's /etc/resolv.conf points to 127.0.0.11 — Docker's DNS proxy.
  • The DNS proxy maps service names to container IPs on the correct network.
  • This is automatic — you never configure DNS manually. It is a Compose/Docker Engine feature.
📊 Production Insight
The localhost confusion is the single most common networking bug in Docker Compose. Developers write DATABASE_URL=postgres://user:pass@localhost:5432/db and wonder why the API cannot connect. localhost inside a container refers to the container itself, not the host machine. The correct URL uses the service name: postgres://user:pass@postgres_db:5432/db. This bug wastes hours because the error message (ECONNREFUSED) does not hint at the misconfiguration.
🎯 Key Takeaway
Containers communicate by service name on the internal Docker network. Never use localhost between containers. Never use the host-mapped port for container-to-container traffic. Use multiple networks to enforce tier isolation — if the frontend cannot resolve the database name, an attacker who compromises the frontend cannot pivot to the database.
Container Networking Debug Decision Tree
If: Service cannot resolve another service by name
Use: Check that both services share at least one network. Run docker compose exec <svc> nslookup <target>
If: Service resolves the name but connection is refused
Use: Check that the target service's healthcheck is passing, and that you are using the container port, not the host port.
If: Connection works on host but not from another container
Use: You are using localhost or 127.0.0.1; use the service name instead.
If: Frontend can reach the database directly (should not happen)
Use: Check network isolation; the two services should not share a network if they are meant to be isolated.

Profiles and Overrides — Running Different Stacks for Dev, Test and CI

A subtle but powerful Compose feature is profiles. You might have services that should only run in certain contexts — a database admin UI like pgAdmin in development, a mock email server in testing, or a metrics exporter in production. Profiles let you tag services and opt into them at runtime.

With docker compose --profile dev up, only services tagged with the dev profile (plus services with no profile) start. Your CI pipeline can run docker compose --profile test up --abort-on-container-exit to spin up integration test dependencies, run tests, and tear down — without launching the full dev UI tooling.

Compose file overrides are the production deployment pattern. You maintain a docker-compose.yml as the base truth and a docker-compose.prod.yml that overrides just the production-specific bits: replaces bind mounts with proper volumes, removes debug ports, adds resource limits, and switches to pre-built images instead of local builds. This avoids the 'it works on my machine' trap without duplicating hundreds of lines of YAML.

Override merge behavior: When multiple files are specified with -f, Compose deep-merges them. Scalar values (like image, restart) are overwritten by the later file. Sequence fields are not simply replaced: ports entries are concatenated, and volumes are merged by container mount path, so an empty volumes: [] does not clear the base file's mounts. To remove or replace an inherited value, use the !reset or !override YAML tags (as in build: !reset null). This is why docker compose config is essential — it shows the final merged result before you deploy.
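Compose's reset/override tags can be sketched with a minimal pair. File names base.yml and prod.yml are illustrative, and the !reset/!override tags require a Compose release recent enough to support them:

```yaml
# base.yml
services:
  api:
    ports:
      - "3001:3001"
    volumes:
      - ./api:/app

# prod.yml, applied with: docker compose -f base.yml -f prod.yml config
services:
  api:
    ports: !override           # replace the inherited list wholesale
      - "8080:3001"
    volumes: !reset []         # drop the inherited bind mount entirely
```

Running docker compose config on the pair shows the merged result, which is the only reliable way to confirm what will actually deploy.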

Profile use cases beyond dev/prod:
  • Testing: profile test for mock services (WireMock, LocalStack)
  • Monitoring: profile monitoring for Prometheus, Grafana, cAdvisor
  • Debugging: profile debug for a debugger sidecar or log aggregator
  • Migration: profile migration for a one-shot database migration container

docker-compose.prod.yml · YAML
# docker-compose.prod.yml — Production overrides
# Usage: docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
#
# This file MERGES with docker-compose.yml — only specify what changes.

version: '3.9'

services:

  api_server:
    # In prod, use a pre-built image from your registry instead of local build
    image: ghcr.io/your-org/api-server:${APP_VERSION:-latest}
    build: !reset null    # Remove the build config from the base file
    volumes: !reset []    # Reset removes the dev bind mount — no hot reload in prod
    environment:
      NODE_ENV: production
    deploy:
      resources:
        limits:
          cpus: '0.50'      # Cap CPU usage per container instance
          memory: 512M      # Cap memory — prevents runaway leaks taking down the host
        reservations:
          memory: 256M
    logging:
      driver: "json-file"   # Structured logs for log aggregators
      options:
        max-size: "10m"     # Rotate logs — don't fill the disk
        max-file: "5"

  web_frontend:
    image: ghcr.io/your-org/web-frontend:${APP_VERSION:-latest}
    build: !reset null
    volumes: !reset []

  # pgAdmin only runs in dev — it's tagged with 'dev' profile
  pgadmin:
    image: dpage/pgadmin4:latest
    profiles:
      - dev               # Only starts when: docker compose --profile dev up
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@local.dev
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_PASSWORD:-localdevonly}
    networks:
      - backend_network
▶ Output
# Running with production overrides:
$ docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

[+] Running 3/3
✔ Container app-postgres_db-1 Healthy
✔ Container app-api_server-1 Started # Using registry image, no bind mount
✔ Container app-web_frontend-1 Started # Using registry image

# Note: pgadmin did NOT start — it requires --profile dev

# Running dev stack with pgAdmin:
$ docker compose --profile dev up
[+] Running 4/4
✔ Container app-postgres_db-1 Healthy
✔ Container app-api_server-1 Started
✔ Container app-web_frontend-1 Started
✔ Container app-pgadmin-1 Started # Now included
Mental Model
Override Files as Inheritance
Why use override files instead of separate docker-compose.dev.yml and docker-compose.prod.yml?
  • Separate files duplicate 80% of the configuration — changes must be applied to both.
  • Override files share the base and only specify differences — single source of truth.
  • Override files can be combined: -f base.yml -f monitoring.yml -f prod.yml
  • docker compose config shows the merged result — no surprises at deploy time.
📊 Production Insight
The merge behavior for sequence fields catches many teams off guard: ports entries are concatenated and volumes are merged by container mount path, so a plain volumes: [] in an override leaves the base file's bind mounts in place rather than removing them. To actually remove inherited entries, use volumes: !reset [], or !override to replace a list wholesale. Always run docker compose config before deploying to verify the merged result.
🎯 Key Takeaway
Profiles let you tag services for conditional startup. Override files let you maintain one base config and swap environment-specific settings. Always run docker compose config to verify the merged result before deploying. Merge behavior is the most common source of surprise: use !reset or !override when you need to remove or replace inherited entries rather than add to them.
When to Use Profiles vs Override Files
If: Service should only run in certain contexts (dev, test, monitoring)
Use: profiles — tag the service with profiles: [dev] and start with --profile dev
If: Same services but different configuration per environment (dev vs prod)
Use: override files — base docker-compose.yml + docker-compose.prod.yml
If: Both — different services AND different config per environment
Use: both — profiles for optional services, override files for environment-specific config
If: CI pipeline needs to run tests then tear down
Use: a test profile with docker compose --profile test up --abort-on-container-exit
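As a concrete example of the first row, here is a minimal sketch of tagging a dev-only service with a profile (service and image names follow the article's earlier examples):

```yaml
services:
  api_server:
    build: .              # always starts
  pgadmin:
    image: dpage/pgadmin4
    profiles: [dev]       # skipped unless --profile dev is passed
```

With this file, docker compose up starts only api_server; docker compose --profile dev up starts both.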
🗂 docker run vs Docker Compose
Manual container management versus declarative multi-container orchestration.
Aspect | docker run (manual) | Docker Compose
Setup complexity | One command per container, manual flags every time | Single YAML file, one command to start everything
Networking | Must create networks manually and attach each container | Auto-creates a shared network; service names resolve via DNS
Startup order | You remember the order yourself | Declarative depends_on with optional health check conditions
Environment config | Long --env-file or -e flags per command | Built-in .env file support and variable interpolation
Volume management | Explicit -v flags, easy to forget or mistype | Named volumes and bind mounts declared in YAML
Reproducibility | Low — you have to document the exact commands | High — the file IS the documentation
Scaling a service | Run more containers manually, manage names yourself | docker compose up --scale api_server=3 (basic, no LB)
Best suited for | Quick one-off containers, learning Docker basics | Local dev, integration testing, simple multi-service deployments
Not suited for | N/A — even single containers benefit from Compose | Large-scale production orchestration (use Kubernetes for that)
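To make the comparison concrete, here is a sketch of a two-container stack as manual docker run commands versus the equivalent Compose file (image names are placeholders, not from the article):

```yaml
# Manual equivalent (every flag retyped each time):
#   docker network create app_net
#   docker volume create pgdata
#   docker run -d --name postgres_db --network app_net \
#     -v pgdata:/var/lib/postgresql/data postgres:16
#   docker run -d --name api_server --network app_net -p 8080:8080 my-api:latest
#
# Compose equivalent: network creation, naming, and wiring are implicit.
services:
  postgres_db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
  api_server:
    image: my-api:latest
    ports:
      - "8080:8080"
volumes:
  pgdata:
```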

🎯 Key Takeaways

  • Docker Compose is a declaration of the desired state of your entire app stack — you describe it once in YAML and docker compose up handles creation, networking, and ordering every time.
  • depends_on controls start order, not start readiness — always pair it with a healthcheck and condition: service_healthy to prevent race conditions when your app starts before the database is accepting connections.
  • Containers communicate by service name on the internal Docker network — never use localhost between containers, and never use the host-mapped port for container-to-container traffic.
  • Use multiple Compose files (-f flag) to maintain one base config and override only what changes between environments — cleaner and less error-prone than separate files for dev and prod.
  • Never hardcode secrets in Compose files or Dockerfiles. Use .env files for local dev (in .gitignore) and secrets managers for production. Pre-commit hooks catch accidental secret commits.

⚠ Common Mistakes to Avoid

    Using `depends_on` without a health check
    Symptom

    Your API crashes on startup with 'ECONNREFUSED' to the database because Postgres isn't ready yet, even though depends_on is set.

    Fix

    Add a healthcheck block to your database service using pg_isready, then change your API's depends_on to condition: service_healthy instead of just listing the service name.
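A minimal sketch of that fix, with the user and service names assumed from the article's earlier examples:

```yaml
services:
  postgres_db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  api_server:
    build: .
    depends_on:
      postgres_db:
        condition: service_healthy   # wait for the healthcheck, not just container start
```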

    Connecting to the wrong port in DATABASE_URL
    Symptom

    API can't reach the database using localhost:5432 or the host-mapped port.

    Fix

    Inside the Docker network, containers talk to each other via service name and the container port, not the host port. Your DATABASE_URL should be postgres://user:pass@postgres_db:5432/dbname — 'postgres_db' is the service name, 5432 is the container's internal port.

    Bind-mounting node_modules from host into the container
    Symptom

    The app works on the developer's Mac but crashes in CI (Linux) with native module errors, or npm install inside the container gets overwritten on every restart.

    Fix

    Add an anonymous volume for node_modules (- /app/node_modules) listed after the bind mount. Docker then uses the container's own node_modules (compiled for Linux inside the image) rather than the host's version.
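In Compose YAML the fix looks like this (paths assume the app lives in ./api on the host and is mounted at /app):

```yaml
services:
  api_server:
    build: ./api
    volumes:
      - ./api:/app          # bind mount: live code from the host
      - /app/node_modules   # anonymous volume: shadows the host's node_modules
```

Because the anonymous volume's mount path is more specific, it masks /app/node_modules inside the bind mount, so the container keeps the Linux-built modules installed during the image build.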

    Hardcoding secrets in docker-compose.yml
    Symptom

    Passwords appear in git history, CI logs, or docker compose config output. Anyone with repo access can extract credentials.

    Fix

    Use .env files (in .gitignore) for local dev, platform secrets for CI, and secrets managers for production. Add pre-commit hooks (git-secrets, truffleHog) to catch accidental commits.

    Using the default flat network for all services
    Symptom

    Frontend can reach the database directly, bypassing the API's access controls. A frontend XSS vulnerability becomes a database compromise.

    Fix

    Define explicit networks per tier (frontend_network, backend_network) and assign services to only the networks they need.
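A minimal sketch of per-tier networks using the article's service names:

```yaml
services:
  web_frontend:
    networks: [frontend_network]
  api_server:
    networks: [frontend_network, backend_network]   # bridges the two tiers
  postgres_db:
    networks: [backend_network]   # web_frontend cannot resolve or reach this
networks:
  frontend_network:
  backend_network:
```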

    Misjudging how override files merge lists
    Symptom

    Dev-only debug ports stay published in production because ports entries from both files are concatenated, not replaced. Meanwhile volumes: [] in the override removes nothing, so dev bind mounts survive into the merged config.

    Fix

    Run docker compose config to verify the merged result before deploying. Remember the per-field rules: ports and expose concatenate, volumes merge by container target path, and an empty list removes nothing. Use the !reset YAML tag (Compose v2.24+) to explicitly clear an inherited attribute.

Interview Questions on This Topic

  • Q (Senior): What's the difference between depends_on and a health check condition in Docker Compose, and when would a plain depends_on cause a race condition in production?
    depends_on only controls container start order — it does not wait for the service to be ready. A plain depends_on: postgres_db starts your API as soon as the Postgres container starts, not when Postgres is actually accepting connections (which takes 2-8 seconds). The race condition: API connects, Postgres isn't ready yet, API crashes. Fix: add a healthcheck to Postgres using pg_isready, then use condition: service_healthy in depends_on.
  • Q (Senior): If you have two services in a Compose file and they can't communicate with each other by service name, what would you check first, and can you walk me through how Compose DNS actually resolves service names?
    First check: are both services on the same network? By default Compose creates one shared bridge network for all services. If you define custom networks per tier, a service only resolves names of services on a shared network. Compose uses Docker's embedded DNS at 127.0.0.11 — each container's /etc/resolv.conf points there, and it maps service names to internal IPs. Test with: docker compose exec <service> nslookup <target>.
  • Q (Senior): You have a docker-compose.yml that works perfectly for local development with bind mounts and debug ports. How would you adapt it for production deployment without duplicating the entire file?
    Use override files: keep docker-compose.yml as the base and create docker-compose.prod.yml that only specifies what changes — replace bind mounts with named volumes, remove debug ports, switch from build: to image: with a registry tag, add resource limits and logging config. Deploy with: docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d. Always verify with docker compose config before deploying to see the fully merged result.
  • Q (Junior): Explain the difference between named volumes, bind mounts, and anonymous volumes. When would you use each? What happens to data in each when you run docker compose down vs docker compose down -v?
    Named volumes (declared under top-level volumes:) are managed by Docker and survive docker compose down; only down -v destroys them. Use them for database data. Bind mounts (a host path mapped to a container path) expose a host directory inside the container, which is great for hot-reloading code in development; neither down command touches the host directory. Anonymous volumes (- /app/node_modules) are created by Docker without a name; a plain down leaves them dangling, and down -v removes them along with named volumes. Use them to isolate container-internal directories (like node_modules) from bind-mounted host directories.
  • Q (Senior): Your CI pipeline runs docker compose config and sees the DATABASE_URL with the password in plain text. How do you prevent secrets from leaking into CI logs and build artifacts?
    Three layers: (1) In CI, inject secrets from the platform secret store (GitHub Secrets, GitLab CI Variables) as environment variables — never hardcode in .env files committed to the repo. (2) Mask the secret variable in CI config so it never appears in logs. (3) In docker-compose.yml, reference as ${DATABASE_PASSWORD} — the resolved value won't appear in the compose file itself. Add docker compose config output to CI log masking. For production, use a secrets manager (AWS Secrets Manager, Vault) instead of environment variables entirely.
  • Q (Senior): You have a Compose file with 8 services. In production, 2 should not run. In development, 3 additional services (pgAdmin, mock email, hot-reload proxy) should run. How do you structure this without maintaining multiple nearly-identical files?
    Use profiles + override files together. Tag dev-only services with profiles: [dev] in the base file — they only start when you pass --profile dev. Tag prod-excluded services with profiles: [dev] as well. For config differences (bind mounts vs volumes, debug flags), create docker-compose.prod.yml override. Result: docker compose --profile dev up for local, docker compose -f docker-compose.yml -f docker-compose.prod.yml up for production. Run docker compose config to validate either combination before deploying.

Frequently Asked Questions

What is the difference between Docker Compose and Kubernetes?

Docker Compose is designed for defining and running multi-container apps on a single host — it's perfect for local development and simple deployments. Kubernetes is a full container orchestration platform that manages containers across a cluster of machines, handles auto-scaling, self-healing, rolling deployments, and much more. Start with Compose; graduate to Kubernetes when you need to scale across multiple servers or need enterprise-grade reliability.

Does `docker compose down` delete my database data?

It depends on how you defined your volume. docker compose down stops and removes containers and networks, but named volumes (declared under the top-level volumes: key) survive by default. Your data is safe. Only docker compose down -v removes volumes; it deletes named volumes and any anonymous volumes attached to the containers, while a plain down leaves anonymous volumes dangling. Use named volumes for any data you care about.

Can I use Docker Compose in production?

Yes, for small to medium deployments on a single server, Compose is perfectly valid in production — many successful apps run this way. The limitation is that it manages containers on one machine only. If you need to spread load across multiple servers, roll out zero-downtime deployments, or auto-scale based on traffic, you'll need Kubernetes or Docker Swarm. Use docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d with a production override file to harden your config.

How do I share environment variables between services without repeating them?

Docker Compose reads from a .env file in the same directory as docker-compose.yml and substitutes ${VARIABLE_NAME} placeholders. Define each variable once in .env and reference it in multiple services. For variables that should not be in the .env file (secrets), use Docker secrets (Swarm mode) or inject from your platform's secrets manager at deploy time. You can also use x- anchors in YAML to define reusable blocks.
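A minimal sketch combining .env interpolation with a YAML anchor; the variable names and database name are illustrative, not from the article:

```yaml
# .env (gitignored):
#   POSTGRES_USER=app
#   POSTGRES_PASSWORD=<set locally, never committed>

x-db-env: &db_env
  POSTGRES_USER: ${POSTGRES_USER}
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

services:
  postgres_db:
    image: postgres:16
    environment: *db_env
  api_server:
    build: .
    environment:
      <<: *db_env   # YAML merge key reuses the shared block
      DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres_db:5432/app
```

Each variable is defined once in .env and interpolated wherever it is referenced; the anchor avoids repeating the shared block per service.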

What happens if I run docker compose up twice?

Compose is idempotent. Running up a second time detects that the containers already exist and are running; it does not create duplicates. If you changed the Compose file, Compose recreates only the affected containers. Use docker compose up --force-recreate to recreate all containers even if nothing changed.

Naren (Founder & Author)

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
