Docker Compose Explained: Multi-Container Apps, Networking and Real-World Patterns
- You declare desired state (services, volumes, networks) in docker-compose.yml
- Compose translates that into container create/start/stop commands via the Docker API
- Service names become DNS entries on an auto-created bridge network
- services: each entry maps to one container
- volumes: named storage that survives container lifecycle
- networks: control which services can reach each other
- depends_on with healthcheck: startup ordering with readiness gates
Production Debug Guide
Systematic debugging paths for common Compose failures.
- Container exits immediately or crashes in a restart loop: run `docker compose logs --tail 100 <service>` and `docker compose ps -a`.
- Service cannot connect to database or another service: run `docker compose exec <service> nslookup <target-service>` and `docker network inspect <project>_default`.
- Port already in use error on `docker compose up`: run `lsof -i :<port>` and `docker compose ps -a | grep <port>`.
- Environment variables not being picked up by a service: run `docker compose config | grep -A 20 <service>` and `docker compose exec <service> env | grep <VARIABLE>`.
- Named volume data is corrupted or needs to be reset: run `docker volume inspect <project>_postgres_data`, and back the data up first with `docker run --rm -v <project>_postgres_data:/data -v $(pwd):/backup alpine tar czf /backup/volume-backup.tar.gz /data`.
Every real-world application is more than one moving part. Your API needs a database. That database needs a volume so data survives restarts. Your frontend needs to know the API's address. Managing each with individual docker run commands is a copy-paste nightmare that breaks when a new developer joins or you move to a different machine.
Docker Compose is a declarative orchestrator for multi-container applications on a single host. You describe the desired state in YAML — services, networks, volumes, environment variables, health checks — and Compose handles creation, networking, and lifecycle ordering. The file is the documentation. The file is the deployment. There is no gap between what you describe and what runs.
The critical misconception: Compose is not just a convenience wrapper. It manages DNS resolution between services, network isolation between tiers, volume lifecycle across container restarts, and startup ordering with readiness gates. Understanding these mechanisms is what separates a working compose file from a production-grade one.
The Anatomy of a docker-compose.yml File (and Why Every Key Matters)
A Compose file is a declaration of intent, not a script. You're telling Docker 'here is the world I want' — Compose figures out how to build it. That mental shift matters: you're not writing steps, you're describing a state.
Every Compose file has a few top-level keys. services is the main one — each entry is a container. volumes defines named storage that outlives containers. networks lets you control which services can see each other (by default Compose creates one shared network, which is great for getting started but dangerous if you want to isolate, say, your admin panel from your public API).
The depends_on key is widely misunderstood. It controls start order, not readiness. Your app container will start after the database container starts, but not after Postgres is actually ready to accept connections. That's a real gotcha we'll cover shortly. For readiness you need health checks.
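A minimal sketch of that pairing, assuming a Postgres service; the service names and the app image here are illustrative, not from the example app that follows:

```yaml
# Sketch: gate app startup on database readiness, not just container start.
services:
  db:
    image: postgres:15-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]  # exits 0 only once Postgres accepts connections
      interval: 5s
      timeout: 3s
      retries: 10
  app:
    image: my-app:latest             # placeholder image
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck to PASS, not just for the container to start
```

With a plain `depends_on: [db]`, `app` would start as soon as the `db` container exists, typically seconds before Postgres can accept a connection.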
The build key is your escape hatch from public images — point it at a directory with a Dockerfile and Compose builds the image itself before starting the container. This is how you develop locally with live code while still using Compose for orchestration.
Service configuration depth: Each service can specify image or build (mutually exclusive in intent — image pulls from a registry, build creates locally). The restart policy controls behavior on container exit: no (default), always, on-failure, unless-stopped. In production, unless-stopped is almost always correct — it restarts on crash but respects your manual docker compose stop.
Volume lifecycle: Named volumes declared under the top-level volumes: key are managed by Docker. They survive docker compose down but are destroyed by docker compose down -v. Bind mounts (host:path syntax) are managed by the host filesystem. Anonymous volumes (- /app/node_modules) are created by Docker and removed when the container is removed.
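The three volume types side by side, as a sketch (the service name, image, and paths are illustrative):

```yaml
services:
  app:
    image: node:20-alpine
    volumes:
      - app_data:/var/lib/app    # named volume: survives `down`, removed only by `down -v`
      - ./src:/app/src           # bind mount: a host directory, managed by the host filesystem
      - /app/node_modules        # anonymous volume: created by Docker, removed with the container

volumes:
  app_data:                      # declared at the top level so Docker manages its lifecycle
```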
```yaml
# docker-compose.yml — A full three-tier web application
# Services: React frontend, Node/Express API, PostgreSQL database

version: '3.9'

services:
  # ── PostgreSQL Database ──────────────────────────────────────────
  postgres_db:
    image: postgres:15-alpine      # Use the slim Alpine variant to keep image size small
    restart: unless-stopped        # Restart on crash, but respect manual `docker compose stop`
    environment:
      POSTGRES_USER: ${DB_USER}    # Pulled from .env file — never hardcode credentials
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - postgres_data:/var/lib/postgresql/data              # Named volume: data persists across `down`/`up`
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql  # Seed script runs on first start
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]  # Actual readiness probe
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend_network            # Only the API can reach the DB — frontend cannot

  # ── Node.js / Express API ────────────────────────────────────────
  api_server:
    build:
      context: ./api               # Build from local Dockerfile in the /api directory
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - "3001:3001"                # Expose API to host for direct testing with Postman etc.
    environment:
      NODE_ENV: development
      DATABASE_URL: postgres://${DB_USER}:${DB_PASSWORD}@postgres_db:5432/${DB_NAME}
      #                                                   ^^^^^^^^^^^
      # 'postgres_db' is the SERVICE NAME — Compose's built-in DNS resolves it automatically
      JWT_SECRET: ${JWT_SECRET}
    depends_on:
      postgres_db:
        condition: service_healthy # Wait until the healthcheck PASSES, not just starts
    volumes:
      - ./api:/app                 # Bind mount for hot-reload in development
      - /app/node_modules          # Anonymous volume: keep container's node_modules, don't overwrite with host
    networks:
      - backend_network
      - frontend_network

  # ── React Frontend (served via Nginx) ────────────────────────────
  web_frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - "80:80"                    # Expose port 80 to the host machine
    environment:
      REACT_APP_API_URL: http://api_server:3001  # Container-to-container via service name
    depends_on:
      - api_server
    networks:
      - frontend_network           # Frontend can only see the API, never the raw database

# ── Named Volumes ────────────────────────────────────────────────────
volumes:
  postgres_data:                   # Managed by Docker — survives `docker compose down`
                                   # Destroyed only by `docker compose down -v`

# ── Networks ─────────────────────────────────────────────────────────
networks:
  backend_network:                 # Isolated: only api_server and postgres_db
    driver: bridge
  frontend_network:                # Isolated: only web_frontend and api_server
    driver: bridge
```
```
$ docker compose up -d
 ✔ Network app_backend_network    Created
 ✔ Network app_frontend_network   Created
 ✔ Container app-postgres_db-1    Healthy
 ✔ Container app-api_server-1     Started
 ✔ Container app-web_frontend-1   Started
```
- Declarations are idempotent — running up twice does not create duplicate containers.
- The file is the documentation — no gap between what is described and what runs.
- Compose can diff desired state against actual state and converge incrementally.
- Multiple files can merge — base config + environment overrides without duplication.
Environment Variables and Secrets — The Right Way vs. The Dangerous Way
Hardcoding passwords directly in docker-compose.yml is one of the most common and dangerous mistakes in real projects. If that file ever gets committed to a public repo — even accidentally — your credentials are exposed permanently (git history remembers everything).
The correct pattern is a .env file alongside your Compose file. Docker Compose automatically reads .env and substitutes ${VARIABLE_NAME} placeholders. You commit a .env.example with dummy values to your repo, and each developer (and your CI system) fills in the real .env locally. The actual .env lives in .gitignore.
For production, environment variables should come from your hosting platform's secret manager (AWS Secrets Manager, Doppler, HashiCorp Vault) injected at runtime — never baked into an image or a file on disk.
You can also use multiple Compose files with docker compose -f docker-compose.yml -f docker-compose.prod.yml up. The second file merges and overrides the first. This is how you maintain one base config and swap out dev-specific settings (like bind mounts and debug ports) for production-hardened ones without duplicating the whole file.
Secret exposure vectors:
- Hardcoded in docker-compose.yml → committed to git → visible in git history forever
- ENV in Dockerfile → baked into every image layer → visible in docker history and docker inspect
- .env file not in .gitignore → committed to git → same as hardcoded
- docker compose config output includes resolved secrets → if captured in CI logs, credentials are exposed

The right pattern for each environment:
- Local development: .env file, in .gitignore, with a .env.example committed as a template
- CI/CD: secrets injected from CI platform secrets (GitHub Secrets, GitLab CI variables)
- Production: secrets from a secrets manager (AWS Secrets Manager, Vault), mounted as files or injected as environment variables at deploy time
```
# .env.example — commit this to git as a template
# Copy to .env and fill in real values — NEVER commit .env itself

# Database credentials
DB_USER=your_db_username
DB_PASSWORD=your_secure_password_here
DB_NAME=your_app_db

# Auth
JWT_SECRET=replace_with_a_long_random_string_at_least_32_chars

# App config
NODE_ENV=development
API_PORT=3001
```
```
# Running `docker compose config` will show the fully resolved
# compose file with all variables substituted:
#
# $ docker compose config
# services:
#   postgres_db:
#     environment:
#       POSTGRES_USER: your_db_username
#       POSTGRES_PASSWORD: your_secure_password_here
# ...and so on
```
- No rotation — changing a secret requires redeploying with a new .env file.
- No audit trail — anyone with filesystem access can read the secret.
- No revocation — if a secret is compromised, you cannot invalidate it without changing it everywhere.
- No access control — the file is readable by any process running as the same user.
- Secrets managers (Vault, AWS Secrets Manager) solve all four problems.
Networking Between Containers — How Compose DNS Actually Works
This is where most intermediate developers have a fuzzy mental model, and it costs them hours of debugging. When Compose starts your services, it creates a virtual network and registers each service under its own DNS name — that name is simply the service name you defined in the YAML.
So when your API container wants to connect to Postgres, it doesn't use localhost (that points to itself) and it doesn't use an IP address (those change). It uses postgres_db:5432 — the service name and the container port, not the host-mapped port. This is a huge source of confusion: ports: '5432:5432' exposes port 5432 to your laptop. Other containers don't need that — they communicate on the internal network directly.
Multiple networks let you enforce security boundaries at the network layer, not just at the application layer. In our example, the frontend container literally cannot reach the database — there's no route. Even if someone finds an XSS vulnerability in your frontend, they can't pivot directly to the database because Compose's networking won't allow it.
Use docker compose exec api_server sh to shell into a running container and test DNS resolution live with nslookup postgres_db or curl http://api_server:3001/health from the frontend container.
DNS resolution internals: Compose uses Docker's embedded DNS server at 127.0.0.11. When a container resolves a service name, the request goes to this DNS server, which maps the service name to the container's internal IP on the appropriate network. If a container is on multiple networks, it can resolve services on all of them. If a container is on network A but not network B, it cannot resolve services that are only on network B.
Port mapping confusion: The ports directive maps HOST:CONTAINER. Your database connects at localhost:5432 from your laptop (host port). Your API connects at postgres_db:5432 from inside Docker (container port). These are completely separate network paths. Using the host port from inside a container (localhost:5432) either fails or connects to the wrong thing.
```
# ── Inspecting Compose Networks ──────────────────────────────────────

# List all networks Compose created for this project
docker network ls
# Output includes: app_backend_network, app_frontend_network

# Inspect who is on the backend network and their internal IPs
docker network inspect app_backend_network

# Shell into the running API container to test connectivity
docker compose exec api_server sh

# Inside the container — test that DNS resolves the DB service name
nslookup postgres_db
# Expected output:
#   Server:    127.0.0.11      <-- Docker's embedded DNS resolver
#   Address:   127.0.0.11:53
#   Name:      postgres_db
#   Address:   172.20.0.2      <-- Internal IP (changes every run, that's why we use names)

# Test that the DB is reachable on its CONTAINER port (not the host-mapped one)
curl -v telnet://postgres_db:5432
# This should open a TCP connection — if it fails, check your 'networks' config

# Verify the frontend CANNOT reach postgres directly (network isolation working)
docker compose exec web_frontend sh
nslookup postgres_db
# Expected: nslookup: can't resolve 'postgres_db'
# This is CORRECT — it proves your network isolation is working
```
```
$ nslookup postgres_db
Server:   127.0.0.11
Address:  127.0.0.11:53

Name:     postgres_db
Address:  172.20.0.2

# From inside the web_frontend container:
$ nslookup postgres_db
nslookup: can't resolve 'postgres_db'
# Correct! Frontend is on a different network segment.
```
- Compose creates a bridge network for the project and registers each service with Docker's embedded DNS server.
- Each container's /etc/resolv.conf points to 127.0.0.11 — Docker's DNS proxy.
- The DNS proxy maps service names to container IPs on the correct network.
- This is automatic — you never configure DNS manually. It is a Compose/Docker Engine feature.
Profiles and Overrides — Running Different Stacks for Dev, Test and CI
A subtle but powerful Compose feature is profiles. You might have services that should only run in certain contexts — a database admin UI like pgAdmin in development, a mock email server in testing, or a metrics exporter in production. Profiles let you tag services and opt into them at runtime.
With docker compose --profile dev up, only services tagged with the dev profile (plus services with no profile) start. Your CI pipeline can run docker compose --profile test up --abort-on-container-exit to spin up integration test dependencies, run tests, and tear down — without launching the full dev UI tooling.
Compose file overrides are the production deployment pattern. You maintain a docker-compose.yml as the base truth and a docker-compose.prod.yml that overrides just the production-specific bits: replaces bind mounts with proper volumes, removes debug ports, adds resource limits, and switches to pre-built images instead of local builds. This avoids the 'it works on my machine' trap without duplicating hundreds of lines of YAML.
Override merge behavior: When multiple files are specified with -f, Compose deep-merges them. Mappings (like environment and labels) merge per key, with the override file winning on conflicts. Sequences (like ports and volumes) are merged or appended, not replaced — to drop an entry from the base file you need the !reset or !override YAML tags, which is exactly why the production override below uses build: !reset null. Scalar values (like image, restart) are simply overwritten. This is why docker compose config is essential — it shows the final merged result before you deploy.
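A sketch of those merge semantics, assuming a recent Compose v2 that supports the `!override` tag (the service name, image tags, and ports are illustrative):

```yaml
# base: docker-compose.yml (fragment)
services:
  api:
    image: api:dev
    ports:
      - "3001:3001"
      - "9229:9229"      # debug port, dev only

# override: docker-compose.prod.yml (fragment)
services:
  api:
    image: api:1.4.2     # scalar: overwritten
    ports: !override     # without !override, sequences are APPENDED, not replaced
      - "3001:3001"

# merged result, as shown by `docker compose config`:
services:
  api:
    image: api:1.4.2
    ports:
      - "3001:3001"      # debug port is gone only because of !override
```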
Profile use cases beyond dev/prod:
- Testing: profile test for mock services (WireMock, LocalStack)
- Monitoring: profile monitoring for Prometheus, Grafana, cAdvisor
- Debugging: profile debug for a debugger sidecar or log aggregator
- Migration: profile migration for a one-shot database migration container
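The one-shot migration pattern from that list can be sketched like this — the `migrate/migrate` tool, paths, and command are illustrative choices, not part of the example app:

```yaml
# Sketch: a one-shot migration container behind a profile.
services:
  migrate:
    image: migrate/migrate:latest    # example migration tool; swap in your own
    profiles:
      - migration                    # never starts with a plain `docker compose up`
    command: ["-path", "/migrations", "-database", "${DATABASE_URL}", "up"]
    volumes:
      - ./migrations:/migrations
    networks:
      - backend_network              # must share a network with the database
    depends_on:
      postgres_db:
        condition: service_healthy

# Run it explicitly; it applies migrations and exits:
#   docker compose --profile migration run --rm migrate
```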
```yaml
# docker-compose.prod.yml — Production overrides
# Usage: docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
#
# This file MERGES with docker-compose.yml — only specify what changes.

version: '3.9'

services:
  api_server:
    # In prod, use a pre-built image from your registry instead of local build
    image: ghcr.io/your-org/api-server:${APP_VERSION:-latest}
    build: !reset null               # Remove the build config from the base file
    volumes: !reset []               # Remove the dev bind mounts — sequences merge by
                                     # default, so a plain [] would NOT drop them
    environment:
      NODE_ENV: production
    deploy:
      resources:
        limits:
          cpus: '0.50'               # Cap CPU usage per container instance
          memory: 512M               # Cap memory — prevents runaway leaks taking down the host
        reservations:
          memory: 256M
    logging:
      driver: "json-file"            # Structured logs for log aggregators
      options:
        max-size: "10m"              # Rotate logs — don't fill the disk
        max-file: "5"

  web_frontend:
    image: ghcr.io/your-org/web-frontend:${APP_VERSION:-latest}
    build: !reset null
    volumes: !reset []

  # pgAdmin only runs in dev — it's tagged with the 'dev' profile
  pgadmin:
    image: dpage/pgadmin4:latest
    profiles:
      - dev                          # Only starts when: docker compose --profile dev up
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@local.dev
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_PASSWORD:-localdevonly}
    networks:
      - backend_network
```
```
$ docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
[+] Running 3/3
 ✔ Container app-postgres_db-1    Healthy
 ✔ Container app-api_server-1     Started   # Using registry image, no bind mount
 ✔ Container app-web_frontend-1   Started   # Using registry image

# Note: pgadmin did NOT start — it requires --profile dev

# Running the dev stack with pgAdmin:
$ docker compose --profile dev up
[+] Running 4/4
 ✔ Container app-postgres_db-1    Healthy
 ✔ Container app-api_server-1     Started
 ✔ Container app-web_frontend-1   Started
 ✔ Container app-pgadmin-1        Started   # Now included
```
- Separate files duplicate 80% of the configuration — changes must be applied to both.
- Override files share the base and only specify differences — single source of truth.
- Override files can be combined: -f base.yml -f monitoring.yml -f prod.yml
- docker compose config shows the merged result — no surprises at deploy time.
| Aspect | docker run (manual) | Docker Compose |
|---|---|---|
| Setup complexity | One command per container, manual flags every time | Single YAML file, one command to start everything |
| Networking | Must create networks manually and attach each container | Auto-creates a shared network; service names resolve via DNS |
| Startup order | You remember the order yourself | Declarative depends_on with optional health check conditions |
| Environment config | Long --env-file or -e flags per command | Built-in .env file support and variable interpolation |
| Volume management | Explicit -v flags, easy to forget or mistype | Named volumes and bind mounts declared in YAML |
| Reproducibility | Low — you have to document the exact commands | High — the file IS the documentation |
| Scaling a service | Run more containers manually, manage names yourself | docker compose up --scale api_server=3 (basic, no LB) |
| Best suited for | Quick one-off containers, learning Docker basics | Local dev, integration testing, simple multi-service deployments |
| Not suited for | N/A — even single containers benefit from Compose | Large-scale production orchestration (use Kubernetes for that) |
🎯 Key Takeaways
- Docker Compose is a declaration of the desired state of your entire app stack — you describe it once in YAML and `docker compose up` handles creation, networking, and ordering every time.
- `depends_on` controls start order, not start readiness — always pair it with a `healthcheck` and `condition: service_healthy` to prevent race conditions when your app starts before the database is accepting connections.
- Containers communicate by service name on the internal Docker network — never use `localhost` between containers, and never use the host-mapped port for container-to-container traffic.
- Use multiple Compose files (the `-f` flag) to maintain one base config and override only what changes between environments — cleaner and less error-prone than separate files for dev and prod.
- Never hardcode secrets in Compose files or Dockerfiles. Use `.env` files for local dev (in `.gitignore`) and secrets managers for production. Pre-commit hooks catch accidental secret commits.
Interview Questions on This Topic
- Q (Senior): What's the difference between `depends_on` and a health check condition in Docker Compose, and when would a plain `depends_on` cause a race condition in production?
- Q (Senior): If you have two services in a Compose file and they can't communicate with each other by service name, what would you check first — and can you walk me through how Compose DNS actually resolves service names?
- Q (Senior): You have a docker-compose.yml that works perfectly for local development with bind mounts and debug ports. How would you adapt it for production deployment without duplicating the entire file?
- Q (Junior): Explain the difference between named volumes, bind mounts, and anonymous volumes. When would you use each? What happens to data in each when you run docker compose down vs docker compose down -v?
- Q (Senior): Your CI pipeline runs docker compose config and sees the DATABASE_URL with the password in plain text. How do you prevent secrets from leaking into CI logs and build artifacts?
- Q (Senior): You have a Compose file with 8 services. In production, 2 should not run. In development, 3 additional services (pgAdmin, mock email, hot-reload proxy) should run. How do you structure this without maintaining multiple nearly-identical files?
Frequently Asked Questions
What is the difference between Docker Compose and Kubernetes?
Docker Compose is designed for defining and running multi-container apps on a single host — it's perfect for local development and simple deployments. Kubernetes is a full container orchestration platform that manages containers across a cluster of machines, handles auto-scaling, self-healing, rolling deployments, and much more. Start with Compose; graduate to Kubernetes when you need to scale across multiple servers or need enterprise-grade reliability.
Does `docker compose down` delete my database data?
It depends on how you defined your volume. docker compose down stops and removes containers and networks, but named volumes (declared under the top-level volumes: key) survive by default. Your data is safe. Only docker compose down -v removes named volumes. Anonymous volumes created by -v /some/path syntax are also removed by down. Use named volumes for any data you care about.
Can I use Docker Compose in production?
Yes, for small to medium deployments on a single server, Compose is perfectly valid in production — many successful apps run this way. The limitation is that it manages containers on one machine only. If you need to spread load across multiple servers, roll out zero-downtime deployments, or auto-scale based on traffic, you'll need Kubernetes or Docker Swarm. Use docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d with a production override file to harden your config.
How do I share environment variables between services without repeating them?
Docker Compose reads from a .env file in the same directory as docker-compose.yml and substitutes ${VARIABLE_NAME} placeholders. Define each variable once in .env and reference it in multiple services. For variables that should not be in the .env file (secrets), use Docker secrets (Swarm mode) or inject from your platform's secrets manager at deploy time. You can also use x- anchors in YAML to define reusable blocks.
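The x- extension-field pattern mentioned above looks like this; it is a sketch with illustrative service and image names:

```yaml
# Define a reusable block once with an extension field + YAML anchor
x-common-env: &common-env
  TZ: UTC
  LOG_LEVEL: info

services:
  api:
    image: api:latest          # illustrative image names
    environment:
      <<: *common-env          # merge the shared variables into this mapping
      ROLE: api                # service-specific keys can be added alongside
  worker:
    image: worker:latest
    environment:
      <<: *common-env
      ROLE: worker
```

Keys prefixed with `x-` are ignored by Compose itself, so the anchor is pure YAML reuse with no runtime effect of its own.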
What happens if I run docker compose up twice?
Compose is idempotent. Running up a second time detects that the containers already exist and are running — it does not create duplicates. If you changed the Compose file, Compose recreates only the affected containers. Use docker compose up --force-recreate to rebuild all containers even if nothing changed.
Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.