Docker Compose Explained: Multi-Container Apps, Networking and Real-World Patterns
Every real-world application is more than one moving part. Your Node.js API needs a PostgreSQL database. That database needs a volume so data survives restarts. Your frontend needs to know the API's address. Running each of these with individual docker run commands — each with its own flags, networks, and environment variables — is a copy-paste nightmare that breaks the moment a new developer joins the team or you move to a different machine.
Docker Compose solves the 'orchestra without a conductor' problem. It lets you describe every service your application needs in a single docker-compose.yml file: which image to use, which ports to expose, which services depend on which, what environment variables to inject, and how data should persist. Instead of three terminal tabs and a sticky note of commands, you get docker compose up and everything just works.
By the end of this article you'll be able to write a production-grade docker-compose.yml for a real three-tier application, understand how Compose handles networking between containers automatically, manage secrets and environment variables safely, and avoid the subtle mistakes that waste hours in staging environments.
The Anatomy of a docker-compose.yml File (and Why Every Key Matters)
A Compose file is a declaration of intent, not a script. You're telling Docker 'here is the world I want' — Compose figures out how to build it. That mental shift matters: you're not writing steps, you're describing a state.
Every Compose file has a few top-level keys. services is the main one — each entry is a container. volumes defines named storage that outlives containers. networks lets you control which services can see each other (by default Compose creates one shared network, which is great for getting started but dangerous if you want to isolate, say, your admin panel from your public API).
The depends_on key is widely misunderstood. It controls start order, not readiness. Your app container will start after the database container starts, but not after Postgres is actually ready to accept connections. That's a real gotcha we'll cover shortly. For readiness you need health checks.
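In YAML the two forms look like this — a minimal, self-contained sketch with placeholder service names and a throwaway password (the full application file below wires the same pattern into the real app):

```yaml
services:
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: example        # Throwaway value for this sketch only
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s

  app_starts_early:
    image: alpine:3.19
    command: sleep infinity
    # Short form: controls ORDER only — starts once db has STARTED, ready or not
    depends_on:
      - db

  app_waits:
    image: alpine:3.19
    command: sleep infinity
    # Long form: starts only after db's healthcheck actually reports healthy
    depends_on:
      db:
        condition: service_healthy
```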
The build key is your escape hatch from public images — point it at a directory with a Dockerfile and Compose builds the image itself before starting the container. This is how you develop locally with live code while still using Compose for orchestration.
```yaml
# docker-compose.yml — A full three-tier web application
# Services: React frontend, Node/Express API, PostgreSQL database
version: '3.9'   # Optional on modern Compose — the v2 CLI marks `version` obsolete and ignores it

services:
  # ── PostgreSQL Database ──────────────────────────────────────────
  postgres_db:
    image: postgres:15-alpine       # Slim Alpine variant keeps the image small
    restart: unless-stopped         # Restart on crash, but respect manual `docker compose stop`
    environment:
      POSTGRES_USER: ${DB_USER}     # Pulled from .env file — never hardcode credentials
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - postgres_data:/var/lib/postgresql/data              # Named volume: data persists across `down`/`up`
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql  # Seed script runs on first start
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]  # Actual readiness probe
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend_network             # Only the API can reach the DB — the frontend cannot

  # ── Node.js / Express API ────────────────────────────────────────
  api_server:
    build:
      context: ./api                # Build from the local Dockerfile in the /api directory
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - "3001:3001"                 # Expose the API to the host for direct testing with Postman etc.
    environment:
      NODE_ENV: development
      DATABASE_URL: postgres://${DB_USER}:${DB_PASSWORD}@postgres_db:5432/${DB_NAME}
      #                                                  ^^^^^^^^^^^
      # 'postgres_db' is the SERVICE NAME — Compose's built-in DNS resolves it automatically
      JWT_SECRET: ${JWT_SECRET}
    depends_on:
      postgres_db:
        condition: service_healthy  # Wait until the healthcheck PASSES, not just until the container starts
    volumes:
      - ./api:/app                  # Bind mount for hot reload in development
      - /app/node_modules           # Anonymous volume: keep the container's node_modules, don't overwrite with the host's
    networks:
      - backend_network
      - frontend_network

  # ── React Frontend (served via Nginx) ────────────────────────────
  web_frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - "80:80"                     # Expose port 80 to the host machine
    environment:
      # This URL is consumed by the BROWSER, which runs on the host, not inside
      # the Compose network — so it must be host-reachable. Service names like
      # api_server only resolve for container-to-container traffic.
      REACT_APP_API_URL: http://localhost:3001
    depends_on:
      - api_server
    networks:
      - frontend_network            # The frontend can only see the API, never the raw database

# ── Named Volumes ────────────────────────────────────────────────────
volumes:
  postgres_data:                    # Managed by Docker — survives `docker compose down`;
                                    # destroyed only by `docker compose down -v`

# ── Networks ─────────────────────────────────────────────────────────
networks:
  backend_network:                  # Isolated: only api_server and postgres_db
    driver: bridge
  frontend_network:                 # Isolated: only web_frontend and api_server
    driver: bridge
```
```console
$ docker compose up -d
[+] Running 5/5
 ✔ Network app_backend_network    Created
 ✔ Network app_frontend_network   Created
 ✔ Container app-postgres_db-1    Healthy
 ✔ Container app-api_server-1     Started
 ✔ Container app-web_frontend-1   Started
```
Environment Variables and Secrets — The Right Way vs. The Dangerous Way
Hardcoding passwords directly in docker-compose.yml is one of the most common and dangerous mistakes in real projects. If that file ever gets committed to a public repo — even accidentally — your credentials are exposed permanently (git history remembers everything).
The correct pattern is a .env file alongside your Compose file. Docker Compose automatically reads .env and substitutes ${VARIABLE_NAME} placeholders. You commit a .env.example with dummy values to your repo, and each developer (and your CI system) fills in the real .env locally. The actual .env lives in .gitignore.
For production, environment variables should come from your hosting platform's secret manager (AWS Secrets Manager, Doppler, HashiCorp Vault) injected at runtime — never baked into an image or a file on disk.
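Compose itself also supports file-based secrets, which keep credentials out of environment variables (and out of `docker inspect` output) by mounting them into the container at /run/secrets/<name>. A minimal sketch — the ./secrets path is an assumption, and that directory belongs in .gitignore:

```yaml
services:
  postgres_db:
    image: postgres:15-alpine
    secrets:
      - db_password
    environment:
      # The official postgres image reads the password from this file if set
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt   # Plain file on disk — never commit it
```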
You can also use multiple Compose files with docker compose -f docker-compose.yml -f docker-compose.prod.yml up. The second file merges and overrides the first. This is how you maintain one base config and swap out dev-specific settings (like bind mounts and debug ports) for production-hardened ones without duplicating the whole file.
```bash
# .env.example — commit this to git as a template
# Copy to .env and fill in real values — NEVER commit .env itself

# Database credentials
DB_USER=your_db_username
DB_PASSWORD=your_secure_password_here
DB_NAME=your_app_db

# Auth
JWT_SECRET=replace_with_a_long_random_string_at_least_32_chars

# App config
NODE_ENV=development
API_PORT=3001
```
Running `docker compose config` prints the fully resolved Compose file with all variables substituted:

```console
$ docker compose config
services:
  postgres_db:
    environment:
      POSTGRES_USER: your_db_username
      POSTGRES_PASSWORD: your_secure_password_here
# ...and so on
```
Networking Between Containers — How Compose DNS Actually Works
This is where most intermediate developers have a fuzzy mental model, and it costs them hours of debugging. When Compose starts your services, it creates a virtual network and registers each service under its own DNS name — that name is simply the service name you defined in the YAML.
So when your API container wants to connect to Postgres, it doesn't use localhost (that points to itself) and it doesn't use an IP address (those change). It uses postgres_db:5432 — the service name and the container port, not the host-mapped port. This is a huge source of confusion: ports: '5432:5432' exposes port 5432 to your laptop. Other containers don't need that — they communicate on the internal network directly.
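A side-by-side comparison makes this concrete (the credentials and database name are placeholders):

```bash
# WRONG — inside the api_server container, localhost is the container itself
DATABASE_URL=postgres://user:pass@localhost:5432/mydb      # ECONNREFUSED

# WRONG — internal container IPs are reassigned on every `docker compose up`
DATABASE_URL=postgres://user:pass@172.20.0.2:5432/mydb     # works until it doesn't

# RIGHT — service name + CONTAINER port, resolved by Compose's embedded DNS
DATABASE_URL=postgres://user:pass@postgres_db:5432/mydb
```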
Multiple networks let you enforce security boundaries at the network layer, not just at the application layer. In our example, the frontend container literally cannot reach the database — there's no route. Even if someone finds an XSS vulnerability in your frontend, they can't pivot directly to the database because Compose's networking won't allow it.
Use docker compose exec api_server sh to shell into a running container and test DNS resolution live with nslookup postgres_db or curl http://api_server:3001/health from the frontend container.
```bash
# ── Inspecting Compose Networks ──────────────────────────────────────

# List all networks Compose created for this project
docker network ls
# Output includes: app_backend_network, app_frontend_network

# Inspect who is on the backend network and their internal IPs
docker network inspect app_backend_network

# Shell into the running API container to test connectivity
docker compose exec api_server sh

# Inside the container — test that DNS resolves the DB service name
nslookup postgres_db
# Expected output:
#   Server:   127.0.0.11   <-- Docker's embedded DNS resolver
#   Address:  127.0.0.11:53
#   Name:     postgres_db
#   Address:  172.20.0.2   <-- Internal IP (changes every run — that's why we use names)

# Test that the DB is reachable on its CONTAINER port (not the host-mapped one)
curl -v telnet://postgres_db:5432
# This should open a TCP connection — if it fails, check your 'networks' config

# Verify the frontend CANNOT reach postgres directly (network isolation working)
docker compose exec web_frontend sh
nslookup postgres_db
# Expected: nslookup: can't resolve 'postgres_db'
# This is CORRECT — it proves your network isolation is working
```
```console
$ nslookup postgres_db
Server:     127.0.0.11
Address:    127.0.0.11:53

Name:       postgres_db
Address:    172.20.0.2

# From inside the web_frontend container:
$ nslookup postgres_db
nslookup: can't resolve 'postgres_db'
# Correct! The frontend is on a different network segment.
```
Profiles and Overrides — Running Different Stacks for Dev, Test and CI
A subtle but powerful Compose feature is profiles. You might have services that should only run in certain contexts — a database admin UI like pgAdmin in development, a mock email server in testing, or a metrics exporter in production. Profiles let you tag services and opt into them at runtime.
With docker compose --profile dev up, only services tagged with the dev profile (plus services with no profile) start. Your CI pipeline can run docker compose --profile test up --abort-on-container-exit to spin up integration test dependencies, run tests, and tear down — without launching the full dev UI tooling.
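As a sketch, a CI-only test runner tagged with a test profile might look like this — the test_runner service, its command, and the reuse of the API's Dockerfile are assumptions layered onto the example app above:

```yaml
services:
  test_runner:
    build: ./api                     # Reuse the API's Dockerfile for the test image
    command: npm test                # Run the integration suite, then exit
    environment:
      DATABASE_URL: postgres://${DB_USER}:${DB_PASSWORD}@postgres_db:5432/${DB_NAME}
    profiles:
      - test                         # Ignored unless `--profile test` is passed
    depends_on:
      postgres_db:
        condition: service_healthy   # Don't start tests until the DB is ready
    networks:
      - backend_network
```

In CI you can run `docker compose --profile test up --exit-code-from test_runner` (which implies `--abort-on-container-exit`) so the stack tears down when the runner finishes and a failing suite fails the pipeline.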
Compose file overrides are the production deployment pattern. You maintain a docker-compose.yml as the base truth and a docker-compose.prod.yml that overrides just the production-specific bits: replaces bind mounts with proper volumes, removes debug ports, adds resource limits, and switches to pre-built images instead of local builds. This avoids the 'it works on my machine' trap without duplicating hundreds of lines of YAML.
```yaml
# docker-compose.prod.yml — Production overrides
# Usage: docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
#
# This file MERGES with docker-compose.yml — only specify what changes.
version: '3.9'

services:
  api_server:
    # In prod, use a pre-built image from your registry instead of a local build
    image: ghcr.io/your-org/api-server:${APP_VERSION:-latest}
    build: !reset null               # Remove the build config from the base file
    volumes: !reset []               # Remove the dev bind mounts — no hot reload in prod
                                     # (a plain `volumes: []` would merge with the base list, not clear it)
    environment:
      NODE_ENV: production
    deploy:
      resources:
        limits:
          cpus: '0.50'               # Cap CPU usage per container instance
          memory: 512M               # Cap memory — prevents a runaway leak taking down the host
        reservations:
          memory: 256M
    logging:
      driver: "json-file"            # Structured logs for log aggregators
      options:
        max-size: "10m"              # Rotate logs — don't fill the disk
        max-file: "5"

  web_frontend:
    image: ghcr.io/your-org/web-frontend:${APP_VERSION:-latest}
    build: !reset null
    volumes: !reset []

  # pgAdmin is tagged with the 'dev' profile, so it never starts in prod.
  # (In a real project this service would live in the base file or a dev
  # override — it's shown here to keep the profiles example in one place.)
  pgadmin:
    image: dpage/pgadmin4:latest
    profiles:
      - dev                          # Only starts when: docker compose --profile dev up
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@local.dev
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_PASSWORD:-localdevonly}
    networks:
      - backend_network
```
```console
$ docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
[+] Running 3/3
 ✔ Container app-postgres_db-1    Healthy
 ✔ Container app-api_server-1     Started   # Using registry image, no bind mount
 ✔ Container app-web_frontend-1   Started   # Using registry image
# Note: pgadmin did NOT start — it requires --profile dev

# Running the dev stack with pgAdmin:
$ docker compose --profile dev up
[+] Running 4/4
 ✔ Container app-postgres_db-1    Healthy
 ✔ Container app-api_server-1     Started
 ✔ Container app-web_frontend-1   Started
 ✔ Container app-pgadmin-1        Started   # Now included
```
| Aspect | docker run (manual) | Docker Compose |
|---|---|---|
| Setup complexity | One command per container, manual flags every time | Single YAML file, one command to start everything |
| Networking | Must create networks manually and attach each container | Auto-creates a shared network; service names resolve via DNS |
| Startup order | You remember the order yourself | Declarative `depends_on` with optional health check conditions |
| Environment config | Long --env-file or -e flags per command | Built-in .env file support and variable interpolation |
| Volume management | Explicit -v flags, easy to forget or mistype | Named volumes and bind mounts declared in YAML |
| Reproducibility | Low — you have to document the exact commands | High — the file IS the documentation |
| Scaling a service | Run more containers manually, manage names yourself | `docker compose up --scale api_server=3` (basic, no LB) |
| Best suited for | Quick one-off containers, learning Docker basics | Local dev, integration testing, simple multi-service deployments |
| Not suited for | N/A — even single containers benefit from Compose | Large-scale production orchestration (use Kubernetes for that) |
🎯 Key Takeaways
- Docker Compose is a declaration of the desired state of your entire app stack — you describe it once in YAML and `docker compose up` handles creation, networking, and ordering every time.
- `depends_on` controls start order, not readiness — always pair it with a `healthcheck` and `condition: service_healthy` to prevent race conditions when your app starts before the database is accepting connections.
- Containers communicate by service name on the internal Docker network — never use `localhost` between containers, and never use the host-mapped port for container-to-container traffic.
- Use multiple Compose files (the `-f` flag) to maintain one base config and override only what changes between environments — this is cleaner and less error-prone than maintaining separate full files for dev and prod.
⚠ Common Mistakes to Avoid
- ✕ Mistake 1: Using `depends_on` without a health check — Symptom: Your API crashes on startup with 'ECONNREFUSED' to the database because Postgres isn't ready yet, even though `depends_on` is set. Fix: Add a `healthcheck` block to your database service using `pg_isready`, then change your API's `depends_on` to `condition: service_healthy` instead of just listing the service name.
- ✕ Mistake 2: Connecting to the wrong port in DATABASE_URL — Symptom: The API can't reach the database using `localhost:5432` or the host-mapped port. Fix: Inside the Docker network, containers talk to each other via service name and the container port, not the host port. Your `DATABASE_URL` should be `postgres://user:pass@postgres_db:5432/dbname` — 'postgres_db' is the service name, 5432 is the container's internal port.
- ✕ Mistake 3: Bind-mounting node_modules from the host into the container — Symptom: The app works on the developer's Mac but crashes in CI (Linux) with native module errors, or `npm install` inside the container gets overwritten on every restart. Fix: Add an anonymous volume for node_modules — `- /app/node_modules` — after the bind mount. Docker will use the container's own node_modules (compiled for Linux inside Docker) rather than the host's version.
Interview Questions on This Topic
- Q: What's the difference between `depends_on` and a health check condition in Docker Compose, and when would a plain `depends_on` cause a race condition in production?
- Q: If you have two services in a Compose file and they can't communicate with each other by service name, what would you check first — and can you walk me through how Compose networking actually resolves service names?
- Q: You have a docker-compose.yml that works perfectly for local development with bind mounts and debug ports. How would you adapt it for production deployment without duplicating the entire file?
Frequently Asked Questions
What is the difference between Docker Compose and Kubernetes?
Docker Compose is designed for defining and running multi-container apps on a single host — it's perfect for local development and simple deployments. Kubernetes is a full container orchestration platform that manages containers across a cluster of machines, handles auto-scaling, self-healing, rolling deployments, and much more. Start with Compose; graduate to Kubernetes when you need to scale across multiple servers or need enterprise-grade reliability.
Does `docker compose down` delete my database data?
It depends on how you defined your volume. `docker compose down` stops and removes containers and networks, but named volumes (declared under the top-level volumes: key) survive by default — your data is safe. Only `docker compose down -v` removes volumes: it deletes named volumes plus any anonymous volumes (the kind declared with just a container path, like `- /app/node_modules`). Use named volumes for any data you care about.
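You can check this yourself — the volume name below assumes the project directory (and therefore the Compose project name) is `app`, matching the earlier output:

```console
$ docker compose down
$ docker volume ls            # app_postgres_data still listed — data intact

$ docker compose down -v
$ docker volume ls            # app_postgres_data gone — data deleted
```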
Can I use Docker Compose in production?
Yes, for small to medium deployments on a single server, Compose is perfectly valid in production — many successful apps run this way. The limitation is that it manages containers on one machine only. If you need to spread load across multiple servers, roll out zero-downtime deployments, or auto-scale based on traffic, you'll need Kubernetes or Docker Swarm. Use docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d with a production override file to harden your config.