
Docker Compose Explained: Multi-Container Apps, Networking and Real-World Patterns

In Plain English 🔥
Imagine you're opening a restaurant. You need a chef (your app), a waiter (your web server), and a cashier (your database) — all working together, in the right order, talking to each other. Docker Compose is the restaurant manager who reads one master plan (a single YAML file) and spins up every staff member at once, makes sure they can talk to each other, and shuts them all down cleanly at the end of the night. Without it, you'd be hiring each person separately, on different phones, hoping they find each other.

Every real-world application is more than one moving part. Your Node.js API needs a PostgreSQL database. That database needs a volume so data survives restarts. Your frontend needs to know the API's address. Running each of these with individual docker run commands — each with its own flags, networks, and environment variables — is a copy-paste nightmare that breaks the moment a new developer joins the team or you move to a different machine.
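For contrast, here is roughly what the same stack costs you with raw `docker run`. This is an illustrative sketch, not taken from a real project: the `app_net` network name, the credentials and the `my-api:dev` image tag are all placeholders.

```shell
# One-time network so containers can find each other by name
docker network create app_net

# Database: every flag must be retyped (or scripted) on every machine
docker run -d --name postgres_db --network app_net \
  -e POSTGRES_USER=dev -e POSTGRES_PASSWORD=devpass -e POSTGRES_DB=app \
  -v postgres_data:/var/lib/postgresql/data \
  postgres:15-alpine

# API: must start AFTER the database, and you enforce that ordering by hand
docker run -d --name api_server --network app_net \
  -e DATABASE_URL=postgres://dev:devpass@postgres_db:5432/app \
  -p 3001:3001 \
  my-api:dev   # placeholder image tag

# ...repeat for the frontend, then remember all of this next week.
```

Every one of these commands becomes a few declarative lines in a single Compose file.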

Docker Compose solves the 'orchestra without a conductor' problem. It lets you describe every service your application needs in a single docker-compose.yml file: which image to use, which ports to expose, which services depend on which, what environment variables to inject, and how data should persist. Instead of three terminal tabs and a sticky note of commands, you get docker compose up and everything just works.

By the end of this article you'll be able to write a production-grade docker-compose.yml for a real three-tier application, understand how Compose handles networking between containers automatically, manage secrets and environment variables safely, and avoid the subtle mistakes that waste hours in staging environments.

The Anatomy of a docker-compose.yml File (and Why Every Key Matters)

A Compose file is a declaration of intent, not a script. You're telling Docker 'here is the world I want' — Compose figures out how to build it. That mental shift matters: you're not writing steps, you're describing a state.

Every Compose file has a few top-level keys. services is the main one — each entry is a container. volumes defines named storage that outlives containers. networks lets you control which services can see each other (by default Compose creates one shared network, which is great for getting started but dangerous if you want to isolate, say, your admin panel from your public API).

The depends_on key is widely misunderstood. It controls start order, not readiness. Your app container will start after the database container starts, but not after Postgres is actually ready to accept connections. That's a real gotcha we'll cover shortly. For readiness you need health checks.

The build key is your escape hatch from public images — point it at a directory with a Dockerfile and Compose builds the image itself before starting the container. This is how you develop locally with live code while still using Compose for orchestration.

docker-compose.yml · YAML
# docker-compose.yml — A full three-tier web application
# Services: React frontend, Node/Express API, PostgreSQL database

version: '3.9'   # Obsolete under Compose v2 (ignored with a warning); kept for older tooling

services:

  # ── PostgreSQL Database ──────────────────────────────────────────
  postgres_db:
    image: postgres:15-alpine          # Use the slim Alpine variant to keep image size small
    restart: unless-stopped            # Restart on crash, but respect manual `docker compose stop`
    environment:
      POSTGRES_USER: ${DB_USER}        # Pulled from .env file — never hardcode credentials
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - postgres_data:/var/lib/postgresql/data   # Named volume: data persists across `down`/`up`
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql  # Seed script runs on first start
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]  # Actual readiness probe
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend_network   # Only the API can reach the DB — frontend cannot

  # ── Node.js / Express API ────────────────────────────────────────
  api_server:
    build:
      context: ./api          # Build from local Dockerfile in the /api directory
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - "3001:3001"           # Expose API to host for direct testing with Postman etc.
    environment:
      NODE_ENV: development
      DATABASE_URL: postgres://${DB_USER}:${DB_PASSWORD}@postgres_db:5432/${DB_NAME}
      #                                                   ^^^^^^^^^^^
      #  'postgres_db' is the SERVICE NAME; Compose's built-in DNS resolves it automatically
      JWT_SECRET: ${JWT_SECRET}
    depends_on:
      postgres_db:
        condition: service_healthy   # Wait until the healthcheck PASSES, not just starts
    volumes:
      - ./api:/app             # Bind mount for hot-reload in development
      - /app/node_modules      # Anonymous volume: keep container's node_modules, don't overwrite with host
    networks:
      - backend_network
      - frontend_network

  # ── React Frontend (served via Nginx) ────────────────────────────
  web_frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - "80:80"               # Expose port 80 to the host machine
    environment:
      REACT_APP_API_URL: http://localhost:3001   # The React bundle runs in the BROWSER, which cannot resolve Docker service names — use a host-reachable URL
    depends_on:
      - api_server
    networks:
      - frontend_network      # Frontend can only see the API, never the raw database

# ── Named Volumes ────────────────────────────────────────────────────
volumes:
  postgres_data:              # Managed by Docker — survives `docker compose down`
                              # Destroyed only by `docker compose down -v`

# ── Networks ─────────────────────────────────────────────────────────
networks:
  backend_network:            # Isolated: only api_server and postgres_db
    driver: bridge
  frontend_network:           # Isolated: only web_frontend and api_server
    driver: bridge
▶ Output
[+] Running 4/4
✔ Network app_backend_network Created
✔ Network app_frontend_network Created
✔ Container app-postgres_db-1 Healthy
✔ Container app-api_server-1 Started
✔ Container app-web_frontend-1 Started
⚠️ Pro Tip: Use `condition: service_healthy`, not bare `depends_on`. Plain `depends_on: postgres_db` only waits for the container process to start, not for Postgres to be ready to accept connections. Add a `healthcheck` block to your DB service and use `condition: service_healthy` in your API's `depends_on`. This single change eliminates the vast majority of 'connection refused on startup' bugs.

Environment Variables and Secrets — The Right Way vs. The Dangerous Way

Hardcoding passwords directly in docker-compose.yml is one of the most common and dangerous mistakes in real projects. If that file ever gets committed to a public repo — even accidentally — your credentials are exposed permanently (git history remembers everything).
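One cheap way to catch this before it happens is a pre-commit sanity check. A minimal sketch, assuming you run it from the repo root where `.gitignore` lives:

```shell
# Warn if .env is not listed in .gitignore before you commit
# -q: quiet, -x: match the whole line, -F: fixed string (no regex)
if grep -qxF '.env' .gitignore 2>/dev/null; then
  echo ".env is gitignored; safe to commit"
else
  echo "WARNING: .env is NOT in .gitignore. Add it before your first commit" >&2
fi
```

You could wire this into a git pre-commit hook so the check runs automatically.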

The correct pattern is a .env file alongside your Compose file. Docker Compose automatically reads .env and substitutes ${VARIABLE_NAME} placeholders. You commit a .env.example with dummy values to your repo, and each developer (and your CI system) fills in the real .env locally. The actual .env lives in .gitignore.
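Compose's interpolation syntax deliberately mirrors shell parameter expansion, so you can preview its behavior in plain bash. A minimal sketch (variable names are illustrative):

```shell
# Compose-style interpolation, mirrored in plain bash.
# Compose supports the same ${VAR}, ${VAR:-default} and ${VAR:?error} forms.
DB_USER=appuser          # as if read from .env
unset APP_VERSION        # simulate a variable nobody set

echo "${DB_USER}"              # set: prints the value (appuser)
echo "${APP_VERSION:-latest}"  # unset, with default: falls back (latest)
# "${APP_VERSION:?must be set}" would abort with an error instead of defaulting
```

The `:?` form is useful in production override files: Compose refuses to start rather than silently interpolating an empty string.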

For production, environment variables should come from your hosting platform's secret manager (AWS Secrets Manager, Doppler, HashiCorp Vault) injected at runtime — never baked into an image or a file on disk.
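For local development, generate throwaway secrets instead of inventing weak ones. A quick sketch, assuming the `openssl` CLI is installed:

```shell
# 32 random bytes -> 64 hexadecimal characters, well past the 32-char minimum
JWT_SECRET="$(openssl rand -hex 32)"
echo "Generated a ${#JWT_SECRET}-character secret"
```

Paste the result into your local `.env`; never reuse it in production.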

You can also use multiple Compose files with docker compose -f docker-compose.yml -f docker-compose.prod.yml up. The second file merges and overrides the first. This is how you maintain one base config and swap out dev-specific settings (like bind mounts and debug ports) for production-hardened ones without duplicating the whole file.

.env.example · BASH
# .env.example — commit this to git as a template
# Copy to .env and fill in real values — NEVER commit .env itself

# Database credentials
DB_USER=your_db_username
DB_PASSWORD=your_secure_password_here
DB_NAME=your_app_db

# Auth
JWT_SECRET=replace_with_a_long_random_string_at_least_32_chars

# App config
NODE_ENV=development
API_PORT=3001
▶ Output
# No output — this is a config file.
# Running `docker compose config` will show the fully resolved
# compose file with all variables substituted:
#
# $ docker compose config
# services:
#   postgres_db:
#     environment:
#       POSTGRES_USER: your_db_username
#       POSTGRES_PASSWORD: your_secure_password_here
# ...and so on
⚠️ Watch Out: docker-compose.yml in git + hardcoded password = public breach. Even if you fix it in a later commit, the password is still visible in git history. Always use `.env` files and add `.env` to `.gitignore` before your first commit. Run `git log --all -p | grep PASSWORD` right now on any project you've inherited; you might be surprised what you find.

Networking Between Containers — How Compose DNS Actually Works

This is where most intermediate developers have a fuzzy mental model, and it costs them hours of debugging. When Compose starts your services, it creates a virtual network and registers each service under its own DNS name — that name is simply the service name you defined in the YAML.

So when your API container wants to connect to Postgres, it doesn't use localhost (that points to itself) and it doesn't use an IP address (those change). It uses postgres_db:5432 — the service name and the container port, not the host-mapped port. This is a huge source of confusion: ports: '5432:5432' exposes port 5432 to your laptop. Other containers don't need that — they communicate on the internal network directly.

Multiple networks let you enforce security boundaries at the network layer, not just at the application layer. In our example, the frontend container literally cannot reach the database — there's no route. Even if someone finds an XSS vulnerability in your frontend, they can't pivot directly to the database because Compose's networking won't allow it.

Use docker compose exec api_server sh to shell into a running container and test DNS resolution live with nslookup postgres_db or curl http://api_server:3001/health from the frontend container.

network-debugging-commands.sh · BASH
# ── Inspecting Compose Networks ──────────────────────────────────────

# List all networks Compose created for this project
docker network ls
# Output includes: app_backend_network, app_frontend_network

# Inspect who is on the backend network and their internal IPs
docker network inspect app_backend_network

# Shell into the running API container to test connectivity
docker compose exec api_server sh

# Inside the container — test that DNS resolves the DB service name
nslookup postgres_db
# Expected output:
# Server:    127.0.0.11      <-- Docker's embedded DNS resolver
# Address:   127.0.0.11:53
# Name:      postgres_db
# Address:   172.20.0.2      <-- Internal IP (changes every run, that's why we use names)

# Test that the DB is reachable on its CONTAINER port (not the host-mapped one)
curl -v telnet://postgres_db:5432
# This should open a TCP connection — if it fails, check your 'networks' config

# Verify the frontend CANNOT reach postgres directly (network isolation working)
docker compose exec web_frontend sh
nslookup postgres_db
# Expected: nslookup: can't resolve 'postgres_db'
# This is CORRECT — it proves your network isolation is working
▶ Output
# From inside api_server container:
$ nslookup postgres_db
Server: 127.0.0.11
Address: 127.0.0.11:53

Name: postgres_db
Address: 172.20.0.2

# From inside web_frontend container:
$ nslookup postgres_db
nslookup: can't resolve 'postgres_db'
# Correct! Frontend is on a different network segment.
🔥 Interview Gold: Container port vs. host port. In `ports: '5432:5432'`, the left side is the HOST port (your laptop) and the right side is the CONTAINER port. Other containers always connect to the container port directly; no host port mapping is needed. Your DBA connects to `localhost:5432` from their machine. Your API container connects to `postgres_db:5432` from inside Docker. These are completely separate paths.

Profiles and Overrides — Running Different Stacks for Dev, Test and CI

A subtle but powerful Compose feature is profiles. You might have services that should only run in certain contexts — a database admin UI like pgAdmin in development, a mock email server in testing, or a metrics exporter in production. Profiles let you tag services and opt into them at runtime.

With docker compose --profile dev up, only services tagged with the dev profile (plus services with no profile) start. Your CI pipeline can run docker compose --profile test up --abort-on-container-exit to spin up integration test dependencies, run tests, and tear down — without launching the full dev UI tooling.

Compose file overrides are the production deployment pattern. You maintain a docker-compose.yml as the base truth and a docker-compose.prod.yml that overrides just the production-specific bits: replaces bind mounts with proper volumes, removes debug ports, adds resource limits, and switches to pre-built images instead of local builds. This avoids the 'it works on my machine' trap without duplicating hundreds of lines of YAML.

docker-compose.prod.yml · YAML
# docker-compose.prod.yml — Production overrides
# Usage: docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
#
# This file MERGES with docker-compose.yml — only specify what changes.

version: '3.9'   # Obsolete under Compose v2 (ignored with a warning); kept for older tooling

services:

  api_server:
    # In prod, use a pre-built image from your registry instead of local build
    image: ghcr.io/your-org/api-server:${APP_VERSION:-latest}
    build: !reset null    # Remove the build config from the base file
    volumes: []           # Remove the dev bind mount — no hot reload in prod
    environment:
      NODE_ENV: production
    deploy:
      resources:
        limits:
          cpus: '0.50'      # Cap CPU usage per container instance
          memory: 512M      # Cap memory — prevents runaway leaks taking down the host
        reservations:
          memory: 256M
    logging:
      driver: "json-file"   # Structured logs for log aggregators
      options:
        max-size: "10m"     # Rotate logs — don't fill the disk
        max-file: "5"

  web_frontend:
    image: ghcr.io/your-org/web-frontend:${APP_VERSION:-latest}
    build: !reset null
    volumes: []

  # pgAdmin only runs in dev — it's tagged with 'dev' profile
  pgadmin:
    image: dpage/pgadmin4:latest
    profiles:
      - dev               # Only starts when: docker compose --profile dev up
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@local.dev
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_PASSWORD:-localdevonly}
    networks:
      - backend_network
▶ Output
# Running with production overrides:
$ docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

[+] Running 3/3
✔ Container app-postgres_db-1 Healthy
✔ Container app-api_server-1 Started # Using registry image, no bind mount
✔ Container app-web_frontend-1 Started # Using registry image

# Note: pgadmin did NOT start — it requires --profile dev

# Running dev stack with pgAdmin:
$ docker compose --profile dev up
[+] Running 4/4
✔ Container app-postgres_db-1 Healthy
✔ Container app-api_server-1 Started
✔ Container app-web_frontend-1 Started
✔ Container app-pgadmin-1 Started # Now included
⚠️ Pro Tip: Use `docker compose config` to debug merged files. Before you deploy, run `docker compose -f docker-compose.yml -f docker-compose.prod.yml config` to print the fully merged and resolved YAML. This shows you exactly what Compose will act on; no surprises in production. It also validates your YAML syntax and substitutes all variables, so you'll catch missing env vars before they cause a 3am outage.
| Aspect | docker run (manual) | Docker Compose |
| --- | --- | --- |
| Setup complexity | One command per container, manual flags every time | Single YAML file, one command to start everything |
| Networking | Must create networks manually and attach each container | Auto-creates a shared network; service names resolve via DNS |
| Startup order | You remember the order yourself | Declarative `depends_on` with optional health check conditions |
| Environment config | Long `--env-file` or `-e` flags per command | Built-in `.env` file support and variable interpolation |
| Volume management | Explicit `-v` flags, easy to forget or mistype | Named volumes and bind mounts declared in YAML |
| Reproducibility | Low — you have to document the exact commands | High — the file IS the documentation |
| Scaling a service | Run more containers manually, manage names yourself | `docker compose up --scale api_server=3` (basic, no LB) |
| Best suited for | Quick one-off containers, learning Docker basics | Local dev, integration testing, simple multi-service deployments |
| Not suited for | N/A — even single containers benefit from Compose | Large-scale production orchestration (use Kubernetes for that) |

🎯 Key Takeaways

  • Docker Compose is a declaration of the desired state of your entire app stack — you describe it once in YAML and docker compose up handles creation, networking, and ordering every time.
  • depends_on controls start order, not start readiness — always pair it with a healthcheck and condition: service_healthy to prevent race conditions when your app starts before the database is accepting connections.
  • Containers communicate by service name on the internal Docker network — never use localhost between containers, and never use the host-mapped port for container-to-container traffic.
  • Use multiple Compose files (-f flag) to maintain one base config and override only what changes between environments — this is cleaner and less error-prone than separate files for dev and prod.

⚠ Common Mistakes to Avoid

  • Mistake 1: Using depends_on without a health check — Symptom: Your API crashes on startup with 'ECONNREFUSED' to the database because Postgres isn't ready yet, even though depends_on is set. Fix: Add a healthcheck block to your database service using pg_isready, then change your API's depends_on to condition: service_healthy instead of just listing the service name.
  • Mistake 2: Connecting to the wrong port in DATABASE_URL — Symptom: API can't reach the database using localhost:5432 or the host-mapped port. Fix: Inside the Docker network, containers talk to each other via service name and the container port, not the host port. Your DATABASE_URL should be postgres://user:pass@postgres_db:5432/dbname — 'postgres_db' is the service name, 5432 is the container's internal port.
  • Mistake 3: Bind-mounting node_modules from host into the container — Symptom: The app works on the developer's Mac but crashes in CI (Linux) with native module errors, or npm install inside the container gets overwritten on every restart. Fix: Add an anonymous volume for node_modules — - /app/node_modules — after the bind mount. Docker will use the container's own node_modules (compiled for Linux inside Docker) rather than the host's version.

Interview Questions on This Topic

  • Q: What's the difference between `depends_on` and a health check condition in Docker Compose, and when would a plain `depends_on` cause a race condition in production?
  • Q: If you have two services in a Compose file and they can't communicate with each other by service name, what would you check first — and can you walk me through how Compose networking actually resolves service names?
  • Q: You have a docker-compose.yml that works perfectly for local development with bind mounts and debug ports. How would you adapt it for production deployment without duplicating the entire file?

Frequently Asked Questions

What is the difference between Docker Compose and Kubernetes?

Docker Compose is designed for defining and running multi-container apps on a single host — it's perfect for local development and simple deployments. Kubernetes is a full container orchestration platform that manages containers across a cluster of machines, handles auto-scaling, self-healing, rolling deployments, and much more. Start with Compose; graduate to Kubernetes when you need to scale across multiple servers or need enterprise-grade reliability.

Does `docker compose down` delete my database data?

It depends on how you defined your volume. docker compose down stops and removes containers and networks, but named volumes (declared under the top-level volumes: key) survive by default. Your data is safe. Only docker compose down -v removes named volumes. Anonymous volumes created by -v /some/path syntax are also removed by down. Use named volumes for any data you care about.
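You can verify this yourself. The following transcript is a sketch assuming a running Docker daemon and the article's example project with its `postgres_data` named volume:

```shell
docker compose down                     # containers and networks are removed...
docker volume ls | grep postgres_data   # ...but the named volume is still listed
docker compose up -d                    # the database comes back with its data intact
docker compose down -v                  # ONLY the -v flag removes the named volume too
```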

Can I use Docker Compose in production?

Yes, for small to medium deployments on a single server, Compose is perfectly valid in production — many successful apps run this way. The limitation is that it manages containers on one machine only. If you need to spread load across multiple servers, roll out zero-downtime deployments, or auto-scale based on traffic, you'll need Kubernetes or Docker Swarm. Use docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d with a production override file to harden your config.

TheCodeForge Editorial Team (Verified Author)

Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.
