

Sarthak Varshney is a Docker Captain, 5x C# Corner MVP, and 2x Alibaba Cloud MVP, with over six years of hands-on experience in the IT industry, specializing in cloud computing, DevOps, and modern application infrastructure. He is an Author and Associate Consultant, known for working extensively with cloud platforms and container-based technologies in real-world environments.
So you've been running Docker containers one by one. You pull an image, fire up a docker run command with a bunch of flags, and it works. Great!
But here's where things get real — most applications in the wild don't run as a single container. You've got a web server. A database. Maybe a cache layer like Redis. A background worker. These pieces need to talk to each other, start in the right order, and share networks and volumes.
Running them one by one? That's like assembling IKEA furniture while someone keeps hiding the instructions.
That's exactly the problem Docker Compose was born to solve.
Think of Docker Compose as a director on a film set. Your containers are the actors. Each one has their own job — the database plays the serious character who remembers everything, the web server is the flashy frontend guy, Redis is the speedy messenger. Now, without a director, they'd all just stand around confused.
Docker Compose is your director. You write a single file (called docker-compose.yml) that describes every container, how they connect, what ports they expose, what environment variables they need, and which volumes they use. Then you run one command and the whole show comes to life.
```
docker compose up
```
That's it. One command. Everything up and running.
And to tear it all down?
```
docker compose down
```
Gone. Clean. Like it never happened.
If you've been Googling Docker Compose stuff, you might have seen references to docker-compose (with a hyphen) as a separate tool you install. That's the old v1 CLI. The modern version is Docker Compose v2, which is built directly into Docker as a plugin. So the command is docker compose (no hyphen).
If you're using Docker Desktop (which I recommend for beginners), you already have Compose v2. Just confirm it:
```
docker compose version
```
You should see something like Docker Compose version v2.x.x. If you do, you're good to go.
## The `docker-compose.yml` File

Everything in Docker Compose lives in a YAML file. YAML stands for "YAML Ain't Markup Language" — which is a recursive joke that tells you programmers wrote the spec. Don't worry about the name. What matters is the format: it's clean, human-readable, and uses indentation to represent structure.
Let's look at the skeleton of a docker-compose.yml file before we add any real content:
```yaml
version: "3.9"

services:
  service_one:
    # config for first container
  service_two:
    # config for second container

volumes:
  my_volume:

networks:
  my_network:
```
Here's what each top-level key means:
- `version` — Tells Docker Compose which schema version to use. `3.9` is a safe modern choice. (Note: newer Compose files sometimes omit this entirely, but keeping it is fine.)
- `services` — This is where you define all your containers. Each service becomes a container.
- `volumes` — Named volumes that your containers can use for persistent storage.
- `networks` — Custom networks for controlling how containers communicate.

That's the whole skeleton. Everything else is just filling in the details.
A service in Docker Compose is basically a container definition. Let's break down the most common options you'll use:
```yaml
services:
  web:
    image: nginx:1.25.3          # Which image to use
    build: ./app                 # Or, build from a Dockerfile at this path
    container_name: my_web_app   # Give it a friendly name
    ports:
      - "8080:80"                # host_port:container_port
    environment:
      - NODE_ENV=production      # Environment variables
    volumes:
      - ./src:/app/src           # Bind mount: local folder : container folder
      - app_data:/app/data       # Named volume
    depends_on:
      - db                       # Start 'db' service before this one
    networks:
      - app_network              # Connect to this network
    restart: unless-stopped      # Restart policy
```
Let's decode each piece:
- `image` — The Docker image to use, just like you'd pass to `docker run`. Always pin a specific tag (not `latest`) in real projects.
- `build` — If you have a Dockerfile, point Compose to it instead of pulling from Docker Hub. It'll build the image for you.
- `ports` — Map ports from your host machine to the container. Format is `"HOST:CONTAINER"`. So `"8080:80"` means "on my laptop, port 8080 goes into the container's port 80."
- `environment` — Set environment variables. You can also use an `.env` file and Compose will pick it up automatically.
- `volumes` — Mount data. Bind mounts link a folder on your machine. Named volumes are managed by Docker.
- `depends_on` — Tells Compose to start another service first. Important caveat: this only waits for the container to start, not for the service inside it to be ready. More on this in a bit.
- `networks` — Attach the container to a specific network. Containers on the same network can talk to each other using their service name as the hostname.
- `restart` — What to do if the container crashes. `unless-stopped` is usually what you want — restart on failure, but don't restart if you manually stopped it.

Enough theory. Let's get our hands dirty.
We're going to build a simple web application: a Python Flask app that connects to a PostgreSQL database. This is probably the most classic multi-container setup you'll ever see, and it shows up in some variation in almost every real-world project.
Here's our project structure:
```
my-app/
├── docker-compose.yml
├── app/
│   ├── Dockerfile
│   ├── app.py
│   └── requirements.txt
```
app/app.py:
```python
from flask import Flask
import psycopg2
import os

app = Flask(__name__)

def get_db_connection():
    conn = psycopg2.connect(
        host=os.environ.get("DB_HOST", "db"),
        database=os.environ.get("DB_NAME", "mydb"),
        user=os.environ.get("DB_USER", "myuser"),
        password=os.environ.get("DB_PASSWORD", "mypassword")
    )
    return conn

@app.route("/")
def home():
    try:
        conn = get_db_connection()
        conn.close()
        return "✅ Flask is running and connected to PostgreSQL!"
    except Exception as e:
        return f"❌ Could not connect to database: {str(e)}"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
```
Notice that host is set to "db". That's not a typo or magic — that's the service name from our Compose file. Within a Docker Compose network, containers can reach each other using the service name as a hostname. It's one of those things that feels like magic the first time you see it work.
app/requirements.txt:
```
flask==3.0.0
psycopg2-binary==2.9.9
```
app/Dockerfile:
```dockerfile
FROM python:3.12-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000

CMD ["python", "app.py"]
```
This is the main event. Here's our complete Compose file:
```yaml
version: "3.9"

services:
  web:
    build: ./app
    container_name: flask_web
    ports:
      - "5000:5000"
    environment:
      - DB_HOST=db
      - DB_NAME=mydb
      - DB_USER=myuser
      - DB_PASSWORD=mypassword
    depends_on:
      - db
    networks:
      - app_network
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    container_name: postgres_db
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app_network
    restart: unless-stopped

volumes:
  postgres_data:

networks:
  app_network:
    driver: bridge
```
Let's walk through what's happening:
The web service:

- Builds its image from the `./app` folder using our Dockerfile.
- Waits for `db` to start before it launches (via `depends_on`).

The db service:

- Uses the official `postgres:16-alpine` image (alpine = smaller image size).
- Persists its data in a named volume mounted at `/var/lib/postgresql/data` — this is where Postgres stores all its data files. Without this volume, every time you restart, you'd lose all your data. Bad times.

The volumes section:

- Declares `postgres_data` as a named volume. Docker manages it. It persists across container restarts.

The networks section:

- Defines a custom bridge network called `app_network`. Both services are attached to it, so they can talk to each other.

Navigate to your project folder and run:
```
docker compose up
```
You'll see logs from both containers streaming in your terminal. Docker will first pull/build the images, create the network, create the volume, and then start the containers.
To run it in the background (detached mode):
```
docker compose up -d
```
Then visit http://localhost:5000 in your browser. You should see:
```
✅ Flask is running and connected to PostgreSQL!
```
Checking what's running:
```
docker compose ps
```
This shows all services defined in the Compose file and their current status.
Checking the logs:
```
docker compose logs        # All services
docker compose logs web    # Just the Flask container
docker compose logs -f db  # Follow/stream database logs
```
Stopping everything:
```
docker compose down
```
This stops and removes containers and networks. Your volumes are kept by default. To also delete volumes (careful — this deletes your data):
```
docker compose down -v
```
## The `depends_on` Gotcha — A Common Trap

Okay, real talk. You're going to hit this at some point and it's going to confuse you.
depends_on tells Compose: "start the db container before the web container." It does not mean "wait until PostgreSQL is fully initialized and ready to accept connections."
PostgreSQL takes a few seconds to boot up inside the container. Your Flask app, being lightweight, might start in half a second — and then immediately try to connect to a database that's still initializing. Result? A connection error and a crashed container.
There are a few ways around this:
Option 1: Add a retry loop in your app code. Make your Flask app try to connect several times with a small delay between attempts. This is the most robust approach for production.
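As a sketch of Option 1 (the helper name, attempt counts, and delays below are illustrative choices, not from the article), the retry logic can be factored into a small function that works with any connect callable:

```python
import time

def connect_with_retry(connect, attempts=10, delay=2.0, sleep=time.sleep):
    """Call `connect()` until it succeeds or `attempts` runs out.

    `connect` is any zero-argument callable that raises on failure --
    for example: lambda: get_db_connection(). The `sleep` parameter is
    injectable so the loop can be tested without real waiting.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as error:  # psycopg2.OperationalError in practice
            last_error = error
            sleep(delay)
    # Every attempt failed: surface the last connection error.
    raise last_error
```

In `app.py` you would then wrap the existing helper, e.g. `conn = connect_with_retry(get_db_connection)`, so the web container simply keeps trying while PostgreSQL finishes booting.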
Option 2: Use healthcheck with depends_on condition. Docker Compose supports health checks that let you define "ready" vs just "started":
```yaml
services:
  db:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myuser -d mydb"]
      interval: 5s
      timeout: 5s
      retries: 5

  web:
    depends_on:
      db:
        condition: service_healthy
```
Now web will wait until db passes its health check before starting. Much better.
1. Forgetting to declare volumes at the top level
You might define a named volume inside a service but forget to add it to the top-level volumes section. Compose will throw an error. If you reference a named volume in a service, declare it at the top.
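For instance (a minimal illustration, not the article's full project file), this is the shape Compose expects — the name used inside the service must reappear at the top level:

```yaml
services:
  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data   # named volume referenced here...

volumes:
  postgres_data:   # ...must also be declared here, or Compose errors out
```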
2. Using latest as the image tag
It's tempting. postgres:latest is shorter than postgres:16-alpine. But "latest" is unpredictable — it changes whenever the maintainers release a new version. Pin your versions. Your future self will thank you.
3. Hard-coding secrets in the Compose file
Putting passwords directly in docker-compose.yml is fine for local development. But if this file ever ends up in a Git repo, those secrets are exposed. Use an .env file for sensitive values:
Create a .env file in the same directory as your Compose file:
```
DB_PASSWORD=supersecretpassword
POSTGRES_PASSWORD=supersecretpassword
```
Then in docker-compose.yml, reference them:
```yaml
environment:
  - DB_PASSWORD=${DB_PASSWORD}
```
Compose automatically reads the .env file. Add .env to your .gitignore and you're safe.
4. Editing code and wondering why changes aren't showing up
If you're using build: and you change your app.py, you need to rebuild the image:
```
docker compose up --build
```
The --build flag forces a rebuild even if an image already exists.
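A common alternative during development (shown here as a sketch; the paths assume this article's project layout) is to bind-mount your source over the image's copy, so edits appear in the container without a rebuild:

```yaml
services:
  web:
    build: ./app
    volumes:
      - ./app:/app   # local edits overlay the files baked into the image
```

Because our `app.py` runs with `debug=True`, Flask's reloader restarts the server when files change. You still need `--build` when `requirements.txt` or the Dockerfile changes, since those are baked into the image.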
5. Port already in use
If you get an error like port is already allocated, something on your machine is already using that port (maybe another Docker container, or a local server). Change the host port in the ports mapping — for example, "5001:5000" — and try again.
| Command | What it does |
|---|---|
| `docker compose up` | Start all services |
| `docker compose up -d` | Start in background |
| `docker compose up --build` | Rebuild images, then start |
| `docker compose down` | Stop and remove containers |
| `docker compose down -v` | Also remove volumes |
| `docker compose ps` | List running services |
| `docker compose logs` | View all logs |
| `docker compose logs -f web` | Follow logs for the 'web' service |
| `docker compose exec web bash` | Open a shell in the 'web' container |
| `docker compose restart web` | Restart a specific service |
| `docker compose pull` | Pull the latest images |
Here's something that trips up a lot of beginners: when your Flask app connects to the database, it uses db as the hostname — not localhost, not an IP address.
Why? Because Docker Compose creates a private network for your services, and on that network, each service is reachable by its service name. So web can call db:5432 and Docker's internal DNS resolves db to the right container's IP automatically.
This is also why you can't reach the database from your host machine on port 5432 unless you explicitly expose it with a ports mapping. In our example, we didn't expose the database port externally — which is actually a good security practice. Only the web container needs to talk to it.
If you do want to connect to Postgres from your host (say, with a GUI tool like pgAdmin or DBeaver), just add:
```yaml
db:
  ports:
    - "5432:5432"
```
You've got the foundation. Now here's a challenge to solidify what you've learned:
Challenge 1 — Basic:
Set up the exact Flask + PostgreSQL project from this article on your local machine. Make it run. Visit localhost:5000 and see the success message. Then run docker compose down and bring it back up. Verify your volume kept the data.
Challenge 2 — Intermediate:
Add a third service to the Compose file: Redis. Use the official redis:7-alpine image. Don't expose it externally — only the web service should be able to reach it. Update your Flask app to also set a key in Redis and read it back on the homepage.
Hint: In your Flask code, the Redis host will just be "redis" — the service name.
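If you want a head start on the Flask side, here's a sketch under two assumptions that go beyond the article: the `redis` (redis-py) package added to `requirements.txt`, and a Compose service named `redis`:

```python
import os

def make_redis_client():
    # Requires the redis-py package (add `redis` to requirements.txt).
    import redis
    # Inside the Compose network, the hostname is simply the service name.
    return redis.Redis(host=os.environ.get("REDIS_HOST", "redis"), port=6379)

def count_visit(client):
    # INCR creates the key at 0 on first use and increments it atomically,
    # so this works with any client object that exposes an incr() method.
    return client.incr("visits")
```

In the homepage route you would call `count_visit(make_redis_client())` and include the returned number in the response string.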
Challenge 3 — Advanced:
Add health checks to both the db service and the Redis service from Challenge 2. Update depends_on in the web service to use condition: service_healthy for both. Verify in the logs that the web service waits for them to be healthy before starting.
These three challenges will take you from "I understand Compose" to "I can actually use Compose" — and that's a big jump.
Docker Compose is one of those tools that, once you start using it, you wonder how you ever managed without it. No more running five separate docker run commands with twenty flags each. No more manually creating networks and typing IP addresses. One file, one command, and your whole environment is up.
Here's what we covered today:
- The anatomy of a `docker-compose.yml` file
- The `depends_on` trap and how to fix it with health checks

In the next part of this series, we'll go further — looking at Docker Compose in a CI/CD context, using multiple Compose files for different environments (dev vs prod), and environment-specific overrides.
For now, fire up your terminal and build something. That's where the real learning happens.
Happy Dockering! 🐳