

Sarthak Varshney is a Docker Captain, 5x C# Corner MVP, and 2x Alibaba Cloud MVP, with over six years of hands-on experience in the IT industry, specializing in cloud computing, DevOps, and modern application infrastructure. He is an Author and Associate Consultant, known for working extensively with cloud platforms and container-based technologies in real-world environments.
So you've been hearing about Docker for a while now. Maybe you've already containerized a simple Python script or a "Hello World" Flask app. That's great — but today we're stepping it up. We're going to run a real Django application inside Docker, wire it up with a PostgreSQL database, and actually run migrations inside a container. By the end of this, you'll have a working setup and a much better mental model of how all these pieces fit together.
Grab a coffee. Let's get into it.
Before we write a single line, let me answer the "why bother" question.
You've probably been working on Django projects where you run python manage.py runserver locally, and everything works fine. Then you send the project to a friend, they clone it, and it immediately breaks because they have a different Python version, or they don't have PostgreSQL installed, or their system packages are slightly different. Sound familiar?
Docker fixes exactly this. You describe your environment once — Python version, OS packages, pip packages, the works — and anyone who runs your container gets that exact environment. No more "works on my machine" excuses.
And when you add a database like PostgreSQL into the mix, Docker Compose makes it ridiculously easy to spin up both the Django app and the database together, isolated from everything else on your system.
This is where a lot of beginners make their first mistake. They just type FROM python:latest without thinking about it, and that causes headaches down the road.
Let me explain what your options actually look like.
When you search for Python images on Docker Hub, you'll see tags like:
python:3.12
python:3.12-slim
python:3.12-alpine
python:3.12-slim-bookworm
These are not the same thing. Not even close.
python:3.12 — This is the full Debian-based image. It comes with pretty much every system library you could ever need. The downside? It's massive. We're talking 900MB+ just for the base image.
python:3.12-slim — Same Debian base, but with most of the extra tools stripped out. This is usually around 120–130MB. It's still missing some C libraries that certain Python packages need (like psycopg2, for example), so you may need to install a few system packages manually.
python:3.12-alpine — Alpine Linux-based. Tiny — like 50MB tiny. But Alpine uses musl instead of glibc, and a bunch of Python packages don't play nicely with it. You'll end up spending more time fighting compilation issues than you save in image size. Unless you have a specific reason to use Alpine, I'd avoid it for Django projects.
My recommendation for Django apps: Use python:3.12-slim. It's lean, it's Debian-based (fewer compatibility headaches), and with a few extra apt-get install lines you can get everything you need.
Here's an example of why this matters. If you're using psycopg2 (the most common PostgreSQL adapter for Django), you need libpq-dev and a C compiler available at build time. With the slim image, you just add:
RUN apt-get update && apt-get install -y \
    libpq-dev \
    gcc \
    && rm -rf /var/lib/apt/lists/*
That rm -rf /var/lib/apt/lists/* at the end is important — it cleans up the apt cache so your image doesn't carry around extra weight after installation.
Alternatively, you can skip the compilation dance entirely by using psycopg2-binary in your requirements.txt. It ships with the PostgreSQL client library (libpq) precompiled and bundled in. For production you'd want the non-binary version, but for learning purposes, psycopg2-binary is perfectly fine and saves you the hassle.
Let's set up a minimal Django project. If you already have one, feel free to follow along with your own code.
# Create a project folder
mkdir django-docker-demo && cd django-docker-demo
# Create a virtual env (just for local development scaffolding)
python3 -m venv venv
source venv/bin/activate
# Install Django and psycopg2-binary
pip install django psycopg2-binary
# Start a new Django project
django-admin startproject myapp .
Now freeze your dependencies:
pip freeze > requirements.txt
Open up requirements.txt and you'll see something like:
asgiref==3.8.1
Django==5.0.6
psycopg2-binary==2.9.9
sqlparse==0.5.0
This file is your project's contract with pip. Anyone — or any Docker container — that runs pip install -r requirements.txt will get these exact versions. That consistency is the whole point.
Common mistake #1: Not pinning versions. Writing Django instead of Django==5.0.6 means pip can install any version, including future breaking ones. Always pin. Always.
Common mistake #2: Forgetting to regenerate requirements.txt after adding a new package. You install something locally, it works, you push to Git, the Docker build fails because the new package isn't in the file. It happens more than you'd think.
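If you want to catch mistake #1 mechanically, a check like this works. It's a toy stdlib sketch of my own, not a real packaging tool — real requirement lines can also carry extras and environment markers that this ignores:

```python
def unpinned(requirements_text):
    """Return requirement lines that don't pin an exact version with ==."""
    bad = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            bad.append(line)
    return bad

sample = """\
Django==5.0.6
psycopg2-binary
sqlparse==0.5.0
"""
print(unpinned(sample))  # prints: ['psycopg2-binary']
```

Run it against your requirements.txt before committing and anything unpinned jumps out immediately.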
Now let's write the Dockerfile. Create a file called Dockerfile in your project root:
# Use Python 3.12 slim as our base
FROM python:3.12-slim
# Set environment variables
# Prevents Python from writing .pyc files
ENV PYTHONDONTWRITEBYTECODE=1
# Prevents Python from buffering stdout/stderr (so logs show up immediately)
ENV PYTHONUNBUFFERED=1
# Set working directory inside the container
WORKDIR /app
# Install system dependencies needed for psycopg2
RUN apt-get update && apt-get install -y \
    libpq-dev \
    gcc \
    && rm -rf /var/lib/apt/lists/*
# Copy requirements first (layer caching trick!)
COPY requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the project
COPY . .
# Expose the development server port
EXPOSE 8000
# Run the Django development server
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Let me walk you through the parts that actually matter.
PYTHONUNBUFFERED=1 — Without this, Django's log output gets buffered, which means you'll stare at a blank terminal for a while when something breaks. Set this to 1 and your logs stream in real time.
COPY requirements.txt . before COPY . . — This is the layer caching trick. Docker builds images layer by layer. Each instruction creates a new layer. If a layer hasn't changed, Docker reuses the cached version. By copying requirements.txt first and running pip install before copying the rest of your code, you ensure that pip install only re-runs when your dependencies actually change — not every time you tweak a view or update a template. This makes rebuilds much faster.
0.0.0.0:8000 in the CMD — By default, Django's dev server listens on 127.0.0.1 (localhost inside the container). If you don't bind to 0.0.0.0, you won't be able to reach it from outside the container. Took me an embarrassing amount of time to figure that out the first time.
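One small addition worth pairing with this Dockerfile (my suggestion, not something the setup strictly requires): a .dockerignore file in the project root, so COPY . . doesn't drag your local venv, Git history, and bytecode caches into the image. A minimal sketch:

```
venv/
.git/
__pycache__/
*.pyc
```

It works like .gitignore, but for the Docker build context — and a smaller build context also makes builds faster.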
Running Django alone in a container is fine, but the real power comes when you connect it to a database container. This is where Docker Compose shines.
Create a file called docker-compose.yml:
version: "3.9"

services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: myappdb
      POSTGRES_USER: myappuser
      POSTGRES_PASSWORD: supersecretpassword
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    environment:
      - DEBUG=True
      - DATABASE_URL=postgres://myappuser:supersecretpassword@db:5432/myappdb
    depends_on:
      - db

volumes:
  postgres_data:
A few things to notice here.
The db hostname in the DATABASE_URL — When services are on the same Docker Compose network, they can reach each other by service name. So from the web container's perspective, the database is available at hostname db. Not localhost, not 127.0.0.1 — literally db. This trips up almost everyone the first time.
depends_on — This tells Docker Compose to start the db container before the web container. But here's an important caveat: depends_on only waits for the container to start, not for PostgreSQL to be actually ready to accept connections. PostgreSQL takes a few seconds to initialize, and your Django app might try to connect before it's ready.
We'll handle that properly in a second.
volumes: - .:/app — This mounts your local project folder into the container. Any code change you make locally is reflected immediately inside the container. Great for development. You'd remove this in production.
postgres_data volume — This named volume stores your database files. Without this, every time you restart the db container, you'd lose all your data. The volume persists it.
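While we're looking at the Compose file: newer Compose versions can also express the "wait for the database" part declaratively with a healthcheck, so depends_on only releases web once PostgreSQL actually answers. A sketch of the relevant pieces (the pg_isready tool ships inside the postgres image):

```yaml
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myappuser -d myappdb"]
      interval: 2s
      timeout: 3s
      retries: 10

  web:
    depends_on:
      db:
        condition: service_healthy
```

The wait-script approach covered later in this article works on any Compose version, so pick whichever you prefer.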
Open myapp/settings.py and update the DATABASES section:
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "myappdb"),
        "USER": os.environ.get("DB_USER", "myappuser"),
        "PASSWORD": os.environ.get("DB_PASSWORD", "supersecretpassword"),
        "HOST": os.environ.get("DB_HOST", "db"),
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}
Or if you want to use dj-database-url (which parses the DATABASE_URL string we set in compose), add it to requirements.txt and use:
import dj_database_url

DATABASES = {
    "default": dj_database_url.config(
        default="postgres://myappuser:supersecretpassword@db:5432/myappdb"
    )
}
Both approaches work. One thing to notice: the Compose file above only sets DATABASE_URL, so with the os.environ approach the connection details come from the defaults baked into settings.py. If you want them fully environment-driven, add DB_HOST, DB_NAME, and friends to the web service's environment block. Either way, environment variables give you the flexibility to change connection details without touching code.
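If you're curious what dj-database-url actually does with that string, the core idea fits in a few lines of standard library code. This is a toy sketch of the concept, not the library's real implementation (which also handles query-string options, other engines, and edge cases):

```python
from urllib.parse import urlparse

def parse_database_url(url):
    """Split a postgres:// URL into the pieces Django's DATABASES dict wants."""
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,
        "PORT": str(parts.port or 5432),
    }

cfg = parse_database_url("postgres://myappuser:supersecretpassword@db:5432/myappdb")
print(cfg["HOST"], cfg["NAME"])  # prints: db myappdb
```

Every piece of the URL maps onto one key of the DATABASES dict, which is why the two settings approaches are interchangeable.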
Common mistake #3: Hardcoding credentials in settings.py and pushing them to GitHub. Please don't do this. Use environment variables. Future you (and your users) will thank you.
This is the part that confuses a lot of people. "How do I run manage.py migrate if my app is inside a container?"
The answer is straightforward, but there are a few ways to do it, each with tradeoffs.
The first way is docker compose exec. Start your services first:
docker compose up -d
Then, with both containers running, execute the migrate command inside the web container:
docker compose exec web python manage.py migrate
docker compose exec lets you run any command inside a running container. It's like SSH-ing into the container and running the command there. You can use this for anything — creating a superuser, running management commands, opening a Django shell, etc.
# Create a superuser
docker compose exec web python manage.py createsuperuser
# Open Django shell
docker compose exec web python manage.py shell
You can modify the CMD in your Dockerfile or the command in your Compose file to run migrations automatically on startup:
command: >
  sh -c "python manage.py migrate &&
         python manage.py runserver 0.0.0.0:8000"
This runs migrate every time the container starts. It's convenient for development, and it's actually safe — Django's migration system is idempotent, meaning running migrate when there's nothing to migrate is a no-op. It just checks and moves on.
The downside in production? You generally don't want your application server running migrations on every restart, especially if you're running multiple instances. Migrations should be a deliberate step, not automatic. But for learning and local dev, this is totally fine.
Remember I mentioned that depends_on doesn't actually wait for PostgreSQL to be ready? Here's how you deal with it.
The simplest approach for development is a small shell script that polls the database until it's available. Create a file called entrypoint.sh:
#!/bin/sh

echo "Waiting for postgres..."
while ! nc -z db 5432; do
  sleep 0.1
done
echo "PostgreSQL started"

python manage.py migrate

exec "$@"
Make it executable and add it to your Dockerfile:
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
You'll also need netcat installed:
RUN apt-get update && apt-get install -y \
    libpq-dev \
    gcc \
    netcat-openbsd \
    && rm -rf /var/lib/apt/lists/*
This script loops until the database port is reachable, then runs migrations, then starts the server. Simple and effective.
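If you'd rather not install netcat at all, the same polling idea fits in a few lines of standard-library Python. This is a sketch of my own (not part of Django); you could save it as, say, wait_for_db.py and call it from the entrypoint before migrate:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.1):
    """Poll until host:port accepts TCP connections, like the nc loop above.

    Returns True as soon as a connection succeeds, False if `timeout`
    seconds pass without one.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection performs the full TCP handshake; the
            # with-block closes the probe socket immediately afterward.
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(interval)
    return False

# In the entrypoint you'd do something like:
#   wait_for_port("db", 5432) or sys.exit("database never became reachable")
```

It behaves the same as the nc -z loop, just with one fewer system package in the image.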
Here's your final project structure:
django-docker-demo/
├── myapp/
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── manage.py
├── requirements.txt
├── Dockerfile
├── docker-compose.yml
└── entrypoint.sh
Build and start everything:
docker compose up --build
The first time, this will pull the python:3.12-slim base image, pull the postgres:16 image, build your web image, and start both containers. Once it's up, open your browser and hit http://localhost:8000. You should see Django's default welcome page.
Common mistake #4: Running docker compose up without --build after changing the Dockerfile or requirements.txt. Docker will use the cached image and your changes won't show up. Whenever you change the Dockerfile or dependencies, use --build.
Checking your container logs:
# All services
docker compose logs
# Just the web service, follow in real time
docker compose logs -f web
Stopping everything:
docker compose down
Stopping and removing volumes (destroys your DB data):
docker compose down -v
Use that last one carefully. It wipes the database volume.
Rebuilding just one service:
docker compose up --build web
Alright, here's your homework. Don't skip it. This is where the understanding actually sticks.
Set up the project — Create the Django app, Dockerfile, and docker-compose.yml as described above. Get it running with docker compose up --build.
Create an app — Run docker compose exec web python manage.py startapp blog. Because your project folder is volume-mounted into the container, the new blog/ directory shows up on your local machine too.
Add a model — In blog/models.py, create a Post model with a title (CharField), content (TextField), and created_at (DateTimeField with auto_now_add=True).
Register the app — Add 'blog' to INSTALLED_APPS in settings.py.
Make migrations and migrate:
docker compose exec web python manage.py makemigrations
docker compose exec web python manage.py migrate
Register the model in admin — Add it to blog/admin.py, create a superuser, and visit http://localhost:8000/admin. Try creating a few posts.
Bonus: Stop everything with docker compose down, then start again with docker compose up. Notice that your database data is still there because of the named volume.
If you get through all of that, you genuinely understand Django in Docker. Not just copying commands — actually understanding what's happening.
What we covered today isn't just theory. This is a real pattern used in production apps everywhere. The Dockerfile structure, the Compose setup, the migration strategy — these are decisions real teams make every day.
The key things to remember: pick python:3.12-slim over full or alpine for most Django work, always pin your dependencies in requirements.txt, services on the same Compose network talk to each other by service name, and use docker compose exec whenever you need to run management commands inside a running container.
The next time you're setting up a new Django project, try reaching for Docker from day one. You'll thank yourself later when the onboarding question stops being "how do I install PostgreSQL on my machine?" and becomes just docker compose up.
Happy shipping. 🐳