

Sarthak Varshney is a Docker Captain, 5x C# Corner MVP, and 2x Alibaba Cloud MVP, with over six years of hands-on experience in the IT industry, specializing in cloud computing, DevOps, and modern application infrastructure. He is an author and associate consultant known for working extensively with cloud platforms and container-based technologies in real-world environments.
So you've been writing Node.js apps for a while now. Maybe you've even gotten comfortable with Express. And then someone says, "just Dockerize it" — and suddenly it feels like you've been asked to speak a second language with no preparation.
Don't worry. By the end of this article, you'll have a fully containerized Express app with a MongoDB database, live hot reload via Nodemon, and a Docker Compose setup that makes you feel like you actually know what you're doing. Because you will.
Let's go.
Here's the goal: a simple Express REST API that talks to MongoDB, runs inside Docker, and — this is the part students always love — automatically reloads when you change your code, without having to restart anything manually.
Think of it like this: imagine your code editor is the kitchen, Docker is the restaurant, and Nodemon is the waiter who notices when the chef changes the menu and immediately updates the board — no one has to scream across the room.
Our project structure will look like this by the end:
my-app/
├── src/
│   └── index.js
├── Dockerfile
├── docker-compose.yml
├── .dockerignore
└── package.json
Simple. Clean. Let's build it piece by piece.
Before we even touch Docker, we need an app worth containerizing.
mkdir my-app && cd my-app
npm init -y
npm install express mongoose
npm install --save-dev nodemon
Now create src/index.js:
const express = require('express');
const mongoose = require('mongoose');

const app = express();
app.use(express.json());

// Connect to MongoDB
mongoose.connect(process.env.MONGO_URI || 'mongodb://localhost:27017/mydb')
  .then(() => console.log('MongoDB connected'))
  .catch(err => console.error('MongoDB connection error:', err));

// A simple route to confirm the app is alive
app.get('/', (req, res) => {
  res.json({ message: 'Hello from inside Docker!' });
});

// A route to test DB connection
app.get('/health', async (req, res) => {
  const state = mongoose.connection.readyState;
  res.json({ db: state === 1 ? 'connected' : 'disconnected' });
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
Update your package.json scripts section:
"scripts": {
"start": "node src/index.js",
"dev": "nodemon src/index.js"
}
Nothing groundbreaking here. But notice the process.env.MONGO_URI — that's how our app will talk to MongoDB when running in Docker. Environment variables are how containers communicate configuration to your code, and this pattern will serve you well for the rest of your career.
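In plain Node, the pattern is just a fallback against process.env. Here's a minimal sketch (the grouping into a config object is a style choice, and the default values are just the ones this tutorial uses):

```javascript
// Read configuration from the environment, with local defaults.
// In Docker, these values come from the `environment:` section of
// docker-compose.yml; outside Docker, the fallbacks kick in so the
// app still runs on your machine with zero setup.
const config = {
  port: parseInt(process.env.PORT || '3000', 10),
  mongoUri: process.env.MONGO_URI || 'mongodb://localhost:27017/mydb',
};

console.log(config.port);     // 3000 unless PORT is set in the environment
console.log(config.mongoUri);
```

The same code runs unchanged on your laptop and inside a container; only the environment differs. That's the whole point of the pattern.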
A Dockerfile is basically a recipe. You're telling Docker: "here's how to bake my app into an image."
Create a file called Dockerfile (no extension) in the root of your project:
# Use the official Node.js LTS image as the base
FROM node:20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy package files first (smart caching trick — more on this below)
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Expose the port the app runs on
EXPOSE 3000
# Default command to run the app
CMD ["node", "src/index.js"]
Let's break down what's happening here, because each line has a reason.
FROM node:20-alpine — We're starting from an official Node.js image that's built on Alpine Linux. Alpine is a tiny Linux distribution, like a studio apartment instead of a mansion. Your image stays small, which means faster builds, faster pulls, less storage. Always prefer -alpine variants when you can.
WORKDIR /app — This sets the "home base" inside the container. Every command after this runs from /app. Think of it like doing cd /app before anything else.
COPY package*.json ./ then RUN npm install — Here's a clever trick that most beginners miss. Docker builds images in layers, and it caches each layer. If you copy all your files first and then run npm install, Docker has to reinstall all your packages every single time any file changes — even if you just edited a comment in index.js. By copying package.json first, running npm install, and then copying your code, Docker only re-runs the install when your dependencies actually change. On a slow connection, this saves minutes per build. Over weeks, it saves hours.
EXPOSE 3000 — This is documentation more than actual configuration. It tells Docker (and anyone reading your Dockerfile) that this container intends to listen on port 3000. The actual port binding happens when you run the container.
CMD ["node", "src/index.js"] — The default command to run when the container starts. We use the JSON array (exec) form rather than CMD node src/index.js because the array form doesn't invoke a shell: it runs the Node process directly, so signals like SIGTERM (what docker stop sends) and SIGINT (Ctrl+C) reach your app instead of being swallowed by /bin/sh.
The .dockerignore file is one of the most overlooked pieces of a Docker setup, and skipping it is a classic beginner mistake.
Create .dockerignore:
node_modules
npm-debug.log
.git
.gitignore
*.md
.env
Without this file, Docker's COPY . . command will copy your entire node_modules folder (which can be hundreds of megabytes) into the build context. Even if Docker doesn't ultimately use it, just sending it to the Docker daemon slows everything down. It's like packing your entire apartment into moving boxes just to move a single suitcase. Don't do it.
The .env entry is there for security — you never want secrets accidentally baked into an image.
Here's where it gets interesting for devs. In production, we run node src/index.js. But during development, we want the app to restart automatically every time we save a file. That's exactly what Nodemon does.
The trick is: we want hot reload without rebuilding the Docker image every time we change code. The solution is bind mounts — we mount our local source code directly into the container at runtime.
We don't actually need a development-specific Dockerfile for this. Compose will manage which command runs, so keep your Dockerfile as-is; the real trick happens at runtime:
When we use Docker Compose with a bind mount, the container's /app directory will be a live mirror of your local project folder. Change a file locally, Nodemon inside the container detects it, restarts the server. No image rebuild. No docker stop. Just save and refresh.
This is the workflow that makes Docker actually pleasant to develop with.
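One caveat worth knowing: on some setups (notably Docker Desktop on Windows, and certain macOS file-sharing configurations) filesystem change events don't always propagate through the bind mount, and Nodemon just sits there ignoring your saves. Nodemon's documented legacy polling mode works around this: run it as nodemon -L, or add a nodemon.json to the project root. A minimal sketch:

```json
{
  "watch": ["src"],
  "ext": "js,json",
  "legacyWatch": true
}
```

Polling costs a little CPU, so only reach for this if saves aren't triggering restarts.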
Docker Compose is the conductor of the orchestra. Instead of running multiple docker run commands with long flags you'll inevitably forget, you define everything in a docker-compose.yml file and bring it all up with one command.
Create docker-compose.yml:
version: '3.8'

services:
  app:
    build: .
    container_name: express_app
    ports:
      - "3000:3000"
    environment:
      - MONGO_URI=mongodb://mongo:27017/mydb
      - NODE_ENV=development
    volumes:
      - .:/app              # Bind mount: local code → container
      - /app/node_modules   # Anonymous volume: preserve container's node_modules
    command: npm run dev    # Override CMD to use Nodemon
    depends_on:
      - mongo
    restart: unless-stopped

  mongo:
    image: mongo:7
    container_name: mongo_db
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db # Named volume: persist database data

volumes:
  mongo_data:
Let's walk through the important parts.
build: . — This tells Compose to build the image using the Dockerfile in the current directory. No need to pre-build manually.
ports: "3000:3000" — Maps port 3000 on your host machine to port 3000 inside the container. Format is always host:container. If you wanted to access the app on port 8080 from your browser but keep it on 3000 inside the container, you'd write 8080:3000.
environment — This is how you pass environment variables into the container. Notice MONGO_URI=mongodb://mongo:27017/mydb — the hostname here is mongo, which is the name of the other service. Inside a Docker Compose network, services can find each other by their service name. It's like having internal DNS — mongo resolves to the MongoDB container's IP automatically.
Volumes — the double-volume pattern for hot reload. This part trips up almost everyone the first time:
.:/app — This mounts your entire project into the container. Saves go live immediately.

/app/node_modules — This is an anonymous volume that shields the node_modules folder inside the container from being overwritten by the bind mount. Here's why: your local machine might not have the same OS or architecture as the container. Some npm packages compile native binaries. If you let the bind mount overwrite the container's node_modules with your local ones, things break. This second volume line says "for this specific path, use what's inside the container, ignore the bind mount."

command: npm run dev — This overrides the CMD in your Dockerfile. In Compose, this is how you run different commands for different environments without maintaining multiple Dockerfiles.
depends_on: - mongo — Tells Compose to start the mongo service before app. Note: this only guarantees startup order, not that Mongo is fully ready. For production-grade apps, you'd add a health check or a startup retry loop. For learning purposes, this is fine.
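If you do want the app to tolerate Mongo starting slowly, a small retry wrapper at startup is enough. A sketch, under the assumption that you wrap your mongoose.connect call in it; the helper itself is generic (it retries any async function), and the attempt count and delay are arbitrary defaults:

```javascript
// Generic retry helper: keeps calling an async function until it succeeds
// or the attempts run out. At startup you'd call something like:
//   retry(() => mongoose.connect(process.env.MONGO_URI))
async function retry(fn, attempts = 5, delayMs = 1000) {
  for (let i = 1; i <= attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts) throw err; // out of attempts: give up for real
      console.log(`Attempt ${i} failed (${err.message}), retrying in ${delayMs}ms`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

The declarative alternative is a healthcheck on the mongo service plus depends_on with condition: service_healthy in Compose, which makes Compose itself wait until Mongo answers before starting the app.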
mongo_data named volume — Database data is stored here. If you run docker compose down, the database data survives. Only docker compose down -v deletes it. This is intentional — you don't want to lose your data every time you stop your stack.
# Build images and start all services in the foreground
docker compose up --build
# Or start in detached (background) mode
docker compose up --build -d
# Check running containers
docker compose ps
# View logs
docker compose logs -f app
# Stop everything
docker compose down
On first run, Docker will pull the MongoDB image, build your app image, and start both containers. You should see something like:
mongo_db | MongoDB starting...
express_app | Server running on port 3000
express_app | MongoDB connected
Open your browser and hit http://localhost:3000 — you'll see the JSON response from your Express app.
Now open src/index.js and change the message in the / route. Save the file. Watch the terminal — Nodemon will detect the change and restart the server within a second or two. Refresh your browser. The change is live. No rebuilds, no restarts, no drama.
That's hot reload in Docker. That's the setup.
Mistake 1: Not using .dockerignore
Your build context becomes massive. Builds slow to a crawl. Add the file. Always.
Mistake 2: Copying files before installing dependencies
# ❌ Wrong order — cache busts on every code change
COPY . .
RUN npm install
# ✅ Right order — only reinstalls when package.json changes
COPY package*.json ./
RUN npm install
COPY . .
Mistake 3: Using localhost to connect services
Inside Docker, localhost means the container itself. Your Express app can't reach MongoDB at localhost:27017 — MongoDB is in a different container. Use the service name: mongodb://mongo:27017/mydb.
Mistake 4: Forgetting the node_modules volume trick
If you only add .:/app and forget /app/node_modules, your container will try to use your host machine's node_modules folder. If you're on Mac or Windows and the package has native binaries compiled for Linux, it will crash. The two-volume pattern prevents this entirely.
Mistake 5: Using docker compose down -v during development
The -v flag deletes named volumes, which includes your database data. If you just want to stop the services, use docker compose down (no flag). Reserve down -v for when you actually want a clean slate.
The setup we've built is for development. In production you'd make a few changes:
Drop the bind mounts and rely on the code baked into the image at build time (COPY . .)
Switch command: npm run dev back to CMD ["node", "src/index.js"]
Add .dockerignore entries for development-only files
Keep a separate production docker-compose.yml
Set resource limits on the containers (mem_limit, cpus)
For learning and college projects though, what we've built here is solid and completely functional.
# Rebuild a specific service without restarting others
docker compose up --build app
# Execute a command inside a running container
docker exec -it express_app sh
# Check environment variables inside the container
docker exec -it express_app env
# Watch real-time logs for all services
docker compose logs -f
# Remove containers, networks (keeps volumes)
docker compose down
# Remove everything including volumes (careful with this)
docker compose down -v
You've got the foundation. Now push it further:
Add a new route /users that reads from a User collection in MongoDB using Mongoose. Add a document manually using mongosh (connect with docker exec -it mongo_db mongosh) and verify your route returns it.
Environment variable challenge: Move your PORT and MONGO_URI out of docker-compose.yml into a .env file. Use Compose's env_file option to load it. Make sure .env is in your .dockerignore.
Multi-service check: Add a fourth endpoint /ping-mongo that runs mongoose.connection.db.admin().ping() and returns the result. This confirms your app is genuinely talking to the database.
Break something intentionally: Remove the /app/node_modules anonymous volume line from your docker-compose.yml. Run docker compose up --build. See what happens. Then put it back and understand exactly why it broke.
The best way to really learn Docker is to poke at it until it yells at you, then figure out why. Every error message is a lesson. Every weird network issue teaches you how containers actually communicate.
You've got everything you need. Now go build something.