

Sarthak Varshney is a Docker Captain, 5x C# Corner MVP, and 2x Alibaba Cloud MVP, with over six years of hands-on experience in the IT industry, specializing in cloud computing, DevOps, and modern application infrastructure. He is an Author and Associate Consultant, known for working extensively with cloud platforms and container-based technologies in real-world environments.
You've been learning Docker for a few weeks now. You can build images, run containers, write a Compose file, maybe even wire up a basic network. Things are clicking. You feel good.
Then someone in a Slack group says, "Yeah but is your container secure though?" and you suddenly realize — you have no idea.
Don't worry. That's exactly where most people are when they first hit this topic. Security in Docker isn't something people talk about at the beginning because, honestly, getting the basics working is already a lot. But once you're building things that actually run somewhere — a VPS, a team server, even your own lab — security stops being optional.
This article is your first real step into Docker security. We're going to cover four things: scanning your images for vulnerabilities, not running your containers as root, handling secrets the right way, and keeping your images updated. These four habits alone put you miles ahead of the average beginner.
Let's get into it.
Think of a container like a lunch box you pack and hand to someone. If that lunch box had a tiny hole in it — maybe something got in, maybe something leaked out — you wouldn't want someone serving food from it to a hundred people, right?
A Docker image is similar. It's a snapshot of your application and everything it needs to run. If that snapshot contains outdated libraries with known security holes, or if the process inside runs with way too many permissions, you're handing everyone who runs your container a potentially compromised lunch box.
At small scale, maybe nobody notices. At any meaningful scale, someone will.
When you build a Docker image, you start from a base — maybe node:18, maybe python:3.11-slim, maybe plain ubuntu. That base image contains a bunch of packages, libraries, and binaries. Some of those will have known security weaknesses — things that were discovered after the image was published, often tracked by CVE numbers (Common Vulnerabilities and Exposures).
The good news? You don't have to manually hunt them down. There are tools that do the scanning for you.
Docker Scout is Docker's own vulnerability scanner and it's now baked right into Docker Desktop and the CLI. If you're on Docker Desktop, you likely already have it.
# Scan a local image
docker scout cves my-app:latest
This will output a list of vulnerabilities found in your image — their severity (critical, high, medium, low), which package they're in, and whether a fix exists. It feels a bit like running npm audit if you've ever done that in Node.js projects.
You can also scan images directly from Docker Hub before you even pull them:
docker scout cves nginx:latest
One thing people appreciate about Scout is that it also gives you recommendations. It might say "hey, switch from node:18 to node:18-alpine and you'll drop 40 vulnerabilities." That's actionable, which is rare.
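Depending on how recent your Docker Scout install is, those suggestions also have a dedicated subcommand, so you can ask for them directly:
docker scout recommendations my-app:latest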
If you want a more powerful, open-source scanner that works on basically everything, Trivy by Aqua Security is what most professionals reach for.
Install it first:
# On macOS
brew install trivy
# On Ubuntu/Debian (Trivy isn't in the default repos; add Aqua Security's apt repository first, per the Trivy install docs)
sudo apt-get install -y trivy
Then scan any image:
trivy image nginx:latest
Trivy checks OS packages, language-specific packages (npm, pip, gem, etc.), and even your Dockerfile configs for misconfigurations. It's surprisingly thorough.
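The misconfiguration checks have their own subcommand as well. Pointed at a project directory, it will flag common Dockerfile issues (not switching away from root, for example) alongside problems in other config files it finds:
trivy config .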
You can also filter by severity so you're not drowning in noise:
trivy image --severity HIGH,CRITICAL nginx:latest
That says: only show me the stuff I should actually care about right now.
The mistake isn't failing to scan. The mistake is scanning once and never again.
Vulnerabilities are discovered after images are built. An image that was clean last month might have three new CVEs today. This is why scanning needs to be a habit — or better yet, part of your pipeline. But even running a quick scan before deploying something to a server is massively better than nothing.
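If you do wire scanning into a pipeline, a common pattern is to let the scan fail the build when something serious shows up. Trivy supports this with an exit-code flag (the severity threshold here is just an example):
trivy image --exit-code 1 --severity HIGH,CRITICAL my-app:latest
The command exits non-zero whenever HIGH or CRITICAL vulnerabilities are found, which most CI systems treat as a failed step.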
By default, the process inside your Docker container runs as root. You might think that's fine because "it's isolated inside a container anyway" — and there's some truth to that. But isolation has limits.
Here's the scary bit: if an attacker finds a way to break out of your container (container escape vulnerabilities do exist), they land on your host as root. Game over. They own the machine.
Even without a breakout, running as root inside a container means that if something in your app is exploited, the attacker has full permissions inside that container. They can read any file, write anywhere, run anything. That's a lot of blast radius for a vulnerability that might otherwise be contained.
The fix is straightforward. Create a non-root user in your Dockerfile and switch to it before running your app.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# Create a non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Switch to that user
USER appuser
EXPOSE 3000
CMD ["node", "index.js"]
The addgroup and adduser commands create a system group and user. The USER instruction tells Docker to run everything after it as that user — including your CMD.
For Python projects it's similar:
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN useradd --no-create-home appuser
USER appuser
CMD ["python", "app.py"]
After building your image, you can verify what user your container runs as:
docker run --rm my-app:latest whoami
If it says root, your USER instruction either isn't there or isn't working correctly. If it says appuser (or whatever you named it), you're good.
You can also inspect a running container:
docker exec <container_id> whoami
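As an aside, even if an image never sets a USER, you can force a non-root identity at runtime with the --user flag. A quick sketch (1000:1000 is an arbitrary UID:GID, not something defined in the image):
docker run --rm --user 1000:1000 my-app:latest
It's a useful stopgap for images you can't rebuild, though whatever UID you pick still needs read access to the files your app uses.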
One thing that trips up a lot of people: when you switch to a non-root user, that user might not have permission to access certain files or directories in your image.
If your app writes logs to /app/logs or uploads files to /app/uploads, make sure those directories are owned by your app user:
RUN mkdir -p /app/logs && chown -R appuser:appgroup /app/logs
Do this before the USER appuser line. You need root to set ownership — after switching users, you won't be able to.
ENV DB_PASSWORD=supersecretpassword123
Yeah. This is in thousands of Dockerfiles on GitHub right now. When you put a secret into a Dockerfile ENV or ARG, it gets baked into the image's configuration and build history. Anyone who pulls the image can run docker history my-app:latest and potentially see it. Anyone you share the image with has that secret.
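You can see this for yourself on any image built with a secret in an ENV line:
docker history --no-trunc my-app:latest
Each layer's full instruction is printed, ENV values included.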
Same goes for hardcoding credentials in your app code before copying it in. The files end up in your image layers.
The simplest improvement is to not put secrets in your image at all. Instead, pass them at runtime:
docker run -e DB_PASSWORD=supersecretpassword123 my-app:latest
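One small refinement: if you pass only the variable name, Docker copies the value from your own shell's environment, so the secret doesn't sit in the command itself (or in your shell history):
docker run -e DB_PASSWORD my-app:latest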
Better — read from a .env file:
docker run --env-file .env my-app:latest
Your .env file stays on your machine, is in your .gitignore, and never touches the image.
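If you haven't used one before, a .env file is just KEY=value pairs, one per line (these values are placeholders):
DB_PASSWORD=supersecretpassword123
API_URL=https://api.example.com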
In Docker Compose:
services:
  app:
    image: my-app:latest
    env_file:
      - .env
Again — .env is local only, not committed to git, not in the image.
For production and team setups (especially with Docker Swarm or orchestration), Docker has a proper secrets management system. Here's how it works at a basic level:
# Create a secret
echo "supersecretpassword123" | docker secret create db_password -
# Use it in a service (Swarm mode)
docker service create \
  --name my-app \
  --secret db_password \
  my-app:latest
When a secret is attached to a service this way, Docker mounts it as a file at /run/secrets/db_password inside the container. Your app reads it from the file instead of an environment variable. This means the secret isn't in your environment (where it might get logged), and it's managed by Docker's encrypted storage.
In your application code you'd do something like:
with open('/run/secrets/db_password') as f:
    db_password = f.read().strip()
This is the "proper" way but it requires Swarm mode or Kubernetes. For local dev and simple setups, runtime env vars with an .env file is totally reasonable.
Speaking of secrets leaking into images — do you have a .dockerignore file? You should.
.env
.env.*
*.pem
*.key
secrets/
node_modules/
.git/
This file tells Docker which files to exclude from the build context. Even if you accidentally have a COPY . . in your Dockerfile, your .env file won't get pulled in if it's listed in .dockerignore.
Think of it like .gitignore but for Docker builds.
This one is subtle. You build an image, it works, you deploy it, and then you move on to the next thing. Six months later, that same image is still running — but the base image it was built from has received 30+ security patches that your container never got.
Your container is frozen in time. The underlying OS packages, the runtime, the libraries — none of them update automatically. That's actually one of Docker's strengths (reproducibility), but it means you have to be intentional about updates.
Here's a nuance that confuses people early on: the latest tag seems like it would always be current, but in practice it's unreliable. When you do FROM node:latest, you don't know exactly what version you're getting, and it changes whenever someone pushes a new latest — which can break things.
But being too specific creates a different problem. If you pin to FROM node:18.17.0, you'll never accidentally get a broken update — but you also won't get security patches automatically.
The sweet spot for most people: pin to a minor version, not a patch.
# Too vague
FROM node:latest
# Too specific (won't get patches)
FROM node:18.17.0
# Better — gets patches within 18.x
FROM node:18-alpine
Then periodically — maybe once a month, or when there's a known vulnerability — you rebuild your image to pick up base image updates.
Rebuilding isn't scary. It's just:
docker build --no-cache -t my-app:latest .
The --no-cache flag forces Docker to rebuild every step instead of reusing cached layers. It won't re-download the base image on its own, though; for that, pull the base fresh first (or add --pull to the build command).
If you're pulling the base image fresh:
docker pull node:18-alpine
docker build -t my-app:latest .
If you're using GitHub Actions, GitLab CI, or any CI/CD pipeline, you can set up a scheduled job that rebuilds and pushes your image weekly. That way your deployed images are always within a week of fresh base images.
A very basic GitHub Actions workflow that does this would run on a schedule:
on:
  schedule:
    - cron: '0 2 * * 1'  # Every Monday at 2am
This is slightly beyond the scope of this article but worth knowing it's possible — and common in production teams.
Some base images get updates much more frequently than others. Official images on Docker Hub (maintained by Docker or the software vendors) are generally well-maintained. Unofficial or random community images might not be.
When picking a base image, ask yourself: who maintains this? When was it last updated? Does it have an active issue tracker?
Alpine-based images (node:18-alpine, python:3.11-alpine) tend to have smaller attack surfaces too — fewer packages means fewer things that can have vulnerabilities.
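You can check that claim yourself by scanning two variants of the same base and comparing the counts (the exact numbers shift constantly as CVEs are published and patched):
trivy image --severity HIGH,CRITICAL node:18
trivy image --severity HIGH,CRITICAL node:18-alpine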
Let's look at a Dockerfile that applies everything we've covered:
# Use a specific, well-maintained base (not latest, not too-specific)
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Create and switch to a non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
RUN chown -R appuser:appgroup /app
USER appuser
EXPOSE 3000
# No secrets here — pass at runtime via .env file or -e flags
CMD ["node", "index.js"]
And a .dockerignore:
.env
.env.*
.git/
node_modules/
*.pem
*.key
And before deploying, you'd run:
# Scan for vulnerabilities
trivy image my-app:latest
# Check who the container runs as
docker run --rm my-app:latest whoami
# Run with env file instead of hardcoded secrets
docker run --env-file .env -p 3000:3000 my-app:latest
That's four security habits in place for about 15 minutes of setup. Not bad.
Here's a hands-on challenge to cement this:
The Challenge: Secure a Vulnerable Dockerfile
Take this intentionally bad Dockerfile:
FROM ubuntu:latest
ENV DB_PASSWORD=admin123
ENV API_KEY=sk_live_supersecretkey
RUN apt-get update && apt-get install -y python3 python3-pip
COPY . .
RUN pip3 install -r requirements.txt
CMD ["python3", "app.py"]
Your job is to:
1. Remove the hardcoded secrets from the Dockerfile
2. Pin the base image to something more specific than latest
3. Create a non-root user and switch to it with USER
4. Write a .dockerignore that would prevent a .env file from leaking in
5. Run trivy image on it and note what vulnerabilities it reports
6. Run docker run --rm your-image whoami to confirm it's not running as root
Bonus: write a docker run command that passes DB_PASSWORD and API_KEY as runtime variables instead.
When you've done all that, you'll have gone from "Docker security? what's that?" to "I actively think about security when I build images." That's a genuinely big deal.
Security can feel overwhelming when you first encounter it — there's a whole world of attack vectors, compliance frameworks, and enterprise tools that people dedicate careers to. But you don't need all of that to be meaningfully more secure than you are today.
The four habits we covered — scan your images regularly, don't run as root, handle secrets properly, keep images updated — these are the fundamentals. They're not flashy, but they're what separates a thoughtful developer from one who just got lucky.
Start with one. Add the others over time. Before you know it, they'll just be how you write Dockerfiles.
See you in the next one. 🐳