

Sarthak Varshney is a Docker Captain, 5x C# Corner MVP, and 2x Alibaba Cloud MVP, with over six years of hands-on experience in the IT industry, specializing in cloud computing, DevOps, and modern application infrastructure. He is an author and associate consultant, known for working extensively with cloud platforms and container-based technologies in real-world environments.
So your container is running. Or maybe it crashed. Or maybe it's running but something feels off — your app isn't responding the way it should, a request is hanging, or you're just sitting there wondering what the heck is happening inside that little isolated box.
This is where logs come in. And if you're new to Docker, you're going to want to get very comfortable with them. Logs are your window into a container's soul.
Let me tell you a story first.
Imagine you hired a new employee — let's call him Kevin. Kevin is very hardworking. You put him in a room, give him a task, and close the door. An hour later, you come back expecting results. Nothing. You knock. No answer. Kevin is still inside. Is he working? Is he sleeping? Did something go wrong? You have no idea because you never gave Kevin a way to communicate with you.
Containers are Kevin.
When you run a container, it does its thing in isolation. But unlike Kevin, a well-behaved container keeps a diary — a running log of everything it's doing, every error it hits, every request it processes. Docker gives you a neat little command to read that diary:
docker logs <container_name_or_id>
That one command right there? It's your door into Kevin's room.
Before we start running commands, let's understand what we're actually looking at.
When you run an application inside a container — say a Node.js app, a Python Flask server, or an Nginx web server — that application prints messages. Maybe it prints Server started on port 3000. Maybe it prints Database connection failed. These messages go to two places:

- stdout (standard output): the normal, everyday messages
- stderr (standard error): the errors and complaints
Docker captures both of these streams automatically and stores them as the container's logs. You don't have to configure anything special. As long as your app is writing to stdout or stderr, Docker is quietly collecting all of it in the background.
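You don't even need Docker to see that these are two genuinely separate streams. A minimal shell sketch that routes each stream to its own file:

```shell
# One line on each stream, redirected to separate files:
# > captures stdout, 2> captures stderr.
sh -c 'echo "[INFO] server started"; echo "[ERROR] db unreachable" >&2' \
  >out.log 2>err.log

cat out.log   # [INFO] server started
cat err.log   # [ERROR] db unreachable
```

Inside a container, your app writes to those same two streams, and Docker's logging driver records both of them — which is why docker logs can replay everything later.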
Think of it like this: Docker is that friend who secretly records every voicemail you leave, so you can go back and listen to them later. Except it's not creepy — it's genuinely useful.
Let's spin up a real example. We'll use a simple Nginx container because everyone has it available and it generates logs the moment you hit it with a request.
docker run -d --name my-nginx -p 8080:80 nginx
This runs Nginx in the background (-d for detached), names it my-nginx, and maps port 8080 on your machine to port 80 in the container.
Now visit http://localhost:8080 in your browser. Hit it a couple of times. Then come back to your terminal and run:
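If you'd rather not leave the terminal, curl can stand in for the browser here. A quick sketch, assuming curl is installed on your machine:

```shell
# Fire a few requests at the mapped port to generate access-log lines.
for i in 1 2 3; do
  curl -s -o /dev/null http://localhost:8080/
done
```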
docker logs my-nginx
You'll see something like:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
172.17.0.1 - - [10/Mar/2025:14:22:31 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0..."
172.17.0.1 - - [10/Mar/2025:14:22:31 +0000] "GET /favicon.ico HTTP/1.1" 404 153 ...
That's Nginx logging every request you made. The IP, the time, what was requested, the response code. It's all there.
Static logs are great, but sometimes you want to watch what's happening as it happens. You're debugging a live issue. You kick off a request and you want to see the log lines appear in real time.
For that, you use the -f flag (short for "follow"):
docker logs -f my-nginx
Now your terminal is live. It won't return to the prompt — it'll just sit there, waiting. The moment a new log line comes in, it appears. Go back to your browser, refresh the page a few times, and watch the lines stream in.
This is like switching from reading yesterday's newspaper to watching live news. Same information, completely different experience.
To stop following, just press Ctrl + C.
Here's a common situation: your app has been running for three days. You run docker logs and your terminal gets completely flooded — thousands of lines scrolling past. You can't find anything.
Use --tail to only see the last N lines:
docker logs --tail 50 my-nginx
This shows only the last 50 lines. You can combine it with -f too:
docker logs --tail 50 -f my-nginx
Now you start from the last 50 lines and then continue following in real time. This is probably the command you'll use most often when actively debugging something.
Logs are more useful when you know when something happened. By default, docker logs doesn't show timestamps. Add -t to get them:
docker logs -t my-nginx
Output changes from:
172.17.0.1 - - "GET / HTTP/1.1" 200 615
To:
2025-03-10T14:22:31.482910842Z 172.17.0.1 - - "GET / HTTP/1.1" 200 615
That timestamp is in UTC. Super useful when you're correlating a log entry with an event you know happened at a specific time — like "my app crashed at around 2 PM, let me check what was happening just before."
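If mental UTC arithmetic isn't your thing, you can convert a logged timestamp into local time. A sketch that assumes GNU date (the default on most Linux distros; macOS's BSD date uses different flags):

```shell
# Docker's -t timestamps are RFC 3339 in UTC (the trailing Z).
# GNU date can parse one and print it in your local timezone:
date -d "2025-03-10T14:22:31Z"

# Or pin a specific timezone for the comparison:
TZ=America/New_York date -d "2025-03-10T14:22:31Z" "+%H:%M %Z"
```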
You can even combine all the flags together:
docker logs -f -t --tail 100 my-nginx
This is the Swiss army knife version. Last 100 lines, with timestamps, live-following.
Sometimes you don't want everything. You want logs from a specific window of time. That's where the --since and --until flags come in.
docker logs --since 30m my-nginx
This shows logs from the last 30 minutes. You can use m for minutes, h for hours, s for seconds.
Want logs between two specific points in time? Use --since and --until together:
docker logs --since "2025-03-10T14:00:00" --until "2025-03-10T15:00:00" my-nginx
This is incredibly useful when someone says "hey the app was broken between 2 PM and 3 PM" — you can isolate exactly that window and investigate without noise.
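Both flags also accept Unix timestamps and relative durations, and you can compute the RFC 3339 form on the fly. A sketch assuming GNU date and the my-nginx container from earlier:

```shell
# Build a timestamp for "45 minutes ago" and use it as the lower bound.
SINCE="$(date -u -d '45 minutes ago' '+%Y-%m-%dT%H:%M:%S')"
docker logs --since "$SINCE" my-nginx

# Epoch seconds work too: everything since 2 PM today.
docker logs --since "$(date -d 'today 14:00' '+%s')" my-nginx
```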
Okay, so now you know all the commands. But knowing commands isn't enough. You need to know how to think when you're reading logs. This is where the detective part comes in.
Real detective work isn't about having access to every clue — it's about knowing which clues matter and how to connect them.
Here's a framework that actually works:
When something goes wrong, don't read from line one. Scroll to the bottom first. Look for the last error or exception. That's usually where the story ends — and often where the real cause hides.
docker logs --tail 30 <container> 2>&1 | grep -i error
The 2>&1 part? That merges stderr into stdout so you can grep both streams at once.
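You can convince yourself the merge matters without a container: grep only reads the pipe, and the pipe only carries stdout.

```shell
# A stand-in for a chatty process: info on stdout, an error on stderr.
noisy() { echo "[INFO] all good"; echo "[ERROR] boom" >&2; }

# Without the merge, grep never receives the stderr line
# (it bypasses the pipe and prints straight to your terminal):
noisy | grep -i error          # grep itself matches nothing

# With 2>&1, stderr is folded into stdout before the pipe:
noisy 2>&1 | grep -i error     # [ERROR] boom
```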
You found an error at 3:47 PM. Now look at what happened at 3:46 PM. What was the last successful thing before the error? That gap is where the problem lives.
One 404 error in your logs is normal. Five hundred 404 errors in ten seconds for the same path? That's someone scanning your app or a broken link causing a cascade. Logs tell stories through repetition.
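A quick way to surface that kind of repetition is to tally requests by path. A sketch assuming the my-nginx container and Nginx's default access-log format, where the request path is the seventh whitespace-separated field:

```shell
# Count the most-requested paths. In nginx's default log format,
# $6 is the quoted method ("GET, "POST) and $7 is the path.
docker logs my-nginx 2>&1 \
  | awk '$6 ~ /^"(GET|POST)/ {print $7}' \
  | sort | uniq -c | sort -rn | head
```

A path you've never heard of showing up hundreds of times at the top of that list is exactly the kind of story this surfaces.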
Some logs are just informational. Others are warnings. Others are outright errors. Learn what "normal" looks like for your app. Once you know normal, abnormal stands out immediately.
Let me walk you through something that actually happens in the real world.
You're running a simple web app in Docker. Users are reporting that they sometimes get a 500 error. You can't reproduce it yourself. Classic.
First, you follow the logs while users use the app:
docker logs -f -t --tail 50 my-app
You're watching. A few minutes pass. Then you see it:
2025-03-10T15:32:11Z Error: connect ECONNREFUSED 127.0.0.1:5432
2025-03-10T15:32:11Z at TCPConnectWrap.afterConnect
2025-03-10T15:32:11Z 500 Internal Server Error - /api/users
ECONNREFUSED on port 5432. That's PostgreSQL's default port. The app is trying to connect to a database and getting refused.
Why? You check your database container:
docker logs my-postgres --tail 20
And there it is:
FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
The database container crashed due to a permissions issue, so it's not accepting connections. Your app is healthy, your web server is healthy — but the database went down, and every request that needed it started failing.
You fixed a bug you couldn't reproduce, in under five minutes, just by reading logs.
That's the power of being comfortable with docker logs.
Mistake 1: Running docker logs on a container name that doesn't exist.

docker logs my-app
Error: No such container: my-app
First check that the container is actually running:
docker ps
Or if it crashed:
docker ps -a
The -a flag shows all containers, including stopped ones. You can still read logs from a stopped container — which is super useful post-mortem.
Mistake 2: Your app logs to a file instead of stdout. This one catches beginners. If your app writes logs to a file inside the container (like /var/log/app.log) instead of stdout, docker logs will show nothing. Docker only captures stdout and stderr.
The fix? Either configure your app to log to stdout, or use a volume to expose the log file. But ideally — stdout is the Docker way.
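If you can't change the app itself, a common workaround is to symlink its log files to the container's stdout and stderr — the official nginx image does exactly this. A sketch with hypothetical log paths, meant for a RUN line in your Dockerfile or for your entrypoint script:

```shell
# /var/log/app/*.log are placeholder paths -- substitute your app's real ones.
# Anything the app writes to these "files" now flows into docker logs.
ln -sf /dev/stdout /var/log/app/access.log
ln -sf /dev/stderr /var/log/app/error.log
```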
Mistake 3: Letting logs grow forever. Docker stores container logs on your host machine. If your app is very chatty (logs every single request, for example) and runs for weeks, those log files can grow enormous. You can set limits when running a container:
docker run --log-opt max-size=10m --log-opt max-file=3 my-app
This caps log files at 10MB each and keeps only the last 3 files. Essential for production use.
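Per-container flags are easy to forget. If you want these limits everywhere, you can set a daemon-wide default in /etc/docker/daemon.json (this is the json-file driver's documented configuration, and it applies only to containers created after the daemon restarts):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

After saving, restart Docker (for example, sudo systemctl restart docker on systemd hosts), and new containers pick up the limits automatically.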
Mistake 4: Typos in the container name or ID.

docker logs a1b2c3d4e5f6
vs.
docker logs my-app
Both work! Docker accepts either the full container ID, a short prefix of it (even just the first 3-4 characters), or the name. But you need to get it right — a typo in the name will give you that "No such container" error from Mistake 1.
Here are all the commands from this article, consolidated:
| What you want to do | Command |
|---|---|
| See all logs | docker logs <name> |
| Follow logs live | docker logs -f <name> |
| Last 50 lines | docker logs --tail 50 <name> |
| With timestamps | docker logs -t <name> |
| Last 50, live, with timestamps | docker logs -f -t --tail 50 <name> |
| Logs from last 30 minutes | docker logs --since 30m <name> |
| Logs in a time range | docker logs --since "..." --until "..." <name> |
| Only error lines | docker logs <name> 2>&1 \| grep -i error |
| Logs of a stopped container | docker ps -a then docker logs <name> |
Here's a hands-on challenge to cement everything you just learned. No peeking at the answer until you've tried it yourself.
Setup:
Run this command to start a container that does something interesting:
docker run -d --name log-challenge alpine sh -c "
while true; do
echo \"[INFO] App is running at \$(date)\";
sleep 2;
echo \"[WARNING] Memory usage high\";
sleep 1;
echo \"[ERROR] Failed to reach external API\";
sleep 3;
done
"
This starts an Alpine Linux container that spits out fake log messages in a loop.
Your tasks:

1. View everything the container has logged so far.
2. Follow the logs live and watch new lines appear every few seconds.
3. Show only the last 10 lines.
4. Show the logs with timestamps.
5. Show only the logs from the last 2 minutes.
6. Filter the output so you see only the [ERROR] lines.
Bonus challenge: How would you modify the run command to cap this container's log file at 5MB maximum size?
Take your time. Every one of these is solvable with the commands in this article. When you can run all six tasks without referring back, you officially know Docker logs.
Docker logs aren't glamorous. They're not the exciting part of learning Docker. Nobody tweets about docker logs -f. But I promise you — the day something breaks in production, or your app is behaving weirdly and you can't figure out why, this is the first place you'll look. And the people who are fast at debugging Docker containers? They're almost always fast because they know their logs well.
Your container keeps a diary. Learn to read it.
Happy debugging. 🐳
Next up in the series: Docker Volumes — Where Data Lives (and How to Not Lose It)