

Sarthak Varshney is a Docker Captain, 5x C# Corner MVP, and 2x Alibaba Cloud MVP, with over six years of hands-on experience in the IT industry, specializing in cloud computing, DevOps, and modern application infrastructure. He is an Author and Associate Consultant, known for working extensively with cloud platforms and container-based technologies in real-world environments.
Let me tell you a quick story.
A few months into my Docker journey, I was running a PostgreSQL container for a side project. I'd been adding data for two days — users, test records, some dummy transactions. Everything was looking great. Then one day I ran docker rm to clean up some containers, restarted everything fresh, and... the database was empty. Two days of data, gone.
I just sat there staring at my terminal for a solid minute.
That moment taught me one of the most important lessons in Docker: containers are ephemeral. They're not meant to hold your data forever. They're like a glass of water — once you tip it over (remove the container), whatever was inside is gone.
This is where Docker Volumes come in. And once you understand them, you'll never lose your database again.
To understand volumes, you need to understand how a container's filesystem actually works.
When Docker runs a container, it creates a writable layer on top of the image. Think of it like this — the Docker image is a printed book. It's read-only. You can't write notes inside a published book. So Docker gives you a sticky note pad on top of that book. That sticky note pad is the writable layer. You do your work there.
The problem? When the container dies, the sticky note pad goes with it.
All the data your app wrote — database rows, uploaded files, logs — lives in that writable layer. And docker rm throws the whole thing in the bin.
This is actually by design. Containers are supposed to be disposable. You should be able to kill one, spin up a new one, and have it work exactly the same. But real-world apps need to persist data somewhere. You don't want your MySQL database to forget everything every time you recreate a container.
This tension — "containers are disposable" vs "data must survive" — is exactly what volumes solve.
A Docker volume is storage that lives outside the container's lifecycle. It's managed by Docker itself, sits on your host machine's filesystem (usually under /var/lib/docker/volumes/), and most importantly — it survives container deletion.
Here's a better analogy. Imagine your container is a rented apartment. The furniture inside (the writable layer) came with the apartment. When you leave, the landlord removes everything. But if you brought your own wardrobe from home, put it in the apartment, and then moved out — that wardrobe doesn't disappear. It's yours. You take it with you.
Volumes are your wardrobe. They belong to you (Docker host), not to the apartment (container).
There are a few types of storage in Docker:
- Named volumes — volumes you create and refer to by a name of your choosing, managed by Docker
- Anonymous volumes — volumes Docker creates automatically, identified only by a random hash
- Bind mounts — a specific folder on your host machine mapped directly into the container
For most use cases — especially databases — you want named volumes. That's our focus today.
A named volume is exactly what it sounds like: a volume with a name you give it. Docker manages where it actually lives on disk. You just refer to it by name.
Let's create one.
docker volume create mydata
That's it. You just created a named volume called mydata. Docker will store it at /var/lib/docker/volumes/mydata/_data on Linux (or in Docker Desktop's VM on Mac/Windows).
Want to see your volumes?
docker volume ls
Output will look something like:
DRIVER    VOLUME NAME
local     mydata
Want details about a specific volume?
docker volume inspect mydata
This gives you a JSON blob with the mount point, creation date, and driver info. The Mountpoint field shows exactly where Docker is keeping your data on the host.
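The output looks roughly like this (the timestamp and exact values here are illustrative; yours will differ):

```json
[
    {
        "CreatedAt": "2024-05-01T10:00:00Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/mydata/_data",
        "Name": "mydata",
        "Options": null,
        "Scope": "local"
    }
]
```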
Now the important question — how do you actually use a volume with a container?
The -v flag (or --volume) is how you attach a volume to a container. The syntax is:
docker run -v <volume-name>:<container-path> <image>
Let's break this down with a real example. Say you want to run an Nginx container and make sure its web content survives restarts:
docker run -d \
--name my-nginx \
-v mydata:/usr/share/nginx/html \
-p 8080:80 \
nginx
What's happening here:
- -v mydata:/usr/share/nginx/html — mounts the mydata volume to /usr/share/nginx/html inside the container
- On first mount, any files the image already has at that path get copied into the mydata volume
- Even if you remove my-nginx, the data in mydata stays

You can also use the newer --mount syntax, which is more verbose but clearer:
docker run -d \
--name my-nginx \
--mount type=volume,source=mydata,target=/usr/share/nginx/html \
-p 8080:80 \
nginx
Both work. The -v flag is quicker to type. The --mount syntax is easier to read in scripts. Pick your preference.
Alright, this is the main event. Let's set up a PostgreSQL container with a named volume so the database survives container deletion.
Step 1: Create the volume
docker volume create pgdata
Step 2: Start PostgreSQL with the volume
docker run -d \
--name my-postgres \
-e POSTGRES_PASSWORD=mysecretpassword \
-e POSTGRES_USER=sarthak \
-e POSTGRES_DB=testdb \
-v pgdata:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:15
Quick explanation of what's happening:
- -e POSTGRES_PASSWORD=mysecretpassword — sets the password (required by the postgres image)
- -e POSTGRES_USER=sarthak — creates a custom user
- -e POSTGRES_DB=testdb — creates a database called testdb
- -v pgdata:/var/lib/postgresql/data — mounts our pgdata volume to where PostgreSQL stores its actual data files
- -p 5432:5432 — exposes the port so you can connect from your machine

Step 3: Add some data
Connect to the database:
docker exec -it my-postgres psql -U sarthak -d testdb
Now inside the psql shell, create a table and insert some rows:
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name VARCHAR(100),
email VARCHAR(100)
);
INSERT INTO users (name, email) VALUES
('Sarthak', '[email protected]'),
('Himanshu', '[email protected]'),
('Mayank', '[email protected]');
SELECT * FROM users;
You should see your 3 rows. Type \q to exit.
Step 4: DELETE the container. Completely.
docker stop my-postgres
docker rm my-postgres
The container is gone. If you run docker ps -a, you won't find my-postgres anywhere. Scary? Good. Now watch this.
Step 5: Bring it back with the same volume
docker run -d \
--name my-postgres-new \
-e POSTGRES_PASSWORD=mysecretpassword \
-e POSTGRES_USER=sarthak \
-e POSTGRES_DB=testdb \
-v pgdata:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:15
Notice we used the same pgdata volume. Now connect again:
docker exec -it my-postgres-new psql -U sarthak -d testdb
SELECT * FROM users;
All three rows are there. Sarthak, Himanshu, Mayank — all present and accounted for.
That moment when you see your data still there after deleting the container? That's the volume magic.
You'll also come across bind mounts in real projects, so let's not skip over them.
A bind mount maps a specific folder on your host machine directly into the container. The syntax looks similar to named volumes but you use a full path:
docker run -d \
--name dev-app \
-v /home/sarthak/myproject:/app \
node:20
Here, /home/sarthak/myproject on your machine maps to /app inside the container. Any changes you make in /home/sarthak/myproject are instantly reflected inside the container — and vice versa.
This is super useful for development workflows. You edit code on your machine, the container sees the changes immediately, no rebuilding the image needed.
But for databases and production data, stick with named volumes. Bind mounts are more fragile — they depend on the exact path existing on the host, they can have permission issues, and they're harder to manage across different machines.
Named volumes = production data. Bind mounts = development code. That's a good mental rule.
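Under the hood, Docker decides which kind of mount you asked for purely by looking at the source half of the -v argument (on Linux/macOS): an absolute path means bind mount, a bare name means named volume, and a lone container path means anonymous volume. Here's a small sketch of that rule — classify_mount is a hypothetical helper for illustration, not a Docker command:

```shell
#!/bin/sh
# Sketch of how Docker interprets the source part of -v (Linux/macOS):
#   starts with "/" and has a colon  -> bind mount
#   bare name with a colon           -> named volume
#   container path alone (no colon)  -> anonymous volume
classify_mount() {
  case "$1" in
    /*:*) echo "bind mount" ;;        # e.g. /home/sarthak/myproject:/app
    *:*)  echo "named volume" ;;      # e.g. pgdata:/var/lib/postgresql/data
    *)    echo "anonymous volume" ;;  # e.g. /var/lib/postgresql/data
  esac
}

classify_mount "pgdata:/var/lib/postgresql/data"   # named volume
classify_mount "/home/sarthak/myproject:/app"      # bind mount
classify_mount "/var/lib/postgresql/data"          # anonymous volume
```

This is also why the volume-name typo in Mistake 4 below is so sneaky: a misspelled name is still a valid name, so Docker silently creates a fresh, empty volume.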
Here's a quick reference for the volume commands you'll reach for most often:
List all volumes:
docker volume ls
Inspect a volume (see where it lives, when it was created):
docker volume inspect pgdata
Remove a specific volume:
docker volume rm pgdata
⚠️ This will permanently delete the data. Make sure no container is using the volume before removing it.
Remove all unused volumes (spring cleaning):
docker volume prune
⚠️ This removes every volume not currently attached to a running or stopped container. Be careful with this one in production. Note: on newer Docker versions (23.0+), docker volume prune removes only anonymous volumes by default; add --all to include named volumes too.
Mistake 1: Not using volumes at all for databases
The classic beginner trap. You run MySQL or PostgreSQL, add data, recreate your container, and wonder why everything is empty. Now you know why. Always, always use a named volume for database containers.
Mistake 2: Using docker rm -v without realizing what -v does
When you run docker rm -v container-name, the -v flag removes the anonymous volumes associated with that container. Anonymous volumes are ones Docker creates automatically when an image specifies a VOLUME in its Dockerfile but you didn't give it a name. If you didn't name your volume, docker rm -v can delete your data.
Lesson: always name your volumes with -v myvolumename:/path instead of just -v /path.
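Here's the difference as it shows up in docker volume ls — the anonymous volume appears as a long random hash (shortened here), the named one as the name you chose. The annotations after the arrows are mine, not part of the output:

```
DRIVER    VOLUME NAME
local     f2b9c8a1d3e4...   <- anonymous, created automatically by Docker
local     pgdata            <- named, created by you
```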
Mistake 3: Thinking docker volume prune is harmless
It removes unused volumes. If you stopped a container temporarily (not running, but not removed), its volume is still "in use" — safe. But if you removed the container, the volume becomes unused and prune can delete it (on Docker 23.0+, named volumes additionally need the --all flag, but don't treat that as a safety net). Be intentional about cleanup.
Mistake 4: Volume name typos
If you accidentally write -v pgdat:/var/lib/postgresql/data instead of -v pgdata:/var/lib/postgresql/data, Docker will just create a brand new volume called pgdat and your old data won't be there. Always double-check volume names.
Mistake 5: Running a new DB container without setting the same credentials
Even if you use the same volume, if you start the new container with a different POSTGRES_USER or POSTGRES_PASSWORD, you might have authentication issues. PostgreSQL stores credentials in the data directory. Keep your environment variables consistent.
In real projects, you'll almost always use Docker Compose. Here's how volumes look in a docker-compose.yml:
version: '3.8'
services:
db:
image: postgres:15
environment:
POSTGRES_USER: sarthak
POSTGRES_PASSWORD: mysecretpassword
POSTGRES_DB: testdb
volumes:
- pgdata:/var/lib/postgresql/data
ports:
- "5432:5432"
volumes:
pgdata:
See that volumes: section at the bottom of the file? That's where you declare named volumes for the whole Compose setup. Docker Compose will create pgdata if it doesn't exist and reuse it if it does.
When you run docker compose down, by default it keeps your volumes. Run docker compose down -v if you actually want to delete them (useful for a clean slate in dev).
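One way to avoid the credential-mismatch problem from Mistake 5 is to keep the values in a .env file, which Compose reads automatically from the project directory, and reference them in the YAML. A minimal sketch:

```yaml
# docker-compose.yml (fragment). Compose substitutes ${VAR} from a .env
# file sitting next to this file, e.g. a .env containing:
#   POSTGRES_USER=sarthak
#   POSTGRES_PASSWORD=mysecretpassword
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
```

Every docker compose up then reuses the same credentials, so the data in the pgdata volume and the environment variables can never drift apart.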
Okay, theory is done. Time to get your hands dirty. Here's your challenge:
Challenge: Build a Redis container with persistent storage
Redis is an in-memory data store, but it can also write data to disk. Your job:
Create a named volume called redisdata
Start a Redis container with that volume mounted to /data (that's where Redis stores its dump files)
Connect to Redis using:
docker exec -it my-redis redis-cli
Set a few key-value pairs:
SET name "Sarthak"
SET course "Docker Volumes"
SET lesson "5"
Stop and delete the container completely
Start a new Redis container using the same redisdata volume
Connect and run:
GET name
GET course
GET lesson
If you see your values back — you've nailed it. You just made Redis data survive container deletion.
Bonus challenge: Use docker volume inspect redisdata to find where the data is actually stored on your machine. Can you see the dump.rdb file in that directory?
Volumes are one of those topics that seem small until they save you from a disaster. Here's the quick version of everything we covered:
- Containers are ephemeral; data in the writable layer disappears with the container
- Named volumes live outside the container lifecycle and survive deletion
- Use -v volumename:/container/path to attach a volume to a container
- docker volume ls, inspect, rm, and prune are your management tools

The next time someone runs docker rm on your database container, you won't panic. You'll just start a new one with the same volume and carry on like nothing happened.
That's the power of volumes. Go use them.