

Sarthak Varshney is a Docker Captain, 5x C# Corner MVP, and 2x Alibaba Cloud MVP, with over six years of hands-on experience in the IT industry, specializing in cloud computing, DevOps, and modern application infrastructure. He is an author and associate consultant, known for working extensively with cloud platforms and container-based technologies in real-world environments.
Containers, Images, Volumes, Settings — explained the way a friend would.
Let me paint a quick picture. You've just installed Docker Desktop. You open it up, and suddenly there's this dashboard staring at you — sidebar items, green dots, whale logos, a bunch of IDs that look like alien DNA. You click around a bit, feel mildly confused, and open Stack Overflow.
Been there.
This tour is me sitting next to you, pointing at things on screen, and saying "okay, this thing does that, and here's why it matters." No textbook language. No "leveraging containerization paradigms." Just the actual tour.
Let's walk in.
Docker Desktop is the GUI wrapper around the Docker engine. Think of the Docker engine as the actual machinery humming in the background — the thing that creates containers, manages networking, allocates resources. Docker Desktop is the control panel sitting on top of it. It gives you a visual interface so you don't have to remember a dozen terminal commands every time you want to see what's running.
That said — and I want to be upfront about this — you will still use the terminal. Docker Desktop doesn't replace the CLI; it complements it. But having the visual side open while you're learning is genuinely helpful. It's like having a map while you're driving somewhere new. You could navigate by memory, but the map helps you understand where you are.
Alright. Let's open the front door.
When you open Docker Desktop, the first thing you see is the Dashboard. It's the main screen with a list of running (or recently stopped) containers.
If you've never run anything yet, it'll be empty and slightly lonely-looking. Docker might even throw up a tutorial prompt. You can do it or skip it — we're doing our own tour today.
The Dashboard gives you a quick "heartbeat" view: what's alive, what's stopped, how long something's been running, which ports are exposed. Each container row has a few icons on the right — play, stop, restart, delete. You can manage a container's whole lifecycle right from here without ever opening a terminal.
But here's what I want you to notice: the Dashboard is reactive. If you go run a container from the terminal right now, it'll show up here almost instantly. Both views — GUI and CLI — are looking at the same Docker engine underneath. Neither is the "real" one. They're just two windows into the same room.
Click Containers in the left sidebar. This is where most of your time will be spent, at least at first.
Okay, before we go further — the classic analogy is "containers are like shipping containers." I get why people use it, but I think there's a better one for code.
Imagine you wrote an app on your laptop. It works perfectly. You send it to a friend. They run it and get a dozen errors. "Works on my machine," you shrug. The problem? Their Python version is different. They're missing a library. Their OS behaves slightly differently.
A container is a little sealed box that packages your app with everything it needs to run — the right Python version, the right libraries, the config files, all of it. When your friend runs the container, they're not running the app on their machine directly. They're running the whole sealed box. Same environment. Every time. Everywhere.
That's the "why" behind containers. Now let's look at what Docker Desktop shows you.
Each container entry in the list shows you:
- Name — either the one you gave it or a randomly generated one (like nervous_fermat or hopeful_curie)
- Image — the image the container was created from
- Status — running or exited, and for how long
- Port(s) — something like 0.0.0.0:3000->3000/tcp, which means "your machine's port 3000 is connected to the container's port 3000"

Click on any container name and you get a detail view with four tabs: Logs, Inspect, Bind Mounts, and Stats.
Logs is probably the one you'll use most. This is the container's console output — everything your app is printing, all the error messages, all the "Server started on port 3000" lines. When something breaks (and it will), logs are your first stop.
Inspect is a big JSON dump of the container's configuration. You don't need to memorize this, but it's super useful when you're wondering "wait, what environment variables does this container actually have?" or "which network is it connected to?"
Stats shows live CPU, memory, network, and disk usage in real-time graphs. Helpful when you suspect your container is eating more memory than it should.
Let's say you want to run an Nginx web server. Open your terminal and type:
docker run -d -p 8080:80 --name my-nginx nginx
Let's break that down piece by piece:
- docker run — create and start a container
- -d — run it in "detached" mode (background, not taking over your terminal)
- -p 8080:80 — map your machine's port 8080 to the container's port 80
- --name my-nginx — give it a name so you don't have to use the random ID
- nginx — the image to use (Docker will pull it from Docker Hub if you don't have it locally)

After running that, flip back to Docker Desktop. There's your container, green dot, running. Go to http://localhost:8080 in your browser and you'll see the Nginx welcome page. That's a real web server, running inside a container, on your machine.
To stop it:
docker stop my-nginx
To remove it:
docker rm my-nginx
Stopped containers don't disappear automatically. They stick around in an "exited" state. After a while you end up with 40 stopped containers taking up space. In Docker Desktop, you'll see them greyed out in the list. Clean them up with:
docker container prune
That'll ask you to confirm, then wipe out all stopped containers. Feels like cleaning out your desk.
Click Images in the sidebar.
Here's the clearest way I've found to explain this:
An image is a blueprint. A container is a building made from that blueprint.
You can build ten buildings from the same blueprint. They all start identical. But over time, you can paint one blue, renovate another — each building becomes its own thing. Same idea here: you run the nginx image three times, you get three separate containers. Each one is independent, but they all started from the same image.
Images are read-only. You can't "edit" an image directly. What you do is write a Dockerfile that says "start from this base image, add my files, run these commands, set these environment variables" — and Docker builds a new image from those instructions.
Each image entry shows:
- Name and tag — something like nginx:latest or node:18-alpine. The tag is basically the version.
- Size — how much disk space the image takes up
- Created — when it was built or pulled

You can also see which images are In Use (currently running in a container) vs. those just sitting there unused.
Let's say you have a simple Node.js app. Your Dockerfile might look like:
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
Each line is a layer. Docker builds images in layers, and it caches them. So if your app code changed but your package.json didn't, Docker reuses the npm install layer from the cache. Builds stay fast.
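One thing that helps the cache (and keeps images lean) is telling Docker what to leave out of the build context. A .dockerignore file next to the Dockerfile does that — here's a sketch for a typical Node project like the one above; adjust the entries to your own repo:

```
# .dockerignore — keep these out of the build context and the image
node_modules
npm-debug.log
.git
Dockerfile
.dockerignore
```

Without this, `COPY . .` would drag your local node_modules into the image, bloating it and busting the cache on every build.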
Build it with:
docker build -t my-node-app:v1 .
The -t flag tags it with a name and version. The . tells Docker to look for the Dockerfile in the current directory.
After building, flip to Docker Desktop's Images tab. Your my-node-app image is sitting there, ready to be run.
Every time you build, Docker creates a new image (or updates the existing one). Old versions just pile up with the tag <none>:<none> — those are called dangling images. Clean them up with:
docker image prune
If you really want to nuke everything unused (images, containers, networks, cache), there's the nuclear option:
docker system prune -a
Do this when your disk is getting eaten alive. It's very satisfying.
Click Volumes in the sidebar.
Here's the thing about containers: they're stateless by design. When you delete a container, everything inside it disappears. Logs, database records, uploaded files — gone.
That's often fine! For a web server serving static content, statelessness is a feature. But for a database? You really don't want to lose all your data every time you restart the container.
Volumes are Docker's solution to this. A volume is a persistent storage area that lives on your actual machine, outside the container. You mount it into the container at a specific path, and anything written to that path gets saved on your machine — not just inside the container.
Think of it like a USB drive. The container might be temporary, but the USB drive persists. Plug it into a new container, and all your data is still there.
Each volume shows:
- Name — either the one you gave it or an auto-generated ID
- Driver — usually local, meaning it's stored on your local machine
- Size — how much data it's holding

Create one from the terminal:

docker volume create my-database-data
Then use it when running a container:
docker run -d \
--name my-postgres \
-e POSTGRES_PASSWORD=mysecretpassword \
-v my-database-data:/var/lib/postgresql/data \
postgres:15
The -v my-database-data:/var/lib/postgresql/data part says: "take the volume called my-database-data and mount it at /var/lib/postgresql/data inside the container." That's where Postgres stores its data files.
Now you can stop and delete the container, create a new one with the same volume mount, and all your database records will still be there. The data outlives the container.
You'll also hear about bind mounts, which are slightly different. Instead of a Docker-managed volume, you mount a specific folder from your machine:
docker run -v /Users/yourname/myproject:/app my-node-app
This is super useful during development. Your local code folder gets mounted into the container, so when you edit a file on your machine, the container sees the change immediately. No rebuild needed. Many developers run their entire dev environment this way.
Click Settings (the gear icon) in the top right of Docker Desktop.
This is where you control the engine itself. A few sections worth knowing:
General covers your startup behavior (should Docker launch when your computer boots?), the Docker CLI location, and whether to send usage stats to Docker. Nothing too wild here, but you'll want to come back to it eventually.
Resources is the important one. On Mac and Windows, Docker Desktop runs inside a Linux virtual machine (because Docker containers need Linux under the hood). The Resources tab lets you control how much CPU, RAM, and disk space that VM gets.
The defaults are usually pretty conservative. If your containers feel sluggish or you're running something heavy like a database + a web server + a caching layer all at once, bump up the RAM. I usually set it to whatever I can spare — maybe half of what my machine has.
On Mac with Apple Silicon (M1/M2/M3), Docker runs surprisingly well. On Intel Macs and Windows, the VM overhead is more noticeable.
Docker Engine exposes the JSON config file for the Docker daemon — the background process that runs everything. Most of the time you won't touch it. But if you ever need to configure a private registry, set up logging drivers, or tweak advanced networking, it happens here.
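For a taste of what lives in that file, here's a small daemon.json sketch — the mirror URL is a placeholder, and the log settings are examples rather than recommendations:

```json
{
  "registry-mirrors": ["https://mirror.example.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

This would route image pulls through a mirror and cap each container's log files at three 10 MB rotations, so chatty containers can't eat your disk.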
Newer versions of Docker Desktop have a "Dev Environments" feature — basically a way to package an entire development environment (editor config, extensions, runtime) into a container. It's still evolving and not everyone uses it, but it's worth knowing it exists.
In the Docker Desktop top bar, there's often a search box that queries Docker Hub. Think of Docker Hub as the app store for container images. Millions of public images live there: databases, web servers, programming language runtimes, monitoring tools, everything.
When you run docker pull postgres:15, Docker goes to Docker Hub, finds the official Postgres image tagged 15, and downloads it to your machine. The Images section in Docker Desktop then shows it as available locally.
You can also push your own images to Docker Hub (or a private registry) so teammates can pull them. That's how teams share environments without needing to send files around.
I want to leave you with one mental model that'll make everything click:
Docker Desktop is read-and-manage. The terminal is build-and-run.
You run containers, build images, and create volumes from the terminal. Docker Desktop gives you the live view, the logs, the stats, the cleanup tools. Neither one is "the right way" — they're both talking to the same engine, and using them together is genuinely more powerful than either alone.
A common workflow for a developer looks like:
- Build images and run containers from the terminal
- Watch logs and live stats in Docker Desktop while things run
- Prune stopped containers and dangling images when disk space fills up
- docker compose up to spin up the whole stack

Here's a proper challenge to walk through everything we covered. Don't just read it — actually do it.
The challenge: Run a PostgreSQL database in a container, connect to it, create a table, then prove your data persists after restarting the container.
Step 1: Create a named volume for the database data.
docker volume create pg-data
Step 2: Run a Postgres container using that volume.
docker run -d \
--name my-pg \
-e POSTGRES_USER=student \
-e POSTGRES_PASSWORD=docker123 \
-e POSTGRES_DB=testdb \
-p 5432:5432 \
-v pg-data:/var/lib/postgresql/data \
postgres:15
Step 3: Check Docker Desktop. You should see my-pg running in the Containers section, and pg-data in the Volumes section.
Step 4: Connect to the database and create a table.
docker exec -it my-pg psql -U student -d testdb
Inside the Postgres prompt:
CREATE TABLE friends (name TEXT);
INSERT INTO friends VALUES ('Alice');
INSERT INTO friends VALUES ('Bob');
SELECT * FROM friends;
\q
Step 5: Stop and remove the container.
docker stop my-pg
docker rm my-pg
Check Docker Desktop — the container is gone. The pg-data volume is still there.
Step 6: Run a brand new container using the same volume.
docker run -d \
--name my-pg-2 \
-e POSTGRES_USER=student \
-e POSTGRES_PASSWORD=docker123 \
-e POSTGRES_DB=testdb \
-p 5432:5432 \
-v pg-data:/var/lib/postgresql/data \
postgres:15
Step 7: Connect and check your data.
docker exec -it my-pg-2 psql -U student -d testdb
SELECT * FROM friends;
Alice and Bob are still there. Data survived the container deletion. That's volumes working exactly as intended.
Bonus: After you're done, clean everything up:
docker stop my-pg-2
docker rm my-pg-2
docker volume rm pg-data
You've now walked every major section of Docker Desktop. You know what containers are and how to manage them. You understand that images are blueprints and containers are running instances of those blueprints. You know why volumes exist and how to use them. And you've poked around Settings enough to know where to go when you need more resources or need to configure the engine.
The natural next step is Docker Compose — a tool that lets you define multi-container applications in a single YAML file. Instead of running three separate docker run commands for your app, your database, and your cache, you write one docker-compose.yml file and do docker compose up. Everything starts together, connected, named, and configured.
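As a quick preview (a sketch, not a tutorial — the service names are illustrative, and my-node-app is the image we built earlier), that app + database + cache trio might look like:

```yaml
services:
  app:
    image: my-node-app:v1
    ports:
      - "3000:3000"
    depends_on:
      - db
      - cache
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: docker123
    volumes:
      - pg-data:/var/lib/postgresql/data
  cache:
    image: redis:7

volumes:
  pg-data:
```

One docker compose up and all three start together, on a shared network, each visible as its own row in Docker Desktop.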
But that's a tour for another day. For now? Run some containers. Break some things. Check the logs. Clean up. Repeat.
That's how Docker actually sticks.
Happy shipping. 🐳