

Sarthak Varshney is a Docker Captain, 5x C# Corner MVP, and 2x Alibaba Cloud MVP, with over six years of hands-on experience in the IT industry, specializing in cloud computing, DevOps, and modern application infrastructure. He is an Author and Associate Consultant, known for working extensively with cloud platforms and container-based technologies in real-world environments.
Let me paint a picture.
You've just built a web app. It has a frontend (let's say React), a backend (Node.js or Python Flask), and a database (PostgreSQL). You've Dockerized all three — three separate containers, each doing their own thing. Now what?
The frontend needs to talk to the backend. The backend needs to talk to the database. But each container is basically its own little box, sealed off from the world. How does the frontend know where to find the backend? How does the backend even reach the database?
This is the problem container networking solves — and once you get it, a huge chunk of Docker finally clicks into place.
Let's break it all down, starting with an analogy that'll stick.
Think of every Docker container like a house in a neighborhood.
Each house has its own address (IP address). Neighbors can talk to each other — but only if they live on the same street. If you live on Street A and your friend lives on Street B, you can't just knock on their door directly. You'd need a bridge, a road that connects both streets.
Now here's the fun part — Docker does exactly this. It creates virtual "streets" called networks. Containers on the same network can freely talk to each other. Containers on different networks? They're basically strangers who don't even know the other exists.
And the default street that Docker gives you when you're not paying attention? That's called the Bridge Network.
When you install Docker and start running containers without specifying any network, they all land on a default network called bridge. You can see it yourself:
docker network ls
You'll get something like:
NETWORK ID     NAME      DRIVER    SCOPE
a1b2c3d4e5f6   bridge    bridge    local
7g8h9i0j1k2l   host      host      local
3m4n5o6p7q8r   none      null      local
That bridge there — that's the default. Every container you run without telling Docker otherwise ends up here.
Here's where it gets interesting. Containers on the default bridge network can reach each other — but only via IP address. Not by name. And this is a problem, because container IPs can change every single time you restart things.
Let's see this in action. Open two terminals.
Terminal 1 — start a container and grab its IP:
docker run -it --rm --name container-a alpine sh
Inside the container, run:
hostname -i
# You'll see something like: 172.17.0.2
Terminal 2 — start another container and try to ping the first one by name:
docker run -it --rm --name container-b alpine sh
Inside container-b, try:
ping container-a
# ping: bad address 'container-a'
Dead. Nothing. The containers don't know each other's names on the default bridge. But if you use the IP:
ping 172.17.0.2
# This works!
Great, it works — but now you've hardcoded an IP address. That's like memorizing your friend's house location by the GPS coordinates instead of just knowing they live on "Maple Street." The moment they move (restart), you're lost again.
This is exactly why custom networks exist.
Custom networks are where the real magic happens. When you create your own network, Docker gives you something incredible for free: automatic DNS resolution. Containers on the same custom network can find each other by name — no IP memorization needed.
docker network create my-neighborhood
That's it. You now have a brand new virtual street. Let's put some containers on it.
docker run -d --name backend --network my-neighborhood nginx
docker run -d --name database --network my-neighborhood -e POSTGRES_PASSWORD=secret postgres
Now let's jump into the backend container and resolve database by name (the official nginx image doesn't ship ping, so we'll use getent, which performs the same DNS lookup):
docker exec -it backend sh
getent hosts database
And it just... works. No IPs, no hardcoding, no drama. The backend container says "hey, where's database?" and Docker's internal DNS says "right here, buddy."
Going back to the neighborhood analogy — custom networks are like gated communities with a directory at the front desk. Every resident is registered by name. You walk in, ask for "database," and the concierge points you right to it.
Curious what's happening inside your network? This command is incredibly useful:
docker network inspect my-neighborhood
You'll see a JSON dump with all the containers connected, their IP addresses, MAC addresses — the whole picture. Bookmark this command. You'll use it constantly when debugging.
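For context, here's a heavily abbreviated sketch of what that JSON looks like — the IDs, subnet, and addresses below are made up, and yours will differ:

```json
[
    {
        "Name": "my-neighborhood",
        "Driver": "bridge",
        "IPAM": {
            "Config": [{ "Subnet": "172.18.0.0/16", "Gateway": "172.18.0.1" }]
        },
        "Containers": {
            "3f2a…": {
                "Name": "backend",
                "IPv4Address": "172.18.0.2/16",
                "MacAddress": "02:42:ac:12:00:02"
            },
            "9c1b…": {
                "Name": "database",
                "IPv4Address": "172.18.0.3/16"
            }
        }
    }
]
```

The Containers section is usually what you care about when debugging — it tells you at a glance who is actually on the network and at which address.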
Here's something not everyone realizes early on — a container can be part of more than one network at the same time.
Why would you want this? Think about our web app example again. The backend needs to talk to the database, but you don't want the database exposed to the frontend at all. So you create two networks:
docker network create frontend-net
docker network create backend-net
The frontend container goes on frontend-net. The database goes on backend-net. The backend goes on both, acting as the bridge between worlds.
docker run -d --name frontend --network frontend-net nginx
docker run -d --name database --network backend-net -e POSTGRES_PASSWORD=secret postgres
docker run -d --name backend --network backend-net my-backend-image
docker network connect frontend-net backend
Notice that last line — docker network connect. You can attach a running container to an additional network without restarting it. The backend is now on both networks and can talk to frontend and database, but frontend has absolutely no idea database even exists. Exactly how you'd want it in production.
This is the concept of network segmentation — keeping things isolated for both security and clarity.
host and none Networks — The Weird Cousins
You saw them in docker network ls. Let's address them quickly.
host Network
docker run --network host nginx
With host networking, the container shares the host machine's network stack directly. No isolation, no virtual network, no translation. Port 80 inside the container IS port 80 on your laptop/server. This is fast but risky — you lose the isolation that makes containers nice to work with. Use it only when you really need maximum performance or you're doing something low-level.
none Network
docker run --network none alpine
This completely disconnects the container from all networking. It can't talk to anything, and nothing can talk to it. Useful for running batch processing jobs that absolutely should not have internet access — like security-sensitive tasks.
Let me save you some pain. These are the things that trip people up the most.
This is the #1 rookie mistake. You write a docker run command, don't specify --network, and then wonder why your backend can't reach your database. They're both on the default bridge — but the default bridge doesn't do DNS by name, remember?
Fix: Always create a custom network and attach your containers to it.
docker network create app-net
docker run -d --name db --network app-net -e POSTGRES_PASSWORD=secret postgres
docker run -d --name app --network app-net my-app
You found the IP of a container via docker inspect, used it in your code, and everything worked. Then you restarted the container and suddenly nothing works. Container IPs are not guaranteed to be stable.
Fix: Always use container names (on custom networks). That's literally what DNS resolution is for.
When you do docker run -p 5432:5432 postgres, you're publishing a port to the host machine — so tools outside Docker (psql, a GUI client, your browser for web apps) can reach it. But containers talking to each other don't need published ports at all. They communicate directly over the internal Docker network.
A lot of beginners publish all ports "just in case." Don't. It's unnecessary and exposes things you probably don't want exposed.
Related to above — please don't run your database with --network host or publish its port publicly in production. Your database should only be reachable from within the Docker network, by the containers that actually need it.
Since most real apps use Docker Compose, here's how all of this looks in a compose.yml file:
services:
  frontend:
    image: nginx
    networks:
      - frontend-net
    ports:
      - "80:80"
  backend:
    image: my-backend
    networks:
      - frontend-net
      - backend-net
  database:
    image: postgres
    environment:
      POSTGRES_PASSWORD: secret
    networks:
      - backend-net

networks:
  frontend-net:
  backend-net:
Docker Compose automatically creates these networks and names containers using the service name. So inside backend, you can reach the database simply by using database as the hostname — Compose handles all the DNS magic.
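In practice, that means the backend's configuration can reference the service name directly. For example, a hypothetical Postgres connection string — the hostname and password match the compose file above, while the user and database name are assumptions (Postgres defaults):

```
DATABASE_URL=postgresql://postgres:secret@database:5432/postgres
```

Swap out hosts between environments by changing only this one value; the code itself never needs to know an IP.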
No IP addresses. No hardcoded config. Clean.
You might be wondering — okay, but how does Docker know to route ping database to the right container?
When you create a custom network, Docker runs an embedded DNS server, reachable at a fixed address inside each container on that network (127.0.0.11). Every time a container joins the network, Docker registers that container's name with this DNS server. When your container says "find database," the request hits that internal DNS, which looks up the name and returns the current IP.
It all happens invisibly, in milliseconds, without you doing anything. The key takeaway is this: this DNS only works on custom networks, not the default bridge. That's why custom networks are always the recommended approach.
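You can see the embedded DNS for yourself: inside a container on a custom network, /etc/resolv.conf points at it. Roughly — the exact options line varies by Docker version:

```
# /etc/resolv.conf inside a container on a custom network
nameserver 127.0.0.11
options ndots:0
```

On the default bridge, by contrast, resolv.conf is copied from the host — which is exactly why name lookup doesn't work there.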
Here's your cheat sheet — stick this somewhere you'll see it:
# List all networks
docker network ls
# Create a custom network
docker network create my-network
# Run a container on a specific network
docker run -d --name my-container --network my-network nginx
# Connect a running container to another network
docker network connect another-network my-container
# Disconnect a container from a network
docker network disconnect my-network my-container
# Inspect a network (see all containers, IPs, etc.)
docker network inspect my-network
# Remove a network (only works if no containers are using it)
docker network rm my-network
# Remove all unused networks at once
docker network prune
Here's a hands-on challenge to cement everything. Do this on your machine right now.
Goal: Set up two containers that can talk to each other by name, and verify that a third container on a different network cannot reach them.
Step 1: Create two networks.
docker network create street-a
docker network create street-b
Step 2: Start two containers on street-a.
docker run -d --name house-1 --network street-a alpine sleep 3600
docker run -d --name house-2 --network street-a alpine sleep 3600
Step 3: Start one container on street-b.
docker run -d --name house-3 --network street-b alpine sleep 3600
Step 4: From house-1, ping house-2 by name. It should work.
docker exec house-1 ping -c 3 house-2
Step 5: From house-1, try to ping house-3. It should fail.
docker exec house-1 ping -c 3 house-3
# ping: bad address 'house-3'
Bonus Challenge: Now connect house-1 to street-b as well, and verify it can reach house-3 after that.
docker network connect street-b house-1
docker exec house-1 ping -c 3 house-3
# This should now work!
Cleanup when you're done:
docker stop house-1 house-2 house-3
docker rm house-1 house-2 house-3
docker network rm street-a street-b
Docker networking sounds intimidating until it suddenly doesn't. Here's the mental model to hold onto:
Networks are virtual streets: containers on the same network can talk to each other; containers on different networks can't. Custom networks give you automatic DNS, so containers find each other by name — the default bridge doesn't. And published ports (-p) are for you accessing the container from outside; inter-container communication doesn't need them.
The next time you're setting up a multi-container app and something can't connect to something else, the first question to ask yourself is: "Are these two containers on the same network?"
Nine times out of ten, that's where the answer lives.