

Sarthak Varshney is a Docker Captain, 5x C# Corner MVP, and 2x Alibaba Cloud MVP, with over six years of hands-on experience in the IT industry, specializing in cloud computing, DevOps, and modern application infrastructure. He is an Author and Associate Consultant, known for working extensively with cloud platforms and container-based technologies in real-world environments.
So you've been using Docker Hub to pull images. You type docker pull nginx and boom — it just works. But here's a question: what happens when you build something of your own — say, your company's internal API service — and you don't want to push that to Docker Hub for the whole world to see?
That's exactly where a private Docker registry comes in.
In this article, we're going to set one up from scratch. We'll push images to it, pull from it, add basic authentication so random people can't just walk in, and finally wire it up for a small team. No fluff, just the real stuff.
Think of Docker Hub as Amazon for images. You go there, you "shop" for images (nginx, postgres, redis, whatever), and you download them. Docker Hub is a public registry — it hosts thousands of images and anyone can pull from it.
A private registry is basically your own personal Amazon warehouse. Only you (and your team) have the keys. The images live on your own server, they never leave your network, and you control who gets access.
The good news? Docker actually ships an official image called registry (version 2) that lets you spin up your own registry in literally one command. No fancy setup, no subscription fees — just Docker doing what Docker does best.
Let's start with the bare minimum, just to see it working:
docker run -d -p 5000:5000 --name my-registry registry:2
That's it. You now have a registry running on port 5000.
Let's break down what just happened:
-d — runs it in the background (detached mode)
-p 5000:5000 — maps port 5000 on your machine to port 5000 inside the container
--name my-registry — gives the container a friendly name
registry:2 — the official Docker registry image, version 2
Now let's test it with a real image. We'll grab alpine (a tiny Linux image), tag it to point at our local registry, and push it:
# Pull a test image
docker pull alpine
# Tag it to point at our local registry
docker tag alpine localhost:5000/myalpine:v1
# Push it to our private registry
docker push localhost:5000/myalpine:v1
If you see output like The push refers to repository [localhost:5000/myalpine] — congratulations, your private registry is alive!
Now let's prove the pull works too:
# Remove local copy first so we actually test pulling
docker rmi localhost:5000/myalpine:v1
# Pull it from your private registry
docker pull localhost:5000/myalpine:v1
Works like magic, right? The pattern is simple: instead of alpine (which means Docker Hub), you use localhost:5000/alpine which means your registry.
Here's a problem you'll hit pretty quickly: if you stop and remove the registry container, all your pushed images are gone. Poof. Because by default, data lives inside the container.
Fix this by mounting a local directory as a volume:
docker run -d \
-p 5000:5000 \
--name my-registry \
-v /opt/registry-data:/var/lib/registry \
registry:2
Now every image you push gets stored in /opt/registry-data on your host machine. Kill the container, bring it back up with the same command, and your images are still there. That's the difference between a toy and something actually usable.
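As a side note, the registry lays its data out in a predictable way on that volume: each pushed image gets a directory under docker/registry/v2/repositories inside the data directory. A tiny sketch of inspecting that from the host (list_repos is just an illustrative helper name, not a registry tool):

```shell
#!/bin/sh
# The registry (v2) stores each pushed image under
#   <data-dir>/docker/registry/v2/repositories/<image-name>
# so you can see what survived a restart straight from the host.
list_repos() {
  repo_root="$1/docker/registry/v2/repositories"
  [ -d "$repo_root" ] || { echo "no repositories yet"; return 0; }
  ls "$repo_root"
}

list_repos /opt/registry-data
```

Handy for a quick sanity check after bringing the container back up, without touching the API at all.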
Running a registry with zero authentication is fine on your laptop, but the moment you expose it to a network — even your home network — you want some kind of gate at the door.
Docker's registry supports htpasswd-based basic authentication. It's the same mechanism Apache web servers have used for decades. Not glamorous, but it works.
Step 1: Create the auth directory and credentials file
mkdir -p /opt/registry-auth
# Generate a htpasswd file with a user called "sarthak"
docker run --rm \
--entrypoint htpasswd \
httpd:2 \
-Bbn sarthak secretpassword123 > /opt/registry-auth/htpasswd
What's happening here? We're running the httpd (Apache) container just long enough to use its htpasswd utility to hash a password and save it to a file. The -B flag forces bcrypt hashing — important, because the registry's htpasswd backend only accepts bcrypt-format entries, so this flag is required, not just "more secure". -b lets us pass the password inline, and -n prints to stdout, which we redirect into the file.
Check what got created:
cat /opt/registry-auth/htpasswd
You'll see something like:
sarthak:$2y$05$Kh5L4FPkS...some hashed stuff...
Step 2: Run the registry with auth enabled
Stop and remove the old registry first:
docker stop my-registry && docker rm my-registry
Now start it fresh with auth:
docker run -d \
-p 5000:5000 \
--name my-registry \
-v /opt/registry-data:/var/lib/registry \
-v /opt/registry-auth:/auth \
-e REGISTRY_AUTH=htpasswd \
-e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
registry:2
Let's decode those environment variables:
REGISTRY_AUTH=htpasswd — tells the registry to use htpasswd for auth
REGISTRY_AUTH_HTPASSWD_REALM — just a display string, can be anything
REGISTRY_AUTH_HTPASSWD_PATH — path to the htpasswd file inside the container (which maps to /opt/registry-auth on your host)
Step 3: Try pushing without logging in (it should fail)
docker push localhost:5000/myalpine:v1
You'll get an error like no basic auth credentials.
Step 4: Login and push
docker login localhost:5000
# Enter username: sarthak
# Enter password: secretpassword123
Once logged in:
docker push localhost:5000/myalpine:v1
Now it works. The login info gets saved in ~/.docker/config.json so you don't have to type it every time.
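That saved login is worth a closer look: unless you've configured a credential helper, config.json stores an auths entry whose auth field is just base64 of user:password — encoded, not encrypted. A quick sketch of that same encoding (make_auth and decode_auth are hypothetical helper names, not Docker commands):

```shell
#!/bin/sh
# docker login saves credentials in ~/.docker/config.json like:
#   "auths": { "localhost:5000": { "auth": "<base64 of user:password>" } }
# (unless a credential helper is configured).

# Build the auth value the same way docker does:
make_auth() {
  printf '%s:%s' "$1" "$2" | base64
}

# Decode one back to see whose credentials are stored:
decode_auth() {
  printf '%s' "$1" | base64 -d
}
```

Because that value is only base64, anyone who can read config.json can recover the password — so keep the file permissions tight on shared machines.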
Here's where most people get confused. When you move from localhost:5000 to a real IP address — say your server is at 192.168.1.100 — Docker starts complaining about "insecure registry."
Why? Because Docker, by default, only allows HTTPS connections to registries. Localhost gets a special exception, but the moment you use an IP or hostname, it demands TLS.
You've got two options here:
Option A: Mark it as an insecure registry (quick, not for production)
On every machine that needs to connect to your registry, edit (or create) /etc/docker/daemon.json:
{
"insecure-registries": ["192.168.1.100:5000"]
}
Then restart Docker:
sudo systemctl restart docker
Now Docker will happily talk to your registry over plain HTTP.
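One practical caution: if daemon.json already exists with other settings, a blind overwrite wipes them. A minimal sketch of writing the file to a scratch path first so you can inspect it before installing it (write_daemon_json is an illustrative helper, and the IP is the example address from above):

```shell
#!/bin/sh
# Write an insecure-registries entry to a daemon.json file.
# CAUTION: "cat >" overwrites the target — if /etc/docker/daemon.json
# already has other settings, merge this key into the existing JSON
# by hand instead of replacing the file.
write_daemon_json() {
  cat > "$1" <<'EOF'
{
  "insecure-registries": ["192.168.1.100:5000"]
}
EOF
}

# Write to a scratch path, eyeball it, then copy to /etc/docker/daemon.json:
write_daemon_json ./daemon.json
cat ./daemon.json
```

Remember that Docker still needs a restart afterwards before the setting takes effect.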
Option B: Set up TLS (the proper way)
If you have a domain name and can get an SSL cert (Let's Encrypt works great here), you can configure the registry to serve HTTPS. This is the right approach for anything beyond a home lab.
docker run -d \
-p 443:5000 \
--name my-registry \
-v /opt/registry-data:/var/lib/registry \
-v /opt/registry-auth:/auth \
-v /opt/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
-e REGISTRY_AUTH=htpasswd \
-e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
registry:2
With this setup, your team connects to https://registry.yourcompany.com — no insecure-registry hacks needed, no browser warnings, proper certificates everywhere.
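If you want TLS in a home lab without a public domain, a self-signed certificate is a reasonable middle ground — each client then trusts it by placing the cert at /etc/docker/certs.d/<registry-address>/ca.crt. A sketch, assuming OpenSSL 1.1.1 or newer (for the -addext flag) and reusing the example IP from above:

```shell
#!/bin/sh
# Self-signed certificate for a lab registry (NOT a substitute for a
# real CA-issued cert in production). The subjectAltName must match
# the address clients will actually use to reach the registry.
mkdir -p ./certs
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout ./certs/domain.key \
  -x509 -days 365 \
  -out ./certs/domain.crt \
  -subj "/CN=192.168.1.100" \
  -addext "subjectAltName=IP:192.168.1.100"

# Each client trusts it by copying the cert to:
#   /etc/docker/certs.d/192.168.1.100:5000/ca.crt
openssl x509 -in ./certs/domain.crt -noout -subject
```

Mount ./certs into the container at /certs and the docker run command above works unchanged.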
Let's say you're working with 4 developers. You've all been emailing each other Docker images as tar files (I've seen this happen — it's painful). Here's how a private registry solves that.
Scenario: Your team is building a Node.js app called crm-service. You want everyone to be able to pull the latest image without copying files around.
Setup on your server (192.168.1.100):
# Add two more users to the htpasswd file
docker run --rm \
--entrypoint htpasswd \
httpd:2 \
-Bbn priya password456 >> /opt/registry-auth/htpasswd
docker run --rm \
--entrypoint htpasswd \
httpd:2 \
-Bbn amit password789 >> /opt/registry-auth/htpasswd
Note the >> instead of > — that appends to the file instead of overwriting it.
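Since a stray > can wipe the whole file, it's also worth checking whether a user already has an entry before appending — a duplicate line just makes later edits confusing. A small guard sketch (user_exists is an illustrative helper, not a standard tool):

```shell
#!/bin/sh
# Check whether a user already has an entry in an htpasswd file
# (lines look like "name:hash", so match on the leading "name:").
user_exists() {
  grep -q "^$1:" "$2"
}

# Usage sketch:
#   user_exists priya /opt/registry-auth/htpasswd && echo "already there"
```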
On the developer's machine (you, building the image):
# Build your app image
docker build -t crm-service:1.0 .
# Tag it for your private registry
docker tag crm-service:1.0 192.168.1.100:5000/crm-service:1.0
# Push
docker login 192.168.1.100:5000
docker push 192.168.1.100:5000/crm-service:1.0
On Priya's machine (pulling and running):
docker login 192.168.1.100:5000
docker pull 192.168.1.100:5000/crm-service:1.0
docker run -d -p 3000:3000 192.168.1.100:5000/crm-service:1.0
No file transfers. No USB drives. No "did you get my email with the tar file?" Just push, pull, done.
Mistake 1: Forgetting the insecure-registry config
You set up your registry, everything works on the server itself, but other machines can't push or pull. You'll see something like: Get "https://192.168.1.100:5000/v2/": http: server gave HTTP response to HTTPS client
Solution: Add the insecure-registry config on every client machine, not just the server.
Mistake 2: The volume isn't mounted, images disappear
After a container restart, all your images are gone. Classic mistake — you forgot the -v flag when starting the registry.
Solution: Always mount /var/lib/registry to a persistent path on your host.
Mistake 3: Trying to add users to htpasswd with > instead of >>
# This OVERWRITES the file — deletes all existing users!
htpasswd ... > /opt/registry-auth/htpasswd
# This APPENDS — correct for adding new users
htpasswd ... >> /opt/registry-auth/htpasswd
Mistake 4: Not restarting Docker after changing daemon.json
You edit /etc/docker/daemon.json, try to push, still getting errors. You forgot to run sudo systemctl restart docker. The config file doesn't hot-reload — Docker needs a restart.
Mistake 5: Wrong image tag format
docker push myapp:latest — this tries to push to Docker Hub.
docker push 192.168.1.100:5000/myapp:latest — this pushes to your private registry.
The registry address must be part of the image name. If it's not there, Docker assumes Docker Hub.
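The rule Docker applies is roughly this: the part before the first / counts as a registry address only if it contains a dot or a colon, or is exactly localhost — otherwise the whole name refers to Docker Hub. A rough sketch of that rule (the real parsing lives inside Docker itself; this is just to build intuition):

```shell
#!/bin/sh
# Where does "docker push NAME" send an image? The part before the
# first "/" is treated as a registry address only if it contains a
# "." or ":" or is exactly "localhost"; otherwise Docker assumes Hub.
target_registry() {
  first="${1%%/*}"
  case "$first" in
    "$1") echo "Docker Hub" ;;             # no "/" in the name at all
    *.*|*:*|localhost) echo "$first" ;;    # looks like a registry address
    *) echo "Docker Hub" ;;                # e.g. library/nginx
  esac
}

target_registry myapp:latest                      # → Docker Hub
target_registry 192.168.1.100:5000/myapp:latest   # → 192.168.1.100:5000
```

This is why "myregistry/myapp" alone isn't enough — without a dot, colon, or port, Docker reads "myregistry" as a Hub username.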
Want to see what images are stored? The registry exposes a simple REST API:
# List all repositories (images) in the registry
curl -u sarthak:secretpassword123 http://localhost:5000/v2/_catalog
# List tags for a specific image
curl -u sarthak:secretpassword123 http://localhost:5000/v2/myalpine/tags/list
Output will look like:
{"repositories":["myalpine","crm-service"]}
{"name":"myalpine","tags":["v1","v2","latest"]}
This is handy when you want to quickly see what's available without pulling anything.
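If jq isn't installed, you can still pull the names out of that response with classic text tools. A sketch that assumes the compact single-line JSON shown above (with jq you'd just run jq -r '.repositories[]'):

```shell
#!/bin/sh
# Extract repository names from a compact one-line /v2/_catalog
# response: strip everything up to "[", everything after "]",
# drop the quotes, and split on commas.
list_catalog() {
  sed 's/.*\[//; s/\].*//; s/"//g' | tr ',' '\n'
}

echo '{"repositories":["myalpine","crm-service"]}' | list_catalog
# → myalpine
#   crm-service
```

Fragile by design — it breaks on pretty-printed or paginated responses — but fine for a quick look at a small registry.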
Running long docker run commands is fine for learning, but for something you want to keep running on a server, a docker-compose.yml is much cleaner:
version: '3.8'

services:
  registry:
    image: registry:2
    container_name: private-registry
    restart: always
    ports:
      - "5000:5000"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: "Private Registry"
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
    volumes:
      - /opt/registry-data:/var/lib/registry
      - /opt/registry-auth:/auth
Start it with:
docker compose up -d
Now it auto-restarts if the server reboots (restart: always), all config is in one file, and your teammates can read exactly how the registry is set up just by looking at this file.
The built-in registry has no web interface — it's all API. If your team wants a GUI to browse images, there are open source frontends like Joxit Docker Registry UI that you can run alongside your registry. But that's a story for another article. For now, the curl commands and the docker pull/push workflow are all you need.
Alright, you've read enough. Now let's see if you can put it together. Here's your challenge:
Goal: Set up a private registry with basic auth, push two different images, and verify from the API that both are there.
Steps to follow:
1. Run a registry container with its data persisted in /tmp/my-registry-data
2. Create a user called devuser with password docker@123
3. Restart the registry with htpasswd auth enabled
4. Pull nginx:alpine and redis:alpine from Docker Hub
5. Tag them as localhost:5000/webserver:v1 and localhost:5000/cache:v1
6. Log in and push both images
7. Query the /v2/_catalog API endpoint and confirm both appear in the output
If you can do all 7 steps without peeking at a guide, you've genuinely understood private registries. That's not a small thing — a lot of teams struggle with this in real projects.
A private Docker registry isn't some advanced, enterprise-only concept. It's just a Docker container running on a server, exposing an API, with a volume attached to it. Once you see it that way, it stops being scary.
The real value shows up when you're working with a team. No more "I'll send you the image over Slack" or emailing tar files. You push once, everyone pulls. You tag versions, everyone pulls a specific version. It's how professional teams work, and now you know how to set it up yourself.
Next logical step from here? Setting up a CI/CD pipeline (Jenkins, GitHub Actions, GitLab CI — whatever you prefer) that automatically builds and pushes to your private registry every time code is merged. But that's a whole separate adventure.
For now — go spin up that registry.