Docker Registry and Docker Hub Explained — Push, Pull and Publish Images
- Images are addressed as registry-host/namespace/repository:tag
- docker push uploads image layers to the registry
- docker pull downloads image layers from the registry
- Docker Hub is the default registry (docker.io) baked into the Docker client
- Registry: the storage server (Docker Hub, ECR, self-hosted)
- Repository: a named collection of related images (e.g., nginx, my-app)
- Tag: a label identifying a specific version (v1.0, latest, sha256:abc...)
- Namespace: the account or organization that owns the repository
Production Debug Guide
From rate-limit errors to authentication failures — systematic debugging paths.

- docker push fails with 'denied: requested access to the resource is denied'.
  - docker info | grep Username
  - docker images | grep <image>
- docker pull fails with 'toomanyrequests' rate limit error.
  - curl -s -I https://registry-1.docker.io/v2/library/nginx/manifests/latest | grep ratelimit
  - docker info | grep Username
- Push to a self-hosted registry fails with an HTTP/HTTPS error.
  - cat /etc/docker/daemon.json
  - curl -v http://<registry-host>:5000/v2/
- Deployed container is running a different version than expected.
  - docker inspect <container> --format='{{.Image}}'
  - docker inspect <container> --format='{{.Config.Image}}'
- Secrets discovered in a pushed Docker image.
  - docker history --no-trunc <image> | grep -i 'secret\|password\|key\|token'
  - docker save <image> | tar -xO 2>/dev/null | grep -c 'secret-value'
Docker images are only useful if they can be distributed. An image built on your laptop must reach your CI pipeline, your staging environment, and your production cluster — byte-identical every time. The Docker registry is the distribution mechanism that makes this possible.
A registry is a server that stores image layers and serves them on demand. Docker Hub is the default public registry, but teams also use AWS ECR, Google Artifact Registry, GitHub Container Registry, and self-hosted registries. The choice depends on cost, security requirements, and operational overhead.
Common misconceptions: the :latest tag means 'newest version' (it does not — it is just a tag), Docker Hub is required to use Docker (it is not — you can build and run images locally without any registry), and deleting a tag removes the image from Docker Hub (it does not — the digest remains accessible until garbage collected).
What Is a Docker Registry and Why Does It Exist?
A Docker registry is a server-side application that stores and distributes Docker images. Think of it as a Git repository, but instead of source code, it holds container images. Just as Git has GitHub as its popular hosting platform, Docker has Docker Hub as its flagship registry.
Every Docker image you build lives only on your local machine until you push it to a registry. The registry gives it a permanent address — a URL — that any authorised machine can use to pull that exact image, byte for byte.
Registries are built around two key concepts. First, repositories: a named collection of related images — for example, all versions of your 'web-api' app live in one repository. Second, tags: labels that identify a specific version inside that repository, like 'v1.0', 'v2.3' or simply 'latest'. Together they form the full image address: registry-host/username/repository-name:tag.
Docker itself ships with a default registry address baked in — docker.io — which points to Docker Hub. So when you run 'docker pull nginx', Docker silently expands that to 'docker.io/library/nginx:latest' and fetches it from Docker Hub. That's why it just works with no extra configuration on a fresh machine.
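To make that expansion rule concrete, here is a rough sketch of the normalization in plain POSIX shell. This is an illustration of the naming rules only, not Docker's actual code, and the helper name expand_image_ref is invented:

```shell
# Rough sketch of Docker's image-name normalization (illustration only).
expand_image_ref() {
  ref="$1"
  first="${ref%%/*}"
  # The leading component counts as a registry host only if it
  # contains a dot or a colon, or is exactly 'localhost'.
  case "$first" in
    *.*|*:*|localhost) ;; # explicit registry host: leave as-is
    *)
      # No host given: default to docker.io, and single-component
      # names also get the implicit 'library' namespace.
      case "$ref" in
        */*) ref="docker.io/$ref" ;;
        *)   ref="docker.io/library/$ref" ;;
      esac
      ;;
  esac
  # Append :latest when the last segment carries no tag or digest.
  last="${ref##*/}"
  case "$last" in
    *:*|*@*) ;; # tag or digest already present
    *) ref="$ref:latest" ;;
  esac
  printf '%s\n' "$ref"
}

expand_image_ref nginx                       # docker.io/library/nginx:latest
expand_image_ref bitnami/postgresql:15.4.0   # docker.io/bitnami/postgresql:15.4.0
expand_image_ref localhost:5000/hello-web    # localhost:5000/hello-web:latest
```

The host check in the first case statement is why localhost:5000/app pushes to your local registry while myteam/app goes to Docker Hub: only a dot, a colon, or the literal word localhost marks the first component as a registry address.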
Image digest vs tag: A tag is a mutable human-readable label — the maintainer can move the v1.0 tag from one image to another. A digest is an immutable SHA256 hash of the image manifest — it always points to the exact same image content. For production reproducibility, pin to digests: FROM node:20-alpine@sha256:abc123... This guarantees you always get the exact same image, even if the tag is moved.
```shell
# Docker image names follow a predictable pattern:
# [registry-host]/[namespace]/[repository]:[tag]
#
# Let's break down some real examples:

# 1. Official Docker Hub image (short form — Docker fills in the rest automatically)
docker pull nginx
# Docker expands this to: docker.io/library/nginx:latest

# 2. A community image on Docker Hub — username/repository:tag
docker pull bitnami/postgresql:15.4.0
# registry-host = docker.io (default)
# namespace     = bitnami
# repository    = postgresql
# tag           = 15.4.0

# 3. An image on a private registry (e.g. your company's internal registry)
docker pull registry.mycompany.com/backend/payments-service:v3.1
# registry-host = registry.mycompany.com
# namespace     = backend
# repository    = payments-service
# tag           = v3.1

# 4. Pin to a digest for maximum reproducibility
docker pull nginx:1.25.3@sha256:3923f8e2f40f8398b8fec680b9e80c09f2e180f3e0a09c0b3b0fd8e3c0f8e9a2
# The @sha256:... part guarantees you get the exact same image bytes
# even if someone moves the :1.25.3 tag to a different image later

# 5. Check what images you already have locally
docker images
# This shows your local image cache — images already pulled from a registry
```
```
latest: Pulling from library/nginx
a2abf6c4d29d: Pull complete
a9edb18cadd1: Pull complete
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest

REPOSITORY                        TAG      IMAGE ID       CREATED       SIZE
nginx                             latest   a6bd71f48f68   2 weeks ago   187MB
bitnami/postgresql                15.4.0   c3f19a7941e2   3 weeks ago   279MB
registry.mycompany.com/payments   v3.1     9b72f9d12a45   1 month ago   312MB
```
- Tags are mutable — the maintainer can move the v1.0 tag to a different image.
- Digests are immutable — SHA256 hash of the image content. Same hash always means same content.
- Pin to digests in production: FROM node:20-alpine@sha256:abc123... guarantees reproducibility.
- Tags are for humans. Digests are for machines. Use both.
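The "same hash always means same content" property is easy to see with any sha256 tool. The files below are stand-ins for image manifests, not real Docker artifacts:

```shell
# Content addressing in miniature: identical bytes hash identically,
# and any change produces a different digest. (Stand-in files, not real manifests.)
printf 'FROM node:20-alpine\n' > manifest-a
printf 'FROM node:20-alpine\n' > manifest-b
printf 'FROM node:21-alpine\n' > manifest-c

hash_a=$(sha256sum manifest-a | cut -d' ' -f1)
hash_b=$(sha256sum manifest-b | cut -d' ' -f1)
hash_c=$(sha256sum manifest-c | cut -d' ' -f1)

[ "$hash_a" = "$hash_b" ] && echo "a and b: same content, same digest"
[ "$hash_a" != "$hash_c" ] && echo "a and c: different content, different digest"
```

This is exactly why a digest pin cannot be moved: changing even one byte of the image manifest changes the sha256, so the old digest simply stops matching.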
Docker Hub — Creating an Account and Pushing Your First Image
Docker Hub at hub.docker.com is the world's largest public container registry, hosting over 15 million images. It is free for unlimited public repositories, and the free tier also includes one private repository. For most beginners and open-source projects, that's plenty.
The workflow is always the same three steps: build an image locally, tag it with your Docker Hub username and repository name, then push it. Pulling is even simpler — just docker pull with the full image address.
Before you can push anything, Docker needs to know who you are. You authenticate once per machine with 'docker login', which stores an encrypted token on your computer. From that point, every push and pull to your private repos works automatically.
The tagging step is critical and confuses many beginners. When you build an image, you can name it anything locally. But to push to Docker Hub it must follow the exact format: yourusername/repositoryname:tag. Docker uses that username prefix to know which Docker Hub account to push the image to. If the prefix doesn't match your logged-in account, the push is rejected.
Layer deduplication during push: Docker only uploads layers that do not already exist on the registry. If you push a new version of your app that shares the same base image and dependency layers as a previous version, only the changed layers are uploaded. This is why the first push of a new image is slow (all layers uploaded) but subsequent pushes with code-only changes are fast (only the top layer uploaded).
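The dedup decision itself is just a set difference over layer digests. A toy sketch of that logic with invented digests, not a real registry interaction:

```shell
# Toy model of push-time deduplication: upload only the layers whose
# digests the registry does not already hold. All digests are invented.
printf '%s\n' sha256:base000 sha256:deps111 sha256:code222 > local-layers.txt
printf '%s\n' sha256:base000 sha256:deps111                > registry-layers.txt

# Layers to upload = local layers minus layers the registry already has
grep -Fvx -f registry-layers.txt local-layers.txt
# Only the changed top layer (sha256:code222) would need uploading;
# the base and dependency layers print 'Layer already exists' on a real push
```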
Authentication storage: docker login stores credentials in ~/.docker/config.json. On Linux, this is a plaintext file by default (a security risk). Use a credential helper (docker-credential-desktop, docker-credential-pass) to encrypt the credentials. On macOS and Windows, Docker Desktop uses the OS keychain automatically.
```shell
# ── STEP 1: Write a simple Dockerfile for our demo app ──────────────────────
# Create a file called Dockerfile in an empty folder with these contents:
cat > Dockerfile << 'EOF'
# Start from an official, minimal Node.js base image
FROM node:20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy our simple server file into the container
COPY server.js .
# Tell Docker which port this app listens on (documentation only)
EXPOSE 3000
# The command that runs when the container starts
CMD ["node", "server.js"]
EOF

# Create the tiny Node.js server it references
cat > server.js << 'EOF'
const http = require('http');
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Docker Hub! Version 1.0\n');
});
server.listen(3000, () => console.log('Server running on port 3000'));
EOF

# ── STEP 2: Build the image locally ─────────────────────────────────────────
# Replace 'yourDockerHubUsername' with your actual Docker Hub username
docker build --tag yourDockerHubUsername/hello-web-server:v1.0 .
# --tag gives our image its Docker Hub address from the very start
# The dot (.) means 'use the Dockerfile in the current directory'

# ── STEP 3: Log in to Docker Hub ────────────────────────────────────────────
docker login
# Docker will prompt for your username and password
# After success it caches a token in ~/.docker/config.json

# ── STEP 4: Push the image to Docker Hub ────────────────────────────────────
docker push yourDockerHubUsername/hello-web-server:v1.0
# Docker uploads each layer of the image separately
# Layers that already exist on Docker Hub are skipped (they show 'Layer already exists')

# ── STEP 5: Pull it back down on any machine to verify it worked ────────────
docker pull yourDockerHubUsername/hello-web-server:v1.0

# ── STEP 6: Run it to confirm everything works ──────────────────────────────
docker run --publish 3000:3000 yourDockerHubUsername/hello-web-server:v1.0
# Visit http://localhost:3000 in your browser — you should see the Hello message
```
```
[+] Building 12.3s (8/8) FINISHED
 => [1/3] FROM node:20-alpine
 => [2/3] WORKDIR /app
 => [3/3] COPY server.js .
 => exporting to image
Successfully tagged yourDockerHubUsername/hello-web-server:v1.0

── docker login output ──
Username: yourDockerHubUsername
Password: ****************
Login Succeeded

── docker push output ──
The push refers to repository [docker.io/yourDockerHubUsername/hello-web-server]
5f70bf18a086: Pushed
a3b179341f8d: Pushed
v1.0: digest: sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 size: 1570

── docker run output ──
Server running on port 3000
```
- Docker Hub stores images by digest, not by tag. Deleting a tag removes the pointer, not the content.
- Anyone who pulled the image before deletion still has it locally.
- The digest URL remains accessible until Docker Hub's garbage collection runs (timing is not guaranteed).
- For true deletion, contact Docker Hub support or use a private repository with retention policies.
Public vs Private Repositories — and When to Use a Self-Hosted Registry
Docker Hub public repositories are visible to the entire internet. Anyone can pull your image without logging in, which is perfect for open-source projects and public tools. Private repositories require authentication before anyone can pull — essential for proprietary application code.
Docker Hub's free tier gives you unlimited public repos but only one private repo. If your team needs multiple private repos, you either pay for Docker Hub Pro, or you run your own registry. Running your own gives you full control, no pull-rate limits, and keeps images inside your network for security compliance.
Docker ships a lightweight official registry image (called simply 'registry') that you can run anywhere with a single command. For production, teams use managed options like AWS Elastic Container Registry (ECR), Google Artifact Registry, or GitHub Container Registry — all of which integrate directly with their respective cloud platforms and CI/CD pipelines.
The choice comes down to three factors: cost, security requirements and operational overhead. Public open-source project? Docker Hub public repo, free, zero effort. Startup with a few private services? Docker Hub paid plan. Enterprise with compliance rules? Self-hosted or cloud-native registry inside your own infrastructure.
Registry selection trade-offs:
- Docker Hub: largest image library, free for public, rate-limited, no SLA on free tier
- AWS ECR: deep IAM integration, no rate limits, pay per GB, AWS-only
- Google Artifact Registry: multi-format (Docker, npm, Maven), GCP-native
- GitHub Container Registry: tied to GitHub repos, free for public, OCI-compliant
- Self-hosted (registry:2): full control, no rate limits, you manage availability and TLS
- Harbor: enterprise self-hosted with vulnerability scanning, RBAC, image signing
```shell
# ── Run your own private Docker registry locally in 60 seconds ──────────────
# This uses Docker's official 'registry' image — it IS a registry running inside Docker

# Start the registry container on port 5000
docker run \
  --detach \
  --publish 5000:5000 \
  --name my-private-registry \
  --restart always \
  --volume registry-image-data:/var/lib/registry \
  registry:2
# --detach = run in background
# --publish 5000:5000 = expose registry on localhost port 5000
# --name = friendly name so we can reference it easily
# --restart always = auto-restart if the machine reboots
# --volume = persist images to a named Docker volume (survive container restarts)

# ── Now push an image to YOUR local registry ────────────────────────────────
# First, tag an existing local image with the local registry address as the prefix
docker tag yourDockerHubUsername/hello-web-server:v1.0 \
  localhost:5000/hello-web-server:v1.0
# 'localhost:5000' IS the registry address — Docker reads it and knows where to push

# Push to the local registry (no login needed for localhost)
docker push localhost:5000/hello-web-server:v1.0

# ── Query the registry's API to see what's stored ───────────────────────────
curl http://localhost:5000/v2/_catalog
# Returns a JSON list of all repositories stored in your local registry
curl http://localhost:5000/v2/hello-web-server/tags/list
# Returns all tags for the hello-web-server repository

# ── Pull from your local registry on the same machine (or same network) ─────
docker pull localhost:5000/hello-web-server:v1.0
```
```
Unable to find image 'registry:2' locally
2: Pulling from library/registry
Status: Downloaded newer image for registry:2
a3ed95caeb02e3b4f9b4b2a3b4c7d9e1

── docker push to local registry ──
The push refers to repository [localhost:5000/hello-web-server]
5f70bf18a086: Pushed
a3b179341f8d: Pushed
v1.0: digest: sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 size: 1570

── curl catalog output ──
{"repositories":["hello-web-server"]}

── curl tags output ──
{"name":"hello-web-server","tags":["v1.0"]}
```
- Air-gapped environments with no internet access — self-hosted is the only option.
- Compliance requirements that mandate data stays on-premises.
- High-volume teams that would exceed cloud registry pricing at scale.
- Need for custom authentication integration (LDAP, Active Directory).
- Trade-off: self-hosted means you manage availability, backups, TLS, and upgrades.
How Docker Pull-Rate Limits Work — and How to Stay Under the Radar
Since November 2020, Docker Hub enforces pull-rate limits to protect its infrastructure. Anonymous users (not logged in) get 100 pulls per 6 hours, tracked by IP address. Authenticated free-tier users get 200 pulls per 6 hours. This matters enormously in CI/CD pipelines where every build might pull a base image.
A shared CI runner with dozens of engineers behind a single corporate IP can hit 100 pulls shockingly fast and start seeing 'toomanyrequests' errors mid-build — bringing the entire pipeline down.
The fix has two parts. First, always authenticate your CI runners with 'docker login' using a Docker Hub account — even a free one doubles your limit. Second, use a pull-through cache: a local registry that sits in front of Docker Hub and serves cached copies of images your team has already pulled. The runners pull from the local cache; only cache misses go to Docker Hub.
Cloud registries like AWS ECR Public Gallery have no pull-rate limits for public images, which is why many teams mirror critical base images (like node, python, ubuntu) there and reference those mirrors in their Dockerfiles instead of pulling directly from Docker Hub.
Rate limit internals: Docker Hub tracks pulls by IP address using response headers. The RateLimit-Limit header shows the total pulls allowed (e.g., 100). The RateLimit-Remaining header shows how many pulls are left in the current window. The window is 21600 seconds (6 hours). When remaining hits 0, all pulls from that IP are rejected until the window resets.
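Because the limit arrives as a response header, a CI script only needs a little string surgery to act on it. A sketch parsing a sample header value (the sample text and the alert threshold are illustrative):

```shell
# Parse 'ratelimit-remaining: 76;w=21600' into a bare number a script can test.
header='ratelimit-remaining: 76;w=21600'   # sample value; in CI this comes from curl -sI
remaining=${header#*: }     # drop the 'ratelimit-remaining: ' prefix
remaining=${remaining%%;*}  # drop ';w=21600' (the window length in seconds)
echo "pulls remaining: $remaining"

# Alert when the window is nearly exhausted (threshold is an example)
if [ "$remaining" -lt 20 ]; then
  echo "WARNING: close to the Docker Hub rate limit"
fi
```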
Impact on Kubernetes clusters: Kubernetes nodes pull images independently. A 10-node cluster pulling the same base image for a DaemonSet consumes 10 pulls, not 1. And if each node runs 20 pods that each pull 2 images, that is 40 pulls per node, 400 in total across the cluster. Without a pull-through cache or authenticated pulls, a single rollout like this blows through the anonymous rate limit several times over.
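The cluster arithmetic is worth scripting once so capacity reviews don't redo it by hand. A sketch using the node, pod, and image counts from the example above:

```shell
# Back-of-envelope pull budget for the 10-node cluster example.
nodes=10
pods_per_node=20
images_per_pod=2
anon_limit=100   # anonymous pulls per 6 hours per IP

pulls_per_node=$((pods_per_node * images_per_pod))
total_pulls=$((pulls_per_node * nodes))

echo "pulls per node: $pulls_per_node"
echo "total pulls:    $total_pulls"
# Number of 6-hour windows one NAT'd IP would need to absorb this anonymously
echo "windows needed: $(( (total_pulls + anon_limit - 1) / anon_limit ))"
```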
```shell
# ── Check your current Docker Hub rate-limit status ─────────────────────────
# Pull a temporary token (anonymous request mirrors what an unauthenticated runner sees)
RATE_LIMIT_TOKEN=$(curl --silent \
  "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
  | python3 -c "import sys,json; print(json.load(sys.stdin)['token'])")

# Use that token to hit Docker Hub and read the rate-limit headers
curl --silent --head \
  --header "Authorization: Bearer ${RATE_LIMIT_TOKEN}" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest \
  | grep --ignore-case ratelimit
# Look for RateLimit-Limit and RateLimit-Remaining in the response headers

# ── Configure a pull-through cache registry ─────────────────────────────────
# Add this to /etc/docker/daemon.json on EACH CI runner machine:
# This tells Docker: 'before going to docker.io, check our cache first'
cat > /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["http://registry-cache.internal.mycompany.com:5000"]
}
EOF

# Restart Docker to pick up the new config
sudo systemctl restart docker

# ── Set up the pull-through cache registry itself (run once on a central server) ──
docker run \
  --detach \
  --publish 5000:5000 \
  --name docker-hub-pull-cache \
  --restart always \
  --volume pull-cache-data:/var/lib/registry \
  --env REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2
# REGISTRY_PROXY_REMOTEURL tells the registry to act as a transparent proxy/cache
# for Docker Hub. First pull fetches from Docker Hub; every subsequent pull is served
# from local cache — no rate-limit hit

# ── Verify the daemon picked up the mirror ──────────────────────────────────
docker info | grep -A3 "Registry Mirrors"
```
```
ratelimit-limit: 100;w=21600
ratelimit-remaining: 76;w=21600
# 76 pulls remaining out of 100 in the current 6-hour window

── After configuring the mirror ──
docker info output:
Registry Mirrors:
  http://registry-cache.internal.mycompany.com:5000/
# Docker will now check the cache before hitting Docker Hub
```
- Docker Hub tracks anonymous pulls by IP address. All traffic from a NAT gateway counts as one IP.
- Authenticated pulls are tracked by user account, not just IP. This provides a separate quota.
- Even a free Docker Hub account doubles the limit from 100 to 200 pulls per 6 hours.
- For teams behind NAT, the per-IP tracking is the bottleneck — authentication partially mitigates this.
| Feature / Aspect | Docker Hub (Free) | Self-Hosted Registry (registry:2) | Cloud Registry (e.g. AWS ECR) |
|---|---|---|---|
| Setup time | Zero — sign up and go | ~5 minutes with Docker installed | 10–20 min (cloud account + CLI config) |
| Cost | Free for public repos, 1 free private repo | Free software, you pay for the server | Pay per GB stored + data transfer |
| Private repositories | 1 free, paid plans for more | Unlimited | Unlimited |
| Pull-rate limits | 100/6hr anon, 200/6hr free auth | None — it's yours | None for public ECR Gallery |
| Authentication | Docker Hub account | Optional basic auth or token auth | IAM roles / service accounts |
| Image scanning | Paid plans only | Manual / third-party tools | Built-in with paid tier |
| CI/CD integration | Native with most CI tools | Manual configuration | Deep integration with cloud CI tools |
| Best for | Open-source projects, learning | Air-gapped / compliance environments | Teams already using a cloud provider |
🎯 Key Takeaways
- A Docker registry is the storage server for images; Docker Hub is simply the most popular hosted registry — the two terms are not interchangeable.
- Image addresses encode everything: registry-host/namespace/repository:tag — Docker Hub is the default host, so you don't see 'docker.io' unless you look for it.
- The ':latest' tag is a convention, not a guarantee — always pin production images to a specific version tag or SHA digest to prevent surprise breakage.
- Docker Hub pull-rate limits can silently kill CI/CD pipelines — authenticate runners and consider a pull-through cache registry before you're under deadline pressure.
- Deleting a tag from Docker Hub does not delete the image content — the digest remains accessible. If secrets are exposed, rotate credentials immediately.
⚠ Common Mistakes to Avoid
- Relying on ':latest' in production: tags are mutable, so the image behind 'latest' can change underneath you. Pin a version tag or a digest instead.
- Baking secrets into image layers: deleting the tag does not delete the content. Rotate exposed credentials immediately and rebuild.
- Running CI pulls anonymously: a shared NAT IP exhausts the 100-pull window fast. Authenticate runners and add a pull-through cache.
- Tagging an image without your Docker Hub username prefix: the push is rejected when the namespace does not match your logged-in account.
Interview Questions on This Topic
- Q: What is the difference between a Docker image, a Docker repository and a Docker registry? Can you describe how they relate to each other?
- Q: Your CI/CD pipeline is failing with 'toomanyrequests: You have reached your pull rate limit' errors on Docker Hub. Walk me through how you'd diagnose and permanently fix this without upgrading to a paid Docker Hub plan.
- Q: If a developer accidentally pushes a Docker image containing a hardcoded AWS secret key to a public Docker Hub repository and then immediately deletes the tag, is the secret safe? Why or why not — and what should they do?
- Q: What is the difference between a Docker image tag and a Docker image digest? When would you use each in production?
- Q: How does a pull-through cache registry work? Draw the architecture and explain how it mitigates Docker Hub rate limits.
- Q: You need to set up a private Docker registry for a team of 50 engineers. Compare Docker Hub paid plans, AWS ECR, and self-hosted Harbor. Which would you choose and why?
Frequently Asked Questions
What is the difference between Docker Hub and a Docker Registry?
A Docker Registry is any server that stores and serves Docker images — it's a generic term for the software and infrastructure. Docker Hub is a specific, hosted registry service run by Docker Inc. at hub.docker.com. Docker Hub is a registry, but not every registry is Docker Hub. You can run your own private registry using the official 'registry:2' Docker image without touching Docker Hub at all.
Is Docker Hub free to use?
Docker Hub is free for unlimited public repositories. The free tier also includes one private repository. Pull rates for anonymous users are capped at 100 pulls per 6 hours per IP address, and authenticated free users get 200 pulls per 6 hours. Paid plans (Pro and Team) remove private repo limits and increase pull-rate allowances significantly.
Do I need Docker Hub to use Docker?
No. Docker Hub is a convenience, not a requirement. You can build and run Docker images entirely on your local machine without ever touching a registry. You only need a registry when you want to share images between machines or team members. You can also use alternative registries like GitHub Container Registry, AWS ECR, Google Artifact Registry, or self-host the official 'registry:2' image.
What happens if I push a Docker image with secrets to a public repository?
The secrets are permanently exposed. Even if you delete the tag immediately, the image digest remains accessible via the Docker Hub API. Anyone who pulled the image before deletion still has it locally. docker history and layer inspection tools can extract secrets from image layers. The only fix is to immediately rotate the exposed credentials and rebuild the image with .dockerignore excluding all secret files.
How do I check my current Docker Hub rate-limit status?
Use the Docker Hub rate-limit API: request an anonymous token from auth.docker.io, then curl the ratelimitpreview manifest endpoint with that Bearer token and check the RateLimit-Limit and RateLimit-Remaining response headers. For CI environments, monitor the RateLimit-Remaining header in your pull scripts and alert when it drops below a threshold.