
Reverse Proxy vs Forward Proxy: What They Are and When to Use Each

In Plain English 🔥
Imagine you work at a big company. When you want to visit a website, your request goes through a security guard at the front door — that guard is a forward proxy. It acts on YOUR behalf, hiding who you are from the outside world. Now flip it around: when millions of people visit Netflix, they don't talk directly to Netflix's servers. Instead, a receptionist intercepts all those calls and routes them smartly — that receptionist is a reverse proxy. It acts on behalf of the SERVER, hiding the server's identity from the outside world. Same idea, completely opposite direction.

Every major system you rely on daily — Google, Netflix, your company's internal tools — quietly runs traffic through a proxy of some kind. Yet most developers can't explain the difference between a forward proxy and a reverse proxy without getting tangled up. That confusion costs real money: misconfigured proxies cause security holes, poor load distribution, and debugging nightmares that take days to unravel.

The problem is that both tools have the word 'proxy' in the name, which implies they're variations of the same thing. They're not. They solve fundamentally different problems and sit on opposite sides of the network boundary. A forward proxy controls and masks outbound traffic from clients. A reverse proxy controls and masks inbound traffic to servers. Getting this wrong means you deploy the wrong tool, patch the wrong layer, or — worse — expose infrastructure you thought was hidden.

By the end of this article you'll be able to explain both proxy types clearly in a system design interview, sketch out a real architecture diagram showing where each one lives, configure a minimal working example of each using Nginx, and confidently decide which one a given system needs — and why.

What a Forward Proxy Actually Does (and Why Companies Love It)

A forward proxy sits between a group of clients — say, every laptop in a corporate office — and the open internet. When an employee's browser makes a request, it goes to the proxy first. The proxy then makes that request on the employee's behalf, receives the response, and hands it back.

The key phrase is 'on behalf of the client.' The destination server never sees the real client IP; it only ever sees the proxy's IP address. This is the foundation of tools like VPNs, Tor exit nodes, and corporate content filters.

Why does this matter? Three big reasons. First, anonymity — you can mask the origin of requests, which matters for privacy, web scraping, or bypassing geo-restrictions. Second, access control — companies use forward proxies to block social media or gambling sites during work hours without touching individual machines. Third, caching — a forward proxy can cache responses for frequently visited sites, so if 200 employees all load the same news article, only one actual request hits the internet. The other 199 get a cached copy in milliseconds.
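That caching benefit is easy to see in miniature. The sketch below is a toy in-memory model (the class name is hypothetical, and unlike a real proxy such as Squid it ignores Cache-Control headers and expiry), but it shows why 200 identical requests turn into a single origin fetch:

```python
from typing import Callable, Dict

class CachingForwardProxy:
    """Toy in-memory model of a forward proxy's shared cache.
    (Hypothetical class; real proxies honor Cache-Control and expiry.)"""

    def __init__(self, fetch_from_origin: Callable[[str], str]):
        self._fetch = fetch_from_origin
        self._cache: Dict[str, str] = {}
        self.origin_requests = 0  # requests that actually left the network

    def get(self, url: str) -> str:
        if url not in self._cache:           # cache miss: go to the origin once
            self.origin_requests += 1
            self._cache[url] = self._fetch(url)
        return self._cache[url]              # cache hit: served locally

# 200 employees load the same article; only one request reaches the origin
proxy = CachingForwardProxy(lambda url: f"<html>content of {url}</html>")
responses = [proxy.get("http://news.example.com/article") for _ in range(200)]
print(proxy.origin_requests)  # 1
print(len(responses))         # 200
```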

The critical insight: the CLIENT knows about the forward proxy and is configured to use it. That's what distinguishes it architecturally from a reverse proxy.
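That explicit client-side configuration can be modeled directly. Here is a simplified, hypothetical sketch of how an HTTP client might honor HTTP_PROXY/NO_PROXY-style settings; real clients also handle ports, CIDR ranges, and wildcards, and the proxy hostname below is illustrative:

```python
import fnmatch
from typing import Optional

def select_proxy(host: str, http_proxy: str, no_proxy: str) -> Optional[str]:
    """Return the proxy URL to use for `host`, or None for a direct
    connection -- roughly the decision every proxy-aware client makes."""
    for pattern in (p.strip() for p in no_proxy.split(",")):
        if not pattern:
            continue
        # Exact match, domain-suffix match (".corp.internal"), or glob
        if host == pattern or host.endswith("." + pattern.lstrip(".")):
            return None
        if fnmatch.fnmatch(host, pattern):
            return None
    return http_proxy

# Hypothetical corporate settings -- the client KNOWS about the proxy:
proxy = "http://proxy.corp.internal:8888"
exclusions = "localhost,127.0.0.1,.corp.internal"
print(select_proxy("example.com", proxy, exclusions))         # the proxy URL
print(select_proxy("wiki.corp.internal", proxy, exclusions))  # None (direct)
```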

forward_proxy_nginx.conf · NGINX
# Forward Proxy Configuration using Nginx + ngx_http_proxy_connect_module
# This turns Nginx into a forward proxy that corporate clients route through.
# NOTE: Standard Nginx doesn't support CONNECT (HTTPS tunneling) out of the box.
# You need the ngx_http_proxy_connect_module patch for full HTTPS support.
# This example shows HTTP forward proxying which works with vanilla Nginx.

server {
    # The port corporate clients point their browser proxy settings at
    listen 8888;

    # resolver is required so Nginx can resolve the destination hostname dynamically
    # 8.8.8.8 is Google's public DNS — replace with your internal DNS in production
    resolver 8.8.8.8;

    location / {
        # $http_host captures the destination host the client requested
        # $request_uri captures the full path and query string
        # Together they reconstruct the original target URL
        proxy_pass http://$http_host$request_uri;

        # Forward the real client IP in a custom header so destination servers
        # can log who actually made the request (optional, reduces anonymity)
        proxy_set_header X-Forwarded-For $remote_addr;

        # Pass along the original Host header so the destination server
        # knows which virtual host it's being asked for
        proxy_set_header Host $http_host;
    }

    # Block access to internal/private IP ranges — critical security rule.
    # Without this, an attacker could use YOUR proxy to attack your own internal network
    # (known as Server-Side Request Forgery, SSRF). A regex `location` cannot enforce this,
    # because Nginx matches locations against the URI path rather than the absolute URL,
    # so check the requested host instead. Note this only catches literal IPs; hostnames
    # that resolve to private addresses need resolver-level or firewall-level blocking.
    if ($http_host ~* "^(10\.|127\.|169\.254\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)") {
        # Return 403 Forbidden for any request targeting private IP space
        return 403 "Access to private network ranges is blocked";
    }
}
▶ Output
# Test from a client machine configured to use this proxy:
# curl -x http://proxy-server:8888 http://example.com

# Nginx access log output (on the proxy server):
# 192.168.1.45 - - [12/Jul/2025:09:14:33 +0000] "GET http://example.com/ HTTP/1.1" 200 1256
# ^ real client IP ^ full URL shows this is a forward proxy request

# From example.com's perspective, the request came from the PROXY IP, not 192.168.1.45
# That's the anonymization happening in real time.
⚠️ Watch Out: SSRF Risk
An open forward proxy with no IP filtering is an SSRF (Server-Side Request Forgery) disaster waiting to happen. An attacker can route requests through your proxy to hit internal services on your private network — your Redis instance, your admin panel, your cloud metadata endpoint (169.254.169.254). Always blocklist RFC 1918 private ranges as shown above.
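The same blocklist idea can be enforced in application code before a request is ever proxied. This is a sketch using Python's standard library; the fail-closed behavior and the exact range list are design choices under stated assumptions, not a complete SSRF defense:

```python
import ipaddress
import socket

# RFC 1918 ranges plus loopback and link-local (cloud metadata lives there)
BLOCKED_NETWORKS = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",
    "127.0.0.0/8", "169.254.0.0/16",
)]

def is_private_target(host: str) -> bool:
    """Resolve `host` and refuse it if ANY resolved address is private.
    Resolving first matters: a hostname blocklist alone is bypassable,
    since an attacker can register a public name that resolves to
    169.254.169.254 or an internal 10.x.x.x address."""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # fail closed: refuse hosts we cannot resolve
    for family, socktype, proto, canonname, sockaddr in infos:
        addr = ipaddress.ip_address(sockaddr[0])
        if any(addr in net for net in BLOCKED_NETWORKS):
            return True
    return False

print(is_private_target("169.254.169.254"))  # True: cloud metadata endpoint
print(is_private_target("10.0.1.10"))        # True: RFC 1918
print(is_private_target("8.8.8.8"))          # False: public address
```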

What a Reverse Proxy Does — and Why It's the Backbone of Modern Web Architecture

A reverse proxy sits in front of your servers, not your clients. When a user types 'api.yourapp.com' into their browser, they're hitting the reverse proxy — and they have absolutely no idea. The proxy then decides which backend server should handle that request, forwards it, gets the response, and returns it to the user.

The client never knows the backend servers exist. They might think they're talking to one server. In reality, you could have 50 servers behind that proxy, dynamically scaling up and down.

This is why reverse proxies are the single most important infrastructure component in scalable web systems. They enable four things that are otherwise very hard to achieve: load balancing across multiple server instances, SSL termination (so your app servers never deal with encryption overhead), caching of static assets close to the edge, and centralized authentication and rate limiting.
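Load balancing is the easiest of these to demystify. The sketch below is a naive weighted round-robin; it is not Nginx's exact 'smooth' algorithm (which interleaves picks to avoid bursts to one server), but it yields the same long-run traffic shares for a weight-3/1/1 pool:

```python
import itertools

def weighted_round_robin(servers):
    """Naive weighted round-robin: repeat each backend `weight` times
    and cycle through the expanded list forever."""
    expanded = [addr for addr, weight in servers for _ in range(weight)]
    return itertools.cycle(expanded)

# A weight-3/1/1 pool: the first server receives 3x the traffic
pool = weighted_round_robin([
    ("10.0.1.10:3000", 3),
    ("10.0.1.11:3000", 1),
    ("10.0.1.12:3000", 1),
])
picks = [next(pool) for _ in range(10)]
print(picks.count("10.0.1.10:3000"))  # 6 of every 10 requests
print(picks.count("10.0.1.11:3000"))  # 2
```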

Nginx, Caddy, HAProxy, AWS ALB, Cloudflare — these are all reverse proxies at their core. When engineers say 'put it behind Nginx', they mean 'add a reverse proxy in front of your application.' This is the default architecture for virtually every production web service today.

The critical insight: the CLIENT does not know about the reverse proxy. It's invisible. The client thinks it's talking directly to 'api.yourapp.com'.

reverse_proxy_nginx.conf · NGINX
# Production-style Reverse Proxy Configuration
# This Nginx config sits in front of three application server instances
# and handles load balancing, SSL termination, and health checking.

# Define the pool of backend application servers
# Nginx will distribute requests across these using round-robin by default
upstream application_servers {
    # Each 'server' directive points to a running application instance
    # These could be Node.js, Python/Gunicorn, Java/Spring — anything
    server 10.0.1.10:3000 weight=3 max_fails=2 fail_timeout=30s;  # Handles 3x the traffic (more powerful machine)
    server 10.0.1.11:3000 weight=1 max_fails=2 fail_timeout=30s;  # Standard traffic share
    server 10.0.1.12:3000 weight=1 max_fails=2 fail_timeout=30s;  # Standard traffic share

    # Passive health check (the max_fails/fail_timeout parameters above): after 2 failed
    # responses within 30 seconds, Nginx stops sending that server traffic for 30 seconds,
    # then automatically tries it again
    server 10.0.1.13:3000 backup;    # Only used if ALL primary servers are down

    # Keepalive maintains persistent connections to backends — reduces TCP handshake overhead.
    # It only takes effect when the proxied connection uses HTTP/1.1 with an empty Connection header.
    keepalive 32;
}

# Redirect all HTTP traffic to HTTPS — never serve your app unencrypted
server {
    listen 80;
    server_name api.yourapp.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name api.yourapp.com;

    # SSL termination happens HERE at the proxy — backend servers talk plain HTTP internally
    # This offloads crypto work from your app servers (significant CPU saving at scale)
    ssl_certificate     /etc/ssl/certs/yourapp.crt;
    ssl_certificate_key /etc/ssl/private/yourapp.key;
    ssl_protocols       TLSv1.2 TLSv1.3;  # Never allow TLS 1.0 or 1.1 — they're broken

    # Security headers added here once, for ALL backends — not in each app separately
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Frame-Options DENY always;
    add_header X-Content-Type-Options nosniff always;

    # Rate limiting — throttle clients exceeding the configured rate (e.g. 100 requests/second).
    # NOTE: requires a matching zone defined once in the http context, e.g.:
    #   limit_req_zone $binary_remote_addr zone=api_rate_limit:10m rate=100r/s;
    # This protects ALL backend servers centrally without touching app code
    limit_req zone=api_rate_limit burst=200 nodelay;

    location /api/ {
        # Hand the request off to our upstream pool
        proxy_pass http://application_servers;

        # HTTP/1.1 with an empty Connection header is required for the
        # upstream keepalive pool to actually reuse connections
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # Tell the backend the REAL client IP, not the proxy's IP
        # Your app needs this for logging, analytics, and fraud detection
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;  # Tell backend this came in over HTTPS

        # Pass the original hostname so the backend knows which virtual host was requested
        proxy_set_header Host $host;

        # Timeouts: don't let a slow backend server hang the proxy indefinitely
        proxy_connect_timeout 5s;   # Max time to establish connection to backend
        proxy_read_timeout    30s;  # Max time to wait for backend to send a response
        proxy_send_timeout    10s;  # Max time to send the request to backend
    }

    # Cache static assets at the proxy level — backends never even see repeat requests.
    # NOTE: requires a cache zone defined once in the http context, e.g.:
    #   proxy_cache_path /var/cache/nginx keys_zone=static_cache:10m max_size=1g;
    location /static/ {
        proxy_pass http://application_servers;
        proxy_cache static_cache;  # Use the cache zone defined in the http context
        proxy_cache_valid 200 7d;  # Cache successful responses for 7 days
        add_header X-Cache-Status $upstream_cache_status;  # Shows HIT or MISS in response
    }
}
▶ Output
# Test the reverse proxy from outside:
# curl -I https://api.yourapp.com/api/users

HTTP/2 200
server: nginx # Client sees Nginx, not your Node.js/Python app
x-real-ip: (stripped — not visible to client)
x-cache-status: HIT # Static asset served from cache, backend never touched
strict-transport-security: max-age=31536000; includeSubDomains
x-frame-options: DENY

# Nginx access log shows which backend handled the request:
# [proxy] 203.0.113.42 -> 10.0.1.10:3000 GET /api/users 200 142ms
# [proxy] 203.0.113.99 -> 10.0.1.11:3000 GET /api/users 200 138ms <- round-robin in action
# [proxy] 203.0.113.77 -> 10.0.1.12:3000 GET /api/users 200 145ms
⚠️ Pro Tip: X-Forwarded-For Is Not Optional
If you don't set X-Real-IP and X-Forwarded-For in your reverse proxy config, every request your application sees will appear to come from the proxy's internal IP (e.g. 127.0.0.1 or 10.0.0.1). Your analytics are meaningless, your rate limiting by IP breaks, and your fraud detection goes blind. Always pass the real client IP through.
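Reading the header correctly on the application side matters just as much as setting it. Here is a hedged sketch (the addresses and trusted-proxy set are hypothetical; production code should also validate that each value parses as an IP):

```python
def client_ip(remote_addr, xff_header, trusted_proxies):
    """Recover the real client IP behind a reverse proxy.
    X-Forwarded-For is 'client, proxy1, proxy2, ...': each hop appends the
    peer address it saw. Walk right to left, skipping proxies we trust;
    the first untrusted address is the real client. Trusting the header
    blindly lets any client spoof its own IP."""
    if remote_addr not in trusted_proxies:
        return remote_addr  # request did not arrive via our proxy at all
    hops = [h.strip() for h in xff_header.split(",") if h.strip()]
    for addr in reversed(hops):
        if addr not in trusted_proxies:
            return addr
    return remote_addr

# Hypothetical setup: our Nginx reverse proxy sits at 10.0.0.1
print(client_ip("10.0.0.1", "203.0.113.42", {"10.0.0.1"}))
# 203.0.113.42 -- the real client, not the proxy
print(client_ip("10.0.0.1", "6.6.6.6, 203.0.113.42", {"10.0.0.1"}))
# 203.0.113.42 -- the spoofed leftmost value is ignored
```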

How They Fit Into Real System Design — Side by Side

Here's where it clicks: in a sophisticated real-world architecture, you'll often have BOTH types of proxy at work simultaneously — serving completely different purposes.

Consider a company's internal developer tooling setup. Developers sit behind a corporate forward proxy that logs outbound traffic, blocks social media, and caches npm packages locally (so 200 developers all downloading React doesn't hammer npm's servers). They're using a forward proxy without thinking about it.

At the same time, the product those developers are building runs behind a reverse proxy (probably an AWS Application Load Balancer or Cloudflare). The reverse proxy handles SSL termination, routes '/api/' requests to their API servers and '/app/' requests to their frontend servers, and absorbs DDoS traffic before it touches any real infrastructure.

The mental model that makes this permanent: ask 'whose side is this proxy on?' A forward proxy is on the CLIENT's side — it represents and protects the clients. A reverse proxy is on the SERVER's side — it represents and protects the servers. Once that distinction is locked in, everything else (configuration, security implications, caching strategy) flows naturally from it.

This is also why CDNs like Cloudflare are technically reverse proxies. You configure your DNS to point at Cloudflare, and Cloudflare proxies all requests to your origin server. Your origin is hidden. Cloudflare is on your server's side.

docker_compose_full_proxy_architecture.yml · YAML
# Full local demo: Forward proxy + Reverse proxy in the same Docker Compose setup
# Run: docker compose up
# This lets you see both proxy types operating simultaneously on your machine

version: '3.9'

services:

  # ── REVERSE PROXY (public-facing) ──────────────────────────────────────────
  # This is what the internet hits. It knows nothing about which backend answers.
  reverse_proxy:
    image: nginx:alpine
    container_name: reverse_proxy
    ports:
      - "80:80"      # Exposed to the outside world (simulates public internet)
    volumes:
      - ./reverse_proxy.conf:/etc/nginx/conf.d/default.conf:ro
    networks:
      - public_network   # Faces the outside
      - private_network  # Can reach backend servers
    depends_on:
      - api_server_1
      - api_server_2

  # ── BACKEND API SERVERS (hidden from public) ────────────────────────────────
  # These servers are ONLY on the private network.
  # There is no way to reach them except through the reverse proxy.
  api_server_1:
    image: node:18-alpine
    container_name: api_server_1
    working_dir: /app
    volumes:
      - ./api_server.js:/app/server.js
    command: node server.js
    environment:
      SERVER_ID: "api-server-1"   # So we can see which server handled the request
      PORT: "3000"
    networks:
      - private_network            # NOT on public_network — completely hidden

  api_server_2:
    image: node:18-alpine
    container_name: api_server_2
    working_dir: /app
    volumes:
      - ./api_server.js:/app/server.js
    command: node server.js
    environment:
      SERVER_ID: "api-server-2"
      PORT: "3000"
    networks:
      - private_network            # NOT on public_network — completely hidden

  # ── FORWARD PROXY (internal clients use this to reach outside) ─────────────
  # Internal services that need to make outbound HTTP calls route through this.
  # It logs all outbound requests centrally and can block/allow by domain.
  forward_proxy:
    image: ubuntu/squid   # Squid is the industry standard forward proxy daemon
    container_name: forward_proxy
    ports:
      - "3128:3128"       # Standard Squid port — clients set HTTP_PROXY=http://forward_proxy:3128
    volumes:
      - ./squid.conf:/etc/squid/squid.conf:ro
    networks:
      - private_network   # Internal clients use this to reach the internet
      - public_network    # It needs internet access to fulfill those requests

  # ── INTERNAL CLIENT (simulates a service making outbound calls via forward proxy) ──
  # This could be a data pipeline, a webhook sender, a third-party API caller etc.
  internal_service:
    image: curlimages/curl:latest
    container_name: internal_service
    # Route ALL outbound HTTP traffic through the forward proxy
    environment:
      HTTP_PROXY: "http://forward_proxy:3128"
      HTTPS_PROXY: "http://forward_proxy:3128"
      NO_PROXY: "localhost,127.0.0.1,reverse_proxy"  # Don't proxy internal service-to-service calls
    networks:
      - private_network   # Only on private network — can't reach internet directly
    command: sh -c "sleep 5 && curl -s http://httpbin.org/ip"  # Test: what IP does the internet see?

networks:
  public_network:    # Represents the internet-facing network
    driver: bridge
  private_network:   # Represents your internal datacenter network
    driver: bridge
    internal: false  # Set to true in production to truly block direct internet egress
▶ Output
# docker compose up --abort-on-container-exit

reverse_proxy | Starting nginx
api_server_1 | Server api-server-1 listening on port 3000
api_server_2 | Server api-server-2 listening on port 3000
forward_proxy | Squid listening on port 3128

# From your host machine — hitting the REVERSE PROXY:
# curl http://localhost:80/api/ping
{"server": "api-server-1", "message": "pong"} <- round-robin request 1
# curl http://localhost:80/api/ping
{"server": "api-server-2", "message": "pong"} <- round-robin request 2

# internal_service container output — showing what IP the internet sees:
# (It sees the FORWARD PROXY's IP, not the internal_service container's IP)
{
"origin": "203.0.113.10" <- This is the forward proxy's public IP, not the internal service IP
}

# Try to reach api_server_1 directly from your host (should FAIL — it's hidden):
# curl http://localhost:3000/api/ping
curl: (7) Failed to connect to localhost port 3000: Connection refused
# Perfect. The backend is unreachable without going through the reverse proxy.
🔥 Interview Gold: The Network Boundary Question
Interviewers love asking 'where does each proxy sit relative to the trust boundary?' The forward proxy sits at the edge of the CLIENT network, managing outbound trust. The reverse proxy sits at the edge of the SERVER network, managing inbound trust. Being able to draw this boundary clearly on a whiteboard immediately signals senior-level system design thinking.
| Feature / Aspect | Forward Proxy | Reverse Proxy |
| --- | --- | --- |
| Who it acts on behalf of | The CLIENT (user, employee, internal service) | The SERVER (your backend infrastructure) |
| Direction of traffic | Outbound — client to internet | Inbound — internet to your servers |
| Client awareness | Client KNOWS and is configured to use it | Client has NO IDEA it exists |
| Server awareness | Destination server doesn't know real client | Client doesn't know real server IPs |
| Primary use case | Anonymity, content filtering, caching outbound requests | Load balancing, SSL termination, hiding server topology |
| Where it lives in network | Edge of client/internal network | Edge of server/datacenter network |
| Common tools | Squid, Privoxy, corporate VPN gateways | Nginx, HAProxy, AWS ALB, Cloudflare, Caddy |
| Caching benefit | Reduces duplicate outbound requests from many clients | Reduces load on backend by caching responses near the edge |
| Security benefit | Hides client identities, enforces outbound access policy | Hides server IPs, centralizes DDoS protection and WAF |
| SSL handling | Tunnels HTTPS via CONNECT; can optionally intercept (SSL inspection/MITM) | Terminates SSL — backend servers use plain HTTP internally |
| Typical config location | Set in browser/OS network settings or via env vars | Set in DNS — your domain points to the proxy, not origin |
| Real-world analogy | A company's security guard checking employees leaving | A hotel receptionist routing guest calls to the right room |

🎯 Key Takeaways

  • Direction defines everything: forward proxy = outbound (client→internet), reverse proxy = inbound (internet→server). When in doubt, ask 'whose side is this proxy on?'
  • Client awareness is the architectural tell: if the client is configured to use it (browser proxy settings, HTTP_PROXY env var), it's a forward proxy. If the client thinks it's talking directly to the destination, it's a reverse proxy.
  • Reverse proxies are the foundation of scalable web architecture — SSL termination, load balancing, DDoS absorption, and centralized auth all happen here before requests ever touch your app servers.
  • An open, unauthenticated forward proxy is an immediate security incident waiting to happen — always restrict by IP allowlist and always block RFC 1918 private ranges to prevent SSRF attacks against your own infrastructure.

⚠ Common Mistakes to Avoid

  • Mistake 1: Confusing a VPN with a Forward Proxy — Symptoms: developers say 'just use a VPN' when a forward proxy is needed, then wonder why they can't do per-domain filtering or caching. Fix: understand that a VPN encrypts ALL traffic at the OS level and tunnels it to a remote network (changing your effective location), while a forward proxy operates at the HTTP/HTTPS application layer and can selectively proxy, filter, cache, and inspect individual requests. They serve overlapping but distinct purposes. For corporate content filtering and request caching, you want a forward proxy like Squid, not a VPN.
  • Mistake 2: Not passing X-Forwarded-For in the reverse proxy config — Symptoms: every request in your application logs shows the same IP address (127.0.0.1 or your load balancer's internal IP). Rate limiting by IP fails. Geo-location is broken. Analytics are meaningless. Fix: add 'proxy_set_header X-Real-IP $remote_addr;' and 'proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;' to your Nginx location block. Then configure your application to read the client IP from the X-Real-IP or X-Forwarded-For header, not from the raw socket connection. In Express.js: app.set('trust proxy', 1). In Django, add middleware (for example django-ipware or a small custom middleware) that extracts the client IP from X-Forwarded-For, since Django itself only exposes the raw socket address as REMOTE_ADDR.
  • Mistake 3: Running an open forward proxy with no access controls — Symptoms: your proxy server appears in public proxy lists within hours of launch. Your IP gets blacklisted by major services. Bandwidth costs explode. In the worst case, attackers use it to hit your own internal services (SSRF). Fix: never expose a forward proxy to the public internet without authentication. Use IP allowlisting to restrict which clients can use it, add HTTP Basic Auth, and always blocklist private IP ranges (10.x.x.x, 192.168.x.x, 172.16-31.x.x, 127.x.x.x, 169.254.x.x) so the proxy cannot be weaponized to target your own internal network.

Interview Questions on This Topic

  • Q: A client's browser makes a request to api.stripe.com. Describe exactly where a forward proxy and a reverse proxy would each sit in that network path, and what each one would do to the request.
  • Q: Cloudflare sits in front of millions of websites and users configure it by changing their DNS records. Is Cloudflare a forward proxy or a reverse proxy? Explain your reasoning — and then describe a situation where an organization might run BOTH types of proxy simultaneously.
  • Q: Your application logs show every single incoming request has the IP address 10.0.0.1 — your load balancer's private IP. Users are complaining that rate limiting is broken because everyone shares the same 'identity'. Walk me through exactly what configuration changes are needed at the proxy layer and the application layer to fix this.

Frequently Asked Questions

Is Nginx a forward proxy or a reverse proxy?

Nginx is primarily designed and used as a reverse proxy — it sits in front of your backend servers and routes inbound traffic to them. It can be configured as a basic forward proxy for HTTP traffic, but it requires a third-party module (ngx_http_proxy_connect_module) to handle HTTPS CONNECT tunneling properly. For production forward proxy setups, Squid is a more purpose-built choice.

Does a VPN count as a forward proxy?

They overlap in purpose — both can mask your IP and route traffic through an intermediary — but they operate at different layers. A VPN works at the network (OS) level, encrypting all traffic and routing it through a remote server. A forward proxy works at the HTTP application level, handling requests selectively by domain or URL. A VPN gives you a new apparent location; a forward proxy gives you control over individual HTTP requests including filtering, caching, and inspection.

Can a server tell if a request came through a reverse proxy?

Not without deliberate cooperation from the proxy. A reverse proxy completely hides the server topology — the client has no way to discover the real backend server IPs unless the proxy leaks them. However, the reverse proxy can also hide the real CLIENT IP from the backend server, unless it's configured to pass it through X-Forwarded-For or X-Real-IP headers. This is why configuring those headers correctly is non-negotiable in any production reverse proxy setup.

TheCodeForge Editorial Team (Verified Author)

Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.
