A reverse proxy sits between the internet and your backend servers, forwarding client requests to the correct application and returning the response. Nginx is the most widely used reverse proxy for this purpose because its event-driven architecture handles thousands of concurrent connections efficiently and its configuration model maps cleanly to multi-domain setups.
This guide covers the full setup: how Nginx routes requests by domain, how to configure proxy_pass for different backends, how to handle SSL certificates per domain, and practical patterns for common multi-domain architectures.
## What a reverse proxy does
When Nginx acts as a reverse proxy, it receives all incoming HTTP/HTTPS requests on ports 80 and 443, inspects the request (primarily the Host header and SNI information), and forwards it to the appropriate backend application. The client never communicates directly with the backend.
This creates several advantages over exposing backend applications directly:
**Centralized SSL termination.** SSL certificates are managed at the Nginx layer. Backend applications receive plain HTTP from Nginx over a local connection, which simplifies application configuration and avoids the overhead of each backend managing its own certificates.

**Domain-based routing.** One server with one public IP can host multiple domains, each routing to a different backend application or port. A Node.js app, a Python API, and a PHP application can all share the same server, each on their own port, with Nginx directing traffic based on the requested domain.

**Static file serving.** Nginx serves static assets (images, CSS, JavaScript) directly from disk without involving the backend, reducing load on the application server.

**Connection management.** Nginx handles slow clients, keepalive connections, and request buffering, shielding backend applications from connection-level concerns they are not optimized to handle.
## How Nginx matches requests to server blocks

Nginx uses `server` blocks (sometimes called virtual hosts) to determine which configuration applies to each incoming request. The matching process uses two pieces of information: the IP address and port the connection arrived on, and the domain name from the Host header (or SNI for HTTPS).
```nginx
server {
    listen 80;
    server_name example.com www.example.com;
    # configuration for example.com
}

server {
    listen 80;
    server_name api.example.com;
    # configuration for api.example.com
}

server {
    listen 80;
    server_name dashboard.example.com;
    # configuration for dashboard.example.com
}
```
When a request arrives, Nginx:

1. Finds all server blocks listening on the matching IP:port
2. Compares the Host header against each block's `server_name`
3. Uses the exact match if there is one, then the longest wildcard match, then the first regex match in configuration order
4. Falls back to the `default_server` if no match is found
This means you can host any number of domains on a single Nginx instance. Each domain gets its own server block with its own proxy configuration, SSL certificate, and access rules.
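The precedence matters when names overlap. A short sketch of the three `server_name` forms (the domain names here are illustrative):

```nginx
server_name example.com;                 # exact match: checked first
server_name *.example.com;               # wildcard: checked next, longest match wins
server_name ~^api-\d+\.example\.com$;    # regex (~ prefix): checked last, in file order
```

An exact name is also the fastest to match, so listing common hostnames explicitly is preferable to relying on wildcards.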
### Default server block
Always define a default server block that catches requests for unrecognized domains. Without it, Nginx uses the first server block in the configuration, which may expose an application you did not intend to be the default:
```nginx
server {
    listen 80 default_server;
    listen 443 ssl default_server;
    server_name _;

    ssl_certificate /etc/ssl/certs/default.crt;
    ssl_certificate_key /etc/ssl/private/default.key;

    return 444; # Close connection without response
}
```
The non-standard 444 code is Nginx-specific: it tells Nginx to close the connection without sending any response. This prevents bots and scanners that connect by IP address from reaching any of your actual applications.
## Basic reverse proxy configuration
A minimal reverse proxy for a single domain:
```nginx
server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
The `proxy_pass` directive tells Nginx where to forward the request: in this case, a Node.js application listening on port 3000. The `proxy_set_header` directives pass client information to the backend that would otherwise be lost because Nginx is making the connection on the client's behalf.
Each header serves a purpose:
- `Host`: The original domain name the client requested. Without this, the backend sees the request as addressed to `127.0.0.1:3000`.
- `X-Real-IP`: The client's actual IP address. Without this, the backend sees all requests coming from Nginx (127.0.0.1).
- `X-Forwarded-For`: A chain of all proxies the request passed through. Important if you have multiple proxy layers.
- `X-Forwarded-Proto`: Whether the client connected over HTTP or HTTPS. The backend receives plain HTTP from Nginx, so it needs this header to know if the original connection was secure.
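When another proxy or CDN sits in front of Nginx, the connecting address belongs to that proxy, and Nginx itself can be told to recover the real client IP from the forwarded header. A sketch using the standard `ngx_http_realip_module` (the address 203.0.113.10 is a placeholder for your actual upstream proxy):

```nginx
# Trust X-Forwarded-For only when the connection comes from a known proxy
# (203.0.113.10 is a placeholder; substitute your upstream proxy's address)
set_real_ip_from 203.0.113.10;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
```

With this in place, `$remote_addr` in logs and in the `X-Real-IP` header reflects the original client rather than the intermediate proxy.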
## Multiple domains, each with its own backend
Here is where the multi-domain setup comes together. Each domain gets its own server block pointing to a different backend:
```nginx
# Node.js application
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Python API
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# WordPress (via PHP-FPM)
server {
    listen 80;
    server_name blog.example.com;

    root /var/www/blog;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm/blog.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```
Each server block is independent. The Node.js app, Python API, and WordPress blog each have their own domain, their own backend, and their own configuration. When a request for `api.example.com` arrives, Nginx routes it to port 8000. A request for `blog.example.com` goes to PHP-FPM. They do not interact.
## Organizing configuration files
For multi-domain setups, put each domain’s server block in its own file rather than combining everything into one configuration:
```
/etc/nginx/sites-available/app.example.com.conf
/etc/nginx/sites-available/api.example.com.conf
/etc/nginx/sites-available/blog.example.com.conf
```
Enable each by creating symlinks in `sites-enabled`:

```bash
ln -s /etc/nginx/sites-available/app.example.com.conf /etc/nginx/sites-enabled/
ln -s /etc/nginx/sites-available/api.example.com.conf /etc/nginx/sites-enabled/
ln -s /etc/nginx/sites-available/blog.example.com.conf /etc/nginx/sites-enabled/
```
Then include the enabled directory in your main Nginx configuration:
```nginx
http {
    include /etc/nginx/sites-enabled/*.conf;
}
```
This approach makes it easy to enable and disable individual domains without editing shared configuration files. Test the configuration after any change:
```bash
sudo nginx -t && sudo systemctl reload nginx
```
The `nginx -t` step validates syntax and catches errors before they affect running traffic. Always test before reloading.
## Adding SSL per domain
Each domain needs its own SSL certificate. Let’s Encrypt with Certbot is the standard approach for automated, free certificates.
### Obtaining certificates
Install Certbot and request certificates for each domain:
```bash
sudo certbot certonly --nginx -d app.example.com
sudo certbot certonly --nginx -d api.example.com
sudo certbot certonly --nginx -d blog.example.com
```
Certbot stores certificates at `/etc/letsencrypt/live/{domain}/`. Each domain gets its own directory with `fullchain.pem` and `privkey.pem`.
### Configuring SSL in server blocks
Update each server block to listen on port 443 with SSL, and redirect HTTP to HTTPS:
```nginx
# HTTP redirect
server {
    listen 80;
    server_name app.example.com;

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS server
server {
    listen 443 ssl;
    http2 on;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # HSTS
    add_header Strict-Transport-Security "max-age=63072000" always;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Key details:
- `http2 on`: Enables HTTP/2 for better performance. Requires SSL.
- ACME challenge location: The HTTP server block includes a location for Let's Encrypt renewal challenges. Without this, certificate renewal fails because the HTTPS redirect prevents Certbot from verifying domain ownership.
- `ssl_protocols`: Only TLS 1.2 and 1.3. Older versions have known vulnerabilities.
- HSTS: Tells browsers to always use HTTPS for this domain. Set a shorter max-age initially (e.g., 3600) while testing, then increase once confirmed working.
Repeat this pattern for each domain, changing the `server_name`, `ssl_certificate` paths, and `proxy_pass` target.
### Certificate renewal
Let’s Encrypt certificates expire every 90 days. Certbot installs a renewal timer automatically on most distributions. Verify it is active:
```bash
# The timer name varies by distribution: certbot.timer on Debian/Ubuntu,
# certbot-renew.timer on RHEL and Fedora
sudo systemctl status certbot-renew.timer
```
Test renewal without actually renewing:
```bash
sudo certbot renew --dry-run
```
If you manage many domains, ensure the renewal process reloads Nginx after renewing certificates. Add a deploy hook:
```bash
sudo certbot renew --deploy-hook "systemctl reload nginx"
```
## Upstream blocks and load balancing

For backends that run multiple instances (for availability or load distribution), Nginx's `upstream` block defines a pool of servers:
```nginx
upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 443 ssl;
    server_name app.example.com;

    # SSL config...

    location / {
        proxy_pass http://node_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
By default, Nginx distributes requests round-robin across the upstream servers. If one server fails, Nginx marks it as unavailable and sends requests to the remaining servers.
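Round-robin is only the default. The balancing method can be changed inside the upstream block; a sketch of two common alternatives (the ports are illustrative, and only one method should be active at a time):

```nginx
upstream node_app {
    # least_conn: send each request to the server with the fewest active
    # connections, which helps when request durations vary widely
    least_conn;

    # ip_hash (alternative): pin each client IP to the same backend,
    # useful when the application keeps sessions in process memory
    # ip_hash;

    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}
```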
### Health checks and failover

The `max_fails` and `fail_timeout` parameters control when Nginx considers an upstream server to be down:
```nginx
upstream node_app {
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
}
```
After 3 failed requests within 30 seconds, Nginx stops sending traffic to that server for 30 seconds, then tries again. Combined with `proxy_next_upstream`, you can control which errors trigger failover:
```nginx
location / {
    proxy_pass http://node_app;
    proxy_next_upstream error timeout http_502 http_503;
    proxy_next_upstream_tries 2;
    proxy_next_upstream_timeout 10s;
}
```
This retries the request on the next upstream server if the first returns a 502 or 503, or if the connection times out. The `proxy_next_upstream_tries` limit prevents infinite retries across the pool.
### Blue-green deployments
A common pattern for zero-downtime deployments uses two upstream servers – one active (blue) and one for the new version (green):
```nginx
upstream app_production {
    server 127.0.0.1:3000 max_fails=1 fail_timeout=10s;
    server 127.0.0.1:3001 max_fails=1 fail_timeout=10s;
}
```
Deploy the new version to port 3001 (green), verify it works, then swap which port is active. During the swap, Nginx's failover handles the transition. Both servers are in the pool, and `proxy_next_upstream` ensures requests go to whichever is responding.
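A stricter variant keeps only one color live at a time by marking the idle side with the standard `backup` parameter, so Nginx sends it traffic only when the primary is down:

```nginx
upstream app_production {
    server 127.0.0.1:3000;           # blue: currently active
    server 127.0.0.1:3001 backup;    # green: used only if blue is unavailable
}
```

Swapping the `backup` flag between the two entries and reloading Nginx moves traffic to the new version; because reloads are graceful, in-flight requests on the old worker processes are allowed to finish.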
## Proxy buffering and timeouts
Default timeout and buffer values work for most setups, but you will need to tune them for specific workloads.
### Timeouts

```nginx
proxy_connect_timeout 10s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
```
- `proxy_connect_timeout`: How long Nginx waits to establish a connection to the upstream. Keep this short (5-10s). If the backend is not responding to TCP connections, waiting longer will not help.
- `proxy_read_timeout`: How long Nginx waits for the backend to send a response. Increase this for backends that do heavy processing. For a WordPress site running WooCommerce checkout, 60-90 seconds is reasonable. For a fast API, 10-30 seconds is sufficient.
- `proxy_send_timeout`: How long Nginx waits to send the request body to the backend. Only relevant for large request bodies (file uploads).
If your backend takes too long and Nginx returns a 504 Gateway Timeout, `proxy_read_timeout` is the setting to adjust. See 504 gateway timeout in Nginx: causes and how to fix it for detailed diagnosis.
### Buffering

```nginx
proxy_buffer_size 16k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
```
Nginx buffers the upstream response before sending it to the client. This allows the backend to finish quickly and free its resources while Nginx handles the slower client connection. The default buffer sizes work for most responses, but if your backend sends large responses (large API payloads, generated reports), increase the buffer sizes to avoid Nginx writing temporary files to disk.
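The opposite case is a backend that streams its response (server-sent events, long polling, chunked progress output), where buffering delays delivery until the buffer fills. A sketch for such an endpoint (the `/events` path and port are illustrative):

```nginx
location /events {
    proxy_pass http://127.0.0.1:3000;

    # Pass the response to the client as the backend produces it,
    # instead of accumulating it in Nginx's buffers first
    proxy_buffering off;

    # Streaming connections are long-lived; keep the read timeout generous
    proxy_read_timeout 3600s;
}
```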
## WebSocket proxying
If any of your backends use WebSocket connections (real-time chat, live updates, collaborative editing), the proxy configuration needs additional headers:
```nginx
location /ws {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 86400s; # Keep WebSocket connections alive for 24h
}
```
The `Upgrade` and `Connection` headers tell Nginx to switch from HTTP to the WebSocket protocol. The increased `proxy_read_timeout` prevents Nginx from closing idle WebSocket connections. Without these headers, WebSocket connections fail at the proxy layer.
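If the same location serves both ordinary HTTP and WebSocket traffic, a common refinement from the Nginx documentation derives the Connection header from the request via a `map`, so non-upgrade requests do not carry a spurious `upgrade` value:

```nginx
# In the http {} context
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then, inside the location block, replace the hardcoded value:
#   proxy_set_header Upgrade $http_upgrade;
#   proxy_set_header Connection $connection_upgrade;
```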
## Practical example: full multi-domain setup
Here is a complete example with three domains, SSL, and different backend types:
```nginx
# /etc/nginx/sites-available/app.example.com.conf

# HTTP redirect
server {
    listen 80;
    server_name app.example.com;
    return 301 https://$host$request_uri;
}

# HTTPS - Node.js application with load balancing
upstream app_backend {
    server 127.0.0.1:3000 max_fails=2 fail_timeout=15s;
    server 127.0.0.1:3001 max_fails=2 fail_timeout=15s;
}

server {
    listen 443 ssl;
    http2 on;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    # Static assets served by Nginx directly
    location /static/ {
        alias /var/www/app/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # WebSocket endpoint
    location /ws {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 86400s;
    }

    # Application
    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_next_upstream error timeout http_502;
        proxy_next_upstream_tries 2;
    }
}
```
```nginx
# /etc/nginx/sites-available/api.example.com.conf

# Shared rate-limit zone; valid at this level because sites-enabled files
# are included inside the http {} context (10r/s is an example rate)
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

server {
    listen 80;
    server_name api.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    http2 on;
    server_name api.example.com;

    ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;

    # API rate limiting
    limit_req zone=api burst=20 nodelay;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 30s;
    }
}
```
```nginx
# /etc/nginx/sites-available/blog.example.com.conf

server {
    listen 80;
    server_name blog.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    http2 on;
    server_name blog.example.com;

    root /var/www/blog;
    index index.php;

    ssl_certificate /etc/letsencrypt/live/blog.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.example.com/privkey.pem;

    # Static files
    location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2)$ {
        expires 30d;
        add_header Cache-Control "public";
    }

    # WordPress permalinks
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm/blog.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_read_timeout 60s;
    }
}
```
Three domains, three completely different backends, all on one server with independent SSL certificates and tailored configurations.
## Debugging proxy issues
When a reverse proxy is not working as expected, the Nginx error log is the first place to check:
```bash
tail -f /var/log/nginx/error.log
```
Common issues:
“connect() failed (111: Connection refused)” — The backend is not running or not listening on the expected port. Verify the backend process is running and check which port it is bound to.
“no live upstreams while connecting to upstream” — All servers in an upstream block are marked as unavailable. Check whether the backend processes are running and responding. See 503 service temporarily unavailable in Nginx for detailed troubleshooting.
“upstream prematurely closed connection” — The backend terminated the connection before sending a complete response. This often indicates the backend crashed or ran out of memory during request processing.
Client receives wrong site content — The `server_name` directives are not matching correctly, and the request is hitting the default server block or a different domain's block. Check `server_name` values and ensure there is a `default_server` block catching unmatched requests.
SSL errors for specific domains — The certificate does not match the requested domain. Verify the certificate path in the server block and check that the certificate covers the exact domain (and www variant if needed):
```bash
openssl x509 -in /etc/letsencrypt/live/example.com/fullchain.pem -text -noout | grep DNS
```
## How managed hosting handles this
On managed WordPress hosting, the reverse proxy layer is configured automatically. You do not write server blocks or manage upstream definitions. When you add a domain through the control panel, the platform generates the Nginx configuration, obtains an SSL certificate, and reloads the server.
On Hostney, each domain gets its own server block with per-domain SSL (Let’s Encrypt by default, with the option to import custom certificates). The proxy configuration routes to PHP-FPM through per-domain Unix sockets, so each site has its own PHP process pool. For container-based applications (Node.js, Python, static sites), the platform uses upstream blocks with blue-green deployment support – two ports per application, with failover configured so deployments do not drop requests.
The same principles described in this guide apply – server blocks match by domain, proxy_pass routes to the backend, and SSL terminates at Nginx. The difference is the configuration is generated and managed by the platform rather than written by hand. For background on how Nginx compares to Apache in this role, and why Nginx’s architecture suits reverse proxy workloads, see Nginx vs Apache: which one is better for WordPress.
## Summary

Setting up Nginx as a reverse proxy for multiple domains involves three core concepts: server blocks that match requests by domain name, `proxy_pass` directives that route each domain to its backend, and per-domain SSL certificates for HTTPS.
Organize each domain in its own configuration file under `sites-available` with symlinks in `sites-enabled`. Always define a default server block to catch unrecognized domains. Use upstream blocks when you need load balancing or failover across multiple backend instances. Configure appropriate timeouts based on your backend's response characteristics, and add WebSocket headers for any real-time connections.
Test every configuration change with `nginx -t` before reloading. Monitor the error log for connection failures and upstream issues. Use `proxy_next_upstream` to handle transient backend failures gracefully. The setup scales to as many domains as you need: each one is independent, with its own SSL, backend, and access rules.