HTTP 502 Bad Gateway: what it means and how to fix it

Apr 13, 2026 | 10 min read | hostney.com

A 502 Bad Gateway response means one server was acting as a gateway or proxy for another, asked the upstream for a response, and got something it could not use – no response at all, a malformed response, a connection that dropped mid-reply, or a backend process that is not running. The server in the middle cannot fulfill your request because the server behind it failed, so it returns 502 to say “I tried to hand this off and it did not work.”

The key thing about 502 is that it is never about your browser and rarely about the website you are visiting directly. It is about the conversation between two machines further down the chain – most often between a web server (Nginx or Apache) and an application backend (PHP-FPM, Node.js, Python), or between a CDN (Cloudflare, Fastly) and an origin server.

Quick reference: what a 502 usually means#

| You are seeing 502 on… | Likely cause | First thing to try |
| --- | --- | --- |
| Your own site (Nginx + PHP) | PHP-FPM is down, crashed, or timed out | Restart PHP-FPM, check error logs |
| Your own site behind Cloudflare | Origin server is down or unreachable | Check origin directly via IP, look at Cloudflare diagnostic page |
| Your own site behind a load balancer | All upstream servers are failing health checks | Check upstream health, look at balancer logs |
| Spotify, YouTube, Discord, Twitter | Third-party service is having an outage | Wait, check status page, nothing else you can do |
| A random site you do not control | Their infrastructure is broken | Try again later, or contact the site owner |

If you landed here because Spotify, YouTube, or another big service is returning 502, skip ahead to 502 from third-party services. If it is your own site, keep reading.

What a gateway actually is#

Before fixing a 502 it helps to know what “gateway” means in this context. Almost every modern website has at least two servers in its request path:

  1. Edge / proxy layer – Nginx, Apache, Cloudflare, a load balancer. This is what accepts the incoming HTTPS connection from the browser.
  2. Application backend – PHP-FPM, a Node.js process, a Python app, a Go service. This is what actually generates the HTML, JSON, or other response.

The edge server does not run your application code. It forwards the request to the backend, waits for a response, and relays it back to the browser. When the edge server is talking to the backend and something breaks in that conversation, you get 502.

This is different from the error your application itself might throw. If PHP generates an exception, you usually get a 500. If PHP cannot even be reached, you get 502. That distinction is what makes 502 specifically a “the plumbing is broken” error rather than a “your code is broken” error.
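
As a concrete sketch, the handoff looks roughly like this in an Nginx vhost. The socket path, document root, and PHP version here are assumptions; match them to your own setup:

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/example.com/public;

    location ~ \.php$ {
        # Nginx never executes PHP itself. It forwards the request to
        # PHP-FPM over this socket and relays whatever comes back.
        # If nothing usable comes back, the browser sees 502.
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
    }
}
```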

502 vs 503 vs 504: the 5xx family#

The three errors get confused because they all mean “something server-side went wrong with an upstream.” They are not interchangeable:

| Code | Meaning | What it tells you |
| --- | --- | --- |
| 502 Bad Gateway | Upstream returned nothing usable | The backend crashed, is not running, or spoke gibberish |
| 503 Service Unavailable | Upstream is up but refusing to serve | Intentional (maintenance) or overloaded (rate limit, out of workers) |
| 504 Gateway Timeout | Upstream is alive but too slow | The backend is running but took longer than the proxy allowed |

A good way to remember: 502 = the upstream is broken, 503 = the upstream said no, 504 = the upstream did not answer in time. If you want to go deeper on the other two, we have HTTP 503 Service Unavailable and an Nginx-specific 503 guide.

Is it your server or is it upstream?#

Before you start debugging, figure out where the 502 is coming from. The error page itself usually tells you.

  • “502 Bad Gateway” with “nginx” or “Apache” at the bottom – it is your own web server talking. The problem is between your Nginx/Apache and whatever runs behind it.
  • “Error 502” with a Cloudflare logo and a ray ID – Cloudflare is the gateway. The problem is that Cloudflare could not reach your origin, or your origin gave Cloudflare a bad response.
  • A branded error page (Spotify, YouTube, AWS) – a load balancer or reverse proxy that company operates is returning 502. You cannot debug their infrastructure.

From the command line, curl tells you the same thing without the branding:

curl -I https://example.com

Look at the Server: header and the response body. If the 502 page mentions Cloudflare and you bypass Cloudflare by hitting the origin IP directly, you can see whether the origin itself is serving content (Cloudflare problem) or also returning 502 (origin problem).

502 from Nginx: the backend is not there#

On a typical LAMP-style or Nginx + PHP-FPM stack, 502 almost always means Nginx could not reach PHP-FPM, or PHP-FPM handed back something broken.

PHP-FPM crashed or stopped

The most common cause, by far. PHP-FPM ran out of memory, hit an OOM kill, segfaulted on bad code, or got stopped during a deploy and never restarted.

Check its status:

systemctl status php8.3-fpm

(Replace 8.3 with your actual PHP version.) If it says inactive (dead) or failed, start it:

sudo systemctl start php8.3-fpm

If it refuses to start, the logs will tell you why:

journalctl -u php8.3-fpm -n 100
tail -n 200 /var/log/php8.3-fpm.log

Typical smoking guns: syntax error in a pool config file after an edit, a PHP extension that failed to load, port or socket already in use. Fix those and it will come back. If you are new to service management on Linux, our guide on managing Linux services with systemctl covers start, stop, restart, enable, and log inspection.

PHP-FPM is running but all workers are busy

A subtler version of the same failure. PHP-FPM is alive but every worker in the pool is tied up handling long requests, so new requests sit in the listen queue until Nginx gives up and returns 502.

Symptoms: 502 appears only under load, or only on specific slow URLs, while the main site works.

Check pool usage:

# Output depends on your PHP-FPM status page being enabled
curl http://localhost/status

Or count the processes (the [p] bracket trick keeps grep from matching its own process in the list):

ps aux | grep '[p]hp-fpm' | wc -l

If you are at or near pm.max_children, you have two options: raise the worker count (if you have the RAM) or fix the slow requests. Raising workers blindly just delays the problem. For the full picture of how pools and process managers work, see what is PHP-FPM and how does it work.
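
If the status page is not enabled, it is a one-line pool setting. The values below are illustrative starting points, not recommendations, and the file path assumes Debian/Ubuntu packaging:

```ini
; /etc/php/8.3/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 20        ; hard ceiling on concurrent PHP workers
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
pm.status_path = /status    ; enables the /status endpoint queried above
```

Note that /status also needs a location block in the Nginx vhost that passes it to PHP-FPM, ideally restricted to localhost.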

Nginx cannot reach the socket or port

Nginx is configured to talk to PHP-FPM via a Unix socket or a TCP port. If that path is wrong, or if Nginx runs as a user that cannot read the socket, you get 502 even when PHP-FPM itself is healthy.

Symptoms in the Nginx error log:

connect() to unix:/run/php/php8.3-fpm.sock failed (2: No such file or directory)
connect() failed (111: Connection refused) while connecting to upstream
connect() to unix:/run/php/php8.3-fpm.sock failed (13: Permission denied)

Fix:

  • No such file or directory – PHP-FPM is not running, or the socket path in your pool config does not match the one in fastcgi_pass.
  • Connection refused – PHP-FPM is listening on a different address or port than Nginx expects. Check listen = in the pool config against fastcgi_pass in the Nginx vhost.
  • Permission denied – the socket is owned by a user or group Nginx cannot read. Set listen.owner, listen.group, and listen.mode in the pool config so the Nginx user (typically www-data or nginx) can read it.
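
For reference, the socket-related pool settings look like this. The socket path and the www-data user are assumptions; they must match your Nginx vhost and the user Nginx actually runs as:

```ini
; PHP-FPM pool config – must agree with fastcgi_pass in the Nginx vhost
listen = /run/php/php8.3-fpm.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
```

On the Nginx side, fastcgi_pass unix:/run/php/php8.3-fpm.sock; must point at exactly that path. Restart PHP-FPM after changing these so the socket is recreated with the new ownership.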

The upstream returned a malformed response

Less common, but it happens. PHP-FPM actually replied, but what it sent was not a valid FastCGI response – for example, it emitted raw warnings before the HTTP headers, or the response was truncated because the worker was killed mid-reply.

Nginx error log will say something like:

upstream sent unsupported FastCGI protocol version
upstream prematurely closed connection while reading response header

The fix is usually in your PHP code or PHP config: disable display_errors in production (warnings going to stdout corrupt the FastCGI framing), raise memory_limit if workers are being killed for running out of memory, raise max_execution_time if scripts are being killed mid-reply.
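
In php.ini (or as per-pool php_admin_value overrides), that translates to something like the following. The numbers are illustrative, not universal answers:

```ini
; php.ini – production settings relevant to malformed-response 502s
display_errors = Off       ; warnings on stdout corrupt FastCGI framing
log_errors = On            ; route them to the error log instead
memory_limit = 256M        ; too low and workers get killed mid-reply
max_execution_time = 60    ; too low and long scripts die mid-reply
```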

On Hostney, PHP-FPM runs in an isolated container per site with pool sizing and memory limits managed at the platform level, so a single site running out of workers does not take down neighbors on the same server. Worker crashes restart automatically, and PHP error logs are accessible via the control panel without SSH.

502 from Cloudflare: origin is down#

A Cloudflare 502 looks different – it is a full-page Cloudflare-branded error with a ray ID at the bottom. It means Cloudflare received your request, tried to fetch from your origin, and got nothing usable back.

Causes, in order of likelihood:

  1. Origin server is completely down. The VM is off, the instance was terminated, the hosting provider is having an outage. Try to SSH in or check your hosting dashboard.
  2. Origin is up but web server is stopped. Nginx or Apache crashed but the machine is still pingable. SSH in and systemctl status nginx .
  3. Firewall is blocking Cloudflare. You added a rule that blocks Cloudflare’s IP ranges. Allowlist the current Cloudflare IP ranges.
  4. Origin is responding but too slowly or with bad headers. Usually surfaces as 520-527 on Cloudflare rather than 502, but edge cases exist.
  5. Keep-alive mismatch. Rare, but if your origin closes keep-alive connections more aggressively than Cloudflare expects, Cloudflare can get a dropped connection and report 502.

Quick test: bypass Cloudflare and hit the origin IP directly.

curl -I --resolve example.com:443:YOUR.ORIGIN.IP https://example.com

If that returns 200, the origin is fine and the problem is in the Cloudflare-to-origin path. If it returns 502 or times out, you have an origin problem that Cloudflare is just surfacing.
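
That decision can be scripted. The classify_origin helper below is hypothetical, not a Cloudflare tool – it just turns the status code curl reports (000 is curl's placeholder when no connection was made at all) into a verdict:

```shell
#!/bin/sh
# Interpret the status code from hitting the origin directly, e.g.:
#   code=$(curl -s -o /dev/null -w '%{http_code}' \
#            --resolve example.com:443:YOUR.ORIGIN.IP https://example.com)
classify_origin() {
  case "$1" in
    000) echo "origin unreachable: check firewall, DNS, and that the server is up" ;;
    2??|3??) echo "origin OK: the problem is in the Cloudflare-to-origin path" ;;
    502|504) echo "origin is itself failing: Cloudflare is just surfacing it" ;;
    *) echo "unexpected status $1: inspect the origin response manually" ;;
  esac
}

classify_origin 502
```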

502 from third-party services (Spotify, YouTube, and friends)#

If you searched “spotify 502 error” or “youtube error 502,” the answer is almost always: it is not you, it is them.

Large services like Spotify, YouTube, Discord, Twitter/X, and GitHub run behind load balancers that return 502 when a specific backend shard or region fails. You cannot fix this from your end. What you can do:

  • Check their status page. Almost every major service has one: status.spotify.com, status.github.com, downdetector.com for third-party confirmation.
  • Wait a few minutes. 502s from big services are usually resolved within minutes as the load balancer routes around the failed backend.
  • Try from a different network. Very occasionally a regional CDN failure means one ISP gets 502 while another works fine. Cellular data or a VPN will show you this.
  • Clear cookies for that domain as a last resort. Not the cause of a real 502, but it rules out a stale session getting stuck on a failing edge node.

What does not help: clearing your browser cache, reinstalling the app, restarting your router. 502 is server-side by definition.

How to check logs when 502 hits your own site#

For your own infrastructure, logs are where the answer lives. The two you care about:

Nginx error log – tells you what Nginx tried to do and why it failed:

tail -f /var/log/nginx/error.log

Look for connect() failed, upstream prematurely closed connection, no live upstreams, and upstream sent invalid response. Each maps to a specific cause above.

PHP-FPM log and PHP error log – tell you why the backend failed:

tail -f /var/log/php8.3-fpm.log
tail -f /var/log/php_errors.log

Look for fatal errors, OOM kills (WARNING: [pool www] child ... exited on signal 9 (SIGKILL) after ...), segfaults, and pool saturation warnings (WARNING: [pool www] server reached pm.max_children setting).
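
To see at a glance which failure mode dominates, the messages can be tallied. The sample variable below stands in for real log lines; on a live server, replace the printf with tail -n 1000 /var/log/nginx/error.log:

```shell
#!/bin/sh
# Tally 502-related message patterns. 'sample' is illustrative data;
# pipe in your real Nginx error log instead.
sample='2026/04/13 10:01:02 [error] 123#0: *1 connect() failed (111: Connection refused) while connecting to upstream
2026/04/13 10:01:03 [error] 123#0: *2 upstream prematurely closed connection while reading response header
2026/04/13 10:01:04 [error] 123#0: *3 connect() failed (111: Connection refused) while connecting to upstream'

counts=$(printf '%s\n' "$sample" \
  | grep -oE 'connect\(\) failed|upstream prematurely closed|no live upstreams|upstream sent invalid' \
  | sort | uniq -c | sort -rn)
echo "$counts"
```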

systemd journal catches crashes that did not make it into the log files:

journalctl -u nginx -u php8.3-fpm --since "10 minutes ago"

If you are running WordPress and the 502 only affects that site, our WordPress 502 Bad Gateway guide goes deeper on WordPress-specific causes like plugin conflicts and memory exhaustion from large page builders.

Permanent prevention#

Once you have recovered from a 502, a few habits keep them from coming back:

  • Monitor PHP-FPM worker saturation. If you are regularly hitting pm.max_children , fix the slow requests first, then raise workers. Do not just keep bumping the number.
  • Set sensible PHP memory limits. memory_limit too low causes OOM kills and 502s. Too high hides leaks until the whole machine OOMs. Most PHP sites do fine at 256M-512M.
  • Keep display_errors off in production. Warning output corrupts FastCGI framing and causes intermittent 502s that are very hard to trace.
  • Enable systemd auto-restart. Restart=on-failure and RestartSec=2s on your PHP-FPM unit means a crash causes a two-second outage instead of a permanent one.
  • Watch your error logs, not just your uptime. 502 rates climbing from 0.01% to 0.1% over a week is a warning – by the time it is visible in uptime monitoring you already have a problem.
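
The auto-restart habit from the list above takes two lines in a systemd drop-in. The unit name assumes PHP 8.3; run sudo systemctl edit php8.3-fpm and add:

```ini
# /etc/systemd/system/php8.3-fpm.service.d/override.conf
[Service]
Restart=on-failure
RestartSec=2s
```

systemctl edit writes the drop-in for you; follow it with sudo systemctl daemon-reload if you created the file by hand.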

A 502 is almost never mysterious once you have logs in front of you. It is either “the backend is not running,” “the backend is saturated,” “the backend is unreachable from where the proxy is looking,” or “someone upstream is having a bad day.” The fix follows directly from which of those four it is.
