
What is a cache miss and how does it affect performance

Apr 1, 2026 | 12 min read

Every time someone visits your website, the server has to assemble the page. For a WordPress site, that means PHP executes, database queries run, the theme renders HTML, and the result is sent to the browser. This process takes time – typically 200-800 milliseconds depending on the site’s complexity, the server’s resources, and the database load.

Caching stores the result of that work so the next visitor gets the pre-built page instantly instead of waiting for the server to build it again. A cache hit means the page was found in cache and served directly. A cache miss means it was not found, and the server had to do the full work of generating it from scratch.

The difference matters. A cache hit on a well-configured WordPress site might take 10-30 milliseconds. A cache miss on the same site might take 500+ milliseconds. For a site with 10,000 daily visitors, the proportion of hits to misses directly determines how fast the site feels and how much load the server handles.

How web caching works

Web caching for a WordPress site operates at multiple layers. Each layer caches different things, and a miss at any layer falls through to the next.

Page cache (server-level)

The most impactful cache layer. When a visitor requests a page, Nginx (or another web server) checks if it has a cached copy of the complete HTML response. If it does, Nginx serves the cached file directly without even starting PHP. The request never reaches WordPress.

On a cache hit, the response flow is:

Browser -> Nginx -> cached HTML file -> Browser

On a cache miss:

Browser -> Nginx -> PHP-FPM -> WordPress -> Database -> HTML generated -> Browser

The page cache eliminates the entire PHP/WordPress/database stack for cached requests. This is why it has the largest performance impact of any cache layer.

Most page cache implementations use the request URL as the cache key – typically a combination of scheme, request method, host, and path that uniquely identifies a request. The same URL always returns the same cached response (until the cache is invalidated).
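A common Nginx configuration builds that key by concatenating request variables, e.g. fastcgi_cache_key "$scheme$request_method$host$request_uri";, then hashes the result to name the cache file. A minimal shell sketch of the same idea (illustrative only – the values are made up):

```shell
# Mimic a typical Nginx cache key: "$scheme$request_method$host$request_uri"
scheme="https"; request_method="GET"; host="example.com"; request_uri="/blog/post/"
key="${scheme}${request_method}${host}${request_uri}"
echo "$key"
# → httpsGETexample.com/blog/post/

# Nginx stores the cached response in a file named after an MD5 of this key
printf '%s' "$key" | md5sum | awk '{print $1}'
```

Because the key is just a string, anything that varies the string – a different scheme, an extra query parameter – produces a different cache entry.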

Object cache (application-level)

WordPress runs many database queries per page load. The options table alone gets queried dozens of times – site URL, active plugins, theme settings, rewrite rules, widget configurations. The object cache stores the results of these queries in memory (Memcached or Redis) so they do not hit the database on every request.

The object cache matters most when the page cache cannot help – logged-in users, WooCommerce cart pages, admin dashboard, AJAX requests. These pages are dynamic and cannot be served from the page cache, but the database queries they run are often repetitive. An object cache hit returns the query result from memory in microseconds instead of waiting for a database round-trip.

Browser cache (client-level)

The browser caches static assets – CSS, JavaScript, images, fonts – based on HTTP cache headers (Cache-Control, Expires, ETag). A browser cache hit means the asset is loaded from the visitor’s local disk instead of being downloaded from the server again.

Browser cache misses increase page load time because every asset requires a network request. On a typical WordPress page with 30-50 static assets, browser caching can save dozens of HTTP requests on repeat visits.
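On the server side, these headers are usually set per asset type. A typical Nginx snippet looks like this (a sketch – file extensions and lifetimes vary by site):

```nginx
# Tell browsers to keep static assets for 30 days
location ~* \.(css|js|png|jpe?g|gif|svg|ico|woff2?)$ {
    expires 30d;
    add_header Cache-Control "public, max-age=2592000";
}
```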

Opcode cache (PHP-level)

PHP compiles source code to bytecode on every request. OPcache stores the compiled bytecode in shared memory so PHP does not recompile the same files on every request. An OPcache miss means PHP reads the source file from disk and compiles it, which adds 50-200 milliseconds to the request.

OPcache is enabled by default on modern PHP installations and rarely needs manual configuration. It is worth mentioning because it is the one cache layer where misses are almost invisible – the overhead per individual file is small, but it adds up across the hundreds of PHP files WordPress loads per request.
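For reference, the relevant php.ini settings look like this (typical values, not prescriptive):

```ini
; OPcache is usually on by default; these are common tuning values
opcache.enable=1
opcache.memory_consumption=128      ; MB of shared memory for compiled bytecode
opcache.max_accelerated_files=10000
opcache.validate_timestamps=1       ; re-check source files for changes...
opcache.revalidate_freq=60          ; ...at most once every 60 seconds
```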

What causes cache misses

First request after cache clear

The most obvious cause. When you clear the cache (updating a post, changing settings, deploying a new version), every subsequent request is a miss until the cache is repopulated. On a site with thousands of pages, the cache may take hours to fully warm up through organic traffic.

Logged-in users

Page caching cannot serve the same cached page to all visitors when the page contains user-specific content. A logged-in WordPress user sees the admin bar, their username, and potentially personalized content. Caching this page and serving it to a different user would show the wrong information.

Most caching systems bypass the page cache entirely when WordPress session cookies are present. This means every logged-in request is a cache miss at the page level. For sites with many logged-in users (membership sites, forums, WooCommerce stores), the effective cache hit rate can be much lower than expected.
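In Nginx, that bypass is typically a cookie check (a sketch – the exact cookie list and variable names vary by setup):

```nginx
# Skip the page cache when WordPress auth or comment cookies are present
set $skip_cache 0;
if ($http_cookie ~* "wordpress_logged_in|wp-postpass|comment_author") {
    set $skip_cache 1;
}
fastcgi_cache_bypass $skip_cache;   # do not serve this request from cache
fastcgi_no_cache $skip_cache;       # and do not store the response
```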

Dynamic pages

Some pages are uncacheable by nature:

  • WooCommerce cart and checkout. The cart contents are unique per visitor. Caching a cart page would show someone else’s items.
  • Search results. Each search query produces different results.
  • AJAX requests. WordPress admin-ajax.php and REST API calls are typically unique per request.
  • POST requests. Form submissions, login attempts, and other POST requests are never cached.

These pages always miss the page cache. The object cache helps reduce database load on these pages, but the full PHP execution still happens.

Cache expiration (TTL)

Caches do not keep entries forever. Each cached item has a time-to-live (TTL) after which it expires and is removed. When a visitor requests an expired page, it is a miss, and the page is regenerated and cached again.

Short TTLs cause more misses but ensure fresher content. Long TTLs cause fewer misses but risk serving stale content. The right balance depends on how frequently your content changes and how tolerant you are of visitors seeing slightly outdated pages.

Query strings

URLs with query strings are often treated differently by caches. URLs like example.com/page/?utm_source=twitter and example.com/page/?utm_source=facebook may be cached as two separate entries even though they serve identical content. Marketing campaigns with UTM parameters can fragment the cache, creating misses for what is effectively the same page.

Well-configured caching systems strip known tracking parameters (utm_*, gclid, fbclid) from the cache key so these variations share a single cache entry.

Low traffic pages

A page that gets one visit per week will almost always be a cache miss because the cached version expires before the next visitor arrives. This is normal and expected – caching benefits pages with repeated requests, not pages with sparse traffic.

The practical impact is that your most popular pages (homepage, top blog posts, product pages) benefit the most from caching, while deep archive pages and rarely visited content are effectively uncached.

Measuring cache performance

Check the cache status header

Most caching systems add a response header indicating whether the request was a cache hit or miss. Check it with curl:

curl -I https://yourdomain.com/

Look for headers like:

X-FastCGI-Cache: HIT
X-Cache: HIT
X-Cache-Status: HIT

A MISS value means the page was generated dynamically. A HIT means it was served from cache. Some systems also return STALE (serving an expired cached version while regenerating in the background) or BYPASS (caching was intentionally skipped for this request).

Test several URLs to understand your cache hit rate across different page types:

# Homepage (should be HIT after first request)
curl -sI https://yourdomain.com/ | grep -i cache

# Blog post
curl -sI https://yourdomain.com/popular-post/ | grep -i cache

# URL with query string
curl -sI "https://yourdomain.com/?utm_source=test" | grep -i cache

Cache hit ratio

The cache hit ratio is the percentage of requests served from cache versus total requests. A WordPress site with server-level page caching should aim for 80-95% hit ratio on anonymous traffic. If your hit ratio is below 70%, investigate what is causing misses.

Calculate the hit ratio from Nginx access logs:

# Count HITs and MISSes from the upstream cache status
awk '{print $NF}' /var/log/nginx/access.log | sort | uniq -c | sort -rn

(This assumes the cache status is logged in the access log format, which varies by configuration.)
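Once the HIT/MISS counts are available, the ratio is simple arithmetic. The sketch below pipes in sample statuses; in practice you would feed it the real status column from the log:

```shell
# Sample cache statuses; in practice: awk '{print $NF}' /var/log/nginx/access.log
printf '%s\n' HIT HIT HIT MISS HIT BYPASS HIT MISS |
  awk '{ n[$1]++ }
       END { printf "HIT=%d MISS=%d ratio=%.1f%%\n",
                    n["HIT"], n["MISS"], 100 * n["HIT"] / (n["HIT"] + n["MISS"]) }'
# → HIT=5 MISS=2 ratio=71.4%
```

Note that BYPASS requests are excluded from the ratio here – they were never cacheable, so they say nothing about cache effectiveness.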

Object cache statistics

If you use Memcached or Redis for object caching, check the hit rate:

# Memcached stats (send "quit" so nc exits after the response)
printf 'stats\nquit\n' | nc localhost 11211 | grep -E "get_hits|get_misses"

A healthy object cache has a hit rate above 90%. If misses are high, WordPress is generating unique cache keys that are not being reused, or the cache is too small and evicting entries before they are requested again.
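The hit rate itself is get_hits / (get_hits + get_misses). With hypothetical counter values (in practice, parse them from the stats output):

```shell
# Hypothetical counters – substitute the real get_hits / get_misses values
get_hits=94210
get_misses=3105
awk -v h="$get_hits" -v m="$get_misses" \
    'BEGIN { printf "object cache hit rate: %.1f%%\n", 100 * h / (h + m) }'
# → object cache hit rate: 96.8%
```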

Browser developer tools

In Chrome or Firefox developer tools, the Network tab shows which assets were loaded from cache (status “304 Not Modified” or “(from disk cache)”) versus downloaded fresh (status “200”). On repeat visits, most static assets should load from browser cache.

The performance impact of cache misses

Server load

Every cache miss requires the full page generation process: PHP execution, database queries, template rendering. On a server handling 100 requests per second with a 90% hit rate, only 10 requests per second reach PHP. If the hit rate drops to 70%, 30 requests per second reach PHP – three times the load. PHP-FPM has a limited number of worker processes, and when they are all busy, new requests queue up, causing 503 errors.

Time to first byte (TTFB)

TTFB measures how long it takes for the browser to receive the first byte of the response. A cache hit typically delivers the first byte in under 50 milliseconds. A cache miss might take 300-800 milliseconds. Visitors notice this delay, especially on mobile connections where network latency adds to the server processing time.

Database load

Each cache miss triggers WordPress’s database queries. A typical WordPress page runs 20-50 queries. A WooCommerce product page can run 100+. With high miss rates, the database becomes the bottleneck – queries queue up, response times increase, and eventually connections start failing. See MySQL server has gone away: what it means and how to fix it for how database overload manifests.

Traffic spikes

Cache misses are most damaging during traffic spikes. If a post goes viral and receives 10,000 visitors in an hour, a cached site serves the same static file to all of them with minimal server load. An uncached site tries to generate 10,000 dynamic pages, likely overwhelming PHP-FPM and the database.

This is why clearing the cache right before a known traffic event (product launch, marketing campaign) is risky. The spike hits an empty cache, and every request is a miss simultaneously. If possible, warm the cache by visiting the key pages before the traffic arrives.

Reducing cache misses

Increase TTL

If your content does not change frequently, increase the cache duration. A blog that publishes once a week does not need a 1-hour TTL – a 24-hour or even 7-day TTL is more appropriate. Longer TTLs mean fewer expirations and fewer misses.
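With Nginx FastCGI caching, the TTL is set per response code via fastcgi_cache_valid. A sketch for a site that publishes weekly:

```nginx
fastcgi_cache_valid 200 301 7d;   # pages and permanent redirects: long TTL
fastcgi_cache_valid 404     10m;  # keep error responses short-lived
```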

The concern with long TTLs is stale content. The solution is event-based purging rather than time-based expiration – invalidate specific pages when they change, rather than expiring all pages on a timer.

Use event-based cache purging

Instead of expiring cached pages on a timer, invalidate them when the content actually changes. When you publish a post, purge the post URL, the homepage, the relevant category and tag archives, and the RSS feed. Everything else stays cached.

Smart purging is the difference between a cache that works and a cache that constantly rebuilds itself. For a deep dive into how event-based purging works with WordPress hooks, see How Hostney handles WordPress cache purging automatically.

Strip tracking parameters from cache keys

Configure your caching layer to ignore known tracking parameters when generating cache keys. The URL example.com/page/ and example.com/page/?utm_source=twitter should resolve to the same cached entry because they serve identical content.

Common parameters to strip: utm_source, utm_medium, utm_campaign, utm_term, utm_content, gclid, fbclid, _ga, mc_cid, mc_eid.
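As a rough sketch of the normalization step, stripping those parameters before the URL becomes a cache key might look like this (real implementations do it in the web server or CDN config, not in shell):

```shell
url="https://example.com/page/?utm_source=twitter&utm_medium=social&ref=nav"
# Remove known tracking parameters, then tidy any dangling ? or &
clean=$(echo "$url" |
  sed -E 's/(utm_[a-z]+|gclid|fbclid|_ga|mc_[ce]id)=[^&]*&?//g; s/[?&]$//')
echo "$clean"
# → https://example.com/page/?ref=nav
```

The tracking variants now all normalize to the same key, so they share one cache entry instead of fragmenting the cache.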

Use object caching for dynamic pages

Pages that cannot be page-cached (logged-in user pages, WooCommerce cart, admin dashboard) still benefit from object caching. Memcached or Redis caches the database query results in memory, so even though PHP still executes, the database queries are served from memory instead of hitting MySQL.

The object cache turns a 500ms cache miss into a 150ms cache miss by eliminating the database round-trips. It is not as fast as a page cache hit, but it is significantly better than hitting the database on every request.

Cache warming

After a cache clear, the first visitor to each page experiences a miss. For high-traffic sites, proactively “warming” the cache by crawling your own pages immediately after a deploy or cache clear ensures visitors always get cache hits.

A simple cache warming script:

# Crawl the sitemap to warm the cache.
# Note: wp-sitemap.xml is a sitemap index, so fetch each sub-sitemap first,
# then every URL listed inside it.
curl -s https://yourdomain.com/wp-sitemap.xml | grep -oP '<loc>\K[^<]+' | while read -r sitemap; do
    curl -s "$sitemap" | grep -oP '<loc>\K[^<]+' | while read -r url; do
        curl -s -o /dev/null "$url"
        sleep 0.1   # be gentle with your own server
    done
done

This fetches every URL in your sitemap, populating the cache before real visitors arrive.

Thundering herd protection

When a popular cached page expires, multiple simultaneous visitors may all experience a cache miss at the same time, each triggering a full page generation. This is the “thundering herd” problem – instead of one miss, you get hundreds.

The solution is cache locking: when the first miss occurs, the server locks that cache key and queues subsequent requests. Only one PHP process generates the page. Once it is cached, the queued requests are served from the fresh cache entry. Without this protection, a single expired page can cause a load spike.
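In Nginx, this protection maps to a handful of directives (shown for the FastCGI cache; proxy_cache has equivalents):

```nginx
fastcgi_cache_lock on;                 # only one request regenerates a missing entry
fastcgi_cache_lock_timeout 5s;         # others wait up to 5s for the fresh copy
fastcgi_cache_use_stale updating error timeout;  # serve the stale copy meanwhile
fastcgi_cache_background_update on;    # refresh expired entries in the background
```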

How Hostney handles caching

Hostney’s caching operates at three layers, all managed by the platform.

Nginx FastCGI cache (page cache). Every anonymous request is checked against the Nginx FastCGI cache before it reaches PHP. Cache hits are served directly by Nginx with response times in the single-digit milliseconds. The cache automatically bypasses for logged-in users, POST requests, WooCommerce dynamic pages (cart, checkout, my-account), and WordPress admin pages. The response header X-FastCGI-Cache shows HIT or MISS on every request so you can verify caching is working.

Thundering herd protection is built in – when a cached page expires, the first request regenerates the page while subsequent requests are served the stale version until the fresh copy is ready. This prevents load spikes from simultaneous cache misses.

Memcached (object cache). Each hosting account can enable its own isolated Memcached instance from the control panel. With the Hostney Cache plugin installed, WordPress stores database query results in Memcached instead of re-querying MySQL. This benefits cache-miss pages most – WooCommerce checkout, logged-in dashboards, and admin pages all run faster because their repetitive database queries hit memory instead of disk.

Event-based purging. The Hostney Cache plugin hooks into WordPress events (post publish, comment approval, taxonomy changes) and purges only the affected URLs. Publishing a blog post purges the post URL, homepage, category archives, and RSS feed – everything else stays cached. This keeps the cache hit rate high by avoiding unnecessary invalidation.

Tracking parameters (utm_*, gclid, fbclid, _ga) are stripped from cache keys automatically, preventing marketing campaigns from fragmenting the cache and causing unnecessary misses.

Summary

A cache miss means the server had to generate the page from scratch instead of serving a pre-built copy. The performance difference is significant – cache hits take 10-30 milliseconds while misses take 300-800 milliseconds. High miss rates increase server load, slow down response times, and can cause failures during traffic spikes.

The most effective ways to reduce cache misses are: increase TTL so pages stay cached longer, use event-based purging instead of time-based expiration (purge only what changed), strip tracking parameters from cache keys, use object caching for pages that cannot be page-cached, and warm the cache after clears. For a deeper look at how event-based purging works with WordPress, see How Hostney handles WordPress cache purging automatically.