When you load a web page, your browser and the server have a conversation. The browser asks for files, the server sends them back. The rules governing that conversation are defined by HTTP, the Hypertext Transfer Protocol.
For over fifteen years, those rules were HTTP/1.1. It worked, but it was showing its age. Pages were getting heavier, users expected them to load faster, and the protocol itself had become the bottleneck. HTTP/2, finalized in 2015, was built to fix that.
Here’s what actually changed between the two versions, why it matters, and what it means for your website.
How HTTP/1.1 loads a web page
To understand why HTTP/2 exists, you need to see where HTTP/1.1 falls short.
HTTP/1.1 follows a strict request-response pattern. Your browser asks for the HTML file. The server sends it. The browser parses the HTML and discovers it needs a CSS file, three JavaScript files, and twelve images. It requests each one, waits for the response, then moves on to the next.
The problem is that a single HTTP/1.1 connection can only handle one request at a time. While your browser waits for that large JavaScript bundle, everything else queued behind it sits idle. This is called head-of-line blocking, and it’s the single biggest performance limitation of HTTP/1.1.
Developers worked around this for years. The most common trick was domain sharding: splitting resources across multiple subdomains (images.example.com, static.example.com) so the browser would open separate connections to each one. Browsers also opened up to six parallel connections per domain. But these were workarounds for a protocol limitation, not real solutions. They added complexity, consumed more server resources, and introduced their own overhead from repeated TCP handshakes and TLS negotiations.
What HTTP/2 does differently
HTTP/2 doesn’t change what HTTP does. It changes how the data moves across the wire. The semantics are the same (GET, POST, headers, status codes), but the transport mechanism is fundamentally redesigned.
Multiplexing: the biggest improvement
This is the headline feature, and it solves head-of-line blocking at the HTTP layer.
With HTTP/1.1, a connection is occupied until the current response finishes. With HTTP/2, a single connection can carry dozens of requests and responses simultaneously. The data is broken into small frames, tagged with a stream identifier, interleaved on the wire, and reassembled at the other end.
In practice, this means your browser opens one connection to the server and fires off all its requests at once. The CSS, JavaScript, and images all download concurrently over that single connection. No waiting, no blocking, no need for domain sharding hacks.
The performance difference is most visible on pages with many assets, which is virtually every modern website. A page loading 40 resources over HTTP/1.1 might need 6 connections and several sequential round trips. Over HTTP/2, those same 40 resources flow through one connection in parallel.
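A back-of-envelope model makes the difference concrete. This sketch uses made-up per-resource times, not measurements: serialized HTTP/1.1 requests on one connection cost roughly the sum of the individual times, while idealized HTTP/2 multiplexing overlaps them, so total time approaches the slowest single resource.

```python
# Illustrative model (hypothetical numbers): total load time for N
# resources over one connection, HTTP/1.1 vs idealized HTTP/2.

def http1_total(times_ms):
    # HTTP/1.1: one request at a time, so times add up
    return sum(times_ms)

def http2_total(times_ms):
    # Idealized multiplexing: all streams share the wire concurrently,
    # so the total approaches the slowest resource
    return max(times_ms)

resources = [120, 80, 200, 60, 90]  # five hypothetical resources, in ms
print(http1_total(resources))       # 550
print(http2_total(resources))       # 200
```

Real connections share bandwidth, so the multiplexed case won't literally equal the slowest resource, but the shape of the comparison holds: the more assets a page has, the wider the gap grows.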
Header compression with HPACK
Every HTTP request and response carries headers: metadata like content type, caching directives, cookies, and authentication tokens. In HTTP/1.1, these headers are sent as plain text, repeated in full with every single request.
Visit a site with 50 resources on the page, and your browser sends roughly the same set of headers 50 times. Cookies alone can add kilobytes of redundant data to each request.
HTTP/2 uses HPACK compression, which does two things. First, it encodes headers in a compact binary format instead of plain text. Second, it maintains a header table that tracks previously sent headers, so only values that actually changed need to be transmitted. Headers that are identical to the previous request are represented by a single index reference.
On cookie-heavy sites or applications with large authentication headers, this can reduce header overhead by 85-90%.
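The dynamic-table idea is easy to sketch. This toy encoder (not the real HPACK wire format, which also has a static table and Huffman coding) shows the core trick: a header pair seen before is replaced by a single small index.

```python
# Toy sketch of HPACK's dynamic-table concept: repeated (name, value)
# header pairs collapse to an integer index on later requests.

class ToyHeaderTable:
    def __init__(self):
        self.table = []  # (name, value) pairs seen so far

    def encode(self, headers):
        encoded = []
        for pair in headers:
            if pair in self.table:
                encoded.append(self.table.index(pair))  # one small int
            else:
                self.table.append(pair)
                encoded.append(pair)                    # full literal
        return encoded

enc = ToyHeaderTable()
first = enc.encode([(":method", "GET"), ("cookie", "session=abc123")])
second = enc.encode([(":method", "GET"), ("cookie", "session=abc123")])
print(second)  # [0, 1] -- both headers became table indices
```

The second request's multi-kilobyte cookie costs a single index reference, which is where the large savings on cookie-heavy sites come from.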
Server push
In HTTP/1.1, the browser has to discover that it needs a resource before it can request it. It downloads the HTML, parses it, finds a CSS link tag, and only then requests the stylesheet. That discovery step costs a full round trip.
HTTP/2 server push lets the server send resources proactively. When the server knows that a particular HTML page always needs a specific CSS file and font, it can push those resources alongside the HTML response before the browser even asks. By the time the browser parses the HTML and realizes it needs the CSS, it’s already in the browser’s cache.
Worth noting: server push sounded great in theory but turned out to be tricky in practice. Pushing resources the browser already has cached wastes bandwidth. Pushing too aggressively can delay higher-priority content. Chrome actually removed support for HTTP/2 server push in 2022, and the industry has largely moved toward better alternatives like 103 Early Hints, which tells the browser what to preload without actually pushing the bytes. Still, server push remains part of the HTTP/2 specification, and some servers and CDNs use it effectively in specific scenarios.
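For illustration, an Early Hints exchange looks roughly like this: the server sends an interim 103 response listing preload hints while it finishes generating the page, then the final response follows (the resource paths here are placeholders).

```http
HTTP/1.1 103 Early Hints
Link: </styles.css>; rel=preload; as=style
Link: </fonts/main.woff2>; rel=preload; as=font; crossorigin

HTTP/1.1 200 OK
Content-Type: text/html
```

The browser can start fetching the stylesheet and font during the server's think time, recovering most of server push's benefit without the risk of pushing bytes the client already has.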
Stream prioritization
Not all resources are equal. The CSS that controls your page layout matters more than an analytics script or a below-the-fold image. HTTP/2 lets the browser assign priority weights and dependencies to each stream, telling the server which resources to send first.
HTTP/1.1 has no concept of prioritization. Resources arrive in the order they were requested, and the browser has to be clever about the request order to get critical resources loaded first. With HTTP/2, the protocol itself handles this, and the server can make intelligent decisions about what to send when.
Binary framing
HTTP/1.1 messages are plain text. You can read them in a packet capture, which is nice for debugging but inefficient for computers. Parsing text-based protocols requires scanning for delimiters, handling edge cases in formatting, and converting strings to usable values.
HTTP/2 uses a binary framing layer that splits communication into frames, each with a fixed structure that’s fast to parse. Binary framing reduces parsing overhead, eliminates ambiguities in the protocol, and makes features like multiplexing and prioritization possible in the first place.
The trade-off is that you can’t read HTTP/2 traffic in a text editor anymore. But developer tools in every major browser decode HTTP/2 frames automatically, so debugging hasn’t actually gotten harder.
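To show how simple binary framing is to parse, here is a sketch of decoding the fixed 9-byte HTTP/2 frame header defined in RFC 7540: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a 31-bit stream identifier (the top bit is reserved).

```python
# Parse the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1).

def parse_frame_header(data: bytes):
    length = int.from_bytes(data[0:3], "big")        # 24-bit payload length
    frame_type = data[3]                             # 8-bit frame type
    flags = data[4]                                  # 8-bit flags
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF  # drop reserved bit
    return length, frame_type, flags, stream_id

# Hand-built example: a HEADERS frame (type 0x1) with a 13-byte payload,
# the END_HEADERS flag (0x4), on stream 1.
header = (13).to_bytes(3, "big") + bytes([0x1, 0x4]) + (1).to_bytes(4, "big")
print(parse_frame_header(header))  # (13, 1, 4, 1)
```

No delimiter scanning, no string parsing: four fixed-offset reads. That predictability is what lets implementations interleave and route frames cheaply.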
Stream dependencies
HTTP/2 allows streams to declare dependencies on other streams, creating a dependency tree. This gives the server a roadmap for the optimal delivery order. For example, you can specify that the CSS stream should complete before image streams begin, because the browser can’t render the page layout without the stylesheet.
Combined with prioritization weights, stream dependencies let the server make granular decisions about resource delivery that HTTP/1.1 simply cannot express.
Where HTTP/1.1 still holds up
HTTP/1.1 isn’t going away tomorrow. It has two practical advantages:
Universal compatibility. Every web server, proxy, CDN, and client on the planet supports HTTP/1.1. Some older systems, embedded devices, and corporate proxies still don’t fully support HTTP/2. If you need to guarantee connectivity in constrained environments, HTTP/1.1 is the safe default.
Simplicity. HTTP/1.1 is straightforward to implement, debug, and reason about. The text-based protocol is easy to inspect. There are no stream states to manage, no priority trees to configure, and no binary frames to decode. For simple APIs or internal services where performance isn’t the primary concern, HTTP/1.1’s simplicity is a legitimate advantage.
That said, HTTP/2 support is now standard in all modern browsers, web servers (Apache, Nginx, LiteSpeed, IIS), and CDNs. The “compatibility” argument has narrowed significantly since 2015.
When to use which protocol
HTTP/1.1 makes sense for legacy systems that can’t be upgraded, simple internal services, or environments where HTTP/2 proxying isn’t available.
HTTP/2 is the right choice for any modern website or application. The performance gains from multiplexing and header compression alone justify the switch, and the migration is usually transparent since HTTP/2 is backward compatible at the application level. Most modern web servers enable HTTP/2 by default.
HTTP/3 is the best choice when available, and it increasingly is. It delivers the biggest improvements on high-latency and lossy connections (mobile networks, users far from your server, congested Wi-Fi). The 0-RTT connection resumption and transport-layer stream independence make a noticeable difference for repeat visitors and for pages with many assets. If your hosting provider and CDN support HTTP/3, there’s no reason not to use it — browsers that don’t support it will fall back to HTTP/2 automatically.
What HTTP/3 brings to the table
HTTP/2 fixed head-of-line blocking at the HTTP layer, but it introduced a new problem at the transport layer. HTTP/2 runs on TCP, and TCP treats all data on a connection as a single ordered stream. If a single TCP packet is lost, the operating system holds back every byte that arrived after it until the missing packet is retransmitted. That means one lost packet on one stream stalls all streams on the connection, even though they’re logically independent. On lossy networks (mobile connections, congested Wi-Fi, intercontinental links), this TCP-level head-of-line blocking can erase the performance gains HTTP/2 worked so hard to deliver.
HTTP/3 solves this by replacing TCP entirely with QUIC, a transport protocol originally developed by Google and standardized by the IETF in 2021. QUIC runs on top of UDP and handles each stream independently at the transport layer. A lost packet on one stream only affects that stream. The other streams continue without waiting.
Faster connection setup
TCP requires a three-way handshake before any data can flow. Add TLS on top (which every modern site uses), and that's one or two more round trips before the first byte of actual content: two with TLS 1.2, one with TLS 1.3. On a connection with 100 ms of round-trip latency, that's 200-300 milliseconds of handshaking before anything useful happens.
QUIC combines the transport and encryption handshakes, so a first connection is ready after a single round trip instead of two or three. On subsequent connections to the same server, QUIC supports 0-RTT resumption, sending application data in the very first packet. For users returning to your site, the connection is essentially instant.
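The arithmetic is simple enough to write down. This sketch assumes 100 ms of round-trip latency and counts only handshake round trips, ignoring server processing time:

```python
# Handshake latency before the first useful byte, assuming 100 ms RTT.
# Counts round trips only; server think time is ignored.

RTT_MS = 100

tcp_tls12  = 3 * RTT_MS  # TCP handshake (1 RTT) + TLS 1.2 (2 RTT)
tcp_tls13  = 2 * RTT_MS  # TCP handshake (1 RTT) + TLS 1.3 (1 RTT)
quic_first = 1 * RTT_MS  # QUIC merges transport + TLS 1.3 into 1 RTT
quic_0rtt  = 0           # resumed QUIC connection: data rides the first packet

print(tcp_tls12, tcp_tls13, quic_first, quic_0rtt)  # 300 200 100 0
```

On high-latency links (satellite, intercontinental, congested mobile), that saved round trip or two is often the most noticeable improvement HTTP/3 delivers.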
Connection migration
TCP connections are tied to a specific combination of IP address and port. Switch from Wi-Fi to mobile data (or walk from one access point to another), and your IP changes. Every TCP connection drops and has to be re-established from scratch. In-progress downloads restart, and the browser has to renegotiate everything.
QUIC connections are identified by a connection ID, not by IP address. When the network changes, the connection migrates seamlessly. The download continues, the page keeps loading, and the user doesn’t notice anything happened. This matters less on desktop but significantly on mobile, where network transitions happen constantly.
Built-in encryption
TCP was designed decades before encryption became standard. TLS was bolted on later as a separate layer, which is why the handshake adds extra round trips and why unencrypted HTTP still technically exists.
QUIC has TLS 1.3 built directly into the protocol. There's no unencrypted mode: every QUIC connection is encrypted by default. This eliminates an entire class of downgrade attacks and simplifies the protocol stack.
Browser and server support
All major browsers (Chrome, Firefox, Safari, Edge) support HTTP/3. On the server side, adoption is growing: nginx added native HTTP/3 support in version 1.25, OpenResty supports it through its nginx core, LiteSpeed was an early adopter, and major CDNs like Cloudflare, Fastly, and Google Cloud serve HTTP/3 by default.
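As a rough sketch, enabling HTTP/3 in nginx 1.25+ looks something like this (certificate paths and the domain are placeholders; exact directives can vary by build):

```nginx
server {
    listen 443 quic reuseport;   # HTTP/3 over QUIC (UDP)
    listen 443 ssl;              # keep TCP for HTTP/1.1 and HTTP/2
    http2 on;

    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com.key;

    # Advertise HTTP/3 so browsers upgrade on their next request
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```

The Alt-Svc header is how the upgrade happens in practice: the first visit arrives over TCP, the browser sees the header, and subsequent requests switch to QUIC.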
At Hostney, HTTP/3 with QUIC is enabled on every hosting plan across all web servers. Both our nginx and OpenResty configurations include HTTP/3 support with QUIC retry enabled and Alt-Svc headers that tell browsers to upgrade automatically. You don’t need to configure anything. If your visitor’s browser supports HTTP/3, they’ll get it.
Protocol speed is only half the equation
HTTP/2 and HTTP/3 get your files to the browser faster, but if the server spends 500 milliseconds generating the page before sending it, the protocol improvements don’t help much. That’s where server-side caching comes in, and it’s something most hosting providers leave you to figure out on your own.
At Hostney, every hosting plan includes built-in nginx caching with full control from your dashboard. No plugins to configure, no server access required. Here’s what you get:
Per-URL cache purging. Updated a single blog post or product page? Purge just that URL instead of wiping your entire cache. You can purge individual URLs or use prefix mode to clear everything under a path (like /blog/ to purge all blog posts at once). Up to 100 URLs per request.
Full cache flush. Pushed a site-wide redesign or changed your theme? One click clears everything and your cache rebuilds as visitors arrive.
Configurable cache duration. Set how long pages stay cached: 30 minutes, 1 hour, 4 hours, 6 hours, or 24 hours. Pick the right balance between freshness and performance for your site.
Smart bypass rules. Logged-in WordPress users, shopping carts, checkout pages, and admin areas automatically bypass the cache. Your customers always see their own cart and account data, never someone else's cached version. WooCommerce pages like /cart/, /checkout/, and /my-account/ are excluded out of the box.
Custom cache exclusions. Need to exclude a specific URL path or bypass the cache when a certain cookie is present? Add your own rules per website, no config files to edit.
Query string handling. Enable caching for URLs with query parameters while automatically stripping tracking parameters like utm_source, gclid, and fbclid. The same page doesn't get cached separately just because someone clicked a tracked link in an email campaign.
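The idea behind stripping tracking parameters can be sketched in a few lines. This is an illustrative normalizer, not Hostney's actual implementation: it drops a hypothetical set of tracking parameters from the query string so tracked and untracked visits map to the same cache key.

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Hypothetical tracking-parameter list for illustration
TRACKING = {"utm_source", "utm_medium", "utm_campaign", "gclid", "fbclid"}

def cache_key(url: str) -> str:
    """Normalize a URL into a cache key by removing tracking params."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(cache_key("https://example.com/blog/post?utm_source=mail&page=2"))
# https://example.com/blog/post?page=2
```

Both the email-campaign visit and a direct visit now hit the same cached entry, instead of each tracked link spawning its own copy.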
Cache size monitoring. See exactly how much cache each of your websites is using, updated in real time from the server.
The combination of HTTP/2’s transport efficiency and server-side caching is where the real performance gains stack up. HTTP/2 delivers the bytes faster; caching means fewer bytes need to be generated in the first place.
Not a customer yet? Try our web hosting free for 14 days and see the difference.