
HTTP/3 over QUIC: Why the Internet Finally Let Go of TCP (and Why That Matters)

  • dotsincloud
  • 2 days ago
  • 3 min read

For years, TCP was the backbone of the internet. Reliable. Predictable. Battle-tested.

And also… increasingly a bottleneck.


If you’ve ever wondered why we needed HTTP/3 when HTTP/2 already felt “fast enough,” the answer isn’t hype or incremental optimization. It’s a fundamental shift in how modern networks behave — especially mobile, cloud, and globally distributed systems.


This post walks through why HTTP/2 still wasn’t enough, what QUIC actually fixes, and why HTTP/3 is more about resilience than raw speed.


How We Got Here (A Very Practical History)


HTTP/1.1: It Worked… Until It Didn’t


HTTP/1.1 was designed in a much simpler time:


  • Fewer assets per page

  • Mostly desktop users

  • Stable wired networks


As the web evolved, cracks started showing:

  • One slow request could block everything behind it

  • Browsers opened multiple TCP connections just to keep pages loading

  • Headers were repeated over and over again

  • Latency piled up quickly


It wasn’t broken — it was just stretched beyond what it was designed for.


HTTP/2: A Massive Improvement

HTTP/2 fixed a lot of real problems:

  • Multiplexing (multiple requests at once)

  • Binary framing

  • Header compression

  • Request prioritization

  • Fewer TCP connections


For most people, HTTP/2 felt like the solution. Pages loaded faster. Fewer hacks were needed. Life was good.


But under the hood, one core issue remained.


The Hidden Problem with HTTP/2


HTTP/2 still runs on TCP.


And TCP has one rule it will never compromise on:

Packets must be delivered in order.

That sounds reasonable — until you multiplex everything over a single connection.


Here’s what actually happens:

  • Multiple HTTP/2 streams share one TCP connection

  • One packet gets lost (which is common on mobile or Wi-Fi)

  • TCP pauses all streams until that packet is retransmitted


Even streams that had nothing to do with the lost packet.


This is called TCP-level head-of-line blocking, and HTTP/2 cannot fix it — because it lives below HTTP.
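
To make that concrete, here is a tiny, self-contained Python sketch of the delivery rule (a toy model, not real TCP or HTTP/2): frames from three streams share one ordered sequence, one packet goes missing, and nothing behind the gap reaches the application until the retransmission lands.

```python
# A toy simulation of TCP-level head-of-line blocking (not real TCP).
# Three HTTP/2 streams share one ordered connection; losing one packet
# stalls delivery of *every* stream's data until it is retransmitted.

# Each packet: (sequence number, stream id, payload). Packet 2 is "lost"
# on the first transmission and only arrives after a retransmit at the end.
arrivals = [
    (1, "A", "A1"),
    (3, "C", "C1"),   # arrives fine, but can't be delivered: seq 2 is missing
    (4, "A", "A2"),
    (5, "B", "B2"),
    (2, "B", "B1"),   # the retransmitted packet finally shows up
]

expected = 1
held_back = {}        # out-of-order packets TCP keeps buffered

for seq, stream, payload in arrivals:
    held_back[seq] = (stream, payload)
    # TCP only releases a contiguous, in-order prefix of the byte stream.
    while expected in held_back:
        stream_id, data = held_back.pop(expected)
        print(f"delivered seq {expected} (stream {stream_id}: {data})")
        expected += 1
    if held_back:
        print(f"  all streams stalled; buffering {sorted(held_back)} until seq {expected} arrives")
```

Streams A and C did nothing wrong, yet their data sits in the buffer until stream B's retransmission arrives.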


Add to that:

  • TCP + TLS handshakes taking multiple round trips

  • Connections breaking when switching from Wi-Fi to cellular

  • Performance degradation on lossy networks (4G, 5G, hotel Wi-Fi)

HTTP/2 fixed the HTTP layer. But the real bottleneck was deeper.


Enter QUIC: Fixing the Transport Layer Itself

Instead of trying to patch around TCP, QUIC takes a different approach.

QUIC:

  • Runs over UDP

  • Implements reliability, congestion control, and encryption itself

  • Builds multiplexing directly into the transport layer


This is the key idea:

Each stream is independent.

If one stream loses packets:

  • Only that stream slows down

  • Everything else keeps moving


No global pause. No cascading slowdown.
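
Reusing the toy model from above, here is roughly how per-stream ordering changes the picture (again a sketch of the idea, not the actual QUIC wire format):

```python
# Same toy arrivals as before, but with QUIC-style per-stream ordering.
# Loss on stream B no longer holds back streams A and C.

arrivals = [
    ("A", 1, "A1"),
    ("C", 1, "C1"),
    ("A", 2, "A2"),
    ("B", 2, "B2"),   # B's first chunk was lost, so only stream B waits
    ("B", 1, "B1"),   # the retransmission fills the gap for B alone
]

expected = {}         # next offset each stream is waiting for
held_back = {}        # (stream, offset) -> payload, buffered per stream

for stream, offset, payload in arrivals:
    held_back[(stream, offset)] = payload
    expected.setdefault(stream, 1)
    # Deliver a contiguous prefix *per stream*; other streams are unaffected.
    while (stream, expected[stream]) in held_back:
        data = held_back.pop((stream, expected[stream]))
        print(f"stream {stream}: delivered {data}")
        expected[stream] += 1
```

A and C are handed to the application the moment they arrive; only B waits for its own retransmission.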


Why UDP Isn’t “Unreliable” Here


A common reaction is:

“Wait… UDP? Isn’t that unreliable?”

Raw UDP is. QUIC is not.


QUIC includes:

  • Packet loss recovery

  • Congestion control

  • Flow control

  • Mandatory TLS 1.3 encryption


The difference is control. QUIC can make smarter decisions than TCP because it isn’t locked into decades-old assumptions.
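
As a rough illustration of "reliability built above an unreliable layer", here is a minimal retransmit loop (simulated loss, no real sockets, and nothing like QUIC's actual loss-detection timers):

```python
# A minimal sketch of reliability built on top of an unreliable layer.
# No real sockets here: send() randomly drops datagrams, the way UDP can.
# Real QUIC loss recovery (RFC 9002) is far more sophisticated; this only
# shows that retransmission lives *above* UDP, inside QUIC itself.
import random

random.seed(7)                                     # deterministic demo
unacked = {n: f"packet-{n}" for n in range(1, 6)}  # packet number -> payload
received = {}

def send(num, payload):
    """Pretend to put a datagram on the wire; roughly 30% are silently lost."""
    if random.random() < 0.3:
        print(f"  packet {num}: lost in transit")
    else:
        received[num] = payload
        print(f"  packet {num}: arrived")

round_trips = 0
while unacked:
    round_trips += 1
    print(f"round trip {round_trips}: sending {sorted(unacked)}")
    for num, payload in list(unacked.items()):
        send(num, payload)
    # Whatever the peer acknowledged is done; the rest goes out again.
    for num in list(unacked):
        if num in received:
            del unacked[num]

print(f"all packets delivered after {round_trips} round trip(s)")
```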


Faster Handshakes, Fewer Delays


Another major improvement: connection setup.


With TCP:

  • 3-way handshake

  • Then TLS handshake

  • Multiple round trips before data flows


With QUIC:

  • Transport and TLS are combined

  • New connections take 1 RTT

  • Returning connections can use 0-RTT


In practical terms:

  • Data starts flowing much sooner

  • High-latency links suffer far less
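
To put rough numbers on that, assume a 100 ms round trip. The RTT counts below are the commonly cited ones; real-world setups vary with TLS version, session resumption, and TCP Fast Open.

```python
# Back-of-the-envelope handshake cost before the first request can be sent,
# assuming a 100 ms round trip. Counts are approximate and illustrative.

rtt_ms = 100

setups = {
    "TCP + TLS 1.2 (fresh)":   3,  # TCP handshake + 2-RTT TLS handshake
    "TCP + TLS 1.3 (fresh)":   2,  # TCP handshake + 1-RTT TLS handshake
    "QUIC (fresh)":            1,  # transport + TLS 1.3 combined
    "QUIC (0-RTT resumption)": 0,  # data rides along with the first flight
}

for label, rtts in setups.items():
    print(f"{label:26s} ~{rtts} RTT -> {rtts * rtt_ms:3d} ms of pure waiting")
```

On a satellite or congested mobile link where the RTT is several hundred milliseconds, that difference is very visible to users.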


Mobile Networks: Where HTTP/3 Really Shines


Modern internet traffic isn’t static:

  • Phones move between networks

  • IP addresses change

  • Connections drop constantly


TCP connections are tied to:

  • IP address

  • Port


Change either, and the connection dies.


QUIC uses connection IDs, not IP/port pairs. That means:

  • Switching from Wi-Fi to cellular doesn’t kill the connection

  • Sessions continue without renegotiation

  • Far fewer retries and reconnect storms


This alone makes HTTP/3 a huge win for mobile and edge workloads.
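
A simplified way to picture it (the connection ID value and session fields below are made up for illustration, not the real QUIC state machine):

```python
# Why connection IDs survive network changes.
# A TCP-style endpoint keys its state on (client IP, client port);
# a QUIC-style endpoint keys it on a connection ID chosen at handshake time.

tcp_sessions  = {}   # (ip, port) -> session state
quic_sessions = {}   # connection id -> session state

# Client connects over Wi-Fi...
tcp_sessions[("203.0.113.7", 51324)] = {"user": "alice", "stream_pos": 812}
quic_sessions["c3f1a9d2"]            = {"user": "alice", "stream_pos": 812}

# ...then walks out the door and switches to cellular: new IP, new port.
new_source = ("198.51.100.42", 60011)

print("TCP lookup :", tcp_sessions.get(new_source))    # None -> connection is dead
print("QUIC lookup:", quic_sessions.get("c3f1a9d2"))   # same session, keeps going
```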


HTTP/2 vs HTTP/3


  • HTTP/2 made the web faster when networks behave well

  • HTTP/3 keeps things fast when networks behave badly

Packet loss, mobility, latency — these are normal now, not edge cases.


Why HTTP/3 Isn’t “Just a Browser Upgrade”

It’s tempting to think HTTP/3 only matters for page load times.

In reality, it changes how systems behave under stress:

  • Packet loss doesn’t ripple across unrelated work

  • Long-lived connections survive network changes

  • Latency spikes hurt less

  • Retries and reconnects drop dramatically

That matters far beyond browsers.


Closing Thoughts


HTTP/3 isn’t about squeezing out a few extra milliseconds in perfect conditions.

It’s about designing for the real internet:

  • Mobile

  • Cloud

  • Edge

  • Globally distributed

  • Slightly unreliable by default

HTTP/2 optimized the web. HTTP/3 makes it resilient.


Coming next: I’ll dive into what this actually means for DevOps and AI/ML workloads — CI/CD pipelines, APIs, observability, streaming inference, and agent-based systems.

That’s where the impact gets really interesting 👀
