hectoday

HTTP/1.1 vs HTTP/2

Connections got smarter over time

The previous lesson showed the cost of opening a connection: DNS, TCP, TLS. Hundreds of milliseconds before any HTTP data moves. In the early days of the web, that cost was paid for every single request. As websites grew more complex, this became a real problem. Let’s look at how HTTP evolved to solve it.

HTTP/1.0: one request, one connection

The original HTTP (1.0) opened a brand-new TCP connection for every request. Want the HTML page? New connection. Want the CSS file? New connection. Want an image? New connection. Each connection required a TCP handshake, (for HTTPS) a TLS handshake, and a DNS lookup if the name was not already cached.

A page with 50 resources needed 50 separate connections. That was hundreds of milliseconds of handshake overhead, multiplied by 50. Slow.
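The arithmetic is worth making concrete. A quick back-of-envelope sketch, with invented (but plausible) per-step latencies:

```javascript
// Back-of-envelope cost of opening a fresh connection per request.
// The per-step latencies are illustrative, not measurements.
const DNS_MS = 50;
const TCP_MS = 100;
const TLS_MS = 150;
const HANDSHAKE_MS = DNS_MS + TCP_MS + TLS_MS; // 300 ms per new connection

const resources = 50;

const perRequestConnections = resources * HANDSHAKE_MS; // a connection per resource
const oneReusedConnection = HANDSHAKE_MS;               // pay the cost once

console.log(`${resources} connections: ${perRequestConnections} ms of handshakes`); // 15000 ms
console.log(`1 reused connection: ${oneReusedConnection} ms of handshakes`);        // 300 ms
```

Fifteen seconds of pure handshake overhead versus a third of a second, before a single byte of page content moves.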

HTTP/1.1: keep the connection open

HTTP/1.1 arrived in 1997 and made persistent connections the default: the TCP connection stays open after the first response, and subsequent requests reuse it. The Connection: keep-alive header makes this explicit (it is also how HTTP/1.0 clients opted in):

Connection: keep-alive

One connection handles many requests, one after the other:

Client                    Server
  |--- GET /page -------->|
  |<-- 200 OK ------------|
  |--- GET /style.css --->|    <- same connection
  |<-- 200 OK ------------|
  |--- GET /logo.png ---->|    <- same connection
  |<-- 200 OK ------------|

The DNS, TCP, and TLS handshakes happen once; every request after that skips them. This was a huge improvement, and it is what the Deploying course’s Nginx configuration enables with keepalive_timeout.

The head-of-line blocking problem

Keep-alive was great, but it had a limitation. HTTP/1.1 serves one request at a time on each connection, and even when requests are sent back-to-back (HTTP/1.1 pipelining allows this), the responses must come back in order. If the first request takes a long time (say, a heavy database query), the second response has to wait behind it, even if the server could produce it instantly.

Client                    Server
  |--- GET /slow -------->|
  |    (waiting...)       |    (processing slow query)
  |--- GET /fast -------->|    <- queued, cannot start yet
  |<-- 200 (slow) --------|
  |<-- 200 (fast) --------|    <- finally gets processed

This is called head-of-line blocking. The slow request at the “head of the line” blocks everything behind it.

Browsers worked around this by opening 6 to 8 parallel connections to each server. More connections means more requests can be in flight at once. But each connection uses memory and resources on both the client and server. It was a workaround, not a real fix.

HTTP/2: multiplexing

HTTP/2 arrived in 2015 and solved head-of-line blocking properly with multiplexing. Multiple requests and responses travel over a single connection at the same time. Each request becomes a separate “stream” that can be sent, received, and processed independently.

Client                    Server
  |--- Stream 1: GET /slow -->|
  |--- Stream 2: GET /fast -->|   <- sent immediately, not queued
  |<-- Stream 2: 200 (fast) --|   <- fast response arrives first
  |<-- Stream 1: 200 (slow) --|   <- slow response arrives when ready

One connection. No head-of-line blocking. No need for 6 to 8 parallel connections. The fast response arrives first because it is not blocked by the slow one.
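The difference between the two diagrams can be captured in a toy timing model. The service times below are invented; only the ordering matters:

```javascript
// Toy model: when does each response finish on one connection?
const requests = [
  { name: "/slow", ms: 300 },
  { name: "/fast", ms: 10 },
];

// HTTP/1.1: responses return strictly in request order, so each one
// waits for everything ahead of it in the queue.
function serial(reqs) {
  let clock = 0;
  return reqs.map((r) => ({ name: r.name, doneAt: (clock += r.ms) }));
}

// HTTP/2: each stream finishes on its own schedule, independent of
// the other streams sharing the connection.
function multiplexed(reqs) {
  return reqs.map((r) => ({ name: r.name, doneAt: r.ms }));
}

console.log(serial(requests));
// [ { name: '/slow', doneAt: 300 }, { name: '/fast', doneAt: 310 } ]
console.log(multiplexed(requests));
// [ { name: '/slow', doneAt: 300 }, { name: '/fast', doneAt: 10 } ]
```

In the serial model the fast response inherits the slow one’s delay; in the multiplexed model it does not.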

HTTP/2 also uses binary framing (instead of plain text like HTTP/1.1) and header compression (called HPACK) to reduce redundant header data. When you send the same headers on every request (and you usually do), HTTP/2 compresses them so they take up less bandwidth.

Your code does not change

Here is the best part. HTTP/2 is handled entirely by the infrastructure: Nginx, the Node.js runtime, the browser. Your application code stays the same. You write the same routes, the same headers, the same responses.

// This code works identically on HTTP/1.1 and HTTP/2
route.get("/books", {
  resolve: () => {
    const books = db.prepare("SELECT ...").all();
    return Response.json(books);
  },
});

[!NOTE] The Deploying with Docker course’s Nginx configuration enables HTTP/2 with listen 443 ssl http2. It is a server configuration change, not an application code change.
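For reference, a minimal sketch of the kind of server block that note describes. The domain and certificate paths are placeholders, and on Nginx 1.25.1+ the http2 flag has moved from listen to a separate http2 on; directive:

```nginx
server {
    listen 443 ssl http2;              # TLS plus HTTP/2 on this port
    server_name example.com;           # placeholder domain

    ssl_certificate     /path/to/fullchain.pem;  # placeholder paths
    ssl_certificate_key /path/to/privkey.pem;

    keepalive_timeout 65;              # keep HTTP/1.1 connections open
}
```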

What about HTTP/3?

HTTP/3 is the latest version, based on a protocol called QUIC. It replaces TCP with a new transport layer that solves TCP-level head-of-line blocking (a problem that even HTTP/2 multiplexing could not fully fix). HTTP/3 is still being adopted, but the concepts from this lesson (multiplexing, binary framing) all carry forward.

The next lesson covers a fundamental property of HTTP that affects how you design every application: HTTP is stateless, and cookies are how applications work around that.

Exercises

Exercise 1: Open your browser’s developer tools. Click on any request in the Network tab. Look for “Protocol” in the details. h2 means HTTP/2.

Exercise 2: Use curl --http2 https://example.com and compare the response headers to curl --http1.1 https://example.com.

Exercise 3: Load a page with many resources. Compare the waterfall chart in the Network tab to see how HTTP/2 multiplexing affects loading order.

What is the main advantage of HTTP/2 over HTTP/1.1?


© 2026 hectoday. All rights reserved.