Caching with @hectoday/http

Why Caching

  • The Same Query, A Thousand Times
  • Project Setup

HTTP Caching

  • Cache-Control Headers
  • ETags and Conditional Requests
  • Stale-While-Revalidate

Server-Side Caching

  • In-Memory Caching with Map
  • TTL and Expiration
  • Cache-Aside Pattern
  • LRU Eviction

What to Cache

  • Caching Database Queries
  • Caching Computed Results
  • Caching External API Responses

Invalidation

  • The Hardest Problem
  • Time-Based Invalidation
  • Event-Based Invalidation
  • Tag-Based Invalidation

Putting It All Together

  • Caching Checklist
  • Capstone: Caching the Book Catalog

Cache-Aside Pattern

The pattern

The previous two lessons built the pieces: a Map for storage (lesson 6) and TTL for expiration (lesson 7). Now we combine them into a formal pattern.

The cache-aside pattern (also called lazy loading) is the most common caching strategy. The application manages the cache explicitly:

  1. Check the cache for the requested data.
  2. Hit: Return the cached data.
  3. Miss: Query the database, store the result in the cache, return it.

function getTopBooks(): any[] {
  // 1. Check cache
  const cached = cacheGet<any[]>("top-books");
  if (cached !== undefined) return cached; // 2. Hit

  // 3. Miss: query database
  const books = db.prepare("SELECT ...").all();

  // Store in cache for next time
  cacheSet("top-books", books, 5 * 60 * 1000); // 5 minutes

  return books;
}

The name “cache-aside” comes from the fact that the cache sits alongside the database — the application checks both and decides which to use. The database is the source of truth. The cache is a shortcut.
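The examples here call the cacheGet and cacheSet helpers built in lessons 6 and 7. For reference, a minimal sketch of what those helpers might look like (a Map for storage plus lazy TTL expiry on read; the names match the calls above, but the details may differ from your lesson code):

```typescript
// Minimal sketch: Map storage (lesson 6) + TTL expiration (lesson 7).
type Entry = { value: unknown; expiresAt: number };

const store = new Map<string, Entry>();

export function cacheSet(key: string, value: unknown, ttlMs: number): void {
  store.set(key, { value, expiresAt: Date.now() + ttlMs });
}

export function cacheGet<T>(key: string): T | undefined {
  const entry = store.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expiresAt) {
    store.delete(key); // expired: evict lazily on read
    return undefined;
  }
  return entry.value as T;
}
```

Expiry happens lazily: an expired entry sits in the Map until the next read deletes it, which keeps writes cheap at the cost of some stale memory.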

A reusable wrapper

Instead of writing the cache-check logic in every function, create a wrapper:

// src/cache.ts
export async function cacheThrough<T>(
  key: string,
  ttlMs: number,
  fn: () => T | Promise<T>,
): Promise<T> {
  const cached = cacheGet<T>(key);
  if (cached !== undefined) return cached;

  const result = await fn();
  cacheSet(key, result, ttlMs);
  return result;
}

route.get("/books/top", {
  resolve: async () => {
    const books = await cacheThrough("top-books", 5 * 60_000, () => db.prepare("SELECT ...").all());
    return Response.json(books);
  },
});

route.get("/books/:id", {
  resolve: async (c) => {
    const book = await cacheThrough(`book:${c.input.params.id}`, 10 * 60_000, () =>
      db.prepare("SELECT ... WHERE books.id = ?").get(c.input.params.id),
    );
    if (!book) return Response.json({ error: "Not found" }, { status: 404 });
    return Response.json(book);
  },
});

cacheThrough encapsulates the cache-aside pattern: check, miss, query, store. Every cached endpoint becomes a single call with a key, a TTL, and a loader function. One subtlety: because cacheGet signals a miss by returning undefined, a loader that resolves to undefined (a book that doesn't exist) is never actually cached, so every request for a missing ID still hits the database.

Cache stampede

A popular endpoint’s cache expires. One hundred requests arrive in the same second. All 100 see a cache miss. All 100 query the database. All 100 store the result. The database gets hammered with 100 identical queries — exactly what caching was supposed to prevent.

This is a cache stampede (also called thundering herd). The fix: only one request should query the database while the others wait:

const pending = new Map<string, Promise<any>>();

export async function cacheThrough<T>(
  key: string,
  ttlMs: number,
  fn: () => T | Promise<T>,
): Promise<T> {
  const cached = cacheGet<T>(key);
  if (cached !== undefined) return cached;

  // If another request is already fetching this key, wait for it
  const inflight = pending.get(key);
  if (inflight) return inflight as Promise<T>;

  // This request fetches the data
  const promise = Promise.resolve(fn())
    .then((result) => {
      cacheSet(key, result, ttlMs);
      return result;
    })
    // Clear the in-flight entry even if fn() rejects; otherwise a
    // failed query would leave a stale promise blocking this key forever.
    .finally(() => pending.delete(key));

  pending.set(key, promise);
  return promise;
}

The first request starts the query and stores the promise in pending. The next 99 requests see the pending promise and wait for it — no duplicate queries. When the query completes, all 100 requests get the result.
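The deduplication can be verified with a small standalone simulation. A minimal cache and in-flight map are inlined here (no TTL, for brevity) so the sketch runs on its own; the counter shows how many times the underlying "query" actually executes:

```typescript
// Standalone stampede simulation: 10 concurrent callers, one query.
const cache = new Map<string, unknown>();
const pending = new Map<string, Promise<unknown>>();

async function cacheThrough<T>(key: string, fn: () => Promise<T>): Promise<T> {
  if (cache.has(key)) return cache.get(key) as T;

  // If another caller is already fetching this key, wait for it
  const inflight = pending.get(key);
  if (inflight) return inflight as Promise<T>;

  const promise = fn()
    .then((result) => {
      cache.set(key, result);
      return result;
    })
    .finally(() => pending.delete(key));

  pending.set(key, promise);
  return promise;
}

let queries = 0;
async function slowQuery(): Promise<string> {
  queries++;
  await new Promise((r) => setTimeout(r, 20)); // stand-in for the database
  return "top books";
}

const results = await Promise.all(
  Array.from({ length: 10 }, () => cacheThrough("top-books", slowQuery)),
);
console.log(queries, results.length); // 1 10
```

All ten calls start before the first query resolves, so nine of them find the in-flight promise and wait; the query counter ends at one.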

When NOT to use cache-aside

Write-heavy data: If data changes on every request, the cache is constantly being invalidated. The overhead of checking, missing, querying, and storing is worse than just querying.

User-specific data that varies widely: If each user sees different data and you have 10,000 users, you need 10,000 cache entries. The memory cost may outweigh the database savings.

Small, fast queries: If the database query takes 1ms, caching saves almost nothing. The cache check, miss, and store add complexity without meaningful speed improvement.

Exercises

Exercise 1: Implement cacheThrough. Use it for /books/top. Add logging to show cache hits and misses.

Exercise 2: Implement stampede protection. Simulate 10 concurrent requests. Verify only 1 database query runs.

Exercise 3: Apply cacheThrough to /books/:id. Verify each book ID gets its own cache entry.



© 2026 hectoday. All rights reserved.