# Event-Based Invalidation

## Invalidate on write
TTL-based invalidation accepts staleness. Event-based invalidation eliminates it: when data changes, delete the corresponding cache entries immediately. The next read sees a miss, queries the database, and caches fresh data.
```ts
// When a review is posted, invalidate affected cache entries
route.post("/books/:id/reviews", {
  resolve: async (c) => {
    const bookId = c.input.params.id as string;

    // Insert the review
    db.prepare(
      "INSERT INTO reviews (id, book_id, user_id, rating, body) VALUES (?, ?, ?, ?, ?)",
    ).run(crypto.randomUUID(), bookId, c.input.body.userId, c.input.body.rating, c.input.body.body);

    // Invalidate cache entries affected by this review
    cacheDelete(`book:${bookId}`);            // Book detail (avg_rating changed)
    cacheDelete("top-books");                 // Rankings might have changed
    cacheDelete("catalog-stats");             // Total review count changed
    cacheDelete("leaderboard:most-reviewed"); // Review count changed

    return Response.json({ status: "created" }, { status: 201 });
  },
});
```

The review is inserted, then every cache entry that depends on review data is deleted. The next request for any of those entries queries fresh data from the database.
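The `cacheDelete` helper above (and the `cacheSet`/`cacheGet` helpers used later) are assumed to come from the earlier TTL lesson. For reference, a minimal in-memory sketch of what they might look like:

```ts
// Minimal in-memory cache sketch — an assumption for illustration,
// not the course's actual implementation.
type Entry = { value: unknown; expiresAt: number };
const store = new Map<string, Entry>();

function cacheSet(key: string, value: unknown, ttlMs: number): void {
  store.set(key, { value, expiresAt: Date.now() + ttlMs });
}

function cacheGet<T>(key: string): T | undefined {
  const entry = store.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expiresAt) {
    store.delete(key); // TTL expired — treat as a miss
    return undefined;
  }
  return entry.value as T;
}

function cacheDelete(key: string): void {
  store.delete(key); // event-based invalidation: next read is a miss
}
```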
## The tracking problem
The example above hard-codes which cache entries to delete. This works for small applications with a few endpoints. It breaks down when the application grows:
- A new endpoint caches data that includes reviews. You must remember to add it to the invalidation list.
- A refactored endpoint changes its cache key. You must update every place that invalidates it.
- A developer adds a cache but forgets to add invalidation. The cache serves stale data forever.
The tag-based approach (next lesson) solves this systematically.
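Until then, one stopgap is to centralize the hard-coded key list in a single helper so every write path shares it. A sketch (the function name and the in-memory cache stub are ours, and the list still has to be kept current by hand):

```ts
// Stand-in for the real cache from earlier lessons (assumption).
const cache = new Map<string, unknown>();
function cacheDelete(key: string): void {
  cache.delete(key);
}

// Sketch: one function owns the list of review-dependent cache keys,
// so every endpoint that writes reviews calls it instead of repeating
// the list. (Hypothetical helper name; tags replace this later.)
function invalidateReviewCaches(bookId: string): void {
  const keys = [
    `book:${bookId}`,            // book detail (avg_rating changed)
    "top-books",                 // rankings might have changed
    "catalog-stats",             // total review count changed
    "leaderboard:most-reviewed", // review count changed
  ];
  for (const key of keys) cacheDelete(key);
}
```

This fixes the duplication but not the discovery problem: a developer adding a new review-dependent cache must still remember to add its key here.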
## Write-through caching
Instead of deleting the cache entry and waiting for the next read to repopulate it, update the cache immediately:
```ts
route.post("/books/:id/reviews", {
  resolve: async (c) => {
    const bookId = c.input.params.id as string;
    db.prepare("INSERT INTO reviews ...").run(/* ... */);

    // Write-through: update the cache with fresh data immediately
    const freshBook = db.prepare("SELECT ... WHERE books.id = ?").get(bookId);
    cacheSet(`book:${bookId}`, freshBook, 10 * 60_000);

    const freshTopBooks = db.prepare("SELECT ... ORDER BY avg_rating DESC LIMIT 10").all();
    cacheSet("top-books", freshTopBooks, 5 * 60_000);

    return Response.json({ status: "created" }, { status: 201 });
  },
});
```

The cache is never empty: it goes from old data directly to new data. The next read is always a cache hit with fresh data. No cache miss, no cold cache, no stampede.
The tradeoff: the write endpoint is slower because it runs extra queries to repopulate the cache. This is acceptable when reads vastly outnumber writes (the book catalog scenario).
## Write-behind caching
Write-through updates the cache during the write request, making the request slower. Write-behind defers the cache update to a background job:
```ts
route.post("/books/:id/reviews", {
  resolve: async (c) => {
    const bookId = c.input.params.id as string;
    db.prepare("INSERT INTO reviews ...").run(/* ... */);

    // Delete stale entries immediately
    cacheDelete(`book:${bookId}`);
    cacheDelete("top-books");

    // Queue cache repopulation for the background worker
    enqueue("repopulate_cache", { keys: [`book:${bookId}`, "top-books"] });

    return Response.json({ status: "created" }, { status: 201 });
  },
});
```

> [!NOTE]
> This uses the `enqueue` function from the Background Jobs course.

The cache is cleared immediately (no stale data served), and the background worker repopulates it, usually before the next read arrives. The write endpoint stays fast.
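On the worker side, the `repopulate_cache` job maps each invalidated key back to the query that rebuilds it. A sketch under stated assumptions: the handler name is ours, and the query functions stand in for the real SQL from the read endpoints:

```ts
// Sketch of the write-behind worker. The cache stub and query
// stand-ins are assumptions; in the real app they would be the
// cacheSet helper and the same queries the read endpoints run.
type RepopulateJob = { keys: string[] };
const cache = new Map<string, unknown>();
function cacheSet(key: string, value: unknown, _ttlMs: number): void {
  cache.set(key, value);
}

// Stand-ins for: SELECT ... WHERE books.id = ?
function runBookQuery(bookId: string): unknown {
  return { id: bookId };
}
// Stand-in for: SELECT ... ORDER BY avg_rating DESC LIMIT 10
function runTopBooksQuery(): unknown {
  return [];
}

// The worker's handler for the "repopulate_cache" job type.
function handleRepopulateCache(job: RepopulateJob): void {
  for (const key of job.keys) {
    if (key.startsWith("book:")) {
      cacheSet(key, runBookQuery(key.slice("book:".length)), 10 * 60_000);
    } else if (key === "top-books") {
      cacheSet(key, runTopBooksQuery(), 5 * 60_000);
    }
  }
}
```

Note that the worker re-derives which query to run from the key itself; that per-key dispatch is another list that must stay in sync with the read endpoints.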
## Choosing the strategy
| Strategy | Write speed | Read after write | Complexity |
|---|---|---|---|
| Delete (invalidate) | Fast | Cache miss (query) | Low |
| Write-through | Slower (extra queries) | Cache hit (fresh) | Medium |
| Write-behind | Fast | Cache miss briefly | Medium |
Delete is simplest and works for most cases. The brief cache miss after a write is usually imperceptible.
Write-through is best when the cached data is expensive to compute and cache misses cause noticeable latency.
Write-behind combines the benefits: fast writes and warm caches, at the cost of a brief window where the cache is cold.
## Exercises
Exercise 1: Add event-based invalidation to the review creation endpoint. Post a review. Verify the book detail cache is cleared and the next request returns fresh data.
Exercise 2: Implement write-through caching. Post a review. Verify the cache is updated (not just cleared) and the next read is a cache hit.
Exercise 3: Implement write-behind with a background job. Post a review. Verify the cache is cleared immediately and repopulated by the worker.
What is the main advantage of write-through caching over simple invalidation?