hectoday

Do You Need a Database?

Storage Fundamentals

  • A Database is Just Files
  • The Setup
  • Approach 1: Linear Scan
  • Approach 2: In-Memory Map
  • Approach 3: Binary Search on Disk
  • SQLite as a Baseline
  • Benchmarking with wrk
  • Reading the Numbers
  • When You Actually Need a Database
  • Quiz: Storage Fundamentals (wip)

Writes and Durability

  • The Write Path
  • Append Throughput
  • Writes Break the Index
  • Concurrent Writers
  • Atomic Multi-Record Writes

Reading the numbers

You have a table full of benchmark numbers. Something like this:

Approach         Req/s @ 1M records
Linear scan      5
Binary search    26,448
SQLite           50,307
In-memory map    72,074

Great. Now what?

A throughput number on its own is not actionable. “Our server does 50,000 requests per second” sounds impressive, but it does not tell you whether you need to scale up, scale out, or leave well enough alone. You need to translate it into something that describes your product. How many users can this handle? Is my traffic anywhere near that limit?

This lesson is about doing that translation honestly, with rough math you can do on the back of an envelope.

Real traffic is not flat

Before we can turn req/s into users, we have to deal with one inconvenient fact. Real applications do not get steady traffic throughout the day.

Users sleep. They go to work. They come home and browse. There are spikes around product launches, marketing emails, when a TV commercial runs, and, if your app is in that space, whenever there is a playoff game. If you average traffic over 24 hours you will undercount your peak, which is the number capacity planning actually cares about.

As a rough starting point, most B2B and B2C web apps see a peak-hour-to-daily-average ratio of about 2:1. Some are spikier (consumer apps with strong daypart concentration), some are flatter (background services doing steady automated work). 2:1 is the mental model.

So if you averaged 25,000 requests per second across a whole day, you would expect the busiest hour to push around 50,000. You plan for the peak, not the average.
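That back-of-the-envelope step is small enough to write down directly. A minimal sketch, assuming the rough 2:1 ratio above (a starting point, not a measured constant):

```javascript
// Estimate peak-hour load from a 24-hour average using the
// rough 2:1 peak-to-average ratio discussed above.
const PEAK_TO_AVERAGE_RATIO = 2;

const dailyAverageReqPerSec = 25_000;
const estimatedPeakReqPerSec = dailyAverageReqPerSec * PEAK_TO_AVERAGE_RATIO;

console.log(estimatedPeakReqPerSec); // → 50000
```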

A simple per-user model

To translate req/s into users, you need an estimate of how often each user fires a request.

This depends on your product, but here are two reasonable middle values to start with. First, a typical user in an average-usage app generates something like 10 ID lookups per hour while active. That is loading a profile, opening a few records, clicking between pages. Chat apps and live dashboards are far higher (hundreds per hour). A CMS or an internal admin tool might be two or three.

Second, not every “daily active user” is online simultaneously. Most products see around 10 percent of DAU concurrent during peak hour. That means if you have a million DAU, maybe 100,000 of them are active at the exact busiest moment.

Put those two together and you get a rough formula:

peak req/s = DAU × 0.10 × (10 lookups/hr ÷ 3600 sec/hr)
peak req/s = DAU × 0.000278

Flipping it around gives you the DAU that would saturate a given throughput:

DAU at saturation = peak req/s ÷ 0.000278

That divisor is just the number we derived above. Nothing fancy.
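Both directions of the formula fit in a few lines. A sketch using the lesson's own assumptions (10 percent of DAU concurrent at peak, 10 lookups per active user per hour):

```javascript
// Translate between DAU and peak req/s using the assumptions above:
// 10% of daily active users concurrent at peak, 10 lookups/hr each.
const CONCURRENT_FRACTION = 0.10;
const LOOKUPS_PER_HOUR = 10;

// Requests per second contributed by each daily active user at peak.
const REQ_PER_USER_PER_SEC = CONCURRENT_FRACTION * (LOOKUPS_PER_HOUR / 3600);
// ≈ 0.000278

function peakReqPerSec(dau) {
  return dau * REQ_PER_USER_PER_SEC;
}

function dauAtSaturation(reqPerSec) {
  return reqPerSec / REQ_PER_USER_PER_SEC;
}

console.log(Math.round(peakReqPerSec(1_000_000)));  // → 278
console.log(Math.round(dauAtSaturation(50_000)));   // → 180000000
```

One million DAU works out to under 300 req/s at peak, which is the calibration worth internalizing.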

The translation table

Apply the formula to each of our measured approaches.

Approach                      Peak capacity    DAU at saturation
Linear scan @ 10k records     474 req/s        ~1.7M users
Linear scan @ 100k records    49 req/s         ~176k users
Linear scan @ 1M records      5 req/s          ~18k users
Binary search                 26,000 req/s     ~94M users
SQLite                        50,000 req/s     ~180M users
In-memory map                 72,000 req/s     ~259M users

Look at the right column.

A single Node process running SQLite can, in theory, serve around 180 million daily active users on the assumptions we just laid out. A single Node process with an in-memory map can serve more than 250 million.

For context, Instagram crossed 400 million daily active users while still running Postgres as their primary data store. They had sharding and replication and a wall of caching, of course. Most products will never approach a scale where a single SQLite file cannot handle them.
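The right column of the table above is just the divisor applied to each measured throughput. A sketch that regenerates it:

```javascript
// Regenerate the DAU-at-saturation column from measured throughputs,
// using the divisor derived earlier: 0.10 × (10 / 3600) ≈ 0.000278.
const REQ_PER_USER_PER_SEC = 0.10 * (10 / 3600);

const measured = {
  "Linear scan @ 1M records": 5,
  "Binary search": 26_000,
  "SQLite": 50_000,
  "In-memory map": 72_000,
};

for (const [approach, reqPerSec] of Object.entries(measured)) {
  const dau = Math.round(reqPerSec / REQ_PER_USER_PER_SEC);
  console.log(`${approach}: ~${dau.toLocaleString("en-US")} DAU`);
}
```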

What this looks like for a real product

Let us ground this in two concrete examples.

A SaaS with 10,000 paying customers. Each customer uses the app once a day, generating maybe 50 requests per session. That is 500,000 requests per day total. Spread across business hours (about 8 hours in their primary timezone), we get roughly 17 req/s on average. Apply the 2:1 peak ratio and you are at about 35 req/s at peak.

Our linear scan at 10,000 records handles 474 req/s. That is over thirteen times your actual peak load. You do not need a database. You barely need a Map.

A consumer app with 100,000 daily active users. Each user opens the app three times a day on average, browses for five minutes per session, generates maybe 30 requests per session. Nine million requests per day. Spread over 24 hours unevenly, with a 2:1 peak, you might see around 200 req/s at peak.

The in-memory map handles 72,000 req/s. You are sitting at 0.28 percent of its capacity. SQLite would put you at 0.4 percent. Either is comically over-provisioned for this load.
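The arithmetic in both examples follows the same shape: requests per day, divided by the active window, times the peak ratio. A sketch that reproduces the two numbers above:

```javascript
// Peak req/s from total daily requests, the number of hours the
// traffic is spread across, and the rough 2:1 peak ratio.
function peakFromSessions(requestsPerDay, activeHours, peakRatio = 2) {
  const averageReqPerSec = requestsPerDay / (activeHours * 3600);
  return averageReqPerSec * peakRatio;
}

// SaaS: 10,000 customers × 50 requests/day, over ~8 business hours.
const saasPeak = peakFromSessions(10_000 * 50, 8);
console.log(Math.round(saasPeak)); // → 35

// Consumer: 100,000 DAU × 3 sessions × 30 requests, over 24 hours.
const consumerPeak = peakFromSessions(100_000 * 3 * 30, 24);
console.log(Math.round(consumerPeak)); // → 208
```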

Exercise: a B2B SaaS has 5,000 customers, each averaging 20 requests during a one-hour session in their workday. Roughly what peak req/s should you plan for?

Adjusting the formula for your product

The “10 lookups per hour” and “10 percent concurrent” numbers are starting points, not universal constants. If your product is substantially different, the math shifts.

Higher per-user request rate. Real-time chat, gaming, live dashboards with autorefresh. These can drive hundreds of requests per active user per hour. Multiply your per-user rate up, and the user-count ceiling drops proportionally.

Different concurrency. A worldwide consumer app with users spread across time zones tends to see lower peaks, because traffic is smeared over 24 hours. A single-timezone B2B tool tends to see higher peaks, because everyone is on at 9am their time.

Different traffic shape. Background services, webhooks, automated pollers. These tend to be smoother (lower peak-to-average ratio) but more constant. Consumer mobile apps are spikier.
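All three adjustments fit into the same model if you expose the assumptions as parameters. A sketch, where the default values are the lesson's starting points and the chat-app figure of 300 lookups per hour is an illustrative assumption:

```javascript
// DAU ceiling for a given throughput, with the two assumptions
// (lookups per active hour, fraction of DAU concurrent at peak)
// exposed as parameters you can tune to your own product.
function dauCeiling(
  reqPerSec,
  { lookupsPerHour = 10, concurrentFraction = 0.10 } = {}
) {
  const perUserReqPerSec = concurrentFraction * (lookupsPerHour / 3600);
  return reqPerSec / perUserReqPerSec;
}

// Default assumptions: SQLite's 50,000 req/s supports ~180M DAU.
console.log(Math.round(dauCeiling(50_000))); // → 180000000

// A chat-like app at 300 lookups/hr: the ceiling drops 30×, to ~6M DAU.
console.log(Math.round(dauCeiling(50_000, { lookupsPerHour: 300 }))); // → 6000000
```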

The point of the formula is not to predict your exact traffic down to the req/s. It is to be a calibration tool. The next time someone says “we need a distributed database to handle our scale,” you can ask: how many DAU? At what request rate per user? More often than not, the answer reveals you are nowhere near the limits of a single SQLite file.

What the numbers do not capture

A few things this analysis ignores on purpose, and you should keep them in mind.

Write traffic. We benchmarked reads only. Writes are slower in every approach because of file I/O, fsync calls, and (for SQLite) B-tree rebalancing. If your app is write-heavy, your real ceiling is lower than these numbers suggest. Run the same wrk test against a POST endpoint to find out. Section 2 of the course does exactly this.

Other endpoints. Your application is not just GET /users/:id. There is authentication, rendering, business logic, third-party API calls. Storage is only one component of overall request latency. If your handler spends 50ms calling Stripe, the time spent on storage is basically irrelevant to the user’s experience.

The dataset shape. Our records are tiny, around 100 bytes each. Larger records (articles with full text, rows with many columns, records containing embedded blobs) take more time to serialize, send over the wire, and parse. A 10KB record will not benchmark at 100,000 req/s no matter what storage you use, because at some point you become bottlenecked on JSON serialization, not lookup.

Failure modes. A single Node process going down takes 100 percent of your traffic with it. A single SQLite file getting corrupted takes 100 percent of your data with it. Throughput numbers say nothing about availability or recoverability, and those are real concerns at any scale.

The honest answer to the question

You opened this course asking: do I need a database?

For most applications you can ship today, the honest answer is “probably not yet.” A flat file with an in-memory map handles more concurrent users than the vast majority of products will ever have. SQLite handles essentially all of them.

You cross the line into needing a dedicated database server (Postgres, MySQL, a cloud-hosted offering) when you hit one of a small number of specific structural constraints. Those constraints are the subject of the last lesson in this section, and they are not what you might expect.


© 2026 hectoday. All rights reserved.