Quiz: Storage fundamentals
Section 1 covered a lot: four storage strategies, a benchmark, and a translation between daily active users and req/s. Before we move into writes, let us make sure the big ideas stuck.
No stakes. Pick an answer for each question and the explanation will tell you why it is right or wrong. Revisit the lesson it comes from if anything feels fuzzy.
Linear scan
A linear-scan server hits 492 req/s at 10,000 records and drops to 4 req/s at 1,000,000 records. What does that tell you about the cost model?
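While you think about it, here is a minimal sketch (not the benchmark's actual server) that makes the cost model visible by counting how many records a scan touches:

```javascript
// Hypothetical illustration: a linear scan walks records one by one
// until it finds a match, so lookup cost grows with dataset size.
function linearFind(records, id) {
  let touched = 0;
  for (const rec of records) {
    touched++;
    if (rec.id === id) return { rec, touched };
  }
  return { rec: null, touched };
}

const small = Array.from({ length: 10_000 }, (_, i) => ({ id: i }));
const large = Array.from({ length: 1_000_000 }, (_, i) => ({ id: i }));

// Worst case: the record we want is last.
console.log(linearFind(small, 9_999).touched);   // 10000
console.log(linearFind(large, 999_999).touched); // 1000000
```

100x the data means roughly 100x the work per lookup, which lines up with the 492 to 4 req/s drop the benchmark observed (about 123x, once constant per-request overhead is accounted for).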
In-memory map
The in-memory map serves around 72,000 req/s at every dataset size we tested. Why is it essentially flat?
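As a hint, recall what a single Map lookup actually does. This sketch (illustrative names, not the lesson's code) shows that one lookup in a million-entry map is the same few operations as one lookup in a tiny map:

```javascript
// A hash map lookup does not walk the data: it hashes the key and
// jumps (near-)directly to the matching bucket, so the cost is
// roughly constant no matter how many entries the map holds.
const users = new Map();
for (let i = 0; i < 1_000_000; i++) {
  users.set(i, { id: i, name: `user-${i}` });
}

// Hash the key, index a bucket, compare -- the same work whether
// the map holds 10k entries or 1M.
console.log(users.get(999_999).name); // user-999999
```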
Binary search on disk
Binary search on disk stays roughly flat from 10k to 1M records even though every lookup reads from files on disk. What is the main reason?
SQLite as a baseline
On Node, SQLite serves around 50,000 req/s at 1M records. The in-memory map serves around 72,000 req/s. SQLite is only about 1.4x slower. Why is the gap so small?
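One way to frame it is to turn the question's own throughput numbers into per-request time budgets. This back-of-envelope sketch uses only the figures quoted above; the interpretation of where the shared floor goes (HTTP parsing, routing, serialization) is an assumption, not a measurement:

```javascript
// Convert the benchmark throughputs into microseconds per request.
const mapReqS = 72_000;
const sqliteReqS = 50_000;

const mapUsPerReq = 1e6 / mapReqS;       // ≈ 13.9 µs total per request
const sqliteUsPerReq = 1e6 / sqliteReqS; // ≈ 20.0 µs total per request

// The entire extra cost SQLite adds is only ~6 µs per request --
// plausibly a few levels of a cached B-tree plus SQL dispatch.
// The ~14 µs floor is server overhead both strategies pay.
console.log((sqliteUsPerReq - mapUsPerReq).toFixed(1)); // 6.1
```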
When SQLite earns its keep
Your app currently uses an in-memory map to serve GET /users/:id. Product asks for a new endpoint that returns users filtered by email domain and sorted by signup date. Which is the right next step?
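To make the trade-off concrete, here is what that endpoint costs with a Map keyed by id (sample data and field names are hypothetical). Every request must scan and sort everything in application code:

```javascript
// With a Map keyed by id, the filtered-and-sorted query has only
// one implementation: touch every value, filter, then sort --
// O(n log n) work on every single request.
const users = new Map([
  [1, { id: 1, email: 'a@acme.com', signedUpAt: '2024-03-01' }],
  [2, { id: 2, email: 'b@other.io', signedUpAt: '2024-01-15' }],
  [3, { id: 3, email: 'c@acme.com', signedUpAt: '2024-02-10' }],
]);

function byDomain(domain) {
  return [...users.values()]
    .filter((u) => u.email.endsWith(`@${domain}`)) // touches every record
    .sort((a, b) => a.signedUpAt.localeCompare(b.signedUpAt));
}

console.log(byDomain('acme.com').map((u) => u.id)); // [ 3, 1 ]
```

SQLite would answer the equivalent SELECT ... WHERE ... ORDER BY signed_up_at query directly, and an index on the sort column lets it skip the per-request sort entirely.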
Translating req/s into users
A SaaS has 5,000 daily active customers. Each generates about 20 requests during a one-hour session during business hours. Roughly what peak req/s should you plan for?
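The arithmetic behind the answer fits in a few lines. The worst-hour assumption and the burst multiplier below are rules of thumb, not numbers from the question:

```javascript
// Worked capacity math for the scenario above.
const dailyActive = 5_000;
const reqPerSession = 20;
const totalReq = dailyActive * reqPerSession; // 100,000 requests/day

// Pessimistic assumption: every one-hour session lands in the same
// busiest hour of the business day.
const peakAvg = totalReq / 3_600; // ≈ 27.8 req/s averaged over that hour

// Traffic inside the hour is bursty, so pad by a factor
// (2-3x is a common rule of thumb, assumed here).
const planned = Math.ceil(peakAvg * 3); // 84 req/s

console.log(peakAvg.toFixed(1), planned); // 27.8 84
```

The punchline: even the pessimistic estimate is tens of req/s, orders of magnitude below what every strategy except the 1M-record linear scan can serve.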
What comes next
If any of those felt shaky, flip back to the lesson it came from and re-read the “What is going on?” and “Why it works” sections. The rest of this course will reuse these ideas constantly, so it pays to have them solid.
Section 2 picks up where we left off. Same scaffold, but we shift focus to writes. What appendFileSync actually guarantees (spoiler: less than you think). What fsync costs (a lot). What happens to a sorted index when you start appending to it. And why ACID transactions are the one thing flat files genuinely cannot give you.