hectoday
Database design with SQLite

Query optimization

In the last lesson we learned what an index is and how to read EXPLAIN QUERY PLAN. That gave us the vocabulary. Now we need a process. When a page in our app feels slow, where do we start? What are the usual suspects, and how do we tell them apart at a glance from the query plan? This lesson walks through the problems you will run into again and again, the patterns they show in EXPLAIN QUERY PLAN, and the small fixes that turn a slow query into a fast one.

Reading EXPLAIN QUERY PLAN

Every optimization starts here. EXPLAIN QUERY PLAN shows you how SQLite plans to execute a query (plain EXPLAIN dumps the low-level bytecode instead, which is rarely what you want):

Code along
EXPLAIN QUERY PLAN
SELECT books.title, authors.name
FROM books
JOIN authors ON books.author_id = authors.id
WHERE books.genre = 'fiction';
SEARCH books USING INDEX idx_books_genre_title (genre=?)
SEARCH authors USING INTEGER PRIMARY KEY (rowid=?)

This is what good looks like. Both operations use indexes (SEARCH ... USING INDEX). SQLite finds fiction books via the covering idx_books_genre_title index from migration 014 (its first column is genre, so a WHERE genre = ? filter can use it), then looks up each author by primary key.

Bad would be:

SCAN books
SCAN authors

SCAN means full table scan. Every row is read. For large tables, that is slow.

The pattern to internalize: SEARCH is good, SCAN is (usually) bad. Everything that follows is about spotting SCAN where it should not be and fixing it.

Full table scans

The most straightforward problem. No index on the column you are filtering by:

Code along
-- No index on genre, so we get a full table scan
EXPLAIN QUERY PLAN SELECT * FROM books WHERE genre = 'fiction';
-- SCAN books  <- BAD

Fix: add an index in a migration.

CREATE INDEX idx_books_genre ON books(genre);
-- SEARCH books USING INDEX idx_books_genre  <- GOOD

This is the simplest case. You see SCAN, the WHERE clause is on a column that could be indexed, and you add the index through a migration file. The query plan changes to SEARCH USING INDEX and the query speeds up. Most slow queries start here. In our own schema we took this pattern a step further in migration 014 by making the index cover (genre, title), which still answers WHERE genre = ? queries and also avoids a table lookup when the result only needs the title.

LIKE with leading wildcard

Here is a sneakier one. Let’s say we have a title search:

Code along
-- Leading wildcard cannot use an index
SELECT * FROM books WHERE title LIKE '%sea%';
-- SCAN books  <- always a full scan

-- Trailing wildcard CAN use an index
SELECT * FROM books WHERE title LIKE 'The%';
-- SEARCH books USING INDEX idx_books_title  <- if index exists

Why the difference? A leading wildcard (%sea%) must check every row because “sea” could appear anywhere in the title; there is no shortcut. A trailing wildcard (The%) matches a prefix, and indexes are sorted, so SQLite can jump to the “The” range and stop when the prefix stops matching. One caveat: SQLite’s LIKE is case-insensitive for ASCII by default, so the prefix optimization only applies when the indexed column uses the NOCASE collation (or you turn on PRAGMA case_sensitive_like).

For full-text searching, LIKE is not the right tool at all. Use FTS5 instead. We cover it later in this course.

The N+1 problem

This is the most common performance problem in application code, and it is easy to write without realizing it:

// N+1 problem: 1 query for books + N queries for authors
const books = db.prepare("SELECT * FROM books").all();

for (const book of books) {
  const author = db.prepare("SELECT * FROM authors WHERE id = ?").get(book.author_id);
  // 100 books = 100 author queries
}

With 100 books, this runs 101 queries: 1 for the books list, then 100 individual author lookups. Each query has overhead. Parsing, planning, executing. It adds up fast.

The fix is always the same: use a JOIN.

// 1 query instead of 101
const books = db
  .prepare(
    `
  SELECT books.*, authors.name AS author_name
  FROM books
  JOIN authors ON books.author_id = authors.id
`,
  )
  .all();

One query. The database joins books and authors in a single pass.

[!NOTE] The N+1 problem gets its name from the pattern: 1 query to get the list, then N queries to get related data for each item. Whenever you see a database query inside a loop, you are probably looking at an N+1.

This is the kind of thing that looks fine on your laptop with 10 test rows and falls over in production with 10,000. It will not show up in EXPLAIN QUERY PLAN, because each individual query looks fine on its own. You have to spot it in the application code.

Unnecessary columns

Code along
-- BAD: selecting all columns when you need 2
SELECT * FROM books WHERE genre = 'fiction';

-- GOOD: selecting only what you need
SELECT id, title FROM books WHERE genre = 'fiction';

SELECT * reads every column from disk, including large text fields like descriptions or bios that you might not use. Selecting specific columns reduces the amount of data transferred, especially when combined with covering indexes from the previous lesson.

Missing index on foreign keys

This one catches a lot of people. Primary keys are automatically indexed, but foreign keys are not. A JOIN on an unindexed foreign key can force a full scan of the joined table for every row on the other side:

Code along
-- books.author_id is a foreign key but NOT automatically indexed
SELECT * FROM authors
JOIN books ON books.author_id = authors.id;
-- books is SCANNED for each author

Fix: add an index on the foreign key in a migration.

CREATE INDEX idx_books_author_id ON books(author_id);

[!TIP] Index every foreign key column. JOINs, WHERE clauses, and ON DELETE CASCADE all benefit from foreign key indexes. This is one of the easiest performance wins in schema design.

The optimization process

Putting it all together, here is the loop to run whenever a query feels slow:

  1. Identify slow queries. Add timing to your queries or use SQLite’s .timer on in the shell.
  2. Run EXPLAIN QUERY PLAN. Look for SCAN (bad) vs SEARCH USING INDEX (good).
  3. Add indexes. On columns in WHERE, JOIN ON, and ORDER BY clauses.
  4. Check for N+1. Look for queries inside loops. Replace with JOINs.
  5. Re-run EXPLAIN. Verify the index is being used.

That is the whole loop. Measure, explain, fix, verify. Most production optimization work is just running this cycle on one query at a time until the slow ones are gone.

Exercises

Exercise 1: Run EXPLAIN QUERY PLAN on your most complex query. Is it using indexes? If not, add them through a migration and check again.

Exercise 2: Write an N+1 query (loop through books, query author for each). Then rewrite it with a JOIN. Compare the number of queries.

Exercise 3: Check all your foreign key columns. Do they have indexes? Add any that are missing.


Indexes and EXPLAIN QUERY PLAN are your primary tools for keeping queries fast. But there is more to running SQLite well in a real application. In the next lesson, we will look at transactions, which protect data integrity and also dramatically speed up bulk writes.


© 2026 hectoday. All rights reserved.