
Backups with Litestream

Your app is shipping. Users are signing up. Data is piling up in app.db on a server somewhere. There is one problem we have been quietly ignoring for the whole course. That database lives as a single file on a single disk. If the disk fails, the data is gone. If a bad migration corrupts the file, the data is gone. If someone runs rm in the wrong directory, the data is gone.

Every other database system solves this with replicas and backups. SQLite historically made that awkward because it is not a server; it is a file. Litestream is a small tool that changes that. It watches the SQLite write-ahead log and streams changes to object storage. At any point, if the primary disk is gone, you can restore from S3, R2, or Backblaze B2 to a new machine in seconds. This lesson shows you how to wire it up.

What Litestream actually does

Let’s get specific about the problem first. In WAL mode (which you turned on in the pragmas lesson), every change you make to the database is appended to a file next to your database called app.db-wal. Periodically, SQLite checkpoints the WAL into the main database file and starts a fresh one. Litestream hooks into this flow. It reads the WAL as changes come in and ships those changes to object storage, continuously, in small chunks.
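
You can see the WAL with your own eyes on a throwaway database. One detail worth knowing: SQLite removes the -wal file when the last connection closes cleanly, so look while the shell is still open (notes here is just scratch data):

sqlite3 scratch.db
sqlite> PRAGMA journal_mode=WAL;
wal
sqlite> CREATE TABLE notes (body TEXT);
sqlite> INSERT INTO notes VALUES ('hello');
sqlite> .shell ls scratch.db*
scratch.db    scratch.db-shm    scratch.db-wal

That scratch.db-wal file is what Litestream tails.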

A few things follow from this.

First, it is almost real-time. By default Litestream syncs every second. You can tune it, but out of the box your data is durably offsite within a second of the write committing locally.
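
If a second of freshness is more than you need, the knob is the per-replica sync-interval setting in the config file we build below. A fragment, assuming the R2 replica from the next section:

replicas:
  - type: s3
    # ...bucket, endpoint, and credentials as in the config below...
    sync-interval: 10s   # default is 1s; longer intervals mean fewer storage API calls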

Second, it does not need the database to be idle. Litestream replicates while your app is running. No locks, no pauses. The only extra load is reading the WAL, which is small.

Third, it is a single binary. Litestream is written in Go, statically compiled, and drops into a Docker image or a VM without any setup. No daemons to configure, no clusters to manage.

Fourth, and this one matters: Litestream is not a high-availability system. It is a backup and restore system. It does not give you two live writers or multi-region reads. It gives you a very recent copy of your database sitting in object storage, ready to restore if you need it. For a lot of apps, that is exactly enough.

Installing

Litestream ships prebuilt binaries for Linux, macOS, and Windows. On your development machine:

# macOS
brew install litestream

# Linux
curl -L https://github.com/benbjohnson/litestream/releases/latest/download/litestream-linux-amd64.tar.gz \
  | tar xz
sudo mv litestream /usr/local/bin/

Inside a Dockerfile, grab the binary directly:

FROM node:24-slim

RUN apt-get update && apt-get install -y ca-certificates curl \
  && curl -L https://github.com/benbjohnson/litestream/releases/latest/download/litestream-linux-amd64.tar.gz \
    | tar xz -C /usr/local/bin litestream \
  && rm -rf /var/lib/apt/lists/*

# ... rest of the image

The binary is small (a handful of megabytes) and has no runtime dependencies beyond a standard OS. You are not pulling in a heavy runtime or a language-specific toolchain.

The config file

Litestream reads a YAML config that lists databases and replica destinations. Here is a minimal one:

# /etc/litestream.yml
dbs:
  - path: /data/app.db
    replicas:
      - type: s3
        bucket: my-app-backups
        path: app/app.db
        region: auto
        endpoint: https://<your-account>.r2.cloudflarestorage.com
        access-key-id: ${R2_ACCESS_KEY_ID}
        secret-access-key: ${R2_SECRET_ACCESS_KEY}

Let’s read through it.

dbs is a list. Each entry describes one SQLite database Litestream should watch. path is the absolute path to your .db file on disk. Everything under replicas is the list of places Litestream will ship the WAL to.

The replica above targets Cloudflare R2, which speaks the S3 API but is significantly cheaper than S3 for this use case because R2 has no egress fees. type: s3 works for any S3-compatible storage. The endpoint is what switches it from actual AWS to R2 or Backblaze B2. bucket and path are the bucket name and the path inside that bucket where replicas are stored. Credentials come from environment variables so that nothing secret ends up in the file.

You can list more than one replica if you want redundancy, for example one to R2 and a second to a local disk. Litestream ships changes to all listed replicas.
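
As a sketch, here is the same config extended with a local file replica alongside R2 (/backups/app is an example path; any mounted disk works):

# /etc/litestream.yml — one offsite replica plus a local safety net
dbs:
  - path: /data/app.db
    replicas:
      - type: s3
        bucket: my-app-backups
        path: app/app.db
        region: auto
        endpoint: https://<your-account>.r2.cloudflarestorage.com
        access-key-id: ${R2_ACCESS_KEY_ID}
        secret-access-key: ${R2_SECRET_ACCESS_KEY}
      - type: file
        path: /backups/app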

Replicating

Once the config exists and the credentials are set, you start Litestream:

litestream replicate -config /etc/litestream.yml

This runs in the foreground, tails the WAL, and streams changes. You see log lines every few seconds telling you what was synced. In production you run this as a long-lived process, either as a background service (systemd, Docker entrypoint, Fly.io process group) or as a sidecar container.
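
If you go the systemd route, a minimal unit looks something like this. The /etc/litestream.env file holding the R2 credentials is an assumption of this sketch; adjust paths to your layout:

# /etc/systemd/system/litestream.service
[Unit]
Description=Litestream replication
After=network-online.target

[Service]
Restart=always
EnvironmentFile=/etc/litestream.env
ExecStart=/usr/local/bin/litestream replicate -config /etc/litestream.yml

[Install]
WantedBy=multi-user.target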

What do you think happens if you kill Litestream for a minute and then start it again? It catches up. Any changes that landed in the WAL while Litestream was not running are synced as soon as it comes back. You never lose continuity just because the replicator process restarted.
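
You can prove this to yourself. The insert below assumes the course's users table has a name column; any write will do:

# stop the replicator, write while it is down, then bring it back
pkill litestream
sqlite3 /data/app.db "INSERT INTO users (name) VALUES ('offline write');"
litestream replicate -config /etc/litestream.yml   # missed WAL segments sync on startup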

Restoring

This is the part that matters. Backups are only as good as your ability to restore them.

If your database file is gone, restoring it is one command:

litestream restore -config /etc/litestream.yml /data/app.db

Litestream downloads the latest snapshot, applies any WAL segments after that snapshot, and writes a fresh /data/app.db. When the restore finishes, you have a file that matches the state of your database at the latest sync point, usually within the last second or two.
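
Before you ever need it in anger, you can inspect what is actually sitting in the replica. Litestream ships read-only subcommands for this:

# list backup generations, snapshots, and WAL segments for a database
litestream generations -config /etc/litestream.yml /data/app.db
litestream snapshots -config /etc/litestream.yml /data/app.db
litestream wal -config /etc/litestream.yml /data/app.db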

You can also restore to a specific point in time, which is useful if you need to roll back past a bad migration:

litestream restore -config /etc/litestream.yml \
  -timestamp 2026-04-18T09:15:00Z \
  /data/app.db

That is the SQLite equivalent of “restore the production database to five minutes ago.” For anyone who has ever needed that, it feels a little like magic the first time you try it.

Starting the app with restore on boot

Putting it all together, your container entrypoint looks something like this:

#!/bin/sh
# entrypoint.sh
set -e

# 1. Restore if the database is missing (fresh machine or lost volume)
if [ ! -f /data/app.db ]; then
  echo "No local database, restoring from backup..."
  litestream restore -config /etc/litestream.yml /data/app.db || true
fi

# 2. Run pending migrations
node scripts/migrate.js

# 3. Start Litestream replicating in the background
litestream replicate -config /etc/litestream.yml &

# 4. Start the app in the foreground
exec node dist/server.js

Let’s walk through each step.

Step 1 is the “new machine” case. If /data/app.db is missing, we ask Litestream to pull the latest replica and write it to disk. The || true is there so that the very first deploy, where there is no backup yet, does not fail the boot. On subsequent deploys, that same command restores any missing file.
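
A slightly tighter variant: restore has an -if-replica-exists flag that exits cleanly when no backup exists yet but still fails loudly on real problems like bad credentials, which a blanket || true would swallow:

litestream restore -if-replica-exists -config /etc/litestream.yml /data/app.db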

Step 2 runs the migration script from the previous lesson. Anything new in the migrations/ folder is applied before the app starts serving traffic. Restoring first and migrating second means a fresh machine boots into the exact same schema version as the one it is replacing.

Step 3 kicks off Litestream in the background. From now on, every committed change on app.db gets streamed to object storage within a second.

Step 4 runs the app in the foreground. On a crash, the container exits, which signals your platform (Fly.io, Kubernetes, systemd, whatever you are using) to restart it, which runs the entrypoint again. Litestream picks up right where it left off.

This is the pattern used by essentially every production deployment of SQLite at real scale. The specific details change based on your host, but restore-on-boot followed by replicate-in-background is the shape of it.
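
One variant worth knowing: replicate accepts an -exec flag that runs your app as a supervised child process, so the app and the replicator share a lifetime instead of being glued together with &:

# replaces steps 3 and 4 of the entrypoint above
litestream replicate -config /etc/litestream.yml -exec "node dist/server.js"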

Testing the restore

There is a rule of thumb in ops circles: a backup you have never restored is not a backup. You should occasionally verify that your Litestream replica actually works end to end.

A common pattern is a small scheduled job that runs weekly:

#!/bin/sh
set -e

# Pull the latest replica into a temp file. The -o flag restores to a path
# other than the one listed in the config; without it, Litestream would look
# for /tmp/restore-test.db in the config and fail.
rm -f /tmp/restore-test.db
litestream restore -config /etc/litestream.yml -o /tmp/restore-test.db /data/app.db

# Run a few sanity checks
sqlite3 /tmp/restore-test.db "SELECT COUNT(*) FROM users" > /dev/null
sqlite3 /tmp/restore-test.db "PRAGMA integrity_check" | grep -q "ok"

echo "Restore test passed"
rm /tmp/restore-test.db

If any step fails, the job fails, you get alerted, and you can investigate before an actual disaster forces the issue. Pair this with monitoring on Litestream itself (it exposes a Prometheus endpoint for its internal metrics) and you have a backup setup you can actually trust.
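
Enabling that Prometheus endpoint is one top-level line in the config; the port is your choice:

# /etc/litestream.yml
addr: ":9090"   # metrics served at http://localhost:9090/metrics
dbs:
  - path: /data/app.db
    # replicas as before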

Where Litestream is not the right answer

Litestream is brilliant for the single-writer case. That covers the majority of applications, including most SaaS, most content sites, and most internal tools. There are three situations where you need something else.

Multiple app servers writing to the same database. Litestream assumes exactly one writer. Two app servers hitting the same database file is not really a Litestream problem; it is a SQLite problem. If you genuinely need multiple writers, look at LiteFS, Turso, or Cloudflare D1, or consider whether Postgres is a better fit.

Multi-region reads with live failover. Litestream can restore you in seconds, but that is not the same as a warm replica serving live read traffic from another region. LiteFS can do this. Turso can do this. Litestream cannot.

Sub-second recovery objectives with zero data loss. Litestream syncs every second by default. If you literally cannot tolerate losing one second of writes, SQLite on a single box is probably not the right database for the problem.

For the app-shaped problem most of this course has been building toward, Litestream is more than enough. A cheap box, a volume, a SQLite file, and a replica in R2 will comfortably carry a real product.

Exercises

Exercise 1: Install Litestream locally. Point it at a test SQLite database. Configure a replica to a local directory (type: file) so you can see it work without setting up S3 credentials yet; a starter config is sketched after these exercises. Run litestream replicate and insert some rows. Check the replica directory.

Exercise 2: Delete your test database. Run litestream restore and confirm the file comes back. Query it and verify the rows you just inserted are there.

Exercise 3: Set up a real replica against R2, Backblaze B2, or any S3-compatible bucket you have access to. Repeat Exercises 1 and 2 against the real cloud replica.

Exercise 4: Write a restore-test script like the one above and schedule it to run on a regular basis (cron, CI, whatever you have). Make sure you actually get alerted when it fails.
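
If you want a starting point for Exercise 1, a minimal file-replica config looks like this (all paths are placeholders):

# litestream-test.yml
dbs:
  - path: ./test.db
    replicas:
      - type: file
        path: ./replica-dir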


Your SQLite app now has a proper client, a migration workflow, and a durable backup story. That is everything you need to run SQLite in production with a straight face. In the final section, we will take a step back. First a schema design checklist to keep handy on every new project, then a capstone that walks through designing a full database from scratch using everything we have covered.
