Building a migration system
You know why migrations matter. Now let’s build one. By the end of this lesson, you will have a working migration system: a table that tracks which migrations have run, numbered SQL files that describe each change, and a small TypeScript runner that ties it all together.
We will build it in five steps, each one small enough to do in a single sitting. Steps 1 through 3 write the migration files. Step 4 writes the runner that applies them. Step 5 runs the whole thing and verifies it worked.
Step 1: the tracking table
The very first thing a migration system needs is a way to remember which migrations have already been applied. Without this, the system would try to run every migration file on every deploy. Things like CREATE TABLE or ALTER TABLE ADD COLUMN would fail the second time because the table or column already exists.
The solution is a table inside the database itself. Let’s call it schema_migrations:
```sql
CREATE TABLE schema_migrations (
  version TEXT PRIMARY KEY,
  name TEXT NOT NULL,
  applied_at TEXT NOT NULL DEFAULT (datetime('now'))
);
```

Each row represents one migration that has been applied. The `version` column holds the migration number (like 001 or 002), the `name` column holds a human-readable description, and `applied_at` records when it was applied.
Now here is a small philosophical decision worth pausing on. Where should this CREATE TABLE actually live? You have two options. The first is to put it inside the migration runner itself, so the runner creates the table before it does anything else. The second is to make it a migration file like every other change to the database. We are going to take the second route, because it keeps the rules of the system simple: every change to the schema, including the tracking table, lives in a numbered file. The runner does nothing but run files. There are no special cases.
Create a migrations/ directory in your project and add a file called 000_create_schema_migrations.sql:
```sql
-- migrations/000_create_schema_migrations.sql
BEGIN;

CREATE TABLE schema_migrations (
  version TEXT PRIMARY KEY,
  name TEXT NOT NULL,
  applied_at TEXT NOT NULL DEFAULT (datetime('now'))
);

INSERT INTO schema_migrations (version, name)
VALUES ('000', 'create_schema_migrations');

COMMIT;
```

A few things to notice. The version is 000, not 001, because this migration is the bootstrap that has to run before anything else. By starting at 000, the regular schema migrations can keep their familiar 001, 002, 003 numbering. The migration also inserts its own row into the table it just created. That feels recursive, but it is exactly what we want: after this file runs, `schema_migrations` exists and already knows that version 000 has been applied. The next time the runner starts up, it will see that row and skip this file.
There is no IF NOT EXISTS on the CREATE TABLE either. If this migration ever runs twice, something is wrong with the runner and we want to know about it loudly, not silently swallow the error.
Step 2: your first real migration
With the bootstrap in place, you can write the first migration that actually models your application. Add a file called 001_create_users.sql next to the bootstrap:
```sql
-- migrations/001_create_users.sql
BEGIN;

CREATE TABLE users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  email TEXT NOT NULL UNIQUE,
  name TEXT NOT NULL,
  created_at TEXT NOT NULL DEFAULT (datetime('now'))
);

INSERT INTO schema_migrations (version, name)
VALUES ('001', 'create_users');

COMMIT;
```

This is essentially the same `users` table we created by hand in the "Tables and types" lesson, with two small upgrades. The `id` column is now `INTEGER PRIMARY KEY AUTOINCREMENT` instead of just `INTEGER PRIMARY KEY`, and the `email` column has a `UNIQUE` constraint. Both of these came up later in the course (primary keys and constraints), and now that we are writing a real migration that will live in the project for good, it is the right moment to bake them in.
A few more things to notice here.
First, the entire migration is wrapped in BEGIN and COMMIT. This is a transaction. If anything inside the transaction fails, none of the changes are applied. The database stays exactly as it was before. This is critical because a half-applied migration (where the table was created but the tracking row was not inserted, or vice versa) would leave you in a confusing state.
Second, the migration records itself in the schema_migrations table. After this migration runs, the tracking table has a row that says “version 001 has been applied.” The next time the system runs, it can check the table and skip this file.
Third, the filename starts with 001. This numbering controls the order. Migration 001 always runs before 002, which always runs before 003. The system reads the filenames, sorts them, and executes them in sequence.
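One caveat worth a quick sketch: sorting filenames compares them as strings, which only matches numeric order because the versions are zero-padded. A standalone example (no runner or database required):

```typescript
// Zero-padded prefixes: string sort order equals numeric order.
const files = [
  "002_add_bio_to_users.sql",
  "000_create_schema_migrations.sql",
  "001_create_users.sql",
].sort();
console.log(files);
// ["000_create_schema_migrations.sql", "001_create_users.sql", "002_add_bio_to_users.sql"]

// Without padding, string order diverges once versions pass 9:
console.log(["10_add_tags.sql", "2_create_posts.sql"].sort());
// ["10_add_tags.sql", "2_create_posts.sql"] ("10" sorts before "2" as a string)
```

So if you ever expect more than 999 migrations, pad the numbers wider from the start.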
Step 3: add a column via migration
Your app is growing and you want users to have a bio. That means adding a column to an existing table. Create 002_add_bio_to_users.sql:
```sql
-- migrations/002_add_bio_to_users.sql
BEGIN;

ALTER TABLE users ADD COLUMN bio TEXT;

INSERT INTO schema_migrations (version, name)
VALUES ('002', 'add_bio_to_users');

COMMIT;
```

This is a common pattern. `ALTER TABLE ... ADD COLUMN` adds a new column to the table without touching any existing data. Every existing row gets NULL for the new `bio` column, which is fine because a bio is optional.
You now have three migration files sitting in your migrations/ directory. None of them have run yet. The database file does not even exist. That is the state we want before writing the runner: SQL on disk, waiting to be applied.
Step 4: write the migration runner
So far you have been writing SQL files, but nothing actually applies them. A real project needs an automated runner that reads the files from disk, checks which ones have been applied, and executes the rest in order. Here is a simple one in TypeScript using node:sqlite, the SQLite module built into the Node runtime. No native package, no install step, just the import:
```typescript
// migrate.ts
import { DatabaseSync } from "node:sqlite";
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

const db = new DatabaseSync("app.db");
db.exec("PRAGMA journal_mode = WAL");
db.exec("PRAGMA foreign_keys = ON");

const MIGRATIONS_DIR = join(process.cwd(), "migrations");

function migrate(): void {
  const files = readdirSync(MIGRATIONS_DIR)
    .filter((f) => f.endsWith(".sql"))
    .sort();

  // The very first run is special: schema_migrations does not exist yet,
  // because the migration that creates it has not run. Check sqlite_master,
  // which is SQLite's built-in catalog of every table in the database.
  const tableExists = db
    .prepare("SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = 'schema_migrations'")
    .get();

  const applied = tableExists
    ? new Set(
        (db.prepare("SELECT version FROM schema_migrations").all() as { version: string }[]).map(
          (row) => row.version,
        ),
      )
    : new Set<string>();

  for (const file of files) {
    // Extract the version number from the filename
    const version = file.split("_")[0];
    if (applied.has(version)) {
      continue;
    }

    console.log(`Running migration: ${file}`);
    const sql = readFileSync(join(MIGRATIONS_DIR, file), "utf-8");
    db.exec(sql);
  }

  console.log("All migrations applied.");
}

migrate();
```

Let's break down what this does.
The import is `DatabaseSync` from `node:sqlite`. The `node:` prefix is how Node marks built-in modules, so there is nothing to add to package.json. The module shipped in Node 22.5 (experimental at first, so some releases print a warning); on Node 22.5 or newer the import just works. Deno supports the same module, so the same file runs on both runtimes.
Next, it opens the database and enables WAL mode and foreign keys. These are good defaults for any SQLite application, and we will dig into what they mean in a later lesson. Pragmas are just SQL, so we run them with db.exec("PRAGMA ..."), the same way we run any other statement.
Notice what is missing here: there is no CREATE TABLE schema_migrations in the runner. The runner does not create any tables. Every table, including the tracking table, comes from a migration file. The runner is just a runner.
That choice creates a small chicken-and-egg problem on the very first run. The runner wants to ask schema_migrations which versions have been applied, but on a fresh database that table does not exist yet. So before running the query, the runner peeks at sqlite_master, which is a built-in table SQLite maintains automatically. It contains one row for every table, index, view, and trigger in the database. If schema_migrations is not listed there, the runner treats the applied set as empty and lets migration 000 create the table on its first run.
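You can poke at this catalog yourself from the SQLite shell. For example (the expected names assume the three migrations from this lesson have already been applied):

```sql
SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name;
-- schema_migrations
-- sqlite_sequence   (created automatically because users uses AUTOINCREMENT)
-- users
```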
The migrate function then reads every .sql file from the migrations/ directory and sorts them alphabetically. Because the filenames start with numbers (000, 001, 002), alphabetical order is the same as chronological order.
For each file that has not been applied, it reads the SQL from disk and executes it with `db.exec(sql)`. Each migration file already contains its own BEGIN and COMMIT, so the transaction boundaries are baked into the file. The runner must not open another transaction around `db.exec`: SQLite does not support nested transactions, so the file's own BEGIN would fail with "cannot start a transaction within a transaction."
Step 5: run it and verify
Save the script above as migrate.ts at the root of your project, alongside your migrations/ directory. Your layout should look like this:
```
your-project/
  migrate.ts
  migrations/
    000_create_schema_migrations.sql
    001_create_users.sql
    002_add_bio_to_users.sql
```

Then run it with Node:

```shell
node migrate.ts
```

Recent versions of Node can execute TypeScript files directly without a build step (type stripping is enabled by default in current releases; older Node 22 releases need the `--experimental-strip-types` flag), and Node ships with `node:sqlite` built in, so no npm install is required. The first run prints something like this:
```
Running migration: 000_create_schema_migrations.sql
Running migration: 001_create_users.sql
Running migration: 002_add_bio_to_users.sql
All migrations applied.
```

A new file called app.db appears in your project directory. That is the SQLite database itself, with all three migrations applied.
You can verify what was tracked by opening the database in the SQLite shell and querying schema_migrations directly:
```sql
SELECT * FROM schema_migrations ORDER BY version;
```

```
version | name                      | applied_at
--------|---------------------------|---------------------
000     | create_schema_migrations  | 2026-04-14 09:59:00
001     | create_users              | 2026-04-14 10:00:00
002     | add_bio_to_users          | 2026-04-14 10:01:00
```

All three rows are there, so the system knows exactly which migrations have been applied. If a teammate sets up a fresh database and runs node migrate.ts against it, they end up with the exact same schema and the same tracking rows. That is the whole point. Every database, whether it is your local one, staging, or production, can be brought to the same state by running the same sequence of files.
Now run the same command a second time:
```shell
node migrate.ts
```

This time it prints only:

```
All migrations applied.
```

No migrations executed, because the runner saw that 000, 001, and 002 were already in schema_migrations and skipped them. This is the behavior you want every day from here on: run the script, it brings the database up to date, and it is safe to run any number of times.
If you want to start over from scratch (perhaps you made a mistake in a migration and want to fix it before checking it in), delete app.db and run node migrate.ts again. Every migration runs from the beginning against a fresh database. Once a migration has been committed and shared with teammates or deployed, though, you should never edit it. Write a new migration that fixes the problem instead, because anyone whose database has already applied the old version will not pick up your edits.
That is the whole system. Five steps, three migration files, one runner. Everything that follows in this lesson is about using the system well: what SQLite’s ALTER TABLE can and cannot do, the workaround for schema changes it cannot handle, and how to sequence migrations that would otherwise break a running app.
What ALTER TABLE can and cannot do in SQLite
You just used ALTER TABLE ADD COLUMN to add a bio. That works great. But SQLite’s ALTER TABLE is more limited than what you might find in PostgreSQL or MySQL. Knowing exactly what it can and cannot do will save you time later in this course, because you will run into these limits more than once.
What it can do:
- `ALTER TABLE t ADD COLUMN c` adds a new column.
- `ALTER TABLE t RENAME TO new_name` renames the entire table.
- `ALTER TABLE t RENAME COLUMN c TO new_name` renames a column.
- `ALTER TABLE t DROP COLUMN c` drops a column (SQLite 3.35 and later).
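Assuming a scratch table `t`, the four supported forms look like this:

```sql
CREATE TABLE t (a TEXT, b TEXT);

ALTER TABLE t ADD COLUMN c TEXT;       -- add a column
ALTER TABLE t RENAME COLUMN b TO b2;   -- rename a column
ALTER TABLE t DROP COLUMN c;           -- drop a column (SQLite 3.35+)
ALTER TABLE t RENAME TO t2;            -- rename the whole table
```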
What it cannot do:
- Change a column’s type
- Add or remove constraints (NOT NULL, UNIQUE, FOREIGN KEY)
- Change a column’s default value
So what happens when you need to do one of those things? You need the recreate pattern.
The recreate pattern
The recreate pattern is the standard workaround for anything ALTER TABLE cannot handle in SQLite. The idea is straightforward. Create a new table with the schema you want, copy the data from the old table into the new one, drop the old table, and rename the new table to take its place.
Imagine a future version of your app where you have a books table with a year column defined as TEXT, and you realize it should be INTEGER. SQLite will not let you change a column’s type directly. Here is what that migration would look like, using a hypothetical version number 0NN to stand in for wherever it would live in your real sequence:
```sql
-- migrations/0NN_change_books_year_to_integer.sql
BEGIN;

CREATE TABLE books_new (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  title TEXT NOT NULL,
  year INTEGER,
  author_id INTEGER NOT NULL
);

INSERT INTO books_new (id, title, year, author_id)
SELECT id, title, CAST(year AS INTEGER), author_id
FROM books;

DROP TABLE books;

ALTER TABLE books_new RENAME TO books;

INSERT INTO schema_migrations (version, name)
VALUES ('0NN', 'change_books_year_to_integer');

COMMIT;
```

Let's walk through this step by step.
- **Create the new table.** `books_new` has the exact schema you want, with `year` as `INTEGER` instead of `TEXT`.
- **Copy the data.** The `INSERT INTO ... SELECT` statement moves every row from the old table to the new one. The `CAST(year AS INTEGER)` converts the text values to integers during the copy.
- **Drop the old table.** Now that the data is safely in `books_new`, the old `books` table is no longer needed.
- **Rename.** `books_new` becomes `books`. From the perspective of your application code, nothing has changed except the column type.
Because all four steps are wrapped in a transaction, if anything goes wrong partway through (say a constraint on the new table rejects a row during the copy), the entire operation rolls back and your original table is untouched. One caveat: SQLite's CAST itself never fails. Non-numeric text like 'abc' silently becomes 0, so check your data before relying on the conversion.
> [!TIP]
> The recreate pattern works for any schema change SQLite cannot do natively: changing types, adding constraints, removing constraints, changing defaults. Any time `ALTER TABLE` says no, reach for this pattern.
Backwards-compatible migrations
Not all migrations are safe. Some can break your application if you are not careful about the order of deployment.
Renaming a column. If you rename username to handle in a migration, any code that references username will break. The fix is to deploy the code that uses the new name first, then run the migration. Or, if you are the only developer, just make sure you update both at the same time.
Dropping a column. If your code does SELECT bio FROM users and you drop the bio column, the query fails. Deploy code that stops using the column first, then drop it in a later migration.
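As a sketch, the SQL half of that second step is tiny; the safety comes entirely from sequencing it after the code deploy. A hypothetical migration (the 0NN version number is a placeholder):

```sql
-- migrations/0NN_drop_bio_from_users.sql
-- Run only after deployed code no longer reads users.bio.
BEGIN;

ALTER TABLE users DROP COLUMN bio;

INSERT INTO schema_migrations (version, name)
VALUES ('0NN', 'drop_bio_from_users');

COMMIT;
```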
Adding NOT NULL without a default. This one catches people off guard. If your table already has rows and you add a NOT NULL column without a default value, the migration fails because existing rows would violate the constraint. They have no value for the new column, and NULL is not allowed.
```sql
-- BAD: existing rows have no value for genre
ALTER TABLE books ADD COLUMN genre TEXT NOT NULL;

-- GOOD: existing rows get 'unknown'
ALTER TABLE books ADD COLUMN genre TEXT NOT NULL DEFAULT 'unknown';
```

Always include a DEFAULT when adding a NOT NULL column to a table that already has data.
> [!WARNING]
> Adding a NOT NULL column without a DEFAULT to a table that already has rows will cause the migration to fail. SQLite will reject it because existing rows would have NULL in the new column, violating the constraint.
Exercises
Exercise 1: Create a migrations/ directory in your project. Add the bootstrap migration 000_create_schema_migrations.sql and the first real migration 001_create_users.sql, exactly as shown in the lesson. Save the migration runner as migrate.ts at the project root and run it with node migrate.ts. Open the resulting app.db in the SQLite shell and run SELECT * FROM schema_migrations to verify that both 000 and 001 are tracked.
Exercise 2: Write 002_add_bio_to_users.sql that adds a bio column to the users table. Run node migrate.ts again and confirm that only the new migration executes. Then run it a third time and verify it does nothing because all three migrations are already in schema_migrations.
Exercise 3: Imagine you later need to change the email column on users to have a COLLATE NOCASE constraint (so comparisons become case-insensitive). As a paper exercise, write the migration file using the recreate pattern: create a new table with the corrected schema, copy the data over, drop the old table, and rename the new one. Wrap it all in a transaction. You do not need to apply this to your database (later lessons in this course will use migration 003 for a different change), but writing it out cements the pattern.
Two questions to check your understanding:

- Why is each migration wrapped in a transaction?
- When would you use the recreate pattern instead of ALTER TABLE?
From here on out, whenever we add a table, change a column, or adjust a constraint, we write a migration. No more typing CREATE TABLE into the shell. The migration file is the source of truth. Your database matches whatever the numbered files say it should be.
Our schema is now versioned and repeatable. In the next section, we will put data into these tables and query it back out.