Transactions
You already know the basics. A transaction groups multiple SQL statements into one atomic unit, wrapped with BEGIN / COMMIT / ROLLBACK. We covered that back in Section 4, and every migration in this course has been using it from the start. This lesson is about everything that was out of scope back then: what SQLite does behind the scenes when you skip BEGIN entirely, how a single transaction makes bulk writes dramatically faster, how to write transactions from TypeScript without remembering ROLLBACK on every error path, how they keep denormalized data in sync, and what happens when you try to nest them.
It is a big list, but each section is small and each one shows up in real production code. By the end of the lesson the tx helper will be in your vocabulary, the bulk-insert speedup will feel obvious, and you will know why BEGIN inside BEGIN is not a thing in SQLite.
Implicit transactions
Here is something that surprises most beginners: every single SQL statement in SQLite is already running inside a transaction. If you just run a plain INSERT without writing BEGIN first, SQLite automatically wraps it in its own tiny transaction behind the scenes:
-- what you write
INSERT INTO books (title) VALUES ('Dune');
-- what SQLite actually does
BEGIN;
INSERT INTO books (title) VALUES ('Dune');
COMMIT;

This means every individual statement is atomic on its own. A single INSERT either fully succeeds or fully fails. You only need explicit transactions when you want multiple statements to be atomic together.
This implicit transaction behavior also has a performance implication, which we will get to shortly.
Transactions in app code
In application code, BEGIN and COMMIT are just strings we pass to db.exec. That works, but it gets repetitive fast, and you have to remember ROLLBACK on every error path. A tiny helper handles the ceremony for you:
export function tx<T>(fn: () => T): T {
  db.exec("BEGIN");
  try {
    const result = fn();
    db.exec("COMMIT");
    return result;
  } catch (err) {
    db.exec("ROLLBACK");
    throw err;
  }
}

Now a transaction is a small wrapper around the statements that should run together:
const transfer = (from: string, to: string, amount: number) =>
  tx(() => {
    db.prepare("UPDATE accounts SET balance = balance - ? WHERE id = ?").run(amount, from);
    db.prepare("UPDATE accounts SET balance = balance + ? WHERE id = ?").run(amount, to);
  });

transfer("alice", "bob", 100);

If either UPDATE throws an error, tx rolls back and rethrows the original error. We do not need to remember ROLLBACK in a catch block at every call site. The helper does it for us.
Every serious SQLite codebase ends up with some version of this helper. Keep it in your db module next to the connection itself, use it everywhere you need atomicity, and you never have to think about BEGIN / COMMIT at a call site again.
A more involved migration
The migrations we have written so far are one or two statements each. Once you start making changes that add a column and then populate it from existing data, the BEGIN / COMMIT wrapper starts to earn its keep. A mid-run failure between the schema change and the data backfill would leave the table in a half-migrated shape that you would have to clean up by hand. The transaction prevents that:
-- migrations/015_add_books_status.sql
BEGIN;
ALTER TABLE books ADD COLUMN status TEXT NOT NULL DEFAULT 'draft';
UPDATE books SET status = 'published' WHERE id IN (1, 2);
UPDATE books SET status = 'archived' WHERE id = 3;
INSERT INTO schema_migrations (version, name)
VALUES ('015', 'add_books_status');
COMMIT;

If any UPDATE fails, the ALTER TABLE is rolled back too. Nothing sticks, nothing needs manual cleanup, and the migration is safe to fix and re-run.
The same shape works from code when you run migrations programmatically. The tx helper maps directly onto what the SQL file was doing:
tx(() => {
  db.exec("ALTER TABLE books ADD COLUMN status TEXT NOT NULL DEFAULT 'draft'");
  db.exec("UPDATE books SET status = 'published' WHERE id IN (1, 2)");
  db.exec("UPDATE books SET status = 'archived' WHERE id = 3");
});

Transactions for denormalized data
Remember the denormalized review_count from the normalization section? When we store a count alongside the actual data, inserting a review AND incrementing the count must happen together:
function addReview(userId: number, bookId: number, rating: number, body: string) {
  return tx(() => {
    db.prepare("INSERT INTO reviews (user_id, book_id, rating, body) VALUES (?, ?, ?, ?)").run(
      userId,
      bookId,
      rating,
      body,
    );
    db.prepare("UPDATE books SET review_count = review_count + 1 WHERE id = ?").run(bookId);
  });
}

If the UPDATE fails, the INSERT is rolled back. The count and the actual reviews always stay in sync. Without a transaction, we could end up with a review in the table but a stale count on the book, and the two would slowly drift apart over time.
Performance
This is the part most people miss. Transactions are not just about correctness. They are also about speed.
Remember how every individual statement runs inside an implicit transaction? Each of those implicit transactions includes a disk sync. SQLite writes the data to disk and waits for the operating system to confirm it landed safely. This is what makes SQLite durable: even if the power goes out, committed data is safe.
But disk syncs are slow. Each one takes roughly 5 to 10 milliseconds. If we insert 1,000 rows one at a time, that is 1,000 implicit transactions, which means 1,000 disk syncs. At 5ms each, that is 5 seconds for something that should be instant.
Wrapping all 1,000 inserts in a single explicit transaction means just one disk sync at the end:
// Slow: 1,000 implicit transactions, 1,000 disk syncs
for (const book of books) {
  db.prepare("INSERT INTO books (title, genre, author_id) VALUES (?, ?, ?)").run(
    book.title,
    book.genre,
    book.author_id,
  );
}

// Fast: 1 explicit transaction, 1 disk sync
tx(() => {
  for (const book of books) {
    db.prepare("INSERT INTO books (title, genre, author_id) VALUES (?, ?, ?)").run(
      book.title,
      book.genre,
      book.author_id,
    );
  }
});

On a typical machine, this is the difference between seconds and milliseconds. The data is identical in both cases. The only difference is when SQLite syncs to disk.
Any time we are doing bulk inserts, bulk updates, or bulk deletes, we should wrap them in a transaction. It is one of the easiest performance wins in SQLite.
Nesting is not real
One thing to know: SQLite does not support nested transactions. If you try to call BEGIN inside an already-open transaction, you get an error:
BEGIN;
BEGIN; -- Error: cannot start a transaction within a transaction

If you genuinely need nesting (say, you want part of a transaction to be rollback-able while keeping the rest), SQLite offers savepoints. Savepoints are like bookmarks inside a transaction that you can roll back to without unwinding the whole thing. We will not go deep on them here, but know they exist if you ever hit this limitation.
The tx helper above does not handle nesting automatically. If you call tx from inside another tx, the inner BEGIN will fail because a transaction is already open. If you need composable transactions, you can extend the helper to detect whether a transaction is already active (via db.isTransaction) and use a SAVEPOINT on that branch. Most apps never run into this, so starting with the simple version is almost always the right call.
Exercises
Exercise 1: Create a transaction that inserts an order and its items. Intentionally cause the second insert to fail (use a nonexistent foreign key, for example). Verify that neither the order nor any items exist after the failure.
Exercise 2: Insert 1,000 rows without a transaction and measure how long it takes. Then insert 1,000 rows wrapped in a single transaction. Compare the times. The difference should be dramatic.
Exercise 3: Write a migration wrapped in a transaction. Add a column and populate it with UPDATE statements. Introduce an intentional error partway through and verify the column addition is rolled back.
Exercise 4: Use the tx helper to write a function that inserts a review and increments a review_count column. Test it with a valid book ID and an invalid one. Confirm the count stays correct.
Transactions protect our data and speed up bulk writes. There are also a handful of connection-time settings that make a real difference in production, especially once more than one client is hitting the database at once. In the next lesson, we will look at PRAGMAs.