
Use Node's Built-In SQLite For Local App Features

Not every database problem deserves a robust database server.

Sometimes you just need a small, durable place to put local state: saved links, CLI cache rows, sync checkpoints, desktop app preferences, integration-test fixtures, or a tiny job queue that only one process owns.

SQLite has always been good at that job. The newer part is that Node now ships a built-in SQLite module: node:sqlite.

That does not make Postgres obsolete. It does not replace your production database. It does delete a dependency from a whole class of small Node features where a local file is exactly the right level of infrastructure.

But it's definitely useful to have around.

Check The Runtime Before You Commit To It

The Node docs say node:sqlite was added in Node 22.5.0 and is currently marked Stability 1.2, release candidate. It was originally exposed behind an experimental flag, then moved out from behind that flag while still experimental, and later advanced to release-candidate status.

That history matters because your laptop, CI image, server image, and Electron runtime might not all be on the same Node line.

Check the runtime explicitly:

node -p "process.version"
node -e "import('node:sqlite').then(() => console.log('sqlite ok'))"

If the import fails, stop there. Upgrade Node or keep your existing SQLite package. Do not bury a runtime requirement inside a helper module and let the app fail later.
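If you would rather encode that check in the app than in a shell one-liner, a small fail-fast guard at startup does the same job. This is a sketch; requireNodeSqlite is a hypothetical helper name, not part of Node's API:

```javascript
// A fail-fast guard for app startup. requireNodeSqlite is a
// hypothetical helper name, not part of Node's API.
export async function requireNodeSqlite() {
  try {
    // Dynamic import throws on Node versions without node:sqlite.
    return await import("node:sqlite")
  } catch {
    throw new Error(
      `node:sqlite is unavailable in ${process.version}; upgrade Node or keep a SQLite package`
    )
  }
}
```

Call it once, early, so the failure happens at startup instead of deep inside a feature.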

Also note the import shape:

import sqlite, { DatabaseSync } from "node:sqlite"

The module is only available through the node: scheme, so do not import it as "sqlite".

Pick A Feature With A Local Boundary

The best use case is a feature where the data belongs to one app process or one local user.

For this example, build a tiny saved-links store:

import { DatabaseSync } from "node:sqlite"

export function openLinksDb(path = "links.db") {
  const db = new DatabaseSync(path, {
    enableForeignKeyConstraints: true,
    timeout: 5000,
    defensive: true
  })

  db.exec(`
    CREATE TABLE IF NOT EXISTS links (
      id INTEGER PRIMARY KEY,
      url TEXT NOT NULL UNIQUE,
      title TEXT NOT NULL,
      saved_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
    ) STRICT;
  `)

  return db
}

enableForeignKeyConstraints is on by default in Node's API, but I still like making the decision visible. SQLite's own documentation points out that foreign key enforcement is a per-connection setting, so your app should be deliberate about it.

timeout gives SQLite a short window to wait if another connection has the database locked. A zero timeout can turn normal local contention into avoidable failures.

defensive disables SQLite features that can deliberately corrupt the database file. Node enables it by default in current docs, and for app-owned local data files, that is a good default to preserve.

Use Prepared Statements, Not String Building

This is still SQL. Treat user input, file names, URLs, and cache keys as values, not SQL text.

export function createLinkStore(db) {
  const insertLink = db.prepare(`
    INSERT INTO links (url, title)
    VALUES (?, ?)
    ON CONFLICT(url) DO UPDATE SET
      title = excluded.title,
      saved_at = CURRENT_TIMESTAMP
  `)

  const listLinks = db.prepare(`
    SELECT id, url, title, saved_at
    FROM links
    ORDER BY saved_at DESC
    LIMIT ?
  `)

  return {
    save(link) {
      insertLink.run(link.url, link.title)
    },
    recent(limit = 20) {
      return listLinks.all(limit)
    }
  }
}

The ON CONFLICT clause matters for local stores. A user can save the same link twice. A CLI cache can refresh the same key. A sync process can see the same remote item again. When a row has a natural unique key, make repeat writes idempotent instead of turning them into noisy errors.

SQLite's UPSERT documentation describes this pattern as an INSERT that becomes an UPDATE or no-op when a uniqueness constraint would be violated.

Use createTagStore() For Repeated Queries

Node's SQLite API also includes database.createTagStore(). It creates an LRU cache for prepared statements and lets you write tagged SQL templates where interpolated values become bound parameters.

export function createTaggedLinkStore(db) {
  const sql = db.createTagStore()

  return {
    findByUrl(url) {
      return sql.get`
        SELECT id, url, title, saved_at
        FROM links
        WHERE url = ${url}
      `
    },
    search(term) {
      return sql.all`
        SELECT id, url, title, saved_at
        FROM links
        WHERE title LIKE ${"%" + term + "%"}
        ORDER BY saved_at DESC
      `
    }
  }
}

The important part is not the nicer syntax. The important part is that ${url} and ${"%" + term + "%"} are bound as values. They are not pasted into the SQL string.

For application modules with a handful of repeated queries, this is a clean default. For migration scripts or one-off setup, plain db.prepare() is still perfectly fine.

Make Multi-Step Writes Atomic

SQLite makes individual statements atomic, but your app's behavior often spans more than one statement.

Wrap that behavior in a transaction:

export function saveManyLinks(db, links) {
  const insert = db.prepare(`
    INSERT INTO links (url, title)
    VALUES (?, ?)
    ON CONFLICT(url) DO UPDATE SET
      title = excluded.title,
      saved_at = CURRENT_TIMESTAMP
  `)

  db.exec("BEGIN")
  try {
    for (const link of links) {
      insert.run(link.url, link.title)
    }
    db.exec("COMMIT")
  } catch (error) {
    db.exec("ROLLBACK")
    throw error
  }
}

If the third row fails, you do not want the first two rows silently committed unless that is an explicit product decision.

For larger modules, hide this in a helper:

export function transaction(db, fn) {
  db.exec("BEGIN")
  try {
    const result = fn()
    db.exec("COMMIT")
    return result
  } catch (error) {
    db.exec("ROLLBACK")
    throw error
  }
}

Then keep write flows easy to read:

transaction(db, () => {
  for (const link of links) {
    store.save(link)
  }
})

Respect The Synchronous API

DatabaseSync is exactly what it sounds like. Node's docs describe it as a single SQLite connection whose exposed APIs execute synchronously.

That is acceptable in plenty of places:

  • CLI commands
  • local desktop app features
  • startup-time initialization
  • migration scripts
  • test setup
  • small local caches
  • background jobs owned by one process

It is a poor fit for unbounded work inside hot HTTP routes.

If you use node:sqlite in a server, keep the queries small and predictable. Avoid long scans in request handlers. Move heavier work to a worker thread or a separate process. Measure event-loop delay before you decide it is "probably fine."
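Node can do that measurement itself: node:perf_hooks ships an event-loop delay histogram. A minimal sketch, where the sampling window and any threshold you compare against are arbitrary choices, not recommendations:

```javascript
import { monitorEventLoopDelay } from "node:perf_hooks"

// Sample event-loop delay while normal work (including your
// node:sqlite queries) runs. The histogram reports nanoseconds.
const histogram = monitorEventLoopDelay({ resolution: 20 })
histogram.enable()

// Let the loop turn for a while under representative load.
await new Promise((resolve) => setTimeout(resolve, 200))

histogram.disable()
const p99ms = histogram.percentile(99) / 1e6
console.log(`p99 event-loop delay: ${p99ms.toFixed(1)} ms`)
```

If that p99 climbs when your queries run, that is your signal to shrink the queries or move them off the main thread.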

SQLite can be fast and still block your Node process.

Back Up The File Before Migrations

A local database file is easy to treat casually until a user has real data in it.

Node exposes sqlite.backup() for copying an open database to another path:

import sqlite from "node:sqlite"

export async function backupLinksDb(db, backupPath) {
  await sqlite.backup(db, backupPath)
}

Use this before local schema migrations:

await backupLinksDb(db, `links.${Date.now()}.bak.db`)
db.exec("ALTER TABLE links ADD COLUMN notes TEXT")

That one backup file can save you from turning a small local feature into a support problem.

Test Both Memory And File Modes

In-memory databases are great for focused tests:

import assert from "node:assert/strict"
import { test } from "node:test"
import { openLinksDb } from "./links-db.js"
import { createLinkStore } from "./links-store.js"

test("saves and lists links", () => {
  const db = openLinksDb(":memory:")
  const store = createLinkStore(db)

  store.save({ url: "https://example.com", title: "Example" })

  assert.equal(store.recent()[0].url, "https://example.com")
})

But do not stop there. Add at least one test with a real temporary file. File-backed mode catches path mistakes, lock behavior, cleanup errors, and migration issues that ':memory:' will never show.

Where I Would Use It

Good fits:

  • CLI result caches
  • local-first app state
  • desktop app data files
  • build tool metadata
  • integration-test fixtures
  • small app-owned queues
  • embedded read models
  • local sync checkpoints

Bad fits:

  • shared production state across multiple servers
  • high-write concurrent workloads
  • data that needs remote database permissions
  • workflows that need replication, monitoring, point-in-time recovery, or database admin dashboards
  • request paths where blocking the event loop would be expensive

The practical rule is simple: use node:sqlite when the database is part of the app's local working set. Use a server database when the database is part of your production system's shared coordination layer.

That is not a knock on SQLite. It is the reason SQLite is useful. It gives you a real database in one file, and now Node can talk to it without another native dependency.


Walt is a software engineer, startup founder and previous mentor for a coding bootcamp. He has been creating software for the past 20+ years.