Waterline supports two database backends: a Postgres instance you run yourself (including the one bundled in Docker Compose) and a hosted Supabase project. You select the backend with the BACKEND environment variable. This page explains how to set up each option, what gets stored in the database, and how to manage your data over time.

Choosing a backend

|                    | BACKEND=postgres                      | BACKEND=supabase                         |
| ------------------ | ------------------------------------- | ---------------------------------------- |
| Setup              | Zero config with Docker Compose       | Manual schema run in Supabase SQL editor |
| Auth               | JWT-based, handled by the API         | Supabase Auth (GoTrue)                   |
| Row-level security | No — API layer enforces access        | Yes — RLS on all workspace tables        |
| Best for           | Fully local or air-gapped deployments | Easier auth management and a managed DB  |
If you’re running Waterline entirely on your own infrastructure and want the simplest setup, use BACKEND=postgres. If you prefer a managed database with built-in auth, use BACKEND=supabase.

Setting up your database

When you use BACKEND=postgres with Docker Compose, the database is initialized automatically — no manual steps required.

How initialization works

When you first start the stack with docker compose up, the Postgres container runs the bundled initialization script automatically. All tables, indexes, and the local users table for JWT auth are created in a single pass — you don’t need to run any SQL manually.
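If you want to confirm the init script ran, you can check psql's table listing. A small sketch, assuming the default Compose service name and credentials; the table names checked here (users, workspaces) are examples taken from this page, and schemas/postgres_init.sql is the authoritative list:

```shell
# Check that expected tables appear in a psql '\dt' listing.
# "users" and "workspaces" are examples; see schemas/postgres_init.sql
# for the full set of tables the init script creates.
check_tables() {
  listing="$1"
  for t in users workspaces; do
    echo "$listing" | grep -q "$t" || { echo "missing: $t"; return 1; }
  done
  echo "all expected tables present"
}

# Usage, once the stack is up:
# check_tables "$(docker compose exec postgres psql -U waterline -d waterline -c '\dt')"
```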

Environment variables

Add these to your .env:
BACKEND=postgres
DATABASE_URL=postgresql://waterline:waterline@postgres:5432/waterline
JWT_SECRET=your-random-string-at-least-32-chars
The default credentials (waterline/waterline) match the Postgres service definition in docker-compose.yml.
Change the default Postgres credentials before running in production. Update POSTGRES_USER, POSTGRES_PASSWORD, and DATABASE_URL to non-default values and regenerate JWT_SECRET with openssl rand -hex 32.
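One way to generate those non-default values, sketched with the openssl command mentioned above. The heredoc is commented out so nothing is written until you have reviewed it:

```shell
# Generate a random 64-character hex string (32 bytes).
gen_secret() { openssl rand -hex 32; }

JWT_SECRET=$(gen_secret)
POSTGRES_PASSWORD=$(gen_secret)

# Review, then append to .env (and mirror the password in docker-compose.yml):
# cat >> .env <<EOF
# JWT_SECRET=$JWT_SECRET
# POSTGRES_PASSWORD=$POSTGRES_PASSWORD
# DATABASE_URL=postgresql://waterline:$POSTGRES_PASSWORD@postgres:5432/waterline
# EOF
```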

Connecting an external Postgres instance

If you want to use your own Postgres server instead of the bundled container, point DATABASE_URL at it:
DATABASE_URL=postgresql://myuser:mypassword@my-postgres-host:5432/waterline
You’ll need to run schemas/postgres_init.sql manually against that database on first setup:
psql "$DATABASE_URL" -f schemas/postgres_init.sql
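If you script first-time setup, you may want to make that schema run idempotent. A hedged sketch, assuming the init script creates its tables in the public schema; the psql calls are commented so you can adapt them:

```shell
# Returns success when the database has no user tables yet.
needs_init() { [ "$1" -eq 0 ]; }

# count=$(psql "$DATABASE_URL" -tAc \
#   "select count(*) from pg_tables where schemaname = 'public'")
# if needs_init "$count"; then
#   psql "$DATABASE_URL" -f schemas/postgres_init.sql
# fi
```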

What Waterline stores

Waterline stores several categories of data in your database. The largest tables are the code symbol index and cached ticket results, which grow proportionally with repository size and analysis history.
| Category               | What's stored                                                             |
| ---------------------- | ------------------------------------------------------------------------- |
| Workspaces and members | Users, workspaces, and access roles                                       |
| Connected repositories | Which repos are linked to which workspace                                 |
| Code symbol index      | Function and class summaries with embeddings — one row per indexed symbol |
| Sync state             | The last-processed commit SHA per repo (used for incremental sync)        |
| Ticket progress cache  | Cached analysis results per ticket and repository                         |
| Issue data             | Fetched Jira and GitHub Issues content                                    |
For large codebases, the code symbol index is typically the biggest storage consumer. A repository with 10,000 indexed symbols will store 10,000 text summaries plus their associated metadata. Plan your storage accordingly.
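For a rough capacity estimate, multiply symbol count by an assumed per-row size; for real numbers, ask Postgres for per-table sizes. Both are sketches, and the ~8 KB per symbol figure is an assumption, not a measured value:

```shell
# Rough size estimate: symbols x assumed bytes-per-row, reported in whole MB.
estimate_mb() { echo $(( $1 * $2 / 1048576 )); }

# e.g. 10,000 symbols at an assumed ~8 KB each (summary + embedding + metadata):
estimate_mb 10000 8192    # prints 78

# For actual usage, query Postgres directly:
# psql "$DATABASE_URL" -c "select relname,
#   pg_size_pretty(pg_total_relation_size(relid))
#   from pg_statio_user_tables
#   order by pg_total_relation_size(relid) desc limit 10;"
```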

Backups

Postgres (Docker Compose)

The database lives in the postgres_data Docker volume. To create a backup:
docker compose exec postgres pg_dump -U waterline waterline > waterline_backup.sql
To restore from a backup:
docker compose exec -T postgres psql -U waterline waterline < waterline_backup.sql
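To automate the backup command above, a minimal cron-friendly wrapper might look like this; the filename pattern and 14-day retention are assumptions you can adjust:

```shell
# Timestamped backup filename, e.g. waterline_backup_20240101.sql
backup_name() { echo "waterline_backup_$(date +%Y%m%d).sql"; }

# Run the dump, then prune backups older than 14 days (assumed retention):
# docker compose exec postgres pg_dump -U waterline waterline > "$(backup_name)"
# find . -name 'waterline_backup_*.sql' -mtime +14 -delete
```

A crontab entry such as `0 3 * * * /path/to/backup.sh` would run it nightly.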
Schedule regular backups with cron or your infrastructure's snapshot tooling. Note that the postgres_data volume is lost if you run docker compose down -v.

Supabase

Supabase automatically creates daily backups on paid plans. You can also take a manual backup from Project Settings → Backups in the Supabase dashboard, or export your data using the Supabase CLI.

Resetting the database

Remove the postgres_data volume and restart. Docker will re-run the init script on the next start:
docker compose down -v
docker compose up --build
This permanently deletes all data. There is no undo.

Schema migrations

Waterline does not use an automatic migration runner. When you upgrade Waterline to a new version, check the changelog for any schema changes that need to be applied manually.
  • Postgres: look for new or updated SQL files in the schemas/ directory and run them against your Postgres instance with psql "$DATABASE_URL" -f schemas/<migration>.sql
  • Supabase: run migration SQL files via the SQL Editor in your Supabase dashboard, in the order listed in the changelog
Always back up your database before applying schema changes, especially in production.
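That backup-then-migrate routine can be scripted. A hedged sketch for the Postgres backend: the pre-backup filename pattern is an assumption, and pg_dump/psql are expected on PATH:

```shell
#!/bin/sh
# Back up, then apply a single migration file.
# Usage: migrate.sh schemas/<migration>.sql
set -e

# Timestamped pre-migration backup name, e.g. pre_migration_20240101120000.sql
pre_backup_name() { echo "pre_migration_$(date +%Y%m%d%H%M%S).sql"; }

apply_migration() {
  [ -n "$1" ] || { echo "usage: apply_migration <migration.sql>" >&2; return 1; }
  pg_dump "$DATABASE_URL" > "$(pre_backup_name)"   # backup first
  psql "$DATABASE_URL" -f "$1"                     # then apply
}
```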