Waterline reads all configuration from environment variables at startup. To get started, copy .env.example to .env in the project root and fill in your values before running make dev. This page lists every variable Waterline recognizes, grouped by feature area, so you know exactly what to set and what you can safely leave at the default.
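The setup described above boils down to a few shell commands. As a sketch (assuming a POSIX shell; `$EDITOR` is a stand-in for whatever editor you use):

```shell
# Copy the template into place
cp .env.example .env

# Fill in at least the required values
# (DOMAIN, REDIS_URL, backend and OAuth credentials)
$EDITOR .env

# Start the development stack
make dev
```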
## App

| Variable | Default | Description |
|---|---|---|
| DOMAIN | — | Required. Your domain without a path (e.g. localhost or getwaterline.dev) |
| FRONTEND_URL | — | Full URL of the frontend (e.g. http://localhost:3001) |
| API_BASE_URL | — | Full URL of the API (e.g. http://localhost:8000) |
| ENVIRONMENT | development | Runtime environment. Set to production for deployed instances |
## Backend / database

| Variable | Default | Description |
|---|---|---|
| BACKEND | supabase | supabase for hosted Supabase auth and DB, postgres for a local Postgres instance |
| DATABASE_URL | — | Postgres connection URL. Required when BACKEND=postgres |
| JWT_SECRET | — | Secret used to sign authentication tokens. Required when BACKEND=postgres. Use a random string of at least 32 characters |
| SUPABASE_URL | — | Your Supabase project URL. Required when BACKEND=supabase |
| SUPABASE_KEY | — | Supabase anon key. Required when BACKEND=supabase |
| SUPABASE_SERVICE_ROLE_KEY | — | Supabase service role key. Required for admin operations when BACKEND=supabase |
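As a sketch, a minimal .env fragment for the self-hosted Postgres path might look like this (the connection URL and secret are placeholders, not working values):

```shell
BACKEND=postgres

# Standard Postgres connection URL (placeholder credentials)
DATABASE_URL=postgres://waterline:secret@localhost:5432/waterline

# At least 32 random characters; one way to generate them:
#   openssl rand -hex 32
JWT_SECRET=replace-with-a-random-string-of-at-least-32-characters
```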
## LLM providers

| Variable | Default | Description |
|---|---|---|
| LLM_PROVIDER | — | Primary LLM provider: anthropic, openai, or ollama |
| ANTHROPIC_API_KEY | — | Anthropic API key |
| ANTHROPIC_MODEL | claude-3-7-sonnet-latest | Claude model used for semantic diff and general summaries |
| OPENAI_API_KEY | — | OpenAI API key |
| OPENAI_MODEL | gpt-4o | OpenAI model used for general LLM calls |
| OLLAMA_URL | — | Ollama base URL (e.g. http://localhost:11434) |
| OLLAMA_MODEL | — | Ollama model name (e.g. qwen2.5-coder:14b) |
```
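Only the variables for the provider you choose need to be set. A sketch of two of the setups (the API key is a placeholder):

```shell
# Hosted Anthropic; ANTHROPIC_MODEL defaults to claude-3-7-sonnet-latest
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-placeholder

# Or a local Ollama instance instead:
# LLM_PROVIDER=ollama
# OLLAMA_URL=http://localhost:11434
# OLLAMA_MODEL=qwen2.5-coder:14b
```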
## Analysis LLM (ticket progress scoring)

These variables override LLM_PROVIDER for cheaper analysis tasks such as relevance scoring and criteria mapping. Leave them unset to use the primary LLM provider for everything.

| Variable | Default | Description |
|---|---|---|
| ANALYSIS_LLM_PROVIDER | LLM_PROVIDER | Provider for ticket analysis tasks |
| ANALYSIS_OPENAI_MODEL | — | OpenAI model for analysis (e.g. gpt-4o-mini) |
| ANALYSIS_ANTHROPIC_MODEL | — | Anthropic model for analysis (e.g. claude-haiku-4-5-20251001) |
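For instance, routing analysis to a cheaper OpenAI model while keeping Claude as the primary provider could look like the sketch below (the key is a placeholder):

```shell
LLM_PROVIDER=anthropic           # primary: semantic diff, general summaries
ANALYSIS_LLM_PROVIDER=openai     # cheaper provider for scoring and mapping
ANALYSIS_OPENAI_MODEL=gpt-4o-mini
OPENAI_API_KEY=sk-placeholder    # required once OpenAI handles analysis
```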
## Symbol LLM (symbol summarization)

Symbol summarization runs one LLM call per function and class during repo indexing. Because this is the highest-volume task by far, setting a fast, cheap model here has the biggest impact on indexing cost and speed.

| Variable | Default | Description |
|---|---|---|
| SYMBOL_LLM_PROVIDER | ANALYSIS_LLM_PROVIDER | Provider for symbol summarization calls |
| SYMBOL_ANTHROPIC_MODEL | claude-haiku-4-5-20251001 | Anthropic model used for symbol summaries |
| SYMBOL_OPENAI_MODEL | gpt-4o-mini | OpenAI model used for symbol summaries |
## Embeddings

| Variable | Default | Description |
|---|---|---|
| EMBEDDING_PROVIDER | openai | Embedding provider: openai or ollama |
| EMBEDDING_MODEL | text-embedding-3-small | Embedding model name |
Anthropic does not provide an embedding API. If you set LLM_PROVIDER=anthropic and leave EMBEDDING_PROVIDER unset, Waterline automatically falls back to OpenAI for embeddings. You must provide OPENAI_API_KEY even when using Claude for all other tasks.
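Concretely, an all-Claude setup still needs an OpenAI key for embeddings. A sketch (keys are placeholders):

```shell
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-placeholder

# EMBEDDING_PROVIDER left unset: Waterline falls back to OpenAI embeddings,
# so an OpenAI key is still required, for embeddings only
OPENAI_API_KEY=sk-placeholder
```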
## Vector store (ChromaDB)

| Variable | Default | Description |
|---|---|---|
| CHROMA_PATH | ./chroma | Local directory for embedded ChromaDB storage |
| CHROMADB_API_KEY | — | Chroma Cloud API key. When set, Waterline uses Chroma Cloud instead of local storage |
| CHROMADB_TENANT | — | Chroma Cloud tenant ID |
| CHROMADB_DATABASE | — | Chroma Cloud database name |
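A sketch of the two storage modes (the key, tenant, and database values are placeholders):

```shell
# Local embedded storage (the default)
CHROMA_PATH=./chroma

# Or Chroma Cloud: setting CHROMADB_API_KEY switches modes
# CHROMADB_API_KEY=ck-placeholder
# CHROMADB_TENANT=your-tenant-id
# CHROMADB_DATABASE=waterline
```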
## Cache (Redis)

| Variable | Default | Description |
|---|---|---|
| REDIS_URL | — | Required. Redis connection URL (e.g. redis://localhost:6379) |
## GitHub OAuth

| Variable | Default | Description |
|---|---|---|
| GITHUB_CLIENT_ID | — | GitHub OAuth app client ID |
| GITHUB_CLIENT_SECRET | — | GitHub OAuth app client secret |
| GITHUB_REDIRECT_URI | — | GitHub OAuth callback URL (e.g. http://localhost:8000/api/connect/github/callback) |
| GITHUB_WEBHOOK_PATH | /api/sync/github/webhook | Path that receives GitHub push webhook events |
## Jira OAuth

| Variable | Default | Description |
|---|---|---|
| JIRA_CLIENT_ID | — | Required. Jira OAuth app client ID |
| JIRA_CLIENT_SECRET | — | Required. Jira OAuth app client secret |
| JIRA_REDIRECT_URI | — | Required. Jira OAuth callback URL (e.g. http://localhost:8000/api/connect/jira/callback) |
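The example callback URLs above share one pattern: the API base URL plus a per-provider connect path. A local-development sketch combining the two examples:

```shell
API_BASE_URL=http://localhost:8000
GITHUB_REDIRECT_URI=http://localhost:8000/api/connect/github/callback
JIRA_REDIRECT_URI=http://localhost:8000/api/connect/jira/callback
```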
## Slack

| Variable | Default | Description |
|---|---|---|
| SLACK_CLIENT_ID | — | Slack OAuth app client ID |
| SLACK_CLIENT_SECRET | — | Slack OAuth app client secret |
| SLACK_REDIRECT_URI | — | Slack OAuth callback URL |
| SLACK_SIGNING_SECRET | — | Slack app signing secret, used to verify incoming webhook payloads |
## Feature flags

| Variable | Default | Description |
|---|---|---|
| ENABLE_SYMBOL_INDEXING | true | Enable function- and class-level symbol indexing during repo sync |
| ENABLE_SYMBOL_SEARCH | true | Use the symbol index when analyzing ticket progress |
| SYMBOL_SEARCH_FALLBACK_TO_FILES | true | Fall back to file-level search when symbol results are too sparse |
## Search and analysis tuning

These variables control the vector search behavior that powers ticket progress analysis. The defaults work well for most setups; adjust them only if you're seeing low-quality results or excessive LLM usage.

| Variable | Default | Description |
|---|---|---|
| SYMBOL_DISTANCE_THRESHOLD | 0.7 | Maximum ChromaDB cosine distance for a symbol result to be considered relevant |
| SYMBOL_TOP_K | 30 | Number of symbol candidates retrieved per vector search |
| MIN_SYMBOL_MATCHES_BEFORE_FALLBACK | 3 | Minimum symbol results required before Waterline skips the file-level fallback |
| LLM_SCORE_DISTANCE_CUTOFF | 0.55 | Symbols with a distance above this value are dropped before LLM relevance scoring |
| MAX_SYMBOLS_FOR_LLM_SCORING | 20 | Maximum number of symbols sent to the LLM for relevance scoring per analysis run |
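For example, to trade some recall for lower LLM spend, you might tighten the retrieval and scoring cutoffs. The values below are illustrative, not recommendations:

```shell
# Retrieve fewer candidates and score fewer of them
SYMBOL_TOP_K=20
MAX_SYMBOLS_FOR_LLM_SCORING=10

# Drop weaker matches earlier (lower distance = stricter)
LLM_SCORE_DISTANCE_CUTOFF=0.45
```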
## Caching

| Variable | Default | Description |
|---|---|---|
| PROGRESS_CACHE_TTL_HOURS | 1 | How long ticket progress results are cached in Redis before being recomputed |
| SYMBOL_CACHE_TTL_HOURS | 24 | How long symbol search results are cached |
## Repo size limits

| Variable | Default | Description |
|---|---|---|
| REPO_MAX_FILES | 2000 | Maximum number of source files indexed per repository |
| REPO_MAX_SYMBOLS | 15000 | Maximum number of symbols indexed per repository |

These limits exist to prevent unexpectedly large LLM bills when a user connects a monorepo. Indexing stops once either limit is reached. Increase them with caution: a large symbol count means proportionally more LLM calls during the initial index.
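For a large monorepo where the cost is acceptable, the limits can be raised explicitly. Illustrative values:

```shell
# Roughly doubles the indexing ceiling, and therefore the worst-case
# number of symbol-summary LLM calls during the initial index
REPO_MAX_FILES=4000
REPO_MAX_SYMBOLS=30000
```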
## Observability

| Variable | Default | Description |
|---|---|---|
| SENTRY_DSN | — | Sentry DSN for error tracking. Sentry is only active when this is set and ENVIRONMENT=production |