Waterline reads all configuration from environment variables at startup. To get started, copy `.env.example` to `.env` in the project root and fill in your values before running `make dev`. This page lists every variable Waterline recognizes, grouped by feature area, so you know exactly what to set and what you can safely leave at the default.
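The setup steps above look like this from the project root (assuming the `.env.example` file and `make dev` target described here):

```shell
# Copy the example config, then edit it with your values
cp .env.example .env

# Start the development stack once the variables are filled in
make dev
```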

App

| Variable | Default | Description |
| --- | --- | --- |
| `DOMAIN` | | Required. Your domain without a path (e.g. `localhost` or `getwaterline.dev`) |
| `FRONTEND_URL` | | Full URL of the frontend (e.g. `http://localhost:3001`) |
| `API_BASE_URL` | | Full URL of the API (e.g. `http://localhost:8000`) |
| `ENVIRONMENT` | `development` | Runtime environment. Set to `production` for deployed instances |
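A minimal local-development App block, using the example values from the table above (illustrative, not required values):

```shell
DOMAIN=localhost
FRONTEND_URL=http://localhost:3001
API_BASE_URL=http://localhost:8000
ENVIRONMENT=development
```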

Backend / database

| Variable | Default | Description |
| --- | --- | --- |
| `BACKEND` | `supabase` | `supabase` for hosted Supabase auth and DB, `postgres` for a local Postgres instance |
| `DATABASE_URL` | | Postgres connection URL. Required when `BACKEND=postgres` |
| `JWT_SECRET` | | Secret used to sign authentication tokens. Required when `BACKEND=postgres`. Use a random string of at least 32 characters |
| `SUPABASE_URL` | | Your Supabase project URL. Required when `BACKEND=supabase` |
| `SUPABASE_KEY` | | Supabase anon key. Required when `BACKEND=supabase` |
| `SUPABASE_SERVICE_ROLE_KEY` | | Supabase service role key. Required for admin operations when `BACKEND=supabase` |
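For example, a self-hosted setup on a local Postgres instance might look like this (the connection URL and secret are placeholders, not real values):

```shell
BACKEND=postgres
DATABASE_URL=postgresql://waterline:waterline@localhost:5432/waterline
# Any random string of at least 32 characters, e.g. from `openssl rand -hex 32`
JWT_SECRET=change-me-to-a-random-32-plus-character-string
```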

LLM providers

| Variable | Default | Description |
| --- | --- | --- |
| `LLM_PROVIDER` | | Primary LLM provider: `anthropic`, `openai`, or `ollama` |
| `ANTHROPIC_API_KEY` | | Anthropic API key |
| `ANTHROPIC_MODEL` | `claude-3-7-sonnet-latest` | Claude model used for semantic diff and general summaries |
| `OPENAI_API_KEY` | | OpenAI API key |
| `OPENAI_MODEL` | `gpt-4o` | OpenAI model used for general LLM calls |
| `OLLAMA_URL` | | Ollama base URL (e.g. `http://localhost:11434`) |
| `OLLAMA_MODEL` | | Ollama model name (e.g. `qwen2.5-coder:14b`) |
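A sketch of an Anthropic-first configuration using the values from the table above (the API key is a placeholder):

```shell
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your-key-here
# Optional: this is already the default model
ANTHROPIC_MODEL=claude-3-7-sonnet-latest
```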

Analysis LLM (ticket progress scoring)

These variables override LLM_PROVIDER for cheaper analysis tasks such as relevance scoring and criteria mapping. Leave them unset to use the primary LLM provider for everything.
| Variable | Default | Description |
| --- | --- | --- |
| `ANALYSIS_LLM_PROVIDER` | `LLM_PROVIDER` | Provider for ticket analysis tasks |
| `ANALYSIS_OPENAI_MODEL` | | OpenAI model for analysis (e.g. `gpt-4o-mini`) |
| `ANALYSIS_ANTHROPIC_MODEL` | | Anthropic model for analysis (e.g. `claude-haiku-4-5-20251001`) |
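For instance, to keep your primary provider for everything else but route analysis tasks to a cheaper OpenAI model (illustrative values):

```shell
ANALYSIS_LLM_PROVIDER=openai
ANALYSIS_OPENAI_MODEL=gpt-4o-mini
```

The matching API key for the analysis provider (`OPENAI_API_KEY` here) presumably still needs to be set.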

Symbol LLM (symbol summarization)

Symbol summarization runs one LLM call per function and class during repo indexing. Because this is the highest-volume task by far, setting a fast, cheap model here has the biggest impact on indexing cost and speed.
| Variable | Default | Description |
| --- | --- | --- |
| `SYMBOL_LLM_PROVIDER` | `ANALYSIS_LLM_PROVIDER` | Provider for symbol summarization calls |
| `SYMBOL_ANTHROPIC_MODEL` | `claude-haiku-4-5-20251001` | Anthropic model used for symbol summaries |
| `SYMBOL_OPENAI_MODEL` | `gpt-4o-mini` | OpenAI model used for symbol summaries |
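Since the defaults fall back from `SYMBOL_LLM_PROVIDER` to `ANALYSIS_LLM_PROVIDER` to `LLM_PROVIDER`, an explicit override for indexing only might look like this (illustrative):

```shell
SYMBOL_LLM_PROVIDER=anthropic
SYMBOL_ANTHROPIC_MODEL=claude-haiku-4-5-20251001
```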

Embeddings

| Variable | Default | Description |
| --- | --- | --- |
| `EMBEDDING_PROVIDER` | `openai` | Embedding provider: `openai` or `ollama` |
| `EMBEDDING_MODEL` | `text-embedding-3-small` | Embedding model name |
Anthropic does not provide an embedding API. If you set LLM_PROVIDER=anthropic and leave EMBEDDING_PROVIDER unset, Waterline automatically falls back to OpenAI for embeddings. You must provide OPENAI_API_KEY even when using Claude for all other tasks.
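Putting that note into practice, a Claude-everywhere setup still needs an OpenAI key for embeddings (both keys are placeholders):

```shell
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your-key-here
# Needed for embeddings even though OpenAI is not the primary LLM provider
OPENAI_API_KEY=sk-your-key-here
# Optional; openai is already the default
EMBEDDING_PROVIDER=openai
```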

Vector store (ChromaDB)

| Variable | Default | Description |
| --- | --- | --- |
| `CHROMA_PATH` | `./chroma` | Local directory for embedded ChromaDB storage |
| `CHROMADB_API_KEY` | | Chroma Cloud API key. When set, Waterline uses Chroma Cloud instead of local storage |
| `CHROMADB_TENANT` | | Chroma Cloud tenant ID |
| `CHROMADB_DATABASE` | | Chroma Cloud database name |
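To switch from local storage to Chroma Cloud, set the three cloud variables (all values below are placeholders):

```shell
CHROMADB_API_KEY=ck-your-key-here
CHROMADB_TENANT=your-tenant-id
CHROMADB_DATABASE=waterline
```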

Cache (Redis)

| Variable | Default | Description |
| --- | --- | --- |
| `REDIS_URL` | | Required. Redis connection URL (e.g. `redis://localhost:6379`) |

GitHub OAuth

| Variable | Default | Description |
| --- | --- | --- |
| `GITHUB_CLIENT_ID` | | GitHub OAuth app client ID |
| `GITHUB_CLIENT_SECRET` | | GitHub OAuth app client secret |
| `GITHUB_REDIRECT_URI` | | GitHub OAuth callback URL (e.g. `http://localhost:8000/api/connect/github/callback`) |
| `GITHUB_WEBHOOK_PATH` | `/api/sync/github/webhook` | Path that receives GitHub push webhook events |

Jira OAuth

| Variable | Default | Description |
| --- | --- | --- |
| `JIRA_CLIENT_ID` | | Required. Jira OAuth app client ID |
| `JIRA_CLIENT_SECRET` | | Required. Jira OAuth app client secret |
| `JIRA_REDIRECT_URI` | | Required. Jira OAuth callback URL (e.g. `http://localhost:8000/api/connect/jira/callback`) |

Slack

| Variable | Default | Description |
| --- | --- | --- |
| `SLACK_CLIENT_ID` | | Slack OAuth app client ID |
| `SLACK_CLIENT_SECRET` | | Slack OAuth app client secret |
| `SLACK_REDIRECT_URI` | | Slack OAuth callback URL |
| `SLACK_SIGNING_SECRET` | | Slack app signing secret, used to verify incoming webhook payloads |

Feature flags

| Variable | Default | Description |
| --- | --- | --- |
| `ENABLE_SYMBOL_INDEXING` | `true` | Enable function- and class-level symbol indexing during repo sync |
| `ENABLE_SYMBOL_SEARCH` | `true` | Use the symbol index when analyzing ticket progress |
| `SYMBOL_SEARCH_FALLBACK_TO_FILES` | `true` | Fall back to file-level search when symbol results are too sparse |
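For example, to opt out of symbol-level features entirely (which presumably also skips the per-symbol summarization calls during indexing, at the cost of symbol-level search):

```shell
ENABLE_SYMBOL_INDEXING=false
ENABLE_SYMBOL_SEARCH=false
```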

Search and analysis tuning

These variables control the vector search behavior that powers ticket progress analysis. The defaults work well for most setups — adjust them only if you’re seeing low-quality results or excessive LLM usage.
| Variable | Default | Description |
| --- | --- | --- |
| `SYMBOL_DISTANCE_THRESHOLD` | `0.7` | Maximum ChromaDB cosine distance for a symbol result to be considered relevant |
| `SYMBOL_TOP_K` | `30` | Number of symbol candidates retrieved per vector search |
| `MIN_SYMBOL_MATCHES_BEFORE_FALLBACK` | `3` | Minimum symbol results required before Waterline skips the file-level fallback |
| `LLM_SCORE_DISTANCE_CUTOFF` | `0.55` | Symbols with a distance above this value are dropped before LLM relevance scoring |
| `MAX_SYMBOLS_FOR_LLM_SCORING` | `20` | Maximum number of symbols sent to the LLM for relevance scoring per analysis run |
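As a sketch, if analysis is returning too few relevant symbols you might loosen the retrieval side slightly; the values below are illustrative, not recommendations:

```shell
# Admit more distant matches and retrieve more candidates per search
SYMBOL_DISTANCE_THRESHOLD=0.8
SYMBOL_TOP_K=50
```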

Caching

| Variable | Default | Description |
| --- | --- | --- |
| `PROGRESS_CACHE_TTL_HOURS` | `1` | How long ticket progress results are cached in Redis before being recomputed |
| `SYMBOL_CACHE_TTL_HOURS` | `24` | How long symbol search results are cached |

Repo size limits

| Variable | Default | Description |
| --- | --- | --- |
| `REPO_MAX_FILES` | `2000` | Maximum number of source files indexed per repository |
| `REPO_MAX_SYMBOLS` | `15000` | Maximum number of symbols indexed per repository |
These limits exist to prevent unexpectedly large LLM bills when a user connects a monorepo. Indexing stops once either limit is reached. Increase them with caution — a large symbol count means proportionally more LLM calls during the initial index.
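If you do decide to raise the limits for a large monorepo, the override is just the two variables (values below are illustrative, and remember each extra symbol means another LLM call during the initial index):

```shell
REPO_MAX_FILES=5000
REPO_MAX_SYMBOLS=40000
```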

Observability

| Variable | Default | Description |
| --- | --- | --- |
| `SENTRY_DSN` | | Sentry DSN for error tracking. Sentry is only active when this is set and `ENVIRONMENT=production` |
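A production observability block might therefore look like this (the DSN below is Sentry's documented example value, not a real project):

```shell
ENVIRONMENT=production
SENTRY_DSN=https://examplePublicKey@o0.ingest.sentry.io/0
```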