Waterline ships a docker-compose.yml in the repo root that runs every service you need: the API backend, the Next.js frontend, Redis, and a local Postgres database. This page walks you through starting the stack for the first time, configuring your environment, and making the changes needed before running Waterline in production.

Services

The compose file starts four services by default:
| Service | Host port | Description |
| --- | --- | --- |
| api | 8000 | FastAPI backend |
| frontend | 3001 | Next.js frontend |
| redis | 6379 | Session cache and progress results |
| postgres | 5433 | Local Postgres (when BACKEND=postgres) |
Once the stack is running, the Waterline UI is at http://localhost:3001 and the API is at http://localhost:8000. Interactive API docs are available at http://localhost:8000/docs.
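If you want to script that check rather than open a browser, a minimal readiness probe can be written with only the standard library. This is a sketch, not part of Waterline: it only confirms that each port answers HTTP, nothing more.

```python
# Quick readiness probe for the two user-facing services (a sketch; it only
# checks that each URL answers with some HTTP status -- it does not validate
# the application itself).
import urllib.request
import urllib.error

def is_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if the URL answers with any HTTP status code."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # the server responded, even if with a 4xx/5xx
    except (urllib.error.URLError, OSError):
        return False

# Example (with the stack running):
#   is_up("http://localhost:8000/docs") and is_up("http://localhost:3001")
```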

Prerequisites

  • Docker Desktop (or Docker Engine + Docker Compose plugin on Linux)
  • A GitHub OAuth app — create one at github.com/settings/developers with the callback URL http://localhost:8000/api/connect/github/callback
  • A Jira OAuth 2.0 app — create one at developer.atlassian.com (required even if you don’t plan to connect Jira right away)
  • At least one LLM provider API key (Anthropic, OpenAI, or a local Ollama instance)

Starting Waterline

1. Clone the repository

git clone https://github.com/danfranco3/waterline.git
cd waterline
2. Create your environment file

Copy the example file to .env:
make setup
Or do it manually:
cp .env.example .env
Open .env and fill in your values. At minimum, set your LLM provider key, GitHub OAuth credentials, and Jira OAuth credentials. See Environment configuration below for a minimal working example.
3. Start the stack

make dev
Docker builds the API and frontend images, then starts all four services. The first build takes a few minutes. Subsequent starts using cached images are faster.
4. Verify the services are healthy

In a second terminal, check that all containers are running:
docker compose ps
The api and frontend containers wait for Redis and Postgres to pass their health checks before starting. If you see any container in an unhealthy or exited state, check the logs:
docker compose logs api
docker compose logs postgres
5. Open the app

Navigate to http://localhost:3001. Sign in with GitHub to create your workspace.

Environment configuration

The API container reads .env at startup. Here is a minimal configuration for a local BACKEND=postgres deployment:
# App
DOMAIN=http://localhost:8000
ENVIRONMENT=development
FRONTEND_URL=http://localhost:3001

# Database (self-hosted Postgres via Docker Compose)
BACKEND=postgres
DATABASE_URL=postgresql://waterline:waterline@postgres:5432/waterline
JWT_SECRET=your-random-secret-at-least-32-chars

# Redis (provided by Docker Compose — no changes needed)
REDIS_URL=redis://redis:6379

# LLM provider — pick one
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL=claude-3-7-sonnet-latest

# Embeddings
EMBEDDING_PROVIDER=openai
EMBEDDING_MODEL=text-embedding-3-small
OPENAI_API_KEY=sk-...

# GitHub OAuth
GITHUB_CLIENT_ID=your-github-client-id
GITHUB_CLIENT_SECRET=your-github-client-secret
GITHUB_REDIRECT_URI=http://localhost:8000/api/connect/github/callback

# Jira OAuth (required even if you don't connect Jira — validated on startup)
JIRA_CLIENT_ID=your-jira-client-id
JIRA_CLIENT_SECRET=your-jira-client-secret
JIRA_REDIRECT_URI=http://localhost:8000/api/connect/jira/callback
To generate a strong JWT_SECRET, run: openssl rand -hex 32
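If you don't have openssl handy, Python's standard library produces an equivalent value:

```python
# Equivalent of `openssl rand -hex 32`: 32 cryptographically random bytes,
# rendered as 64 hex characters.
import secrets

jwt_secret = secrets.token_hex(32)
print(f"JWT_SECRET={jwt_secret}")
```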

Database initialization

When BACKEND=postgres, the Postgres container runs the schema initialization script automatically the first time the postgres_data volume is created. You don’t need to run any SQL manually. If you need to reset the database and start fresh:
docker compose down -v   # removes the postgres_data volume
docker compose up --build
Warning: docker compose down -v permanently deletes all data in the postgres_data volume, including all workspaces, tickets, and sync history. This cannot be undone.

ChromaDB persistence

Waterline uses ChromaDB to store vector embeddings for code symbols. By default, the API container writes the index to ./chroma on your host, which persists across restarts. For production deployments or setups with multiple API instances, use Chroma Cloud instead:
CHROMADB_API_KEY=your-chromadb-api-key
CHROMADB_TENANT=your-tenant
CHROMADB_DATABASE=waterline
When these variables are set, the local ./chroma directory is ignored.
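The selection logic amounts to a simple fallback. The helper below is hypothetical (it is not Waterline's actual code), but the environment variable names match the ones above:

```python
# Sketch of the local-vs-cloud fallback described above. The function is
# hypothetical; only the .env key name (CHROMADB_API_KEY) comes from the docs.
import os

def chroma_target(env: dict) -> str:
    """Return 'cloud' when Chroma Cloud credentials are configured,
    otherwise the local ./chroma directory on the host."""
    if env.get("CHROMADB_API_KEY"):
        return "cloud"
    return os.path.join(".", "chroma")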

Common commands

| Command | Description |
| --- | --- |
| make dev | Build images and start all services (foreground) |
| make stop | Stop all containers, keep data volumes |
| make logs | Tail API logs |
| make test | Run unit and eval tests |
The equivalent docker compose commands:
# Start (use cached images, run in background)
docker compose up -d

# Start (rebuild images)
docker compose up --build

# Stop (keep data)
docker compose down

# Stop and remove volumes (destroys database and ChromaDB)
docker compose down -v

# Tail API logs
docker compose logs -f api

# Open a shell in the API container
docker compose exec api bash

Production setup

The default compose configuration is designed for local development. Before running Waterline in production, make these changes:
  1. Set ENVIRONMENT=production in your .env file.
  2. Use a strong JWT_SECRET — generate one with openssl rand -hex 32.
  3. Change the default Postgres credentials — update POSTGRES_USER, POSTGRES_PASSWORD, and DATABASE_URL to non-default values.
  4. Remove host port bindings for internal services — don’t expose Postgres (5433) or Redis (6379) to the host network.
  5. Add a reverse proxy — put nginx or Traefik in front of the API (port 8000) and frontend (port 3001) containers and terminate TLS there.
  6. Update your OAuth callback URLs — replace localhost with your real domain in GITHUB_REDIRECT_URI, JIRA_REDIRECT_URI, and API_BASE_URL.
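Several of those items can be checked mechanically before you deploy. The sketch below covers items 1, 2, and 6; it is an illustrative pre-flight check, not part of Waterline:

```python
# Hypothetical pre-flight check for the production checklist above.
# The key names mirror the .env keys from this page; the function itself
# is an assumption, not Waterline's code.
def production_problems(env: dict) -> list:
    """Return a list of human-readable problems; empty means OK."""
    problems = []
    if env.get("ENVIRONMENT") != "production":
        problems.append("ENVIRONMENT is not 'production'")
    if len(env.get("JWT_SECRET", "")) < 32:
        problems.append("JWT_SECRET is shorter than 32 characters")
    for key in ("GITHUB_REDIRECT_URI", "JIRA_REDIRECT_URI", "API_BASE_URL"):
        if "localhost" in env.get(key, ""):
            problems.append(f"{key} still points at localhost")
    return problems
```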
Create a docker-compose.prod.yml override to apply production settings without modifying the base file:
services:
  api:
    restart: always
  frontend:
    restart: always
  postgres:
    ports: []       # don't expose to host
    restart: always
  redis:
    ports: []       # don't expose to host
    restart: always
Start with the override:
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

GitHub webhooks in local development

GitHub needs to reach your API over the public internet to deliver webhook events. In local development, use a tunnel to expose port 8000:
# Install ngrok from https://ngrok.com, then:
ngrok http 8000
Update your .env to use the ngrok HTTPS URL before connecting a repository:
API_BASE_URL=https://abc123.ngrok.io
GITHUB_REDIRECT_URI=https://abc123.ngrok.io/api/connect/github/callback
The ngrok URL changes each time you restart the tunnel unless you have a paid ngrok account with a reserved domain. Update .env and reconnect the repository if the URL changes.
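Since the tunnel URL changes on every free-tier restart, rewriting those two .env keys by hand gets tedious. A hedged sketch of a helper that swaps in a new base URL while preserving any callback path (the key names come from the snippet above; the function itself is hypothetical):

```python
# Hypothetical helper: given the text of a .env file and a fresh tunnel URL,
# rewrite the tunnel-dependent keys while keeping path suffixes such as the
# OAuth callback path intact.
def update_tunnel_url(env_text: str, new_url: str) -> str:
    keys = ("API_BASE_URL", "GITHUB_REDIRECT_URI")
    lines = []
    for line in env_text.splitlines():
        key, sep, value = line.partition("=")
        if sep and key in keys:
            # Recover the old scheme://host prefix and swap it for new_url.
            old_base = "/".join(value.split("/", 3)[:3])
            line = f"{key}={value.replace(old_base, new_url, 1)}"
        lines.append(line)
    return "\n".join(lines)
```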

Troubleshooting

Port already in use

If Docker reports a port conflict on startup, another process is already using port 8000, 3001, 6379, or 5433. Find and stop it, or change the host-side port mapping in docker-compose.yml (the first number in "8000:8000").

Container stays unhealthy

The api container waits for both Redis and Postgres to pass their health checks. If either stays unhealthy, check its logs:
docker compose logs postgres
docker compose logs redis
A fresh Postgres container can take 10–15 seconds to initialize on first start.

API exits immediately after starting

Missing or invalid environment variables are the most common cause. Run docker compose logs api and look for a startup error. The most frequently missing values are JWT_SECRET (when BACKEND=postgres), an LLM provider key, or the GitHub/Jira OAuth credentials.

Frontend shows a blank page or API errors

Check that FRONTEND_URL and API_BASE_URL in your .env match the URLs you're actually using. A mismatch causes CORS errors that prevent the frontend from reaching the API.