
This page gets the Engine running on your machine, end to end, in development mode. By the end you’ll have:
  • An Engine process listening on port 8000.
  • A brain database with all migrations applied.
  • A passing /health check.
  • Your first successful /execute call.
If anything below doesn’t work for you, see Troubleshooting.

Prerequisites

You need:
  • Docker (for docker compose). Tested with Docker Desktop 4.x and Docker CE 24+.
  • Git.
  • An API key for at least one LLM provider (Anthropic, OpenAI, or Gemini) plus an OpenAI key for embeddings.
You do not need Python locally. The Engine runs inside a Docker container.
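Before cloning anything, it is worth confirming the tools above are actually on your PATH; these are the standard version commands:

```shell
# Verify the prerequisites are installed and on your PATH.
docker --version         # tested with Docker Desktop 4.x and Docker CE 24+
docker compose version   # "docker compose" requires the Compose v2 plugin
git --version
```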

Step 1 — Clone the repo

cd ~/work
git clone git@github.com:septemberai/engine.git
cd engine
If you don’t have repo access yet, request it before continuing.

Step 2 — Configure your environment

Copy the example env file:
cp .env.example .env
Open .env and set, at minimum:
# Pick one chat provider and set its key + model.
LLM_PROVIDER=anthropic
LLM_API_KEY=sk-ant-...
LLM_MODEL=claude-sonnet-4-5

# Embeddings always route to OpenAI today.
OPENAI_API_KEY=sk-...

# Set a development API key for the Engine itself.
ENGINE_API_KEY=dev-engine-key
For the full list of variables, see Environment variables.
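A quick sanity check before starting the Engine can catch a missing variable early. This is a minimal sketch assuming the variable names shown above; adjust the list if your setup differs:

```shell
# Report any required variable that is missing from .env.
missing=0
for var in LLM_PROVIDER LLM_API_KEY LLM_MODEL OPENAI_API_KEY ENGINE_API_KEY; do
  if ! grep -qs "^${var}=" .env; then
    echo "missing: ${var}"
    missing=1
  fi
done
if [ "$missing" -eq 0 ]; then
  echo "all required variables set"
fi
```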

Step 3 — Start the Engine

From the engine repo root:
docker compose up engine
The first run pulls and builds the image, applies migrations, and starts Uvicorn. You’ll see something like:
engine    | INFO:     Uvicorn running on http://0.0.0.0:8000
engine    | INFO:     Application startup complete.
Leave it running. Open a second terminal for the next steps.
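If you would rather get your terminal back, the same service can run detached; these are standard Compose flags:

```shell
# Start the Engine in the background instead of attached.
docker compose up -d engine

# Tail its logs until the Uvicorn startup lines appear; Ctrl-C stops
# the log tail, not the container.
docker compose logs -f engine
```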

Step 4 — Confirm it’s alive

curl -fsS http://localhost:8000/health
You should see:
{
  "status": "ok",
  "uptime_seconds": 5.2,
  "subsystems": { ... }
}
The /health endpoint is the only route that does not require an API key. Every other endpoint requires X-Engine-Key.
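In scripts it helps to wait for the Engine to come up rather than racing it. A minimal sketch, assuming the default port 8000 from the steps above:

```shell
# Poll /health once per second until the Engine answers or we time out.
wait_for_engine() {
  tries=${1:-30}
  for _ in $(seq "$tries"); do
    if curl -fsS http://localhost:8000/health >/dev/null 2>&1; then
      echo "engine is up"
      return 0
    fi
    sleep 1
  done
  echo "engine did not come up within ${tries}s" >&2
  return 1
}

wait_for_engine 30
```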

Step 5 — Send your first request

Set the dev key you chose for ENGINE_API_KEY as a shell variable for convenience:
export ENGINE_KEY=dev-engine-key
Now make an /execute call:
curl -N -X POST http://localhost:8000/execute \
  -H "Content-Type: application/json" \
  -H "X-Engine-Key: $ENGINE_KEY" \
  -d '{"message": "Say hello in one short sentence.", "task_id": "demo-001"}'
-N disables curl’s buffering so you see the SSE stream as it arrives. You should see a series of events, ending with the model’s response.
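To look at just the payloads, you can strip the SSE framing with sed. This assumes the stream uses standard "data: ..." lines; adjust the filter if the Engine's event format differs:

```shell
# Same request as above, with the SSE "data: " prefixes stripped so only
# the event payloads remain.
curl -N -sS -X POST http://localhost:8000/execute \
  -H "Content-Type: application/json" \
  -H "X-Engine-Key: $ENGINE_KEY" \
  -d '{"message": "Say hello in one short sentence.", "task_id": "demo-002"}' \
  | sed -n 's/^data: //p'
```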

Step 6 — Watch the brain fill up

The brain database lives at /data/brain.sqlite inside the container. To look at it from the host:
docker compose exec engine sqlite3 /data/brain.sqlite '.tables'
You should see all the tables the migrations created — episodes, knowledge_store, social_graph_nodes, working_memory_log, and so on. After a few /execute calls, the working memory and trajectory tables will have rows.
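One way to watch rows accumulate is to count them before and after an /execute call; the table names here are the ones listed above:

```shell
# Count rows in a couple of tables; run again after another /execute
# call to see the numbers grow.
docker compose exec engine sqlite3 /data/brain.sqlite \
  'SELECT COUNT(*) FROM episodes;'
docker compose exec engine sqlite3 /data/brain.sqlite \
  'SELECT COUNT(*) FROM working_memory_log;'
```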

Step 7 — Run the tests

The engine repo includes a test target in docker-compose:
docker compose run --rm test pytest tests/ --tb=short
Tests run inside a privileged container so the sandbox can use bubblewrap. Expect this to take a few minutes the first time.
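While iterating on a fix you usually do not want the whole suite. Standard pytest selectors pass straight through the same target; the `health` keyword below is only a hypothetical example:

```shell
# Run a subset of the suite: everything whose test name matches "health".
docker compose run --rm test pytest tests/ -k health --tb=short
```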

Tearing it down

docker compose down
To wipe the brain database too:
docker compose down --volumes
That removes the engine_data volume, which holds the SQLite file. Use it when you want to start completely fresh.
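A common loop during development is a full reset, which is just the two commands combined:

```shell
# Drop containers and the engine_data volume, then come back up with an
# empty brain database.
docker compose down --volumes && docker compose up engine
```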