This page gets the Engine running on your machine, end to end, in development mode. By the end you’ll have:
- An Engine process listening on port 8000.
- A brain database with all migrations applied.
- A passing /health check.
- Your first successful /execute call.
Prerequisites
You need:
- Docker (for docker compose). Tested with Docker Desktop 4.x and Docker CE 24+.
- Git.
- An API key for at least one LLM provider (Anthropic, OpenAI, or Gemini) plus an OpenAI key for embeddings.
Step 1 — Clone the repo
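The clone step might look like the following. The repository URL here is a placeholder — substitute the actual engine repo:

```shell
# Placeholder URL — replace with the real engine repository
git clone https://github.com/your-org/engine.git
cd engine
```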
Step 2 — Configure your environment
Copy the example env file to .env and set, at minimum:
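A sketch of what the resulting .env might contain. The variable names below are assumptions — check the example env file for the real ones. The page requires at least one LLM provider key, an OpenAI key for embeddings, and (per the /health note below) an engine API key sent as X-Engine-Key:

```bash
# Hypothetical variable names — confirm against the shipped .env.example
ANTHROPIC_API_KEY=sk-ant-...   # or OPENAI_API_KEY / GEMINI_API_KEY
OPENAI_API_KEY=sk-...          # required for embeddings
ENGINE_API_KEY=dev-key         # sent by clients as the X-Engine-Key header
```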
Step 3 — Start the Engine
From the engine repo root, bring the stack up with docker compose.
Step 4 — Confirm it’s alive
The /health endpoint is the only route that does not require an API
key. Every other endpoint requires the X-Engine-Key header.
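In practice, steps 3 and 4 together might look like this. Port 8000 comes from this page; the compose service layout and flags are assumptions:

```shell
# Start the Engine stack in the background (a sketch; check docker-compose.yml)
docker compose up -d --build

# /health needs no API key, so a bare curl is enough
curl http://localhost:8000/health
```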
Step 5 — Send your first request
Set your dev key as an environment variable for convenience, then make your first /execute call:
-N disables curl’s buffering so you see the SSE stream as it arrives. You
should see a series of events, ending with the model’s response.
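A sketch of that first call. The request body shape and the exact key name are assumptions — check the API reference for the real payload; the -N flag and X-Engine-Key header come from this page:

```shell
# Reuse the key you set in .env; ENGINE_KEY here is just a local convenience
export ENGINE_KEY=dev-key

# -N disables curl's buffering so SSE events print as they arrive
curl -N -X POST http://localhost:8000/execute \
  -H "X-Engine-Key: $ENGINE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"input": "Hello, Engine"}'
```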
Step 6 — Watch the brain fill up
The brain database lives at /data/brain.sqlite inside the container. To
look at it from the host:
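One way to peek from the host (a sketch, assuming the compose service is named `engine` and the image ships the sqlite3 CLI):

```shell
# List the tables in the brain database from outside the container
docker compose exec engine sqlite3 /data/brain.sqlite ".tables"
```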
You’ll see tables such as episodes,
knowledge_store, social_graph_nodes, working_memory_log, and so on.
After a few /execute calls, the working memory and trajectory tables will
have rows.
Step 7 — Run the tests
The engine repo includes a test target in docker compose; tests run inside a bubblewrap sandbox.
Expect this to take a few minutes the first time.
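Running the tests might look like this. The service name `test` is a hypothetical — check docker-compose.yml for the actual target:

```shell
# Hypothetical target name — confirm against docker-compose.yml
docker compose run --rm test
```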
What’s next
- Common tasks — reset the database, seed memory, swap models, run a single test.
- Quickstart in Python — same first-request flow using the Python client.
- Architecture overview — what’s actually running inside the container you just started.
Tearing it down
Stopping the stack with docker compose down leaves your data intact. Adding
-v also removes the engine_data volume, which holds the SQLite file. Use it
when you want to start completely fresh.
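The two teardown variants side by side (assuming the volume is named engine_data, as stated above):

```shell
docker compose down      # stop containers, keep the engine_data volume
docker compose down -v   # also delete engine_data — a completely fresh start
```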
