The Engine runs inside Docker, so the host requirements are minimal. You need:
## Required
- Docker with Compose v2. Versions tested:
  - Docker Desktop 4.x (macOS, Windows).
  - Docker CE 24+ (Linux).
- Git for cloning the repo.
- An LLM provider API key. Pick one:
  - Anthropic — `sk-ant-...`
  - OpenAI — `sk-...`
  - Google Gemini — a Google API key with the Generative Language API enabled.
- An OpenAI API key for embeddings. This is required regardless of which chat provider you choose; the Engine uses OpenAI's `text-embedding-3-small` for memory search.
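Before starting the Engine, you can sanity-check that your keys are set and look right. This is only a sketch: the variable names (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`) are illustrative; use whatever names your compose file actually reads.

```shell
# Illustrative check of provider key formats; variable names are assumptions.
check_prefix() {
  # $1 = value, $2 = expected prefix, $3 = label
  case "$1" in
    "")    echo "$3: not set" ;;
    "$2"*) echo "$3: looks ok" ;;
    *)     echo "$3: unexpected format" ;;
  esac
}
check_prefix "$ANTHROPIC_API_KEY" "sk-ant-" "Anthropic key"
check_prefix "$OPENAI_API_KEY"    "sk-"     "OpenAI key (embeddings)"
```

This only checks the prefix, not validity; a key that "looks ok" can still be revoked or over quota.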
## Optional
- `sqlite3` CLI if you want to inspect the brain database from the host.
- `jq` for pretty-printing the JSON streams in your terminal.
- An MCP server (Slack, Gmail, etc.) if you want to test external connectors. The Engine itself does not require any.
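For example, `jq` turns a raw JSON event line into something readable. The event shape below is made up for illustration; the Engine's actual stream format may differ.

```shell
# Pull one field out of a JSON line with jq (event shape is hypothetical).
echo '{"event":"token","text":"hello"}' | jq -r '.text'
```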
## Disk and memory
- Disk: the Engine image is ~1.2 GB. Brain databases start at a few MB and grow with use; budget 1 GB per active user as a comfortable upper bound for a single-user development setup.
- RAM: the Engine process is light (~200 MB resident); memory usage is dominated by the size of the LLM responses it buffers.
## Architecture

Tested on:

- macOS arm64 (Apple Silicon).
- Linux x86_64.
`docker-compose.yml` already pins the test container to `linux/amd64` because some sandbox tests require x86 syscall behavior. You don't need to do anything special; it just works.
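The relevant part of the compose file looks roughly like this; the service name here is illustrative, not the one the repo actually uses.

```yaml
# Sketch of the platform pin in docker-compose.yml (service name is hypothetical).
services:
  tests:
    platform: linux/amd64
```

On Apple Silicon, Docker runs this service under emulation, which is slower but behaviorally x86.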
## Network
Outbound HTTPS to your chosen LLM provider must be reachable. If you're behind a corporate proxy:

- Set `HTTPS_PROXY` and `HTTP_PROXY` in your shell before running `docker compose up`.
- Make sure those envs are passed into the container by the compose file (the default compose file does this).
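A minimal sketch of the proxy setup, assuming a proxy at `proxy.example.com:3128` (substitute your own):

```shell
# Hypothetical proxy address; replace with your corporate proxy.
export HTTPS_PROXY=http://proxy.example.com:3128
export HTTP_PROXY=http://proxy.example.com:3128
# Then start the stack as usual:
# docker compose up
```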
The Engine itself is served locally on `localhost:8000`.

