

The Engine’s permission system intercepts dangerous operations before they run and routes them to the user as Human-In-The-Loop (HITL) prompts. The user — not the model — decides whether to allow them. This page covers what gets gated, what doesn’t, and how to tune the policy for your deployment.

The principle

A model is good at intent. It’s not perfect at consequences. The permission system exists because the difference between “delete the draft I just wrote” and “delete the file containing the user’s life’s work” is one path away — and the model sometimes gets the path wrong.

What gets gated by default

The default policy intercepts:

Destructive shell

rm -rf outside ALLOWED_ROOTS, shred, recursive chmod/chown, truncating writes to a path with significant content already in it. The permission system does semantic analysis on the bash command (parses the AST, walks paths, checks against rules), not just regex matching.
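As an illustration of what a semantic check does beyond regex, the sketch below tokenizes the command and resolves each path argument against the allowed roots. It is a toy, not the Engine's implementation: a real analyzer walks the full shell AST (pipelines, subshells, variable expansion), and the function name and root list here are assumptions.

```python
import shlex
from pathlib import Path

# Hypothetical roots; mirrors the ALLOWED_ROOTS example later on this page.
ALLOWED_ROOTS = [Path("/data/brain"), Path("/data/workspace")]

def is_dangerous_rm(command: str) -> bool:
    """Flag `rm` invocations that recursively delete outside every allowed root."""
    tokens = shlex.split(command)
    if not tokens or tokens[0] != "rm":
        return False
    flags = [t for t in tokens[1:] if t.startswith("-")]
    if not any("r" in f or "R" in f for f in flags):
        return False  # not recursive
    for arg in tokens[1:]:
        if arg.startswith("-"):
            continue
        path = Path(arg).resolve()  # normalize ".." before matching
        if not any(path.is_relative_to(root) for root in ALLOWED_ROOTS):
            return True  # recursive delete outside every allowed root
    return False
```

Resolving paths before matching is the point: a regex on the raw string would miss `rm -rf /data/workspace/../../etc`.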

Filesystem writes outside known paths

A write_file to a path not under ALLOWED_ROOTS triggers a prompt. A write_file inside ALLOWED_ROOTS runs without one.
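The containment test can be sketched like this; `needs_permission` is a hypothetical helper, not part of the Engine's API:

```python
from pathlib import Path

# Hypothetical roots, matching the ALLOWED_ROOTS example below.
ALLOWED_ROOTS = [Path("/data/brain"), Path("/data/workspace")]

def needs_permission(target: str) -> bool:
    """True when a write falls outside every allowed root."""
    resolved = Path(target).resolve()  # collapse ".." so traversal can't escape
    return not any(resolved.is_relative_to(root) for root in ALLOWED_ROOTS)
```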

High-cost operations

Operations marked as high-cost in the catalog (e.g. starting a long-running provisioning task) prompt before running.

Network egress to unknown destinations

When network policy is configured, calls to destinations outside the allowlist prompt.

Anything declared in the catalog

A tool can mark itself as requiring permission via a flag in its definition:
{
  "name": "send_payment",
  "description": "...",
  "requires_permission": true,
  "input_schema": { ... }
}
The flag forces a prompt every time the tool is called, regardless of input.

What doesn’t get gated

Read-only operations don’t prompt:
  • read_file from anywhere readable.
  • grep, find, git status, git log.
  • web_search.
  • Memory queries.
Per-task caching

Once the user approves a category in a task, similar operations in the same task don’t prompt again. The cache is keyed on the operation’s signature, not the exact string. Approving “delete files in /tmp/work” once approves the whole pattern for the task.
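The signature scheme itself isn’t documented here; one plausible sketch keys approvals on the operation name plus the target’s parent directory instead of the raw string, so sibling operations share an approval:

```python
import hashlib
import posixpath

def operation_signature(operation: str, target: str) -> str:
    """Hypothetical cache key: (operation, parent directory), not the exact argument."""
    pattern = posixpath.dirname(target.rstrip("/")) or "/"
    return hashlib.sha256(f"{operation}:{pattern}".encode()).hexdigest()

approved: set[str] = set()

def already_approved(operation: str, target: str) -> bool:
    return operation_signature(operation, target) in approved

# Approving one delete in /tmp/work covers siblings for the rest of the task.
approved.add(operation_signature("delete", "/tmp/work/a.txt"))
```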

What a permission prompt looks like

When a gated operation comes up, the Engine emits:
event: hitl_request
data: {
  "request_id": "hitl-...",
  "kind": "permission",
  "question": "Permit this operation?",
  "context": {
    "operation": "bash",
    "command": "rm -rf /data/work/old/",
    "rationale": "Deletes recursively. Path /data/work/old contains 1.2 GB."
  },
  "options": ["yes", "no", "yes-and-remember"]
}
The agent loop pauses. The user (or your application) responds via:
POST /hitl/respond
{
  "task_id": "...",
  "answer": "yes"
}
After the response, the agent loop resumes. If the answer was no, the tool call returns an error to the model with a permission_denied reason; the model decides what to do next.
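On the application side, a minimal handler turns the event into the respond body. The names `handle_hitl_event` and `ask_user` are illustrative; the field shapes follow the examples above.

```python
def handle_hitl_event(task_id: str, event: dict, ask_user) -> dict:
    """Build the POST /hitl/respond body for a permission event.

    `ask_user` is whatever UI hook your application uses: it receives the
    question, context, and options, and returns the user's choice. Anything
    outside the offered options is coerced to "no" so the gate fails closed.
    """
    assert event["kind"] == "permission"
    answer = ask_user(event["question"], event["context"], event["options"])
    if answer not in event["options"]:
        answer = "no"
    return {"task_id": task_id, "answer": answer}
```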

Designing your application’s UX

The HITL request is a UI event. How you surface it shapes user trust:

Surface the rationale

Show the user why the permission system stopped. “The agent wants to run rm -rf on a 1.2 GB directory” is more useful than “permission required.”

Batch related prompts

If the agent is about to run 10 deletions, batch the prompts into a single approval: “Approve deleting these 10 files? [show list].” Don’t flood the user with 10 separate dialogs.
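A minimal batching sketch, assuming the events carry the context shown earlier (the helper name is illustrative):

```python
def summarize_batch(requests: list[dict]) -> str:
    """Render one approval dialog for several similar hitl_request payloads."""
    targets = [r["context"].get("command") or r["context"].get("operation", "?")
               for r in requests]
    listing = "\n".join(f"  - {t}" for t in targets)
    return f"Approve {len(requests)} operations?\n{listing}"
```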

Provide an “always” option

For trusted patterns, let the user approve once for the whole session or task. The Engine’s per-task cache supports this — answer "yes-and-remember" and similar operations don’t prompt again.

Don’t auto-approve

For unattended workloads where there’s no human to ask, set FEATURE_DONT_ASK=true. This makes the permission system fail-closed: gated operations error out instead of prompting. Use this for batch runs where waiting forever is worse than failing. Don’t set FEATURE_DONT_ASK in deployments where a real user is present. The whole point of the permission system is to keep the user in the loop.
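The fail-closed dispatch can be sketched as follows; this is an assumed control flow for illustration, not the Engine's source:

```python
import os

def gate(requires_permission: bool, prompt_user) -> str:
    """Decide what happens to a gated operation.

    `prompt_user` is your HITL hook; it is never called when
    FEATURE_DONT_ASK is set, so gated operations error out instead.
    """
    if not requires_permission:
        return "run"
    if os.environ.get("FEATURE_DONT_ASK", "false") == "true":
        return "permission_denied"  # fail closed: no human to ask
    return "run" if prompt_user() else "permission_denied"
```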

Configuration

ALLOWED_ROOTS

Comma-separated absolute paths. Filesystem operations outside these paths require permission (or are denied outright).
ALLOWED_ROOTS=/data/brain,/data/workspace
The narrower this list, the more the permission system fires. Aim for just the paths the agent legitimately works in.

FEATURE_DONT_ASK

"true" to disable prompts (fail-closed). Default "false".

FEATURE_DANGEROUS_REMOVAL

"true" (default) to enable detection of dangerous rm and shred patterns. Disabling this is rarely a good idea.

FEATURE_SED_VALIDATION

"1" (default) to validate sed commands before running. sed is a common vector for accidental damage; the validator catches the worst cases.

FEATURE_WINDOWS_PATH_EVASION

"true" (default) to detect Windows-style path-evasion attempts (e.g. using \\?\ to bypass length limits). Leave it enabled in production.

What the permission system can’t catch

The permission system is rules-based. It catches what the rules know about. It doesn’t catch:
  • Logic errors at the application layer. If the agent calls slack.send_message with a wrong channel ID, the permission system doesn’t know that ID is wrong — it just sees a Slack message.
  • Multi-step harm. A sequence of individually allowed operations can compose into something harmful. The permission system is per-operation; it doesn’t reason across the sequence.
  • Semantic safety. “Send a rude email” and “send a polite email” look the same to the policy. Refusal at the model layer is what catches this.
For each of those, you need defense at a different layer — input validation upstream, system-prompt refusals, business-logic checks.
