# Agents
Agents are the core of auxilia. An agent is an AI assistant defined by:
- A name, emoji, and color (the avatar shown in the sidebar and chat)
- Instructions — the system prompt sent on every message
- A set of MCP server bindings — which tools the agent can reach
- Optional subagents — specialized agents this agent can dispatch work to
- A sandbox flag — whether the agent has access to a Linux code-execution environment
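The definition above can be sketched as a small Python record. This is illustrative only; the field names are hypothetical and not auxilia's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Illustrative agent record; field names are hypothetical."""
    name: str
    emoji: str
    color: str
    instructions: str  # the system prompt sent on every message
    mcp_servers: list[str] = field(default_factory=list)    # bound MCP server IDs
    subagents: list["Agent"] = field(default_factory=list)  # optional specialists
    sandbox: bool = False  # whether the agent gets a Linux code-execution environment

researcher = Agent(
    name="Researcher",
    emoji="🔎",
    color="#4f46e5",
    instructions="You research topics and cite sources.",
    mcp_servers=["web-search"],
    sandbox=True,
)
```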
## How agents run
When a user sends a message to an agent, auxilia:
- Loads the agent configuration from Postgres
- Opens an MCP session to every bound server (reusing cached OAuth tokens per user)
- Discovers each server’s tools and filters them by the agent’s tool settings
- If the sandbox is enabled, adds the `create_sandbox`/`connect_sandbox` tools and the standard file-ops toolset
- If subagents are configured, wraps them as tools the coordinator can call
- Sends the conversation to the LLM via LangGraph with the resolved toolset
- Streams tokens back to the UI (or Slack) and executes tool calls as they appear
- Persists state as a LangGraph checkpoint in Postgres so the thread can be resumed
The underlying runtime is LangGraph's `create_agent` (or `create_deep_agent` when the sandbox is on), with middleware for human-in-the-loop (HITL) interrupts, tool-error handling, and optional subagent dispatch.
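The toolset-resolution steps above (filter by tool settings, add sandbox tools, wrap subagents) can be sketched in plain Python. Every name here is illustrative, not auxilia's actual API:

```python
# Hypothetical sketch of resolving an agent's toolset before invoking the LLM.

def resolve_toolset(agent, discovered_tools, tool_settings):
    """Filter discovered MCP tools by settings, then add sandbox and subagent tools."""
    # Keep only the tools the agent's tool settings allow.
    tools = [t for t in discovered_tools
             if tool_settings.get(t, "enabled") != "disabled"]

    # Sandbox-enabled agents also get sandbox management plus file-ops tools.
    if agent.get("sandbox"):
        tools += ["create_sandbox", "connect_sandbox", "read_file", "write_file"]

    # Subagents are exposed as tools the coordinator can call.
    tools += [f"dispatch_to_{name}" for name in agent.get("subagents", [])]
    return tools

agent = {"sandbox": True, "subagents": ["researcher"]}
tools = resolve_toolset(agent, ["search", "fetch"], {"fetch": "disabled"})
```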
## Creating an agent
- Navigate to Agents in the sidebar
- Click Create Agent
- Fill in:
  - Name — what users see in the agent list
  - Emoji and color — the avatar
  - Description — short summary shown on the agent card
  - Instructions — the system prompt
- Save, then open the agent’s configuration page to bind MCP servers
Only users with the editor or admin role can create agents. The creator becomes the owner of the agent.
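The role rule can be expressed as a one-line check. This is a hypothetical helper, not auxilia's API:

```python
def can_create_agent(role: str) -> bool:
    """Only editors and admins may create agents; the creator becomes the owner."""
    return role in {"editor", "admin"}
```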
## Threads
Conversations with an agent are organized into threads. Each thread:
- Is tied to one agent and one user
- Persists its message history via LangGraph Postgres checkpoints
- Uses the model selected when the thread was created; the model can't be switched mid-thread
- Is independent from other threads — each one has its own memory
You can regenerate the last assistant message in a thread; auxilia rewinds the LangGraph checkpoint and re-streams.
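The thread model can be sketched with an in-memory stand-in for the Postgres checkpoints (all names hypothetical):

```python
class Thread:
    """Illustrative thread: binds one agent and one user, keeps its own history."""
    def __init__(self, agent_id: str, user_id: str, model: str):
        self.agent_id = agent_id
        self.user_id = user_id
        self.model = model    # fixed at creation for this thread
        self.messages = []    # stand-in for LangGraph checkpoint state

    def regenerate_last(self):
        """Rewind past the trailing assistant message so it can be re-streamed."""
        while self.messages and self.messages[-1][0] == "assistant":
            self.messages.pop()

t = Thread("agent-1", "user-1", model="claude-sonnet-4-6")
t.messages += [("user", "hi"), ("assistant", "hello")]
t.regenerate_last()
```

Because each `Thread` owns its own `messages` list, no state leaks between threads, which mirrors the "independent memory" guarantee above.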
## Available models
You can pick a different LLM per thread. Only models whose API key is configured in the environment appear in the selector.
| Provider | Models |
|---|---|
| Anthropic | claude-haiku-4-5, claude-sonnet-4-6, claude-opus-4-6 |
| OpenAI | gpt-4o-mini |
| Google | gemini-3-flash-preview, gemini-3-pro-preview |
| DeepSeek | deepseek-chat, deepseek-reasoner |
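The selector rule (only models with a configured API key appear) can be sketched as follows; the env-var names are the providers' conventional ones and are assumptions here, not confirmed by auxilia:

```python
import os

# Assumed provider → API-key env var mapping.
PROVIDER_KEYS = {
    "Anthropic": "ANTHROPIC_API_KEY",
    "OpenAI": "OPENAI_API_KEY",
    "Google": "GOOGLE_API_KEY",
    "DeepSeek": "DEEPSEEK_API_KEY",
}

MODELS = {
    "Anthropic": ["claude-haiku-4-5", "claude-sonnet-4-6", "claude-opus-4-6"],
    "OpenAI": ["gpt-4o-mini"],
    "Google": ["gemini-3-flash-preview", "gemini-3-pro-preview"],
    "DeepSeek": ["deepseek-chat", "deepseek-reasoner"],
}

def available_models(env=os.environ):
    """Return only models whose provider key is configured in the environment."""
    return [m for provider, names in MODELS.items()
            if env.get(PROVIDER_KEYS[provider])
            for m in names]

selectable = available_models({"OPENAI_API_KEY": "sk-test"})
```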
## Where to go next
- Configuration — editing instructions, avatar, MCP bindings
- Permissions — sharing agents across the workspace
- Subagents — dispatching work to specialized agents
- Sandbox — enabling code execution
- Tool Settings — approval rules per tool