Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
01 / Skill
BrainBlend-AI

atomic-agents

Quality
9.0

Atomic Agents is a lightweight, modular framework for constructing AI agent pipelines and applications using single-purpose, reusable, and composable components. It excels when developers need predictable, consistent, and maintainable AI systems, allowing the application of traditional software engineering principles to agent development.

USP

Unlike other agent frameworks, Atomic Agents prioritizes atomicity, predictability, and control, enabling developers to build robust AI applications with familiar software engineering best practices. It allows fine-tuning individual compon…

Use cases

  1. Building modular AI agent pipelines
  2. Creating predictable AI applications
  3. Developing single-purpose, reusable AI components
  4. Scaffolding new Atomic Agents projects
  5. Injecting dynamic context into agent prompts

Detected files (8)

  • claude-plugin/atomic-agents/skills/framework/SKILL.md (9651 bytes)
    ---
    name: framework
    description: Guide for the Atomic Agents Python framework — schemas, agents, tools, context providers, prompts, orchestration, and provider configuration. Use when code imports from `atomic_agents`, defines an `AtomicAgent`, `BaseTool`, or `BaseIOSchema`, or the user asks about multi-agent orchestration or LLM-provider wiring in an atomic-agents project.
    ---
    
    # Atomic Agents Framework
    
    Atomic Agents is a lightweight Python framework for building LLM applications with typed, structured input and output. It layers on top of [Instructor](https://python.useinstructor.com) and Pydantic so every interaction between user, agent, tool, and context is a validated schema.
    
    This skill orients Claude on the framework and routes to focused reference files as the task requires.
    
    ## Core abstractions
    
    | Concept | Class | Role |
    |---|---|---|
    | Schema | `BaseIOSchema` | Typed input/output contract — every agent/tool I/O is one |
    | Agent | `AtomicAgent[In, Out]` | LLM-backed transformer from input schema to output schema |
    | Config | `AgentConfig` | Wires client, model, history, prompt, roles, API params |
    | Prompt | `SystemPromptGenerator` | Three-section prompt: background, steps, output_instructions |
    | History | `ChatHistory` | Conversation state, serializable, token-counted |
    | Tool | `BaseTool[In, Out]` | Deterministic capability the agent can invoke |
    | Context | `BaseDynamicContextProvider` | Dynamic section injected into the system prompt at runtime |
    
    All communication between these uses `BaseIOSchema` subclasses with **docstring-required** descriptions.
    
    ## Canonical imports
    
    ```python
    from atomic_agents import (
        AtomicAgent, AgentConfig,
        BasicChatInputSchema, BasicChatOutputSchema,
        BaseIOSchema, BaseTool, BaseToolConfig,
    )
    from atomic_agents.context import (
        ChatHistory, Message,
        SystemPromptGenerator, BaseDynamicContextProvider,
    )
    # Optional: MCP interop
    from atomic_agents.connectors.mcp import fetch_mcp_tools, MCPTransportType
    ```
    
    Do not use legacy paths like `atomic_agents.lib.base.*` or `atomic_agents.agents.base_agent` — those were retired. Import from the top-level package where possible.
    
    ## Minimum viable agent
    
    ```python
    import os, instructor, openai
    from atomic_agents import AtomicAgent, AgentConfig, BasicChatInputSchema, BasicChatOutputSchema
    from atomic_agents.context import ChatHistory
    
    client = instructor.from_openai(openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"]))
    
    agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema](
        config=AgentConfig(
            client=client,
            model="gpt-5-mini",
            history=ChatHistory(),
        )
    )
    
    reply = agent.run(BasicChatInputSchema(chat_message="Hello"))
    print(reply.chat_message)
    ```
    
    `AtomicAgent` and `BaseTool` use PEP 695 generics — the type parameters carry runtime information, so write them explicitly and keep them accurate. Full runnable version: `atomic-examples/quickstart/quickstart/1_0_basic_chatbot.py`.
    
    ## Targeted creation skills
    
    For the four most common authoring tasks, dedicated atomic skills give a step-by-step workflow (clarify → write → verify → hand off) instead of just reference material. Prefer them when the user is actively building something specific.
    
    | User intent | Atomic skill |
    |---|---|
    | "create a schema" / "design the input/output schema" | `atomic-agents:create-atomic-schema` |
    | "create an agent" / "add another agent" / "wire up an `AtomicAgent`" | `atomic-agents:create-atomic-agent` |
    | "add a tool" / "wrap an API as a tool" / "build a `BaseTool`" | `atomic-agents:create-atomic-tool` |
    | "add a context provider" / "inject X into the prompt" / "wire up RAG" | `atomic-agents:create-atomic-context-provider` |
    
    These skills auto-trigger on the matching phrasing. The reference files below are what they (and you) load for deeper material.
    
    ## Decision routing
    
    Pick the reference file that matches the task. Each is loaded only when read.
    
    | Task | Reference |
    |---|---|
    | Design or validate an input/output schema | [references/schemas.md](references/schemas.md) |
    | Build, configure, or run an agent | [references/agents.md](references/agents.md) |
    | Write a tool the agent will invoke | [references/tools.md](references/tools.md) |
    | Inject dynamic data into the system prompt | [references/context-providers.md](references/context-providers.md) |
    | Structure the system prompt | [references/prompts.md](references/prompts.md) |
    | Coordinate multiple agents | [references/orchestration.md](references/orchestration.md) |
    | Manage conversation state and multi-agent memory | [references/memory.md](references/memory.md) |
    | Register telemetry, retries, or logging | [references/hooks.md](references/hooks.md) |
    | Swap LLM provider or configure roles | [references/providers.md](references/providers.md) |
    | Decide the project layout or `pyproject.toml` | [references/project-structure.md](references/project-structure.md) |
    | Write tests for agents and tools | [references/testing.md](references/testing.md) |
    
    When a concept is unclear, start from the user's verb: *create a schema* → `create-atomic-schema` skill, *hook up a weather API* → `create-atomic-tool` skill, *inject user name into prompt* → `create-atomic-context-provider` skill, *route between agents* → orchestration reference.
    
    ## Working style
    
    Follow these defaults unless the project says otherwise. The reference files go deeper on each.
    
    **Schemas are the contract.** Design the `BaseIOSchema` pair before writing the agent. Field descriptions flow into the LLM prompt via Instructor, so write them for the model, not just the developer. Every subclass needs a non-empty docstring — the framework enforces this at class-definition time.
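    
    Docstring enforcement at class-definition time can be pictured with a plain-Python sketch (illustrative only, not the framework's actual implementation):
    
    ```python
    # Hypothetical stdlib sketch of docstring enforcement at class-definition
    # time. Illustrative only -- not atomic_agents' real mechanism.
    class DocstringRequired:
        def __init_subclass__(cls, **kwargs):
            super().__init_subclass__(**kwargs)
            # __init_subclass__ runs when the subclass is defined, so a
            # missing docstring fails immediately, not at first use.
            if not (cls.__doc__ and cls.__doc__.strip()):
                raise TypeError(f"{cls.__name__} needs a non-empty docstring")
    
    
    class GoodSchema(DocstringRequired):
        """Summarizes one support ticket for the agent."""
    
    
    try:
        class BadSchema(DocstringRequired):  # no docstring -> raises here
            pass
    except TypeError as exc:
        print(exc)  # BadSchema needs a non-empty docstring
    ```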
    
    **System prompts have three sections.** Use `SystemPromptGenerator(background=..., steps=..., output_instructions=...)`. Put persona in `background`, the ordered procedure in `steps`, and output-format rules in `output_instructions`. The agent falls back to a sensible default when omitted.
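    
    The three-list shape can be visualized with a toy renderer; the headings below are placeholders, not the framework's real section titles:
    
    ```python
    # Toy renderer for the three-section prompt shape. Section headings here
    # are placeholders; SystemPromptGenerator's real output may differ.
    def render_prompt(background: list[str], steps: list[str],
                      output_instructions: list[str]) -> str:
        def section(title: str, lines: list[str]) -> str:
            return title + "\n" + "\n".join(f"- {line}" for line in lines)
    
        return "\n\n".join([
            section("Background:", background),                    # persona
            section("Steps:", steps),                              # procedure
            section("Output instructions:", output_instructions),  # format rules
        ])
    
    
    print(render_prompt(
        ["You are a concise research assistant."],
        ["Read the question.", "Answer it."],
        ["Reply under 100 words."],
    ))
    ```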
    
    **Wrap the provider client with Instructor.** Always. `instructor.from_openai(...)`, `instructor.from_anthropic(...)`, `instructor.from_genai(...)` — without this the agent cannot enforce output schemas.
    
    **Use `model_api_parameters` for provider knobs.** `temperature`, `max_tokens`, `reasoning_effort`, etc. live in the `model_api_parameters` dict on `AgentConfig`, not on the agent itself.
    
    **Errors and retries flow through hooks.** Register handlers for `parse:error`, `completion:error`, `completion:last_attempt` rather than wrapping `run()` in try/except. See `references/hooks.md`.
    
    **Tools return the output schema on success.** Failure should surface as validation errors or typed result schemas the caller pattern-matches on — don't raise through `run()` unless the failure is truly unrecoverable.
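    
    The success/failure shape can be sketched with stdlib dataclasses; a real tool would subclass `BaseIOSchema`, and the names and `status` discriminator here are illustrative:
    
    ```python
    # Stdlib sketch of the typed-result pattern. A real tool's variants would
    # subclass BaseIOSchema; names and the 'status' field are illustrative.
    from dataclasses import dataclass
    from typing import Literal
    
    
    @dataclass
    class FetchSuccess:
        status: Literal["ok"]
        body: str
    
    
    @dataclass
    class FetchFailure:
        status: Literal["error"]
        reason: str
    
    
    FetchResult = FetchSuccess | FetchFailure
    
    
    def run_fetch(url: str) -> FetchResult:
        # Failure surfaces as a typed variant, not an exception through run().
        if not url.startswith("https://"):
            return FetchFailure(status="error", reason="only https is supported")
        return FetchSuccess(status="ok", body=f"<fetched {url}>")
    
    
    match run_fetch("http://example.com"):
        case FetchSuccess(body=body):
            print("got", body)
        case FetchFailure(reason=reason):
            print("failed:", reason)  # failed: only https is supported
    ```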
    
    ## When the user is starting from nothing
    
    Scaffolding a brand-new project (fresh directory, `pyproject.toml`, first agent) is handled by the sibling skill `new-app`. Suggest it when the user says "new project", "start from scratch", or equivalent.
    
    ## When the user wants to understand an existing codebase
    
    Delegate to the `atomic-explorer` subagent when the project has more than a handful of atomic-agents files and the user asks to "explore", "map", "understand how X works", or similar. The subagent reads the relevant files in isolated context and returns a compact architecture map (agents, tools, schemas, context providers, orchestration, essential-reading list). Invoke via the `Task` tool with the scope (project root, module path, or feature) in the prompt.
    
    For a small project (a single `main.py` + one or two agents), reading the files directly in the main thread is fine — the isolation upside is thin.
    
    ## When the user wants a review
    
    Delegate to the `atomic-reviewer` subagent — do not review in the main thread. The subagent runs in isolated context with read-only tools, keeping the review's file exploration out of the parent conversation. Invoke it via the `Task` tool with the scope (diff, paths, or module) in the prompt. Review findings return as a single structured report the parent thread can act on.
    
    ## Versioning and compatibility
    
    - Python 3.12+ (PEP 695 generics are used internally).
    - Instructor 1.14+ with provider extras (`instructor[openai]`, `instructor[anthropic]`, etc.) — the workspace uses Instructor's extras to pull provider SDKs.
    - Pydantic 2.
    - For MCP interop, see `atomic_agents.connectors.mcp` — `fetch_mcp_tools`, `MCPFactory`, `MCPTransportType` are stable.
    
    ## Anti-patterns (surface these in review)
    
    - Plain `BaseModel` instead of `BaseIOSchema`.
    - Missing docstrings on `BaseIOSchema` subclasses (framework raises at import).
    - `Field(..., description="...")` missing — Instructor leans on descriptions for prompt generation.
    - Raw provider client passed as `AgentConfig.client` (must be wrapped in Instructor). Raw SDK use for embeddings, image generation, audio, or moderation is fine — the framework only covers structured chat/completions.
    - Hardcoded API keys instead of env vars.
    - Unbounded `ChatHistory` on long-running sessions.
    - Blocking I/O inside `BaseDynamicContextProvider.get_info()` — it runs on every `agent.run()`.
    - Catching `ValidationError` to hide schema problems instead of fixing descriptions or constraints.
    - `MCPTransportType.STREAMABLE_HTTP` — the correct value is `HTTP_STREAM`.
    - `ChatHistory.load(...)` called as a classmethod — it is an instance method that mutates self.
    
    For deeper guidance load the relevant reference file above. For code-review runs, delegate to the `atomic-reviewer` subagent.
    
  • claude-plugin/atomic-agents/skills/create-atomic-context-provider/SKILL.md (7627 bytes)
    ---
    name: create-atomic-context-provider
    description: Build a `BaseDynamicContextProvider` that injects a named, titled block into an agent's system prompt at every `run()` — current time, user identity, retrieved RAG docs, session state, cached DB schema. Use when the user asks to "add a context provider", "inject X into the prompt", "give the agent dynamic context", "wire up RAG", "make a `BaseDynamicContextProvider`", or runs `/atomic-agents:create-atomic-context-provider`.
    ---
    
    # Create an Atomic Agents Context Provider
    
    A context provider injects a named, titled block into the agent's system prompt at every `run()`. The base prompt stays static; the context is what changes between calls.
    
    For deep material (caching strategies, async data sources, multi-agent sharing patterns), the authority is `../framework/references/context-providers.md`. This skill is the action-oriented path: clarify → write → register.
    
    ## When this fires vs the umbrella `framework` skill
    
    - **This skill**: the user wants to add a provider — "inject the user's name", "make the agent see the current time", "feed retrieved docs into the prompt", "share session state across two agents".
    - **`framework` skill**: questions about Atomic Agents in general, or about something other than authoring a provider.
    
    ## Phase 1 — Clarify
    
    Bundle into one message:
    
    1. **What goes into the prompt?** One sentence. Defines the provider's job.
    2. **Where does the data come from?** In-memory state mutated by your code? A vector DB lookup? A REST call? A clock?
    3. **How fresh must it be?** Per-`run()` (default), every N seconds (cache), or refreshed externally before each call (async data).
    4. **Which agent(s)?** One agent, or shared across multiple agents?
    
    Skip what's already obvious from context.
    
    ## Phase 2 — Plan
    
    Confirm in one short block:
    
    - File: `<project>/context_providers.py` (or alongside the agent that owns it).
    - Class: `<Topic>Ctx(BaseDynamicContextProvider)`.
    - Title (rendered as the section header in the prompt) — must be unique across providers on the same agent.
    - Update mechanism: mutate fields between `run()` calls (most common), set via a method, or `await refresh()` for async sources.
    
    ## Phase 3 — Implement
    
    ### Skeleton
    
    ```python
    from atomic_agents.context import BaseDynamicContextProvider
    
    
    class UserCtx(BaseDynamicContextProvider):
        def __init__(self):
            super().__init__(title="User Context")
            self.name: str = ""
            self.role: str = ""
    
        def get_info(self) -> str:
            if not self.name:
                return "No user is logged in."
            return f"User: {self.name} (role: {self.role})"
    ```
    
    `get_info()` is **synchronous** and runs on **every** `agent.run()` — keep it cheap. No HTTP, no DB queries, no file I/O. Cache slow sources (see "Cached" pattern below). For async data sources, `await provider.refresh()` from your loop before calling the agent.
    
    ### Common patterns
    
    **Time** — read-only, no state mutation needed:
    
    ```python
    from datetime import datetime, timezone
    
    class TimeCtx(BaseDynamicContextProvider):
        def __init__(self):
            super().__init__(title="Current Time")
    
        def get_info(self) -> str:
            return datetime.now(timezone.utc).isoformat()
    ```
    
    **RAG / retrieved docs** — set externally, read inside `get_info()`:
    
    ```python
    class RAGCtx(BaseDynamicContextProvider):
        def __init__(self):
            super().__init__(title="Retrieved Documents")
            self.docs: list[dict] = []
    
        def set(self, docs: list[dict]) -> None:
            self.docs = docs
    
        def get_info(self) -> str:
            if not self.docs:
                return "No relevant documents retrieved."
            return "\n\n".join(f"[{d['source']}] {d['content']}" for d in self.docs)
    
    # In the calling code, just before agent.run():
    rag.set(vector_db.search(query, k=4))
    agent.run(query_input)
    ```
    
    **Session** — mutable key/value state shared across agents:
    
    ```python
    class SessionCtx(BaseDynamicContextProvider):
        def __init__(self):
            super().__init__(title="Session")
            self._data: dict[str, str] = {}
    
        def set(self, key: str, value: str) -> None:
            self._data[key] = value
    
        def get_info(self) -> str:
            if not self._data:
                return "No session state."
            return "\n".join(f"- {k}: {v}" for k, v in self._data.items())
    ```
    
    **Cached** — for slow sources (DB schema, expensive computation):
    
    ```python
    import time
    
    class DBSchemaCtx(BaseDynamicContextProvider):
        def __init__(self, conn, ttl_seconds: int = 300):
            super().__init__(title="Database Schema")
            self._conn = conn
            self._ttl = ttl_seconds
            self._cached: str = ""
            self._at: float = 0.0
    
        def get_info(self) -> str:
            now = time.time()
            if not self._cached or now - self._at > self._ttl:
                self._cached = render_schema(self._conn)
                self._at = now
            return self._cached
    ```
    
    **Async source** — refresh outside, read sync inside:
    
    ```python
    class AsyncCtx(BaseDynamicContextProvider):
        def __init__(self):
            super().__init__(title="Async Data")
            self._cached = ""
    
        async def refresh(self) -> None:
            # fetch_remote() stands in for the app's async data source
            self._cached = str(await fetch_remote())
    
        def get_info(self) -> str:
            return self._cached
    
    # Caller
    await ctx.refresh()
    await agent.run_async(input_data)
    ```
    
    ## Phase 4 — Register
    
    ```python
    ctx = UserCtx()
    agent.register_context_provider("user", ctx)
    
    # Mutate before each run as needed:
    ctx.name = "Alice"; ctx.role = "admin"
    agent.run(...)
    ```
    
    Sharing one provider instance across agents is allowed — updates propagate to every agent that registered it:
    
    ```python
    shared = SessionCtx()
    agent_a.register_context_provider("session", shared)
    agent_b.register_context_provider("session", shared)
    shared.set("locale", "en-GB")  # visible to both agents
    ```
    
    Inspect or unregister:
    
    ```python
    "user" in agent.context_providers
    agent.unregister_context_provider("user")
    ```
    
    ## Phase 5 — Verify
    
    Quick smoke test that the provider renders:
    
    ```bash
    uv run python -c "from <project>.context_providers import UserCtx; c = UserCtx(); c.name='Alice'; c.role='admin'; print(c.get_info())"
    ```
    
    Then confirm the rendered system prompt includes the provider's section by inspecting `agent.system_prompt_generator.generate_prompt(...)` or by running the agent and checking the first request's payload via the `completion:kwargs` hook (see `../framework/references/hooks.md`).
    
    ## Phase 6 — Hand off
    
    Tell the user:
    
    - Where the provider lives, what name was used in `register_context_provider`, and how to mutate it.
    - The lifecycle: register once, mutate fields between calls, the prompt picks up the change automatically.
    - Optional next steps:
      - The agent that consumes it → `create-atomic-agent` skill.
      - A research / RAG loop that updates the provider in a loop → see `atomic-examples/deep-research/` and `../framework/references/orchestration.md`.
    
    ## Anti-patterns
    
    - Slow I/O inside `get_info()` — runs on every `agent.run()`. Cache it or refresh externally.
    - Returning a non-string from `get_info()` — raises at prompt time.
    - Forgetting `register_context_provider(...)` — the provider never reaches the prompt.
    - Duplicate titles across providers on the same agent — sections collide. Use unique titles.
    - Storing secrets in the provider — they end up in every LLM request. Inject only what the model needs to reason about.
    
    For deeper material — multi-agent sharing patterns, dynamic-update research loops, async source patterns — load `../framework/references/context-providers.md`.
    
  • .claude/skills/release/SKILL.md (2459 bytes)
    ---
    name: release
    description: Release a new version of atomic-agents to PyPI and GitHub. Use when the user asks to "release", "publish", "deploy", or "bump version" for atomic-agents.
    allowed-tools: Read, Bash, Grep, Glob
    ---
    
    # Release Process for atomic-agents
    
    ## Overview
    
    This skill guides the release process for atomic-agents, including version bumping, PyPI publishing, and GitHub release creation.
    
    ## Prerequisites
    
    - Must be on `main` branch with clean working directory
    - `.env` file must contain `PYPI_TOKEN` environment variable
    - Must have push access to the repository
    
    ## Release Types
    
    | Type | When to Use | Example |
    |------|-------------|---------|
    | `major` | Breaking changes | 2.5.0 → 3.0.0 |
    | `minor` | New features (backwards compatible) | 2.5.0 → 2.6.0 |
    | `patch` | Bug fixes only | 2.5.0 → 2.5.1 |
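    
    The table's bump rules reduce to simple arithmetic on the three version parts; a minimal sketch for orientation (the release script itself is the source of truth):
    
    ```python
    # Minimal sketch of the semver bump rules in the table above.
    # build_and_deploy.ps1 is the source of truth for the real release flow.
    def bump(version: str, release_type: str) -> str:
        major, minor, patch = (int(part) for part in version.split("."))
        if release_type == "major":
            return f"{major + 1}.0.0"          # breaking: reset minor and patch
        if release_type == "minor":
            return f"{major}.{minor + 1}.0"    # new feature: reset patch
        if release_type == "patch":
            return f"{major}.{minor}.{patch + 1}"
        raise ValueError(f"unknown release type: {release_type}")
    
    
    print(bump("2.5.0", "major"))  # 3.0.0
    print(bump("2.5.0", "minor"))  # 2.6.0
    print(bump("2.5.0", "patch"))  # 2.5.1
    ```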
    
    ## Step-by-Step Process
    
    ### 1. Prepare the Branch
    
    ```bash
    git checkout main
    git pull
    git status  # Ensure clean working directory
    ```
    
    ### 2. Run Build and Deploy Script
    
    **Important**: The script bumps versions, so if it fails partway through, reset to main before retrying.
    
    ```powershell
    powershell -ExecutionPolicy Bypass -File build_and_deploy.ps1 <major|minor|patch>
    ```
    
    This script will:
    - Read current version from `pyproject.toml`
    - Increment version based on release type
    - Update `pyproject.toml` with new version
    - Run `uv sync` to update dependencies
    - Run `uv build` to create distribution packages
    - Run `uv publish` to upload to PyPI
    
    ### 3. If Script Fails - Reset and Retry
    
    ```bash
    git checkout main
    git reset --hard origin/main
    # Fix the issue, then run script again
    ```
    
    ### 4. Commit and Push Version Bump
    
    ```bash
    git add pyproject.toml uv.lock
    git commit -m "Release vX.Y.Z"
    git push
    ```
    
    ### 5. Create GitHub Release
    
    ```bash
    gh release create vX.Y.Z --title "vX.Y.Z" --notes "RELEASE_NOTES_HERE"
    ```
    
    ## Release Notes Template
    
    ```markdown
    ## What's New
    
    ### Feature Name
    Brief description of the feature.
    
    #### Features
    - Feature 1
    - Feature 2
    
    #### Improvements
    - Improvement 1
    
    #### Bug Fixes
    - Fix 1
    
    ### Full Changelog
    https://github.com/BrainBlend-AI/atomic-agents/compare/vOLD...vNEW
    ```
    
    ## Checklist
    
    - [ ] On main branch with clean working directory
    - [ ] `.env` file has `PYPI_TOKEN`
    - [ ] Determined correct release type (major/minor/patch)
    - [ ] Build and deploy script completed successfully
    - [ ] Version bump committed and pushed
    - [ ] GitHub release created with release notes
    
  • claude-plugin/atomic-agents/skills/create-atomic-agent/SKILL.md (7703 bytes)
    ---
    name: create-atomic-agent
    description: Build and wire an `AtomicAgent[InSchema, OutSchema]` — schemas, `AgentConfig`, `SystemPromptGenerator`, provider client, history, hooks, optional context providers. Use when the user asks to "create an agent", "add another agent", "build an `AtomicAgent`", "wire up an agent", "make a planner/router/extractor agent", or runs `/atomic-agents:create-atomic-agent`.
    ---
    
    # Create an Atomic Agent
    
    An agent is an LLM-backed transformer from one `BaseIOSchema` to another. Building one means: design the schemas, write the system prompt, wire the provider client, build the `AgentConfig`, instantiate `AtomicAgent[In, Out]`.
    
    For deep material (streaming, token counting, hooks, multi-agent memory), the authority is `../framework/references/agents.md` plus `providers.md`, `prompts.md`, and `memory.md`. This skill is the action-oriented path: clarify → write → run.
    
    ## When this fires vs the umbrella `framework` skill
    
    - **This skill**: the user is creating or wiring a specific agent — "add a planner agent", "build a Q&A agent", "make a router that classifies tickets".
    - **`framework` skill**: questions about Atomic Agents in general, or the user is doing something other than authoring an agent.
    
    ## Phase 1 — Clarify
    
    Bundle into one message:
    
    1. **What should the agent do?** One sentence. Becomes the persona / `background` line.
    2. **Inputs and outputs.** Use `BasicChatInputSchema` / `BasicChatOutputSchema` for free-form chat. Use a custom pair for anything structured (extraction, classification, planning, routing). When custom, branch to the `create-atomic-schema` skill for the schema authoring.
    3. **Provider.** OpenAI / Anthropic / Groq / Ollama / Gemini / OpenRouter / MiniMax. Default: whatever the project already uses; otherwise OpenAI.
    4. **Conversational?** Yes → wire a `ChatHistory`. No (single-shot transformer) → omit it for stateless behavior.
    5. **Context providers.** Anything to inject into the prompt at runtime (current time, user identity, retrieved docs)? If yes, plan to also use the `create-atomic-context-provider` skill afterwards.
    
    Skip anything already settled in context.
    
    ## Phase 2 — Plan
    
    State the plan in one short block:
    
    - File: `<project>/agents/<agent_name>.py` (or directly in `main.py` for a tiny project — see `../framework/references/project-structure.md`).
    - Schemas: which pair, where they live.
    - Provider + model + Instructor mode. Default models: OpenAI `gpt-5-mini`, Anthropic `claude-haiku-4-5`, Groq `llama-3.3-70b-versatile`, Ollama `llama3.1`, Gemini `gemini-2.5-flash`.
    - `SystemPromptGenerator` content — three sections: `background`, `steps`, `output_instructions`.
    - History? Hooks? Context providers?
    
    ## Phase 3 — Implement
    
    ### Canonical imports (do not deviate)
    
    ```python
    from atomic_agents import (
        AtomicAgent, AgentConfig,
        BasicChatInputSchema, BasicChatOutputSchema,
    )
    from atomic_agents.context import ChatHistory, SystemPromptGenerator
    from instructor import Mode
    ```
    
    ### Wire the provider client (always Instructor-wrapped)
    
    The full per-provider matrix lives in `../framework/references/providers.md`. Quick recap:
    
    ```python
    # OpenAI — default mode is Mode.TOOLS
    import os, instructor, openai
    client = instructor.from_openai(openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"]))
    model = "gpt-5-mini"
    api_params: dict = {}
    
    # Anthropic — Mode.TOOLS, max_tokens REQUIRED in model_api_parameters
    import anthropic
    client = instructor.from_anthropic(anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"]))
    model = "claude-haiku-4-5"
    api_params = {"max_tokens": 4096}
    
    # Gemini — Mode.GENAI_TOOLS, assistant_role="model"
    from google import genai
    client = instructor.from_genai(genai.Client(api_key=os.environ["GEMINI_API_KEY"]), mode=Mode.GENAI_TOOLS)
    model = "gemini-2.5-flash"
    api_params = {}
    
    # Groq / Ollama / MiniMax — Mode.JSON in both factory and AgentConfig
    ```
    
    ### Build the agent
    
    ```python
    from atomic_agents import AtomicAgent, AgentConfig
    from atomic_agents.context import ChatHistory, SystemPromptGenerator
    
    agent = AtomicAgent[MyInput, MyOutput](
        config=AgentConfig(
            client=client,
            model=model,
            history=ChatHistory(),  # omit for stateless
            system_prompt_generator=SystemPromptGenerator(
                background=["You are a concise research assistant."],
                steps=[
                    "Read the question carefully.",
                    "Decide what minimum information answers it.",
                    "Produce the answer in the required schema.",
                ],
                output_instructions=[
                    "Reply under 100 words.",
                    "If unsure, set status='error' and explain why.",
                ],
            ),
            # Provider-specific knobs — match the Instructor factory
            # mode=Mode.TOOLS,                         # OpenAI / Anthropic / OpenRouter
            # mode=Mode.JSON,                          # Groq / Ollama / MiniMax
            # mode=Mode.GENAI_TOOLS, assistant_role="model",  # Gemini
            model_api_parameters=api_params or {"temperature": 0.2},
        )
    )
    ```
    
    ### Generics carry the truth
    
    `AtomicAgent[MyInput, MyOutput]` — write the type parameters explicitly. The framework reads them at class-definition time. Do **not** rely on subclass-level `input_schema` / `output_schema` class attributes.
    
    ### Provider-specific knobs (most common gotchas)
    
    - **Anthropic** without `max_tokens` in `model_api_parameters` → API rejects every call.
    - **Gemini** without `assistant_role="model"` → role mismatch on every turn.
    - **Groq / Ollama / MiniMax** with `Mode.TOOLS` → tools formatted in a way the provider does not accept; flip to `Mode.JSON`.
    - Reasoning models (o-series, GPT-5 reasoning variants) → often want `system_role=None` and `reasoning_effort` in `model_api_parameters`.
    
    ## Phase 4 — Run and verify
    
    ```python
    out = agent.run(MyInput(...))
    print(out)
    ```
    
    Quick smoke test without paying for a real call:
    
    ```bash
    uv run python -c "from <project>.agents.<agent_name> import agent; print(type(agent).__name__, '->', agent.input_schema.__name__, '/', agent.output_schema.__name__)"
    ```
    
    If output validation fails repeatedly, the `parse:error` hook has the details — see `../framework/references/hooks.md` for registration.
    
    ## Phase 5 — Hand off
    
    Tell the user:
    
    - How to call `agent.run(...)` (and `run_async`, `run_stream`, `run_async_stream` when appropriate).
    - Which env var to set for the provider key.
    - Optional next steps:
      - Tools the agent should be able to invoke → `create-atomic-tool` skill.
      - Dynamic data injected into the prompt → `create-atomic-context-provider` skill.
      - Custom schemas → `create-atomic-schema` skill.
      - Multiple agents working together → `../framework/references/orchestration.md`.
      - Telemetry / retries / logging → `../framework/references/hooks.md`.
      - Conversation persistence, summarization, multi-agent memory → `../framework/references/memory.md`.
    
    ## Anti-patterns
    
    - Forgetting to wrap the client with `instructor.from_*` — structured outputs silently stop working.
    - `BaseModel` instead of `BaseIOSchema` for the agent's input or output type.
    - `AgentConfig.mode` out of sync with the Instructor factory mode.
    - `assistant_role="assistant"` on Gemini — must be `"model"`.
    - Missing `max_tokens` on Anthropic — every call fails.
    - Hardcoded API keys in the source — read from env.
    - Unbounded `ChatHistory` in a long-running service — monitor `agent.get_context_token_count().utilization` or set `max_messages`.
    
    For deep material — streaming, async, token counting, hooks, multi-agent history — load `../framework/references/agents.md`.
    
  • claude-plugin/atomic-agents/skills/create-atomic-schema/SKILL.md (6659 bytes)
    ---
    name: create-atomic-schema
    description: Design and write a `BaseIOSchema` input/output pair for an Atomic Agents agent or tool — docstrings, field descriptions, validators, error variants. Use when the user asks to "create a schema", "design the input/output schema", "define an `IOSchema`", "write a `BaseIOSchema`", "model the agent's output", or runs `/atomic-agents:create-atomic-schema`.
    ---
    
    # Create an Atomic Agents Schema
    
    Author a `BaseIOSchema` pair (input and/or output) that becomes the contract between an agent or tool and its caller. The framework enforces docstrings on every subclass, and Instructor flows field descriptions into the LLM prompt — so the schema **is** part of the prompt, not just typing.
    
    For deep material (validators, discriminated unions, error envelopes), the authority is `../framework/references/schemas.md`. This skill is the action-oriented path: clarify → write → validate.
    
    ## When this fires vs the umbrella `framework` skill
    
    - **This skill**: the user is creating or modifying a specific schema — e.g. "design the output schema for the planner agent", "add a field to `WeatherInput`", "split the result into success and failure variants".
    - **`framework` skill**: the user is asking about Atomic Agents in general or doing something other than authoring schemas.
    
    ## Phase 1 — Clarify
    
    Ask only what is not already obvious from context. Bundle into one message; do not interrogate one-at-a-time.
    
    1. **Caller** — is this for an `AtomicAgent`, a `BaseTool`, both (an agent that emits a tool-input schema), or a nested sub-schema?
    2. **Direction** — input only, output only, or a paired Input/Output?
    3. **Fields** — what fields does the caller need, with which types? (Required vs optional, defaults, constraints.)
    4. **Failure modes** — can this legitimately fail? If yes, plan a typed error variant rather than raising. See `../framework/references/schemas.md` → "Error-schema pattern".
    
    If the user is mid-conversation about an existing schema, skip questions answered in context.
    
    ## Phase 2 — Write
    
    Place schema(s) where they will be imported from. Conventional locations:
    
    - `<project>/agents/<agent_name>/schemas.py` — agent-owned schemas
    - `<project>/tools/<tool_name>_tool.py` — tool I/O lives next to the tool
    - `<project>/schemas/<topic>.py` — schemas shared across multiple components
    
    ### Required ingredients on every schema
    
    - Subclass `BaseIOSchema` (not `BaseModel`).
    - A non-empty class docstring — the framework raises at import otherwise. Write it for the LLM, because Instructor uses it as the schema's `description`.
    - Every `Field(...)` carries a `description=` written for the LLM.
    - Use `Literal[...]` for closed sets before reaching for `Enum` — flatter JSON Schema, easier for Instructor.
    
    ### Minimal template
    
    ```python
    from typing import Optional, Literal
    from pydantic import Field
    from atomic_agents import BaseIOSchema
    
    
    class WeatherInput(BaseIOSchema):
        """A request for current weather conditions."""
    
        city: str = Field(..., description="City name, e.g. 'Brussels' or 'New York'.")
        units: Literal["metric", "imperial"] = Field(
            default="metric",
            description="Unit system for the temperature.",
        )
    
    
    class WeatherOutput(BaseIOSchema):
        """Current weather conditions for a city."""
    
        status: Literal["ok", "error"] = Field(..., description="Outcome code.")
        temperature_c: Optional[float] = Field(
            default=None, description="Temperature in Celsius when status='ok'."
        )
        summary: Optional[str] = Field(
            default=None, description="Human-readable summary when status='ok'."
        )
        error: Optional[str] = Field(
            default=None, description="Failure message when status='error'."
        )
    ```
    
    ### When to add validators
    
    - **Field-level** for normalization (lowercase, strip, enum coercion) and single-field validation.
    - **Model-level** (`@model_validator(mode="after")`) for cross-field rules (start ≤ end, mutually exclusive flags).
    
    Validation errors trigger Instructor retries and fire the `parse:error` hook — they're a feature, not a failure path. Do **not** swallow them.
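    A minimal sketch of both validator levels, using a plain Pydantic model as a stand-in for `BaseIOSchema` (the class and field names are illustrative assumptions, not framework API):

    ```python
    from datetime import date

    from pydantic import BaseModel, Field, field_validator, model_validator


    class DateRangeQuery(BaseModel):  # in a real project: subclass BaseIOSchema
        """A query over a closed date range."""

        city: str = Field(..., description="City name.")
        start: date = Field(..., description="Inclusive start date.")
        end: date = Field(..., description="Inclusive end date.")

        @field_validator("city")
        @classmethod
        def _normalize_city(cls, v: str) -> str:
            # Field-level: single-field normalization (strip, lowercase).
            return v.strip().lower()

        @model_validator(mode="after")
        def _check_range(self) -> "DateRangeQuery":
            # Model-level: cross-field rule (start ≤ end).
            if self.start > self.end:
                raise ValueError("start must be on or before end")
            return self
    ```

    A failing cross-field rule raises `ValidationError` at parse time — which is exactly what feeds the Instructor retry loop described above.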
    
    ### When to use discriminated unions
    
    If the caller must exhaustively handle multiple result shapes, prefer a union over an inflated single schema:
    
    ```python
    class SearchSuccess(BaseIOSchema):
        """Successful search result."""
        kind: Literal["ok"] = "ok"
        results: list[str] = Field(..., description="Matching items.")
    
    class SearchFailure(BaseIOSchema):
        """Search could not complete."""
        kind: Literal["error"] = "error"
        code: Literal["rate_limited", "no_results", "upstream_error"] = Field(
            ..., description="Machine-readable failure code."
        )
        message: str = Field(..., description="Human-readable failure reason.")
    
    class SearchOutput(BaseIOSchema):
        """Search outcome — success or typed failure."""
        result: SearchSuccess | SearchFailure = Field(..., description="Outcome.")
    ```
    
    The `kind` discriminator on each variant lets Pydantic resolve the union without ambiguity.
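    The same shape can be smoke-tested with plain Pydantic models standing in for `BaseIOSchema` (an assumption for illustration); passing `discriminator="kind"` to `Field` makes the tagged resolution explicit, so a payload with a bad tag fails fast instead of falling through untyped:

    ```python
    from typing import Literal, Union

    from pydantic import BaseModel, Field


    class SearchSuccess(BaseModel):  # stand-in for a BaseIOSchema variant
        """Successful search result."""
        kind: Literal["ok"] = "ok"
        results: list[str] = Field(..., description="Matching items.")


    class SearchFailure(BaseModel):
        """Search could not complete."""
        kind: Literal["error"] = "error"
        message: str = Field(..., description="Human-readable failure reason.")


    class SearchOutput(BaseModel):
        """Search outcome — success or typed failure."""
        result: Union[SearchSuccess, SearchFailure] = Field(
            ..., discriminator="kind", description="Outcome."
        )
    ```

    `SearchOutput.model_validate({"result": {"kind": "error", "message": "rate limited"}})` then yields a `SearchFailure` instance, so the caller can dispatch on the concrete type.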
    
    ## Phase 3 — Verify
    
    Smoke-check the schema imports cleanly and round-trips through `model_json_schema()`:
    
    ```bash
    uv run python -c "from <project>.<module> import WeatherInput, WeatherOutput; print(WeatherInput.model_json_schema()['title'])"
    ```
    
    If the import raises `ValueError("… must have a non-empty docstring …")`, add the docstring. If a field's JSON schema is missing a description, add `description=` to its `Field(...)`.
    
    ## Phase 4 — Hand off
    
    Tell the user:
    
    - Where the schema lives and what to import.
    - Next step in their flow:
      - Wiring it into an agent → `create-atomic-agent` skill.
      - Wiring it into a tool → `create-atomic-tool` skill.
      - Adding a context provider that depends on the same domain → `create-atomic-context-provider` skill.
    
    ## Anti-patterns to refuse on sight
    
    - Plain `BaseModel` instead of `BaseIOSchema` — loses docstring enforcement and the JSON-schema overrides Instructor depends on.
    - Missing class docstring — framework raises at import.
    - `Field(...)` without `description=` — Instructor has nothing to tell the model about the field.
    - `Optional[str]` with no default — required-but-nullable, which is rarely the intent.
    - `dict`, `Any`, or `object` as a field type — the LLM produces anything and Pydantic cannot validate.
    - Catching `ValidationError` to "make the agent more robust" — fix the field constraints or descriptions instead.
    
    For deeper material — composition, nested schemas, discriminated unions in production, validator gotchas — load `../framework/references/schemas.md`.
    
  • claude-plugin/atomic-agents/skills/new-app/SKILL.md (skill)
    ---
    name: new-app
    description: Scaffold a new Atomic Agents project from scratch — create the directory, `pyproject.toml`, env file, first agent, and a runnable entry point. Use when the user asks to start a new atomic-agents project from scratch, says "scaffold" / "new project" / "start from zero", or runs `/atomic-agents:new-app`.
    disable-model-invocation: true
    argument-hint: [project-name]
    ---
    
    # New Atomic Agents Project
    
    Scaffold a fresh Atomic Agents project. The result is a single-package Python project with one working agent, one schema pair, a provider-wrapped client, and a runnable `main.py`.
    
    This skill is opinionated. Produce a complete, tested skeleton the user can run immediately.
    
    ## Phase 1 — Interrogate
    
    Ask these questions in one message, not one-at-a-time. Skip any the user already answered (including via `$ARGUMENTS`).
    
    1. **Project name** — used as both directory name and package name. Default from `$ARGUMENTS` if provided. Normalize to `kebab-case` for the directory and `snake_case` for the package.
    2. **LLM provider** — OpenAI / Anthropic / Groq / Ollama / Gemini / OpenRouter / MiniMax. Default: OpenAI.
    3. **Agent type** — a rough one-liner. Shapes the default `SystemPromptGenerator` content and the starter schema pair. Defaults to a generic chat agent.
    4. **Tooling** — `uv` (default, because the repo uses uv) or `pip + venv`.
    
    Do not ask about project layout, Python version, or dependency list. Pick them.
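    The normalization in step 1 can be sketched as a small helper (a hypothetical function, not part of the framework):

    ```python
    import re


    def normalize_names(raw: str) -> tuple[str, str]:
        """Return (directory_name, package_name) for a raw project name.

        Directory is kebab-case, package is snake_case.
        """
        # Collapse every run of non-alphanumeric characters into one hyphen.
        slug = re.sub(r"[^a-z0-9]+", "-", raw.strip().lower()).strip("-")
        return slug, slug.replace("-", "_")
    ```

    For example, `normalize_names("My Weather App")` yields `("my-weather-app", "my_weather_app")`.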
    
    ## Phase 2 — Confirm the plan
    
    State the plan in one short block and wait for a yes. Include:
    
    - Directory: `<project-name>/`
    - Package: `<project_name>/`
    - Python: `>=3.12` (Atomic Agents uses PEP 695 generics)
    - Dependencies: `atomic-agents>=2.7`, `instructor[<provider-extra>]>=1.14`, `python-dotenv`, `rich`
    - Dev dependencies: `pytest`, `pytest-asyncio`, `ruff`
    - First agent: `<agent-type>` — uses `BasicChatInputSchema`/`BasicChatOutputSchema` unless the agent type calls for custom schemas
    - Default model for the chosen provider (see `framework/references/providers.md`)
    - Entry point: `main.py` with a REPL
    
    ## Phase 3 — Scaffold
    
    Create files in this order. Verify each step before proceeding.
    
    ### Directory and package
    
    ```
    <project-name>/
    ├── pyproject.toml
    ├── .env.example
    ├── .gitignore
    ├── README.md
    └── <project_name>/
        ├── __init__.py
        └── main.py
    ```
    
    ### `pyproject.toml`
    
    Use the template from `framework/references/project-structure.md`, substituting the chosen provider extra and project name.
    
    ### `.env.example`
    
    Include the provider's API-key variable with a placeholder. Never the real key.
    
    ### `.gitignore`
    
    Use the template from `framework/references/project-structure.md`.
    
    ### `<project_name>/main.py`
    
    Produce a runnable REPL. Load `.env`, instantiate the provider client per `framework/references/providers.md`, build an agent, wire a `ChatHistory` with a seed assistant message, loop on `console.input(...)`.
    
    For the agent itself, follow the workflow from the `atomic-agents:create-atomic-agent` skill — same canonical imports, same per-provider `mode` matrix, same `SystemPromptGenerator` shape.
    
    When a custom agent type was requested, build custom `InputSchema` / `OutputSchema` subclasses with field `description=` populated, following the `atomic-agents:create-atomic-schema` skill. Otherwise use `BasicChatInputSchema` / `BasicChatOutputSchema`.
    
    Always use the canonical imports:
    
    ```python
    from atomic_agents import (
        AtomicAgent, AgentConfig,
        BasicChatInputSchema, BasicChatOutputSchema,
    )
    from atomic_agents.context import ChatHistory, SystemPromptGenerator
    from instructor import Mode
    ```
    
    Per-provider AgentConfig knobs — match the Instructor factory mode on `AgentConfig.mode`:
    
    - **OpenAI**: defaults work. Omit `mode` (or set `Mode.TOOLS`).
    - **Anthropic**: `mode=Mode.TOOLS`; include `max_tokens` in `model_api_parameters`.
    - **Groq / Ollama / MiniMax**: `mode=Mode.JSON` (Instructor factory also uses `Mode.JSON`).
    - **Gemini**: `assistant_role="model"` and `mode=Mode.GENAI_TOOLS` (Instructor factory uses `Mode.GENAI_TOOLS`).
    - **OpenRouter**: `mode=Mode.TOOLS`.
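    A hedged sketch of the Anthropic row of that matrix (requires `instructor[anthropic]` and a key in the env; the model name is the default from the Constraints below — adjust as needed):

    ```python
    import os

    import anthropic
    import instructor
    from atomic_agents import (
        AtomicAgent, AgentConfig,
        BasicChatInputSchema, BasicChatOutputSchema,
    )
    from atomic_agents.context import ChatHistory
    from instructor import Mode

    # The Instructor factory mode and AgentConfig.mode must agree (Mode.TOOLS here).
    client = instructor.from_anthropic(
        anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    )

    agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema](
        config=AgentConfig(
            client=client,
            model="claude-haiku-4-5",
            mode=Mode.TOOLS,
            history=ChatHistory(),
            # Anthropic rejects calls without max_tokens.
            model_api_parameters={"max_tokens": 1024},
        )
    )
    ```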
    
    ### `README.md`
    
    Short. Include: what the project is, how to install (`uv sync` or `pip install -e .[dev]`), how to set the API key (`cp .env.example .env` and edit), how to run (`uv run python -m <project_name>.main` or equivalent).
    
    ## Phase 4 — Install and smoke-test
    
    Execute the install step:
    
    - **uv**: `uv sync`
    - **pip**: `python -m venv .venv && .venv/bin/pip install -e ".[dev]"` (Windows: `.venv\Scripts\pip`)
    
    Verify imports without a live API key:
    
    ```bash
    uv run python -c "from <project_name>.main import agent; print('ok')"
    ```
    
    If that works, the scaffold is sound. Tell the user to drop their key into `.env` and run the REPL.
    
    ## Phase 5 — Hand off
    
    After scaffolding, tell the user:
    
    1. How to set their key (`cp .env.example .env`).
    2. How to run (`uv run python -m <project_name>.main`).
    3. Next steps, picked from:
       - Replace the starter schemas with domain-specific ones — use the `atomic-agents:create-atomic-schema` skill.
       - Add another agent — use the `atomic-agents:create-atomic-agent` skill.
       - Add a tool — use the `atomic-agents:create-atomic-tool` skill.
       - Add a context provider (time, user, RAG, session) — use the `atomic-agents:create-atomic-context-provider` skill.
       - Split into multiple agents — see `framework/references/orchestration.md`.
    4. A pointer to `framework` (auto-triggered) and `review` (auto-triggered before commit).
    
    ## Constraints
    
    - Never commit `.env`. Only `.env.example`.
    - Never install anything globally. Use the project venv.
    - Never pick an old model. Default to current generation: OpenAI `gpt-5-mini`, Anthropic `claude-haiku-4-5`, Groq `llama-3.3-70b-versatile`, Ollama `llama3.1`, Gemini `gemini-2.5-flash`.
    - Never hand-roll what `framework/references/project-structure.md` already templates.
    
  • claude-plugin/atomic-agents/skills/create-atomic-tool/SKILL.md (skill)
    ---
    name: create-atomic-tool
    description: Build a `BaseTool[InSchema, OutSchema]` subclass — input/output schemas, `BaseToolConfig`, `run()` (and optional `run_async()`), env-driven secrets, typed failure outputs. Use when the user asks to "add a tool", "create a tool", "wrap an API as a tool", "build a `BaseTool`", "make a calculator/search/weather tool", or runs `/atomic-agents:create-atomic-tool`.
    ---
    
    # Create an Atomic Agents Tool
    
    A tool is a deterministic capability an agent can invoke. In Atomic Agents, every tool is a `BaseTool[InSchema, OutSchema]` subclass with a typed `run()` (and optional `run_async()`). The input/output schemas double as the tool's signature for the LLM and as Pydantic validation at runtime.
    
    For deep material (MCP interop, distributing as a standalone package, advanced error patterns), the authority is `../framework/references/tools.md`. This skill is the action-oriented path: clarify → write → verify.
    
    ## When this fires vs the umbrella `framework` skill
    
    - **This skill**: the user is creating a specific tool — wrapping an API, building a calculator, scraping a page, querying a DB.
    - **`framework` skill**: questions about Atomic Agents in general, or the user is doing something other than authoring a tool.
    
    ## Phase 1 — Clarify
    
    Bundle into one message:
    
    1. **What does the tool do?** One sentence. This becomes the class docstring and feeds the LLM's tool description.
    2. **Inputs and outputs.** Names, types, units. If unclear, propose a schema pair and confirm.
    3. **External dependencies.** HTTP API? DB? Local computation only? If HTTP, what auth (API key env var, OAuth, none)?
    4. **Sync, async, or both?** If the rest of the project is async or the call is I/O bound, plan a `run_async()` alongside `run()`.
    5. **Failure modes.** Rate limits, not-found, network errors — how should the agent see them? Default: typed failure output, not raised exceptions.
    
    Skip any question already answered in context.
    
    ## Phase 2 — Plan
    
    Confirm the location and shape in one short block, then proceed:
    
    - File: `<project>/tools/<tool_name>_tool.py` (in-project tool — see `../framework/references/project-structure.md`).
    - Schemas: `<ToolName>Input`, `<ToolName>Output`, optionally a typed failure shape.
    - Config: `<ToolName>Config(BaseToolConfig)` if the tool needs API keys, base URLs, timeouts, retries.
    - Sync vs async: pick one or both.
    - Error pattern: typed failure output (preferred) vs raise (only for programmer error).
    
    ## Phase 3 — Implement
    
    ### Skeleton — local computation, no config
    
    ```python
    from pydantic import Field
    from atomic_agents import BaseIOSchema, BaseTool
    
    
    class CalculatorInput(BaseIOSchema):
        """Arithmetic expression to evaluate."""
        expression: str = Field(..., description="Python-style arithmetic, e.g. '2 + 2 * 3'.")
    
    
    class CalculatorOutput(BaseIOSchema):
        """Result of evaluating the expression."""
        result: float = Field(..., description="Numeric result.")
    
    
    class CalculatorTool(BaseTool[CalculatorInput, CalculatorOutput]):
        """Evaluate simple arithmetic expressions safely."""
    
        def run(self, params: CalculatorInput) -> CalculatorOutput:
            import ast, operator as op
            ops = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}
            def ev(n):
                if isinstance(n, ast.Constant): return n.value
                if isinstance(n, ast.BinOp): return ops[type(n.op)](ev(n.left), ev(n.right))
                raise ValueError("unsupported")
            return CalculatorOutput(result=ev(ast.parse(params.expression, mode="eval").body))
    ```
    
    ### Skeleton — HTTP-backed, with config and typed failure
    
    ```python
    import os
    import httpx
    from typing import Literal, Optional
    from pydantic import Field
    from atomic_agents import BaseIOSchema, BaseTool, BaseToolConfig
    
    
    class WeatherConfig(BaseToolConfig):
        api_key: str = Field(
            default_factory=lambda: os.environ.get("WEATHER_API_KEY", ""),
            description="API key for the weather service.",
        )
        base_url: str = Field(
            default="https://api.weather.example/v1",
            description="Base URL for the weather API.",
        )
        timeout: float = Field(default=15.0, ge=1.0, le=120.0, description="Request timeout (s).")
    
    
    class WeatherInput(BaseIOSchema):
        """A request for current weather conditions."""
        city: str = Field(..., description="City name, e.g. 'Brussels'.")
    
    
    class WeatherOutput(BaseIOSchema):
        """Current weather conditions, or a typed failure."""
        status: Literal["ok", "error"] = Field(..., description="Outcome code.")
        temperature_c: Optional[float] = Field(default=None, description="Temperature in Celsius.")
        summary: Optional[str] = Field(default=None, description="Human-readable summary.")
        error: Optional[str] = Field(default=None, description="Failure message when status='error'.")
    
    
    class WeatherTool(BaseTool[WeatherInput, WeatherOutput]):
        """Fetch current conditions for a city from the weather API."""
    
        def __init__(self, config: WeatherConfig | None = None):
            super().__init__(config or WeatherConfig())
    
        def run(self, params: WeatherInput) -> WeatherOutput:
            cfg: WeatherConfig = self.config
            if not cfg.api_key:
                return WeatherOutput(status="error", error="WEATHER_API_KEY not set")
            try:
                r = httpx.get(
                    f"{cfg.base_url}/current",
                    params={"city": params.city},
                    headers={"Authorization": f"Bearer {cfg.api_key}"},
                    timeout=cfg.timeout,
                )
                r.raise_for_status()
            except httpx.HTTPError as e:
                return WeatherOutput(status="error", error=str(e))
            data = r.json()
            return WeatherOutput(status="ok", temperature_c=data["temp_c"], summary=data["summary"])
    
        async def run_async(self, params: WeatherInput) -> WeatherOutput:
            cfg: WeatherConfig = self.config
            if not cfg.api_key:
                return WeatherOutput(status="error", error="WEATHER_API_KEY not set")
            async with httpx.AsyncClient(timeout=cfg.timeout) as client:
                try:
                    r = await client.get(
                        f"{cfg.base_url}/current",
                        params={"city": params.city},
                        headers={"Authorization": f"Bearer {cfg.api_key}"},
                    )
                    r.raise_for_status()
                except httpx.HTTPError as e:
                    return WeatherOutput(status="error", error=str(e))
            data = r.json()
            return WeatherOutput(status="ok", temperature_c=data["temp_c"], summary=data["summary"])
    ```
    
    ### Hard rules
    
    - Generic parameters carry the runtime type info. **Never** also assign `input_schema` / `output_schema` as class attributes — it shadows the framework-managed property.
    - `run()` returns the **output schema instance**, not a dict.
    - Secrets via env / `BaseToolConfig`, never hardcoded.
    - HTTP tools always set a timeout. Tools run in the agent's request path.
    - The async hook is `async def run_async`, not `arun` — the framework calls `run_async`.
    - Convert routine failures (rate limits, 404s, validation rejects from the upstream API) into a typed failure output. Reserve `raise` for programmer error.
    
    ## Phase 4 — Wire it into an agent
    
    Two integration shapes (see `../framework/references/tools.md` for more):
    
    **Single-tool agent** — agent's output schema **is** the tool's input schema:
    
    ```python
    agent = AtomicAgent[UserQuery, WeatherInput](config=config)
    tool = WeatherTool()
    
    call = agent.run(UserQuery(question="weather in Brussels?"))
    result = tool.run(call)
    ```
    
    **Router agent** — agent picks among tools via a discriminated union of tool-call schemas. Use this when the agent has 2–10 tools to choose from. For dozens, see the search+execute pattern in `../framework/references/orchestration.md`.
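    The router shape can be sketched with plain Pydantic models standing in for the real `BaseIOSchema` tool inputs (illustrative names only; a production version would add a `kind` discriminator as in the schemas reference):

    ```python
    from typing import Union

    from pydantic import BaseModel, Field


    class CalculatorInput(BaseModel):  # stand-in for a BaseIOSchema tool input
        """Arithmetic expression to evaluate."""
        expression: str = Field(..., description="Python-style arithmetic.")


    class WeatherInput(BaseModel):
        """A request for current weather conditions."""
        city: str = Field(..., description="City name.")


    class RouterOutput(BaseModel):
        """The router agent emits exactly one tool call."""
        tool_call: Union[CalculatorInput, WeatherInput] = Field(
            ..., description="Input for the selected tool."
        )


    def dispatch(out: RouterOutput) -> str:
        # The caller dispatches on the concrete schema type the agent produced.
        match out.tool_call:
            case CalculatorInput() as c:
                return f"calculator({c.expression})"
            case WeatherInput() as w:
                return f"weather({w.city})"
        raise TypeError("unknown tool call")
    ```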
    
    ## Phase 5 — Verify
    
    ```bash
    uv run python -c "from <project>.tools.<tool_name>_tool import <ToolName>Tool, <ToolName>Input; t = <ToolName>Tool(); print(t.run(<ToolName>Input(...)))"
    ```
    
    If imports fail with the docstring error, add the docstring on the schema. If `self.input_schema` is `None`, the generic parameters are missing — write `class FooTool(BaseTool[FooInput, FooOutput]):`, not `class FooTool(BaseTool):`.
    
    ## Phase 6 — Hand off
    
    Tell the user:
    
    - Where the tool lives and what to import.
    - How the agent should use it (single-tool or router shape).
    - Optional next steps:
      - The agent that calls it → `create-atomic-agent` skill.
      - Multi-agent wiring around the tool → `../framework/references/orchestration.md`.
      - MCP interop or packaging the tool for distribution → `../framework/references/tools.md`.
    
    ## Anti-patterns
    
    - `class FooTool(BaseTool):` with `input_schema = ...` class attributes — use generics: `BaseTool[FooInput, FooOutput]`.
    - Returning a dict or primitive from `run()` instead of `OutputSchema(...)`.
    - Raising on routine upstream failures — model them as typed output.
    - No timeout on HTTP / DB calls.
    - `MCPTransportType.STREAMABLE_HTTP` — the correct value is `HTTP_STREAM`.
    - Implementing `async def arun(...)` — the framework calls `run_async`.
    
    For deeper material — MCP interop, packaging a tool for `atomic download`, advanced router patterns — load `../framework/references/tools.md`.
    
  • .claude-plugin/marketplace.json (marketplace)
    {
      "$schema": "https://anthropic.com/claude-code/marketplace.schema.json",
      "name": "brainblend-plugins",
      "description": "Official plugins for the Atomic Agents framework - a lightweight, modular system for building AI agents with Pydantic and Instructor",
      "owner": {
        "name": "BrainBlend AI",
        "email": "support@brainblend.ai"
      },
      "plugins": [
        {
          "name": "atomic-agents",
          "description": "Skills plus explorer and reviewer subagents for building, scaffolding, understanding, and auditing applications with the Atomic Agents Python framework. Progressive-disclosure references for schemas, agents, tools, context providers, prompts, orchestration, memory, hooks, providers, project structure, and testing — plus isolated-context subagents for codebase mapping and code review, and a new-app scaffolder.",
          "version": "2.0.1",
          "author": {
            "name": "BrainBlend AI",
            "email": "support@brainblend.ai"
          },
          "source": "./claude-plugin/atomic-agents",
          "category": "development",
          "homepage": "https://github.com/BrainBlend-AI/atomic-agents",
          "repository": "https://github.com/BrainBlend-AI/atomic-agents",
          "license": "MIT",
          "keywords": [
            "atomic-agents",
            "ai-agents",
            "llm",
            "pydantic",
            "instructor",
            "multi-agent",
            "orchestration"
          ]
        }
      ]
    }
    

README

Atomic Agents


What is Atomic Agents?

The Atomic Agents framework is designed around the concept of atomicity: an extremely lightweight, modular framework for building agentic AI pipelines and applications without sacrificing developer experience or maintainability.

Think of it like building AI applications with LEGO blocks - each component (agent, tool, context provider) is:

  • Single-purpose: Does one thing well
  • Reusable: Can be used in multiple pipelines
  • Composable: Easily combines with other components
  • Predictable: Produces consistent, reliable outputs

Built on Instructor and Pydantic, it enables you to create AI applications with the same software engineering principles you already know and love.

NEW: Join our community on Discord at discord.gg/J3W9b5AZJR and our official subreddit at /r/AtomicAgents!


Getting Started

Installation

To install Atomic Agents, you can use pip:

pip install atomic-agents

Make sure you also install the provider you want to use. Provider SDKs are available as instructor extras:

pip install instructor[groq]        # for Groq
pip install instructor[anthropic]   # for Anthropic
pip install instructor[google-genai] # for Gemini

OpenAI is included by default. For a full list of supported providers, see the Instructor docs.

This also installs the CLI Atomic Assembler, which can be used to download Tools (and soon also Agents and Pipelines).

Quick Example

Here's a quick snippet demonstrating how easy it is to create a powerful agent with Atomic Agents:

from pydantic import Field
from openai import OpenAI
import instructor
from atomic_agents import AtomicAgent, AgentConfig, BasicChatInputSchema, BaseIOSchema
from atomic_agents.context import SystemPromptGenerator, ChatHistory

# Define a custom output schema
class CustomOutputSchema(BaseIOSchema):
    """
    docstring for the custom output schema
    """
    chat_message: str = Field(..., description="The chat message from the agent.")
    suggested_questions: list[str] = Field(..., description="Suggested follow-up questions.")

# Set up the system prompt
system_prompt_generator = SystemPromptGenerator(
    background=["This assistant is knowledgeable, helpful, and suggests follow-up questions."],
    steps=[
        "Analyze the user's input to understand the context and intent.",
        "Formulate a relevant and informative response.",
        "Generate 3 suggested follow-up questions for the user."
    ],
    output_instructions=[
        "Provide clear and concise information in response to user queries.",
        "Conclude each response with 3 relevant suggested questions for the user."
    ]
)

# Initialize OpenAI client
client = instructor.from_openai(OpenAI())

# Initialize the agent
agent = AtomicAgent[BasicChatInputSchema, CustomOutputSchema](
    config=AgentConfig(
        client=client,
        model="gpt-5-mini",
        system_prompt_generator=system_prompt_generator,
        history=ChatHistory(),
    )
)

# Example usage
if __name__ == "__main__":
    user_input = "Tell me about atomic agents framework"
    response = agent.run(BasicChatInputSchema(chat_message=user_input))
    print(f"Agent: {response.chat_message}")
    print("Suggested questions:")
    for question in response.suggested_questions:
        print(f"- {question}")

Why Atomic Agents?

While existing frameworks for agentic AI focus on building autonomous multi-agent systems, they often lack the control and predictability required for real-world applications. Businesses need AI systems that produce consistent, reliable outputs aligned with their brand and objectives.

Atomic Agents addresses this need by providing:

  • Modularity: Build AI applications by combining small, reusable components.
  • Predictability: Define clear input and output schemas to ensure consistent behavior.
  • Extensibility: Easily swap out components or integrate new ones without disrupting the entire system.
  • Control: Fine-tune each part of the system individually, from system prompts to tool integrations.

All logic and control flows are written in Python, enabling developers to apply familiar best practices and workflows from traditional software development without compromising flexibility or clarity.

Core Concepts

Anatomy of an Agent

In Atomic Agents, an agent is composed of several key components:

  • System Prompt: Defines the agent's behavior and purpose.
  • Input Schema: Specifies the structure and validation rules for the agent's input.
  • Output Schema: Specifies the structure and validation rules for the agent's output.
  • History: Stores conversation history or other relevant data.
  • Context Providers: Inject dynamic context into the agent's system prompt at runtime.

Here's a high-level architecture diagram:

[Diagrams: high-level architecture overview of Atomic Agents, and what is sent to the LLM in the prompt]

Context Providers

Atomic Agents allows you to enhance your agents with dynamic context using Context Providers. Context Providers enable you to inject additional information into the agent's system prompt at runtime, making your agents more flexible and context-aware.

To use a Context Provider, create a class that inherits from BaseDynamicContextProvider and implements the get_info() method, which returns the context string to be added to the system prompt.

Here's a simple example:

from atomic_agents.context import BaseDynamicContextProvider

class SearchResultsProvider(BaseDynamicContextProvider):
    def __init__(self, title: str, search_results: list[str]):
        super().__init__(title=title)
        self.search_results = search_results

    def get_info(self) -> str:
        return "\n".join(self.search_results)

You can then register your Context Provider with the agent:

# Initialize your context provider with dynamic data
search_results_provider = SearchResultsProvider(
    title="Search Results",
    search_results=["Result 1", "Result 2", "Result 3"]
)

# Register the context provider with the agent
agent.register_context_provider("search_results", search_results_provider)

This allows your agent to include the search results (or any other context) in its system prompt, enhancing its responses based on the latest information.

Chaining Schemas and Agents

Atomic Agents makes it easy to chain agents and tools together by aligning their input and output schemas. This design allows you to swap out components effortlessly, promoting modularity and reusability in your AI applications.

Suppose you have an agent that generates search queries and you want to use these queries with different search tools. By aligning the agent's output schema with the input schema of the search tool, you can easily chain them together or switch between different search providers.

Here's how you can achieve this:

import instructor
import openai
from pydantic import Field
from atomic_agents import BaseIOSchema, AtomicAgent, AgentConfig
from atomic_agents.context import SystemPromptGenerator

# Import the search tool you want to use
from web_search_agent.tools.searxng_search import SearXNGSearchTool

# Define the input schema for the query agent
class QueryAgentInputSchema(BaseIOSchema):
    """Input schema for the QueryAgent."""
    instruction: str = Field(..., description="Instruction to generate search queries for.")
    num_queries: int = Field(..., description="Number of queries to generate.")

# Initialize the query agent
query_agent = AtomicAgent[QueryAgentInputSchema, SearXNGSearchTool.input_schema](
    config=AgentConfig(
        client=instructor.from_openai(openai.OpenAI()),
        model="gpt-5-mini",
        system_prompt_generator=SystemPromptGenerator(
            background=[
                "You are an intelligent query generation expert.",
                "Your task is to generate a specified number of diverse and highly relevant queries based on a given instruction."
            ],
            steps=[
                "Receive the instruction and the number of queries to generate.",
                "Generate the queries in JSON format."
            ],
            output_instructions=[
                "Ensure each query is unique and relevant.",
                "Provide the queries in the expected schema."
            ],
        ),
    )
)

In this example:

  • Modularity: By setting the output_schema of the query_agent to match the input_schema of SearXNGSearchTool, you can directly use the output of the agent as input to the tool.
  • Swappability: If you decide to switch to a different search provider, you can import a different search tool and update the output_schema accordingly.

For instance, to switch to another search service:

# Import a different search tool
from web_search_agent.tools.another_search import AnotherSearchTool

# Update the output schema
query_agent.config.output_schema = AnotherSearchTool.input_schema

This design pattern simplifies the process of chaining agents and tools, making your AI applications more adaptable and easier to maintain.
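The contract behind this pattern can be shown without the framework: the agent's output type is the tool's input type, so the hand-off is plain function composition. A framework-free sketch using dataclasses (QueryBatch and the two stub functions are hypothetical stand-ins for the generated schemas and LLM/tool calls):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class QueryBatch:
    """Stands in for the search tool's input schema. Because the query
    agent emits this exact type, the tool can consume it directly."""
    queries: List[str]


def query_agent(instruction: str, num_queries: int) -> QueryBatch:
    # Stub for the LLM call: produces the schema the search tool expects.
    return QueryBatch(
        queries=[f"{instruction} (variant {i + 1})" for i in range(num_queries)]
    )


def search_tool(batch: QueryBatch) -> List[str]:
    # Any tool whose input schema is QueryBatch is a drop-in replacement here.
    return [f"results for: {q}" for q in batch.queries]


# Schemas align, so chaining is a direct composition with no glue code:
hits = search_tool(query_agent("python asyncio tutorial", 2))
```

Swapping search providers then means swapping which tool defines QueryBatch; the composition line itself never changes.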

Examples & Documentation

Read the Docs

Visit the Documentation Site »

Quickstart Examples

A complete list of examples can be found in the examples directory. We strive to thoroughly document each example, but if something is unclear, please don't hesitate to open an issue or pull request to improve the documentation.

For full, runnable examples, refer to the files in the atomic-examples/quickstart/quickstart/ directory.

Complete Examples

In addition to the quickstart examples, we have more complex examples demonstrating the power of Atomic Agents:

  • Hooks System: Comprehensive demonstration of the AtomicAgent hook system for monitoring, error handling, and performance metrics with intelligent retry mechanisms.
  • Basic Multimodal: Demonstrates how to analyze images with text, focusing on extracting structured information from nutrition labels using GPT-4 Vision capabilities.
  • Deep Research: An advanced example showing how to perform deep research tasks.
  • Orchestration Agent: Shows how to create an Orchestrator Agent that intelligently decides between using different tools (search or calculator) based on user input.
  • RAG Chatbot: A chatbot implementation using Retrieval-Augmented Generation (RAG) to provide context-aware responses.
  • Web Search Agent: An intelligent agent that performs web searches and answers questions based on the results.
  • YouTube Summarizer: An agent that extracts and summarizes knowledge from YouTube videos.
  • YouTube to Recipe: An example that extracts structured recipe information from cooking videos, demonstrating complex information extraction and structuring.

For a complete list of examples, see the examples directory.

🚀 Version 2.0 Released!

What's New in V2.0

Watch: What's New in Atomic Agents V2.0

Atomic Agents v2.0 is here with major improvements! This release includes breaking changes that significantly improve the developer experience:

Key Changes in v2.0:

  • Cleaner imports: Eliminated .lib from import paths
  • Renamed classes: BaseAgent → AtomicAgent, BaseAgentConfig → AgentConfig, and more
  • Better type safety: Generic type parameters for tools and agents
  • Enhanced streaming: New run_stream() and run_async_stream() methods
  • Improved organization: Better module structure with context, connectors, and more

⚠️ Upgrading from v1.x

If you're upgrading from v1.x, please read our comprehensive Upgrade Guide for detailed migration instructions.

Atomic Forge & CLI

Atomic Forge is a collection of tools that can be used with Atomic Agents to extend its functionality. Current tools include:

  • arXiv Search
  • BoCha Search
  • Calculator
  • DateTime
  • Fía Signals
  • Hacker News Search
  • PDF Reader
  • SearXNG Search
  • Tavily Search
  • Webpage Scraper
  • Weather
  • Wikipedia Search
  • YouTube Transcript Scraper

For more information on using and creating tools, see the Atomic Forge README.

Running the CLI

To run the CLI, simply run the following command:

atomic

Or if you're running from a cloned repository with uv:

uv run atomic

After running this command, you will be presented with a menu allowing you to download tools.

Each tool has its own:

  • Input schema
  • Output schema
  • Usage example
  • Dependencies
  • Installation instructions

Atomic CLI tool example

The atomic-assembler CLI gives you complete control over your tools, avoiding the clutter of unnecessary dependencies. It makes modifying tools straightforward. Additionally, each tool comes with its own set of tests for reliability.

But you're not limited to the CLI! If you prefer, you can directly access the tool folders and manage them manually by simply copying and pasting as needed.

Atomic CLI menu

Project Structure

Atomic Agents uses a monorepo structure with the following main components:

  1. atomic-agents/: The core Atomic Agents library
  2. atomic-assembler/: The CLI tool for managing Atomic Agents components
  3. atomic-examples/: Example projects showcasing Atomic Agents usage
  4. atomic-forge/: A collection of tools that can be used with Atomic Agents

For local development, you can install from the repository:

git clone https://github.com/BrainBlend-AI/atomic-agents.git
cd atomic-agents
uv sync

To install all workspace packages (examples and tools):

uv sync --all-packages

Provider & Model Compatibility

Atomic Agents depends on the Instructor package. This means that in all examples where OpenAI is used, any other API supported by Instructor can also be used—such as Ollama, Groq, Mistral, Cohere, Anthropic, Gemini, MiniMax, and more. For a complete list, please refer to the Instructor documentation on its GitHub page.

Contributing

We welcome contributions! Please see the Contributing Guide for detailed information on how to contribute to Atomic Agents. Here are some quick steps:

  1. Fork the repository
  2. Create a new branch (git checkout -b feature-branch)
  3. Make your changes
  4. Run tests (uv run pytest --cov=atomic_agents atomic-agents)
  5. Format your code (uv run black atomic-agents atomic-assembler atomic-examples atomic-forge)
  6. Lint your code (uv run flake8 --extend-exclude=.venv atomic-agents atomic-assembler atomic-examples atomic-forge)
  7. Commit your changes (git commit -m 'Add some feature')
  8. Push to the branch (git push origin feature-branch)
  9. Open a pull request

For full development setup and guidelines, please refer to the Developer Guide.

License

This project is licensed under the MIT License—see the LICENSE file for details.

Additional Resources

If you want to learn more about the motivation and philosophy behind Atomic Agents, I suggest reading this Medium article (no account needed).

Video Resources:

Star History

Star History Chart