Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
01 / Skill
exa-labs

exa-mcp-server

Quality
9.0

This MCP server integrates AI assistants with Exa's specialized search engine, providing real-time access to web, code, and company data. It is ideal for AI agents requiring up-to-date external information for research, analysis, or contextual understanding.

USP

Exa is purpose-built for AI, offering advanced, configurable search tools like `web_search_advanced_exa` and `web_fetch_exa` directly within your AI environment. It provides robust, official integration for deep research and contextual awareness.

Use cases

  • Real-time web search for AI agents
  • Code search and context retrieval
  • Company research and competitive analysis
  • Lead generation and market research
  • Deep dives and literature reviews

Detected files (3)

  • skills/search/SKILL.md (skill, 14300 bytes)
    ---
    name: search
    description: "Deep research powered by Exa. Use for lead generation, literature reviews, deep dives, competitive analysis, or any query where one search falls short, including phrases like 'research this', 'find everything about', 'find me all', or 'deep dive on'."
    ---
    
    # Exa Research Orchestrator
    
    You are the orchestrator. Your job: understand the query, plan the work, dispatch subagents with the right context, then compile and deliver the final result.
    
    ## Prerequisites: Auth
    
    Server: `https://mcp.exa.ai/mcp`.
    
    1. **OAuth (recommended)** — client opens `auth.exa.ai`, user signs in with Google / SSO / email, JWT is attached automatically. No key to copy.
    2. **API key** — if OAuth isn't available, get one at https://dashboard.exa.ai/api-keys and pass it via `Authorization: Bearer …`, `?exaApiKey=…`, or `EXA_API_KEY` (local npm).
    3. **Anonymous** — works without setup but rate-limited.
    
    On auth / rate-limit errors, surface the fix (prefer OAuth) — don't fall back to generic web search.
    
    ## Date Calculation (Do This First)
    
    If the query involves time ("last week", "recent", "past 6 months"), calculate exact dates from today's date in your environment context. Write out the calculation explicitly before doing anything else. Never eyeball dates or reuse dates from examples.
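As a concrete illustration of writing out the calculation explicitly, here is a minimal Python sketch, assuming an example environment date of 2026-05-07 (the date value is illustrative only; in practice it comes from the environment context):

```python
from datetime import date, timedelta

# In practice, read today's date from the environment context;
# 2026-05-07 is just an illustrative value.
today = date(2026, 5, 7)

# "last week" = Monday through Sunday of the previous calendar week
start_last_week = today - timedelta(days=today.weekday() + 7)
end_last_week = start_last_week + timedelta(days=6)

# "past 6 months" ~ 182 days back
six_months_ago = today - timedelta(days=182)

print(start_last_week, end_last_week, six_months_ago)
```

Writing the arithmetic out this way, rather than eyeballing, is what prevents date drift when queries like "recent" or "last week" recur across sessions.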
    
    ## Step 1: Assess the Query
    
    Read the user's query and determine two things:
    
    **How complex is this?**
    - **Extremely Simple** (e.g. reading the contents of 1-2 pages): Handle it yourself. Read `references/searching.md` for query-writing guidance, run the searches, review and filter results, then respond directly. No subagents needed.
    - **Moderate** (when a fast or low-effort search is requested): Delegate to 1 subagent to keep your context window clean.
    - **Advanced** (clear topic, clear filters, a few parallel searches): Light subagent use. One round of parallel subagents, then compile.
    - **Complex** (cross-referencing across entity types, multi-hop chains, exhaustive coverage, semantic filtering): Full multi-pass with parallel subagents.
    
    **Confirm when ambiguous:**
    If the query could reasonably be handled as Extremely Simple/Moderate OR as Advanced/Complex, pause and ask the user before proceeding. Present:
    1. Your interpretation of the query
    2. The two (or more) plausible complexity levels
    3. What each level would look like in practice (e.g., "I can do a quick 1-2 search lookup, or I can fan out across 3-4 subagents to get deeper coverage")
    4. Let the user choose
    
    Examples of ambiguous queries:
    - "What are the best LLM fine-tuning frameworks?" — could be a quick opinionated list (Moderate) or an exhaustive evaluated comparison (Complex)
    - "Find competitors to Acme Corp" — could be a quick search for known competitors (Moderate) or a deep sweep across funding databases, press, and niche directories (Complex)
    - "What's the latest on WebGPU?" — could be one news search (Extremely Simple) or a multi-angle survey of specs, browser support, community adoption, and benchmarks (Advanced)
    
    Do NOT ask for confirmation when:
    - The query is clearly extremely simple (fact lookups, single-entity questions)
    - The query is clearly complex (explicit multi-constraint, "find everything", "exhaustive", "comprehensive")
    - The user has already specified depth ("do a deep dive", "quick answer")
    
    Note: if the user explicitly asks for something (e.g. "100" of something), continue to work until you've achieved it.
    
    **What work needs to happen?** Identify which of these apply (most queries use 3-5):
    
    1. **Seed from user input**: The user provided a list of entities to start from (company names, tickers, paper titles). Each seed becomes a parallel workstream.
    2. **Define what qualifies**: What makes a result a valid "row"? Translate the user's criteria into concrete checks.
    3. **Define what to capture**: What fields ("columns") does each result need? Build the schema before searching.
    4. **Search broadly**: Generate diverse queries and run them to find candidates. This is where subagents do the heavy lifting.
    5. **Extract structured data**: Pull specific fields from raw search results into the schema.
    6. **Filter**: Apply hard constraints (dates, geography, thresholds) and soft judgments (quality, relevance, semantic checks).
    7. **Merge and deduplicate**: Combine results from multiple subagents. Same URL = drop duplicate. Same entity from different sources = merge fields, keep best data.
    8. **Score and rank**: For "best of" (e.g. "what's the best ___?") queries, define the scoring criteria explicitly, then rank.
    9. **Synthesize narrative**: For research queries, organize findings by theme and write prose with citations.
    
    ## Step 2: Dispatch Subagents
    
    ### What subagents do
    
    Subagents run Exa searches and process the results. They keep raw search output out of your context window. Each subagent should:
    - Read the reference file(s) you point it to
    - Run the specific searches you assign
    - Return compact, structured output
    
    ### How to dispatch
    
    Use the **Agent tool** to dispatch subagents. Reference file paths are relative to the directory this file was loaded from.
    
    Use `model: "haiku"` for subagents.
    
    Tell each subagent:
    1. Which reference file(s) to read for instructions (always include the absolute path)
    2. What specific searches to run or what specific work to do
    3. What output format to return
    
    **Template:**
    ```
    Read the file at [this skill's directory]/references/searching.md for instructions on how to query Exa effectively.
    
    Then do the following:
    [specific task description]
    [specific queries to run, if you are prescribing them]
    [validation criteria -- what makes a result qualify, so the subagent filters before returning]
    
    Return: [output format -- e.g. "compact JSON with name, url, snippet per result" or "markdown table with columns X, Y, Z"].
    
    End with EXACTLY: `sources_reviewed: N` where N = sum of `numResults` across every `web_search_exa` call (incl. retries). E.g. calls with numResults 10, 10, 5 → `sources_reviewed: 25`.
    ```
    
    **Pass the `sources_reviewed` instruction line to every subagent verbatim — don't paraphrase.**
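When compiling, the per-subagent totals can be summed mechanically. A minimal sketch, assuming subagent replies are collected as plain strings ending in the `sources_reviewed: N` footer (helper name is illustrative):

```python
import re

def total_sources(replies):
    """Sum the trailing `sources_reviewed: N` line across subagent replies."""
    total = 0
    for text in replies:
        # Match the footer at the very end of the (stripped) reply.
        match = re.search(r"sources_reviewed:\s*(\d+)\s*$", text.strip())
        if match:
            total += int(match.group(1))
    return total

replies = [
    "distilled results...\nsources_reviewed: 25",
    "distilled results...\nsources_reviewed: 10",
]
print(total_sources(replies))  # 25 + 10 = 35
```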

### Which reference files to point subagents to
    
    Always point subagents to `references/searching.md`. It contains Exa query guidance and an index of domain-specific pattern files that the subagent will select from based on its task.
    
    Point to whichever of these also apply:
    
    | File | Point a subagent here when... |
    |---|---|
    | `references/extraction.md` | The subagent needs to extract specific data points into a schema you defined |
    | `references/filtering.md` | The subagent needs to evaluate results against criteria (especially semantic/soft filters) |
    | `references/synthesis.md` | The subagent is producing a prose synthesis rather than structured data |
    | `references/source-quality.md` | The subagent needs to assess source credibility, especially for "best of", ranking, or expert-finding queries |
    
    ### How to split work across subagents
    
    If running parallel subagents, decompose the primary task/question into **sub-questions** to cover different search territories.
    
    For example, "best open-source LLM fine-tuning frameworks for production use" can be decomposed into multiple parallel sub-questions:
    1. "What open-source LLM fine-tuning frameworks do production engineers recommend, and what do they say about using them in real deployments?"
    2. "What open-source LLM fine-tuning tools have launched or gained traction in the last 6 months that aren't yet widely known?"
    3. "What are the most common complaints, failure modes, and reasons teams migrated away from specific open-source LLM fine-tuning frameworks in production?"
    
Depending on your "**How complex is this?**" analysis, some queries need 2-3 sub-questions and some need many, spanning different angles, creative thought patterns, and adversarial perspectives. It depends on what the user is asking for and how deep they want you to go.
    
    Give the sub-question directly to the subagent in its prompt.
    
    ### Subagent sizing
    
    - Aim for 3-5 searches per subagent
    - Parallelize aggressively — independent workstreams should be separate subagents launched in a single message
    - Do not use `run_in_background` — dispatch all subagents in one message and wait for their results
    - For per-seed work (enriching a list of 20 companies), batch 3-5 seeds per subagent
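The per-seed batching rule can be sketched in a few lines of Python (function name and chunk size are illustrative, within the 3-5 range above):

```python
def batch_seeds(seeds, size=4):
    """Chunk a seed list so each subagent enriches 3-5 seeds."""
    return [seeds[i:i + size] for i in range(0, len(seeds), size)]

companies = [f"company-{i}" for i in range(20)]
batches = batch_seeds(companies)
print(len(batches))  # 20 seeds at 4 per subagent -> 5 subagents
```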
    
    ### Token isolation
    
    Never run bulk searches in your main context. The whole point of subagents is to keep raw search output out of your context window. Subagents process results and return only distilled output.
    
    ### When things go wrong
    
    - **Subagent returns empty**: Rephrase queries with different angles, not synonyms. If still empty, the topic may have limited web coverage -- report that.
    - **Subagent returns off-topic results**: Queries were too vague. Retry with longer, more specific queries.
    
    ## Step 3: Compile Results
    
    After subagents return:
    
    **Deduplicate:**
    1. Collect all results into a single list
    2. Remove exact URL duplicates
    3. Same entity from different sources: merge fields, keep the most complete/recent data
    4. Track: "Deduplicated X results down to Y unique entries"
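The four steps above can be sketched as follows, assuming each subagent returns rows as dicts with at least `name` and `url` fields (all field names here are illustrative, not a prescribed schema):

```python
def merge_results(batches):
    """Drop exact URL duplicates; merge same-entity rows by filling missing fields."""
    by_url = {}
    by_name = {}
    for batch in batches:
        for row in batch:
            url = row.get("url")
            if url and url in by_url:
                continue  # exact URL duplicate: drop
            key = row.get("name", "").strip().lower()
            if key in by_name:
                # same entity from another source: keep existing data, fill gaps
                for field, value in row.items():
                    by_name[key].setdefault(field, value)
            else:
                by_name[key] = dict(row)
            if url:
                by_url[url] = key
    merged = list(by_name.values())
    print(f"Deduplicated {sum(len(b) for b in batches)} results "
          f"down to {len(merged)} unique entries")
    return merged
```

`setdefault` implements "merge fields, keep best data" in its simplest form; a fuller version might prefer the more recent value per field rather than the first seen.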
    
    **Validate coverage:**
    - Are there obvious gaps? (missing time periods, missing geographic regions, missing entity types)
    - For each gap found, run targeted follow-up searches (via subagent if multiple queries are needed, direct if extremely simple)
    - For "find everything" queries, check if results from different subagents overlap heavily (good sign) or are completely disjoint (may indicate missed angles)
    
    **Format the output:**
    
    If you used subagents, open with: "I used Exa to review {X} sources across {Y} subagents. Here's what was found:" (X = sum of `sources_reviewed` across all subagents and passes plus any direct searches you ran; Y = total subagents dispatched. Pluralize naturally.)
    
Then: format the output cleanly, filling no more than one scroll length of the Claude Code screen. Include hyperlinked text where relevant. Below it, you may also include short, easy-to-read sections that:
    - ("Result") directly answer the original user request (in few words; make every word count)
    - ("Process") include anything worth noting about your process and what you consider to be high-signal in this domain vs. what you filtered out.
    - ("Patterns") any patterns identified that are non-obvious, require n-th order thinking, and are not included or alluded to in the rest of the output but might be interesting to the user.
    - ("Notes") based on everything you know about the user and their work beyond this task, mention anything notable/useful you found that is not included or alluded to in the rest of the output.
    
    If it's impossible to fit the full output in a single screen, write a file in the most relevant/useful file format (.csv, .md) to `./exa-results/<topic>-<YYYY-MM-DD>` and include a pointer to the full file below the 1-screen output.
    
    **General output rules:**
    - No emojis unless the user requested them
    - Include in-line 1-word or multi-word hyperlinks throughout outputs where hyperlinking is a value-add.
    - Prefer tables over lists (fall back to lists only when fields are non-uniform or values are too long to fit cleanly)
    
    ## Multi-Pass Queries
    
    Some queries require multiple sequential passes where later passes depend on earlier results. Common patterns:
    
    **Entity chaining** (multi-hop): Pass 1 finds entities (companies), Pass 2 finds related entities per result (people at those companies), Pass 3 enriches those (their public statements). Each pass is a round of parallel subagents.
    
    **Exploratory then targeted**: Pass 1 scouts the landscape broadly, Pass 2 searches deeply in the most promising directions found in Pass 1.
    
    **Criteria discovery**: When "best" isn't predefined, Pass 1 surveys what practitioners actually value, Pass 2 searches for candidates matching those criteria.
    
    Between passes, compile and deduplicate before dispatching the next round.
    
    ## Evaluating Source Quality
    
    Source quality matters most for "best of", ranking, expert-finding, and best-practices queries, but is useful context for almost any research task.
    
    **At the subagent level:** Point subagents to `references/source-quality.md` so they tag source quality in their output. This lets you weight results during compilation.
    
    **At the orchestrator level**, when compiling subagent results:
    
    1. **Convergence across high-signal sources**: Convergence alone isn't meaningful (3 low-quality sources agreeing is just shared noise). What matters is when multiple independent, high-signal sources (practitioners, people with skin in the game) converge on the same finding.
    2. **Practitioner vs commentator**: Weight practitioners (people doing the work) higher than commentators (people writing about the work).
    3. **Via negativa**: Before synthesizing, define who to exclude (sources with misaligned incentives, no skin in the game, or unfalsifiable claims). Filtering out noise is more valuable than seeking brilliance.
    4. **Red-team your compiled results**: What perspectives are missing? What biases might be distorting the aggregate? If a gap emerges, run a targeted follow-up.
    5. **Ideas over entities**: For expert-finding and best-practices queries, the primary output is convergent truths, not a ranked list of names. Lead with what the best sources agree on, then cite who said it.
    
    ## Gotchas
    
    - **Over-execution on simple queries**: If the user asks "what year was X founded", don't spin up subagents. One search, one answer.
    - **Under-execution on hard queries**: If the query has 4+ constraints, temporal joins, or semantic filtering, a single search will not cut it. Fan out.
    - **Synonym queries**: Running "overrated AI tools" and "overhyped AI tools" as separate subagent queries wastes tokens. These hit the same embedding region. Diversify by angle instead.
    - **Forgetting to deduplicate**: Multiple subagents will return overlapping results. Always deduplicate before synthesis.
- **Treating Exa results as validated**: Exa ranks by similarity, not by whether results meet your criteria. A result appearing in search output does not mean it satisfies the user's constraints. You must validate.
    - **Date drift**: Always calculate dates from the current environment date. Never reuse dates from these instructions or from previous queries.
    
  • server.json (mcp_server, 792 bytes)
    {
      "$schema": "https://static.modelcontextprotocol.io/schemas/2025-07-09/server.schema.json",
      "name": "io.github.exa-labs/exa-mcp-server",
      "description": "MCP server with Exa for web search and web crawling. Exa is the search engine for AI Applications.",
      "version": "3.2.1",
      "packages": [
        {
          "registryType": "npm",
          "identifier": "exa-mcp-server",
          "version": "3.2.1"
        }
      ],
      "remotes": [
        {
          "type": "sse",
          "url": "https://mcp.exa.ai/mcp?tools=web_search_exa,web_search_advanced_exa,web_fetch_exa",
          "description": "Hosted Exa MCP server with web search and web crawling capabilities. Get the API key from https://dashboard.exa.ai/api-keys. Customize the tools parameter to enable only specific tools (comma-separated list)."
        }
      ]
    }
    
  • .claude-plugin/marketplace.json (marketplace, 1314 bytes)
    {
      "name": "exa",
      "owner": {
        "name": "Exa",
        "email": "hello@exa.ai"
      },
      "metadata": {
        "description": "Official Exa AI plugin providing web search, code search, company research, and deep research capabilities",
        "version": "3.3.9"
      },
      "plugins": [
        {
          "name": "exa",
          "source": "./",
          "description": "A Model Context Protocol server with Exa for web search, code search, and web crawling. Provides real-time web searches with configurable tool selection, allowing users to enable or disable specific search capabilities.",
          "version": "3.3.9",
          "author": {
            "name": "Exa",
            "email": "hello@exa.ai"
          },
          "homepage": "https://docs.exa.ai/reference/exa-mcp",
          "repository": "https://github.com/exa-labs/exa-mcp-server",
          "license": "MIT",
          "keywords": [
            "mcp",
            "search",
            "web-search",
            "code-search",
            "exa",
            "research",
            "company-research",
            "linkedin",
            "crawling"
          ],
          "category": "productivity",
          "mcpServers": {
            "exa": {
              "type": "http",
              "url": "https://mcp.exa.ai/mcp?client=claude-code-plugin",
              "headers": {
                "x-exa-source": "claude-code-plugin"
              }
            }
          }
        }
      ]
    }
    
    

README

Exa MCP Server


Connect AI assistants to Exa's search capabilities: web search, code search, and company research.

Full Documentation | npm Package | Get Your Exa API Key

Installation

Connect to Exa's hosted MCP server:

https://mcp.exa.ai/mcp

Get your API key

Cursor

Add to ~/.cursor/mcp.json:

{
  "mcpServers": {
    "exa": {
      "url": "https://mcp.exa.ai/mcp"
    }
  }
}
VS Code

Add to .vscode/mcp.json:

{
  "servers": {
    "exa": {
      "type": "http",
      "url": "https://mcp.exa.ai/mcp"
    }
  }
}
Claude Code
claude mcp add --transport http exa https://mcp.exa.ai/mcp
Claude Desktop

Exa is available as a native Claude Connector — no config files or terminal commands needed.

  1. Open Claude Desktop Settings (or Customize) and go to Connectors
  2. Search for Exa in the directory
  3. Click + to add it

That's it! Claude will now have access to Exa's search tools.

Alternative: manual config

Add to your config file (~/Library/Application Support/Claude/claude_desktop_config.json on macOS, %APPDATA%\Claude\claude_desktop_config.json on Windows):

{
  "mcpServers": {
    "exa": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.exa.ai/mcp"]
    }
  }
}
Codex
codex mcp add exa --url https://mcp.exa.ai/mcp
OpenCode

Add to your opencode.json:

{
  "mcp": {
    "exa": {
      "type": "remote",
      "url": "https://mcp.exa.ai/mcp",
      "enabled": true
    }
  }
}
Antigravity

Open the MCP Store panel (from the "..." dropdown in the side panel), then add a custom server with:

https://mcp.exa.ai/mcp
Windsurf

Add to ~/.codeium/windsurf/mcp_config.json:

{
  "mcpServers": {
    "exa": {
      "serverUrl": "https://mcp.exa.ai/mcp"
    }
  }
}
Zed

Add to your Zed settings:

{
  "context_servers": {
    "exa": {
      "url": "https://mcp.exa.ai/mcp"
    }
  }
}
Gemini CLI

Add to ~/.gemini/settings.json:

{
  "mcpServers": {
    "exa": {
      "httpUrl": "https://mcp.exa.ai/mcp"
    }
  }
}
v0 by Vercel

In v0, select Prompt Tools > Add MCP and enter:

https://mcp.exa.ai/mcp
Warp

Go to Settings > MCP Servers > Add MCP Server and add:

{
  "exa": {
    "url": "https://mcp.exa.ai/mcp"
  }
}
Kiro

Add to ~/.kiro/settings/mcp.json:

{
  "mcpServers": {
    "exa": {
      "url": "https://mcp.exa.ai/mcp"
    }
  }
}
Roo Code

Add to your Roo Code MCP config:

{
  "mcpServers": {
    "exa": {
      "type": "streamable-http",
      "url": "https://mcp.exa.ai/mcp"
    }
  }
}
Other Clients

For clients that support remote MCP:

{
  "mcpServers": {
    "exa": {
      "url": "https://mcp.exa.ai/mcp"
    }
  }
}

For clients that need mcp-remote:

{
  "mcpServers": {
    "exa": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.exa.ai/mcp"]
    }
  }
}
Via npm Package

Use the npm package with your API key. Get your API key.

{
  "mcpServers": {
    "exa": {
      "command": "npx",
      "args": ["-y", "exa-mcp-server"],
      "env": {
        "EXA_API_KEY": "your_api_key"
      }
    }
  }
}

Development

Requires Node.js 20 or newer.

Setup:

npm run setup

Run locally:

npm start

Run tests:

npm run ci

Available Tools

Enabled by Default:

| Tool | Description |
|---|---|
| `web_search_exa` | Search the web for any topic and get clean, ready-to-use content |
| `web_fetch_exa` | Get the full content of a specific webpage from a known URL |

Off by Default:

| Tool | Description |
|---|---|
| `web_search_advanced_exa` | Advanced web search with full control over filters, domains, dates, and content options |

Deprecated (still available for backwards compatibility):

| Tool | Use instead |
|---|---|
| `get_code_context_exa` | `web_search_exa` |
| `company_research_exa` | `web_search_advanced_exa` |
| `crawling_exa` | `web_fetch_exa` |
| `people_search_exa` | `web_search_advanced_exa` |
| `linkedin_search_exa` | `web_search_advanced_exa` |
| `deep_researcher_start` | Research API |
| `deep_researcher_check` | Research API |
| `deep_search_exa` | `web_search_advanced_exa` |

Enable additional tools with the tools parameter:

https://mcp.exa.ai/mcp?exaApiKey=YOUR_KEY&tools=web_search_exa,web_search_advanced_exa,web_fetch_exa

Agent Skills (Claude Skills)

Ready-to-use skills for Claude Code. Each skill teaches Claude how to use Exa search for a specific task. Copy the content inside a dropdown and paste it into Claude Code — it handles the rest.

Company Research

Copy the content below and paste it into Claude Code. It will set up the MCP connection and skill for you.

Step 1: Install or update Exa MCP

If Exa MCP already exists in your MCP configuration, either uninstall it first and install the new one, or update your existing MCP config with this endpoint. Run this command in your terminal:

claude mcp add --transport http exa "https://mcp.exa.ai/mcp?tools=web_search_advanced_exa"


Step 2: Add this Claude skill

---
name: company-research
description: Company research using Exa search. Finds company info, competitors, news, financials, LinkedIn profiles, builds company lists. Use when researching companies, doing competitor analysis, market research, or building company lists.
context: fork
---

# Company Research

## Tool Restriction (Critical)

ONLY use `web_search_advanced_exa`. Do NOT use `web_search_exa` or any other Exa tools.

## Token Isolation (Critical)

Never run Exa searches in main context. Always spawn Task agents:
- Agent runs Exa search internally
- Agent processes results using LLM intelligence
- Agent returns only distilled output (compact JSON or brief markdown)
- Main context stays clean regardless of search volume

## Dynamic Tuning

No hardcoded numResults. Tune to user intent:
- User says "a few" → 10-20
- User says "comprehensive" → 50-100
- User specifies number → match it
- Ambiguous? Ask: "How many companies would you like?"

## Query Variation

Exa returns different results for different phrasings. For coverage:
- Generate 2-3 query variations
- Run in parallel
- Merge and deduplicate
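The variation-and-merge loop above can be sketched as follows; `run_search` stands in for a Task agent calling `web_search_advanced_exa` and returning rows with a `url` field (all names here are illustrative, not part of the Exa API):

```python
from concurrent.futures import ThreadPoolExecutor

def search_with_variations(run_search, queries):
    """Run each phrasing in parallel, then merge and URL-deduplicate."""
    with ThreadPoolExecutor(max_workers=len(queries)) as pool:
        batches = list(pool.map(run_search, queries))
    seen, merged = set(), []
    for batch in batches:
        for row in batch:
            if row["url"] not in seen:  # first occurrence wins
                seen.add(row["url"])
                merged.append(row)
    return merged

queries = [
    "AI infrastructure startups San Francisco",
    "SF Bay Area AI infra companies",
    "startups building AI infrastructure in San Francisco",
]
```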

## Categories

Use appropriate Exa `category` depending on what you need:
- `company` → homepages, rich metadata (headcount, location, funding, revenue)
- `news` → press coverage, announcements
- `people` → LinkedIn profiles (public data)
- No category (`type: "auto"`) → general web results, deep dives, broader context

Start with `category: "company"` for discovery, then use other categories or no category for deeper research.

### Category-Specific Filter Restrictions

When using `category: "company"`, these parameters cause 400 errors:
- `includeDomains` / `excludeDomains`
- `startPublishedDate` / `endPublishedDate`
- `startCrawlDate` / `endCrawlDate`

When searching without a category (or with `news`), domain and date filters work fine.

**Universal restriction:** `includeText` and `excludeText` only support **single-item arrays**. Multi-item arrays cause 400 errors across all categories.
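For example, the following payload is valid, whereas changing the last field to `"includeText": ["Series B", "London"]` would return a 400 (the query values here are illustrative):

```json
{
  "query": "fintech startups Europe",
  "numResults": 15,
  "includeText": ["Series B"]
}
```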

## LinkedIn

Public LinkedIn via Exa: `category: "people"`, no other filters.
Auth-required LinkedIn → use Claude in Chrome browser fallback.

## Browser Fallback

Auto-fallback to Claude in Chrome when:
- Exa returns insufficient results
- Content is auth-gated
- Dynamic pages need JavaScript

## Examples

### Discovery: find companies in a space
```
web_search_advanced_exa {
  "query": "AI infrastructure startups San Francisco",
  "category": "company",
  "numResults": 20,
  "type": "auto"
}
```

### Deep dive: research a specific company
```
web_search_advanced_exa {
  "query": "Anthropic funding rounds valuation 2024",
  "type": "deep",
  "numResults": 10,
  "includeDomains": ["techcrunch.com", "crunchbase.com", "bloomberg.com"]
}
```

### News coverage
```
web_search_advanced_exa {
  "query": "Anthropic AI safety",
  "category": "news",
  "numResults": 15,
  "startPublishedDate": "2024-01-01"
}
```

### LinkedIn profiles
```
web_search_advanced_exa {
  "query": "VP Engineering AI infrastructure",
  "category": "people",
  "numResults": 20
}
```

## Output Format

Return:
1) Results (structured list; one company per row)
2) Sources (URLs; 1-line relevance each)
3) Notes (uncertainty/conflicts)


Step 3: Ask User to Restart Claude Code

You should ask the user to restart Claude Code to have the config changes take effect.

Code Search

Copy the content below and paste it into Claude Code. It will set up the MCP connection and skill for you.

Step 1: Install or update Exa MCP

If Exa MCP already exists in your MCP configuration, either uninstall it first and install the new one, or update your existing MCP config with this endpoint. Run this command in your terminal:

claude mcp add --transport http exa "https://mcp.exa.ai/mcp?tools=web_search_exa"


Step 2: Add this Claude skill

---
name: code-search-exa
description: Code context using Exa. Finds real snippets and docs from GitHub, StackOverflow, and technical docs. Use when searching for code examples, API syntax, library documentation, or debugging help.
context: fork
---

# Code Context (Exa)

## Tool Restriction (Critical)

ONLY use `web_search_exa`. Do NOT use other Exa tools.

## Token Isolation (Critical)

Never run Exa in main context. Always spawn Task agents:
- Agent calls `web_search_exa`
- Agent extracts the minimum viable snippet(s) + constraints
- Agent deduplicates near-identical results (mirrors, forks, repeated StackOverflow answers) before presenting
- Agent returns copyable snippets + brief explanation
- Main context stays clean regardless of search volume

## When to Use

Use this tool for ANY programming-related request:
- API usage and syntax
- SDK/library examples
- config and setup patterns
- framework "how to" questions
- debugging when you need authoritative snippets

## Query Writing Patterns (High Signal)

To reduce irrelevant results and cross-language noise:
- Always include the **programming language** in the query.
  - Example: use **"Go generics"** instead of just **"generics"**.
- When applicable, also include **framework + version** (e.g., "Next.js 14", "React 19", "Python 3.12").
- Include exact identifiers (function/class names, config keys, error messages) when you have them.

## Output Format (Recommended)

Return:
1) Best minimal working snippet(s) (keep it copy/paste friendly)
2) Notes on version / constraints / gotchas
3) Sources (URLs if present in returned context)

Before presenting:
- Deduplicate similar results and keep only the best representative snippet per approach.

## MCP Configuration

```json
{
  "servers": {
    "exa": {
      "type": "http",
      "url": "https://mcp.exa.ai/mcp?tools=web_search_exa"
    }
  }
}
```


Step 3: Ask User to Restart Claude Code

You should ask the user to restart Claude Code to have the config changes take effect.

People Search

Copy the content below and paste it into Claude Code. It will set up the MCP connection and skill for you.

Step 1: Install or update Exa MCP

If Exa MCP already exists in your MCP configuration, either uninstall it first and install the new one, or update your existing MCP config with this endpoint. Run this command in your terminal:

claude mcp add --transport http exa "https://mcp.exa.ai/mcp?tools=web_search_advanced_exa"


Step 2: Add this Claude skill

---
name: people-research
description: People research using Exa search. Finds LinkedIn profiles, professional backgrounds, experts, team members, and public bios across the web. Use when searching for people, finding experts, or looking up professional profiles.
context: fork
---

# People Research

## Tool Restriction (Critical)

ONLY use `web_search_advanced_exa`. Do NOT use `web_search_exa` or any other Exa tools.

## Token Isolation (Critical)

Never run Exa searches in main context. Always spawn Task agents:
- Agent runs Exa search internally
- Agent processes results using LLM intelligence
- Agent returns only distilled output (compact JSON or brief markdown)
- Main context stays clean regardless of search volume

## Dynamic Tuning

No hardcoded numResults. Tune to user intent:
- User says "a few" → 10-20
- User says "comprehensive" → 50-100
- User specifies number → match it
- Ambiguous? Ask: "How many profiles would you like?"

## Query Variation

Exa returns different results for different phrasings. For coverage:
- Generate 2-3 query variations
- Run in parallel
- Merge and deduplicate

## Categories

Use appropriate Exa `category` depending on what you need:
- `people` → LinkedIn profiles, public bios (primary for discovery)
- `personal site` → personal blogs, portfolio sites, about pages
- `news` → press mentions, interviews, speaker bios
- No category (`type: "auto"`) → general web results, broader context

Start with `category: "people"` for profile discovery, then use other categories or no category for deeper research on specific individuals.

### Category-Specific Filter Restrictions

When using `category: "people"`, these parameters cause errors:
- `startPublishedDate` / `endPublishedDate`
- `startCrawlDate` / `endCrawlDate`
- `includeText` / `excludeText`
- `excludeDomains`
- `includeDomains` — **LinkedIn domains only** (e.g., "linkedin.com")

When searching without a category, all parameters are available (but `includeText`/`excludeText` still only support single-item arrays).

## LinkedIn

Public LinkedIn via Exa: `category: "people"`, no other filters.
Auth-required LinkedIn → use Claude in Chrome browser fallback.

## Browser Fallback

Auto-fallback to Claude in Chrome when:
- Exa returns insufficient results
- Content is auth-gated
- Dynamic pages need JavaScript

## Examples

### Discovery: find people by role
```
web_search_advanced_exa {
  "query": "VP Engineering AI infrastructure",
  "category": "people",
  "numResults": 20,
  "type": "auto"
}
```

### With query variations
```
web_search_advanced_exa {
  "query": "machine learning engineer San Francisco",
  "category": "people",
  "additionalQueries": ["ML engineer SF", "AI engineer Bay Area"],
  "numResults": 25,
  "type": "deep"
}
```

### Deep dive: research a specific person
```
web_search_advanced_exa {
  "query": "Dario Amodei Anthropic CEO background",
  "type": "auto",
  "numResults": 15
}
```

### News mentions
```
web_search_advanced_exa {
  "query": "Dario Amodei interview",
  "category": "news",
  "numResults": 10,
  "startPublishedDate": "2024-01-01"
}
```

## Output Format

Return:
1) Results (name, title, company, location if available)
2) Sources (Profile URLs)
3) Notes (profile completeness, verification status)
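A distilled subagent return could be as compact as the following sketch (field names and shape illustrative, not a required schema):

```
{
  "results": [
    {"name": "…", "title": "…", "company": "…", "location": "…"}
  ],
  "sources": ["…"],
  "notes": "2 of 20 profiles lacked a current company; unverified."
}
```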


Step 3: Ask User to Restart Claude Code

You should ask the user to restart Claude Code so the config changes take effect.
Financial Report Search

Copy the content below and paste it into Claude Code. It will set up the MCP connection and skill for you.

Step 1: Install or update Exa MCP

If Exa MCP already exists in your MCP configuration, either uninstall it first and install the new one, or update your existing MCP config with this endpoint. Run this command in your terminal:

claude mcp add --transport http exa "https://mcp.exa.ai/mcp?tools=web_search_advanced_exa"


Step 2: Add this Claude skill

---
name: web-search-advanced-financial-report
description: Search for financial reports using Exa advanced search. Near-full filter support for finding SEC filings, earnings reports, and financial documents. Use when searching for 10-K filings, quarterly earnings, or annual reports.
context: fork
---

# Web Search Advanced - Financial Report Category

## Tool Restriction (Critical)

ONLY use `web_search_advanced_exa` with `category: "financial report"`. Do NOT use other categories or tools.

## Filter Restrictions (Critical)

The `financial report` category has one known restriction:

- `excludeText` - NOT SUPPORTED (causes 400 error)
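To approximate exclusion without `excludeText`, push negative intent into the `query` wording or use the supported `excludeDomains` parameter instead. A sketch (domain value illustrative):

```
web_search_advanced_exa {
  "query": "10-K annual report semiconductor",
  "category": "financial report",
  "excludeDomains": ["seekingalpha.com"],
  "numResults": 10,
  "type": "auto"
}
```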

## Supported Parameters

### Core
- `query` (required)
- `numResults`
- `type` ("auto", "fast", "deep", "instant")

### Domain filtering
- `includeDomains` (e.g., ["sec.gov", "investor.apple.com"])
- `excludeDomains`

### Date filtering (ISO 8601) - Very useful for financial reports!
- `startPublishedDate` / `endPublishedDate`
- `startCrawlDate` / `endCrawlDate`

### Text filtering
- `includeText` (must contain ALL) - **single-item arrays only**; multi-item causes 400
- ~~`excludeText`~~ - NOT SUPPORTED

### Content extraction
- `textMaxCharacters` / `contextMaxCharacters`
- `enableSummary` / `summaryQuery`
- `enableHighlights` / `highlightsNumSentences` / `highlightsPerUrl` / `highlightsQuery`

### Additional
- `additionalQueries`
- `maxAgeHours` / `livecrawlTimeout`
- `subpages` / `subpageTarget`

## Token Isolation (Critical)

Never run Exa searches in main context. Always spawn Task agents:
- Agent calls `web_search_advanced_exa` with `category: "financial report"`
- Agent merges + deduplicates results before presenting
- Agent returns distilled output (brief markdown or compact JSON)
- Main context stays clean regardless of search volume

## When to Use

Use this category when you need:
- SEC filings (10-K, 10-Q, 8-K, S-1)
- Quarterly earnings reports
- Annual reports
- Investor presentations
- Financial statements

## Examples

SEC filings for a company:
```
web_search_advanced_exa {
  "query": "Anthropic SEC filing S-1",
  "category": "financial report",
  "numResults": 10,
  "type": "auto"
}
```

Recent earnings reports:
```
web_search_advanced_exa {
  "query": "Q4 2025 earnings report technology",
  "category": "financial report",
  "startPublishedDate": "2025-10-01",
  "numResults": 20,
  "type": "auto"
}
```

Specific filing type:
```
web_search_advanced_exa {
  "query": "10-K annual report AI companies",
  "category": "financial report",
  "includeDomains": ["sec.gov"],
  "startPublishedDate": "2025-01-01",
  "numResults": 15,
  "type": "deep"
}
```

Risk factors analysis:
```
web_search_advanced_exa {
  "query": "risk factors cybersecurity",
  "category": "financial report",
  "includeText": ["cybersecurity"],
  "numResults": 10,
  "enableHighlights": true,
  "highlightsQuery": "What are the main cybersecurity risks?"
}
```

## Output Format

Return:
1) Results (company name, filing type, date, key figures/highlights)
2) Sources (Filing URLs)
3) Notes (reporting period, any restatements, auditor notes)


Step 3: Ask User to Restart Claude Code

You should ask the user to restart Claude Code so the config changes take effect.
Research Paper Search

Copy the content below and paste it into Claude Code. It will set up the MCP connection and skill for you.

Step 1: Install or update Exa MCP

If Exa MCP already exists in your MCP configuration, either uninstall it first and install the new one, or update your existing MCP config with this endpoint. Run this command in your terminal:

claude mcp add --transport http exa "https://mcp.exa.ai/mcp?tools=web_search_advanced_exa"


Step 2: Add this Claude skill

---
name: web-search-advanced-research-paper
description: Search for research papers and academic content using Exa advanced search. Full filter support including date ranges and text filtering. Use when searching for academic papers, arXiv preprints, or scientific research.
context: fork
---

# Web Search Advanced - Research Paper Category

## Tool Restriction (Critical)

ONLY use `web_search_advanced_exa` with `category: "research paper"`. Do NOT use other categories or tools.

## Full Filter Support

The `research paper` category supports ALL available parameters:

### Core
- `query` (required)
- `numResults`
- `type` ("auto", "fast", "deep", "instant")

### Domain filtering
- `includeDomains` (e.g., ["arxiv.org", "openreview.net"])
- `excludeDomains`

### Date filtering (ISO 8601)
- `startPublishedDate` / `endPublishedDate`
- `startCrawlDate` / `endCrawlDate`

### Text filtering
- `includeText` (must contain ALL)
- `excludeText` (exclude if ANY match)

**Array size restriction:** `includeText` and `excludeText` only support **single-item arrays**. Multi-item arrays (2+ items) cause 400 errors. To match multiple terms, put them in the `query` string or run separate searches.
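For example, to require both "diffusion" and "benchmark", run two single-item searches and merge the results, rather than passing one two-item array:

```
web_search_advanced_exa {
  "query": "diffusion model benchmarks",
  "category": "research paper",
  "includeText": ["diffusion"]
}

web_search_advanced_exa {
  "query": "diffusion model benchmarks",
  "category": "research paper",
  "includeText": ["benchmark"]
}
```

Intersect by URL if you need papers matching both terms; union and deduplicate if either term suffices.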

### Content extraction
- `textMaxCharacters` / `contextMaxCharacters`
- `enableSummary` / `summaryQuery`
- `enableHighlights` / `highlightsNumSentences` / `highlightsPerUrl` / `highlightsQuery`

### Additional
- `userLocation`
- `moderation`
- `additionalQueries`
- `maxAgeHours` / `livecrawlTimeout`
- `subpages` / `subpageTarget`

## Token Isolation (Critical)

Never run Exa searches in main context. Always spawn Task agents:
- Agent calls `web_search_advanced_exa` with `category: "research paper"`
- Agent merges + deduplicates results before presenting
- Agent returns distilled output (brief markdown or compact JSON)
- Main context stays clean regardless of search volume

## When to Use

Use this category when you need:
- Academic papers from arXiv, OpenReview, PubMed, etc.
- Scientific research on specific topics
- Literature reviews with date filtering
- Papers containing specific methodologies or terms

## Examples

Recent papers on a topic:
```
web_search_advanced_exa {
  "query": "transformer attention mechanisms efficiency",
  "category": "research paper",
  "startPublishedDate": "2024-01-01",
  "numResults": 15,
  "type": "auto"
}
```

Papers from specific venues:
```
web_search_advanced_exa {
  "query": "large language model agents",
  "category": "research paper",
  "includeDomains": ["arxiv.org", "openreview.net"],
  "includeText": ["LLM"],
  "numResults": 20,
  "type": "deep"
}
```

## Output Format

Return:
1) Results (structured list with title, authors, date, abstract summary)
2) Sources (URLs with publication venue)
3) Notes (methodology differences, conflicting findings)


Step 3: Ask User to Restart Claude Code

You should ask the user to restart Claude Code so the config changes take effect.
Personal Site Search

Copy the content below and paste it into Claude Code. It will set up the MCP connection and skill for you.

Step 1: Install or update Exa MCP

If Exa MCP already exists in your MCP configuration, either uninstall it first and install the new one, or update your existing MCP config with this endpoint. Run this command in your terminal:

claude mcp add --transport http exa "https://mcp.exa.ai/mcp?tools=web_search_advanced_exa"


Step 2: Add this Claude skill

---
name: web-search-advanced-personal-site
description: Search personal websites and blogs using Exa advanced search. Full filter support for finding individual perspectives, portfolios, and personal blogs. Use when searching for personal sites, blog posts, or portfolio websites.
context: fork
---

# Web Search Advanced - Personal Site Category

## Tool Restriction (Critical)

ONLY use `web_search_advanced_exa` with `category: "personal site"`. Do NOT use other categories or tools.

## Full Filter Support

The `personal site` category supports ALL available parameters:

### Core
- `query` (required)
- `numResults`
- `type` ("auto", "fast", "deep", "instant")

### Domain filtering
- `includeDomains`
- `excludeDomains` (e.g., exclude Medium if you want independent blogs)

### Date filtering (ISO 8601)
- `startPublishedDate` / `endPublishedDate`
- `startCrawlDate` / `endCrawlDate`

### Text filtering
- `includeText` (must contain ALL)
- `excludeText` (exclude if ANY match)

**Array size restriction:** `includeText` and `excludeText` only support **single-item arrays**. Multi-item arrays (2+ items) cause 400 errors. To match multiple terms, put them in the `query` string or run separate searches.

### Content extraction
- `textMaxCharacters` / `contextMaxCharacters`
- `enableSummary` / `summaryQuery`
- `enableHighlights` / `highlightsNumSentences` / `highlightsPerUrl` / `highlightsQuery`

### Additional
- `additionalQueries`
- `maxAgeHours` / `livecrawlTimeout`
- `subpages` / `subpageTarget` - useful for exploring portfolio sites
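A portfolio crawl using subpages might look like the sketch below. The parameter values are illustrative, and whether `subpageTarget` takes a string or an array may depend on the API version:

```
web_search_advanced_exa {
  "query": "designer portfolio generative art",
  "category": "personal site",
  "subpages": 3,
  "subpageTarget": "projects",
  "numResults": 10,
  "type": "auto"
}
```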

## Token Isolation (Critical)

Never run Exa searches in main context. Always spawn Task agents:
- Agent calls `web_search_advanced_exa` with `category: "personal site"`
- Agent merges + deduplicates results before presenting
- Agent returns distilled output (brief markdown or compact JSON)
- Main context stays clean regardless of search volume

## When to Use

Use this category when you need:
- Individual expert opinions and experiences
- Personal blog posts on technical topics
- Portfolio websites
- Independent analysis (not corporate content)
- Deep dives and tutorials from practitioners

## Examples

Technical blog posts:
```
web_search_advanced_exa {
  "query": "building production LLM applications lessons learned",
  "category": "personal site",
  "numResults": 15,
  "type": "deep",
  "enableSummary": true
}
```

Recent posts on a topic:
```
web_search_advanced_exa {
  "query": "Rust async runtime comparison",
  "category": "personal site",
  "startPublishedDate": "2025-01-01",
  "numResults": 10,
  "type": "auto"
}
```

Exclude aggregators:
```
web_search_advanced_exa {
  "query": "startup founder lessons",
  "category": "personal site",
  "excludeDomains": ["medium.com", "substack.com"],
  "numResults": 15,
  "type": "auto"
}
```

## Output Format

Return:
1) Results (title, author/site name, date, key insights)
2) Sources (URLs)
3) Notes (author expertise, potential biases, depth of coverage)


Step 3: Ask User to Restart Claude Code

You should ask the user to restart Claude Code so the config changes take effect.

Built with ❤️ by Exa