Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
BlockRunAI

blockrun-mcp

Quality
9.0

BlockRun MCP integrates diverse real-time data sources and advanced AI models directly into Claude, enabling it to answer complex, up-to-date questions about markets, research, and social media, as well as generate multimedia content. It is ideal for developers who need on-demand access to premium data and AI capabilities without managing multiple API keys or subscriptions.

USP

BlockRun offers a single, pay-per-call wallet for over 55 LLMs, real-time market data, neural search, and multimedia generation, eliminating subscriptions, API keys, and complex billing. It simplifies access to premium AI tools and data vi…

Use cases

  • Answering current Polymarket probabilities
  • Finding recent academic papers
  • Analyzing X/Twitter sentiment
  • Generating images with specific text
  • Querying live DEX prices

Detected files (4)

  • skills/exa-research/SKILL.md (skill, 5075 bytes)
    ---
    name: exa-research
    description: Use when researching products, finding academic papers, discovering competitors, reading webpage content, or getting cited answers grounded in real web sources. Use over generic search when semantic relevance matters.
    triggers:
      - "research"
      - "web research"
      - "find papers"
      - "academic papers"
      - "competitor discovery"
      - "find similar sites"
      - "exa search"
      - "cited answer"
      - "scrape webpage"
      - "neural search"
      - "semantic search"
      - "look up sources"
    ---
    
    # Exa Research
    
    Neural web search via BlockRun. Understands meaning, not keywords. Four distinct actions for different research modes.
    
    ## Quick Decision Table
    
    | User wants... | Action | Cost |
    |--------------|--------|------|
    | Relevant URLs on a topic | `search` | $0.01/call |
    | Cited answer to a question | `answer` | $0.01/call |
    | Full text of URLs | `contents` | $0.002/URL |
    | Pages like a given URL | `similar` | $0.01/call |
    | Recent news | `search` + `category="news"` | $0.01/call |
    | Academic papers | `search` + `category="research paper"` | $0.01/call |
    | Company info | `search` + `category="company"` | $0.01/call |
    
    ## Instructions
    
    ### 1. Initialize (Python SDK)
    
    ```python
    from pathlib import Path

    chain_file = Path.home() / ".blockrun" / ".chain"
    chain = chain_file.read_text().strip() if chain_file.exists() else "base"

    if chain == "solana":
        from blockrun_llm import setup_agent_solana_wallet
        client = setup_agent_solana_wallet()
    else:
        from blockrun_llm import setup_agent_wallet
        client = setup_agent_wallet()
    ```
    
    ### 2. Search — Find Relevant URLs
    
    ```python
    # Basic search
    result = client._request_with_payment_raw("/v1/exa/search", {
        "query": "AI agent frameworks 2025",
        "numResults": 10,
    })
    for r in result.get("results", []):
        print(f"{r['title']} — {r['url']}")
    
    # Filter by category
    result = client._request_with_payment_raw("/v1/exa/search", {
        "query": "transformer architecture improvements",
        "numResults": 10,
        "category": "research paper",
    })
    
    # Restrict to specific domains
    result = client._request_with_payment_raw("/v1/exa/search", {
        "query": "prediction market regulation",
        "numResults": 10,
        "includeDomains": ["reuters.com", "bloomberg.com", "wsj.com"],
    })
    ```
    
    **Categories:** `"news"`, `"research paper"`, `"company"`, `"tweet"`, `"github"`, `"pdf"`
    
    ### 3. Answer — Cited, Grounded Response
    
    Use when the user asks a factual question and needs reliable sources (not Claude's training data).
    
    ```python
    result = client._request_with_payment_raw("/v1/exa/answer", {
        "query": "What is the current market cap of Polymarket?",
    })
    print(result.get("answer", ""))
    for c in result.get("citations", []):
        print(f"  [{c.get('title')}] {c.get('url')}")
    ```
    
    ### 4. Contents — Fetch URL Text
    
    Use when you have URLs and need their full text for LLM context (scraping without a browser).
    
    ```python
    urls = [
        "https://example.com/article-1",
        "https://example.com/article-2",
    ]
    result = client._request_with_payment_raw("/v1/exa/contents", {
        "urls": urls,
    })
    for item in result.get("results", []):
        print(f"=== {item['url']} ===")
        print(item.get("text", "")[:500])
    ```
    
    Up to 100 URLs per call. Returns Markdown-ready text.
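
Because of the 100-URL cap, a longer URL list has to be split into batches client-side. A minimal sketch (the `chunk` helper is ours, not part of the SDK):

```python
def chunk(items, size=100):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Hypothetical usage with the client from step 1:
# for batch in chunk(urls):
#     result = client._request_with_payment_raw("/v1/exa/contents", {"urls": batch})
```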
    
    ### 5. Similar — Find Related Pages
    
    Use to discover competitors, related research, or sites with similar content.
    
    ```python
    result = client._request_with_payment_raw("/v1/exa/find-similar", {
        "url": "https://polymarket.com",
        "numResults": 10,
    })
    for r in result.get("results", []):
        print(f"{r['title']} — {r['url']}")
    ```
    
    ## Common Research Workflows
    
    **Competitor discovery:**
    ```python
    # 1. Find similar companies
    similar = client._request_with_payment_raw("/v1/exa/find-similar", {"url": "https://target-company.com", "numResults": 15})
    urls = [r["url"] for r in similar.get("results", [])]
    
    # 2. Fetch their about pages
    contents = client._request_with_payment_raw("/v1/exa/contents", {"urls": urls[:10]})
    ```
    
    **Research synthesis:**
    ```python
    # 1. Find papers
    papers = client._request_with_payment_raw("/v1/exa/search", {
        "query": "your topic",
        "category": "research paper",
        "numResults": 20,
    })
    
    # 2. Get answer with citations
    answer = client._request_with_payment_raw("/v1/exa/answer", {
        "query": "What are the key findings on your topic?",
    })
    ```
    
    ## When to Use Exa vs `client.search()`
    
    | Use `blockrun_exa` / `_request_with_payment_raw` | Use `client.search()` |
    |---------------------------------------------------|----------------------|
    | Finding specific URLs and fetching content | Getting a summarized answer with citations |
    | Semantic similarity search | Web + X/Twitter + news combined |
    | Academic paper discovery | Cheaper per call for simple lookups |
    | Domain-filtered research | Already returns a SearchResult object |
    
    ## Requirements
    
    - BlockRun SDK: `pip install blockrun-llm`
    - USDC wallet funded (see `client.get_balance()`)
    - `_request_with_payment_raw` is the Python SDK entry point for Exa (no dedicated method yet)
    
  • skills/image-prompting/SKILL.md (skill, 13622 bytes)
    ---
    name: image-prompting
    description: Use when generating or editing images via `blockrun_image` — especially with GPT Image 2, DALL-E 3, or Flux for posters, UI mockups, marketing assets, product shots, or anything with on-image text. Turns vague user requests ("make me a cool poster") into structured, text-accurate prompts that actually render what you asked for.
    triggers:
      - "image prompt"
      - "make a poster"
      - "create poster"
      - "ui mockup"
      - "marketing asset"
      - "product shot"
      - "gpt image 2"
      - "image with text"
      - "typography poster"
      - "social asset"
      - "generate marketing image"
      - "ai poster"
    ---
    
    # Image Prompting
    
    Most image failures are prompt failures. This skill gives the MCP agent a repeatable structure for turning any user request into a prompt that renders clean typography, preserves layout on edits, and avoids AI slop. Defaults are tuned for **GPT Image 2** (best legible text), with fallbacks for DALL-E 3, Flux, and Nano Banana.
    
    ## Quick Decision Table
    
    | User wants... | Model | Mode | Size | Cost |
    |---|---|---|---|---|
    | Poster / typography-heavy asset | `openai/gpt-image-2` | generate | `1536x1024` or `1024x1536` | ~$0.04 |
    | Clean product / UI mockup | `openai/gpt-image-2` | generate | `1024x1024` | ~$0.04 |
    | Photoreal / fashion / editorial | `openai/gpt-image-2` or `black-forest/flux-1.1-pro` | generate | `1024x1024`+ | $0.04–0.06 |
    | Artistic / stylized / fast | `google/nano-banana` | generate | `1024x1024` | ~$0.01 |
    | Edit an existing image (localized change) | `openai/gpt-image-2` | edit | match source | ~$0.04 |
    | Composite from multiple refs | `openai/gpt-image-2` | edit (multi-ref) | match target | ~$0.04 |
    
    **Valid GPT Image 2 sizes:** `1024x1024` (square), `1536x1024` (landscape ~3:2), `1024x1536` (portrait ~2:3).
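
Rather than passing arbitrary dimensions, a requested aspect ratio can be snapped to the nearest valid GPT Image 2 size. A small sketch (the helper name is ours):

```python
# The three valid GPT Image 2 sizes: square, landscape ~3:2, portrait ~2:3.
VALID_SIZES = [(1024, 1024), (1536, 1024), (1024, 1536)]

def pick_size(width, height):
    """Return the valid size string whose aspect ratio is closest to width/height."""
    target = width / height
    w, h = min(VALID_SIZES, key=lambda s: abs(s[0] / s[1] - target))
    return f"{w}x{h}"
```

For example, a 16:9 request maps to `1536x1024`, the closest valid landscape size.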
    
    ## The 5-Section Prompt Framework
    
    Write prompts as **five short blocks separated by blank lines.** This is the single biggest quality lever.
    
    ```
    SCENE: where/when/background/environment, one or two lines.
    
    SUBJECT: the main focus (who/what), described concretely.
    
    DETAILS: materials, texture, lighting, camera angle, composition, mood,
    lens feel, depth of field, surface condition. Stack concrete nouns.
    
    USE CASE: editorial photo / product mockup / poster / UI screen / infographic / concept frame.
    (This single line tells the model what kind of image to produce.)
    
    CONSTRAINTS: what must not drift. "No extra text." "No duplicate elements."
    "Preserve face." "Legible typography." Repeat these on every edit.
    ```
    
    > The fifth slot is where most mediocre prompts fail silently. Describe the idea without bounding it and the model gets inventive in directions you will regret.
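
The five blocks can also be assembled programmatically so that none is silently dropped. A minimal sketch (the function name and sample copy are illustrative):

```python
def build_prompt(scene, subject, details, use_case, constraints):
    """Join the five framework sections with blank lines, in the canonical order."""
    sections = [
        f"SCENE: {scene}",
        f"SUBJECT: {subject}",
        f"DETAILS: {details}",
        f"USE CASE: {use_case}",
        "CONSTRAINTS:\n" + "\n".join(f"- {c}" for c in constraints),
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    scene="Empty two-lane highway at sunset",
    subject="Billboard for a bottled water brand",
    details="35mm photo feel, shallow depth of field",
    use_case="Product mockup, landscape 3:2",
    constraints=['Headline (EXACT TEXT): "Fresh and clean."', "No extra words."],
)
```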
    
    ## Text & Typography Rules (the #1 differentiator for GPT Image 2)
    
    1. **Wrap literal text in quotes or ALL CAPS.** `Headline (EXACT TEXT): "Fresh and clean."`
    2. **Specify** font style, weight, size, color, placement, letter-spacing.
    3. **Treat text as layout, not decoration:** hero vs. sub vs. caption with hierarchy + spacing.
    4. **State:** `No extra words. No duplicate text. No watermarks.`
    5. **Spell difficult words letter-by-letter** if the model keeps breaking them.
    6. **Mark each distinct piece of copy** with its role: `HERO:`, `SUB:`, `BOTTOM-LEFT TAG:`, `TOP BANNER:`.
    
    ## Anti-Slop Rules (visual facts > excitement)
    
    | Bad (vague / praise-loaded) | Good (concrete visual fact) |
    |---|---|
    | "stunning, epic, masterpiece" | "overcast daylight, brushed aluminum, 50mm feel" |
    | "minimalist brutalist luxury editorial" | "cream background, heavy black condensed sans-serif, asymmetric type block, one hero object, studio tabletop light" |
    | "it should contain a boarding pass feel" | "a boarding pass lies on the tray, barcode visible, creased corner" |
    | "beautiful lighting" | "incandescent work lamp spilling warm light onto wet concrete" |
    
    **Rules:**
    - **Visual facts over praise.** Replace adjectives like *gorgeous/stunning/incredible* with observable specifics.
    - **Style tags need targets.** Don't just name a style — describe the artifacts that style produces.
    - **Say the real thing.** If the image must contain a boarding pass, say "boarding pass."
    - **Name the lens.** 35mm, 50mm, medium format. Depth of field: shallow vs. deep.
    - **Name the light.** Source + quality + direction + color temperature.
    
    ## Instructions
    
    ### 1. Initialize
    
    ```python
    import os
    from pathlib import Path
    
    chain_file = Path.home() / ".blockrun" / ".chain"
    chain = chain_file.read_text().strip() if chain_file.exists() else "base"
    
    if chain == "solana":
        from blockrun_llm import setup_agent_solana_wallet, ImageClient
        setup_agent_solana_wallet()
    else:
        from blockrun_llm import setup_agent_wallet, ImageClient
        setup_agent_wallet()
    
    image = ImageClient()
    ```
    
    ### 2. Generate from Scratch
    
    ```python
    prompt = """
    SCENE: A realistic roadside billboard at sunset, empty two-lane highway,
    soft gradient sky from peach to lavender, a few utility poles.
    
    SUBJECT: A product billboard for a bottled water brand. Bottle on the right
    third of the frame, catching warm rim light.
    
    DETAILS: 35mm photo feel, shallow depth of field, matte-painted billboard,
    clean kerning, precise print finish.
    
    USE CASE: Product mockup for a marketing deck, landscape 3:2.
    
    CONSTRAINTS:
    - Headline (EXACT TEXT): "Fresh and clean."
    - Bold sans-serif, high contrast, centered vertically in the left half.
    - No extra words. No duplicate text. No watermark.
    """
    
    result = image.generate(
        prompt,
        model="openai/gpt-image-2",
        size="1536x1024",
        n=1,
    )
    print(result.data[0].url)  # URL or data URL
    ```
    
    ### 3. Edit: Change / Preserve / Constraints
    
    **The golden pattern** for iterative editing — one small change per turn. Repeat the preserve list every turn.
    
    ```python
    prompt = """
    CHANGE: Make the light warmer — shift the sunset toward a deeper orange.
             Remove the extra chair on the left.
    
    PRESERVE: Keep the bottle position, label text, and billboard layout exactly
              as in the source image. Keep the headline text verbatim.
    
    CONSTRAINTS: No extra text. No duplicate elements. Same aspect ratio.
    """
    
    # `image` arg accepts a public URL OR a data URL (data:image/png;base64,...)
    result = image.edit(
        prompt,
        image="https://example.com/source.png",
        model="openai/gpt-image-2",
        size="1536x1024",
    )
    print(result.data[0].url)
    ```
    
    **Why this works:** small atomic edits compound reliably. Giant rewrites ("redo this but nicer") drift everything.
    
    ### 4. Multi-Reference Composition
    
    Pass multiple reference images (up to ~16) via the `edit` endpoint. Label each reference's role in the prompt so the model knows how to use it.
    
    ```python
    prompt = """
    COMPOSITE: Combine the three reference images as follows.
    - REF 1 is the SUBJECT (the wristwatch): preserve exact dial, hands, and crown.
    - REF 2 is the ENVIRONMENT (marble tabletop + window light): use as background.
    - REF 3 is the STYLE REFERENCE: match its color grade and contrast.
    
    USE CASE: E-commerce hero shot, square.
    
    CONSTRAINTS: No extra objects. No text. Preserve watch proportions exactly.
    """
    
    # SDK: pass primary via `image=`; additional refs via a multipart request
    # (check the MCP's `blockrun_image` tool for the multi-image payload shape)
    ```
    
    ## Worked Example: "make me a cool poster announcing 100 trillion tokens on blockrun.ai"
    
    This is the real prompt that produced the image below — a vague, one-line user ask turned into a structured prompt that rendered every copy line correctly on the first shot.
    
    ![100 Trillion Tokens poster generated with openai/gpt-image-2](./example-100t-poster.jpg)
    
    **User asked for:** *"generate 1 cool poster showing we hit 100 Trillion Token LLM consumption on blockrun.ai"*
    
    **Clarifying questions worth asking before prompting:**
    
    - Audience → *social media flex for X/Twitter*
    - Aesthetic → *retro-futuristic synthwave*
    - Hero text → *"100 TRILLION TOKENS"*
    - Supporting copy → *"The world's largest pay-per-call LLM gateway" · "Served on blockrun.ai" · "Powered by x402 micropayments" · "Now with Seedance + GPT Image 2"*
    - Aspect ratio → *16:9 landscape for X timeline*
    
    **Final prompt passed to `openai/gpt-image-2` at `size="1536x1024"`:**
    
    ```
    SCENE: A retro-futuristic synthwave scene, 80s vaporwave aesthetic, cinematic
    16:9 composition. Deep purple-to-magenta sunset sky with a giant glowing
    pink-and-orange setting sun cut by thin horizontal neon lines. Palm tree
    silhouettes on both sides. Faint city skyline in the distance. An infinite
    chrome grid floor vanishing at the horizon with pink and cyan perspective lines.
    
    SUBJECT: A milestone announcement poster with the hero text
    "100 TRILLION TOKENS" dominating the center of the frame.
    
    DETAILS: Hero text in huge glossy chrome letters with a pink-to-cyan gradient
    and neon rim light, bold condensed sans-serif, CRT glow, slight scanline
    texture across the letters. Faint CRT scanlines overlay the entire frame.
    Subtle film grain. Chromatic aberration on edges. High contrast, symmetrical,
    cinematic poster composition.
    
    USE CASE: Social media announcement poster for X/Twitter, 16:9 landscape.
    
    CONSTRAINTS:
    - HERO (EXACT TEXT, centered): "100 TRILLION TOKENS"
    - SUB under the hero (clean neon cyan, wide letter-spacing, EXACT TEXT):
      "The world's largest pay-per-call LLM gateway"
    - TOP-CENTER BANNER inside a thin neon outline (EXACT TEXT):
      "NOW WITH SEEDANCE + GPT IMAGE 2"
    - BOTTOM-LEFT TAG (monospace, magenta, EXACT TEXT):
      "> Served on blockrun.ai"
    - BOTTOM-RIGHT TAG (monospace, magenta, EXACT TEXT):
      "> Powered by x402 micropayments"
    - Legible crisp typography. No extra words. No duplicate text. No watermark.
    ```
    
    **Why this worked:**
    
    1. **Every piece of copy got a role label** (HERO / SUB / BANNER / TAG) with position and style — no guessing.
    2. **Every text string was marked `(EXACT TEXT)`** and wrapped in quotes.
    3. **Concrete visual facts** (CRT scanlines, chrome gradient, palm silhouettes) replaced vague words like "cool" and "awesome."
    4. **`No duplicate text. No extra words.`** was in the CONSTRAINTS block — GPT Image 2 loves to duplicate headlines if you don't forbid it.
    5. **Aspect ratio** chosen from the valid GPT Image 2 set (`1536x1024`) rather than asking for "16:9" and hoping.
    
    ## Common Workflows
    
    ### Poster / social asset
    
    Use `1536x1024` for X/Twitter, `1024x1024` for IG grid, `1024x1536` for IG story.
    Put every copy element on its own labeled line in the CONSTRAINTS block. Always include
    `No duplicate text. No extra words.`
    
    ### UI screen mockup
    
    State the device, status bar, app name, screen title, each visible element with position
    and state (checked, active, disabled), palette (hex-ish is fine: "deep navy accent"),
    typography scale, corner radius, spacing feel.
    
    ### Product shot
    
    Surface + light source + lens + depth of field + one hero object. Name the material
    of everything in frame. Avoid "beautiful" — describe what you'd see.
    
    ### Concept/brand exploration
    
    Generate 3–4 variants with the same 5-section skeleton but swap only the DETAILS
    block (palette, lens, material). Keep SCENE/SUBJECT/USE CASE/CONSTRAINTS identical
    so you're actually comparing one variable.
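
This one-variable sweep is easy to script: hold four blocks fixed and iterate only over DETAILS. A sketch under stated assumptions (the template, variants, and the commented `image.generate` call mirror the client from step 1; all names here are ours):

```python
BASE = """SCENE: Studio tabletop, seamless cream background.

SUBJECT: A single ceramic espresso cup.

DETAILS: {details}

USE CASE: Concept frame for brand exploration, square.

CONSTRAINTS: One hero object. No text. No watermark."""

DETAILS_VARIANTS = [
    "Matte glaze, soft north-window light, 50mm feel, shallow depth of field",
    "Gloss glaze, hard single spotlight, deep shadows, 35mm feel",
    "Speckled stoneware, warm tungsten light, medium format feel, deep focus",
]

# Only DETAILS changes between variants, so any visual difference is attributable.
prompts = [BASE.format(details=d) for d in DETAILS_VARIANTS]
# for p in prompts:
#     image.generate(p, model="openai/gpt-image-2", size="1024x1024")
```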
    
    ## Prompt Template (copy-paste and fill in)
    
    ```
    SCENE: <where/when/background>
    
    SUBJECT: <main focus, concrete nouns>
    
    DETAILS: <materials, texture, lighting (source + quality + color),
             camera angle, lens feel, depth of field, composition, mood>
    
    USE CASE: <poster | UI screen | product shot | editorial photo | concept frame>
    
    CONSTRAINTS:
    - HERO (EXACT TEXT): "<verbatim copy>"
    - SUB: "<verbatim copy>"
    - <other copy with role labels>
    - <typography rules: font style, weight, spacing, hierarchy>
    - No extra words. No duplicate text. No watermark.
    - <anything that must not drift: face, layout, aspect ratio>
    ```
    
    ## Three Operating Modes (at a glance)
    
    | Mode | When | Endpoint | Pattern |
    |---|---|---|---|
    | Generate | From scratch | `/v1/images/generations` | 5-section framework |
    | Edit | One image, localized change | `/v1/images/image2image` | Change / Preserve / Constraints |
    | Combine | Multi-image composition | `/v1/images/image2image` (multi-ref) | Labeled refs (SUBJECT / ENV / STYLE) |
    
    ## Notes & Gotchas
    
    - **GPT Image 2 is currently the best for legible on-image text.** For artistic prompts with no copy, Nano Banana is 4× cheaper and often prettier.
    - **Response shape:** `result.data[0].url` is either an HTTPS URL or a `data:image/...;base64,...` string. Save via `urllib.request.urlretrieve` for URLs or `base64.b64decode(item.b64_json)` for b64 payloads.
    - **Aspect ratio:** only the three sizes above are valid for GPT Image 2. If the user asks for 16:9, use `1536x1024` — it reads as landscape on X/Twitter without cropping.
    - **Iterative edits drift.** Repeat the PRESERVE list every turn, even when it feels redundant.
    - **If text keeps breaking:** shorten it, spell difficult words, and move it to a dedicated `CONSTRAINTS` line marked `(EXACT TEXT)`.
    - **Never use** words like *amazing, stunning, masterpiece, ultra-detailed, 8k, trending on artstation* — they waste tokens and pull toward generic AI-slop aesthetics.
    
    ## Requirements
    
    - BlockRun SDK: `pip install blockrun-llm`
    - USDC wallet funded (`ImageClient().get_wallet_address()`; `setup_agent_wallet().get_balance()`)
    - For `edit` / multi-ref: source images must be reachable by a public URL or passed as a data URL
    
    ## Reference
    
    BlockRun image models: `openai/gpt-image-2`, `openai/gpt-image-1`, `openai/dall-e-3`, `google/nano-banana`, `google/nano-banana-pro`, `black-forest/flux-1.1-pro`, `zai/cogview-4`, `xai/grok-imagine-image`, `xai/grok-imagine-image-pro`
    
  • skills/prediction-markets/SKILL.md (skill, 5494 bytes)
    ---
    name: prediction-markets
    description: Use when user asks about event probabilities, prediction market odds, what people are betting on, Polymarket or Kalshi prices, or wants to find markets on a specific topic (elections, crypto, sports, macro events).
    triggers:
      - "polymarket"
      - "kalshi"
      - "dflow"
      - "prediction market"
      - "event probability"
      - "betting odds"
      - "what are people betting on"
      - "election odds"
      - "crypto market odds"
      - "binance futures"
      - "yes/no market"
      - "implied probability"
    ---
    
    # Prediction Markets
    
    Real-time prediction market data via BlockRun (powered by Predexon). Covers Polymarket, Kalshi, dFlow, and Binance Futures.
    
    ## Quick Decision Table
    
    | User wants... | Method | Path | Cost |
    |--------------|--------|------|------|
    | Active Polymarket events | `client.pm(path)` | `"polymarket/events"` | $0.001 |
    | Search Polymarket by topic | `client.pm(path, q=...)` | `"polymarket/search"` | $0.001 |
    | Specific Kalshi market | `client.pm(path)` | `"kalshi/markets/TICKER"` | $0.001 |
    | Complex/filtered query | `client.pm_query(path, body)` | `"polymarket/query"` | $0.005 |
    | dFlow markets | `client.pm(path)` | `"dflow/..."` | $0.001 |
    | Binance Futures | `client.pm(path)` | `"binance/..."` | $0.001 |
    
    ## Instructions
    
    ### 1. Initialize
    
    ```python
    import os
    from pathlib import Path
    
    chain_file = Path.home() / ".blockrun" / ".chain"
    chain = chain_file.read_text().strip() if chain_file.exists() else "base"
    
    if chain == "solana":
        from blockrun_llm import setup_agent_solana_wallet
        client = setup_agent_solana_wallet()
    else:
        from blockrun_llm import setup_agent_wallet
        client = setup_agent_wallet()
    ```
    
    ### 2. List Active Events
    
    ```python
    # All active Polymarket events
    events = client.pm("polymarket/events")
    # Response may be a dict wrapping "data" or a bare list — normalize first
    rows = events.get("data", []) if isinstance(events, dict) else events
    for event in rows[:10]:
        print(f"{event.get('title', '?')} — {event.get('slug', '')}")
    ```
    
    ### 3. Search by Topic
    
    ```python
    # Search Polymarket for a topic
    results = client.pm("polymarket/search", q="bitcoin ETF")
    rows = results.get("data", []) if isinstance(results, dict) else results
    for market in rows[:10]:
        print(market.get("title", "?"))
        # Outcome prices are in the market object
        for outcome in market.get("outcomes", []):
            print(f"  {outcome.get('title', '?')}: {outcome.get('price', '?')}")
    ```
    
    ### 4. Specific Kalshi Market
    
    ```python
    # Get a specific Kalshi market by ticker
    market = client.pm("kalshi/markets/KXBTC-25MAR14")
    print(f"Market: {market.get('title', market.get('ticker', '?'))}")
    yes_price = market.get("yes_bid", market.get("yes_price", "?"))
    no_price = market.get("no_bid", market.get("no_price", "?"))
    print(f"YES: {yes_price} | NO: {no_price}")
    ```
    
    ### 5. Structured Query (POST)
    
    Use for complex filtering — active markets only, sorted by volume, with pagination.
    
    ```python
    # Polymarket: active markets sorted by volume, limit 20
    data = client.pm_query("polymarket/query", {
        "filter": "active",
        "limit": 20,
        "order": "volume",
    })
    
    # Kalshi: all markets in a specific series
    data = client.pm_query("kalshi/query", {
        "series_ticker": "KXBTC",
        "limit": 50,
    })
    ```
    
    ## Common Workflows
    
    **"What are people betting on in crypto right now?"**
    ```python
    events = client.pm("polymarket/search", q="crypto bitcoin ethereum")
    for e in events.get("data", [])[:5]:
        print(e.get("title", "?"))
        for o in e.get("outcomes", []):
            print(f"  {o.get('title')}: {o.get('price')} (implies {round(float(o.get('price', 0))*100)}%)")
    ```
    
    **"What's the probability of X event?"**
    ```python
    # 1. Search for the event
    results = client.pm("polymarket/search", q="US election 2026")
    
    # 2. Get specific market details
    if results.get("data"):
        market_id = results["data"][0].get("id", results["data"][0].get("slug"))
        detail = client.pm(f"polymarket/events/{market_id}")
        print(detail)
    ```
    
    **"Show me all active Kalshi markets"**
    ```python
    data = client.pm_query("kalshi/query", {"limit": 50, "status": "open"})
    markets = data.get("markets", data.get("data", []))
    for m in markets[:10]:
        print(f"{m.get('ticker')} — {m.get('title')}: YES={m.get('yes_bid')} NO={m.get('no_bid')}")
    ```
    
    ## Data Case Study: Sentiment Signal from Markets
    
    Prediction markets are often better probability estimates than polls or pundit takes. Pattern:
    
    ```python
    import json
    import os
    
    # 1. Find relevant markets
    crypto_markets = client.pm("polymarket/search", q="bitcoin price end of year")
    
    # 2. Extract implied probabilities
    for market in crypto_markets.get("data", [])[:3]:
        print(f"\n{market.get('title', '?')}")
        for outcome in market.get("outcomes", []):
            p = float(outcome.get("price", 0)) * 100
            print(f"  {outcome.get('title')}: {p:.0f}% implied probability")
    
    # 3. Save for later analysis (create the data dir if needed)
    out_dir = os.path.expanduser("~/.blockrun/data")
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, "markets_snapshot.json"), "w") as f:
        json.dump(crypto_markets, f, indent=2, default=str)
    ```
    
    ## Notes on Response Shape
    
    Predexon returns raw API responses. Structure varies by exchange:
    - **Polymarket**: usually `{ "data": [...] }` or `{ "events": [...] }`
    - **Kalshi**: usually `{ "markets": [...] }` with `ticker`, `yes_bid`, `no_bid` fields
    - **Print raw response first** when exploring a new path: `print(json.dumps(result, indent=2)[:1000])`
    
    ## Requirements
    
    - BlockRun SDK: `pip install blockrun-llm`
    - USDC wallet funded (`client.get_balance()`)
    - Kalshi tickers: format is `KXBTC-25MAR14` (series + expiry date)
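
The ticker convention above can be checked before spending a paid call. A hypothetical validator based only on the `SERIES-YYMONDD` shape stated here (the regex is ours, not an official Kalshi grammar):

```python
import re

# Matches e.g. "KXBTC-25MAR14": alphanumeric series, then 2-digit year,
# 3-letter month, 2-digit day. Longer multi-segment tickers are not covered.
TICKER_RE = re.compile(r"^(?P<series>[A-Z0-9]+)-(?P<expiry>\d{2}[A-Z]{3}\d{2})$")

def parse_ticker(ticker):
    """Split a Kalshi-style ticker into (series, expiry) or raise ValueError."""
    m = TICKER_RE.match(ticker)
    if not m:
        raise ValueError(f"unrecognized ticker: {ticker!r}")
    return m.group("series"), m.group("expiry")
```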
    
  • .claude-plugin/marketplace.json (marketplace, 683 bytes)
    {
      "name": "blockrun-mcp",
      "owner": {
        "name": "BlockRun",
        "email": "hello@blockrun.ai"
      },
      "plugins": [
        {
          "name": "exa-research",
          "source": "./skills/exa-research",
          "description": "Use when researching products, finding academic papers, discovering competitors, reading webpage content, or getting cited answers grounded in real web sources."
        },
        {
          "name": "prediction-markets",
          "source": "./skills/prediction-markets",
          "description": "Use when user asks about event probabilities, prediction market odds, what people are betting on, Polymarket or Kalshi prices, or wants to find markets on a specific topic."
        }
      ]
    }
    

README

BlockRun MCP

npm · License: MIT · MCP

Real-time data for Claude — markets, research, X/Twitter, crypto. No API keys. Pay per call.

claude mcp add blockrun -s user -- npx -y @blockrun/mcp@latest

Wallet auto-created. Fund with $5 USDC. Ask Claude anything.


What changes

Before BlockRun, Claude can't answer:

  • "What's the current Polymarket probability that Bitcoin hits $100k this year?"
  • "Find me the top 5 papers on RAG published in the last 30 days"
  • "What are people saying about @sama on X right now?"
  • "What's the 24h volume on the PEPE/ETH pair on Uniswap?"

After BlockRun, it can. Each query costs fractions of a cent, billed from a local USDC wallet. No subscriptions. No API dashboards. No key rotation.


Demo

User: What's Polymarket saying about the Fed rate decision next month?

Claude: According to Polymarket, the market currently puts a 73% probability on the Fed holding rates steady at the May meeting, with 24% on a 25bp cut and 3% on a hike. Volume over the last 24h: $2.1M. The "Hold" contract last traded at $0.73.

(via blockrun_markets · cost: $0.001)


User: Find recent papers on speculative decoding and summarize the key approaches

Claude: I found 8 papers from the last 60 days via neural search. The dominant approaches are: (1) draft model selection — smaller models predicting N tokens ahead for the larger model to verify in parallel...

(via blockrun_exa · cost: $0.01)


Showcase

Posters generated through blockrun_image with openai/gpt-image-2. Each is a single API call routed through BlockRun, paid in USDC on Base.

Latest — GPT-5.5 now live on BlockRun

gpt-5.5 — now live on BlockRun. Pay per call. No subscription. No keys.

Gallery

  • Thank you, Cornell — BlockRun at the Cornell Blockchain Conference 2026 (packed booth recap)
  • Cornell Blockchain Conference 2026 — quiet variant
  • 100 Trillion Tokens served — milestone synthwave poster

Prompts and a worked example for these are in skills/image-prompting/SKILL.md.


Install

Claude Code (recommended)

claude mcp add blockrun -s user -- npx -y @blockrun/mcp@latest

The `-s user` flag installs globally (available in every project). The `--` separator ensures `-y` is passed to npx, not parsed by `claude mcp add`.

Claude Desktop — add to claude_desktop_config.json:

{
  "mcpServers": {
    "blockrun": {
      "command": "npx",
      "args": ["-y", "@blockrun/mcp"]
    }
  }
}

Hosted (no install, always latest)

claude mcp add blockrun -s user --transport http https://mcp.blockrun.ai/mcp

Fund your wallet

Run blockrun_wallet to see your address. Send USDC on Base.

| Method | Steps |
|---|---|
| Coinbase | Send → USDC → Base network → paste address |
| Bridge from Ethereum | bridge.base.org |

$5 covers ~5,000 market queries, ~500 Exa searches, ~250 image generations, or ~30 Seedance 1.5-pro clips (5s).
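
These estimates follow directly from the per-call prices in the tool table below; a quick sanity check (the ~$0.02 image figure is an assumed midpoint of the $0.015–0.12 range, not a quoted price):

```python
BUDGET = 5.00  # USDC

def calls(price_per_call):
    """Number of calls a budget covers at a given per-call price."""
    return round(BUDGET / price_per_call)

# $0.001 market queries -> 5000 calls; $0.01 Exa searches -> 500;
# images at an assumed ~$0.02 each -> 250.
```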


Tools

| Tool | Data source | Cost |
|---|---|---|
| blockrun_chat | 55+ LLMs (GPT, Claude, Gemini, DeepSeek, Kimi K2.6, GLM, NVIDIA free tier, ...) with mode tier routing | per token |
| blockrun_image | DALL-E 3, GPT Image 1/2, Grok Imagine, Flux, CogView-4, Nano Banana — generation + editing | $0.015–0.12 |
| blockrun_video | xAI Grok Imagine Video + ByteDance Seedance 1.5/2.0/2.0-fast | $0.03–0.30/sec |
| blockrun_music | MiniMax music generation | per track |
| blockrun_price | Pyth-backed realtime + OHLC — crypto / FX / commodity (free), 12 stock markets (paid) | free or $0.001/call |
| blockrun_markets | Polymarket (markets, candles, trades, orderbooks, leaderboards, smart-wallet PnL/clusters, UMA oracle), Kalshi, Limitless, Opinion, Predict.Fun, dFlow, Binance Futures, cross-platform match + search | $0.001–0.005/query |
| blockrun_x | X/Twitter — profiles, tweets, followers, mentions, search (AttentionVC) | per call |
| blockrun_exa | Neural web search (Exa) — research, competitors, papers, URL content | $0.01/query |
| blockrun_search | Grok Live Search — web + news with citations | ~$0.025 per source |
| blockrun_dex | Live DEX prices via DexScreener | free |
| blockrun_models | Live catalogue of every LLM/image/video/music model + pricing | free |
| blockrun_wallet | Balance, spending, agent budgets, setup QR | free |

Why not just use the APIs directly?

| | Direct APIs | BlockRun |
|---|---|---|
| Exa | Sign up, $20/mo minimum | $0.01/call, no subscription |
| Polymarket | Undocumented, rate-limited | $0.001/call, clean JSON |
| Twitter/X API | $100–$5000/month | $0.03/page, no approval |
| Multiple sources | 4 accounts, 4 API keys, 4 billing pages | 1 wallet |

One wallet. All sources. No dashboards.


Multi-agent budget delegation

Delegate a spending budget to a child agent with agent_id. The child is auto-blocked when the budget runs out — useful for autonomous agents that shouldn't run up unbounded costs.
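
The enforcement itself happens server-side; as a local illustration of the same idea (entirely our sketch, not the SDK's API), a parent process could guard a child's spend like this:

```python
class BudgetGuard:
    """Track a child agent's spend locally and block once the budget is exhausted."""

    def __init__(self, agent_id, budget_usdc):
        self.agent_id = agent_id
        self.budget = budget_usdc
        self.spent = 0.0

    def charge(self, cost):
        """Record a call's cost; raise once the budget would be exceeded."""
        if self.spent + cost > self.budget:
            raise RuntimeError(f"agent {self.agent_id} blocked: budget exhausted")
        self.spent += cost
        return self.budget - self.spent  # remaining budget

# Hypothetical prices: one Exa search ($0.01), one market query ($0.001)
guard = BudgetGuard("research-child", budget_usdc=0.05)
guard.charge(0.01)
guard.charge(0.001)
```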


How it works

Pay-per-call via x402 micropayments in USDC on Base. Your wallet lives at ~/.blockrun/.session. Private key never leaves your machine.


blockrun.ai · npm · @BlockRunAI