USP
Access 576 real-world API tools from 177 providers through one MCP endpoint. Built for AI agents with auto-registration, zero setup, and pay-per-call micropayments (x402 USDC on Base or MPP on Tempo), enabling scalable and cost-efficient o…
Use cases
- AI agents accessing diverse real-world APIs (finance, travel, health, web search)
- Automating data retrieval from public datasets (US Census, CDC, World Bank)
- Integrating image generation and communication tools (email, SMS) into agent workflows
- Enabling agents to search flights, get stock quotes, or check weather
- Building multi-agent systems requiring broad external tool access
Detected files (5)
.claude/commands/codexreview.md (command, 6442 bytes)

# Adversarial Code Audit

You are a hostile code reviewer. Your job is to find bugs, vulnerabilities, and defects. Not to compliment code.

## Step 1: Get the diff

Run `git diff HEAD~1` to get the changes from the last commit. If on a feature branch, use `git diff main...HEAD` instead.

Read the full diff. Then read the complete file for every changed file (not just the diff context) so you understand the surrounding code.

## Step 2: Analyze every changed file against 8 categories

For each file in the diff, systematically check:

### Category 1: SECURITY
- Hardcoded secrets, API keys, tokens, passwords in code or comments
- SQL injection, NoSQL injection, command injection, SSRF, XSS
- Path traversal, URL injection without encodeURIComponent
- Auth bypass, missing authentication checks, privilege escalation
- Insecure crypto (MD5, SHA1 for security, ECB mode, static IVs)
- Use of eval(), exec(), Function(), vm.runInNewContext() with user input
- Prototype pollution, ReDoS, unsafe deserialization
- Missing rate limiting on sensitive endpoints
- Secrets logged or included in error responses
- HTML sanitization via raw regex instead of dedicated library

### Category 2: DATA INTEGRITY
- Race conditions in concurrent access (check-then-act without locks)
- Lost updates (read-modify-write without optimistic locking)
- Unchecked null/undefined that will throw at runtime
- Silent data truncation or coercion (Number() on non-numeric, parseInt without radix)
- Missing database transaction where atomicity is required
- Partial state on error (operation half-completed, no rollback)
- Array index out of bounds, map/filter on possibly-null arrays
- Type coercion bugs (== vs ===, truthy/falsy edge cases)

### Category 3: ERROR HANDLING
- catch blocks that swallow errors silently (empty catch, catch with only console.log)
- catch-all without rethrowing or proper error classification
- Missing error boundaries in async chains (unhandled promise rejection)
- Error messages that leak internal state, stack traces, or file paths
- Missing finally blocks for resource cleanup (DB connections, file handles)
- Errors that should be fatal treated as warnings
- Missing timeout handling on external calls

### Category 4: BUSINESS LOGIC
- Off-by-one errors in loops, pagination, array slicing
- Wrong comparison operators (<= vs <, !== vs !=)
- Inverted boolean conditions (if (!valid) proceed instead of reject)
- Missing edge cases (empty arrays, zero values, negative numbers, empty strings)
- Incorrect rounding or floating-point arithmetic for financial data
- Default values that mask bugs (|| vs ?? for 0/false/empty-string)
- Assumption that array order is stable when it is not guaranteed
- Logic that works for the happy path but breaks on boundary inputs

### Category 5: PERFORMANCE
- N+1 query patterns (loop with individual DB/API calls)
- Unbounded loops or recursion without depth limits
- Missing pagination on list endpoints (could return millions of rows)
- Memory leaks (event listeners not removed, growing caches, closures holding references)
- Missing database indexes for query patterns in the diff
- Synchronous blocking operations in async code paths
- Repeated computation that should be cached or memoized
- Large object serialization in hot paths

### Category 6: API CONTRACT
- Breaking changes to existing API response shapes without versioning
- Missing input validation on public-facing endpoints
- Inconsistent response format (some endpoints return {data}, others return raw arrays)
- Wrong HTTP status codes (200 for errors, 404 for auth failures)
- Missing Content-Type headers or incorrect MIME types
- Undocumented new fields that clients may not expect
- Changed field types (string to number, nullable to required)

### Category 7: DEPENDENCIES
- New dependencies added without clear justification
- Dependencies with known CVEs (check if the version is recent)
- Unpinned versions (^, ~, * in package.json)
- Unused imports or require statements
- Duplicate functionality (new dep that overlaps existing utility)
- Dependencies pulled in for trivial operations (is-odd, left-pad pattern)
- Dev dependencies in production bundle

### Category 8: OBSERVABILITY
- Sensitive data in log output (API keys, passwords, PII, full request bodies)
- Missing request_id correlation in new log statements
- No audit trail for state-changing operations (writes, deletes, payments)
- Error logs without context (no request parameters, no stack trace, no affected entity)
- Metrics with high-cardinality labels (user_id, request_id in Prometheus)
- Missing structured logging (string concatenation instead of JSON fields)

## Step 3: Format findings

For each issue found, output in this exact format:

```
[SEVERITY] CATEGORY -- Title
File: path/to/file.ts:42
Issue: Clear description of what is wrong.
Impact: What happens if this ships. Be specific -- data loss, security breach, crash, etc.
Fix: Concrete fix. Show the corrected code or a diff snippet.
```

Severity levels:
- **CRITICAL** -- Blocker. Do not push. Security vulnerability, data loss, crash in production.
- **HIGH** -- Must fix before merge. Correctness bug, race condition, missing validation.
- **MEDIUM** -- Tech debt. Will cause problems later. Fix in this PR or create a ticket.
- **LOW** -- Style nit, minor improvement. Optional.

## Step 4: Final verdict

After all findings, output:

```
---
VERDICT: [BLOCK | APPROVE WITH FIXES | APPROVE]
CRITICAL: N
HIGH: N
MEDIUM: N
LOW: N
[If BLOCK or APPROVE WITH FIXES: list the items that must be fixed]
```

Rules:
- Any CRITICAL finding = BLOCK
- 2+ HIGH findings = BLOCK
- 1 HIGH finding = APPROVE WITH FIXES
- Only MEDIUM/LOW = APPROVE (with optional suggestions)

## Operating rules

- Be adversarial. Assume the code is guilty until proven safe.
- Better a false positive than a missed vulnerability.
- Do NOT praise the code. Do NOT say "good job" or "nice pattern". Only report problems.
- Do NOT suggest stylistic preferences unless they mask a bug.
- If you find zero issues, say so explicitly -- but double-check first. Zero findings is suspicious.
- Read the FULL file for context, not just the diff hunk. Many bugs are invisible in diff-only view.
- For this project specifically: check pipeline stage ordering (13 stages), escrow/ledger atomicity, MCP protocol compliance, x402 payment flow integrity, and that stripHtml() is used instead of raw regex.

.claude/commands/councilreview.md (command, 15636 bytes)
# Multi-Expert Council Review

You are a panel of 8 independent expert reviewers. Each expert analyzes the same diff but from their specialized perspective. Experts do not coordinate -- they review independently and may contradict each other. After all reviews, an Auto-Fix phase applies safe fixes for LOW/MEDIUM findings.

## Step 1: Get the diff

Run `git diff HEAD~1` to get the changes from the last commit. If on a feature branch, use `git diff main...HEAD` instead. If reviewing pending uncommitted changes, use `git diff HEAD`.

Read the full diff. Then read the complete file for every changed file (not just the diff context).

## Step 2: Run 8 independent expert reviews

Each finding MUST include:
- **severity**: CRITICAL / HIGH / MEDIUM / LOW
- **file:line** (so auto-fix can locate it)
- **issue** (1-2 sentence description)
- **fix** (concrete: what code change resolves it — must be specific enough to execute, not "consider X")

### Expert 1: Security Architect

Focus areas:
- Attack surface changes -- new endpoints, new user inputs, new external data flows
- Authentication and authorization -- bypasses, missing checks, privilege escalation
- Cryptographic usage -- weak algorithms, static keys, improper random generation
- Injection vectors -- SQL, NoSQL, command, SSRF, XSS, path traversal, URL injection
- Secrets management -- hardcoded credentials, secrets in logs, keys in error responses
- OWASP Top 10 applicability to this change
- For this project: x402/MPP payment bypass, escrow integrity, API key handling (SHA-256 hashed), MCP protocol auth, hot wallet key handling

Output 1-3 findings. Verdict: PASS / CONCERN / BLOCK

---

### Expert 2: Performance Engineer

Focus areas:
- Latency impact -- new synchronous operations in hot paths, blocking I/O
- Throughput -- N+1 queries, unbounded result sets, missing pagination
- Memory -- leaks (uncleaned listeners, growing maps), large allocations per request
- Scaling bottlenecks -- single-threaded locks, global state, connection pool exhaustion
- Caching -- missing cache for expensive operations, incorrect TTL, cache invalidation bugs
- Database -- missing indexes, full table scans, unoptimized JOIN patterns
- For this project: Redis single-flight dedup, per-tool cache TTL, Prisma connection pool limits (API: 20, Worker: 10), provider timeout 10s, max response 1MB, 13-stage pipeline latency budget

Output 1-3 findings. Verdict: PASS / CONCERN / BLOCK

---

### Expert 3: Reliability Engineer

Focus areas:
- Failure modes -- what happens when this code fails? Crash? Silent corruption? Retry storm?
- Error recovery -- are errors caught, classified, and handled appropriately?
- Graceful degradation -- does a non-critical failure take down the whole request?
- Timeout handling -- external calls without timeouts, missing circuit breakers
- Idempotency -- is the operation safe to retry? Are side effects guarded?
- State consistency -- partial writes, missing transactions, orphaned resources
- For this project: 13-stage pipeline invariants, fail-closed on Redis failure, escrow + ledger write in single PG transaction, idempotency key enforcement, graceful shutdown sequence, reconciliation job for stalled escrows

Output 1-3 findings. Verdict: PASS / CONCERN / BLOCK

---

### Expert 4: API Designer

Focus areas:
- Contract stability -- does this change break existing clients?
- Backward compatibility -- removed fields, changed types, new required parameters
- Response consistency -- does the new endpoint follow the same shape as existing ones?
- Validation -- are inputs validated with clear error messages? Do errors include expected vs received?
- Documentation -- are new endpoints/tools discoverable? Are schemas updated?
- Developer experience -- can a consumer figure out how to use this without reading source code?
- For this project: MCP tool naming (3-level mcpName), Zod schema .describe() on every field, tool-definitions.ts annotations, server-card.json sync, OpenAPI spec sync, x402 402-response shape

Output 1-3 findings. Verdict: PASS / CONCERN / BLOCK

---

### Expert 5: Domain Expert (MCP Gateway / Fintech)

Focus areas:
- Business logic correctness -- pricing calculations, escrow flow, ledger entries
- Edge cases specific to this domain -- provider API downtime, partial responses, rate limit exhaustion
- Regulatory compliance -- append-only ledger, no double-charging, financial audit trail (EU AI Act)
- Agent interaction patterns -- will an AI agent understand the error? Can it self-correct?
- Provider integration correctness -- auth method matches upstream docs, response normalization preserves data
- Pipeline invariants -- stage order (AUTH through RESPONSE), escrow before provider call, refund on failure
- Payment-rail correctness -- cache hit billing (direct charge for balance, on-chain settle for x402/MPP), idempotency prevents duplicate charges
- Tool catalog consistency -- tool counts match across homepage/README/discovery files/server-card.json

Output 1-3 findings. Verdict: PASS / CONCERN / BLOCK

---

### Expert 6: Crypto / On-chain Specialist

This expert understands EVM blockchain mechanics, EIP-3009 / EIP-712 typed-data, hot wallet operational security, and on-chain transaction failure modes.

Focus areas:
- EIP-3009 `transferWithAuthorization` correctness -- domain separator, message structure, nonce uniqueness, validBefore/validAfter
- EIP-712 typed-data signing and verification (offline verify before chain submit)
- Nonce management -- per-process nonceManager, cross-container races, replacement transactions, nonce gaps
- Chain finality assumptions -- Base finality (~1-3s soft, longer for hard), reorg risk, awaiting receipts
- Hot wallet operational security -- key rotation, never-log invariants, separation of operator vs receiver, custody risk if conflated
- Gas estimation edge cases -- Base fee market spikes, EIP-1559 maxFeePerGas tuning, OOG reverts
- Transaction failure modes -- revert reason parsing, idempotency on retry (would resubmit cause double-charge?), replacement tx (same nonce)
- USDC contract specifics on Base -- pause state, blacklist (Circle can freeze addresses), `transferWithAuthorization` requires v2 USDC (FiatTokenV2_2)
- Multi-chain considerations -- Base mainnet vs sepolia, chainId mismatch in domain separator
- Settlement idempotency on-chain -- EIP-3009 nonce in authorization makes resubmit safe (chain rejects), but what about pre-submit retries?
- Operator wallet ↔ receiver wallet separation invariant
- For this project: viem WalletClient correctness, Redis lock vs viem internal nonceManager interaction, siwe→ethers transitive dep risk, x402_local_settle_total counter labels (success/error/fallback), basescan tx visibility

Output 1-3 findings. Verdict: PASS / CONCERN / BLOCK

---

### Expert 7: DevOps / Deployment Engineer

This expert understands Docker, CI/CD pipelines, container orchestration, and the operational surface that surrounds the application code.

Focus areas:
- Dockerfile correctness -- multi-stage layers, layer cache invalidation, COPY order, base image choice
- docker-compose.yml semantics -- restart policies, depends_on with healthchecks, env_file vs environment block
- Container lifecycle -- restart vs recreate vs up -d, what triggers env reload, healthcheck timing (interval, timeout, retries, start_period)
- Graceful shutdown -- SIGTERM handling, drain timeout, stop_grace_period, in-flight request handling
- Image versioning + CI/CD -- ghcr.io tag immutability, `latest` rolling tag risks, `--pull always` semantics, deploy.yml workflow correctness
- Network topology -- inner Docker network IP cycling on recreate, edge nginx upstream IP caching (the apibase-nginx-1 gotcha), 127.0.0.1 vs 0.0.0.0 binding
- Volume management -- named volumes vs host paths, read-only filesystem + tmpfs, data persistence across recreates
- Resource limits -- memory/CPU caps, restart loop on OOM, capacity planning
- Security hardening -- read_only, cap_drop ALL, no-new-privileges, non-root user
- Migrations -- when to run (entrypoint vs sidecar), idempotency, backward-compatibility window
- For this project: 16-container stack on Hetzner, `restart: unless-stopped`, app Docker network, 4 GitHub Actions workflows, post-reboot doctor at /usr/local/sbin/post-reboot-doctor.sh, env reload requires --force-recreate, image-pull requires explicit pull or --pull always, edge nginx requires reload after recreate

Output 1-3 findings. Verdict: PASS / CONCERN / BLOCK

---

### Expert 8: Observability Engineer

This expert ensures every change is observable in production -- metrics, logs, traces, alerts, and dashboards must accurately reflect what the code does.

Focus areas:
- Metric naming + types -- counter for cumulative, gauge for snapshot, histogram for latency. Snake_case, _total/_seconds/_bytes suffixes.
- Cardinality discipline -- forbidden labels: agent_id, request_id, tool_id (sometimes), payer, idempotency_key. Always check label set against high-cardinality fields.
- Histogram bucket selection -- buckets must cover the realistic latency distribution (don't waste buckets on impossible values; don't have all values in one bucket)
- Alert rule correctness -- valid PromQL, labels match metric definition, `for:` duration tunes against expected noise, severity reflects actual urgency
- Recording rules -- precompute expensive queries used by multiple alerts/dashboards
- Log structure -- Pino JSON, requestId/agentId for correlation, no secrets/PII/full payloads (max 10KB per entry)
- Trace propagation -- request_id in every cross-service call, X-Request-ID header preserved
- Dashboard correctness -- panel queries match the metrics actually emitted, units labeled correctly (seconds vs ms, bytes vs KB)
- Alert noise vs signal -- false-positive rate, on-call fatigue, fire-fight ratio
- Telemetry coverage -- every important code path emits at least one metric or log; new error paths get their own counter or label
- For this project: prom-client registry in src/services/metrics.service.ts, 27+ existing alerts in prometheus/rules/alerts.yml, Loki log aggregation via Promtail, Grafana provisioned dashboards, Pino with request_id correlation, no high-cardinality labels per spec §7

Output 1-3 findings. Verdict: PASS / CONCERN / BLOCK

---

## Step 3: Council Summary

After all 8 expert reviews, output:

```
===================================================================
COUNCIL SUMMARY
===================================================================
Votes:
  Security Architect:           [PASS|CONCERN|BLOCK]
  Performance Engineer:         [PASS|CONCERN|BLOCK]
  Reliability Engineer:         [PASS|CONCERN|BLOCK]
  API Designer:                 [PASS|CONCERN|BLOCK]
  Domain Expert:                [PASS|CONCERN|BLOCK]
  Crypto / On-chain Specialist: [PASS|CONCERN|BLOCK]
  DevOps / Deployment Engineer: [PASS|CONCERN|BLOCK]
  Observability Engineer:       [PASS|CONCERN|BLOCK]

Council Decision: [APPROVE | APPROVE WITH CONDITIONS | REQUEST CHANGES | BLOCK]

Critical items (must fix before merge):
- [list of CRITICAL/HIGH findings, or "None"]

Conditions (should fix, not blocking):
- [list of MEDIUM findings, or "None"]

Cosmetic (auto-fix candidates):
- [list of LOW findings, or "None"]
```

Decision rules:
- Any expert votes BLOCK = Council Decision is BLOCK
- 2+ experts vote CONCERN = Council Decision is REQUEST CHANGES
- 1 expert votes CONCERN = Council Decision is APPROVE WITH CONDITIONS
- All experts vote PASS = Council Decision is APPROVE

## Step 4: Auto-Fix Phase (LOW + MEDIUM only)

After the council summary, run an automatic fix loop. **Only apply fixes for findings with severity LOW or MEDIUM.** HIGH/CRITICAL/BLOCK findings remain in the report and require explicit user follow-up.

### Pre-flight check: forbidden zones (NEVER auto-fix)

Skip any finding whose `file:line` falls in:
- `prisma/migrations/**` (any migration file)
- `prisma/schema.prisma` (DB schema changes need user review)
- `src/config/env.ts` (env schema changes need coordinated rollout)
- `prometheus/rules/alerts.yml` (alert tuning needs user judgement)
- `docker-compose*.yml`, `docker/**`, `.github/workflows/**` (infra changes)
- Any file under `src/pipeline/stages/` whose change reorders or skips a stage
- Any change touching `cryptographic` keywords (SHA-256, HMAC, signing, verifyTypedData)
- Any change to payment flow code (`x402-settle.ts`, `escrow*`, `ledger*`) that alters the conditional logic

For skipped findings, list them in the auto-fix output with reason "forbidden zone".

### Fix loop

For each LOW/MEDIUM finding NOT in a forbidden zone, in order of severity (MEDIUM first, then LOW):

1. **Read** the cited file at the cited line + surrounding context (10 lines each side)
2. **Apply** the fix using the Edit tool. The expert's `fix:` field must be specific enough to execute mechanically. If it is vague ("consider refactoring"), skip with reason "fix not specific enough".
3. **Validate** narrowly: run `npx tsc --noEmit 2>&1 | grep -E "$file_path"` — if any new TS errors, REVERT this single fix via Edit (apply the inverse edit) and skip to next.
4. **Track** in a per-fix log: `applied | skipped (reason) | reverted (reason)`.

### Post-fix validation

After all attempted fixes:

1. `npx tsc --noEmit 2>&1 | grep -cE "error TS" | head -1` — count of TS errors. If higher than baseline (recorded before fix loop started) → REVERT ALL auto-fixes via `git restore -- <file>` for every modified file, output FAILURE.
2. `npx eslint src/ 2>&1 | grep -cE "error"` — count of lint errors. Same revert-all logic if increased.
3. If a `tests/` file was related to a changed source file, run `npx jest tests/unit/<related>` (best-effort match by name). If any test newly fails → REVERT ALL.

### Auto-fix output

```
===================================================================
AUTO-FIX SUMMARY
===================================================================
Applied (N fixes):
- <file>:<line> <expert> <severity> <one-line summary>
  diff: -- <old> ++ <new>

Skipped (M findings):
- <file>:<line> <expert> <severity> reason: <forbidden zone | not specific | high severity | other>

Reverted (K fixes — validation failed):
- <file>:<line> <reason>

Final state:
- TS errors: <baseline> → <after>
- ESLint errors: <baseline> → <after>
- Tests run: <list> status: <passed|failed>
```

## Step 5: Final Output

Combine Steps 3 and 4 into a single final report. End with:
- "Auto-fixes committed? **NO** — fixes are in the working tree only. Review with `git diff` and commit when ready." (Auto-fixes are NEVER committed by this skill. User reviews and commits.)

## Operating rules

- Each expert is independent. They do not see each other's findings.
- Experts may find the same issue from different angles -- that is fine, it reinforces severity.
- Do NOT soften findings to reach consensus. Disagreement between experts is valuable signal.
- Do NOT praise the code. Only report problems and concerns.
- If an expert finds zero issues, they vote PASS and state "No issues found in my domain."
- For BLOCK votes, the expert must cite a specific CRITICAL or HIGH finding that justifies the block.
- Read the FULL file for context, not just the diff. Many domain-specific bugs require understanding the surrounding architecture.
- Each finding's `fix:` MUST be specific enough to execute (file edit you could write right now). Vague fixes ("consider refactoring") are auto-skipped in Step 4.
- Auto-fix NEVER touches forbidden zones (see Step 4 list). NEVER commits.

server.json (mcp_server, 702 bytes)
{ "$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json", "name": "io.github.whiteknightonhorse/apibase", "description": "Universal MCP gateway \u2014 203 tools, 46 providers. x402 USDC micropayments on Base.", "repository": { "url": "https://github.com/whiteknightonhorse/APIbase", "source": "github" }, "version": "1.0.1", "packages": [ { "registryType": "npm", "identifier": "apibase-mcp-client", "version": "1.0.2", "transport": { "type": "stdio" }, "environmentVariables": [] } ], "remotes": [ { "type": "streamable-http", "url": "https://apibase.pro/mcp" } ] }static/.well-known/mcp.jsonmcp_serverShow content (1316 bytes)
{ "name": "APIbase", "description": "Unified MCP gateway to 505+ API tools from 177 providers. Pay-per-call via x402 USDC micropayments.", "protocol": "MCP", "protocolVersion": "2025-03-26", "transport": "streamable-http", "url": "https://apibase.pro/mcp", "version": "2.1.0", "tools_endpoint": "https://apibase.pro/api/v1/tools", "tools_count": 576, "providers_count": 177, "categories_count": 21, "authentication": { "type": "bearer", "required": false, "description": "API key (ak_live_...) via Authorization: Bearer header. Optional \u2014 auto-registration supported.", "payment": [ "x402", "mpp" ] }, "capabilities": { "tools": true, "prompts": true, "resources": false }, "prompts": [ "discover_tools", "find_cheapest_flight", "crypto_market_overview", "prediction_market_research" ], "discovery_hint": "Call prompt 'discover_tools' to browse tools by category or task instead of loading all schemas into context.", "documentation": "https://apibase.pro/ai.txt", "openapi": "https://apibase.pro/.well-known/openapi.json", "server_card": "https://apibase.pro/.well-known/mcp/server-card.json", "source": "https://github.com/whiteknightonhorse/APIbase", "status": "active", "updated_at": "2026-04-01" }mcp.jsonmcp_serverShow content (1810 bytes)
{ "name": "APIbase", "description": "The API Hub for AI Agents — 33 tools across prediction markets, flights, travel, and DeFi. One MCP endpoint, pay-per-call via x402.", "protocol": "MCP", "protocol_version": "2025-03-26", "transport": "streamable-http", "server_version": "1.0.0", "mcp_endpoint": "https://apibase.pro/mcp", "smithery": "https://smithery.ai/servers/apibase-pro/api-hub", "tools_catalog": "https://apibase.pro/api/v1/tools", "documentation": "https://apibase.pro/ai.txt", "discovery": "https://apibase.pro/.well-known/mcp.json", "health": "https://apibase.pro/health/ready", "status": "active", "authentication": { "type": "bearer", "header": "Authorization", "format": "Bearer ak_live_<key>", "auto_registration": true, "payment": "x402 (USDC on Base)" }, "capabilities": { "tools": true, "resources": false, "prompts": false }, "providers": [ { "name": "Polymarket", "category": "prediction-markets", "tools": 12, "prefix": "polymarket.*" }, { "name": "Amadeus", "category": "travel", "tools": 7, "prefix": "amadeus.*" }, { "name": "Sabre GDS", "category": "travel", "tools": 4, "prefix": "sabre.*" }, { "name": "Hyperliquid", "category": "defi", "tools": 6, "prefix": "hyperliquid.*" }, { "name": "AsterDEX", "category": "defi", "tools": 4, "prefix": "aster.*" } ], "quick_start": { "1_connect": "POST https://apibase.pro/mcp with JSON-RPC initialize", "2_authenticate": "Set Authorization: Bearer <your_api_key>", "3_discover": "Call tools/list to see all 33 available tools", "4_use": "Call tools/call with tool name and arguments" } }
README
APIbase.pro — The API Hub for AI Agents
One MCP endpoint. 576 tools. 177 providers. Pay per call with x402 (USDC on Base) or MPP (USDC on Tempo).
Live Platform | Tool Catalog | MCP Endpoint | Frameworks | Dashboard
Product Demo
https://github.com/user-attachments/assets/9e598d61-b2d0-486c-bd34-f0cb0354d09c
12-slide walkthrough: connect → discover tools → 13-stage pipeline → dual-rail payments → analytics. Full interactive version →
What is APIbase?
Production MCP server that gives AI agents access to 576 real-world API tools through a single endpoint. Agents connect once to https://apibase.pro/mcp and can search flights, get stock quotes, check weather and tides, query US Census and CDC health data, search ML models on HuggingFace, look up World Bank indicators, track streamflow from USGS stations, search 7M+ CS papers on DBLP, generate images, send emails, decode VINs, look up chemical compounds, scan npm/PyPI vulnerabilities, find EV chargers, search art at the Met Museum, batch multiple calls, track usage analytics — and 300+ more tools across 30+ categories.
Built for AI agents, not humans. Auto-registration, zero setup, pay-per-call via x402 USDC micropayments on Base or MPP (Machine Payments Protocol) on Tempo.
Quick Start (30 seconds)
Claude Desktop / Cursor / Windsurf
```json
{
  "mcpServers": {
    "apibase": {
      "url": "https://apibase.pro/mcp"
    }
  }
}
```
Multi-server setup (recommended)
Combine APIbase (real-world APIs) with Playwright (browser) and Context7 (docs):
```json
{
  "mcpServers": {
    "apibase": { "url": "https://apibase.pro/mcp" },
    "playwright": { "command": "npx", "args": ["-y", "@playwright/mcp"] },
    "context7": { "command": "npx", "args": ["-y", "@upstash/context7-mcp"] }
  }
}
```
Via npm (stdio bridge)
```json
{
  "mcpServers": {
    "apibase": {
      "command": "npx",
      "args": ["-y", "apibase-mcp-client"]
    }
  }
}
```
REST API
```bash
# Register and get API key
curl -X POST https://apibase.pro/api/v1/agents/register \
  -H "Content-Type: application/json" \
  -d '{"agent_name": "my-agent", "agent_version": "1.0.0"}'

# Call any tool
curl -X POST https://apibase.pro/api/v1/tools/finnhub.quote/call \
  -H "Authorization: Bearer ak_live_..." \
  -H "Content-Type: application/json" \
  -d '{"symbol": "AAPL"}'
```
Tool Categories (576 tools, 177 providers)
| Category | Tools | Providers | Examples |
|---|---|---|---|
| Web Search | 11 | Serper, Tavily, Exa, Spider.cloud | Google search, AI search, semantic search, web scraping |
| News & Events | 10 | NewsData, GDELT, Mastodon, Currents API | Global news (65 langs), crypto news, trending |
| Social | 7 | Bluesky, TwitterAPI.io | Search posts, profiles, feeds (AT Protocol, X/Twitter) |
| Travel & Flights | 17 | Amadeus, Sabre, Aviasales | Flight search, pricing, status, airports |
| Finance & Stocks | 17 | Finnhub, CoinGecko, ECB, FRED, World Bank | Stock quotes, OHLCV, FX rates, economic data, global indicators |
| Banking Data | 6 | FDIC BankFind, IBANAPI | US bank financials, branch locations, institution search, IBAN validation |
| Company Data | 8 | SEC EDGAR, Companies House, GLEIF | US filings + UK registry + global LEI (200+ countries) |
| Currency Conversion | 2 | ExchangeRate-API | 160+ currencies, real-time conversion |
| Tax & VAT | 3 | VATcomply | EU VAT validation, rates, ECB exchange rates |
| Maps & Geo | 7 | Geoapify | Geocode, routing, POI search, isochrone |
| Address (US/CA) | 2 | Geocodio | Geocode, reverse geocode, USPS-standard |
| Real Estate | 4 | Walk Score, US Real Estate | Walkability, property listings, details |
| Entertainment | 30 | TMDB, Ticketmaster, RAWG, IGDB, Jikan, Met Museum, Rijksmuseum, CMA | Movies, events, games, anime, art collections |
| Art & Culture | 5 | Europeana, ARTIC | 50M+ EU objects + 120K Chicago artworks |
| Stock Media | 3 | Pexels | Free stock photos & videos, commercial use |
| Music | 9 | MusicBrainz, ListenBrainz, RadioBrowser, AudD | Artists, albums, radio, song recognition, lyrics |
| Podcasts | 7 | PodcastIndex, Listen Notes | Search 4M+ podcasts, 186M+ episodes, best by genre |
| Health & Nutrition | 9 | USDA, OpenFDA, NIH, CDC | Food data, drug safety, supplements, public health datasets |
| Chemistry & Biology | 16 | PubChem, RCSB PDB, NCI CACTUS, Materials Project | 100M+ compounds, 220K+ proteins, 150K+ materials, chemical ID converter |
| EV Charging | 3 | Open Charge Map | 300K+ charging stations worldwide, connectors, power levels |
| Fraud Detection | 4 | IPQualityScore | IP/email/URL/phone fraud scoring, VPN/proxy/bot detection |
| Disease Data | 7 | disease.sh, WHO GHO | COVID/Influenza global disease statistics, WHO global health data |
| Clinical Trials | 3 | ClinicalTrials.gov | 577K+ trials, drug research, recruiting |
| Nutrition Database | 2 | FatSecret | 2.3M+ foods, calories, macros, vitamins |
| Education & Research | 9 | OpenAlex, arXiv, PubMed, CrossRef, DBLP | Papers, colleges, DOI lookup, CS bibliography |
| Jobs & Career | 20 | Adzuna, TheirStack, Jooble, Reed, Remotive, Arbeitnow, BLS, ESCO | Global job search, UK/EU/remote, salary data, tech stack analysis |
| Legal & Regulatory | 8 | Regulations.gov, Federal Register, CourtListener | US regulations, court opinions, executive orders |
| Air Quality | 2 | IQAir AirVisual | AQI, pollutants (PM2.5/O3), 30K+ stations |
| Weather | 10 | WeatherAPI.com, NWS, NOAA, NASA FIRMS | Current/forecast, hourly, observations, astronomy, alerts, fire detection |
| Space & Astronomy | 13 | NASA, JPL, NOAA SWPC | APOD, asteroids, fireballs, solar flares |
| Translation | 3 | Langbly | 90+ languages, language detection |
| Sports | 7 | API-Sports, BallDontLie | Football (2000+ leagues), NBA, NFL |
| Holidays & Calendar | 3 | Nager.Date, Calendarific | 230+ countries, national/religious/observance |
| Image Generation | 1 | Stability AI | Stable Diffusion, 16 style presets |
| OCR | 1 | OCR.space | Text from images/PDFs, 20+ languages |
| Speech-to-Text | 3 | AssemblyAI | Transcribe audio, 99 languages, diarization |
| PDF & Documents | 6 | API2PDF, ConvertAPI | HTML/URL to PDF, DOCX↔PDF, 200+ formats |
| Email & SMS | 11 | Resend, Twilio, Telnyx | Send emails, SMS (geo-tiered), voice, phone lookup |
| Messaging | 5 | Telegram | Send messages, photos, documents via bot |
| URL Shortener | 2 | Short.io | Custom branded short links + stats |
| SSL & Domain | 10 | WhoisXML, ssl-checker.io, ThreatIntel | WHOIS, DNS, SSL, domain reputation, malware check |
| Barcode & QR | 4 | QRServer, UPCitemdb | Generate/read QR, barcode lookup |
| Business Intel | 1 | Hunter.io | Company emails, enrichment, 50M+ domains |
| E-commerce | 12 | Zinc, Canopy API, Diffbot, Zyte | Product search, Amazon (12 marketplaces), web extraction |
| Memes & Fun | 2 | Imgflip | 100K+ meme templates, generate captioned meme images |
| AI Marketing | 7 | AIPush | AI-optimized pages, visibility scores |
| World Clock | 3 | TimeAPI.io | Timezone conversion, 597 IANA zones |
| Screenshots | 1 | ApiFlash | Chrome-based URL capture |
| Domain Registration | 5 | NameSilo | Check, buy, manage domains (.com $21) |
| Infrastructure | 6 | Cloudflare | DNS management, CDN cache, traffic analytics |
| Browser | 4 | Browserbase | Managed browser sessions, screenshots, scraping |
| Earthquakes | 3 | USGS | Global seismic data, real-time feeds |
| Water Data | 2 | USGS Water Services | Streamflow gauge sites, real-time water level & discharge |
| Tides & Currents | 2 | NOAA Tides & Currents | Water levels, tidal predictions, currents — 3,000+ US stations |
| Disasters | 3 | GDACS | UN global disaster alerts (earthquakes, floods, hurricanes, volcanoes) |
| IP Intelligence | 2 | ipapi.is | Geolocation, VPN/proxy detection |
| Vehicle Data | 9 | NHTSA, Auto.dev, MarketCheck | VIN decoder, recalls, safety ratings, car listings, market data |
| Country Data | 2 | REST Countries | Country search, ISO code lookup |
| Food Products | 2 | Open Food Facts | Barcode lookup, product search (3M+ products) |
| Test Data | 1 | RandomUser.me | Random user profiles for testing |
| Crypto & DeFi | 26 | CoinGecko, Polymarket, Hyperliquid | Prices, prediction markets, perpetuals |
| Logistics | 7 | 17TRACK, DHL, ShipEngine | Multi-carrier tracking, shipping rates, address validation |
| Postal Codes | 4 | Zippopotam.us, Postcodes.io | Global postal lookup (60+ countries), UK postcodes |
| Public-Domain Books | 13 | Free Use Bible API, Gutendex, LibriVox, Tatoeba | 78K Gutenberg books, 20K LibriVox audio, 1K Bible translations, 13M sentence pairs |
| Brazilian Gov Data | 17 | BrasilAPI, IBGE, BCB SGS | CNPJ/CEP/banks/PIX, census/municipalities, SELIC/CDI/IPCA/USD-BRL |
| EU & UK Gov Data | 7 | Eurostat, UK Police | EU unemployment/inflation/GDP, UK street-level crime |
| Singapore Gov Data | 4 | data.gov.sg | Live weather/PM2.5/rainfall/taxi |
| US Cultural Archives | 3 | US Library of Congress | 415K digitized historical items |
| Platform | 6 | APIbase (internal) | Usage analytics, tool quality index, batch calls |
Full tool catalog with schemas: https://apibase.pro/api/v1/tools
Platform Features
Usage Analytics (Free)
Track your API usage — total calls, cost, cache hit rate, latency, and per-tool breakdown.
```bash
# Usage summary
curl -X POST https://apibase.pro/api/v1/tools/account.usage/call \
  -H "Authorization: Bearer ak_live_..." \
  -d '{"period": "7d"}'

# Per-tool breakdown
curl -X POST https://apibase.pro/api/v1/tools/account.tools/call \
  -H "Authorization: Bearer ak_live_..." \
  -d '{"sort": "cost", "limit": 10}'

# Time series (hourly/daily buckets)
curl -X POST https://apibase.pro/api/v1/tools/account.timeseries/call \
  -H "Authorization: Bearer ak_live_..." \
  -d '{"period": "30d", "granularity": "day"}'
```
Tool Quality Index (Free)
Check tool reliability before calling — uptime, p50/p95 latency, error rate. Updated every 10 minutes.
```bash
# Quality metrics for a specific tool
curl -X POST https://apibase.pro/api/v1/tools/platform.tool_quality/call \
  -H "Authorization: Bearer ak_live_..." \
  -d '{"tool_id": "crypto.get_price"}'

# Rankings — find the most reliable tools
curl -X POST https://apibase.pro/api/v1/tools/platform.tool_rankings/call \
  -H "Authorization: Bearer ak_live_..." \
  -d '{"sort": "uptime", "limit": 20}'
```
Batch API (Free wrapper)
Execute up to 20 tool calls in parallel with a single request. Each sub-call runs the full pipeline independently. You pay only for individual tool calls.
```bash
# Via MCP tool
curl -X POST https://apibase.pro/api/v1/tools/platform.call_batch/call \
  -H "Authorization: Bearer ak_live_..." \
  -d '{"calls": [
    {"tool_id": "crypto.get_price", "params": {"coin": "bitcoin"}},
    {"tool_id": "finance.exchange_rates", "params": {"from": "USD", "to": "EUR"}},
    {"tool_id": "country.by_code", "params": {"code": "US"}}
  ]}'

# Via REST endpoint
curl -X POST https://apibase.pro/api/v1/tools/call_batch \
  -H "Authorization: Bearer ak_live_..." \
  -d '{"calls": [...], "max_parallel": 10}'
```
Predictive Pre-fetching
When an agent calls a tool, the platform can automatically pre-fetch related data into cache. For example, a flight search pre-fetches exchange rates for the destination currency — so when the agent asks for rates next, it's an instant cache hit.
- Fire-and-forget: does not slow down the original response
- Controlled by `PREFETCH_ENABLED` env var (disabled by default)
- Rules: flight search → exchange rates, real estate → walk score, geocode → country data
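A minimal sketch of the fire-and-forget pattern described above. Illustrative only: the rule table and helper names here are hypothetical, not the actual APIbase source.

```typescript
// Illustrative fire-and-forget pre-fetch (hypothetical names). The key
// property: the prefetch promise is never awaited, so it cannot add
// latency to the original response.
declare function warmCache(toolId: string, params: unknown): Promise<void>; // hypothetical

const PREFETCH_RULES: Record<string, (params: any) => Promise<void>> = {
  "amadeus.flights.search": async (p) => {
    // Flight search pre-warms exchange rates for the destination currency.
    await warmCache("finance.exchange_rates", { from: "USD", to: p.currency });
  },
};

function maybePrefetch(toolId: string, params: any): void {
  if (process.env.PREFETCH_ENABLED !== "true") return; // disabled by default
  const rule = PREFETCH_RULES[toolId];
  if (!rule) return;
  // Fire and forget: swallow errors, never block the caller.
  rule(params).catch(() => {});
}
```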
How Payment Works
APIbase supports dual payment rails — agents can pay using either protocol:
x402 (USDC on Base)
| Field | Value |
|---|---|
| Protocol | x402 (HTTP 402 Payment Required) |
| Token | USDC on Base |
| Wallet | 0x50EbDa9dA5dC19c302Ca059d7B9E06e264936480 |
| Price range | $0.001 – $1.00 per call |
| Settlement | Self-hosted on-chain facilitator — no third-party SaaS in the payment path. See docs/x402-facilitator.md. |
Sovereign payment settlement
APIbase runs its own x402 facilitator in-process: every successful payment is settled by submitting transferWithAuthorization directly on Base via viem. There is no Coinbase CDP, no PayAI, no third-party intermediary in the critical path of a paid request.
- No vendor lock-in. If any third-party facilitator changes pricing, ToS, or KYC requirements, our service is unaffected.
- Open architecture. Built on the public `@x402/core` + `@x402/evm` SDKs and `viem` — anyone can fork the pattern. Implementation in `src/payments/local-facilitator.ts`.
- Predictable cost. Settlement = fixed Base gas (~$0.0005 per call) instead of opaque per-settle facilitator fees.
- Fallback retained. The PayAI HTTP facilitator stays wired as a transparent in-client fallback — single-RPC blips don't drop revenue.
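For readers who want to fork the pattern, a hypothetical settle step with viem might look like the sketch below. This is not the actual `src/payments/local-facilitator.ts`; the USDC address and the structure of the verified authorization payload are assumptions to verify before use.

```typescript
import { createWalletClient, http, parseAbi } from "viem";
import { privateKeyToAccount } from "viem/accounts";
import { base } from "viem/chains";

// Sketch of an EIP-3009 on-chain settle (hypothetical, not APIbase's code).
// Verify this address against Circle's docs before relying on it.
const USDC_BASE = "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913";
const abi = parseAbi([
  "function transferWithAuthorization(address from, address to, uint256 value, uint256 validAfter, uint256 validBefore, bytes32 nonce, uint8 v, bytes32 r, bytes32 s)",
]);

// Operator wallet pays gas only; USDC moves from payer to receiver.
const operator = createWalletClient({
  account: privateKeyToAccount(process.env.OPERATOR_KEY as `0x${string}`),
  chain: base,
  transport: http(),
});

// `auth` carries the already-verified signed authorization fields.
async function settle(auth: {
  from: `0x${string}`; to: `0x${string}`; value: bigint;
  validAfter: bigint; validBefore: bigint; nonce: `0x${string}`;
  v: number; r: `0x${string}`; s: `0x${string}`;
}) {
  return operator.writeContract({
    address: USDC_BASE,
    abi,
    functionName: "transferWithAuthorization",
    args: [auth.from, auth.to, auth.value, auth.validAfter,
           auth.validBefore, auth.nonce, auth.v, auth.r, auth.s],
  });
}
```

The EIP-3009 nonce inside the authorization makes an accidental resubmit safe: the chain rejects a reused nonce, so settlement is idempotent on-chain.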
MPP (Machine Payments Protocol)
| Field | Value |
|---|---|
| Protocol | MPP (IETF draft-ryan-httpauth-payment) |
| Token | USDC on Tempo (chain 4217) |
| Wallet | 0x183fFa1335EB66858EebCb86F651f70632821f8d |
| USDC contract | 0x20C000000000000000000000b9537d11c60E8b50 |
| SDK | mppx (npm) |
| Agent setup | wallet.tempo.xyz — one link, connected |
| Discovery | mpp.dev/services |
| Price range | $0.001 – $1.00 per call |
No subscriptions. No minimums. Agent pays only for successful calls. Failed provider calls are auto-refunded.
13-Stage Pipeline
Every tool call passes through:
```
AUTH → IDEMPOTENCY → CONTENT_NEG → SCHEMA_VALIDATION → TOOL_STATUS →
CACHE → RATE_LIMIT → ESCROW → PROVIDER_CALL →
ESCROW_FINALIZE → LEDGER_WRITE → CACHE_SET → RESPONSE
```
- Escrow-first: USDC locked before provider call, refunded on failure
- Idempotent: same request + same key = same result, no double charges
- Cache: per-tool TTL (5s for stock prices, 7 days for walkability scores)
- Fail-closed: Redis down = reject all, no silent degradation
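Schematically, the escrow-first guarantee looks like this. Illustrative TypeScript with hypothetical function names; the real pipeline is 13 discrete stages, not one function.

```typescript
// Illustrative escrow-first flow (hypothetical names). Funds are locked
// before the provider call and refunded on any failure, so a provider
// error can never leave the agent charged without a result.
type ToolCallRequest = { payer: string; price: bigint; toolId: string; params: unknown };
declare function lockEscrow(payer: string, price: bigint): Promise<{ id: string }>;
declare function callProvider(req: ToolCallRequest): Promise<unknown>;
declare function finalizeEscrow(e: { id: string }): Promise<void>;
declare function writeLedger(e: { id: string }, result: unknown): Promise<void>;
declare function refundEscrow(e: { id: string }): Promise<void>;

async function executePaidCall(req: ToolCallRequest) {
  const escrow = await lockEscrow(req.payer, req.price); // ESCROW stage
  try {
    const result = await callProvider(req);              // PROVIDER_CALL
    await finalizeEscrow(escrow);                        // ESCROW_FINALIZE
    await writeLedger(escrow, result);                   // LEDGER_WRITE (append-only)
    return result;
  } catch (err) {
    await refundEscrow(escrow);                          // auto-refund on failure
    throw err;
  }
}
```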
Authentication
| Method | Header | Format |
|---|---|---|
| API Key | Authorization | Bearer ak_live_<32hex> |
| x402 Payment | X-Payment | Base64 payment receipt |
| MPP Payment | Authorization | Payment <credential> (via mppx SDK) |
Auto-registration: agents get API keys instantly on first request. No forms, no approval.
MPP Payment Flow (important for agent developers)
MPP uses a challenge–credential–receipt cycle. You MUST follow the full flow:
1. Agent → POST /api/v1/tools/{tool}/call (with Authorization: Bearer <key>)
2. Server → 402 + WWW-Authenticate: Payment id="...", method="tempo", request="..."
3. Agent signs payment on Tempo → retries with Authorization: Payment <credential>
4. Server verifies on-chain → 200 + Payment-Receipt header + tool result
Critical: Each 402 challenge is unique (HMAC-bound to the request URL, amount, and timestamp). You cannot reuse a credential from one challenge on a different endpoint or after expiry. The mppx SDK handles this automatically.
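For agents not using mppx, the manual cycle looks roughly like this sketch. `signTempoCredential` is a hypothetical stand-in for your Tempo wallet's signing step, not a real SDK function.

```typescript
// Manual MPP cycle (sketch). Step numbers match the flow above.
declare function signTempoCredential(challenge: string): Promise<string>; // hypothetical

async function callWithMpp(url: string, body: unknown, apiKey: string) {
  const headers = {
    "Content-Type": "application/json",
    Authorization: `Bearer ${apiKey}`,
  };
  // 1-2. First attempt returns 402 with a unique, HMAC-bound challenge.
  let res = await fetch(url, { method: "POST", headers, body: JSON.stringify(body) });
  if (res.status === 402) {
    const challenge = res.headers.get("WWW-Authenticate") ?? "";
    // 3. Sign the challenge on Tempo, then retry with the credential.
    const credential = await signTempoCredential(challenge);
    res = await fetch(url, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Payment ${credential}`,
        "X-API-Key": apiKey, // preserves agent identity (see note below)
      },
      body: JSON.stringify(body),
    });
  }
  // 4. Success includes a Payment-Receipt header alongside the result.
  console.log(res.headers.get("Payment-Receipt"));
  return res.json();
}
```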
Using mppx SDK (recommended):
```typescript
import { Mppx, tempo } from 'mppx/client'

// mppx auto-handles the full 402 → pay → retry cycle
const mppx = Mppx.create({
  methods: [tempo({ account: myTempoWallet })],
})

// This single call handles: request → 402 → sign → pay → retry → 200
const response = await fetch('https://apibase.pro/api/v1/tools/nasa.apod/call', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer ak_live_<your_key>', // API key for agent identity
    'X-API-Key': 'ak_live_<your_key>',            // Preserved when mppx replaces Authorization
  },
  body: JSON.stringify({}),
})
```
Using Tempo CLI:
```bash
curl -fsSL https://tempo.xyz/install | bash
tempo wallet login
tempo request https://apibase.pro/api/v1/tools/nasa.apod/call -X POST --json '{}'
```
Using AgentCash (one command):
```bash
# Try any tool instantly
npx agentcash try https://apibase.pro

# Add all APIbase tools to your agent
npx agentcash add https://apibase.pro
```
Note: When mppx retries with Authorization: Payment, it replaces the original Bearer header. To preserve agent identity, also send your API key via X-API-Key header — the server accepts both.
Error Codes (Agent-Friendly)
Every error response includes machine-readable recovery hints:
```json
{
  "error": "rate_limit_exceeded",
  "error_code": "RATE_LIMIT_EXCEEDED",
  "message": "Too many requests",
  "request_id": "abc123",
  "suggested_action": "retry_after_delay",
  "documentation_url": "https://apibase.pro/frameworks#rest",
  "retry_after": 15
}
```
| HTTP | Code | suggested_action |
|---|---|---|
| 400 | bad_request / schema_validation_failed | fix_request |
| 401 | unauthorized | fix_request |
| 402 | payment_required | add_payment |
| 404 | not_found | use_different_tool |
| 429 | rate_limit_exceeded | retry_after_delay |
| 502 | bad_gateway | retry_after_delay |
| 503 | service_unavailable | retry_after_delay |
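An agent-side handler can act on `suggested_action` directly. A sketch that assumes only the error shape documented above:

```typescript
// Sketch: acting on the machine-readable recovery hints above.
async function callWithRecovery(url: string, init: RequestInit, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, init);
    if (res.ok) return res.json();
    const err = await res.json();
    switch (err.suggested_action) {
      case "retry_after_delay": {
        // 429/502/503: back off for retry_after seconds, then retry.
        const delayMs = (err.retry_after ?? 5) * 1000;
        await new Promise((resolve) => setTimeout(resolve, delayMs));
        continue;
      }
      case "fix_request":        // 400/401: retrying unchanged is pointless
      case "add_payment":        // 402: attach a payment first
      case "use_different_tool": // 404: pick another tool
      default:
        throw new Error(`${err.error_code}: ${err.message} (${err.request_id})`);
    }
  }
  throw new Error("retries exhausted");
}
```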
Troubleshooting (Dual-Rail Payments)
"MPP payment verification failed" on x402 requests
Symptom: Agent sends x402 payment (X-Payment header) but gets 400 MPP payment verification failed instead of data.
Root cause: If you use the mppx SDK with default settings, Mppx.create() installs a global fetch() polyfill that intercepts ALL HTTP requests — including x402 ones. When mppx sees a 402 response, it automatically signs an MPP credential and retries, even if the original request was x402. The MPP credential is invalid for x402 → server returns 400.
Fix: Initialize mppx with polyfill: false:
```typescript
// WRONG — intercepts all fetch() calls including x402
const mppx = await Mppx.create({ wallet });

// CORRECT — only use mppx.fetch() explicitly for MPP payments
const mppx = await Mppx.create({ wallet, polyfill: false });
```
Then use mppx.fetch() only for MPP payments, and regular fetch() for x402.
Using both payment protocols
APIbase supports dual-rail payments. Each request should use ONE protocol:
| Protocol | Header | When to use |
|---|---|---|
| x402 | X-Payment: <signed-payload> | Default. Use with Coinbase CDP or PayAI facilitator |
| MPP | Authorization: Payment <credential> | Use with Tempo wallet and mppx SDK |
Do NOT send both headers in the same request — both middleware will activate and one will fail.
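A small helper makes the one-rail rule hard to violate. A sketch:

```typescript
// Sketch: build headers for exactly one payment rail, never both.
type Rail =
  | { kind: "x402"; payload: string }      // signed x402 payment payload
  | { kind: "mpp"; credential: string };   // Tempo credential from mppx

function paymentHeaders(rail: Rail): Record<string, string> {
  return rail.kind === "x402"
    ? { "X-Payment": rail.payload }
    : { Authorization: `Payment ${rail.credential}` };
}
```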
MCP Discovery
GET /.well-known/mcp.json → MCP server metadata (transport, capabilities, tools count)
GET /.well-known/mcp/server-card.json → Full tool catalog with schemas (Smithery)
GET /.well-known/ai-capabilities.json → AI capabilities manifest (21 categories)
GET /.well-known/agent.json → A2A agent card (protocol, auth, payment)
GET /.well-known/x402-payment.json → Payment config (network, facilitators, dual-rail)
GET /.well-known/openapi.json → OpenAPI 3.1 spec (with x-payment-info)
GET /ai.txt → Plain text AI agent discovery
GET /llms.txt → Concise LLM context
GET /api/v1/tools → Live tool catalog (all 576 tools, JSON schemas)
GET /health/ready → System health check
POST /mcp prompts/get discover_tools → Browse tools by category or task (progressive disclosure)
GET /frameworks → Integration guides for 9 frameworks
Progressive disclosure: Instead of loading all 576 tool schemas into context, agents can call the discover_tools prompt to find relevant tools first:
- `discover_tools` (no args) → 21 categories with tool counts
- `discover_tools category="travel"` → 17 travel tools
- `discover_tools task="check earthquake near Tokyo"` → matching tools ranked by relevance
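Over MCP, the prompt can be called with the official TypeScript SDK. A sketch that assumes the standard `@modelcontextprotocol/sdk` client API:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Progressive disclosure: browse a category via discover_tools instead of
// loading every tool schema into context.
const client = new Client({ name: "my-agent", version: "1.0.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://apibase.pro/mcp")),
);

const travel = await client.getPrompt({
  name: "discover_tools",
  arguments: { category: "travel" },
});
console.log(travel.messages);
```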
Tool composition hints: Task-based search results include related tool suggestions:
- amadeus.flights.search: Search for real-time flight offers...
→ Related: amadeus.flight_price (Confirm exact pricing), finance.exchange_rates (Convert to local currency)
Integrations
Every framework connects to one endpoint: https://apibase.pro/mcp
| Platform | Config | Docs |
|---|---|---|
| Claude Desktop / Code | "url": "https://apibase.pro/mcp" | 3 lines JSON |
| Cursor IDE | .cursor/mcp.json → same URL | 3 lines JSON |
| Windsurf (Codeium) | "serverUrl": "https://apibase.pro/mcp" | 3 lines JSON |
| OpenAI Agents SDK | MCPServerStreamableHTTP(url=...) | Python + TS |
| LangChain / LangGraph | MultiServerMCPClient({"apibase": {...}}) | Python |
| Google ADK | McpToolset(StreamableHTTPConnectionParams(...)) | Python |
| CrewAI | mcp_servers=["https://apibase.pro/mcp"] | 1 line |
| Microsoft Copilot Studio | UI: Actions → Add MCP Server | Enterprise |
Full framework guides with code examples →
Registry Listings
| Registry | Link |
|---|---|
| Smithery | smithery.ai/servers/apibase-pro/api-hub |
| Glama | glama.ai/mcp/servers/whiteknightonhorse/APIbase |
| MCP Registry | io.github.whiteknightonhorse/apibase |
| PulseMCP | pulsemcp.com (auto-synced) |
| MPPScan | mppscan.com |
Architecture
- 16 Docker containers: API, Worker, Outbox, PostgreSQL, Redis, Nginx, Prometheus, Grafana, Loki, Promtail, Alertmanager, exporters
- Single Hetzner server with automated health checks, graceful shutdown, and 27+ Prometheus alert rules
- PostgreSQL = source of truth for financial data (append-only ledger)
- Redis = cache, rate limiting, single-flight deduplication
- Self-hosted x402 facilitator = on-chain `transferWithAuthorization` settled by APIbase directly (no third-party HTTP facilitator in the critical path). Details →
- Fail-closed: any infrastructure failure = reject requests, never pass through
Self-Hosting
Prerequisites
- Docker 24.0+ with Compose v2.0+
- 8GB+ RAM (16 containers)
- Ports: 8880 (Nginx), 3000 (API), 5432 (Postgres), 6379 (Redis) — all internal
Quick Start
```bash
git clone https://github.com/whiteknightonhorse/APIbase.git
cd APIbase
cp .env.example .env   # edit: set POSTGRES_PASSWORD, X402_PAYMENT_ADDRESS, provider keys
docker compose build
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```
Verify
```bash
# Health check (Nginx on 8880)
curl http://localhost:8880/health/ready

# Check all 16 containers
docker compose ps

# View API logs
docker compose logs api --tail 20
```
See .env.example for all configuration options. Never commit .env to git.