Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
01 / Skill
awslabs

mcp

Quality
9.0

This skill enables AI agents to interact directly with Amazon Aurora DSQL, a serverless, PostgreSQL-compatible distributed SQL database. It's ideal for developers needing AI assistance with schema management, query execution, database migrations, and diagnosing query plans within their AWS cloud-native development workflows.

USP

Unlike generic database tools, this skill provides direct, AI-assisted interaction with Aurora DSQL, leveraging MCP to integrate AWS-specific database operations and best practices directly into your AI development environment.

Use cases

  • AI-assisted schema management for Aurora DSQL
  • Executing DSQL queries and diagnosing performance
  • Migrating databases to Aurora DSQL
  • Developing multi-tenant applications with DSQL
  • Applying IAM-based authentication for database access

Detected files (8)

  • src/cloudwatch-mcp-server/skills/agentcore-investigation/SKILL.md (10633 bytes)
    ---
    name: agentcore-investigation
    description: Investigate Bedrock AgentCore runtime sessions via CloudWatch Logs Insights — resolve session/trace IDs, query OTEL spans, filter noise, build timelines. Use when debugging AgentCore agent sessions, tracing tool calls, or analyzing latency.
    ---
    
    # AgentCore Runtime Session Investigation
    
    Investigate AgentCore runtime sessions by querying CloudWatch Logs Insights, filtering OpenTelemetry noise, and producing structured investigation output.
    
    **Key capabilities:**
    - Session-to-trace resolution via OTEL span correlation
    - Structured and glob-style parse queries for both dedicated and combined log groups
    - OpenTelemetry noise filtering with AgentCore-specific heuristics
    - Timeline construction with T+offset format
    - Error, tool invocation, token usage, and latency analysis
    
    ---
    
    ## Reference Files
    
    Load these files as needed for detailed guidance:
    
    ### MCP:
    #### [mcp-setup.md](mcp/mcp-setup.md)
    **When:** ALWAYS load before starting an investigation — ensures CloudWatch and Application Signals MCP servers are configured
    **Contains:** MCP server configuration for CloudWatch Logs and Application Signals, with setup instructions for Claude Code, Gemini, Codex, and Kiro CLI
    
    #### [.mcp.json](mcp/.mcp.json)
    **When:** Load when setting up MCP servers for the first time
    **Contains:** Sample MCP configuration with both CloudWatch and Application Signals servers
    
    ### [otel-span-schema.md](references/otel-span-schema.md)
    **When:** ALWAYS load before querying or filtering OTEL spans
    **Contains:** Field extraction priorities, known instrumentation scopes, noise filtering heuristics (DROP/KEEP patterns)
    
    ---
    
    ## Phase 0: SessionId-to-TraceId Resolution
    
    When the user provides a sessionId, resolve it to traceId(s) first. If user provides traceId directly, skip this phase.
    
    ### Discovery Query (structured fields)
    
    ```
    fields traceId, @timestamp
    | filter attributes.session.id = "SESSION_ID"
    | stats count(*) as spanCount, min(@timestamp) as firstSeen, max(@timestamp) as lastSeen by traceId
    | sort firstSeen asc
    ```
    
    ### Discovery Query (combined log group — glob-style parse)
    
    ```
    fields @timestamp, @message
    | parse @message '"traceId":"*"' as traceId
    | parse @message '"session.id":"*"' as sessionId
    | filter sessionId = "SESSION_ID" or @message like "SESSION_ID"
    | stats earliest(@timestamp) as firstSeen, latest(@timestamp) as lastSeen, count(*) as spanCount by traceId
    | sort firstSeen asc
    | limit 50
    ```
    
    ### Latest Interaction Only
    
    ```
    fields traceId
    | filter attributes.session.id = "SESSION_ID"
    | sort @timestamp desc
    | limit 1
    ```
    
    Store discovered traceId(s) and use them in ALL subsequent queries.
    
    ## Phase 1: Discover Log Groups
    
    Use `describe_log_groups` with logGroupNamePrefix `/aws/bedrock-agentcore/runtimes` to find all runtime log groups.
    
    ```
    Log group naming patterns (in priority order):
    - /aws/bedrock-agentcore/runtimes/<agent_id>-<endpoint_name>/otel-rt-logs (structured OTEL spans)
    - /aws/bedrock-agentcore/runtimes/<agent_id>-<endpoint_name>/[runtime-logs] (stdout/stderr)
    - /aws/bedrock-agentcore/runtimes/<agent_id>-<endpoint_name>-DEFAULT (single combined group)
    ```
    
    ### Log Group Layouts
    
    AgentCore runtimes always emit OTEL spans. Some deployments split logs into a dedicated `otel-rt-logs` sub-group; others write everything into a single combined log group. Both are normal.
    
    | Log Group Layout | Query Strategy |
    |-----------------|----------------|
    | Dedicated `otel-rt-logs` exists | Use structured field queries (`traceId`, `attributes.session.id`, etc.) |
    | Single combined log group | Try structured fields first — if they return 0 results, use glob-style `parse @message` |
    
    If a dedicated `otel-rt-logs` group exists, prefer it for structured queries.
    
    ### Parse Syntax Guidance
    
    When using `parse @message` on combined log groups, prefer glob-style parse — it is simpler and avoids escaping issues:
    
    ```
    | parse @message '"name":"*"' as spanName
    | parse @message '"traceId":"*"' as traceId
    | parse @message '"startTimeUnixNano":"*"' as startNano
    ```
    
    Regex parse (`/pattern/`) is valid CloudWatch Logs Insights syntax but requires careful escaping of quotes and special characters inside JSON. If glob-style parse extracts the field you need, use it.
    
    ## Phase 2: Query CloudWatch Logs Insights
    
    Run all 6 query types for a complete investigation. Each query has a structured version (for dedicated `otel-rt-logs`) and a glob-style parse version (for combined log groups).
    
    ### Query Size Limits
    
    Every query MUST include `| limit` to prevent context window overflow:
    - Session overview: `| limit 50`
    - Span details: `| limit 100`
    - Errors: `| limit 50`
    - Tool invocations: `| limit 100`
    - Token usage: `| limit 50`
    - Latency outliers: `| limit 20`
    
    ### Query 1: Session Overview
    
    **Structured:**
    ```
    fields @timestamp, traceId, spanId, parentSpanId, name, scope.name,
           attributes.session.id, attributes.gen_ai.operation.name, attributes.gen_ai.agent.name,
           startTimeUnixNano, endTimeUnixNano
    | filter traceId = "TRACE_ID"
    | sort startTimeUnixNano asc
    | limit 50
    ```
    
    **Combined log group:**
    ```
    fields @timestamp, @message
    | filter @message like "TRACE_ID"
    | parse @message '"name":"*"' as spanName
    | parse @message '"traceId":"*"' as traceId
    | parse @message '"spanId":"*"' as spanId
    | parse @message '"startTimeUnixNano":"*"' as startNano
    | parse @message '"endTimeUnixNano":"*"' as endNano
    | sort @timestamp asc
    | limit 50
    ```
    
    ### Query 2: Span Details with Duration
    
    **Structured:**
    ```
    fields @timestamp, traceId, spanId, parentSpanId, name, scope.name,
           startTimeUnixNano, endTimeUnixNano,
           (endTimeUnixNano - startTimeUnixNano) / 1000000 as durationMs,
           status.code, attributes.gen_ai.operation.name
    | filter traceId = "TRACE_ID"
    | filter ispresent(startTimeUnixNano)
    | sort startTimeUnixNano asc
    | limit 100
    ```
    
    **Combined log group:**
    ```
    fields @timestamp, @message
    | filter @message like "TRACE_ID"
    | parse @message '"name":"*"' as spanName
    | parse @message '"spanId":"*"' as spanId
    | parse @message '"parentSpanId":"*"' as parentSpanId
    | parse @message '"startTimeUnixNano":"*"' as startNano
    | parse @message '"endTimeUnixNano":"*"' as endNano
    | parse @message '"statusCode":"*"' as statusCode
    | sort @timestamp asc
    | limit 100
    ```
    
    ### Query 3: Errors
    
    **Structured:**
    ```
    fields @timestamp, traceId, spanId, name, status.code, status.message,
           attributes.error.message, attributes.exception.message, attributes.exception.type
    | filter traceId = "TRACE_ID"
    | filter status.code = 2 OR ispresent(attributes.error.message) OR ispresent(attributes.exception.message)
    | sort @timestamp asc
    | limit 50
    ```
    
    **Combined log group:**
    ```
    fields @timestamp, @message
    | filter @message like "TRACE_ID"
    | filter @message like /ERROR|exception|Exception|fault|STATUS_CODE_ERROR/
    | parse @message '"name":"*"' as spanName
    | parse @message '"statusCode":"*"' as statusCode
    | parse @message '"startTimeUnixNano":"*"' as startNano
    | sort @timestamp asc
    | limit 50
    ```
    
    ### Query 4: Tool Invocations
    
    **Structured:**
    ```
    fields @timestamp, traceId, spanId, name, scope.name,
           attributes.gen_ai.operation.name, attributes.tool.name,
           startTimeUnixNano, endTimeUnixNano,
           (endTimeUnixNano - startTimeUnixNano) / 1000000 as durationMs
    | filter traceId = "TRACE_ID"
    | filter attributes.gen_ai.operation.name = "execute_tool" OR ispresent(attributes.tool.name) OR name like /tool/
    | sort startTimeUnixNano asc
    | limit 100
    ```
    
    **Combined log group:**
    ```
    fields @timestamp, @message
    | filter @message like "TRACE_ID"
    | filter @message like /tool|execute_tool|function_call/
    | parse @message '"name":"*"' as spanName
    | parse @message '"startTimeUnixNano":"*"' as startNano
    | parse @message '"endTimeUnixNano":"*"' as endNano
    | parse @message '"statusCode":"*"' as statusCode
    | sort @timestamp asc
    | limit 100
    ```
    
    ### Query 5: Token Usage
    
    **Structured:**
    ```
    fields @timestamp, traceId, spanId, name,
           attributes.gen_ai.usage.input_tokens, attributes.gen_ai.usage.output_tokens,
           attributes.gen_ai.usage.total_tokens, attributes.gen_ai.agent.name
    | filter traceId = "TRACE_ID"
    | filter ispresent(attributes.gen_ai.usage.total_tokens)
    | sort @timestamp asc
    | limit 50
    ```
    
    **Combined log group:**
    ```
    fields @timestamp, @message
    | filter @message like "TRACE_ID"
    | filter @message like /input_tokens|output_tokens|usage/
    | parse @message '"name":"*"' as spanName
    | parse @message '"gen_ai.usage.input_tokens":*,' as inputTokens
    | sort @timestamp asc
    | limit 50
    ```
    
    ### Query 6: Latency Outliers
    
    **Structured:**
    ```
    fields @timestamp, traceId, spanId, name,
           (endTimeUnixNano - startTimeUnixNano) / 1000000 as durationMs
    | filter traceId = "TRACE_ID"
    | filter ispresent(endTimeUnixNano)
    | sort durationMs desc
    | limit 20
    ```
    
    **Combined log group:**
    ```
    fields @timestamp, @message
    | filter @message like "TRACE_ID"
    | parse @message '"name":"*"' as spanName
    | parse @message '"startTimeUnixNano":"*"' as startNano
    | parse @message '"endTimeUnixNano":"*"' as endNano
    | sort @timestamp asc
    | limit 50
    ```
    
    Queries are async — use `get_logs_insight_query_results` to poll until status is `Complete`.
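
    A minimal polling helper sketches this loop. The function name and its callable argument are hypothetical; in practice the status string comes from `get_logs_insight_query_results`:

    ```python
    import time

    def poll_until_complete(get_status, max_wait_s=60.0, interval_s=1.0):
        """Poll an async Logs Insights query until it reports 'Complete'.

        get_status: zero-arg callable returning the current status string,
        e.g. a closure around get_logs_insight_query_results.
        """
        deadline = time.monotonic() + max_wait_s
        while time.monotonic() < deadline:
            status = get_status()
            if status == "Complete":
                return status
            if status in ("Failed", "Cancelled", "Timeout"):
                raise RuntimeError(f"query ended with status {status}")
            time.sleep(interval_s)
        raise TimeoutError("query did not complete in time")
    ```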
    
    ## Phase 3: Filter OTEL Noise
    
    See [otel-span-schema.md](references/otel-span-schema.md) for extraction rules, known scopes, and DROP/KEEP heuristics.
    
    After retrieving query results:
    1. Count total results received
    2. Remove entries matching DROP patterns (count removed)
    3. Keep entries matching KEEP patterns
    4. Log: "Filtered: {total} → {kept} spans ({removed} noise entries dropped)"
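
    The counting and filtering steps above can be sketched in Python. The two DROP patterns below are illustrative placeholders, not the real heuristics from otel-span-schema.md:

    ```python
    import re

    # Illustrative placeholders only; the authoritative DROP/KEEP
    # heuristics live in references/otel-span-schema.md.
    DROP_PATTERNS = [re.compile(p) for p in (r"^GET /ping$", r"opentelemetry\.sdk")]

    def filter_noise(spans, drop_patterns=DROP_PATTERNS, key="name"):
        """Remove spans whose name matches a DROP pattern and return
        (kept_spans, summary_line) in the skill's log format."""
        kept = [s for s in spans
                if not any(p.search(s.get(key, "")) for p in drop_patterns)]
        removed = len(spans) - len(kept)
        summary = (f"Filtered: {len(spans)} → {len(kept)} spans "
                   f"({removed} noise entries dropped)")
        return kept, summary
    ```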
    
    ## Phase 4: Build Timeline
    
    Compute relative offsets from the earliest span's `startTimeUnixNano`:
    
    ```
    [T+0ms]     Session started — traceId: abc123
    [T+45ms]    LLM inference — model: anthropic.claude-v3 — 1,200ms
    [T+1,250ms] Tool call: search_documents — 340ms
    [T+1,600ms] Tool result: 3 documents found
    [T+1,650ms] LLM inference — model: anthropic.claude-v3 — 890ms
    [T+2,550ms] Response generated — 200 OK
    [T+2,600ms] Session ended — total: 2,600ms
    ```
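
    The offset computation can be sketched as follows (assuming nanosecond epoch timestamps, as in OTEL spans):

    ```python
    def build_timeline(spans):
        """Render spans (dicts with startTimeUnixNano and name) as
        [T+Nms] lines relative to the earliest span."""
        spans = sorted(spans, key=lambda s: int(s["startTimeUnixNano"]))
        t0 = int(spans[0]["startTimeUnixNano"])
        lines = []
        for s in spans:
            # Nanoseconds to milliseconds, offset from the first span
            offset_ms = (int(s["startTimeUnixNano"]) - t0) // 1_000_000
            lines.append(f"[T+{offset_ms:,}ms] {s['name']}")
        return lines
    ```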
    
    ## Error Handling
    
    | Situation | Action |
    |-----------|--------|
    | No log groups found | Ask user for log group name or AWS region |
    | Query returns 0 results | Widen time range to ±24h, retry. If still empty, try alternate ID fields |
    | Session ID not found | Try filtering by requestId, invocationId, traceId variants |
    | Query timeout | Use `cancel_logs_insight_query`, reduce time range, retry |
    | Partial results | Note in output, suggest narrower time window |
    | Structured field queries return 0 results | Switch to glob-style `parse @message` queries (see Parse Syntax Guidance) |
    
  • src/aurora-dsql-mcp-server/skills/aurora-dsql-skill/SKILL.md (17766 bytes)
    ---
    name: aurora dsql
    description: "Build with Aurora DSQL — manage schemas, execute queries, handle migrations, diagnose query plans, and develop applications with a serverless, distributed SQL database. Covers IAM auth, multi-tenant patterns, MySQL-to-DSQL migration, DDL operations, and query plan explainability. Triggers on phrases like: DSQL, Aurora DSQL, create DSQL table, DSQL schema, migrate to DSQL, distributed SQL database, serverless PostgreSQL-compatible database, DSQL query plan, DSQL EXPLAIN ANALYZE, why is my DSQL query slow."
    ---
    
    # Amazon Aurora DSQL Skill
    
    Aurora DSQL is a serverless, PostgreSQL-compatible distributed SQL database. This skill provides direct database interaction via MCP tools, schema management, migration support, and multi-tenant patterns.
    
    **Key capabilities:**
    
    - Direct query execution via MCP tools
    - Schema management with DSQL constraints
    - Migration support and safe schema evolution
    - Multi-tenant isolation patterns
    - IAM-based authentication
    
    ---
    
    ## Reference Files
    
    Load these files as needed for detailed guidance:
    
    ### [development-guide.md](references/development-guide.md)
    
    **When:** ALWAYS load before implementing schema changes or database operations
    **Contains:** [Best Practices](references/development-guide.md#best-practices), DDL rules, connection patterns, transaction limits, data type serialization patterns, application-layer referential integrity instructions, security best practices
    
    ### MCP:
    
    #### [mcp-setup.md](mcp/mcp-setup.md)
    
    **When:** Always load for guidance on using or updating the DSQL MCP server
    **Contains:** Instructions for setting up the DSQL MCP server with two configuration options, as
    sampled in [mcp/.mcp.json](mcp/.mcp.json)
    
    1. Documentation-Tools Only
    2. Database Operations (requires a cluster endpoint)
    
    #### [mcp-tools.md](mcp/mcp-tools.md)
    
    **When:** Load when you need detailed MCP tool syntax and examples. PREFER MCP tools for ad-hoc queries — execute directly rather than writing scripts.
    **Contains:** Tool parameters, detailed examples, usage patterns, [input validation](mcp/tools/input-validation.md)
    
    ### [language.md](references/language.md)
    
    **When:** MUST load when making language-specific implementation choices. ALWAYS prefer DSQL Connector when available.
    **Contains:** Driver selection, framework patterns, connection code for Python/JS/Go/Java/Rust
    
    ### [dsql-examples.md](references/dsql-examples.md)
    
    **When:** Load when looking for specific implementation examples
    **Contains:** Code examples, repository patterns, multi-tenant implementations
    
    ### [troubleshooting.md](references/troubleshooting.md)
    
    **When:** Load when debugging errors or unexpected behavior. SHOULD always consult for OCC errors, connection failures, or unexpected query results.
    **Contains:** Common pitfalls, error messages, solutions
    
    ### [onboarding.md](references/onboarding.md)
    
    **When:** User explicitly requests to "Get started with DSQL" or similar phrase
    **Contains:** Interactive step-by-step guide for new users
    
    ### [access-control.md](references/access-control.md)
    
    **When:** MUST load when creating database roles, granting permissions, setting up schemas for applications, or handling sensitive data. ALWAYS use scoped roles for applications — create database roles with `dsql:DbConnect`.
    **Contains:** Scoped role setup, IAM-to-database role mapping, schema separation for sensitive data, role design patterns
    
    ### DDL Migrations (modular):
    
    #### [ddl-migrations/overview.md](references/ddl-migrations/overview.md)
    
    **When:** MUST load when performing DROP COLUMN, RENAME COLUMN, ALTER COLUMN TYPE, or DROP CONSTRAINT
    **Contains:** Table recreation pattern overview, transaction rules, common verify & swap pattern
    
    #### [ddl-migrations/column-operations.md](references/ddl-migrations/column-operations.md)
    
    **When:** Load for DROP COLUMN, ALTER COLUMN TYPE, SET/DROP NOT NULL, SET/DROP DEFAULT migrations
    **Contains:** Step-by-step migration patterns for column-level changes
    
    #### [ddl-migrations/constraint-operations.md](references/ddl-migrations/constraint-operations.md)
    
    **When:** Load for ADD/DROP CONSTRAINT, MODIFY PRIMARY KEY, column split/merge migrations
    **Contains:** Step-by-step migration patterns for constraint and structural changes
    
    #### [ddl-migrations/batched-migration.md](references/ddl-migrations/batched-migration.md)
    
    **When:** Load when migrating tables exceeding 3,000 rows
    **Contains:** OFFSET-based and cursor-based batching patterns, progress tracking, error handling
    
    ### MySQL Migrations (modular):
    
    #### [mysql-migrations/type-mapping.md](references/mysql-migrations/type-mapping.md)
    
    **When:** MUST load when migrating MySQL schemas to DSQL
    **Contains:** MySQL data type mappings, feature alternatives, DDL operation mapping
    
    #### [mysql-migrations/ddl-operations.md](references/mysql-migrations/ddl-operations.md)
    
    **When:** Load when translating MySQL DDL operations to DSQL equivalents
    **Contains:** ALTER COLUMN, DROP COLUMN, AUTO_INCREMENT, ENUM, SET, FOREIGN KEY migration patterns
    
    #### [mysql-migrations/full-example.md](references/mysql-migrations/full-example.md)
    
    **When:** Load when migrating a complete MySQL table to DSQL
    **Contains:** End-to-end MySQL CREATE TABLE migration example with decision summary
    
    ### Query Plan Explainability (modular):
    
    **When:** MUST load all four at Workflow 8 Phase 0 — [query-plan/plan-interpretation.md](references/query-plan/plan-interpretation.md), [query-plan/catalog-queries.md](references/query-plan/catalog-queries.md), [query-plan/guc-experiments.md](references/query-plan/guc-experiments.md), [query-plan/report-format.md](references/query-plan/report-format.md)
    **Contains:** DSQL node types + Node Duration math + estimation-error bands, pg_class/pg_stats/pg_indexes SQL + correlated-predicate verification, GUC experiment procedures + 30-second skip protocol, required report structure + element checklist + support request template
    
    ---
    
    ## MCP Tools Available
    
    The `aurora-dsql` MCP server provides these tools:
    
    **Database Operations:**
    
    1. **readonly_query** - Execute SELECT queries (returns list of dicts)
    2. **transact** - Execute DDL/DML statements in transaction (takes list of SQL statements)
    3. **get_schema** - Get table structure for a specific table
    
    **Documentation & Knowledge:**
    
    1. **dsql_search_documentation** - Search Aurora DSQL documentation
    2. **dsql_read_documentation** - Read specific documentation pages
    3. **dsql_recommend** - Get DSQL best practice recommendations
    
    **Note:** There is no `list_tables` tool. Use `readonly_query` with information_schema.
    
    See [mcp-setup.md](mcp/mcp-setup.md) for detailed setup instructions.
    See [mcp-tools.md](mcp/mcp-tools.md) for detailed usage and examples.
    
    ### AWS Knowledge MCP (`awsknowledge`)
    
    Consult for verifying DSQL service limits before advising users. The numeric limits below are
    defaults that may change — when a user's decision depends on an exact limit, verify it first:
    
    | Limit                          | Default       | Verify query                       |
    | ------------------------------ | ------------- | ---------------------------------- |
    | Max rows per transaction       | 3,000         | `aurora dsql transaction limits`   |
    | Max data size per transaction  | 10 MiB        | `aurora dsql transaction limits`   |
    | Max transaction duration       | 5 minutes     | `aurora dsql transaction limits`   |
    | Max connections per cluster    | 10,000        | `aurora dsql connection limits`    |
    | Auth token expiry              | 15 minutes    | `aurora dsql authentication token` |
    | Max connection duration        | 60 minutes    | `aurora dsql connection limits`    |
    | Max indexes per table          | 24            | `aurora dsql index limits`         |
    | Max columns per index          | 8             | `aurora dsql index limits`         |
    | IDENTITY/SEQUENCE CACHE values | 1 or >= 65536 | `aurora dsql sequence cache`       |
    
    **When to verify:** Before recommending batch sizes, connection pool settings, or schema designs
    where hitting a limit would cause failures. No need to verify for general guidance or when
    the exact number doesn't affect the user's decision.
    
    **Fallback:** If `awsknowledge` is unavailable, use the defaults above and note to the user
    that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/).
    
    ## CLI Scripts Available
    
    Bash scripts in [scripts/](scripts/) for cluster management (create, delete, list, cluster info), psql connection, and bulk data loading from local or S3 CSV/TSV/Parquet files.
    See [scripts/README.md](scripts/README.md) for usage.
    
    ---
    
    ## Quick Start
    
    ### 1. List tables and explore schema
    
    ```
    Use readonly_query with information_schema to list tables
    Use get_schema to understand table structure
    ```
    
    ### 2. Query data
    
    ```
    Use readonly_query for SELECT queries
    Always include tenant_id in WHERE clause for multi-tenant apps
    MUST build SQL with safe_query.build() — see mcp/tools/input-validation.md
    ```
    
    ### 3. Execute schema changes
    
    ```
    Use transact tool with list of SQL statements
    Follow one-DDL-per-transaction rule
    Always use CREATE INDEX ASYNC in separate transaction
    ALTER COLUMN TYPE, DROP COLUMN, DROP CONSTRAINT → Table Recreation Pattern (Workflow 6)
    ```
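
    A sketch of the one-DDL-per-transaction rule, where `apply_ddl` is a hypothetical helper and `transact` stands in for the MCP tool of the same name:

    ```python
    def apply_ddl(transact, statements):
        """Issue each DDL in its own transact call, never batched together.

        transact: callable taking a list of SQL statements; in practice
        the MCP transact tool.
        """
        for stmt in statements:
            transact([stmt])  # one DDL per transaction

    # Illustrative DDL: index creation goes in its own transaction,
    # and always as CREATE INDEX ASYNC.
    ddl = [
        "CREATE TABLE orders (tenant_id UUID, id UUID, payload TEXT, "
        "PRIMARY KEY (tenant_id, id))",
        "CREATE INDEX ASYNC idx_orders_tenant ON orders (tenant_id)",
    ]
    ```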
    
    ---
    
    ## Common Workflows
    
    ### Workflow 1: Create Multi-Tenant Schema
    
    1. Create main table with tenant_id column using transact
    2. Create async index on tenant_id in separate transact call
    3. Create composite indexes for common query patterns (separate transact calls)
    4. Verify schema with get_schema
    
    - MUST include tenant_id in all tables
    - MUST use `CREATE INDEX ASYNC` exclusively
    - MUST issue each DDL in its own transact call: `transact(["CREATE TABLE ..."])`
    - MUST store arrays/JSON as TEXT
    
    ### Workflow 2: Safe Data Migration
    
    1. Add column using transact: `transact(["ALTER TABLE ... ADD COLUMN ..."])`
    2. Populate existing rows with UPDATE in separate transact calls (batched under 3,000 rows)
    3. Verify migration with readonly_query using COUNT
    4. Create async index for new column using transact if needed
    
    - MUST add column first, populate later
    - MUST issue ADD COLUMN with only name and type; apply DEFAULT via separate UPDATE
    - MUST batch updates under 3,000 rows in separate transact calls
    - MUST issue each ALTER TABLE in its own transaction
    
    **Recovery — batch fails midway:** Rows already updated keep their new value (each batch committed
    in its own transaction). Resume by filtering on the unset state — e.g. add
    `WHERE new_column IS NULL` (or the sentinel value) to the next UPDATE — and continue from there.
    Re-running the entire migration is safe because the filter naturally excludes completed rows.
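
    A resumable backfill following these rules might look like this sketch, where `readonly_query` and `transact` stand in for the MCP tools and the SQL shapes are illustrative:

    ```python
    def backfill_column(readonly_query, transact, table, column, value, batch_size=2500):
        """Populate a newly added column in batches under the 3,000-row
        transaction limit. Filtering on `column IS NULL` makes re-runs
        idempotent: rows from already-committed batches are skipped."""
        assert batch_size < 3000, "stay under the per-transaction row limit"
        total = 0
        while True:
            remaining = readonly_query(
                f"SELECT count(*) AS n FROM {table} WHERE {column} IS NULL"
            )[0]["n"]
            if remaining == 0:
                return total
            transact([
                f"UPDATE {table} SET {column} = {value} "
                f"WHERE id IN (SELECT id FROM {table} "
                f"WHERE {column} IS NULL LIMIT {batch_size})"
            ])
            total += min(remaining, batch_size)
    ```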
    
    ### Workflow 3: Application-Layer Referential Integrity
    
    **INSERT:** MUST validate parent exists with readonly_query → throw error if not found → insert child with transact.
    
    **DELETE:** MUST check dependents with readonly_query COUNT → return error if dependents exist → delete with transact if safe.
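
    A minimal sketch of the INSERT path. Table and column names are hypothetical, and real code should build SQL with the safe-query helper from Workflow 4 rather than raw f-strings:

    ```python
    def insert_child(readonly_query, transact, parent_id, child_insert_sql):
        """Application-layer FK check: verify the parent row exists before
        inserting the child (DSQL does not enforce foreign keys).
        readonly_query/transact stand in for the MCP tools."""
        rows = readonly_query(
            f"SELECT 1 FROM parents WHERE id = '{parent_id}' LIMIT 1"
        )
        if not rows:
            raise ValueError(f"parent {parent_id} not found")
        transact([child_insert_sql])
    ```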
    
    ### Workflow 4: Query with Tenant Isolation
    
    1. **MUST** authorize the caller against the tenant — format validation does not establish authorization
    2. **MUST** build SQL with [`safe_query.build()`](mcp/tools/safe_query.py) — use `allow()`/`regex()` for
       values (emits `'v'`), `ident()` for table/column names (emits `"v"`).
       See [input-validation.md](mcp/tools/input-validation.md)
    3. **MUST** include `tenant_id` in the WHERE clause; reject cross-tenant access at the application layer
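
    These rules can be illustrated with a simplified stand-in for `safe_query.build()`. This is not the real API, only a sketch of allowlist-validated identifiers, regex-validated values, and a forced tenant filter:

    ```python
    import re

    IDENT = re.compile(r"^[a-z_][a-z0-9_]*$")          # double-quoted identifiers
    UUID = re.compile(r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-"
                      r"[0-9a-f]{4}-[0-9a-f]{12}$")    # single-quoted values

    def tenant_select(table, tenant_id, columns=("*",)):
        """Build a tenant-scoped SELECT: reject any identifier or value
        that fails validation, and always include the tenant_id filter."""
        if not IDENT.match(table):
            raise ValueError(f"bad identifier: {table}")
        if not UUID.match(tenant_id):
            raise ValueError("tenant_id must be a UUID")
        for c in columns:
            if c != "*" and not IDENT.match(c):
                raise ValueError(f"bad identifier: {c}")
        cols = ", ".join(c if c == "*" else f'"{c}"' for c in columns)
        return f'SELECT {cols} FROM "{table}" WHERE "tenant_id" = \'{tenant_id}\''
    ```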
    
    ### Workflow 5: Set Up Scoped Database Roles
    
    MUST load [access-control.md](references/access-control.md) for role setup, IAM mapping, and schema permissions.
    
    ### Workflow 6: Table Recreation DDL Migration
    
    DSQL does NOT support direct `ALTER COLUMN TYPE`, `DROP COLUMN`, `DROP CONSTRAINT`, or `MODIFY PRIMARY KEY`. These operations require the **Table Recreation Pattern** — creating a new table, copying data, dropping the original, and renaming. This is a destructive workflow that requires user confirmation at each step.
    
    MUST load [ddl-migrations/overview.md](references/ddl-migrations/overview.md) before attempting any of these operations.
    
    ### Workflow 7: MySQL to DSQL Schema Migration
    
    MUST load [mysql-migrations/type-mapping.md](references/mysql-migrations/type-mapping.md) for type mappings, feature alternatives, and migration steps.
    
    ### Workflow 8: Query Plan Explainability
    
    Explains why the DSQL optimizer chose a particular plan. Triggered by slow queries, high DPU, unexpected Full Scans, or plans the user doesn't understand. **The deliverable is a structured Markdown diagnostic report**, not a conversational answer — run the workflow end-to-end before answering. Use the `aurora-dsql` MCP when connected; fall back to raw `psql` with a generated IAM token (see the fallback block below) otherwise.
    
    **Phase 0 — Load reference material.** Read all four before starting — each has content later phases need verbatim (node-type math, exact catalog SQL, the `>30s` skip protocol, required report elements):
    
    1. [query-plan/plan-interpretation.md](references/query-plan/plan-interpretation.md) — node types, duration math, anomalous values
    2. [query-plan/catalog-queries.md](references/query-plan/catalog-queries.md) — pg_class / pg_stats / pg_indexes SQL
    3. [query-plan/guc-experiments.md](references/query-plan/guc-experiments.md) — GUC procedures and `>30s` skip protocol
    4. [query-plan/report-format.md](references/query-plan/report-format.md) — required report structure
    
    **Phase 1 — Capture the plan.** **ALWAYS** run `readonly_query("EXPLAIN ANALYZE VERBOSE …")` on the user's query verbatim (SELECT form) — **ALWAYS** capture a fresh plan from the cluster, even when the user describes the plan or reports an anomaly. **MAY** leverage `get_schema` or `information_schema` for schema sanity checks. When EXPLAIN errors (`relation does not exist`, `column does not exist`), **MUST** report the error verbatim — **MUST NOT** invent DSQL-specific semantics (e.g., case sensitivity, identifier quoting) as the root cause. Extract Query ID, Planning Time, Execution Time, DPU Estimate. **SELECT** runs as-is. **UPDATE/DELETE** is rewritten to the equivalent SELECT (same join chain + WHERE) — the optimizer picks the same plan shape. **INSERT**, PL/pgSQL, DO blocks, and functions **MUST** be rejected. **MUST NOT** use `transact --allow-writes` for plan capture; it bypasses MCP safety.
    
    **Phase 2 — Gather evidence.** Using SQL from `catalog-queries.md`, query `pg_class`, `pg_stats`, `pg_indexes`, `COUNT(*)`, `COUNT(DISTINCT)`. Classify estimation errors per `plan-interpretation.md` (2x–5x minor, 5x–50x significant, 50x+ severe). Detect correlated predicates and data skew.
    
    **Phase 3 — Experiment (conditional).** ≤30s: run GUC experiments per `guc-experiments.md` (default + merge-join-only) plus optional redundant-predicate test. >30s: skip experiments, include the manual GUC testing SQL verbatim in the report, and do not re-run for redundant-predicate testing. Anomalous values (impossible row counts): confirm query results are correct despite the anomalous EXPLAIN, flag as a potential DSQL bug, and produce the Support Request Template from `report-format.md`.
    
    **Phase 4 — Produce the report, invite reassessment.** Produce the full diagnostic report per the "Required Elements Checklist" in [query-plan/report-format.md](references/query-plan/report-format.md) — structure is non-negotiable. End with the "Next Steps" block from that reference so the user can ask for a reassessment after applying a recommendation. When the user says "reassess" (or equivalent), re-run Phase 1–2 and **append an "Addendum: After-Change Performance"** to the original report (before/after table, match against expected impact) rather than producing a new report.
    
    **psql fallback (MCP unavailable).** Pipe statements into `psql` via a here-string and check `$?`; report failures without proceeding on partial evidence:
    
    ```bash
    TOKEN=$(aws dsql generate-db-connect-admin-auth-token --hostname "$HOST" --region "$REGION")
    PGPASSWORD="$TOKEN" psql "host=$HOST port=5432 user=admin dbname=postgres sslmode=require" <<<"EXPLAIN ANALYZE VERBOSE <sql>;"
    ```
    
    **Safety.** Plan capture uses `readonly_query` exclusively — it rejects INSERT/UPDATE/DELETE/DDL at the MCP layer. Rewrite DML to SELECT (Phase 1) rather than asking `transact --allow-writes` to run it; write-mode `transact` bypasses all MCP safety checks. **MUST NOT** run arbitrary DDL/DML or PL/pgSQL.
    
    ---
    
    ## Error Scenarios
    
    - **`awsknowledge` returns no results:** Use the default limits in the table above and note that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/).
    - **OCC serialization error:** Retry the transaction. If persistent, check for hot-key contention — see [troubleshooting.md](references/troubleshooting.md).
    - **Transaction exceeds limits:** Split into batches under 3,000 rows — see [batched-migration.md](references/ddl-migrations/batched-migration.md).
    - **Token expiration mid-operation:** Generate a fresh IAM token — see [authentication-guide.md](references/auth/authentication-guide.md). See [troubleshooting.md](references/troubleshooting.md) for other issues.
    
    ---
    
    ## Additional Resources
    
    - [Aurora DSQL Documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/)
    - [Code Samples Repository](https://github.com/aws-samples/aurora-dsql-samples)
    - [PostgreSQL Compatibility](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/working-with-postgresql-compatibility.html)
    - [CloudFormation Resource](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dsql-cluster.html)
    
  • src/aurora-dsql-mcp-server/skills/aws-dsql-skill/SKILL.md (17763 bytes)
    ---
    name: aws dsql
    description: "Build with Aurora DSQL — manage schemas, execute queries, handle migrations, diagnose query plans, and develop applications with a serverless, distributed SQL database. Covers IAM auth, multi-tenant patterns, MySQL-to-DSQL migration, DDL operations, and query plan explainability. Triggers on phrases like: DSQL, Aurora DSQL, create DSQL table, DSQL schema, migrate to DSQL, distributed SQL database, serverless PostgreSQL-compatible database, DSQL query plan, DSQL EXPLAIN ANALYZE, why is my DSQL query slow."
    ---
    
    # Amazon Aurora DSQL Skill
    
    Aurora DSQL is a serverless, PostgreSQL-compatible distributed SQL database. This skill provides direct database interaction via MCP tools, schema management, migration support, and multi-tenant patterns.
    
    **Key capabilities:**
    
    - Direct query execution via MCP tools
    - Schema management with DSQL constraints
    - Migration support and safe schema evolution
    - Multi-tenant isolation patterns
    - IAM-based authentication
    
    ---
    
    ## Reference Files
    
    Load these files as needed for detailed guidance:
    
    ### [development-guide.md](references/development-guide.md)
    
    **When:** ALWAYS load before implementing schema changes or database operations
    **Contains:** [Best Practices](references/development-guide.md#best-practices), DDL rules, connection patterns, transaction limits, data type serialization patterns, application-layer referential integrity instructions, security best practices
    
    ### MCP:
    
    #### [mcp-setup.md](mcp/mcp-setup.md)
    
**When:** ALWAYS load when using or updating the DSQL MCP server
**Contains:** Instructions for setting up the DSQL MCP server with two configuration options, as
shown in [mcp/.mcp.json](mcp/.mcp.json)
    
    1. Documentation-Tools Only
    2. Database Operations (requires a cluster endpoint)
    
    #### [mcp-tools.md](mcp/mcp-tools.md)
    
    **When:** Load when you need detailed MCP tool syntax and examples. PREFER MCP tools for ad-hoc queries — execute directly rather than writing scripts.
    **Contains:** Tool parameters, detailed examples, usage patterns, [input validation](mcp/tools/input-validation.md)
    
    ### [language.md](references/language.md)
    
    **When:** MUST load when making language-specific implementation choices. ALWAYS prefer DSQL Connector when available.
    **Contains:** Driver selection, framework patterns, connection code for Python/JS/Go/Java/Rust
    
    ### [dsql-examples.md](references/dsql-examples.md)
    
    **When:** Load when looking for specific implementation examples
    **Contains:** Code examples, repository patterns, multi-tenant implementations
    
    ### [troubleshooting.md](references/troubleshooting.md)
    
    **When:** Load when debugging errors or unexpected behavior. SHOULD always consult for OCC errors, connection failures, or unexpected query results.
    **Contains:** Common pitfalls, error messages, solutions
    
    ### [onboarding.md](references/onboarding.md)
    
**When:** User explicitly asks to "Get started with DSQL" or uses a similar phrase
    **Contains:** Interactive step-by-step guide for new users
    
    ### [access-control.md](references/access-control.md)
    
    **When:** MUST load when creating database roles, granting permissions, setting up schemas for applications, or handling sensitive data. ALWAYS use scoped roles for applications — create database roles with `dsql:DbConnect`.
    **Contains:** Scoped role setup, IAM-to-database role mapping, schema separation for sensitive data, role design patterns
    
    ### DDL Migrations (modular):
    
    #### [ddl-migrations/overview.md](references/ddl-migrations/overview.md)
    
    **When:** MUST load when performing DROP COLUMN, RENAME COLUMN, ALTER COLUMN TYPE, or DROP CONSTRAINT
    **Contains:** Table recreation pattern overview, transaction rules, common verify & swap pattern
    
    #### [ddl-migrations/column-operations.md](references/ddl-migrations/column-operations.md)
    
    **When:** Load for DROP COLUMN, ALTER COLUMN TYPE, SET/DROP NOT NULL, SET/DROP DEFAULT migrations
    **Contains:** Step-by-step migration patterns for column-level changes
    
    #### [ddl-migrations/constraint-operations.md](references/ddl-migrations/constraint-operations.md)
    
    **When:** Load for ADD/DROP CONSTRAINT, MODIFY PRIMARY KEY, column split/merge migrations
    **Contains:** Step-by-step migration patterns for constraint and structural changes
    
    #### [ddl-migrations/batched-migration.md](references/ddl-migrations/batched-migration.md)
    
    **When:** Load when migrating tables exceeding 3,000 rows
    **Contains:** OFFSET-based and cursor-based batching patterns, progress tracking, error handling
    
    ### MySQL Migrations (modular):
    
    #### [mysql-migrations/type-mapping.md](references/mysql-migrations/type-mapping.md)
    
    **When:** MUST load when migrating MySQL schemas to DSQL
    **Contains:** MySQL data type mappings, feature alternatives, DDL operation mapping
    
    #### [mysql-migrations/ddl-operations.md](references/mysql-migrations/ddl-operations.md)
    
    **When:** Load when translating MySQL DDL operations to DSQL equivalents
    **Contains:** ALTER COLUMN, DROP COLUMN, AUTO_INCREMENT, ENUM, SET, FOREIGN KEY migration patterns
    
    #### [mysql-migrations/full-example.md](references/mysql-migrations/full-example.md)
    
    **When:** Load when migrating a complete MySQL table to DSQL
    **Contains:** End-to-end MySQL CREATE TABLE migration example with decision summary
    
    ### Query Plan Explainability (modular):
    
    **When:** MUST load all four at Workflow 8 Phase 0 — [query-plan/plan-interpretation.md](references/query-plan/plan-interpretation.md), [query-plan/catalog-queries.md](references/query-plan/catalog-queries.md), [query-plan/guc-experiments.md](references/query-plan/guc-experiments.md), [query-plan/report-format.md](references/query-plan/report-format.md)
    **Contains:** DSQL node types + Node Duration math + estimation-error bands, pg_class/pg_stats/pg_indexes SQL + correlated-predicate verification, GUC experiment procedures + 30-second skip protocol, required report structure + element checklist + support request template
    
    ---
    
    ## MCP Tools Available
    
    The `aurora-dsql` MCP server provides these tools:
    
    **Database Operations:**
    
    1. **readonly_query** - Execute SELECT queries (returns list of dicts)
    2. **transact** - Execute DDL/DML statements in transaction (takes list of SQL statements)
    3. **get_schema** - Get table structure for a specific table
    
    **Documentation & Knowledge:**
    
    1. **dsql_search_documentation** - Search Aurora DSQL documentation
    2. **dsql_read_documentation** - Read specific documentation pages
    3. **dsql_recommend** - Get DSQL best practice recommendations
    
    **Note:** There is no `list_tables` tool. Use `readonly_query` with information_schema.
    
    See [mcp-setup.md](mcp/mcp-setup.md) for detailed setup instructions.
    See [mcp-tools.md](mcp/mcp-tools.md) for detailed usage and examples.
    
    ### AWS Knowledge MCP (`awsknowledge`)
    
Consult it to verify DSQL service limits before advising users. The numeric limits below are
defaults that may change — when a user's decision depends on an exact limit, verify it first:
    
    | Limit                          | Default       | Verify query                       |
    | ------------------------------ | ------------- | ---------------------------------- |
    | Max rows per transaction       | 3,000         | `aurora dsql transaction limits`   |
    | Max data size per transaction  | 10 MiB        | `aurora dsql transaction limits`   |
    | Max transaction duration       | 5 minutes     | `aurora dsql transaction limits`   |
    | Max connections per cluster    | 10,000        | `aurora dsql connection limits`    |
    | Auth token expiry              | 15 minutes    | `aurora dsql authentication token` |
    | Max connection duration        | 60 minutes    | `aurora dsql connection limits`    |
    | Max indexes per table          | 24            | `aurora dsql index limits`         |
    | Max columns per index          | 8             | `aurora dsql index limits`         |
    | IDENTITY/SEQUENCE CACHE values | 1 or >= 65536 | `aurora dsql sequence cache`       |
    
    **When to verify:** Before recommending batch sizes, connection pool settings, or schema designs
    where hitting a limit would cause failures. No need to verify for general guidance or when
    the exact number doesn't affect the user's decision.
    
    **Fallback:** If `awsknowledge` is unavailable, use the defaults above and note to the user
    that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/).
    
    ## CLI Scripts Available
    
Bash scripts in [scripts/](scripts/) for cluster management (create, delete, list, cluster info), psql connection, and bulk data loading from local or S3 CSV, TSV, and Parquet files.
    See [scripts/README.md](scripts/README.md) for usage.
    
    ---
    
    ## Quick Start
    
    ### 1. List tables and explore schema
    
    ```
    Use readonly_query with information_schema to list tables
    Use get_schema to understand table structure
    ```
    
    ### 2. Query data
    
    ```
    Use readonly_query for SELECT queries
    Always include tenant_id in WHERE clause for multi-tenant apps
    MUST build SQL with safe_query.build() — see mcp/tools/input-validation.md
    ```
    
    ### 3. Execute schema changes
    
    ```
    Use transact tool with list of SQL statements
    Follow one-DDL-per-transaction rule
    Always use CREATE INDEX ASYNC in separate transaction
    ALTER COLUMN TYPE, DROP COLUMN, DROP CONSTRAINT → Table Recreation Pattern (Workflow 6)
    ```
    
    ---
    
    ## Common Workflows
    
    ### Workflow 1: Create Multi-Tenant Schema
    
    1. Create main table with tenant_id column using transact
    2. Create async index on tenant_id in separate transact call
    3. Create composite indexes for common query patterns (separate transact calls)
    4. Verify schema with get_schema
    
    - MUST include tenant_id in all tables
    - MUST use `CREATE INDEX ASYNC` exclusively
    - MUST issue each DDL in its own transact call: `transact(["CREATE TABLE ..."])`
    - MUST store arrays/JSON as TEXT
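
The rules above can be sketched as one single-DDL batch per `transact` call. This is illustrative only: `transact` stands in for the MCP tool of the same name, and the table, column, and index names are invented:

```python
# Sketch: one single-statement batch per DDL, per the one-DDL-per-transaction
# rule; every index is created ASYNC in its own transaction.
def multi_tenant_schema_batches(table: str) -> list[list[str]]:
    return [
        [f"CREATE TABLE {table} ("
         f"tenant_id UUID NOT NULL, "
         f"id UUID NOT NULL, "
         f"payload TEXT, "  # arrays/JSON serialized to TEXT
         f"PRIMARY KEY (tenant_id, id))"],
        [f"CREATE INDEX ASYNC idx_{table}_tenant ON {table} (tenant_id)"],
    ]

def apply_schema(transact, table: str) -> None:
    for batch in multi_tenant_schema_batches(table):
        transact(batch)  # one DDL per transact call
```

Verify the result with `get_schema` after the last batch completes.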
    
    ### Workflow 2: Safe Data Migration
    
    1. Add column using transact: `transact(["ALTER TABLE ... ADD COLUMN ..."])`
    2. Populate existing rows with UPDATE in separate transact calls (batched under 3,000 rows)
    3. Verify migration with readonly_query using COUNT
    4. Create async index for new column using transact if needed
    
    - MUST add column first, populate later
    - MUST issue ADD COLUMN with only name and type; apply DEFAULT via separate UPDATE
    - MUST batch updates under 3,000 rows in separate transact calls
    - MUST issue each ALTER TABLE in its own transaction
    
    **Recovery — batch fails midway:** Rows already updated keep their new value (each batch committed
    in its own transaction). Resume by filtering on the unset state — e.g. add
    `WHERE new_column IS NULL` (or the sentinel value) to the next UPDATE — and continue from there.
    Re-running the entire migration is safe because the filter naturally excludes completed rows.
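
A minimal backfill loop following these rules might look like the sketch below. `transact` and `count_unset` are stand-ins for the MCP tool and a `readonly_query` COUNT; the batch size and column names are illustrative:

```python
BATCH = 2_500  # comfortably under the 3,000-row transaction limit

def backfill_statement(table: str, column: str, value_sql: str) -> str:
    """One resumable UPDATE batch: filtering on the unset state means a
    re-run after a mid-migration failure skips already-updated rows."""
    return (
        f"UPDATE {table} SET {column} = {value_sql} "
        f"WHERE id IN (SELECT id FROM {table} "
        f"WHERE {column} IS NULL LIMIT {BATCH})"
    )

def backfill(transact, count_unset, table: str, column: str, value_sql: str) -> int:
    """Run batches until no unset rows remain; returns the batch count."""
    batches = 0
    while count_unset() > 0:  # stand-in for a readonly_query COUNT
        transact([backfill_statement(table, column, value_sql)])
        batches += 1
    return batches
```

Because each batch commits in its own transaction, the loop is safe to restart at any point.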
    
    ### Workflow 3: Application-Layer Referential Integrity
    
    **INSERT:** MUST validate parent exists with readonly_query → throw error if not found → insert child with transact.
    
    **DELETE:** MUST check dependents with readonly_query COUNT → return error if dependents exist → delete with transact if safe.
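
The two checks can be sketched as check-then-write helpers. This is illustrative only: `readonly_query`/`transact` stand in for the MCP tools, table names are invented, and the SQL uses literal interpolation purely for brevity — real code MUST build it with `safe_query.build()` (Workflow 4):

```python
def insert_child(readonly_query, transact, parent_id: str, insert_sql: str) -> None:
    # Validate the parent exists before inserting the child.
    rows = readonly_query(f"SELECT 1 FROM parents WHERE id = '{parent_id}' LIMIT 1")
    if not rows:
        raise ValueError(f"parent {parent_id} not found")  # reject orphan insert
    transact([insert_sql])

def delete_parent(readonly_query, transact, parent_id: str) -> None:
    # Refuse the delete while dependents still reference the parent.
    [row] = readonly_query(
        f"SELECT COUNT(*) AS n FROM children WHERE parent_id = '{parent_id}'")
    if row["n"] > 0:
        raise ValueError(f"{row['n']} dependent rows exist")  # reject unsafe delete
    transact([f"DELETE FROM parents WHERE id = '{parent_id}'"])
```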
    
    ### Workflow 4: Query with Tenant Isolation
    
    1. **MUST** authorize the caller against the tenant — format validation does not establish authorization
    2. **MUST** build SQL with [`safe_query.build()`](mcp/tools/safe_query.py) — use `allow()`/`regex()` for
       values (emits `'v'`), `ident()` for table/column names (emits `"v"`).
       See [input-validation.md](mcp/tools/input-validation.md)
    3. **MUST** include `tenant_id` in the WHERE clause; reject cross-tenant access at the application layer
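
The ordering above — authorize first, validate second, scope the WHERE clause last — can be reduced to a sketch. The inline regex checks are a toy stand-in for `safe_query.build()`; real code should use the actual helper in `mcp/tools/safe_query.py`:

```python
import re

UUID_RE = re.compile(r"[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}")
IDENT_RE = re.compile(r"[a-z_][a-z0-9_]*")

def tenant_scoped_select(table: str, tenant_id: str, caller_tenants: set) -> str:
    # 1. Authorization first: format validation alone proves nothing.
    if tenant_id not in caller_tenants:
        raise PermissionError("cross-tenant access rejected")
    # 2. Toy value/identifier validation standing in for safe_query.build().
    if not UUID_RE.fullmatch(tenant_id):
        raise ValueError("tenant_id must be a lowercase UUID")
    if not IDENT_RE.fullmatch(table):
        raise ValueError("invalid table identifier")
    # 3. tenant_id always lands in the WHERE clause.
    return f"SELECT * FROM \"{table}\" WHERE tenant_id = '{tenant_id}'"
```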
    
    ### Workflow 5: Set Up Scoped Database Roles
    
    MUST load [access-control.md](references/access-control.md) for role setup, IAM mapping, and schema permissions.
    
    ### Workflow 6: Table Recreation DDL Migration
    
    DSQL does NOT support direct `ALTER COLUMN TYPE`, `DROP COLUMN`, `DROP CONSTRAINT`, or `MODIFY PRIMARY KEY`. These operations require the **Table Recreation Pattern** — creating a new table, copying data, dropping the original, and renaming. This is a destructive workflow that requires user confirmation at each step.
    
    MUST load [ddl-migrations/overview.md](references/ddl-migrations/overview.md) before attempting any of these operations.
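
For orientation only, the step order looks like the sketch below (names are invented; the authoritative procedure, including the required user confirmations between steps, is in the overview reference):

```python
# Sketch of the Table Recreation Pattern step order. Each step is its own
# transact batch; step 3 is destructive — confirm with the user first.
def recreation_steps(table: str, create_new_ddl: str) -> list[list[str]]:
    return [
        [create_new_ddl],                                    # 1. CREATE TABLE <table>_new ...
        [f"INSERT INTO {table}_new SELECT * FROM {table}"],  # 2. copy; adjust the SELECT for
                                                             #    column changes, batch >3,000 rows
        [f"DROP TABLE {table}"],                             # 3. drop original (destructive)
        [f"ALTER TABLE {table}_new RENAME TO {table}"],      # 4. swap names
    ]
```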
    
    ### Workflow 7: MySQL to DSQL Schema Migration
    
    MUST load [mysql-migrations/type-mapping.md](references/mysql-migrations/type-mapping.md) for type mappings, feature alternatives, and migration steps.
    
    ### Workflow 8: Query Plan Explainability
    
Explains why the DSQL optimizer chose a particular plan. Triggered by slow queries, high DPU, unexpected Full Scans, or plans the user doesn't understand. **The deliverable is a structured Markdown diagnostic report**, not a conversational answer — run the workflow end-to-end before answering. Use the `aurora-dsql` MCP when connected; fall back to raw `psql` with a generated IAM token (see the fallback block below) otherwise.
    
    **Phase 0 — Load reference material.** Read all four before starting — each has content later phases need verbatim (node-type math, exact catalog SQL, the `>30s` skip protocol, required report elements):
    
    1. [query-plan/plan-interpretation.md](references/query-plan/plan-interpretation.md) — node types, duration math, anomalous values
    2. [query-plan/catalog-queries.md](references/query-plan/catalog-queries.md) — pg_class / pg_stats / pg_indexes SQL
    3. [query-plan/guc-experiments.md](references/query-plan/guc-experiments.md) — GUC procedures and `>30s` skip protocol
    4. [query-plan/report-format.md](references/query-plan/report-format.md) — required report structure
    
    **Phase 1 — Capture the plan.** **ALWAYS** run `readonly_query("EXPLAIN ANALYZE VERBOSE …")` on the user's query verbatim (SELECT form) — **ALWAYS** capture a fresh plan from the cluster, even when the user describes the plan or reports an anomaly. **MAY** leverage `get_schema` or `information_schema` for schema sanity checks. When EXPLAIN errors (`relation does not exist`, `column does not exist`), **MUST** report the error verbatim — **MUST NOT** invent DSQL-specific semantics (e.g., case sensitivity, identifier quoting) as the root cause. Extract Query ID, Planning Time, Execution Time, DPU Estimate. **SELECT** runs as-is. **UPDATE/DELETE** rewrite to the equivalent SELECT (same join chain + WHERE) — the optimizer picks the same plan shape. **INSERT**, pl/pgsql, DO blocks, and functions **MUST** be rejected. **MUST NOT** use `transact --allow-writes` for plan capture; it bypasses MCP safety.
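
To make the UPDATE→SELECT rewrite concrete: keep the same FROM/join chain and WHERE clause so the optimizer sees the same plan shape. The table and predicates below are invented for illustration, and the rewrite should be done by hand, not by string surgery:

```python
# Hand-rewritten pair: identical FROM chain and WHERE clause.
update_sql = (
    "UPDATE orders SET status = 'shipped' "
    "WHERE customer_id = 42 AND created_at < now() - interval '7 days'"
)
select_form = (
    "SELECT * FROM orders "
    "WHERE customer_id = 42 AND created_at < now() - interval '7 days'"
)
explain_sql = f"EXPLAIN ANALYZE VERBOSE {select_form}"  # what Phase 1 actually runs
```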
    
    **Phase 2 — Gather evidence.** Using SQL from `catalog-queries.md`, query `pg_class`, `pg_stats`, `pg_indexes`, `COUNT(*)`, `COUNT(DISTINCT)`. Classify estimation errors per `plan-interpretation.md` (2x–5x minor, 5x–50x significant, 50x+ severe). Detect correlated predicates and data skew.
    
    **Phase 3 — Experiment (conditional).** ≤30s: run GUC experiments per `guc-experiments.md` (default + merge-join-only) plus optional redundant-predicate test. >30s: skip experiments, include the manual GUC testing SQL verbatim in the report, and do not re-run for redundant-predicate testing. Anomalous values (impossible row counts): confirm query results are correct despite the anomalous EXPLAIN, flag as a potential DSQL bug, and produce the Support Request Template from `report-format.md`.
    
    **Phase 4 — Produce the report, invite reassessment.** Produce the full diagnostic report per the "Required Elements Checklist" in [query-plan/report-format.md](references/query-plan/report-format.md) — structure is non-negotiable. End with the "Next Steps" block from that reference so the user can ask for a reassessment after applying a recommendation. When the user says "reassess" (or equivalent), re-run Phase 1–2 and **append an "Addendum: After-Change Performance"** to the original report (before/after table, match against expected impact) rather than producing a new report.
    
**psql fallback (MCP unavailable).** Feed the statement to `psql` via a here-string and check the exit status; report failures without proceeding on partial evidence:

```bash
TOKEN=$(aws dsql generate-db-connect-admin-auth-token --hostname "$HOST" --region "$REGION")
PGPASSWORD="$TOKEN" psql "host=$HOST port=5432 user=admin dbname=postgres sslmode=require" \
  <<<"EXPLAIN ANALYZE VERBOSE <sql>;" || echo "psql exited with status $?" >&2
    ```
    
    **Safety.** Plan capture uses `readonly_query` exclusively — it rejects INSERT/UPDATE/DELETE/DDL at the MCP layer. Rewrite DML to SELECT (Phase 1) rather than asking `transact --allow-writes` to run it; write-mode `transact` bypasses all MCP safety checks. **MUST NOT** run arbitrary DDL/DML or pl/pgsql.
    
    ---
    
    ## Error Scenarios
    
    - **`awsknowledge` returns no results:** Use the default limits in the table above and note that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/).
    - **OCC serialization error:** Retry the transaction. If persistent, check for hot-key contention — see [troubleshooting.md](references/troubleshooting.md).
    - **Transaction exceeds limits:** Split into batches under 3,000 rows — see [batched-migration.md](references/ddl-migrations/batched-migration.md).
    - **Token expiration mid-operation:** Generate a fresh IAM token — see [authentication-guide.md](references/auth/authentication-guide.md). See [troubleshooting.md](references/troubleshooting.md) for other issues.
    
    ---
    
    ## Additional Resources
    
    - [Aurora DSQL Documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/)
    - [Code Samples Repository](https://github.com/aws-samples/aurora-dsql-samples)
    - [PostgreSQL Compatibility](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/working-with-postgresql-compatibility.html)
    - [CloudFormation Resource](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dsql-cluster.html)
    
  • src/aurora-dsql-mcp-server/skills/distributed-postgres-skill/SKILL.mdskill
    Show content (17775 bytes)
    ---
    name: distributed postgres
    description: "Build with Aurora DSQL — manage schemas, execute queries, handle migrations, diagnose query plans, and develop applications with a serverless, distributed SQL database. Covers IAM auth, multi-tenant patterns, MySQL-to-DSQL migration, DDL operations, and query plan explainability. Triggers on phrases like: DSQL, Aurora DSQL, create DSQL table, DSQL schema, migrate to DSQL, distributed SQL database, serverless PostgreSQL-compatible database, DSQL query plan, DSQL EXPLAIN ANALYZE, why is my DSQL query slow."
    ---
    
    # Amazon Aurora DSQL Skill
    
    Aurora DSQL is a serverless, PostgreSQL-compatible distributed SQL database. This skill provides direct database interaction via MCP tools, schema management, migration support, and multi-tenant patterns.
    
    **Key capabilities:**
    
    - Direct query execution via MCP tools
    - Schema management with DSQL constraints
    - Migration support and safe schema evolution
    - Multi-tenant isolation patterns
    - IAM-based authentication
    
    ---
    
    ## Reference Files
    
    Load these files as needed for detailed guidance:
    
    ### [development-guide.md](references/development-guide.md)
    
    **When:** ALWAYS load before implementing schema changes or database operations
    **Contains:** [Best Practices](references/development-guide.md#best-practices), DDL rules, connection patterns, transaction limits, data type serialization patterns, application-layer referential integrity instructions, security best practices
    
    ### MCP:
    
    #### [mcp-setup.md](mcp/mcp-setup.md)
    
**When:** ALWAYS load when using or updating the DSQL MCP server
**Contains:** Instructions for setting up the DSQL MCP server with two configuration options, as
shown in [mcp/.mcp.json](mcp/.mcp.json)
    
    1. Documentation-Tools Only
    2. Database Operations (requires a cluster endpoint)
    
    #### [mcp-tools.md](mcp/mcp-tools.md)
    
    **When:** Load when you need detailed MCP tool syntax and examples. PREFER MCP tools for ad-hoc queries — execute directly rather than writing scripts.
    **Contains:** Tool parameters, detailed examples, usage patterns, [input validation](mcp/tools/input-validation.md)
    
    ### [language.md](references/language.md)
    
    **When:** MUST load when making language-specific implementation choices. ALWAYS prefer DSQL Connector when available.
    **Contains:** Driver selection, framework patterns, connection code for Python/JS/Go/Java/Rust
    
    ### [dsql-examples.md](references/dsql-examples.md)
    
    **When:** Load when looking for specific implementation examples
    **Contains:** Code examples, repository patterns, multi-tenant implementations
    
    ### [troubleshooting.md](references/troubleshooting.md)
    
    **When:** Load when debugging errors or unexpected behavior. SHOULD always consult for OCC errors, connection failures, or unexpected query results.
    **Contains:** Common pitfalls, error messages, solutions
    
    ### [onboarding.md](references/onboarding.md)
    
**When:** User explicitly asks to "Get started with DSQL" or uses a similar phrase
    **Contains:** Interactive step-by-step guide for new users
    
    ### [access-control.md](references/access-control.md)
    
    **When:** MUST load when creating database roles, granting permissions, setting up schemas for applications, or handling sensitive data. ALWAYS use scoped roles for applications — create database roles with `dsql:DbConnect`.
    **Contains:** Scoped role setup, IAM-to-database role mapping, schema separation for sensitive data, role design patterns
    
    ### DDL Migrations (modular):
    
    #### [ddl-migrations/overview.md](references/ddl-migrations/overview.md)
    
    **When:** MUST load when performing DROP COLUMN, RENAME COLUMN, ALTER COLUMN TYPE, or DROP CONSTRAINT
    **Contains:** Table recreation pattern overview, transaction rules, common verify & swap pattern
    
    #### [ddl-migrations/column-operations.md](references/ddl-migrations/column-operations.md)
    
    **When:** Load for DROP COLUMN, ALTER COLUMN TYPE, SET/DROP NOT NULL, SET/DROP DEFAULT migrations
    **Contains:** Step-by-step migration patterns for column-level changes
    
    #### [ddl-migrations/constraint-operations.md](references/ddl-migrations/constraint-operations.md)
    
    **When:** Load for ADD/DROP CONSTRAINT, MODIFY PRIMARY KEY, column split/merge migrations
    **Contains:** Step-by-step migration patterns for constraint and structural changes
    
    #### [ddl-migrations/batched-migration.md](references/ddl-migrations/batched-migration.md)
    
    **When:** Load when migrating tables exceeding 3,000 rows
    **Contains:** OFFSET-based and cursor-based batching patterns, progress tracking, error handling
    
    ### MySQL Migrations (modular):
    
    #### [mysql-migrations/type-mapping.md](references/mysql-migrations/type-mapping.md)
    
    **When:** MUST load when migrating MySQL schemas to DSQL
    **Contains:** MySQL data type mappings, feature alternatives, DDL operation mapping
    
    #### [mysql-migrations/ddl-operations.md](references/mysql-migrations/ddl-operations.md)
    
    **When:** Load when translating MySQL DDL operations to DSQL equivalents
    **Contains:** ALTER COLUMN, DROP COLUMN, AUTO_INCREMENT, ENUM, SET, FOREIGN KEY migration patterns
    
    #### [mysql-migrations/full-example.md](references/mysql-migrations/full-example.md)
    
    **When:** Load when migrating a complete MySQL table to DSQL
    **Contains:** End-to-end MySQL CREATE TABLE migration example with decision summary
    
    ### Query Plan Explainability (modular):
    
    **When:** MUST load all four at Workflow 8 Phase 0 — [query-plan/plan-interpretation.md](references/query-plan/plan-interpretation.md), [query-plan/catalog-queries.md](references/query-plan/catalog-queries.md), [query-plan/guc-experiments.md](references/query-plan/guc-experiments.md), [query-plan/report-format.md](references/query-plan/report-format.md)
    **Contains:** DSQL node types + Node Duration math + estimation-error bands, pg_class/pg_stats/pg_indexes SQL + correlated-predicate verification, GUC experiment procedures + 30-second skip protocol, required report structure + element checklist + support request template
    
    ---
    
    ## MCP Tools Available
    
    The `aurora-dsql` MCP server provides these tools:
    
    **Database Operations:**
    
    1. **readonly_query** - Execute SELECT queries (returns list of dicts)
    2. **transact** - Execute DDL/DML statements in transaction (takes list of SQL statements)
    3. **get_schema** - Get table structure for a specific table
    
    **Documentation & Knowledge:**
    
    1. **dsql_search_documentation** - Search Aurora DSQL documentation
    2. **dsql_read_documentation** - Read specific documentation pages
    3. **dsql_recommend** - Get DSQL best practice recommendations
    
    **Note:** There is no `list_tables` tool. Use `readonly_query` with information_schema.
    
    See [mcp-setup.md](mcp/mcp-setup.md) for detailed setup instructions.
    See [mcp-tools.md](mcp/mcp-tools.md) for detailed usage and examples.
    
    ### AWS Knowledge MCP (`awsknowledge`)
    
Consult it to verify DSQL service limits before advising users. The numeric limits below are
defaults that may change — when a user's decision depends on an exact limit, verify it first:
    
    | Limit                          | Default       | Verify query                       |
    | ------------------------------ | ------------- | ---------------------------------- |
    | Max rows per transaction       | 3,000         | `aurora dsql transaction limits`   |
    | Max data size per transaction  | 10 MiB        | `aurora dsql transaction limits`   |
    | Max transaction duration       | 5 minutes     | `aurora dsql transaction limits`   |
    | Max connections per cluster    | 10,000        | `aurora dsql connection limits`    |
    | Auth token expiry              | 15 minutes    | `aurora dsql authentication token` |
    | Max connection duration        | 60 minutes    | `aurora dsql connection limits`    |
    | Max indexes per table          | 24            | `aurora dsql index limits`         |
    | Max columns per index          | 8             | `aurora dsql index limits`         |
    | IDENTITY/SEQUENCE CACHE values | 1 or >= 65536 | `aurora dsql sequence cache`       |
    
    **When to verify:** Before recommending batch sizes, connection pool settings, or schema designs
    where hitting a limit would cause failures. No need to verify for general guidance or when
    the exact number doesn't affect the user's decision.
    
    **Fallback:** If `awsknowledge` is unavailable, use the defaults above and note to the user
    that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/).
    
    ## CLI Scripts Available
    
Bash scripts in [scripts/](scripts/) for cluster management (create, delete, list, cluster info), psql connection, and bulk data loading from local or S3 CSV, TSV, and Parquet files.
    See [scripts/README.md](scripts/README.md) for usage.
    
    ---
    
    ## Quick Start
    
    ### 1. List tables and explore schema
    
    ```
    Use readonly_query with information_schema to list tables
    Use get_schema to understand table structure
    ```
    
    ### 2. Query data
    
    ```
    Use readonly_query for SELECT queries
    Always include tenant_id in WHERE clause for multi-tenant apps
    MUST build SQL with safe_query.build() — see mcp/tools/input-validation.md
    ```
    
    ### 3. Execute schema changes
    
    ```
    Use transact tool with list of SQL statements
    Follow one-DDL-per-transaction rule
    Always use CREATE INDEX ASYNC in separate transaction
    ALTER COLUMN TYPE, DROP COLUMN, DROP CONSTRAINT → Table Recreation Pattern (Workflow 6)
    ```
    
    ---
    
    ## Common Workflows
    
    ### Workflow 1: Create Multi-Tenant Schema
    
    1. Create main table with tenant_id column using transact
    2. Create async index on tenant_id in separate transact call
    3. Create composite indexes for common query patterns (separate transact calls)
    4. Verify schema with get_schema
    
    - MUST include tenant_id in all tables
    - MUST use `CREATE INDEX ASYNC` exclusively
    - MUST issue each DDL in its own transact call: `transact(["CREATE TABLE ..."])`
    - MUST store arrays/JSON as TEXT
    
    ### Workflow 2: Safe Data Migration
    
    1. Add column using transact: `transact(["ALTER TABLE ... ADD COLUMN ..."])`
    2. Populate existing rows with UPDATE in separate transact calls (batched under 3,000 rows)
    3. Verify migration with readonly_query using COUNT
    4. Create async index for new column using transact if needed
    
    - MUST add column first, populate later
    - MUST issue ADD COLUMN with only name and type; apply DEFAULT via separate UPDATE
    - MUST batch updates under 3,000 rows in separate transact calls
    - MUST issue each ALTER TABLE in its own transaction
    
    **Recovery — batch fails midway:** Rows already updated keep their new value (each batch committed
    in its own transaction). Resume by filtering on the unset state — e.g. add
    `WHERE new_column IS NULL` (or the sentinel value) to the next UPDATE — and continue from there.
    Re-running the entire migration is safe because the filter naturally excludes completed rows.
    
    ### Workflow 3: Application-Layer Referential Integrity
    
    **INSERT:** MUST validate parent exists with readonly_query → throw error if not found → insert child with transact.
    
    **DELETE:** MUST check dependents with readonly_query COUNT → return error if dependents exist → delete with transact if safe.
    
    ### Workflow 4: Query with Tenant Isolation
    
    1. **MUST** authorize the caller against the tenant — format validation does not establish authorization
    2. **MUST** build SQL with [`safe_query.build()`](mcp/tools/safe_query.py) — use `allow()`/`regex()` for
       values (emits `'v'`), `ident()` for table/column names (emits `"v"`).
       See [input-validation.md](mcp/tools/input-validation.md)
    3. **MUST** include `tenant_id` in the WHERE clause; reject cross-tenant access at the application layer
    
    ### Workflow 5: Set Up Scoped Database Roles
    
    MUST load [access-control.md](references/access-control.md) for role setup, IAM mapping, and schema permissions.
    
    ### Workflow 6: Table Recreation DDL Migration
    
    DSQL does NOT support direct `ALTER COLUMN TYPE`, `DROP COLUMN`, `DROP CONSTRAINT`, or `MODIFY PRIMARY KEY`. These operations require the **Table Recreation Pattern** — creating a new table, copying data, dropping the original, and renaming. This is a destructive workflow that requires user confirmation at each step.
    
    MUST load [ddl-migrations/overview.md](references/ddl-migrations/overview.md) before attempting any of these operations.
    
    ### Workflow 7: MySQL to DSQL Schema Migration
    
    MUST load [mysql-migrations/type-mapping.md](references/mysql-migrations/type-mapping.md) for type mappings, feature alternatives, and migration steps.
    
    ### Workflow 8: Query Plan Explainability
    
    Explains why the DSQL optimizer chose a particular plan. Triggered by slow queries, high DPU, unexpected Full Scans, or plans the user doesn't understand. **The deliverable is a structured Markdown diagnostic report**, not just a conversational answer; run the workflow end-to-end before answering. Use the `aurora-dsql` MCP when connected; otherwise fall back to raw `psql` with a generated IAM token (see the fallback block below).
    
    **Phase 0 — Load reference material.** Read all four before starting — each has content later phases need verbatim (node-type math, exact catalog SQL, the `>30s` skip protocol, required report elements):
    
    1. [query-plan/plan-interpretation.md](references/query-plan/plan-interpretation.md) — node types, duration math, anomalous values
    2. [query-plan/catalog-queries.md](references/query-plan/catalog-queries.md) — pg_class / pg_stats / pg_indexes SQL
    3. [query-plan/guc-experiments.md](references/query-plan/guc-experiments.md) — GUC procedures and `>30s` skip protocol
    4. [query-plan/report-format.md](references/query-plan/report-format.md) — required report structure
    
    **Phase 1 — Capture the plan.**
    
    - **ALWAYS** run `readonly_query("EXPLAIN ANALYZE VERBOSE …")` on the user's query verbatim (SELECT form), and **ALWAYS** capture a fresh plan from the cluster, even when the user describes the plan or reports an anomaly. **MAY** leverage `get_schema` or `information_schema` for schema sanity checks.
    - When EXPLAIN errors (`relation does not exist`, `column does not exist`), **MUST** report the error verbatim and **MUST NOT** invent DSQL-specific semantics (e.g., case sensitivity, identifier quoting) as the root cause.
    - Extract Query ID, Planning Time, Execution Time, DPU Estimate.
    - **SELECT** runs as-is. **UPDATE/DELETE**: rewrite to the equivalent SELECT (same join chain + WHERE); the optimizer picks the same plan shape. **INSERT**, PL/pgSQL, DO blocks, and functions **MUST** be rejected.
    - **MUST NOT** use `transact --allow-writes` for plan capture; it bypasses MCP safety.
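The UPDATE-to-SELECT rewrite can look like this (a hypothetical query; the rewrite keeps the same join chain and WHERE clause so the plan can be captured through `readonly_query`):

```python
# DML the user wants explained:
update_sql = (
    "UPDATE orders o SET status = 'late' "
    "FROM customers c "
    "WHERE c.id = o.customer_id AND c.region = 'eu'"
)

# Equivalent SELECT for EXPLAIN ANALYZE VERBOSE: same join chain + WHERE.
select_sql = (
    "SELECT o.* FROM orders o "
    "JOIN customers c ON c.id = o.customer_id "
    "WHERE c.region = 'eu'"
)
```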
    
    **Phase 2 — Gather evidence.** Using SQL from `catalog-queries.md`, query `pg_class`, `pg_stats`, `pg_indexes`, `COUNT(*)`, `COUNT(DISTINCT)`. Classify estimation errors per `plan-interpretation.md` (2x–5x minor, 5x–50x significant, 50x+ severe). Detect correlated predicates and data skew.
    
    **Phase 3 — Experiment (conditional).** If execution time is ≤30s: run the GUC experiments per `guc-experiments.md` (default + merge-join-only) plus the optional redundant-predicate test. If >30s: skip the experiments, include the manual GUC testing SQL verbatim in the report, and do not re-run the query for redundant-predicate testing. On anomalous values (impossible row counts): confirm the query results are correct despite the anomalous EXPLAIN, flag a potential DSQL bug, and produce the Support Request Template from `report-format.md`.
    
    **Phase 4 — Produce the report, invite reassessment.** Produce the full diagnostic report per the "Required Elements Checklist" in [query-plan/report-format.md](references/query-plan/report-format.md) — structure is non-negotiable. End with the "Next Steps" block from that reference so the user can ask for a reassessment after applying a recommendation. When the user says "reassess" (or equivalent), re-run Phase 1–2 and **append an "Addendum: After-Change Performance"** to the original report (before/after table, match against expected impact) rather than producing a new report.
    
    **psql fallback (MCP unavailable).** Pipe each statement into `psql` via a here-string and check the exit status; report failures without proceeding on partial evidence:
    
    ```bash
    TOKEN=$(aws dsql generate-db-connect-admin-auth-token --hostname "$HOST" --region "$REGION")
    PGPASSWORD="$TOKEN" psql "host=$HOST port=5432 user=admin dbname=postgres sslmode=require" \
      <<<"EXPLAIN ANALYZE VERBOSE <sql>;"
    if [ $? -ne 0 ]; then
      echo "psql failed; stop and report, do not proceed on partial evidence" >&2
    fi
    ```
    
    **Safety.** Plan capture uses `readonly_query` exclusively — it rejects INSERT/UPDATE/DELETE/DDL at the MCP layer. Rewrite DML to SELECT (Phase 1) rather than asking `transact --allow-writes` to run it; write-mode `transact` bypasses all MCP safety checks. **MUST NOT** run arbitrary DDL/DML or PL/pgSQL.
    
    ---
    
    ## Error Scenarios
    
    - **`awsknowledge` returns no results:** Use the default limits in the table above and note that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/).
    - **OCC serialization error:** Retry the transaction. If persistent, check for hot-key contention — see [troubleshooting.md](references/troubleshooting.md).
    - **Transaction exceeds limits:** Split into batches under 3,000 rows — see [batched-migration.md](references/ddl-migrations/batched-migration.md).
    - **Token expiration mid-operation:** Generate a fresh IAM token — see [authentication-guide.md](references/auth/authentication-guide.md). See [troubleshooting.md](references/troubleshooting.md) for other issues.
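A retry wrapper for the OCC case might look like the sketch below. The actual serialization error type depends on the driver, so a string match on a generic exception stands in here:

```python
import random
import time

def with_occ_retry(run, attempts: int = 5, base_delay: float = 0.05):
    """Retry an OCC serialization conflict with jittered exponential backoff."""
    for attempt in range(attempts):
        try:
            return run()
        except RuntimeError as err:  # stand-in for the driver's error class
            if "serialization" not in str(err) or attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Usage with a stub that fails twice, then succeeds:
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("OCC serialization conflict")
    return "ok"

result = with_occ_retry(flaky, base_delay=0.0)
```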
    
    ---
    
    ## Additional Resources
    
    - [Aurora DSQL Documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/)
    - [Code Samples Repository](https://github.com/aws-samples/aurora-dsql-samples)
    - [PostgreSQL Compatibility](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/working-with-postgresql-compatibility.html)
    - [CloudFormation Resource](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dsql-cluster.html)
    
  • src/aurora-dsql-mcp-server/skills/distributed-sql-skill/SKILL.md (skill)
    Show content (17770 bytes)
    ---
    name: distributed sql
    description: "Build with Aurora DSQL — manage schemas, execute queries, handle migrations, diagnose query plans, and develop applications with a serverless, distributed SQL database. Covers IAM auth, multi-tenant patterns, MySQL-to-DSQL migration, DDL operations, and query plan explainability. Triggers on phrases like: DSQL, Aurora DSQL, create DSQL table, DSQL schema, migrate to DSQL, distributed SQL database, serverless PostgreSQL-compatible database, DSQL query plan, DSQL EXPLAIN ANALYZE, why is my DSQL query slow."
    ---
    
    # Amazon Aurora DSQL Skill
    
    Aurora DSQL is a serverless, PostgreSQL-compatible distributed SQL database. This skill provides direct database interaction via MCP tools, schema management, migration support, and multi-tenant patterns.
    
    **Key capabilities:**
    
    - Direct query execution via MCP tools
    - Schema management with DSQL constraints
    - Migration support and safe schema evolution
    - Multi-tenant isolation patterns
    - IAM-based authentication
    
    ---
    
    ## Reference Files
    
    Load these files as needed for detailed guidance:
    
    ### [development-guide.md](references/development-guide.md)
    
    **When:** ALWAYS load before implementing schema changes or database operations
    **Contains:** [Best Practices](references/development-guide.md#best-practices), DDL rules, connection patterns, transaction limits, data type serialization patterns, application-layer referential integrity instructions, security best practices
    
    ### MCP:
    
    #### [mcp-setup.md](mcp/mcp-setup.md)
    
    **When:** Always load for guidance on using or updating the DSQL MCP server
    **Contains:** Instructions for setting up the DSQL MCP server with two configuration options, as
    sampled in [mcp/.mcp.json](mcp/.mcp.json):
    
    1. Documentation-Tools Only
    2. Database Operations (requires a cluster endpoint)
    
    #### [mcp-tools.md](mcp/mcp-tools.md)
    
    **When:** Load when you need detailed MCP tool syntax and examples. PREFER MCP tools for ad-hoc queries — execute directly rather than writing scripts.
    **Contains:** Tool parameters, detailed examples, usage patterns, [input validation](mcp/tools/input-validation.md)
    
    ### [language.md](references/language.md)
    
    **When:** MUST load when making language-specific implementation choices. ALWAYS prefer DSQL Connector when available.
    **Contains:** Driver selection, framework patterns, connection code for Python/JS/Go/Java/Rust
    
    ### [dsql-examples.md](references/dsql-examples.md)
    
    **When:** Load when looking for specific implementation examples
    **Contains:** Code examples, repository patterns, multi-tenant implementations
    
    ### [troubleshooting.md](references/troubleshooting.md)
    
    **When:** Load when debugging errors or unexpected behavior. SHOULD always consult for OCC errors, connection failures, or unexpected query results.
    **Contains:** Common pitfalls, error messages, solutions
    
    ### [onboarding.md](references/onboarding.md)
    
    **When:** User explicitly requests to "Get started with DSQL" or similar phrase
    **Contains:** Interactive step-by-step guide for new users
    
    ### [access-control.md](references/access-control.md)
    
    **When:** MUST load when creating database roles, granting permissions, setting up schemas for applications, or handling sensitive data. ALWAYS use scoped roles for applications — create database roles with `dsql:DbConnect`.
    **Contains:** Scoped role setup, IAM-to-database role mapping, schema separation for sensitive data, role design patterns
    
    ### DDL Migrations (modular):
    
    #### [ddl-migrations/overview.md](references/ddl-migrations/overview.md)
    
    **When:** MUST load when performing DROP COLUMN, RENAME COLUMN, ALTER COLUMN TYPE, or DROP CONSTRAINT
    **Contains:** Table recreation pattern overview, transaction rules, common verify & swap pattern
    
    #### [ddl-migrations/column-operations.md](references/ddl-migrations/column-operations.md)
    
    **When:** Load for DROP COLUMN, ALTER COLUMN TYPE, SET/DROP NOT NULL, SET/DROP DEFAULT migrations
    **Contains:** Step-by-step migration patterns for column-level changes
    
    #### [ddl-migrations/constraint-operations.md](references/ddl-migrations/constraint-operations.md)
    
    **When:** Load for ADD/DROP CONSTRAINT, MODIFY PRIMARY KEY, column split/merge migrations
    **Contains:** Step-by-step migration patterns for constraint and structural changes
    
    #### [ddl-migrations/batched-migration.md](references/ddl-migrations/batched-migration.md)
    
    **When:** Load when migrating tables exceeding 3,000 rows
    **Contains:** OFFSET-based and cursor-based batching patterns, progress tracking, error handling
    
    ### MySQL Migrations (modular):
    
    #### [mysql-migrations/type-mapping.md](references/mysql-migrations/type-mapping.md)
    
    **When:** MUST load when migrating MySQL schemas to DSQL
    **Contains:** MySQL data type mappings, feature alternatives, DDL operation mapping
    
    #### [mysql-migrations/ddl-operations.md](references/mysql-migrations/ddl-operations.md)
    
    **When:** Load when translating MySQL DDL operations to DSQL equivalents
    **Contains:** ALTER COLUMN, DROP COLUMN, AUTO_INCREMENT, ENUM, SET, FOREIGN KEY migration patterns
    
    #### [mysql-migrations/full-example.md](references/mysql-migrations/full-example.md)
    
    **When:** Load when migrating a complete MySQL table to DSQL
    **Contains:** End-to-end MySQL CREATE TABLE migration example with decision summary
    
    ### Query Plan Explainability (modular):
    
    **When:** MUST load all four at Workflow 8 Phase 0 — [query-plan/plan-interpretation.md](references/query-plan/plan-interpretation.md), [query-plan/catalog-queries.md](references/query-plan/catalog-queries.md), [query-plan/guc-experiments.md](references/query-plan/guc-experiments.md), [query-plan/report-format.md](references/query-plan/report-format.md)
    **Contains:** DSQL node types + Node Duration math + estimation-error bands, pg_class/pg_stats/pg_indexes SQL + correlated-predicate verification, GUC experiment procedures + 30-second skip protocol, required report structure + element checklist + support request template
    
    ---
    
    ## MCP Tools Available
    
    The `aurora-dsql` MCP server provides these tools:
    
    **Database Operations:**
    
    1. **readonly_query** - Execute SELECT queries (returns list of dicts)
    2. **transact** - Execute DDL/DML statements in transaction (takes list of SQL statements)
    3. **get_schema** - Get table structure for a specific table
    
    **Documentation & Knowledge:**
    
    1. **dsql_search_documentation** - Search Aurora DSQL documentation
    2. **dsql_read_documentation** - Read specific documentation pages
    3. **dsql_recommend** - Get DSQL best practice recommendations
    
    **Note:** There is no `list_tables` tool. Use `readonly_query` with information_schema.
    
    See [mcp-setup.md](mcp/mcp-setup.md) for detailed setup instructions.
    See [mcp-tools.md](mcp/mcp-tools.md) for detailed usage and examples.
    
    ### AWS Knowledge MCP (`awsknowledge`)
    
    Consult for verifying DSQL service limits before advising users. The numeric limits below are
    defaults that may change — when a user's decision depends on an exact limit, verify it first:
    
    | Limit                          | Default       | Verify query                       |
    | ------------------------------ | ------------- | ---------------------------------- |
    | Max rows per transaction       | 3,000         | `aurora dsql transaction limits`   |
    | Max data size per transaction  | 10 MiB        | `aurora dsql transaction limits`   |
    | Max transaction duration       | 5 minutes     | `aurora dsql transaction limits`   |
    | Max connections per cluster    | 10,000        | `aurora dsql connection limits`    |
    | Auth token expiry              | 15 minutes    | `aurora dsql authentication token` |
    | Max connection duration        | 60 minutes    | `aurora dsql connection limits`    |
    | Max indexes per table          | 24            | `aurora dsql index limits`         |
    | Max columns per index          | 8             | `aurora dsql index limits`         |
    | IDENTITY/SEQUENCE CACHE values | 1 or >= 65536 | `aurora dsql sequence cache`       |
    
    **When to verify:** Before recommending batch sizes, connection pool settings, or schema designs
    where hitting a limit would cause failures. No need to verify for general guidance or when
    the exact number doesn't affect the user's decision.
    
    **Fallback:** If `awsknowledge` is unavailable, use the defaults above and note to the user
    that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/).
    
    ## CLI Scripts Available
    
    Bash scripts in [scripts/](scripts/) for cluster management (create, delete, list, cluster info), psql connection, and bulk data loading from local or S3 CSV/TSV/Parquet files.
    See [scripts/README.md](scripts/README.md) for usage.
    
    ---
    
    ## Quick Start
    
    ### 1. List tables and explore schema
    
    ```
    Use readonly_query with information_schema to list tables
    Use get_schema to understand table structure
    ```
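Since there is no `list_tables` tool, the listing step is a plain `readonly_query` against `information_schema` (a sketch; the `public` schema name is an assumption):

```python
# readonly_query returns a list of dicts, one per row.
LIST_TABLES_SQL = (
    "SELECT table_name FROM information_schema.tables "
    "WHERE table_schema = 'public' ORDER BY table_name"
)
# readonly_query(LIST_TABLES_SQL)
```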
    
    ### 2. Query data
    
    ```
    Use readonly_query for SELECT queries
    Always include tenant_id in WHERE clause for multi-tenant apps
    MUST build SQL with safe_query.build() — see mcp/tools/input-validation.md
    ```
    
    ### 3. Execute schema changes
    
    ```
    Use transact tool with list of SQL statements
    Follow one-DDL-per-transaction rule
    Always use CREATE INDEX ASYNC in separate transaction
    ALTER COLUMN TYPE, DROP COLUMN, DROP CONSTRAINT → Table Recreation Pattern (Workflow 6)
    ```
    
    ---
    
    ## Common Workflows
    
    ### Workflow 1: Create Multi-Tenant Schema
    
    1. Create main table with tenant_id column using transact
    2. Create async index on tenant_id in separate transact call
    3. Create composite indexes for common query patterns (separate transact calls)
    4. Verify schema with get_schema
    
    - MUST include tenant_id in all tables
    - MUST use `CREATE INDEX ASYNC` exclusively
    - MUST issue each DDL in its own transact call: `transact(["CREATE TABLE ..."])`
    - MUST store arrays/JSON as TEXT
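Steps 1–3 above could be sketched as separate `transact` calls; table and index names are hypothetical, and note the one-DDL-per-transaction rule and `CREATE INDEX ASYNC`:

```python
# Each inner list is one transact(...) call, i.e. one transaction.
ddl_steps: list[list[str]] = [
    ["CREATE TABLE orders ("
     "tenant_id TEXT NOT NULL, "
     "id TEXT NOT NULL, "
     "payload TEXT, "  # arrays/JSON serialized to TEXT
     "PRIMARY KEY (tenant_id, id))"],
    ["CREATE INDEX ASYNC idx_orders_tenant ON orders (tenant_id)"],
    ["CREATE INDEX ASYNC idx_orders_tenant_id ON orders (tenant_id, id)"],
]
# for stmts in ddl_steps:
#     transact(stmts)
```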
    
    ### Workflow 2: Safe Data Migration
    
    1. Add column using transact: `transact(["ALTER TABLE ... ADD COLUMN ..."])`
    2. Populate existing rows with UPDATE in separate transact calls (batched under 3,000 rows)
    3. Verify migration with readonly_query using COUNT
    4. Create async index for new column using transact if needed
    
    - MUST add column first, populate later
    - MUST issue ADD COLUMN with only name and type; apply DEFAULT via separate UPDATE
    - MUST batch updates under 3,000 rows in separate transact calls
    - MUST issue each ALTER TABLE in its own transaction
    
    **Recovery — batch fails midway:** Rows already updated keep their new value (each batch committed
    in its own transaction). Resume by filtering on the unset state — e.g. add
    `WHERE new_column IS NULL` (or the sentinel value) to the next UPDATE — and continue from there.
    Re-running the entire migration is safe because the filter naturally excludes completed rows.
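One way to sketch this resumable batching is a helper that emits a single bounded backfill statement; `users`, `plan_tier`, and the default literal are hypothetical, and each statement would run in its own `transact` call:

```python
BATCH = 2500  # stay safely under the 3,000-row transaction limit

def backfill_batch_sql(table: str, column: str, default_literal: str,
                       batch: int = BATCH) -> str:
    """One resumable backfill batch: the IS NULL filter skips rows that
    earlier (committed) batches already populated."""
    return (
        f"UPDATE {table} SET {column} = {default_literal} "
        f"WHERE id IN (SELECT id FROM {table} "
        f"WHERE {column} IS NULL LIMIT {batch})"
    )

# Loop transact([backfill_batch_sql(...)]) until a readonly_query COUNT
# of rows with the column still NULL reports zero.
sql = backfill_batch_sql("users", "plan_tier", "'free'")
```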
    
    ### Workflow 3: Application-Layer Referential Integrity
    
    **INSERT:** MUST validate parent exists with readonly_query → throw error if not found → insert child with transact.
    
    **DELETE:** MUST check dependents with readonly_query COUNT → return error if dependents exist → delete with transact if safe.
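The DELETE guard can be sketched with the MCP tools injected as callables; table names are hypothetical, and the SQL string here is illustrative only (real code should build it via `safe_query`):

```python
from typing import Callable

def delete_parent(count_dependents: Callable[[str], int],
                  run_transact: Callable[[list[str]], None],
                  parent_id: str) -> bool:
    """Application-layer FK on DELETE: readonly_query COUNT of dependents
    first; delete via transact only when none remain."""
    if count_dependents(parent_id) > 0:
        return False  # caller surfaces the error to the user
    run_transact([f"DELETE FROM parents WHERE id = '{parent_id}'"])
    return True

# In-memory stubs standing in for readonly_query/transact:
children = {"p1": 2, "p2": 0}
executed: list[str] = []
blocked = delete_parent(lambda pid: children[pid], executed.extend, "p1")
deleted = delete_parent(lambda pid: children[pid], executed.extend, "p2")
```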
    
    ### Workflow 4: Query with Tenant Isolation
    
    1. **MUST** authorize the caller against the tenant — format validation does not establish authorization
    2. **MUST** build SQL with [`safe_query.build()`](mcp/tools/safe_query.py) — use `allow()`/`regex()` for
       values (emits `'v'`), `ident()` for table/column names (emits `"v"`).
       See [input-validation.md](mcp/tools/input-validation.md)
    3. **MUST** include `tenant_id` in the WHERE clause; reject cross-tenant access at the application layer
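The pattern-validated variant can be illustrated standalone. This mirrors only the documented behaviour of the `safe_query` helpers (values emitted as `'v'`, identifiers as `"v"`), not their actual implementation; the pattern and names are hypothetical:

```python
import re

def ident(name: str) -> str:
    """Table/column names: validate, then quote as "v"."""
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        raise ValueError(f"unsafe identifier: {name!r}")
    return f'"{name}"'

def regex(value: str, pattern: str) -> str:
    """Values: accept only strings matching the pattern, then quote as 'v'."""
    if not re.fullmatch(pattern, value):
        raise ValueError(f"value rejected: {value!r}")
    return "'" + value + "'"

tenant = regex("t_42", r"t_[0-9]+")
clause = f"SELECT * FROM {ident('orders')} WHERE {ident('tenant_id')} = {tenant}"
```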
    
    ### Workflow 5: Set Up Scoped Database Roles
    
    MUST load [access-control.md](references/access-control.md) for role setup, IAM mapping, and schema permissions.
    
    ### Workflow 6: Table Recreation DDL Migration
    
    DSQL does NOT support direct `ALTER COLUMN TYPE`, `DROP COLUMN`, `DROP CONSTRAINT`, or `MODIFY PRIMARY KEY`. These operations require the **Table Recreation Pattern** — creating a new table, copying data, dropping the original, and renaming. This is a destructive workflow that requires user confirmation at each step.
    
    MUST load [ddl-migrations/overview.md](references/ddl-migrations/overview.md) before attempting any of these operations.
    
    ### Workflow 7: MySQL to DSQL Schema Migration
    
    MUST load [mysql-migrations/type-mapping.md](references/mysql-migrations/type-mapping.md) for type mappings, feature alternatives, and migration steps.
    
    ### Workflow 8: Query Plan Explainability
    
    Explains why the DSQL optimizer chose a particular plan. Triggered by slow queries, high DPU, unexpected Full Scans, or plans the user doesn't understand. **The deliverable is a structured Markdown diagnostic report**, not just a conversational answer; run the workflow end-to-end before answering. Use the `aurora-dsql` MCP when connected; otherwise fall back to raw `psql` with a generated IAM token (see the fallback block below).
    
    **Phase 0 — Load reference material.** Read all four before starting — each has content later phases need verbatim (node-type math, exact catalog SQL, the `>30s` skip protocol, required report elements):
    
    1. [query-plan/plan-interpretation.md](references/query-plan/plan-interpretation.md) — node types, duration math, anomalous values
    2. [query-plan/catalog-queries.md](references/query-plan/catalog-queries.md) — pg_class / pg_stats / pg_indexes SQL
    3. [query-plan/guc-experiments.md](references/query-plan/guc-experiments.md) — GUC procedures and `>30s` skip protocol
    4. [query-plan/report-format.md](references/query-plan/report-format.md) — required report structure
    
    **Phase 1 — Capture the plan.**
    
    - **ALWAYS** run `readonly_query("EXPLAIN ANALYZE VERBOSE …")` on the user's query verbatim (SELECT form), and **ALWAYS** capture a fresh plan from the cluster, even when the user describes the plan or reports an anomaly. **MAY** leverage `get_schema` or `information_schema` for schema sanity checks.
    - When EXPLAIN errors (`relation does not exist`, `column does not exist`), **MUST** report the error verbatim and **MUST NOT** invent DSQL-specific semantics (e.g., case sensitivity, identifier quoting) as the root cause.
    - Extract Query ID, Planning Time, Execution Time, DPU Estimate.
    - **SELECT** runs as-is. **UPDATE/DELETE**: rewrite to the equivalent SELECT (same join chain + WHERE); the optimizer picks the same plan shape. **INSERT**, PL/pgSQL, DO blocks, and functions **MUST** be rejected.
    - **MUST NOT** use `transact --allow-writes` for plan capture; it bypasses MCP safety.
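For DELETE, the rewrite can look like this (a hypothetical query; the equivalent SELECT keeps the same join chain and WHERE clause so `readonly_query` can capture the plan):

```python
# DML the user wants explained:
delete_sql = (
    "DELETE FROM sessions s USING users u "
    "WHERE u.id = s.user_id AND u.disabled"
)

# Equivalent SELECT for EXPLAIN ANALYZE VERBOSE: same join chain + WHERE.
select_sql = (
    "SELECT s.* FROM sessions s "
    "JOIN users u ON u.id = s.user_id "
    "WHERE u.disabled"
)
```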
    
    **Phase 2 — Gather evidence.** Using SQL from `catalog-queries.md`, query `pg_class`, `pg_stats`, `pg_indexes`, `COUNT(*)`, `COUNT(DISTINCT)`. Classify estimation errors per `plan-interpretation.md` (2x–5x minor, 5x–50x significant, 50x+ severe). Detect correlated predicates and data skew.
    
    **Phase 3 — Experiment (conditional).** If execution time is ≤30s: run the GUC experiments per `guc-experiments.md` (default + merge-join-only) plus the optional redundant-predicate test. If >30s: skip the experiments, include the manual GUC testing SQL verbatim in the report, and do not re-run the query for redundant-predicate testing. On anomalous values (impossible row counts): confirm the query results are correct despite the anomalous EXPLAIN, flag a potential DSQL bug, and produce the Support Request Template from `report-format.md`.
    
    **Phase 4 — Produce the report, invite reassessment.** Produce the full diagnostic report per the "Required Elements Checklist" in [query-plan/report-format.md](references/query-plan/report-format.md) — structure is non-negotiable. End with the "Next Steps" block from that reference so the user can ask for a reassessment after applying a recommendation. When the user says "reassess" (or equivalent), re-run Phase 1–2 and **append an "Addendum: After-Change Performance"** to the original report (before/after table, match against expected impact) rather than producing a new report.
    
    **psql fallback (MCP unavailable).** Pipe each statement into `psql` via a here-string and check the exit status; report failures without proceeding on partial evidence:
    
    ```bash
    TOKEN=$(aws dsql generate-db-connect-admin-auth-token --hostname "$HOST" --region "$REGION")
    PGPASSWORD="$TOKEN" psql "host=$HOST port=5432 user=admin dbname=postgres sslmode=require" \
      <<<"EXPLAIN ANALYZE VERBOSE <sql>;"
    if [ $? -ne 0 ]; then
      echo "psql failed; stop and report, do not proceed on partial evidence" >&2
    fi
    ```
    
    **Safety.** Plan capture uses `readonly_query` exclusively — it rejects INSERT/UPDATE/DELETE/DDL at the MCP layer. Rewrite DML to SELECT (Phase 1) rather than asking `transact --allow-writes` to run it; write-mode `transact` bypasses all MCP safety checks. **MUST NOT** run arbitrary DDL/DML or PL/pgSQL.
    
    ---
    
    ## Error Scenarios
    
    - **`awsknowledge` returns no results:** Use the default limits in the table above and note that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/).
    - **OCC serialization error:** Retry the transaction. If persistent, check for hot-key contention — see [troubleshooting.md](references/troubleshooting.md).
    - **Transaction exceeds limits:** Split into batches under 3,000 rows — see [batched-migration.md](references/ddl-migrations/batched-migration.md).
    - **Token expiration mid-operation:** Generate a fresh IAM token — see [authentication-guide.md](references/auth/authentication-guide.md). See [troubleshooting.md](references/troubleshooting.md) for other issues.
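A backoff loop for the OCC retry might be sketched like this; the driver's actual serialization error class is unknown here, so a generic exception with a string match stands in:

```python
import random
import time

def retry_transact(run, attempts: int = 5, base_delay: float = 0.05):
    """Retry OCC serialization conflicts with jittered exponential backoff."""
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return run()
        except RuntimeError as err:  # stand-in for the driver's error class
            if "serialization" not in str(err) or attempt == attempts:
                raise
            time.sleep(delay * random.random())
            delay *= 2

# Usage with a stub that fails once, then commits:
state = {"tries": 0}

def flaky_commit():
    state["tries"] += 1
    if state["tries"] < 2:
        raise RuntimeError("serialization failure, please retry")
    return "committed"

outcome = retry_transact(flaky_commit, base_delay=0.0)
```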
    
    ---
    
    ## Additional Resources
    
    - [Aurora DSQL Documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/)
    - [Code Samples Repository](https://github.com/aws-samples/aurora-dsql-samples)
    - [PostgreSQL Compatibility](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/working-with-postgresql-compatibility.html)
    - [CloudFormation Resource](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dsql-cluster.html)
    
  • src/aurora-dsql-mcp-server/skills/amazon-aurora-dsql-skill/SKILL.md (skill)
    Show content (17773 bytes)
    ---
    name: amazon aurora dsql
    description: "Build with Aurora DSQL — manage schemas, execute queries, handle migrations, diagnose query plans, and develop applications with a serverless, distributed SQL database. Covers IAM auth, multi-tenant patterns, MySQL-to-DSQL migration, DDL operations, and query plan explainability. Triggers on phrases like: DSQL, Aurora DSQL, create DSQL table, DSQL schema, migrate to DSQL, distributed SQL database, serverless PostgreSQL-compatible database, DSQL query plan, DSQL EXPLAIN ANALYZE, why is my DSQL query slow."
    ---
    
    # Amazon Aurora DSQL Skill
    
    Aurora DSQL is a serverless, PostgreSQL-compatible distributed SQL database. This skill provides direct database interaction via MCP tools, schema management, migration support, and multi-tenant patterns.
    
    **Key capabilities:**
    
    - Direct query execution via MCP tools
    - Schema management with DSQL constraints
    - Migration support and safe schema evolution
    - Multi-tenant isolation patterns
    - IAM-based authentication
    
    ---
    
    ## Reference Files
    
    Load these files as needed for detailed guidance:
    
    ### [development-guide.md](references/development-guide.md)
    
    **When:** ALWAYS load before implementing schema changes or database operations
    **Contains:** [Best Practices](references/development-guide.md#best-practices), DDL rules, connection patterns, transaction limits, data type serialization patterns, application-layer referential integrity instructions, security best practices
    
    ### MCP:
    
    #### [mcp-setup.md](mcp/mcp-setup.md)
    
    **When:** Always load for guidance on using or updating the DSQL MCP server
    **Contains:** Instructions for setting up the DSQL MCP server with two configuration options, as
    sampled in [mcp/.mcp.json](mcp/.mcp.json):
    
    1. Documentation-Tools Only
    2. Database Operations (requires a cluster endpoint)
    
    #### [mcp-tools.md](mcp/mcp-tools.md)
    
    **When:** Load when you need detailed MCP tool syntax and examples. PREFER MCP tools for ad-hoc queries — execute directly rather than writing scripts.
    **Contains:** Tool parameters, detailed examples, usage patterns, [input validation](mcp/tools/input-validation.md)
    
    ### [language.md](references/language.md)
    
    **When:** MUST load when making language-specific implementation choices. ALWAYS prefer DSQL Connector when available.
    **Contains:** Driver selection, framework patterns, connection code for Python/JS/Go/Java/Rust
    
    ### [dsql-examples.md](references/dsql-examples.md)
    
    **When:** Load when looking for specific implementation examples
    **Contains:** Code examples, repository patterns, multi-tenant implementations
    
    ### [troubleshooting.md](references/troubleshooting.md)
    
    **When:** Load when debugging errors or unexpected behavior. SHOULD always consult for OCC errors, connection failures, or unexpected query results.
    **Contains:** Common pitfalls, error messages, solutions
    
    ### [onboarding.md](references/onboarding.md)
    
    **When:** User explicitly requests to "Get started with DSQL" or similar phrase
    **Contains:** Interactive step-by-step guide for new users
    
    ### [access-control.md](references/access-control.md)
    
    **When:** MUST load when creating database roles, granting permissions, setting up schemas for applications, or handling sensitive data. ALWAYS use scoped roles for applications — create database roles with `dsql:DbConnect`.
    **Contains:** Scoped role setup, IAM-to-database role mapping, schema separation for sensitive data, role design patterns
    
    ### DDL Migrations (modular):
    
    #### [ddl-migrations/overview.md](references/ddl-migrations/overview.md)
    
    **When:** MUST load when performing DROP COLUMN, RENAME COLUMN, ALTER COLUMN TYPE, or DROP CONSTRAINT
    **Contains:** Table recreation pattern overview, transaction rules, common verify & swap pattern
    
    #### [ddl-migrations/column-operations.md](references/ddl-migrations/column-operations.md)
    
    **When:** Load for DROP COLUMN, ALTER COLUMN TYPE, SET/DROP NOT NULL, SET/DROP DEFAULT migrations
    **Contains:** Step-by-step migration patterns for column-level changes
    
    #### [ddl-migrations/constraint-operations.md](references/ddl-migrations/constraint-operations.md)
    
    **When:** Load for ADD/DROP CONSTRAINT, MODIFY PRIMARY KEY, column split/merge migrations
    **Contains:** Step-by-step migration patterns for constraint and structural changes
    
    #### [ddl-migrations/batched-migration.md](references/ddl-migrations/batched-migration.md)
    
    **When:** Load when migrating tables exceeding 3,000 rows
    **Contains:** OFFSET-based and cursor-based batching patterns, progress tracking, error handling
    
    ### MySQL Migrations (modular):
    
    #### [mysql-migrations/type-mapping.md](references/mysql-migrations/type-mapping.md)
    
    **When:** MUST load when migrating MySQL schemas to DSQL
    **Contains:** MySQL data type mappings, feature alternatives, DDL operation mapping
    
    #### [mysql-migrations/ddl-operations.md](references/mysql-migrations/ddl-operations.md)
    
    **When:** Load when translating MySQL DDL operations to DSQL equivalents
    **Contains:** ALTER COLUMN, DROP COLUMN, AUTO_INCREMENT, ENUM, SET, FOREIGN KEY migration patterns
    
    #### [mysql-migrations/full-example.md](references/mysql-migrations/full-example.md)
    
    **When:** Load when migrating a complete MySQL table to DSQL
    **Contains:** End-to-end MySQL CREATE TABLE migration example with decision summary
    
    ### Query Plan Explainability (modular):
    
    **When:** MUST load all four at Workflow 8 Phase 0 — [query-plan/plan-interpretation.md](references/query-plan/plan-interpretation.md), [query-plan/catalog-queries.md](references/query-plan/catalog-queries.md), [query-plan/guc-experiments.md](references/query-plan/guc-experiments.md), [query-plan/report-format.md](references/query-plan/report-format.md)
    **Contains:** DSQL node types + Node Duration math + estimation-error bands, pg_class/pg_stats/pg_indexes SQL + correlated-predicate verification, GUC experiment procedures + 30-second skip protocol, required report structure + element checklist + support request template
    
    ---
    
    ## MCP Tools Available
    
    The `aurora-dsql` MCP server provides these tools:
    
    **Database Operations:**
    
    1. **readonly_query** - Execute SELECT queries (returns list of dicts)
    2. **transact** - Execute DDL/DML statements in transaction (takes list of SQL statements)
    3. **get_schema** - Get table structure for a specific table
    
    **Documentation & Knowledge:**
    
    1. **dsql_search_documentation** - Search Aurora DSQL documentation
    2. **dsql_read_documentation** - Read specific documentation pages
    3. **dsql_recommend** - Get DSQL best practice recommendations
    
    **Note:** There is no `list_tables` tool. Use `readonly_query` with information_schema.
    
    See [mcp-setup.md](mcp/mcp-setup.md) for detailed setup instructions.
    See [mcp-tools.md](mcp/mcp-tools.md) for detailed usage and examples.
    
    ### AWS Knowledge MCP (`awsknowledge`)
    
    Consult for verifying DSQL service limits before advising users. The numeric limits below are
    defaults that may change — when a user's decision depends on an exact limit, verify it first:
    
    | Limit                          | Default       | Verify query                       |
    | ------------------------------ | ------------- | ---------------------------------- |
    | Max rows per transaction       | 3,000         | `aurora dsql transaction limits`   |
    | Max data size per transaction  | 10 MiB        | `aurora dsql transaction limits`   |
    | Max transaction duration       | 5 minutes     | `aurora dsql transaction limits`   |
    | Max connections per cluster    | 10,000        | `aurora dsql connection limits`    |
    | Auth token expiry              | 15 minutes    | `aurora dsql authentication token` |
    | Max connection duration        | 60 minutes    | `aurora dsql connection limits`    |
    | Max indexes per table          | 24            | `aurora dsql index limits`         |
    | Max columns per index          | 8             | `aurora dsql index limits`         |
    | IDENTITY/SEQUENCE CACHE values | 1 or >= 65536 | `aurora dsql sequence cache`       |
    
    **When to verify:** Before recommending batch sizes, connection pool settings, or schema designs
    where hitting a limit would cause failures. No need to verify for general guidance or when
    the exact number doesn't affect the user's decision.
    
    **Fallback:** If `awsknowledge` is unavailable, use the defaults above and note to the user
    that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/).
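
The defaults above can feed a pre-flight check before issuing a batch — a minimal sketch, assuming the table's default limit values (not verified live; names are illustrative):

```python
# Pre-flight check against the default DSQL transaction limits above.
# Values are the documented defaults; verify via awsknowledge when the
# exact number matters.
DEFAULT_LIMITS = {
    "max_rows_per_txn": 3_000,
    "max_bytes_per_txn": 10 * 1024 * 1024,  # 10 MiB
}

def check_batch(row_count: int, payload_bytes: int) -> list[str]:
    """Return a list of limit violations (empty list means the batch fits)."""
    violations = []
    if row_count > DEFAULT_LIMITS["max_rows_per_txn"]:
        violations.append(f"rows {row_count} > {DEFAULT_LIMITS['max_rows_per_txn']}")
    if payload_bytes > DEFAULT_LIMITS["max_bytes_per_txn"]:
        violations.append(f"bytes {payload_bytes} > {DEFAULT_LIMITS['max_bytes_per_txn']}")
    return violations
```

Run the check before every `transact` batch and shrink the batch until it returns an empty list.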
    
    ## CLI Scripts Available
    
Bash scripts in [scripts/](scripts/) for cluster management (create, delete, list, cluster info), psql connection, and bulk data loading from local or S3 CSV/TSV/Parquet files.
    See [scripts/README.md](scripts/README.md) for usage.
    
    ---
    
    ## Quick Start
    
    ### 1. List tables and explore schema
    
    ```
    Use readonly_query with information_schema to list tables
    Use get_schema to understand table structure
    ```
    
    ### 2. Query data
    
    ```
    Use readonly_query for SELECT queries
    Always include tenant_id in WHERE clause for multi-tenant apps
    MUST build SQL with safe_query.build() — see mcp/tools/input-validation.md
    ```
    
    ### 3. Execute schema changes
    
    ```
    Use transact tool with list of SQL statements
    Follow one-DDL-per-transaction rule
    Always use CREATE INDEX ASYNC in separate transaction
    ALTER COLUMN TYPE, DROP COLUMN, DROP CONSTRAINT → Table Recreation Pattern (Workflow 6)
    ```
    
    ---
    
    ## Common Workflows
    
    ### Workflow 1: Create Multi-Tenant Schema
    
    1. Create main table with tenant_id column using transact
    2. Create async index on tenant_id in separate transact call
    3. Create composite indexes for common query patterns (separate transact calls)
    4. Verify schema with get_schema
    
    - MUST include tenant_id in all tables
    - MUST use `CREATE INDEX ASYNC` exclusively
    - MUST issue each DDL in its own transact call: `transact(["CREATE TABLE ..."])`
    - MUST store arrays/JSON as TEXT
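
The rules above can be sketched as a sequence of single-statement calls (`transact` here is a stub standing in for the MCP tool, not its real signature; table and index names are illustrative):

```python
calls: list[list[str]] = []

def transact(statements: list[str]) -> None:
    """Stub standing in for the MCP transact tool — records each call."""
    calls.append(statements)

# One DDL per transact call; every index uses CREATE INDEX ASYNC, each in
# its own transaction.
transact(["CREATE TABLE orders ("
          "tenant_id UUID NOT NULL, id UUID NOT NULL, total TEXT, "
          "PRIMARY KEY (tenant_id, id))"])
transact(["CREATE INDEX ASYNC idx_orders_tenant ON orders (tenant_id)"])
transact(["CREATE INDEX ASYNC idx_orders_tenant_total ON orders (tenant_id, total)"])
```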
    
    ### Workflow 2: Safe Data Migration
    
    1. Add column using transact: `transact(["ALTER TABLE ... ADD COLUMN ..."])`
    2. Populate existing rows with UPDATE in separate transact calls (batched under 3,000 rows)
    3. Verify migration with readonly_query using COUNT
    4. Create async index for new column using transact if needed
    
    - MUST add column first, populate later
    - MUST issue ADD COLUMN with only name and type; apply DEFAULT via separate UPDATE
    - MUST batch updates under 3,000 rows in separate transact calls
    - MUST issue each ALTER TABLE in its own transaction
    
    **Recovery — batch fails midway:** Rows already updated keep their new value (each batch committed
    in its own transaction). Resume by filtering on the unset state — e.g. add
    `WHERE new_column IS NULL` (or the sentinel value) to the next UPDATE — and continue from there.
    Re-running the entire migration is safe because the filter naturally excludes completed rows.
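
The resume pattern above can be sketched as a bounded, re-runnable batch — a sketch assuming a Postgres-style `id IN (SELECT … LIMIT …)` subquery to cap each UPDATE (table, column, and batch size are illustrative):

```python
def backfill_batch_sql(table: str, column: str, value_sql: str, batch: int = 2500) -> str:
    """Build one bounded UPDATE that only touches rows not yet migrated,
    so re-running after a mid-migration failure resumes where it left off."""
    return (
        f"UPDATE {table} SET {column} = {value_sql} "
        f"WHERE id IN (SELECT id FROM {table} "
        f"WHERE {column} IS NULL LIMIT {batch})"
    )

# Each batch goes in its own transact call; repeat until no unset rows remain.
sql = backfill_batch_sql("orders", "status", "'active'")
```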
    
    ### Workflow 3: Application-Layer Referential Integrity
    
    **INSERT:** MUST validate parent exists with readonly_query → throw error if not found → insert child with transact.
    
    **DELETE:** MUST check dependents with readonly_query COUNT → return error if dependents exist → delete with transact if safe.
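
The INSERT path can be sketched with stubbed tool callables (`readonly_query`/`transact` here are hypothetical stand-ins, not the real MCP tool signatures, and the f-string SQL is for illustration only — real code must build SQL with `safe_query.build()`):

```python
def insert_child(parent_id: str, data: str, readonly_query, transact) -> None:
    """App-layer FK: verify the parent row exists before inserting the child.
    Illustration only — real code MUST build SQL via safe_query.build()."""
    rows = readonly_query(
        f"SELECT 1 FROM parents WHERE id = '{parent_id}' LIMIT 1"
    )
    if not rows:
        raise ValueError(f"parent {parent_id} not found")
    transact([
        f"INSERT INTO children (parent_id, data) VALUES ('{parent_id}', '{data}')"
    ])
```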
    
    ### Workflow 4: Query with Tenant Isolation
    
    1. **MUST** authorize the caller against the tenant — format validation does not establish authorization
    2. **MUST** build SQL with [`safe_query.build()`](mcp/tools/safe_query.py) — use `allow()`/`regex()` for
       values (emits `'v'`), `ident()` for table/column names (emits `"v"`).
       See [input-validation.md](mcp/tools/input-validation.md)
    3. **MUST** include `tenant_id` in the WHERE clause; reject cross-tenant access at the application layer
    
    ### Workflow 5: Set Up Scoped Database Roles
    
    MUST load [access-control.md](references/access-control.md) for role setup, IAM mapping, and schema permissions.
    
    ### Workflow 6: Table Recreation DDL Migration
    
    DSQL does NOT support direct `ALTER COLUMN TYPE`, `DROP COLUMN`, `DROP CONSTRAINT`, or `MODIFY PRIMARY KEY`. These operations require the **Table Recreation Pattern** — creating a new table, copying data, dropping the original, and renaming. This is a destructive workflow that requires user confirmation at each step.
    
    MUST load [ddl-migrations/overview.md](references/ddl-migrations/overview.md) before attempting any of these operations.
    
    ### Workflow 7: MySQL to DSQL Schema Migration
    
    MUST load [mysql-migrations/type-mapping.md](references/mysql-migrations/type-mapping.md) for type mappings, feature alternatives, and migration steps.
    
    ### Workflow 8: Query Plan Explainability
    
Explains why the DSQL optimizer chose a particular plan. Triggered by slow queries, high DPU, unexpected Full Scans, or plans the user doesn't understand. The deliverable is a structured Markdown diagnostic report, not just conversation — run the workflow end-to-end before answering. Use the `aurora-dsql` MCP when connected; otherwise fall back to raw `psql` with a generated IAM token (see the fallback block below).
    
    **Phase 0 — Load reference material.** Read all four before starting — each has content later phases need verbatim (node-type math, exact catalog SQL, the `>30s` skip protocol, required report elements):
    
    1. [query-plan/plan-interpretation.md](references/query-plan/plan-interpretation.md) — node types, duration math, anomalous values
    2. [query-plan/catalog-queries.md](references/query-plan/catalog-queries.md) — pg_class / pg_stats / pg_indexes SQL
    3. [query-plan/guc-experiments.md](references/query-plan/guc-experiments.md) — GUC procedures and `>30s` skip protocol
    4. [query-plan/report-format.md](references/query-plan/report-format.md) — required report structure
    
**Phase 1 — Capture the plan.** **ALWAYS** run `readonly_query("EXPLAIN ANALYZE VERBOSE …")` on the user's query verbatim (SELECT form) — **ALWAYS** capture a fresh plan from the cluster, even when the user describes the plan or reports an anomaly. **MAY** use `get_schema` or `information_schema` for schema sanity checks. When EXPLAIN fails (`relation does not exist`, `column does not exist`), **MUST** report the error verbatim — **MUST NOT** invent DSQL-specific semantics (e.g., case sensitivity, identifier quoting) as the root cause. Extract Query ID, Planning Time, Execution Time, and DPU Estimate. **SELECT** runs as-is. **UPDATE/DELETE** is rewritten to the equivalent SELECT (same join chain + WHERE) — the optimizer picks the same plan shape. **INSERT**, pl/pgsql, DO blocks, and functions **MUST** be rejected. **MUST NOT** use `transact --allow-writes` for plan capture; it bypasses MCP safety.
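
For the simple single-table case, the UPDATE/DELETE-to-SELECT rewrite can be sketched with a naive regex (a sketch only — joins, subqueries, and RETURNING clauses need manual rewriting):

```python
import re

def dml_to_select(sql: str) -> str:
    """Rewrite a simple single-table UPDATE/DELETE into the equivalent
    SELECT for plan capture. Naive sketch: handles only
    'UPDATE t SET ... WHERE ...' and 'DELETE FROM t WHERE ...'."""
    m = re.match(r"(?is)^\s*UPDATE\s+(\S+)\s+SET\s+.*?\s+(WHERE\s+.*)$", sql)
    if m:
        return f"SELECT * FROM {m.group(1)} {m.group(2)}"
    m = re.match(r"(?is)^\s*DELETE\s+FROM\s+(\S+)\s+(WHERE\s+.*)$", sql)
    if m:
        return f"SELECT * FROM {m.group(1)} {m.group(2)}"
    raise ValueError("only simple single-table UPDATE/DELETE handled")
```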
    
    **Phase 2 — Gather evidence.** Using SQL from `catalog-queries.md`, query `pg_class`, `pg_stats`, `pg_indexes`, `COUNT(*)`, `COUNT(DISTINCT)`. Classify estimation errors per `plan-interpretation.md` (2x–5x minor, 5x–50x significant, 50x+ severe). Detect correlated predicates and data skew.
    
    **Phase 3 — Experiment (conditional).** ≤30s: run GUC experiments per `guc-experiments.md` (default + merge-join-only) plus optional redundant-predicate test. >30s: skip experiments, include the manual GUC testing SQL verbatim in the report, and do not re-run for redundant-predicate testing. Anomalous values (impossible row counts): confirm query results are correct despite the anomalous EXPLAIN, flag as a potential DSQL bug, and produce the Support Request Template from `report-format.md`.
    
    **Phase 4 — Produce the report, invite reassessment.** Produce the full diagnostic report per the "Required Elements Checklist" in [query-plan/report-format.md](references/query-plan/report-format.md) — structure is non-negotiable. End with the "Next Steps" block from that reference so the user can ask for a reassessment after applying a recommendation. When the user says "reassess" (or equivalent), re-run Phase 1–2 and **append an "Addendum: After-Change Performance"** to the original report (before/after table, match against expected impact) rather than producing a new report.
    
    **psql fallback (MCP unavailable).** Pipe statements into `psql` via heredoc and check `$?`; report failures without proceeding on partial evidence:
    
    ```bash
    TOKEN=$(aws dsql generate-db-connect-admin-auth-token --hostname "$HOST" --region "$REGION")
    PGPASSWORD="$TOKEN" psql "host=$HOST port=5432 user=admin dbname=postgres sslmode=require" <<<"EXPLAIN ANALYZE VERBOSE <sql>;"
    ```
    
    **Safety.** Plan capture uses `readonly_query` exclusively — it rejects INSERT/UPDATE/DELETE/DDL at the MCP layer. Rewrite DML to SELECT (Phase 1) rather than asking `transact --allow-writes` to run it; write-mode `transact` bypasses all MCP safety checks. **MUST NOT** run arbitrary DDL/DML or pl/pgsql.
    
    ---
    
    ## Error Scenarios
    
    - **`awsknowledge` returns no results:** Use the default limits in the table above and note that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/).
    - **OCC serialization error:** Retry the transaction. If persistent, check for hot-key contention — see [troubleshooting.md](references/troubleshooting.md).
    - **Transaction exceeds limits:** Split into batches under 3,000 rows — see [batched-migration.md](references/ddl-migrations/batched-migration.md).
    - **Token expiration mid-operation:** Generate a fresh IAM token — see [authentication-guide.md](references/auth/authentication-guide.md). See [troubleshooting.md](references/troubleshooting.md) for other issues.
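
The OCC retry advice above can be sketched as bounded exponential backoff (matching on the word "serialization" in the error message is an assumption — match your driver's actual serialization-failure error class):

```python
import time

def with_occ_retry(fn, max_attempts: int = 5, base_delay: float = 0.05):
    """Retry a transactional callable on OCC serialization conflicts,
    backing off exponentially; re-raise once attempts are exhausted."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception as exc:
            if "serialization" not in str(exc).lower() or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

If retries keep failing, stop and investigate hot-key contention rather than raising the attempt count.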
    
    ---
    
    ## Additional Resources
    
    - [Aurora DSQL Documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/)
    - [Code Samples Repository](https://github.com/aws-samples/aurora-dsql-samples)
    - [PostgreSQL Compatibility](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/working-with-postgresql-compatibility.html)
    - [CloudFormation Resource](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dsql-cluster.html)
    
  • src/aurora-dsql-mcp-server/skills/dsql-skill/SKILL.md (skill, 17759 bytes)
    ---
    name: dsql
    description: "Build with Aurora DSQL — manage schemas, execute queries, handle migrations, diagnose query plans, and develop applications with a serverless, distributed SQL database. Covers IAM auth, multi-tenant patterns, MySQL-to-DSQL migration, DDL operations, and query plan explainability. Triggers on phrases like: DSQL, Aurora DSQL, create DSQL table, DSQL schema, migrate to DSQL, distributed SQL database, serverless PostgreSQL-compatible database, DSQL query plan, DSQL EXPLAIN ANALYZE, why is my DSQL query slow."
    ---
    
    # Amazon Aurora DSQL Skill
    
    Aurora DSQL is a serverless, PostgreSQL-compatible distributed SQL database. This skill provides direct database interaction via MCP tools, schema management, migration support, and multi-tenant patterns.
    
    **Key capabilities:**
    
    - Direct query execution via MCP tools
    - Schema management with DSQL constraints
    - Migration support and safe schema evolution
    - Multi-tenant isolation patterns
    - IAM-based authentication
    
    ---
    
    ## Reference Files
    
    Load these files as needed for detailed guidance:
    
    ### [development-guide.md](references/development-guide.md)
    
    **When:** ALWAYS load before implementing schema changes or database operations
    **Contains:** [Best Practices](references/development-guide.md#best-practices), DDL rules, connection patterns, transaction limits, data type serialization patterns, application-layer referential integrity instructions, security best practices
    
    ### MCP:
    
    #### [mcp-setup.md](mcp/mcp-setup.md)
    
    **When:** Always load for guidance on using or updating the DSQL MCP server
    **Contains:** Instructions for setting up the DSQL MCP server with two configuration options, as
    sampled in [mcp/.mcp.json](mcp/.mcp.json)
    
    1. Documentation-Tools Only
    2. Database Operations (requires a cluster endpoint)
    
    #### [mcp-tools.md](mcp/mcp-tools.md)
    
    **When:** Load when you need detailed MCP tool syntax and examples. PREFER MCP tools for ad-hoc queries — execute directly rather than writing scripts.
    **Contains:** Tool parameters, detailed examples, usage patterns, [input validation](mcp/tools/input-validation.md)
    
    ### [language.md](references/language.md)
    
    **When:** MUST load when making language-specific implementation choices. ALWAYS prefer DSQL Connector when available.
    **Contains:** Driver selection, framework patterns, connection code for Python/JS/Go/Java/Rust
    
    ### [dsql-examples.md](references/dsql-examples.md)
    
    **When:** Load when looking for specific implementation examples
    **Contains:** Code examples, repository patterns, multi-tenant implementations
    
    ### [troubleshooting.md](references/troubleshooting.md)
    
    **When:** Load when debugging errors or unexpected behavior. SHOULD always consult for OCC errors, connection failures, or unexpected query results.
    **Contains:** Common pitfalls, error messages, solutions
    
    ### [onboarding.md](references/onboarding.md)
    
    **When:** User explicitly requests to "Get started with DSQL" or similar phrase
    **Contains:** Interactive step-by-step guide for new users
    
    ### [access-control.md](references/access-control.md)
    
    **When:** MUST load when creating database roles, granting permissions, setting up schemas for applications, or handling sensitive data. ALWAYS use scoped roles for applications — create database roles with `dsql:DbConnect`.
    **Contains:** Scoped role setup, IAM-to-database role mapping, schema separation for sensitive data, role design patterns
    
    ### DDL Migrations (modular):
    
    #### [ddl-migrations/overview.md](references/ddl-migrations/overview.md)
    
    **When:** MUST load when performing DROP COLUMN, RENAME COLUMN, ALTER COLUMN TYPE, or DROP CONSTRAINT
    **Contains:** Table recreation pattern overview, transaction rules, common verify & swap pattern
    
    #### [ddl-migrations/column-operations.md](references/ddl-migrations/column-operations.md)
    
    **When:** Load for DROP COLUMN, ALTER COLUMN TYPE, SET/DROP NOT NULL, SET/DROP DEFAULT migrations
    **Contains:** Step-by-step migration patterns for column-level changes
    
    #### [ddl-migrations/constraint-operations.md](references/ddl-migrations/constraint-operations.md)
    
    **When:** Load for ADD/DROP CONSTRAINT, MODIFY PRIMARY KEY, column split/merge migrations
    **Contains:** Step-by-step migration patterns for constraint and structural changes
    
    #### [ddl-migrations/batched-migration.md](references/ddl-migrations/batched-migration.md)
    
    **When:** Load when migrating tables exceeding 3,000 rows
    **Contains:** OFFSET-based and cursor-based batching patterns, progress tracking, error handling
    
    ### MySQL Migrations (modular):
    
    #### [mysql-migrations/type-mapping.md](references/mysql-migrations/type-mapping.md)
    
    **When:** MUST load when migrating MySQL schemas to DSQL
    **Contains:** MySQL data type mappings, feature alternatives, DDL operation mapping
    
    #### [mysql-migrations/ddl-operations.md](references/mysql-migrations/ddl-operations.md)
    
    **When:** Load when translating MySQL DDL operations to DSQL equivalents
    **Contains:** ALTER COLUMN, DROP COLUMN, AUTO_INCREMENT, ENUM, SET, FOREIGN KEY migration patterns
    
    #### [mysql-migrations/full-example.md](references/mysql-migrations/full-example.md)
    
    **When:** Load when migrating a complete MySQL table to DSQL
    **Contains:** End-to-end MySQL CREATE TABLE migration example with decision summary
    
    ### Query Plan Explainability (modular):
    
    **When:** MUST load all four at Workflow 8 Phase 0 — [query-plan/plan-interpretation.md](references/query-plan/plan-interpretation.md), [query-plan/catalog-queries.md](references/query-plan/catalog-queries.md), [query-plan/guc-experiments.md](references/query-plan/guc-experiments.md), [query-plan/report-format.md](references/query-plan/report-format.md)
    **Contains:** DSQL node types + Node Duration math + estimation-error bands, pg_class/pg_stats/pg_indexes SQL + correlated-predicate verification, GUC experiment procedures + 30-second skip protocol, required report structure + element checklist + support request template
    
    ---
    
    ## MCP Tools Available
    
    The `aurora-dsql` MCP server provides these tools:
    
    **Database Operations:**
    
    1. **readonly_query** - Execute SELECT queries (returns list of dicts)
    2. **transact** - Execute DDL/DML statements in transaction (takes list of SQL statements)
    3. **get_schema** - Get table structure for a specific table
    
    **Documentation & Knowledge:**
    
    1. **dsql_search_documentation** - Search Aurora DSQL documentation
    2. **dsql_read_documentation** - Read specific documentation pages
    3. **dsql_recommend** - Get DSQL best practice recommendations
    
    **Note:** There is no `list_tables` tool. Use `readonly_query` with information_schema.
    
    See [mcp-setup.md](mcp/mcp-setup.md) for detailed setup instructions.
    See [mcp-tools.md](mcp/mcp-tools.md) for detailed usage and examples.
    
    ### AWS Knowledge MCP (`awsknowledge`)
    
    Consult for verifying DSQL service limits before advising users. The numeric limits below are
    defaults that may change — when a user's decision depends on an exact limit, verify it first:
    
    | Limit                          | Default       | Verify query                       |
    | ------------------------------ | ------------- | ---------------------------------- |
    | Max rows per transaction       | 3,000         | `aurora dsql transaction limits`   |
    | Max data size per transaction  | 10 MiB        | `aurora dsql transaction limits`   |
    | Max transaction duration       | 5 minutes     | `aurora dsql transaction limits`   |
    | Max connections per cluster    | 10,000        | `aurora dsql connection limits`    |
    | Auth token expiry              | 15 minutes    | `aurora dsql authentication token` |
    | Max connection duration        | 60 minutes    | `aurora dsql connection limits`    |
    | Max indexes per table          | 24            | `aurora dsql index limits`         |
    | Max columns per index          | 8             | `aurora dsql index limits`         |
    | IDENTITY/SEQUENCE CACHE values | 1 or >= 65536 | `aurora dsql sequence cache`       |
    
    **When to verify:** Before recommending batch sizes, connection pool settings, or schema designs
    where hitting a limit would cause failures. No need to verify for general guidance or when
    the exact number doesn't affect the user's decision.
    
    **Fallback:** If `awsknowledge` is unavailable, use the defaults above and note to the user
    that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/).
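
The defaults above can feed a pre-flight check before issuing a batch — a minimal sketch, assuming the table's default limit values (not verified live; names are illustrative):

```python
# Pre-flight check against the default DSQL transaction limits above.
# Values are the documented defaults; verify via awsknowledge when the
# exact number matters.
DEFAULT_LIMITS = {
    "max_rows_per_txn": 3_000,
    "max_bytes_per_txn": 10 * 1024 * 1024,  # 10 MiB
}

def check_batch(row_count: int, payload_bytes: int) -> list[str]:
    """Return a list of limit violations (empty list means the batch fits)."""
    violations = []
    if row_count > DEFAULT_LIMITS["max_rows_per_txn"]:
        violations.append(f"rows {row_count} > {DEFAULT_LIMITS['max_rows_per_txn']}")
    if payload_bytes > DEFAULT_LIMITS["max_bytes_per_txn"]:
        violations.append(f"bytes {payload_bytes} > {DEFAULT_LIMITS['max_bytes_per_txn']}")
    return violations
```

Run the check before every `transact` batch and shrink the batch until it returns an empty list.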
    
    ## CLI Scripts Available
    
Bash scripts in [scripts/](scripts/) for cluster management (create, delete, list, cluster info), psql connection, and bulk data loading from local or S3 CSV/TSV/Parquet files.
    See [scripts/README.md](scripts/README.md) for usage.
    
    ---
    
    ## Quick Start
    
    ### 1. List tables and explore schema
    
    ```
    Use readonly_query with information_schema to list tables
    Use get_schema to understand table structure
    ```
    
    ### 2. Query data
    
    ```
    Use readonly_query for SELECT queries
    Always include tenant_id in WHERE clause for multi-tenant apps
    MUST build SQL with safe_query.build() — see mcp/tools/input-validation.md
    ```
    
    ### 3. Execute schema changes
    
    ```
    Use transact tool with list of SQL statements
    Follow one-DDL-per-transaction rule
    Always use CREATE INDEX ASYNC in separate transaction
    ALTER COLUMN TYPE, DROP COLUMN, DROP CONSTRAINT → Table Recreation Pattern (Workflow 6)
    ```
    
    ---
    
    ## Common Workflows
    
    ### Workflow 1: Create Multi-Tenant Schema
    
    1. Create main table with tenant_id column using transact
    2. Create async index on tenant_id in separate transact call
    3. Create composite indexes for common query patterns (separate transact calls)
    4. Verify schema with get_schema
    
    - MUST include tenant_id in all tables
    - MUST use `CREATE INDEX ASYNC` exclusively
    - MUST issue each DDL in its own transact call: `transact(["CREATE TABLE ..."])`
    - MUST store arrays/JSON as TEXT
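
The rules above can be sketched as a sequence of single-statement calls (`transact` here is a stub standing in for the MCP tool, not its real signature; table and index names are illustrative):

```python
calls: list[list[str]] = []

def transact(statements: list[str]) -> None:
    """Stub standing in for the MCP transact tool — records each call."""
    calls.append(statements)

# One DDL per transact call; every index uses CREATE INDEX ASYNC, each in
# its own transaction.
transact(["CREATE TABLE orders ("
          "tenant_id UUID NOT NULL, id UUID NOT NULL, total TEXT, "
          "PRIMARY KEY (tenant_id, id))"])
transact(["CREATE INDEX ASYNC idx_orders_tenant ON orders (tenant_id)"])
transact(["CREATE INDEX ASYNC idx_orders_tenant_total ON orders (tenant_id, total)"])
```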
    
    ### Workflow 2: Safe Data Migration
    
    1. Add column using transact: `transact(["ALTER TABLE ... ADD COLUMN ..."])`
    2. Populate existing rows with UPDATE in separate transact calls (batched under 3,000 rows)
    3. Verify migration with readonly_query using COUNT
    4. Create async index for new column using transact if needed
    
    - MUST add column first, populate later
    - MUST issue ADD COLUMN with only name and type; apply DEFAULT via separate UPDATE
    - MUST batch updates under 3,000 rows in separate transact calls
    - MUST issue each ALTER TABLE in its own transaction
    
    **Recovery — batch fails midway:** Rows already updated keep their new value (each batch committed
    in its own transaction). Resume by filtering on the unset state — e.g. add
    `WHERE new_column IS NULL` (or the sentinel value) to the next UPDATE — and continue from there.
    Re-running the entire migration is safe because the filter naturally excludes completed rows.
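
The resume pattern above can be sketched as a bounded, re-runnable batch — a sketch assuming a Postgres-style `id IN (SELECT … LIMIT …)` subquery to cap each UPDATE (table, column, and batch size are illustrative):

```python
def backfill_batch_sql(table: str, column: str, value_sql: str, batch: int = 2500) -> str:
    """Build one bounded UPDATE that only touches rows not yet migrated,
    so re-running after a mid-migration failure resumes where it left off."""
    return (
        f"UPDATE {table} SET {column} = {value_sql} "
        f"WHERE id IN (SELECT id FROM {table} "
        f"WHERE {column} IS NULL LIMIT {batch})"
    )

# Each batch goes in its own transact call; repeat until no unset rows remain.
sql = backfill_batch_sql("orders", "status", "'active'")
```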
    
    ### Workflow 3: Application-Layer Referential Integrity
    
    **INSERT:** MUST validate parent exists with readonly_query → throw error if not found → insert child with transact.
    
    **DELETE:** MUST check dependents with readonly_query COUNT → return error if dependents exist → delete with transact if safe.
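
The INSERT path can be sketched with stubbed tool callables (`readonly_query`/`transact` here are hypothetical stand-ins, not the real MCP tool signatures, and the f-string SQL is for illustration only — real code must build SQL with `safe_query.build()`):

```python
def insert_child(parent_id: str, data: str, readonly_query, transact) -> None:
    """App-layer FK: verify the parent row exists before inserting the child.
    Illustration only — real code MUST build SQL via safe_query.build()."""
    rows = readonly_query(
        f"SELECT 1 FROM parents WHERE id = '{parent_id}' LIMIT 1"
    )
    if not rows:
        raise ValueError(f"parent {parent_id} not found")
    transact([
        f"INSERT INTO children (parent_id, data) VALUES ('{parent_id}', '{data}')"
    ])
```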
    
    ### Workflow 4: Query with Tenant Isolation
    
    1. **MUST** authorize the caller against the tenant — format validation does not establish authorization
    2. **MUST** build SQL with [`safe_query.build()`](mcp/tools/safe_query.py) — use `allow()`/`regex()` for
       values (emits `'v'`), `ident()` for table/column names (emits `"v"`).
       See [input-validation.md](mcp/tools/input-validation.md)
    3. **MUST** include `tenant_id` in the WHERE clause; reject cross-tenant access at the application layer
    
    ### Workflow 5: Set Up Scoped Database Roles
    
    MUST load [access-control.md](references/access-control.md) for role setup, IAM mapping, and schema permissions.
    
    ### Workflow 6: Table Recreation DDL Migration
    
    DSQL does NOT support direct `ALTER COLUMN TYPE`, `DROP COLUMN`, `DROP CONSTRAINT`, or `MODIFY PRIMARY KEY`. These operations require the **Table Recreation Pattern** — creating a new table, copying data, dropping the original, and renaming. This is a destructive workflow that requires user confirmation at each step.
    
    MUST load [ddl-migrations/overview.md](references/ddl-migrations/overview.md) before attempting any of these operations.
    
    ### Workflow 7: MySQL to DSQL Schema Migration
    
    MUST load [mysql-migrations/type-mapping.md](references/mysql-migrations/type-mapping.md) for type mappings, feature alternatives, and migration steps.
    
    ### Workflow 8: Query Plan Explainability
    
Explains why the DSQL optimizer chose a particular plan. Triggered by slow queries, high DPU, unexpected Full Scans, or plans the user doesn't understand. The deliverable is a structured Markdown diagnostic report, not just conversation — run the workflow end-to-end before answering. Use the `aurora-dsql` MCP when connected; otherwise fall back to raw `psql` with a generated IAM token (see the fallback block below).
    
    **Phase 0 — Load reference material.** Read all four before starting — each has content later phases need verbatim (node-type math, exact catalog SQL, the `>30s` skip protocol, required report elements):
    
    1. [query-plan/plan-interpretation.md](references/query-plan/plan-interpretation.md) — node types, duration math, anomalous values
    2. [query-plan/catalog-queries.md](references/query-plan/catalog-queries.md) — pg_class / pg_stats / pg_indexes SQL
    3. [query-plan/guc-experiments.md](references/query-plan/guc-experiments.md) — GUC procedures and `>30s` skip protocol
    4. [query-plan/report-format.md](references/query-plan/report-format.md) — required report structure
    
**Phase 1 — Capture the plan.** **ALWAYS** run `readonly_query("EXPLAIN ANALYZE VERBOSE …")` on the user's query verbatim (SELECT form) — **ALWAYS** capture a fresh plan from the cluster, even when the user describes the plan or reports an anomaly. **MAY** use `get_schema` or `information_schema` for schema sanity checks. When EXPLAIN fails (`relation does not exist`, `column does not exist`), **MUST** report the error verbatim — **MUST NOT** invent DSQL-specific semantics (e.g., case sensitivity, identifier quoting) as the root cause. Extract Query ID, Planning Time, Execution Time, and DPU Estimate. **SELECT** runs as-is. **UPDATE/DELETE** is rewritten to the equivalent SELECT (same join chain + WHERE) — the optimizer picks the same plan shape. **INSERT**, pl/pgsql, DO blocks, and functions **MUST** be rejected. **MUST NOT** use `transact --allow-writes` for plan capture; it bypasses MCP safety.
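
For the simple single-table case, the UPDATE/DELETE-to-SELECT rewrite can be sketched with a naive regex (a sketch only — joins, subqueries, and RETURNING clauses need manual rewriting):

```python
import re

def dml_to_select(sql: str) -> str:
    """Rewrite a simple single-table UPDATE/DELETE into the equivalent
    SELECT for plan capture. Naive sketch: handles only
    'UPDATE t SET ... WHERE ...' and 'DELETE FROM t WHERE ...'."""
    m = re.match(r"(?is)^\s*UPDATE\s+(\S+)\s+SET\s+.*?\s+(WHERE\s+.*)$", sql)
    if m:
        return f"SELECT * FROM {m.group(1)} {m.group(2)}"
    m = re.match(r"(?is)^\s*DELETE\s+FROM\s+(\S+)\s+(WHERE\s+.*)$", sql)
    if m:
        return f"SELECT * FROM {m.group(1)} {m.group(2)}"
    raise ValueError("only simple single-table UPDATE/DELETE handled")
```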
    
    **Phase 2 — Gather evidence.** Using SQL from `catalog-queries.md`, query `pg_class`, `pg_stats`, `pg_indexes`, `COUNT(*)`, `COUNT(DISTINCT)`. Classify estimation errors per `plan-interpretation.md` (2x–5x minor, 5x–50x significant, 50x+ severe). Detect correlated predicates and data skew.
    
    **Phase 3 — Experiment (conditional).** ≤30s: run GUC experiments per `guc-experiments.md` (default + merge-join-only) plus optional redundant-predicate test. >30s: skip experiments, include the manual GUC testing SQL verbatim in the report, and do not re-run for redundant-predicate testing. Anomalous values (impossible row counts): confirm query results are correct despite the anomalous EXPLAIN, flag as a potential DSQL bug, and produce the Support Request Template from `report-format.md`.
    
    **Phase 4 — Produce the report, invite reassessment.** Produce the full diagnostic report per the "Required Elements Checklist" in [query-plan/report-format.md](references/query-plan/report-format.md) — structure is non-negotiable. End with the "Next Steps" block from that reference so the user can ask for a reassessment after applying a recommendation. When the user says "reassess" (or equivalent), re-run Phase 1–2 and **append an "Addendum: After-Change Performance"** to the original report (before/after table, match against expected impact) rather than producing a new report.
    
    **psql fallback (MCP unavailable).** Pipe statements into `psql` via heredoc and check `$?`; report failures without proceeding on partial evidence:
    
    ```bash
    TOKEN=$(aws dsql generate-db-connect-admin-auth-token --hostname "$HOST" --region "$REGION")
    PGPASSWORD="$TOKEN" psql "host=$HOST port=5432 user=admin dbname=postgres sslmode=require" <<<"EXPLAIN ANALYZE VERBOSE <sql>;"
    ```
    
    **Safety.** Plan capture uses `readonly_query` exclusively — it rejects INSERT/UPDATE/DELETE/DDL at the MCP layer. Rewrite DML to SELECT (Phase 1) rather than asking `transact --allow-writes` to run it; write-mode `transact` bypasses all MCP safety checks. **MUST NOT** run arbitrary DDL/DML or pl/pgsql.
    
    ---
    
    ## Error Scenarios
    
    - **`awsknowledge` returns no results:** Use the default limits in the table above and note that limits should be verified against [DSQL documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/).
    - **OCC serialization error:** Retry the transaction. If persistent, check for hot-key contention — see [troubleshooting.md](references/troubleshooting.md).
    - **Transaction exceeds limits:** Split into batches under 3,000 rows — see [batched-migration.md](references/ddl-migrations/batched-migration.md).
    - **Token expiration mid-operation:** Generate a fresh IAM token — see [authentication-guide.md](references/auth/authentication-guide.md). See [troubleshooting.md](references/troubleshooting.md) for other issues.
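
The OCC retry advice above can be sketched as bounded exponential backoff (matching on the word "serialization" in the error message is an assumption — match your driver's actual serialization-failure error class):

```python
import time

def with_occ_retry(fn, max_attempts: int = 5, base_delay: float = 0.05):
    """Retry a transactional callable on OCC serialization conflicts,
    backing off exponentially; re-raise once attempts are exhausted."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception as exc:
            if "serialization" not in str(exc).lower() or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

If retries keep failing, stop and investigate hot-key contention rather than raising the attempt count.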
    
    ---
    
    ## Additional Resources
    
    - [Aurora DSQL Documentation](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/)
    - [Code Samples Repository](https://github.com/aws-samples/aurora-dsql-samples)
    - [PostgreSQL Compatibility](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/working-with-postgresql-compatibility.html)
    - [CloudFormation Resource](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dsql-cluster.html)
    
  • src/aurora-dsql-mcp-server/kiro_power/mcp.json (mcp_server, 409 bytes)
    {
      "mcpServers": {
        "aurora-dsql": {
          "args": [
            "awslabs.aurora-dsql-mcp-server@latest"
          ],
          "command": "uvx",
          "env": {
            "FASTMCP_LOG_LEVEL": "ERROR"
          }
        },
        "aws-core": {
          "args": [
            "awslabs.core-mcp-server@latest"
          ],
          "command": "uvx",
          "disabled": true,
          "env": {
            "FASTMCP_LOG_LEVEL": "ERROR"
          }
        }
      }
    }
    

README

Open source MCP servers for AWS

A suite of specialized MCP servers that help you get the most out of AWS, wherever you use MCP.


[!TIP] The Agent Toolkit for AWS is now live! The Agent Toolkit for AWS is the successor to the MCP servers, plugins, and skills available on AWS Labs, and was informed by feedback from customers like you. If you're building production software using coding agents or building agents for your own customers, we recommend Agent Toolkit for AWS. It includes IAM condition keys to distinguish agent actions from human ones, CloudWatch and CloudTrail visibility, and skills that have been evaluated for accuracy and effectiveness. This repo continues to work and accept contributions. Over time, the most useful projects here will move into Agent Toolkit for AWS.


What is the Model Context Protocol (MCP) and how does it work with MCP Servers for AWS?

The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you're building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need.


An MCP Server is a lightweight program that exposes specific capabilities through the standardized Model Context Protocol. Host applications (such as chatbots, IDEs, and other AI tools) have MCP clients that maintain 1:1 connections with MCP servers. Common MCP clients include agentic AI coding assistants (like Kiro, Cline, Cursor, Windsurf) as well as chatbot applications like Claude Desktop, with more clients coming soon. MCP servers can access local data sources and remote services to provide additional context that improves the generated outputs from the models.

MCP Servers for AWS use this protocol to provide AI applications access to AWS documentation, contextual guidance, and best practices. Through the standardized MCP client-server architecture, AWS capabilities become an intelligent extension of your development environment or AI application.

MCP Servers for AWS enable enhanced cloud-native development, infrastructure management, and development workflows—making AI-assisted cloud computing more accessible and efficient.

The Model Context Protocol is an open source project run by Anthropic, PBC, and open to contributions from the entire community. For more information, see the official Model Context Protocol documentation.


Supported transport mechanisms

The MCP protocol currently defines two standard transport mechanisms for client-server communication:

  • stdio: communication over standard input and standard output
  • streamable HTTP

The MCP servers in this repository are designed to support stdio only.
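Concretely, the stdio transport exchanges newline-delimited JSON-RPC 2.0 messages over the server process's standard input and output. The following Python sketch is illustrative only: the protocolVersion string and client name are placeholder assumptions, not values required by any server in this repository.

```python
import json

def frame(message: dict) -> bytes:
    """Serialize one JSON-RPC message for the stdio transport:
    a single JSON object followed by a newline."""
    return (json.dumps(message) + "\n").encode("utf-8")

# An illustrative "initialize" request; a client would write this to the
# server subprocess's stdin and read the newline-delimited response from
# its stdout. Field values here are assumptions for the example.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

wire = frame(initialize)
print(wire.decode("utf-8").strip())
```

In practice an MCP client library handles this framing for you; the sketch only shows what crosses the pipe.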

You are responsible for ensuring that your use of these servers complies with the terms governing them, and with any laws, rules, regulations, policies, or standards that apply to you.

Server Sent Events Support Removal

Important Notice: On May 26th, 2025, Server Sent Events (SSE) support was removed from all MCP servers in their latest major versions. This change aligns with the Model Context Protocol specification's backwards compatibility guidelines.

We are actively working towards supporting Streamable HTTP, which will provide improved transport capabilities for future versions.

For applications still requiring SSE support, please use the previous major version of the respective MCP server until you can migrate to alternative transport methods.

Why MCP Servers for AWS?

MCP servers enhance the capabilities of foundation models (FMs) in several key ways:

  • Improved Output Quality: By providing relevant information directly in the model's context, MCP servers significantly improve model responses for specialized domains like AWS services. This approach reduces hallucinations, provides more accurate technical details, enables more precise code generation, and ensures recommendations align with current AWS best practices and service capabilities.

  • Access to Latest Documentation: FMs may not have knowledge of recent releases, APIs, or SDKs. MCP servers bridge this gap by pulling in up-to-date documentation, ensuring your AI assistant always works with the latest AWS capabilities.

  • Workflow Automation: MCP servers convert common workflows into tools that foundation models can use directly. Whether it's CDK, Terraform, or other AWS-specific workflows, these tools enable AI assistants to perform complex tasks with greater accuracy and efficiency.

  • Specialized Domain Knowledge: MCP servers provide deep, contextual knowledge about AWS services that might not be fully represented in foundation models' training data, enabling more accurate and helpful responses for cloud development tasks.

Available MCP Servers: Quick Installation

Get started quickly with one-click installation for popular MCP clients; each server's README provides direct install links for Cursor and VS Code.

🚀 Getting Started with AWS

For AWS interactions, we recommend starting with:

  • AWS MCP Server (in preview): Start here for secure, auditable AWS interactions! This remote, managed MCP server is hosted by AWS and combines comprehensive AWS API support with access to the latest AWS documentation, API references, What's New posts, and Getting Started information. Features pre-built Agent SOPs that follow AWS best practices, helping agents complete complex multi-step AWS tasks reliably. Built with safety and control in mind: syntactically validated API calls, IAM-based permissions with zero credential exposure, and complete CloudTrail audit logging. Access all AWS services for managing infrastructure, exploring resources, and executing AWS operations with full transparency and traceability.

Browse by What You're Building

📚 Real-time access to official AWS documentation

  • AWS Knowledge MCP Server: A remote, fully-managed MCP server hosted by AWS that provides access to the latest AWS docs, API references, What's New Posts, Getting Started information, Builder Center, Blog posts, Architectural references, and Well-Architected guidance.
  • AWS Documentation MCP Server: Get latest AWS docs and API references

🏗️ Infrastructure & Deployment

Build, deploy, and manage cloud infrastructure with Infrastructure as Code best practices.

  • AWS IaC MCP Server: Complete Infrastructure as Code toolkit with CloudFormation documentation access, CDK best practices guidance, construct examples, security validation, and deployment troubleshooting
  • AWS Cloud Control API MCP Server ⚠️ DEPRECATED: Direct AWS resource management with security scanning and best practices (use AWS IaC MCP Server instead)

Container Platforms

  • Amazon EKS MCP Server: Kubernetes cluster management and application deployment
  • Amazon ECS MCP Server: Container orchestration and ECS application deployment
  • Finch MCP Server: Local container building with ECR integration

Serverless & Functions

  • AWS Serverless MCP Server: Complete serverless application lifecycle with SAM CLI
  • AWS Lambda Tool MCP Server: Execute Lambda functions as AI tools for private resource access

Support

  • AWS Support MCP Server: Help users create and manage AWS Support cases

🤖 AI & Machine Learning

Enhance AI applications with knowledge retrieval, content generation, and ML capabilities

  • Amazon Bedrock Knowledge Bases Retrieval MCP Server: Query enterprise knowledge bases with citation support
  • Amazon Kendra Index MCP Server: Enterprise search and RAG enhancement
  • Amazon Q Business MCP Server: AI assistant for your ingested content with anonymous access
  • Amazon Q Index MCP Server: Data accessors to search through enterprise's Q index
  • AWS Bedrock Custom Model Import MCP Server: Manage custom models in Bedrock for on-demand inference
  • AWS Bedrock AgentCore MCP Server: Provides comprehensive documentation access on AgentCore platform services, APIs, and best practices
  • Amazon SageMaker AI MCP Server: SageMaker AI resource management and model development

📊 Data & Analytics

Work with databases, caching systems, and data processing workflows.

SQL & NoSQL Databases

  • Amazon DynamoDB MCP Server: DynamoDB expert design guidance and data modeling assistance
  • Amazon Aurora PostgreSQL MCP Server: PostgreSQL database operations via RDS Data API
  • Amazon Aurora MySQL MCP Server: MySQL database operations via RDS Data API
  • Amazon Aurora DSQL MCP Server: Distributed SQL with PostgreSQL compatibility
  • Amazon DocumentDB MCP Server: MongoDB-compatible document database operations
  • Amazon Neptune MCP Server: Graph database queries with openCypher and Gremlin
  • Amazon Keyspaces MCP Server: Apache Cassandra-compatible operations
  • Amazon Timestream for InfluxDB MCP Server: Time-series database operations and InfluxDB compatibility
  • AWS S3 Tables MCP Server: Manage S3 Tables for optimized analytics
  • Amazon Redshift MCP Server: Data warehouse operations and analytics queries
  • AWS IoT SiteWise MCP Server: Industrial IoT asset management, data ingestion, and analytics
Search & Analytics

Backend API Providers

  • AWS AppSync MCP Server: Manage and interact with application backends powered by AWS AppSync

Caching & Performance

  • Amazon ElastiCache MCP Server: Complete ElastiCache control plane operations
  • Amazon ElastiCache / MemoryDB for Valkey MCP Server: Advanced data structures and caching with Valkey
  • Amazon ElastiCache for Memcached MCP Server: High-speed caching with Memcached protocol

🛠️ Developer Tools & Support

Accelerate development with code analysis, documentation, and testing utilities.

  • AWS IAM MCP Server: Comprehensive IAM user, role, group, and policy management with security best practices
  • OpenAPI MCP Server: Dynamic API integration through OpenAPI specifications

📡 Integration & Messaging

Connect systems with messaging, workflows, and location services.

  • Amazon SNS / SQS MCP Server: Event-driven messaging and queue management
  • Amazon MQ MCP Server: Message broker management for RabbitMQ and ActiveMQ
  • AWS Step Functions Tool MCP Server: Execute complex workflows and business processes
  • Amazon Location Service MCP Server: Place search, geocoding, and route optimization
  • OpenAPI MCP Server: Dynamic API integration through OpenAPI specifications

💰 Cost & Operations

Monitor, optimize, and manage your AWS infrastructure and costs.

  • AWS Pricing MCP Server: AWS service pricing and cost estimates
  • Amazon CloudWatch MCP Server: Metrics, Alarms, and Logs analysis and operational troubleshooting
  • AWS Managed Prometheus MCP Server: Prometheus-compatible operations
  • AWS Billing and Cost Management MCP Server: Billing and cost management for chargeable and Proforma billing

🧬 Healthcare & Lifesciences

Interact with AWS HealthAI services.

  • AWS HealthOmics MCP Server: Generate, run, debug and optimize lifescience workflows
  • AWS HealthImaging MCP Server: Comprehensive medical imaging data lifecycle management with 21 tools for DICOM operations, datastore management, and automated discovery
  • AWS HealthLake MCP Server: Create, manage, search, and optimize FHIR healthcare data workflows with comprehensive AWS HealthLake integration, featuring automated resource discovery, advanced search capabilities, patient record management, and seamless import/export operations.


Browse by How You're Working

👨‍💻 Vibe Coding & Development

AI coding assistants like Kiro, Cline, Cursor, and Claude Code helping you build faster

Workshop: Check out the Vibe Coding with AWS MCP Servers workshop for hands-on guidance and examples.

Core Development Workflow
  • AWS API MCP Server: Start here for general AWS interactions! Comprehensive AWS API support with command validation, security controls, and access to all AWS services. Perfect for managing infrastructure, exploring resources, and executing AWS operations through natural language.
  • AWS Knowledge MCP Server: A remote, fully-managed MCP server hosted by AWS that provides access to the latest AWS docs, API references, What's New Posts, Getting Started information, Builder Center, Blog posts, Architectural references, and Well-Architected guidance.
  • AWS Documentation MCP Server: Get latest AWS docs and API references
Infrastructure as Code
  • AWS IaC MCP Server: Complete Infrastructure as Code toolkit with CloudFormation documentation access, CDK best practices guidance, construct examples, security validation, and deployment troubleshooting
  • AWS Cloud Control API MCP Server ⚠️ DEPRECATED: Direct AWS resource management with security scanning and best practices (use AWS IaC MCP Server instead)
Application Development
  • OpenAPI MCP Server: Dynamic API integration through OpenAPI specifications
Container & Serverless Development
  • Amazon SageMaker AI MCP Server: SageMaker AI resource management and model development
  • Amazon EKS MCP Server: Kubernetes cluster management and app deployment
  • Amazon ECS MCP Server: Containerize and deploy applications to ECS
  • Finch MCP Server: Local container building with ECR push
  • AWS Serverless MCP Server: Full serverless app lifecycle with SAM CLI
Testing & Data
Lifesciences Workflow Development
  • AWS HealthOmics MCP Server: Generate, run, debug and optimize lifescience workflows
Healthcare Data Management
  • AWS HealthLake MCP Server: Create, manage, search, and optimize FHIR healthcare data workflows with comprehensive AWS HealthLake integration, featuring automated resource discovery, advanced search capabilities, patient record management, and seamless import/export operations.

💬 Conversational Assistants

Customer-facing chatbots, business agents, and interactive Q&A systems

Knowledge & Search
  • Amazon Bedrock Knowledge Bases Retrieval MCP Server: Query enterprise knowledge bases with citation support
  • Amazon Kendra Index MCP Server: Enterprise search and RAG enhancement
  • Amazon Q Business MCP Server: AI assistant for your ingested content with anonymous access
  • Amazon Q Index MCP Server: Data accessors to search through enterprise's Q index
  • AWS Documentation MCP Server: Get latest AWS docs and API references
Content Processing & Generation
  • Document Loader MCP Server: Parse and extract content from PDF, DOCX, XLSX, PPTX, and image files
Business Services
  • Amazon Location Service MCP Server: Location search, geocoding, and business hours
  • AWS Pricing MCP Server: AWS service pricing and cost estimates

🤖 Autonomous Background Agents

Headless automation, ETL pipelines, and operational systems

Data Operations & ETL
  • AWS Data Processing MCP Server: Comprehensive data processing tools and real-time pipeline visibility across AWS Glue, Amazon EMR-EC2, and Amazon Athena
  • Amazon DynamoDB MCP Server: Complete DynamoDB operations and table management
  • Amazon Aurora PostgreSQL MCP Server: PostgreSQL database operations via RDS Data API
  • Amazon Aurora MySQL MCP Server: MySQL database operations via RDS Data API
  • Amazon Aurora DSQL MCP Server: Distributed SQL with PostgreSQL compatibility
  • Amazon DocumentDB MCP Server: MongoDB-compatible document database operations
  • Amazon Neptune MCP Server: Graph database queries with openCypher and Gremlin
  • Amazon Keyspaces MCP Server: Apache Cassandra-compatible operations
  • Amazon Timestream for InfluxDB MCP Server: Time-series database operations and InfluxDB compatibility
Caching & Performance
  • Amazon ElastiCache / MemoryDB for Valkey MCP Server: Advanced data structures and caching with Valkey
  • Amazon ElastiCache for Memcached MCP Server: High-speed caching with Memcached protocol
Workflow & Integration
  • AWS Lambda Tool MCP Server: Execute Lambda functions as AI tools for private resource access
  • AWS Step Functions Tool MCP Server: Execute complex workflows and business processes
  • Amazon SNS/SQS MCP Server: Event-driven messaging and queue management
  • Amazon MQ MCP Server: Message broker management for RabbitMQ and ActiveMQ
Operations & Monitoring
  • Amazon CloudWatch MCP Server: Metrics, Alarms, and Logs analysis and operational troubleshooting
  • Amazon CloudWatch Application Signals MCP Server: Application monitoring and performance insights
  • AWS Managed Prometheus MCP Server: Prometheus-compatible operations and monitoring
  • AWS Well-Architected Security Assessment Tool MCP Server: Assess AWS environments against the Well-Architected Framework Security Pillar
  • AWS CloudTrail MCP Server: CloudTrail events querying and analysis
  • AWS Systems Manager for SAP MCP Server: Manage, monitor, and operate SAP applications on AWS with health checks, configuration validation, and scheduling

MCP AWS Lambda Handler Module

A Python library for creating serverless HTTP handlers for the Model Context Protocol (MCP) using AWS Lambda. This module provides a flexible framework for building MCP HTTP endpoints with pluggable session management, including built-in DynamoDB support.

Features:

  • Easy serverless MCP HTTP handler creation using AWS Lambda
  • Pluggable session management system
  • Built-in DynamoDB session backend support
  • Customizable authentication and authorization
  • Example implementations and tests

See src/mcp-lambda-handler/README.md for full usage, installation, and development instructions.
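To make the pluggable-session idea concrete, here is a minimal Python sketch. The class and method names below are illustrative assumptions, not the actual API of the mcp-lambda-handler module (see its README for the real interface); an in-memory store stands in for the DynamoDB backend.

```python
from typing import Protocol

class SessionStore(Protocol):
    """Hypothetical pluggable session backend interface."""
    def get(self, session_id: str) -> dict: ...
    def put(self, session_id: str, data: dict) -> None: ...

class InMemorySessionStore:
    """Stand-in for a DynamoDB-backed session store."""
    def __init__(self) -> None:
        self._sessions: dict[str, dict] = {}

    def get(self, session_id: str) -> dict:
        return self._sessions.get(session_id, {})

    def put(self, session_id: str, data: dict) -> None:
        self._sessions[session_id] = data

def handler(event: dict, store: SessionStore) -> dict:
    """Toy Lambda-style HTTP handler: keeps a per-session call counter
    keyed by a session-id header, mimicking MCP session continuity."""
    session_id = event.get("headers", {}).get("mcp-session-id", "anonymous")
    session = store.get(session_id)
    session["calls"] = session.get("calls", 0) + 1
    store.put(session_id, session)
    return {"statusCode": 200, "body": str(session["calls"])}

store = InMemorySessionStore()
print(handler({"headers": {"mcp-session-id": "abc"}}, store)["body"])  # 1
```

Swapping `InMemorySessionStore` for a DynamoDB-backed implementation of the same interface is the design point: the handler code does not change.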

When to use Local vs Remote MCP Servers?

MCP servers can be run either locally on your development machine or remotely on the cloud. Here's when to use each approach:

Local MCP Servers

  • Development & Testing: Perfect for local development, testing, and debugging
  • Offline Work: Continue working when internet connectivity is limited
  • Data Privacy: Keep sensitive data and credentials on your local machine
  • Low Latency: Minimal network overhead for faster response times
  • Resource Control: Direct control over server resources and configuration

Remote MCP Servers

  • Team Collaboration: Share consistent server configurations across your team
  • Resource Intensive Tasks: Offload heavy processing to dedicated cloud resources
  • Always Available: Access your MCP servers from anywhere, any device
  • Automatic Updates: Get the latest features and security patches automatically
  • Scalability: Easily handle varying workloads without local resource constraints
  • Security: Centralized security controls with IAM-based permissions and zero credential exposure
  • Governance: Comprehensive audit logging and compliance monitoring for enterprise-grade governance

Note: Some MCP servers, like the official AWS MCP server (in preview) and AWS Knowledge MCP, are provided as fully managed services by AWS. These AWS-managed remote servers require no setup or infrastructure management on your part - just connect and start using them.

Use Cases for the Servers

For example, you can use the AWS Documentation MCP Server to help your AI assistant research and generate up-to-date code for any AWS service, like Amazon Bedrock Inline agents. Alternatively, you could use the CDK MCP Server or the Terraform MCP Server to have your AI assistant create infrastructure-as-code implementations that use the latest APIs and follow AWS best practices. With the AWS Pricing MCP Server, you could ask "What would be the estimated monthly cost for this CDK project before I deploy it?" or "Can you help me understand the potential AWS service expenses for this infrastructure design?" and receive detailed cost estimations and budget planning insights. The Valkey MCP Server enables natural language interaction with Valkey data stores, allowing AI assistants to efficiently manage data operations through a simple conversational interface.

Installation and Setup

Each server has specific installation instructions with one-click installs for Kiro, Cursor, and VSCode. Generally, you can:

  1. Install uv from Astral
  2. Install Python using uv python install 3.10
  3. Configure AWS credentials with access to required services
  4. Add the server to your MCP client configuration

Example configuration for Kiro MCP settings (~/.kiro/settings/mcp.json):

For macOS/Linux

{
  "mcpServers": {
    "awslabs-core-mcp-server": {
      "command": "uvx",
      "args": [
        "awslabs.core-mcp-server@latest"
      ],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      }
    }
  }
}

See individual server READMEs for specific requirements and configuration options.

For Windows

When configuring MCP servers on Windows, you'll need to use a slightly different configuration format:

{
  "mcpServers": {
    "awslabs-core-mcp-server": {
      "disabled": false,
      "timeout": 60,
      "type": "stdio",
      "command": "uv",
      "args": [
        "tool",
        "run",
        "--from",
        "awslabs.core-mcp-server@latest",
        "awslabs.core-mcp-server.exe"
      ],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      }
    }
  }
}

If you have problems with MCP configuration or want to check if the appropriate parameters are in place, you can try the following:

# Run MCP server manually with timeout 15s
$ timeout 15s uv tool run <MCP Name> <args> 2>&1 || echo "Command completed or timed out"

# Example (Aurora MySQL MCP Server)
$ timeout 15s uv tool run awslabs.mysql-mcp-server --resource_arn <Your Resource ARN> --secret_arn <Your Secret ARN> ... 2>&1 || echo "Command completed or timed out"

# If the arguments are not set appropriately, you may see the following message:
usage: awslabs.mysql-mcp-server [-h] --resource_arn RESOURCE_ARN --secret_arn SECRET_ARN --database DATABASE
                                --region REGION --readonly READONLY
awslabs.mysql-mcp-server: error: the following arguments are required: --resource_arn, --secret_arn, --database, --region, --readonly

Note about performance when using uvx "@latest" suffix:

Using the "@latest" suffix checks for and downloads the latest MCP server package from PyPI every time you start your MCP client, at the cost of increased initial load time. If you want to minimize startup time, remove "@latest" and manage your uv cache yourself using one of these approaches:

  • uv cache clean <tool>: removes the named MCP server from the cache so it is downloaded fresh on the next start, where <tool> is the package name, e.g. awslabs.lambda-tool-mcp-server (omit the angle brackets).
  • uvx <tool>@latest: refreshes the tool to the latest version and adds it to the uv cache.

Running MCP servers in containers

Docker images for each MCP server are published to the public AWS ECR registry.

This example uses Docker with awslabs.aws-documentation-mcp-server and can be repeated for each MCP server:

  • Optionally save sensitive environment variables in a file:

    # contents of a .env file with fictitious AWS temporary credentials
    AWS_ACCESS_KEY_ID=ASIAIOSFODNN7EXAMPLE
    AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    AWS_SESSION_TOKEN=AQoEXAMPLEH4aoAH0gNCAPy...truncated...zrkuWJOgQs8IZZaIv2BXIa2R4Olgk
    
  • Use the docker options --env, --env-file, and --volume as needed, because values set in the MCP client's "env": {} are not available within the container.

    {
      "mcpServers": {
        "awslabs.aws-documentation-mcp-server": {
          "command": "docker",
          "args": [
            "run",
            "--rm",
            "--interactive",
            "--env",
            "FASTMCP_LOG_LEVEL=ERROR",
            "--env",
            "AWS_REGION=us-east-1",
            "--env-file",
            "/full/path/to/.env",
            "--volume",
            "/full/path/to/.aws:/app/.aws",
            "public.ecr.aws/awslabs-mcp/awslabs/aws-documentation-mcp-server:latest"
          ],
          "env": {}
        }
      }
    }
    
  • For testing local changes, build and tag the image, then update the MCP configuration to use this tag instead of the ECR image.

    cd src/aws-documentation-mcp-server
    docker build -t awslabs/aws-documentation-mcp-server .
    

Getting Started with Kiro

Install in Kiro

See the Kiro IDE documentation or the Kiro CLI documentation for details.

In the Kiro IDE:

  1. Navigate to Kiro > MCP Servers
  2. Add a new MCP server by clicking the + Add button.
  3. Paste the configuration given below.

For global configuration, edit ~/.kiro/settings/mcp.json. For project-specific configuration, edit .kiro/settings/mcp.json in your project directory.

~/.kiro/settings/mcp.json

For macOS/Linux:

{
  "mcpServers": {
    "awslabs-core-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.core-mcp-server@latest"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      }
    }
  }
}

For Windows:

{
  "mcpServers": {
    "awslabs-core-mcp-server": {
      "disabled": false,
      "timeout": 60,
      "type": "stdio",
      "command": "uv",
      "args": [
        "tool",
        "run",
        "--from",
        "awslabs.core-mcp-server@latest",
        "awslabs.core-mcp-server.exe"
      ],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      }
    }
  }
}

Getting Started with Cline and Amazon Bedrock

IMPORTANT: Following these instructions may incur costs, which are subject to Amazon Bedrock Pricing. You are responsible for any associated costs. In addition to selecting the desired model in the Cline settings, ensure you have your selected model (e.g. anthropic.claude-3-7-sonnet) also enabled in Amazon Bedrock. For more information, see the AWS docs on enabling model access to Amazon Bedrock Foundation Models (FMs).

  1. Follow the steps above in the Installation and Setup section to install uv from Astral, install Python, and configure AWS credentials with the required services.

  2. If using Visual Studio Code, install the Cline VS Code Extension (or the equivalent extension for your preferred IDE). Once installed, click the extension to open it. When prompted, select the tier that you wish. In this case, we will be using Amazon Bedrock, so the free tier of Cline is fine, as we will be sending requests through the Amazon Bedrock API instead of the Cline API.

  3. Select the MCP Servers button.

  4. Select the Installed tab, then click Configure MCP Servers to open the cline_mcp_settings.json file.

  5. In the cline_mcp_settings.json file, add your desired MCP servers in the mcpServers object. See the following example, which uses some of the MCP servers available in this repository. Ensure you save the file to install the MCP servers.

cline_mcp_settings.json

For macOS/Linux:

{
  "mcpServers": {
    "awslabs-core-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.core-mcp-server@latest"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR",
        "MCP_SETTINGS_PATH": "path to your mcp settings file"
      }
    }
   }
 }

For Windows:

{
  "mcpServers": {
    "awslabs-core-mcp-server": {
      "disabled": false,
      "timeout": 60,
      "type": "stdio",
      "command": "uv",
      "args": [
        "tool",
        "run",
        "--from",
        "awslabs.core-mcp-server@latest",
        "awslabs.core-mcp-server.exe"
      ],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR",
        "MCP_SETTINGS_PATH": "path to your mcp settings file"
      }
    }
  }
}
  6. Once installed, you should see a list of your MCP Servers under the MCP Server Installed tab, and they should have a green slider to show that they are enabled. See the following for an example with two of the possible MCP servers for AWS. Click Done when finished. You should now see the Cline chat interface.

  7. By default, Cline will be set as the API provider, which has limits for the free tier. Next, let's update the API provider to AWS Bedrock, so we can use the LLMs through Bedrock, with billing going through your connected AWS account.

  8. Click the settings gear to open the Cline settings. Then under API Provider, switch this from Cline to AWS Bedrock and select AWS Profile for the authentication type. Note that the AWS Credentials option works as well; however, it uses static credentials (Access Key ID and Secret Access Key) instead of temporary credentials that are automatically refreshed when the token expires, so an AWS Profile with temporary credentials is the more secure and recommended method.

  9. Fill out the configuration based on the existing AWS Profile you wish to use, select the desired AWS Region, and enable cross-region inference.

  10. Next, scroll down on the settings page until you reach the text box that says Custom Instructions. Paste in the following snippet to ensure the mcp-core server is used as the starting point for every prompt:
For every new project, always look at your MCP servers and use mcp-core as the starting point every time. Also after a task completion include the list of MCP servers used in the operation.

  11. Once the custom prompt is pasted in, click Done to return to the chat interface.

  12. Now you can begin asking questions and testing out the functionality of your installed MCP servers. The default option in the chat interface is Plan, which will provide output for you to take manual action on (e.g. providing a sample configuration that you copy and paste into a file). However, you can optionally toggle this to Act, which will allow Cline to act on your behalf (e.g. searching for content using a web browser, cloning a repository, executing code, etc.). You can optionally toggle on the "Auto-approve" section to avoid having to click to approve the suggestions; however, we recommend leaving this off during testing, especially if you have the Act toggle selected.

Note: For the best results, prompt Cline to use the specific MCP server you wish to use, for example: "Using the Terraform MCP Server, do..."
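Before pointing Cline at Bedrock, it can help to verify the prerequisites from a terminal. A minimal sketch (the fallback messages are illustrative, not part of any tool's output):

```shell
# Sketch: sanity-check prerequisites before configuring Cline for AWS Bedrock.
# Confirms uvx (from uv) is on PATH, then lists the AWS profiles you can
# select in Cline's settings.
out=$(command -v uvx >/dev/null 2>&1 && echo "uvx found" || echo "uvx missing: install uv from Astral")
echo "$out"
# Requires AWS CLI v2; falls back to a hint if the CLI is absent.
aws configure list-profiles 2>/dev/null || echo "aws CLI not found or not configured"
```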

Getting Started with Cursor
  1. Follow the steps above in the Installation and Setup section to install uv from Astral, install Python, and configure AWS credentials with the required services.

  2. You can place MCP configuration in two locations, depending on your use case:

A. Project Configuration
   - For tools specific to a project, create a .cursor/mcp.json file in your project directory.
   - This allows you to define MCP servers that are only available within that specific project.

B. Global Configuration
   - For tools that you want to use across all projects, create a ~/.cursor/mcp.json file in your home directory.
   - This makes MCP servers available in all your Cursor workspaces.

.cursor/mcp.json

For macOS/Linux:

{
  "mcpServers": {
    "awslabs-core-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.core-mcp-server@latest"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      }
    }
  }
}

For Windows:

{
  "mcpServers": {
    "awslabs-core-mcp-server": {
      "disabled": false,
      "timeout": 60,
      "type": "stdio",
      "command": "uv",
      "args": [
        "tool",
        "run",
        "--from",
        "awslabs.core-mcp-server@latest",
        "awslabs.core-mcp-server.exe"
      ],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      }
    }
  }
}
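The macOS/Linux configuration above can be written and checked from the shell before Cursor loads it. A sketch (run from your project root; the server name and values match the example config):

```shell
# Sketch: create the project-level Cursor MCP config and verify it parses
# as valid JSON before Cursor picks it up.
mkdir -p .cursor
cat > .cursor/mcp.json <<'EOF'
{
  "mcpServers": {
    "awslabs-core-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.core-mcp-server@latest"],
      "env": { "FASTMCP_LOG_LEVEL": "ERROR" }
    }
  }
}
EOF
# json.tool exits non-zero on malformed JSON, so this only prints on success.
python3 -m json.tool .cursor/mcp.json > /dev/null && echo "valid JSON"
```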
  3. Using MCP in Chat: The Composer Agent will automatically use any MCP tools listed under Available Tools on the MCP settings page if it determines them to be relevant. To prompt tool usage intentionally, ask Cursor to use the desired MCP server, for example: "Using the Terraform MCP Server, do..."

  4. Tool Approval: By default, when the Agent wants to use an MCP tool, it displays a message asking for your approval. You can use the arrow next to the tool name to expand the message and see what arguments the Agent is calling the tool with.

Getting Started with Windsurf
  1. Follow the steps above in the Installation and Setup section to install uv from Astral, install Python, and configure AWS credentials with the required services.

  2. Access MCP Settings

    • Navigate to Windsurf - Settings > Advanced Settings or use the Command Palette > Open Windsurf Settings Page
    • Look for the "Model Context Protocol (MCP) Servers" section
  3. Add MCP Servers

    • Click "Add Server" to add a new MCP server
    • You can choose from available templates like GitHub, Puppeteer, PostgreSQL, etc.
    • Alternatively, click "Add custom server" to configure your own server
  4. Manual Configuration

    • You can also manually edit the MCP configuration file located at ~/.codeium/windsurf/mcp_config.json

~/.codeium/windsurf/mcp_config.json

For macOS/Linux:

{
  "mcpServers": {
    "awslabs-core-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.core-mcp-server@latest"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR",
        "MCP_SETTINGS_PATH": "path to your mcp settings file"
      }
    }
  }
}

For Windows:

{
  "mcpServers": {
    "awslabs-core-mcp-server": {
      "disabled": false,
      "timeout": 60,
      "type": "stdio",
      "command": "uv",
      "args": [
        "tool",
        "run",
        "--from",
        "awslabs.core-mcp-server@latest",
        "awslabs.core-mcp-server.exe"
      ],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR",
        "MCP_SETTINGS_PATH": "path to your mcp settings file"
      }
    }
  }
}
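When editing mcp_config.json by hand, it is worth creating the directory first and backing up any existing file before overwriting it. A minimal sketch:

```shell
# Sketch: prepare the Windsurf MCP config path and back up an existing
# config before manual edits.
mkdir -p ~/.codeium/windsurf
if [ -f ~/.codeium/windsurf/mcp_config.json ]; then
  cp ~/.codeium/windsurf/mcp_config.json ~/.codeium/windsurf/mcp_config.json.bak
fi
```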

Getting Started with VS Code

Install in VS Code

Configure MCP servers in VS Code settings or in .vscode/mcp.json (see the VS Code MCP docs for more info):

.vscode/mcp.json

For macOS/Linux:

{
  "mcpServers": {
    "awslabs-core-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.core-mcp-server@latest"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      }
    }
  }
}

For Windows:

{
  "mcpServers": {
    "awslabs-core-mcp-server": {
      "disabled": false,
      "timeout": 60,
      "type": "stdio",
      "command": "uv",
      "args": [
        "tool",
        "run",
        "--from",
        "awslabs.core-mcp-server@latest",
        "awslabs.core-mcp-server.exe"
      ],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      }
    }
  }
}

Getting Started with Claude Code

Install in Claude Code

Configure MCP servers in Claude Code through the CLI or in a .mcp.json file.

  1. Follow the steps above in the Installation and Setup section to install uv from Astral, install Python, and configure AWS credentials with the required services.

  2. Using Claude Code CLI Commands

    Claude Code CLI commands to add MCP servers:

    # Add core AWS services
    claude mcp add aws-api uvx awslabs.aws-api-mcp-server@latest
    claude mcp add aws-iac uvx awslabs.aws-iac-mcp-server@latest
    claude mcp add aws-docs uvx awslabs.aws-documentation-mcp-server@latest
    claude mcp add aws-support uvx awslabs.aws-support-mcp-server@latest
    claude mcp add aws-pricing uvx awslabs.aws-pricing-mcp-server@latest
    
    # Add AI/ML and Bedrock services
    claude mcp add bedrock-kb uvx awslabs.bedrock-kb-retrieval-mcp-server@latest
    
    # Add data and analytics services
    claude mcp add aws-dataprocessing uvx awslabs.aws-dataprocessing-mcp-server@latest
    claude mcp add aurora-dsql uvx awslabs.aurora-dsql-mcp-server@latest
    claude mcp add valkey uvx awslabs.valkey-mcp-server@latest
    
    # List installed servers
    claude mcp list
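
    The `claude mcp add` command also accepts flags for scope and environment variables. A hedged sketch (the server name `aws-docs` and the AWS_DOCUMENTATION_PARTITION variable mirror the documentation-server config; verify flag behavior against your installed CLI version):

    ```shell
    # Sketch: add a project-scoped server (written to .mcp.json) with an
    # environment variable, then inspect and remove it.
    claude mcp add aws-docs --scope project \
      -e AWS_DOCUMENTATION_PARTITION=aws \
      -- uvx awslabs.aws-documentation-mcp-server@latest
    claude mcp get aws-docs      # show one server's configuration
    claude mcp remove aws-docs   # remove it again
    ```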
    
  3. Manual Configuration (Alternative)

    You can also manually configure MCP servers by creating a .mcp.json file in your project root:

.mcp.json

For macOS/Linux:

{
  "mcpServers": {
    "awslabs.aws-iac-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-iac-mcp-server@latest"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      }
    },
    "awslabs.aws-documentation-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-documentation-mcp-server@latest"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR",
        "AWS_DOCUMENTATION_PARTITION": "aws"
      }
    }
  }
}

Samples

Ready-to-use examples of open source MCP servers for AWS in action are available in the samples directory. These samples provide working code and step-by-step guides to help you get started with each MCP server.

Vibe coding

You can use these MCP servers with your AI coding assistant to vibe code. For tips and tricks on how to improve your vibe coding experience, please refer to our guide.

Additional Resources

Security

See CONTRIBUTING for more information.

Contributing

Big shout out to our awesome contributors! Thank you for making this project better!

contributors

Contributions of all kinds are welcome! Check out our contributor guide for more information.

Developer guide

If you want to add a new MCP Server to the library, check out our development guide and be sure to follow our design guidelines.

License

This project is licensed under the Apache-2.0 License.

Disclaimer

Before using an MCP Server, you should consider conducting your own independent assessment to ensure that your use would comply with your own specific security and quality control practices and standards, as well as the laws, rules, and regulations that govern you and your content.