Curated Claude Code catalog
Updated 07.05.2026 Β· 19:39 CET
01 / Skill
SuperClaude-Org

SuperClaude_Framework

Quality
9.0

SuperClaude is a meta-programming framework that structures Claude Code into a comprehensive development platform, offering systematic workflow automation through 30 slash commands and intelligent agents. It guides developers through the entire lifecycle, from brainstorming and research to implementation and testing, and optional MCP integrations can further enhance its performance.

USP

Unlike basic Claude Code interactions, SuperClaude provides a structured, opinionated development environment with pre-built workflows, meta-programming capabilities, and optional MCP integrations for significant performance and token efficiency gains.

Use cases

  • Structured AI-assisted software development
  • Automating development workflows (brainstorming, research, coding, testing)
  • Optimizing Claude Code performance and token usage
  • Ensuring architectural compliance and preventing duplicate work
  • Collaborative problem-solving and requirements discovery

Detected files (8)

  • skills/confidence-check/SKILL.md (skill, 3312 bytes)
    ---
    name: Confidence Check
    description: Pre-implementation confidence assessment (β‰₯90% required). Use before starting any implementation to verify readiness with duplicate check, architecture compliance, official docs verification, OSS references, and root cause identification.
    ---
    
    # Confidence Check Skill
    
    ## Purpose
    
    Prevents wrong-direction execution by assessing confidence **BEFORE** starting implementation.
    
    **Requirement**: β‰₯90% confidence to proceed with implementation.
    
    **Test Results** (2025-10-21):
    - Precision: 1.000 (no false positives)
    - Recall: 1.000 (no false negatives)
    - 8/8 test cases passed
    
    ## When to Use
    
    Use this skill BEFORE implementing any task to ensure:
    - No duplicate implementations exist
    - Architecture compliance verified
    - Official documentation reviewed
    - Working OSS implementations found
    - Root cause properly identified
    
    ## Confidence Assessment Criteria
    
    Calculate confidence score (0.0 - 1.0) based on 5 checks:
    
    ### 1. No Duplicate Implementations? (25%)
    
    **Check**: Search codebase for existing functionality
    
    ```bash
    # Use Grep to search for similar functions
    # Use Glob to find related modules
    ```
    
    βœ… Pass if no duplicates found
    ❌ Fail if similar implementation exists
    
    ### 2. Architecture Compliance? (25%)
    
    **Check**: Verify tech stack alignment
    
    - Read `CLAUDE.md`, `PLANNING.md`
    - Confirm existing patterns used
    - Avoid reinventing existing solutions
    
    βœ… Pass if uses existing tech stack (e.g., Supabase, UV, pytest)
    ❌ Fail if introduces new dependencies unnecessarily
    
    ### 3. Official Documentation Verified? (20%)
    
    **Check**: Review official docs before implementation
    
    - Use Context7 MCP for official docs
    - Use WebFetch for documentation URLs
    - Verify API compatibility
    
    βœ… Pass if official docs reviewed
    ❌ Fail if relying on assumptions
    
    ### 4. Working OSS Implementations Referenced? (15%)
    
    **Check**: Find proven implementations
    
    - Use Tavily MCP or WebSearch
    - Search GitHub for examples
    - Verify working code samples
    
    βœ… Pass if OSS reference found
    ❌ Fail if no working examples
    
    ### 5. Root Cause Identified? (15%)
    
    **Check**: Understand the actual problem
    
    - Analyze error messages
    - Check logs and stack traces
    - Identify underlying issue
    
    βœ… Pass if root cause clear
    ❌ Fail if symptoms unclear
    
    ## Confidence Score Calculation
    
    ```
    Total = Check1 (25%) + Check2 (25%) + Check3 (20%) + Check4 (15%) + Check5 (15%)
    
    If Total >= 0.90:  βœ… Proceed with implementation
    If Total >= 0.70:  ⚠️  Present alternatives, ask questions
    If Total < 0.70:   ❌ STOP - Request more context
    ```
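
    The weighting above can be sketched in Python. This is illustrative only; the reference implementation lives in `confidence.ts`, and none of these names are its actual API:

    ```python
    # Illustrative sketch of the weighted confidence score.
    # The reference implementation is confidence.ts; these names are assumptions.
    WEIGHTS = {
        "no_duplicates": 0.25,           # Check 1
        "architecture_compliant": 0.25,  # Check 2
        "official_docs_verified": 0.20,  # Check 3
        "oss_reference_found": 0.15,     # Check 4
        "root_cause_identified": 0.15,   # Check 5
    }

    def confidence_score(checks: dict) -> float:
        """Sum the weights of every passing check, yielding 0.0 - 1.0."""
        return sum(weight for name, weight in WEIGHTS.items() if checks.get(name))

    def decision(score: float) -> str:
        """Map a score onto the proceed / alternatives / stop thresholds."""
        if score >= 0.90:
            return "proceed"
        if score >= 0.70:
            return "present alternatives"
        return "stop"
    ```

    For example, failing only the documentation check yields 0.80, which falls into the "present alternatives" band.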
    
    ## Output Format
    
    ```
    πŸ“‹ Confidence Checks:
       βœ… No duplicate implementations found
       βœ… Uses existing tech stack
       βœ… Official documentation verified
       βœ… Working OSS implementation found
       βœ… Root cause identified
    
    πŸ“Š Confidence: 1.00 (100%)
    βœ… High confidence - Proceeding to implementation
    ```
    
    ## Implementation Details
    
    The TypeScript implementation is available in `confidence.ts` for reference, containing:
    
    - `confidenceCheck(context)` - Main assessment function
    - Detailed check implementations
    - Context interface definitions
    
    ## ROI
    
    **Token Savings**: Spend 100-200 tokens on confidence check to save 5,000-50,000 tokens on wrong-direction work.
    
    **Success Rate**: 100% precision and recall in production testing.
    
  • .claude/skills/confidence-check/SKILL.md (skill, 3365 bytes)
    ---
    name: Confidence Check
    description: Pre-implementation confidence assessment (β‰₯90% required). Use before starting any implementation to verify readiness with duplicate check, architecture compliance, official docs verification, OSS references, and root cause identification.
    allowed-tools: Read, Grep, Glob, WebFetch, WebSearch
    ---
    
    # Confidence Check Skill
    
    ## Purpose
    
    Prevents wrong-direction execution by assessing confidence **BEFORE** starting implementation.
    
    **Requirement**: β‰₯90% confidence to proceed with implementation.
    
    **Test Results** (2025-10-21):
    - Precision: 1.000 (no false positives)
    - Recall: 1.000 (no false negatives)
    - 8/8 test cases passed
    
    ## When to Use
    
    Use this skill BEFORE implementing any task to ensure:
    - No duplicate implementations exist
    - Architecture compliance verified
    - Official documentation reviewed
    - Working OSS implementations found
    - Root cause properly identified
    
    ## Confidence Assessment Criteria
    
    Calculate confidence score (0.0 - 1.0) based on 5 checks:
    
    ### 1. No Duplicate Implementations? (25%)
    
    **Check**: Search codebase for existing functionality
    
    ```bash
    # Use Grep to search for similar functions
    # Use Glob to find related modules
    ```
    
    βœ… Pass if no duplicates found
    ❌ Fail if similar implementation exists
    
    ### 2. Architecture Compliance? (25%)
    
    **Check**: Verify tech stack alignment
    
    - Read `CLAUDE.md`, `PLANNING.md`
    - Confirm existing patterns used
    - Avoid reinventing existing solutions
    
    βœ… Pass if uses existing tech stack (e.g., Supabase, UV, pytest)
    ❌ Fail if introduces new dependencies unnecessarily
    
    ### 3. Official Documentation Verified? (20%)
    
    **Check**: Review official docs before implementation
    
    - Use Context7 MCP for official docs
    - Use WebFetch for documentation URLs
    - Verify API compatibility
    
    βœ… Pass if official docs reviewed
    ❌ Fail if relying on assumptions
    
    ### 4. Working OSS Implementations Referenced? (15%)
    
    **Check**: Find proven implementations
    
    - Use Tavily MCP or WebSearch
    - Search GitHub for examples
    - Verify working code samples
    
    βœ… Pass if OSS reference found
    ❌ Fail if no working examples
    
    ### 5. Root Cause Identified? (15%)
    
    **Check**: Understand the actual problem
    
    - Analyze error messages
    - Check logs and stack traces
    - Identify underlying issue
    
    βœ… Pass if root cause clear
    ❌ Fail if symptoms unclear
    
    ## Confidence Score Calculation
    
    ```
    Total = Check1 (25%) + Check2 (25%) + Check3 (20%) + Check4 (15%) + Check5 (15%)
    
    If Total >= 0.90:  βœ… Proceed with implementation
    If Total >= 0.70:  ⚠️  Present alternatives, ask questions
    If Total < 0.70:   ❌ STOP - Request more context
    ```
    
    ## Output Format
    
    ```
    πŸ“‹ Confidence Checks:
       βœ… No duplicate implementations found
       βœ… Uses existing tech stack
       βœ… Official documentation verified
       βœ… Working OSS implementation found
       βœ… Root cause identified
    
    πŸ“Š Confidence: 1.00 (100%)
    βœ… High confidence - Proceeding to implementation
    ```
    
    ## Implementation Details
    
    The TypeScript implementation is available in `confidence.ts` for reference, containing:
    
    - `confidenceCheck(context)` - Main assessment function
    - Detailed check implementations
    - Context interface definitions
    
    ## ROI
    
    **Token Savings**: Spend 100-200 tokens on confidence check to save 5,000-50,000 tokens on wrong-direction work.
    
    **Success Rate**: 100% precision and recall in production testing.
    
  • plugins/superclaude/skills/brainstorm/SKILL.md (skill, 1127 bytes)
    ---
    name: brainstorm
    description: Activate brainstorming mode for collaborative discovery and creative problem-solving. Use when users have vague requests, want to explore ideas, or need requirements discovery.
    ---
    
    # Brainstorming Mode
    
    You are now in Brainstorming Mode. Use Socratic dialogue to explore ideas.
    
    ## Approach
    
    1. **Ask, Don't Assume**: Use probing questions to uncover requirements
    2. **Diverge First**: Generate multiple options before narrowing
    3. **Build on Ideas**: Use "Yes, and..." thinking
    4. **Visualize**: Use tables, lists, and comparisons
    5. **Converge**: Help the user pick the best approach
    
    ## Socratic Questions
    
    - "What problem are you trying to solve?"
    - "Who are the users? What do they need?"
    - "What constraints do we have? (time, budget, tech stack)"
    - "What does success look like?"
    - "What are the risks if we don't do this?"
    
    ## Output Format
    
    Present ideas as structured options:
    
    ```
    ## Option A: [Name]
    - Pros: [...]
    - Cons: [...]
    - Effort: [Low/Medium/High]
    - Risk: [Low/Medium/High]
    
    ## Option B: [Name]
    ...
    
    ## Recommendation
    [Which option and why]
    ```
    
    Apply this to: $ARGUMENTS
    
  • plugins/superclaude/skills/token-efficiency/SKILL.md (skill, 663 bytes)
    ---
    name: token-efficiency
    description: Activate ultra-compressed output mode for maximum token efficiency. Use when context is running low, user requests brevity, or dealing with large-scale operations.
    ---
    
    # Token Efficiency Mode
    
    Minimize token usage while preserving information quality (>=95%).
    
    ## Rules
    
    - Use bullet points and tables, never verbose paragraphs
    - Abbreviate common terms (fn=function, impl=implementation, cfg=config)
    - Use symbols for status: OK, FAIL, WARN, SKIP
    - One sentence per concept
    - Code blocks only β€” no prose explanations of code
    - Skip preamble, greetings, and transitions
    - Target: 30-50% token reduction vs normal output
    
  • plugins/superclaude/skills/confidence-check/SKILL.md (skill, 3312 bytes)
    ---
    name: Confidence Check
    description: Pre-implementation confidence assessment (β‰₯90% required). Use before starting any implementation to verify readiness with duplicate check, architecture compliance, official docs verification, OSS references, and root cause identification.
    ---
    
    # Confidence Check Skill
    
    ## Purpose
    
    Prevents wrong-direction execution by assessing confidence **BEFORE** starting implementation.
    
    **Requirement**: β‰₯90% confidence to proceed with implementation.
    
    **Test Results** (2025-10-21):
    - Precision: 1.000 (no false positives)
    - Recall: 1.000 (no false negatives)
    - 8/8 test cases passed
    
    ## When to Use
    
    Use this skill BEFORE implementing any task to ensure:
    - No duplicate implementations exist
    - Architecture compliance verified
    - Official documentation reviewed
    - Working OSS implementations found
    - Root cause properly identified
    
    ## Confidence Assessment Criteria
    
    Calculate confidence score (0.0 - 1.0) based on 5 checks:
    
    ### 1. No Duplicate Implementations? (25%)
    
    **Check**: Search codebase for existing functionality
    
    ```bash
    # Use Grep to search for similar functions
    # Use Glob to find related modules
    ```
    
    βœ… Pass if no duplicates found
    ❌ Fail if similar implementation exists
    
    ### 2. Architecture Compliance? (25%)
    
    **Check**: Verify tech stack alignment
    
    - Read `CLAUDE.md`, `PLANNING.md`
    - Confirm existing patterns used
    - Avoid reinventing existing solutions
    
    βœ… Pass if uses existing tech stack (e.g., Supabase, UV, pytest)
    ❌ Fail if introduces new dependencies unnecessarily
    
    ### 3. Official Documentation Verified? (20%)
    
    **Check**: Review official docs before implementation
    
    - Use Context7 MCP for official docs
    - Use WebFetch for documentation URLs
    - Verify API compatibility
    
    βœ… Pass if official docs reviewed
    ❌ Fail if relying on assumptions
    
    ### 4. Working OSS Implementations Referenced? (15%)
    
    **Check**: Find proven implementations
    
    - Use Tavily MCP or WebSearch
    - Search GitHub for examples
    - Verify working code samples
    
    βœ… Pass if OSS reference found
    ❌ Fail if no working examples
    
    ### 5. Root Cause Identified? (15%)
    
    **Check**: Understand the actual problem
    
    - Analyze error messages
    - Check logs and stack traces
    - Identify underlying issue
    
    βœ… Pass if root cause clear
    ❌ Fail if symptoms unclear
    
    ## Confidence Score Calculation
    
    ```
    Total = Check1 (25%) + Check2 (25%) + Check3 (20%) + Check4 (15%) + Check5 (15%)
    
    If Total >= 0.90:  βœ… Proceed with implementation
    If Total >= 0.70:  ⚠️  Present alternatives, ask questions
    If Total < 0.70:   ❌ STOP - Request more context
    ```
    
    ## Output Format
    
    ```
    πŸ“‹ Confidence Checks:
       βœ… No duplicate implementations found
       βœ… Uses existing tech stack
       βœ… Official documentation verified
       βœ… Working OSS implementation found
       βœ… Root cause identified
    
    πŸ“Š Confidence: 1.00 (100%)
    βœ… High confidence - Proceeding to implementation
    ```
    
    ## Implementation Details
    
    The TypeScript implementation is available in `confidence.ts` for reference, containing:
    
    - `confidenceCheck(context)` - Main assessment function
    - Detailed check implementations
    - Context interface definitions
    
    ## ROI
    
    **Token Savings**: Spend 100-200 tokens on confidence check to save 5,000-50,000 tokens on wrong-direction work.
    
    **Success Rate**: 100% precision and recall in production testing.
    
  • plugins/superclaude/skills/deep-research/SKILL.md (skill, 1311 bytes)
    ---
    name: deep-research
    description: Activate deep research mode for systematic investigation. Use when the user asks to research, investigate, explore, or needs current information with citations.
    ---
    
    # Deep Research Mode
    
    You are now in Deep Research Mode. Follow this systematic investigation process:
    
    ## Research Protocol
    
    1. **Scope Definition**: Clarify the research question and boundaries
    2. **Source Gathering**: Use WebSearch, WebFetch, and MCP tools to collect evidence
    3. **Evidence Evaluation**: Assess source credibility and relevance
    4. **Synthesis**: Combine findings into a coherent analysis
    5. **Citation**: Always cite sources with URLs
    
    ## Requirements
    
    - Every claim must have a source
    - Present multiple perspectives when they exist
    - Distinguish between facts, consensus, and speculation
    - Use tables for comparisons
    - Provide a confidence level for conclusions (high/medium/low)
    - Include a "Sources" section at the end
    
    ## Output Format
    
    ```
    ## Research: [Topic]
    
    ### Key Findings
    - Finding 1 (Source: [URL])
    - Finding 2 (Source: [URL])
    
    ### Analysis
    [Synthesized analysis with inline citations]
    
    ### Confidence: [High/Medium/Low]
    [Reasoning for confidence level]
    
    ### Sources
    1. [Title](URL) - [Brief description]
    2. [Title](URL) - [Brief description]
    ```
    
    Apply this to: $ARGUMENTS
    
  • plugins/superclaude/skills/pm/SKILL.md (skill, 1475 bytes)
    ---
    name: pm
    description: Project management with PDCA cycles, confidence checks, and context persistence. Auto-activates at session start to restore context. Use for task planning, progress tracking, and structured development.
    ---
    
    # PM Agent Mode
    
    You are the Project Management Agent. Manage development through PDCA cycles.
    
    ## Session Start Protocol
    
    1. Check for existing context (docs/memory/, TASK.md, KNOWLEDGE.md)
    2. Report status to user:
       - Previous: [last session summary]
       - Progress: [current status]
       - Next: [planned actions]
       - Blockers: [issues]
    
    ## PDCA Cycle
    
    ### Plan (Hypothesis)
    - Define what to implement and why
    - Set success criteria
    - Identify risks
    
    ### Do (Experiment)
    - Track tasks with TodoWrite
    - Record trial-and-error, errors, solutions
    - Checkpoint progress regularly
    
    ### Check (Evaluation)
    - "What went well? What failed?"
    - Assess against success criteria
    - Identify lessons learned
    
    ### Act (Improvement)
    - Success: Document pattern for reuse
    - Failure: Document mistake with prevention measures
    - Update project knowledge base
    
    ## Confidence Check (before implementation)
    
    Assess confidence on 5 dimensions:
    1. No duplicate implementations? (25%)
    2. Architecture compliant? (25%)
    3. Official docs verified? (20%)
    4. OSS references checked? (15%)
    5. Root cause identified? (15%)
    
    - >=90%: Proceed immediately
    - 70-89%: Present alternatives, investigate more
    - <70%: STOP and gather more information
    
    Apply this to: $ARGUMENTS
    
  • plugins/superclaude/skills/troubleshoot/SKILL.md (skill, 1373 bytes)
    ---
    name: troubleshoot
    description: Systematic troubleshooting with root cause analysis. Use when users report errors, bugs, or unexpected behavior. Never retry without understanding why.
    ---
    
    # Troubleshooting Protocol
    
    Follow this systematic root cause analysis process. NEVER retry the same approach without understanding WHY it failed.
    
    ## Protocol
    
    1. **STOP**: Do not re-execute the same command
    2. **Observe**: What exactly happened? What was expected?
    3. **Hypothesize**: What could cause this? (list 2-3 possibilities)
    4. **Investigate**: Check official docs, logs, stack traces, config
    5. **Root Cause**: Identify the fundamental cause (not symptoms)
    6. **Fix**: Implement a solution that addresses the root cause
    7. **Verify**: Confirm the fix works
    8. **Learn**: Document the solution for future reference
    
    ## Anti-Patterns (strictly prohibited)
    
    - "Got an error. Let's just try again"
    - "Retry: attempt 1... attempt 2... attempt 3..."
    - "It timed out, so let's increase the wait time" (ignoring root cause)
    - "There are warnings but it works, so it's fine" (future technical debt)
    
    ## Required Format
    
    ```
    ## Root Cause Analysis
    
    **Error**: [Exact error message]
    **Expected**: [What should have happened]
    **Cause**: [Root cause with evidence]
    **Fix**: [Solution addressing root cause]
    **Prevention**: [How to prevent recurrence]
    ```
    
    Apply this to: $ARGUMENTS
    

README

πŸš€ SuperClaude Framework


Transform Claude Code into a Structured Development Platform


Quick Start β€’ Support β€’ Features β€’ Docs β€’ Contributing


πŸ“Š Framework Statistics

| Commands | Agents | Modes | MCP Servers |
|----------|--------|-------|-------------|
| 30 Slash Commands | 20 Specialized AI | 7 Behavioral | 8 Integrations |

30 slash commands covering the complete development lifecycle from brainstorming to deployment.


🎯 Overview

SuperClaude is a meta-programming configuration framework that transforms Claude Code into a structured development platform through behavioral instruction injection and component orchestration. It provides systematic workflow automation with powerful tools and intelligent agents.

Disclaimer

This project is not affiliated with or endorsed by Anthropic. Claude Code is a product built and maintained by Anthropic.

πŸ“– For Developers & Contributors

Essential documentation for working with SuperClaude Framework:

| Document | Purpose | When to Read |
|----------|---------|--------------|
| PLANNING.md | Architecture, design principles, absolute rules | Session start, before implementation |
| TASK.md | Current tasks, priorities, backlog | Daily, before starting work |
| KNOWLEDGE.md | Accumulated insights, best practices, troubleshooting | When encountering issues, learning patterns |
| CONTRIBUTING.md | Contribution guidelines, workflow | Before submitting PRs |
| Commands Reference | Complete reference for all 30 /sc:* commands with syntax, examples, workflows, and decision guides | Learning SuperClaude, choosing the right command |

πŸ’‘ Pro Tip: Claude Code reads these files at session start to ensure consistent, high-quality development aligned with project standards.

πŸ“š New to SuperClaude? Start with Commands Reference β€” it contains visual decision trees, detailed command comparisons, and workflow examples to help you understand which commands to use and when.

⚑ Quick Installation

IMPORTANT: The TypeScript plugin system described in older documentation is not yet available (planned for v5.0). For current installation instructions, please follow the steps below for v4.x.

Current Stable Version (v4.3.0)

SuperClaude currently uses slash commands.

Option 1: pipx (Recommended)

# Install from PyPI
pipx install superclaude

# Install commands (installs all 30 slash commands)
superclaude install

# Install MCP servers (optional, for enhanced capabilities)
superclaude mcp --list         # List available MCP servers
superclaude mcp                # Interactive installation
superclaude mcp --servers tavily --servers context7  # Install specific servers

# Verify installation
superclaude install --list
superclaude doctor

After installation, restart Claude Code to use 30 commands including:

  • /sc:research - Deep web research (enhanced with Tavily MCP)
  • /sc:brainstorm - Structured brainstorming
  • /sc:implement - Code implementation
  • /sc:test - Testing workflows
  • /sc:pm - Project management
  • /sc - Show all 30 available commands

Option 2: Direct Installation from Git

# Clone the repository
git clone https://github.com/SuperClaude-Org/SuperClaude_Framework.git
cd SuperClaude_Framework

# Run the installation script
./install.sh

Coming in v5.0 (In Development)

We are actively working on a new TypeScript plugin system (see issue #419 for details). When released, installation will be simplified to:

# This feature is not yet available
/plugin marketplace add SuperClaude-Org/superclaude-plugin-marketplace
/plugin install superclaude

Status: In development. No ETA has been set.

Enhanced Performance (Optional MCPs)

For 2-3x faster execution and 30-50% fewer tokens, optionally install MCP servers:

# Optional MCP servers for enhanced performance (via airis-mcp-gateway):
# - Serena: Code understanding (2-3x faster)
# - Sequential: Token-efficient reasoning (30-50% fewer tokens)
# - Tavily: Web search for Deep Research
# - Context7: Official documentation lookup
# - Mindbase: Semantic search across all conversations (optional enhancement)

# Note: Error learning available via built-in ReflexionMemory (no installation required)
# Mindbase provides semantic search enhancement (requires "recommended" profile)
# Install MCP servers: https://github.com/agiletec-inc/airis-mcp-gateway
# See docs/mcp/mcp-integration-policy.md for details

Performance Comparison:

  • Without MCPs: Fully functional, standard performance βœ…
  • With MCPs: 2-3x faster, 30-50% fewer tokens ⚑

πŸ’– Support the Project

Hey, let's be real - maintaining SuperClaude takes time and resources.

The Claude Max subscription alone runs $100/month for testing, and that's before counting the hours spent on documentation, bug fixes, and feature development. If you're finding value in SuperClaude for your daily work, consider supporting the project. Even a few dollars helps cover the basics and keeps development active.

Every contributor matters, whether through code, feedback, or support. Thanks for being part of this community! πŸ™

β˜• Ko-fi

Ko-fi

One-time contributions

🎯 Patreon

Patreon

Monthly support

πŸ’œ GitHub

GitHub Sponsors

Flexible tiers

Your Support Enables:

| Item | Cost/Impact |
|------|-------------|
| πŸ”¬ Claude Max Testing | $100/month for validation & testing |
| ⚑ Feature Development | New capabilities & improvements |
| πŸ“š Documentation | Comprehensive guides & examples |
| 🀝 Community Support | Quick issue responses & help |
| πŸ”§ MCP Integration | Testing new server connections |
| 🌐 Infrastructure | Hosting & deployment costs |

Note: No pressure though - the framework stays open source regardless. Just knowing people use and appreciate it is motivating. Contributing code, documentation, or spreading the word helps too! πŸ™


πŸŽ‰ What's New in v4.1

Version 4.1 focuses on stabilizing the slash command architecture, enhancing agent capabilities, and improving documentation.

πŸ€– Smarter Agent System

20 specialized agents with domain expertise:

  • PM Agent ensures continuous learning through systematic documentation
  • Deep Research agent for autonomous web research
  • Security engineer catches real vulnerabilities
  • Frontend architect understands UI patterns
  • Automatic coordination based on context
  • Domain-specific expertise on demand

⚑ Optimized Performance

Smaller framework, bigger projects:

  • Reduced framework footprint
  • More context for your code
  • Longer conversations possible
  • Complex operations enabled

πŸ”§ MCP Server Integration

8 powerful servers with easy CLI installation:

# List available MCP servers
superclaude mcp --list

# Install specific servers
superclaude mcp --servers tavily context7

# Interactive installation
superclaude mcp

Available servers:

  • Tavily β†’ Primary web search (Deep Research)
  • Context7 β†’ Official documentation lookup
  • Sequential-Thinking β†’ Multi-step reasoning
  • Serena β†’ Session persistence & memory
  • Playwright β†’ Cross-browser automation
  • Magic β†’ UI component generation
  • Morphllm-Fast-Apply β†’ Context-aware code modifications
  • Chrome DevTools β†’ Performance analysis

🎯 Behavioral Modes

7 adaptive modes for different contexts:

  • Brainstorming β†’ Asks right questions
  • Business Panel β†’ Multi-expert strategic analysis
  • Deep Research β†’ Autonomous web research
  • Orchestration β†’ Efficient tool coordination
  • Token-Efficiency β†’ 30-50% context savings
  • Task Management β†’ Systematic organization
  • Introspection β†’ Meta-cognitive analysis

πŸ“š Documentation Overhaul

Complete rewrite for developers:

  • Real examples & use cases
  • Common pitfalls documented
  • Practical workflows included
  • Better navigation structure

πŸ§ͺ Enhanced Stability

Focus on reliability:

  • Bug fixes for core commands
  • Improved test coverage
  • More robust error handling
  • CI/CD pipeline improvements

πŸ”¬ Deep Research Capabilities

Autonomous Web Research Aligned with DR Agent Architecture

SuperClaude v4.2 introduces comprehensive Deep Research capabilities, enabling autonomous, adaptive, and intelligent web research.

🎯 Adaptive Planning

Three intelligent strategies:

  • Planning-Only: Direct execution for clear queries
  • Intent-Planning: Clarification for ambiguous requests
  • Unified: Collaborative plan refinement (default)

πŸ”„ Multi-Hop Reasoning

Up to 5 iterative searches:

  • Entity expansion (Paper β†’ Authors β†’ Works)
  • Concept deepening (Topic β†’ Details β†’ Examples)
  • Temporal progression (Current β†’ Historical)
  • Causal chains (Effect β†’ Cause β†’ Prevention)

πŸ“Š Quality Scoring

Confidence-based validation:

  • Source credibility assessment (0.0-1.0)
  • Coverage completeness tracking
  • Synthesis coherence evaluation
  • Minimum threshold: 0.6, Target: 0.8
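
A minimal sketch of such a quality gate, assuming an equal-weight average of the three signals (the framework does not document its exact aggregation, and the function and field names here are illustrative):

```python
def research_quality_gate(credibility: float, coverage: float, coherence: float,
                          minimum: float = 0.6, target: float = 0.8) -> str:
    """Average the three quality signals and compare against the thresholds.

    Equal weighting is an illustrative assumption, not the framework's
    documented formula.
    """
    score = (credibility + coverage + coherence) / 3
    if score >= target:
        return "target met"
    if score >= minimum:
        return "acceptable"
    return "continue research"
```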

🧠 Case-Based Learning

Cross-session intelligence:

  • Pattern recognition and reuse
  • Strategy optimization over time
  • Successful query formulations saved
  • Performance improvement tracking

Research Command Usage

# Basic research with automatic depth
/research "latest AI developments 2024"

# Controlled research depth (via options in TypeScript)
/research "quantum computing breakthroughs"  # depth: exhaustive

# Specific strategy selection
/research "market analysis"  # strategy: planning-only

# Domain-filtered research (Tavily MCP integration)
/research "React patterns"  # domains: reactjs.org,github.com

Research Depth Levels

| Depth | Sources | Hops | Time | Best For |
|-------|---------|------|------|----------|
| Quick | 5-10 | 1 | ~2min | Quick facts, simple queries |
| Standard | 10-20 | 3 | ~5min | General research (default) |
| Deep | 20-40 | 4 | ~8min | Comprehensive analysis |
| Exhaustive | 40+ | 5 | ~10min | Academic-level research |
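
The depth tiers can be expressed as a simple lookup, with values transcribed from the table above (the field names are illustrative assumptions, not SuperClaude's actual schema):

```python
# Research depth tiers, transcribed from the table above.
# Field names are illustrative assumptions, not SuperClaude's actual schema.
RESEARCH_DEPTHS = {
    "quick":      {"min_sources": 5,  "max_sources": 10,   "max_hops": 1, "est_minutes": 2},
    "standard":   {"min_sources": 10, "max_sources": 20,   "max_hops": 3, "est_minutes": 5},
    "deep":       {"min_sources": 20, "max_sources": 40,   "max_hops": 4, "est_minutes": 8},
    "exhaustive": {"min_sources": 40, "max_sources": None, "max_hops": 5, "est_minutes": 10},
}

def research_budget(depth: str = "standard") -> dict:
    """Return the source and hop budget for a named depth tier."""
    return RESEARCH_DEPTHS[depth]
```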

Integrated Tool Orchestration

The Deep Research system intelligently coordinates multiple tools:

  • Tavily MCP: Primary web search and discovery
  • Playwright MCP: Complex content extraction
  • Sequential MCP: Multi-step reasoning and synthesis
  • Serena MCP: Memory and learning persistence
  • Context7 MCP: Technical documentation lookup

πŸ“š Documentation

Complete Guide to SuperClaude

πŸš€ Getting Started β€’ πŸ“– User Guides β€’ πŸ› οΈ Developer Resources β€’ πŸ“‹ Reference

- πŸ““ [**Examples Cookbook**](docs/reference/examples-cookbook.md) *Real-world recipes*

🀝 Contributing

Join the SuperClaude Community

We welcome contributions of all kinds! Here's how you can help:

| Priority | Area | Description |
|----------|------|-------------|
| πŸ“ High | Documentation | Improve guides, add examples, fix typos |
| πŸ”§ High | MCP Integration | Add server configs, test integrations |
| 🎯 Medium | Workflows | Create command patterns & recipes |
| πŸ§ͺ Medium | Testing | Add tests, validate features |
| 🌐 Low | i18n | Translate docs to other languages |

Contributing Guide β€’ Contributors


βš–οΈ License

This project is licensed under the MIT License - see the LICENSE file for details.

MIT License


⭐ Star History

Star History Chart

πŸš€ Built with passion by the SuperClaude community

Made with ❀️ for developers who push boundaries



πŸ“‹ All 30 Commands

Click to expand full command list

🧠 Planning & Design (4)

  • /brainstorm - Structured brainstorming
  • /design - System architecture
  • /estimate - Time/effort estimation
  • /spec-panel - Specification analysis

πŸ’» Development (5)

  • /implement - Code implementation
  • /build - Build workflows
  • /improve - Code improvements
  • /cleanup - Refactoring
  • /explain - Code explanation

πŸ§ͺ Testing & Quality (4)

  • /test - Test generation
  • /analyze - Code analysis
  • /troubleshoot - Debugging
  • /reflect - Retrospectives

πŸ“š Documentation (2)

  • /document - Doc generation
  • /help - Command help

πŸ”§ Version Control (1)

  • /git - Git operations

πŸ“Š Project Management (3)

  • /pm - Project management
  • /task - Task tracking
  • /workflow - Workflow automation

πŸ” Research & Analysis (2)

  • /research - Deep web research
  • /business-panel - Business analysis

🎯 Utilities (9)

  • /agent - AI agents
  • /index-repo - Repository indexing
  • /index - Indexing alias
  • /recommend - Command recommendations
  • /select-tool - Tool selection
  • /spawn - Parallel tasks
  • /load - Load sessions
  • /save - Save sessions
  • /sc - Show all commands

πŸ“– View Detailed Command Reference β†’