USP
Unlike simple code generation tools, Superpowers implements a full, opinionated software development lifecycle, enforcing TDD, systematic debugging, and multi-agent orchestration. It transforms an agent into a disciplined junior engineer,…
Use cases
- Guiding AI agents through a complete TDD workflow
- Orchestrating multiple subagents for parallel development tasks
- Enforcing systematic design and planning before coding
- Automating code review and branch completion processes
- Developing new features or refactoring existing code with AI
Detected files (8)
skills/brainstorming/SKILL.md (10634 bytes)
---
name: brainstorming
description: "You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. Explores user intent, requirements and design before implementation."
---

# Brainstorming Ideas Into Designs

Help turn ideas into fully formed designs and specs through natural collaborative dialogue. Start by understanding the current project context, then ask questions one at a time to refine the idea. Once you understand what you're building, present the design and get user approval.

<HARD-GATE>
Do NOT invoke any implementation skill, write any code, scaffold any project, or take any implementation action until you have presented a design and the user has approved it. This applies to EVERY project regardless of perceived simplicity.
</HARD-GATE>

## Anti-Pattern: "This Is Too Simple To Need A Design"

Every project goes through this process. A todo list, a single-function utility, a config change — all of them. "Simple" projects are where unexamined assumptions cause the most wasted work. The design can be short (a few sentences for truly simple projects), but you MUST present it and get approval.

## Checklist

You MUST create a task for each of these items and complete them in order:

1. **Explore project context** — check files, docs, recent commits
2. **Offer visual companion** (if topic will involve visual questions) — this is its own message, not combined with a clarifying question. See the Visual Companion section below.
3. **Ask clarifying questions** — one at a time, understand purpose/constraints/success criteria
4. **Propose 2-3 approaches** — with trade-offs and your recommendation
5. **Present design** — in sections scaled to their complexity, get user approval after each section
6. **Write design doc** — save to `docs/superpowers/specs/YYYY-MM-DD-<topic>-design.md` and commit
7. **Spec self-review** — quick inline check for placeholders, contradictions, ambiguity, scope (see below)
8. **User reviews written spec** — ask user to review the spec file before proceeding
9. **Transition to implementation** — invoke writing-plans skill to create implementation plan

## Process Flow

```dot
digraph brainstorming {
    "Explore project context" [shape=box];
    "Visual questions ahead?" [shape=diamond];
    "Offer Visual Companion\n(own message, no other content)" [shape=box];
    "Ask clarifying questions" [shape=box];
    "Propose 2-3 approaches" [shape=box];
    "Present design sections" [shape=box];
    "User approves design?" [shape=diamond];
    "Write design doc" [shape=box];
    "Spec self-review\n(fix inline)" [shape=box];
    "User reviews spec?" [shape=diamond];
    "Invoke writing-plans skill" [shape=doublecircle];

    "Explore project context" -> "Visual questions ahead?";
    "Visual questions ahead?" -> "Offer Visual Companion\n(own message, no other content)" [label="yes"];
    "Visual questions ahead?" -> "Ask clarifying questions" [label="no"];
    "Offer Visual Companion\n(own message, no other content)" -> "Ask clarifying questions";
    "Ask clarifying questions" -> "Propose 2-3 approaches";
    "Propose 2-3 approaches" -> "Present design sections";
    "Present design sections" -> "User approves design?";
    "User approves design?" -> "Present design sections" [label="no, revise"];
    "User approves design?" -> "Write design doc" [label="yes"];
    "Write design doc" -> "Spec self-review\n(fix inline)";
    "Spec self-review\n(fix inline)" -> "User reviews spec?";
    "User reviews spec?" -> "Write design doc" [label="changes requested"];
    "User reviews spec?" -> "Invoke writing-plans skill" [label="approved"];
}
```

**The terminal state is invoking writing-plans.** Do NOT invoke frontend-design, mcp-builder, or any other implementation skill. The ONLY skill you invoke after brainstorming is writing-plans.
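Checklist step 6 names a dated spec path. A minimal shell sketch of building that path — the slug normalization rule (lowercase, non-alphanumerics collapsed to hyphens) is an illustrative assumption, not something the skill specifies:

```shell
# Build docs/superpowers/specs/YYYY-MM-DD-<topic>-design.md for a topic.
# The slug rule here is assumed for illustration.
topic="Visual Companion"
slug=$(printf '%s' "$topic" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-')
slug=${slug%-}                      # drop any trailing hyphen
spec_path="docs/superpowers/specs/$(date +%F)-${slug}-design.md"
echo "$spec_path"
```

The same path is reused later by the "After the Design" documentation step, so deriving it once keeps the spec and the commit consistent.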
## The Process

**Understanding the idea:**
- Check out the current project state first (files, docs, recent commits)
- Before asking detailed questions, assess scope: if the request describes multiple independent subsystems (e.g., "build a platform with chat, file storage, billing, and analytics"), flag this immediately. Don't spend questions refining details of a project that needs to be decomposed first.
- If the project is too large for a single spec, help the user decompose into sub-projects: what are the independent pieces, how do they relate, what order should they be built? Then brainstorm the first sub-project through the normal design flow. Each sub-project gets its own spec → plan → implementation cycle.
- For appropriately-scoped projects, ask questions one at a time to refine the idea
- Prefer multiple choice questions when possible, but open-ended is fine too
- Only one question per message - if a topic needs more exploration, break it into multiple questions
- Focus on understanding: purpose, constraints, success criteria

**Exploring approaches:**
- Propose 2-3 different approaches with trade-offs
- Present options conversationally with your recommendation and reasoning
- Lead with your recommended option and explain why

**Presenting the design:**
- Once you believe you understand what you're building, present the design
- Scale each section to its complexity: a few sentences if straightforward, up to 200-300 words if nuanced
- Ask after each section whether it looks right so far
- Cover: architecture, components, data flow, error handling, testing
- Be ready to go back and clarify if something doesn't make sense

**Design for isolation and clarity:**
- Break the system into smaller units that each have one clear purpose, communicate through well-defined interfaces, and can be understood and tested independently
- For each unit, you should be able to answer: what does it do, how do you use it, and what does it depend on?
- Can someone understand what a unit does without reading its internals? Can you change the internals without breaking consumers? If not, the boundaries need work.
- Smaller, well-bounded units are also easier for you to work with - you reason better about code you can hold in context at once, and your edits are more reliable when files are focused. When a file grows large, that's often a signal that it's doing too much.

**Working in existing codebases:**
- Explore the current structure before proposing changes. Follow existing patterns.
- Where existing code has problems that affect the work (e.g., a file that's grown too large, unclear boundaries, tangled responsibilities), include targeted improvements as part of the design - the way a good developer improves code they're working in.
- Don't propose unrelated refactoring. Stay focused on what serves the current goal.

## After the Design

**Documentation:**
- Write the validated design (spec) to `docs/superpowers/specs/YYYY-MM-DD-<topic>-design.md`
- (User preferences for spec location override this default)
- Use elements-of-style:writing-clearly-and-concisely skill if available
- Commit the design document to git

**Spec Self-Review:**

After writing the spec document, look at it with fresh eyes:

1. **Placeholder scan:** Any "TBD", "TODO", incomplete sections, or vague requirements? Fix them.
2. **Internal consistency:** Do any sections contradict each other? Does the architecture match the feature descriptions?
3. **Scope check:** Is this focused enough for a single implementation plan, or does it need decomposition?
4. **Ambiguity check:** Could any requirement be interpreted two different ways? If so, pick one and make it explicit.

Fix any issues inline. No need to re-review — just fix and move on.

**User Review Gate:**

After the spec review loop passes, ask the user to review the written spec before proceeding:

> "Spec written and committed to `<path>`. Please review it and let me know if you want to make any changes before we start writing out the implementation plan."

Wait for the user's response. If they request changes, make them and re-run the spec review loop. Only proceed once the user approves.

**Implementation:**
- Invoke the writing-plans skill to create a detailed implementation plan
- Do NOT invoke any other skill. writing-plans is the next step.

## Key Principles

- **One question at a time** - Don't overwhelm with multiple questions
- **Multiple choice preferred** - Easier to answer than open-ended when possible
- **YAGNI ruthlessly** - Remove unnecessary features from all designs
- **Explore alternatives** - Always propose 2-3 approaches before settling
- **Incremental validation** - Present design, get approval before moving on
- **Be flexible** - Go back and clarify when something doesn't make sense

## Visual Companion

A browser-based companion for showing mockups, diagrams, and visual options during brainstorming. Available as a tool — not a mode. Accepting the companion means it's available for questions that benefit from visual treatment; it does NOT mean every question goes through the browser.

**Offering the companion:**

When you anticipate that upcoming questions will involve visual content (mockups, layouts, diagrams), offer it once for consent:

> "Some of what we're working on might be easier to explain if I can show it to you in a web browser. I can put together mockups, diagrams, comparisons, and other visuals as we go. This feature is still new and can be token-intensive. Want to try it? (Requires opening a local URL)"

**This offer MUST be its own message.** Do not combine it with clarifying questions, context summaries, or any other content. The message should contain ONLY the offer above and nothing else. Wait for the user's response before continuing. If they decline, proceed with text-only brainstorming.
**Per-question decision:**

Even after the user accepts, decide FOR EACH QUESTION whether to use the browser or the terminal. The test: **would the user understand this better by seeing it than reading it?**

- **Use the browser** for content that IS visual — mockups, wireframes, layout comparisons, architecture diagrams, side-by-side visual designs
- **Use the terminal** for content that is text — requirements questions, conceptual choices, tradeoff lists, A/B/C/D text options, scope decisions

A question about a UI topic is not automatically a visual question. "What does personality mean in this context?" is a conceptual question — use the terminal. "Which wizard layout works better?" is a visual question — use the browser.

If they agree to the companion, read the detailed guide before proceeding: `skills/brainstorming/visual-companion.md`

skills/finishing-a-development-branch/SKILL.md (7061 bytes)
---
name: finishing-a-development-branch
description: Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - guides completion of development work by presenting structured options for merge, PR, or cleanup
---

# Finishing a Development Branch

## Overview

Guide completion of development work by presenting clear options and handling chosen workflow.

**Core principle:** Verify tests → Detect environment → Present options → Execute choice → Clean up.

**Announce at start:** "I'm using the finishing-a-development-branch skill to complete this work."

## The Process

### Step 1: Verify Tests

**Before presenting options, verify tests pass:**

```bash
# Run project's test suite
npm test / cargo test / pytest / go test ./...
```

**If tests fail:**

```
Tests failing (<N> failures). Must fix before completing:
[Show failures]

Cannot proceed with merge/PR until tests pass.
```

Stop. Don't proceed to Step 2.

**If tests pass:** Continue to Step 2.

### Step 2: Detect Environment

**Determine workspace state before presenting options:**

```bash
GIT_DIR=$(cd "$(git rev-parse --git-dir)" 2>/dev/null && pwd -P)
GIT_COMMON=$(cd "$(git rev-parse --git-common-dir)" 2>/dev/null && pwd -P)
```

This determines which menu to show and how cleanup works:

| State | Menu | Cleanup |
|-------|------|---------|
| `GIT_DIR == GIT_COMMON` (normal repo) | Standard 4 options | No worktree to clean up |
| `GIT_DIR != GIT_COMMON`, named branch | Standard 4 options | Provenance-based (see Step 6) |
| `GIT_DIR != GIT_COMMON`, detached HEAD | Reduced 3 options (no merge) | No cleanup (externally managed) |

### Step 3: Determine Base Branch

```bash
# Try common base branches
git merge-base HEAD main 2>/dev/null || git merge-base HEAD master 2>/dev/null
```

Or ask: "This branch split from main - is that correct?"

### Step 4: Present Options

**Normal repo and named-branch worktree — present exactly these 4 options:**

```
Implementation complete. What would you like to do?

1. Merge back to <base-branch> locally
2. Push and create a Pull Request
3. Keep the branch as-is (I'll handle it later)
4. Discard this work

Which option?
```

**Detached HEAD — present exactly these 3 options:**

```
Implementation complete. You're on a detached HEAD (externally managed workspace).

1. Push as new branch and create a Pull Request
2. Keep as-is (I'll handle it later)
3. Discard this work

Which option?
```

**Don't add explanation** - keep options concise.

### Step 5: Execute Choice

#### Option 1: Merge Locally

```bash
# Get main repo root for CWD safety
MAIN_ROOT=$(git -C "$(git rev-parse --git-common-dir)/.." rev-parse --show-toplevel)
cd "$MAIN_ROOT"

# Merge first — verify success before removing anything
git checkout <base-branch>
git pull
git merge <feature-branch>

# Verify tests on merged result
<test command>

# Only after merge succeeds: cleanup worktree (Step 6), then delete branch
```

Then: Cleanup worktree (Step 6), then delete branch:

```bash
git branch -d <feature-branch>
```

#### Option 2: Push and Create PR

```bash
# Push branch
git push -u origin <feature-branch>

# Create PR
gh pr create --title "<title>" --body "$(cat <<'EOF'
## Summary
<2-3 bullets of what changed>

## Test Plan
- [ ] <verification steps>
EOF
)"
```

**Do NOT clean up worktree** — user needs it alive to iterate on PR feedback.

#### Option 3: Keep As-Is

Report: "Keeping branch <name>. Worktree preserved at <path>."

**Don't cleanup worktree.**

#### Option 4: Discard

**Confirm first:**

```
This will permanently delete:
- Branch <name>
- All commits: <commit-list>
- Worktree at <path>

Type 'discard' to confirm.
```

Wait for exact confirmation. If confirmed:

```bash
MAIN_ROOT=$(git -C "$(git rev-parse --git-common-dir)/.." rev-parse --show-toplevel)
cd "$MAIN_ROOT"
```

Then: Cleanup worktree (Step 6), then force-delete branch:

```bash
git branch -D <feature-branch>
```

### Step 6: Cleanup Workspace

**Only runs for Options 1 and 4.** Options 2 and 3 always preserve the worktree.

```bash
GIT_DIR=$(cd "$(git rev-parse --git-dir)" 2>/dev/null && pwd -P)
GIT_COMMON=$(cd "$(git rev-parse --git-common-dir)" 2>/dev/null && pwd -P)
WORKTREE_PATH=$(git rev-parse --show-toplevel)
```

**If `GIT_DIR == GIT_COMMON`:** Normal repo, no worktree to clean up. Done.

**If worktree path is under `.worktrees/`, `worktrees/`, or `~/.config/superpowers/worktrees/`:** Superpowers created this worktree — we own cleanup.

```bash
MAIN_ROOT=$(git -C "$(git rev-parse --git-common-dir)/.." rev-parse --show-toplevel)
cd "$MAIN_ROOT"
git worktree remove "$WORKTREE_PATH"
git worktree prune  # Self-healing: clean up any stale registrations
```

**Otherwise:** The host environment (harness) owns this workspace. Do NOT remove it. If your platform provides a workspace-exit tool, use it. Otherwise, leave the workspace in place.

## Quick Reference

| Option | Merge | Push | Keep Worktree | Cleanup Branch |
|--------|-------|------|---------------|----------------|
| 1. Merge locally | yes | - | - | yes |
| 2. Create PR | - | yes | yes | - |
| 3. Keep as-is | - | - | yes | - |
| 4. Discard | - | - | - | yes (force) |

## Common Mistakes

**Skipping test verification**
- **Problem:** Merge broken code, create failing PR
- **Fix:** Always verify tests before offering options

**Open-ended questions**
- **Problem:** "What should I do next?" is ambiguous
- **Fix:** Present exactly 4 structured options (or 3 for detached HEAD)

**Cleaning up worktree for Option 2**
- **Problem:** Remove worktree user needs for PR iteration
- **Fix:** Only cleanup for Options 1 and 4

**Deleting branch before removing worktree**
- **Problem:** `git branch -d` fails because worktree still references the branch
- **Fix:** Merge first, remove worktree, then delete branch

**Running git worktree remove from inside the worktree**
- **Problem:** Command fails silently when CWD is inside the worktree being removed
- **Fix:** Always `cd` to main repo root before `git worktree remove`

**Cleaning up harness-owned worktrees**
- **Problem:** Removing a worktree the harness created causes phantom state
- **Fix:** Only clean up worktrees under `.worktrees/`, `worktrees/`, or `~/.config/superpowers/worktrees/`

**No confirmation for discard**
- **Problem:** Accidentally delete work
- **Fix:** Require typed "discard" confirmation

## Red Flags

**Never:**
- Proceed with failing tests
- Merge without verifying tests on result
- Delete work without confirmation
- Force-push without explicit request
- Remove a worktree before confirming merge success
- Clean up worktrees you didn't create (provenance check)
- Run `git worktree remove` from inside the worktree

**Always:**
- Verify tests before offering options
- Detect environment before presenting menu
- Present exactly 4 options (or 3 for detached HEAD)
- Get typed confirmation for Option 4
- Clean up worktree for Options 1 & 4 only
- `cd` to main repo root before worktree removal
- Run `git worktree prune` after removal

skills/receiving-code-review/SKILL.md (6314 bytes)
---
name: receiving-code-review
description: Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation
---

# Code Review Reception

## Overview

Code review requires technical evaluation, not emotional performance.

**Core principle:** Verify before implementing. Ask before assuming. Technical correctness over social comfort.

## The Response Pattern

```
WHEN receiving code review feedback:
1. READ: Complete feedback without reacting
2. UNDERSTAND: Restate requirement in own words (or ask)
3. VERIFY: Check against codebase reality
4. EVALUATE: Technically sound for THIS codebase?
5. RESPOND: Technical acknowledgment or reasoned pushback
6. IMPLEMENT: One item at a time, test each
```

## Forbidden Responses

**NEVER:**
- "You're absolutely right!" (explicit CLAUDE.md violation)
- "Great point!" / "Excellent feedback!" (performative)
- "Let me implement that now" (before verification)

**INSTEAD:**
- Restate the technical requirement
- Ask clarifying questions
- Push back with technical reasoning if wrong
- Just start working (actions > words)

## Handling Unclear Feedback

```
IF any item is unclear:
  STOP - do not implement anything yet
  ASK for clarification on unclear items

WHY: Items may be related. Partial understanding = wrong implementation.
```

**Example:**

```
your human partner: "Fix 1-6"
You understand 1,2,3,6. Unclear on 4,5.

❌ WRONG: Implement 1,2,3,6 now, ask about 4,5 later
✅ RIGHT: "I understand items 1,2,3,6. Need clarification on 4 and 5 before proceeding."
```

## Source-Specific Handling

### From your human partner

- **Trusted** - implement after understanding
- **Still ask** if scope unclear
- **No performative agreement**
- **Skip to action** or technical acknowledgment

### From External Reviewers

```
BEFORE implementing:
1. Check: Technically correct for THIS codebase?
2. Check: Breaks existing functionality?
3. Check: Reason for current implementation?
4. Check: Works on all platforms/versions?
5. Check: Does reviewer understand full context?

IF suggestion seems wrong:
  Push back with technical reasoning

IF can't easily verify:
  Say so: "I can't verify this without [X]. Should I [investigate/ask/proceed]?"

IF conflicts with your human partner's prior decisions:
  Stop and discuss with your human partner first
```

**your human partner's rule:** "External feedback - be skeptical, but check carefully"

## YAGNI Check for "Professional" Features

```
IF reviewer suggests "implementing properly":
  grep codebase for actual usage
  IF unused: "This endpoint isn't called. Remove it (YAGNI)?"
  IF used: Then implement properly
```

**your human partner's rule:** "You and reviewer both report to me. If we don't need this feature, don't add it."

## Implementation Order

```
FOR multi-item feedback:
1. Clarify anything unclear FIRST
2. Then implement in this order:
   - Blocking issues (breaks, security)
   - Simple fixes (typos, imports)
   - Complex fixes (refactoring, logic)
3. Test each fix individually
4. Verify no regressions
```

## When To Push Back

Push back when:
- Suggestion breaks existing functionality
- Reviewer lacks full context
- Violates YAGNI (unused feature)
- Technically incorrect for this stack
- Legacy/compatibility reasons exist
- Conflicts with your human partner's architectural decisions

**How to push back:**
- Use technical reasoning, not defensiveness
- Ask specific questions
- Reference working tests/code
- Involve your human partner if architectural

**Signal if uncomfortable pushing back out loud:** "Strange things are afoot at the Circle K"

## Acknowledging Correct Feedback

When feedback IS correct:

```
✅ "Fixed. [Brief description of what changed]"
✅ "Good catch - [specific issue]. Fixed in [location]."
✅ [Just fix it and show in the code]

❌ "You're absolutely right!"
❌ "Great point!"
❌ "Thanks for catching that!"
❌ "Thanks for [anything]"
❌ ANY gratitude expression
```

**Why no thanks:** Actions speak. Just fix it. The code itself shows you heard the feedback.

**If you catch yourself about to write "Thanks":** DELETE IT. State the fix instead.

## Gracefully Correcting Your Pushback

If you pushed back and were wrong:

```
✅ "You were right - I checked [X] and it does [Y]. Implementing now."
✅ "Verified this and you're correct. My initial understanding was wrong because [reason]. Fixing."

❌ Long apology
❌ Defending why you pushed back
❌ Over-explaining
```

State the correction factually and move on.

## Common Mistakes

| Mistake | Fix |
|---------|-----|
| Performative agreement | State requirement or just act |
| Blind implementation | Verify against codebase first |
| Batch without testing | One at a time, test each |
| Assuming reviewer is right | Check if breaks things |
| Avoiding pushback | Technical correctness > comfort |
| Partial implementation | Clarify all items first |
| Can't verify, proceed anyway | State limitation, ask for direction |

## Real Examples

**Performative Agreement (Bad):**
```
Reviewer: "Remove legacy code"
❌ "You're absolutely right! Let me remove that..."
```

**Technical Verification (Good):**
```
Reviewer: "Remove legacy code"
✅ "Checking... build target is 10.15+, this API needs 13+. Need legacy for backward compat. Current impl has wrong bundle ID - fix it or drop pre-13 support?"
```

**YAGNI (Good):**
```
Reviewer: "Implement proper metrics tracking with database, date filters, CSV export"
✅ "Grepped codebase - nothing calls this endpoint. Remove it (YAGNI)? Or is there usage I'm missing?"
```

**Unclear Item (Good):**
```
your human partner: "Fix items 1-6"
You understand 1,2,3,6. Unclear on 4,5.
✅ "Understand 1,2,3,6. Need clarification on 4 and 5 before implementing."
```

## GitHub Thread Replies

When replying to inline review comments on GitHub, reply in the comment thread (`gh api repos/{owner}/{repo}/pulls/{pr}/comments/{id}/replies`), not as a top-level PR comment.

## The Bottom Line

**External feedback = suggestions to evaluate, not orders to follow.**

Verify. Question. Then implement. No performative agreement. Technical rigor always.

skills/dispatching-parallel-agents/SKILL.md (6441 bytes)
---
name: dispatching-parallel-agents
description: Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
---

# Dispatching Parallel Agents

## Overview

You delegate tasks to specialized agents with isolated context. By precisely crafting their instructions and context, you ensure they stay focused and succeed at their task. They should never inherit your session's context or history — you construct exactly what they need. This also preserves your own context for coordination work.

When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.

**Core principle:** Dispatch one agent per independent problem domain. Let them work concurrently.

## When to Use

```dot
digraph when_to_use {
    "Multiple failures?" [shape=diamond];
    "Are they independent?" [shape=diamond];
    "Single agent investigates all" [shape=box];
    "One agent per problem domain" [shape=box];
    "Can they work in parallel?" [shape=diamond];
    "Sequential agents" [shape=box];
    "Parallel dispatch" [shape=box];

    "Multiple failures?" -> "Are they independent?" [label="yes"];
    "Are they independent?" -> "Single agent investigates all" [label="no - related"];
    "Are they independent?" -> "Can they work in parallel?" [label="yes"];
    "Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
    "Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
```

**Use when:**
- 3+ test files failing with different root causes
- Multiple subsystems broken independently
- Each problem can be understood without context from others
- No shared state between investigations

**Don't use when:**
- Failures are related (fix one might fix others)
- Need to understand full system state
- Agents would interfere with each other

## The Pattern

### 1. Identify Independent Domains

Group failures by what's broken:
- File A tests: Tool approval flow
- File B tests: Batch completion behavior
- File C tests: Abort functionality

Each domain is independent - fixing tool approval doesn't affect abort tests.

### 2. Create Focused Agent Tasks

Each agent gets:
- **Specific scope:** One test file or subsystem
- **Clear goal:** Make these tests pass
- **Constraints:** Don't change other code
- **Expected output:** Summary of what you found and fixed

### 3. Dispatch in Parallel

```typescript
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
```

### 4. Review and Integrate

When agents return:
- Read each summary
- Verify fixes don't conflict
- Run full test suite
- Integrate all changes

## Agent Prompt Structure

Good agent prompts are:
1. **Focused** - One clear problem domain
2. **Self-contained** - All context needed to understand the problem
3. **Specific about output** - What should the agent return?

```markdown
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:

1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0

These are timing/race condition issues.

Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
   - Replacing arbitrary timeouts with event-based waiting
   - Fixing bugs in abort implementation if found
   - Adjusting test expectations if testing changed behavior

Do NOT just increase timeouts - find the real issue.

Return: Summary of what you found and what you fixed.
```

## Common Mistakes

**❌ Too broad:** "Fix all the tests" - agent gets lost
**✅ Specific:** "Fix agent-tool-abort.test.ts" - focused scope

**❌ No context:** "Fix the race condition" - agent doesn't know where
**✅ Context:** Paste the error messages and test names

**❌ No constraints:** Agent might refactor everything
**✅ Constraints:** "Do NOT change production code" or "Fix tests only"

**❌ Vague output:** "Fix it" - you don't know what changed
**✅ Specific:** "Return summary of root cause and changes"

## When NOT to Use

**Related failures:** Fixing one might fix others - investigate together first
**Need full context:** Understanding requires seeing entire system
**Exploratory debugging:** You don't know what's broken yet
**Shared state:** Agents would interfere (editing same files, using same resources)

## Real Example from Session

**Scenario:** 6 test failures across 3 files after major refactoring

**Failures:**
- agent-tool-abort.test.ts: 3 failures (timing issues)
- batch-completion-behavior.test.ts: 2 failures (tools not executing)
- tool-approval-race-conditions.test.ts: 1 failure (execution count = 0)

**Decision:** Independent domains - abort logic separate from batch completion separate from race conditions

**Dispatch:**
```
Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts
```

**Results:**
- Agent 1: Replaced timeouts with event-based waiting
- Agent 2: Fixed event structure bug (threadId in wrong place)
- Agent 3: Added wait for async tool execution to complete

**Integration:** All fixes independent, no conflicts, full suite green

**Time saved:** 3 problems solved in parallel vs sequentially

## Key Benefits

1. **Parallelization** - Multiple investigations happen simultaneously
2. **Focus** - Each agent has narrow scope, less context to track
3. **Independence** - Agents don't interfere with each other
4. **Speed** - 3 problems solved in time of 1

## Verification

After agents return:
1. **Review each summary** - Understand what changed
2. **Check for conflicts** - Did agents edit same code?
3. **Run full suite** - Verify all fixes work together
4. **Spot check** - Agents can make systematic errors

## Real-World Impact

From debugging session (2025-10-03):
- 6 failures across 3 files
- 3 agents dispatched in parallel
- All investigations completed concurrently
- All fixes integrated successfully
- Zero conflicts between agent changes

skills/executing-plans/SKILL.md (2469 bytes)
---
name: executing-plans
description: Use when you have a written implementation plan to execute in a separate session with review checkpoints
---

# Executing Plans

## Overview

Load plan, review critically, execute all tasks, report when complete.

**Announce at start:** "I'm using the executing-plans skill to implement this plan."

**Note:** Tell your human partner that Superpowers works much better with access to subagents. The quality of its work will be significantly higher if run on a platform with subagent support (such as Claude Code or Codex). If subagents are available, use superpowers:subagent-driven-development instead of this skill.

## The Process

### Step 1: Load and Review Plan

1. Read plan file
2. Review critically - identify any questions or concerns about the plan
3. If concerns: Raise them with your human partner before starting
4. If no concerns: Create TodoWrite and proceed

### Step 2: Execute Tasks

For each task:
1. Mark as in_progress
2. Follow each step exactly (plan has bite-sized steps)
3. Run verifications as specified
4. Mark as completed

### Step 3: Complete Development

After all tasks complete and verified:
- Announce: "I'm using the finishing-a-development-branch skill to complete this work."
- **REQUIRED SUB-SKILL:** Use superpowers:finishing-a-development-branch
- Follow that skill to verify tests, present options, execute choice

## When to Stop and Ask for Help

**STOP executing immediately when:**
- Hit a blocker (missing dependency, test fails, instruction unclear)
- Plan has critical gaps preventing starting
- You don't understand an instruction
- Verification fails repeatedly

**Ask for clarification rather than guessing.**

## When to Revisit Earlier Steps

**Return to Review (Step 1) when:**
- Partner updates the plan based on your feedback
- Fundamental approach needs rethinking

**Don't force through blockers** - stop and ask.
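The run-verifications-then-stop behavior of Step 2 can be sketched in shell. The verification commands below are placeholders; a real plan supplies its own checks per task:

```shell
# Sketch of Step 2: run each task's verification as specified and stop at
# the first failure rather than forcing through a blocker.
# The commands in `verifications` are illustrative placeholders.
verifications=("true" "test 1 -eq 1")
status=passed
for v in "${verifications[@]}"; do
  if ! eval "$v"; then
    echo "Blocked on: $v — stopping to ask rather than guessing" >&2
    status=blocked
    break
  fi
done
echo "verifications: $status"
```

The `break` mirrors the skill's "stop executing immediately" rule: once a verification fails, no later task should run until the blocker is resolved with your human partner.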
## Remember

- Review plan critically first
- Follow plan steps exactly
- Don't skip verifications
- Reference skills when plan says to
- Stop when blocked, don't guess
- Never start implementation on main/master branch without explicit user consent

## Integration

**Required workflow skills:**

- **superpowers:using-git-worktrees** - Ensures isolated workspace (creates one or verifies existing)
- **superpowers:writing-plans** - Creates the plan this skill executes
- **superpowers:finishing-a-development-branch** - Complete development after all tasks
---
name: requesting-code-review
description: Use when completing tasks, implementing major features, or before merging to verify work meets requirements
---

# Requesting Code Review

Dispatch a code reviewer subagent to catch issues before they cascade. The reviewer gets precisely crafted context for evaluation — never your session's history. This keeps the reviewer focused on the work product, not your thought process, and preserves your own context for continued work.

**Core principle:** Review early, review often.

## When to Request Review

**Mandatory:**

- After each task in subagent-driven development
- After completing major feature
- Before merge to main

**Optional but valuable:**

- When stuck (fresh perspective)
- Before refactoring (baseline check)
- After fixing complex bug

## How to Request

**1. Get git SHAs:**

```bash
BASE_SHA=$(git rev-parse HEAD~1)  # or origin/main
HEAD_SHA=$(git rev-parse HEAD)
```

**2. Dispatch code reviewer subagent:**

Use Task tool with `general-purpose` type, fill template at `code-reviewer.md`

**Placeholders:**

- `{DESCRIPTION}` - Brief summary of what you built
- `{PLAN_OR_REQUIREMENTS}` - What it should do
- `{BASE_SHA}` - Starting commit
- `{HEAD_SHA}` - Ending commit

**3. Act on feedback:**

- Fix Critical issues immediately
- Fix Important issues before proceeding
- Note Minor issues for later
- Push back if reviewer is wrong (with reasoning)

## Example

```
[Just completed Task 2: Add verification function]

You: Let me request code review before proceeding.

BASE_SHA=$(git log --oneline | grep "Task 1" | head -1 | awk '{print $1}')
HEAD_SHA=$(git rev-parse HEAD)

[Dispatch code reviewer subagent]
DESCRIPTION: Added verifyIndex() and repairIndex() with 4 issue types
PLAN_OR_REQUIREMENTS: Task 2 from docs/superpowers/plans/deployment-plan.md
BASE_SHA: a7981ec
HEAD_SHA: 3df7661

[Subagent returns]:
Strengths: Clean architecture, real tests
Issues:
  Important: Missing progress indicators
  Minor: Magic number (100) for reporting interval
Assessment: Ready to proceed

You: [Fix progress indicators]
[Continue to Task 3]
```

## Integration with Workflows

**Subagent-Driven Development:**

- Review after EACH task
- Catch issues before they compound
- Fix before moving to next task

**Executing Plans:**

- Review after each task or at natural checkpoints
- Get feedback, apply, continue

**Ad-Hoc Development:**

- Review before merge
- Review when stuck

## Red Flags

**Never:**

- Skip review because "it's simple"
- Ignore Critical issues
- Proceed with unfixed Important issues
- Argue with valid technical feedback

**If reviewer wrong:**

- Push back with technical reasoning
- Show code/tests that prove it works
- Request clarification

See template at: requesting-code-review/code-reviewer.md
--- name: subagent-driven-development description: Use when executing implementation plans with independent tasks in the current session --- # Subagent-Driven Development Execute plan by dispatching fresh subagent per task, with two-stage review after each: spec compliance review first, then code quality review. **Why subagents:** You delegate tasks to specialized agents with isolated context. By precisely crafting their instructions and context, you ensure they stay focused and succeed at their task. They should never inherit your session's context or history — you construct exactly what they need. This also preserves your own context for coordination work. **Core principle:** Fresh subagent per task + two-stage review (spec then quality) = high quality, fast iteration **Continuous execution:** Do not pause to check in with your human partner between tasks. Execute all tasks from the plan without stopping. The only reasons to stop are: BLOCKED status you cannot resolve, ambiguity that genuinely prevents progress, or all tasks complete. "Should I continue?" prompts and progress summaries waste their time — they asked you to execute the plan, so execute it. ## When to Use ```dot digraph when_to_use { "Have implementation plan?" [shape=diamond]; "Tasks mostly independent?" [shape=diamond]; "Stay in this session?" [shape=diamond]; "subagent-driven-development" [shape=box]; "executing-plans" [shape=box]; "Manual execution or brainstorm first" [shape=box]; "Have implementation plan?" -> "Tasks mostly independent?" [label="yes"]; "Have implementation plan?" -> "Manual execution or brainstorm first" [label="no"]; "Tasks mostly independent?" -> "Stay in this session?" [label="yes"]; "Tasks mostly independent?" -> "Manual execution or brainstorm first" [label="no - tightly coupled"]; "Stay in this session?" -> "subagent-driven-development" [label="yes"]; "Stay in this session?" -> "executing-plans" [label="no - parallel session"]; } ``` **vs. 
Executing Plans (parallel session):** - Same session (no context switch) - Fresh subagent per task (no context pollution) - Two-stage review after each task: spec compliance first, then code quality - Faster iteration (no human-in-loop between tasks) ## The Process ```dot digraph process { rankdir=TB; subgraph cluster_per_task { label="Per Task"; "Dispatch implementer subagent (./implementer-prompt.md)" [shape=box]; "Implementer subagent asks questions?" [shape=diamond]; "Answer questions, provide context" [shape=box]; "Implementer subagent implements, tests, commits, self-reviews" [shape=box]; "Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)" [shape=box]; "Spec reviewer subagent confirms code matches spec?" [shape=diamond]; "Implementer subagent fixes spec gaps" [shape=box]; "Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" [shape=box]; "Code quality reviewer subagent approves?" [shape=diamond]; "Implementer subagent fixes quality issues" [shape=box]; "Mark task complete in TodoWrite" [shape=box]; } "Read plan, extract all tasks with full text, note context, create TodoWrite" [shape=box]; "More tasks remain?" [shape=diamond]; "Dispatch final code reviewer subagent for entire implementation" [shape=box]; "Use superpowers:finishing-a-development-branch" [shape=box style=filled fillcolor=lightgreen]; "Read plan, extract all tasks with full text, note context, create TodoWrite" -> "Dispatch implementer subagent (./implementer-prompt.md)"; "Dispatch implementer subagent (./implementer-prompt.md)" -> "Implementer subagent asks questions?"; "Implementer subagent asks questions?" -> "Answer questions, provide context" [label="yes"]; "Answer questions, provide context" -> "Dispatch implementer subagent (./implementer-prompt.md)"; "Implementer subagent asks questions?" 
-> "Implementer subagent implements, tests, commits, self-reviews" [label="no"]; "Implementer subagent implements, tests, commits, self-reviews" -> "Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)"; "Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)" -> "Spec reviewer subagent confirms code matches spec?"; "Spec reviewer subagent confirms code matches spec?" -> "Implementer subagent fixes spec gaps" [label="no"]; "Implementer subagent fixes spec gaps" -> "Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)" [label="re-review"]; "Spec reviewer subagent confirms code matches spec?" -> "Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" [label="yes"]; "Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" -> "Code quality reviewer subagent approves?"; "Code quality reviewer subagent approves?" -> "Implementer subagent fixes quality issues" [label="no"]; "Implementer subagent fixes quality issues" -> "Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" [label="re-review"]; "Code quality reviewer subagent approves?" -> "Mark task complete in TodoWrite" [label="yes"]; "Mark task complete in TodoWrite" -> "More tasks remain?"; "More tasks remain?" -> "Dispatch implementer subagent (./implementer-prompt.md)" [label="yes"]; "More tasks remain?" -> "Dispatch final code reviewer subagent for entire implementation" [label="no"]; "Dispatch final code reviewer subagent for entire implementation" -> "Use superpowers:finishing-a-development-branch"; } ``` ## Model Selection Use the least powerful model that can handle each role to conserve cost and increase speed. **Mechanical implementation tasks** (isolated functions, clear specs, 1-2 files): use a fast, cheap model. Most implementation tasks are mechanical when the plan is well-specified. **Integration and judgment tasks** (multi-file coordination, pattern matching, debugging): use a standard model. 
**Architecture, design, and review tasks**: use the most capable available model. **Task complexity signals:** - Touches 1-2 files with a complete spec → cheap model - Touches multiple files with integration concerns → standard model - Requires design judgment or broad codebase understanding → most capable model ## Handling Implementer Status Implementer subagents report one of four statuses. Handle each appropriately: **DONE:** Proceed to spec compliance review. **DONE_WITH_CONCERNS:** The implementer completed the work but flagged doubts. Read the concerns before proceeding. If the concerns are about correctness or scope, address them before review. If they're observations (e.g., "this file is getting large"), note them and proceed to review. **NEEDS_CONTEXT:** The implementer needs information that wasn't provided. Provide the missing context and re-dispatch. **BLOCKED:** The implementer cannot complete the task. Assess the blocker: 1. If it's a context problem, provide more context and re-dispatch with the same model 2. If the task requires more reasoning, re-dispatch with a more capable model 3. If the task is too large, break it into smaller pieces 4. If the plan itself is wrong, escalate to the human **Never** ignore an escalation or force the same model to retry without changes. If the implementer said it's stuck, something needs to change. ## Prompt Templates - `./implementer-prompt.md` - Dispatch implementer subagent - `./spec-reviewer-prompt.md` - Dispatch spec compliance reviewer subagent - `./code-quality-reviewer-prompt.md` - Dispatch code quality reviewer subagent ## Example Workflow ``` You: I'm using Subagent-Driven Development to execute this plan. 
[Read plan file once: docs/superpowers/plans/feature-plan.md] [Extract all 5 tasks with full text and context] [Create TodoWrite with all tasks] Task 1: Hook installation script [Get Task 1 text and context (already extracted)] [Dispatch implementation subagent with full task text + context] Implementer: "Before I begin - should the hook be installed at user or system level?" You: "User level (~/.config/superpowers/hooks/)" Implementer: "Got it. Implementing now..." [Later] Implementer: - Implemented install-hook command - Added tests, 5/5 passing - Self-review: Found I missed --force flag, added it - Committed [Dispatch spec compliance reviewer] Spec reviewer: ✅ Spec compliant - all requirements met, nothing extra [Get git SHAs, dispatch code quality reviewer] Code reviewer: Strengths: Good test coverage, clean. Issues: None. Approved. [Mark Task 1 complete] Task 2: Recovery modes [Get Task 2 text and context (already extracted)] [Dispatch implementation subagent with full task text + context] Implementer: [No questions, proceeds] Implementer: - Added verify/repair modes - 8/8 tests passing - Self-review: All good - Committed [Dispatch spec compliance reviewer] Spec reviewer: ❌ Issues: - Missing: Progress reporting (spec says "report every 100 items") - Extra: Added --json flag (not requested) [Implementer fixes issues] Implementer: Removed --json flag, added progress reporting [Spec reviewer reviews again] Spec reviewer: ✅ Spec compliant now [Dispatch code quality reviewer] Code reviewer: Strengths: Solid. Issues (Important): Magic number (100) [Implementer fixes] Implementer: Extracted PROGRESS_INTERVAL constant [Code reviewer reviews again] Code reviewer: ✅ Approved [Mark Task 2 complete] ... [After all tasks] [Dispatch final code-reviewer] Final reviewer: All requirements met, ready to merge Done! ``` ## Advantages **vs. 
Manual execution:** - Subagents follow TDD naturally - Fresh context per task (no confusion) - Parallel-safe (subagents don't interfere) - Subagent can ask questions (before AND during work) **vs. Executing Plans:** - Same session (no handoff) - Continuous progress (no waiting) - Review checkpoints automatic **Efficiency gains:** - No file reading overhead (controller provides full text) - Controller curates exactly what context is needed - Subagent gets complete information upfront - Questions surfaced before work begins (not after) **Quality gates:** - Self-review catches issues before handoff - Two-stage review: spec compliance, then code quality - Review loops ensure fixes actually work - Spec compliance prevents over/under-building - Code quality ensures implementation is well-built **Cost:** - More subagent invocations (implementer + 2 reviewers per task) - Controller does more prep work (extracting all tasks upfront) - Review loops add iterations - But catches issues early (cheaper than debugging later) ## Red Flags **Never:** - Start implementation on main/master branch without explicit user consent - Skip reviews (spec compliance OR code quality) - Proceed with unfixed issues - Dispatch multiple implementation subagents in parallel (conflicts) - Make subagent read plan file (provide full text instead) - Skip scene-setting context (subagent needs to understand where task fits) - Ignore subagent questions (answer before letting them proceed) - Accept "close enough" on spec compliance (spec reviewer found issues = not done) - Skip review loops (reviewer found issues = implementer fixes = review again) - Let implementer self-review replace actual review (both are needed) - **Start code quality review before spec compliance is ✅** (wrong order) - Move to next task while either review has open issues **If subagent asks questions:** - Answer clearly and completely - Provide additional context if needed - Don't rush them into implementation **If reviewer finds 
issues:**

- Implementer (same subagent) fixes them
- Reviewer reviews again
- Repeat until approved
- Don't skip the re-review

**If subagent fails task:**

- Dispatch fix subagent with specific instructions
- Don't try to fix manually (context pollution)

## Integration

**Required workflow skills:**

- **superpowers:using-git-worktrees** - Ensures isolated workspace (creates one or verifies existing)
- **superpowers:writing-plans** - Creates the plan this skill executes
- **superpowers:requesting-code-review** - Code review template for reviewer subagents
- **superpowers:finishing-a-development-branch** - Complete development after all tasks

**Subagents should use:**

- **superpowers:test-driven-development** - Subagents follow TDD for each task

**Alternative workflow:**

- **superpowers:executing-plans** - Use for parallel session instead of same-session execution.

.claude-plugin/marketplace.json
{
  "name": "superpowers-dev",
  "description": "Development marketplace for Superpowers core skills library",
  "owner": {
    "name": "Jesse Vincent",
    "email": "jesse@fsck.com"
  },
  "plugins": [
    {
      "name": "superpowers",
      "description": "Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques",
      "version": "5.1.0",
      "source": "./",
      "author": {
        "name": "Jesse Vincent",
        "email": "jesse@fsck.com"
      }
    }
  ]
}
README
Superpowers
Superpowers is a complete software development methodology for your coding agents, built on top of a set of composable skills and some initial instructions that make sure your agent uses them.
Quickstart
Give your agent Superpowers. Jump to the installation instructions for your harness: Claude Code, Codex CLI, Codex App, Factory Droid, Gemini CLI, OpenCode, Cursor, or GitHub Copilot CLI.
How it works
It starts from the moment you fire up your coding agent. As soon as it sees that you're building something, it doesn't just jump into trying to write code. Instead, it steps back and asks you what you're really trying to do.
Once it's teased a spec out of the conversation, it shows it to you in chunks short enough to actually read and digest.
After you've signed off on the design, your agent puts together an implementation plan that's clear enough for an enthusiastic junior engineer with poor taste, no judgement, no project context, and an aversion to testing to follow. It emphasizes true red/green TDD, YAGNI (You Aren't Gonna Need It), and DRY.
Next up, once you say "go", it launches a subagent-driven-development process, having agents work through each engineering task, inspecting and reviewing their work, and continuing forward. It's not uncommon for Claude to be able to work autonomously for a couple hours at a time without deviating from the plan you put together.
There's a bunch more to it, but that's the core of the system. And because the skills trigger automatically, you don't need to do anything special. Your coding agent just has Superpowers.
Sponsorship
If Superpowers has helped you do stuff that makes money and you are so inclined, I'd greatly appreciate it if you'd consider sponsoring my open-source work.
Thanks!
- Jesse
Installation
Installation differs by harness. If you use more than one, install Superpowers separately for each one.
Claude Code
Superpowers is available via the official Claude plugin marketplace.

Official Marketplace

- Install the plugin from Anthropic's official marketplace:

  /plugin install superpowers@claude-plugins-official

Superpowers Marketplace

The Superpowers marketplace provides Superpowers and some other related plugins for Claude Code.

- Register the marketplace:

  /plugin marketplace add obra/superpowers-marketplace

- Install the plugin from this marketplace:

  /plugin install superpowers@superpowers-marketplace
Codex CLI
Superpowers is available via the official Codex plugin marketplace.
- Open the plugin search interface:

  /plugins

- Search for Superpowers:

  superpowers

- Select Install Plugin.
Codex App
Superpowers is available via the official Codex plugin marketplace.
- In the Codex app, click on Plugins in the sidebar.
- You should see Superpowers in the Coding section.
- Click the + next to Superpowers and follow the prompts.
Factory Droid
- Register the marketplace:

  droid plugin marketplace add https://github.com/obra/superpowers

- Install the plugin:

  droid plugin install superpowers@superpowers
Gemini CLI
- Install the extension:

  gemini extensions install https://github.com/obra/superpowers

- Update later:

  gemini extensions update superpowers
OpenCode
OpenCode uses its own plugin install; install Superpowers separately even if you already use it in another harness.
- Tell OpenCode:

  Fetch and follow instructions from https://raw.githubusercontent.com/obra/superpowers/refs/heads/main/.opencode/INSTALL.md

- Detailed docs: docs/README.opencode.md
Cursor
- In Cursor Agent chat, install from the marketplace:

  /add-plugin superpowers

- Or search for "superpowers" in the plugin marketplace.
GitHub Copilot CLI
- Register the marketplace:

  copilot plugin marketplace add obra/superpowers-marketplace

- Install the plugin:

  copilot plugin install superpowers@superpowers-marketplace
The Basic Workflow
- brainstorming - Activates before writing code. Refines rough ideas through questions, explores alternatives, presents design in sections for validation. Saves design document.
- using-git-worktrees - Activates after design approval. Creates isolated workspace on new branch, runs project setup, verifies clean test baseline.
- writing-plans - Activates with approved design. Breaks work into bite-sized tasks (2-5 minutes each). Every task has exact file paths, complete code, verification steps.
- subagent-driven-development or executing-plans - Activates with plan. Dispatches fresh subagent per task with two-stage review (spec compliance, then code quality), or executes in batches with human checkpoints.
- test-driven-development - Activates during implementation. Enforces RED-GREEN-REFACTOR: write failing test, watch it fail, write minimal code, watch it pass, commit. Deletes code written before tests.
- requesting-code-review - Activates between tasks. Reviews against plan, reports issues by severity. Critical issues block progress.
- finishing-a-development-branch - Activates when tasks complete. Verifies tests, presents options (merge/PR/keep/discard), cleans up worktree.
The agent checks for relevant skills before any task. These are mandatory workflows, not suggestions.
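The RED-GREEN half of the test-driven-development cycle can be illustrated with a minimal sketch; the `slugify` function and its test are hypothetical examples, not part of Superpowers:

```python
# RED: the test is written first. Running it before slugify exists
# fails with a NameError - that failure is what you must watch happen.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# GREEN: the minimal implementation that makes the test pass,
# with no speculative extras (YAGNI).
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")

test_slugify()  # passes; now commit, then refactor if needed
```

The discipline is in the ordering: if the code had been written first, you would never have seen the test fail, so you would not know the test can catch a regression.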
What's Inside
Skills Library
Testing
- test-driven-development - RED-GREEN-REFACTOR cycle (includes testing anti-patterns reference)
Debugging
- systematic-debugging - 4-phase root cause process (includes root-cause-tracing, defense-in-depth, condition-based-waiting techniques)
- verification-before-completion - Ensure it's actually fixed
Collaboration
- brainstorming - Socratic design refinement
- writing-plans - Detailed implementation plans
- executing-plans - Batch execution with checkpoints
- dispatching-parallel-agents - Concurrent subagent workflows
- requesting-code-review - Pre-review checklist
- receiving-code-review - Responding to feedback
- using-git-worktrees - Parallel development branches
- finishing-a-development-branch - Merge/PR decision workflow
- subagent-driven-development - Fast iteration with two-stage review (spec compliance, then code quality)
Meta
- writing-skills - Create new skills following best practices (includes testing methodology)
- using-superpowers - Introduction to the skills system
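The using-git-worktrees flow can be sketched with stock git commands; the throwaway repo and the `feature-x` branch name are hypothetical, and real project setup and test commands are project-specific:

```shell
set -e
# Throwaway demo repo so the commands below are runnable anywhere (hypothetical)
tmp=$(mktemp -d)
git init -q "$tmp/project" && cd "$tmp/project"
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"

# Isolated workspace on a new branch, leaving the main checkout untouched
git worktree add ../feature-x -b feature-x
cd ../feature-x
# ...run project setup and verify a clean test baseline here...
git worktree list   # shows the original checkout plus the new worktree
```

Because each worktree is a separate directory sharing one object store, parallel branches never clobber each other's working files.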
Philosophy
- Test-Driven Development - Write tests first, always
- Systematic over ad-hoc - Process over guessing
- Complexity reduction - Simplicity as primary goal
- Evidence over claims - Verify before declaring success
Read the original release announcement.
Contributing
The general contribution process for Superpowers is below. Keep in mind that we don't generally accept contributions of new skills and that any updates to skills must work across all of the coding agents we support.
- Fork the repository
- Switch to the 'dev' branch
- Create a branch for your work
- Follow the writing-skills skill for creating and testing new and modified skills
- Submit a PR, being sure to fill in the pull request template.
See skills/writing-skills/SKILL.md for the complete guide.
Updating
Superpowers updates are somewhat coding-agent dependent, but are often automatic.
License
MIT License - see LICENSE file for details
Community
Superpowers is built by Jesse Vincent and the rest of the folks at Prime Radiant.
- Discord: Join us for community support, questions, and sharing what you're building with Superpowers
- Issues: https://github.com/obra/superpowers/issues
- Release announcements: Sign up to get notified about new versions