Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
01 / Skill
deanpeters

Product-Manager-Skills

Quality
9.0

This repository offers a collection of 47 battle-tested product management frameworks and 6 command workflows designed to enhance both human PMs and AI agents. It helps product managers frame problems, identify opportunities, scaffold validation experiments, and quickly iterate on product ideas using established methodologies.

USP

Unlike generic AI tools, this artifact explicitly teaches users the "why" behind PM frameworks while enabling AI agents to execute the "how" with battle-tested methodologies from industry leaders.

Use cases

  • Framing product problems and identifying opportunities
  • Scaffolding product validation experiments
  • Conducting comprehensive company research
  • Diagnosing SaaS business health
  • Evaluating acquisition channels

Detected files (8)

  • skills/context-engineering-advisor/SKILL.md (31771 bytes)
    ---
    name: context-engineering-advisor
    description: Diagnose context stuffing vs. context engineering. Use when an AI workflow feels bloated, brittle, or hard to steer reliably.
    intent: >-
      Guide product managers through diagnosing whether they're doing **context stuffing** (jamming volume without intent) or **context engineering** (shaping structure for attention). Use this to identify context boundaries, fix "Context Hoarding Disorder," and implement tactical practices like bounded domains, episodic retrieval, and the Research→Plan→Reset→Implement cycle.
    type: interactive
    theme: ai-agents
    best_for:
      - "Diagnosing context stuffing vs. context engineering in your AI workflows"
      - "Building better memory and retrieval architecture for AI agents"
      - "Improving AI output quality through structured context design"
    scenarios:
      - "My AI outputs are mediocre even though I'm giving it lots of information — diagnose what's wrong"
      - "I want to architect context properly for a multi-step AI workflow in my product team"
    estimated_time: "15-20 min"
    ---
    
    ## Purpose
    
    Guide product managers through diagnosing whether they're doing **context stuffing** (jamming volume without intent) or **context engineering** (shaping structure for attention). Use this to identify context boundaries, fix "Context Hoarding Disorder," and implement tactical practices like bounded domains, episodic retrieval, and the Research→Plan→Reset→Implement cycle.
    
    **Key Distinction:** Context stuffing assumes volume = quality ("paste the entire PRD"). Context engineering treats AI attention as a scarce resource and allocates it deliberately.
    
    This is not about prompt writing—it's about **designing the information architecture** that grounds AI in reality without overwhelming it with noise.
    
    ## Key Concepts
    
    ### The Paradigm Shift: Parametric → Contextual Intelligence
    
    **The Fundamental Problem:**
    - LLMs have **parametric knowledge** (encoded during training) = static, outdated, non-attributable
    - When asked about proprietary data, real-time info, or user preferences → forced to hallucinate or admit ignorance
    - **Context engineering** bridges the gap between static training and dynamic reality
    
    **PM's Role Shift:** From feature builder → **architect of informational ecosystems** that ground AI in reality
    
    ---
    
    ### Context Stuffing vs. Context Engineering
    
    | Dimension | Context Stuffing | Context Engineering |
    |-----------|------------------|---------------------|
    | **Mindset** | Volume = quality | Structure = quality |
    | **Approach** | "Add everything just in case" | "What decision am I making?" |
    | **Persistence** | Persist all context | Retrieve with intent |
    | **Agent Chains** | Share everything between agents | Bounded context per agent |
    | **Failure Response** | Retry until it works | Fix the structure |
    | **Economic Model** | Context as storage | Context as attention (scarce resource) |
    
    **Critical Metaphor:** Context stuffing is like bringing your entire file cabinet to a meeting. Context engineering is bringing only the 3 documents relevant to today's decision.
    
    ---
    
    ### The Anti-Pattern: Context Stuffing
    
    **Five Markers of Context Stuffing:**
    1. **Reflexively expanding context windows** — "Just add more tokens!"
    2. **Persisting everything "just in case"** — No clear retention criteria
    3. **Chaining agents without boundaries** — Agent A passes everything to Agent B to Agent C
    4. **Adding evaluations to mask inconsistency** — "We'll just retry until it's right"
    5. **Normalized retries** — "It works if you run it 3 times" becomes acceptable
    
    **Why It Fails:**
    - **Reasoning Noise:** Thousands of irrelevant files compete for attention, degrading multi-hop logic
    - **Context Rot:** Dead ends, past errors, irrelevant data accumulate → goal drift
    - **Lost in the Middle:** Models prioritize beginning (primacy) and end (recency), ignore middle
    - **Economic Waste:** Every query becomes expensive without accuracy gains
    - **Quantitative Degradation:** Accuracy drops below 20% when context exceeds ~32k tokens
    
    **The Hidden Costs:**
    - Escalating token consumption
    - Diluted attention across irrelevant material
    - Reduced output confidence
    - Cascading retries that waste time and money
    
    ---
    
    ### Real Context Engineering: Core Principles
    
    **Five Foundational Principles:**
    1. **Context without shape becomes noise**
    2. **Structure > Volume**
    3. **Retrieve with intent, not completeness**
    4. **Small working contexts** (like short-term memory)
    5. **Context Compaction:** Maximize density of relevant information per token
    
    **Quantitative Framework:**
    ```
    Efficiency = (Accuracy × Coherence) / (Tokens × Latency)
    ```
    
    **Key Finding:** Using RAG with 25% of available tokens preserves 95% accuracy while significantly reducing latency and cost.
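
    To make the ratio concrete, here is a minimal Python sketch of how a team might score prompt variants against it. The numbers and the measurement setup are illustrative assumptions, not part of the framework.

    ```python
    from dataclasses import dataclass

    @dataclass
    class RunStats:
        accuracy: float   # fraction of eval cases passed, 0..1
        coherence: float  # judged coherence score, 0..1
        tokens: int       # total tokens consumed (prompt + completion)
        latency_s: float  # wall-clock seconds per query

    def context_efficiency(run: RunStats) -> float:
        """Efficiency = (Accuracy x Coherence) / (Tokens x Latency).

        Raw units are arbitrary; the score only means something when
        comparing variants measured the same way.
        """
        return (run.accuracy * run.coherence) / (run.tokens * run.latency_s)

    # Compare a stuffed prompt against an engineered one (toy numbers).
    stuffed = RunStats(accuracy=0.72, coherence=0.60, tokens=90_000, latency_s=14.0)
    engineered = RunStats(accuracy=0.70, coherence=0.85, tokens=18_000, latency_s=4.5)

    print(context_efficiency(engineered) / context_efficiency(stuffed))  # ~21x
    ```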
    
    ---
    
    ### The 5 Diagnostic Questions (Detect Context Hoarding Disorder)
    
    Ask these to identify context stuffing:
    
    1. **What specific decision does this support?** — If you can't answer, you don't need it
    2. **Can retrieval replace persistence?** — Just-in-time beats always-available
    3. **Who owns the context boundary?** — If no one, it'll grow forever
    4. **What fails if we exclude this?** — If nothing breaks, delete it
    5. **Are we fixing structure or avoiding it?** — Stuffing context often masks bad information architecture
    
    ---
    
    ### Memory Architecture: Two-Layer System
    
    **Short-Term (Conversational) Memory:**
    - Immediate interaction history for follow-up questions
    - Challenge: Space management → older parts summarized or truncated
    - Lifespan: Single session
    
    **Long-Term (Persistent) Memory:**
    - User preferences, key facts across sessions → deep personalization
    - Implemented via vector database (semantic retrieval)
    - Two types:
      - **Declarative Memory:** Facts ("I'm vegan")
      - **Procedural Memory:** Behavioral patterns ("I debug by checking logs first")
    - Lifespan: Persistent across sessions
    
    **LLM-Powered ETL:** Models generate their own memories by identifying signals, consolidating with existing data, updating database automatically.
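
    A minimal sketch of this two-layer design, assuming a toy in-memory store in place of a real vector database (the class and method names are hypothetical):

    ```python
    from dataclasses import dataclass

    @dataclass
    class MemoryEntry:
        kind: str     # "declarative" (facts) or "procedural" (behavioral patterns)
        content: str

    class TwoLayerMemory:
        """Toy version of the two-layer architecture described above."""

        def __init__(self, short_term_limit: int = 20):
            self.short_term: list[str] = []          # single-session conversational turns
            self.long_term: list[MemoryEntry] = []   # persistent facts and patterns
            self.short_term_limit = short_term_limit

        def add_turn(self, turn: str) -> None:
            self.short_term.append(turn)
            if len(self.short_term) > self.short_term_limit:
                # Space management: drop (in practice, summarize) the oldest turns.
                self.short_term = self.short_term[-self.short_term_limit:]

        def remember(self, kind: str, content: str) -> None:
            self.long_term.append(MemoryEntry(kind, content))

        def retrieve(self, query: str, k: int = 3) -> list[MemoryEntry]:
            # Stand-in for semantic search: naive keyword-overlap scoring.
            scored = [(sum(w in e.content.lower() for w in query.lower().split()), e)
                      for e in self.long_term]
            return [e for score, e in sorted(scored, key=lambda s: -s[0])[:k] if score > 0]

    memory = TwoLayerMemory()
    memory.remember("declarative", "We follow HIPAA regulations")
    memory.remember("procedural", "Always validate feasibility before usability")
    print(memory.retrieve("which regulations apply to this feature?"))
    ```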
    
    ---
    
    ### The Research → Plan → Reset → Implement Cycle
    
    **The Context Rot Solution:**
    
    1. **Research:** Agent gathers data → large, chaotic context window (noise + dead ends)
    2. **Plan:** Agent synthesizes into high-density SPEC.md or PLAN.md (Source of Truth)
    3. **Reset:** **Clear entire context window** (prevents context rot)
    4. **Implement:** Fresh session using **only** the high-density plan as context
    
    **Why This Works:** Context rot is eliminated; agent starts clean with compressed, high-signal context.
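
    A minimal sketch of the cycle as code, assuming a generic `run_agent(messages)` chat-completion helper (a hypothetical stand-in for whatever API you use):

    ```python
    def research_plan_reset_implement(task: str, run_agent) -> str:
        # Phase 1: Research -- context is allowed to grow large and messy.
        research_log = run_agent([
            {"role": "user", "content": f"Research everything relevant to: {task}. "
                                        "Note dead ends and open questions as you go."}
        ])

        # Phase 2: Plan -- compress the chaos into a high-density Source of Truth.
        plan = run_agent([
            {"role": "user", "content": "Synthesize this research into a concise PLAN.md "
                                        "(decision, evidence, constraints, sequenced next "
                                        f"steps):\n\n{research_log}"}
        ])

        # Phase 3: Reset -- deliberately discard the research context.
        # Nothing from `research_log` is carried into the next call.

        # Phase 4: Implement -- fresh session with the plan as the *only* context.
        return run_agent([
            {"role": "user", "content": f"Execute this plan:\n\n{plan}\n\nTask: {task}"}
        ])
    ```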
    
    ---
    
    ### Anti-Patterns (What This Is NOT)
    
    - **Not about choosing AI tools** — Claude vs. ChatGPT doesn't matter; architecture matters
    - **Not about writing better prompts** — This is systems design, not copywriting
    - **Not about adding more tokens** — "Infinite context" narratives are marketing, not engineering reality
    - **Not about replacing human judgment** — Context engineering amplifies judgment, doesn't eliminate it
    
    ---
    
    ### When to Use This Skill
    
    ✅ **Use this when:**
    - You're pasting entire PRDs/codebases into AI and getting vague responses
    - AI outputs are inconsistent ("works sometimes, not others")
    - You're burning tokens without seeing accuracy improvements
    - You suspect you're "context stuffing" but don't know how to fix it
    - You need to design context architecture for an AI product feature
    
    ❌ **Don't use this when:**
    - You're just getting started with AI (start with basic prompts first)
    - You're looking for tool recommendations (this is about architecture, not tooling)
    - Your AI usage is working well (if it ain't broke, don't fix it)
    
    ---
    
    ### Facilitation Source of Truth
    
    Use [`workshop-facilitation`](../workshop-facilitation/SKILL.md) as the default interaction protocol for this skill.
    
    It defines:
    - session heads-up + entry mode (Guided, Context dump, Best guess)
    - one-question turns with plain-language prompts
    - progress labels (for example, Context Qx/8 and Scoring Qx/5)
    - interruption handling and pause/resume behavior
    - numbered recommendations at decision points
    - quick-select numbered response options for regular questions (include `Other (specify)` when useful)
    
    This file defines the domain-specific assessment content. If there is a conflict, follow this file's domain logic.
    
    ## Application
    
    This interactive skill uses **adaptive questioning** to diagnose context stuffing, identify boundaries, and provide tactical implementation guidance.
    
    ---
    
    ### Step 0: Gather Context
    
    **Agent asks:**
    
    Before we diagnose your context practices, let's gather information:
    
    **Current AI Usage:**
    - What AI tools/systems do you use? (ChatGPT, Claude, custom agents, etc.)
    - What PM tasks do you use AI for? (PRD writing, user research synthesis, discovery, etc.)
    - How do you provide context? (paste docs, reference files, use projects/memory)
    
    **Symptoms:**
    - Are AI outputs inconsistent? (works sometimes, not others)
    - Are you retrying prompts multiple times to get good results?
    - Are responses vague or hedged despite providing "all the context"?
    - Are token costs escalating without accuracy improvements?
    
    **System Architecture (if applicable):**
    - Do you have custom AI agents or workflows?
    - How is context shared between agents?
    - Do you use RAG, vector databases, or memory systems?
    
    **You can describe briefly or paste examples.**
    
    ---
    
    ### Step 1: Diagnose Context Stuffing Symptoms
    
    **Agent asks:**
    
    Let's assess whether you're experiencing **context stuffing**. Which of these symptoms do you recognize?
    
    **Select all that apply:**
    
    1. **"I paste entire documents into AI"** — Full PRDs, complete user interview transcripts, entire codebases
    2. **"AI gives vague, hedged responses despite having 'all the context'"** — Responses like "it depends," "consider these options," non-committal
    3. **"I have to retry prompts 3+ times to get usable output"** — Inconsistency is normalized
    4. **"Token costs are escalating but accuracy isn't improving"** — Spending more, getting same or worse results
    5. **"I keep adding more context hoping it'll help"** — Reflexive expansion without strategy
    6. **"My agents pass everything to each other"** — Agent A → Agent B → Agent C with full context chain
    7. **"I don't have clear criteria for what to include/exclude"** — No context boundary definitions
    8. **"None of these—my AI usage is working well"** — Skip to advanced optimization
    
    **User response:** [Select symptoms]
    
    **Agent analyzes:**
    
    Based on your selections:
    - **0-1 symptoms:** Healthy context practices; proceed to optimization
    - **2-3 symptoms:** Early context stuffing; address before it scales
    - **4+ symptoms:** Active Context Hoarding Disorder; immediate intervention needed
    
    **Agent proceeds to diagnostic questions.**
    
    ---
    
    ### Step 2: Diagnostic Question 1 — What Specific Decision Does This Support?
    
    **Agent asks:**
    
    Let's start with the most critical question: **For each piece of context you're including, what specific decision does it support?**
    
    **Example Context Analysis:**
    
    Imagine you're asking AI to "help with discovery planning." You're providing:
    - Entire PRD (20 pages)
    - 50 user interview transcripts (full)
    - Competitive analysis doc (15 pages)
    - Team meeting notes from last 3 months
    
    **Question:** What decision are you making right now?
    
    **Offer 3 scenarios:**
    
    1. **"I'm deciding which user segment to interview first"**
       - **Context needed:** User segments from PRD (2 paragraphs), prior interview themes (1 page synthesis), not full transcripts
       - **Context NOT needed:** Meeting notes, full competitive analysis, full PRD
    
    2. **"I'm deciding which discovery questions to ask in interviews"**
       - **Context needed:** Research objectives (from PRD), past interview insights (synthesis), Jobs-to-be-Done framework
       - **Context NOT needed:** Full competitive analysis, full meeting notes
    
    3. **"I'm not sure what decision I'm making—I just want AI to 'understand my product'"**
       - **Problem:** No specific decision = context stuffing trap
       - **Fix:** Define the decision first, then select context
    
    **Agent recommends:**
    
    **Best Practice:** Before adding context, complete this sentence:
    > "I need this context because I'm deciding [specific decision], and without [specific information], I can't make that decision."
    
    If you can't complete that sentence, you don't need the context.
    
    **User response:** [Describe their decision + context]
    
    **Agent validates:** Does the context directly support the stated decision? If not, recommend trimming.
    
    ---
    
    ### Step 3: Diagnostic Question 2 — Can Retrieval Replace Persistence?
    
    **Agent asks:**
    
    **Second question: Is this information you always need, or something you can retrieve just-in-time?**
    
    **The Distinction:**
    
    **Always-Needed (Persist):**
    - Core product constraints (technical, regulatory, strategic)
    - User preferences that apply to every interaction
    - Critical definitions (operational glossary)
    - Non-negotiable rules
    
    **Episodic (Retrieve on-demand):**
    - Project-specific details (this epic, this sprint)
    - Historical data (past PRDs, old interview transcripts)
    - Contextual facts (competitive analysis, market research)
    - Temporary decisions
    
    **Key Insight:** Just-in-time retrieval beats always-available. Don't persist what you can retrieve.
    
    **Offer 3 options:**
    
    1. **"Most of my context is always-needed (core constraints, user prefs)"**
       - **Assessment:** Good instinct; verify with Question 4 (what fails if excluded?)
       - **Recommendation:** Build constraints registry and operational glossary (persist these)
    
    2. **"Most of my context is episodic (project details, historical data)"**
       - **Assessment:** Perfect candidate for RAG or retrieval
       - **Recommendation:** Implement semantic search; retrieve only relevant chunks for each query
    
    3. **"I'm not sure which is which—I persist everything to be safe"**
       - **Assessment:** Classic Context Hoarding Disorder symptom
       - **Fix:** Apply Question 4 test to each piece of context
    
    **Agent recommends:**
    
    **Rule of Thumb:**
    - **Persist:** Information referenced in 80%+ of interactions
    - **Retrieve:** Information referenced in <20% of interactions
    - **Gray zone (20-80%):** Depends on retrieval latency vs. context window cost
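
    The rule of thumb above reduces to a one-line classifier; a minimal sketch with illustrative reference rates:

    ```python
    def placement(reference_rate: float) -> str:
        """Apply the persist/retrieve rule of thumb to one context element.

        `reference_rate` is the fraction of interactions that reference
        the element (e.g. 0.85 = referenced in 85% of queries).
        """
        if reference_rate >= 0.80:
            return "persist"      # core context: keep in the window
        if reference_rate < 0.20:
            return "retrieve"     # episodic: fetch just-in-time
        return "gray zone"        # weigh retrieval latency vs. window cost

    for name, rate in [("constraints registry", 0.95),
                       ("operational glossary", 0.80),
                       ("old interview transcripts", 0.05),
                       ("competitive analysis", 0.40)]:
        print(f"{name}: {placement(rate)}")
    ```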
    
    **User response:** [Categorize their context]
    
    **Agent provides:** Specific recommendations on what to persist vs. retrieve.
    
    ---
    
    ### Step 4: Diagnostic Question 3 — Who Owns the Context Boundary?
    
    **Agent asks:**
    
    **Third question: Who is responsible for defining what belongs in vs. out of your AI's context?**
    
    **The Ownership Problem:**
    
    If **no one** owns the context boundary, it will grow indefinitely. Every PM will add "just one more thing," and six months later, you're stuffing 100k tokens per query.
    
    **Offer 3 options:**
    
    1. **"I own the boundary (solo PM or small team)"**
       - **Assessment:** Good—you can make fast decisions
       - **Recommendation:** Document your boundary criteria (use Questions 1-5 as framework)
    
    2. **"My team shares ownership (collaborative boundary definition)"**
       - **Assessment:** Can work if formalized
       - **Recommendation:** Create a "Context Manifest" doc: what's always included, what's retrieved, what's excluded (and why)
    
    3. **"No one owns it—it's ad-hoc / implicit"**
       - **Assessment:** Critical risk; boundary will expand uncontrollably
       - **Fix:** Assign explicit ownership; schedule quarterly context audits
    
    **Agent recommends:**
    
    **Best Practice: Create a Context Manifest**
    
    ```markdown
    # Context Manifest: [Product/Feature Name]
    
    ## Always Persisted (Core Context)
    - Product constraints (technical, regulatory)
    - User preferences (role, permissions, preferences)
    - Operational glossary (20 key terms)
    
    ## Retrieved On-Demand (Episodic Context)
    - Historical PRDs (retrieve via semantic search)
    - User interview transcripts (retrieve relevant quotes)
    - Competitive analysis (retrieve when explicitly needed)
    
    ## Excluded (Out of Scope)
    - Meeting notes older than 30 days (no longer relevant)
    - Full codebase (use code search instead)
    - Marketing materials (not decision-relevant)
    
    ## Boundary Owner: [Name]
    ## Last Reviewed: [Date]
    ## Next Review: [Date + 90 days]
    ```
    
    **User response:** [Describe current ownership model]
    
    **Agent provides:** Recommendation on formalizing ownership + template for Context Manifest.
    
    ---
    
    ### Step 5: Diagnostic Question 4 — What Fails if We Exclude This?
    
    **Agent asks:**
    
    **Fourth question: For each piece of context, what specific failure mode occurs if you exclude it?**
    
    This is the **falsification test**. If you can't identify a concrete failure, you don't need the context.
    
    **Offer 3 scenarios:**
    
    1. **"If I exclude product constraints, AI will recommend infeasible solutions"**
       - **Failure Mode:** Clear and concrete
       - **Assessment:** Valid reason to persist constraints
    
    2. **"If I exclude historical PRDs, AI won't understand our product evolution"**
       - **Failure Mode:** Vague and hypothetical
       - **Assessment:** Historical context rarely needed for current decisions
       - **Fix:** Retrieve PRDs only when explicitly referencing past decisions
    
    3. **"If I exclude this, I'm not sure anything would break—I just include it to be thorough"**
       - **Failure Mode:** None identified
       - **Assessment:** Context stuffing; delete immediately
    
    **Agent recommends:**
    
    **The Falsification Protocol:**
    
    For each context element, complete this statement:
    > "If I exclude [context element], then [specific failure] will occur in [specific scenario]."
    
    **Examples:**
    - ✅ Good: "If I exclude GDPR constraints, AI will recommend features that violate EU privacy law."
    - ❌ Bad: "If I exclude this PRD, AI might not fully understand the product." (Vague)
    
    **User response:** [Apply falsification test to their context]
    
    **Agent provides:** List of context elements to delete (no concrete failure identified).
    
    ---
    
    ### Step 6: Diagnostic Question 5 — Are We Fixing Structure or Avoiding It?
    
    **Agent asks:**
    
    **Fifth question: Is adding more context solving a problem, or masking a deeper structural issue?**
    
    **The Root Cause Question:**
    
    Context stuffing often hides bad information architecture. Instead of fixing messy, ambiguous documents, teams add more documents hoping AI will "figure it out."
    
    **Offer 3 options:**
    
    1. **"I'm adding context because our docs are poorly structured/ambiguous"**
       - **Assessment:** You're masking a structural problem
       - **Fix:** Clean up the docs first (remove ambiguity, add constraints, define terms)
       - **Example:** Instead of pasting 5 conflicting PRDs, reconcile them into 1 Source of Truth
    
    2. **"I'm adding context because we don't have a shared operational glossary"**
       - **Assessment:** You're compensating for missing foundations
       - **Fix:** Build the glossary (20-30 key terms); AI can reference it reliably
       - **Example:** Define "active user," "churn," "engagement" unambiguously
    
    3. **"I'm adding context because our constraints aren't documented"**
       - **Assessment:** You're avoiding constraint engineering
       - **Fix:** Create constraints registry (technical, regulatory, strategic)
       - **Example:** Document "We won't build mobile apps" vs. explaining it in every prompt
    
    **Agent recommends:**
    
    **The Structural Health Test:**
    
    If you're adding context to compensate for:
    - **Ambiguous documentation** → Fix the docs, don't add more
    - **Undefined terms** → Build operational glossary
    - **Undocumented constraints** → Create constraints registry
    - **Conflicting information** → Reconcile into Source of Truth
    
    **User response:** [Identify structural issues]
    
    **Agent provides:** Prioritized list of structural fixes before adding more context.
    
    ---
    
    ### Step 7: Define Memory Architecture
    
    **Agent asks:**
    
    Based on your context needs, let's design a **two-layer memory architecture**.
    
    **Your Context Profile (from previous steps):**
    - Always-needed context: [Summary from Q2]
    - Episodic context: [Summary from Q2]
    - Boundary owner: [From Q3]
    - Validated essentials: [From Q4]
    - Structural fixes needed: [From Q5]
    
    **Recommended Architecture:**
    
    **Short-Term (Conversational) Memory:**
    - **What it stores:** Immediate interaction history for follow-up questions
    - **Lifespan:** Single session
    - **Management:** Summarize or truncate older parts to avoid crowding
    - **Your specific needs:** [Agent customizes based on user's workflow]
    
    **Long-Term (Persistent) Memory:**
    - **What it stores:** User preferences, core constraints, operational glossary
    - **Lifespan:** Persistent across sessions
    - **Implementation:** Vector database (semantic retrieval)
    - **Two types:**
      - **Declarative Memory:** Facts (e.g., "We follow HIPAA regulations")
      - **Procedural Memory:** Behavioral patterns (e.g., "Always validate feasibility before usability")
    - **Your specific needs:** [Agent customizes]
    
    **Retrieval Strategy (Episodic Context):**
    - **What it retrieves:** Historical PRDs, user interviews, competitive analysis
    - **Method:** Semantic search triggered by query intent
    - **Optimization:** Contextual Retrieval (Anthropic) — prepend explanatory context to each chunk before embedding
    - **Your specific needs:** [Agent customizes]
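
    A minimal sketch of Contextual Retrieval-style indexing, assuming hypothetical `llm`, `embed`, and `store` stand-ins for your model, embedder, and vector database:

    ```python
    def index_with_contextual_retrieval(doc_text: str, chunks: list[str],
                                        llm, embed, store) -> None:
        """Sketch only: `llm(prompt)`, `embed(text)`, and `store.add(...)`
        are placeholders, not a real library API.

        The key move: each chunk is embedded *with* a short LLM-written
        explanation of how it fits the whole document, so retrieval can
        match on meaning the raw chunk lacks.
        """
        for chunk in chunks:
            situating_context = llm(
                "Here is a document:\n" + doc_text +
                "\n\nHere is a chunk from it:\n" + chunk +
                "\n\nWrite 1-2 sentences situating this chunk within the "
                "overall document, to improve search retrieval of the chunk."
            )
            contextualized = situating_context + "\n" + chunk
            store.add(embed(contextualized), contextualized)
    ```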
    
    **Agent offers:**
    
    Would you like me to:
    1. **Generate a Context Architecture Blueprint** for your specific use case?
    2. **Provide implementation guidance** (tools, techniques, best practices)?
    3. **Design a retrieval strategy** for your episodic context?
    
    **User response:** [Selection]
    
    ---
    
    ### Step 8: Implement Research → Plan → Reset → Implement Cycle
    
    **Agent asks:**
    
    Now let's implement the **Research → Plan → Reset → Implement** cycle to prevent context rot.
    
    **The Problem:** As agents research, context windows grow chaotic—filled with dead ends, errors, and noise. This dilutes attention and causes goal drift.
    
    **The Solution:** Compress research into a high-density plan, then **clear the context window** before implementing.
    
    **The Four-Phase Cycle:**
    
    **Phase 1: Research (Chaotic Context Allowed)**
    - Agent gathers data from multiple sources
    - Context window grows large and messy (this is expected)
    - Dead ends, failed hypotheses, and noise accumulate
    - **Goal:** Comprehensive information gathering
    
    **Phase 2: Plan (Synthesis)**
    - Agent synthesizes research into a high-density SPEC.md or PLAN.md
    - This becomes the **Source of Truth** for implementation
    - **Key elements:**
      - Decision made
      - Evidence supporting decision
      - Constraints applied
      - Next steps (sequenced)
    - **Format:** Structured, concise, unambiguous
    
    **Phase 3: Reset (Clear Context Window)**
    - **Critical step:** Clear the entire context window
    - Delete all research artifacts, dead ends, errors
    - This prevents context rot from poisoning implementation
    
    **Phase 4: Implement (Fresh Session with Plan Only)**
    - Start a new session with **only the high-density plan** as context
    - Agent has clean, focused attention on execution
    - No noise from research phase
    
    **Agent offers 3 options:**
    
    1. **"I want a template for the PLAN.md format"**
       - Agent provides structured template for high-density plans
    
    2. **"I want to see an example of this cycle in action"**
       - Agent walks through concrete PM use case (e.g., discovery planning)
    
    3. **"I'm ready to implement this in my workflow"**
       - Agent provides step-by-step implementation guide
    
    **User response:** [Selection]
    
    **Agent provides:** Tailored guidance based on selection.
    
    ---
    
    ### Step 9: Action Plan & Next Steps
    
    **Agent synthesizes:**
    
    Based on your context engineering assessment, here's your action plan:
    
    **Immediate Fixes (This Week):**
    1. [Delete context with no falsifiable failure mode from Q4]
    2. [Apply Research→Plan→Reset→Implement to your next AI task]
    3. [Document context boundary in Context Manifest]
    
    **Foundation Building (Next 2 Weeks):**
    1. [Build constraints registry with 20+ entries]
    2. [Create operational glossary with 20-30 key terms]
    3. [Implement two-layer memory architecture]
    
    **Long-Term Optimization (Next Month):**
    1. [Set up semantic retrieval for episodic context]
    2. [Assign context boundary owner + quarterly audit schedule]
    3. [Implement Contextual Retrieval (Anthropic) for RAG]
    
    **Success Metrics:**
    - Token usage down 50%+ (less context stuffing)
    - Output consistency up (less retry/regeneration)
    - Response quality up (sharper, less hedged answers)
    - Context window stable (no unbounded growth)
    
    **Agent offers:**
    
    Would you like me to:
    1. **Generate specific implementation docs** (Context Manifest, PLAN.md template, etc.)?
    2. **Provide advanced techniques** (Contextual Retrieval, LLM-powered ETL)?
    3. **Review your current context setup** (provide feedback on specific prompts/workflows)?
    
    ---
    
    ## Examples
    
    ### Example 1: Solo PM Context Stuffing → Engineering
    
    **Context:**
    - Solo PM at early-stage startup
    - Using Claude Projects for PRD writing
    - Pasting entire PRDs (20 pages) + all user interviews (50 transcripts) every time
    - Getting vague, inconsistent responses
    
    **Assessment:**
    - Symptoms: Hedged responses, normalized retries (4+ symptoms)
    - Q1 (Decision): "I just want AI to understand my product" (no specific decision)
    - Q2 (Persist/Retrieve): Persisting everything (no retrieval strategy)
    - Q3 (Ownership): No formal owner (solo PM, ad-hoc)
    - Q4 (Failure): Can't identify concrete failures for most context
    - Q5 (Structure): Avoiding constraint documentation
    
    **Diagnosis:** Active Context Hoarding Disorder
    
    **Intervention:**
    1. **Immediate:** Delete all context that fails Q4 test → keeps 20% of original
    2. **Week 1:** Build constraints registry (10 technical constraints, 5 strategic)
    3. **Week 2:** Create operational glossary (25 terms)
    4. **Week 3:** Implement Research→Plan→Reset→Implement for next PRD
    
    **Outcome:** Token usage down 70%, output quality up significantly, responses crisp and actionable.
    
    ---
    
    ### Example 2: Growth-Stage Team with Agent Chains
    
    **Context:**
    - Product team with 5 PMs
    - Custom AI agents for discovery synthesis
    - Agent A (research) → Agent B (synthesis) → Agent C (recommendations)
    - Each agent passes full context to next → context window explodes to 100k+ tokens
    
    **Assessment:**
    - Symptoms: Escalating token costs, inconsistent outputs (3 symptoms)
    - Q1 (Decision): Each agent has clear decision, but passes unnecessary context
    - Q2 (Persist/Retrieve): Mixing persistent and episodic without strategy
    - Q3 (Ownership): No explicit owner; each PM adds context
    - Q4 (Failure): Agents pass "just in case" context with no falsifiable failure
    - Q5 (Structure): Missing Context Manifest
    
    **Diagnosis:** Agent orchestration without boundaries
    
    **Intervention:**
    1. **Immediate:** Define bounded context per agent (Agent A outputs only 2-page synthesis to Agent B, not full research)
    2. **Week 1:** Assign context boundary owner (Lead PM)
    3. **Week 2:** Create Context Manifest (what persists, what's retrieved, what's excluded)
    4. **Week 3:** Implement Research→Plan→Reset→Implement between Agent B and Agent C
    
    **Outcome:** Token usage down 60%, agent chain reliability up, costs reduced by 50%.
    
    ---
    
    ### Example 3: Enterprise with RAG but No Context Engineering
    
    **Context:**
    - Large enterprise with vector database RAG system
    - "Stuff the whole knowledge base" approach (10,000+ documents)
    - Retrieval returns 50+ chunks per query → floods context window
    - Accuracy declining as knowledge base grows
    
    **Assessment:**
    - Symptoms: Vague responses despite "complete knowledge," normalized retries (2 symptoms)
    - Q1 (Decision): Decisions clear, but retrieval has no intent (returns everything)
    - Q2 (Persist/Retrieve): Good instinct to retrieve, but no filtering
    - Q3 (Ownership): Engineering owns RAG, Product doesn't own context boundaries
    - Q4 (Failure): Can't identify why 50 chunks needed vs. 5
    - Q5 (Structure): Knowledge base has no structure (flat documents, no metadata)
    
    **Diagnosis:** Retrieval without intent (RAG as context stuffing)
    
    **Intervention:**
    1. **Immediate:** Limit retrieval to top 5 chunks per query (down from 50)
    2. **Week 1:** Implement Contextual Retrieval (Anthropic) — prepend explanatory context to each chunk during indexing
    3. **Week 2:** Add metadata to documents (category, recency, authority)
    4. **Week 3:** Product team defines retrieval intent per query type (discovery = customer insights, feasibility = technical constraints)
    
    **Outcome:** Retrieval failure rate down 35% (per Anthropic's benchmark), latency down 60%, token usage down 80%.
    
    ---
    
    ## Common Pitfalls
    
    ### 1. **"Infinite Context" Marketing vs. Engineering Reality**
    **Failure Mode:** Believing "1 million token context windows" means you should use all of them.
    
    **Consequence:** Reasoning Noise degrades performance; accuracy drops below 20% past ~32k tokens.
    
    **Fix:** Context windows are not free. Treat tokens as scarce; optimize for density, not volume.
    
    ---
    
    ### 2. **Retrying Instead of Restructuring**
    **Failure Mode:** "It works if I run it 3 times" → normalizing retries instead of fixing structure.
    
    **Consequence:** Wastes time and money; masks deeper context rot issues.
    
    **Fix:** If retries are common, your context structure is broken. Apply Q5 (fix structure, don't add volume).
    
    ---
    
    ### 3. **No Context Boundary Owner**
    **Failure Mode:** Ad-hoc, implicit context decisions → unbounded growth.
    
    **Consequence:** Six months later, every interaction stuffs 100k+ tokens.
    
    **Fix:** Assign explicit ownership; create Context Manifest; schedule quarterly audits.
    
    ---
    
    ### 4. **Mixing Always-Needed with Episodic**
    **Failure Mode:** Persisting historical data that should be retrieved on-demand.
    
    **Consequence:** Context window crowded with irrelevant information; attention diluted.
    
    **Fix:** Apply Q2 test: persist only what's needed in 80%+ of interactions; retrieve the rest.
    
    ---
    
    ### 5. **Skipping the Reset Phase**
    **Failure Mode:** Never clearing context window during Research→Plan→Implement cycle.
    
    **Consequence:** Context rot accumulates; goal drift; dead ends poison implementation.
    
    **Fix:** Mandatory Reset phase after Plan; start implementation with only high-density plan as context.
    
    ---
    
    ## References
    
    ### Related Skills
    - **[ai-shaped-readiness-advisor](../ai-shaped-readiness-advisor/SKILL.md)** (Interactive) — Context Design is Competency #1 of AI-shaped work
    - **[problem-statement](../problem-statement/SKILL.md)** (Component) — Evidence-based framing requires context engineering
    - **[epic-hypothesis](../epic-hypothesis/SKILL.md)** (Component) — Testable hypotheses depend on clear constraints (part of context)
    - **[pol-probe-advisor](../pol-probe-advisor/SKILL.md)** (Interactive) — Validation experiments benefit from context engineering (define what AI needs to know)
    
    ### External Frameworks
    - **Dean Peters** — [*Context Stuffing Is Not Context Engineering*](https://deanpeters.substack.com/p/context-stuffing-is-not-context-engineering) (Dean Peters' Substack, 2026)
    - **Teresa Torres** — *Continuous Discovery Habits* (Context Engineering as one of 5 new AI PM disciplines)
    - **Marty Cagan** — *Empowered* (Feasibility risk in AI era includes understanding "physics of AI")
    - **Anthropic** — [Contextual Retrieval whitepaper](https://www.anthropic.com/news/contextual-retrieval) (35% failure rate reduction)
    - **Google** — Context engineering whitepaper on LLM-powered memory systems
    
    ### Technical References
    - **RAG (Retrieval-Augmented Generation)** — Standard technique for episodic context retrieval
    - **Vector Databases** — Semantic search for long-term memory (Pinecone, Weaviate, Chroma)
    - **Contextual Retrieval (Anthropic)** — Prepend explanatory context to chunks before embedding
    - **LLM-as-Judge** — Automated evaluation of context quality
    
  • skills/company-research/SKILL.md (13993 bytes)
    ---
    name: company-research
    description: Create a company research brief with executive quotes, product strategy, and org context. Use when preparing for interviews, competitive analysis, partnerships, or market-entry work.
    intent: >-
      Create a comprehensive company profile that extracts executive insights, product strategy, transformation initiatives, and organizational dynamics from publicly available sources. Use this to understand competitive landscape, evaluate partnership opportunities, benchmark best practices, prepare for interviews, or inform market entry decisions by understanding how successful companies think about product management and strategy.
    type: component
    ---
    
    
    ## Purpose
    Create a comprehensive company profile that extracts executive insights, product strategy, transformation initiatives, and organizational dynamics from publicly available sources. Use this to understand competitive landscape, evaluate partnership opportunities, benchmark best practices, prepare for interviews, or inform market entry decisions by understanding how successful companies think about product management and strategy.
    
    This is not surface-level research—it's strategic intelligence gathering focused on product management perspectives and executive vision.
    
    ## Key Concepts
    
    ### The Executive Insights Framework
    This framework synthesizes company intelligence across multiple dimensions:
    
    **Core Components:**
    1. **Company Overview:** Basic info, history, industry context
    2. **Executive Quotes:** Strategic vision from CEO, COO, VP Product, Group PM
    3. **Product Insights:** Strategy, recent launches, innovation focus
    4. **Transformation Strategies:** Digital, AI, Agile transformations
    5. **Organizational Impact:** How PM influences strategy, cross-functional collaboration
    6. **Future Roadmap:** Upcoming initiatives and anticipated challenges
    7. **Product-Led Growth (PLG):** PLG strategies, data-driven decisions
    
    ### Why This Works
    - **Executive perspective:** Captures leadership thinking, not just marketing copy
    - **Product-centric:** Focuses on PM-relevant insights (strategy, process, culture)
    - **Multi-source:** Synthesizes interviews, earnings calls, blog posts, case studies
    - **Strategic intelligence:** Informs competitive positioning, partnership evaluation, or interview prep
    
    ### Anti-Patterns (What This Is NOT)
    - **Not financial analysis:** Focus is product strategy, not valuation or stock performance
    - **Not SWOT analysis:** This documents their perspective, not strengths/weaknesses assessment
    - **Not surface scraping:** Go deeper than "About Us" pages—find executive interviews, product blogs, earnings transcripts
    
    ### When to Use This
    - Competitive analysis (understanding how competitors approach PM)
    - Partnership evaluation (assessing cultural fit and strategic direction)
    - Interview preparation (understanding company culture, product philosophy)
    - Benchmarking best practices (learning from successful companies)
    - Market entry decisions (understanding how incumbents operate)
    
    ### When NOT to Use This
    - For internal analysis (this is external research)
    - When primary sources are unavailable (executives haven't spoken publicly)
    - As a substitute for customer research (this is company perspective, not customer perspective)
    
    ---
    
    ## Application
    
    Use `template.md` for the full fill-in structure.
    
    ### Step 1: Define Research Scope
    
    Clarify what you're researching and why:
    
    ```markdown
    ## Research Objective
    - **Company Name:** [e.g., "Stripe"]
    - **Research Purpose:** [e.g., "Understand payment platform product strategy for competitive positioning"]
    - **Key Questions:**
      - [Question 1: e.g., "How does Stripe think about platform extensibility?"]
      - [Question 2: e.g., "What's their approach to developer experience?"]
      - [Question 3: e.g., "How do they prioritize roadmap vs. custom enterprise requests?"]
    ```
    
    ---
    
    ### Step 2: Gather Company Overview
    
    Document basic company information:
    
    ```markdown
    ### Company Overview
    
    **Basic Information:**
    - **Name:** [Official company name]
    - **Headquarters:** [Location]
    - **Industry:** [Primary industries, e.g., "Fintech, Payment Processing, Developer Tools"]
    - **Founded:** [Year]
    - **Size:** [Employees, revenue if public, funding if private]
    
    **Brief History:**
    - [Key milestones that shaped current market position]
    - [Example: "2010: Founded by Patrick and John Collison. 2011: Launched 7-line integration. 2018: Launched Stripe Atlas. 2021: $95B valuation."]
    ```
    
    **Sources to check:**
    - Company website (About, Press, Blog)
    - LinkedIn company page
    - Crunchbase / PitchBook (for funding/valuation)
    - Wikipedia (for history)
    
    ---
    
    ### Step 3: Extract Executive Quotes on Strategic Vision
    
    Find recent quotes from key executives:
    
    ```markdown
    ### Executive Quotes on Strategic Vision
    
    **Quote from the CEO:**
    - "[Recent quote discussing long-term vision and market approach]"
    - **Source:** [Link to interview, earnings call, blog post, conference talk]
    - **Date:** [When the quote was made]
    - **Context:** [Brief explanation of what prompted this quote]
    
    **Quote from the COO:**
    - "[Recent quote focusing on operational strategies and challenges]"
    - **Source:** [Link]
    - **Date:** [When]
    
    **Quote from the VP of Product Management:**
    - "[Recent quote detailing product strategy and innovation focus]"
    - **Source:** [Link]
    - **Date:** [When]
    
    **Quote from the Group Product Manager:**
    - "[Recent quote discussing specific product initiatives and customer engagement]"
    - **Source:** [Link]
    - **Date:** [When]
    ```
    
    **Sources to check:**
    - Earnings call transcripts (if public)
    - Podcast interviews (e.g., Lenny's Podcast, Masters of Scale, How I Built This)
    - Conference talks (YouTube, company blog)
    - Blog posts by executives
    - LinkedIn posts
    - Industry publications (TechCrunch, The Verge, etc.)
    
    **Quality checks:**
    - **Recent:** Prioritize quotes from the last 12-24 months
    - **Substantive:** Look for strategy/philosophy, not generic PR statements
    - **Attributed:** Always cite source and date
    
    ---
    
    ### Step 4: Document Product Insights
    
    Synthesize product strategy and recent launches:
    
    ```markdown
    ### Detailed Product Insights
    
    **Product Strategy Overview:**
    - [Describe overall product strategy, emphasizing integration of market needs with technological capabilities]
    - [Example: "Stripe's product strategy centers on developer experience: reduce integration complexity, provide powerful primitives, enable rapid experimentation"]
    
    **Recent Product Launches and Innovations:**
    1. **[Product/Feature 1]** - [Description and market impact]
       - [Example: "Stripe Tax (2021): Automated sales tax calculation. Removed compliance barrier for global expansion."]
    2. **[Product/Feature 2]** - [Description and impact]
    3. **[Product/Feature 3]** - [Description and impact]
    
    **Product Philosophy:**
    - [Key principles that guide product decisions]
    - [Example: "Start with developer needs, not enterprise sales. Build for 10x scale before you need it. Default to public APIs."]
    ```
    
    **Sources to check:**
    - Product blog or changelog
    - Product Hunt launches
    - Release notes
    - Product team blog posts or case studies
    
    ---
    
    ### Step 5: Identify Transformation Strategies
    
    Document how the company is evolving:
    
    ```markdown
    ### Transformation Strategies and Initiatives
    
    **Digital Transformation:**
    - [Describe approach to digital transformation, emphasizing integration of cutting-edge technology with existing processes]
    - [Example: "Migrated from monolith to microservices architecture (2019-2022). Enabled 10x faster feature deployment."]
    
    **AI Transformation:**
    - [Explain how AI is incorporated into core processes, product offerings, and market positioning]
    - [Example: "Launched Radar for fraud detection (ML-powered). Reduced false positives by 40%, processing $640B annually."]
    
    **Agile Transformation:**
    - [Detail adoption of Agile methodologies, highlighting improvements in collaboration, project management, product delivery]
    - [Example: "Adopted Shape Up methodology (6-week cycles, no sprints). Improved focus, reduced meeting overhead."]
    ```
    
    **Sources to check:**
    - Engineering blog
    - Case studies or white papers
    - Conference talks by engineering/product leaders
    - LinkedIn posts about process changes
    
    ---
    
    ### Step 6: Understand Organizational Impact of Product Management
    
    Document how PM functions within the organization:
    
    ```markdown
    ### Organizational Impact of Product Management
    
    **Role of Product Management in Strategic Decisions:**
    - [Discuss how PM influences strategic decisions]
    - [Example: "PMs own P&L for their product area. Directly influence company roadmap through quarterly planning process. CEO reviews roadmap with PM leads, not just VPs."]
    
    **Cross-Functional Collaboration:**
    - [Outline collaboration between PM and other departments]
    - [Example: "PMs co-located with engineering (not in separate 'product' org). Weekly design reviews with Design VP. Monthly GTM sync with Sales/Marketing."]
    
    **PM Career Paths:**
    - [If available, describe how PMs grow and advance]
    - [Example: "IC track: PM → Senior PM → Staff PM → Principal PM. Manager track: PM → Group PM → Director → VP."]
    ```
    
    **Sources to check:**
    - PM job postings (describe role, responsibilities, team structure)
    - LinkedIn profiles (track PM career progression)
    - PM blog posts or interviews
    - Glassdoor reviews (internal culture insights)
    
    ---
    
    ### Step 7: Analyze Future Roadmap and Challenges
    
    Identify where the company is headed:
    
    ```markdown
    ### Future Product Roadmap and Challenges
    
    **Upcoming Product Initiatives:**
    - [Detail planned initiatives and alignment with strategic goals]
    - [Example: "Expanding into embedded finance (Stripe Capital, Stripe Treasury). Goal: Become financial infrastructure for the internet, not just payments."]
    
    **Anticipated Market Challenges:**
    - [Identify potential challenges and PM team plans to address them]
    - [Example: "Challenge: Increasing competition from Square, PayPal. Response: Double down on developer experience, global expansion (70+ countries)."]
    
    **Competitive Threats:**
    - [Document acknowledged or observed competitive pressures]
    ```
    
    **Sources to check:**
    - Earnings calls (forward-looking statements)
    - Analyst reports
    - Industry news (funding rounds by competitors, market shifts)
    
    ---
    
    ### Step 8: Document Product-Led Growth Insights
    
    If applicable, capture PLG strategies:
    
    ```markdown
    ### Product-Led Growth Insights
    
    **Implementation of PLG Strategies:**
    - [Describe how the company employs PLG to enhance customer acquisition, retention, expansion]
    - [Example: "Self-serve onboarding: 7-line code integration. No sales calls required for <$1M ARR. 90% of customers start with free tier."]
    
    **Data-Driven Product Decisions:**
    - [Explain role of data analytics in shaping product decisions and driving growth]
    - [Example: "Instrumented every API call. PMs have real-time dashboards. Feature adoption tracked within 24 hours of launch."]
    ```
    
    **Sources to check:**
    - Product analytics blog posts
    - Growth team blog posts
    - Case studies on activation, retention, expansion
    
    ---
    
    ### Step 9: Synthesize Key Takeaways
    
    Summarize the most important insights:
    
    ```markdown
    ### Key Takeaways
    
    **Strategic Principles:**
    1. **[Principle 1]** - [What you learned about their approach]
    2. **[Principle 2]** - [What you learned]
    3. **[Principle 3]** - [What you learned]
    
    **Product Management Lessons:**
    1. **[Lesson 1]** - [Applicable insight for your context]
    2. **[Lesson 2]** - [Applicable insight]
    3. **[Lesson 3]** - [Applicable insight]
    
    **Questions for Further Research:**
    - [Unanswered question 1]
    - [Unanswered question 2]
    ```
    
    ---
    
    ## Examples
    
    See `examples/sample.md` for a full company research example.
    
    Mini example excerpt:
    
    ```markdown
    **Company Name:** Stripe
    **Research Purpose:** Understand payment platform product strategy
    **Key Questions:** Developer experience? Platform extensibility?
    ```
    
    ## Common Pitfalls
    
    ### Pitfall 1: Surface-Level Research
    **Symptom:** "Stripe is a payments company. They process payments."
    
    **Consequence:** No strategic insights.
    
    **Fix:** Go deeper—find executive interviews, engineering blogs, product philosophy posts.
    
    ---
    
    ### Pitfall 2: No Source Citations
    **Symptom:** "The CEO said the company is focused on innovation"
    
    **Consequence:** Unverifiable, low credibility.
    
    **Fix:** Always cite source and date: "The CEO said X (Source: Lenny's Podcast, Episode 185, Sept 2023)."
    
    ---
    
    ### Pitfall 3: Mixing Opinion with Facts
    **Symptom:** "Stripe's product strategy is great because they focus on developers"
    
    **Consequence:** Analysis, not research.
    
    **Fix:** Document *what* they do, not whether it's "good." Save analysis for "Key Takeaways."
    
    ---
    
    ### Pitfall 4: Outdated Information
    **Symptom:** Using 5-year-old quotes or strategies
    
    **Consequence:** Irrelevant insights (company strategies evolve).
    
    **Fix:** Prioritize sources from the last 12-24 months.
    
    ---
    
    ### Pitfall 5: Ignoring Negative Signals
    **Symptom:** Only documenting successes, ignoring challenges or failures
    
    **Consequence:** Incomplete picture.
    
    **Fix:** Include "Anticipated Market Challenges" and competitive threats.
    
    ---
    
    ## References
    
    ### Related Skills
    - `skills/positioning-statement/SKILL.md` — Use company research to understand competitive positioning
    - `skills/pestel-analysis/SKILL.md` — Company research informs market context
    - `skills/proto-persona/SKILL.md` — Executive quotes may reveal target personas
    
    ### External Frameworks
    - Competitive intelligence frameworks
    - Strategic analysis methodologies
    
    ### Dean's Work
    - Executive Insights Company Profile Template
    
    ### Provenance
    - Adapted from `prompts/company-profile-executive-insights-research.md` in the `https://github.com/deanpeters/product-manager-prompts` repo.
    
    ---
    
    **Skill type:** Component
    **Suggested filename:** `company-research.md`
    **Suggested placement:** `/skills/components/`
    **Dependencies:** References `skills/positioning-statement/SKILL.md`, `skills/pestel-analysis/SKILL.md`
    
  • skills/business-health-diagnostic/SKILL.md (22380 bytes)
    ---
    name: business-health-diagnostic
    description: Diagnose SaaS business health across growth, retention, efficiency, and capital. Use when preparing a business review or prioritizing urgent fixes.
    intent: >-
      Diagnose overall SaaS business health by analyzing growth, retention, unit economics, and capital efficiency metrics together. Use this to identify problems early, prioritize actions by urgency, and deliver a comprehensive health scorecard for board meetings, quarterly reviews, or fundraising preparation.
    type: interactive
    theme: finance-metrics
    best_for:
      - "Getting a complete read on your SaaS business health across all dimensions"
      - "Identifying which metrics are red flags vs. leading indicators"
      - "Preparing for a board meeting or investor review"
    scenarios:
      - "Our growth is strong but we're burning cash fast — I need to understand our unit economics before the board meeting"
      - "I'm preparing for a Series A board meeting and need to assess our business health across growth, retention, and efficiency"
    estimated_time: "20-30 min"
    ---
    
    
    ## Purpose
    
    Diagnose overall SaaS business health by analyzing growth, retention, unit economics, and capital efficiency metrics together. Use this to identify problems early, prioritize actions by urgency, and deliver a comprehensive health scorecard for board meetings, quarterly reviews, or fundraising preparation.
    
    This is not a single-metric check—it's a holistic diagnostic that connects revenue, retention, economics, and efficiency to reveal systemic issues and opportunities.
    
    ## Key Concepts
    
    ### The Business Health Framework
    
    A SaaS business is healthy when four dimensions work together:
    
    1. **Growth & Retention** — Are you growing and keeping customers?
       - Revenue growth rate
       - NRR (Net Revenue Retention)
       - Churn rate
       - Quick Ratio
    
    2. **Unit Economics** — Is the business model profitable at the customer level?
       - CAC (Customer Acquisition Cost)
       - LTV (Lifetime Value)
       - LTV:CAC ratio
       - Payback period
       - Gross margin
    
    3. **Capital Efficiency** — Are you using cash efficiently?
       - Burn rate
       - Runway
       - Rule of 40
       - Magic Number
    
    4. **Strategic Position** — Are you positioned for sustainable success?
       - Market positioning (below, at, above market pricing)
       - Competitive moat (network effects, data, brand)
       - Revenue concentration risk
       - Operating leverage
    
    ### Stage-Specific Benchmarks
    
    **Early Stage (Pre-$10M ARR):**
    - Focus: Product-market fit, unit economics
    - Growth: >50% YoY
    - LTV:CAC: >3:1
    - Gross Margin: >70%
    - Runway: >12 months
    - Acceptable: Negative margins, high burn (if unit economics work)
    
    **Growth Stage ($10M-$50M ARR):**
    - Focus: Scaling efficiently
    - Growth: >40% YoY
    - NRR: >100%
    - Rule of 40: >40
    - Magic Number: >0.75
    - Acceptable: Moderate burn if growth is strong
    
    **Scale Stage ($50M+ ARR):**
    - Focus: Profitability, efficiency
    - Growth: >25% YoY
    - NRR: >110%
    - Rule of 40: >40
    - Profit Margin: >10%
    - Required: Positive or near-positive cash flow
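
    These benchmarks all reduce to one-line formulas; a minimal sketch, assuming the conventional definitions (Magic Number here is net new ARR per dollar of prior-quarter S&M spend):

    ```python
    def rule_of_40(yoy_growth_pct: float, profit_margin_pct: float) -> float:
        # Healthy if growth % + profit margin % >= 40.
        return yoy_growth_pct + profit_margin_pct

    def ltv_to_cac(ltv: float, cac: float) -> float:
        return ltv / cac

    def payback_months(cac: float, monthly_arpa: float, gross_margin: float) -> float:
        # Months of gross profit needed to recover acquisition cost.
        return cac / (monthly_arpa * gross_margin)

    def magic_number(net_new_arr_quarter: float, prior_quarter_sm_spend: float) -> float:
        return net_new_arr_quarter / prior_quarter_sm_spend

    def runway_months(cash: float, monthly_net_burn: float) -> float:
        return cash / monthly_net_burn

    # A growth-stage company against the benchmarks above (toy numbers):
    print(rule_of_40(45, -10))                 # 35 -> below the 40 bar
    print(ltv_to_cac(36_000, 9_000))           # 4.0 -> healthy (>3:1)
    print(payback_months(9_000, 1_000, 0.75))  # 12.0 months -> at the threshold
    print(magic_number(1_200_000, 1_000_000))  # 1.2 -> efficient (>0.75)
    print(runway_months(6_000_000, 400_000))   # 15 months -> above the 12-month floor
    ```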
    
    ### Red Flag Categories
    
    **Critical (Fix immediately):**
    - Runway <6 months
    - LTV:CAC <1.5:1
    - Churn accelerating cohort-over-cohort
    - NRR <90%
    - Magic Number <0.3
    
    **High Priority (Fix within quarter):**
    - Rule of 40 <25
    - Payback >24 months
    - Quick Ratio <2
    - Gross margin <60%
    - Revenue concentration >50% in top 10 customers
    
    **Medium Priority (Address within 6 months):**
    - NRR 90-100% (flat, not growing)
    - Magic Number 0.3-0.5
    - Operating leverage negative
    - Churn rate stable but high (>5% monthly)
    
    ### Anti-Patterns (What This Is NOT)
    
    - **Not a single metric:** "Revenue is growing 50%, we're great!" (ignoring burn, churn, unit economics)
    - **Not stage-agnostic:** Early-stage burn is acceptable; scale-stage burn is a problem
    - **Not static:** Health is directional—are metrics improving or degrading?
    - **Not just numbers:** Context matters (competitive pressure, market changes, team capacity)
    
    ### When to Use This Framework
    
    **Use this when:**
    - Preparing for board meetings or investor updates
    - Quarterly business reviews (QBR)
    - Fundraising preparation (know your numbers)
    - Annual planning (identify improvement areas)
    - You suspect problems but can't pinpoint them
    - New PM/exec joining and needs health assessment
    
    **Don't use this when:**
    - You're pre-revenue (focus on product-market fit first)
    - You're in pure research mode (not enough data)
    - You need tactical guidance (use specific skills: feature, channel, pricing)
    
    ---
    
    ### Facilitation Source of Truth
    
    Use [`workshop-facilitation`](../workshop-facilitation/SKILL.md) as the default interaction protocol for this skill.
    
    It defines:
    - session heads-up + entry mode (Guided, Context dump, Best guess)
    - one-question turns with plain-language prompts
    - progress labels (for example, Context Qx/8 and Scoring Qx/5)
    - interruption handling and pause/resume behavior
    - numbered recommendations at decision points
    - quick-select numbered response options for regular questions (include `Other (specify)` when useful)
    
    This file defines the domain-specific assessment content. If there is a conflict, follow this file's domain logic.
    
    ## Application
    
    This interactive skill asks **up to 4 adaptive questions**, then delivers a comprehensive diagnostic with prioritized recommendations.
    
    ---
    
    ### Step 0: Gather Context
    
    **Agent asks:**
    
    "Let's diagnose your business health. I'll need metrics across four dimensions: growth, retention, unit economics, and capital efficiency.
    
    **Company context:**
    - Stage: (Pre-$10M ARR, $10M-$50M ARR, $50M+ ARR)
    - Business model: (PLG, sales-led, hybrid)
    - Target market: (SMB, mid-market, enterprise, mixed)
    
    **Why this matters:** Benchmarks vary by stage. Early-stage optimizes for growth; scale-stage optimizes for efficiency.
    
    Please provide the following metrics. Use 'unknown' if you don't have a metric."
    
    ---
    
    ### Step 1: Growth & Retention Metrics
    
    **Agent asks:**
    
    "**Growth & Retention:**
    
    1. **Revenue:**
       - Current MRR or ARR: $___
       - Revenue growth rate: ___% (MoM or YoY)
    
    2. **Retention:**
       - Monthly churn rate: ___%
       - NRR (Net Revenue Retention): ___%
       - Quick Ratio: ___ (or I can calculate it)
    
    3. **Expansion:**
       - Expansion revenue as % of total MRR: ___%
    
    4. **Cohort trends:**
       - Are recent cohorts retaining better or worse than older cohorts?
         1. Better (improving)
         2. Same (stable)
         3. Worse (degrading)
         4. Unknown"
    
    **Based on answers, agent evaluates:**
    - ✅ **Healthy growth:** Growth >40% YoY (growth stage) or >25% (scale stage)
    - ✅ **Healthy retention:** NRR >100%, churn <5% monthly, Quick Ratio >2
    - 🚨 **Growth problems:** Growth <20% YoY
    - 🚨 **Retention problems:** NRR <100%, churn >5%, cohort degradation
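
    Where the user asks the agent to calculate the Quick Ratio, the conventional SaaS definition is revenue gained divided by revenue lost over the same period. A minimal sketch (function name is illustrative):

    ```python
    def quick_ratio(new_mrr: float, expansion_mrr: float,
                    churned_mrr: float, contraction_mrr: float) -> float:
        """(New MRR + Expansion MRR) / (Churned MRR + Contraction MRR)."""
        lost = churned_mrr + contraction_mrr
        return float("inf") if lost == 0 else (new_mrr + expansion_mrr) / lost

    # Example: +$50K new, +$20K expansion vs. -$15K churned, -$5K contraction
    print(quick_ratio(50_000, 20_000, 15_000, 5_000))  # 3.5 -> healthy (>2)
    ```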
    
    ---
    
    ### Step 2: Unit Economics Metrics
    
    **Agent asks:**
    
    "**Unit Economics:**
    
    1. **Acquisition:**
       - CAC (Customer Acquisition Cost): $___
       - Blended or by channel? (If by channel, what's your best channel CAC?)
    
    2. **Value:**
       - LTV (Lifetime Value): $___
       - LTV:CAC ratio: ___ (or I can calculate it)
       - Payback period: ___ months (or I can calculate it)
    
    3. **Margins:**
       - Gross margin: ___%
       - Contribution margin (if known): ___%
    
    4. **Trends:**
       - Is CAC increasing, stable, or decreasing over time?
         1. Decreasing (improving efficiency)
         2. Stable
         3. Increasing (diminishing returns)
         4. Unknown"
    
    **Based on answers, agent evaluates:**
    - ✅ **Healthy economics:** LTV:CAC >3:1, payback <12 months, gross margin >70%
    - ⚠️ **Marginal economics:** LTV:CAC 2-3:1, payback 12-18 months
    - 🚨 **Poor economics:** LTV:CAC <2:1, payback >24 months, gross margin <60%
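
    The two derived figures the agent offers to calculate follow directly from the inputs; the payback formula below is the same one this repo uses in `acquisition-channel-advisor`. A minimal sketch (names are illustrative):

    ```python
    def ltv_to_cac(ltv: float, cac: float) -> float:
        """LTV:CAC expressed as a single number (4.0 means 4:1)."""
        return ltv / cac

    def payback_months(cac: float, monthly_arpu: float, gross_margin_pct: float) -> float:
        """CAC / (Monthly ARPU x Gross Margin %): months to recover acquisition cost."""
        return cac / (monthly_arpu * gross_margin_pct / 100)

    # Example: LTV $3,000, CAC $500, ARPU $100/month, 75% gross margin
    print(ltv_to_cac(3_000, 500))        # 6.0 -> healthy (>3:1)
    print(payback_months(500, 100, 75))  # ~6.7 months -> healthy (<12)
    ```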
    
    ---
    
    ### Step 3: Capital Efficiency Metrics
    
    **Agent asks:**
    
    "**Capital Efficiency:**
    
    1. **Cash:**
       - Cash balance: $___
       - Monthly net burn rate: $___
       - Runway: ___ months (or I can calculate it)
    
    2. **Efficiency ratios:**
       - Rule of 40: ___ (Growth % + Profit Margin %) (or I can calculate it)
       - Magic Number: ___ (S&M efficiency) (or I can calculate it)
    
    3. **Operating expenses:**
       - S&M as % of revenue: ___%
       - R&D as % of revenue: ___%
       - Is OpEx growing faster than revenue?
         1. No (positive operating leverage)
         2. Yes (negative operating leverage)
         3. Unknown
    
    4. **Profitability:**
       - Profit margin: ___%
       - Path to profitability: (already profitable, 6-12 months, 12-24 months, >24 months, unknown)"
    
    **Based on answers, agent evaluates:**
    - ✅ **Healthy efficiency:** Rule of 40 >40, magic number >0.75, runway >12 months
    - ⚠️ **Acceptable efficiency:** Rule of 40 25-40, magic number 0.5-0.75, runway 6-12 months
    - 🚨 **Poor efficiency:** Rule of 40 <25, magic number <0.5, runway <6 months
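
    Both efficiency ratios and runway are simple arithmetic on the inputs above. A minimal sketch (names are illustrative):

    ```python
    def rule_of_40(growth_pct: float, profit_margin_pct: float) -> float:
        """Growth % + Profit Margin %, per the definition in question 2."""
        return growth_pct + profit_margin_pct

    def runway_months(cash_balance: float, monthly_net_burn: float) -> float:
        """Months of cash left at the current net burn."""
        return float("inf") if monthly_net_burn <= 0 else cash_balance / monthly_net_burn

    # Example: 45% growth at -10% margin; $3M cash, $200K net monthly burn
    print(rule_of_40(45, -10))                # 35 -> acceptable (25-40)
    print(runway_months(3_000_000, 200_000))  # 15.0 months -> healthy (>12)
    ```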
    
    ---
    
    ### Step 4: Deliver Comprehensive Diagnostic
    
    **Agent synthesizes all metrics and delivers:**
    
    1. **Overall Health Score** — Healthy / Moderate / Concerning / Critical
    2. **Dimension Scores** — Growth, Retention, Economics, Efficiency
    3. **Red Flags** — Critical, High Priority, Medium Priority
    4. **Prioritized Recommendations** — Top 3-5 actions with expected impact
    5. **Stage-Appropriate Benchmarks** — How you compare to peers
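
    The skill does not prescribe an exact roll-up formula for the overall score. One rough heuristic consistent with the four diagnostic patterns below; the cutoffs are illustrative assumptions, not part of the skill:

    ```python
    def overall_health(dim_scores: list, runway_mo: float, critical_flags: int) -> str:
        """dim_scores: per-dimension ratings, each 'healthy', 'warning', or 'concerning'."""
        concerning = dim_scores.count("concerning")
        if runway_mo < 3 or critical_flags >= 3:
            return "Critical"     # Pattern 4: existential crisis
        if critical_flags >= 1 or concerning >= 2:
            return "Concerning"   # Pattern 3: urgent action required
        if concerning == 1 or "warning" in dim_scores:
            return "Moderate"     # Pattern 2: fixable issues
        return "Healthy"          # Pattern 1
    ```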
    
    ---
    
    #### Diagnostic Pattern 1: Healthy Business
    
    **When:**
    - Growth, retention, economics, and efficiency all meet stage-appropriate benchmarks
    - No critical red flags
    - Improving trends
    
    **Output:**
    
    "## ✅ Overall Health: **Healthy**
    
    Your business shows strong fundamentals across all dimensions.
    
    ---
    
    ### Health Scorecard
    
    | Dimension | Score | Status |
    |-----------|-------|--------|
    | **Growth & Retention** | ✅ Healthy | Growth ___% YoY, NRR ___%, Churn ___% |
    | **Unit Economics** | ✅ Healthy | LTV:CAC ___:1, Payback ___ months |
    | **Capital Efficiency** | ✅ Healthy | Rule of 40: ___, Runway ___ months |
    | **Overall** | ✅ **Healthy** | Strong position for scaling |
    
    ---
    
    ### Key Strengths
    
    1. **[Specific strength 1]**
       - Metric: [e.g., NRR 120%]
       - Why it matters: [Expanding within base without new logos]
    
    2. **[Specific strength 2]**
       - Metric: [e.g., LTV:CAC 5:1]
       - Why it matters: [Sustainable unit economics support scaling]
    
    3. **[Specific strength 3]**
       - Metric: [e.g., Rule of 40 = 65]
       - Why it matters: [Excellent balance of growth and efficiency]
    
    ---
    
    ### Opportunities for Optimization
    
    Even healthy businesses can improve. Here are your top opportunities:
    
    **1. [Opportunity 1]**
    - Current: [e.g., Magic Number 0.9]
    - Opportunity: [Could scale S&M spend 2x and maintain efficiency]
    - Impact: [+$___ MRR/month]
    
    **2. [Opportunity 2]**
    - Current: [e.g., Expansion revenue 15% of total]
    - Opportunity: [Build upsell paths, target 25% expansion revenue]
    - Impact: [NRR 110% → 120%]
    
    **3. [Opportunity 3]**
    - Current: [e.g., CAC $500, stable]
    - Opportunity: [Improve conversion, reduce CAC to $400]
    - Impact: [Faster payback, better LTV:CAC]
    
    ---
    
    ### Recommended Actions (Next Quarter)
    
    **Priority 1: Scale what's working**
    - [e.g., Double content marketing budget (best channel)]
    - Expected impact: [+___ customers/month, +$___ MRR]
    
    **Priority 2: Expand within base**
    - [e.g., Launch premium tier for 20% of customers]
    - Expected impact: [NRR 110% → 115%]
    
    **Priority 3: Improve efficiency**
    - [e.g., Optimize paid acquisition (reduce CAC 10%)]
    - Expected impact: [Payback 8mo → 7mo]
    
    ---
    
    ### Monitor These Metrics
    
    **Weekly:**
    - NRR (should stay >___%)
    - Churn rate (should stay <___%)
    - Quick Ratio (should stay >___)
    
    **Monthly:**
    - Rule of 40 (should stay >___)
    - Magic Number (should stay >___)
    - LTV:CAC (should stay >___:1)
    
    **Quarterly:**
    - Cohort retention trends
    - Revenue concentration risk
    - Operating leverage
    
    ---
    
    ### Benchmarks (Your Stage: [Growth/Scale])
    
    | Metric | Your Performance | Benchmark | Status |
    |--------|------------------|-----------|--------|
    | Growth Rate | ___% | >40% (growth) / >25% (scale) | ✅ |
    | NRR | ___% | >100% | ✅ |
    | LTV:CAC | ___:1 | >3:1 | ✅ |
    | Rule of 40 | ___ | >40 | ✅ |
    | Gross Margin | ___% | >70% | ✅ |
    
    You're performing at or above benchmarks across the board."
    
    ---
    
    #### Diagnostic Pattern 2: Moderate Health (Fixable Issues)
    
    **When:**
    - Most metrics acceptable, but 1-2 dimensions have problems
    - Medium-priority red flags
    - Solvable with focus
    
    **Output:**
    
    "## ⚠️ Overall Health: **Moderate** (Fixable Issues)
    
    Your business has good fundamentals but needs attention in [specific dimension].
    
    ---
    
    ### Health Scorecard
    
    | Dimension | Score | Status |
    |-----------|-------|--------|
    | **Growth & Retention** | [✅ / ⚠️ / 🚨] | [Details] |
    | **Unit Economics** | [✅ / ⚠️ / 🚨] | [Details] |
    | **Capital Efficiency** | [✅ / ⚠️ / 🚨] | [Details] |
    | **Overall** | ⚠️ **Moderate** | [Primary issue area] needs attention |
    
    ---
    
    ### Red Flags Identified
    
    **High Priority** 🚨
    1. **[Specific red flag]**
       - Metric: [e.g., NRR 95%]
       - Threshold: [Should be >100%]
       - Impact: [Base is contracting, not expanding]
       - Fix by: [End of quarter]
    
    **Medium Priority** ⚠️
    1. **[Specific issue]**
       - Metric: [e.g., Magic Number 0.6]
       - Threshold: [Should be >0.75]
       - Impact: [S&M spend moderately efficient, room for improvement]
       - Fix by: [6 months]
    
    ---
    
    ### Root Cause Analysis
    
    **Primary Issue: [e.g., Retention & Expansion]**
    
    **Symptoms:**
    - NRR 95% (should be >100%)
    - Churn rate 5% monthly (should be <3%)
    - Expansion revenue only 10% of MRR (should be 20-30%)
    
    **Diagnosis:**
    [e.g., Customers are churning before they expand. Onboarding is weak, no clear upsell paths.]
    
    **Impact:**
    - Lost MRR: [Calculate churn impact]
    - Missed expansion: [Calculate expansion opportunity]
    - Total impact: [Combined revenue loss]
    
    ---
    
    ### Prioritized Action Plan
    
    **Immediate (Next 30 days):**
    
    **1. Fix [Primary Issue]**
    - Action: [Specific step, e.g., "Launch onboarding improvement program"]
    - Owner: [PM, Customer Success]
    - Target: [Reduce churn 5% → 4%]
    - Impact: [Save $___K MRR/month]
    
    **Short-term (Next Quarter):**
    
    **2. [Secondary Action]**
    - Action: [e.g., "Build premium tier for upsell"]
    - Target: [NRR 95% → 105%]
    - Impact: [+$___K expansion MRR]
    
    **3. [Tertiary Action]**
    - Action: [e.g., "Optimize S&M spend, improve magic number"]
    - Target: [Magic Number 0.6 → 0.8]
    - Impact: [More efficient growth]
    
    ---
    
    ### What Success Looks Like (90 Days)
    
    **Target metrics:**
    - NRR: 95% → 105% (+10pp)
    - Churn: 5% → 3.5% (-30%)
    - Magic Number: 0.6 → 0.8 (+33%)
    
    **Impact:**
    - Monthly revenue saved from churn: +$___K
    - Expansion revenue: +$___K
    - More efficient S&M: [details]
    
    **If you hit these targets, you'll be in 'Healthy' territory.**
    
    ---
    
    ### Monitor Weekly
    
    **Must-track metrics:**
    - Churn rate (track to ensure it's decreasing)
    - NRR (track to ensure it's improving)
    - Customer feedback (are improvements working?)
    
    **Leading indicators:**
    - Onboarding completion rate
    - Time-to-value
    - Usage metrics (activation, engagement)
    
    ---
    
    ### What Not to Do
    
    **Don't:**
    - Scale acquisition until retention is fixed (you'll just churn faster)
    - Ignore expansion (it's easier than new acquisition)
    - Wait too long (retention problems compound)"
    
    ---
    
    #### Diagnostic Pattern 3: Concerning Health (Urgent Action Required)
    
    **When:**
    - Multiple critical red flags
    - 2+ dimensions problematic
    - Requires immediate intervention
    
    **Output:**
    
    "## 🚨 Overall Health: **Concerning** (Urgent Action Required)
    
    Your business has multiple critical issues that need immediate attention.
    
    ---
    
    ### Health Scorecard
    
    | Dimension | Score | Status |
    |-----------|-------|--------|
    | **Growth & Retention** | 🚨 Concerning | [Details] |
    | **Unit Economics** | 🚨 Concerning | [Details] |
    | **Capital Efficiency** | 🚨 Critical | [Details] |
    | **Overall** | 🚨 **Concerning** | Multiple urgent issues |
    
    ---
    
    ### Critical Red Flags 🚨
    
    **1. [Critical Issue 1 - e.g., Runway]**
    - Current: [5 months runway]
    - Threshold: [<6 months = crisis]
    - Impact: [Survival risk]
    - Action: [Raise capital OR cut burn immediately]
    - Timeline: [30 days]
    
    **2. [Critical Issue 2 - e.g., Unit Economics]**
    - Current: [LTV:CAC 1.2:1]
    - Threshold: [<1.5:1 = unsustainable]
    - Impact: [Losing money on every customer]
    - Action: [Reduce CAC OR increase LTV]
    - Timeline: [60 days]
    
    **3. [Critical Issue 3 - e.g., Cohort Degradation]**
    - Current: [Newer cohorts churning 2x faster than old]
    - Signal: [Degrading product-market fit]
    - Impact: [Scaling makes problem worse]
    - Action: [Stop scaling, fix retention]
    - Timeline: [90 days]
    
    ---
    
    ### Survival Plan (Next 90 Days)
    
    **Week 1-2: Triage**
    
    **Immediate actions:**
    1. **Extend runway** (if <6 months)
       - Option A: Raise bridge round ($___K)
       - Option B: Cut burn by ___%
       - Option C: Combination
       - Decision by: [Date]
    
    2. **Stop scaling broken channels**
       - Pause S&M spend on channels with LTV:CAC <2:1
       - Reallocate budget to [best-performing channel]
    
    3. **Assemble crisis team**
       - Daily standups on key metrics
       - Weekly progress reviews
    
    ---
    
    **Month 1: Stop the Bleeding**
    
    **Priority 1: Fix Unit Economics**
    - Current: LTV:CAC ___:1 (unsustainable)
    - Actions:
      1. Reduce CAC: [Specific tactics]
      2. Increase LTV: [Improve retention, add expansion]
    - Target: LTV:CAC >2:1 within 30 days
    
    **Priority 2: Improve Retention**
    - Current: Churn ___% (too high)
    - Actions:
      1. Interview churned customers (identify top 3 reasons)
      2. Fix onboarding (reduce early churn)
      3. Proactive outreach to at-risk accounts
    - Target: Reduce churn by 20% within 30 days
    
    ---
    
    **Month 2-3: Stabilize**
    
    **Milestone 1: Positive Unit Economics**
    - LTV:CAC >2:1 ✅
    - Payback <18 months ✅
    - Gross margin >60% ✅
    
    **Milestone 2: Slowing Churn**
    - Churn decreasing month-over-month
    - Cohort degradation stopped
    - NRR improving toward 100%
    
    **Milestone 3: Runway Extended**
    - 12+ months runway (via fundraise or burn reduction)
    - Clear path to next milestone
    
    ---
    
    ### What Success Looks Like (Day 90)
    
    **Metrics:**
    - Runway: ___ months → 12+ months ✅
    - LTV:CAC: ___:1 → >2:1 ✅
    - Churn: ___% → reduced by 30% ✅
    - NRR: ___% → improving toward 100%
    
    **Position:**
    - Out of crisis mode
    - Stable foundation to rebuild growth
    - Clear plan for next 6-12 months
    
    ---
    
    ### What to Avoid
    
    **Don't:**
    - Try to grow your way out of this (fix unit economics first)
    - Ignore the data (hope is not a strategy)
    - Scale before you fix retention (accelerates failure)
    - Wait until runway <3 months to fundraise (too late)
    
    **Do:**
    - Focus ruthlessly on retention and unit economics
    - Cut costs to extend runway
    - Be honest with board/investors about problems
    - Move fast (you don't have time to waste)"
    
    ---
    
    #### Diagnostic Pattern 4: Critical Health (Existential Crisis)
    
    **When:**
    - Runway <3 months OR
    - Multiple critical failures (LTV:CAC <1:1, massive churn, no path to profitability)
    
    **Output:**
    
    "## 🚨🚨 Overall Health: **Critical** (Existential Crisis)
    
    Your business is in survival mode. Immediate drastic action required.
    
    [Similar structure to Pattern 3, but more urgent tone, shorter timelines, more drastic measures]
    
    **Immediate Actions (This Week):**
    1. Emergency board meeting
    2. Fundraise immediately OR cut burn 50%+
    3. Stop all non-essential spend
    4. Fix top 1-2 critical issues (runway, unit economics)"
    
    ---
    
    ## Examples
    
    See `examples/` folder. Mini examples below:
    
    ### Example 1: Healthy Growth-Stage SaaS
    
    **Metrics:**
    - ARR: $20M, Growth: 60% YoY
    - NRR: 115%, Churn: 2.5%
    - LTV:CAC: 4:1, Payback: 10 months
    - Rule of 40: 50, Runway: 18 months
    
    **Diagnosis:** Healthy. Scale aggressively.
    
    ---
    
    ### Example 2: Moderate Health (Retention Issue)
    
    **Metrics:**
    - ARR: $15M, Growth: 40% YoY
    - NRR: 95%, Churn: 5%
    - LTV:CAC: 3.5:1, Payback: 12 months
    - Rule of 40: 38, Runway: 12 months
    
    **Diagnosis:** Moderate. Fix retention before scaling further.
    
    ---
    
    ### Example 3: Concerning (Multiple Issues)
    
    **Metrics:**
    - ARR: $8M, Growth: 25% YoY (slowing)
    - NRR: 88%, Churn: 7% (increasing)
    - LTV:CAC: 1.8:1, Payback: 20 months
    - Rule of 40: 15, Runway: 8 months
    
    **Diagnosis:** Concerning. Urgent action on retention and unit economics required.
    
    ---
    
    ## Common Pitfalls
    
    ### Pitfall 1: Celebrating Single Metrics
    **Symptom:** "Revenue growing 50%!" (ignoring burn, churn, unit economics)
    
    **Consequence:** Unsustainable growth. Scaling broken model.
    
    **Fix:** Look at all four dimensions together.
    
    ---
    
    ### Pitfall 2: Ignoring Stage-Specific Benchmarks
    **Symptom:** "We're not profitable yet, is that bad?" (early-stage company)
    
    **Consequence:** Misplaced worry. Early-stage should optimize for growth and unit economics, not profitability.
    
    **Fix:** Use stage-appropriate benchmarks.
    
    ---
    
    ### Pitfall 3: Focusing on Lagging Indicators Only
    **Symptom:** "Churn is 5%, let's watch it"
    
    **Consequence:** By the time lagging indicators (churn, NRR) show problems, it's already too late to respond cheaply.
    
    **Fix:** Track leading indicators (usage, engagement, onboarding completion).
    
    ---
    
    ### Pitfall 4: Not Acting on Red Flags
    **Symptom:** "NRR <100% for 3 quarters, but we'll fix it eventually"
    
    **Consequence:** Problems compound. Becomes crisis.
    
    **Fix:** Set clear timelines. If metric doesn't improve in X time, escalate.
    
    ---
    
    ### Pitfall 5: Trying to Fix Everything at Once
    **Symptom:** "Let's improve growth, retention, CAC, and efficiency simultaneously"
    
    **Consequence:** Resources spread thin. Nothing improves.
    
    **Fix:** Prioritize top 1-3 issues. Fix sequentially.
    
    ---
    
    ## References
    
    ### Related Skills
    - `saas-revenue-growth-metrics` — Detailed growth and retention metrics
    - `saas-economics-efficiency-metrics` — Detailed unit economics and capital efficiency
    - `finance-metrics-quickref` — Fast lookup for all metrics and benchmarks
    - `feature-investment-advisor` — Uses health diagnostic to inform feature priorities
    - `acquisition-channel-advisor` — Uses health diagnostic to inform channel priorities
    - `finance-based-pricing-advisor` — Uses health diagnostic to inform pricing decisions
    
    ### External Frameworks
    - **Bessemer Venture Partners:** "SaaS Metrics 2.0" — Comprehensive benchmarks
    - **David Skok:** "SaaS Metrics" — Unit economics benchmarks
    - **OpenView Partners:** SaaS benchmarking reports
    - **Battery Ventures:** "State of SaaS" annual report
    
    ### Provenance
    - Adapted from `research/finance/Finance_QuickRef.md` (Red flags table)
    - Decision frameworks from `research/finance/Finance_For_PMs.Putting_It_Together_Synthesis.md`
    - Benchmarks from `research/finance/Finance for Product Managers.md`
    
  • skills/acquisition-channel-advisor/SKILL.mdskill
    Show content (21306 bytes)
    ---
    name: acquisition-channel-advisor
    description: Evaluate acquisition channels using unit economics, customer quality, and scalability. Use when deciding whether to scale, test, or kill a growth channel.
    intent: >-
      Guide product managers through evaluating whether to scale, test, or kill an acquisition channel based on unit economics (CAC, LTV, payback), customer quality (retention, NRR), and scalability (magic number, volume potential). Use this to make data-driven go-to-market decisions and optimize channel mix for sustainable growth.
    type: interactive
    best_for:
      - "Deciding whether a paid or outbound channel deserves more budget"
      - "Comparing channel quality, payback, and scalability side by side"
      - "Making scale, test, or kill decisions with finance-backed reasoning"
    scenarios:
      - "Should we keep investing in paid LinkedIn ads for enterprise leads?"
      - "Compare content marketing, outbound email, and partner referrals as acquisition channels"
      - "Help me decide whether to scale or kill our webinar acquisition channel"
    ---
    
    
    ## Purpose
    
    Guide product managers through evaluating whether to scale, test, or kill an acquisition channel based on unit economics (CAC, LTV, payback), customer quality (retention, NRR), and scalability (magic number, volume potential). Use this to make data-driven go-to-market decisions and optimize channel mix for sustainable growth.
    
    This is not a channel strategy framework—it's a financial lens for channel evaluation that helps you avoid scaling unprofitable channels or killing channels with fixable problems. Use when deciding how to allocate marketing budget across channels.
    
    ## Key Concepts
    
    ### The Channel Evaluation Framework
    
    A systematic approach to evaluate acquisition channels:
    
    1. **Unit Economics** — What does it cost to acquire, and what's the return?
       - CAC (Customer Acquisition Cost)
       - LTV (Lifetime Value)
       - LTV:CAC ratio
       - Payback period
    
    2. **Customer Quality** — Do customers from this channel stick around and expand?
       - Cohort retention rate (by channel)
       - Churn rate (by channel)
       - NRR (Net Revenue Retention by channel)
       - Expansion rate
    
    3. **Scalability** — Can this channel sustain growth at the volume you need?
       - Magic Number (S&M efficiency)
       - Addressable volume (TAM of channel)
       - Saturation risk (diminishing returns)
       - CAC trend (increasing, stable, decreasing)
    
    4. **Strategic Fit** — Does this channel align with your go-to-market strategy?
       - Customer segment match (SMB vs. enterprise)
       - Sales motion compatibility (PLG vs. sales-led)
       - Brand positioning alignment
    
    ### Decision Matrix
    
    | LTV:CAC | Payback | Customer Quality | Scalability | Decision |
    |---------|---------|------------------|-------------|----------|
    | >3:1 | <12mo | Good retention | High volume | **Scale aggressively** |
    | 2-3:1 | 12-18mo | Average retention | Medium volume | **Test & optimize** |
    | <2:1 | >18mo | Poor retention | Low volume | **Kill or fix** |
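
    A minimal sketch of the matrix as a function, assuming mixed signals fall through to the more cautious row (labels are illustrative):

    ```python
    def channel_decision(ltv_cac: float, payback_mo: float,
                         retention: str, volume: str) -> str:
        """retention: 'good', 'average', or 'poor'; volume: 'high', 'medium', or 'low'."""
        if ltv_cac < 2 or payback_mo > 18 or retention == "poor":
            return "Kill or fix"
        if ltv_cac > 3 and payback_mo < 12 and retention == "good" and volume == "high":
            return "Scale aggressively"
        return "Test & optimize"  # marginal economics, or good economics without volume

    print(channel_decision(4.0, 8, "good", "high"))        # Scale aggressively
    print(channel_decision(2.5, 14, "average", "medium"))  # Test & optimize
    ```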
    
    ### Anti-Patterns (What This Is NOT)
    
    - **Not vanity metrics:** "We got 10,000 signups!" means nothing if they churn in 30 days
    - **Not CAC-only thinking:** Low CAC with terrible retention is worse than high CAC with great retention
    - **Not ignoring payback:** 5:1 LTV:CAC with 36-month payback is a cash trap
    - **Not scaling broken channels:** Pouring money into inefficient channels accelerates failure
    
    ### When to Use This Framework
    
    **Use this when:**
    - Evaluating whether to scale a new channel (content, paid, events, etc.)
    - Deciding how to allocate marketing budget across channels
    - Assessing whether to kill an underperforming channel
    - Comparing channels to optimize ROI
    - Planning annual marketing budget allocation
    
    **Don't use this when:**
    - Channel is brand-new (<3 months, <100 customers) — not enough data
    - You're testing channel fit (this is for evaluation, not experimentation)
    - Strategic channels (e.g., enterprise deals require field sales regardless of CAC)
    - You don't have channel-level data (need to track CAC, retention by source)
    
    ---
    
    ### Facilitation Source of Truth
    
    Use [`workshop-facilitation`](../workshop-facilitation/SKILL.md) as the default interaction protocol for this skill.
    
    It defines:
    - session heads-up + entry mode (Guided, Context dump, Best guess)
    - one-question turns with plain-language prompts
    - progress labels (for example, Context Qx/8 and Scoring Qx/5)
    - interruption handling and pause/resume behavior
    - numbered recommendations at decision points
    - quick-select numbered response options for regular questions (include `Other (specify)` when useful)
    
    This file defines the domain-specific assessment content. If there is a conflict, follow this file's domain logic.
    
    ## Application
    
    This interactive skill asks **up to 4 adaptive questions**, offering **3-5 enumerated options** at decision points.
    
    ---
    
    ### Step 0: Gather Context
    
    **Agent asks:**
    
    "Let's evaluate this acquisition channel. Please provide:
    
    **Channel details:**
    - Channel name (e.g., Google Ads, content marketing, outbound sales, partnerships)
    - How long have you been using this channel? (months)
    - Current monthly spend on this channel
    
    **Customer acquisition:**
    - Customers acquired per month (from this channel)
    - CAC for this channel (if known, otherwise we'll calculate)
    
    **Business context:**
    - Blended CAC (across all channels)
    - Blended LTV
    - Current MRR/ARR
    - Target growth rate (% MoM or YoY)
    
    You can provide estimates if you don't have exact numbers."
    
    ---
    
    ### Step 1: Evaluate Unit Economics
    
    **Agent calculates (if not provided):**
    ```
    CAC = Monthly Spend / Customers Acquired per Month
    ```
    
    **Agent asks:**
    
    "Now let's compare this channel's unit economics to your blended metrics.
    
    **Channel Unit Economics:**
    - Channel CAC: $___
    - Blended CAC (all channels): $___
    - Channel LTV: $___ (if known; otherwise we'll use blended LTV as proxy)
    - Blended LTV: $___
    
    **Questions:**
    
    1. **Do customers from this channel have similar LTV to other channels?**
       - Similar (use blended LTV)
       - Higher (they upgrade more, stick around longer)
       - Lower (they churn faster or are smaller deals)
       - Unknown (need to analyze cohort data)
    
    2. **What's the payback period for this channel?**
       - We can calculate: CAC / (Monthly ARPU × Gross Margin %)
       - Or you can provide it"
    
    **Based on answers, agent calculates:**
    - LTV:CAC ratio for channel
    - Payback period
    - Comparison to blended metrics
    
    **Agent flags:**
    - ✅ If LTV:CAC >3:1 and payback <12 months: "Strong unit economics"
    - ⚠️ If LTV:CAC 2-3:1 or payback 12-18 months: "Marginal unit economics"
    - 🚨 If LTV:CAC <2:1 or payback >18 months: "Poor unit economics"
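
    A minimal sketch of the CAC calculation and the flag thresholds above (names are illustrative):

    ```python
    def channel_cac(monthly_spend: float, customers_per_month: float) -> float:
        """CAC = Monthly Spend / Customers Acquired per Month (Step 1)."""
        return monthly_spend / customers_per_month

    def flag_unit_economics(ltv_cac: float, payback_mo: float) -> str:
        """Mirror the agent's flags: strong only if both thresholds clear."""
        if ltv_cac < 2 or payback_mo > 18:
            return "poor"      # the 🚨 case
        if ltv_cac > 3 and payback_mo < 12:
            return "strong"    # the ✅ case
        return "marginal"      # the ⚠️ case

    # Example: $40K/month spend, 80 customers -> CAC $500
    print(channel_cac(40_000, 80))      # 500.0
    print(flag_unit_economics(4.0, 8))  # strong
    ```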
    
    ---
    
    ### Step 2: Assess Customer Quality
    
    **Agent asks:**
    
    "How do customers from this channel perform compared to other channels?
    
    **Retention & Expansion:**
    
    1. **What's the churn rate for customers from this channel?**
       - Lower than blended (they stick around longer)
       - Same as blended (no difference)
       - Higher than blended (they churn faster)
       - Unknown (need cohort analysis)
    
    2. **What's the NRR for customers from this channel?**
       - Higher than blended (they expand more)
       - Same as blended (no difference)
       - Lower than blended (they contract or churn more)
       - Unknown (need cohort analysis)
    
    3. **What's the customer profile from this channel?**
       - Ideal customer profile (ICP) — perfect fit
       - Close to ICP — mostly good fit
       - Off ICP — many poor-fit customers
       - Unknown"
    
    **Based on answers, agent evaluates:**
    - ✅ **High quality:** Lower churn, higher NRR, ICP match
    - ⚠️ **Medium quality:** Similar to blended, mostly good fit
    - 🚨 **Low quality:** Higher churn, lower NRR, off ICP
    
    **Agent flags:**
    - If high quality: "Premium channel—customers are better than average"
    - If low quality: "Quality problem—customers aren't sticking or expanding"
    
    ---
    
    ### Step 3: Evaluate Scalability
    
    **Agent asks:**
    
    "Can this channel scale to meet your growth targets?
    
    **Efficiency & Volume:**
    
    1. **What's the S&M efficiency for this channel (Magic Number)?**
       - Calculate: (net new quarterly revenue from this channel × 4) / channel S&M spend for the prior quarter (the standard Magic Number convention)
       - Or provide if known
    
    2. **What's the addressable volume for this channel?**
       - Large (can scale 10x+ from current spend)
       - Medium (can scale 2-5x)
       - Small (near saturation, maybe 1.5x)
       - Unknown
    
    3. **What's the CAC trend for this channel?**
       - Decreasing (getting more efficient over time)
       - Stable (consistent CAC)
       - Increasing (diminishing returns, saturation)
       - Unknown (too early to tell)
    
    4. **How much growth do you need from acquisition?**
       - We'll calculate: Target growth - expansion/retention growth = acquisition gap"
    
    **Based on answers, agent evaluates:**
    - ✅ **Highly scalable:** Magic number >0.75, large volume, stable/decreasing CAC
    - ⚠️ **Moderately scalable:** Magic number 0.5-0.75, medium volume, stable CAC
    - 🚨 **Not scalable:** Magic number <0.5, small volume, increasing CAC
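
    A minimal sketch of the two calculations the agent offers in this step; Magic Number conventions vary, so this follows the quarterly definition given in question 1 (names are illustrative):

    ```python
    def magic_number(new_quarterly_revenue: float, prior_quarter_sm_spend: float) -> float:
        """(Quarterly new revenue x 4) / prior-quarter S&M spend."""
        return (new_quarterly_revenue * 4) / prior_quarter_sm_spend

    def acquisition_gap(target_growth_pct: float, expansion_retention_pct: float) -> float:
        """Growth that must come from new acquisition (question 4)."""
        return target_growth_pct - expansion_retention_pct

    print(magic_number(150_000, 600_000))  # 1.0 -> efficient (>0.75)
    print(acquisition_gap(60, 15))         # 45 points must come from acquisition
    ```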
    
    ---
    
    ### Step 4: Deliver Recommendations
    
    **Agent synthesizes:**
    - Unit economics (LTV:CAC, payback)
    - Customer quality (retention, NRR, ICP fit)
    - Scalability (magic number, volume, CAC trend)
    - Strategic fit
    
    **Agent offers 3-4 recommendations:**
    
    ---
    
    #### Recommendation Pattern 1: Scale Aggressively
    
    **When:**
    - LTV:CAC >3:1 AND
    - Payback <12 months AND
    - Customer quality good or better AND
    - Magic Number >0.75 AND
    - Addressable volume large
    
    **Recommendation:**
    
    "**Scale this channel aggressively** — Excellent economics + scalability
    
    **Unit Economics:**
    - CAC: $___
    - LTV: $___
    - LTV:CAC: ___:1 ✅ (>3:1 threshold)
    - Payback: ___ months ✅ (<12 months)
    
    **Customer Quality:**
    - Retention: [Better than / Same as / Worse than] blended
    - NRR: [Higher / Same / Lower]
    - ICP Fit: [High / Medium / Low]
    
    **Scalability:**
    - Magic Number: ___ ✅ (>0.75 = efficient)
    - Addressable Volume: Large
    - CAC Trend: [Stable / Decreasing]
    
    **Why this is a winner:**
    - Every $1 spent returns $__ in LTV
    - Payback in under a year = fast cash recovery
    - [Customer quality insight]
    - Can scale 5-10x from current spend
    
    **How to scale:**
    1. **Increase budget by 50-100% next month**
       - Current: $___ /month → Target: $___ /month
    2. **Monitor key metrics weekly:**
       - CAC (should stay <$___)
       - Magic Number (should stay >0.75)
       - Customer quality (retention, NRR)
    3. **Scale until:**
       - CAC increases >20% (saturation signal)
       - Magic Number drops <0.75 (efficiency declining)
       - Volume caps out
    
    **Expected impact:**
    - Current: ___ customers/month
    - Target (2x spend): ___ customers/month
    - MRR impact: +$___/month
    - Payback: Still ~___ months even at 2x scale
    
    **Risk:** Low. Strong unit economics support aggressive scaling."
    
    ---
    
    #### Recommendation Pattern 2: Test & Optimize
    
    **When:**
    - LTV:CAC 2-3:1 OR
    - Payback 12-18 months OR
    - Customer quality average OR
    - Magic Number 0.5-0.75
    
    **Recommendation:**
    
    "**Test & optimize before scaling** — Marginal economics, fixable
    
    **Current State:**
    - CAC: $___
    - LTV: $___
    - LTV:CAC: ___:1 ⚠️ (2-3:1 = marginal)
    - Payback: ___ months ⚠️ (12-18 months)
    - Magic Number: ___ ⚠️ (0.5-0.75 = acceptable, not great)
    
    **Customer Quality:**
    - Retention: [Same as blended / Slightly worse]
    - NRR: [Same / Lower]
    - Issue: [Specific problem, e.g., "Higher churn in first 90 days"]
    
    **Diagnosis:**
    [One of these:]
    - **High CAC:** Spending too much to acquire
    - **Low LTV:** Customers churn too fast or don't expand
    - **Poor targeting:** Attracting off-ICP customers
    - **Inefficient conversion:** High cost-per-click but low conversion rate
    
    **How to optimize:**
    
    **If CAC is the problem:**
    1. Improve conversion rate (optimize landing pages, offer, onboarding)
    2. Reduce cost-per-click (better targeting, ad creative)
    3. Shorten sales cycle (faster qualification, better demos)
    
    **If LTV is the problem:**
    1. Improve onboarding for customers from this channel
    2. Target higher-value segments within channel
    3. Add expansion plays (upsell, cross-sell)
    
    **If targeting is the problem:**
    1. Narrow audience (exclude poor-fit segments)
    2. Improve messaging (attract better-fit customers)
    3. Add qualification step (reduce poor-fit signups)
    
    **Timeline:**
    - Spend 4-8 weeks optimizing
    - Track CAC and LTV weekly
    - Target: LTV:CAC >3:1, payback <12 months
    - If you hit targets: scale
    - If you can't fix it: consider killing
    
    **Don't scale yet:** Current economics are break-even at best. Fix first, then scale."
    
    ---
    
    #### Recommendation Pattern 3: Kill or Pause
    
    **When:**
    - LTV:CAC <1.5:1 AND
    - No clear path to improvement
    
    **Recommendation:**
    
    "**Kill this channel (or pause)** — Economics don't support investment
    
    **Why:**
    - CAC: $___
    - LTV: $___
    - LTV:CAC: ___:1 🚨 (<2:1 = unsustainable)
    - Payback: ___ months 🚨 (>18 months = cash trap)
    
    **Problem:**
    - You're spending $___ to acquire a customer worth $___
    - [Losing money / Barely breaking even / Taking too long to recover cost]
    
    **Customer Quality:**
    - Retention: [Worse than blended]
    - NRR: [Lower]
    - ICP Fit: [Poor]
    
    **What's broken:**
    [Specific diagnosis:]
    - CAC too high (spending $___ vs. blended $___)
    - LTV too low (customers churn at ___% vs. blended ___%)
    - Both (bad unit economics from both sides)
    
    **Should you fix or kill?**
    
    **Fix if:**
    - You have a hypothesis to improve CAC by 50%+ (better targeting, conversion)
    - You have a hypothesis to improve LTV by 50%+ (better onboarding, ICP focus)
    - This is a strategically important channel (e.g., enterprise requires field sales)
    
    **Kill if:**
    - No clear path to 3:1 LTV:CAC
    - Better channels available (reallocate budget there)
    - Small addressable volume (not worth fixing)
    
    **Recommendation: Kill and reallocate budget**
    
    **Reallocate to:**
    - Channel X (LTV:CAC = ___:1, can scale)
    - Channel Y (Magic Number = ___, efficient)
    
    **What to do with budget:**
    - Current channel spend: $___/month
    - Reallocate to [top-performing channel]
    - Expected impact: [better CAC, better LTV, faster payback]
    
    **Exception:** If this channel is <10% of total S&M spend, just pause it. Not worth fixing."
    
    ---
    
    #### Recommendation Pattern 4: Invest to Learn (Strategic Channel)
    
    **When:**
    - Poor unit economics BUT
    - Strategic importance (enterprise channel, brand building, long-term)
    
    **Recommendation:**
    
    "**Continue, but cap investment** — Strategic value > short-term ROI
    
    **Financial Reality:**
    - CAC: $___
    - LTV: $___
    - LTV:CAC: ___:1 (below 3:1 threshold)
    - Payback: ___ months (long)
    
    **Why continue despite poor economics:**
    - [Strategic reason: e.g., "Enterprise segment requires field events, but deals are 12-month sales cycles"]
    - [Brand building: e.g., "Conferences build brand awareness that drives inbound long-term"]
    - [Market positioning: e.g., "Need to be present in this channel for credibility"]
    
    **How to manage:**
    1. **Cap spend** — Don't scale until economics improve
       - Current: $___/month
       - Cap at: $___/month (hold steady)
    2. **Track leading indicators** — Don't just look at short-term CAC/LTV
       - Pipeline influence
       - Brand awareness lift
       - Referral rate from this channel
    3. **Re-evaluate quarterly**
       - If economics improve (LTV:CAC >3:1): scale
       - If economics stay poor: reconsider strategy
    
    **Timeline:**
    - Give it [6-12 months] to show results
    - If no improvement: kill or reduce drastically
    
    **Risk:** You're subsidizing growth. Make sure it's worth it."
    
    ---
    
    ### Step 5: Compare Across Channels (Optional)
    
    **If user has multiple channels, agent can generate:**
    
    | Channel | CAC | LTV | LTV:CAC | Payback | Magic Number | Quality | Recommendation |
    |---------|-----|-----|---------|---------|--------------|---------|----------------|
    | Google Ads | $500 | $2,000 | 4:1 | 8mo | 0.9 | High | Scale |
    | Content | $200 | $1,500 | 7.5:1 | 4mo | 1.2 | High | Scale |
    | Outbound | $10K | $50K | 5:1 | 18mo | 0.6 | Medium | Optimize |
    | Events | $15K | $30K | 2:1 | 24mo | 0.3 | Low | Kill |
    
    **Budget allocation recommendation:**
    1. Scale: Content (highest efficiency)
    2. Scale: Google Ads (strong economics)
    3. Optimize: Outbound (improve magic number)
    4. Kill: Events (reallocate budget)
    
    ---
    
    ## Examples
    
    See `examples/` folder for sample conversation flows. Mini examples below:
    
    ### Example 1: Scale (Content Marketing)
    
    **Channel:** Organic content (blog, SEO)
    - CAC: $200
    - LTV: $3,000
    - LTV:CAC: 15:1
    - Payback: 3 months
    - Magic Number: 1.8
    - Customer quality: High (lower churn, higher NRR)
    
    **Recommendation:** Scale aggressively. Exceptional unit economics, fast payback, high-quality customers. Increase content spend 2-3x.
    
    ---
    
    ### Example 2: Optimize (Paid Search)
    
    **Channel:** Google Ads
    - CAC: $800
    - LTV: $2,000
    - LTV:CAC: 2.5:1
    - Payback: 14 months
    - Magic Number: 0.6
    - Customer quality: Lower (higher churn in first 90 days)
    
    **Recommendation:** Test & optimize before scaling. CAC is high, onboarding is weak for this segment. Improve landing page, target higher-intent keywords, better onboarding for paid customers.
    
    ---
    
    ### Example 3: Kill (Trade Shows)
    
    **Channel:** Industry events
    - CAC: $20,000
    - LTV: $30,000
    - LTV:CAC: 1.5:1
    - Payback: 30 months
    - Magic Number: 0.2
    - Customer quality: Low (off-ICP, many tire-kickers)
    
    **Recommendation:** Kill. CAC too high, payback too long, poor customer quality. Reallocate budget to content and paid search.
    
    ---
    
    ## Common Pitfalls
    
    ### Pitfall 1: Scaling Broken Channels
    **Symptom:** "Let's 10x our Google Ads spend!" (LTV:CAC is 1.5:1)
    
    **Consequence:** You accelerate cash burn without improving unit economics. Lose money faster.
    
    **Fix:** Only scale channels with LTV:CAC >3:1 and payback <12 months. Fix broken channels before scaling.
    
    ---
    
    ### Pitfall 2: Ignoring Customer Quality
    **Symptom:** "CAC is only $100!" (but customers churn in 30 days)
    
    **Consequence:** Low CAC means nothing if LTV is also low. You're acquiring churners, not customers.
    
    **Fix:** Track cohort retention and NRR by channel. Low CAC + high churn = bad channel.
    
    ---
    
    ### Pitfall 3: Celebrating Vanity Metrics
    **Symptom:** "We got 10,000 signups from this campaign!" (5% convert to paid)
    
    **Consequence:** Signups don't pay bills. CAC is calculated on paid customers, not signups.
    
    **Fix:** Track CAC on paid customers only. Ignore vanity metrics like signups, impressions, clicks.
    
    ---
    
    ### Pitfall 4: Averaging Across Channels
    **Symptom:** "Blended CAC is $500" (but hiding that one channel is $10K CAC)
    
    **Consequence:** Bad channels hide in blended metrics. You don't know which channels to kill.
    
    **Fix:** Track CAC, LTV, payback by channel. Compare channels individually.
    
    ---
    
    ### Pitfall 5: Short-Term CAC Optimization
    **Symptom:** "We reduced CAC 50%!" (by targeting low-intent, low-LTV customers)
    
    **Consequence:** CAC dropped but so did LTV. Unit economics got worse, not better.
    
    **Fix:** Optimize for LTV:CAC ratio, not CAC alone. Higher CAC with higher LTV is better.
    
    ---
    
    ### Pitfall 6: Ignoring Payback Period
    **Symptom:** "LTV:CAC is 6:1, this channel is amazing!" (payback is 48 months)
    
    **Consequence:** You run out of cash before recovering CAC. Great ratio, terrible cash flow.
    
    **Fix:** Pair LTV:CAC with payback period. 3:1 with 8-month payback beats 6:1 with 36-month payback.
    
    ---
    
    ### Pitfall 7: Killing Channels Too Early
    **Symptom:** "This channel didn't work after 2 weeks"
    
    **Consequence:** Channels need time to optimize. Killing too early wastes learning.
    
    **Fix:** Give channels 3-6 months and 100+ customers before evaluating. Track trends, not snapshots.
    
    ---
    
    ### Pitfall 8: Over-Relying on One Channel
    **Symptom:** "90% of our customers come from Google Ads"
    
    **Consequence:** An algorithm change, a competitor outbidding you, or channel saturation grinds the business to a halt.
    
    **Fix:** Diversify channels. No single channel should be >50% of new customer acquisition.
    
    ---
    
    ### Pitfall 9: Forgetting Incrementality
    **Symptom:** "This retargeting campaign has great ROI!" (but customers would've converted anyway)
    
    **Consequence:** You're paying for conversions that would happen organically. Inflated ROI.
    
    **Fix:** Test incrementality with holdout groups. Only count truly incremental conversions.
    
    ---
    
    ### Pitfall 10: Strategic Channels Without Limits
    **Symptom:** "Enterprise events are strategic, we can't stop!" (losing $500K/year)
    
    **Consequence:** "Strategic" becomes an excuse for burning cash indefinitely.
    
    **Fix:** Cap spend on strategic channels. Set timeline for improvement (6-12 months). If no progress, kill.
    
    ---
    
    ## References
    
    ### Related Skills
    - `saas-economics-efficiency-metrics` — CAC, LTV, payback, magic number calculations
    - `saas-revenue-growth-metrics` — NRR, churn, cohort analysis by channel
    - `finance-metrics-quickref` — Fast lookup for channel evaluation metrics
    - `feature-investment-advisor` — Similar ROI framework for feature decisions
    - `business-health-diagnostic` — Broader business health assessment
    
    ### External Frameworks
    - **Brian Balfour (Reforge):** Channel-product fit framework
    - **David Skok:** "SaaS Metrics" — CAC, LTV, and payback for channels
    - **Tomasz Tunguz:** SaaS channel benchmarking
    - **First Round Review:** "How to Find and Scale Your Growth Channels"
    
    ### Provenance
    - Adapted from `research/finance/Finance_For_PMs.Putting_It_Together_Synthesis.md` (Decision Framework #2)
    - Channel economics from `research/finance/Finance for Product Managers.md`
    
  • skills/customer-journey-map/SKILL.mdskill
    Show content (14139 bytes)
    ---
    name: customer-journey-map
    description: Create a customer journey map across stages, touchpoints, actions, emotions, and metrics. Use when diagnosing a broken experience or aligning a team on the full customer flow.
    intent: >-
      Create a comprehensive customer journey map that visualizes how customers interact with your brand across all stages—from awareness to loyalty—documenting their actions, touchpoints, emotions, KPIs, business goals, and teams involved at each stage. Use this to identify pain points, align cross-functional teams, and systematically improve the customer experience to achieve business objectives.
    type: component
    theme: workshops-facilitation
    best_for:
      - "Mapping the full customer experience across all touchpoints"
      - "Aligning cross-functional teams on the end-to-end customer journey"
      - "Identifying pain points and opportunities by stage with measurable KPIs"
    scenarios:
      - "I need to map the customer journey for our B2B SaaS onboarding experience from signup to first value"
      - "Create a journey map for a PM leader evaluating our skills repo — from discovery through loyalty"
    estimated_time: "20-30 min"
    ---
    
    
    ## Purpose
    Create a comprehensive customer journey map that visualizes how customers interact with your brand across all stages—from awareness to loyalty—documenting their actions, touchpoints, emotions, KPIs, business goals, and teams involved at each stage. Use this to identify pain points, align cross-functional teams, and systematically improve the customer experience to achieve business objectives.
    
    This is not a user flow diagram—it's a strategic artifact that combines customer empathy with business metrics to drive actionable improvements.
    
    ## Key Concepts
    
    ### The Customer Journey Mapping Framework
    Adapted from NNGroup's framework and Carnegie Mellon's PM curriculum, a customer journey map documents:
    
    **Horizontal structure (stages):**
    - **Awareness:** Customer first learns about your brand
    - **Consideration:** Customer evaluates your offering
    - **Decision:** Customer makes a purchase
    - **Service:** Customer uses the product/service post-purchase
    - **Loyalty:** Customer becomes a repeat buyer and advocate
    
    **Vertical structure (for each stage):**
    - **Customer Actions:** What customers do
    - **Touchpoints:** Where/how they interact with your brand
    - **Customer Experience:** Emotions and thoughts
    - **KPIs:** Metrics to measure success
    - **Business Goals:** What you're trying to achieve
    - **Teams Involved:** Who owns this stage
    
    ### Why This Works
    - **Empathy-driven:** Centers on customer emotions, not just actions
    - **Cross-functional alignment:** Shows which teams affect which stages
    - **Metric-focused:** Ties customer experience to measurable outcomes
    - **Gap identification:** Makes pain points and opportunities visible
    - **Actionable:** Clear KPIs and goals enable prioritization
    
    ### Anti-Patterns (What This Is NOT)
    - **Not a user story map:** Journey maps are broader (all touchpoints, not just product use)
    - **Not a service blueprint:** Less detailed on internal processes, more focused on customer experience
    - **Not static:** Journey maps evolve as customer behavior changes
    
    ### When to Use This
    - Understanding customer experience across all touchpoints (not just product)
    - Aligning cross-functional teams (marketing, sales, product, support)
    - Identifying pain points and prioritizing improvements
    - Onboarding new team members to customer perspective
    - Auditing the end-to-end customer experience
    
    ### When NOT to Use This
    - For deep product-specific workflows (use story mapping instead)
    - Before defining personas (need to know who you're mapping)
    - As a one-time exercise (journey maps require ongoing updates)
    
    ---
    
    ## Application
    
    Use `template.md` for the full fill-in structure.
    
    ### Step 1: Prepare Prerequisites
    
    Before mapping, ensure you have:
    1. **Key stakeholders:** Marketing, sales, product, customer service representatives
    2. **Buyer personas:** Detailed personas with demographics, psychographics, goals, challenges (reference `skills/proto-persona/SKILL.md`)
    3. **Defined stages:** Main stages of your buying process (typically: Awareness, Consideration, Decision, Service, Loyalty)
    4. **Touchpoint inventory:** All places customers interact with your brand (website, social, email, store, support, etc.)
    
    **If missing:** Run discovery interviews, persona definition work, or touchpoint audits first.
    
    ---
    
    ### Step 2: Set Clear Objectives
    
    Define what you want to achieve:
    
    ```markdown
    ## Objectives
    - [Goal 1: e.g., "Identify top 3 pain points causing drop-off between Awareness and Consideration"]
    - [Goal 2: e.g., "Align marketing and sales on customer motivations at each stage"]
    - [Goal 3: e.g., "Understand emotional journey to inform messaging strategy"]
    ```
    
    **Quality checks:**
    - **Specific:** Not "understand customers" but "identify drop-off causes in Consideration stage"
    - **Actionable:** Results should inform decisions, not just document observations
    
    ---
    
    ### Step 3: Choose a Buyer Persona
    
    Select one persona to focus on (create separate maps for each persona):
    
    ```markdown
    ## Persona
    - [Persona name and brief description]
    - [Example: "Manager Mike: 35-42, Director of Product at mid-sized B2B SaaS, struggles with data-driven prioritization, values time savings over feature depth"]
    ```
    
    **Why one persona per map:** Different personas have different journeys. Mixing them creates confusion.
    
    ---
    
    ### Step 4: Map Each Stage
    
    For each stage (Awareness, Consideration, Decision, Service, Loyalty), document:
    
    #### Customer Actions
    What customers do at this stage:
    
    ```markdown
    ### Stage: [Stage Name, e.g., Awareness]
    
    **Customer Actions:**
    - [Action 1: e.g., "See LinkedIn ad about product management tools"]
    - [Action 2: e.g., "Hear about tool from PM peer at conference"]
    - [Action 3: e.g., "Google 'best product roadmap software'"]
    ```
    
    **Quality checks:**
    - **Observable:** You can see or measure this action
    - **Specific:** Not "research products" but "Google 'best roadmap software' and read comparison articles"
    
    ---
    
    #### Touchpoints
    Where/how customers interact with your brand:
    
    ```markdown
    **Touchpoints:**
    - [Touchpoint 1: e.g., "LinkedIn Ads"]
    - [Touchpoint 2: e.g., "Word-of-mouth at PM conferences"]
    - [Touchpoint 3: e.g., "Google organic search results"]
    - [Touchpoint 4: e.g., "Review sites (G2, Capterra)"]
    ```
    
    **Quality checks:**
    - **Comprehensive:** Include both digital and physical touchpoints
    - **Specific:** Not "social media" but "LinkedIn Ads," "Twitter mentions," etc.
    
    ---
    
    #### Customer Experience
    Emotions and thoughts customers have:
    
    ```markdown
    **Customer Experience:**
    - [Emotion 1: e.g., "Curious but skeptical—'Is this actually better than spreadsheets?'"]
    - [Emotion 2: e.g., "Overwhelmed by options—'Too many tools, how do I choose?'"]
    - [Emotion 3: e.g., "Hopeful but cautious—'Could this save me time?'"]
    ```
    
    **Quality checks:**
    - **Authentic:** Use customer quotes from research when possible
    - **Emotional:** Capture feelings, not just thoughts
    - **Specific:** Not "interested" but "curious but skeptical—worried about setup time"
    
    ---
    
    #### KPIs
    Key performance indicators for this stage:
    
    ```markdown
    **KPIs:**
    - [KPI 1: e.g., "Brand awareness (measured via surveys)"]
    - [KPI 2: e.g., "LinkedIn ad impressions: 100k/month"]
    - [KPI 3: e.g., "Organic search traffic: 5k visitors/month"]
    - [KPI 4: e.g., "G2 review views: 2k/month"]
    ```
    
    **Quality checks:**
    - **Measurable:** Can you track this?
    - **Stage-appropriate:** Awareness KPIs differ from Decision KPIs
    
    ---
    
    #### Business Goals
    What you're trying to achieve at this stage:
    
    ```markdown
    **Business Goals:**
    - [Goal 1: e.g., "Increase brand awareness among PMs at B2B SaaS companies"]
    - [Goal 2: e.g., "Generate 500 qualified leads/month"]
    - [Goal 3: e.g., "Position as top 3 roadmap tool in G2 rankings"]
    ```
    
    **Quality checks:**
    - **Outcome-focused:** Not "run ads" but "increase brand awareness"
    - **Aligned with stage:** Don't expect conversions at Awareness stage
    
    ---
    
    #### Teams Involved
    Who owns this stage:
    
    ```markdown
    **Teams Involved:**
    - [Team 1: e.g., "Marketing (ad campaigns, SEO)"]
    - [Team 2: e.g., "Content (blog posts, comparison guides)"]
    - [Team 3: e.g., "Customer Success (case studies, testimonials)"]
    ```
    
    **Quality checks:**
    - **Cross-functional:** Multiple teams usually touch each stage
    - **Specific roles:** Not just "marketing" but "marketing (ad campaigns, SEO)"
    
    ---
    
    ### Step 5: Visualize the Map
    
    Create a table or visual diagram:
    
    | **Stage** | **Awareness** | **Consideration** | **Decision** | **Service** | **Loyalty** |
    |-----------|---------------|-------------------|--------------|-------------|-------------|
    | **Customer Actions** | See ad, hear from peers, Google search | Compare features, read reviews, request demo | Free trial signup, test with real data, evaluate ROI | Onboard team, build first roadmap, integrate with Jira | Use daily, recommend to peers, share wins on LinkedIn |
    | **Touchpoints** | LinkedIn Ads, conferences, Google, review sites | Website, demo calls, sales emails | Product (free trial), onboarding emails | Product, support chat, knowledge base | Product, community forums, customer success check-ins |
    | **Customer Experience** | Curious but skeptical | Excited but overwhelmed by options | Anxious about setup time, hopeful about time savings | Relieved if easy, frustrated if complex | Satisfied and confident, proud of wins |
    | **KPIs** | Impressions: 100k/month, traffic: 5k/month | Demo requests: 100/month, trial signups: 50/month | Conversion rate: 20%, time-to-value: <2 hours | Activation rate: 70%, support ticket volume | Retention rate: 85%, NPS: 50, referral rate: 15% |
    | **Business Goals** | Increase brand awareness, generate 500 leads/month | Improve lead quality, reduce sales cycle to 30 days | Increase trial-to-paid conversion, optimize onboarding | Reduce churn, improve activation, minimize support costs | Increase LTV, generate referrals, upsell premium features |
    | **Teams Involved** | Marketing, Content | Marketing, Sales, Product | Sales, Product, Onboarding | Product, Support, Customer Success | Product, Customer Success, Marketing |
    
    ---
    
    ### Step 6: Analyze and Prioritize
    
    Review the map and ask:
    1. **Where are the biggest pain points?** (Look for negative emotions + high drop-off rates)
    2. **Which stages have the weakest KPIs?** (Prioritize low-performing stages)
    3. **Are teams aligned?** (Do teams understand their role in each stage?)
    4. **What opportunities exist?** (Where can small improvements create big impact?)
    
    **Prioritization criteria:**
    - **Impact:** How much would fixing this improve the customer experience?
    - **Feasibility:** How easy is this to fix?
    - **Alignment:** Does this support business goals?
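
    To force-rank candidate improvements, a simple weighted score works. A minimal sketch; the 1-5 scales and weights are illustrative assumptions:

    ```python
    def priority_score(impact: int, feasibility: int, alignment: int,
                       weights=(0.5, 0.3, 0.2)) -> float:
        """Weighted sum of 1-5 ratings for the three criteria above."""
        return impact * weights[0] + feasibility * weights[1] + alignment * weights[2]

    # Example: high impact (5), medium feasibility (3), strong alignment (4)
    print(priority_score(5, 3, 4))  # 4.2 -> prioritize this improvement
    ```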
    
    ---
    
    ### Step 7: Test and Refine
    
    - **Update regularly:** Customer behavior changes—revisit the map quarterly
    - **Validate with data:** Use analytics, surveys, and customer interviews to confirm assumptions
    - **Track improvements:** After making changes, measure impact on KPIs
    
    ---
    
    ## Examples
    
    See `examples/sample.md` for a full customer journey map example.
    See `examples/meta-product-manager-skills.md` for a meta dogfooding example mapping this repository's own customer journey.
    
    Mini example excerpt:
    
    ```markdown
    | **Stage** | **Awareness** | **Consideration** | **Decision** |
    |-----------|---------------|-------------------|--------------|
    | **Customer Actions** | Sees LinkedIn ad | Compares on G2 | Starts free trial |
    | **Customer Experience** | Curious but skeptical | Overwhelmed | Anxious about setup |
    ```
    
    ---
    
    ## Common Pitfalls
    
    ### Pitfall 1: Generic Emotions
    **Symptom:** "Customer feels happy" or "Customer is satisfied"
    
    **Consequence:** No insight into *why* they feel that way or what to improve.
    
    **Fix:** Be specific: "Relieved that setup took 30 minutes, not 3 hours as feared."
    
    ---
    
    ### Pitfall 2: Missing Touchpoints
    **Symptom:** Only documenting digital touchpoints (website, app)
    
    **Consequence:** Miss offline interactions (conferences, word-of-mouth, support calls).
    
    **Fix:** Include all touchpoints: physical, digital, human, and automated.
    
    ---
    
    ### Pitfall 3: Internal Perspective
    **Symptom:** Mapping what *you* want customers to do, not what they *actually* do
    
    **Consequence:** Journey map reflects wishful thinking, not reality.
    
    **Fix:** Validate with customer research, analytics, and support tickets.
    
    ---
    
    ### Pitfall 4: No KPIs or Goals
    **Symptom:** Journey map has actions and emotions but no metrics or business objectives
    
    **Consequence:** No way to measure success or prioritize improvements.
    
    **Fix:** Add KPIs and business goals for each stage. Make them measurable.
    
    ---
    
    ### Pitfall 5: One-and-Done Exercise
    **Symptom:** Journey map created once, never updated
    
    **Consequence:** Map becomes outdated as customer behavior evolves.
    
    **Fix:** Review quarterly. Update based on new data, product changes, or market shifts.
    
    ---
    
    ## References
    
    ### Related Skills
    - `skills/proto-persona/SKILL.md` — Defines the persona for the journey map
    - `skills/jobs-to-be-done/SKILL.md` — Informs customer actions and goals
    - `skills/problem-statement/SKILL.md` — Identifies pain points at each stage
    - `skills/user-story-mapping/SKILL.md` — Complementary (story mapping focuses on product usage, journey mapping covers all touchpoints)
    
    ### External Frameworks
    - NNGroup, *Customer Journey Mapping* (2016) — Foundational framework
    - Carnegie Mellon University, *Product Management Curriculum* — Academic approach
    - Chris Risdon & Patrick Quattlebaum, *Orchestrating Experiences* (2018) — Journey mapping for service design
    
    ### Dean's Work
    - Customer Journey Mapping Prompt Template (adapted from NNGroup and CMU frameworks)
    
    ### Provenance
    - Adapted from `prompts/customer-journey-mapping-prompt-template.md` in the `https://github.com/deanpeters/product-manager-prompts` repo.
    
    ---
    
    **Skill type:** Component
    **Suggested filename:** `customer-journey-map.md`
    **Suggested placement:** `/skills/components/`
    **Dependencies:** References `skills/proto-persona/SKILL.md`, `skills/jobs-to-be-done/SKILL.md`, `skills/problem-statement/SKILL.md`
    
  • skills/ai-shaped-readiness-advisor/SKILL.mdskill
    Show content (42712 bytes)
    ---
    name: ai-shaped-readiness-advisor
    description: Assess whether your product work is AI-first or AI-shaped. Use when evaluating AI maturity and choosing the next team capability to build.
    intent: >-
      Assess whether your product work is **"AI-first"** (using AI to automate existing tasks faster) or **"AI-shaped"** (fundamentally redesigning how product teams operate around AI capabilities). Use this to evaluate your readiness across **5 essential PM competencies for 2026**, identify gaps, and get concrete recommendations on which capability to build first.
    type: interactive
    theme: ai-agents
    best_for:
      - "Assessing whether your team is AI-first or genuinely AI-shaped"
      - "Identifying which of the 5 AI competencies to build next"
      - "Understanding your product org's AI maturity honestly"
    scenarios:
      - "My team uses AI tools but I'm not sure if we're working differently or just automating the same tasks"
      - "I want to assess my product org's AI maturity and prioritize where to invest next quarter"
    estimated_time: "15-20 min"
    ---
    
    ## Purpose
    
    Assess whether your product work is **"AI-first"** (using AI to automate existing tasks faster) or **"AI-shaped"** (fundamentally redesigning how product teams operate around AI capabilities). Use this to evaluate your readiness across **5 essential PM competencies for 2026**, identify gaps, and get concrete recommendations on which capability to build first.
    
    **Key Distinction:** AI-first is cute (using Copilot to write PRDs faster). AI-shaped is survival (building a durable "reality layer" that both humans and AI trust, orchestrating AI workflows, compressing learning cycles).
    
    This is not about AI tools—it's about **organizational redesign around AI as co-intelligence**. The interactive skill guides you through a maturity assessment, then recommends your next move.
    
    ## Key Concepts
    
    ### AI-First vs. AI-Shaped
    
    | Dimension | AI-First (Cute) | AI-Shaped (Survival) |
    |-----------|-----------------|----------------------|
    | **Mindset** | Automate existing tasks | Redesign how work gets done |
    | **Goal** | Speed up artifact creation | Compress learning cycles |
    | **AI Role** | Task assistant | Strategic co-intelligence |
    | **Advantage** | Temporary efficiency gains | Defensible competitive moat |
    | **Example** | "Copilot writes PRDs 2x faster" | "AI agent validates hypotheses in 48 hours instead of 3 weeks" |
    
    **Critical Insight:** If a competitor can replicate your AI usage by throwing bodies at it, it's not differentiation—it's just efficiency (which becomes table stakes within months).
    
    ---
    
    ### The 5 Essential PM Competencies (2026)
    
    These competencies define AI-shaped product work. You'll assess your maturity on each.
    
    #### 1. **Context Design**
    Building a durable **"reality layer"** that both humans and AI can trust—treating AI attention as a scarce resource and allocating it deliberately.
    
    **What it includes:**
    - Documenting what's true vs. assumed
    - Immutable constraints (technical, regulatory, strategic)
    - Operational glossary (shared definitions)
    - Evidence standards (what counts as validation)
    - **Context boundaries** (what to persist vs. retrieve)
    - **Memory architecture** (short-term conversational + long-term persistent)
    - **Retrieval strategies** (semantic search, contextual retrieval)
    
    **Key Principle:** *"If you can't point to evidence, constraints, and definitions, you don't have context. You have vibes."*
    
    **Critical Distinction: Context Stuffing vs. Context Engineering**
    - **Context Stuffing (AI-first):** Jamming volume without intent ("paste entire PRD")
    - **Context Engineering (AI-shaped):** Shaping structure for attention (bounded domains, retrieve with intent)
    
    **The 5 Diagnostic Questions:**
    1. What specific decision does this support?
    2. Can retrieval replace persistence?
    3. Who owns the context boundary?
    4. What fails if we exclude this?
    5. Are we fixing structure or avoiding it?
    
    **AI-first version:** Pasting PRDs into ChatGPT; no context boundaries; "more is better" mentality
    **AI-shaped version:** CLAUDE.md files, evidence databases, constraint registries AI agents reference; two-layer memory architecture; Research→Plan→Reset→Implement cycle to prevent context rot
    
    **Deep Dive:** See [`context-engineering-advisor`](../context-engineering-advisor/SKILL.md) for detailed guidance on diagnosing context stuffing and implementing memory architecture.
    
    ---
    
    #### 2. **Agent Orchestration**
    Creating repeatable, traceable AI workflows (not one-off prompts).
    
    **What it includes:**
    - Defined workflow loops: research → synthesis → critique → decision → log rationale
    - Each step shows its work (traceable reasoning)
    - Workflows run consistently (same inputs = predictable process)
    - Version-controlled prompts and agents
    
    **Key Principle:** One-off prompts are tactical. Orchestrated workflows are strategic.
    
    **AI-first version:** "Ask ChatGPT to analyze this user feedback"
    **AI-shaped version:** Automated workflow that ingests feedback, tags themes, generates hypotheses, flags contradictions, logs decisions
    
    ---
    
    #### 3. **Outcome Acceleration**
    Using AI to compress **learning cycles** (not just speed up tasks).
    
    **What it includes:**
    - Eliminate validation lag (PoL probes run in days, not weeks)
    - Remove approval delays (AI pre-validates against constraints)
    - Cut meeting overhead (async AI synthesis replaces status meetings)
    
    **Key Principle:** Do less, purposefully. AI removes bottlenecks, not generates more work.
    
    **AI-first version:** "AI writes user stories faster"
    **AI-shaped version:** "AI runs feasibility checks overnight, eliminating 2 weeks of technical discovery"
    
    ---
    
    #### 4. **Team-AI Facilitation**
    Redesigning team systems so AI operates as **co-intelligence**, not an accountability shield.
    
    **What it includes:**
    - Review norms (who checks AI outputs, when, how)
    - Evidence standards (AI must cite sources, not hallucinate)
    - Decision authority (AI recommends, humans decide—clear boundaries)
    - Psychological safety (team can challenge AI without feeling "dumb")
    
    **Key Principle:** AI amplifies judgment, doesn't replace accountability.
    
    **AI-first version:** "I used AI" as excuse for bad outputs
    **AI-shaped version:** Clear review protocols; AI outputs treated as drafts requiring human validation
    
    ---
    
    #### 5. **Strategic Differentiation**
    Moving beyond efficiency to create **defensible competitive advantages**.
    
    **What it includes:**
    - New customer capabilities (what can users do now that they couldn't before?)
    - Workflow rewiring (processes competitors can't replicate without full redesign)
    - Economics competitors can't match (10x cost advantage through AI)
    
    **Key Principle:** *"If a competitor can copy it by throwing bodies at it, it's not differentiation."*
    
    **AI-first version:** "We use AI to write better docs"
    **AI-shaped version:** "We validate product hypotheses in 2 days vs. industry standard 3 weeks—ship 6x more validated features per quarter"
    
    ---
    
    ### Anti-Patterns (What This Is NOT)
    
    - **Not about AI tools:** Using Claude vs. ChatGPT doesn't matter. Redesigning workflows matters.
    - **Not about speed:** Writing PRDs 2x faster isn't strategic if PRDs weren't the bottleneck.
    - **Not about automation:** Automating bad processes just scales the bad.
    - **Not about replacing humans:** AI-shaped orgs augment judgment, not eliminate it.
    
    ---
    
    ### When to Use This Skill
    
    ✅ **Use this when:**
    - You're using AI tools but not seeing strategic advantage
    - You suspect you're "AI-first" (efficiency) but want to be "AI-shaped" (transformation)
    - You need to prioritize which AI capability to build next
    - Leadership asks "How are we using AI?" and you're not sure how to answer strategically
    - You want to assess team readiness for AI-powered product work
    
    ❌ **Don't use this when:**
    - You haven't started using AI at all (start with basic tools first)
    - You're looking for tool recommendations (this is about organizational design, not tooling)
    - You need tactical "how to write a prompt" guidance (use skills for that)
    
    ---
    
    ### Facilitation Source of Truth
    
    Use [`workshop-facilitation`](../workshop-facilitation/SKILL.md) as the default interaction protocol for this skill.
    
    It defines:
    - session heads-up + entry mode (Guided, Context dump, Best guess)
    - one-question turns with plain-language prompts
    - progress labels (for example, Context Qx/8 and Scoring Qx/5)
    - interruption handling and pause/resume behavior
    - numbered recommendations at decision points
    - quick-select numbered response options for regular questions (include `Other (specify)` when useful)
    
    This file defines the domain-specific assessment content. If there is a conflict, follow this file's domain logic.
    
    ## Application
    
    This interactive skill uses **adaptive questioning** to assess your maturity across 5 competencies, then recommends which to prioritize.
    
    ### Facilitation Protocol (Mandatory)
    
    1. Ask exactly **one question per turn**.
    2. Wait for the user's answer before asking the next question.
    3. Use plain-language questions (no shorthand labels as the primary question). If needed, include an example response format.
    4. Show progress on every turn using user-facing labels:
       - `Context Qx/8` during context gathering
       - `Scoring Qx/5` during maturity scoring
       - Include "questions remaining" when practical.
    5. Do not use internal phase labels (like "Step 0") in user-facing prompts unless the user asks for internal structure details.
    6. For maturity scoring questions, present concise 1-4 choices first; share full rubric details only if requested.
    7. For context questions, offer concise numbered quick-select options when practical, plus `Other (specify)` for open-ended answers. Accept multi-select replies like `1,3` or `1 and 3`.
    8. Give numbered recommendations **only at decision points**, not after every answer.
    9. Decision points include:
       - After the full context summary
       - After the 5-dimension maturity profile
       - During priority selection and action-plan path selection
    10. When recommendations are shown, enumerate clearly (`1.`, `2.`, `3.`) and accept selections like `#1`, `1`, `1 and 3`, `1,3`, or custom text.
    11. If multiple options are selected, synthesize a combined path and continue.
    12. If custom text is provided, map it to the closest valid path and continue without forcing re-entry.
    13. Interruption handling is mandatory: if the user asks a meta question ("how many left?", "why this label?", "pause"), answer directly first, then restate current progress and resume with the pending question.
    14. If the user says to stop or pause, halt the assessment immediately and wait for explicit resume.
    15. If the user asks for "one question at a time," keep that mode for the rest of the session unless they explicitly opt out.
    16. Before any assessment question, give a short heads-up on time/length and let the user choose an entry mode.
    
    ---
    
    ### Session Start: Heads-Up + Entry Mode (Mandatory)
    
    **Agent opening prompt (use this first):**
    
    "Quick heads-up before we start: this usually takes about 7-10 minutes and up to 13 questions total (8 context + 5 scoring).
    
    How do you want to do this?
1. Guided mode: I'll ask one question at a time.
2. Context dump: you paste what you already know, and I'll skip anything redundant.
3. Best guess mode: I'll make reasonable assumptions where details are missing, label them, and keep moving."
    
    Accept selections as `#1`, `1`, `1 and 3`, `1,3`, or custom text.
    
    **Mode behavior:**
    
    - **If Guided mode:** Run Step 0 as written, then scoring.
    - **If Context dump:** Ask for pasted context once, summarize it, identify gaps, and:
      - Skip any context questions already answered.
      - Ask only the minimum missing context needed (0-2 clarifying questions).
      - Move to scoring as soon as context is sufficient.
    - **If Best guess mode:** Ask for the smallest viable starting input (role/team + primary goal), then:
      - Infer missing details using reasonable defaults.
      - Label each inferred item as `Assumption`.
      - Include confidence tags (`High`, `Medium`, `Low`) for each assumption.
      - Continue without blocking on unknowns.
    
    At the final summary, include an **Assumptions to Validate** section when context dump or best guess mode was used.
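
When that section is needed, it might read like this (the assumptions themselves are invented; only the `Assumption` label and confidence tags come from the mode rules above):

```markdown
<!-- illustrative; assumptions are placeholders -->
## Assumptions to Validate
- `Assumption` (Medium): Growth-stage team of ~10 PMs, inferred from the tool mix described
- `Assumption` (Low): Decision-making is centralized, no direct signal either way
```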
    
    ---
    
    ### Step 0: Gather Context
    
    **Agent asks:**
    
    Collect context using this exact sequence, one question at a time:
    
    1. "Which AI tools are you using today?"
    2. "How does your team usually use AI today: one-off prompts, reusable templates, or multi-step workflows?"
    3. "Who uses AI consistently today: just you, PMs, or cross-functional teams?"
    4. "About how many PMs, engineers, and designers are on your team?"
    5. "What stage are you in: startup, growth, or enterprise?"
    6. "How are decisions made: centralized, distributed, or consensus-driven?"
    7. "What competitive advantage are you trying to build with AI?"
    8. "What's the biggest bottleneck slowing learning and iteration today?"
    
    After question 8, summarize back in 4 lines:
    - Current AI usage pattern
    - Team context
    - Strategic intent
    - Primary bottleneck
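
A hedged example of that 4-line summary (the team details are invented):

```markdown
- **Current AI usage pattern:** One-off ChatGPT prompts plus a few saved templates
- **Team context:** 3 PMs, 12 engineers, growth stage, centralized decisions
- **Strategic intent:** Out-learn larger competitors on validation speed
- **Primary bottleneck:** 3-week lag between hypothesis and validated decision
```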
    
    ---
    
    ### Step 1: Context Design Maturity
    
    **Agent asks:**
    
    Let's assess your **Context Design** capability—how well you've built a "reality layer" that both humans and AI can trust, and whether you're doing **context stuffing** (volume without intent) or **context engineering** (structure for attention).
    
    **Which statement best describes your current state?**
    
    1. **Level 1 (AI-First / Context Stuffing):** "I paste entire documents into ChatGPT every time I need something. No shared knowledge base. No context boundaries."
       - Reality: One-off prompting with no durability; "more is better" mentality
       - Problem: AI has no memory; you repeat yourself constantly; context stuffing degrades attention
       - **Context Engineering Gap:** No answers to the 5 diagnostic questions; persisting everything "just in case"
    
    2. **Level 2 (Emerging / Early Structure):** "We have some docs (PRDs, strategy memos), but they're scattered. No consistent format. Starting to notice context stuffing issues (vague responses, normalized retries)."
       - Reality: Context exists but isn't structured for AI consumption; no retrieval strategy
       - Problem: AI can't reliably find or trust information; mixing always-needed with episodic context
       - **Context Engineering Gap:** No context boundary owner; no distinction between persist vs. retrieve
    
    3. **Level 3 (Transitioning / Context Engineering Emerging):** "We've started using CLAUDE.md files and project instructions. Constraints registry exists. We're identifying what to persist vs. retrieve. Experimenting with Research→Plan→Reset→Implement cycle."
       - Reality: Structured context emerging, but not comprehensive; context boundaries defined but not fully enforced
       - Problem: Coverage is patchy; some areas well-documented, others vibe-driven; inconsistent retrieval practices
       - **Context Engineering Progress:** Can answer 3-4 of the 5 diagnostic questions; context boundary owner assigned; starting to use two-layer memory
    
    4. **Level 4 (AI-Shaped / Context Engineering Mastery):** "We maintain a durable reality layer: constraints registry (20+ entries), evidence database, operational glossary (30+ terms). Two-layer memory architecture (short-term conversational + long-term persistent via vector DB). Context boundaries defined and owned. AI agents reference these automatically. We use Research→Plan→Reset→Implement to prevent context rot."
       - Reality: Comprehensive, version-controlled context both humans and AI trust; retrieval with intent (not completeness)
       - Outcome: AI operates with high confidence; reduces hallucination and rework; token usage optimized; no context stuffing
       - **Context Engineering Mastery:** Can answer all 5 diagnostic questions; context boundary audited quarterly; quantitative efficiency tracking: (Accuracy × Coherence) / (Tokens × Latency)
    
    **Select your level:** [1, 2, 3, or 4]
    
    **Note:** If you selected Level 1-2 and struggle with context stuffing, consider using [`context-engineering-advisor`](../context-engineering-advisor/SKILL.md) to diagnose and fix Context Hoarding Disorder before proceeding.
    
    **User response:** [Selection]
    
    **Agent records:** Context Design maturity = [Level X]
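
To make the Level 4 efficiency metric concrete, here is a worked example with invented numbers:

```
Efficiency = (Accuracy × Coherence) / (Tokens × Latency)
           = (0.9 × 0.8) / (12,000 tokens × 4 s)
           ≈ 1.5 × 10⁻⁵
```

Halving token usage at the same accuracy and coherence doubles the score, which is the point: the metric rewards cutting context stuffing, not adding volume.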
    
    ---
    
    ### Step 2: Agent Orchestration Maturity
    
    **Agent asks:**
    
    Now let's assess **Agent Orchestration**—whether you have repeatable AI workflows or just one-off prompts.
    
    **Which statement best describes your current state?**
    
    1. **Level 1 (AI-First):** "I type prompts into ChatGPT as needed. No saved workflows or templates."
       - Reality: Tactical, ad-hoc usage
       - Problem: Inconsistent results; can't scale or audit
    
    2. **Level 2 (Emerging):** "I have a few saved prompts I reuse. Maybe some custom GPTs or Claude Projects."
       - Reality: Repeatable prompts, but not full workflows
       - Problem: Each step is manual; no orchestration
    
    3. **Level 3 (Transitioning):** "We've built some multi-step workflows (research → synthesis → critique). Tracked in tools like Notion or Linear."
       - Reality: Workflows exist but require manual handoffs
       - Problem: Still human-in-the-loop for every step; not fully automated
    
    4. **Level 4 (AI-Shaped):** "We have orchestrated AI workflows that run autonomously: research → synthesis → critique → decision → log rationale. Each step is traceable and version-controlled."
       - Reality: Workflows run consistently; show their work at each step
       - Outcome: Reliable, auditable, scalable AI processes
    
    **Select your level:** [1, 2, 3, or 4]
    
    **User response:** [Selection]
    
    **Agent records:** Agent Orchestration maturity = [Level X]
    
    ---
    
    ### Step 3: Outcome Acceleration Maturity
    
    **Agent asks:**
    
    Next, **Outcome Acceleration**—are you using AI to compress learning cycles, or just speed up tasks?
    
    **Which statement best describes your current state?**
    
    1. **Level 1 (AI-First):** "AI helps me write docs faster (PRDs, user stories). Saves me a few hours per week."
       - Reality: Efficiency gains on artifact creation
       - Problem: Docs weren't the bottleneck; learning cycles unchanged
    
    2. **Level 2 (Emerging):** "AI helps with research and synthesis (summarize user feedback, analyze competitors). Saves research time."
       - Reality: Modest learning acceleration
       - Problem: Still sequential; AI doesn't eliminate validation lag
    
    3. **Level 3 (Transitioning):** "We use AI to run experiments faster (PoL probes, feasibility checks). Cut validation time from weeks to days."
       - Reality: Learning cycles compressing
       - Problem: Not yet systematic; only applied to some experiments
    
    4. **Level 4 (AI-Shaped):** "AI systematically removes bottlenecks: overnight feasibility checks, async synthesis replaces meetings, automated validation against constraints. Learning cycles 5-10x faster."
       - Reality: Fundamental redesign of how learning happens
       - Outcome: Ship validated features 6x faster than competitors
    
    **Select your level:** [1, 2, 3, or 4]
    
    **User response:** [Selection]
    
    **Agent records:** Outcome Acceleration maturity = [Level X]
    
    ---
    
    ### Step 4: Team-AI Facilitation Maturity
    
    **Agent asks:**
    
    Now assess **Team-AI Facilitation**—how well you've redesigned team systems for AI as co-intelligence.
    
    **Which statement best describes your current state?**
    
    1. **Level 1 (AI-First):** "I use AI privately. Team doesn't know or doesn't use it. No shared norms."
       - Reality: Individual tool usage, no team integration
       - Problem: Inconsistent quality; no accountability for AI outputs
    
    2. **Level 2 (Emerging):** "Team uses AI, but no formal review process. 'I used AI' mentioned casually."
       - Reality: Awareness but no structure
       - Problem: AI outputs treated as final; errors slip through
    
    3. **Level 3 (Transitioning):** "We have review norms emerging (AI outputs are drafts, not finals). Evidence standards discussed but not codified."
       - Reality: Cultural shift underway
       - Problem: Norms are informal; not everyone follows them
    
    4. **Level 4 (AI-Shaped):** "Clear protocols: AI outputs require human validation, evidence standards codified, decision authority explicit (AI recommends, humans decide). Team treats AI as co-intelligence."
       - Reality: AI integrated into team operating system
       - Outcome: High-quality outputs; psychological safety maintained
    
    **Select your level:** [1, 2, 3, or 4]
    
    **User response:** [Selection]
    
    **Agent records:** Team-AI Facilitation maturity = [Level X]
    
    ---
    
    ### Step 5: Strategic Differentiation Maturity
    
    **Agent asks:**
    
    Finally, **Strategic Differentiation**—are you creating defensible competitive advantages, or just efficiency gains?
    
    **Which statement best describes your current state?**
    
    1. **Level 1 (AI-First):** "We use AI to work faster (write better docs, respond to customers quicker). Efficiency gains only."
       - Reality: Table-stakes improvements
       - Problem: Competitors can copy this within months
    
    2. **Level 2 (Emerging):** "AI enables us to do things we couldn't before (analyze 10x more data, test more hypotheses). New capabilities, but competitors could replicate."
       - Reality: Capability expansion, but not defensible
       - Problem: No moat; competitors hire more people to match
    
    3. **Level 3 (Transitioning):** "We've redesigned some workflows around AI (e.g., validate hypotheses in 2 days vs. 3 weeks). Starting to create separation."
       - Reality: Workflow advantages emerging
       - Problem: Not yet systematic; only applied in pockets
    
    4. **Level 4 (AI-Shaped):** "We've fundamentally rewired how we operate: customers get capabilities they can't get elsewhere, our learning cycles are 10x faster than industry standard, our economics are 5x better. Competitors can't replicate without full org redesign."
       - Reality: Defensible competitive moat
       - Outcome: Strategic advantage that compounds over time
    
    **Select your level:** [1, 2, 3, or 4]
    
    **User response:** [Selection]
    
    **Agent records:** Strategic Differentiation maturity = [Level X]
    
    ---
    
    ### Step 6: Assess Maturity Profile
    
    **Agent synthesizes:**
    
    Here's your AI-Shaped Readiness Profile:
    
    ```
    ┌─────────────────────────────┬───────┬──────────┐
    │ Competency                  │ Level │ Maturity │
    ├─────────────────────────────┼───────┼──────────┤
    │ 1. Context Design           │   X   │ [Label]  │
    │ 2. Agent Orchestration      │   X   │ [Label]  │
    │ 3. Outcome Acceleration     │   X   │ [Label]  │
    │ 4. Team-AI Facilitation     │   X   │ [Label]  │
    │ 5. Strategic Differentiation│   X   │ [Label]  │
    └─────────────────────────────┴───────┴──────────┘
    
    Overall Assessment: [AI-First / Emerging / Transitioning / AI-Shaped]
    ```
    
    **Maturity Labels:**
    - **Level 1:** AI-First (efficiency only)
    - **Level 2:** Emerging (early capabilities)
    - **Level 3:** Transitioning (redesign underway)
    - **Level 4:** AI-Shaped (strategic transformation)
    
    **Overall Assessment Logic:**
- **AI-First:** Average below 2.0 (mostly Level 1s)
- **Emerging:** Average 2.0 to below 3.0 (mostly Level 2s)
- **Transitioning:** Average 3.0 to below 3.5 (mostly Level 3s)
- **AI-Shaped:** Average 3.5 or higher (mostly Level 4s)
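
A quick worked example with hypothetical scores:

```
(3 + 3 + 2 + 3 + 2) / 5 = 2.6  →  Emerging (even though three dimensions are already at Level 3)
```

Report the average and the spread, not just the label.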
    
    ---
    
    ### Step 7: Identify Priority Gap
    
    **Agent asks:**
    
    Based on your maturity profile, which competency should you prioritize first?
    
    **Agent analyzes dependencies:**
    
    **Dependency Logic:**
    1. **Context Design is foundational** — If Level 1-2, this must be priority #1 (Agent Orchestration and Outcome Acceleration depend on it)
    2. **Agent Orchestration enables Outcome Acceleration** — If Context Design is Level 3+, but Agent Orchestration is Level 1-2, prioritize orchestration
    3. **Team-AI Facilitation is parallel** — Can be developed alongside others, but required for scale
    4. **Strategic Differentiation requires Levels 3+ on others** — Don't focus here until foundational competencies are built
    
    **Agent recommends:**
    
    Based on your profile, I recommend focusing on **[Competency Name]** first because:
    
    **Option 1: Context Design (if Level 1-2)**
    - **Why:** Without durable context, AI operates on vibes. Every workflow will be fragile.
    - **Impact:** Unlocks Agent Orchestration and Outcome Acceleration
    - **Next Steps:** Build CLAUDE.md files, start constraints registry, create operational glossary
    
    **Option 2: Agent Orchestration (if Context is 3+, but Orchestration is 1-2)**
    - **Why:** You have context, but no repeatable workflows. Scaling requires orchestration.
    - **Impact:** Turn one-off prompts into reliable, traceable workflows
    - **Next Steps:** Document your most frequent AI workflow, version-control prompts, add traceability
    
    **Option 3: Outcome Acceleration (if Context + Orchestration are 3+)**
    - **Why:** You have infrastructure; now compress learning cycles
    - **Impact:** Strategic advantage emerges from speed-to-learning
    - **Next Steps:** Identify biggest bottleneck in learning cycle, design AI workflow to eliminate it
    
    **Option 4: Team-AI Facilitation (if usage is individual, not team-wide)**
    - **Why:** Can't scale if only you're AI-shaped; team must adopt
    - **Impact:** Organizational transformation, not just individual productivity
    - **Next Steps:** Establish review norms, codify evidence standards, create decision authority framework
    
    **Option 5: Strategic Differentiation (if all others are 3+)**
    - **Why:** You have the foundation; now build the moat
    - **Impact:** Create defensible competitive advantage
    - **Next Steps:** Identify workflow competitors can't replicate, design AI-enabled customer capabilities
    
    **Which would you like to focus on?**
    
    **Options:**
    1. **Accept recommendation** — [Agent provides detailed action plan]
    2. **Choose different priority** — [Agent warns about dependencies but allows override]
    3. **Focus on multiple simultaneously** — [Agent suggests parallel tracks if feasible]
    
    **User response:** [Selection]
    
    ---
    
    ### Step 8: Generate Action Plan
    
    **Agent provides tailored action plan based on selected priority:**
    
    ---
    
    #### If Priority = Context Design
    
    **Goal:** Build a durable "reality layer" that both humans and AI trust—move from context stuffing to context engineering.
    
    **Pre-Phase: Diagnose Context Stuffing (If Needed)**
    If you're at Level 1-2, first diagnose context stuffing symptoms:
    1. Run through the 5 diagnostic questions (see [`context-engineering-advisor`](../context-engineering-advisor/SKILL.md))
    2. Identify what you're persisting that should be retrieved
    3. Assign context boundary owner
    4. Create Context Manifest (what's always-needed vs. episodic)
    
    **Phase 1: Document Constraints (Week 1)**
    1. Create a constraints registry:
       - Technical constraints (APIs, data models, performance limits)
       - Regulatory constraints (GDPR, HIPAA, etc.)
       - Strategic constraints (we will/won't build X)
    2. Apply diagnostic question #4 to each constraint: "What fails if we exclude this?"
    3. Format: Structured file AI agents can parse (YAML, JSON, or Markdown with frontmatter)
    4. Version control in Git
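
A minimal sketch of one registry entry, using the Markdown-with-frontmatter option above (the constraint itself is invented):

```markdown
---
id: CONST-014
type: regulatory
status: active
owner: platform-pm
---
<!-- illustrative entry; all values invented -->
**Constraint:** All EU user data must stay in the eu-west region (GDPR).
**What fails if excluded:** AI recommends US-hosted vendors; legal rejects the plan late.
**Source:** Legal memo, 2026-03-12
```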
    
    **Phase 2: Build Operational Glossary (Week 2)**
    1. List top 20-30 terms your team uses (e.g., "user," "customer," "activation," "churn")
    2. Define each unambiguously (avoid "it depends")
    3. Include edge cases and exceptions
    4. Add to CLAUDE.md or project instructions
    5. This becomes your **long-term persistent memory** (Declarative Memory)
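
One glossary entry might look like this (the definition is illustrative, not prescriptive):

```markdown
<!-- illustrative entry; adapt the definition to your own product -->
**Activation:** A workspace that has invited at least 2 teammates AND created at least 1 roadmap within 14 days of signup.
- Edge case: single-user plans count once the solo user returns in week two.
- Explicitly not activation: trial started but no roadmap created.
```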
    
    **Phase 3: Establish Evidence Standards + Context Boundaries (Week 3)**
    1. Define what counts as validation:
       - User feedback: "X users said Y" (with quotes)
       - Analytics: "Metric Z changed by N%" (with dashboard link)
       - Competitive intel: "Competitor A launched B" (with source)
    2. Reject: "I think," "We feel," "It seems like"
    3. Define context boundaries using the 5 diagnostic questions:
       - What specific decision does each piece of context support?
       - Can retrieval replace persistence?
       - Who owns the context boundary?
    4. Create Context Manifest document
    5. Codify in team docs
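
The Context Manifest can be as small as a two-list split; a hedged sketch (entries are illustrative):

```markdown
<!-- Context Manifest sketch; contents invented -->
## Always-needed (persist)
- Constraints registry
- Operational glossary
- Current-quarter OKRs

## Episodic (retrieve on demand)
- User interview transcripts
- Competitor teardowns
- Archived PRDs

**Boundary owner:** [name], reviews quarterly
```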
    
    **Phase 4: Implement Memory Architecture + Workflows (Week 4)**
    1. **Set up two-layer memory:**
       - **Short-term (conversational):** Summarize/truncate older parts of conversation
       - **Long-term (persistent):** Constraints registry + operational glossary (consider vector database for retrieval)
    2. **Implement Research→Plan→Reset→Implement cycle:**
       - Research: Allow chaotic context gathering
       - Plan: Synthesize into high-density SPEC.md or PLAN.md
       - Reset: Clear context window
       - Implement: Use only the plan as context
    3. Update AI prompts to reference constraints registry and glossary
    4. Test: Ask AI to cite constraints when making recommendations
    5. Measure: % of AI outputs that cite evidence vs. hallucinate; token usage efficiency
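
For the Plan step, a minimal SPEC.md/PLAN.md skeleton might look like this (the section names are one possible convention, not a standard):

```markdown
<!-- PLAN.md skeleton; section names are a suggestion -->
# Plan: [initiative]

## Decision this supports
One sentence.

## Constraints that apply
- CONST-014 (EU data residency)

## Approach
3-5 numbered steps, no background narrative.

## Evidence cited
- [source, date] for each claim

## Out of scope
What the implementing agent must not touch.
```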
    
    **Success Criteria:**
    - ✅ Constraints registry has 20+ entries
    - ✅ Operational glossary has 20-30 terms
    - ✅ Evidence standards documented and shared
    - ✅ Context Manifest created (always-needed vs. episodic)
    - ✅ Context boundary owner assigned
    - ✅ Two-layer memory architecture implemented
    - ✅ Research→Plan→Reset→Implement cycle tested on 1 workflow
    - ✅ AI agents reference these automatically
    - ✅ Token usage down 30%+ (less context stuffing)
    - ✅ Output consistency up (fewer retries)
    
    **Related Skills:**
    - **[`context-engineering-advisor`](../context-engineering-advisor/SKILL.md)** (Interactive) — Deep dive on diagnosing context stuffing and implementing memory architecture
    - `problem-statement.md` — Define constraints before framing problems
    - `epic-hypothesis.md` — Evidence-based hypothesis writing
    
    ---
    
    #### If Priority = Agent Orchestration
    
    **Goal:** Turn one-off prompts into repeatable, traceable AI workflows.
    
    **Phase 1: Map Current Workflows (Week 1)**
    1. Pick your most frequent AI use case (e.g., "analyze user feedback")
    2. Document every step you currently take:
       - Copy/paste feedback into ChatGPT
       - Ask for themes
       - Manually categorize
       - Write summary
    3. Identify pain points (manual handoffs, inconsistent results)
    
    **Phase 2: Design Orchestrated Workflow (Week 2)**
    1. Define workflow loop:
       - **Research:** AI reads all feedback (structured input)
       - **Synthesis:** AI identifies themes (with evidence)
       - **Critique:** AI flags contradictions or weak signals
       - **Decision:** Human reviews and decides next steps
       - **Log:** AI records rationale and sources
    2. Each step must be traceable (show sources, reasoning)
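
To make "each step shows its work" concrete, a traceability log for one run might look like this (all items and counts are invented):

```markdown
<!-- illustrative run log; all data invented -->
## Feedback Analysis Run, 2026-05-01
| Step | Output | Evidence |
|---|---|---|
| Research | 142 feedback items ingested | export-2026-04.csv |
| Synthesis | Theme: "export is slow" (23 items) | items #12, #47, #88 |
| Critique | Contradiction: 5 items praise export speed | items #3, #19 |
| Decision (human) | Investigate large-workspace exports only | PM note, 2026-05-02 |
```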
    
    **Phase 3: Build and Test (Week 3)**
    1. Implement workflow using:
       - Claude Projects (if simple)
       - Custom GPTs (if moderate)
       - API orchestration (if complex)
    2. Run on 3 past examples; compare to manual process
    3. Measure: Time saved, consistency improved, traceability added
    
    **Phase 4: Document and Scale (Week 4)**
    1. Version-control prompts (Git)
    2. Document workflow steps for team
    3. Train 2 teammates; observe results
    4. Iterate based on feedback
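
One plausible layout for version-controlled prompts (directory and file names are a suggestion):

```
prompts/
  feedback-analysis/
    research.md
    synthesis.md
    critique.md
    CHANGELOG.md
workflows/
  feedback-analysis.md   # step order, owners, handoff rules
```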
    
    **Success Criteria:**
    - ✅ At least 1 workflow runs consistently (same inputs → predictable process)
    - ✅ Each step is traceable (AI cites sources)
    - ✅ Team can replicate workflow without your involvement
    
    **Related Skills:**
    - `pol-probe-advisor.md` — Use orchestrated workflows for validation experiments
    
    ---
    
    #### If Priority = Outcome Acceleration
    
    **Goal:** Use AI to compress learning cycles, not just speed up tasks.
    
    **Phase 1: Identify Bottleneck (Week 1)**
    1. Map your current learning cycle (e.g., hypothesis → experiment → analysis → decision)
    2. Time each step
    3. Identify slowest step (usually: validation lag, approval delays, or meeting overhead)
    
    **Phase 2: Design AI Intervention (Week 2)**
    1. Ask: "What if this step happened overnight?"
       - Feasibility checks: AI spike in 2 hours vs. 2 days
       - User research synthesis: AI analysis in 1 hour vs. 1 week
       - Approval pre-checks: AI validates against constraints before meeting
    2. Design minimal AI workflow to eliminate bottleneck
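
The interventions above, summarized as a before/after table (times echo the examples in this step, not measured benchmarks):

```markdown
| Learning-cycle step | Before | With AI | Intervention |
|---|---|---|---|
| Feasibility check | 2 days | 2 hours | Overnight AI spike |
| Research synthesis | 1 week | 1 hour | AI analysis of transcripts |
| Approval | Wait for meeting | Pre-checked | AI validates against constraints first |
```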
    
    **Phase 3: Run Pilot (Week 3)**
    1. Test AI intervention on 1 real initiative
    2. Measure cycle time: before vs. after
    3. Validate quality: Did AI maintain rigor, or cut corners?
    
    **Phase 4: Scale (Week 4)**
    1. If successful (cycle time down 50%+, quality maintained), apply to 3 more initiatives
    2. Document workflow
    3. Train team
    
    **Success Criteria:**
    - ✅ Learning cycle compressed by 50%+ on at least 1 initiative
    - ✅ Quality maintained (no shortcuts that compromise rigor)
    - ✅ Team adopts the accelerated workflow
    
    **Related Skills:**
    - `pol-probe.md` — Use AI to run PoL probes faster
    - `discovery-process.md` — Compress discovery cycles with AI
    
    ---
    
    #### If Priority = Team-AI Facilitation
    
    **Goal:** Redesign team systems so AI operates as co-intelligence, not accountability shield.
    
    **Phase 1: Establish Review Norms (Week 1)**
    1. Codify rule: "AI outputs are drafts, not finals"
    2. Define review protocol:
       - Who reviews AI outputs? (peer, lead PM, cross-functional partner)
       - When? (before sharing externally, before decisions)
       - What to check? (accuracy, completeness, evidence citation)
    3. Share with team, get buy-in
    
    **Phase 2: Set Evidence Standards (Week 2)**
    1. AI must cite sources (no hallucinations)
    2. Reject outputs that say "I think" or "it seems"
    3. Require: "According to [source], [fact]"
    4. Add to team operating docs
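
Phases 1 and 2 can be condensed into a one-page checklist; a hedged sketch:

```markdown
<!-- illustrative review checklist; adapt to your team -->
## AI Output Review (before sharing or deciding)
- [ ] Reviewer assigned (peer, lead PM, or cross-functional partner)
- [ ] Every factual claim cites a source ("According to [source], [fact]")
- [ ] No "I think" / "it seems" phrasing survives
- [ ] Numbers spot-checked against the dashboard
- [ ] Output marked DRAFT until sign-off
```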
    
    **Phase 3: Define Decision Authority (Week 3)**
    1. Clarify: AI recommends, humans decide
    2. Document who has authority to override AI recommendations (PM, team lead, cross-functional consensus)
    3. Create escalation path (what if AI and human disagree?)
    
    **Phase 4: Build Psychological Safety (Week 4)**
    1. Team exercise: Share an AI mistake you caught (normalize catching errors)
    2. Reward critical thinking ("Good catch on that AI hallucination!")
    3. Avoid: "Why didn't you just use AI?" (shaming)
    
    **Success Criteria:**
    - ✅ Review norms documented and followed by team
    - ✅ Evidence standards codified
    - ✅ Decision authority clear
    - ✅ Team comfortable challenging AI outputs
    
    **Related Skills:**
    - `problem-statement.md` — Evidence-based problem framing
    - `epic-hypothesis.md` — Testable, evidence-backed hypotheses
    
    ---
    
    #### If Priority = Strategic Differentiation
    
    **Goal:** Create defensible competitive advantages, not just efficiency gains.
    
    **Phase 1: Identify Moat Opportunities (Week 1)**
    1. Ask: "What could we do with AI that competitors can't replicate by adding headcount?"
       - New customer capabilities (e.g., "AI advisor suggests personalized roadmap")
       - Workflow rewiring (e.g., "Validate product ideas in 2 days vs. 3 weeks")
       - Economics shift (e.g., "Deliver enterprise features at SMB prices via AI automation")
    2. List 5 candidates
    3. Prioritize by defensibility (how hard to copy?)
    
    **Phase 2: Design AI-Enabled Capability (Week 2)**
    1. Pick top candidate
    2. Design end-to-end workflow:
       - What does customer experience?
       - What does AI do behind the scenes?
       - What human judgment is required?
    3. Sketch MVP (minimum viable moat)
    
    **Phase 3: Build and Test (Weeks 3-4)**
    1. Build prototype (can be PoL probe, not production)
    2. Test with 5 customers
    3. Measure: Does this create value competitors can't match?
    
    **Phase 4: Validate Moat (Week 5)**
    1. Ask: "How would a competitor replicate this?"
       - If answer is "hire more people," it's not a moat
       - If answer is "redesign their entire org," you have a moat
    2. Document competitive analysis
    3. Decide: Build full version, pivot, or kill
    
    **Success Criteria:**
    - ✅ Identified at least 1 AI-enabled capability competitors can't easily copy
    - ✅ Validated with customers (they see the value)
    - ✅ Confirmed defensibility (competitor analysis)
    
    **Related Skills:**
    - `positioning-statement.md` — Articulate your AI-driven differentiation
    - `jobs-to-be-done.md` — Understand what customers hire your AI capabilities to do
    
    ---
    
    ### Step 9: Track Progress (Optional)
    
    **Agent offers:**
    
    Would you like me to create a progress tracker for your AI-shaped transformation?
    
    **Tracker includes:**
    - Current maturity levels (baseline)
    - Target maturity levels (goal state)
    - Action plan milestones (from Step 8)
    - Review cadence (weekly, monthly)
    
    **Options:**
    1. **Yes, create tracker** — [Agent generates Markdown checklist]
    2. **No, I'll track separately** — [Agent provides summary]
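
If the user picks option 1, the generated tracker might look like this (levels and milestones are placeholders):

```markdown
<!-- illustrative tracker; all values are placeholders -->
# AI-Shaped Progress Tracker
**Baseline → Target:** Context Design 1 → 3 · Agent Orchestration 1 → 2

## Milestones
- [ ] Week 1: Constraints registry (10 entries)
- [ ] Week 2: Operational glossary (15 terms)
- [ ] Week 3: Evidence standards shared
- [ ] Week 4: CLAUDE.md live and referenced by agents

**Review cadence:** Weekly
```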
    
    ---
    
    ## Examples
    
    ### Example 1: Early-Stage Startup (AI-First → Emerging)
    
    **Context:**
    - Team: 2 PMs, 5 engineers
    - AI Usage: ChatGPT for writing PRDs, occasional Copilot usage
    - Goal: Move faster than larger competitors
    
    **Assessment Results:**
    - Context Design: Level 1 (no structured context)
    - Agent Orchestration: Level 1 (one-off prompts)
    - Outcome Acceleration: Level 1 (docs faster, but learning cycles unchanged)
    - Team-AI Facilitation: Level 2 (team uses AI, but no norms)
    - Strategic Differentiation: Level 1 (efficiency only)
    
    **Recommendation:** Focus on **Context Design** first.
    
    **Action Plan (Week 1-4):**
    - Week 1: Create constraints registry (10 technical constraints)
    - Week 2: Build operational glossary (15 terms)
    - Week 3: Establish evidence standards
    - Week 4: Add context to CLAUDE.md files
    
    **Outcome:** After 4 weeks, Context Design → Level 3. Unlocks Agent Orchestration next quarter.
    
    ---
    
    ### Example 2: Growth-Stage Company (Transitioning → AI-Shaped)
    
    **Context:**
    - Team: 10 PMs, 50 engineers, 5 designers
    - AI Usage: Claude Projects for research, custom workflows emerging
    - Goal: Build defensible AI advantage before IPO
    
    **Assessment Results:**
    - Context Design: Level 3 (structured context, not comprehensive)
    - Agent Orchestration: Level 3 (some workflows, manual handoffs)
    - Outcome Acceleration: Level 2 (modest gains, not systematic)
    - Team-AI Facilitation: Level 3 (norms emerging, not codified)
    - Strategic Differentiation: Level 2 (new capabilities, but copyable)
    
    **Recommendation:** Focus on **Outcome Acceleration** (foundation is solid; now compress learning cycles).
    
    **Action Plan (Week 1-4):**
    - Week 1: Identify bottleneck (discovery cycles take 3 weeks)
    - Week 2: Design AI workflow to run overnight feasibility checks
    - Week 3: Pilot on 1 initiative (cut cycle to 5 days)
    - Week 4: Scale to 3 initiatives
    
    **Outcome:** Learning cycles 5x faster → strategic separation from competitors → Level 4 Outcome Acceleration + Level 3 Strategic Differentiation.
    
    ---
    
    ### Example 3: Enterprise Company (AI-First, Scattered Usage)
    
    **Context:**
    - Team: 50 PMs, 300 engineers
    - AI Usage: Individual PMs use various tools, no consistency
    - Goal: Standardize AI usage, create cross-functional workflows
    
    **Assessment Results:**
    - Context Design: Level 2 (docs exist, not structured for AI)
    - Agent Orchestration: Level 1 (no shared workflows)
    - Outcome Acceleration: Level 1 (efficiency only)
    - Team-AI Facilitation: Level 1 (private usage, no norms)
    - Strategic Differentiation: Level 1 (no advantage)
    
    **Recommendation:** Focus on **Team-AI Facilitation** first (distributed team needs shared norms before building infrastructure).
    
    **Action Plan (Week 1-4):**
    - Week 1: Establish review norms (AI outputs are drafts)
    - Week 2: Set evidence standards (AI must cite sources)
    - Week 3: Define decision authority (AI recommends, leads decide)
    - Week 4: Pilot with 3 teams, gather feedback
    
    **Outcome:** Team-AI Facilitation → Level 3. Creates foundation for Context Design and Agent Orchestration next.
    
    ---
    
    ## Common Pitfalls
    
    ### 1. **Mistaking Efficiency for Differentiation**
    **Failure Mode:** "We use AI to write PRDs 2x faster—we're AI-shaped!"
    
    **Consequence:** Competitors copy within 3 months; no lasting advantage.
    
    **Fix:** Ask: "If a competitor threw 2x more people at this, could they match us?" If yes, it's efficiency (table stakes), not differentiation.
    
    ---
    
    ### 2. **Skipping Context Design**
    **Failure Mode:** Building Agent Orchestration workflows without durable context.
    
    **Consequence:** AI workflows are fragile (context changes break everything).
    
    **Fix:** Context Design is foundational. Don't skip it. Build constraints registry, glossary, evidence standards first.
    
    ---
    
    ### 3. **Individual Usage, Not Team Transformation**
    **Failure Mode:** "I'm AI-shaped, but my team isn't."
    
    **Consequence:** Can't scale; workflows die when you're on vacation.
    
    **Fix:** Prioritize Team-AI Facilitation. Shared norms > individual productivity.
    
    ---
    
    ### 4. **Focusing on Tools, Not Workflows**
    **Failure Mode:** "Should we use Claude or ChatGPT?"
    
    **Consequence:** Tool debates distract from organizational redesign.
    
    **Fix:** Tools don't matter. Workflows matter. Focus on redesigning how work gets done, not which AI you use.
    
    ---
    
    ### 5. **Speed Over Learning**
    **Failure Mode:** "AI helps us ship faster!"
    
    **Consequence:** Ship the wrong thing faster (if you're not compressing learning cycles).
    
    **Fix:** Outcome Acceleration is about learning faster, not building faster. Validate hypotheses in days, not weeks.
    
    ---
    
    ## References
    
    ### Related Skills
    - **[context-engineering-advisor](../context-engineering-advisor/SKILL.md)** (Interactive) — **Deep dive on Context Design competency:** Diagnose context stuffing, implement memory architecture, use Research→Plan→Reset→Implement cycle
    - **[problem-statement](../problem-statement/SKILL.md)** (Component) — Evidence-based problem framing (Context Design)
    - **[epic-hypothesis](../epic-hypothesis/SKILL.md)** (Component) — Testable hypotheses with evidence standards
    - **[pol-probe-advisor](../pol-probe-advisor/SKILL.md)** (Interactive) — Use AI to compress validation cycles (Outcome Acceleration)
    - **[discovery-process](../discovery-process/SKILL.md)** (Workflow) — Apply AI-shaped principles to discovery
    - **[positioning-statement](../positioning-statement/SKILL.md)** (Component) — Articulate your AI-driven differentiation (Strategic Differentiation)
    
    ### External Frameworks
    - **Dean Peters** — [*AI-First Is Cute. AI-Shaped Is Survival.*](https://deanpeters.substack.com/p/ai-first-is-cute-ai-shaped-is-survival) (Dean Peters' Substack, 2026)
    - **Dean Peters** — [*Context Stuffing Is Not Context Engineering*](https://deanpeters.substack.com/p/context-stuffing-is-not-context-engineering) (Dean Peters' Substack, 2026) — Deep dive on Competency #1 (Context Design)
    
    ### Further Reading
    - **Ethan Mollick** — *Co-Intelligence* (on AI as co-intelligence, not replacement)
    - **Shreyas Doshi** — Twitter threads on PM judgment augmentation with AI
    - **Lenny Rachitsky** — Newsletter interviews with AI-forward PMs
    
  • skills/altitude-horizon-framework/SKILL.mdskill
    Show content (14591 bytes)
    ---
    name: altitude-horizon-framework
    description: Understand the PM-to-Director transition through altitude and horizon thinking. Use when diagnosing scope, time-horizon, or leadership-level gaps.
    intent: >-
      Defines the two-axis mental model that distinguishes Director-level thinking from PM thinking: **Altitude** (how wide you zoom out) and **Horizon** (how far ahead you look). Use this to understand what actually changes in the transition, diagnose which transition zone is creating friction, and apply the Cascading Context Map when organizational direction is vague or absent.
    type: component
    theme: career-leadership
    best_for:
       - "Understanding what actually changes when you move from PM to Director"
       - "Diagnosing which transition zone is creating friction in your role"
       - "Applying the Cascading Context Map when organizational direction is vague"
    scenarios:
       - "I'm a senior PM trying to understand what changes when I become a Director"
       - "I'm newly promoted to Director and something isn't clicking — help me diagnose it"
       - "My team has no clear direction and I need to create context from a vague company strategy"
    estimated_time: "10-15 min"
    ---
    
    ## Purpose
    
    Defines the two-axis mental model that distinguishes Director-level thinking from PM thinking: **Altitude** (how wide you zoom out) and **Horizon** (how far ahead you look). Use this to understand what actually changes in the transition, diagnose which transition zone is creating friction, and apply the Cascading Context Map when organizational direction is vague or absent.
    
    This is not a seniority hierarchy. A PM operating at the right altitude for their role is doing excellent work. A Director operating at PM altitude is leaving their actual job undone.
    
    ## Key Concepts
    
    ### The Two Axes
    
    **Altitude — Scope**
    - **PM altitude:** Close to the ground. Customer problems, individual features, sprint priorities, specific team dynamics.
    - **Director altitude:** High-level view. Product portfolio, cross-functional systems, organizational dynamics, budget allocation, market positioning.
    - The shift is not about losing empathy for customers — it's about zooming out to see the entire restaurant, not just one table.
    
    **Horizon — Time**
    - **PM horizon:** Days, weeks, sprints. A quarter at most.
    - **Director horizon:** Quarter as the starting point. Annual planning cycles, multi-year strategy, market shifts.
    - Directors plan for where the product ecosystem needs to be in a year, then work backward.
    
    ---
    
    ### The Waiter vs. Restaurant Operator
    
    The sharpest analogy for the role shift:
    
    | Dimension | PM (Waiter) | Director (Restaurant Operator) |
    |---|---|---|
    | Focus | Individual diner experience | Entire system — staffing, margins, menu, suppliers |
    | Authority | Influence without control | Portfolio decisions, budget, resource allocation |
    | Success metric | Table seven is happy | Restaurant is profitable, consistent, and scalable |
    | Relationship to customers | Direct, daily, intimate | Aggregate patterns, buyer personas, market cohorts |
    | Failure mode | Ignoring Table Seven's needs | Obsessing over Table Seven's lemons |
    
    The waiter excels at translating the experience of individual diners. The operator isn't ignoring diners — they're asking different questions: "Are we overspending on ingredients? Is a 75-page menu confusing customers? Do we need another server for the dinner rush?" Neither question is more important in absolute terms. They're appropriate to different roles.
    
    ---
    
    ### Four Transition Zones
    
    The PM → Director shift requires movement across four zones. Most people struggle with one or two more than the others — diagnosing which one is the leverage point.
    
    **Zone 1 — Thinking Altitude**
    - Stop: Solving individual customer problems directly
    - Start: Designing systems and teams that solve classes of problems
    
    **Zone 2 — Persona Shift**
    - Stop: Obsessing over individual user personas and daily customer touchpoints
    - Start: Thinking in buyer personas, market cohorts, organizational stakeholders, and executive dynamics
    
    **Zone 3 — Hero Syndrome Recovery**
    - Stop: Being the person who saves the day and earns the pat on the back
    - Start: Getting satisfaction from team success — your product is your people, not the roadmap
    
    **Zone 4 — Direction Creation**
    - Stop: Waiting for clear direction from above before moving
    - Start: Creating context cascades that translate company strategy into team clarity, even when inputs are incomplete
    
    ---
    
    ### Named Failure Modes
    
    **Hero Syndrome**
    What it looks like: Jumping in to solve problems directly. Staying close to the tactical work. Wanting visibility on individual wins.
    Why it happens: PMs are trained to be helpful and responsive. Directors get fewer pats on the back, so they regress to the old reward loop.
    The cost: You under-perform as a Director while over-functioning as a senior IC. Your team doesn't develop because you're in their way.
    
    **Allergic to Process**
    What it looks like: Resisting shared structures. Letting high-performing PMs run their own playbooks independently.
    Why it happens: PMs naturally resist bureaucracy. Early director permissiveness can feel like "great leadership" and "trusting the team."
    The cost: Stakeholders across marketing, finance, and leadership can't synthesize inconsistent outputs. Without shared processes, teams become "monkeys in the room breaking glass."
    
    **People-Pleaser Leadership**
    What it looks like: Wanting the team to like you. Avoiding hard feedback. Saying yes to stakeholder requests to preserve relationships.
    Why it happens: The skills that made you a great PM — listening, empathy, responsiveness — become liabilities at organizational scale.
    The cost: You confuse "popular" with "effective." Respect is built through clarity and hard calls, not niceness.
    
    **Instant Gratification Trap**
    What it looks like: Reading leadership books, collecting certifications, asking "what do I need to do to get promoted?"
    Why it happens: PMs are good at optimization. They try to shortcut the experience requirement.
    The cost: Director readiness requires war stories and lived humility. You can study your way to fluency in the vocabulary, but not to readiness for the role.
    
    **Black-and-White Thinking**
    What it looks like: "This seems like an obvious decision." "Why can't we fund both?" "Why is everything so political here?"
    Why it happens: PMs operate in cleaner problem spaces with clearer cause-and-effect. Director decisions involve competing constraints, limited information, and organizational dynamics.
    The cost: Fast decisions with low confidence create downstream chaos. The grayscale is not a failure of leadership — it's the actual terrain.
    
    ---
    
    ### The Cascading Context Map
    
    When organizational direction is vague or absent, Directors don't wait — they cascade.
    
    **The six steps:**
    
    1. **Listen to the top-level strategy** — QBRs, company messaging, executive communications
    2. **Extract key priorities leadership stated** — Identify 3–5 themes, not 20 bullet points
    3. **Map the second layer:** "How does our business unit accomplish these objectives?"
    4. **Map the third layer:** "How does our product portfolio accomplish that?"
    5. **Map the fourth layer:** "What are my team's specific accountabilities that drive success at layer three?"
    6. **Communicate the cascade to the team** — Not just what to do, but why it connects upward
    
    **What this fixes:** Teams "wandering in the wilderness" — shipping work that doesn't connect to strategy because the context was never translated for them.
    
    **The core principle:** Even with incomplete direction from above, a Director's job is to fill the gap downward. Waiting for perfect clarity is a PM habit. Creating imperfect-but-useful clarity is a Director skill.
    
    ---
    
    ## Application
    
    ### Using This Framework as a PM (Pre-Transition)
    
    1. Identify which transition zone you're weakest in — not to act on it yet, but to know what to observe
    2. Use 1-on-1s with your manager to practice Zone 4: "How does my work connect to business strategy? What's the organizational context I'm not seeing?"
    3. Watch for Hero Syndrome habits now: do you jump in to solve things that others could solve with your coaching?
    4. Don't over-invest in Director thinking while you're still in a PM role. Serve your current scope with full commitment — director altitude will be available when the context requires it
    
    ### Using This Framework as a Newly Promoted Director
    
    1. **First 30 days:** Draw your new Altitude & Horizon map. Who are your new stakeholders? What does a quarter-to-annual planning horizon actually look like in this organization?
    2. **First 60 days:** Identify your Hero Syndrome triggers. When do you feel the pull to jump in directly instead of coaching?
    3. **First 90 days:** Run your first Cascading Context Map. Even if company strategy is unclear, make your best translation and share it with your team
    4. **Ongoing:** When friction appears, name which transition zone it lives in. Diagnosis before prescription
    
    ### Running a Cascading Context Map
    
    Use when your team is unclear on what organizational strategy means for their work.
    
    ```markdown
    ## Context Cascade
    
    **Company Priority:** [What leadership said — in their words]
    **Business Unit Translation:** [How your BU contributes to that priority]
    **Product Portfolio Translation:** [How your products contribute to that]
    **Team Accountabilities:** [What each team owns specifically]
    **Why this matters:** [The so-what for your team — what changes, what stays the same]
    ```
    
    One page is better than ten. The goal is clarity, not comprehensiveness.
    
    ---
    
    ## Examples
    
    See `examples/sample.md` for a full worked scenario with a completed Cascading Context Map and anti-pattern contrast.
    
    ### Good: Director Creates Clarity from a Vague Company Priority
    
    **Situation:** CEO announces at QBR: "We're doubling down on enterprise." Three PMs ask their Director: "What does that mean for our roadmaps?"
    
    **PM response (wrong altitude):** "Let's add enterprise features to our sprint backlogs."
    
    **Director response (right altitude):** Runs a Cascading Context Map. Translates: "Enterprise means larger deal sizes, longer sales cycles, and more integration requirements. For our portfolio: Product A owns the admin controls story, Product B owns the API documentation story, Product C owns the security certification story. Here's what changes in Q3 planning and what doesn't."
    
    **Why it works:** Director didn't wait for more clarity. They created it from available signal.
    
    ---
    
    ### Bad: Hero Syndrome in Action
    
    **Situation:** A PM on the team is struggling with a difficult stakeholder relationship.
    
    **Director response (wrong):** "Let me just talk to that stakeholder directly — I'll get it sorted out."
    
    **Director response (right):** "Walk me through what you've tried. Let's figure out where it broke down and what you'll do differently."
    
    **Why it matters:** The first response solves the problem and creates dependency. The second response grows the PM. Directors who rescue too often build teams that can't function without them.
    
    ---
    
    ### Good: Shifting from Waiter to Operator
    
    **Situation:** A high-performing PM insists on documenting requirements in a different format from the rest of the team because "my stakeholders prefer it."
    
    **Director response (wrong):** "That's fine, she's our best PM — if it works for her team, let it go."
    
    **Director response (right):** "Joe is crushing it individually. But when marketing tries to synthesize across all three PMs' work, they can't. Shared process isn't bureaucracy — it's what makes the system legible to everyone outside it."
    
    **Why it matters:** Protecting high-performer exceptions creates invisible coordination costs. The Restaurant Operator's job is the system, not the star waiter.
    
    ---
    
    ## Common Pitfalls
    
    ### Pitfall 1: Altitude Theater
    **Symptom:** Using strategy language ("portfolio," "ecosystem," "long-term vision") while still making sprint-level decisions
    
    **Consequence:** You sound like a Director but function like a PM. Your team is confused about who's actually deciding and at what level.
    
    **Fix:** If you're in the details, own it. If you're not, delegate it fully. Mixing altitude levels without signaling creates ambiguity that erodes team trust.
    
    ---
    
    ### Pitfall 2: One-and-Done Context Cascade
    **Symptom:** Running the Cascading Context Map once at annual planning, then never revisiting it
    
**Consequence:** Team aligns in Q1 and drifts as strategy evolves. By Q3, the team's work is decoupled from current priorities.
    
    **Fix:** Revisit the cascade at major inflection points — quarterly planning, significant exec changes, pivots, or org restructuring.
    
    ---
    
    ### Pitfall 3: Confusing Kindness with Leadership
    **Symptom:** Shielding the team from hard decisions, over-explaining constraints you're holding, softening feedback into meaninglessness
    
    **Consequence:** Team operates without accurate context; trust erodes when reality eventually lands without warning.
    
    **Fix:** Be transparent about the "why" behind hard decisions. You don't need to share everything — but what you share should be honest and actionable.
    
    ---
    
    ### Pitfall 4: Premature Director Thinking as a PM
    **Symptom:** Spending PM years worried about portfolio strategy, organizational dynamics, and "thinking above your pay grade"
    
    **Consequence:** You under-serve your current role. PMs who think like Directors often miss the customer-level signal their actual role requires.
    
    **Fix:** Play your current role with full commitment. The transition will demand Director thinking soon enough — you'll be ready because you did your PM work well, not because you rehearsed the Director role prematurely.
    
    ---
    
    ## References
    
    ### Related Skills
    - `skills/director-readiness-advisor/SKILL.md` — Interactive advisor that uses this framework to diagnose and coach your specific transition situation
    
    ### Source Material
    - *The Product Porch*, Episode 42: [From Product Manager to Director: How to Make the Shift (Part 1)](https://the-product-porch-43ca35c0.simplecast.com/episodes/from-product-manager-to-director-how-to-make-the-shift-part-1) — Todd Blaquiere, Ryan Cantwell, Joe Ghali (January 2026)
    
    ### External Frameworks
    - Marty Cagan, *Empowered* — Organizational dynamics and role clarity in product leadership
    - Julie Zhuo, *The Making of a Manager* — IC-to-manager transition with practical war stories
    - Michael Watkins, *The First 90 Days* — Structured approach to leadership transitions
    
  • .claude-plugin/marketplace.jsonmarketplace
    Show content (17471 bytes)
    {
      "name": "pm-skills",
      "owner": {
        "name": "Dean Peters",
        "email": "dean.peters@productside.com"
      },
      "metadata": {
        "description": "47 battle-tested product management skills for Claude Code — discovery, strategy, finance, career, and more.",
        "version": "0.75.0"
      },
      "plugins": [
        {
          "name": "company-research",
          "source": "./skills/company-research",
          "description": "Create a company research brief with executive quotes, product strategy, and org context.",
          "category": "discovery",
          "tags": [
            "pm",
            "research",
            "competitive-analysis",
            "interviews"
          ],
          "strict": false
        },
        {
          "name": "customer-journey-map",
          "source": "./skills/customer-journey-map",
          "description": "Create a customer journey map across stages, touchpoints, actions, emotions, and metrics.",
          "category": "discovery",
          "tags": [
            "pm",
            "journey-map",
            "ux",
            "customer-experience"
          ],
          "strict": false
        },
        {
          "name": "customer-journey-mapping-workshop",
          "source": "./skills/customer-journey-mapping-workshop",
          "description": "Run a customer journey mapping workshop with adaptive questions and outputs.",
          "category": "discovery",
          "tags": [
            "pm",
            "journey-map",
            "workshop",
            "facilitation"
          ],
          "strict": false
        },
        {
          "name": "discovery-interview-prep",
          "source": "./skills/discovery-interview-prep",
          "description": "Plan customer discovery interviews with the right goal, segment, constraints, and method.",
          "category": "discovery",
          "tags": [
            "pm",
            "discovery",
            "interviews",
            "customer-research"
          ],
          "strict": false
        },
        {
          "name": "discovery-process",
          "source": "./skills/discovery-process",
          "description": "Run a full discovery cycle from problem hypothesis to validated solution.",
          "category": "discovery",
          "tags": [
            "pm",
            "discovery",
            "validation",
            "workflow"
          ],
          "strict": false
        },
        {
          "name": "jobs-to-be-done",
          "source": "./skills/jobs-to-be-done",
          "description": "Uncover customer jobs, pains, and gains in a structured JTBD format.",
          "category": "discovery",
          "tags": [
            "pm",
            "jtbd",
            "customer-needs",
            "discovery"
          ],
          "strict": false
        },
        {
          "name": "lean-ux-canvas",
          "source": "./skills/lean-ux-canvas",
          "description": "Guide teams through Lean UX Canvas v2 to frame problems and surface assumptions.",
          "category": "discovery",
          "tags": [
            "pm",
            "lean-ux",
            "assumptions",
            "experimentation"
          ],
          "strict": false
        },
        {
          "name": "opportunity-solution-tree",
          "source": "./skills/opportunity-solution-tree",
          "description": "Build an Opportunity Solution Tree from outcomes to opportunities, solutions, and tests.",
          "category": "discovery",
          "tags": [
            "pm",
            "ost",
            "teresa-torres",
            "continuous-discovery"
          ],
          "strict": false
        },
        {
          "name": "problem-framing-canvas",
          "source": "./skills/problem-framing-canvas",
          "description": "Guide teams through MITRE's Problem Framing Canvas for clearer problem statements.",
          "category": "discovery",
          "tags": [
            "pm",
            "problem-framing",
            "mitre",
            "workshop"
          ],
          "strict": false
        },
        {
          "name": "problem-statement",
          "source": "./skills/problem-statement",
          "description": "Write a user-centered problem statement with who, what, why, and how it feels.",
          "category": "discovery",
          "tags": [
            "pm",
            "problem-statement",
            "framing",
            "empathy"
          ],
          "strict": false
        },
        {
          "name": "proto-persona",
          "source": "./skills/proto-persona",
          "description": "Create a proto-persona from current research, market signals, and team knowledge.",
          "category": "discovery",
          "tags": [
            "pm",
            "persona",
            "user-research",
            "assumptions"
          ],
          "strict": false
        },
        {
          "name": "pestel-analysis",
          "source": "./skills/pestel-analysis",
          "description": "Analyze political, economic, social, technological, environmental, and legal forces.",
          "category": "strategy",
          "tags": [
            "pm",
            "pestel",
            "macro-environment",
            "strategy"
          ],
          "strict": false
        },
        {
          "name": "positioning-statement",
          "source": "./skills/positioning-statement",
          "description": "Create a Geoffrey Moore-style positioning statement for your product.",
          "category": "strategy",
          "tags": [
            "pm",
            "positioning",
            "geoffrey-moore",
            "messaging"
          ],
          "strict": false
        },
        {
          "name": "positioning-workshop",
          "source": "./skills/positioning-workshop",
          "description": "Run a positioning workshop to surface target customer, need, category, and differentiation.",
          "category": "strategy",
          "tags": [
            "pm",
            "positioning",
            "workshop",
            "messaging"
          ],
          "strict": false
        },
        {
          "name": "prd-development",
          "source": "./skills/prd-development",
          "description": "Build a structured PRD connecting problem, users, solution, and success criteria.",
          "category": "strategy",
          "tags": [
            "pm",
            "prd",
            "requirements",
            "workflow"
          ],
          "strict": false
        },
        {
          "name": "press-release",
          "source": "./skills/press-release",
          "description": "Write an Amazon-style press release that defines customer value before building.",
          "category": "strategy",
          "tags": [
            "pm",
            "press-release",
            "amazon",
            "working-backwards"
          ],
          "strict": false
        },
        {
          "name": "prioritization-advisor",
          "source": "./skills/prioritization-advisor",
          "description": "Choose a prioritization framework based on stage, team context, and stakeholder needs.",
          "category": "strategy",
          "tags": [
            "pm",
            "prioritization",
            "rice",
            "scoring"
          ],
          "strict": false
        },
        {
          "name": "product-strategy-session",
          "source": "./skills/product-strategy-session",
          "description": "Run an end-to-end product strategy session across positioning, discovery, and roadmap.",
          "category": "strategy",
          "tags": [
            "pm",
            "strategy",
            "workshop",
            "workflow"
          ],
          "strict": false
        },
        {
          "name": "roadmap-planning",
          "source": "./skills/roadmap-planning",
          "description": "Plan a strategic roadmap across prioritization, epics, stakeholders, and sequencing.",
          "category": "strategy",
          "tags": [
            "pm",
            "roadmap",
            "planning",
            "workflow"
          ],
          "strict": false
        },
        {
          "name": "tam-sam-som-calculator",
          "source": "./skills/tam-sam-som-calculator",
          "description": "Calculate TAM, SAM, and SOM with explicit assumptions, methods, and caveats.",
          "category": "strategy",
          "tags": [
            "pm",
            "market-sizing",
            "tam",
            "business-case"
          ],
          "strict": false
        },
        {
          "name": "eol-message",
          "source": "./skills/eol-message",
          "description": "Write a clear, empathetic EOL announcement with rationale and next steps.",
          "category": "strategy",
          "tags": [
            "pm",
            "eol",
            "communication",
            "sunset"
          ],
          "strict": false
        },
        {
          "name": "epic-breakdown-advisor",
          "source": "./skills/epic-breakdown-advisor",
          "description": "Break down epics into user stories with Humanizing Work split patterns.",
          "category": "delivery",
          "tags": [
            "pm",
            "epics",
            "splitting",
            "backlog"
          ],
          "strict": false
        },
        {
          "name": "epic-hypothesis",
          "source": "./skills/epic-hypothesis",
          "description": "Frame an epic as a testable hypothesis with target user, outcome, and validation.",
          "category": "delivery",
          "tags": [
            "pm",
            "epics",
            "hypothesis",
            "validation"
          ],
          "strict": false
        },
        {
          "name": "storyboard",
          "source": "./skills/storyboard",
          "description": "Create a six-frame storyboard showing a user's journey from problem to solution.",
          "category": "delivery",
          "tags": [
            "pm",
            "storyboard",
            "narrative",
            "ux"
          ],
          "strict": false
        },
        {
          "name": "user-story",
          "source": "./skills/user-story",
          "description": "Create user stories with Mike Cohn format and Gherkin acceptance criteria.",
          "category": "delivery",
          "tags": [
            "pm",
            "user-stories",
            "acceptance-criteria",
            "gherkin"
          ],
          "strict": false
        },
        {
          "name": "user-story-mapping",
          "source": "./skills/user-story-mapping",
          "description": "Create a user story map with activities, steps, tasks, and release slices.",
          "category": "delivery",
          "tags": [
            "pm",
            "story-mapping",
            "backlog",
            "mvp"
          ],
          "strict": false
        },
        {
          "name": "user-story-mapping-workshop",
          "source": "./skills/user-story-mapping-workshop",
          "description": "Run a user story mapping workshop with adaptive questions and structured output.",
          "category": "delivery",
          "tags": [
            "pm",
            "story-mapping",
            "workshop",
            "facilitation"
          ],
          "strict": false
        },
        {
          "name": "user-story-splitting",
          "source": "./skills/user-story-splitting",
          "description": "Break a large story into smaller deliverable stories using proven split patterns.",
          "category": "delivery",
          "tags": [
            "pm",
            "splitting",
            "user-stories",
            "backlog"
          ],
          "strict": false
        },
        {
          "name": "acquisition-channel-advisor",
          "source": "./skills/acquisition-channel-advisor",
          "description": "Evaluate acquisition channels using unit economics, customer quality, and scalability.",
          "category": "finance",
          "tags": [
            "pm",
            "finance",
            "acquisition",
            "unit-economics",
            "growth"
          ],
          "strict": false
        },
        {
          "name": "business-health-diagnostic",
          "source": "./skills/business-health-diagnostic",
          "description": "Diagnose SaaS business health across growth, retention, efficiency, and capital.",
          "category": "finance",
          "tags": [
            "pm",
            "finance",
            "saas",
            "metrics",
            "diagnostic"
          ],
          "strict": false
        },
        {
          "name": "feature-investment-advisor",
          "source": "./skills/feature-investment-advisor",
          "description": "Evaluate feature investments using revenue impact, cost structure, ROI, and strategy.",
          "category": "finance",
          "tags": [
            "pm",
            "finance",
            "roi",
            "investment",
            "feature"
          ],
          "strict": false
        },
        {
          "name": "finance-based-pricing-advisor",
          "source": "./skills/finance-based-pricing-advisor",
          "description": "Evaluate pricing changes using ARPU, conversion, churn risk, NRR, and payback.",
          "category": "finance",
          "tags": [
            "pm",
            "finance",
            "pricing",
            "arpu",
            "churn"
          ],
          "strict": false
        },
        {
          "name": "finance-metrics-quickref",
          "source": "./skills/finance-metrics-quickref",
          "description": "Look up SaaS finance metrics, formulas, and benchmarks fast.",
          "category": "finance",
          "tags": [
            "pm",
            "finance",
            "metrics",
            "reference",
            "saas"
          ],
          "strict": false
        },
        {
          "name": "saas-economics-efficiency-metrics",
          "source": "./skills/saas-economics-efficiency-metrics",
          "description": "Evaluate SaaS unit economics and capital efficiency for scaling decisions.",
          "category": "finance",
          "tags": [
            "pm",
            "finance",
            "unit-economics",
            "efficiency",
            "saas"
          ],
          "strict": false
        },
        {
          "name": "saas-revenue-growth-metrics",
          "source": "./skills/saas-revenue-growth-metrics",
          "description": "Calculate SaaS revenue, retention, and growth metrics for momentum diagnosis.",
          "category": "finance",
          "tags": [
            "pm",
            "finance",
            "revenue",
            "retention",
            "growth"
          ],
          "strict": false
        },
        {
          "name": "altitude-horizon-framework",
          "source": "./skills/altitude-horizon-framework",
          "description": "Understand the PM-to-Director transition through altitude and horizon thinking.",
          "category": "career",
          "tags": [
            "pm",
            "career",
            "director",
            "leadership",
            "transition"
          ],
          "strict": false
        },
        {
          "name": "director-readiness-advisor",
          "source": "./skills/director-readiness-advisor",
          "description": "Guide the PM-to-Director transition across preparing, interviewing, and landing.",
          "category": "career",
          "tags": [
            "pm",
            "career",
            "director",
            "coaching",
            "transition"
          ],
          "strict": false
        },
        {
          "name": "executive-onboarding-playbook",
          "source": "./skills/executive-onboarding-playbook",
          "description": "Plan a VP or CPO 30-60-90 day diagnostic onboarding path.",
          "category": "career",
          "tags": [
            "pm",
            "career",
            "executive",
            "onboarding",
            "vp",
            "cpo"
          ],
          "strict": false
        },
        {
          "name": "product-sense-interview-answer",
          "source": "./skills/product-sense-interview-answer",
          "description": "Structure a spoken product-sense interview answer with segmentation and MVP tradeoffs.",
          "category": "career",
          "tags": [
            "pm",
            "career",
            "interview",
            "product-sense",
            "product-design"
          ],
          "strict": false
        },
        {
          "name": "vp-cpo-readiness-advisor",
          "source": "./skills/vp-cpo-readiness-advisor",
          "description": "Guide the transition to VP or CPO across preparing, interviewing, and recalibrating.",
          "category": "career",
          "tags": [
            "pm",
            "career",
            "vp",
            "cpo",
            "executive",
            "coaching"
          ],
          "strict": false
        },
        {
          "name": "ai-shaped-readiness-advisor",
          "source": "./skills/ai-shaped-readiness-advisor",
          "description": "Assess whether your product work is AI-first or AI-shaped across 5 competencies.",
          "category": "ai",
          "tags": [
            "pm",
            "ai",
            "readiness",
            "assessment",
            "competencies"
          ],
          "strict": false
        },
        {
          "name": "context-engineering-advisor",
          "source": "./skills/context-engineering-advisor",
          "description": "Diagnose context stuffing vs. context engineering in AI workflows.",
          "category": "ai",
          "tags": [
            "pm",
            "ai",
            "context-engineering",
            "llm",
            "prompting"
          ],
          "strict": false
        },
        {
          "name": "pol-probe",
          "source": "./skills/pol-probe",
          "description": "Define a Proof of Life probe to test a risky hypothesis cheaply.",
          "category": "ai",
          "tags": [
            "pm",
            "validation",
            "proof-of-life",
            "experimentation"
          ],
          "strict": false
        },
        {
          "name": "pol-probe-advisor",
          "source": "./skills/pol-probe-advisor",
          "description": "Select the right Proof of Life probe type based on hypothesis, risk, and resources.",
          "category": "ai",
          "tags": [
            "pm",
            "validation",
            "proof-of-life",
            "advisor"
          ],
          "strict": false
        },
        {
          "name": "recommendation-canvas",
          "source": "./skills/recommendation-canvas",
          "description": "Evaluate an AI product idea across outcomes, hypotheses, risks, and positioning.",
          "category": "ai",
          "tags": [
            "pm",
            "ai",
            "evaluation",
            "canvas",
            "recommendation"
          ],
          "strict": false
        },
        {
          "name": "workshop-facilitation",
          "source": "./skills/workshop-facilitation",
          "description": "Facilitate workshop sessions with consistent pacing, options, and progress tracking.",
          "category": "meta",
          "tags": [
            "pm",
            "facilitation",
            "workshop",
            "interactive"
          ],
          "strict": false
        },
        {
          "name": "skill-authoring-workflow",
          "source": "./skills/skill-authoring-workflow",
          "description": "Turn raw PM content into a compliant, publish-ready skill.",
          "category": "meta",
          "tags": [
            "pm",
            "skill-authoring",
            "contributor",
            "workflow"
          ],
          "strict": false
        }
      ]
    }
    

README

Product Manager Skills


╔════════════════════════════════════════════════════════════════════╗
║                                                                    ║
║   ██████╗ ███╗   ███╗    ███████╗██╗  ██╗██╗██╗     ██╗     ███████╗
║   ██╔══██╗████╗ ████║    ██╔════╝██║ ██╔╝██║██║     ██║     ██╔════╝
║   ██████╔╝██╔████╔██║    ███████╗█████╔╝ ██║██║     ██║     ███████╗
║   ██╔═══╝ ██║╚██╔╝██║    ╚════██║██╔═██╗ ██║██║     ██║     ╚════██║
║   ██║     ██║ ╚═╝ ██║    ███████║██║  ██╗██║███████╗███████╗███████║
║   ╚═╝     ╚═╝     ╚═╝    ╚══════╝╚═╝  ╚═╝╚═╝╚══════╝╚══════╝╚══════╝
║                                                                    ║
║   47 battle-tested skills + 6 command workflows                    ║
║   Claude Code • Cursor • Codex  • n8n • OpenClaw • and more ...    ║
║                                                                    ║
║   v0.78 • Apr 26, 2026 • CC BY-NC-SA 4.0                            ║
╚════════════════════════════════════════════════════════════════════╝

Help product managers become more awesome at their craft — and help them send the ladder down to others.

Battle-tested PM frameworks that teach both you and your AI agents how to do product management work at a professional level. You learn the why. Your agents execute the how. Everyone gets better.

Frame problems, hunt opportunities, scaffold validation experiments, and kill bad bets fast. With frameworks from Teresa Torres, Geoffrey Moore, Amazon, MITRE, and much more from product management's greatest hits.


Start Here

Choose your setup:

| I use... | Download/use this | Best for |
| --- | --- | --- |
| Claude Desktop / Claude Web | pm-skills-starter-pack.zip | Nontechnical PMs |
| Claude Code | Plugin marketplace | Terminal users |
| Codex | pm-skills-codex.zip or repo clone | Codex CLI/app users |
| I am not sure | pm-skills-starter-pack.zip | Most people |

Fastest Path

Download the starter pack here:

pm-skills-starter-pack.zip

Unzip it. Inside, you will see individual skill ZIPs.

Upload those individual ZIPs to Claude Skills.

Ask:

Use the Product Manager Skills to help me frame this product problem.

Download ZIPs

All downloadable ZIPs live on the GitHub Releases page:

Open Product Manager Skills Releases

Common downloads:

Install Guides


📣 Updates & Announcements

Apr 26, 2026 — v0.78 Release Packaging: One Download, Then Better PM Work

This release makes Product Manager Skills easier to use outside the repo. The job is simple: when a PM wants to use these skills with Claude or Codex, they should not have to understand GitHub folders, packaging scripts, or agent internals first.

Who this is for: nontechnical PMs using Claude Desktop/Web, Claude Code users who prefer the plugin marketplace, Codex users who need .agents/skills, and maintainers who want releases to be repeatable instead of handmade.

What changed in v0.78:

  • Claude Desktop/Web users get easy-button ZIP packs that contain individual upload-ready skill ZIPs, including a small starter pack and themed packs for discovery, strategy, delivery, and AI PM work
  • Codex users get a codex-product-manager-skills.zip with .agents/skills and AGENTS.md
  • Maintainers get one release command: ./scripts/build-release.sh
  • GitHub Actions now builds release artifacts on PRs, main, and version tags
  • New install docs tell each audience which path to use

Why it matters: the repo now has a source, a shelf, a storefront, and a rescue desk. skills/ remains the source of truth; dist/ becomes the generated shelf; GitHub Releases becomes the storefront; and the README helps people choose the right path without feeling lost.

Release note: docs/announcements/2026-04-26-v0-78-release-packaging.md


Mar 17, 2026 — v0.75 Pedagogic-First: Restoring What This Repo Is Actually For

I want to apologize to a contributor who recently submitted a well-intentioned and well-coded improvement that stripped learning scaffolding in favor of tighter copy. It wasn't their fault — the docs they read never crisply stated that pedagogic value is non-negotiable. We fixed that. I will work with that contribution to bring in its efficiencies while retaining the learning aspects of the skills.

What this repo is actually for: As much as this repo is for adding skills to your agent, it's equally tasked to help product managers become more awesome at their craft, and helping them send the ladder down to others. Skills here serve both goals: they make your agent more capable, and they make you more knowledgeable about why the framework works. Neither is a byproduct of the other.

ABC — Always Be Coaching is a key governing principle. Every skill should leave the person using it knowing more than when they started. Stripping explanation to tighten output is a defect, not an improvement.

What changed in v0.75:

  • README.md — Mission statement updated to name both audiences: human PMs and AI agents
  • CONTRIBUTING.md — New Design Philosophy section so contributors know what they're protecting
  • CLAUDE.md — Pedagogic-first added to the agent's mandate, not just the style guide
  • AGENTS.md — New Operating Philosophy section so coding agents don't optimize away the teaching

Release note: docs/announcements/2026-03-17-v0-75-pedagogic-first.md

Now available: Install skills directly from Claude Code via the plugin marketplace:

/plugin marketplace add deanpeters/Product-Manager-Skills
/plugin install jobs-to-be-done@pm-skills

Mar 9, 2026 — v0.7 Sharper Skills, Faster Discovery

This release is about making the library easier to trust and easier to use.

As this repo grows, the standard has to rise with it. So v0.7 focuses on the parts users actually feel:

  • finding the right skill faster,
  • understanding when to use it,
  • getting cleaner activation behavior,
  • and trusting that the repo is being actively tightened, not just expanded.

Why it matters:

  1. You spend less time guessing which skill to use.
  2. Skills are more likely to show up in the situations where you actually need them.
  3. The library becomes easier to navigate as it grows, not more chaotic.
  4. Quality becomes a maintained promise, not a one-time cleanup.

What shipped:

  • Trigger-oriented description updates across the skill library so skills answer both "what it does" and "use this when..."
  • New intent frontmatter field so every skill can keep a sharp trigger description and a richer deeper-purpose summary
  • New trigger-readiness auditing in scripts/check-skill-triggers.py
  • Trigger checks wired into scripts/test-library.sh
  • New find-a-skill.sh --mode trigger for discovering skills by use-case language, best_for, and scenarios
  • New Streamlit (beta) Find My Skill mode so users can describe a situation in plain English and get recommended skills with clear next actions
  • Streamlit navigation now separates Learn, Find My Skill, and Run Skills so first-time users can move from confusion to action faster
  • Contributor docs updated so future skills follow the same tighter standard
  • Cross-checked the tighter standard against Anthropic's Complete Guide to Building Skills for Claude

Release note: docs/announcements/2026-03-09-v0-7-skill-quality-trigger-clarity.md


Mar 8, 2026 — v0.65 You Asked, We Listened: Setup + Integration Everywhere

You asked, we listened. We took a moment to create comprehensive instructions on how to install, integrate, or otherwise use any one or all of these skills.

What shipped:

  • docs/Using PM Skills 101.md as the complete beginner-first guide
  • docs/Platform Guides for PMs.md as the pick-your-tool index
  • docs/Using PM Skills with Slash Commands 101.md for Claude /slash workflows like /pm-story and /pm-prd
  • New PM-friendly platform docs for Claude Code, Claude Desktop, Claude Cowork, ChatGPT Desktop, OpenClaw, n8n, LangFlow, and Python agents
  • Updated START_HERE.md with comfort-level paths (chat-first, terminal-first, automation-first)

How to make the best use of this release:

  1. Start with docs/Using PM Skills 101.md
  2. Choose your platform in docs/Platform Guides for PMs.md
  3. Run one real task with one skill before scaling to multi-skill workflows

Release note: docs/announcements/2026-03-08-v0-65-onboarding-integration-guides.md


Mar 6, 2026 — v0.6 Navigation + Commands

We added a command layer and fast navigation system while keeping skills as the source of truth.

What shipped:

  • START_HERE.md for 60-second onboarding
  • commands/ directory with reusable multi-skill workflows
  • catalog/ generated indexes for quick browsing
  • New helper scripts: run-pm.sh, find-a-command.sh, test-library.sh, and generate-catalog.py
  • Command validation with scripts/check-command-metadata.py

Release note draft: docs/announcements/2026-03-06-v0-6-navigation-commands.md


Feb 27, 2026 — v0.5 Streamlit (beta) Playground

We launched a new Streamlit (beta) interface for local skill test-driving.

What shipped:

  • Local playground at app/main.py with guided browsing and session flows
  • Multi-provider support (Anthropic, OpenAI, Ollama) with provider/model picker
  • Environment-variable-only API handling (app/.env.example) for safer defaults
  • Workflow UX upgrades (phase detection fix, per-phase output persistence, run-all phases control)
  • Fast-model quality warnings on long workflows (especially PRD-style runs)

Docs:

Feedback welcome:


Feb 27, 2026 — v0.5 Career & Leadership Skills Suite

Four new skills covering the full product leadership career arc — from PM to Director to VP/CPO — distilled from two episodes of The Product Porch podcast.

Based on Episode 42 — From PM to Director: How to Make the Shift (Part 1):

  • altitude-horizon-framework (Component) — The core mental model: altitude (scope) and horizon (time), the waiter-to-operator shift, four transition zones, named failure modes, and the Cascading Context Map
  • director-readiness-advisor (Interactive) — Coaches PMs and new Directors across four situations: preparing, interviewing, newly landed, and recalibrating

Based on Episode 43 — Becoming a VP & CPO: Leading Product at the Executive Level (Part 2):

  • executive-onboarding-playbook (Workflow) — A 30-60-90 day diagnostic playbook for VP/CPO transitions: diagnose before acting, surface unwritten strategy, assess people, act with evidence
  • vp-cpo-readiness-advisor (Interactive) — Coaches Directors and executives through the VP/CPO transition, including the CEO interview framework for evaluating roles before accepting

Feb 10, 2026 — v0.4 Facilitation Protocol Fix

We found and fixed a facilitation regression in interactive flows.

What happened:

  • We expected guided, step-by-step facilitation with progressive context handling.
  • In practice, a brevity-focused rewrite path stripped out parts of the original facilitation modality (especially the "walk through questions" behavior).

What we changed in v0.4:

  • Standardized a canonical facilitation protocol in skills/workshop-facilitation/SKILL.md.
  • Rolled that source-of-truth linkage across interactive skills and facilitation-heavy workflow skills.
  • Added mandatory session heads-up, Context dump bypass, and Best guess mode.
  • Added stronger progress labels, interruption handling, and decision-point recommendation rules.

Credit:

  • Codex identified the protocol mismatch and implemented the fix across the repo.

Announcement draft: docs/announcements/2026-02-10-v0-4-facilitation-fix.md


Feb 8, 2026 — LinkedIn Launch

Post title: Product Management Skills for Your Agents
Subtitle: Because "just prompt better" is not a strategy.

Still rewriting PM prompts and getting generic AI output? I built a reusable PM Skills repo to help you make sharper decisions, docs, and outcomes faster.


🎯 What This Is

47 ready-to-use PM skills + reusable command workflows that teach both you and your AI agents how to do product management work at a professional level — so the PM understands the why and the agent can execute the how.

Instead of saying "Write a PRD" and hoping for the best, you and your agent both know:

  • ✅ How to structure a PRD and why each section earns its place
  • ✅ What questions to ask stakeholders and what you're listening for
  • ✅ Which prioritization framework to use (and when each one breaks down)
  • ✅ How to run customer discovery interviews and what signals matter
  • ✅ How to break down epics using proven patterns — and the tradeoffs of each

Result: You work faster, with better consistency, at a higher strategic level — and you can explain why.

Works with: Claude Code, Cowork, OpenAI Codex, ChatGPT, Gemini, and any AI agent that can read structured knowledge.


🎓 Design Philosophy — Pedagogic and Practical in Equal Measure

As much as this repo is for adding skills to your agent, it's equally tasked to help product managers become more awesome at their craft — and to help them send the ladder down to others.

Skills here serve both goals simultaneously. They equip AI agents to do PM work at a professional level, and they teach the human PM the why behind the framework — so they can explain it, adapt it, and pass it on.

ABC — Always Be Coaching is a key governing principle. Every skill should leave the person using it knowing more than when they started.

This means:

  • Skills explain reasoning, not just steps
  • Examples show the thinking, not just the output
  • Anti-patterns name the failure mode so you recognize it in the wild
  • Interactive skills coach through discovery — they don't just collect answers

An edit that strips learning scaffolding to tighten copy is a defect, not an improvement.


⚡ Start in 60 Seconds

New here? Start with START_HERE.md.

# Run a skill (artifact/analysis)
./scripts/run-pm.sh skill prioritization-advisor "We have 12 requests and one sprint"

# Run a command (multi-skill workflow)
./scripts/run-pm.sh command discover "Reduce onboarding drop-off for self-serve users"

Need discovery first?

./scripts/find-a-skill.sh --keyword onboarding
./scripts/find-a-command.sh --keyword roadmap

Why The Command Layer Helps

Commands make using skills easier without replacing skills.

  • Skills stay deep and pedagogic: they are the source of truth for frameworks, reasoning, and quality — for humans and agents alike.
  • Commands remove stitching work: one command chains the right skills in the right order.
  • You start faster: less "which skill should I run first?" and fewer manual handoffs.
  • Outputs are more consistent: commands enforce checkpoints, then defer to skill-level rigor.
  • Teams onboard quicker: new users can run /discover or /write-prd and learn the skill system while shipping.

In short: skills provide expertise; commands provide momentum.


🧪 Streamlit (beta)

Want a quick local test-drive before using skills in your agent workflow?

pip install -r app/requirements.txt
streamlit run app/main.py

What you can do in v0.7:

  • Learn setup and integration paths without leaving the app
  • Find My Skill by describing your situation in plain English
  • Run Skills with your own scenario once you know what you want

This beta interface is a feature in flight. Feedback is welcome via GitHub Issues or LinkedIn.


✅ Safety and Evaluation

Before using any skill:

  • Review the skill file and any linked resources. If it includes scripts/, read them before running.
  • Prefer least privilege. Skills should not require secrets or network access unless explicitly documented.
  • Do a quick dry run with a realistic prompt, then refine name and description for better discoverability.
  • Run python3 scripts/check-skill-triggers.py --show-cases before packaging if you want a quick trigger-readiness pass.
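
The checklist above can be partly scripted. Here is a minimal pre-use audit sketch, assuming the repo layout documented in this README; the chosen skill and the grep patterns are illustrative, not a complete security review:

```bash
SKILL=skills/tam-sam-som-calculator

# 1. List and read any bundled scripts before running them
ls "$SKILL/scripts/"
less "$SKILL"/scripts/*.py

# 2. Spot-check for network or subprocess usage (least-privilege check)
grep -nE 'requests|urllib|socket|subprocess' "$SKILL"/scripts/*.py

# 3. Trigger-readiness pass, as documented above
python3 scripts/check-skill-triggers.py --show-cases
```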

🧰 Optional Scripts (Deterministic Helpers)

Some skills include a scripts/ folder with deterministic helpers for calculations or formatting. These are optional, should be audited before running, and should avoid network calls or external dependencies.

Examples:

  • skills/tam-sam-som-calculator/scripts/market-sizing.py
  • skills/user-story/scripts/user-story-template.py

🤖 Skill Creation Utility

Want to create your own skills? These utilities cover creating, finding, testing, and packaging them:

  • scripts/add-a-skill.sh - Content-first, AI-assisted generation from notes/frameworks.
  • scripts/build-a-skill.sh - Guided "build-a-bear" wizard that prompts section-by-section.
  • scripts/find-a-skill.sh - Search skills by name/type/keyword with ranked results.
  • scripts/find-a-command.sh - Search commands by name/keyword/used skills.
  • scripts/run-pm.sh - Fast runner for either a skill or a command.
  • scripts/test-a-skill.sh - Run strict conformance checks and optional smoke checks.
  • scripts/check-skill-triggers.py - Audit frontmatter descriptions and scenario prompts for Claude-style triggering.
  • scripts/test-library.sh - Validate skills, commands, and regenerate catalogs.
  • scripts/zip-a-skill.sh - Build upload-ready .zip files by skill, type, or all skills.
  • scripts/generate-catalog.py - Regenerate skill/command navigation indexes.

New to terminals? See scripts/README.md for a plain-language walkthrough. Power users: These scripts are designed to chain together into fast end-to-end workflows (idea -> prompt -> validation -> packaging).

What add-a-skill.sh does:

  1. Analyzes your content and suggests skill types
  2. Generates complete skill files with examples
  3. Validates metadata for marketplace compliance
  4. Updates documentation automatically

Usage:

# From a file
./scripts/add-a-skill.sh research/your-framework.md

# Guided wizard
./scripts/build-a-skill.sh

# Find a skill
./scripts/find-a-skill.sh --keyword pricing --type interactive

# Find a command
./scripts/find-a-command.sh --keyword roadmap

# Run a command workflow
./scripts/run-pm.sh command write-prd "Mobile onboarding redesign"

# Test one skill
./scripts/test-a-skill.sh --skill finance-based-pricing-advisor --smoke

# Test full library surface
./scripts/test-library.sh

# Build Claude upload zip for one skill
./scripts/zip-a-skill.sh --skill finance-based-pricing-advisor

# Build Claude upload zips for all skills
./scripts/zip-a-skill.sh --all --output dist/skill-zips

# Build Claude upload zips for one category (component|interactive|workflow)
./scripts/zip-a-skill.sh --type component --output dist/skill-zips

# Build curated starter pack
./scripts/zip-a-skill.sh --preset core-pm --output dist/skill-zips

# Show available curated presets
./scripts/zip-a-skill.sh --list-presets

# From clipboard
pbpaste | ./scripts/add-a-skill.sh

# Check available adapters
./scripts/add-a-skill.sh --list-agents
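
For the power-user chaining mentioned above, here is a minimal idea-to-package sketch. The notes file and resulting skill name are hypothetical; the flags follow the usage examples in this section:

```bash
./scripts/add-a-skill.sh research/my-framework.md        # generate skill files from notes (hypothetical input)
./scripts/test-a-skill.sh --skill my-framework --smoke   # strict conformance + smoke checks
./scripts/zip-a-skill.sh --skill my-framework --output dist/skill-zips   # build the upload-ready ZIP
```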

Agent support: Claude Code, Manual mode (works with any CLI), and custom adapters via scripts/adapters/ADAPTER_TEMPLATE.sh

Learn more: See docs/Add-a-Skill Utility Guide.md for complete guide. Cloning locally? Start with docs/Building PM Skills.md#local-clone-quickstart.


✅ Claude Web Upload Checklist

  • Keep frontmatter name <= 64 chars and description <= 200 chars.
  • Use intent for the richer repo-facing explanation of the skill, while keeping description short and trigger-oriented.
  • Ensure the skill folder name matches the name value.
  • Use scripts/zip-a-skill.sh --skill <skill-name> (or --type component, --preset core-pm) to generate upload-ready ZIPs.
  • (Advanced) Use scripts/package-claude-skills.sh if you need unpacked upload-ready folders.
  • Validate metadata with scripts/check-skill-metadata.py.
  • For GitHub ZIP upload flow, see docs/Using PM Skills with Claude.md.
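
If you want a quick sanity check on those length limits before zipping, a rough sketch follows (it assumes single-line name: and description: frontmatter fields; scripts/check-skill-metadata.py remains the real validator):

```bash
awk '/^(name|description):/ {
  key = $1
  sub(/^[a-z]+:[[:space:]]*/, "")   # strip the "key: " prefix from the line
  printf "%s %d chars\n", key, length($0)
}' skills/user-story/SKILL.md
```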

🏗️ Three-Tier Architecture (How Skills Work Together)

These 47 skills are organized into three types that build on each other:

┌───────────────────────────────────────────────────────────┐
│  WORKFLOW SKILLS (6)                                      │
│  Complete end-to-end PM processes                         │
│  Example: "Run a product strategy session"                │
│  Timeline: 2-4 weeks                                      │
└───────────────────────────────────────────────────────────┘
                         ↓ orchestrates
┌───────────────────────────────────────────────────────────┐
│  INTERACTIVE SKILLS (20)                                  │
│  Guided discovery with adaptive questions                 │
│  Example: "Which prioritization framework should I use?"  │
│  Timeline: 30-90 minutes                                  │
└───────────────────────────────────────────────────────────┘
                         ↓ uses
┌───────────────────────────────────────────────────────────┐
│  COMPONENT SKILLS (21)                                    │
│  Templates for specific PM deliverables                   │
│  Example: "Write a user story"                            │
│  Timeline: 10-30 minutes                                  │
└───────────────────────────────────────────────────────────┘

Component Skills (21) — Templates & Artifacts

What: Reusable templates for creating specific PM deliverables (user stories, positioning statements, epics, personas, PRDs, etc.)

When to use: You need a standard template or format for a specific deliverable.

Example: "Write a user story with acceptance criteria" → Use user-story.md


Interactive Skills (20) — Guided Discovery

What: Multi-turn conversational flows where AI asks you 3-5 adaptive questions, then offers smart recommendations based on your context.

When to use: You need help deciding which approach to take or gathering context before executing.

Example: "Which prioritization framework should I use?" → Run prioritization-advisor.md, which asks about your product stage, team size, data availability, then recommends RICE, ICE, Kano, or other frameworks.

How they work:

  1. AI asks 3-5 questions about your context
  2. You answer (or pick from numbered options)
  3. AI offers 3-5 tailored recommendations
  4. You choose one (or combine approaches)
  5. AI executes using the right component skills
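
To test-drive one of these flows locally, the same runner shown in "Start in 60 Seconds" applies; the scenario string below is illustrative:

```bash
./scripts/run-pm.sh skill prioritization-advisor \
  "B2B SaaS, 8-person team, 12 competing requests, one sprint of capacity"
```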

Workflow Skills (6) — End-to-End Processes

What: Complete PM processes that orchestrate multiple component and interactive skills over days/weeks.

When to use: You need to run a full PM workflow from start to finish (strategy session, discovery cycle, roadmap planning, PRD creation).

Example: "Align stakeholders on product strategy" → Run product-strategy-session.md, which guides you through positioning → problem framing → solution exploration → roadmap planning over 2-4 weeks.


📦 All 47 Skills

Now that you understand the three types, here's the complete catalog:

🧱 Component Skills (21)

| Skill | Use When You Need To... |
| --- | --- |
| altitude-horizon-framework | Understand the PM→Director mindset shift: altitude (scope), horizon (time), four transition zones, failure modes, and the Cascading Context Map. Based on The Product Porch E42 |
| company-research | Deep-dive competitor or company analysis |
| customer-journey-map | Map customer experience across all touchpoints (NNGroup framework) |
| eol-message | Communicate product/feature deprecation gracefully |
| epic-hypothesis | Turn vague initiatives into testable hypotheses with success metrics |
| finance-metrics-quickref | Fast lookup table for 32+ SaaS finance metrics with formulas, benchmarks, and when to use each |
| jobs-to-be-done | Understand what customers are trying to accomplish (JTBD framework) |
| pestel-analysis | Analyze external factors (Political, Economic, Social, Tech, Environmental, Legal) |
| pol-probe | Define lightweight, disposable validation experiments to test hypotheses before building (Dean Peters PoL framework) |
| positioning-statement | Define who you serve, what problem you solve, and how you're different (Geoffrey Moore framework) |
| press-release | Write a future press release to clarify product vision (Amazon Working Backwards) |
| problem-statement | Frame a customer problem with evidence before jumping to solutions |
| product-sense-interview-answer | Structure a spoken product-sense answer with assumptions, segmentation, pain-point prioritization, and MVP tradeoffs |
| proto-persona | Create hypothesis-driven personas before doing full research |
| recommendation-canvas | Document AI-powered product recommendations |
| saas-economics-efficiency-metrics | Evaluate unit economics and capital efficiency (CAC, LTV, payback, margins, burn rate, Rule of 40, magic number) |
| saas-revenue-growth-metrics | Calculate and interpret revenue, retention, and growth metrics (revenue, ARPU, MRR/ARR, churn, NRR, expansion) |
| storyboard | Visualize user journeys with 6-frame narrative storyboards |
| user-story | Write user stories with proper acceptance criteria (Mike Cohn + Gherkin) |
| user-story-mapping | Organize stories by user workflow (Jeff Patton framework) |
| user-story-splitting | Break down large stories using 8 proven patterns |

🔄 Interactive Skills (20)

| Skill | What It Does |
| --- | --- |
| acquisition-channel-advisor | Evaluate acquisition channels using unit economics, customer quality, and scalability. Recommends scale/test/kill decisions |
| ai-shaped-readiness-advisor | Assess if you're "AI-first" (automating tasks) or "AI-shaped" (redesigning how you work). Evaluates 5 competencies and recommends which to build first |
| business-health-diagnostic | Diagnose SaaS business health using key metrics, identify red flags, and prioritize actions. Analyzes growth, retention, efficiency, and capital health |
| context-engineering-advisor | Diagnose context stuffing (volume without intent) vs. context engineering (structure for attention). Guides memory architecture, retrieval strategies, and Research→Plan→Reset→Implement cycle |
| customer-journey-mapping-workshop | Guides journey mapping with pain point identification |
| director-readiness-advisor | Coaches PMs and new Directors through the transition across four situations: preparing, interviewing, newly landed, recalibrating. Based on The Product Porch E42 |
| discovery-interview-prep | Plans customer interviews (Mom Test style) based on your research goals |
| epic-breakdown-advisor | Splits epics into user stories using Richard Lawrence's 9 patterns |
| feature-investment-advisor | Evaluate feature investments using revenue impact, cost structure, ROI, and strategic value. Delivers build/don't build recommendations |
| finance-based-pricing-advisor | Evaluate pricing changes using financial impact analysis (ARPU/ARPA, conversion, churn risk, NRR, payback) |
| lean-ux-canvas | Sets up hypothesis-driven planning (Jeff Gothelf Lean UX Canvas v2) |
| opportunity-solution-tree | Generates opportunities and solutions, recommends best proof-of-concept to test |
| pol-probe-advisor | Recommends which of 5 prototype types to use based on your hypothesis and risk (Feasibility, Task-Focused, Narrative, Synthetic Data, Vibe-Coded) |
| positioning-workshop | Guides you through defining your positioning with adaptive questions |
| prioritization-advisor | Recommends the right prioritization framework (RICE, ICE, Kano, etc.) for your situation |
| problem-framing-canvas | Leads you through MITRE Problem Framing (Look Inward/Outward/Reframe) |
| tam-sam-som-calculator | Projects market size (TAM/SAM/SOM) with real-world data and citations |
| user-story-mapping-workshop | Walks you through creating story maps with backbone and release slices |
| vp-cpo-readiness-advisor | Coaches Directors and executives through the VP/CPO transition — includes CEO interview framework for evaluating roles before accepting. Based on The Product Porch E43 |
| workshop-facilitation | Adds one-step-at-a-time facilitation with numbered recommendations for workshop skills |

🎭 Workflow Skills (6)

| Skill | What It Does | Timeline |
| --- | --- | --- |
| discovery-process | Complete discovery cycle: frame problem → research → synthesize → validate solutions | 3-4 weeks |
| executive-onboarding-playbook | 30-60-90 day diagnostic playbook for VP/CPO transitions: diagnose before acting, surface unwritten strategy, assess people, act with evidence. Based on The Product Porch E43 | 90 days |
| prd-development | Structured PRD: problem statement → personas → solution → metrics → user stories | 2-4 days |
| product-strategy-session | Full strategy: positioning → problem framing → solution exploration → roadmap | 2-4 weeks |
| roadmap-planning | Strategic roadmap: gather inputs → define epics → prioritize → sequence → communicate | 1-2 weeks |
| skill-authoring-workflow | Meta workflow: choose add/build path → validate conformance → update docs → package/publish | 30-90 minutes |

🔮 Agent Skills of the Future

Possible skills in development:

  • Dangerous Animals of Product Management - Feature hostage negotiations and stakeholder shuttle diplomacy for when you're facing HiPPOs, RHiNOs, and WoLFs (oh my!).
  • Pricing for Product Managers - Value-based pricing, packaging strategy, price increases, and grandfather clause negotiations without the panic spiral and flop sweat.
  • Classic Business Strategy Frameworks - Ansoff, BCG, Porter's 5 Forces, Blue Ocean, and SWOT in agent-ready format that helps you decide, not decorate slides.
  • Storytelling for Product Managers - Narrative arc, demo choreography, and pitch structure built on pro-opera lessons and Hakawati orations for commanding the room.
  • Prompt Building for Product Managers - Industrial-strength prompt engineering: team session starters, multi-turn workflow wizards, and reverse engineering templates for artifacts like PRDs.
  • Nightmares of Product Management - Telemetry, triage, and tactics for when things don't go as planned: adoption theater, feature graveyards, metric manipulation, launch amnesia, technical debt wildfires. Plus prevention strategies.

Detailed concept notes live in PLANS.md.


🚀 How to Use

Confused by setup options? Start here: PM Skills Rule-of-Thumb Guide.

Fastest Path (Local Repo)

# Skill mode
./scripts/run-pm.sh skill user-story "Checkout improvements for returning customers"

# Command mode
./scripts/run-pm.sh command plan-roadmap "Q3-Q4 roadmap for enterprise reporting"

Command definitions live in commands/, and generated browse indexes live in catalog/.

With Claude Desktop or Claude.ai

  1. Open a conversation with Claude
  2. Share the skill file: "Read user-story.md"
  3. Ask Claude to apply it: "Using the User Story skill, write stories for our checkout flow"

With Claude Code (CLI)

cd Product-Manager-Skills
claude "Using the PRD Development workflow, create a PRD for our mobile feature"

You can discover skills with npx skills find <query> or npx skills add deanpeters/Product-Manager-Skills --list, then install them for Claude Code. See Using PM Skills with Claude.
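
For example, a quick discovery session might look like this (assumes Node.js with npx available; the query string is only an illustration):

# Search the skills index for matching skills
npx skills find "roadmap planning"

# List this repo's skills before installing any of them
npx skills add deanpeters/Product-Manager-Skills --list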

With OpenAI Codex

Use local workspace paths, GitHub-connected Codex on ChatGPT, or discover/install directly with npx skills. See Using PM Skills with Codex.

With ChatGPT

Use GitHub app connections (formerly connectors), Custom GPT Knowledge uploads, or Project files. See Using PM Skills with ChatGPT.

With Cowork or Other Agents

  • Cowork: Import skills as knowledge modules, then invoke them via natural language.
  • Other agents: Follow your agent's docs for loading custom knowledge.


💼 Real-World Use Cases

"I need to align stakeholders on product strategy"

Workflow: product-strategy-session (2-4 weeks, orchestrates positioning → roadmap)

"I need to validate a customer problem before building"

Workflow: discovery-process (3-4 weeks, interviews → synthesis → validation)

"I need to test a hypothesis quickly before investing in development"

Interactive: pol-probe-advisor (recommends which prototype type: Feasibility, Task-Focused, Narrative, Synthetic Data, or Vibe-Coded) → Component: pol-probe (template for documenting validation experiments)

"I want to know if I'm using AI strategically or just for efficiency"

Interactive: ai-shaped-readiness-advisor (assesses 5 competencies: Context Design, Agent Orchestration, Outcome Acceleration, Team-AI Facilitation, Strategic Differentiation)

"I'm pasting entire docs into AI and getting vague responses"

Interactive: context-engineering-advisor (diagnose context stuffing vs. engineering, define boundaries, implement Research→Plan→Reset→Implement cycle)

"I need to write a PRD for a new feature"

Workflow: prd-development (2-4 days, problem → solution → stories)

"I need to create a Q2 roadmap"

Workflow: roadmap-planning (1-2 weeks, epics → prioritization → sequencing)

"I need to choose a prioritization framework"

Interactive: prioritization-advisor (asks questions, recommends RICE/ICE/Kano)

"I need to split a large epic"

Interactive: epic-breakdown-advisor (Richard Lawrence's 9 patterns)

"I need to write a user story"

Component: user-story (template + examples)


💡 Why Skills Beat Prompts

| Prompts | Skills |
| --- | --- |
| One-time instructions per task | Reusable frameworks learned once |
| "Write a PRD for X" | Agent knows PRD structure, asks smart questions, handles edge cases |
| You repeat yourself constantly | Agent remembers best practices |
| Inconsistent outputs | Consistent, professional results |

Skills = Less explaining, more strategic work.


🌟 What Makes These Skills Different

✅ Battle-Tested Frameworks

Built on proven methods from Geoffrey Moore, Jeff Patton, Teresa Torres, Amazon, Richard Lawrence, MITRE, and more.

✅ Real Client Work

Based on decades of PM consulting across healthcare, finance, manufacturing, and tech.

✅ Agent-Ready Format

Optimized for AI comprehension—not blog posts, not books, not courses. Executable frameworks.

✅ Zero Fluff

Every word earns its keep. No filler, no buzzwords, no generic advice.

✅ Example-Rich

Shows both "good" and "bad" examples so you know what works and what to avoid.


📚 Skill Structure (What's Inside Each File)

Every skill follows the same format:

## Purpose
What this skill does and when to use it.

## Key Concepts
Core frameworks, definitions, anti-patterns.

## Application
Step-by-step instructions (with examples).

## Examples
Real-world cases (good and bad).

## Common Pitfalls
What to avoid and why.

## References
Related skills and external frameworks.

Clean. Practical. Zero fluff.
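
If you author or edit a skill, the validation tooling named in the release notes below can check conformance to this structure. A hedged sketch: the script names come from the changelog, but the exact invocation and the skill path are assumptions.

# Run conformance checks on a single skill (the path here is hypothetical)
./scripts/test-a-skill.sh skills/my-new-skill/SKILL.md

# Audit trigger-readiness metadata (wired into test-library.sh per the release notes)
python scripts/check-skill-triggers.py skills/my-new-skill/SKILL.md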


🤝 Contributing

Found a gap? Have a PM framework you'd like to see included?

Ways to contribute:

  • Open an issue with your suggestion
  • Submit a pull request (we'll help you format it)
  • Share feedback on what's working or missing

See CONTRIBUTING.md for detailed guidelines.


🎓 Philosophy

Principles:

  • Outcome-driven over output-driven (solve problems, don't just ship features)
  • Evidence over vibes (validate with data, not opinions)
  • Clarity beats completeness (simple and usable beats comprehensive and confusing)
  • Examples beat explanations (show, don't just tell)

No hype. No buzzwords. Just frameworks that work.


📜 License

CC BY-NC-SA 4.0 — non-commercial use with share-alike.

See LICENSE for full details.


v0.78 — April 26, 2026

Highlights in this release:

  • Added one-command release packaging with ./scripts/build-release.sh
  • Added Claude Desktop/Web ZIP packs for starter, discovery, strategy, delivery, AI PM, and all-skills use cases
  • Added a Codex ZIP that installs .agents/skills plus AGENTS.md
  • Added GitHub Actions to validate, build, upload artifacts, and publish release assets on v* tags
  • Added install docs for Claude Desktop/Web, Claude Code, Codex, and release maintainers
  • Updated the README with a clearer Start Here path for people who just want to use the skills

v0.7 — March 9, 2026

Highlights in this release:

  • Tightened skill descriptions so they communicate both what the skill does and when to use it
  • Added intent as a repo-standard frontmatter field to separate trigger metadata from deeper purpose
  • Added scripts/check-skill-triggers.py and wired trigger-readiness auditing into test-library.sh
  • Added find-a-skill.sh --mode trigger so users can discover skills through description, best_for, and scenarios
  • Added a Streamlit (beta) Find My Skill mode with plain-English discovery, recommended-first results, and direct preview/run actions
  • Updated authoring docs and templates so the stronger metadata standard sticks

v0.65 — March 8, 2026

Highlights in this release:

  • Added comprehensive PM-first onboarding and setup guide: docs/Using PM Skills 101.md
  • Added platform chooser: docs/Platform Guides for PMs.md
  • Added slash-command playbook: docs/Using PM Skills with Slash Commands 101.md
  • Added and linked practical platform docs for Claude Code/Desktop/Cowork, ChatGPT Desktop, OpenClaw, n8n, LangFlow, and Python agents
  • Updated START_HERE.md and docs navigation so new users can pick the right setup path faster

v0.6 — March 6, 2026

Highlights in this release:

  • Added commands/ with reusable workflow wrappers over local skills (discover, strategy, write-prd, plan-roadmap, prioritize, leadership-transition)
  • Added START_HERE.md for 60-second onboarding
  • Added generated catalog/ artifacts for fast skill and command navigation
  • Added tooling for discovery/validation/execution: find-a-command.sh, run-pm.sh, check-command-metadata.py, test-library.sh, generate-catalog.py

v0.5 — February 27, 2026

Highlights in this release:

  • Added 4 Career & Leadership skills distilled from The Product Porch episodes on PM→Director and Director→VP/CPO transitions
  • Launched Streamlit (beta) local playground in app/ with multi-provider/model selection
  • Improved workflow UX in beta app: phase detection, explicit run controls, and per-phase output tracking

v0.4 — February 10, 2026

Highlights in this release:

  • Fixed a facilitation protocol regression where brevity-focused rewrites could remove expected guided-question behavior
  • Promoted workshop-facilitation to canonical source of truth for interactive facilitation
  • Added consistent opening heads-up, context-dump bypass path, and best-guess mode
  • Applied protocol linkage across interactive skills and facilitation-heavy workflow skills

v0.3 — February 9, 2026

Highlights in this release:

  • 42 total skills, including Phase 7 finance skills and the new skill-authoring-workflow
  • New skill tooling: add-a-skill, build-a-skill, find-a-skill, test-a-skill, zip-a-skill
  • New onboarding docs for Claude, Codex, ChatGPT, and non-technical "rule-of-thumb" setup

Built by Dean Peters (Principal Consultant and Trainer at Productside.com) with Anthropic Claude and OpenAI Codex.

Helping product managers work smarter with AI.