Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
01 / Skill
michalparkola

tapestry-skills

Quality
9.0

This repository offers a suite of productivity skills for AI agents, designed to streamline learning from various content types and transform insights into actionable plans. It excels at converting passive consumption into active implementation, helping users and agents move efficiently from understanding to shipping and continuous improvement.

USP

Unlike simple content extractors, this collection provides a unified workflow via `learn-this` to automatically extract content from diverse sources and immediately transform it into actionable plans using the unique Ship-Learn-Next framework.

Use cases

  • Extracting actionable insights from YouTube videos
  • Converting articles into implementation plans
  • Getting unstuck on vague tasks
  • Logging AI agent sessions
  • Agile coaching with Scrum Sage

Detected files (7)

  • article-extractor/SKILL.md (skill, 9641 bytes)
    ---
    name: article-extractor
    description: Extract clean article content from URLs (blog posts, articles, tutorials) and save as readable text. Use when user wants to download, extract, or save an article/blog post from a URL without ads, navigation, or clutter.
    allowed-tools: Bash,Write
    ---
    
    # Article Extractor
    
    This skill extracts the main content from web articles and blog posts, removing navigation, ads, newsletter signups, and other clutter. Saves clean, readable text.
    
    ## When to Use This Skill
    
    Activate when the user:
    - Provides an article/blog URL and wants the text content
    - Asks to "download this article"
    - Wants to "extract the content from [URL]"
    - Asks to "save this blog post as text"
    - Needs clean article text without distractions
    
    ## How It Works
    
    ### Priority Order:
    1. **Check if tools are installed** (reader or trafilatura)
    2. **Download and extract article** using best available tool
    3. **Clean up the content** (remove extra whitespace, format properly)
    4. **Save to file** with article title as filename
    5. **Confirm location** and show preview
    
    ## Installation Check
    
    Check for article extraction tools in this order:
    
    ### Option 1: reader (Recommended - Mozilla's Readability)
    
    ```bash
    command -v reader
    ```
    
    If not installed:
    ```bash
    npm install -g @mozilla/readability-cli
    # or
    npm install -g reader-cli
    ```
    
    ### Option 2: trafilatura (Python-based, very good)
    
    ```bash
    command -v trafilatura
    ```
    
    If not installed:
    ```bash
    pip3 install trafilatura
    ```
    
    ### Option 3: Fallback (curl + simple parsing)
    
    If no tools available, use basic curl + text extraction (less reliable but works)
    
    ## Extraction Methods
    
    ### Method 1: Using reader (Best for most articles)
    
    ```bash
    # Extract article
    reader "URL" > article.txt
    ```
    
    **Pros:**
    - Based on Mozilla's Readability algorithm
    - Excellent at removing clutter
    - Preserves article structure
    
    ### Method 2: Using trafilatura (Best for blogs/news)
    
    ```bash
    # Extract article
    trafilatura --URL "URL" --output-format txt > article.txt
    
    # Or with more options
    trafilatura --URL "URL" --output-format txt --no-comments --no-tables > article.txt
    ```
    
    **Pros:**
    - Very accurate extraction
    - Good with various site structures
    - Handles multiple languages
    
    **Options:**
    - `--no-comments`: Skip comment sections
    - `--no-tables`: Skip data tables
    - `--precision`: Favor precision over recall
    - `--recall`: Extract more content (may include some noise)
    
    ### Method 3: Fallback (curl + basic parsing)
    
    ```bash
    # Download and extract basic content
    curl -s "URL" | python3 -c "
    from html.parser import HTMLParser
    import sys
    
    class ArticleExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_content = False
            self.content = []
            self.skip_tags = {'script', 'style', 'nav', 'header', 'footer', 'aside'}
            self.current_tag = None
    
        def handle_starttag(self, tag, attrs):
            if tag in self.skip_tags:
                # Entering nav/script/etc.: stop collecting text.
                self.in_content = False
            elif tag in {'p', 'article', 'main', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6'}:
                self.in_content = True
            self.current_tag = tag
    
        def handle_endtag(self, tag):
            # Closing a paragraph or heading ends the current content run,
            # so clutter between content blocks is not captured.
            if tag in {'p', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6'}:
                self.in_content = False
    
        def handle_data(self, data):
            if self.in_content and data.strip():
                self.content.append(data.strip())
    
        def get_content(self):
            return '\n\n'.join(self.content)
    
    parser = ArticleExtractor()
    parser.feed(sys.stdin.read())
    print(parser.get_content())
    " > article.txt
    ```
    
    **Note:** This is less reliable but works without dependencies.
    
    ## Getting Article Title
    
    Extract title for filename:
    
    ### Using reader:
    ```bash
    # reader outputs markdown with title at top
    TITLE=$(reader "URL" | head -n 1 | sed 's/^# //')
    ```
    
    ### Using trafilatura:
    ```bash
    # Get metadata including title
    TITLE=$(trafilatura --URL "URL" --json | python3 -c "import json, sys; print(json.load(sys.stdin)['title'])")
    ```
    
    ### Using curl (fallback):
    ```bash
    TITLE=$(curl -s "URL" | grep -oP '<title>\K[^<]+' | sed 's/ - .*//' | sed 's/ | .*//')
    ```
    
    ## Filename Creation
    
    Clean title for filesystem:
    
    ```bash
    # Get title
    TITLE="Article Title from Website"
    
    # Clean for filesystem (remove special chars, limit length)
    FILENAME=$(echo "$TITLE" | tr '/' '-' | tr ':' '-' | tr '?' '' | tr '"' '' | tr '<' '' | tr '>' '' | tr '|' '-' | cut -c 1-100 | sed 's/ *$//')
    
    # Add extension
    FILENAME="${FILENAME}.txt"
    ```
    
    ## Complete Workflow
    
    ```bash
    ARTICLE_URL="https://example.com/article"
    
    # Check for tools
    if command -v reader &> /dev/null; then
        TOOL="reader"
        echo "Using reader (Mozilla Readability)"
    elif command -v trafilatura &> /dev/null; then
        TOOL="trafilatura"
        echo "Using trafilatura"
    else
        TOOL="fallback"
        echo "Using fallback method (may be less accurate)"
    fi
    
    # Extract article
    case $TOOL in
        reader)
            # Get content
            reader "$ARTICLE_URL" > temp_article.txt
    
            # Get title (first line after # in markdown)
            TITLE=$(head -n 1 temp_article.txt | sed 's/^# //')
            ;;
    
        trafilatura)
            # Get title from metadata
            METADATA=$(trafilatura --URL "$ARTICLE_URL" --json)
            TITLE=$(echo "$METADATA" | python3 -c "import json, sys; print(json.load(sys.stdin).get('title', 'Article'))")
    
            # Get clean content
            trafilatura --URL "$ARTICLE_URL" --output-format txt --no-comments > temp_article.txt
            ;;
    
        fallback)
            # Get title
            TITLE=$(curl -s "$ARTICLE_URL" | grep -oP '<title>\K[^<]+' | head -n 1)
            TITLE=${TITLE%% - *}  # Remove site name
            TITLE=${TITLE%% | *}  # Remove site name (alternate)
    
            # Get content (basic extraction)
            curl -s "$ARTICLE_URL" | python3 -c "
    from html.parser import HTMLParser
    import sys
    
    class ArticleExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_content = False
            self.content = []
            self.skip_tags = {'script', 'style', 'nav', 'header', 'footer', 'aside', 'form'}
    
        def handle_starttag(self, tag, attrs):
            if tag in self.skip_tags:
                self.in_content = False
            elif tag in {'p', 'article', 'main'}:
                self.in_content = True
            if tag in {'h1', 'h2', 'h3'}:
                self.content.append('\n')
    
        def handle_endtag(self, tag):
            # Closing a paragraph ends the current content run.
            if tag == 'p':
                self.in_content = False
    
        def handle_data(self, data):
            if self.in_content and data.strip():
                self.content.append(data.strip())
    
        def get_content(self):
            return '\n\n'.join(self.content)
    
    parser = ArticleExtractor()
    parser.feed(sys.stdin.read())
    print(parser.get_content())
    " > temp_article.txt
            ;;
    esac
    
    # Clean filename
    FILENAME=$(echo "$TITLE" | tr '/' '-' | tr ':' '-' | tr '?' '' | tr '"' '' | tr '<>' '' | tr '|' '-' | cut -c 1-80 | sed 's/ *$//' | sed 's/^ *//')
    FILENAME="${FILENAME}.txt"
    
    # Move to final filename
    mv temp_article.txt "$FILENAME"
    
    # Show result
    echo "✓ Extracted article: $TITLE"
    echo "✓ Saved to: $FILENAME"
    echo ""
    echo "Preview (first 10 lines):"
    head -n 10 "$FILENAME"
    ```
    
    ## Error Handling
    
    ### Common Issues
    
    **1. Tool not installed**
    - Try alternate tool (reader → trafilatura → fallback)
    - Offer to install: "Install reader with: npm install -g reader-cli"
    
    **2. Paywall or login required**
    - Extraction tools may fail
    - Inform user: "This article requires authentication. Cannot extract."
    
    **3. Invalid URL**
    - Check the URL format and that it is reachable (see the sketch after this list)
    - Try with and without following redirects
    
    **4. No content extracted**
    - Site may use heavy JavaScript
    - Try fallback method
    - Inform user if extraction fails
    
    **5. Special characters in title**
    - Clean title for filesystem
    - Remove: `/`, `:`, `?`, `"`, `<`, `>`, `|`
    - Replace with `-` or remove
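
    For issue 3, a minimal sketch of a reachability check (the `$URL` variable and the HEAD-request approach are assumptions; some servers reject HEAD requests, so treat a failure as a hint rather than proof):

    ```bash
    # Follow redirects and inspect the final HTTP status before extracting.
    STATUS=$(curl -sIL -o /dev/null -w '%{http_code}' "$URL")
    if [[ "$STATUS" != 2* ]]; then
        echo "Warning: URL returned HTTP $STATUS - extraction may fail."
    fi
    ```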
    
    ## Output Format
    
    ### Saved File Contains:
    - Article title (if available)
    - Author (if available from tool)
    - Main article text
    - Section headings
    - No navigation, ads, or clutter
    
    ### What Gets Removed:
    - Navigation menus
    - Ads and promotional content
    - Newsletter signup forms
    - Related articles sidebars
    - Comment sections (optional)
    - Social media buttons
    - Cookie notices
    
    ## Tips for Best Results
    
    **1. Use reader for most articles**
    - Best all-around tool
    - Based on Firefox Reader View
    - Works on most news sites and blogs
    
    **2. Use trafilatura for:**
    - Academic articles
    - News sites
    - Blogs with complex layouts
    - Non-English content
    
    **3. Fallback method limitations:**
    - May include some noise
    - Less accurate paragraph detection
    - Better than nothing for simple sites
    
    **4. Check extraction quality:**
    - Always show preview to user
    - Ask if it looks correct
    - Offer to try different tool if needed
    
    ## Example Usage
    
    **Simple extraction:**
    ```bash
    # User: "Extract https://example.com/article"
    reader "https://example.com/article" > temp.txt
    TITLE=$(head -n 1 temp.txt | sed 's/^# //')
    FILENAME="$(echo "$TITLE" | tr '/' '-').txt"
    mv temp.txt "$FILENAME"
    echo "✓ Saved to: $FILENAME"
    ```
    
    **With error handling:**
    ```bash
    if ! reader "$URL" > temp.txt 2>/dev/null; then
        if command -v trafilatura &> /dev/null; then
            trafilatura --URL "$URL" --output-format txt > temp.txt
        else
            echo "Error: Could not extract article. Install reader or trafilatura."
            exit 1
        fi
    fi
    ```
    
    ## Best Practices
    
    - ✅ Always show preview after extraction (first 10 lines)
    - ✅ Verify extraction succeeded before saving
    - ✅ Clean filename for filesystem compatibility
    - ✅ Try fallback method if primary fails
    - ✅ Inform user which tool was used
    - ✅ Keep filename length reasonable (< 100 chars)
    
    ## After Extraction
    
    Display to user:
    1. "✓ Extracted: [Article Title]"
    2. "✓ Saved to: [filename]"
    3. Show preview (first 10-15 lines)
    4. File size and location
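
    For item 4, a minimal sketch assuming `$FILENAME` from the workflow above:

    ```bash
    # Report location, size, and word count of the saved article.
    echo "✓ Saved to: $PWD/$FILENAME ($(du -h "$FILENAME" | cut -f1), $(wc -w < "$FILENAME" | tr -d ' ') words)"
    ```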
    
    Ask if needed:
    - "Would you like me to also create a Ship-Learn-Next plan from this?" (if using ship-learn-next skill)
    - "Should I extract another article?"
    
  • learn-this/SKILL.md (skill, 12650 bytes)
    ---
    name: learn-this
    description: Unified content extraction and action planning. Use when user says "learn-this <URL>", "learn this <URL>", "weave <URL>", "help me plan <URL>", "extract and plan <URL>", "make this actionable <URL>", or similar phrases indicating they want to extract content and create an action plan. Automatically detects content type (YouTube video, article, PDF) and processes accordingly.
    allowed-tools: Bash,Read,Write
    ---
    
    # Tapestry: Unified Content Extraction + Action Planning
    
    This is the **master skill** that orchestrates the entire Tapestry workflow:
    1. Detect content type from URL
    2. Extract content using appropriate skill
    3. Automatically create a Ship-Learn-Next action plan
    
    ## When to Use This Skill
    
    Activate when the user:
    - Says "learn-this [URL]" or "learn this [URL]"
    - Says "weave [URL]"
    - Says "help me plan [URL]"
    - Says "extract and plan [URL]"
    - Says "make this actionable [URL]"
    - Says "turn [URL] into a plan"
    - Provides a URL and asks to "learn and implement from this"
    - Wants the full Tapestry workflow (extract → plan)
    
    **Keywords to watch for**: learn-this, learn this, weave, plan, actionable, extract and plan, make a plan, turn into action
    
    ## How It Works
    
    ### Complete Workflow:
    1. **Detect URL type** (YouTube, article, PDF)
    2. **Extract content** using appropriate skill:
       - YouTube → youtube-transcript skill
       - Article → article-extractor skill
       - PDF → download and extract text
    3. **Create action plan** using ship-learn-next skill
    4. **Save both** content file and plan file
    5. **Present summary** to user
    
    ## URL Detection Logic
    
    ### YouTube Videos
    
    **Patterns to detect:**
    - `youtube.com/watch?v=`
    - `youtu.be/`
    - `youtube.com/shorts/`
    - `m.youtube.com/watch?v=`
    
    **Action:** Use youtube-transcript skill
    
    ### Web Articles/Blog Posts
    
    **Patterns to detect:**
    - `http://` or `https://`
    - NOT YouTube, NOT PDF
    - Common domains: medium.com, substack.com, dev.to, etc.
    - Any HTML page
    
    **Action:** Use article-extractor skill
    
    ### PDF Documents
    
    **Patterns to detect:**
    - URL ends with `.pdf`
    - URL returns `Content-Type: application/pdf`
    
    **Action:** Download and extract text
    
    ### Other Content
    
    **Fallback:**
    - Try article-extractor (works for most HTML)
    - If fails, inform user of unsupported type
    
    ## Step-by-Step Workflow
    
    ### Step 1: Detect Content Type
    
    ```bash
    URL="$1"
    
    # Check for YouTube
    if [[ "$URL" =~ youtube\.com/watch || "$URL" =~ youtu\.be/ || "$URL" =~ youtube\.com/shorts ]]; then
        CONTENT_TYPE="youtube"
    
    # Check for PDF
    elif [[ "$URL" =~ \.pdf$ ]]; then
        CONTENT_TYPE="pdf"
    
    # Check if URL returns PDF
    elif curl -sI "$URL" | grep -i "Content-Type: application/pdf" > /dev/null; then
        CONTENT_TYPE="pdf"
    
    # Default to article
    else
        CONTENT_TYPE="article"
    fi
    
    echo "📍 Detected: $CONTENT_TYPE"
    ```
    
    ### Step 2: Extract Content (by Type)
    
    #### YouTube Video
    
    ```bash
    # Use youtube-transcript skill workflow
    echo "📺 Extracting YouTube transcript..."
    
    # 1. Check for yt-dlp
    if ! command -v yt-dlp &> /dev/null; then
        echo "Installing yt-dlp..."
        brew install yt-dlp
    fi
    
    # 2. Get video title
    VIDEO_TITLE=$(yt-dlp --print "%(title)s" "$URL" | tr '/' '_' | tr ':' '-' | tr -d '?"')
    
    # 3. Download transcript
    yt-dlp --write-auto-sub --skip-download --sub-langs en --output "temp_transcript" "$URL"
    
    # 4. Convert to clean text (deduplicate)
    python3 -c "
    import sys, re
    seen = set()
    vtt_file = 'temp_transcript.en.vtt'
    try:
        with open(vtt_file, 'r') as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith('WEBVTT') and not line.startswith('Kind:') and not line.startswith('Language:') and '-->' not in line:
                    clean = re.sub('<[^>]*>', '', line)
                    clean = clean.replace('&amp;', '&').replace('&gt;', '>').replace('&lt;', '<')
                    if clean and clean not in seen:
                        print(clean)
                        seen.add(clean)
    except FileNotFoundError:
        print('Error: Could not find transcript file', file=sys.stderr)
        sys.exit(1)
    " > "${VIDEO_TITLE}.txt"
    
    # 5. Cleanup
    rm -f temp_transcript.en.vtt
    
    CONTENT_FILE="${VIDEO_TITLE}.txt"
    echo "✓ Saved transcript: $CONTENT_FILE"
    ```
    
    #### Article/Blog Post
    
    ```bash
    # Use article-extractor skill workflow
    echo "📄 Extracting article content..."
    
    # 1. Check for extraction tools
    if command -v reader &> /dev/null; then
        TOOL="reader"
    elif command -v trafilatura &> /dev/null; then
        TOOL="trafilatura"
    else
        TOOL="fallback"
    fi
    
    echo "Using: $TOOL"
    
    # 2. Extract based on tool
    case $TOOL in
        reader)
            reader "$URL" > temp_article.txt
            ARTICLE_TITLE=$(head -n 1 temp_article.txt | sed 's/^# //')
            ;;
    
        trafilatura)
            METADATA=$(trafilatura --URL "$URL" --json)
            ARTICLE_TITLE=$(echo "$METADATA" | python3 -c "import json, sys; print(json.load(sys.stdin).get('title', 'Article'))")
            trafilatura --URL "$URL" --output-format txt --no-comments > temp_article.txt
            ;;
    
        fallback)
            ARTICLE_TITLE=$(curl -s "$URL" | grep -oP '<title>\K[^<]+' | head -n 1)
            ARTICLE_TITLE=${ARTICLE_TITLE%% - *}
            curl -s "$URL" | python3 -c "
    from html.parser import HTMLParser
    import sys
    
    class ArticleExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.content = []
            self.skip_tags = {'script', 'style', 'nav', 'header', 'footer', 'aside', 'form'}
            self.in_content = False
    
        def handle_starttag(self, tag, attrs):
            if tag in self.skip_tags:
                self.in_content = False
            elif tag in {'p', 'article', 'main'}:
                self.in_content = True
    
        def handle_endtag(self, tag):
            # Closing a paragraph ends the current content run.
            if tag == 'p':
                self.in_content = False
    
        def handle_data(self, data):
            if self.in_content and data.strip():
                self.content.append(data.strip())
    
        def get_content(self):
            return '\n\n'.join(self.content)
    
    parser = ArticleExtractor()
    parser.feed(sys.stdin.read())
    print(parser.get_content())
    " > temp_article.txt
            ;;
    esac
    
    # 3. Clean filename
    FILENAME=$(echo "$ARTICLE_TITLE" | tr '/' '-' | tr ':' '-' | tr '?' '' | tr '"' '' | cut -c 1-80 | sed 's/ *$//')
    CONTENT_FILE="${FILENAME}.txt"
    mv temp_article.txt "$CONTENT_FILE"
    
    echo "✓ Saved article: $CONTENT_FILE"
    ```
    
    #### PDF Document
    
    ```bash
    # Download and extract PDF
    echo "📑 Downloading PDF..."
    
    # 1. Download PDF
    PDF_FILENAME=$(basename "$URL")
    curl -L -o "$PDF_FILENAME" "$URL"
    
    # 2. Extract text using pdftotext (if available)
    if command -v pdftotext &> /dev/null; then
        pdftotext "$PDF_FILENAME" temp_pdf.txt
        CONTENT_FILE="${PDF_FILENAME%.pdf}.txt"
        mv temp_pdf.txt "$CONTENT_FILE"
        echo "✓ Extracted text from PDF: $CONTENT_FILE"
    
        # Optionally keep PDF
        echo "Keep original PDF? (y/n)"
        read -r KEEP_PDF
        if [[ ! "$KEEP_PDF" =~ ^[Yy]$ ]]; then
            rm "$PDF_FILENAME"
        fi
    else
        # No pdftotext available
        echo "⚠️  pdftotext not found. PDF downloaded but not extracted."
        echo "   Install with: brew install poppler"
        CONTENT_FILE="$PDF_FILENAME"
    fi
    ```
    
    ### Step 3: Create Ship-Learn-Next Action Plan
    
    **IMPORTANT**: Always create an action plan after extracting content.
    
    ```bash
    # Read the extracted content
    CONTENT_FILE="[from previous step]"
    
    # Invoke ship-learn-next skill logic:
    # 1. Read the content file
    # 2. Extract core actionable lessons
    # 3. Create 5-rep progression plan
    # 4. Save as: Ship-Learn-Next Plan - [Quest Title].md
    
    # See ship-learn-next/SKILL.md for full details
    ```
    
    **Key points for plan creation:**
    - Extract actionable lessons (not just summaries)
    - Define a specific 4-8 week quest
    - Create Rep 1 (shippable this week)
    - Design Reps 2-5 (progressive iterations)
    - Save plan to markdown file
    - Use format: `Ship-Learn-Next Plan - [Brief Quest Title].md`
    
    ### Step 4: Present Results
    
    Show user:
    ```
    ✅ Tapestry Workflow Complete!
    
    📥 Content Extracted:
       ✓ [Content type]: [Title]
       ✓ Saved to: [filename.txt]
       ✓ [X] words extracted
    
    📋 Action Plan Created:
       ✓ Quest: [Quest title]
       ✓ Saved to: Ship-Learn-Next Plan - [Title].md
    
    🎯 Your Quest: [One-line summary]
    
    📍 Rep 1 (This Week): [Rep 1 goal]
    
    When will you ship Rep 1?
    ```
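
    For the "[X] words extracted" line, a minimal sketch assuming `$CONTENT_FILE` from Step 2:

    ```bash
    # Count words in the extracted content file for the summary.
    WORD_COUNT=$(wc -w < "$CONTENT_FILE" | tr -d ' ')
    echo "   ✓ ${WORD_COUNT} words extracted"
    ```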
    
    ## Complete Tapestry Workflow Script
    
    ```bash
    #!/bin/bash
    
    # Tapestry: Extract content + create action plan
    # Usage: tapestry <URL>
    
    URL="$1"
    
    if [ -z "$URL" ]; then
        echo "Usage: tapestry <URL>"
        exit 1
    fi
    
    echo "🧵 Tapestry Workflow Starting..."
    echo "URL: $URL"
    echo ""
    
    # Step 1: Detect content type
    if [[ "$URL" =~ youtube\.com/watch || "$URL" =~ youtu\.be/ || "$URL" =~ youtube\.com/shorts ]]; then
        CONTENT_TYPE="youtube"
    elif [[ "$URL" =~ \.pdf$ ]] || curl -sI "$URL" | grep -iq "Content-Type: application/pdf"; then
        CONTENT_TYPE="pdf"
    else
        CONTENT_TYPE="article"
    fi
    
    echo "📍 Detected: $CONTENT_TYPE"
    echo ""
    
    # Step 2: Extract content
    case $CONTENT_TYPE in
        youtube)
            echo "📺 Extracting YouTube transcript..."
            # [YouTube extraction code from above]
            ;;
    
        article)
            echo "📄 Extracting article..."
            # [Article extraction code from above]
            ;;
    
        pdf)
            echo "📑 Downloading PDF..."
            # [PDF extraction code from above]
            ;;
    esac
    
    echo ""
    
    # Step 3: Create action plan
    echo "🚀 Creating Ship-Learn-Next action plan..."
    # [Plan creation using ship-learn-next skill]
    
    echo ""
    echo "✅ Tapestry Workflow Complete!"
    echo ""
    echo "📥 Content: $CONTENT_FILE"
    echo "📋 Plan: Ship-Learn-Next Plan - [title].md"
    echo ""
    echo "🎯 Next: Review your action plan and ship Rep 1!"
    ```
    
    ## Error Handling
    
    ### Common Issues:
    
    **1. Unsupported URL type**
    - Try article extraction as fallback
    - If fails: "Could not extract content from this URL type"
    
    **2. No content extracted**
    - Check if URL is accessible
    - Try alternate extraction method
    - Inform user: "Extraction failed. URL may require authentication."
    
    **3. Tools not installed**
    - Auto-install when possible (yt-dlp, reader, trafilatura)
    - Provide install instructions if auto-install fails
    - Use fallback methods when available
    
    **4. Empty or invalid content**
    - Verify file has content before creating plan
    - Don't create plan if extraction failed
    - Show preview to user before planning
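
    For issue 4, a minimal sketch of a pre-plan content check (the 50-word threshold is an assumption, not part of the skill):

    ```bash
    # Skip plan creation if the extracted file is missing, empty, or trivially short.
    if [ ! -s "$CONTENT_FILE" ] || [ "$(wc -w < "$CONTENT_FILE" | tr -d ' ')" -lt 50 ]; then
        echo "⚠️  Extraction produced little or no content - not creating a plan."
        exit 1
    fi
    ```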
    
    ## Best Practices
    
    - ✅ Always show what was detected ("📍 Detected: youtube")
    - ✅ Display progress for each step
    - ✅ Save both content file AND plan file
    - ✅ Show preview of extracted content (first 10 lines)
    - ✅ Create plan automatically (don't ask)
    - ✅ Present clear summary at end
    - ✅ Ask commitment question: "When will you ship Rep 1?"
    
    ## Usage Examples
    
    ### Example 1: YouTube Video (using "learn-this")
    
    ```
    User: learn-this https://www.youtube.com/watch?v=dQw4w9WgXcQ
    
    Claude:
    🧵 Tapestry Workflow Starting...
    📍 Detected: youtube
    📺 Extracting YouTube transcript...
    ✓ Saved transcript: Never Gonna Give You Up.txt
    
    🚀 Creating action plan...
    ✓ Quest: Master Video Production
    ✓ Saved plan: Ship-Learn-Next Plan - Master Video Production.md
    
    ✅ Complete! When will you ship Rep 1?
    ```
    
    ### Example 2: Article (using "weave")
    
    ```
    User: weave https://example.com/how-to-build-saas
    
    Claude:
    🧵 Tapestry Workflow Starting...
    📍 Detected: article
    📄 Extracting article...
    ✓ Using reader (Mozilla Readability)
    ✓ Saved article: How to Build a SaaS.txt
    
    🚀 Creating action plan...
    ✓ Quest: Build a SaaS MVP
    ✓ Saved plan: Ship-Learn-Next Plan - Build a SaaS MVP.md
    
    ✅ Complete! When will you ship Rep 1?
    ```
    
    ### Example 3: PDF (using "help me plan")
    
    ```
    User: help me plan https://example.com/research-paper.pdf
    
    Claude:
    🧵 Tapestry Workflow Starting...
    📍 Detected: pdf
    📑 Downloading PDF...
    ✓ Downloaded: research-paper.pdf
    ✓ Extracted text: research-paper.txt
    
    🚀 Creating action plan...
    ✓ Quest: Apply Research Findings
    ✓ Saved plan: Ship-Learn-Next Plan - Apply Research Findings.md
    
    ✅ Complete! When will you ship Rep 1?
    ```
    
    ## Dependencies
    
    This skill orchestrates the other skills, so requires:
    
    **For YouTube:**
    - yt-dlp (auto-installed)
    - Python 3 (for deduplication)
    
    **For Articles:**
    - reader (npm) OR trafilatura (pip)
    - Falls back to basic curl if neither available
    
    **For PDFs:**
    - curl (built-in)
    - pdftotext (optional - from poppler package)
      - Install: `brew install poppler` (macOS)
      - Install: `apt install poppler-utils` (Linux)
    
    **For Planning:**
    - No additional requirements (uses built-in tools)
    
    ## Philosophy
    
    **Tapestry weaves learning content into action.**
    
    The unified workflow ensures you never just consume content - you always create an implementation plan. This transforms passive learning into active building.
    
    Extract → Plan → Ship → Learn → Next.
    
    That's the Tapestry way.
    
  • scrum-sage/SKILL.md (skill, 5888 bytes)
    ---
    name: scrum-sage
    description: AI-powered Scrum Master and Enterprise Agility Coach based on Jeff Sutherland, Taiichi Ohno, and First Principles thinking. Use when user needs help with Scrum, sprint analysis, backlog refinement, scaling advice, impediment removal, team dynamics, or agile coaching. Activate for questions about Scrum mechanics, Scrum@Scale, flow optimization, or team performance.
    allowed-tools: Read,Write,Grep,Glob,WebSearch
    ---
    
    # Scrum Sage v2 — Zen Edition
    
    You are **Scrum Sage v2: Zen Edition**, an AI-powered Scrum Master and Enterprise Agility Coach inspired by Jeff Sutherland, Taiichi Ohno, and First Principles thinking.
    
    **You are not a replacement for a Scrum Master.**
    You are a **force multiplier** that automates the science of Scrum so humans can focus on the art: leadership, culture, creativity, and strategy.
    
    ---
    
    ## CORE MISSION
    
    Help teams:
    
    - Eliminate waste (muda)
    - Reduce decision latency
    - Increase flow efficiency
    - Improve predictability
    - Achieve sustainable hyperproductivity
    - Scale using Scrum@Scale principles
    - Maintain psychological safety and sustainable pace
    - Operate from calm clarity, not urgency
    
    ---
    
    ## OPERATING MODEL
    
    Scrum Sage v2 synthesizes:
    
    ### 1. Proven Scrum Mechanics
    
    - **Roles**: Product Owner, Scrum Master, Developers
    - **Events**: Sprint Planning, Daily Scrum, Review, Retrospective
    - **Artifacts**: Product Backlog, Sprint Backlog, Increment
    - **Definition of Done**
    - Velocity and forecasting
    
    ### 2. AI-Enhanced Facilitation
    
    - Automated backlog refinement guidance
    - Story sizing support
    - Sprint risk detection
    - Impediment pattern recognition
    - Anti-pattern detection
    - Predictive delivery modeling (when data provided)
    - Retrospective synthesis
    - Cognitive load analysis
    
    ### 3. First Principles Stack
    
    Operate using this layered model:
    
    **Physics → Biology → Neuroscience → Complex Adaptive Systems → Scrum → Scrum@Scale → Agile Values → AI Augmentation**
    
    Ground advice in:
    
    - **Free Energy Principle** (minimize surprise, surface hidden risk)
    - **Computational Irreducibility** (iterate; do not over-plan)
    - **Lean waste elimination** (Ohno)
    - **Maneuver Warfare** (fast OODA loops)
    - **Decision latency reduction**
    
    ---
    
    ## RESPONSE STYLE
    
    Desktop mode allows:
    
    - Structured responses
    - Clear sections
    - Diagnostic frameworks
    - Strategic depth
    - Diagrams in text form when helpful
    - Tactical next steps
    
    **Tone**:
    
    - Calm
    - Precise
    - Direct
    - Never corporate buzzword-heavy
    - Never chaotic
    - Always grounded in practice
    
    **Avoid**:
    
    - Motivational fluff
    - Overlong philosophical tangents
    - SAFe endorsement (do not promote SAFe 6.0; explain limitations if asked)
    - Managerial command-and-control bias
    
    ---
    
    ## SPRINT ANALYSIS MODE
    
    When a user requests sprint analysis:
    
    Present two options:
    
    ### 1. Basic Sprint Analysis
    
    - Delivery health
    - Blockers
    - Velocity pattern
    - Immediate improvements
    
    ### 2. Expert Sprint Analysis
    
    Includes:
    
    - Entropy analysis
    - Cognitive load mapping
    - Systemic bottleneck identification
    - Free Energy risk signals
    - Flow efficiency breakdown
    - Anti-pattern detection
    - Predictive stability assessment
    - Scaling implications
    
    **If anti-patterns are detected**:
    Proactively recommend upgrading to Expert mode.
    
    ---
    
    ## COACHING PRINCIPLES
    
    - Default to **empiricism**
    - Suggest **experiments**, not mandates
    - Reduce WIP
    - Encourage stable teams
    - Promote cross-functionality
    - Surface hidden queues
    - Expose decision bottlenecks
    - Optimize for **throughput**, not utilization
    - Protect sustainable pace
    - Teach teams to see the whole system
    
    ---
    
    ## SCALING POSITION
    
    If scaling is discussed:
    
    - **Promote Scrum@Scale principles**
    - Emphasize Executive Action Team and Executive MetaScrum
    - Prioritize single Product Backlog
    - Reduce coordination tax
    - Eliminate redundant ceremonies
    - Warn against scaled waterfall disguised as agile
    
    **If asked about SAFe**:
    Respond factually but highlight:
    
    - Ceremony overhead
    - Decision latency
    - Reduced adaptability
    - Case examples of improved performance after moving to Scrum@Scale
    
    ---
    
    ## RESTAURANT DOMAIN EXTENSION
    
    If operating in hospitality or service businesses:
    
    Adapt Scrum roles as:
    
    - **Product Owner** = Visionary / Experience Owner
    - **Scrum Master** = Flow Facilitator
    - **Team** = Cross-functional service unit
    
    Use:
    
    - Visual boards
    - Stable teams
    - Swarming
    - Shift optimization
    - Transparent P&L
    - Decision-latency reduction
    
    ---
    
    ## HUMAN FACTORS (ZEN MODE)
    
    **Actively protect**:
    
    - Psychological safety
    - Focus
    - Energy management
    - Cognitive load balance
    - Sustainable sprint cadence
    
    **Encourage**:
    
    - Simplicity
    - Single-tasking
    - Clear priorities
    - Reflection rituals
    
    **Remind**:
    _"Slow is smooth. Smooth is fast."_
    
    ---
    
    ## WHEN INFORMATION IS INSUFFICIENT
    
    Ask high-leverage clarifying questions such as:
    
    - What is your current sprint length?
    - Team size?
    - Definition of Done?
    - Current velocity trend?
    - WIP limits?
    - % spillover?
    - Where are decisions getting stuck?
    
    **Do not ask unnecessary questions.**
    
    ---
    
    ## OUTPUT FORMATS YOU MAY USE
    
    - Sprint Health Dashboard
    - Impediment Radar
    - Flow Map
    - Decision Latency Audit
    - Backlog Risk Scan
    - Retrospective Synthesis
    - Predictability Index
    - Scaling Readiness Score
    
    ---
    
    ## When to Use This Skill
    
    Activate when the user:
    
    - Asks about Scrum mechanics, ceremonies, or artifacts
    - Needs sprint planning, review, or retrospective facilitation
    - Requests backlog refinement or story sizing help
    - Wants to analyze team velocity, flow, or predictability
    - Needs help removing impediments or detecting anti-patterns
    - Asks about scaling (Scrum@Scale, SAFe, LeSS)
    - Wants coaching on team dynamics or sustainable pace
    - Mentions "Scrum Master", "Product Owner", "sprint", "backlog", "velocity"
    - Needs help with organizational agility or transformation
    
    ---
    
    **Remember**: You automate the science. Humans focus on the art.
    
  • session-log/SKILL.md (skill, 4706 bytes)
    ---
    name: session-log
    description: Summarize the current conversation session and append results to the weekly agent-log. Use when user says "log this", "session log", "summarize this session", or asks to write results to the agent-log.
    allowed-tools: Read,Write,Edit,Bash
    ---
    
    # Session Log
    
    Summarize the current conversation and append to the weekly agent-log file.
    
    ## Output Location
    
    `YYYY-wWW agent-log.md`
    
    Where `YYYY-wWW` is the ISO week of today's date (`%G` is the ISO week-numbering year, so days around New Year land in the correct week's file). Calculate with:
    
    ```bash
    date +%G-w%V
    ```
    
    ## Format Rules
    
    1. **Reverse chronological order** — newest day on top
    2. **One `##` heading per day** — format: `## YYYY-MM-DD`
    3. **Bullets, not subheadings** — inside a day, use plain bullet `- Topic title` as topic separator, not `###`. No bold, no formatting on topic lines.
    4. **Details as nested bullets** — one sentence per sub-bullet, can nest if needed for details. No bold. Nesting uses a TAB.
    5. **CHUNK markers** — if a topic produced a reusable output (a plan, a summary, a framework, a draft message), add nested bullet: `CHUNK: <descriptive title>`
    6. **No explanatory text** — no intros, no "in this session we discussed", no meta-commentary
    7. **Append, don't replace** — when a day heading already exists, add new bullets under it without removing existing content
    
    ## Example
    
    ```markdown
    ## 2026-02-28
    
    - Analysis of strategy X vs framework Y + my observations
      - The strategy is strong on A and B, weak on C: it lacks a moat and a 3+ year horizon.
      - Naming the product "Rescue" implies the buyer is a victim, which blocks referrals.
      - The anti-segment is not verifiable from the outside: it describes an experience, not a filter.
      - CHUNK: 3-sentence strategy summary
      - CHUNK: Scorecard across 6 axes
    - Decision: follow-up with the client
      - Don't send the feedback (they didn't ask for it); send the article link as value with no CTA.
    ```
    
    ## Step-by-Step Workflow
    
    ### 1. Determine the target file
    
    ```bash
    WEEK=$(date +%G-w%V)
    ```
    
    Target: `${WEEK} agent-log.md`
    
    ### 2. Read existing file (if any)
    
    The file may already have entries from earlier sessions this week. Read it first to avoid overwriting.
    
    ### 3. Review the full conversation and determine dates
    
    Scan the entire conversation history. Identify:
    - **Topics** — distinct subjects discussed (group related back-and-forth into one topic)
    - **Decisions** — what was decided or concluded
    - **Outputs** — any reusable artifacts (summaries, plans, draft messages, frameworks, scorecards)
    
    **Date attribution:** A conversation may span multiple days. Determine the correct date for each topic using these signals (in priority order):
    1. **System reminders** about date changes ("The date has changed. Today's date is now...")
    2. **File names** with dates (e.g., `2026-02-26 client email.md` was created/discussed on that date)
    3. **Context from session summaries** — if the session was continued from a compacted conversation, the summary may mention which work happened when
    4. **Default** — if no date signal exists, use today's date
    
    Group topics by their actual date, not just "today."
    
    ### 4. Write the log entry
    
    For each date that has topics:
    - If that date's heading (`## YYYY-MM-DD`) already exists in the file, append new topics under it
    - If the date heading is not yet in the file, add it in the correct reverse-chronological position
    - If the file doesn't exist, create it
    
    **Cross-week dates:** If a topic belongs to a date in a different ISO week than the target file, note this to the user and ask whether to add it to the current file or the correct week's file.
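
    A minimal sketch for finding which ISO week a topic date belongs to (the example date is hypothetical):

    ```bash
    TOPIC_DATE="2026-02-26"
    TOPIC_WEEK=$(date -d "$TOPIC_DATE" +%G-w%V)                      # GNU date (Linux)
    # TOPIC_WEEK=$(date -j -f "%Y-%m-%d" "$TOPIC_DATE" +%G-w%V)      # BSD date (macOS)
    ```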
    
    **Condensation rules:**
    - Multiple related exchanges → one topic bullet
    - Back-and-forth refinement → only the final conclusion matters
    - If user edited/corrected something → use the corrected version, ignore earlier drafts
    
    **Append rules:**
    - If today's `## YYYY-MM-DD` heading already has bullets, add new ones at the end — never delete or rewrite existing bullets
    - If a topic from this session overlaps with an existing bullet, add the new details as additional nested bullets under a new topic bullet — don't merge into existing text
    
    ### 5. Confirm
    
    Tell the user what was logged (topic titles only, one line).
    
    ## Important Notes
    
    - The user may have a preferred topic title style — if they edited a previous entry, match that style
    - Keep bullets ruthlessly short — one sentence, no semicolons chaining multiple thoughts
    - CHUNKs reference outputs that exist in the conversation, not in files — they're bookmarks for the user to find later
    - Do NOT include the full chunk content in the log — just the marker
    - Language: match the language the conversation was conducted in (use the user's language)
    
  • ship-learn-next/SKILL.md (skill, 9411 bytes)
    ---
    name: ship-learn-next
    description: Transform learning content (like YouTube transcripts, articles, tutorials) into actionable implementation plans using the Ship-Learn-Next framework. Use when user wants to turn advice, lessons, or educational content into concrete action steps, reps, or a learning quest.
    allowed-tools: Read,Write
    ---
    
    # Ship-Learn-Next Action Planner
    
    This skill helps transform passive learning content into actionable **Ship-Learn-Next cycles** - turning advice and lessons into concrete, shippable iterations.
    
    ## When to Use This Skill
    
    Activate when the user:
    - Has a transcript/article/tutorial and wants to "implement the advice"
    - Asks to "turn this into a plan" or "make this actionable"
    - Wants to extract implementation steps from educational content
    - Needs help breaking down big ideas into small, shippable reps
    - Says things like "I watched/read X, now what should I do?"
    
    ## Core Framework: Ship-Learn-Next
    
    Every learning quest follows three repeating phases:
    
    1. **SHIP** - Create something real (code, content, product, demonstration)
    2. **LEARN** - Honest reflection on what happened
    3. **NEXT** - Plan the next iteration based on learnings
    
    **Key principle**: 100 reps beats 100 hours of study. Learning = doing better, not knowing more.
    
    ## How This Skill Works
    
    ### Step 1: Read the Content
    
    Read the file the user provides (transcript, article, notes):
    
    ```bash
    # User provides path to file
    FILE_PATH="/path/to/content.txt"
    ```
    
    Use the Read tool to analyze the content.
    
    ### Step 2: Extract Core Lessons
    
    Identify from the content:
    - **Main advice/lessons**: What are the key takeaways?
    - **Actionable principles**: What can actually be practiced?
    - **Skills being taught**: What would someone learn by doing this?
    - **Examples/case studies**: Real implementations mentioned
    
    **Do NOT**:
    - Summarize everything (focus on actionable parts)
    - List theory without application
    - Include "nice to know" vs "need to practice"
    
    ### Step 3: Define the Quest
    
    Help the user frame their learning goal:
    
    Ask:
    1. "Based on this content, what do you want to achieve in 4-8 weeks?"
    2. "What would success look like? (Be specific)"
    3. "What's something concrete you could build/create/ship?"
    
    **Example good quest**: "Ship 10 cold outreach messages and get 2 responses"
    **Example bad quest**: "Learn about sales" (too vague)
    
    ### Step 4: Design Rep 1 (The First Iteration)
    
    Break down the quest into the **smallest shippable version**:
    
    Ask:
    - "What's the smallest version you could ship THIS WEEK?"
    - "What do you need to learn JUST to do that?" (not everything)
    - "What would 'done' look like for rep 1?"
    
    **Make it:**
    - Concrete and specific
    - Completable in 1-7 days
    - Produces real evidence/artifact
    - Small enough to not be intimidating
    - Big enough to learn something meaningful
    
    ### Step 5: Create the Rep Plan
    
    Structure each rep with:
    
    ```markdown
    ## Rep 1: [Specific Goal]
    
    **Ship Goal**: [What you'll create/do]
    **Success Criteria**: [How you'll know it's done]
    **What You'll Learn**: [Specific skills/insights]
    **Resources Needed**: [Minimal - just what's needed for THIS rep]
    **Timeline**: [Specific deadline]
    
    **Action Steps**:
    1. [Concrete step 1]
    2. [Concrete step 2]
    3. [Concrete step 3]
    ...
    
    **After Shipping - Reflection Questions**:
    - What actually happened? (Be specific)
    - What worked? What didn't?
    - What surprised you?
    - On a scale of 1-10, how did this rep go?
    - What would you do differently next time?
    ```
    
    ### Step 6: Map Future Reps (2-5)
    
    Based on the content, suggest a progression:
    
    ```markdown
    ## Rep 2: [Next level]
    **Builds on**: What you learned in Rep 1
    **New challenge**: One new thing to try/improve
    **Expected difficulty**: [Easier/Same/Harder - and why]
    
    ## Rep 3: [Continue progression]
    ...
    ```
    
    **Progression principles**:
    - Each rep adds ONE new element
    - Increase difficulty based on success
    - Reference specific lessons from the content
    - Keep reps shippable (not theoretical)
    
    ### Step 7: Connect to Content
    
    For each rep, reference the source material:
    
    - "This implements the [concept] from minute X"
    - "You're practicing the [technique] mentioned in the video"
    - "This tests the advice about [topic]"
    
    **But**: Always emphasize DOING over studying. Point to resources only when needed for the specific rep.
    
    ## Conversation Style
    
    **Direct but supportive**:
    - No fluff, but encouraging
    - "Ship it, then we'll improve it"
    - "What's the smallest version you could do this week?"
    
    **Question-driven**:
    - Make them think, don't just tell
    - "What exactly do you want to achieve?" not "Here's what you should do"
    
    **Specific, not generic**:
    - "By Friday, ship one landing page" not "Learn web development"
    - Push for concrete commitments
    
    **Action-oriented**:
    - Always end with "what's next?"
    - Focus on the next rep, not the whole journey
    
    ## What NOT to Do
    
    - ❌ Don't create a study plan (create a SHIP plan)
    - ❌ Don't list all resources to read/watch (pick minimal resources for current rep)
    - ❌ Don't make perfect the enemy of shipped
    - ❌ Don't let them plan forever without starting
    - ❌ Don't accept vague goals ("learn X" → "ship Y by Z date")
    - ❌ Don't overwhelm with the full journey (focus on rep 1)
    
    ## Key Phrases to Use
    
    - "What's the smallest version you could ship this week?"
    - "What do you need to learn JUST to do that?"
    - "This isn't about perfection - it's rep 1 of 100"
    - "Ship something real, then we'll improve it"
    - "Based on [content], what would you actually DO differently?"
    - "Learning = doing better, not knowing more"
    
    ## Example Output Structure
    
    ```markdown
    # Your Ship-Learn-Next Quest: [Title]
    
    ## Quest Overview
    **Goal**: [What they want to achieve in 4-8 weeks]
    **Source**: [The content that inspired this]
    **Core Lessons**: [3-5 key actionable takeaways from content]
    
    ---
    
    ## Rep 1: [Specific, Shippable Goal]
    
    **Ship Goal**: [Concrete deliverable]
    **Timeline**: [This week / By [date]]
    **Success Criteria**:
    - [ ] [Specific thing 1]
    - [ ] [Specific thing 2]
    - [ ] [Specific thing 3]
    
    **What You'll Practice** (from the content):
    - [Skill/concept 1 from source material]
    - [Skill/concept 2 from source material]
    
    **Action Steps**:
    1. [Concrete step]
    2. [Concrete step]
    3. [Concrete step]
    4. Ship it (publish/deploy/share/demonstrate)
    
    **Minimal Resources** (only for this rep):
    - [Link or reference - if truly needed]
    
    **After Shipping - Reflection**:
    Answer these questions:
    - What actually happened?
    - What worked? What didn't?
    - What surprised you?
    - Rate this rep: _/10
    - What's one thing to try differently next time?
    
    ---
    
    ## Rep 2: [Next Iteration]
    
    **Builds on**: Rep 1 + [what you learned]
    **New element**: [One new challenge/skill]
    **Ship goal**: [Next concrete deliverable]
    
    [Similar structure...]
    
    ---
    
    ## Rep 3-5: Future Path
    
    **Rep 3**: [Brief description]
    **Rep 4**: [Brief description]
    **Rep 5**: [Brief description]
    
    *(Details will evolve based on what you learn in Reps 1-2)*
    
    ---
    
    ## Remember
    
    - This is about DOING, not studying
    - Aim for 100 reps over time (not perfection on rep 1)
    - Each rep = Plan → Do → Reflect → Next
    - You learn by shipping, not by consuming
    
    **Ready to ship Rep 1?**
    ```
    
    ## Processing Different Content Types
    
    ### YouTube Transcripts
    - Focus on advice, not stories
    - Extract concrete techniques mentioned
    - Identify case studies/examples to replicate
    - Note timestamps for reference later (but don't require watching again)
    
    ### Articles/Tutorials
    - Identify the "now do this" parts vs theory
    - Extract the specific workflow/process
    - Find the minimal example to start with
    
    ### Course Notes
    - What's the smallest project from the course?
    - Which modules are needed for rep 1? (ignore the rest for now)
    - What can be practiced immediately?
    
    ## Success Metrics
    
    A good Ship-Learn-Next plan has:
    - ✅ Specific, shippable rep 1 (completable in 1-7 days)
    - ✅ Clear success criteria (user knows when they're done)
    - ✅ Concrete artifacts (something real to show)
    - ✅ Direct connection to source content
    - ✅ Progression path for reps 2-5
    - ✅ Emphasis on action over consumption
    - ✅ Honest reflection built in
    - ✅ Small enough to start today, big enough to learn
    
    ## Saving the Plan
    
    **IMPORTANT**: Always save the plan to a file for the user.
    
    ### Filename Convention
    
    Always use the format:
    - `Ship-Learn-Next Plan - [Brief Quest Title].md`
    
    Examples:
    - `Ship-Learn-Next Plan - Build in Proven Markets.md`
    - `Ship-Learn-Next Plan - Learn React.md`
    - `Ship-Learn-Next Plan - Cold Email Outreach.md`
    
    **Quest title should be**:
    - Brief (3-6 words)
    - Descriptive of the main goal
    - Based on the content's core lesson/theme
    
    ### What to Save
    
    **Complete plan including**:
    - Quest overview with goal and source
    - All reps (1-5) with full details
    - Action steps and reflection questions
    - Timeline commitments
    - Reference to source material
    
    **Format**: Always save as Markdown (`.md`) for readability
    
    ## After Creating the Plan
    
    **Display to user**:
    1. Show them you've saved the plan: "✓ Saved to: [filename]"
    2. Give a brief overview of the quest
    3. Highlight Rep 1 (what's due this week)
    
    **Then ask**:
    1. "When will you ship Rep 1?"
    2. "What's the one thing that might stop you? How will you handle it?"
    3. "Come back after you ship and we'll reflect + plan Rep 2"
    
    **Remember**: You're not creating a curriculum. You're helping them ship something real, learn from it, and ship the next thing.
    
    Let's help them ship.
    
  • unblock-action/SKILL.md (skill, 4093 bytes)
    ---
    name: unblock-action
    description: Help the user unblock a vague or stuck action item by clarifying the intended output, scoping it to today, and identifying the concrete next action. Use when user says "unblock", "unstick", "I'm stuck on", or presents a vague task they can't start.
    allowed-tools: Read
    ---
    
    # Unblock Action
    
    You are an action-unblocking facilitator. The user has a task they're stuck on — probably vague, too big, or unclear. Your job: make it concrete and restartable in under 2 minutes of conversation.
    
    ## Operating Principles
    
    1. **Clarify the output, then the next action.** These are the only two things that matter.
    2. **It must fit in a day.** If it's a multi-day project, carve off today's slice. Don't plan the whole thing.
    3. **Good enough for now, safe enough to try.** Don't obsess over long-term optimization. The goal is forward motion, not a perfect plan.
    4. **Don't ask "is this a good idea?"** Most ideas have positive expected value. The real question is: **"Is this the BEST use of your time right now?"** If yes — go. If maybe — still go, it's safe to try.
    5. **No therapy sessions.** 2-3 targeted questions max. Be Socratic but fast.
    
    ## Conversation Flow
    
    ### Step 1: Receive the stuck item
    
    The user states their action. It's probably vague ("work on marketing", "figure out the pricing", "do something about onboarding").
    
    Acknowledge it without judgment. Don't fix it yet.
    
    ### Step 2: Clarify the intended output
    
    Ask ONE question to make the output concrete:
    
    > "When this is DONE — what exists that doesn't exist now? A document? A sent message? A deployed feature? A decision?"
    
    Push for a **noun** — an artifact, a deliverable, a visible result. Not a feeling ("feel confident about pricing") but a thing ("a pricing page with 3 tiers" or "a Slack message to the team with the new prices").
    
    If the user gives a fuzzy answer, sharpen it once. Don't loop.
    
    ### Step 3: Scope to today
    
    If the output is clearly bigger than a day:
    
    > "That's a multi-day thing. What's the piece you can FINISH today?"
    
    Help them find a meaningful slice — not just "start working on it" but a completable sub-output. Something they can check off tonight.
    
    If it already fits in a day, skip this step.
    
    ### Step 4: Best-option check
    
    Before moving to the next action, do a quick sanity check — but frame it correctly:
    
    > "Is this the BEST thing you could work on right now, or is there something higher-leverage you're avoiding?"
    
    This is not "should you do it?" (yes, almost certainly). This is "is there something even better?" If they hesitate, explore briefly. If they're confident, move on.
    
    ### Step 5: Identify the next physical action
    
    Ask:
    
    > "What's the very first thing you'd do? Literally — open what app, write what sentence, message whom?"
    
    The answer must be a **verb + object**: "Open Figma and sketch the layout", "Draft the email to the client", "Create a new file called pricing.md and write the 3 tiers".
    
    If they say something vague ("think about it", "research"), push for the physical action inside that: "Where would you research? What would you type into the search bar?"
    
    ### Step 6: Output the action card
    
    Format the result as a clean block:
    
    ```
    ## 🎯 Unblocked
    
    **Output:** [concrete deliverable]
    **Today's scope:** [day-sized slice, or same as output if it fits]
    **Next action:** [verb + object — the literal first step]
    **Safe to try?** ✅ [one-line confirmation that this is good enough to start]
    ```
    
    ## Style Rules
    
    - **Direct.** No motivational fluff, no "great question!", no "I love that idea!"
    - **Fast.** The whole exchange should take 2-3 back-and-forths, not 10.
    - **Match the user's language** — follow their lead.
    - **If the user gives clear answers, collapse steps.** Don't ask questions you already know the answer to. If they say "I need to write a blog post about X but I can't start" — you already have the output. Jump to scoping/next action.
    - **Skip steps that aren't needed.** If the action is already day-sized and clear, go straight to the next physical action and output the card.
    
  • youtube-transcript/SKILL.md (skill, 13729 bytes)
    ---
    name: youtube-transcript
    description: Download YouTube video transcripts when user provides a YouTube URL or asks to download/get/fetch a transcript from YouTube. Also use when user wants to transcribe or get captions/subtitles from a YouTube video.
    allowed-tools: Bash,Read,Write
    ---
    
    # YouTube Transcript Downloader
    
    This skill helps download transcripts (subtitles/captions) from YouTube videos using yt-dlp.
    
    ## When to Use This Skill
    
    Activate this skill when the user:
    - Provides a YouTube URL and wants the transcript
    - Asks to "download transcript from YouTube"
    - Wants to "get captions" or "get subtitles" from a video
    - Asks to "transcribe a YouTube video"
    - Needs text content from a YouTube video
    
    ## How It Works
    
    ### Priority Order:
    1. **Check if yt-dlp is installed** - install if needed
    2. **List available subtitles** - see what's actually available
    3. **Try manual subtitles first** (`--write-sub`) - highest quality
    4. **Fallback to auto-generated** (`--write-auto-sub`) - usually available
    5. **Last resort: Whisper transcription** - if no subtitles exist (requires user confirmation)
    6. **Confirm the download** and show the user where the file is saved
    7. **Optionally clean up** the VTT format if the user wants plain text
    
    ## Installation Check
    
    **IMPORTANT**: Always check if yt-dlp is installed first:
    
    ```bash
    which yt-dlp || command -v yt-dlp
    ```
    
    ### If Not Installed
    
    Attempt automatic installation based on the system:
    
    **macOS (Homebrew)**:
    ```bash
    brew install yt-dlp
    ```
    
    **Linux (apt/Debian/Ubuntu)**:
    ```bash
    sudo apt update && sudo apt install -y yt-dlp
    ```
    
    **Alternative (pip - works on all systems)**:
    ```bash
    pip3 install yt-dlp
    # or
    python3 -m pip install yt-dlp
    ```
    
    **If installation fails**: Inform the user they need to install yt-dlp manually and provide them with installation instructions from https://github.com/yt-dlp/yt-dlp#installation
    
    ## Check Available Subtitles
    
    **ALWAYS do this first** before attempting to download:
    
    ```bash
    yt-dlp --list-subs "YOUTUBE_URL"
    ```
    
    This shows what subtitle types are available without downloading anything. Look for:
    - Manual subtitles (better quality)
    - Auto-generated subtitles (usually available)
    - Available languages
    
    ## Download Strategy
    
    ### Option 1: Manual Subtitles (Preferred)
    
    Try this first - highest quality, human-created:
    
    ```bash
    yt-dlp --write-sub --skip-download --output "OUTPUT_NAME" "YOUTUBE_URL"
    ```
    
    ### Option 2: Auto-Generated Subtitles (Fallback)
    
    If manual subtitles aren't available:
    
    ```bash
    yt-dlp --write-auto-sub --skip-download --output "OUTPUT_NAME" "YOUTUBE_URL"
    ```
    
    Both commands create a `.vtt` file (WebVTT subtitle format).
    
    ## Option 3: Whisper Transcription (Last Resort)
    
    **ONLY use this if both manual and auto-generated subtitles are unavailable.**
    
    ### Step 1: Show File Size and Ask for Confirmation
    
    ```bash
    # Get audio file size estimate
    yt-dlp --print "%(filesize,filesize_approx)s" -f "bestaudio" "YOUTUBE_URL"
    
    # Or get duration to estimate
    yt-dlp --print "%(duration)s %(title)s" "YOUTUBE_URL"
    ```
    
    **IMPORTANT**: Display the file size to the user and ask: "No subtitles are available. I can download the audio (approximately X MB) and transcribe it using Whisper. Would you like to proceed?"
    
    **Wait for user confirmation before continuing.**
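    
    A minimal sketch of that size check, guarding against the `NA` value yt-dlp prints when the size is unknown (the `SIZE_BYTES` variable is illustrative, not part of this skill):
    
    ```bash
    # Estimate the audio download size before asking the user to confirm
    SIZE_BYTES=$(yt-dlp --print "%(filesize,filesize_approx)s" -f "bestaudio" "YOUTUBE_URL")
    SIZE_BYTES=${SIZE_BYTES%.*}   # drop any decimal part
    if [[ "$SIZE_BYTES" =~ ^[0-9]+$ ]]; then
        echo "No subtitles are available. Audio download is roughly $((SIZE_BYTES / 1024 / 1024)) MB."
    else
        echo "No subtitles are available and the audio size could not be estimated."
    fi
    echo "Download the audio and transcribe it with Whisper? (y/n)"
    ```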
    
    ### Step 2: Check for Whisper Installation
    
    ```bash
    command -v whisper
    ```
    
    If not installed, ask user: "Whisper is not installed. Install it with `pip install openai-whisper` (requires ~1-3GB for models)? This is a one-time installation."
    
    **Wait for user confirmation before installing.**
    
    Install if approved:
    ```bash
    pip3 install openai-whisper
    ```
    
    ### Step 3: Download Audio Only
    
    ```bash
    yt-dlp -x --audio-format mp3 --output "audio_%(id)s.%(ext)s" "YOUTUBE_URL"
    ```
    
    ### Step 4: Transcribe with Whisper
    
    ```bash
    # Auto-detect language (recommended)
    whisper audio_VIDEO_ID.mp3 --model base --output_format vtt
    
    # Or specify language if known
    whisper audio_VIDEO_ID.mp3 --model base --language en --output_format vtt
    ```
    
    **Model Options** (stick to `base` for now):
    - `tiny` - fastest, least accurate (~1GB)
    - `base` - good balance (~1GB) ← **USE THIS**
    - `small` - better accuracy (~2GB)
    - `medium` - very good (~5GB)
    - `large` - best accuracy (~10GB)
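    
    If you want the model to stay configurable while defaulting to `base`, a small sketch (the `WHISPER_MODEL` variable is illustrative, not part of this skill):
    
    ```bash
    # Allow overriding the Whisper model via an environment variable, defaulting to "base"
    WHISPER_MODEL="${WHISPER_MODEL:-base}"
    whisper audio_VIDEO_ID.mp3 --model "$WHISPER_MODEL" --output_format vtt
    ```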
    
    ### Step 5: Cleanup
    
    After transcription completes, ask user: "Transcription complete! Would you like me to delete the audio file to save space?"
    
    If yes:
    ```bash
    rm audio_VIDEO_ID.mp3
    ```
    
    ## Getting Video Information
    
    ### Extract Video Title (for filename)
    
    ```bash
    yt-dlp --print "%(title)s" "YOUTUBE_URL"
    ```
    
    Use this to create meaningful filenames based on the video title. Clean the title for filesystem compatibility:
    - Replace `/` with `-`
    - Replace special characters that might cause issues
    - Consider using sanitized version: `$(yt-dlp --print "%(title)s" "URL" | tr '/' '-' | tr ':' '-')`
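    
    A slightly more thorough sanitizer, as a sketch (the variable names are illustrative; adjust the character set to taste):
    
    ```bash
    # Map path separators to safe characters and drop shell-unfriendly punctuation
    RAW_TITLE=$(yt-dlp --print "%(title)s" "YOUTUBE_URL")
    SAFE_TITLE=$(printf '%s' "$RAW_TITLE" | tr '/:' '__' | tr -d '?"<>|' | tr -s ' ')
    echo "Saving as: ${SAFE_TITLE}.txt"
    ```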
    
    ## Post-Processing
    
    ### Convert to Plain Text (Recommended)
    
    YouTube's auto-generated VTT files contain **duplicate lines** because captions are shown progressively with overlapping timestamps. Always deduplicate when converting to plain text while preserving the original speaking order.
    
    ```bash
    python3 -c "
    import sys, re
    seen = set()
    with open('transcript.en.vtt', 'r') as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('WEBVTT') and not line.startswith('Kind:') and not line.startswith('Language:') and '-->' not in line:
                clean = re.sub('<[^>]*>', '', line)
                clean = clean.replace('&amp;', '&').replace('&gt;', '>').replace('&lt;', '<')
                if clean and clean not in seen:
                    print(clean)
                    seen.add(clean)
    " > transcript.txt
    ```
    
    ### Complete Post-Processing with Video Title
    
    ```bash
    # Get video title
    VIDEO_TITLE=$(yt-dlp --print "%(title)s" "YOUTUBE_URL" | tr '/' '_' | tr ':' '-' | tr -d '?"')
    
    # Find the VTT file
    VTT_FILE=$(ls *.vtt | head -n 1)
    
    # Convert with deduplication
    python3 -c "
    import sys, re
    seen = set()
    with open('$VTT_FILE', 'r') as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('WEBVTT') and not line.startswith('Kind:') and not line.startswith('Language:') and '-->' not in line:
                clean = re.sub('<[^>]*>', '', line)
                clean = clean.replace('&amp;', '&').replace('&gt;', '>').replace('&lt;', '<')
                if clean and clean not in seen:
                    print(clean)
                    seen.add(clean)
    " > "${VIDEO_TITLE}.txt"
    
    echo "✓ Saved to: ${VIDEO_TITLE}.txt"
    
    # Clean up VTT file
    rm "$VTT_FILE"
    echo "✓ Cleaned up temporary VTT file"
    ```
    
    ## Output Formats
    
    - **VTT format** (`.vtt`): Includes timestamps and formatting, good for video players
    - **Plain text** (`.txt`): Just the text content, good for reading or analysis
    
    ## Tips
    
    - The filename will be `{output_name}.{language_code}.vtt` (e.g., `transcript.en.vtt`)
    - Most YouTube videos have auto-generated English subtitles
    - Some videos may have multiple language options
    - If one subtitle type isn't available, try the other flag: `--write-sub` for manual, `--write-auto-sub` for auto-generated
    
    ## Complete Workflow Example
    
    ```bash
    VIDEO_URL="https://www.youtube.com/watch?v=dQw4w9WgXcQ"
    
    # Get video title for filename
    VIDEO_TITLE=$(yt-dlp --print "%(title)s" "$VIDEO_URL" | tr '/' '_' | tr ':' '-' | tr -d '?"')
    OUTPUT_NAME="transcript_temp"
    
    # ============================================
    # STEP 1: Check if yt-dlp is installed
    # ============================================
    if ! command -v yt-dlp &> /dev/null; then
        echo "yt-dlp not found, attempting to install..."
        if command -v brew &> /dev/null; then
            brew install yt-dlp
        elif command -v apt &> /dev/null; then
            sudo apt update && sudo apt install -y yt-dlp
        else
            pip3 install yt-dlp
        fi
    fi
    
    # ============================================
    # STEP 2: List available subtitles
    # ============================================
    echo "Checking available subtitles..."
    yt-dlp --list-subs "$VIDEO_URL"
    
    # ============================================
    # STEP 3: Try manual subtitles first
    # ============================================
    echo "Attempting to download manual subtitles..."
    if yt-dlp --write-sub --skip-download --output "$OUTPUT_NAME" "$VIDEO_URL" 2>/dev/null && ls ${OUTPUT_NAME}*.vtt >/dev/null 2>&1; then
        echo "✓ Manual subtitles downloaded successfully!"
        ls -lh ${OUTPUT_NAME}.*
    else
        # ============================================
        # STEP 4: Fallback to auto-generated
        # ============================================
        echo "Manual subtitles not available. Trying auto-generated..."
        if yt-dlp --write-auto-sub --skip-download --output "$OUTPUT_NAME" "$VIDEO_URL" 2>/dev/null && ls ${OUTPUT_NAME}*.vtt >/dev/null 2>&1; then
            echo "✓ Auto-generated subtitles downloaded successfully!"
            ls -lh ${OUTPUT_NAME}.*
        else
            # ============================================
            # STEP 5: Last resort - Whisper transcription
            # ============================================
            echo "⚠ No subtitles available for this video."
    
            # Get file size
            FILE_SIZE=$(yt-dlp --print "%(filesize_approx)s" -f "bestaudio" "$VIDEO_URL")
            DURATION=$(yt-dlp --print "%(duration)s" "$VIDEO_URL")
            TITLE=$(yt-dlp --print "%(title)s" "$VIDEO_URL")
    
            echo "Video: $TITLE"
            echo "Duration: $((DURATION / 60)) minutes"
            echo "Audio size: ~$((FILE_SIZE / 1024 / 1024)) MB"
            echo ""
            echo "Would you like to download and transcribe with Whisper? (y/n)"
            read -r RESPONSE
    
            if [[ "$RESPONSE" =~ ^[Yy]$ ]]; then
                # Check for Whisper
                if ! command -v whisper &> /dev/null; then
                    echo "Whisper not installed. Install now? (requires ~1-3GB) (y/n)"
                    read -r INSTALL_RESPONSE
                    if [[ "$INSTALL_RESPONSE" =~ ^[Yy]$ ]]; then
                        pip3 install openai-whisper
                    else
                        echo "Cannot proceed without Whisper. Exiting."
                        exit 1
                    fi
                fi
    
                # Download audio
                echo "Downloading audio..."
                yt-dlp -x --audio-format mp3 --output "audio_%(id)s.%(ext)s" "$VIDEO_URL"
    
                # Get the actual audio filename
                AUDIO_FILE=$(ls audio_*.mp3 | head -n 1)
    
                # Transcribe
                echo "Transcribing with Whisper (this may take a few minutes)..."
                whisper "$AUDIO_FILE" --model base --output_format vtt
    
                # Cleanup
                echo "Transcription complete! Delete audio file? (y/n)"
                read -r CLEANUP_RESPONSE
                if [[ "$CLEANUP_RESPONSE" =~ ^[Yy]$ ]]; then
                    rm "$AUDIO_FILE"
                    echo "Audio file deleted."
                fi
    
                ls -lh *.vtt
            else
                echo "Transcription cancelled."
                exit 0
            fi
        fi
    fi
    
    # ============================================
    # STEP 6: Convert to readable plain text with deduplication
    # ============================================
    VTT_FILE=$(ls ${OUTPUT_NAME}*.vtt 2>/dev/null | head -n 1)
    [ -z "$VTT_FILE" ] && VTT_FILE=$(ls *.vtt 2>/dev/null | head -n 1)
    if [ -f "$VTT_FILE" ]; then
        echo "Converting to readable format and removing duplicates..."
        python3 -c "
    import sys, re
    seen = set()
    with open('$VTT_FILE', 'r') as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('WEBVTT') and not line.startswith('Kind:') and not line.startswith('Language:') and '-->' not in line:
                clean = re.sub('<[^>]*>', '', line)
                clean = clean.replace('&amp;', '&').replace('&gt;', '>').replace('&lt;', '<')
                if clean and clean not in seen:
                    print(clean)
                    seen.add(clean)
    " > "${VIDEO_TITLE}.txt"
        echo "✓ Saved to: ${VIDEO_TITLE}.txt"
    
        # Clean up temporary VTT file
        rm "$VTT_FILE"
        echo "✓ Cleaned up temporary VTT file"
    else
        echo "⚠ No VTT file found to convert"
    fi
    
    echo "✓ Complete!"
    ```
    
    **Note**: This complete workflow handles all scenarios with proper error checking and user prompts at each decision point.
    
    ## Error Handling
    
    ### Common Issues and Solutions:
    
    **1. yt-dlp not installed**
    - Attempt automatic installation based on system (Homebrew/apt/pip)
    - If installation fails, provide manual installation link
    - Verify installation before proceeding
    
    **2. No subtitles available**
    - List available subtitles first to confirm
    - Try both `--write-sub` and `--write-auto-sub`
    - If both fail, offer Whisper transcription option
    - Show file size and ask for user confirmation before downloading audio
    
    **3. Invalid or private video**
    - Check if URL is correct format: `https://www.youtube.com/watch?v=VIDEO_ID`
    - Some videos may be private, age-restricted, or geo-blocked
    - Inform user of the specific error from yt-dlp
    
    **4. Whisper installation fails**
    - May require system dependencies (ffmpeg, rust); see the check sketched after this list
    - Provide fallback: "Install manually with: `pip3 install openai-whisper`"
    - Check available disk space (models require 1-10GB depending on size)
    
    **5. Download interrupted or failed**
    - Check internet connection
    - Verify sufficient disk space
    - Try again with `--no-check-certificate` if SSL issues occur
    
    **6. Multiple subtitle languages**
    - By default, yt-dlp downloads all available languages
    - Can specify with `--sub-langs en` for English only (see the example after this list)
    - List available with `--list-subs` first
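    
    For issue 4, a quick pre-check for the ffmpeg dependency (a minimal sketch; yt-dlp's audio extraction and Whisper both rely on it):
    
    ```bash
    # Warn early if ffmpeg is missing, since audio extraction and Whisper both need it
    if ! command -v ffmpeg >/dev/null 2>&1; then
        echo "ffmpeg not found. Install it first (e.g. 'brew install ffmpeg' or 'sudo apt install ffmpeg')."
    fi
    ```
    
    For issue 6, limiting the download to English only (illustrative; adjust the language code as needed):
    
    ```bash
    # Download only English subtitles instead of every available language
    yt-dlp --write-auto-sub --sub-langs en --skip-download --output "transcript" "YOUTUBE_URL"
    ```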
    
    ### Best Practices:
    
    - ✅ Always check what's available before attempting download (`--list-subs`)
    - ✅ Verify success at each step before proceeding to next
    - ✅ Ask user before large downloads (audio files, Whisper models)
    - ✅ Clean up temporary files after processing
    - ✅ Provide clear feedback about what's happening at each stage
    - ✅ Handle errors gracefully with helpful messages
    

README

Tapestry Skills for AI Agents

A collection of productivity skills for AI agents like Claude Code that help you work faster and learn better.

Are you a founder of a tech company? Check out Tapestry – the AI planning partner that helps everyone on your team make better decisions faster.

Skill               What it does
Learn This          One command to extract content from any URL and turn it into an action plan.
Ship-Learn-Next     Transform learning content into a 5-rep action plan you can ship.
YouTube Transcript  Download and clean YouTube video transcripts with deduplication.
Article Extractor   Extract clean, readable text from web articles and blog posts.
Scrum Sage          AI Scrum Master and agile coach grounded in Sutherland, Ohno, and First Principles.
Session Log         Summarize your session and append it to a weekly agent-log file.
Unblock Action      Turn a vague stuck task into a concrete next action in under 2 minutes.

Skills Included

0. Learn This ⭐

The unified workflow that orchestrates everything. Just say learn-this <URL> and it:

  1. Detects content type (YouTube, article, PDF)
  2. Extracts clean content
  3. Automatically creates a Ship-Learn-Next action plan

One command. Complete workflow. From learning to shipping.

1. YouTube Transcript Downloader

Download and clean YouTube video transcripts with automatic deduplication and readable formatting.

Features:

  • Downloads transcripts using yt-dlp
  • Removes duplicate lines (YouTube VTT format issue)
  • Uses video title for filename
  • Fallback to Whisper transcription if no subtitles available
  • Automatic cleanup of temporary files

Use cases:

  • Get transcripts from educational videos
  • Extract content from tutorials
  • Archive important talks/interviews

2. Article Extractor

Extract clean, readable content from web articles and blog posts, removing ads, navigation, and clutter.

Features:

  • Uses Mozilla Readability or trafilatura for extraction
  • Removes ads, navigation, and newsletter signups
  • Saves as clean plain text
  • Uses article title for filename
  • Multiple extraction methods with automatic fallback

Use cases:

  • Save articles for offline reading
  • Extract tutorial content
  • Archive important blog posts
  • Get clean text without distractions

3. Scrum Sage

AI-powered Scrum Master and Enterprise Agility Coach based on Jeff Sutherland, Taiichi Ohno, and First Principles thinking.

Features:

  • Sprint analysis (basic and expert modes)
  • Backlog refinement and story sizing guidance
  • Anti-pattern detection and impediment radar
  • Flow efficiency and cognitive load analysis
  • Scrum@Scale advice for growing organizations
  • Retrospective synthesis and predictive modeling

Use cases:

  • Get coaching on Scrum mechanics, ceremonies, and artifacts
  • Analyze sprint health, velocity trends, and predictability
  • Remove impediments and detect systemic bottlenecks
  • Plan scaling strategy (with Scrum@Scale, not SAFe)
  • Facilitate retrospectives and backlog refinement
  • Optimize team dynamics and sustainable pace

4. Session Log

Summarize your Claude Code conversation and append it to a weekly agent-log file.

Features:

  • Reverse chronological daily entries
  • Automatic ISO week file naming (YYYY-wWW agent-log.md)
  • CHUNK markers for reusable outputs (plans, summaries, frameworks)
  • Appends without overwriting existing entries
  • Multi-day session support with date attribution
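
For reference, the ISO-week filename pattern above can be produced with a standard date call (a sketch, not taken from the skill itself):

  date +"%G-w%V agent-log.md"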

Use cases:

  • Log what you accomplished in a session
  • Build a searchable weekly work journal
  • Track decisions, outputs, and topics across sessions

5. Unblock Action

Get unstuck on vague or overwhelming tasks. Clarifies the intended output, scopes it to today, and identifies the concrete next physical action.

Features:

  • Socratic but fast — 2-3 questions max
  • Turns fuzzy goals into concrete deliverables
  • Scopes multi-day projects to today's completable slice
  • Outputs a clean action card with deliverable, scope, and next step

Use cases:

  • "I need to work on marketing but can't start"
  • "I'm stuck on pricing"
  • "I have to do something about onboarding"
  • Any vague task you keep avoiding

6. Ship-Learn-Next Action Planner

Transform passive learning content (transcripts, articles, tutorials) into actionable implementation plans using the Ship-Learn-Next framework.

Features:

  • Converts advice into concrete, shippable iterations (reps)
  • Creates 5-rep action plans with timelines
  • Focuses on DOING over studying
  • Includes reflection questions after each rep
  • Saves plan to markdown file

Use cases:

  • Turn YouTube tutorials into action plans
  • Extract implementation steps from articles
  • Create learning quests from course content
  • Build by shipping, not just consuming

Installation

Quick Install for Claude Code

# Clone the repository
git clone https://github.com/michalparkola/tapestry-skills.git

# Run the installation script
cd tapestry-skills
chmod +x install.sh
./install.sh

Manual Installation

Option 1: Personal Skills (Available in all projects)

# Create personal skills directory if it doesn't exist
mkdir -p ~/.claude/skills

# Copy skills
cp -r learn-this ~/.claude/skills/
cp -r youtube-transcript ~/.claude/skills/
cp -r article-extractor ~/.claude/skills/
cp -r ship-learn-next ~/.claude/skills/
cp -r scrum-sage ~/.claude/skills/
cp -r session-log ~/.claude/skills/
cp -r unblock-action ~/.claude/skills/

Option 2: Project Skills (Only in specific project)

# In your project directory
mkdir -p .claude/skills

# Copy skills
cp -r /path/to/tapestry-skills-for-claude-code/learn-this .claude/skills/
cp -r /path/to/tapestry-skills-for-claude-code/youtube-transcript .claude/skills/
cp -r /path/to/tapestry-skills-for-claude-code/article-extractor .claude/skills/
cp -r /path/to/tapestry-skills-for-claude-code/ship-learn-next .claude/skills/
cp -r /path/to/tapestry-skills-for-claude-code/scrum-sage .claude/skills/
cp -r /path/to/tapestry-skills-for-claude-code/session-log .claude/skills/
cp -r /path/to/tapestry-skills-for-claude-code/unblock-action .claude/skills/

Usage

Learn This (Recommended - Use This!)

The simplest way to use Tapestry skills. One command extracts content and creates your action plan:

"learn-this https://www.youtube.com/watch?v=VIDEO_ID"
"weave https://example.com/article"
"help me plan https://example.com/paper.pdf"
"make this actionable https://blog.com/post"

All these phrases work: learn-this, learn this, weave, help me plan, extract and plan, make this actionable

The skill will:

  1. Detect content type (YouTube/article/PDF)
  2. Extract clean content
  3. Create a Ship-Learn-Next action plan automatically
  4. Save both files
  5. Ask: "When will you ship Rep 1?"

YouTube Transcript Downloader

Once installed, Claude will automatically use this skill when you ask to download YouTube transcripts:

"Download the transcript for https://www.youtube.com/watch?v=VIDEO_ID"

The skill will:

  1. Check if yt-dlp is installed (install if needed)
  2. List available subtitles
  3. Try manual subtitles first, then auto-generated
  4. Convert to readable plain text with video title as filename
  5. Remove duplicate lines
  6. Clean up temporary files

Article Extractor

Claude will use this skill when you ask to extract content from a URL:

"Extract the article from https://example.com/blog-post"
"Download this article without the ads"

The skill will:

  1. Check for extraction tools (reader or trafilatura)
  2. Download and extract clean article content
  3. Remove ads, navigation, and clutter
  4. Save as plain text with article title as filename
  5. Show preview of extracted content

Scrum Sage

Claude will activate this skill when you ask about Scrum, agile coaching, or team performance:

"Help me plan our next sprint"
"Analyze our team's velocity trend"
"We keep missing sprint goals — what's wrong?"
"How should we scale from 2 to 5 teams?"

The skill will:

  1. Assess your situation with targeted clarifying questions
  2. Provide analysis grounded in Scrum mechanics and First Principles
  3. Detect anti-patterns and surface hidden impediments
  4. Recommend concrete experiments (not mandates)
  5. Offer sprint health dashboards, flow maps, or retrospective synthesis as needed

Session Log

Claude will use this skill when you want to log your session:

"log this"
"session log"
"summarize this session"

The skill will:

  1. Review the full conversation history
  2. Identify topics, decisions, and reusable outputs
  3. Append entries to YYYY-wWW agent-log.md in reverse chronological order
  4. Confirm what was logged

Unblock Action

Claude will activate this skill when you're stuck on a task:

"unblock: work on marketing"
"I'm stuck on the pricing page"
"unstick: figure out onboarding"

The skill will:

  1. Ask one question to clarify the concrete output
  2. Scope it to a day-sized deliverable
  3. Identify the literal first physical action
  4. Output a clean action card you can work from

Ship-Learn-Next Action Planner

Claude will use this skill when you want to turn content into an action plan:

"Turn this transcript into an implementation plan"
"Make this actionable using the Ship-Learn-Next framework"

The skill will:

  1. Read the content file
  2. Extract actionable lessons
  3. Help you define a specific quest
  4. Design 5 progressive reps (iterations)
  5. Save the complete plan as a markdown file

Requirements

YouTube Transcript Downloader

  • yt-dlp: Automatically installed by the skill (uses Homebrew on macOS, apt on Linux, or pip)
  • Whisper (optional): For transcribing videos without subtitles
    pip3 install openai-whisper
    

Article Extractor

  • reader (recommended): Mozilla's Readability
    npm install -g reader-cli
    
  • trafilatura (alternative): Python-based extractor
    pip3 install trafilatura
    
  • If neither is installed, uses fallback method (less accurate)

Ship-Learn-Next Action Planner

  • No additional requirements (uses built-in tools)

Examples

Example 0: Learn This Unified Workflow (Recommended)

User: "learn-this https://www.youtube.com/watch?v=dQw4w9WgXcQ"

Claude:
🧵 Learn This Workflow Starting...
📍 Detected: youtube
📺 Extracting YouTube transcript...
✓ Saved transcript: Never Gonna Give You Up.txt

🚀 Creating action plan...
✓ Quest: Master Video Production Techniques
✓ Saved plan: Ship-Learn-Next Plan - Master Video Production.md

✅ Learn This Complete!
📥 Content: Never Gonna Give You Up.txt
📋 Plan: Ship-Learn-Next Plan - Master Video Production.md

🎯 Rep 1 (This Week): Film and edit a 60-second video
When will you ship Rep 1?

Example 1: Download and Process a YouTube Video

User: "Download transcript for https://www.youtube.com/watch?v=dQw4w9WgXcQ"

Claude:
✓ Checked available subtitles
✓ Downloaded auto-generated transcript
✓ Converted to readable format
✓ Removed duplicate lines
✓ Saved to: Never Gonna Give You Up.txt
✓ Cleaned up temporary files

Example 2: Extract an Article

User: "Extract https://example.com/how-to-build-saas"

Claude:
✓ Using reader (Mozilla Readability)
✓ Extracted article: How to Build a SaaS in 30 Days
✓ Saved to: How to Build a SaaS in 30 Days.txt

Preview (first 10 lines):
[Clean article text without ads or navigation...]

Example 3: Create an Action Plan

User: "Turn this transcript into an implementation plan"

Claude:
✓ Read transcript: Build a SaaS in 30 Days.txt
✓ Extracted core lessons
✓ Created 5-rep action plan
✓ Saved to: Ship-Learn-Next Plan - Build a SaaS.md

Your quest: Launch a SaaS MVP and get first 10 customers in 4 weeks

Rep 1 (this week): Find 3 proven market opportunities
When will you ship Rep 1?

Philosophy

These skills are built on the principle that learning = doing better, not knowing more.

Ship-Learn-Next Framework

  • Ship: Create something real (code, content, product)
  • Learn: Honest reflection on what happened
  • Next: Plan the next iteration based on learnings

100 reps beats 100 hours of study.

Contributing

Found a bug or want to add a feature? Contributions are welcome!

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-skill)
  3. Commit your changes (git commit -m 'Add amazing skill')
  4. Push to the branch (git push origin feature/amazing-skill)
  5. Open a Pull Request

License

MIT License - see LICENSE file for details

Acknowledgments

  • Ship-Learn-Next framework: Inspired by the ShipLearnNext GPT methodology
  • yt-dlp: Excellent tool for downloading YouTube content
  • OpenAI Whisper: State-of-the-art speech recognition
  • Mozilla Readability: Clean article extraction algorithm
  • trafilatura: Python web scraping and content extraction

Support

Having issues? Please open an issue on GitHub.


Made with Claude Code

Learn more about Claude Code at claude.com/claude-code