Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
01 / Skill
rampstackco

claude-skills

Quality
9.0

This opinionated library provides 98 stack-agnostic Claude Skills covering the full lifecycle of building, launching, running, and growing a brand and website. It's ideal for developers and marketers seeking structured, composable tools for brand strategy, content, SEO, development, and growth.

USP

Unlike disparate collections, this library offers 98 uniformly structured, composable, and stack-agnostic skills, ensuring predictable inputs and outputs across the entire project lifecycle. It includes a robust Ahrefs MCP-powered SEO audi…

Use cases

  • Building and launching new brands and websites
  • Optimizing content and SEO strategies
  • Managing product development and feature launches
  • Implementing growth and marketing campaigns
  • Conducting accessibility audits and QA

Detected files (11)

  • skills/accessibility-audit/SKILL.md
    ---
    name: accessibility-audit
    description: "Run a comprehensive WCAG accessibility audit covering perceivable, operable, understandable, and robust principles. Use this skill whenever the user wants to audit accessibility, review WCAG compliance, fix accessibility issues, prepare for accessibility certification, address an accessibility lawsuit risk, or systematically improve a site's accessibility. Triggers on accessibility audit, WCAG audit, a11y audit, accessibility compliance, ADA compliance, screen reader test, keyboard navigation, accessibility report, fix accessibility, axe scan. Also triggers when accessibility issues have been reported and need systematic remediation."
    category: development
    catalog_summary: "WCAG compliance audit with remediation plan"
    display_order: 3
    ---
    
    # Accessibility Audit
    
    Run a thorough accessibility audit and produce a remediation plan. Stack-agnostic. Anchored to WCAG 2.1 AA, with notes on AAA where relevant.
    
    This skill goes deeper than the accessibility checks in `qa-testing` and `design-standards`. Use this when accessibility itself is the goal.
    
    ---
    
    ## When to use
    
    - Pre-launch accessibility verification
    - Compliance preparation (ADA, EN 301 549, AODA, Section 508)
    - Remediation after an audit finding or complaint
    - Annual or quarterly accessibility health check
    - Onboarding accessibility into a team that hasn't prioritized it before
    
    ## When NOT to use
    
    - General QA after deploys (use `qa-testing`)
    - Component-level accessibility implementation (use `frontend-component-build`)
    - Color contrast for design tokens (use `design-standards` or `brand-identity`)
    
    ---
    
    ## Required inputs
    
    - The site or product under audit
    - The scope (full site, specific section, specific user flow)
    - The target standard (WCAG 2.1 AA is most common)
    - Any specific concerns or known issues
    - Tools available (automated scanners, screen readers, manual testing)
    
    ---
    
    ## The framework: WCAG's 4 principles
    
    WCAG organizes accessibility around four principles. The audit covers each in depth.
    
    ### 1. Perceivable
    
    Information and UI must be presentable in ways users can perceive.
    
    **Audit checks:**
    
    - **Text alternatives.** All non-decorative images have descriptive `alt` text. Decorative images use `alt=""`. Complex images (charts, infographics) have long descriptions.
    - **Time-based media.** Videos have captions. Pre-recorded audio has transcripts. Live audio has live captions where required.
    - **Adaptable.** Content structure is conveyed through markup (semantic HTML), not just visual styling. Reading order makes sense when CSS is disabled.
    - **Distinguishable.** Color is not the sole means of conveying information. Text contrast meets AA (4.5:1 normal, 3:1 large). UI element contrast meets 3:1. Audio can be paused, stopped, or muted.
    
    ### 2. Operable
    
    UI components and navigation must be operable.
    
    **Audit checks:**
    
    - **Keyboard accessible.** All functionality available via keyboard alone. No keyboard traps. Focus visible.
    - **Enough time.** Time limits can be adjusted, paused, or extended. Auto-updating content can be paused.
    - **Seizures and physical reactions.** No content that flashes more than 3 times per second.
    - **Navigable.** Skip links present. Pages have descriptive titles. Focus order is logical. Link purpose clear from text or context. Multiple ways to find pages (sitemap, search, navigation). Headings and labels are descriptive.
    - **Input modalities.** Pointer gestures have keyboard alternatives. Pointer cancellation supported (mouse-up, not mouse-down for activation). Labels match accessible names. Motion-triggered functionality has alternatives.
    
    ### 3. Understandable
    
    Information and operation must be understandable.
    
    **Audit checks:**
    
    - **Readable.** Page language declared (`<html lang="...">`). Unusual words and abbreviations have definitions or expansions. Reading level appropriate to audience.
    - **Predictable.** Focus does not change context unexpectedly. Input does not change context unexpectedly. Navigation is consistent across pages. Components that look similar behave similarly.
    - **Input assistance.** Errors are identified clearly. Labels and instructions are provided for input. Error suggestions are given where possible. For pages handling legal commitments or financial transactions, errors can be reviewed and corrected before submission.
    
    ### 4. Robust
    
    Content must be robust enough to work with current and future user agents.
    
    **Audit checks:**
    
    - **Compatible.** Markup is valid. Name, role, and value of UI components are programmatically determinable. Status messages can be programmatically determined and announced.
    
    ---
    
    ## Audit methodology
    
    ### Stage 1: Automated scan
    
    Run automated scanners across the priority pages. These catch 30 to 50 percent of issues but miss the rest.
    
    **Tools:**
    - axe DevTools (browser extension)
    - Lighthouse (Chrome DevTools accessibility audit)
    - WAVE (browser extension)
    - Pa11y (CLI for batch scanning)
    
    **Output:** A list of automated findings, by page.
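
    A minimal sketch of scripting this stage, assuming the Pa11y Node API; the URL list is a placeholder for the audit's priority pages:

    ```ts
    // batch-scan.ts - Stage 1 sketch, assuming the pa11y Node API.
    // The URLs below are placeholders; swap in the audit's priority pages.
    import pa11y from "pa11y";

    const pages = [
      "https://example.com/",
      "https://example.com/pricing",
      "https://example.com/signup",
    ];

    async function scan(): Promise<void> {
      for (const url of pages) {
        // WCAG2AA matches the most common target standard named above.
        const result = await pa11y(url, { standard: "WCAG2AA" });
        console.log(`\n${url}: ${result.issues.length} automated findings`);
        for (const issue of result.issues) {
          console.log(`- [${issue.code}] ${issue.message} (${issue.selector})`);
        }
      }
    }

    scan().catch(console.error);
    ```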
    
    ### Stage 2: Manual keyboard testing
    
    Unplug the mouse. Navigate the priority user flows using only keyboard.
    
    **Test:**
    - Tab and Shift+Tab move through interactive elements in logical order
    - Enter activates buttons and links
    - Space activates buttons (and toggles checkboxes)
    - Arrow keys navigate within composite widgets (tabs, menus, listboxes)
    - Escape dismisses modals, popovers, menus
    - Focus is always visible
    - Focus returns to a sensible place after modals or popovers close
    - No keyboard trap (focus can always leave)
    
    **Document:** Any flow where keyboard navigation breaks down.
    
    ### Stage 3: Screen reader testing
    
    Test with at least one real screen reader. Each combination has quirks.
    
    **Common combinations:**
    - VoiceOver + Safari (macOS / iOS)
    - NVDA + Firefox or Chrome (Windows)
    - JAWS + Chrome (Windows; commercial but common in enterprise)
    - TalkBack + Chrome (Android)
    
    **Test:**
    - Page structure announced correctly (headings, landmarks)
    - Form labels read with their inputs
    - Errors announced when they appear
    - Status changes announced (loading, success, error)
    - Modal context announced when opened
    - Images have meaningful alt text (or are correctly identified as decorative)
    
    ### Stage 4: Visual testing
    
    Verify the visual aspects of accessibility.
    
    **Test:**
    - Color contrast for all text/background pairs (use a contrast checker)
    - UI element contrast (3:1 for icons, borders, focus rings)
    - Color-blindness simulation (deuteranopia at minimum)
    - Zoom to 200% - content remains usable, no horizontal scroll
    - Reflow at 320px viewport
    - Text spacing applied (line height, letter spacing) - no content cut off
    - Motion can be reduced (`prefers-reduced-motion` honored)
    
    ### Stage 5: Cognitive accessibility
    
    Often overlooked. Critical for inclusive products.
    
    **Test:**
    - Reading level appropriate
    - Instructions clear
    - Error messages explain how to fix the error, not just that one occurred
    - Forms allow correction before submission
    - Time limits avoidable or extendable
    - Important content not dependent on memory of prior pages
    
    ---
    
    ## Workflow
    
    1. **Define scope.** Full site? Specific flows? Specific page templates?
    2. **Run automated scans.** Document findings per page.
    3. **Manual keyboard pass.** Test all priority flows.
    4. **Screen reader pass.** Test with at least one combination.
    5. **Visual checks.** Contrast, zoom, color blindness, motion.
    6. **Cognitive checks.** Reading level, error handling, time limits.
    7. **Score against WCAG.** Per success criterion (level A, AA, AAA).
    8. **Prioritize findings.** Critical (blocks users), Important (degrades experience), Minor (polish).
    9. **Write the report.** Use the template in [`references/audit-report-template.md`](references/audit-report-template.md).
    10. **Build a remediation plan.** Sequenced fixes with effort and impact estimates.
    
    ---
    
    ## Severity classification
    
    For prioritization:
    
    **Critical (P0):**
    - Blocks an entire user flow for an assistive-tech user
    - Renders a key page completely inaccessible
    - Examples: form with no labels, modal without focus management, primary CTA not keyboard-accessible
    
    **Important (P1):**
    - Significantly degrades the experience for assistive-tech users
    - Examples: missing alt text on key images, low-contrast body text, error messages that don't announce
    
    **Minor (P2):**
    - Affects edge cases or specific assistive technology combinations
    - Examples: minor focus order issues, missing decorative alt attributes, edge case keyboard handling
    
    **Polish (P3):**
    - Above-AA improvements that benefit accessibility but aren't compliance-blocking
    - Examples: AAA contrast targets, additional reduced-motion variants, language attributes on inline foreign words
    
    ---
    
    ## Failure patterns
    
    - **Automated scan only.** Catches 30 to 50 percent of issues. The remaining 50 to 70 percent are in keyboard, screen reader, and cognitive testing.
    - **Testing only on the home page.** The home page is usually the most accessible. Bugs hide in deeper flows.
    - **Treating accessibility as a one-time project.** Accessibility erodes with every deploy. Bake it into the development cycle.
    - **Fixing without root cause.** Patching individual issues without understanding why they happened means new ones keep appearing.
    - **Ignoring screen reader testing.** Hard to do well, easy to skip. Single biggest source of "we thought we were accessible" surprises.
    - **Confusing AA and AAA.** AAA is rarely the right target. AA is the practical baseline for most products.
    - **Treating accessibility as a designer or developer responsibility alone.** Content, product, QA, and leadership all need to participate.
    - **Assuming compliance equals accessibility.** WCAG conformance is a floor, not a ceiling. Real users may still struggle.
    
    ---
    
    ## Output format
    
    Default output is a comprehensive audit report at `accessibility-audit.md`.
    
    Structure:
    1. Executive summary
    2. Methodology (tools used, pages tested, screen readers used)
    3. Findings by WCAG principle
    4. Critical findings (P0) with specific URLs and fixes
    5. Important findings (P1)
    6. Minor findings (P2)
    7. Polish (P3)
    8. Remediation roadmap (sequenced and prioritized)
    9. Appendices (full automated scan results, keyboard navigation notes, screen reader notes)
    
    Plus a remediation tracking spreadsheet with one row per finding.
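
    One possible shape for that tracking row, sketched as a TypeScript type with an illustrative entry; the field names are assumptions, not a prescribed schema:

    ```ts
    // One row per finding in the remediation tracker (illustrative shape only).
    interface Finding {
      id: string;              // e.g. "A11Y-014"
      url: string;             // where the issue was observed
      wcagCriterion: string;   // e.g. "1.4.3 Contrast (Minimum)"
      principle: "Perceivable" | "Operable" | "Understandable" | "Robust";
      severity: "P0" | "P1" | "P2" | "P3";
      description: string;
      recommendedFix: string;
      effort: "S" | "M" | "L";
      status: "open" | "in-progress" | "fixed" | "verified";
    }

    const example: Finding = {
      id: "A11Y-014",
      url: "https://example.com/checkout",
      wcagCriterion: "3.3.2 Labels or Instructions",
      principle: "Understandable",
      severity: "P0",
      description: "Card number input has no programmatic label.",
      recommendedFix: "Associate a visible label with the input via for/id.",
      effort: "S",
      status: "open",
    };
    ```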
    
    ---
    
    ## Reference files
    
    - [`references/audit-report-template.md`](references/audit-report-template.md) - Full audit report template.
    - [`references/wcag-quick-reference.md`](references/wcag-quick-reference.md) - Condensed WCAG 2.1 AA criteria with audit checks.
    - [`references/aria-patterns.md`](references/aria-patterns.md) - Decision-grade ARIA patterns. Semantic-HTML-first principle, common interactive widgets (accordion, tabs, modal, toggle, disclosure, navigation), live regions, hiding patterns, labeling, state indicators, anti-patterns.
    
  • skills/brand-discovery/SKILL.md
    ---
    name: brand-discovery
    description: "Run upstream brand discovery covering audience research, competitive landscape, category dynamics, problem space, and positioning territory exploration. Use this skill at the very start of a brand or website project when the user needs to understand who they're for, who they compete with, what the audience actually needs, and where the brand could plausibly stand. Triggers on brand discovery, audience research, market research, competitive scan, category research, customer research, who is this for, who are we, positioning research, intake, kickoff. Also triggers when a creative brief is requested but the upstream inputs (audience, competitors, problem space) are not yet clear."
    category: strategy-and-discovery
    catalog_summary: "Audience research, competitive scan, positioning territory exploration"
    display_order: 1
    ---
    
    # Brand Discovery
    
    Upstream of every brief, identity, and content plan. Discovery answers four questions: who the brand is for, what they need, who else is competing for that need, and where the brand could plausibly stand that competitors do not.
    
    ---
    
    ## When to use
    
    - The very first phase of a new brand or website project
    - Understanding an audience before any creative work begins
    - Mapping competitors and category dynamics
    - Surfacing the problem space the brand will operate in
    - Generating positioning territories before brief or ideation
    - When a creative brief was requested but the inputs are missing
    
    ## When NOT to use
    
    - The audience and category are already well-understood (jump to `creative-brief` or `brand-ideation`)
    - Validating a specific design or feature with users (use `usability-testing`)
    - Mapping the customer journey of an existing audience (use `journey-mapping`)
    - Generating brand names or visual directions (use `brand-ideation`)
    
    ---
    
    ## Required inputs
    
    - The product, service, or organization being branded
    - Whatever is already known about the audience and category (often very little)
    - Access to existing materials (sales calls, support tickets, reviews, analytics)
    - Any constraints (parent brand, regulatory, geographic)
    - A timeline (a 1-week discovery looks different than a 6-week one)
    
    ---
    
    ## The framework: 4 dimensions
    
    Discovery covers four dimensions. Each has its own sources, methods, and outputs.
    
    ### 1. Audience
    
    Who specifically does this serve?
    
    **Layers to surface:**
    
    - **Demographic:** Age range, geography, language, life stage, income (only if relevant; do not over-collect demographics)
    - **Psychographic:** Values, motivations, beliefs, fears
    - **Behavioral:** What they currently do to address the problem, what tools they use, where they spend time online and offline
    - **Jobs-to-be-done:** The functional, emotional, and social jobs they hire a brand to perform
    
    **Sources:**
    
    - Existing customer interviews (5 to 8 ideal)
    - Sales call recordings or transcripts
    - Support ticket themes
    - Review and forum analysis (Reddit, Trustpilot, App Store, niche communities)
    - First-party analytics (search console queries, on-site search, top pages)
    - Social listening (what they post about the category)
    
    **Output:** 1 to 3 named audience segments, each with a one-page profile.
    
    ### 2. Competitors
    
    Who else competes for the same audience and need?
    
    **Three layers of competitor:**
    
    - **Direct:** Solves the same problem the same way (e.g., another SaaS in the category)
    - **Indirect:** Solves the same problem a different way (e.g., a spreadsheet replacing a SaaS tool)
    - **Status quo:** Doing nothing, doing it manually, or living with the problem
    
    The third is most often forgotten and most often the actual competitor.
    
    **Per competitor, document:**
    
    - Who they target (audience overlap with you)
    - How they position (what they claim to be)
    - What they actually deliver (often different from positioning)
    - Pricing model and structure
    - Strengths and weaknesses from the audience's perspective
    - Recent moves (launches, pivots, hires, departures)
    
    **Sources:**
    
    - Their own website and marketing
    - Reviews of their product or service
    - Their content and SEO presence (use `seo-competitor` for the search angle)
    - Social proof and customer voices
    
    **Output:** Competitor matrix (3 to 8 competitors deep), plus a one-line "what makes them dangerous" for each.
    
    ### 3. Category and problem space
    
    What is the broader context this brand operates in?
    
    **Map:**
    
    - **The problem.** What is the actual user problem? Not the surface symptom, the underlying job to be done.
    - **The category.** How is the category structured? Is it new, mature, fragmenting, consolidating?
    - **The conventions.** What does every brand in the category do the same way? (These are the conventions you can break.)
    - **The shifts.** What is changing in the category? Technology, regulation, audience behavior, distribution.
    - **The moats.** What protects incumbents? Brand, distribution, network effects, switching costs?
    - **The vocabulary.** What language does the category use? What is jargon, what is meaningful, what is empty?
    
    **Sources:**
    
    - Industry reports and analyst coverage
    - Trade publications and conferences
    - Adjacent category observations
    - Customer language vs. category language (the gap is informative)
    
    **Output:** A category map and a list of conventions to consider breaking (or keeping deliberately).
    
    ### 4. Positioning territory
    
    Given the audience, competitors, and category, where could this brand plausibly stand?
    
    This is not yet "the positioning." Discovery surfaces possible territories. `brand-ideation` narrows them and commits.
    
    **Generate territories from:**
    
    - Underserved audience segments (audiences others ignore)
    - Underserved jobs (jobs the category does not do well)
    - Category convention violations (what would happen if you broke the rules everyone follows)
    - Honest brand truths (what is genuinely true about this brand that competitors cannot also claim)
    - Category shifts (where the puck is going)
    
    **Per territory, document:**
    
    - The statement (one sentence)
    - Why it could work (proof point)
    - Who would resonate (the audience for this territory)
    - Who is competing in this territory (often, no one good)
    - Risk (what makes it fragile)
    
    **Output:** 3 to 5 distinct territories. Not yet committed. Inputs to `brand-ideation`.
    
    ---
    
    ## Workflow
    
    1. **Define the discovery scope.** 1 week for a startup pre-launch. 4 to 6 weeks for a major rebrand. Set expectations.
    2. **Audit existing inputs.** Sales calls, support tickets, reviews, analytics, internal docs. Often more is known than people think.
    3. **Audience research.** 5 to 8 interviews if possible. Plus secondary research from review and forum analysis.
    4. **Competitor mapping.** 3 to 8 competitors deep, including indirect and status-quo competitors.
    5. **Category mapping.** Conventions, shifts, vocabulary, moats.
    6. **Territory generation.** 3 to 5 plausible positioning territories.
    7. **Write the discovery report.** Use the template in `references/discovery-report-template.md`.
    8. **Hand off to next phase.** Discovery feeds into `creative-brief`, `brand-ideation`, or `content-strategy` depending on where the project goes next.
    
    ---
    
    ## Failure patterns
    
    - **Skipping discovery to "save time."** Every shortcut here costs 10x downstream when the brand fails to land.
    - **Audience research that confirms what you already believe.** If your audience research validates every assumption, you did not actually research. Look for surprises.
    - **Demographic-heavy audience profiles.** "Women aged 25 to 45" is not insight. Behavior, beliefs, and jobs-to-be-done are.
    - **Listing every competitor as if equal.** Most competitors do not matter. Pick the 3 that are genuinely dangerous.
    - **Forgetting status-quo as competition.** The biggest competitor is usually "doing nothing" or "doing it manually."
    - **Outputting territories without rejection criteria.** A territory without a "what this rejects" is not a territory.
    - **Treating discovery as a one-time event.** Categories shift. Audiences evolve. Re-run discovery at least every 3 years.
    
    ---
    
    ## Output format
    
    Default output is a discovery report at `discovery-report.md` plus appendices.
    
    Structure:
    1. Executive summary (5 to 10 bullets)
    2. Audience (1 to 3 named segments with profiles)
    3. Competitors (matrix and per-competitor analysis)
    4. Category and problem space
    5. Positioning territories (3 to 5 candidates)
    6. Implications and recommendations
    7. Open questions that require further research
    
    Appendices:
    - Interview notes (sanitized)
    - Competitor research data
    - Source list
    
    ---
    
    ## Reference files
    
    - `references/discovery-report-template.md` - Full discovery report template.
    - `references/interview-guide.md` - Audience interview guide with question prompts.
    
  • skills/brand-ideation/SKILL.md
    ---
    name: brand-ideation
    description: "Generate, evaluate, and narrow brand concepts during early ideation including positioning territories, naming candidates, mood directions, and narrative angles. Use this skill whenever the user is in the early phase of brand creation, exploring brand directions, brainstorming names, building moodboards, generating positioning options, or trying to choose between multiple brand directions. Triggers on brand ideation, brand concept, naming, brand name, name candidates, positioning, brand positioning, mood board, brand directions, exploring brands, early brand work, brand exploration, brand brainstorm, brand options. Also triggers when the user has multiple half-formed brand ideas and needs help converging on one, even if they do not say 'ideation' explicitly."
    ---
    
    # Brand Ideation
    
    Generate and converge on brand directions before committing to identity work. This is upstream of `brand-identity` (the visual system) and `brand-style-guide` (the documentation). It is the divergent-then-convergent thinking phase where ideas are cheap and direction matters more than polish.
    
    ---
    
    ## When to use
    
    - Generating naming options for a new brand or product
    - Exploring positioning territories before committing
    - Building moodboards and visual direction options
    - Drafting narrative or origin-story angles
    - Helping the user converge from many half-ideas to one direction
    - Stress-testing an existing brand idea before investing in identity work
    
    ## When NOT to use
    
    - The brand direction is already locked, the user wants logo and identity work (use `brand-identity`)
    - Documenting an existing brand system (use `brand-style-guide`)
    - Defining voice for an existing brand (use `brand-voice`)
    - Initial discovery and audience research (use `brand-discovery`)
    
    ---
    
    ## Required inputs
    
    - The category or product type
    - The audience (at least roughly)
    - The reason this brand exists (the problem it solves or the gap it fills)
    - Constraints (existing brand assumptions, parent brand, regulatory limits)
    - Number of directions desired (typically 3 to 5)
    
    If the audience is unclear, run `brand-discovery` first.
    
    ---
    
    ## The framework: 4 stages
    
    Brand ideation moves through four stages. Each stage diverges (generate options) then converges (pick a direction).
    
    ### Stage 1: Positioning territories
    
    A positioning territory is the strategic space the brand occupies. It is not a tagline. It is the answer to "what does this brand stand for that competitors do not?"
    
    Generate 3 to 5 territories using these angles:
    
    - **Functional benefit.** "The fastest way to X." (Risk: easy to copy.)
    - **Emotional benefit.** "The brand that makes you feel Y." (Risk: vague if not earned.)
    - **Identity.** "For people who are Z." (Risk: alienates non-Z customers.)
    - **Antagonist.** "The opposite of [incumbent]." (Risk: defines you by them.)
    - **Originator.** "The first or only one to do W." (Risk: must be defensible.)
    - **Worldview.** "We believe V." (Risk: must be lived, not just stated.)
    
    For each territory, write:
    - **Statement** (one sentence)
    - **Why this is true** (the proof point)
    - **What this rejects** (the territory we are NOT going to)
    - **Risk** (what makes this fragile)
    
    ### Stage 2: Naming directions
    
    Names cluster by approach. Generate names across multiple approaches, not just one.
    
    | Approach | Description | Examples |
    |---|---|---|
    | Descriptive | Says what it is | "General Electric," "American Airlines" |
    | Evocative | Suggests a feeling or quality | "Patagonia," "Oasis," "Stripe" |
    | Founder | Person's name | "Disney," "Ford," "Tesla" |
    | Acronym | Letters from longer phrase | "IBM," "BMW," "AWS" |
    | Coined | Made-up word | "Kodak," "Häagen-Dazs," "Asana" |
    | Metaphor | Borrowed concept | "Apple," "Amazon," "Twitch" |
    | Compound | Two words combined | "Facebook," "PayPal," "Spotify" |
    | Suggestive | Hints at function without describing | "Tide," "Sprint," "Slack" |
    
    Generate 8 to 15 candidates per direction. Apply naming filters before short-listing:
    
    - **Pronounceable** in target languages
    - **Spellable** without confusion
    - **Available** as a domain (.com or relevant TLD), social handles, and trademark
    - **Distinctive** in the category (search the name + category, see what comes up)
    - **Stretchable** (does the name still work if the company expands?)
    - **Free of negative associations** (run it past native speakers of any major target market)
    
    A short-listable name passes all six. Most names fail at least one. The bar is necessarily high.
    
    ### Stage 3: Mood and visual direction
    
    Generate visual directions BEFORE designing anything. Each direction should be distinct enough that a designer would produce visibly different work for each.
    
    For each mood direction (typically 2 to 4):
    
    - **Mood adjectives** (3 to 5 words)
    - **Color territory** (warm/cool, saturated/muted, light/dark - not specific hex yet)
    - **Type territory** (serif/sans, modern/classical, geometric/humanist)
    - **Imagery direction** (photography style, illustration style, iconography)
    - **Reference brands or sites** (3 to 5 that exemplify the direction)
    - **What this rejects** (the visual territory we are NOT going to)
    
    A good mood direction is "Editorial sophistication: Warm cream paper backgrounds, classical serifs, archival photography. Think: The New York Times Magazine meets a literary journal."
    
    A bad mood direction is "Modern and clean."
    
    ### Stage 4: Narrative and origin
    
    Every brand has a story. The narrative answers: how do we tell people why this exists?
    
    Common narrative shapes:
    
    - **Founder story.** A real person solved a real problem they had.
    - **Mission story.** A bigger purpose drives every decision.
    - **Discovery story.** We found something the world did not know.
    - **Heritage story.** This has always been done a certain way; we honor or refresh it.
    - **Frustration story.** The category was broken; we built the alternative.
    - **Vision story.** Here is the future we are pulling toward.
    
    For each candidate narrative:
    
    - **One-sentence summary**
    - **The opening line** (how the story starts when told for the first time)
    - **The proof points** (what makes it true and not marketing puff)
    - **The hero** (who the audience identifies with - the founder, the customer, the world)
    
    ---
    
    ## Workflow
    
    1. **Confirm the inputs.** Category, audience, reason for being, constraints.
    2. **Stage 1: Generate 3 to 5 positioning territories.** Use the 6 angles above. Name what each rejects.
    3. **Pick 1 to 2 territories.** Move forward with the strongest.
    4. **Stage 2: Generate 30 to 50 naming candidates** across at least 4 approaches. Filter to 8 to 12 that pass the six-criteria check.
    5. **Stage 3: Generate 2 to 4 mood directions.** Each must be distinguishable enough to brief a designer.
    6. **Stage 4: Generate 2 to 3 narrative shapes.** Pick the one that fits the founders, the audience, and the proof points.
    7. **Converge.** Present the user with: 1 positioning, 8 to 12 naming finalists, 2 to 4 mood directions, 2 to 3 narrative shapes. Help them pick.
    8. **Output.** Use the template in `references/ideation-output-template.md`.
    
    ---
    
    ## Failure patterns
    
    - **Generating one option and calling it "the answer."** The point of ideation is divergence. Without options, there is no real choice.
    - **Skipping the rejection step.** A territory that does not name what it rejects is not a territory. Same for moods.
    - **Naming before positioning.** Names without positioning end up arbitrary. Position first.
    - **Falling in love with one name too early.** Run the full filter on every candidate. The clever name that fails the trademark check is not a candidate.
    - **"Modern, clean, minimal" mood direction.** Means nothing. Always require specific reference brands.
    - **Skipping pronunciation tests.** Especially for international brands. A name that confuses non-English speakers loses search volume forever.
    - **Mistaking ideation for execution.** The output of this stage is direction, not finished assets. Resist the urge to design logos here.
    
    ---
    
    ## Output format
    
    Default output is a markdown brief at `brand-ideation.md` in the project root. Includes:
    
    1. The chosen positioning territory (with what it rejects)
    2. Naming finalists (8 to 12) with notes on each
    3. Mood directions (2 to 4) with reference URLs
    4. Narrative shape (chosen) with opening line and proof points
    5. Open questions and decisions still needed before identity work begins
    
    Optional: a separate `naming-explorations.md` with the full list of 30 to 50 candidates (the "kill file") in case the chosen finalists fail later checks.
    
    ---
    
    ## Reference files
    
    - `references/ideation-output-template.md` - Fillable template for the ideation deliverable.
    - `references/naming-evaluation-rubric.md` - The 6-criteria filter applied with examples.
    
  • skills/brand-identity/SKILL.md
    ---
    name: brand-identity
    description: "Design or evaluate a brand visual identity system covering logo, color, typography, imagery direction, iconography, and motion principles. Use this skill whenever the user wants to design a logo, build a visual identity, define brand colors, choose brand typography, develop iconography, plan brand imagery, or evaluate an existing identity for cohesion. Triggers on logo design, brand identity, visual identity, brand mark, wordmark, monogram, color palette, brand colors, brand typography, type system, iconography, brand imagery, motion design, brand system, identity system. Also triggers when the user has a brand direction approved and now needs the visual artifacts that express it."
    ---
    
    # Brand Identity
    
    Design or evaluate the visual artifacts that express a brand: logo system, color, typography, imagery, iconography, and motion. Stack-agnostic. Tool-agnostic.
    
    This skill assumes a brand direction is already approved (positioning, mood, name). If not, run `brand-ideation` first.
    
    ---
    
    ## When to use
    
    - Designing a logo system
    - Defining a brand color palette
    - Choosing brand typography
    - Developing iconography or illustration style
    - Defining imagery direction (photography, illustration)
    - Establishing motion principles
    - Auditing an existing identity for cohesion or gaps
    
    ## When NOT to use
    
    - Brand direction is not yet defined (use `brand-ideation`)
    - Documenting a finished system (use `brand-style-guide`)
    - Defining voice and tone (use `brand-voice`)
    - Building UI components from an existing brand (use `design-standards` or `design-system`)
    
    ---
    
    ## Required inputs
    
    - The brand name and approved positioning
    - The mood direction (from ideation, or supplied as references)
    - Audience and category context
    - Application contexts (web, print, packaging, motion - whatever applies)
    - Constraints (parent brand requirements, regulatory marks, accessibility minimums)
    
    ---
    
    ## The framework: 5 elements
    
    A complete identity has five elements. Each element should reinforce the others. Inconsistency between them is the most common identity failure.
    
    ### 1. Logo system
    
    Most brands need not one logo but a system of marks for different contexts.
    
    **Components of a logo system:**
    
    - **Primary mark.** The main logo. Used wherever there is room.
    - **Wordmark.** Just the typography, no symbol. Used in tight horizontal contexts.
    - **Symbol or glyph.** Just the symbol, no type. Used in app icons, favicons, social avatars.
    - **Lockup variations.** Horizontal, stacked, square - whichever apply.
    - **Monogram.** The initial(s) styled as a mark. Optional but useful for small contexts.
    
    **Design principles:**
    
    - **Legible at 16 pixels.** Test the logo at favicon size. If it falls apart, redesign.
    - **Reproducible in single-color.** If the logo only works in full color, it cannot be screen-printed, embroidered, or rendered in browser favicons.
    - **Distinctive silhouette.** Squint at it. Can you still tell what it is? If it looks like every other logo in the category at silhouette, redesign.
    - **Construction grid.** Every curve and angle is intentional. Document the construction.
    
    **Common failure:**
    - Designing only the primary mark and discovering at launch that the wordmark, glyph, and small-size variants do not exist.
    
    ### 2. Color system
    
    A color system is more than a palette. It is the rules for how color carries meaning.
    
    **Components:**
    
    - **Primary color.** The signature color most associated with the brand.
    - **Secondary colors.** 1 to 3 supporting colors that extend the palette.
    - **Neutrals.** The grays and tints that make up most of the surface area.
    - **Semantic colors.** Success, warning, error, info - if the brand operates in product UI.
    - **Accent colors.** Used sparingly for highlight and emphasis.
    
    **Per color, document:**
    - Hex, RGB, HSL, CMYK (if print is in scope), and Pantone (if brand-critical print exists)
    - WCAG AA contrast against the other colors in the system
    - Allowed and disallowed pairings (some brand colors look terrible together)
    - Usage notes (when to use, when not to use)
    
    **Design principles:**
    
    - **Test for accessibility.** WCAG AA requires 4.5:1 contrast for normal text, 3:1 for large text. If the brand color cannot pass either against white, you have a problem.
    - **Test for color blindness.** Around 8 percent of men have some form of color blindness. Critical UI signals should not rely on color alone.
    - **Define neutrals carefully.** Neutrals are 80 percent of the surface area in most brand applications. They carry more weight than the brand color.
    - **Limit the palette.** A 30-color palette is unmanageable. 5 to 8 carefully chosen colors beats a sprawling system.
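
    A minimal sketch of the contrast math behind those AA thresholds, using the WCAG relative-luminance formula; the hex values are placeholders:

    ```ts
    // WCAG 2.x contrast ratio between two sRGB colors given as hex strings.
    function channelToLinear(c: number): number {
      const s = c / 255;
      return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
    }

    function relativeLuminance(hex: string): number {
      const n = parseInt(hex.replace("#", ""), 16);
      const r = channelToLinear((n >> 16) & 0xff);
      const g = channelToLinear((n >> 8) & 0xff);
      const b = channelToLinear(n & 0xff);
      return 0.2126 * r + 0.7152 * g + 0.0722 * b;
    }

    function contrastRatio(a: string, b: string): number {
      const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
      return (hi + 0.05) / (lo + 0.05);
    }

    // Placeholder brand color vs white: >= 4.5 for normal text, >= 3 for large text and UI.
    console.log(contrastRatio("#3366cc", "#ffffff").toFixed(2));
    ```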
    
    ### 3. Typography
    
    Type is the second-most-recognizable element of a brand after the logo.
    
    **Components of a type system:**
    
    - **Display typeface.** For headlines and brand moments.
    - **Body typeface.** For long-form reading. Often the same as display, sometimes different.
    - **Monospace typeface** (if applicable for technical brands).
    - **Type scale.** The set of sizes used across applications. Typically 5 to 8 steps.
    - **Type weights and styles.** Which weights and italics are part of the system.
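
    One common way to derive those scale steps is a modular scale built from a base size and a fixed ratio; a sketch with placeholder values, not a prescription:

    ```ts
    // Generate a type scale: base size multiplied by a fixed ratio per step.
    function typeScale(base: number, ratio: number, steps: number): number[] {
      return Array.from({ length: steps }, (_, i) =>
        Math.round(base * Math.pow(ratio, i) * 100) / 100
      );
    }

    // Placeholder values: 16px base, 1.25 ratio, 6 steps.
    console.log(typeScale(16, 1.25, 6)); // [16, 20, 25, 31.25, 39.06, 48.83]
    ```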
    
    **Design principles:**
    
    - **Pairing.** If using two typefaces, they must work together at body and display sizes. Common pattern: serif display + sans body, or geometric sans display + humanist sans body.
    - **Web licensing.** Confirm web licensing covers expected pageviews. Some popular typefaces have prohibitive web licenses.
    - **Variable fonts** are increasingly the right call for performance and flexibility.
    - **Fallback stack.** Specify system fallbacks for when the brand font fails to load. The fallback should be visually similar.
    - **Open-source alternatives.** Document open-source equivalents for situations where licensing is impractical (third-party tools, embedded contexts).
    
    ### 4. Imagery and illustration
    
    Imagery direction is often underspecified, leading to brand drift over time.
    
    **Photography direction:**
    
    - **Subject matter.** What does the brand show?
    - **Composition style.** Tight crops, wide environments, candid, posed?
    - **Lighting.** Bright and natural, dramatic and directional, soft and diffused?
    - **Color treatment.** True color, warm shifted, desaturated, high-contrast?
    - **What to reject.** Stock photo aesthetics, specific cliches in the category.
    
    **Illustration direction:**
    
    - **Style.** Flat, dimensional, hand-drawn, geometric, abstract, representational?
    - **Color use.** Full palette, restricted palette, brand colors only?
    - **Line treatment.** Bold and outlined, soft and shaded, no outlines?
    - **Subject matter.** What gets illustrated and what does not?
    
    **Iconography:**
    
    - **Style.** Filled or outline. Rounded or sharp. Single weight or variable.
    - **Grid.** Pixel-perfect grid (often 24x24 or 16x16 base).
    - **Stroke weight.** Consistent across the icon set.
    - **Set.** Which icons exist, and how new icons get added consistently.
    
    ### 5. Motion
    
    If the brand lives in any digital product, marketing video, or animated touchpoint, motion is part of the identity.
    
    **Motion principles to define:**
    
    - **Easing curves.** Default easings used across animations. Linear, ease-in-out, custom.
    - **Duration scale.** Fast (100ms), medium (200-300ms), slow (500ms+) and when each applies.
    - **Choreography.** How elements enter, exit, and respond to interaction.
    - **Brand moments.** Signature animations (logo build, page transitions, loaders).
    
    **Design principles:**
    
    - **Restraint.** Most motion in product UI should be quick and subtle. Reserve dramatic motion for brand moments.
    - **Reduced motion.** Always provide a `prefers-reduced-motion` alternative for users with vestibular sensitivities.
    - **Performance.** Animations must be 60fps on the target devices. Profile before shipping.
    
    ---
    
    ## Workflow
    
    1. **Confirm brand direction.** Positioning, mood, audience. If unclear, return to `brand-ideation`.
    2. **Audit applications.** List every place the brand will appear (web, print, packaging, app, social, signage, video). The application contexts drive what the system needs to handle.
    3. **Element-by-element design.** Start with logo and color, since these constrain typography. Then type. Then imagery, iconography, motion.
    4. **Stress-test.** Apply the system to 3 to 5 mock applications (homepage, social post, business card, product UI, signage). Where does it break? Iterate.
    5. **Document.** Each element with rules, examples, and dos/don'ts. This becomes the input to `brand-style-guide`.
    6. **Get sign-off** before broad rollout. Identity changes after rollout are 10x the cost of changes during design.
    
    ---
    
    ## Failure patterns
    
    - **Designing logos in isolation from application.** A logo that looks great on a hi-res mockup but illegible at favicon size is a design failure.
    - **Picking a brand color without contrast testing.** A brand color that fails WCAG AA against white means UI built on the brand will be inaccessible by default.
    - **Specifying typography without checking web licensing.** Discovering at deploy that the foundry charges per-pageview for the chosen typeface is a budget-buster.
    - **Skipping imagery direction.** Without it, the brand becomes whatever stock photos the next person picks.
    - **Treating motion as decoration.** Inconsistent motion erodes brand cohesion as fast as inconsistent color.
    - **Designing only primary states.** What does the brand look like in error? In dark mode? In a localization where the wordmark needs to flip direction? These are not edge cases; they are the brand.
    
    ---
    
    ## Output format
    
    Default output is a structured set of files:
    
    - `identity/logo/` - All logo variants (SVG primary, plus PNG/JPG exports at common sizes)
    - `identity/colors.md` - Color system with hex codes, contrast ratios, usage rules
    - `identity/typography.md` - Type system with scale, weights, fallbacks
    - `identity/imagery.md` - Imagery direction with reference examples
    - `identity/iconography/` - Icon set or icon style spec
    - `identity/motion.md` - Motion principles
    - `identity/applications/` - 3 to 5 application mockups stress-testing the system
    
    These feed directly into `brand-style-guide`.
    
    ---
    
    ## Reference files
    
    - `references/identity-system-spec.md` - Detailed spec template for documenting each element.
    - `references/contrast-and-accessibility.md` - Accessibility checks for color and type, with the math.
    
  • skills/ads-creative-development/SKILL.md
    ---
    name: ads-creative-development
    description: "How to produce ad creative that converts at performance scale. Hook patterns, format selection, video pacing, variation systems, sequential testing methodology, fatigue detection, brand-voice alignment without conversion dilution, and platform-specific creative norms. Triggers on ad creative, ad design, hook patterns, ad video pacing, creative testing, ad variations, creative refresh, creative fatigue, refresh ad creative, video ads for Meta, TikTok creative, LinkedIn ad creative, ad asset library. Also triggers when a team is producing creative at scale, planning a creative test cycle, or auditing why creative is not converting."
    category: marketing
    catalog_summary: "Hook patterns, format selection, video pacing, variation systems, testing methodology, fatigue detection, and the platform-specific creative norms that separate ads from clutter"
    display_order: 2
    ---
    
    # Ads Creative Development
    
    A senior creative strategist's playbook for producing ad creative that performs.
    
    Performance creative is a different discipline from brand creative. Brand work optimizes for memorability, emotional resonance, and distinctive identity. Performance creative optimizes for stopping the scroll, communicating value in three seconds, and producing a click. Both matter. Mixing them up costs money. Brand creative running as performance ads bleeds budget; performance creative running as brand ads erodes equity.
    
    This skill is the discipline that produces performance creative without diluting brand. It assumes you know your audience and offer (see `paid-media-strategy`). It assumes you have brand-voice guidance (see `brand-voice`). The hard part is the systematic production of variations that test cleanly and ship without manual approval bottlenecks, and that is what is here.
    
    When to use this skill: producing ad creative for paid campaigns, planning a creative testing cycle, diagnosing creative fatigue, or auditing why creative is not converting.
    
    ---
    
    ## What this skill is for
    
    This skill spans creative production, hook patterns, testing methodology, and fatigue diagnosis. It does not cover paid media strategy (use `paid-media-strategy`), result interpretation (use `ads-performance-analytics` once it ships), or brand voice authoring (use `brand-voice`). Pair this skill with the relevant integrations microsite for platform-specific MCP details and example prompts.
    
    The audience is an ad creative producer, a growth marketer responsible for creative testing, or an agency producing creative at scale. The voice is tactical. There is no "consider every option." Performance creative has shape, and a senior practitioner can map a brief to a production-ready variation matrix in an afternoon.
    
    ---
    
    ## Performance vs brand creative
    
    The two disciplines optimize for different metrics. Brand creative optimizes for memorability, distinctiveness, and emotional resonance over months. Performance creative optimizes for scroll-stop in 1 second, value comprehension by 5 seconds, and click by 15.
    
    The shared layer. Both should reflect brand voice. Both should look like they came from the same brand. The difference is structure, pacing, and where the creative effort concentrates.
    
    The failure mode. Most agency creative tries to do both and does neither well. The brand video that runs as a 60-second performance ad has a strong narrative arc and zero CTR. The performance ad that ignores brand voice converts but trains the audience to not recognize the brand. The fix is not to compromise; it is to produce both, in their respective formats, with shared voice and divergent structure.
    
    A worked example. A premium coffee brand running a 60-second YouTube awareness ad gets to build the world: cinematography, the founder's hands, slow-pour rituals, music that sets a mood. That same brand running a 15-second Meta Reels performance ad gets 1 second to stop the scroll (a visual pattern interrupt: the steam rising from a cup, fast cut, brand logo dropping in), 4 seconds to clarify the offer (the new flavor, the price, the deal), 8 seconds for social proof (three customer-style testimonials, fast cuts), and 2 seconds for CTA (shop now, end card with logo). Same voice. Different structure. Different pacing. Different creative effort distribution.
    
    ---
    
    ## Hook patterns: the first 3 seconds
    
    The biggest lever in performance creative. If the hook fails, no amount of body copy or CTA can recover the impression. A user who scrolled past the first second has already decided. The hook is the whole game until the body justifies why the user kept watching.
    
    Twelve hook patterns work consistently. Detail in [`references/hook-pattern-library.md`](references/hook-pattern-library.md).
    
    1. **Problem-agitate-solve.** Open with the problem the audience feels. Agitate by naming the consequence. Solve with the offer. Works when the audience recognizes the pain.
    2. **Direct callout to audience.** "If you are a B2B founder running paid ads..." Triggers self-identification. Works when the audience is narrow and self-aware.
    3. **Contrarian claim.** "Stop using lookalike audiences." Hooks attention by violating expectation. Works when the audience has heard the conventional wisdom too often.
    4. **Result-led.** "How we cut CAC 40% in 30 days." Specific number, specific timeframe. Works when the result is real and documented.
    5. **Curiosity gap.** "The mistake 80% of marketers make..." Promises a payoff after the gap. Works when the gap is real; clickbait without payoff trains the audience to scroll past.
    6. **Social proof at top.** "Used by 10,000+ teams." Validation before pitch. Works when the proof is impressive enough to do the heavy lifting.
    7. **Visual pattern interrupt.** A surprising visual that does not match the platform's usual feed flow. Works on TikTok and Reels where the pattern is fast and the interrupt is louder.
    8. **Question that hits intent.** "Tired of paying $400 for project management software?" The question pre-qualifies the audience. Works when the question matches a real search query the audience has typed.
    9. **Number-led.** "3 changes that doubled our ROAS." Lists trigger the brain's pattern-completion instinct. Works for educational content; less so for product ads.
    10. **Personal story open.** "Last year I was burning $50K a month on Meta ads with no return..." First-person specificity is hard to skip. Works when the story is real and the conclusion is action-relevant.
    11. **Comparison.** "X vs Y: which actually works." Pits two options against each other. Works when the audience is in evaluation mode.
    12. **Behind-the-scenes / process.** "How we onboard a new client in 7 days..." Demystifies the work. Works when the process is the differentiator.
    
    For each pattern, the anti-pattern is the same: a hook that does not actually hook. Generic openings ("In today's world...", brand-logo cards, slow zooms over title cards) train the audience to scroll past. The first second is for the hook. The brand can wait.
    
    ---
    
    ## Format selection
    
    Different formats fit different combinations of audience, platform, and offer.
    
    - **Static image.** Best for simple value props, retargeting, and quick test cycles. Lowest production cost. Limited room for narrative.
    - **Carousel.** Best for multi-feature products, educational content, and B2B SaaS. Each card carries one idea; users swipe through at their own pace. Strong for considered purchases.
    - **Video (in-feed).** Best for demonstration, story, and broad audiences. Higher production cost. Performance correlates strongly with hook quality.
    - **UGC-style video.** Best for trust building, social proof, and lower production cost. Looks like an ordinary user filmed it. Especially strong on TikTok and Reels.
    - **Stories or Reels (vertical 9:16).** Best for TikTok, Instagram, and Snap. Native to the platform's primary surface. Shipping anything not vertical here leaves performance on the table.
    - **Spark Ads (boosted organic).** Best for TikTok. Promotes an existing organic post as an ad. Retains organic engagement signals; consistently outperforms pure paid creative on TikTok.
    
    The decision rule. Match format to platform native style and to audience consumption pattern. Detail in [`references/format-decision-matrix.md`](references/format-decision-matrix.md).
    
    ---
    
    ## Video pacing
    
    Video performance correlates more with pacing than with production value. A well-paced phone-shot video outperforms a poorly-paced agency-produced spot. Specific guidance for the 15-second performance video.
    
    | Time window | Job |
    |---|---|
    | 0 to 1s | Hook. Visual pattern interrupt plus audio hook. |
    | 1 to 3s | Clarify what this is. Brand and offer registered. |
    | 3 to 7s | Value proposition. The problem-solution moment. |
    | 7 to 12s | Social proof or demonstration. Show, do not just tell. |
    | 12 to 15s | CTA. End card with logo and call to action. |
    
    Anything past 15 seconds in a performance video is awareness territory. Performance creative should resolve by 15s.
    
    Platform variations. TikTok performs at 15 to 30 seconds; the platform tolerates longer because the audience is in a longer dwell mode. Meta In-Feed performs at 15s. YouTube Shorts at 15 to 60s. Long-form YouTube at 30s+ for awareness, with the value prop still front-loaded.
    
    ---
    
    ## Variation systems
    
    The systematic way to produce 20 to 50 ad variations from one core concept without manual creative authoring per variation.
    
    The decomposition. A creative is a hook plus a body plus a CTA in a format. Treat each as a variable.
    
    - 1 concept times 5 hooks times 4 bodies times 3 CTAs times 2 formats equals 120 theoretical variations.
    - Most are not worth shipping. The matrix narrows to 20 to 40 variations the team will actually run.
    - Asset library structure: organize by concept, not by date. `concepts/launch-2026/hooks/`, `concepts/launch-2026/bodies/`, etc.
    
    Naming convention. Use a structured naming system so the analyst can join performance data back to creative components. Example: `launch2026_meta-reel_hookA_bodyB_ctaC_v1`. The analyst pulls the report, joins on the naming components, and identifies which hook is winning across body and CTA combinations.
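
    A minimal sketch of expanding components into structured names under that convention; the component labels and campaign name are placeholders:

    ```ts
    // Expand hook/body/CTA/format components into structured variation names.
    const hooks = ["hookA", "hookB", "hookC"];
    const bodies = ["bodyA", "bodyB"];
    const ctas = ["ctaA", "ctaB"];
    const formats = ["meta-reel", "meta-static"];

    const campaign = "launch2026"; // placeholder campaign label

    const variations = formats.flatMap((format) =>
      hooks.flatMap((hook) =>
        bodies.flatMap((body) =>
          ctas.map((cta) => `${campaign}_${format}_${hook}_${body}_${cta}_v1`)
        )
      )
    );

    console.log(variations.length); // 3 x 2 x 2 x 2 = 24 names to narrow down
    console.log(variations[0]);     // launch2026_meta-reel_hookA_bodyA_ctaA_v1
    ```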
    
    Worked example in [`references/creative-variation-templates.md`](references/creative-variation-templates.md). A half-day production session produces 40 variations from one core concept by using the matrix. Without the matrix, the same 40 variations require five days of authoring overhead.
    
    ---
    
    ## Sequential testing methodology
    
    The waterfall. Most teams test everything at once and lose the signal. Sequential testing isolates the variable that matters at each step.
    
    1. **Test hooks first.** 5 to 10 hook variants, same body, same CTA, same format. The hook is the biggest lever; isolate it first.
    2. **Winners advance to body testing.** Top 2 to 3 hooks each get 5 to 10 body variants. Now you are testing what the audience watches after the hook lands.
    3. **Winners advance to CTA testing.** Top hook plus body combinations each get 3 to 5 CTAs. Smaller search space at this stage; the differences are usually marginal.
    4. **Winners advance to format testing.** Top combinations each render in 2 to 3 formats (vertical video, carousel, static). Catches format-specific drop-offs.
    5. **Top combos go into evergreen rotation.** Two to four winners run on rotation. Refresh on the cadence below.
    
    The common mistake. Testing all variables at once. Variance compounds; the team cannot tell whether the hook, the body, the CTA, or the format made the difference. Sequential is slower but produces real learnings the team can apply to the next campaign.
    
    Detail in [`references/testing-cadence-playbook.md`](references/testing-cadence-playbook.md).
    
    ---
    
    ## Creative fatigue detection
    
    Fatigue is real. The same audience seeing the same creative six times a week tunes it out, or worse, develops negative associations. Five signals indicate fatigue.
    
    1. **Frequency above 4 to 5 per user per week.** Set explicit caps. Defaults are usually too high.
    2. **CTR declining 30%+ week over week.** The creative is no longer fresh to the audience.
    3. **CPM increasing without audience saturation explanation.** The platform is having to bid harder to deliver impressions because engagement signals are weakening.
    4. **Negative comments increasing.** A direct user signal that the audience is irritated.
    5. **Hide ratio increasing.** Meta exposes this as "negative feedback rate." TikTok exposes "not interested" rate.
    
    Refresh cadence. Weekly for high-spend campaigns ($50K+/month). Biweekly for medium spend. Monthly for low spend. The economics: producing a fresh variant is cheaper than running tired creative for one extra week.
    
    The decision tree. If a top performer's metrics are still strong but frequency is climbing, ship variants of the same concept (same hook structure, different copy and visuals). If the metrics are dropping, retire the concept and ship a new one. Detail in [`references/fatigue-detection-checklist.md`](references/fatigue-detection-checklist.md).
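
    A sketch of turning the first two signals into a weekly check; the thresholds mirror the ones above, and the metric fields are assumptions about whatever reporting export the team uses:

    ```ts
    // Flag creatives showing the first two fatigue signals from a weekly export.
    interface WeeklyCreativeMetrics {
      creative: string;      // structured variation name
      frequency: number;     // average impressions per user this week
      ctr: number;           // click-through rate this week
      ctrPrevWeek: number;   // click-through rate last week
    }

    function fatigueSignals(m: WeeklyCreativeMetrics): string[] {
      const signals: string[] = [];
      if (m.frequency > 4) signals.push("frequency above 4 per user per week");
      if (m.ctrPrevWeek > 0 && (m.ctrPrevWeek - m.ctr) / m.ctrPrevWeek >= 0.3) {
        signals.push("CTR down 30%+ week over week");
      }
      return signals;
    }

    const report = fatigueSignals({
      creative: "launch2026_meta-reel_hookA_bodyB_ctaC_v1",
      frequency: 5.2,
      ctr: 0.008,
      ctrPrevWeek: 0.013,
    });
    console.log(report); // both signals fire -> refresh or retire per the decision tree above
    ```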
    
    ---
    
    ## Brand-voice alignment without conversion dilution
    
    The tension. Squeezed into 15-second ads, brand voice can come out feeling diluted and off-brand. The fix is hierarchy, not compromise.
    
    Voice attributes that survive compression. Tone (playful, serious, expert, irreverent). Cadence (short sentences vs long-form). Vocabulary (industry-specific or accessible). Visual treatment (color, type, motion language). These survive in 15s because they live in the texture of every frame.
    
    Voice attributes that do not survive compression. Deep narrative arcs. Complex metaphors. Layered humor that requires setup. Long-form storytelling. These need 30s+ of runtime to land. Forcing them into 15s produces diluted versions that read as off-brand.
    
    The hierarchy. Brand voice in every frame; performance discipline in the structure. The brand voice shows up in tone, cadence, vocabulary, and visual treatment. The performance discipline shows up in pacing, hook strength, and CTA placement.
    
    A worked example. A brand whose voice is "warm and direct, slightly contrarian, plain-language" runs three creatives. The 60-second YouTube awareness piece has the founder talking to camera, plain language, contrarian framing of the category. The 15-second Meta performance ad has fast cuts, plain language captions on screen, contrarian hook ("Stop overpaying for project management software"), specific value prop, CTA. The static carousel has 6 cards, plain language, contrarian opening card, value prop progression, CTA on the last card. Same voice. Different structure. None feels off-brand to a customer who has seen all three.
    
    Detail in [`references/brand-voice-performance-balance.md`](references/brand-voice-performance-balance.md).
    
    ---
    
    ## Platform-specific creative norms
    
    Each platform has native norms. Violating them tanks performance. The fix is producing platform-native creative, not repurposing one asset across every channel.
    
    **Meta (Facebook plus Instagram).** In-feed visual hierarchy. Captions on by default (most users watch sound-off). Native-feeling production beats studio production for direct response. Vertical 9:16 for Reels and Stories; 1:1 or 4:5 for in-feed. CTAs above the fold or in the first 2 lines of the caption.
    
    **TikTok.** Vertical 9:16 only. Native-creator aesthetic; phone-shot is the default. Fast cuts every 1 to 2 seconds. Captions for accessibility. Music-driven; trending sounds compound reach but expire fast. Spark Ads (boost an existing organic post) consistently outperform pure paid because they retain organic engagement signal.
    
    **LinkedIn.** Professional tone. Slower pacing tolerated; B2B audiences are in research mode rather than scroll mode. B2B vocabulary is fine; consumer-style copy reads as off-platform. Thought leadership angle works; product pitches feel salesy.
    
    **Google Search.** Text-only headlines plus descriptions. Character limits are real constraints (30-character headlines, 90-character descriptions). Responsive Search Ad (RSA) optimization rewards supplying the full 15 headlines and 4 descriptions; the platform mixes and matches them.
    
    **Google Display and YouTube.** Depends on placement. YouTube allows longer narrative (15s to 6m); Display is fast banner-style. Skippable YouTube ads need to earn the next second of attention; non-skippable annoys.
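
    The norms above can live as a small lookup that the production checklist reads from. A sketch summarizing this section (values are illustrative and limited to what is stated above):
    
    ```python
    # Illustrative per-platform norms; extend from the platform-creative-norms reference.
    PLATFORM_NORMS = {
        "meta": {
            "aspect_ratios": ["9:16 (Reels/Stories)", "1:1 or 4:5 (in-feed)"],
            "captions_on": True,
            "aesthetic": "native-feeling beats studio for direct response",
        },
        "tiktok": {
            "aspect_ratios": ["9:16"],
            "captions_on": True,
            "aesthetic": "phone-shot creator native, cuts every 1-2 seconds",
        },
        "linkedin": {
            "pacing": "slower tolerated",
            "tone": "professional, B2B vocabulary",
        },
        "google_search": {
            "headline_chars": 30,
            "description_chars": 90,
            "rsa_headlines": 15,
            "rsa_descriptions": 4,
        },
        "youtube": {
            "length": "15 seconds to 6 minutes depending on placement",
            "rule": "skippable ads must earn the next second of attention",
        },
    }
    ```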
    
    Detail in [`references/platform-creative-norms.md`](references/platform-creative-norms.md).
    
    ---
    
    ## Common failures
    
    Eight patterns recur across creative production work. The short version.
    
    - "We made a beautiful brand video and ran it as performance." Wrong format and length for the channel. Brand video belongs in awareness; performance ads need the 15-second structure.
    - "All our ads use the same hook." Saturation. Ship five to ten hook variants per concept and rotate.
    - "Our ads stopped working after 3 days." Fatigue plus narrow audience. Either ship more variants or expand the audience; usually both.
    - "We A/B tested 50 ad variations." Too many simultaneous variables. Run sequential tests instead.
    - "Creative passes brand approval but underperforms." Brand discipline at the cost of performance discipline. The hierarchy is wrong; voice in every frame, performance in the structure.
    - "TikTok ad uses Meta-style production." Platform norm violation. TikTok rejects polished ad creative; phone-shot native wins.
    - "Our hooks all start with the brand logo." Kills the first-3-second hook. Brand logo belongs at second 1 to 3 (after the hook lands), not at second 0.
    - "We never refresh creative because the old set still works ok." Incremental loss compounds. CTR decay is gradual; the team that does not refresh discovers a 40% performance gap when they finally check.
    
    ---
    
    ## The framework: 10 considerations for sustainable creative production
    
    When designing or auditing ad creative, walk these 10 considerations.
    
    1. **Performance vs brand.** Name what this creative is optimizing for; do not mix.
    2. **Hook strength.** First 3 seconds. Scroll-stopping. Specific.
    3. **Format-platform fit.** Native to the channel.
    4. **Pacing.** Value prop hits within 5 seconds; CTA by 15s.
    5. **Variation system.** Matrix-based, not ad-hoc.
    6. **Sequential testing.** Hooks, then bodies, then CTAs, then formats.
    7. **Fatigue monitoring.** Frequency, CTR decay, CPM trend, negative feedback.
    8. **Brand-voice alignment.** Voice that survives compression; performance in the structure.
    9. **Platform creative norms.** Vertical for short-video platforms, captions on, native production.
    10. **Refresh cadence.** Weekly to monthly by spend tier.
    
    The output of the framework is a production plan. A list of variations to ship in the next testing cycle, the testing waterfall to run, the fatigue thresholds that trigger refresh, and the platform-specific norms the variations respect. Three answers from the framework: ship as planned, revise the plan, or stop because a precondition is missing.
    
    ---
    
    ## Reference files
    
    - [`references/hook-pattern-library.md`](references/hook-pattern-library.md) - Twelve hook patterns with worked examples and anti-patterns for each.
    - [`references/format-decision-matrix.md`](references/format-decision-matrix.md) - Maps audience and objective context to a recommended format, with reasoning and common alternatives.
    - [`references/creative-variation-templates.md`](references/creative-variation-templates.md) - Matrix-based production system, naming conventions, asset library structure, half-day worked example.
    - [`references/testing-cadence-playbook.md`](references/testing-cadence-playbook.md) - Sequential testing waterfall with per-stage variant counts, durations, and winner-advancement criteria.
    - [`references/fatigue-detection-checklist.md`](references/fatigue-detection-checklist.md) - Five fatigue signals with thresholds and refresh decision tree.
    - [`references/platform-creative-norms.md`](references/platform-creative-norms.md) - Per-platform aspect ratios, lengths, audio expectations, caption norms, and native-aesthetic anti-patterns.
    - [`references/brand-voice-performance-balance.md`](references/brand-voice-performance-balance.md) - Voice attributes that survive 15-second compression vs those that do not, with a four-format worked example.
    
    ---
    
    ## Closing: the hook is the whole game
    
    In performance creative, the hook is the whole game. If the hook fails, no body copy or CTA can recover the impression. A user who scrolled past the first second has already decided.
    
    Spend the disproportionate creative effort on the first 3 seconds. The rest is delivery. The team that puts 60% of its creative effort into the hook and 40% into everything else outperforms the team that distributes effort evenly. The team that puts 90% of its effort into hooks and 10% into delivery beats both, as long as it ships enough variations for the testing waterfall to find which hooks actually work.
    
    When in doubt, ship the variant. A weak variant ships; a perfect variant never does. The sequential testing waterfall finds the winners; the variation system makes shipping cheap. The asymmetric cost of inaction is real; the asymmetric cost of shipping a marginal variant is small.
    
  • skills/ads-performance-analytics/SKILL.mdskill
    Show content (24125 bytes)
    ---
    name: ads-performance-analytics
    description: "How to read paid media dashboards without fooling yourself. Attribution models, platform reporting quirks, multi-platform reconciliation, ROAS vs LTV horizon traps, statistical noise in performance metrics, incrementality testing, and the failure modes that produce expensive lessons. Triggers on read paid media dashboard, attribution analysis, ROAS vs LTV, multi-platform reconciliation, ad incrementality, geo holdout, conversion lift study, ghost bidding, paid media reporting, board-deck paid media metrics, blended CAC, MMM, MTA, last-click attribution. Also triggers when a marketer is about to scale, kill, or rebudget a campaign based on platform metrics, or when reconciling platform reports against warehouse revenue."
    category: marketing
    catalog_summary: "Read paid media dashboards without fooling yourself: attribution models, platform reporting quirks, ROAS vs LTV, multi-platform reconciliation, incrementality testing, and the interpretation failures that compound into wasted budget"
    display_order: 3
    ---
    
    # Ads Performance Analytics
    
    A data-team-mentor's playbook for interpreting paid media dashboards without fooling yourself.
    
    The dashboard is the moment of truth for paid media decisions. The numbers on it determine whether you scale, hold, or kill. They also expose every platform's self-attribution bias, every modeled-conversion shortcut, every cross-platform double-count. Most wrong "scale this campaign" decisions trace back to a misread dashboard.
    
    This skill is the discipline that prevents misreading. It assumes the campaign was strategically sound (see `paid-media-strategy`). It assumes the creative was tested properly (see `ads-creative-development`). The hard part is knowing what each number actually means, what it does not, and how to reconcile platform-reported metrics with the truth in your warehouse.
    
    When to use this skill: any time you are about to scale, kill, or rebudget a campaign based on platform metrics; reconciling platform reports with revenue data; evaluating an agency's reporting; or building a paid media dashboard that will not lie to you.
    
    ---
    
    ## What this skill is for
    
    This skill spans paid media result interpretation. It does not cover paid media strategy (use `paid-media-strategy`), creative production (use `ads-creative-development`), or platform-specific tooling (covered in the integrations microsites). Pair this skill with the relevant integrations microsite for platform-specific MCP commands and example prompts.
    
    The audience is a marketer, growth analyst, agency analyst, or founder evaluating paid media reports. The voice is patient and clinical. There is no "trust the platform's number" or "ignore the platform entirely." Both are wrong. The discipline is knowing which numbers from which platform mean what, and what to reconcile against to make the actual decision.
    
    ---
    
    ## The result panel: what every paid media platform should expose
    
    A trustworthy result panel exposes nine things. Anything missing is a signal to treat reported numbers with extra skepticism.
    
    1. **Spend, impressions, clicks.** Table-stakes metrics. Should match across platforms within rounding.
    2. **Conversions with definition and window visible.** Not just a count; the definition of what counts as a conversion and the attribution window applied. Without this, the count is unreadable.
    3. **Attribution breakdown.** Last-click vs view-through vs modeled. The mix of how the conversions were credited.
    4. **Frequency.** Impressions per unique user. The fatigue early-warning system.
    5. **Audience saturation.** Where the platform exposes it. A flat audience-saturation curve means there is room to scale; a steep curve means efficiency is dropping.
    6. **Time series.** Daily breakdown to spot novelty effects, fatigue, day-of-week patterns, and exogenous variance.
    7. **Cost metrics in clear currency.** CPC, CPM, CPA, ROAS with the math defined and the currency labeled. Do not assume USD.
    8. **Conversion path data.** Touchpoints before conversion, where available. Tells you whether a campaign is a closer or an opener.
    9. **Filters, segments, and exports.** Without these, the panel is a brochure, not a tool.
    
    Platforms hide what makes their reporting look weakest. Google PMax hides keyword-level and placement-level data. Meta hides the modeled-conversion share. LinkedIn hides cross-device click paths. Treat hidden metrics as the place to dig.
    
    ---
    
    ## Platform-reported vs reality
    
    Every platform's dashboard is optimized to make the platform look effective. This is not a moral failing; it is a structural incentive. Platforms with rosier reporting attract more spend.
    
    **Conversion windows.** Meta defaults to 7-day click plus 1-day view. Google defaults to 30-day click plus 1-day view. Different windows, same activity, different reported numbers. If you compare Google's 30-day-click count against Meta's 7-day-click count, you are comparing different definitions and pretending they are the same.
    
    **View-through attribution.** Counted by Meta and Google for users who saw but did not click. Often half the reported "conversions" are view-through. Treat view-through as a signal of awareness contribution, not as a direct response measurement. The user might have converted from organic search anyway.
    
    **Modeled conversions.** When iOS users opt out of tracking, Meta and others statistically model what the conversion would have been. Modeled numbers are educated guesses, not measurements. They are useful for direction; they are not reliable for precision.
    
    **Self-attribution bias.** Every platform's pixel fires on conversion and the platform claims credit. If you ran Meta, Google, and TikTok in the same week, all three platforms report your conversions as theirs. Sum-of-platforms is always greater than 100% of actual conversions.
    
    The discipline. Never report platform-reported numbers as fact in board decks. Always reconcile against the single source of truth (warehouse, GA4, or unified analytics platform). Detail in [`references/platform-reporting-quirks.md`](references/platform-reporting-quirks.md).
    
    ---
    
    ## Attribution models in practice
    
    Seven models and one anti-model. None is right. They are all approximations. The discipline is picking one, committing, and reading the others as sanity checks.
    
    **Last-click.** Simple, reproducible, undercredits awareness. The conversion is fully credited to the last click before the conversion event. Easy to compute; easy to compare across channels; bad for understanding upper-funnel contribution.
    
    **First-click.** Opposite bias. Fully credits the first touchpoint, undercredits closing channels. Useful as a sanity check against last-click; rarely the right primary view.
    
    **Linear.** Equal credit across all touchpoints. Gives every channel something. Defensible; not informative. Most useful for board reporting where avoiding "Google gets 70% so we cut Meta" politics matters more than precision.
    
    **Time-decay.** More credit to recent touchpoints. Reflects the intuition that recent ads are more influential. Hard to argue against; hard to verify.
    
    **U-shaped (position-based).** Heavy on first and last (40% each), light on middle (20% distributed). Honors both opener and closer roles. The default in many MTA tools.
    
    **Data-driven attribution (DDA, Google).** Machine-learning model that distributes credit based on observed conversion paths. Opaque; hard to audit. The closest to "right" for digital channels but a black box.
    
    **Marketing mix modeling (MMM).** Regression-based, top-down. Uses spend and revenue time series across channels to estimate channel contributions. Requires 2+ years of data. The strongest defense against platform self-attribution because it does not rely on platform-reported conversions at all.
    
    **The anti-model: trusting platform-reported attribution.** Each platform's "DDA" or "attributed conversions" is the platform's self-attribution. Sum across platforms exceeds reality. Use platform attribution for in-flight optimization within the platform; use a unified attribution model for cross-channel decisions.
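
    A minimal sketch of how the rule-based models above distribute credit over a single conversion path (the path, the half-life, and the assumption of unique channel names are illustrative; DDA and MMM are fitted models and are not reproduced here):
    
    ```python
    # Ordered touchpoints before one conversion; assumed unique channel names.
    path = ["meta_prospecting", "google_search", "email", "meta_retargeting"]
    
    def last_click(path):
        return {path[-1]: 1.0}
    
    def first_click(path):
        return {path[0]: 1.0}
    
    def linear(path):
        share = 1.0 / len(path)
        return {t: share for t in path}
    
    def u_shaped(path):
        # 40% first, 40% last, 20% spread across the middle touchpoints.
        if len(path) == 1:
            return {path[0]: 1.0}
        if len(path) == 2:
            return {path[0]: 0.5, path[-1]: 0.5}
        credit = {t: 0.0 for t in path}
        credit[path[0]] += 0.4
        credit[path[-1]] += 0.4
        for t in path[1:-1]:
            credit[t] += 0.2 / (len(path) - 2)
        return credit
    
    def time_decay(path, half_life=2):
        # Weight halves for every `half_life` positions further from the conversion.
        weights = [0.5 ** ((len(path) - 1 - i) / half_life) for i in range(len(path))]
        total = sum(weights)
        return {t: w / total for t, w in zip(path, weights)}
    
    for model in (last_click, first_click, linear, u_shaped, time_decay):
        print(model.__name__, {k: round(v, 2) for k, v in model(path).items()})
    ```
    
    Running the five rules over the same path makes the bias of each visible: last-click hands everything to retargeting, first-click to prospecting, and the blended rules sit in between.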
    
    Practical guidance.
    
    - Early-stage. Use last-click plus a single guardrail metric (warehouse-attributed CAC). Sophisticated attribution requires data volume you do not have.
    - Mid-stage. Data-driven attribution from Google plus GA4, with explicit awareness vs closing channel labeling.
    - Mature. MMM as the canonical incremental reference. MTA for in-flight optimization. Last-click for channel-level decisions where ambiguity is acceptable.
    
    Detail and a decision matrix in [`references/attribution-model-comparison.md`](references/attribution-model-comparison.md).
    
    ---
    
    ## Multi-platform reconciliation
    
    The trap. Google says you spent $50K with 800 conversions. Meta says $30K with 600. LinkedIn says $20K with 200. Total reported equals 1,600 conversions. Your warehouse says 950. Where did 650 go?
    
    The answer. Nowhere. They never existed. Each platform claimed conversions other platforms also claimed.
    
    The reconciliation pattern.
    
    - Trust the warehouse for total conversions and total revenue.
    - Trust platforms for relative ranking within platform (which campaign won, which audience won).
    - Never trust platform sums.
    - Compute blended CAC as (total ad spend across platforms) divided by (total new customers from warehouse). Not from platform reports.
    
    The board-deck pattern. Report warehouse-attributed conversion counts, never platform-summed. Report blended CAC, not channel-by-channel CAC unless explicitly noted as platform-self-attributed. Detail and templates in [`references/dashboard-reconciliation-patterns.md`](references/dashboard-reconciliation-patterns.md).
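
    A minimal sketch of the reconciliation math, using the numbers from the trap above (the assumption that every warehouse conversion is a new customer is illustrative):
    
    ```python
    platform_reports = {
        "google":   {"spend": 50_000, "conversions": 800},
        "meta":     {"spend": 30_000, "conversions": 600},
        "linkedin": {"spend": 20_000, "conversions": 200},
    }
    
    warehouse_conversions = 950    # the canonical count
    warehouse_new_customers = 950  # illustrative: assume each conversion is a new customer
    
    total_spend = sum(p["spend"] for p in platform_reports.values())         # 100,000
    platform_sum = sum(p["conversions"] for p in platform_reports.values())  # 1,600
    
    double_counted = platform_sum - warehouse_conversions                    # 650
    blended_cac = total_spend / warehouse_new_customers                      # ~105.26
    
    print(f"platform-summed conversions: {platform_sum}, warehouse: {warehouse_conversions}")
    print(f"double-counted: {double_counted}")
    print(f"blended CAC: ${blended_cac:,.2f}")
    ```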
    
    ---
    
    ## ROAS vs LTV: the time horizon trap
    
    ROAS is short-term. Revenue from purchases attributed to a campaign in a fixed window, often 7 to 30 days. LTV is long-term. Total customer lifetime revenue.
    
    Decisions made on ROAS can be wrong if LTV varies by channel. A worked example.
    
    Meta drives 2.5x ROAS at $40 CAC with $80 LTV. Over the customer lifetime, revenue covers the acquisition cost 2.0x over.
    
    Google drives 1.8x ROAS at $60 CAC with $200 LTV. Over the customer lifetime, revenue covers the acquisition cost 3.3x over.
    
    Google looks worse on ROAS, better on LTV-adjusted return. Allocating budget to Meta because the ROAS is higher is the wrong move.
    
    The fix. Cohort-based LTV by acquisition channel, updated quarterly. Compare channels on payback period or LTV-CAC ratio, not raw ROAS. The 2x ROAS heuristic is a dangerous shortcut. Same ROAS at different LTVs equals different actual returns.
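
    The worked example as arithmetic (a sketch; a full payback-period view would also need gross margin and revenue pacing, so only the LTV-to-CAC ratio is shown):
    
    ```python
    channels = {
        "meta":   {"roas_7d": 2.5, "cac": 40, "ltv": 80},
        "google": {"roas_7d": 1.8, "cac": 60, "ltv": 200},
    }
    
    for name, c in channels.items():
        ltv_cac = c["ltv"] / c["cac"]
        print(f"{name}: ROAS {c['roas_7d']}x, LTV:CAC {ltv_cac:.1f}x")
    
    # meta:   ROAS 2.5x, LTV:CAC 2.0x
    # google: ROAS 1.8x, LTV:CAC 3.3x  <- better long-horizon return despite lower ROAS
    ```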
    
    The trap that compounds. Performance teams optimize for short-term ROAS because the metric refreshes weekly. Brand and high-LTV channels get cut because their short-term ROAS is lower. Six months later, the brand pipeline has eroded and short-term ROAS itself drops because the cheap channels are saturated. The metric that drove the decision was the wrong horizon.
    
    ---
    
    ## Cohort analysis vs daily metrics
    
    Daily metrics tell you what happened today. Cohort analysis tells you whether today's customers are different from last month's.
    
    Three cohort cuts that matter for paid media.
    
    **By acquisition month.** Are users acquired in March retaining better than users acquired in January? A declining LTV over rolling acquisition cohorts means recent acquisition is lower quality; the daily metrics will show this two to three months later when the retention starts hurting.
    
    **By acquisition channel.** Are users from Meta retaining better than users from Google? Channel-level cohort divergence is the data behind the LTV-vs-ROAS argument. Meta might drive volume at lower LTV; Google might drive lower volume at higher LTV. The cohort tells the story the daily ROAS hides.
    
    **By acquisition campaign.** Campaign-level cohort signals. Useful for diagnosing why a campaign that "works" in week 1 produces no recurring revenue.
    
    The signal to act on. Declining cohort LTV over two consecutive months is the alarm. Pause the channel or campaign before the daily metrics force you to. Detail in [`references/cohort-analysis-templates.md`](references/cohort-analysis-templates.md).
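
    A sketch of the month-and-channel cohort cut with pandas, assuming a warehouse export with one row per customer (column names and rows are illustrative):
    
    ```python
    import pandas as pd
    
    # Illustrative warehouse export: one row per acquired customer.
    customers = pd.DataFrame({
        "customer_id":       [1, 2, 3, 4],
        "acquisition_month": ["2026-01", "2026-01", "2026-03", "2026-03"],
        "channel":           ["meta", "google", "meta", "google"],
        "revenue_to_date":   [55.0, 180.0, 30.0, 95.0],
    })
    
    cohort = (customers
              .groupby(["acquisition_month", "channel"])["revenue_to_date"]
              .agg(["count", "mean"])
              .rename(columns={"count": "customers", "mean": "ltv_to_date"}))
    
    print(cohort)
    # Declining ltv_to_date across consecutive acquisition months for a channel is the
    # alarm described above. Note that revenue_to_date understates young cohorts; in
    # practice, compare cohorts at a fixed age (e.g. revenue at day 90).
    ```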
    
    ---
    
    ## Statistical noise in performance metrics
    
    Most "the campaign improved 15% week over week" stories are noise. Real performance changes are 30% or more in metrics that vary 10 to 20% naturally. Below that threshold, you are looking at variance and calling it signal.
    
    Sources of noise in paid media metrics.
    
    - **Day-of-week effects.** B2B tends toward weekend dips; B2C tends toward weekend gains. A "Monday morning is better" hypothesis often dissolves when day-of-week is normalized.
    - **Holiday and seasonal effects.** Q4 dwarfs most "optimization" effects. A campaign launched in Q4 looks great because of seasonality, not strategy.
    - **Weather, news, competitor activity.** Real exogenous variance. Last week's news cycle can shift CPMs across an entire vertical.
    - **Pixel fire and reporting delay.** Conversions reported on a 7-day click window arrive incrementally. Reading the panel on Monday for last week's performance undercounts.
    
    The fix. Pre-commit to test duration before drawing conclusions. Use the experimentation discipline from `experimentation-analytics` for any directional change you want to claim is real. The signal-to-noise problem in paid media metrics is the same as the signal-to-noise problem in product experiments; the framework transfers.
    
    This is where `experimentation-analytics` bridges in. The statistical patterns are the same; the application is different. Read both for the full picture.
    
    ---
    
    ## Incrementality testing
    
    The honest test. If we had not run this ad, would we still have gotten the conversion? The conversions that would not have happened anyway are the incremental contribution.
    
    Most paid media is 30 to 70% incremental, not 100%. Some is zero. Branded search bidding is often 5 to 20% incremental (most converters would have found you organically). Retargeting is often 20 to 40% incremental (many of those users were going to convert anyway). Prospecting is often 50 to 90% incremental.
    
    Four methods.
    
    **Geo holdout.** Hold one region out from the campaign. Measure the difference in conversions between the holdout region and the matched test region. The cleanest causal test for paid media at scale.
    
    **Ghost bidding (Google).** Google's own incrementality tool. Bids on a holdout share of impressions but does not actually serve the ad. Reports incremental conversions. Honest signal; some teams find the math opaque.
    
    **Conversion lift studies (Meta).** Splits audiences into test and control. Test sees the ad; control does not. Reports incremental lift. The cleanest within-Meta test.
    
    **PSA tests.** Serve some users a public service announcement instead of your ad. Compare conversion rates. Useful for legacy brands with deep budget.
    
    Incremental rate ranges by channel type are in [`references/incrementality-testing-playbook.md`](references/incrementality-testing-playbook.md). The discipline is to run incrementality tests at least quarterly on the highest-spend channels. Without them, you are optimizing against platform-reported attribution that systematically overcounts.
    
    ---
    
    ## Geo experiments and holdouts
    
    For paid media specifically, geo-based testing is the most reliable causal method.
    
    **Geo holdout.** Turn off paid media in one region. Measure baseline organic conversions. The difference between expected and actual conversions in the holdout region is the incremental paid contribution.
    
    **Geo lift.** Scale spend in one region by 2x. See if conversions scale linearly. A linear scale means the channel has headroom. A sublinear scale means saturation; further spend hits diminishing returns.
    
    **Switchback.** Alternate weeks of campaign on and off. Compare on-weeks to off-weeks. Useful when geo splitting is not feasible.
    
    **Pre-and-post analysis.** Launch in a region; measure 30 days before vs 30 days after. Weak design because external factors confound the comparison. Use only when no other test is available.
    
    The right setup. Matched markets (similar demographics, similar baseline conversion rates). Statistical power calculation upfront (how big a difference can the test actually detect). Pre-committed analysis window (so you do not stop early when the data looks good or wait too long when it looks bad).
    
    The trap. Calling a geo test successful because of timing-correlated revenue lift. A campaign launched in October will see "lift" because Q4 is starting; without a control region, the lift is not attributable to the campaign.
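
    A minimal read of a geo holdout in the spirit above: estimate what the test region would have done without the campaign from the holdout region's growth, then compare. The numbers, the proportional-baseline assumption, and the final ROAS adjustment (which assumes platform-attributed conversions track total conversions) are all illustrative; a real test pre-commits power and duration first:
    
    ```python
    # Pre-period (no campaign anywhere) and test-period conversion counts by region.
    pre  = {"test_region": 1_000, "holdout_region": 500}
    test = {"test_region": 1_600, "holdout_region": 560}
    
    # Expected test-region conversions with zero campaign effect: scale the test
    # region's pre-period baseline by the holdout region's observed growth.
    holdout_growth = test["holdout_region"] / pre["holdout_region"]    # 1.12
    expected_without_campaign = pre["test_region"] * holdout_growth    # 1,120
    
    incremental = test["test_region"] - expected_without_campaign      # 480
    incremental_rate = incremental / test["test_region"]               # ~0.30
    
    reported_roas = 3.5
    adjusted_roas = reported_roas * incremental_rate                   # ~1.05
    
    print(f"incremental conversions: {incremental:.0f} ({incremental_rate:.0%} of test-period total)")
    print(f"incrementality-adjusted ROAS: {adjusted_roas:.2f}x vs reported {reported_roas}x")
    ```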
    
    ---
    
    ## Platform self-attribution bias
    
    A specific failure mode worth its own section.
    
    The mechanism. Platform's pixel fires on conversion. Platform claims credit. The user might have converted from any channel; the platform that loaded the pixel last gets the credit on the platform's own dashboard.
    
    Why platforms reward this design. More credit on the platform dashboard equals better-looking ROAS equals more advertiser spend. Platforms have no incentive to underreport their own contribution.
    
    Detection patterns. When platform-reported conversions exceed warehouse-attributed conversions for the same channel by more than 30%, you have heavy double-counting. When sum-of-platform-reports exceeds total conversions in the warehouse, you have cross-platform double-counting.
    
    The fix. Warehouse as canonical for board reporting. Platform reporting as in-flight signal only. Incrementality tests at least quarterly to keep the channel-attribution honest.
    
    A worked example. A retargeting campaign in Meta showed 3.5x ROAS for six months. The team scaled spend from $20K to $80K per month. A geo holdout test revealed that 65% of the "conversions" would have happened anyway from organic. Real ROAS adjusted for incrementality was 1.2x, not 3.5x. The campaign got cut and warehouse-attributed CAC dropped 18% in the next quarter.
    
    ---
    
    ## Common interpretation failures
    
    Twelve patterns recur in paid media reporting work. Detail in [`references/common-interpretation-failures.md`](references/common-interpretation-failures.md).
    
    - "ROAS dropped 20% week over week, kill the campaign." Could be noise. Pre-commit a test window before acting on weekly variance.
    - "Meta says 500 conversions, my warehouse says 200, who is right?" Both are wrong; warehouse is closer to truth, Meta self-attributes. Reconcile, do not pick a winner.
    - "We turned off Google PMax and conversions did not drop." PMax was harvesting branded search you would have gotten free. Audit branded queries inside PMax.
    - "The new campaign hit 5x ROAS in week 1." Likely retargeting hot leads. Check the audience composition before declaring victory.
    - "We A/B tested and one creative wins by 12%." Within margin of platform noise. Not significant.
    - "Our LTV calculation says this channel is profitable." Check cohort age. Recent cohorts may not have hit LTV yet; the calculation is a projection, not a measurement.
    - "The platform says high frequency is fine because conversions are still happening." Fatigue masked by free organic conversions. The campaign is taking credit for conversions that would have happened anyway.
    - "Last-click attribution shows Meta at 60% credit." Last-click bias. First-click view of the same data shows different. Pick a model and stick.
    - "We scaled spend 5x and conversions only doubled." Saturation. The channel found its ceiling; the marginal CAC at the new spend level is much higher.
    - "Brand campaigns underperform on direct ROAS." They do not have to. Brand impact shows up in other channels' efficiency. Measure brand against brand-search lift, not direct ROAS.
    - "ROAS held steady but profit dropped." The mix shifted toward lower-margin products. Channel-level ROAS hides product-mix effects.
    - "Agency reported a 4x ROAS month." Whose number? Platform-reported, warehouse-attributed, or model-adjusted? The unit of measurement matters more than the magnitude.
    
    ---
    
    ## The framework: 12 considerations for trustworthy paid media interpretation
    
    When reading a paid media dashboard about to inform a decision, walk these 12 considerations. Skipping any of them is how teams ship the wrong call.
    
    1. **Result panel completeness.** What is the platform showing vs hiding.
    2. **Platform-reported vs reality.** View-through, modeled conversions, conversion windows.
    3. **Attribution model.** Pick one and read the others as sanity checks.
    4. **Multi-platform reconciliation.** Sum-of-platforms is always inflated.
    5. **ROAS vs LTV horizon.** Short-term metric, long-term impact.
    6. **Cohort vs daily.** Cohort tells the quality story; daily tells the volume story.
    7. **Statistical noise.** Weekly variance, day-of-week, seasonal, exogenous.
    8. **Incrementality.** What would have happened without the spend.
    9. **Geo and holdout testing.** The honest causal test.
    10. **Self-attribution bias.** Platforms claim credit they do not deserve.
    11. **Decision rule.** Pre-committed scale up, hold, or pull back.
    12. **Single source of truth.** Warehouse over platform reporting for board metrics.
    
    The output of the framework is one of three answers. Scale (the campaign is incremental and unit economics work). Hold (data is ambiguous; gather more before deciding). Kill (the campaign is not incremental enough to justify the spend).
    
    ---
    
    ## Reference files
    
    - [`references/metric-definitions-glossary.md`](references/metric-definitions-glossary.md) - CTR, CPC, CPM, CPA, ROAS, LTV, AOV, frequency, reach, impressions, conversion window, view-through, modeled conversion, blended CAC, MER.
    - [`references/attribution-model-comparison.md`](references/attribution-model-comparison.md) - Last-click, first-click, linear, time-decay, U-shaped, DDA, MMM. Decision matrix by business stage.
    - [`references/platform-reporting-quirks.md`](references/platform-reporting-quirks.md) - Google PMax black box, Meta iOS impact and Conversions API, LinkedIn 30-day click defaults, TikTok video-completion attribution, programmatic viewability gates.
    - [`references/incrementality-testing-playbook.md`](references/incrementality-testing-playbook.md) - Geo holdout, ghost bidding, conversion lift, PSA tests, switchback designs. Setup, duration, analysis pattern, expected incremental rates.
    - [`references/dashboard-reconciliation-patterns.md`](references/dashboard-reconciliation-patterns.md) - Warehouse as canonical, platform as in-flight signal, blended CAC formula, board-deck patterns, reconciliation cadence.
    - [`references/cohort-analysis-templates.md`](references/cohort-analysis-templates.md) - By acquisition month, channel, and campaign. Retention curves, when to act on cohort signals.
    - [`references/common-interpretation-failures.md`](references/common-interpretation-failures.md) - Twelve failure patterns with symptom, root cause, fix, prevention.
    
    ---
    
    ## Closing: the courage to call it incremental zero
    
    Most paid media spend is not 100% incremental. Some channels are 70% incremental. Some are 30%. Some are zero, paying for conversions you would have gotten anyway.
    
    The discipline of accepting that channels can be incremental zero, and pulling spend accordingly, is the single highest-impact skill in paid media analytics. Most accounts have at least one campaign that looks profitable in the platform but is incremental zero in the warehouse. Branded paid search at $4 CPC when the same users find you at position one organically. Retargeting at $0.30 CPC for users who already added items to cart. PMax cannibalizing free brand traffic.
    
    The discipline of finding those campaigns and killing them is the work. The platform will not tell you. The platform's incentive is the opposite. The warehouse, paired with quarterly incrementality tests, is the only honest source.
    
    When in doubt about whether a campaign is incremental, run a geo holdout. The two-week test is cheaper than a quarter of unincremental spend. The team that does not run incrementality tests is optimizing against numbers that are systematically wrong, and the size of the error is exactly the size of the budget waste.
    
  • skills/after-action-report/SKILL.mdskill
    Show content (9379 bytes)
    ---
    name: after-action-report
    description: "Run a structured after-action review (postmortem, retrospective) on a launch, incident, or completed project to capture timeline, root cause analysis, contributing factors, and actionable lessons. Use this skill whenever the user wants to run a postmortem, retrospective, AAR, or after-action review on any past event. Triggers on after-action report, AAR, postmortem, retrospective, retro, post-incident review, what went well what didn't, lessons learned, blameless postmortem, root cause analysis, RCA, five whys. Also triggers when the user has just shipped something or just resolved an incident and wants to capture learnings."
    category: operations
    catalog_summary: "Post-mortems, retros, learnings documentation"
    display_order: 3
    ---
    
    # After-Action Report
    
    Run a structured retrospective on a launch, incident, or completed project. Produce actionable lessons, not just a document.
    
    This skill is for after-the-fact analysis. For active incident response, use `incident-response`. For planning launches, use `launch-runbook`.
    
    ---
    
    ## When to use
    
    - After any incident (any severity)
    - After every major launch
    - At the end of a project (sprint retro, quarterly retro, project closeout)
    - When a recurring issue has happened enough times to demand investigation
    - When a decision didn't work out and the team wants to learn
    
    ## When NOT to use
    
    - During an active incident (use `incident-response`)
    - For pre-launch planning (use `launch-runbook`)
    - For one-off bug fixes that don't merit broad analysis
    
    ---
    
    ## Required inputs
    
    - The event being analyzed (incident, launch, project)
    - A timeline reconstructed from logs, chat, tickets
    - Participant accounts of what they observed and did
    - Outcomes and impact (what actually happened to users, the business)
    
    ---
    
    ## The framework: blameless analysis
    
    The most important principle: blameless. Without it, retrospectives produce hidden information and theatrical lessons rather than real ones.
    
    ### What blameless means
    
    - Focus on systems, not individuals
    - Assume everyone made reasonable decisions given what they knew at the time
    - The question is "why was this decision reasonable to make?" not "who screwed up?"
    - Fixing the system means the next person in that situation succeeds where this person didn't
    
    ### What blameless does not mean
    
    - No accountability (action items still have owners)
    - No hard truths (sometimes the system is broken in obvious ways)
    - No standards (some patterns of failure are individual, not systemic)
    - No discomfort (real reflection is uncomfortable)
    
    ---
    
    ## The framework: 6 sections
    
    A complete AAR covers six sections.
    
    ### 1. Summary
    
    A 2 to 3 paragraph overview. Captures:
    
    - What happened
    - Impact (users, business, time)
    - Root cause (in plain language)
    - Top action items
    
    This is what executives read. Anyone who reads only this section should leave with the most important information.
    
    ### 2. Timeline
    
    A reconstructed timeline of events.
    
    For incidents:
    - T-0: Detection
    - T+X: Acknowledgment
    - T+Y: Severity assessed, IC assigned
    - T+Z: Investigation began
    - ... mitigation, communication, resolution events
    - T+N: Resolution declared
    
    For launches:
    - Pre-launch decisions and milestones
    - Launch day events
    - Post-launch monitoring observations
    
    For projects:
    - Major milestones, decisions, pivots
    - Both planned and emergent
    
    The timeline is the source of truth. Disagreements about what happened get resolved here.
    
    ### 3. Root cause analysis
    
    What caused this, in plain language.
    
    Use one or both of:
    
    **Five whys.** Start with the surface symptom. Ask "why?" Repeat 5 times (or until you reach a true root cause). Each "why" should yield a substantive answer, not a tautology.
    
    Example:
    - Why did the site go down? Database connection pool exhausted.
    - Why was the pool exhausted? Background job opened too many connections.
    - Why did the background job open too many connections? Connection cleanup code didn't run on errors.
    - Why didn't cleanup run on errors? Original code review didn't cover error paths.
    - Why didn't the review cover error paths? No checklist for error handling in our review process.
    
    The fifth why often reveals the system fix. In this case: improve the review process.
    
    **Causal chain.** Multiple contributing factors that combined.
    
    - Factor 1: Background job opened too many connections (technical)
    - Factor 2: Connection limit was set too low for actual traffic (configuration)
    - Factor 3: No alert on connection pool saturation (monitoring)
    - Factor 4: Recent traffic doubled without infra capacity review (process)
    
    No single fix addresses the incident. Multiple gaps need attention.
    
    ### 4. Contributing factors
    
    Factors that didn't cause the event but made it worse, or removed safety nets that would have caught it.
    
    - Monitoring gaps
    - Documentation gaps
    - Process gaps
    - Tooling gaps
    - Knowledge gaps
    
    A "would have been caught earlier if..." factor.
    
    ### 5. What went well
    
    Real lessons require capturing successes, not just failures.
    
    - What detection worked?
    - What response worked?
    - What decisions were good?
    - What tools or processes performed as expected?
    
    This is not consolation. It's calibration. Things that worked here should be reinforced and replicated.
    
    ### 6. Action items
    
    Specific, owned, dated.
    
    | Action | Owner | Due | Type |
    |---|---|---|---|
    | Add alert on connection pool saturation | [name] | [date] | Monitoring |
    | Add error handling checklist to PR template | [name] | [date] | Process |
    | Audit other background jobs for similar issue | [name] | [date] | Code |
    
    **Action item criteria:**
    
    - **Specific.** "Improve monitoring" is not actionable. "Add alert on connection pool saturation, threshold 80%, page on-call" is.
    - **Owned.** A name. Not "the team."
    - **Dated.** A real date. Not "soon."
    - **Sized.** Roughly hours, days, or weeks of effort.
    - **Closeable.** Definition of done is clear.
    
    Action items that don't close in their committed timeframe should re-surface in the next AAR. Patterns of unclosed actions point to deeper organizational issues.
    
    ---
    
    ## Workflow
    
    ### 1. Schedule the AAR
    
    Within 1 to 2 weeks of the event. Long enough that emotions have cooled and the facts are gathered. Short enough that memories are fresh.
    
    For incidents: pre-decided in the response procedure.
    For launches: schedule on the runbook.
    For projects: schedule at project closeout.
    
    ### 2. Gather inputs
    
    Before the meeting:
    
    - Reconstructed timeline (often the scribe's notes if there was one)
    - Logs, chat transcripts, tickets, incident updates
    - Individual accounts from each participant (written, before the meeting)
    - Impact data (users affected, duration, revenue impact, etc.)
    
    ### 3. Run the meeting
    
    Typical agenda (60 to 90 minutes):
    
    - Read the summary as drafted (5 min)
    - Walk the timeline together. Add corrections. Resolve disagreements. (20 to 30 min)
    - Discuss root cause. Use five whys or causal chain. (15 to 20 min)
    - Discuss contributing factors. (10 min)
    - Discuss what went well. (10 min)
    - Identify action items. Owners and dates. (10 min)
    
    A facilitator runs the meeting. Often the IC for an incident, or a project lead for a project. The facilitator is not the scribe.
    
    ### 4. Write the document
    
    Within a few days of the meeting. The full AAR includes all 6 sections.
    
    ### 5. Distribute
    
    Internal: post in a known location. Make searchable. Reference in onboarding.
    
    For high-severity incidents: external summary may be appropriate (status page, customer email, public blog).
    
    ### 6. Track action items
    
    Every action item should be tracked to closure. The next AAR re-surfaces unclosed ones.
    
    ---
    
    ## Failure patterns
    
    - **Skipping the AAR for "small" incidents.** Patterns get missed.
    - **Naming and shaming.** Real lessons get hidden when people fear blame.
    - **Generic action items.** "Improve testing" instead of specific testing change.
    - **Action items that never close.** Filed, forgotten. Same incident recurs.
    - **Theater retrospectives.** Going through the motions without genuine reflection.
    - **Skipping "what went well."** Misses calibration on what's working.
    - **Blame externalized.** "Our vendor failed." OK, what's our system for vendor risk?
    - **Single-person AAR.** One person writes the whole thing. Misses other perspectives.
    - **AAR only for failures.** Successful launches deserve AARs too. Lessons from success are valuable.
    - **Long delays.** Memories fade. Conversations cool. Get it done within 2 weeks.
    
    ---
    
    ## Output format
    
    A markdown document at `aar-[date]-[event-name].md`.
    
    Structure:
    
    ```markdown
    # AAR: [Event name]
    
    **Date of event:** [YYYY-MM-DD]
    **AAR date:** [YYYY-MM-DD]
    **Severity / scope:** [SEV-1 / Major launch / Project closeout]
    **Facilitator:** [Name]
    **Participants:** [Names]
    
    ## Summary
    [2 to 3 paragraphs]
    
    ## Impact
    - Users affected: [number, segment]
    - Duration: [time]
    - Revenue / business impact: [if applicable]
    
    ## Timeline
    [Timestamped events]
    
    ## Root cause analysis
    [Five whys or causal chain]
    
    ## Contributing factors
    [List]
    
    ## What went well
    [List]
    
    ## Action items
    | Action | Owner | Due | Type | Status |
    |---|---|---|---|---|
    | | | | | |
    
    ## Lessons
    [Reflections that don't fit elsewhere. Often the most quotable section.]
    ```
    
    ---
    
    ## Reference files
    
    - [`references/aar-template.md`](references/aar-template.md) - Fillable AAR template covering incidents, launches, and projects.
    
  • skills/ai-content-collaboration/SKILL.mdskill
    Show content (22585 bytes)
    ---
    name: ai-content-collaboration
    description: "How humans and AI compose in content workflows. Where AI legitimately participates, where humans must own, hybrid workflow patterns, voice ownership preservation, the AI slop problem, disclosure and transparency, team calibration, and the ethics of intellectually honest AI-assisted content production. Triggers on AI content workflow, AI-assisted writing, hybrid content production, AI in editorial, AI slop, AI disclosure, AI usage policy, AI content ethics, voice preservation with AI, team AI calibration. Also triggers when content feels generic despite quality tools, when team AI usage has drifted into inconsistency, or when a regulated or trust-sensitive context requires explicit AI policy."
    category: content
    catalog_summary: "How humans and AI compose in content workflows: participation boundaries, hybrid patterns, voice ownership, the AI slop problem, disclosure and transparency, team calibration, and the ethics of honest AI-assisted production"
    display_order: 8
    ---
    
    # AI Content Collaboration
    
    A senior editorial leader's playbook for how humans and AI compose in content workflows. Pragmatic, tool-agnostic, honest about both what AI in the loop enables and what it threatens.
    
    Most content programs in 2026 use AI somewhere in the workflow. Pretending otherwise is dishonest; treating AI as a magic content factory is the failure mode this skill exists to prevent. The discipline is in between: knowing where AI legitimately accelerates, where humans must own, what hybrid patterns produce work that earns reader trust, and what crosses the line into AI slop or intellectual dishonesty.
    
    This skill is the WORKFLOW layer that composes with every other content skill. Briefs can be AI-assisted; hub architectures can be AI-assisted; programmatic SEO is almost always AI-involved; editorial QA now includes AI-content audit by necessity. The collaboration discipline applies to all production stages, not to a single artifact type.
    
    The voice is pragmatic and tool-agnostic deliberately. The methodology applies whether the AI in your loop is one of the major commercial models, an open-source model, or whatever ships next quarter. What stays constant is the workflow shape, the participation boundaries, the voice ownership question, and the ethical frame. What changes is which specific tool you reach for, which is implementation work that varies by team and budget.
    
    When to use this skill: building or refining an AI-content workflow, calibrating a team on consistent AI usage, addressing the "we use AI but our work feels generic" problem, designing disclosure policies, or working through the ethics of AI-assisted content production for a regulated or trust-sensitive context.
    
    ---
    
    ## What this skill is for
    
    This skill spans the workflow layer of AI-assisted content production. It composes with all six other content-suite skills as the cross-cutting discipline.
    
    - `content-strategy` is program scope: what to produce. Strategy decisions can be AI-assisted; the program-level judgment stays human.
    - `pillar-content-architecture` is hub scope: how the topical hub fits together. Hub architecture can be AI-suggested; the architectural commitment stays human.
    - `content-brief-authoring` is per-piece scope: briefs each piece. Briefs can be AI-drafted from research; the contract decisions stay human.
    - `content-and-copy` is execution scope: writes each piece. Drafts can be AI-produced; voice and editorial judgment stay human.
    - `programmatic-seo` is scaled scope: generates pages from data. AI generation is the dominant production model; sampling QA is the human gate.
    - `editorial-qa` is gate scope: verifies before publish. AI-content audit is now a load-bearing gate; the audit's judgment stays human.
    - This skill is workflow scope: how the human and AI layers compose across all six stages above.
    
    The audience: editorial leaders, content directors, content ops managers, agencies running AI-assisted production, in-house teams calibrating AI usage across writers. The voice is senior editorial leader to junior editor or content marketer. Pragmatic, honest, tool-agnostic.
    
    What is not in scope: specific prompts (those are implementation; teams develop their own), specific tool endorsements (the methodology applies regardless of which tool is in the loop), specific integration code (varies by stack and team). Tool categories appear when they earn methodology relevance; specific tools appear only as illustrations of categories, never as recommendations.
    
    ---
    
    ## Humans own, AI accelerates
    
    The keystone framing.
    
    The pathology to avoid is treating AI as either a magic content factory (cheap, fast, scaled, output quality optional) OR as a forbidden intruder (purity gospel that does not survive contact with deadlines). Both readings produce bad work.
    
    The discipline that produces durable work: humans own the content; AI accelerates the work. Specifically:
    
    **Humans own.** Editorial judgment, voice, distinctive POV, fact accuracy, ethical decisions, what to publish versus what to kill, brand voice, narrative arc, tone calibration, reader empathy, claim verification.
    
    **AI accelerates.** Research synthesis, draft generation against a brief, copy edit suggestions, alternative phrasings, summary, transcription, quality-control automation at scale.
    
    The line. AI does work that the human directs and verifies. AI does NOT make decisions about what publishes, who is quoted, what is true, or what voice the brand uses.
    
    The litmus test. If your AI-assisted piece publishes without a human being able to defend every claim, every position, and every word, you have crossed the line. The piece is AI's work, dressed in your byline. Readers eventually notice.
    
    ---
    
    ## Where AI legitimately participates
    
    A non-exhaustive list of stages where AI in the loop is fine and often improves the work.
    
    - **Research synthesis.** AI condenses long-form sources into briefs the writer reads. Saves hours; the writer still reads and verifies.
    - **Outline generation against a brief.** AI proposes an H2 / H3 structure from a brief; the editor approves or restructures.
    - **First-draft generation.** AI produces a draft against an explicit brief; the human edits substantially.
    - **Alternative phrasings.** AI offers 3 versions of a sentence; the human picks one or rewrites.
    - **Copy edit suggestions.** AI catches typos, awkward phrasings, repetition.
    - **Summary and abstraction.** AI condenses long pieces into TL;DRs.
    - **Transcription.** AI transcribes interview audio; the human verifies.
    - **Translation drafts.** AI produces a translation draft; a native speaker reviews and corrects.
    - **Quality-control automation at scale.** AI flags pages in a programmatic SEO set that need human review.
    - **Idea generation.** AI proposes 30 angles; the human picks 3.
    
    In each case, AI accelerates work the human still owns. The acceleration is real; the ownership stays unchanged.
    
    Detail in [`references/ai-participation-boundaries.md`](references/ai-participation-boundaries.md).
    
    ---
    
    ## Where humans must own
    
    The boundary list.
    
    - **Editorial judgment.** What to publish, what to kill, what is worth saying. AI cannot decide whether a piece is good enough to ship.
    - **Voice.** Brand voice, distinctive POV, the way THIS publication sounds different from the next one. AI default voice is generic by construction; voice is a human contribution.
    - **Fact verification.** Every claim, every statistic, every quote, every named person. AI hallucinates; humans verify.
    - **Ethical decisions.** What is appropriate to publish, what is harmful, what crosses lines, what disclosure is required.
    - **Reader empathy.** What the reader actually needs from this piece, not what the algorithm scores well.
    - **Quote attribution.** Real people who actually said the thing, with consent where relevant.
    - **Tone calibration on hard topics.** Grief, illness, sensitive history, contested politics. AI defaults to anodyne; humans calibrate to context.
    - **Narrative arc.** How the piece unfolds, where the reader's attention goes. AI produces shapes; humans choose them.
    - **Final approval.** The human who signs off is accountable for what shipped.
    
    The "human in the loop" framing is necessary but insufficient. A human briefly reviewing AI-generated content before publish is not ownership; it is rubber-stamping. Ownership requires the human to have made the actual decisions the piece embodies.
    
    ---
    
    ## Hybrid workflow patterns
    
    Five patterns that work, with tradeoffs.
    
    **1. AI-first draft, human-edit-heavy.** AI produces a 90% draft; the human spends 60% of the time editing. Output: efficient for high-volume editorial; risks generic voice if editing is light.
    
    **2. Human-first outline + research, AI-draft, human-rewrite.** Human builds the outline and gathers research; AI drafts within that scaffold; human rewrites in voice. Output: preserves voice better; slower than AI-first.
    
    **3. AI-as-research-assistant, human-writes.** AI condenses sources into a brief; human writes the entire piece from the brief. Output: highest voice fidelity; slowest.
    
    **4. Human-writes, AI-as-editor.** Human drafts; AI suggests edits, alternative phrasings, copy edits; human accepts or rejects. Output: writer voice preserved; AI catches details.
    
    **5. AI-generates-at-scale, human-samples.** For programmatic SEO. AI generates thousands of pages; human samples 50 to 200 with editorial-qa discipline. Output: scaled production; depends entirely on template quality and sampling discipline.
    
    The pattern that fits depends on volume, voice sensitivity, team skill, and time budget. No pattern is "the right one"; pattern selection is a real decision that should match the production context.
    
    Detail in [`references/hybrid-workflow-patterns.md`](references/hybrid-workflow-patterns.md).
    
    ---
    
    ## Voice ownership preservation
    
    Voice is the dominant casualty of careless AI workflows. The patterns that preserve voice.
    
    - **Voice guidelines as prompt input.** Every AI generation includes the brand voice guidelines as context. Generic AI defaults regress without this.
    - **Sample text as voice anchor.** Feed the AI 2 to 3 paragraphs of canonical brand voice as part of the prompt. AI mimics what it sees more than what it is told.
    - **Mid-draft voice check.** At the halfway mark of a long piece, have a human or a separate AI pass read for voice drift. Long AI generations almost always regress toward generic voice by the halfway point.
    - **Final pass in human voice.** The human edits the closing sections in their own voice; this is where the piece's emotional register often lands.
    - **Reject the bland.** Any sentence that could appear in any other piece on the topic gets rewritten. Voice lives in the specific.
    
    The honest framing. Voice is the hardest thing to preserve in AI-assisted work and the easiest thing to lose. Programs that do not actively preserve voice end up with content that is technically correct, semantically generic, and indistinguishable from competitors using the same tools.
    
    Detail in [`references/voice-ownership-preservation.md`](references/voice-ownership-preservation.md).
    
    ---
    
    ## The AI slop problem
    
    AI slop is the term of art for AI-generated content that is technically functional but reads as generic, derivative, and signal-less. Cross-reference `editorial-qa`'s ai-content-audit-patterns reference for the detection patterns; this section addresses prevention.
    
    **Patterns that produce slop.**
    
    - AI does too much of the work (no real human direction or rewriting)
    - Generic prompts (no brand voice context, no audience specificity, no anti-pattern guidance)
    - No editorial judgment in the loop (AI generates, human glances, ship)
    - Volume prioritized over quality (10x more pages can mean 10x more slop, not 10x more value)
    - No iteration (first draft ships; no rewrite for voice)
    
    **Patterns that prevent slop.**
    
    - Strong briefs (per `content-brief-authoring`)
    - Voice guidelines as prompt context
    - Heavy human editing pass
    - Iteration: AI draft, then human rewrite, then AI suggestions, then human final
    - Editorial judgment at every gate
    
    The reader-detection problem. Readers can often sense AI-flavored content even when they cannot articulate why: generic openings, predictable structures, "perfect" grammar that is emotionally flat. Slop erodes reader trust over time even when no individual piece is penalized.
    
    Detail in [`references/ai-slop-detection-and-avoidance.md`](references/ai-slop-detection-and-avoidance.md) and cross-reference editorial-qa's audit patterns.
    
    ---
    
    ## Disclosure and transparency
    
    When should AI usage be disclosed to readers?
    
    The tiered framework.
    
    - **Always disclose.** Journalism, news reporting, attributed expert opinion, content where AI tools are the subject.
    - **Default disclosure (consider context).** Thought leadership where the byline is doing trust work, regulated industries, content that influences purchase decisions.
    - **Generally not necessary.** Marketing copy, descriptive product content, programmatic data pages, copy edit assistance only.
    - **Clearly fine without disclosure.** AI as research assistant only; AI for transcription; AI for spelling and grammar suggestions.
    
    The principle. Disclose when the reader's understanding of the content's origin would change their trust in it. A bylined opinion piece purportedly by a named expert that is substantially AI-drafted is a trust violation; a product description on an ecommerce site that was AI-drafted is not.
    
    **Disclosure language patterns (when used).**
    
    - "AI tools assisted in research and drafting; the author edited and verified all claims."
    - "This piece was generated programmatically from [data source]; reviewed by [team] before publish."
    - Avoid hedging language like "may have used AI" or "could have been AI-assisted"; be specific or omit.
    
    Industry-specific norms vary. Major journalism organizations have published explicit AI usage standards. Content marketing has weaker norms but is moving toward disclosure for high-trust pieces.
    
    Detail in [`references/disclosure-and-transparency-patterns.md`](references/disclosure-and-transparency-patterns.md).
    
    ---
    
    ## Team training and calibration
    
    Inconsistent AI usage across a team produces inconsistent output. The discipline.
    
    - **Documented AI policy.** Which uses are approved, which require explicit permission, which are prohibited.
    - **Calibration sessions.** Editors review AI-assisted pieces from multiple writers, surface differences, agree on standards.
    - **Voice library updates.** As voice evolves, the prompts and sample text fed to AI evolve with it.
    - **Quality benchmarks.** What does "AI-assisted but on-voice" look like for your brand? Document it with examples.
    - **Tool standardization or intentional pluralism.** Team uses one tool consistently OR documents which tools fit which tasks.
    - **Forbidden patterns list.** This team does not use AI for X (whatever X is for your context).
    - **Onboarding.** New writers learn the AI policy and calibration in their first 2 weeks.
    
    The pathology. AI usage emerges informally, every writer develops their own patterns, output drifts, editors cannot pinpoint why pieces feel off. The discipline is making AI usage explicit, calibrated, and documented.
    
    Detail in [`references/team-training-and-calibration.md`](references/team-training-and-calibration.md).
    
    ---
    
    ## Ethics: training data, attribution, intellectual honesty
    
    AI tools were trained on copyrighted material. That is the simple ethical reality of every major LLM in 2026. The catalog's position on this question is not "AI use is unethical" (that would render the catalog itself hypocritical) but "intellectual honesty about AI involvement is non-negotiable."
    
    The principles.
    
    - **Do not pass AI work as fully human-written.** Bylined content where the byline implies human craft requires substantial human craft.
    - **Do not claim AI did not help when it did.** False denials are worse than disclosure.
    - **Do not generate content that closely mirrors copyrighted source material.** AI tools can produce near-replicas of training data when prompted carelessly; humans verify originality.
    - **Attribute when borrowing.** Ideas, frameworks, statistics that came from specific sources get cited.
    - **Do not fabricate quotes or expertise.** Hallucinated quotes attributed to real people are dishonest regardless of whether AI generated them.
    - **Be honest about AI capabilities and limits.** Do not oversell AI as more capable than it is.
    
    The intellectual-honesty frame supersedes any specific policy debate. Teams that treat AI usage with intellectual honesty produce content readers can trust over time. Teams that hide, deny, or rationalize lose trust eventually.
    
    Detail in [`references/ethics-and-intellectual-honesty.md`](references/ethics-and-intellectual-honesty.md).
    
    ---
    
    ## Common failure modes
    
    Rapid-fire. Diagnoses in [`references/common-collaboration-failures.md`](references/common-collaboration-failures.md).
    
    - "We used AI and the content feels generic." Voice not preserved; not enough human rewriting.
    - "Hallucinated facts made it to publish." Fact-verification gate skipped or rushed.
    - "Different writers produce wildly different AI-assisted output." No team calibration.
    - "Our AI-assisted SEO content got penalized." Slop volume plus thin templates plus no QA discipline.
    - "We cannot tell what was AI versus human." No AI usage tracking; teams should document at the workflow level.
    - "Readers complained about AI-flavored content." Slop reaching audience; intensify human craft pass.
    - "We disclosed AI usage and lost credibility." Depends on context; disclosure is sometimes a trust gain, sometimes a loss; calibrate to audience norms.
    - "Our AI tools changed and our content shifted." Over-coupled to one tool's specific behavior; methodology should be tool-agnostic.
    - "We are producing 10x more content but the same audience growth." Volume was not the constraint that was binding; quality was.
    - "The team is using AI inconsistently." Calibration sessions overdue.
    - "An expert byline turned out to be substantially AI-drafted." Ethics breach; correct, disclose, recalibrate.
    
    ---
    
    ## The framework: 12 considerations for AI content collaboration
    
    When designing or auditing an AI-assisted content workflow, walk these 12 considerations.
    
    1. **Humans own; AI accelerates.** Make this explicit in your workflow, not implicit.
    2. **Participation boundaries.** Document where AI legitimately helps, where humans must own.
    3. **Hybrid pattern selection.** Match the pattern to volume, voice sensitivity, time budget.
    4. **Voice guidelines as prompt input.** Every AI generation includes brand voice context.
    5. **Voice drift sampling.** Long pieces drift mid-way; sample throughout.
    6. **Fact verification gate.** Every claim, every quote, every stat verified before publish.
    7. **AI slop prevention.** Heavy human editing, strong briefs, iteration.
    8. **Disclosure tiering.** Disclose when origin would change reader trust; calibrate to audience.
    9. **Team calibration.** Documented policy, calibration sessions, voice library.
    10. **Tool-agnostic methodology.** Workflow shape stays constant as tools change.
    11. **Ethical floor.** Intellectual honesty, no fabrication, no hidden AI in trust-sensitive work.
    12. **Final accountability.** The human who signs off is accountable; AI does not sign off.
    
    The output of the framework is a workflow document the team can reference: AI participation rules named, hybrid pattern selected, voice preservation patterns specified, disclosure tier set, calibration cadence committed, ethical floor articulated, accountable signer named for each piece.
    
    ---
    
    ## Reference files
    
    - [`references/ai-participation-boundaries.md`](references/ai-participation-boundaries.md) - Where AI legitimately helps, where humans must own. The boundary list and the "human-in-the-loop is not ownership" distinction.
    - [`references/hybrid-workflow-patterns.md`](references/hybrid-workflow-patterns.md) - Five workflow patterns with tradeoffs and selection criteria. When each pattern fits production context.
    - [`references/voice-ownership-preservation.md`](references/voice-ownership-preservation.md) - Voice guidelines as prompt input, sample text as voice anchor, mid-draft voice check, final pass in human voice, reject-the-bland discipline.
    - [`references/ai-slop-detection-and-avoidance.md`](references/ai-slop-detection-and-avoidance.md) - What produces slop, what prevents it. Cross-references editorial-qa's audit patterns.
    - [`references/disclosure-and-transparency-patterns.md`](references/disclosure-and-transparency-patterns.md) - Tiered disclosure framework, language patterns, industry norms.
    - [`references/team-training-and-calibration.md`](references/team-training-and-calibration.md) - Documented policy, calibration sessions, voice library, quality benchmarks, onboarding.
    - [`references/quality-calibration-with-ai-in-loop.md`](references/quality-calibration-with-ai-in-loop.md) - How editorial standards shift when AI is in the workflow. Same standards, different failure modes.
    - [`references/ethics-and-intellectual-honesty.md`](references/ethics-and-intellectual-honesty.md) - Training data, attribution, fabrication boundaries, intellectual honesty as the superseding frame.
    - [`references/common-collaboration-failures.md`](references/common-collaboration-failures.md) - 11+ failure patterns with diagnoses and fixes.
    
    ---
    
    ## Closing: collaboration, not replacement
    
    AI in content workflows is neither magic nor menace. It is a category of tooling that, like every tooling category before it, rewards disciplined use and punishes careless use. The teams producing memorable AI-assisted content are the ones holding the line on human ownership, voice, fact accuracy, and intellectual honesty. The teams producing AI slop are the ones treating AI as a content factory.
    
    The discipline is not anti-AI; it is pro-craft. Craft was always what made content worth reading; AI does not change that, it just raises the cost of skipping it.
    
    When in doubt about whether an AI-assisted workflow is ready, ask: is human ownership specified, are participation boundaries documented, is voice preservation built into the prompt and review patterns, is fact verification a halt-condition, is disclosure tiered to audience trust, is the team calibrated, and is the ethical floor explicit? If yes to all of those, the workflow is ready. If no to any, the gap is where the program will produce slop and lose reader trust.
    
  • skills/analytics-strategy/SKILL.mdskill
    Show content (9470 bytes)
    ---
    name: analytics-strategy
    description: "Design measurement frameworks including event taxonomy, KPI hierarchy, dashboard architecture, attribution models, and analytics implementation strategy. Use this skill whenever the user wants to plan analytics, design dashboards, build event taxonomies, define KPIs, set up tracking, or audit existing measurement. Triggers on analytics strategy, measurement plan, event taxonomy, tracking plan, KPI framework, dashboard design, north star metric, attribution model, conversion tracking, GA4 setup, Mixpanel setup, analytics audit. Also triggers when the user has data but no clear way to use it, or wants to make decisions but doesn't know what to track."
    category: growth
    catalog_summary: "Measurement frameworks, dashboard design, event taxonomy"
    display_order: 1
    ---
    
    # Analytics Strategy
    
    Design measurement frameworks that produce decisions, not just dashboards. Stack-agnostic. Tool-agnostic.
    
    This skill is for measurement planning. For conversion optimization, use `cro-optimization`. For SEO measurement specifically, use `seo-onpage` and adjacent SEO skills.
    
    ---
    
    ## When to use
    
    - Setting up analytics on a new product or site
    - Auditing existing analytics setup
    - Designing dashboards for a team or business
    - Defining KPIs and a north star metric
    - Building event taxonomies for product analytics
    - Designing attribution models for marketing
    - Translating business questions into measurement plans
    
    ## When NOT to use
    
    - Conversion testing or optimization (use `cro-optimization`)
    - SEO performance measurement (use SEO skills)
    - Pure data infrastructure decisions (different domain)
    
    ---
    
    ## Required inputs
    
    - The business or product context (what does success look like)
    - The audience for the analytics (who needs to make what decisions)
    - The current measurement state (existing tools, tracking, gaps)
    - The questions the team needs to answer
    
    ---
    
    ## The framework: 4 layers
    
    A complete measurement strategy covers all four. Each layer feeds the next.
    
    ### 1. North star and KPI hierarchy
    
    The single metric that captures the most important outcome, plus the supporting metrics.
    
    **North star metric:**
    
    - One metric. Singular.
    - Captures customer-perceived value.
    - Leads to revenue, but isn't revenue itself (revenue is too far downstream of the user behavior you can actually influence).
    - Examples: weekly active users, completed jobs, revenue-generating sessions, hours of value delivered.
    
    **Underneath the north star, the KPI hierarchy:**
    
    ```
    North star metric
    ├── Acquisition KPI (how new users enter)
    ├── Activation KPI (when new users get value)
    ├── Engagement KPI (how often users return)
    ├── Retention KPI (how many stick over time)
    └── Monetization KPI (how value translates to revenue)
    ```
    
    This is the "AARRR" or "pirate metrics" framework. It works because it covers the full lifecycle.
    
    ### 2. Event taxonomy
    
    The vocabulary the product uses to describe what users do.
    
    **Event design principles:**
    
    - **Verb + noun.** `signed_up`, `created_project`, `completed_checkout`. Past tense, snake_case.
    - **One event per discrete action.** Not `interacted_with_modal` (too vague). Prefer `opened_modal_X`, `closed_modal_X`, `confirmed_in_modal_X`.
    - **Properties capture context.** Each event has properties (key-value pairs) for context. `signed_up` has properties like `signup_method`, `referrer`, `plan`.
    - **Standardize property names.** `user_id` everywhere, not `userId` here and `id` there.
    - **Document everything.** A tracking plan that lives nowhere is a tracking plan no one follows.
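
    A minimal sketch of these conventions at the call site. The `track` wrapper and the property names are illustrative assumptions, not a specific vendor's SDK; most analytics SDKs expose some variant of `track(event, properties)`.

    ```typescript
    // Illustrative wrapper; swap the console.log for your analytics SDK call.
    type EventProperties = Record<string, string | number | boolean>;

    function track(event: string, properties: EventProperties): void {
      // Enforce verb_noun, past tense, snake_case at the call site (or in CI).
      if (!/^[a-z]+(_[a-z]+)+$/.test(event)) {
        throw new Error(`Event name violates naming convention: ${event}`);
      }
      console.log(JSON.stringify({ event, properties, sent_at: new Date().toISOString() }));
    }

    // Good: one discrete action per event, context in properties, standardized keys.
    track("signed_up", { signup_method: "email", referrer: "newsletter", plan: "free" });
    track("completed_checkout", { order_value: 49, currency: "USD", coupon_used: false });

    // Bad (would throw): vague action, inconsistent casing.
    // track("interactedWithModal", { id: "pricing" });
    ```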
    
    **Event coverage:**
    
    - All key user actions tracked
    - All conversion points tracked
    - All errors tracked
    - All page views tracked (with consistent properties)
    - All button clicks that matter (not all button clicks - that's noise)
    
    **Anti-patterns:**
    
    - 500+ events with no documentation
    - Inconsistent naming (`buttonClicked`, `Button Clicked`, `clicked_button`)
    - Property keys that vary across events
    - Events fired client-side that should be server-side (and vice versa)
    - PII in event properties (privacy issue and tooling issue)
    
    ### 3. Dashboards and reports
    
    The interface between data and decisions.
    
    **Dashboard design principles:**
    
    - **One audience per dashboard.** Executive dashboard != product team dashboard. Different metrics, different cadence.
    - **One question per chart.** A chart should answer one question, not three.
    - **Annotations matter.** Note launches, experiments, holidays, outages. A spike means nothing without context.
    - **Context comparisons.** "10,000 signups this month" - compared to what? Last month, last year, target?
    - **Lead with the action.** What does this dashboard help someone decide?
    
    **Common dashboard types:**
    
    | Dashboard | Audience | Metrics | Cadence |
    |---|---|---|---|
    | Executive | Leadership | North star, top 3 KPIs, big-picture trends | Weekly review |
    | Product | Product team | Funnel metrics, feature adoption, retention | Daily / weekly |
    | Marketing | Marketing team | Acquisition by channel, CAC, attribution | Daily / weekly |
    | Operations | Ops / on-call | Performance, errors, capacity | Real-time |
    | Custom (per team) | Specific team | Their specific KPIs | Their cadence |
    
    ### 4. Attribution and segmentation
    
    How to connect cause and effect.
    
    **Attribution models:**
    
    - **First-touch.** Credit the first interaction. Useful for awareness understanding.
    - **Last-touch.** Credit the final interaction before conversion. Default in many tools, often misleading.
    - **Linear.** Spread credit equally across touches. Avoids over-crediting any single channel.
    - **Time-decay.** Recent touches get more credit. Reasonable middle ground.
    - **Position-based.** First and last get more credit, middle touches less.
    - **Data-driven (algorithmic).** Tools like Google Analytics 4 use ML. Black box but increasingly the default.
    
    For most businesses: pick one primary attribution model, use multiple secondary models for validation.
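
    To make the differences concrete, here is a small sketch that spreads credit over the same three-touch journey under last-touch, linear, and time-decay. The 7-day half-life is an arbitrary assumption for illustration.

    ```typescript
    interface Touch {
      channel: string;
      daysBeforeConversion: number;
    }

    const journey: Touch[] = [
      { channel: "organic_search", daysBeforeConversion: 21 },
      { channel: "newsletter", daysBeforeConversion: 6 },
      { channel: "paid_social", daysBeforeConversion: 1 },
    ];

    // Last-touch: all credit to the final interaction before conversion.
    const lastTouch = Object.fromEntries(
      journey.map((t, i) => [t.channel, i === journey.length - 1 ? 1 : 0]),
    );

    // Linear: equal credit per touch.
    const linear = Object.fromEntries(journey.map((t) => [t.channel, 1 / journey.length]));

    // Time-decay: weight halves every 7 days before conversion (illustrative half-life).
    const weights = journey.map((t) => Math.pow(0.5, t.daysBeforeConversion / 7));
    const totalWeight = weights.reduce((a, b) => a + b, 0);
    const timeDecay = Object.fromEntries(
      journey.map((t, i) => [t.channel, weights[i] / totalWeight]),
    );

    console.log({ lastTouch, linear, timeDecay });
    ```

    Same journey, three different stories about which channel "worked". That is the argument for a primary model plus secondary models as a sanity check.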
    
    **Segmentation principles:**
    
    - Segment by what causes different behavior, not by what's easy to track
    - Useful segments: source/channel, plan tier, geography, device, cohort (signup date)
    - Less useful: demographic guesses without behavioral validation
    
    ---
    
    ## The tracking plan document
    
    Output of the analytics strategy. A living document.
    
    **Structure:**
    
    1. **Goals and KPIs.** Business objectives, north star, KPI hierarchy.
    2. **Event catalog.** Every event, with properties, when fired, why tracked.
    3. **User properties.** Persistent attributes (plan, signup_date, role).
    4. **Page taxonomy.** Page categories, page properties.
    5. **Naming conventions.** Snake_case, verb_noun, etc.
    6. **Implementation notes.** Client-side vs server-side, SDK details, sampling.
    7. **Privacy and compliance.** PII rules, consent handling, data retention.
    8. **Governance.** Who can add events, review process, change log.
    
    ---
    
    ## Workflow
    
    1. **Define the questions.** What does the team need to answer? Working backward from questions to metrics works better than starting from metrics.
    2. **Define the north star.** One metric. Tested against the criteria above.
    3. **Build the KPI hierarchy.** Acquisition, activation, engagement, retention, monetization.
    4. **Audit existing tracking.** What's there? What's broken? What's missing?
    5. **Design the event taxonomy.** Cover the user journey. Document everything.
    6. **Implement with care.** Test each event. Verify properties. Catch issues in staging (a validation sketch follows this list).
    7. **Build dashboards.** One per audience. Lead with action.
    8. **Establish review cadence.** Weekly business review, monthly KPI review, quarterly strategy review.
    9. **Govern.** Who adds events, who reviews, how changes propagate.
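
    Step 6 is easier to enforce when the tracking plan is machine-checkable. A minimal sketch, assuming the plan is also kept as structured data next to the markdown document (the shape below is an illustrative assumption):

    ```typescript
    interface PlannedEvent {
      name: string;
      requiredProperties: string[];
    }

    // Illustrative excerpt of a tracking plan maintained as data.
    const plan: PlannedEvent[] = [
      { name: "signed_up", requiredProperties: ["signup_method", "referrer", "plan"] },
      { name: "created_project", requiredProperties: ["project_type", "template_used"] },
    ];

    function validate(event: string, properties: Record<string, unknown>): string[] {
      const planned = plan.find((p) => p.name === event);
      if (!planned) return [`"${event}" is not in the tracking plan`];
      return planned.requiredProperties
        .filter((key) => !(key in properties))
        .map((key) => `"${event}" is missing required property "${key}"`);
    }

    // Run against events captured in staging; fail the build on any finding.
    console.log(validate("signed_up", { signup_method: "email" }));
    // -> two findings: missing "referrer", missing "plan"
    ```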
    
    ---
    
    ## Failure patterns
    
    - **Tracking everything.** Noise overwhelms signal.
    - **Tracking nothing strategic.** Page views and that's it. Cannot answer real questions.
    - **No documentation.** Tracking plan lives in someone's head.
    - **Inconsistent naming.** Same concept, three names. Reports become detective work.
    - **Events fired but never reviewed.** Tracking debt accumulates.
    - **Dashboards no one looks at.** Built for vanity, not decisions.
    - **Single attribution model treated as truth.** All models lie. Some lie usefully.
    - **PII in events.** Compliance and tooling problems.
    - **Client-side only.** Critical business events should be server-side too. Ad blockers, network failures, and edge cases silently drop client-side events.
    - **No connection to business outcomes.** Metrics exist in a silo, never connected to revenue, retention, or strategic decisions.
    
    ---
    
    ## Output format
    
    Default output: a markdown tracking plan at `analytics-tracking-plan.md` plus a dashboard inventory.
    
    Tracking plan structure:
    
    ```markdown
    # Tracking Plan
    
    ## North star metric
    [Definition, calculation, target]
    
    ## KPI hierarchy
    [Each KPI with definition, calculation, owner]
    
    ## Event catalog
    | Event | When fired | Properties | Owner | Status |
    |---|---|---|---|---|
    | user_signed_up | After successful signup form submit | source, plan, referrer | Marketing | Live |
    | project_created | When user clicks Create Project | project_type, template_used | Product | Live |
    | ... | | | | |
    
    ## User properties
    [List with definitions]
    
    ## Naming conventions
    [Rules]
    
    ## Privacy and compliance
    [Rules]
    
    ## Governance
    [Process]
    ```
    
    ---
    
    ## Reference files
    
    - [`references/event-taxonomy-template.md`](references/event-taxonomy-template.md) - Starter event catalog with patterns for common product types.
    
  • skills/art-direction/SKILL.mdskill
    Show content (8291 bytes)
    ---
    name: art-direction
    description: "Direct visual and creative work for campaigns, photography, illustration, video, and branded experiences. Use this skill whenever the user wants to brief a photographer, direct illustrators, plan a creative campaign, develop visual concepts, write a creative direction document, or evaluate creative work for fit. Triggers on art direction, photo brief, photography brief, illustration brief, campaign concept, creative concept, visual direction, mood board, look and feel, visual treatment, video direction. Also triggers when the user has approved brand identity but needs to extend it into specific creative deliverables."
    category: design
    catalog_summary: "Photography, illustration, and visual direction for campaigns"
    display_order: 3
    ---
    
    # Art Direction
    
    Direct creative work that extends the brand into specific deliverables. Photography, illustration, video, motion, campaigns, environmental design.
    
    This skill assumes brand identity is approved (`brand-identity` complete). Art direction is about applying and extending it, not defining it.
    
    ---
    
    ## When to use
    
    - Briefing photographers, illustrators, videographers
    - Developing campaign creative concepts
    - Directing in-house creative teams
    - Writing creative direction documents for vendors
    - Evaluating creative deliverables for brand fit
    - Adapting brand visual identity to a new format or context
    
    ## When NOT to use
    
    - Setting project-wide aesthetic direction across multiple downstream skills (use `creative-direction` instead). This skill briefs specific creative deliverables; `creative-direction` produces the structured aesthetic brief that this skill consumes.
    - Defining brand visual identity from scratch (use `brand-identity`)
    - Day-to-day component design (use `design-standards`)
    - Writing copy for creative work (use `content-and-copy` or `landing-page-copy`)
    - Building a design system (use `design-system`)
    
    ---
    
    ## Required inputs
    
    - The deliverable (photo shoot, illustration set, video, campaign)
    - The brand identity (visual system, voice, imagery direction)
    - The audience for this specific work
    - The goal (brand awareness, conversion, education, emotional connection)
    - Budget and timeline
    - Distribution context (where it will be seen)
    
    ---
    
    ## The framework: 5 layers
    
    A creative brief covers five layers. Each must be clear before the brief leaves your hands.
    
    ### 1. The story
    
    What this creative work is fundamentally about.
    
    - **The premise.** The core idea in one sentence.
    - **The emotional through-line.** What the audience feels.
    - **The role of the brand.** How the brand shows up in the story.
    - **The takeaway.** What the audience walks away with.
    
    A weak premise produces work that's pretty but says nothing. Spend time here.
    
    ### 2. The look
    
    The visual treatment.
    
    **For photography:**
    - Subject and composition (close-up, environmental, candid, posed)
    - Lighting (natural, studio, dramatic, soft)
    - Color palette (true color, treated, monochrome)
    - Locations (specific or general direction)
    - Wardrobe and props
    - Mood references (3 to 5 reference images)
    
    **For illustration:**
    - Style (flat, dimensional, hand-drawn, geometric, abstract)
    - Color use (full palette, restricted, brand-only)
    - Line treatment
    - Composition style
    - Detail level
    - Reference artists or works (with explicit "we want like X but NOT like Y")
    
    **For video / motion:**
    - Pacing (slow, medium, fast cuts)
    - Camera movement (static, handheld, sweeping)
    - Color grading
    - Transitions and effects
    - Audio direction (music, voiceover, ambient)
    - Reference work (3 to 5 examples)
    
    ### 3. The execution
    
    Production-level direction.
    
    **Specifications:**
    - Deliverable formats and sizes (web hero, social square, print full-page)
    - Required shots or frames
    - Optional shots if budget allows
    - Wardrobe and prop list (for live action)
    - Color and asset specs (RGB, CMYK, hex codes for matching)
    
    **Constraints:**
    - Things to avoid (specific cliches, forbidden treatments, regulatory)
    - Brand-system requirements (logo placement, color use, type rules)
    
    ### 4. The variants
    
    How this creative scales across distribution.
    
    Most creative needs to live in multiple places. Plan the variants up front.
    
    **Common variant set for a campaign:**
    - Hero web image (16:9 or wider)
    - Mobile web hero (4:5 or 1:1)
    - Social square (1:1)
    - Social vertical (9:16)
    - Email banner (3:1 typical)
    - Display ad sizes (300x250, 728x90, etc.)
    - Print sizes if applicable
    
    For each variant, note: how the composition adapts, what gets cropped or repositioned, what assets are required.
    
    ### 5. The standards
    
    The quality bar.
    
    **Technical:**
    - Resolution and format requirements
    - Color profile
    - File naming conventions
    - Delivery format (raw + edited, layered files, exported variants)
    
    **Creative:**
    - What "approved" looks like (specific examples of acceptable work)
    - What "not approved" looks like (specific examples to avoid)
    - Number of revision rounds budgeted
    
    ---
    
    ## Workflow
    
    ### For briefing external creative
    
    1. **Confirm the inputs.** Brand identity locked. Audience and goal clear. Budget and timeline known.
    2. **Develop the concept.** Premise, emotional through-line, takeaway.
    3. **Build the look.** Mood references. Specific direction on style elements.
    4. **Write the spec.** Production-level direction. Variants. Constraints.
    5. **Brief the vendor.** In writing. Walk through it live. Allow questions.
    6. **Review milestones.** Treatment review, halfway review, final review. Don't skip the early reviews; corrections compound.
    7. **Approve and document.** What was produced, what's licensed for what use.
    
    ### For directing in-house creative
    
    1. **Same brief, lighter format.** In-house direction can be more iterative. Still document the brief.
    2. **Co-create.** In-house teams know the brand. Use their judgment. Don't over-direct.
    3. **Establish review rhythm.** Daily check-ins for fast work, weekly for longer projects.
    
    ### For evaluating existing creative
    
    1. **Score against the brief.** Did the work hit the brief? Where did it deviate?
    2. **Score against the brand.** Does this look like the brand? Could this be confused with a competitor?
    3. **Score against the goal.** Will this drive the intended outcome?
    4. **Identify fixes.** What can be improved? What's a deal-breaker vs. acceptable?
    
    ---
    
    ## Failure patterns
    
    - **"Modern, clean, minimal" briefs.** Means nothing. Force specificity. Use specific reference brands, named artists, or visual examples.
    - **No "what to avoid" direction.** Vendors interpret broadly. Tell them what's out of bounds explicitly.
    - **Reference imagery that's actually competitor work.** You'll get something that looks like the competitor. Never use direct competitors as references.
    - **Skipping early reviews.** Every revision late in the process is 5x more expensive than catching issues at treatment stage.
    - **Too many cooks.** 6 stakeholders all giving creative feedback produces incoherent work. Concentrate creative authority.
    - **Ignoring distribution.** Creative that doesn't work in the actual contexts where it will live is failed creative.
    - **No variant planning.** Discovering at delivery that you need a square crop and the photographer composed for 16:9 only.
    - **Approving creative that's "fine."** Fine is the enemy of distinctive. If it doesn't move you, it won't move the audience.
    
    ---
    
    ## Output format
    
    Default output is a creative brief at `creative-brief-[project].md`.
    
    Structure:
    1. The story (premise, through-line, role of brand, takeaway)
    2. The look (visual treatment with references)
    3. The execution (specs, variants, constraints)
    4. The standards (quality bar, examples of acceptable and unacceptable)
    5. The logistics (timeline, milestones, budget, deliverables)
    
    Plus a separate moodboard or visual reference doc with images.
    
    ---
    
    ## Reference files
    
    - [`references/creative-brief-template.md`](references/creative-brief-template.md) - Generic art direction brief template covering any production type (photo, illustration, video, animation, mixed).
    - [`references/photo-shoot-brief.md`](references/photo-shoot-brief.md) - Detailed brief template for photography commissions.
    - [`references/illustration-brief.md`](references/illustration-brief.md) - Brief template for illustration commissions.
    
  • skills/backup-and-disaster-recovery/SKILL.mdskill
    Show content (10870 bytes)
    ---
    name: backup-and-disaster-recovery
    description: "Plan and run backups, set recovery objectives, and run disaster recovery drills. Use this skill when defining RPO/RTO targets, designing backup architecture, deciding what to back up and how often, planning for full-region or platform outages, or running a restoration drill. Triggers on backup, restore, RPO, RTO, disaster recovery, DR, business continuity, what if the database is gone, what if our hosting goes down, recovery drill, ransomware planning. Also triggers when an incident reveals a gap in restoration capability."
    category: operations
    catalog_summary: "RPO/RTO targets, backup strategy, restoration drills"
    display_order: 6
    ---
    
    # Backup and Disaster Recovery
    
    Plan for the worst case: the database is gone, the host is down for a week, the deploy was poisoned, ransomware encrypted everything. The skill is in advance preparation, not reaction.
    
    ---
    
    ## When to use
    
    - Setting up backups for a new system
    - Reviewing and validating backup architecture
    - Defining RPO (recovery point objective) and RTO (recovery time objective)
    - Running a disaster recovery drill
    - Diagnosing gaps after an incident
    - Planning for ransomware, data corruption, or insider threats
    - Migrating to a new platform (DR planning belongs in the migration plan)
    
    ## When NOT to use
    
    - Active incident response (use `incident-response`)
    - Routine deploy rollbacks (use `launch-runbook`)
    - Code or content versioning (covered by Git, CMS revision history)
    - Routine database snapshots (use this skill to set them up; routine review goes in monitoring)
    
    ---
    
    ## Required inputs
    
    - The systems in scope (databases, file storage, code, configs, secrets)
    - The hosting platforms and providers
    - Existing backup tooling and what it covers
    - Tolerance for data loss (in time)
    - Tolerance for downtime (in time)
    - Compliance requirements (some regulations mandate specific backup standards)
    
    ---
    
    ## The framework: 4 questions
    
    Every disaster recovery plan answers four questions explicitly.
    
    ### Question 1: What needs to be recoverable?
    
    List every system that holds state. Categorize by criticality.
    
    **Tier 1: must recover.** Without it, the business stops. (Customer database, transaction log, primary content store.)
    
    **Tier 2: should recover.** Loss is painful but not fatal. (Analytics, logs, secondary services.)
    
    **Tier 3: nice to recover.** Easy to rebuild. (Caches, derived data, temporary state.)
    
    The tier drives RPO, RTO, backup frequency, and storage spend.
    
    ### Question 2: How much data loss is acceptable? (RPO)
    
    RPO is the maximum age of data that's acceptable to lose, measured in time.
    
    - RPO = 1 hour: hourly backups or continuous replication needed
    - RPO = 1 day: daily backups acceptable
    - RPO = 1 week: weekly backups acceptable
    
    For most production data, RPO of 1 hour or less is the target. For critical financial systems, near-zero RPO (continuous replication).
    
    For derived or rebuildable data, RPO of 1 day or longer is fine.
    
    ### Question 3: How much downtime is acceptable? (RTO)
    
    RTO is the maximum time to restore service after a disaster.
    
    | RTO target | Implies |
    |---|---|
    | < 5 minutes | Hot standby with automatic failover |
    | < 1 hour | Warm standby with manual failover or fast restore from recent snapshot |
    | < 24 hours | Cold backup with documented restore process |
    | Days to weeks | Best-effort, accept extended downtime |
    
    RTO drives architecture spend. Aggressive RTOs (< 1 hour) are expensive. Loose RTOs (days) are cheap.
    
    ### Question 4: What's the disaster?
    
    Plan for specific scenarios. Each has different implications.
    
    **Hardware failure.** Disk dies. Standard backups solve this. Most modern hosts handle automatically.
    
    **Provider outage.** Region or vendor goes down. Cross-region or cross-provider redundancy needed for low RTO.
    
    **Data corruption.** Bad migration, bug, accidental delete. Point-in-time restore needed. The latest backup might be corrupted; you need history.
    
    **Ransomware or compromise.** Attacker encrypts or deletes. Backups must be immutable or air-gapped, otherwise the attacker takes them too.
    
    **Account compromise.** Attacker has admin credentials, deletes everything. Same defense as ransomware: immutable backups, separate access control.
    
    **Vendor lock-out.** Account suspended, billing dispute, vendor disappears. Backups outside the vendor needed.
    
    **Insider threat.** Disgruntled employee deletes or exfiltrates. Audit logs, separation of duties, immutable backups.
    
    A backup strategy that handles only hardware failure isn't a strategy. It's the easiest case.
    
    ---
    
    ## Workflow
    
    ### Step 1: Inventory state
    
    Every system that holds state goes on a list:
    
    | System | Data type | Tier | Current backup | Tested? |
    |---|---|---|---|---|
    
    If you can't list it, you can't protect it. Often the inventory itself reveals gaps (the "we forgot about that database" moment).
    
    ### Step 2: Set RPO and RTO per tier
    
    For each tier, agree on RPO and RTO. Get sign-off from the people who'd be impacted by a disaster.
    
    Push back on aspirational targets that aren't backed by infrastructure spend. RTO of 5 minutes for a system without a hot standby is not real.
    
    ### Step 3: Verify or design backup architecture
    
    For each system, ensure:
    
    - **Frequency** matches RPO.
    - **Retention** covers point-in-time recovery (typically 30+ days for production data).
    - **Storage location** is separate from the source. Same disk, same account, same region: not enough.
    - **Immutability or write-once storage** for at least some backup copies. Defends against ransomware.
    - **Encryption at rest.** Standard for compliance.
    - **Tested restore procedure.** Untested backups are not backups.
    
    The "3-2-1 rule" is a useful starting point: 3 copies of data, 2 different storage types, 1 offsite (or off-account, off-platform).
    
    ### Step 4: Document the restore runbook
    
    For each system, write the runbook:
    
    1. How to detect the disaster (cross-reference monitoring)
    2. How to decide to restore (decision criteria, who authorizes)
    3. The exact restore steps (commands, screenshots, sequence)
    4. How to verify the restore worked
    5. How to switch traffic back
    6. Communication template (status page, customer notice)
    
    The runbook is for the worst night of someone's career. Write it for tired, panicked you.
    
    ### Step 5: Run a drill
    
    The first restore should never be during a real disaster.
    
    Drills can be:
    
    - **Tabletop:** walk through the runbook on paper. Useful for finding gaps in the plan.
    - **Partial:** restore to a non-production environment. Verify the data, validate the steps.
    - **Full:** simulate the disaster. Production failover or full restore. Maximum confidence, maximum risk.
    
    For most teams: quarterly tabletop, annual partial drill, full drill before major launches or after major architecture changes.
    
    ### Step 6: Document drill results
    
    After each drill, document:
    - What was tested
    - What worked
    - What broke
    - What the actual RPO and RTO were (vs. targets)
    - Action items
    
    If the actual RTO was 6 hours when the target was 1 hour, the target is fiction. Either fix the gap or revise the target.
    
    ### Step 7: Schedule the next drill
    
    Calendar it. Assign an owner. Backups that aren't drilled drift toward useless.
    
    ---
    
    ## Special topics
    
    ### Database point-in-time recovery
    
    Many managed databases offer point-in-time recovery (PITR) within a retention window (often 7-35 days). This typically achieves RPO of seconds to minutes.
    
    For longer retention, schedule periodic exports to immutable storage.
    
    PITR alone isn't enough. If the database service itself is compromised, PITR is gone too. Always have at least one backup outside the source service.
    
    ### File storage backups
    
    Object stores (S3, GCS, Azure Blob) usually offer:
    
    - Versioning (recover overwritten objects)
    - Replication (cross-region)
    - Object lock or immutability (defense against deletion)
    
    Set all three for production-critical buckets. Don't rely on the storage provider's default retention.
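
    The skill is provider-agnostic, but as one concrete example, here is a hedged sketch of enabling versioning and a default Object Lock retention on an S3 bucket with the AWS SDK v3 client. The bucket name is hypothetical; verify current parameter shapes and Object Lock prerequisites against the S3 and SDK documentation before relying on this.

    ```typescript
    import {
      S3Client,
      PutBucketVersioningCommand,
      PutObjectLockConfigurationCommand,
    } from "@aws-sdk/client-s3";

    const s3 = new S3Client({});
    const Bucket = "example-prod-backups"; // hypothetical bucket name

    async function hardenBackupBucket(): Promise<void> {
      // Versioning: recover overwritten or deleted objects.
      await s3.send(new PutBucketVersioningCommand({
        Bucket,
        VersioningConfiguration: { Status: "Enabled" },
      }));

      // Default Object Lock retention: deletes and overwrites blocked for 30 days.
      // Object Lock generally has to be enabled on the bucket itself first (historically
      // at creation time); check current S3 documentation for your account.
      await s3.send(new PutObjectLockConfigurationCommand({
        Bucket,
        ObjectLockConfiguration: {
          ObjectLockEnabled: "Enabled",
          Rule: { DefaultRetention: { Mode: "COMPLIANCE", Days: 30 } },
        },
      }));

      // Cross-region replication is configured separately (PutBucketReplicationCommand
      // plus an IAM role) and is omitted here.
    }

    hardenBackupBucket().catch(console.error);
    ```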
    
    ### Code and config backups
    
    Code lives in Git. The Git host (GitHub, GitLab, etc.) is your backup, but a single host is a single point of failure.
    
    For high-criticality code:
    - Mirror to a second host or your own server
    - Periodic offline exports
    
    Configs and secrets need separate handling:
    - Infrastructure-as-code: in Git, mirrored
    - Runtime configs: backed up alongside the system
    - Secrets: in a secret manager with its own backup story
    
    ### Backups of backups
    
    The backup system itself can fail. Backup metadata, backup credentials, encryption keys: all must be backed up.
    
    If your backup is encrypted with a key you've lost, the backup is useless.
    
    ### Compliance backups
    
    Some regulations require specific retention (e.g., 7 years for financial data). Comply with the highest applicable standard.
    
    Don't conflate compliance retention with operational backup. Compliance often allows much slower restore (just need to be able to produce the data eventually).
    
    ---
    
    ## Failure patterns
    
    **Untested backups.** The single most common failure. Backups appear to work; restore fails. Test.
    
    **Backups in the same account or region as the source.** Account compromise or region outage takes both.
    
    **No immutability.** Ransomware encrypts the backups too. Use object lock or air-gapped storage.
    
    **RTO and RPO that aren't measured.** Target says "1 hour" but no one has verified the actual RTO. Assume the actual is longer than the target until proven otherwise.
    
    **Restore runbook only in someone's head.** Person leaves or is unavailable; runbook is gone. Document.
    
    **Backups but no DR plan.** "We have backups" isn't a plan. The plan is the runbook plus the architecture plus the drilling.
    
    **Optimism bias.** "It won't happen to us." It happens. Plan as if it will.
    
    **Backups too old or too new.** A single recent copy can't recover corruption discovered late, and only-old copies violate the RPO. Keep point-in-time history: daily snapshots with 30+ days of retention, or continuous replication plus separate periodic snapshots.
    
    **Skipping drills "because we're busy."** Then you'll be busier during the disaster.
    
    **No communication plan.** Restoring data is half the job. Telling customers, stakeholders, and internal teams what's happening is the other half.
    
    ---
    
    ## Output format
    
    A DR plan document includes:
    
    - **Inventory:** every stateful system
    - **Tiering:** criticality per system
    - **Targets:** RPO and RTO per tier
    - **Architecture:** backup tooling, frequency, storage, immutability
    - **Runbooks:** restore procedures per system
    - **Drill schedule:** what gets tested when
    - **Drill log:** results of past drills
    - **Communication templates:** what to say during a real DR event
    
    ---
    
    ## Reference files
    
    - [`references/restore-runbook-template.md`](references/restore-runbook-template.md): Fillable template for a restore runbook, covering detection, authorization, steps, verification, and rollback.
    

README

Complete Claude Skills for the full web lifecycle. Build, ship, audit, optimize.

Brand Build Skills for Claude

A complete, opinionated library of Claude Skills covering the full lifecycle of building, launching, running, and growing a brand and a website.

License: MIT. PRs welcome.


98 stack-agnostic skills covering brand, design, content, SEO, dev, ops, growth, and research. Includes an Ahrefs MCP-powered SEO audit suite. Use them on Next.js, WordPress, Shopify, Webflow, plain HTML, or anything else.

Featured in awesome-claude-skills under Business & Marketing.


What are Claude Skills?

Claude Skills are reusable capability packages that teach Claude how to handle a specific kind of task with a consistent framework, vocabulary, and output format. Each skill is a folder containing a SKILL.md (instructions plus YAML metadata) and optional reference files (templates, checklists, worked examples). Claude loads a skill automatically when a user request matches the skill's description.

Skills work across Claude.ai, Claude Code, and the Anthropic API. Once you write a skill, it is portable across all three.

For the official deep dive, see Anthropic's Agent Skills documentation.


What is in this library

This is not a curated list of other people's skills. It is a single, opinionated library where every skill follows the same structure and conventions, so the skills compose cleanly across a real project lifecycle.

What you get:

  • 98 skills across 16 categories, every one with a complete SKILL.md and at least one reference file
  • 424 reference files (templates, checklists, decision matrices, worked examples)
  • Stack-agnostic. Works on any web stack. The only named-tool exception is the SEO audit suite, which assumes the Ahrefs MCP.
  • Future-proof. Principles over tools. Stable concepts over trending techniques. References to durable specs (W3C, WHATWG, Schema.org, MDN, NN/g, WCAG) over content that ages with each algorithm update.
  • Uniform structure. Every skill uses the same section order, the same tone, and the same authoring conventions. Predictable in, predictable out.
  • Composable. Skills reference each other. creative-brief points to brand-voice. incident-response points to monitoring-and-alerting. Each skill's "When NOT to use" tells you which sibling fits your adjacent work.

Highlight categories: brand strategy and identity, design systems, content production with full Tier 1 and Tier 2 coverage, full SEO suite (foundation plus Ahrefs MCP-powered audit suite), product management with experimentation and gap-closing tracks, growth tooling for interactive web tools, paid media discipline, frontend dev and accessibility, performance and QA, launch and incident ops, UX research, plus a meta-skill that teaches you to write your own.


Featured skills

Six entry-point skills, one per audience track. Run any of these standalone, or compose them with the rest of the catalog.

| Track | Skill | What it does |
|---|---|---|
| Brand and creative | creative-direction | Four-axis brief (tone, aesthetic, audience, sensory ambition) that gives every downstream skill a coherent direction |
| PM, experimentation | experiment-design | From hypothesis to decision: sample size, duration, segment analysis, and the failure modes that produce wrong shipping calls |
| PM, gap-closing | feature-launch-playbook | The discipline of launching a feature well: positioning, internal alignment, customer comms, enablement, rollout, monitoring |
| Content | pillar-content-architecture | Hub-and-cluster topical authority: pillar selection, cluster planning, internal linking, refresh discipline |
| Marketing | landing-page-copy | Landing pages, sales pages, hero-to-CTA flow with copy that converts |
| Growth tooling | funnel-flow-architecture | Cross-tool conversion flows architected to match the audience and the funnel stage |

See it in action

The creative-direction skill rendered as a live showcase →

Thirty fictional brands generated from briefs that all use the same skill. Each is a fully styled brand site, not a mockup. The showcase demonstrates what the four-axis framework produces in practice and lets you filter by axis position to see how each combination renders.

[Screenshot: showcase grid of brand archetypes including Pulse, Volt, Anode, Drift, and others, with type and motion intensity filter pills above the cards.]

Filter by any axis position

The skill defines four axes: tone, aesthetic, relationship, sensory. The showcase lets you filter by any combination and see which examples match. Pre-filtered URLs deep-link from the SKILL.md and axes-explained reference, so you can read about a position and click straight through to the rendered examples.

[Screenshot: showcase grid filtered by Tone = Provocative and Sensory = Resonant, showing eight matching brand cards with the axis disclosure auto-expanded.]

The empty state is the lesson

The framework is generative. The showcase is illustrative. Most rare-but-powerful combinations are valid creative choices that simply have not been built yet. Set Provocative + Editorial Restrained + Coach + Resonant and the grid is empty.

[Screenshot: all four axis filters set to Provocative, Editorial Restrained, Coach, and Resonant, showing zero matches and the empty-state copy: "No example yet. The framework allows this combination, it just hasn't been built as one of the thirty worked examples."]

The framework's range

Same skill, same brief format. Four completely different visual systems. Notice that Pulse and Bloom share identical axis positions yet read as opposite visual languages. The reference brands and aesthetic interpretation do the rest.

| Brand | Tagline | Axis positions | Visual treatment |
|---|---|---|---|
| Pulse · music streaming | "Sound that moves with you." | Playful / Expressive Maximalist / Companion / Resonant | Saturated pink-to-cyan gradient hero with equalizer bars |
| Forge · boutique fitness | "Show up. Get hammered." | Provocative / Expressive Maximalist / Coach / Resonant | Dark industrial hero with intense typography |
| Bloom · adaptogenic soda | "Soda that loves you back." | Playful / Expressive Maximalist / Companion / Resonant | Peachy gradient hero with tri-color headline and product photo |
| Observatory Editorial · observability tool | "An open-source tool that respects engineer time." | Conversational / Editorial Restrained / Peer / Considered | Cream paper hero with restrained serif headline |

Run this on your own brand

The creative-direction skill lives at skills/creative-direction/. Install it (see below), give Claude a project name and a few inspiration references, and the skill walks you through producing a brief that downstream skills can consume. The brand sites in the showcase were built from briefs of exactly that shape.


Getting started

Skills install in three different places depending on where you use Claude. Pick the platform that matches your workflow.

Option 1: Claude.ai (web and desktop)

If your Claude.ai plan supports custom Skills:

  1. Go to Settings → Capabilities → Skills.
  2. Upload the skill folder you want as a .zip (one zip per skill folder containing SKILL.md and the references/ subfolder).
  3. Enable the skill in the chat interface.

Claude will load the skill automatically when your request matches its description.

For current plan availability and the exact upload UI, see Anthropic's Skills user guide.

Option 2: Claude Code (recommended)

Skills are first-class citizens in Claude Code. Drop them into your skills directory and Claude Code picks them up automatically.

User-level skills (available in every project):

# macOS / Linux
mkdir -p ~/.claude/skills
cp -r skills/* ~/.claude/skills/

# Windows (PowerShell)
New-Item -ItemType Directory -Force -Path "$HOME\.claude\skills"
Copy-Item -Recurse skills\* "$HOME\.claude\skills\"

Project-level skills (available only in a specific project):

mkdir -p .claude/skills
cp -r path/to/this-repo/skills/* .claude/skills/

Start (or restart) Claude Code. Skills load automatically.

For exact current paths and config flags, see the Claude Code documentation.

Option 3: Anthropic API

Use Skills programmatically by referencing them in your API calls. Skills must first be uploaded to your workspace (via the Console or API), then referenced by ID when creating messages.

For the current API surface, request format, and limits, see the Agent Skills API documentation.

Want only a few skills?

You do not have to install all 98. Pick the categories that match your work. The library is modular: each skill stands on its own.


Quick example

Once installed, skills trigger automatically based on your request. You do not have to name the skill or change how you talk to Claude.

You ask:

"Our organic traffic dropped 30% last week. Help me figure out why."

What happens:

Claude recognizes the request matches seo-traffic-diagnosis, loads the skill, and walks through its 5-layer root cause framework: confirm the change is real → localize the change → page-level analysis → technical analysis → external analysis. By the end, you have a hypothesis statement, evidence, and an action plan, structured the same way every time.

Other natural triggers:

  • "Help me write a creative brief" → creative-brief
  • "Audit my homepage for SEO" → seo-onpage
  • "We need a backlink audit" → seo-backlink-audit
  • "Plan our content roadmap for Q3" → seo-content-gap-audit plus content-strategy
  • "Postmortem template for last night's incident" → after-action-report
  • "How do I write my own skill?" → skill-creation-walkthrough

You can also call a skill explicitly: "Use the seo-audit-orchestration skill to run a full audit on example.com."


How they compose

The skills compose into a full project flow:

brand-discovery → brand-ideation → brand-identity → brand-style-guide → brand-voice
                                                                        ↓
creative-brief → information-architecture → content-strategy → design-system
                                                              ↓
seo-keyword → seo-content-audit → content-and-copy → landing-page-copy
                                                    ↓
seo-onpage → seo-technical → seo-aeo-geo → seo-offpage → seo-competitor
                                          ↓
frontend-component-build → accessibility-audit → performance-optimization
                                                ↓
code-review-web → qa-testing → security-baseline → launch-runbook
                                                  ↓
domain-strategy → monitoring-and-alerting → backup-and-disaster-recovery
                                          ↓
incident-response → after-action-report
                  ↓
analytics-strategy → cro-optimization → ux-research → usability-testing → journey-mapping

The SEO audit suite (Ahrefs MCP-powered) wraps around the SEO foundation skills:

seo-audit-orchestration
  ├── seo-site-health-audit
  ├── seo-backlink-audit
  ├── seo-keyword-gap-audit
  ├── seo-content-gap-audit
  ├── seo-traffic-diagnosis  (also runs standalone for incident-style work)
  └── seo-rank-tracking      (ongoing, feeds the others)

The catalog also includes four audience tracks that compose alongside the foundational lifecycle. Each track has its own internal flow:

Paid media (Marketing track):

paid-media-strategy → ads-creative-development → ads-performance-analytics

Pairs with the paid media platforms in the integrations catalog at rampstack.co (Google Ads, Meta, LinkedIn, TikTok, plus Synter as the multi-platform aggregator).

Growth tooling (interactive web tools):

funnel-flow-architecture (orchestrator)
  ├── lead-magnet-design          (capture)
  ├── calculator-design           (capture / activate)
  ├── quiz-and-assessment-design  (capture / activate)
  ├── multi-step-form-design      (activate)
  ├── chatbot-flow-design         (activate)
  ├── onboarding-wizard-design    (activate)
  ├── interactive-product-tour    (activate / convert)
  ├── upgrade-flow-design         (convert)
  ├── scheduler-and-booking-design (convert)
  ├── comparison-tool-design      (convert)
  └── product-configurator-design (convert)

funnel-flow-architecture is the orchestrator: it sequences which interactive tool fits each audience and funnel stage, distinguishing matched funnels from kitchen-sink funnels.

Tier 2 content lifecycle:

content-strategy → pillar-content-architecture → content-brief-authoring
                                                ↓
              content-and-copy / long-form-content-frameworks / email-sequences
                                                ↓
                       editorial-qa → content-distribution → programmatic-seo
                                                ↓
              content-refresh-system → content-repurposing → content-migration

ai-content-collaboration is a workflow layer that runs across every phase rather than a single step. documentation-strategy operates continuously alongside the rest.

Tier 2 product management (two parallel tracks):

Experimentation track:
experiment-design → feature-flagging → experimentation-platform-orchestrator
                                     ↓
                         experimentation-analytics → data-warehouse-experimentation

Gap-closing track:
pm-spec-writing → roadmap-planning → feature-launch-playbook
                                   ↓
       beta-program-management → product-analytics-setup → integration-orchestrator

The experimentation track ships changes with statistical discipline; the gap-closing track ships features with operational discipline. Both compose with the foundational lifecycle above.

Operations, cross-cutting, and team skills (stakeholder-communication, documentation-strategy, vendor-evaluation, team-onboarding-playbook, dependency-management, cost-optimization, etc.) cut across every track.

You can also pull individual skills for one-off work. Need just a backlink audit? Use seo-backlink-audit. Need to write a creative brief? Use creative-brief. Each skill stands on its own.


How the catalog connects

The skills compose with the tools your team already uses: 98 skills at the center, with 35 integrations across 6 categories (workflow, experimentation, paid media, data and analytics, content and SEO, SEO and competitive intelligence) radiating out via MCPs.


Surfaces

This catalog is the open-source methodology layer. Commercial surfaces at rampstack.co extend it:

  • Skills directory. Every skill on a curated landing surface with audience tracks, search, and category navigation.
  • Walkthroughs. Multi-skill recipes that orchestrate skill clusters end-to-end. Use these when one skill is not enough and a packaged sequence is.
  • Integrations directory. Curated MCPs, APIs, and tooling that the skills hook into.
  • Showcase. Real brand sites built from these skills, with the brief that produced each one.

The skills in this repository remain free, open-source, and stack-agnostic. The surfaces above are how the same methodology is delivered as a product.


The 98-skill catalog

All 98 skills are shipped. Each has a complete SKILL.md plus at least one reference file (template, checklist, or playbook).

Strategy and discovery (5)

1. brand-discovery - Audience research, competitive scan, positioning territory exploration
2. creative-brief - Project briefs that align stakeholders before work starts
3. creative-direction - Four-axis aesthetic brief (tone, aesthetic, audience, sensory ambition) for cross-skill coherence
4. information-architecture - Sitemap, navigation, URL structure, content types, taxonomy
5. content-strategy - Editorial strategy, content calendar, topical authority planning

Brand (5)

6. brand-ideation - Naming, positioning territories, mood directions, narrative angles
7. brand-identity - Logo system, color, typography, imagery, iconography, motion
8. brand-style-guide - The canonical reference document for the full brand system
9. brand-voice - Voice attributes, tone shifts, vocabulary, paired-example library
10. logo-design - Logo variants across architectures (wordmark, lockup, monogram, letterform-as-symbol), with rationale and application specs

Design (3)

11. design-system - Component library, design tokens, design system documentation
12. design-standards - Production-grade page and component design standards
13. art-direction - Photography, illustration, and visual direction for campaigns

Content (12)

14. pillar-content-architecture - Hub-level content architecture: pillar topic selection, cluster planning, internal linking, URL structure, pillar and cluster page anatomy, topical authority signals, refresh discipline
15. content-brief-authoring - Per-piece editorial brief: target keyword, intent, audience, outline, entity coverage, internal linking, success criteria, and the discipline that distinguishes useful briefs from bloat
16. content-and-copy - Website copy, blog content, content production frameworks
17. landing-page-copy - Landing pages, sales pages, hero-to-CTA flow
18. email-sequences - Onboarding flows, lifecycle campaigns, transactional copy
19. programmatic-seo - Designing pSEO programs that work: data sources, template design, quality control at scale, internal linking, crawl budget, AEO/GEO patterns, refresh discipline, and when pSEO is and is not the right answer
20. editorial-qa - Pre-publish QA framework: brief adherence, voice consistency, fact accuracy, AI-content audit, AEO/SEO compliance, sampling at scale, and the workflow that distinguishes catch-problems QA from process theater
21. ai-content-collaboration - How humans and AI compose in content workflows: participation boundaries, hybrid patterns, voice ownership, the AI slop problem, disclosure and transparency, team calibration, and the ethics of honest AI-assisted production
22. long-form-content-frameworks - Structural patterns for individual long-form pieces (case studies, whitepapers, research reports, definitive guides, manifestos, ebooks, long-form tutorials) that distinguish publication-quality work from bloggy-long padding or academic bloat
23. content-refresh-system - Systematic content refresh: quarterly audits, refresh prioritization, refresh-vs-merge-vs-delete decisions, the lifecycle discipline that distinguishes intentional programs from set-and-forget decay
24. content-repurposing - Cross-format content adaptation: one piece becomes many (blog series, email, social, webinar, podcast, video) with per-format adaptation rather than mass-blast that ignores medium constraints
25. content-distribution - Content distribution discipline: owned, earned, and paid channels matched to audience and content type. Channel-fit decisions, distribution cadence, the strategic alternative to spam-everywhere or hope-and-pray

SEO foundation (7)

Tool-agnostic SEO skills. These define the conceptual frameworks. The SEO audit suite below adds the Ahrefs MCP-powered execution layer.

26. seo-onpage - Single-page audits and optimization across 8 dimensions
27. seo-technical - Crawlability, indexability, rendering, schema, page experience
28. seo-keyword - Discovery, intent classification, clustering, prioritization
29. seo-competitor - SERP overlap, content gaps, backlink gaps, technical comparison
30. seo-offpage - Link building, digital PR, citations, linkable assets
31. seo-content-audit - Keep/update/merge/redirect/delete decisions across a site
32. seo-aeo-geo - AI search optimization, llms.txt, extraction-friendly content

SEO audit suite (Ahrefs MCP-powered) (7)

End-to-end SEO audit workflows that pull data from the Ahrefs MCP and produce concrete deliverables. These skills assume the Ahrefs MCP is connected.

33. seo-audit-orchestration - Master orchestrator: sequences the suite, produces a rollup report
34. seo-backlink-audit - Profile health, anchor mix, toxic links, reclamation, gap analysis
35. seo-keyword-gap-audit - Competitor keyword gaps with opportunity scoring and clustering
36. seo-content-gap-audit - Missing topics, thin coverage, outdated content, decay diagnosis
37. seo-traffic-diagnosis - Diagnose drops, stalls, or wins via 5-layer root cause analysis
38. seo-site-health-audit - Triage Ahrefs Site Audit findings by SEO impact, not severity
39. seo-rank-tracking - Setup, baseline, segmentation, alerting, dashboarding

Product (13)

40. pm-spec-writing - PRDs, user stories, acceptance criteria, dev briefs
41. roadmap-planning - Quarterly planning, prioritization, dependency mapping
42. integration-orchestrator - Sequence creative-direction work across phases, gates, handoffs, and QA verification
43. experiment-design - Hypothesis to decision: sample size, duration, segment analysis, interpretation, and the failure modes that produce wrong shipping calls
44. feature-flagging - Flags as production infrastructure: types, naming, lifecycle, targeting, rollout, stale flag cleanup, governance
45. experimentation-analytics - Read result panels without fooling yourself: confidence intervals, p-values, multiple testing, sequential testing, CUPED, ratio metrics, network effects, dashboard reconciliation
46. experimentation-platform-orchestrator - Pick the right experimentation platform, migrate when wrong, coordinate when multi-platform: a decision framework for Statsig, PostHog, GrowthBook, Optimizely, Amplitude, Eppo, Kameleoon
47. product-analytics-setup - Instrument product analytics correctly: event taxonomy, properties, naming conventions, schema versioning, funnels, retention cohorts, North Star selection, and the instrumentation debt that compounds without discipline
48. data-warehouse-experimentation - Run experiments out of the warehouse: SQL assignment, exposure logs, dbt metric definitions, statistical analysis, variance reduction with CUPED, sequential testing, and the operational tradeoffs vs platforms
49. feature-launch-playbook - The operational discipline of launching a feature well: positioning, internal alignment, customer comms, sales enablement, support readiness, rollout strategy, monitoring, and post-launch measurement
50. jtbd-framing - Jobs-to-be-Done framework. Job statements, struggling moments, hire/fire criteria, the difference between feature-thinking and job-thinking. Honest about where JTBD earns its keep and where it becomes performative
51. okr-design - OKR design discipline. Outcome statements, key results, scoring, mid-quarter recalibration. Distinguishes sandbagged OKRs (always hit, useless) from aspirational fantasy (impossible, demoralizing) from stretch OKRs (genuine ambition with quarterly accountability)
52. beta-program-management - Running betas that produce real signal. Participant selection, structured feedback, beta-to-GA decisions. Distinguishes soft-launch (no structure) from kitchen-sink (everyone in) from structured-beta (calibrated cohort with intentional feedback loops)

Development (4)

53. code-review-web - PR review, build error diagnosis, security and quality checks
54. frontend-component-build - Component architecture, props design, accessibility from the start
55. accessibility-audit - WCAG compliance audit with remediation plan
56. performance-optimization - Core Web Vitals, asset optimization, render performance

Quality assurance (1)

57. qa-testing - Pre-launch QA, regression testing, cross-browser checks

Operations (9)

58. launch-runbook - Go-live runbook, DNS cutover, deploy day procedures
59. incident-response - Incident triage, comms, mitigation, escalation
60. after-action-report - Post-mortems, retros, learnings documentation
61. domain-strategy - DNS architecture, redirects, registrars, multi-domain portfolios
62. monitoring-and-alerting - SLO design, uptime checks, alert routing, on-call rotations
63. backup-and-disaster-recovery - RPO/RTO targets, backup strategy, restoration drills
64. security-baseline - HTTPS, security headers, CSP, secrets management, vulnerability scans
65. email-deliverability - DMARC, SPF, DKIM, sender reputation, deliverability monitoring
66. media-asset-management - Image pipelines, video hosting, asset libraries, format selection

Growth (2)

67. analytics-strategy - Measurement frameworks, dashboard design, event taxonomy
68. cro-optimization - Hypothesis-driven testing, conversion optimization

Growth tooling (12)

Interactive web tools that turn visitors into leads. Lead magnets, calculators, quizzes, multi-step forms, chatbots, and the cross-tool funnel architecture that orchestrates them.

69. lead-magnet-design - Designing gated content that earns the email. Distinguishes thin-bait (overpromises, underdelivers) from kitchen-sink-resource (everything, helps with nothing) from earned-value-magnet (delivers standalone value while qualifying the lead)
70. calculator-design - Designing interactive calculators that deliver decision-support value while qualifying leads. Distinguishes vanity-calculator (no real value) from lead-trap (hides answer behind email) from transparent-decision-tool (gives genuine value, captures leads honestly)
71. quiz-and-assessment-design - Designing quizzes and assessments that produce actionable segmentation. Distinguishes clickbait-quiz (engagement only) from vanity-result (entertaining, not useful) from actionable-segmentation (genuine categorization that drives next-step recommendations)
72. multi-step-form-design - Designing multi-step forms that respect cognitive load while maintaining completion intent. Distinguishes kitchen-sink-single-page (overwhelms) from progress-theater (steps without genuine staging) from genuinely-staged (each step earns its own page)
73. chatbot-flow-design - Designing conversational flows for chatbots and AI agents on websites. Distinguishes scripted-bot (rigid trees, fail edge cases) from hallucinating-bot (LLM without structure, makes things up) from structured-guided-conversation (LLM-powered with intent architecture and fallback discipline)
74. funnel-flow-architecture - Architecting cross-tool conversion flows that match audience and stage. Distinguishes silo-funnels (every tool standalone) from kitchen-sink-funnels (every audience squeezed through one path) from matched-funnels (architecture matched to audience-and-stage)
75. onboarding-wizard-design - Designing first-run product onboarding wizards. Distinguishes tutorial-overload (dump everything upfront) from skip-friendly-empty (skipped onboarding leads to abandoned product) from earned-progressive-disclosure (right things at the right moments)
76. interactive-product-tour - Designing in-product tours and contextual help. Distinguishes tooltip-spam (every button has a tour stop) from one-and-done (tour shows once, never seen again) from contextual-when-needed (surfaces help at the moment friction occurs)
77. upgrade-flow-design - Designing free-to-paid conversion flows. Distinguishes paywall-everywhere (gates everything aggressively) from free-forever-trap (no upgrade path surfaces) from value-triggered-upgrade (paywall surfaces at moments of demonstrated value)
78. scheduler-and-booking-design - Designing schedulers and booking flows. Distinguishes any-time-friction (no qualification, just a booking link) from interrogation-gate (so much qualification it scares users off) from qualified-fast-path (just enough qualification to set up the call well)
79. comparison-tool-design - Designing comparison tools that help users decide. Distinguishes feature-list-dump (every feature in a row, no decision support) from hidden-recommendation (biased comparison pretending to be neutral) from honest-comparison-with-guidance (genuine comparison plus opinionated recommendation)
80. product-configurator-design - Designing interactive product configurators. Distinguishes infinite-options (decision paralysis from too many options) from canned-bundles-only (no real customization) from guided-configuration (smart defaults plus meaningful constraints plus escape hatches)

Marketing (3)

Paid media discipline: strategy, creative, and performance analytics. Pairs with the paid media platforms in the /integrations catalog at rampstack.co.

81. paid-media-strategy - Hypothesis to spend: channel selection, budget allocation, audience targeting, bid strategy, attribution reality, and the failure modes that burn agency-scale budgets
82. ads-creative-development - Hook patterns, format selection, video pacing, variation systems, testing methodology, fatigue detection, and the platform-specific creative norms that separate ads from clutter
83. ads-performance-analytics - Read paid media dashboards without fooling yourself: attribution models, platform reporting quirks, ROAS vs LTV, multi-platform reconciliation, incrementality testing, and the interpretation failures that compound into wasted budget

Research (5)

84. ux-research - Research planning, user interviews, qualitative synthesis
85. usability-testing - Test design, moderation, findings reports
86. journey-mapping - Customer journey maps, service blueprints, friction analysis
87. discovery-research-synthesis - Synthesizing customer interviews, research notes, and support tickets into actionable PM decisions. Distinguishes data-dump (no synthesis) from insight-theater (overpolished narrative) from actionable synthesis (decision-grade clarity)
88. user-feedback-aggregation - Collecting and synthesizing user feedback across channels into continuous decision signal. Triage discipline that distinguishes loudest-voice (whoever complains most) from averaged-noise (every signal weighted equally) from triaged-synthesis (weighted by source quality and decision relevance)

Cross-cutting workflows (5)

89. form-strategy - Form design, validation patterns, spam prevention, conversion tuning
90. content-migration - Platform migrations with SEO equity preservation
91. internationalization - Locale strategy, hreflang, translation workflow, RTL design
92. dependency-management - Package updates, security patches, lockfile hygiene
93. cost-optimization - Infrastructure spend audits, rightsizing, contract negotiation

Process and team (5)

94. stakeholder-communication - Status updates, exec readouts, project communications
95. documentation-strategy - Documentation systems, what to document, maintenance cadence
96. vendor-evaluation - Tool and vendor selection using a structured rubric
97. team-onboarding-playbook - 30-60-90 onboarding plans for new hires and contractors
98. skill-creation-walkthrough - The meta-skill: how to write your own custom skills

Recommended MCPs

Skills compose best when Claude has live access to your data and tools. Model Context Protocol (MCP) servers provide that bridge. The skills in this library work without any MCPs, but pair them with the right ones and they go from "frameworks Claude follows" to "workflows Claude executes against your real systems."

Below is the MCP shortlist by skill area. None of these are required (except the Ahrefs MCP for the SEO audit suite). All are categorical recommendations: where multiple options exist for the same job, pick the one that fits your stack.

SEO, competitive intelligence, and search data

The SEO audit suite (skills 33-39) is built around Ahrefs as its primary backend; the foundation SEO skills (26-32) work with any equivalent data source. Competitive intelligence MCPs (Ahrefs, Semrush, Similarweb) cover overlapping but distinct data shapes: backlinks and keywords, traffic estimation, audience behavior. Use them in combination for the strongest signal.

A note on MCP costs: many of these MCPs are wrappers around APIs you are already paying for through a subscription, so MCP calls add no marginal cost. Others (Ahrefs, Semrush, Similarweb, DataForSEO) charge paid API credits per call, and long agentic sessions against these platforms can burn through meaningful credit volume quickly. The cost model is documented on each integration's landing page at rampstack.co/integrations. Free-with-rate-limits is called out where it applies (Google Search Console, PageSpeed Insights). When in doubt, check the platform's API pricing before running multi-hour agent workflows.

Backlink and keyword data

  • Ahrefs MCP - primary backend for the audit suite; backlink profiles, keyword data, content explorer, site audit. Referenced explicitly by seo-audit-orchestration and the 6 audit suite skills (backlink, keyword gap, content gap, traffic, site health, rank tracking). Credits-per-call.
  • Semrush MCP - alternative or complement to Ahrefs, with stronger US keyword data and SEO-PR features (Topic Research, brand monitoring) that Ahrefs does not cover. Pairs with seo-keyword, seo-competitor, seo-content-gap-audit. Semrush has shipped first-party MCP tooling; verify the official endpoint before wiring it in. Credits-per-call.
  • DataForSEO MCP - programmatic SEO data (SERP, keywords, backlinks) at developer-friendly pricing; useful as a third source for cross-validation when methodology decisions hinge on data agreement. Credits-per-call (free tier available).

Traffic estimation and competitive intelligence

  • Similarweb MCP - competitive traffic estimation, audience demographics, channel mix (organic, paid, direct, referral, social, email), industry benchmarks, audience overlap analysis. Pairs with seo-competitor, seo-traffic-diagnosis (external-factor layer), brand-discovery (competitive scan), analytics-strategy (industry benchmarks). Where Ahrefs answers "how do they rank" and Semrush answers "what keywords drive what," Similarweb answers "how much traffic, from where, from whom." Credits-per-call.

Search Console and Core Web Vitals

  • Google Search Console MCP - free, official Google data; essential for seo-traffic-diagnosis and any audit that needs ground-truth click and impression data. Free with rate limits.
  • PageSpeed Insights MCP - free, paired with performance-optimization and seo-site-health-audit for Core Web Vitals field data. Free with rate limits.

Development and code

  • GitHub MCP - paired with code-review-web, pm-spec-writing, roadmap-planning, incident-response. Lets Claude read PRs, file issues, search code, and reference real commits.
  • Filesystem MCP - local file and code operations; pairs with most dev and content skills
  • Sentry MCP - paired with monitoring-and-alerting and incident-response. Real error data turns generic incident frameworks into specific diagnoses.

Hosting and infrastructure

  • Cloudflare MCP - paired with domain-strategy, security-baseline, performance-optimization. DNS records, redirects, page rules, security headers.
  • Vercel MCP - paired with launch-runbook and incident-response. Deployments, env vars, build logs.
  • Supabase MCP - paired with code-review-web, pm-spec-writing, backup-and-disaster-recovery. Schema, queries, edge functions.

Analytics and monitoring

  • PostHog MCP - paired with analytics-strategy, cro-optimization, journey-mapping. Event taxonomy review and funnel analysis grounded in real data.
  • Datadog MCP - paired with monitoring-and-alerting, incident-response. SLO design and alert routing against actual metrics.

Communication and project management

  • Slack MCP - paired with incident-response, stakeholder-communication, after-action-report. Read channel context, draft updates, post incident comms.
  • Linear MCP (or Jira MCP) - paired with pm-spec-writing, roadmap-planning. Spec writing against the actual issue tracker, not a generic template.

Research and search

  • Web search (built into Claude in most environments) - paired with brand-discovery, seo-keyword, seo-competitor, ux-research
  • Tavily MCP or Brave Search MCP - alternatives for deeper research workflows

Where to find them

  • modelcontextprotocol.io/servers - the canonical directory of MCP servers
  • The Connectors directory inside Claude.ai (Settings → Connectors)
  • claude mcp add in Claude Code for direct installation (see the config sketch after this list)
  • Vendor websites for first-party servers (most major SaaS tools now ship official MCPs)
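
For project-level setup in Claude Code, a checked-in .mcp.json file declares which servers a project expects. A minimal sketch, assuming the community GitHub MCP server's npm package name and its standard stdio configuration; verify the package and environment variable names against that server's own documentation:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your token>" }
    }
  }
}
```

claude mcp add can write an equivalent entry for you; its scope flag controls whether the server is registered for the project or for your user account.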

Building your own MCP

If a skill in this library would benefit from a tool integration that does not yet exist, the MCP documentation walks through building one. The seo-audit-orchestration skill is a worked example of how to design a skill suite around a specific MCP's capabilities.
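
As a starting point, here is a minimal sketch of a custom server using the official MCP Python SDK's FastMCP helper. The server name, tool, and logic are illustrative only, not part of this catalog; check the MCP documentation for the current API.

```python
# Minimal custom MCP server sketch (illustrative; assumes the official MCP Python SDK).
from mcp.server.fastmcp import FastMCP
import urllib.request

mcp = FastMCP("redirect-checker")  # hypothetical helper for domain-strategy work

@mcp.tool()
def final_url(url: str) -> str:
    """Follow redirects and return the final resolved URL for a given page."""
    with urllib.request.urlopen(url) as response:
        return response.geturl()

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which suits local Claude Code use
```

Register a server like this once and the skills that need its data (here, hypothetically, domain-strategy or seo-technical) can call the tool during a session instead of asking you to paste results in.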


Authoring conventions

Every skill follows the same structure. See SKILL_AUTHORING.md for the full spec.

Highlights:

  • Stack-agnostic. No specific framework versions in SKILL.md. Stack-specific patterns go in reference files. The Ahrefs-powered audit suite is the single named-tool exception.
  • Future-proof. Reference durable specs (W3C, WHATWG, Schema.org, MDN, NN/g, WCAG). Avoid trend pieces.
  • Uniform structure. Every SKILL.md has the same section order: When to use, When NOT to use, Required inputs, The framework, Workflow, Failure patterns, Output format, Reference files. A bare skeleton follows this list.
  • Tight length. SKILL.md under 250 lines. References under 400.
  • Punchy voice. Short sentences. Concrete examples beat abstract advice.
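
A bare skeleton of that structure, for orientation. The frontmatter fields shown are illustrative; SKILL_AUTHORING.md defines the exact set.

```markdown
---
name: example-skill
description: "What the skill does and the trigger phrases that should invoke it."
---

# Example Skill

One-line summary of scope and the durable standard it anchors to.

## When to use
## When NOT to use
## Required inputs
## The framework
## Workflow
## Failure patterns
## Output format
## Reference files
```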

Repository structure

skills/
  skill-name/
    SKILL.md
    references/
      template.md
      checklist.md
      example.md
SKILL_AUTHORING.md          (the authoring guide)
CONTRIBUTING.md             (how to contribute)
MAPPING.md                  (origin notes for skills ported from existing work)
README.md                   (this file)
LICENSE                     (MIT)

Contributing

Contributions are welcome. Whether you want to fix a typo, add a reference file, or propose an entirely new skill, the bar is the same: follow the uniform structure, keep the voice consistent, and prove the skill earns its place.

See CONTRIBUTING.md for the full process.

The fastest path: use the skill-creation-walkthrough skill itself. It teaches the same authoring discipline used across all 98 skills, with worked examples and a blank template.


Resources

Official Anthropic documentation

Other skill libraries worth knowing

Companion concepts


License

MIT. Use it. Fork it. Ship things with it.