USP
Unlike generic AI tools, this collection provides deeply specialized, interconnected marketing skills built on a shared product-marketing-context file. That foundation ensures AI agents apply industry-specific frameworks and best practices across a w…
Use cases
1. Automating marketing copywriting
2. Optimizing conversion rates (CRO)
3. Improving SEO and content strategy
4. Setting up analytics tracking
5. Planning A/B tests and growth experiments
Detected files (9)
skills/aso-audit/SKILL.md (15576 bytes)
---
name: aso-audit
description: "When the user wants to audit or optimize an App Store or Google Play listing. Also use when the user mentions 'ASO audit,' 'app store optimization,' 'optimize my app listing,' 'improve app visibility,' 'app store ranking,' 'audit my listing,' 'why aren't people downloading my app,' 'improve my app conversion,' 'keyword optimization for app,' or 'compare my app to competitors.' Use when the user shares an App Store or Google Play URL and wants to improve it."
metadata:
  version: 1.0.0
---

# ASO Audit

Analyze App Store and Google Play listings against ASO best practices. Fetches live listing data, scores metadata, visuals, and ratings, then produces a prioritized action plan.

## When to Use

- User shares an App Store or Google Play URL
- User asks to audit or optimize an app listing
- User wants to compare their app against competitors
- User asks about app store ranking, visibility, or download conversion

## Before Auditing

**Check for product marketing context first:** If `.agents/product-marketing-context.md` exists (or `.claude/product-marketing-context.md` in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.

## Phase 1 — Identify Store & Fetch

### Detect store type from URL

```
Apple: apps.apple.com/{country}/app/{name}/id{digits}
Google: play.google.com/store/apps/details?id={package}
```

If the user gives an app name instead of a URL, search the web for: `site:apps.apple.com "{app name}"` or `site:play.google.com "{app name}"`

### Fetch the listing

Use WebFetch to retrieve the listing page.
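The URL patterns above can be turned into a small store-detection helper. A minimal sketch — the function name and return shape are illustrative, not part of the skill:

```javascript
// Classify a listing URL as Apple App Store or Google Play (patterns from Phase 1 above).
function detectStore(url) {
  const apple = url.match(/apps\.apple\.com\/([a-z]{2})\/app\/[^/]+\/id(\d+)/);
  if (apple) return { store: 'apple', country: apple[1], appId: apple[2] };

  const google = url.match(/play\.google\.com\/store\/apps\/details\?id=([\w.]+)/);
  if (google) return { store: 'google', packageId: google[1] };

  return null; // not a recognized listing URL
}
```

Anything that matches neither pattern falls through to the web-search fallback described above.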
Extract every available field:

**Apple App Store fields:**

- App name (title) — 30 char limit
- Subtitle — 30 char limit
- Description (long) — not indexed for search, but matters for conversion
- Promotional text — 170 chars, updatable without new release
- Category (primary + secondary)
- Screenshots (count, order, caption text)
- Preview video (presence, duration)
- Rating (average + count)
- Recent reviews (visible ones)
- Price / in-app purchases
- Developer name
- Last updated date
- Version history notes
- Age rating
- Size
- Languages / localizations listed
- In-app events (if any visible)

**Google Play fields:**

- App name (title) — 30 char limit
- Short description — 80 char limit
- Full description — 4,000 char limit, IS indexed for search
- Category + tags
- Feature graphic (presence)
- Screenshots (count, order)
- Preview video (presence)
- Rating (average + count)
- Recent reviews (visible ones)
- Price / in-app purchases
- Developer name
- Last updated date
- What's new text
- Downloads range
- Content rating
- Data safety section
- Languages listed

If WebFetch returns incomplete data (stores render client-side), note gaps and work with what's available. Ask the user to paste missing fields if critical.

### Visual asset assessment

WebFetch cannot extract screenshot images or caption text. **Take a screenshot of the listing page** to get visual data:

1. Navigate to the listing URL and capture a full-page screenshot
2. Assess the screenshot for: icon quality, screenshot count, caption text, messaging quality, preview video presence, feature graphic (Google Play)
3. If browser tools are unavailable, ask the user to share a screenshot of the listing page

**Promotional text (Apple):** This 170-char field appears above the description but is often indistinguishable from it in scraped HTML. If you cannot confirm its presence, note this and recommend the user check App Store Connect.
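The character limits in the field lists above lend themselves to a mechanical check. A hypothetical sketch (the field names and function are assumptions; Apple's hidden keyword field is measured in bytes, not characters, so it is excluded here):

```javascript
// Published per-store character limits, from the field lists above.
const LIMITS = {
  apple:  { title: 30, subtitle: 30, promotionalText: 170 },
  google: { title: 30, shortDescription: 80, fullDescription: 4000 },
};

// Return a human-readable flag for every field that exceeds its limit.
function overLimitFields(store, fields) {
  return Object.entries(fields)
    .filter(([field, text]) => LIMITS[store][field] != null && text.length > LIMITS[store][field])
    .map(([field, text]) => `${field}: ${text.length}/${LIMITS[store][field]} chars`);
}
```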
--- ## Phase 1.5 — Assess Brand Maturity Before scoring, classify the app into one of three tiers. This determines how you interpret "textbook ASO" deviations — a deliberate brand choice by a household name is not the same as a missed opportunity by an unknown app. ### Tier definitions | Tier | Signals | Examples | | --------------- | ------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------- | | **Dominant** | Household name, 1M+ ratings, top-10 in category, near-universal brand recognition. Users search by brand name, not generic keywords. | Instagram, Uber, Spotify, WhatsApp, Netflix | | **Established** | Well-known in their category, 100K+ ratings, strong organic installs, recognized brand but not universally known. | Strava, Notion, Duolingo, Cash App, Calm | | **Challenger** | Building awareness, <100K ratings, needs discovery through keywords and ASO tactics. Most apps fall here. | Your app, most indie/startup apps | ### How tier affects scoring **Dominant apps** get adjusted scoring in these areas: - **Title:** Brand-only or brand-first titles are valid (score 8+ if brand is the keyword). These apps don't need generic keyword discovery. - **Description:** Score purely on conversion quality, not keyword presence. If the app is a household name, a well-crafted brand description beats a keyword-stuffed one. - **Visual Assets:** Lifestyle/brand photography instead of UI demos is a legitimate conversion strategy. No video is acceptable if the product is hard to demo in 30s or brand awareness is near-universal. - **What's New:** Generic release notes at weekly+ cadence are acceptable (score 8+). At scale, detailed changelogs have minimal ROI and risk backlash. - **In-app events:** Missing events for utility apps with massive install bases (Uber, WhatsApp) is not a penalty. These apps don't need discovery help. 
- **Localization:** Score relative to actual market, not absolute count. A US-only fintech with 2 languages (English + Spanish) is appropriately localized. **Established apps** get partial adjustment: - Brand-first titles are fine but should still include 1-2 keywords - Strategic description choices get benefit of the doubt - Other dimensions scored normally **Challenger apps** are scored strictly against textbook ASO best practices — every character, screenshot, and keyword matters. **Key principle:** Before docking points, ask: "Is this a mistake or a deliberate choice by a team that has data I don't?" If the app has 1M+ ratings and a dedicated ASO team, assume their choices are data-informed unless clearly wrong. --- ## Phase 2 — Score Each Dimension Score each dimension 0-10 using the criteria in `references/scoring-criteria.md`. Apply the brand maturity tier adjustments from Phase 1.5. Reference files for platform specs and benchmarks: - `references/apple-specs.md` — Official Apple character limits, screenshot/video specs, CPP/PPO rules, rejection triggers - `references/google-play-specs.md` — Official Google Play limits, screenshot specs, Android Vitals thresholds, policies - `references/benchmarks.md` — Conversion data, rating impact, video lift, screenshot behavior, CPP/event benchmarks ### Dimensions and Weights | # | Dimension | Weight | What It Covers | | --- | -------------------- | ------ | ------------------------------------------------------------------------- | | 1 | Title & Subtitle | 20% | Character usage, keyword presence, clarity, brand + keyword balance | | 2 | Description | 15% | First 3 lines, keyword density (Google), CTA, structure, promotional text | | 3 | Visual Assets | 25% | Screenshot count/quality/messaging, video, icon, feature graphic | | 4 | Ratings & Reviews | 20% | Average rating, volume, recency, developer responses | | 5 | Metadata & Freshness | 10% | Category choice, update recency, localization count, data safety | | 6 | 
Conversion Signals | 10% | Price positioning, IAP transparency, social proof, download range | **Final score** = weighted sum, out of 100. ### Score interpretation | Score | Grade | Meaning | | ------ | ----- | --------------------------------------------------------- | | 85-100 | A | Well-optimized; focus on A/B testing and iteration | | 70-84 | B | Good foundation; clear opportunities to improve | | 50-69 | C | Significant gaps; prioritized fixes will have high impact | | 30-49 | D | Major optimization needed across multiple dimensions | | 0-29 | F | Listing needs a complete overhaul | --- ## Phase 3 — Competitor Comparison (Optional) If the user provides competitor URLs or asks for comparison: 1. Fetch 2-3 top competitors in the same category 2. Run the same scoring on each 3. Build a comparison table highlighting where the user's app is weaker/stronger 4. Identify keyword gaps — terms competitors rank for that the user's app doesn't target If no competitors are specified, suggest the user provide 2-3 or offer to search for top apps in their category. --- ## Phase 4 — Generate Report Use the template in `references/report-template.md` to structure the output. The report must include: 1. **Score card** — table with all 6 dimensions, scores, and grade 2. **Top 3 quick wins** — changes that take <1 hour and have highest impact 3. **Detailed findings** — per-dimension breakdown with specific issues and fixes 4. **Keyword suggestions** — based on title/description analysis and competitor gaps 5. **Visual asset recommendations** — specific screenshot/video improvements 6. 
**Priority action plan** — ordered list of changes by impact vs effort ### Report rules - Every recommendation must be **specific and actionable** ("Change subtitle from X to Y" not "Improve subtitle") - Include character counts for all text recommendations - Flag platform-specific differences (Apple vs Google) when relevant - Note what CANNOT be assessed without paid tools (search volume, exact rankings) - When suggesting keyword changes, explain WHY each keyword matters --- ## Platform-Specific Rules ### Apple App Store — Key Facts - Title (30 chars) + Subtitle (30 chars) + Keyword field (100 **bytes**, hidden) = indexed text - Keywords field is bytes not chars — Arabic/CJK use 2-3 bytes per char - Long description is NOT indexed for search — optimize for conversion only - Promotional text (170 chars) does NOT affect search (Apple confirmed) - Never repeat words across title/subtitle/keyword field (Apple indexes each word once) - Keyword field: commas, no spaces ("photo,editor,filter" not "photo, editor, filter") - Screenshots: up to 10 per device. First 3 visible in search — 90% never scroll past 3rd - Screenshot captions indexed since June 2025 (AI extraction) - In-app events: max 10 published at once, max 31 days each. Indexed and appear in search - Custom Product Pages (up to 70) in organic search since July 2025. +5.9% avg conversion lift - App preview video: up to 3, 15-30s each. 
Autoplays muted — +20-40% conversion lift - SKStoreReviewController: max 3 prompts per 365 days - Apple has human editorial curation — quality and design matter more - See `references/apple-specs.md` for full specs, dimensions, and rejection triggers ### Google Play — Key Facts - Title (30 chars) + Short description (80 chars) + Full description (4,000 chars) = indexed text - Full description IS indexed — target 2-3% keyword density naturally - No hidden keyword field — all keywords must be in visible text - Google NLP/semantic understanding — keyword stuffing detected and penalized - Prohibited in title: emojis, ALL CAPS, "best"/"#1"/"free", CTAs (enforced since 2021) - Screenshots: min 2, **max 8** per device (not 10 like Apple) - Feature graphic (1024x500, exact) required for featured placements - Video does NOT autoplay — only ~6% of users tap play (low ROI vs iOS) - Android Vitals directly affect ranking: crash >1.09% or ANR >0.47% = reduced visibility - Promotional Content: submit 14 days early for featuring. Apps see 2x explore acquisitions - Custom Store Listings: up to 50 (can target churned users, specific countries, ad campaigns) - Store Listing Experiments: test up to 3 variants, run 7+ days, 1 experiment at a time - See `references/google-play-specs.md` for full specs and policy details ### What Apple Indexes vs What Google Indexes | Field | Apple Indexed? | Google Indexed? | | --------------------- | ---------------- | ---------------------- | | Title | Yes | Yes (strongest signal) | | Subtitle / Short desc | Yes | Yes | | Keyword field | Yes (hidden) | Does not exist | | Long description | No | Yes (heavily) | | Screenshot captions | Yes (since 2025) | No | | In-app events | Yes | N/A (LiveOps instead) | | Developer name | No | Partial | | IAP names | Yes | Yes | --- ## Common Issues Checklist Flag these if found. 
Items marked _(tier-dependent)_ should be evaluated against the app's brand maturity tier — they may be deliberate choices for Dominant apps. **Always flag (all tiers):** - [ ] Rating below 4.0 - [ ] Last update > 3 months ago - [ ] Google Play description has no keyword strategy (under 1% density) - [ ] Google Play missing feature graphic - [ ] Apple keyword field likely has repeated words (inferred from title+subtitle) - [ ] Category mismatch — app would face less competition in a different category - [ ] Fewer than 5 screenshots **Flag for Challenger/Established only** _(not mistakes for Dominant apps):_ - [ ] Title wastes characters on brand name only (no keywords) _(Dominant: brand IS the keyword)_ - [ ] Subtitle/short description duplicates title keywords - [ ] Description first 3 lines are generic _(Dominant: may be brand-voice choice)_ - [ ] No preview video _(Dominant: may be rational if product is hard to demo)_ - [ ] Screenshots are just UI dumps with no messaging/captions _(Dominant: lifestyle/brand shots may convert better)_ - [ ] Only 1-2 localizations _(score relative to actual market, not absolute count)_ - [ ] No in-app events or promotional content _(Dominant utility apps may not need discovery help)_ **Flag for all tiers but note context:** - [ ] No developer responses to negative reviews _(note volume — responding at 10M+ reviews is a different challenge than at 1K)_ - [ ] Generic "What's New" text _(acceptable at weekly+ release cadence for Established/Dominant)_ --- ## Task-Specific Questions 1. What is the App Store or Google Play URL? 2. Is this your app or a competitor's? 3. What category does the app compete in? 4. Do you have competitor URLs to compare against? 5. Are you focused on search visibility, conversion rate, or both? 6. Do you have access to App Store Connect or Google Play Console data? 
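One rule from the platform facts above is easy to get wrong: Apple's keyword field is capped at 100 *bytes*, not characters. A quick sketch, assuming UTF-8 is the encoding being counted (consistent with the 2-3-bytes-per-character note above):

```javascript
// Byte length of a keyword string — CJK/Arabic characters cost 2-3 bytes each in UTF-8,
// so localized keyword lists fill the 100-byte budget much faster than ASCII ones.
const keywordFieldBytes = (keywords) => new TextEncoder().encode(keywords).length;

keywordFieldBytes('photo,editor,filter'); // 19 characters -> 19 bytes
keywordFieldBytes('写真,編集');            // 5 characters  -> 13 bytes
```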
---

## Related Skills

- **page-cro**: For optimizing the conversion of web-based landing pages that drive app installs
- **ad-creative**: For creating App Store and Google Play ad creatives
- **analytics-tracking**: For setting up install attribution and in-app event tracking
- **customer-research**: For understanding user needs and language to inform listing copy

skills/ab-test-setup/SKILL.md (11153 bytes)
---
name: ab-test-setup
description: When the user wants to plan, design, or implement an A/B test or experiment, or build a growth experimentation program. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," "hypothesis," "should I test this," "which version is better," "test two versions," "statistical significance," "how long should I run this test," "growth experiments," "experiment velocity," "experiment backlog," "ICE score," "experimentation program," or "experiment playbook." Use this whenever someone is comparing two approaches and wants to measure which performs better, or when they want to build a systematic experimentation practice. For tracking implementation, see analytics-tracking. For page-level conversion optimization, see page-cro.
metadata:
  version: 1.2.0
---

# A/B Test Setup

You are an expert in experimentation and A/B testing. Your goal is to help design tests that produce statistically valid, actionable results.

## Initial Assessment

**Check for product marketing context first:** If `.agents/product-marketing-context.md` exists (or `.claude/product-marketing-context.md` in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.

Before designing a test, understand:

1. **Test Context** - What are you trying to improve? What change are you considering?
2. **Current State** - Baseline conversion rate? Current traffic volume?
3. **Constraints** - Technical complexity? Timeline? Tools available?

---

## Core Principles

### 1. Start with a Hypothesis

- Not just "let's see what happens"
- Specific prediction of outcome
- Based on reasoning or data

### 2. Test One Thing

- Single variable per test
- Otherwise you don't know what worked

### 3. Statistical Rigor

- Pre-determine sample size
- Don't peek and stop early
- Commit to the methodology

### 4.
Measure What Matters - Primary metric tied to business value - Secondary metrics for context - Guardrail metrics to prevent harm --- ## Hypothesis Framework ### Structure ``` Because [observation/data], we believe [change] will cause [expected outcome] for [audience]. We'll know this is true when [metrics]. ``` ### Example **Weak**: "Changing the button color might increase clicks." **Strong**: "Because users report difficulty finding the CTA (per heatmaps and feedback), we believe making the button larger and using contrasting color will increase CTA clicks by 15%+ for new visitors. We'll measure click-through rate from page view to signup start." --- ## Test Types | Type | Description | Traffic Needed | |------|-------------|----------------| | A/B | Two versions, single change | Moderate | | A/B/n | Multiple variants | Higher | | MVT | Multiple changes in combinations | Very high | | Split URL | Different URLs for variants | Moderate | --- ## Sample Size ### Quick Reference | Baseline | 10% Lift | 20% Lift | 50% Lift | |----------|----------|----------|----------| | 1% | 150k/variant | 39k/variant | 6k/variant | | 3% | 47k/variant | 12k/variant | 2k/variant | | 5% | 27k/variant | 7k/variant | 1.2k/variant | | 10% | 12k/variant | 3k/variant | 550/variant | **Calculators:** - [Evan Miller's](https://www.evanmiller.org/ab-testing/sample-size.html) - [Optimizely's](https://www.optimizely.com/sample-size-calculator/) **For detailed sample size tables and duration calculations**: See [references/sample-size-guide.md](references/sample-size-guide.md) --- ## Metrics Selection ### Primary Metric - Single metric that matters most - Directly tied to hypothesis - What you'll use to call the test ### Secondary Metrics - Support primary metric interpretation - Explain why/how the change worked ### Guardrail Metrics - Things that shouldn't get worse - Stop test if significantly negative ### Example: Pricing Page Test - **Primary**: Plan selection rate - **Secondary**: Time on 
page, plan distribution - **Guardrail**: Support tickets, refund rate --- ## Designing Variants ### What to Vary | Category | Examples | |----------|----------| | Headlines/Copy | Message angle, value prop, specificity, tone | | Visual Design | Layout, color, images, hierarchy | | CTA | Button copy, size, placement, number | | Content | Information included, order, amount, social proof | ### Best Practices - Single, meaningful change - Bold enough to make a difference - True to the hypothesis --- ## Traffic Allocation | Approach | Split | When to Use | |----------|-------|-------------| | Standard | 50/50 | Default for A/B | | Conservative | 90/10, 80/20 | Limit risk of bad variant | | Ramping | Start small, increase | Technical risk mitigation | **Considerations:** - Consistency: Users see same variant on return - Balanced exposure across time of day/week --- ## Implementation ### Client-Side - JavaScript modifies page after load - Quick to implement, can cause flicker - Tools: PostHog, Optimizely, VWO ### Server-Side - Variant determined before render - No flicker, requires dev work - Tools: PostHog, LaunchDarkly, Split --- ## Running the Test ### Pre-Launch Checklist - [ ] Hypothesis documented - [ ] Primary metric defined - [ ] Sample size calculated - [ ] Variants implemented correctly - [ ] Tracking verified - [ ] QA completed on all variants ### During the Test **DO:** - Monitor for technical issues - Check segment quality - Document external factors **Avoid:** - Peek at results and stop early - Make changes to variants - Add traffic from new sources ### The Peeking Problem Looking at results before reaching sample size and stopping early leads to false positives and wrong decisions. Pre-commit to sample size and trust the process. --- ## Analyzing Results ### Statistical Significance - 95% confidence = p-value < 0.05 - Means <5% chance result is random - Not a guarantee—just a threshold ### Analysis Checklist 1. 
**Reach sample size?** If not, result is preliminary 2. **Statistically significant?** Check confidence intervals 3. **Effect size meaningful?** Compare to MDE, project impact 4. **Secondary metrics consistent?** Support the primary? 5. **Guardrail concerns?** Anything get worse? 6. **Segment differences?** Mobile vs. desktop? New vs. returning? ### Interpreting Results | Result | Conclusion | |--------|------------| | Significant winner | Implement variant | | Significant loser | Keep control, learn why | | No significant difference | Need more traffic or bolder test | | Mixed signals | Dig deeper, maybe segment | --- ## Documentation Document every test with: - Hypothesis - Variants (with screenshots) - Results (sample, metrics, significance) - Decision and learnings **For templates**: See [references/test-templates.md](references/test-templates.md) --- ## Growth Experimentation Program Individual tests are valuable. A continuous experimentation program is a compounding asset. This section covers how to run experiments as an ongoing growth engine, not just one-off tests. ### The Experiment Loop ``` 1. Generate hypotheses (from data, research, competitors, customer feedback) 2. Prioritize with ICE scoring 3. Design and run the test 4. Analyze results with statistical rigor 5. Promote winners to a playbook 6. 
Generate new hypotheses from learnings → Repeat ``` ### Hypothesis Generation Feed your experiment backlog from multiple sources: | Source | What to Look For | |--------|-----------------| | Analytics | Drop-off points, low-converting pages, underperforming segments | | Customer research | Pain points, confusion, unmet expectations | | Competitor analysis | Features, messaging, or UX patterns they use that you don't | | Support tickets | Recurring questions or complaints about conversion flows | | Heatmaps/recordings | Where users hesitate, rage-click, or abandon | | Past experiments | "Significant loser" tests often reveal new angles to try | ### ICE Prioritization Score each hypothesis 1-10 on three dimensions: | Dimension | Question | |-----------|----------| | **Impact** | If this works, how much will it move the primary metric? | | **Confidence** | How sure are we this will work? (Based on data, not gut.) | | **Ease** | How fast and cheap can we ship and measure this? | **ICE Score** = (Impact + Confidence + Ease) / 3 Run highest-scoring experiments first. Re-score monthly as context changes. 
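The ICE formula above is simple enough to script for backlog triage. A minimal sketch — the backlog entries are made-up examples:

```javascript
// ICE score = (Impact + Confidence + Ease) / 3, each dimension scored 1-10.
const iceScore = ({ impact, confidence, ease }) => (impact + confidence + ease) / 3;

// Sort a copy of the backlog so the highest-scoring experiment runs first.
const prioritize = (backlog) => [...backlog].sort((a, b) => iceScore(b) - iceScore(a));

const backlog = [
  { name: 'Social proof near pricing CTA', impact: 8, confidence: 6, ease: 9 }, // ICE ≈ 7.67
  { name: 'Rewrite hero headline',         impact: 7, confidence: 5, ease: 8 }, // ICE ≈ 6.67
  { name: 'New onboarding flow',           impact: 9, confidence: 4, ease: 2 }, // ICE = 5.0
];

prioritize(backlog).map((h) => h.name); // highest ICE first
```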
### Experiment Velocity Track your experimentation rate as a leading indicator of growth: | Metric | Target | |--------|--------| | Experiments launched per month | 4-8 for most teams | | Win rate | 20-30% is common for mature programs (sustained higher rates may indicate conservative hypotheses) | | Average test duration | 2-4 weeks | | Backlog depth | 20+ hypotheses queued | | Cumulative lift | Compound gains from all winners | ### The Experiment Playbook When a test wins, don't just implement it — document the pattern: ``` ## [Experiment Name] **Date**: [date] **Hypothesis**: [the hypothesis] **Sample size**: [n per variant] **Result**: [winner/loser/inconclusive] — [primary metric] changed by [X%] (95% CI: [range], p=[value]) **Guardrails**: [any guardrail metrics and their outcomes] **Segment deltas**: [notable differences by device, segment, or cohort] **Why it worked/failed**: [analysis] **Pattern**: [the reusable insight — e.g., "social proof near pricing CTAs increases plan selection"] **Apply to**: [other pages/flows where this pattern might work] **Status**: [implemented / parked / needs follow-up test] ``` Over time, your playbook becomes a library of proven growth patterns specific to your product and audience. ### Experiment Cadence **Weekly (30 min)**: Review running experiments for technical issues and guardrail metrics. Don't call winners early — but do stop tests where guardrails are significantly negative. **Bi-weekly**: Conclude completed experiments. Analyze results, update playbook, launch next experiment from backlog. **Monthly (1 hour)**: Review experiment velocity, win rate, cumulative lift. Replenish hypothesis backlog. Re-prioritize with ICE. **Quarterly**: Audit the playbook. Which patterns have been applied broadly? Which winning patterns haven't been scaled yet? What areas of the funnel are under-tested? 
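For reference, the sample-size quick-reference table earlier in this skill corresponds roughly to the standard two-proportion power approximation. A sketch, assuming the common defaults of 95% confidence and 80% power; this is an approximation and may differ slightly from the linked calculators:

```javascript
// n per variant ≈ (z_alpha + z_beta)^2 * (p1(1-p1) + p2(1-p2)) / (p2 - p1)^2
// Defaults: 95% confidence (two-sided) -> z = 1.96, 80% power -> z = 0.84.
function sampleSizePerVariant(baseline, relativeLift, zAlpha = 1.96, zBeta = 0.84) {
  const p1 = baseline;
  const p2 = baseline * (1 + relativeLift);
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

sampleSizePerVariant(0.05, 0.20); // ≈ 8,100 per variant for a 5% baseline, 20% relative lift
```

The takeaway the table encodes: halving the detectable lift roughly quadruples the required sample, which is why small changes on low-traffic pages rarely reach significance.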
---

## Common Mistakes

### Test Design

- Testing too small a change (undetectable)
- Testing too many things (can't isolate)
- No clear hypothesis

### Execution

- Stopping early
- Changing things mid-test
- Not checking implementation

### Analysis

- Ignoring confidence intervals
- Cherry-picking segments
- Over-interpreting inconclusive results

---

## Task-Specific Questions

1. What's your current conversion rate?
2. How much traffic does this page get?
3. What change are you considering and why?
4. What's the smallest improvement worth detecting?
5. What tools do you have for testing?
6. Have you tested this area before?

---

## Related Skills

- **page-cro**: For generating test ideas based on CRO principles
- **analytics-tracking**: For setting up test measurement
- **copywriting**: For creating variant copy

skills/analytics-tracking/SKILL.md (8341 bytes)
---
name: analytics-tracking
description: When the user wants to set up, improve, or audit analytics tracking and measurement. Also use when the user mentions "set up tracking," "GA4," "Google Analytics," "conversion tracking," "event tracking," "UTM parameters," "tag manager," "GTM," "analytics implementation," "tracking plan," "how do I measure this," "track conversions," "attribution," "Mixpanel," "Segment," "are my events firing," or "analytics isn't working." Use this whenever someone asks how to know if something is working or wants to measure marketing results. For A/B test measurement, see ab-test-setup.
metadata:
  version: 1.1.0
---

# Analytics Tracking

You are an expert in analytics implementation and measurement. Your goal is to help set up tracking that provides actionable insights for marketing and product decisions.

## Initial Assessment

**Check for product marketing context first:** If `.agents/product-marketing-context.md` exists (or `.claude/product-marketing-context.md` in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.

Before implementing tracking, understand:

1. **Business Context** - What decisions will this data inform? What are key conversions?
2. **Current State** - What tracking exists? What tools are in use?
3. **Technical Context** - What's the tech stack? Any privacy/compliance requirements?

---

## Core Principles

### 1. Track for Decisions, Not Data

- Every event should inform a decision
- Avoid vanity metrics
- Quality > quantity of events

### 2. Start with the Questions

- What do you need to know?
- What actions will you take based on this data?
- Work backwards to what you need to track

### 3. Name Things Consistently

- Naming conventions matter
- Establish patterns before implementing
- Document everything

### 4.
Maintain Data Quality - Validate implementation - Monitor for issues - Clean data > more data --- ## Tracking Plan Framework ### Structure ``` Event Name | Category | Properties | Trigger | Notes ---------- | -------- | ---------- | ------- | ----- ``` ### Event Types | Type | Examples | |------|----------| | Pageviews | Automatic, enhanced with metadata | | User Actions | Button clicks, form submissions, feature usage | | System Events | Signup completed, purchase, subscription changed | | Custom Conversions | Goal completions, funnel stages | **For comprehensive event lists**: See [references/event-library.md](references/event-library.md) --- ## Event Naming Conventions ### Recommended Format: Object-Action ``` signup_completed button_clicked form_submitted article_read checkout_payment_completed ``` ### Best Practices - Lowercase with underscores - Be specific: `cta_hero_clicked` vs. `button_clicked` - Include context in properties, not event name - Avoid spaces and special characters - Document decisions --- ## Essential Events ### Marketing Site | Event | Properties | |-------|------------| | cta_clicked | button_text, location | | form_submitted | form_type | | signup_completed | method, source | | demo_requested | - | ### Product/App | Event | Properties | |-------|------------| | onboarding_step_completed | step_number, step_name | | feature_used | feature_name | | purchase_completed | plan, value | | subscription_cancelled | reason | **For full event library by business type**: See [references/event-library.md](references/event-library.md) --- ## Event Properties ### Standard Properties | Category | Properties | |----------|------------| | Page | page_title, page_location, page_referrer | | User | user_id, user_type, account_id, plan_type | | Campaign | source, medium, campaign, content, term | | Product | product_id, product_name, category, price | ### Best Practices - Use consistent property names - Include relevant context - Don't duplicate automatic 
properties
- Avoid PII in properties

---

## GA4 Implementation

### Quick Setup

1. Create GA4 property and data stream
2. Install gtag.js or GTM
3. Enable enhanced measurement
4. Configure custom events
5. Mark conversions in Admin

### Custom Event Example

```javascript
gtag('event', 'signup_completed', {
  'method': 'email',
  'plan': 'free'
});
```

**For detailed GA4 implementation**: See [references/ga4-implementation.md](references/ga4-implementation.md)

---

## Google Tag Manager

### Container Structure

| Component | Purpose |
|-----------|---------|
| Tags | Code that executes (GA4, pixels) |
| Triggers | When tags fire (page view, click) |
| Variables | Dynamic values (click text, data layer) |

### Data Layer Pattern

```javascript
dataLayer.push({
  'event': 'form_submitted',
  'form_name': 'contact',
  'form_location': 'footer'
});
```

**For detailed GTM implementation**: See [references/gtm-implementation.md](references/gtm-implementation.md)

---

## UTM Parameter Strategy

### Standard Parameters

| Parameter | Purpose | Example |
|-----------|---------|---------|
| utm_source | Traffic source | google, newsletter |
| utm_medium | Marketing medium | cpc, email, social |
| utm_campaign | Campaign name | spring_sale |
| utm_content | Differentiate versions | hero_cta |
| utm_term | Paid search keywords | running+shoes |

### Naming Conventions

- Lowercase everything
- Use underscores or hyphens consistently
- Be specific but concise: `blog_footer_cta`, not `cta1`
- Document all UTMs in a spreadsheet

---

## Debugging and Validation

### Testing Tools

| Tool | Use For |
|------|---------|
| GA4 DebugView | Real-time event monitoring |
| GTM Preview Mode | Test triggers before publish |
| Browser Extensions | Tag Assistant, dataLayer Inspector |

### Validation Checklist

- [ ] Events firing on correct triggers
- [ ] Property values populating correctly
- [ ] No duplicate events
- [ ] Works across browsers and mobile
- [ ] Conversions recorded correctly
- [ ] No PII leaking

### Common Issues

| Issue | Check |
|-------|-------|
| Events not firing | Trigger config, GTM loaded |
| Wrong values | Variable path, data layer structure |
| Duplicate events | Multiple containers, trigger firing twice |

---

## Privacy and Compliance

### Considerations

- Cookie consent required in EU/UK/CA
- No PII in analytics properties
- Data retention settings
- User deletion capabilities

### Implementation

- Use consent mode (wait for consent)
- IP anonymization
- Only collect what you need
- Integrate with consent management platform

---

## Output Format

### Tracking Plan Document

```markdown
# [Site/Product] Tracking Plan

## Overview
- Tools: GA4, GTM
- Last updated: [Date]

## Events
| Event Name | Description | Properties | Trigger |
|------------|-------------|------------|---------|
| signup_completed | User completes signup | method, plan | Success page |

## Custom Dimensions
| Name | Scope | Parameter |
|------|-------|-----------|
| user_type | User | user_type |

## Conversions
| Conversion | Event | Counting |
|------------|-------|----------|
| Signup | signup_completed | Once per session |
```

---

## Task-Specific Questions

1. What tools are you using (GA4, Mixpanel, etc.)?
2. What key actions do you want to track?
3. What decisions will this data inform?
4. Who implements - dev team or marketing?
5. Are there privacy/consent requirements?
6. What's already tracked?

---

## Tool Integrations

For implementation, see the [tools registry](../../tools/REGISTRY.md).
Key analytics tools:

| Tool | Best For | MCP | Guide |
|------|----------|:---:|-------|
| **GA4** | Web analytics, Google ecosystem | ✓ | [ga4.md](../../tools/integrations/ga4.md) |
| **Mixpanel** | Product analytics, event tracking | - | [mixpanel.md](../../tools/integrations/mixpanel.md) |
| **Amplitude** | Product analytics, cohort analysis | - | [amplitude.md](../../tools/integrations/amplitude.md) |
| **PostHog** | Open-source analytics, session replay | - | [posthog.md](../../tools/integrations/posthog.md) |
| **Segment** | Customer data platform, routing | - | [segment.md](../../tools/integrations/segment.md) |

---

## Related Skills

- **ab-test-setup**: For experiment tracking
- **seo-audit**: For organic traffic analysis
- **page-cro**: For conversion optimization (uses this data)
- **revops**: For pipeline metrics, CRM tracking, and revenue attribution

---

**skills/churn-prevention/SKILL.md**
---
name: churn-prevention
description: "When the user wants to reduce churn, build cancellation flows, set up save offers, recover failed payments, or implement retention strategies. Also use when the user mentions 'churn,' 'cancel flow,' 'offboarding,' 'save offer,' 'dunning,' 'failed payment recovery,' 'win-back,' 'retention,' 'exit survey,' 'pause subscription,' 'involuntary churn,' 'people keep canceling,' 'churn rate is too high,' 'how do I keep users,' or 'customers are leaving.' Use this whenever someone is losing subscribers or wants to build systems to prevent it. For post-cancel win-back email sequences, see email-sequence. For in-app upgrade paywalls, see paywall-upgrade-cro."
metadata:
  version: 1.1.0
---

# Churn Prevention

You are an expert in SaaS retention and churn prevention. Your goal is to help reduce both voluntary churn (customers choosing to cancel) and involuntary churn (failed payments) through well-designed cancel flows, dynamic save offers, proactive retention, and dunning strategies.

## Before Starting

**Check for product marketing context first:** If `.agents/product-marketing-context.md` exists (or `.claude/product-marketing-context.md` in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.

Gather this context (ask if not provided):

### 1. Current Churn Situation

- What's your monthly churn rate? (Voluntary vs. involuntary if known)
- How many active subscribers?
- What's the average MRR per customer?
- Do you have a cancel flow today, or does cancel happen instantly?

### 2. Billing & Platform

- What billing provider? (Stripe, Chargebee, Paddle, Recurly, Braintree)
- Monthly, annual, or both billing intervals?
- Do you support plan pausing or downgrades?
- Any existing retention tooling? (Churnkey, ProsperStack, Raaft)

### 3. Product & Usage Data

- Do you track feature usage per user?
- Can you identify engagement drop-offs?
- Do you have cancellation reason data from past churns?
- What's your activation metric? (What do retained users do that churned users don't?)

### 4. Constraints

- B2B or B2C? (Affects flow design)
- Self-serve cancellation required? (Some regulations mandate easy cancel)
- Brand tone for offboarding? (Empathetic, direct, playful)

---

## How This Skill Works

Churn has two types requiring different strategies:

| Type | Cause | Solution |
|------|-------|----------|
| **Voluntary** | Customer chooses to cancel | Cancel flows, save offers, exit surveys |
| **Involuntary** | Payment fails | Dunning emails, smart retries, card updaters |

Voluntary churn is typically 50-70% of total churn. Involuntary churn is 30-50% but is often easier to fix.

This skill supports three modes:

1. **Build a cancel flow** — Design from scratch with survey, save offers, and confirmation
2. **Optimize an existing flow** — Analyze cancel data and improve save rates
3. **Set up dunning** — Failed payment recovery with retries and email sequences

---

## Cancel Flow Design

### The Cancel Flow Structure

Every cancel flow follows this sequence:

```
Trigger → Survey → Dynamic Offer → Confirmation → Post-Cancel
```

**Step 1: Trigger**
Customer clicks "Cancel subscription" in account settings.

**Step 2: Exit Survey**
Ask why they're cancelling. This determines which save offer to show.

**Step 3: Dynamic Save Offer**
Present a targeted offer based on their reason (discount, pause, downgrade, etc.)

**Step 4: Confirmation**
If they still want to cancel, confirm clearly with end-of-billing-period messaging.

**Step 5: Post-Cancel**
Set expectations, offer easy reactivation path, trigger win-back sequence.

### Exit Survey Design

The exit survey is the foundation. Good reason categories:

| Reason | What It Tells You |
|--------|-------------------|
| Too expensive | Price sensitivity, may respond to discount or downgrade |
| Not using it enough | Low engagement, may respond to pause or onboarding help |
| Missing a feature | Product gap, show roadmap or workaround |
| Switching to competitor | Competitive pressure, understand what they offer |
| Technical issues / bugs | Product quality, escalate to support |
| Temporary / seasonal need | Usage pattern, offer pause |
| Business closed / changed | Unavoidable, learn and let go gracefully |
| Other | Catch-all, include free text field |

**Survey best practices:**

- 1 question, single-select with optional free text
- 5-8 reason options max (avoid decision fatigue)
- Put most common reasons first (review data quarterly)
- Don't make it feel like a guilt trip
- "Help us improve" framing works better than "Why are you leaving?"

### Dynamic Save Offers

The key insight: **match the offer to the reason.** A discount won't save someone who isn't using the product. A feature roadmap won't save someone who can't afford it.
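This reason-to-offer matching can be sketched as a simple lookup. A minimal sketch, assuming string reason codes come from the exit survey; the reason keys and offer labels here are illustrative, not a prescribed schema:

```python
# Minimal sketch: route an exit-survey reason to (primary, fallback) save offers.
# Reason codes and offer labels are hypothetical, not a prescribed schema.
OFFER_MAP = {
    "too_expensive":    ("discount_25_for_3_months", "downgrade_plan"),
    "not_using_enough": ("pause_1_to_3_months", "onboarding_session"),
    "missing_feature":  ("roadmap_preview", "workaround_guide"),
    "competitor":       ("comparison_plus_discount", "feedback_session"),
    "technical_issues": ("escalate_to_support", "credit_plus_priority_fix"),
    "temporary":        ("pause_subscription", "temporary_downgrade"),
    "business_closed":  (None, None),  # skip the offer; respect the situation
}

def pick_save_offer(reason: str) -> tuple:
    """Return (primary, fallback) save offers for a cancel reason."""
    # "Other" and unknown reasons fall back to collecting free-text feedback.
    return OFFER_MAP.get(reason, ("free_text_feedback", None))
```

A blanket discount would collapse this table to a single row for every reason, which is exactly the mistake reason-based routing avoids.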
**Offer-to-reason mapping:**

| Cancel Reason | Primary Offer | Fallback Offer |
|---------------|---------------|----------------|
| Too expensive | Discount (20-30% for 2-3 months) | Downgrade to lower plan |
| Not using it enough | Pause (1-3 months) | Free onboarding session |
| Missing feature | Roadmap preview + timeline | Workaround guide |
| Switching to competitor | Competitive comparison + discount | Feedback session |
| Technical issues | Escalate to support immediately | Credit + priority fix |
| Temporary / seasonal | Pause subscription | Downgrade temporarily |
| Business closed | Skip offer (respect the situation) | — |

### Save Offer Types

**Discount**
- 20-30% off for 2-3 months is the sweet spot
- Avoid 50%+ discounts (trains customers to cancel for deals)
- Time-limit the offer ("This offer expires when you leave this page")
- Show the dollar amount saved, not just the percentage

**Pause subscription**
- 1-3 month pause maximum (longer pauses rarely reactivate)
- 60-80% of pausers eventually return to active status
- Auto-reactivation with advance notice email
- Keep their data and settings intact

**Plan downgrade**
- Offer a lower tier instead of full cancellation
- Show what they keep vs. what they lose
- Position as "right-size your plan" not "downgrade"
- Easy path back up when ready

**Feature unlock / extension**
- Unlock a premium feature they haven't tried
- Extend trial of a higher tier
- Works best for "not getting enough value" reasons

**Personal outreach**
- For high-value accounts (top 10-20% by MRR)
- Route to customer success for a call
- Personal email from founder for smaller companies

### Cancel Flow UI Patterns

```
┌─────────────────────────────────────┐
│ We're sorry to see you go           │
│                                     │
│ What's the main reason you're       │
│ cancelling?                         │
│                                     │
│ ○ Too expensive                     │
│ ○ Not using it enough               │
│ ○ Missing a feature I need          │
│ ○ Switching to another tool         │
│ ○ Technical issues                  │
│ ○ Temporary / don't need right now  │
│ ○ Other: [____________]             │
│                                     │
│ [Continue]                          │
│ [Never mind, keep my subscription]  │
└─────────────────────────────────────┘
          ↓ (selects "Too expensive")
┌─────────────────────────────────────┐
│ What if we could help?              │
│                                     │
│ We'd love to keep you. Here's a     │
│ special offer:                      │
│                                     │
│ ┌───────────────────────────────┐   │
│ │ 25% off for the next 3 months │   │
│ │ Save $XX/month                │   │
│ │                               │   │
│ │ [Accept Offer]                │   │
│ └───────────────────────────────┘   │
│                                     │
│ Or switch to [Basic Plan] at        │
│ $X/month →                          │
│                                     │
│ [No thanks, continue cancelling]    │
└─────────────────────────────────────┘
```

**UI principles:**

- Keep the "continue cancelling" option visible (no dark patterns)
- One primary offer + one fallback, not a wall of options
- Show specific dollar savings, not abstract percentages
- Use the customer's name and account data when possible
- Mobile-friendly (many cancellations happen on mobile)

For detailed cancel flow patterns by industry and billing provider, see [references/cancel-flow-patterns.md](references/cancel-flow-patterns.md).

---

## Churn Prediction & Proactive Retention

The best save happens before the customer ever clicks "Cancel."
### Risk Signals

Track these leading indicators of churn:

| Signal | Risk Level | Timeframe |
|--------|-----------|-----------|
| Login frequency drops 50%+ | High | 2-4 weeks before cancel |
| Key feature usage stops | High | 1-3 weeks before cancel |
| Support tickets spike then stop | High | 1-2 weeks before cancel |
| Email open rates decline | Medium | 2-6 weeks before cancel |
| Billing page visits increase | High | Days before cancel |
| Team seats removed | High | 1-2 weeks before cancel |
| Data export initiated | Critical | Days before cancel |
| NPS score drops below 6 | Medium | 1-3 months before cancel |

### Health Score Model

Build a simple health score (0-100) from weighted signals:

```
Health Score = (
  Login frequency score × 0.30 +
  Feature usage score   × 0.25 +
  Support sentiment     × 0.15 +
  Billing health        × 0.15 +
  Engagement score      × 0.15
)
```

| Score | Status | Action |
|-------|--------|--------|
| 80-100 | Healthy | Upsell opportunities |
| 60-79 | Needs attention | Proactive check-in |
| 40-59 | At risk | Intervention campaign |
| 0-39 | Critical | Personal outreach |

### Proactive Interventions

**Before they think about cancelling:**

| Trigger | Intervention |
|---------|-------------|
| Usage drop >50% for 2 weeks | "We noticed you haven't used [feature]. Need help?" email |
| Approaching plan limit | Upgrade nudge (not a wall — paywall-upgrade-cro handles this) |
| No login for 14 days | Re-engagement email with recent product updates |
| NPS detractor (0-6) | Personal follow-up within 24 hours |
| Support ticket unresolved >48h | Escalation + proactive status update |
| Annual renewal in 30 days | Value recap email + renewal confirmation |

---

## Involuntary Churn: Payment Recovery

Failed payments cause 30-50% of all churn but are the most recoverable.
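The weighted health score model above can be sketched in a few lines. This is a minimal sketch, assuming each sub-score is already normalized to 0-100 by your analytics pipeline; the score names are illustrative, not a fixed API:

```python
# Sketch of the weighted health score model (sub-scores assumed 0-100).
# Score names are hypothetical; adapt them to your own tracking plan.
WEIGHTS = {
    "login_frequency":   0.30,
    "feature_usage":     0.25,
    "support_sentiment": 0.15,
    "billing_health":    0.15,
    "engagement":        0.15,
}

def health_score(scores: dict) -> float:
    """Combine 0-100 sub-scores into a single weighted health score."""
    # Missing signals count as 0, the most conservative assumption.
    return sum(weight * scores.get(name, 0) for name, weight in WEIGHTS.items())

def health_status(score: float) -> str:
    """Map a score to the action bands from the table above."""
    if score >= 80:
        return "healthy"
    if score >= 60:
        return "needs_attention"
    if score >= 40:
        return "at_risk"
    return "critical"
```

For example, a customer with strong billing health but fading logins and feature usage lands in the "at risk" band, which is the point of the weighting: usage signals dominate the score.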
### The Dunning Stack

```
Pre-dunning → Smart retry → Dunning emails → Grace period → Hard cancel
```

### Pre-Dunning (Prevent Failures)

- **Card expiry alerts**: Email 30, 15, and 7 days before card expires
- **Backup payment method**: Prompt for a second payment method at signup
- **Card updater services**: Visa/Mastercard auto-update programs (reduces hard declines 30-50%)
- **Pre-billing notification**: Email 3-5 days before charge for annual plans

### Smart Retry Logic

Not all failures are the same. Retry strategy by decline type:

| Decline Type | Examples | Retry Strategy |
|-------------|----------|----------------|
| Soft decline (temporary) | Insufficient funds, processor timeout | Retry 3-5 times over 7-10 days |
| Hard decline (permanent) | Card stolen, account closed | Don't retry — ask for new card |
| Authentication required | 3D Secure, SCA | Send customer to update payment |

**Retry timing best practices:**

- Retry 1: 24 hours after failure
- Retry 2: 3 days after failure
- Retry 3: 5 days after failure
- Retry 4: 7 days after failure (with dunning email escalation)
- After 4 retries: Hard cancel with reactivation path

**Smart retry tip:** Retry on the day of the month the payment originally succeeded (if Day 1 worked before, retry on Day 1). Stripe Smart Retries handles this automatically.

### Dunning Email Sequence

| Email | Timing | Tone | Content |
|-------|--------|------|---------|
| 1 | Day 0 (failure) | Friendly alert | "Your payment didn't go through. Update your card." |
| 2 | Day 3 | Helpful reminder | "Quick reminder — update your payment to keep access." |
| 3 | Day 7 | Urgency | "Your account will be paused in 3 days. Update now." |
| 4 | Day 10 | Final warning | "Last chance to keep your account active." |

**Dunning email best practices:**

- Direct link to payment update page (no login required if possible)
- Show what they'll lose (their data, their team's access)
- Don't blame ("your payment failed" not "you failed to pay")
- Include support contact for help
- Plain text performs better than designed emails for dunning

### Recovery Benchmarks

| Metric | Poor | Average | Good |
|--------|------|---------|------|
| Soft decline recovery | <40% | 50-60% | 70%+ |
| Hard decline recovery | <10% | 20-30% | 40%+ |
| Overall payment recovery | <30% | 40-50% | 60%+ |
| Pre-dunning prevention | None | 10-15% | 20-30% |

For the complete dunning playbook with provider-specific setup, see [references/dunning-playbook.md](references/dunning-playbook.md).

---

## Metrics & Measurement

### Key Churn Metrics

| Metric | Formula | Target |
|--------|---------|--------|
| Monthly churn rate | Churned customers / Start-of-month customers | <5% B2C, <2% B2B |
| Revenue churn (net) | (Lost MRR - Expansion MRR) / Start MRR | Negative (net expansion) |
| Cancel flow save rate | Saved / Total cancel sessions | 25-35% |
| Offer acceptance rate | Accepted offers / Shown offers | 15-25% |
| Pause reactivation rate | Reactivated / Total paused | 60-80% |
| Dunning recovery rate | Recovered / Total failed payments | 50-60% |
| Time to cancel | Days from first churn signal to cancel | Track trend |

### Cohort Analysis

Segment churn by:

- **Acquisition channel** — Which channels bring stickier customers?
- **Plan type** — Which plans churn most?
- **Tenure** — When do most cancellations happen? (30, 60, 90 days?)
- **Cancel reason** — Which reasons are growing?
- **Save offer type** — Which offers work best for which segments?
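The formulas in the churn metrics table translate directly into code. A minimal sketch; the function and argument names are illustrative, and the inputs are assumed to come from your billing data:

```python
# Sketch of the churn-metric formulas above (counts and MRR are assumed inputs).
def monthly_churn_rate(churned: int, start_of_month: int) -> float:
    """Churned customers / start-of-month customers."""
    return churned / start_of_month

def net_revenue_churn(lost_mrr: float, expansion_mrr: float, start_mrr: float) -> float:
    """(Lost MRR - Expansion MRR) / Start MRR; negative means net expansion."""
    return (lost_mrr - expansion_mrr) / start_mrr

def save_rate(saved: int, cancel_sessions: int) -> float:
    """Saved customers / total cancel-flow sessions."""
    return saved / cancel_sessions

def dunning_recovery_rate(recovered: int, failed: int) -> float:
    """Recovered payments / total failed payments."""
    return recovered / failed
```

For example, 25 cancellations against 1,000 start-of-month customers is a 2.5% monthly churn rate, and losing $4,000 MRR while adding $5,000 in expansion yields negative net revenue churn, the net-expansion target from the table.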
### Cancel Flow A/B Tests

Test one variable at a time:

| Test | Hypothesis | Metric |
|------|-----------|--------|
| Discount % (20% vs 30%) | Higher discount saves more | Save rate, LTV impact |
| Pause duration (1 vs 3 months) | Longer pause increases return rate | Reactivation rate |
| Survey placement (before vs after offer) | Survey-first personalizes offers | Save rate |
| Offer presentation (modal vs full page) | Full page gets more attention | Save rate |
| Copy tone (empathetic vs direct) | Empathetic reduces friction | Save rate |

**How to run cancel flow experiments:** Use the **ab-test-setup** skill to design statistically rigorous tests. PostHog is a good fit for cancel flow experiments — its feature flags can split users into different flows server-side, and its funnel analytics track each step of the cancel flow (survey → offer → accept/decline → confirm). See the [PostHog integration guide](../../tools/integrations/posthog.md) for setup.

---

## Common Mistakes

- **No cancel flow at all** — Instant cancel leaves money on the table. Even a simple survey + one offer saves 10-15%
- **Making cancellation hard to find** — Hidden cancel buttons breed resentment and bad reviews. Many jurisdictions require easy cancellation (FTC Click-to-Cancel rule)
- **Same offer for every reason** — A blanket discount doesn't address "missing feature" or "not using it"
- **Discounts too deep** — 50%+ discounts train customers to cancel-and-return for deals
- **Ignoring involuntary churn** — Often 30-50% of total churn and the easiest to fix
- **No dunning emails** — Letting payment failures silently cancel accounts
- **Guilt-trip copy** — "Are you sure you want to abandon us?" damages brand trust
- **Not tracking save offer LTV** — A "saved" customer who churns 30 days later wasn't really saved
- **Pausing too long** — Pauses beyond 3 months rarely reactivate. Set limits.
- **No post-cancel path** — Make reactivation easy and trigger win-back emails, because some churned users will want to come back

---

## Tool Integrations

For implementation, see the [tools registry](../../tools/REGISTRY.md).

### Retention Platforms

| Tool | Best For | Key Feature |
|------|----------|-------------|
| **Churnkey** | Full cancel flow + dunning | AI-powered adaptive offers, 34% avg save rate |
| **ProsperStack** | Cancel flows with analytics | Advanced rules engine, Stripe/Chargebee integration |
| **Raaft** | Simple cancel flow builder | Easy setup, good for early-stage |
| **Chargebee Retention** | Chargebee customers | Native integration, was Brightback |

### Billing Providers (Dunning)

| Provider | Smart Retries | Dunning Emails | Card Updater |
|----------|:------------:|:--------------:|:------------:|
| **Stripe** | Built-in (Smart Retries) | Built-in | Automatic |
| **Chargebee** | Built-in | Built-in | Via gateway |
| **Paddle** | Built-in | Built-in | Managed |
| **Recurly** | Built-in | Built-in | Built-in |
| **Braintree** | Manual config | Manual | Via gateway |

### Related CLI Tools

| Tool | Use For |
|------|---------|
| `stripe` | Subscription management, dunning config, payment retries |
| `customer-io` | Dunning email sequences, retention campaigns |
| `posthog` | Cancel flow A/B tests via feature flags, funnel analytics |
| `mixpanel` / `ga4` | Usage tracking, churn signal analysis |
| `segment` | Event routing for health scoring |

---

## Related Skills

- **email-sequence**: For win-back email sequences after cancellation
- **paywall-upgrade-cro**: For in-app upgrade moments and trial expiration
- **pricing-strategy**: For plan structure and annual discount strategy
- **onboarding-cro**: For activation to prevent early churn
- **analytics-tracking**: For setting up churn signal events
- **ab-test-setup**: For testing cancel flow variations with statistical rigor

---

**skills/co-marketing/SKILL.md**
---
name: co-marketing
description: "When the user wants to find co-marketing partners, plan joint campaigns, or brainstorm partnership opportunities. Use when the user says 'co-marketing,' 'partner marketing,' 'joint campaign,' 'who should we partner with,' 'integration marketing,' 'cross-promotion,' 'collaborate with another company,' 'partnership ideas,' or 'co-brand.' For customer referral programs, see referral-program. For launch-specific partnerships, see launch-strategy."
metadata:
  version: 1.0.0
---

You are a co-marketing strategist who helps SaaS companies identify ideal partners and brainstorm high-impact joint campaigns.

## Before Starting

**Check for product marketing context first:** If `.agents/product-marketing-context.md` exists (or `.claude/product-marketing-context.md` in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.

## When to Use This Skill

- Finding potential co-marketing partners
- Brainstorming campaign ideas with a specific partner
- Planning joint launches or promotions
- Evaluating partnership fit
- Structuring co-marketing agreements

---

## Partner Identification Framework

### 1. Audience Overlap Analysis

The best partners share your audience but don't compete for the same budget.

**Ideal partner characteristics:**

- Same buyer persona, different problem solved
- Adjacent in the workflow (before, after, or alongside your tool)
- Similar company stage and customer size
- Complementary, not competitive

**Questions to identify partners:**

- What tools do your customers already use?
- What do they use before/after your product?
- Who else is selling to your ICP?
- Which integrations do customers request most?

### 2. Partner Scoring Criteria

Rate potential partners (1-5) on:

| Criteria | What to Evaluate |
|----------|------------------|
| **Audience fit** | How closely does their audience match your ICP? |
| **Audience size** | Do they have reach worth partnering for? |
| **Brand alignment** | Would you be proud to be associated? |
| **Engagement quality** | Do they have an active, engaged audience? |
| **Reciprocity potential** | Can you offer them equal value? |
| **Ease of execution** | Do they have a partnerships team? History of co-marketing? |

### 3. Where to Find Partners

**Integration ecosystem:**

- Your existing integration partners
- Tools in the same app marketplace category
- Platforms your product plugs into

**Adjacent categories:**

- Tools that solve the problem before yours
- Tools that solve the problem after yours
- Tools used by the same role but different workflow

**Community signals:**

- Who sponsors the same podcasts/newsletters?
- Who exhibits at the same conferences?
- Who's active in the same communities?
- Whose content does your audience share?

**Data sources:**

- Crossbeam or Reveal for account overlap
- Customer surveys ("what else do you use?")
- G2/Capterra category neighbors
- Job postings mentioning your tool + others

---

## Co-Marketing Campaign Types

### Content Partnerships

| Format | Effort | Lead Sharing | Best For |
|--------|--------|--------------|----------|
| **Co-authored blog post** | Low | Shared byline, link exchange | Thought leadership, SEO |
| **Joint ebook/guide** | Medium | Gated, split leads | Lead gen, deeper topic |
| **Research report** | High | Gated, split leads | Authority, PR |
| **Guest newsletter swap** | Low | Each keeps own leads | Audience exposure |
| **Podcast guest exchange** | Low | Each keeps own leads | Relationship building |

### Webinars & Events

| Format | Effort | Best For |
|--------|--------|----------|
| **Joint webinar** | Medium | Lead gen, product education |
| **Virtual summit panel** | Medium | Multi-partner exposure |
| **Co-hosted workshop** | High | Hands-on education, deeper engagement |
| **Conference booth sharing** | Medium | Cost splitting, audience overlap |
| **Joint happy hour/dinner** | Low | Relationship building at events |

### Product & Integration Marketing

| Format | Effort | Best For |
|--------|--------|----------|
| **Integration launch** | Medium | Existing integration partners |
| **Joint case study** | Medium | Shared customers |
| **"Better together" landing page** | Low | Integration discovery |
| **Bundle or discount** | Medium | Conversion boost, cross-sell |
| **In-app cross-promotion** | Medium | User activation |

### Community & Social

| Format | Effort | Best For |
|--------|--------|----------|
| **Social media takeover** | Low | Audience exposure |
| **Joint giveaway/contest** | Low | List building, engagement |
| **Slack/Discord community collab** | Low | Community building |
| **Joint AMA or Twitter Space** | Low | Thought leadership |

---

## Brainstorming Partner Campaigns

When brainstorming with a specific partner, consider:

### 1. Shared Audience Moments

- What trigger events matter to both audiences?
- What seasonal moments align with both products?
- What industry trends affect both customer bases?

### 2. Combined Value Propositions

- What can customers achieve with both tools that they can't with one?
- What workflow does the combination enable?
- What pain point does the integration solve?

### 3. Unique Assets Each Brings

| Your Assets | Their Assets |
|-------------|--------------|
| Your audience size/engagement | Their audience size/engagement |
| Your content expertise | Their content expertise |
| Your product capabilities | Their product capabilities |
| Your brand credibility | Their brand credibility |
| Your customer stories | Their customer stories |

### 4. Campaign Idea Prompts

Ask these to generate ideas:

- "What would we create if we had to launch something in 2 weeks?"
- "What content do both our audiences desperately need?"
- "What would make customers say 'finally, someone did this'?"
- "What exclusive thing could we offer together?"
- "What data do we both have that would make a compelling story?"

---

## Approaching Potential Partners

### Cold Outreach Template

```
Subject: [Your Company] + [Their Company] co-marketing idea

Hey [Name],

I'm [Role] at [Your Company]. We [one-line description].

I noticed we share a lot of the same audience—[specific observation about overlap].

I have an idea for [specific campaign type] that could work well for both of us: [one-sentence pitch].

Would you be open to a quick call to explore?

[Your name]
```

### What to Prepare for the Call

1. **Account overlap data** (if available via Crossbeam/Reveal)
2. **2-3 specific campaign ideas** (not just "let's do something")
3. **Your audience metrics** (list size, traffic, engagement)
4. **Examples of past partnerships** (shows you can execute)
5. **Clear ask** (what you want from them, what you'll provide)

---

## Structuring the Partnership

### Key Questions to Align On

- **Lead ownership**: How are leads split or shared?
- **Promotion commitments**: What will each party do to promote?
- **Asset creation**: Who creates what? Who approves?
- **Timeline**: When does each phase happen?
- **Success metrics**: How will you measure success?
- **Follow-up**: Will you do more together if it works?

### Simple Co-Marketing Agreement Outline

1. **Campaign description**: What you're doing together
2. **Responsibilities**: Who does what
3. **Timeline**: Key dates and deadlines
4. **Lead handling**: How leads are captured, shared, followed up
5. **Promotion**: Minimum commitments from each side
6. **Branding**: Logo usage, approval process
7. **Costs**: Who pays for what (if any)
8. **Metrics sharing**: What data you'll share post-campaign

---

## Measuring Co-Marketing Success

### Quantitative Metrics

- Leads generated (total and per partner)
- Lead quality (MQL/SQL conversion rate)
- Revenue attributed
- Audience growth (new subscribers, followers)
- Content engagement (views, downloads, shares)

### Qualitative Metrics

- Ease of collaboration
- Partner responsiveness
- Audience reception
- Brand lift
- Relationship strengthened for future campaigns

---

## Co-Marketing Checklist

### Partner Identification

- [ ] List tools your customers already use
- [ ] Check Crossbeam/Reveal for account overlap
- [ ] Score top 5 potential partners
- [ ] Research their past co-marketing activities

### Campaign Planning

- [ ] Agree on campaign type and goals
- [ ] Define lead sharing arrangement
- [ ] Assign responsibilities and deadlines
- [ ] Set success metrics

### Execution

- [ ] Create shared assets (landing page, content, etc.)
- [ ] Coordinate promotion schedules
- [ ] Brief both teams on talking points

### Post-Campaign

- [ ] Share metrics with partner
- [ ] Debrief on what worked/didn't
- [ ] Discuss future collaboration opportunities

---

## Task-Specific Questions

1. Are you looking for partners or planning a campaign with a specific partner?
2. What type of co-marketing are you most interested in? (content, events, integrations, community)
3. What's your audience size? (email list, social following, traffic)
4. Do you have existing integration partners?
5. Have you done co-marketing before? What worked/didn't?
6. What's your timeline and budget for co-marketing?

---

## Tool Integrations

For implementation, see the [tools registry](../../tools/REGISTRY.md).
Key tools for co-marketing:

| Tool | Best For | Guide |
|------|----------|-------|
| **Crossbeam** | Account overlap with partners | [crossbeam.md](../../tools/integrations/crossbeam.md) |
| **Introw** | Partner program management, deal registration | [introw.md](../../tools/integrations/introw.md) |
| **PartnerStack** | Partner and affiliate program management | [partnerstack.md](../../tools/integrations/partnerstack.md) |

---

## Related Skills

- **referral-program** — For customer referral and affiliate programs (customers referring customers)
- **launch-strategy** — For product launches with partners; covers co-marketing as a "borrowed channel"
- **content-strategy** — For content planning including co-created content
- **sales-enablement** — For partner-facing collateral and enablement materials

---

**skills/cold-email/SKILL.md**
---
name: cold-email
description: Write B2B cold emails and follow-up sequences that get replies. Use when the user wants to write cold outreach emails, prospecting emails, cold email campaigns, sales development emails, or SDR emails. Also use when the user mentions "cold outreach," "prospecting email," "outbound email," "email to leads," "reach out to prospects," "sales email," "follow-up email sequence," "nobody's replying to my emails," or "how do I write a cold email." Covers subject lines, opening lines, body copy, CTAs, personalization, and multi-touch follow-up sequences. For warm/lifecycle email sequences, see email-sequence. For sales collateral beyond emails, see sales-enablement.
metadata:
  version: 1.1.0
---

# Cold Email Writing

You are an expert cold email writer. Your goal is to write emails that sound like they came from a sharp, thoughtful human — not a sales machine following a template.

## Before Writing

**Check for product marketing context first:** If `.agents/product-marketing-context.md` exists (or `.claude/product-marketing-context.md` in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.

Understand the situation (ask if not provided):

1. **Who are you writing to?** — Role, company, why them specifically
2. **What do you want?** — The outcome (meeting, reply, intro, demo)
3. **What's the value?** — The specific problem you solve for people like them
4. **What's your proof?** — A result, case study, or credibility signal
5. **Any research signals?** — Funding, hiring, LinkedIn posts, company news, tech stack changes

Work with whatever the user gives you. If they have a strong signal and a clear value prop, that's enough to write. Don't block on missing inputs — use what you have and note what would make it stronger.
--- ## Writing Principles ### Write like a peer, not a vendor The email should read like it came from someone who understands their world — not someone trying to sell them something. Use contractions. Read it aloud. If it sounds like marketing copy, rewrite it. ### Every sentence must earn its place Cold email is ruthlessly short. If a sentence doesn't move the reader toward replying, cut it. The best cold emails feel like they could have been shorter, not longer. ### Personalization must connect to the problem If you remove the personalized opening and the email still makes sense, the personalization isn't working. The observation should naturally lead into why you're reaching out. See [personalization.md](references/personalization.md) for the 4-level system and research signals. ### Lead with their world, not yours The reader should see their own situation reflected back. "You/your" should dominate over "I/we." Don't open with who you are or what your company does. ### One ask, low friction Interest-based CTAs ("Worth exploring?" / "Would this be useful?") beat meeting requests. One CTA per email. Make it easy to say yes with a one-line reply. --- ## Voice & Tone **The target voice:** A smart colleague who noticed something relevant and is sharing it. Conversational but not sloppy. Confident but not pushy. **Calibrate to the audience:** - C-suite: ultra-brief, peer-level, understated - Mid-level: more specific value, slightly more detail - Technical: precise, no fluff, respect their intelligence **What it should NOT sound like:** - A template with fields swapped in - A pitch deck compressed into paragraph form - A LinkedIn DM from someone you've never met - An AI-generated email (avoid the telltale patterns: "I hope this email finds you well," "I came across your profile," "leverage," "synergy," "best-in-class") --- ## Structure There's no single right structure. 
Choose a framework that fits the situation, or write freeform if the email flows naturally without one. **Common shapes that work:** - **Observation → Problem → Proof → Ask** — You noticed X, which usually means Y challenge. We helped Z with that. Interested? - **Question → Value → Ask** — Struggling with X? We do Y. Company Z saw [result]. Worth a look? - **Trigger → Insight → Ask** — Congrats on X. That usually creates Y challenge. We've helped similar companies with that. Curious? - **Story → Bridge → Ask** — [Similar company] had [problem]. They [solved it this way]. Relevant to you? For the full catalog of frameworks with examples, see [frameworks.md](references/frameworks.md). --- ## Subject Lines Short, boring, internal-looking. The subject line's only job is to get the email opened — not to sell. - 2-4 words, lowercase, no punctuation tricks - Should look like it came from a colleague ("reply rates," "hiring ops," "Q2 forecast") - No product pitches, no urgency, no emojis, no prospect's first name See [subject-lines.md](references/subject-lines.md) for the full data. --- ## Follow-Up Sequences Each follow-up should add something new — a different angle, fresh proof, a useful resource. "Just checking in" gives the reader no reason to respond. - 3-5 total emails, increasing gaps between them - Each email should stand alone (they may not have read the previous ones) - The breakup email is your last touch — honor it See [follow-up-sequences.md](references/follow-up-sequences.md) for cadence, angle rotation, and breakup email templates. --- ## Quality Check Before presenting, gut-check: - Does it sound like a human wrote it? (Read it aloud) - Would YOU reply to this if you received it? - Does every sentence serve the reader, not the sender? - Is the personalization connected to the problem? - Is there one clear, low-friction ask? 
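Some of these gut-checks can be partially mechanized. A rough lint sketch in Python, where the word threshold and phrase list are illustrative assumptions rather than rules from this skill:

```python
import re

# Rough heuristics only; thresholds and the phrase list are illustrative,
# not part of the skill's guidance. A passing lint is not a good email,
# and a failing one just deserves another read-aloud.

TELLTALE = [
    "i hope this email finds you well",
    "i came across your profile",
    "leverage", "synergy", "best-in-class",
    "just checking in",
]

def lint(email: str) -> list[str]:
    issues = []
    words = email.split()
    if len(words) > 120:  # cold emails are ruthlessly short
        issues.append(f"long: {len(words)} words")
    # The reader's world should dominate: you/your vs. I/we/our/my
    you = len(re.findall(r"\b(you|your)\b", email, re.I))
    me = len(re.findall(r"\b(i|we|our|my)\b", email, re.I))
    if me > you:
        issues.append(f"sender-focused: {me} I/we vs {you} you/your")
    for phrase in TELLTALE:
        if phrase in email.lower():
            issues.append(f"telltale phrase: '{phrase}'")
    return issues

print(lint("I hope this email finds you well. We leverage best-in-class AI."))
```

Treat the output as a prompt to re-read the draft, not as a verdict.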
--- ## What to Avoid - Opening with "I hope this email finds you well" or "My name is X and I work at Y" - Jargon: "synergy," "leverage," "circle back," "best-in-class," "leading provider" - Feature dumps — one proof point beats ten features - HTML, images, or multiple links - Fake "Re:" or "Fwd:" subject lines - Identical templates with only {{FirstName}} swapped - Asking for 30-minute calls in first touch - "Just checking in" follow-ups --- ## Data & Benchmarks The references contain performance data if you need to make informed choices: - [benchmarks.md](references/benchmarks.md) — Reply rates, conversion funnels, expert methods, common mistakes - [personalization.md](references/personalization.md) — 4-level personalization system, research signals - [subject-lines.md](references/subject-lines.md) — Subject line data and optimization - [follow-up-sequences.md](references/follow-up-sequences.md) — Cadence, angles, breakup emails - [frameworks.md](references/frameworks.md) — All copywriting frameworks with examples Use this data to inform your writing — not as a checklist to satisfy. --- ## Related Skills - **copywriting**: For landing pages and web copy - **email-sequence**: For lifecycle/nurture email sequences (not cold outreach) - **social-content**: For LinkedIn and social posts - **product-marketing-context**: For establishing foundational positioning - **revops**: For lead scoring, routing, and pipeline management
skills/ad-creative/SKILL.md
--- name: ad-creative description: "When the user wants to generate, iterate, or scale ad creative — headlines, descriptions, primary text, or full ad variations — for any paid advertising platform. Also use when the user mentions 'ad copy variations,' 'ad creative,' 'generate headlines,' 'RSA headlines,' 'bulk ad copy,' 'ad iterations,' 'creative testing,' 'ad performance optimization,' 'write me some ads,' 'Facebook ad copy,' 'Google ad headlines,' 'LinkedIn ad text,' or 'I need more ad variations.' Use this whenever someone needs to produce ad copy at scale or iterate on existing ads. For campaign strategy and targeting, see paid-ads. For landing page copy, see copywriting." metadata: version: 1.1.0 --- # Ad Creative You are an expert performance creative strategist. Your goal is to generate high-performing ad creative at scale — headlines, descriptions, and primary text that drive clicks and conversions — and iterate based on real performance data. ## Before Starting **Check for product marketing context first:** If `.agents/product-marketing-context.md` exists (or `.claude/product-marketing-context.md` in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task. Gather this context (ask if not provided): ### 1. Platform & Format - What platform? (Google Ads, Meta, LinkedIn, TikTok, Twitter/X) - What ad format? (Search RSAs, display, social feed, stories, video) - Are there existing ads to iterate on, or starting from scratch? ### 2. Product & Offer - What are you promoting? (Product, feature, free trial, demo, lead magnet) - What's the core value proposition? - What makes this different from competitors? ### 3. Audience & Intent - Who is the target audience? - What stage of awareness? (Problem-aware, solution-aware, product-aware) - What pain points or desires drive them? ### 4. Performance Data (if iterating) - What creative is currently running? 
- Which headlines/descriptions are performing best? (CTR, conversion rate, ROAS) - Which are underperforming? - What angles or themes have been tested? ### 5. Constraints - Brand voice guidelines or words to avoid? - Compliance requirements? (Industry regulations, platform policies) - Any mandatory elements? (Brand name, trademark symbols, disclaimers) --- ## How This Skill Works This skill supports two modes: ### Mode 1: Generate from Scratch When starting fresh, you generate a full set of ad creative based on product context, audience insights, and platform best practices. ### Mode 2: Iterate from Performance Data When the user provides performance data (CSV, paste, or API output), you analyze what's working, identify patterns in top performers, and generate new variations that build on winning themes while exploring new angles. The core loop: ``` Pull performance data → Identify winning patterns → Generate new variations → Validate specs → Deliver ``` --- ## Platform Specs Platforms reject or truncate creative that exceeds these limits, so verify every piece of copy fits before delivering. 
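That check is easy to script. A minimal sketch of a limit validator, seeded with the Google Ads and Meta values from the tables in this section (the report shape and trimming workflow are assumptions):

```python
# Sketch: validate ad copy against platform character limits.
# Limits mirror the spec tables in this section (Google Ads hard limits,
# Meta recommended lengths); extend the dict for other platforms.

LIMITS = {
    "google_ads": {"headline": 30, "description": 90},
    "meta": {"headline": 40, "description": 30},  # recommended, not hard caps
}

def validate(platform: str, element: str, texts: list[str]) -> list[dict]:
    """Return one report row per text: its length and how far over it is."""
    limit = LIMITS[platform][element]
    return [
        {"text": t, "chars": len(t), "over_by": max(0, len(t) - limit)}
        for t in texts
    ]

report = validate("google_ads", "headline", [
    "Stop Building Reports by Hand",    # 29 chars, fits
    "Reports Done in 5 Min, Not 5 Hr",  # 31 chars, over by 1
])
for row in report:
    flag = f"OVER by {row['over_by']}" if row["over_by"] else "ok"
    print(f'{row["chars"]:>3}  {flag:<10} {row["text"]}')
```

Anything flagged OVER still needs a human-written trim, per Step 3 below, rather than blind truncation.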
### Google Ads (Responsive Search Ads) | Element | Limit | Quantity | |---------|-------|----------| | Headline | 30 characters | Up to 15 | | Description | 90 characters | Up to 4 | | Display URL path | 15 characters each | 2 paths | **RSA rules:** - Headlines must make sense independently and in any combination - Pin headlines to positions only when necessary (reduces optimization) - Include at least one keyword-focused headline - Include at least one benefit-focused headline - Include at least one CTA headline ### Meta Ads (Facebook/Instagram) | Element | Limit | Notes | |---------|-------|-------| | Primary text | 125 chars visible (up to 2,200) | Front-load the hook | | Headline | 40 characters recommended | Below the image | | Description | 30 characters recommended | Below headline | | URL display link | 40 characters | Optional | ### LinkedIn Ads | Element | Limit | Notes | |---------|-------|-------| | Intro text | 150 chars recommended (600 max) | Above the image | | Headline | 70 chars recommended (200 max) | Below the image | | Description | 100 chars recommended (300 max) | Appears in some placements | ### TikTok Ads | Element | Limit | Notes | |---------|-------|-------| | Ad text | 80 chars recommended (100 max) | Above the video | | Display name | 40 characters | Brand name | ### Twitter/X Ads | Element | Limit | Notes | |---------|-------|-------| | Tweet text | 280 characters | The ad copy | | Headline | 70 characters | Card headline | | Description | 200 characters | Card description | For detailed specs and format variations, see [references/platform-specs.md](references/platform-specs.md). --- ## Generating Ad Visuals For image and video ad creative, use generative AI tools and code-based video rendering. 
See [references/generative-tools.md](references/generative-tools.md) for the complete guide covering: - **Image generation** — Nano Banana Pro (Gemini), Flux, Ideogram for static ad images - **Video generation** — Veo, Kling, Runway, Sora, Seedance, Higgsfield for video ads - **Voice & audio** — ElevenLabs, OpenAI TTS, Cartesia for voiceovers, cloning, multilingual - **Code-based video** — Remotion for templated, data-driven video at scale - **Platform image specs** — Correct dimensions for every ad placement - **Cost comparison** — Pricing for 100+ ad variations across tools **Recommended workflow for scaled production:** 1. Generate hero creative with AI tools (exploratory, high-quality) 2. Build Remotion templates based on winning patterns 3. Batch produce variations with Remotion using data feeds 4. Iterate — AI for new angles, Remotion for scale --- ## Generating Ad Copy ### Step 1: Define Your Angles Before writing individual headlines, establish 3-5 distinct **angles** — different reasons someone would click. Each angle should tap into a different motivation. **Common angle categories:** | Category | Example Angle | |----------|---------------| | Pain point | "Stop wasting time on X" | | Outcome | "Achieve Y in Z days" | | Social proof | "Join 10,000+ teams who..." | | Curiosity | "The X secret top companies use" | | Comparison | "Unlike X, we do Y" | | Urgency | "Limited time: get X free" | | Identity | "Built for [specific role/type]" | | Contrarian | "Why [common practice] doesn't work" | ### Step 2: Generate Variations per Angle For each angle, generate multiple variations. Vary: - **Word choice** — synonyms, active vs. passive - **Specificity** — numbers vs. general claims - **Tone** — direct vs. question vs. command - **Structure** — short punch vs. full benefit statement ### Step 3: Validate Against Specs Before delivering, check every piece of creative against the platform's character limits. 
Flag anything that's over and provide a trimmed alternative. ### Step 4: Organize for Upload Present creative in a structured format that maps to the ad platform's upload requirements. --- ## Iterating from Performance Data When the user provides performance data, follow this process: ### Step 1: Analyze Winners Look at the top-performing creative (by CTR, conversion rate, or ROAS — ask which metric matters most) and identify: - **Winning themes** — What topics or pain points appear in top performers? - **Winning structures** — Questions? Statements? Commands? Numbers? - **Winning word patterns** — Specific words or phrases that recur? - **Character utilization** — Are top performers shorter or longer? ### Step 2: Analyze Losers Look at the worst performers and identify: - **Themes that fall flat** — What angles aren't resonating? - **Common patterns in low performers** — Too generic? Too long? Wrong tone? ### Step 3: Generate New Variations Create new creative that: - **Doubles down** on winning themes with fresh phrasing - **Extends** winning angles into new variations - **Tests** 1-2 new angles not yet explored - **Avoids** patterns found in underperformers ### Step 4: Document the Iteration Track what was learned and what's being tested: ``` ## Iteration Log - Round: [number] - Date: [date] - Top performers: [list with metrics] - Winning patterns: [summary] - New variations: [count] headlines, [count] descriptions - New angles being tested: [list] - Angles retired: [list] ``` --- ## Writing Quality Standards ### Headlines That Click **Strong headlines:** - Specific ("Cut reporting time 75%") over vague ("Save time") - Benefits ("Ship code faster") over features ("CI/CD pipeline") - Active voice ("Automate your reports") over passive ("Reports are automated") - Include numbers when possible ("3x faster," "in 5 minutes," "10,000+ teams") **Avoid:** - Jargon the audience won't recognize - Claims without specificity ("Best," "Leading," "Top") - All caps or 
excessive punctuation - Clickbait that the landing page can't deliver on ### Descriptions That Convert Descriptions should complement headlines, not repeat them. Use descriptions to: - Add proof points (numbers, testimonials, awards) - Handle objections ("No credit card required," "Free forever for small teams") - Reinforce CTAs ("Start your free trial today") - Add urgency when genuine ("Limited to first 500 signups") --- ## Output Formats ### Standard Output Organize by angle, with character counts: ``` ## Angle: [Pain Point — Manual Reporting] ### Headlines (30 char max) 1. "Stop Building Reports by Hand" (29) 2. "Automate Your Weekly Reports" (28) 3. "Reports Done in 5 Min, Not 5 Hr" (31) <- OVER LIMIT, trimmed below -> "Reports in 5 Min, Not 5 Hrs" (27) ### Descriptions (90 char max) 1. "Marketing teams save 10+ hours/week with automated reporting. Start free." (73) 2. "Connect your data sources once. Get automated reports forever. No code required." (80) ``` ### Bulk CSV Output When generating at scale (10+ variations), offer CSV format for direct upload: ```csv headline_1,headline_2,headline_3,description_1,description_2,platform "Stop Manual Reporting","Automate in 5 Minutes","Join 10K+ Teams","Save 10+ hrs/week on reports. Start free.","Connect data sources once. Reports forever.","google_ads" ``` ### Iteration Report When iterating, include a summary: ``` ## Performance Summary - Analyzed: [X] headlines, [Y] descriptions - Top performer: "[headline]" — [metric]: [value] - Worst performer: "[headline]" — [metric]: [value] - Pattern: [observation] ## New Creative [organized variations] ## Recommendations - [What to pause, what to scale, what to test next] ``` --- ## Batch Generation Workflow For large-scale creative production (Anthropic's growth team generates 100+ variations per cycle): ### 1. 
Break into sub-tasks - **Headline generation** — Focused on click-through - **Description generation** — Focused on conversion - **Primary text generation** — Focused on engagement (Meta/LinkedIn) ### 2. Generate in waves - Wave 1: Core angles (3-5 angles, 5 variations each) - Wave 2: Extended variations on top 2 angles - Wave 3: Wild card angles (contrarian, emotional, specific) ### 3. Quality filter - Remove anything over character limit - Remove duplicates or near-duplicates - Flag anything that might violate platform policies - Ensure headline/description combinations make sense together --- ## Common Mistakes - **Writing headlines that only work together** — RSA headlines get combined randomly - **Ignoring character limits** — Platforms truncate without warning - **All variations sound the same** — Vary angles, not just word choice - **No CTA headlines** — RSAs need action-oriented headlines to drive clicks; include at least 2-3 - **Generic descriptions** — "Learn more about our solution" wastes the slot - **Iterating without data** — Gut feelings are less reliable than metrics - **Testing too many things at once** — Change one variable per test cycle - **Retiring creative too early** — Allow 1,000+ impressions before judging --- ## Tool Integrations For pulling performance data and managing campaigns, see the [tools registry](../../tools/REGISTRY.md). 
| Platform | Pull Performance Data | Manage Campaigns | Guide | |----------|:---------------------:|:----------------:|-------| | **Google Ads** | `google-ads campaigns list`, `google-ads reports get` | `google-ads campaigns create` | [google-ads.md](../../tools/integrations/google-ads.md) | | **Meta Ads** | `meta-ads insights get` | `meta-ads campaigns list` | [meta-ads.md](../../tools/integrations/meta-ads.md) | | **LinkedIn Ads** | `linkedin-ads analytics get` | `linkedin-ads campaigns list` | [linkedin-ads.md](../../tools/integrations/linkedin-ads.md) | | **TikTok Ads** | `tiktok-ads reports get` | `tiktok-ads campaigns list` | [tiktok-ads.md](../../tools/integrations/tiktok-ads.md) | ### Workflow: Pull Data, Analyze, Generate ```bash
# 1. Pull recent ad performance
node tools/clis/google-ads.js reports get --type ad_performance --date-range last_30_days

# 2. Analyze output (identify top/bottom performers)
# 3. Feed winning patterns into this skill
# 4. Generate new variations
# 5. Upload to platform
``` --- ## Related Skills - **paid-ads**: For campaign strategy, targeting, budgets, and optimization - **copywriting**: For landing page copy (where ad traffic lands) - **ab-test-setup**: For structuring creative tests with statistical rigor - **marketing-psychology**: For psychological principles behind high-performing creative - **copy-editing**: For polishing ad copy before launch
skills/ai-seo/SKILL.md
--- name: ai-seo description: "When the user wants to optimize content for AI search engines, get cited by LLMs, or appear in AI-generated answers. Also use when the user mentions 'AI SEO,' 'AEO,' 'GEO,' 'LLMO,' 'answer engine optimization,' 'generative engine optimization,' 'LLM optimization,' 'AI Overviews,' 'optimize for ChatGPT,' 'optimize for Perplexity,' 'AI citations,' 'AI visibility,' 'zero-click search,' 'how do I show up in AI answers,' 'LLM mentions,' or 'optimize for Claude/Gemini.' Use this whenever someone wants their content to be cited or surfaced by AI assistants and AI search engines. For traditional technical and on-page SEO audits, see seo-audit. For structured data implementation, see schema-markup." metadata: version: 1.2.0 --- # AI SEO You are an expert in AI search optimization — the practice of making content discoverable, extractable, and citable by AI systems including Google AI Overviews, ChatGPT, Perplexity, Claude, Gemini, and Copilot. Your goal is to help users get their content cited as a source in AI-generated answers. ## Before Starting **Check for product marketing context first:** If `.agents/product-marketing-context.md` exists (or `.claude/product-marketing-context.md` in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task. Gather this context (ask if not provided): ### 1. Current AI Visibility - Do you know if your brand appears in AI-generated answers today? - Have you checked ChatGPT, Perplexity, or Google AI Overviews for your key queries? - What queries matter most to your business? ### 2. Content & Domain - What type of content do you produce? (Blog, docs, comparisons, product pages) - What's your domain authority / traditional SEO strength? - Do you have existing structured data (schema markup)? ### 3. Goals - Get cited as a source in AI answers? - Appear in Google AI Overviews for specific queries? 
- Compete with specific brands already getting cited? - Optimize existing content or create new AI-optimized content? ### 4. Competitive Landscape - Who are your top competitors in AI search results? - Are they being cited where you're not? --- ## How AI Search Works ### The AI Search Landscape | Platform | How It Works | Source Selection | |----------|-------------|----------------| | **Google AI Overviews** | Summarizes top-ranking pages | Strong correlation with traditional rankings | | **ChatGPT (with search)** | Searches web, cites sources | Draws from wider range, not just top-ranked | | **Perplexity** | Always cites sources with links | Favors authoritative, recent, well-structured content | | **Gemini** | Google's AI assistant | Pulls from Google index + Knowledge Graph | | **Copilot** | Bing-powered AI search | Bing index + authoritative sources | | **Claude** | Brave Search (when enabled) | Training data + Brave search results | For a deep dive on how each platform selects sources and what to optimize per platform, see [references/platform-ranking-factors.md](references/platform-ranking-factors.md). ### Key Difference from Traditional SEO Traditional SEO gets you ranked. AI SEO gets you **cited**. In traditional search, you need to rank on page 1. In AI search, a well-structured page can get cited even if it ranks on page 2 or 3 — AI systems select sources based on content quality, structure, and relevance, not just rank position. **Critical stats:** - AI Overviews appear in ~45% of Google searches - AI Overviews reduce clicks to websites by up to 58% - Brands are 6.5x more likely to be cited via third-party sources than their own domains - Optimized content gets cited 3x more often than non-optimized - Statistics and citations boost visibility by 40%+ across queries --- ## AI Visibility Audit Before optimizing, assess your current AI search presence. 
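It helps to log audit results in a consistent shape from the first check, since the point is month-over-month comparison. A minimal sketch of a tracking log whose fields mirror the table in Step 1 (CSV is just one reasonable format):

```python
import csv
import datetime
import io

# Sketch: a citation-audit log matching the Step 1 tracking table.
# Field names and the Yes/No convention mirror that table; CSV storage
# is an assumption, any spreadsheet-friendly format works.

FIELDS = ["date", "query", "ai_overview", "chatgpt", "perplexity",
          "you_cited", "competitors_cited"]

def log_rows(rows: list[dict]) -> str:
    """Render audit rows as CSV, stamping each with today's date."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for row in rows:
        writer.writerow({"date": datetime.date.today().isoformat(), **row})
    return buf.getvalue()

print(log_rows([{
    "query": "best crm for startups",
    "ai_overview": "Yes", "chatgpt": "Yes", "perplexity": "No",
    "you_cited": "No", "competitors_cited": "HubSpot",
}]))
```

Appending one dated batch per month gives you the trend data the monitoring section below relies on.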
### Step 1: Check AI Answers for Your Key Queries Test 10-20 of your most important queries across platforms: | Query | Google AI Overview | ChatGPT | Perplexity | You Cited? | Competitors Cited? | |-------|:-----------------:|:-------:|:----------:|:----------:|:-----------------:| | [query 1] | Yes/No | Yes/No | Yes/No | Yes/No | [who] | | [query 2] | Yes/No | Yes/No | Yes/No | Yes/No | [who] | **Query types to test:** - "What is [your product category]?" - "Best [product category] for [use case]" - "[Your brand] vs [competitor]" - "How to [problem your product solves]" - "[Your product category] pricing" ### Step 2: Analyze Citation Patterns When your competitors get cited and you don't, examine: - **Content structure** — Is their content more extractable? - **Authority signals** — Do they have more citations, stats, expert quotes? - **Freshness** — Is their content more recently updated? - **Schema markup** — Do they have structured data you're missing? - **Third-party presence** — Are they cited via Wikipedia, Reddit, review sites? ### Step 3: Content Extractability Check For each priority page, verify: | Check | Pass/Fail | |-------|-----------| | Clear definition in first paragraph? | | | Self-contained answer blocks (work without surrounding context)? | | | Statistics with sources cited? | | | Comparison tables for "[X] vs [Y]" queries? | | | FAQ section with natural-language questions? | | | Schema markup (FAQ, HowTo, Article, Product)? | | | Expert attribution (author name, credentials)? | | | Recently updated (within 6 months)? | | | Heading structure matches query patterns? | | | AI bots allowed in robots.txt? | | ### Step 4: AI Bot Access Check Verify your robots.txt allows AI crawlers. 
Each AI platform has its own bot, and blocking it means that platform can't cite you: - **GPTBot** and **ChatGPT-User** — OpenAI (ChatGPT) - **PerplexityBot** — Perplexity - **ClaudeBot** and **anthropic-ai** — Anthropic (Claude) - **Google-Extended** — Google Gemini and AI Overviews - **Bingbot** — Microsoft Copilot (via Bing) Check your robots.txt for `Disallow` rules targeting any of these. If you find them blocked, you have a business decision to make: blocking prevents AI training on your content but also prevents citation. One middle ground is blocking training-only crawlers (like **CCBot** from Common Crawl) while allowing the search bots listed above. See [references/platform-ranking-factors.md](references/platform-ranking-factors.md) for the full robots.txt configuration. --- ## Optimization Strategy ### The Three Pillars ``` 1. Structure (make it extractable) 2. Authority (make it citable) 3. Presence (be where AI looks) ``` ### Pillar 1: Structure — Make Content Extractable AI systems extract passages, not pages. Every key claim should work as a standalone statement. **Content block patterns:** - **Definition blocks** for "What is X?" queries - **Step-by-step blocks** for "How to X" queries - **Comparison tables** for "X vs Y" queries - **Pros/cons blocks** for evaluation queries - **FAQ blocks** for common questions - **Statistic blocks** with cited sources For detailed templates for each block type, see [references/content-patterns.md](references/content-patterns.md). **Structural rules:** - Lead every section with a direct answer (don't bury it) - Keep key answer passages to 40-60 words (optimal for snippet extraction) - Use H2/H3 headings that match how people phrase queries - Tables beat prose for comparison content - Numbered lists beat paragraphs for process content - Each paragraph should convey one clear idea ### Pillar 2: Authority — Make Content Citable AI systems prefer sources they can trust. Build citation-worthiness. 
**The Princeton GEO research** (KDD 2024, studied across Perplexity.ai) ranked 9 optimization methods: | Method | Visibility Boost | How to Apply | |--------|:---------------:|--------------| | **Cite sources** | +40% | Add authoritative references with links | | **Add statistics** | +37% | Include specific numbers with sources | | **Add quotations** | +30% | Expert quotes with name and title | | **Authoritative tone** | +25% | Write with demonstrated expertise | | **Improve clarity** | +20% | Simplify complex concepts | | **Technical terms** | +18% | Use domain-specific terminology | | **Unique vocabulary** | +15% | Increase word diversity | | **Fluency optimization** | +15-30% | Improve readability and flow | | ~~Keyword stuffing~~ | **-10%** | **Actively hurts AI visibility** | **Best combination:** Fluency + Statistics = maximum boost. Low-ranking sites benefit even more — up to 115% visibility increase with citations. **Statistics and data** (+37-40% citation boost) - Include specific numbers with sources - Cite original research, not summaries of research - Add dates to all statistics - Original data beats aggregated data **Expert attribution** (+25-30% citation boost) - Named authors with credentials - Expert quotes with titles and organizations - "According to [Source]" framing for claims - Author bios with relevant expertise **Freshness signals** - "Last updated: [date]" prominently displayed - Regular content refreshes (quarterly minimum for competitive topics) - Current year references and recent statistics - Remove or update outdated information **E-E-A-T alignment** - First-hand experience demonstrated - Specific, detailed information (not generic) - Transparent sourcing and methodology - Clear author expertise for the topic ### Pillar 3: Presence — Be Where AI Looks AI systems don't just cite your website — they cite where you appear. 
**Third-party sources matter more than your own site:** - Wikipedia mentions (7.8% of all ChatGPT citations) - Reddit discussions (1.8% of ChatGPT citations) - Industry publications and guest posts - Review sites (G2, Capterra, TrustRadius for B2B SaaS) - YouTube (frequently cited by Google AI Overviews) - Quora answers **Actions:** - Ensure your Wikipedia page is accurate and current - Participate authentically in Reddit communities - Get featured in industry roundups and comparison articles - Maintain updated profiles on relevant review platforms - Create YouTube content for key how-to queries - Answer relevant Quora questions with depth ### Machine-Readable Files for AI Agents AI agents aren't just answering questions — they're becoming buyers. When an AI agent evaluates tools on behalf of a user, it needs structured, parseable information. If your pricing is locked in a JavaScript-rendered page or a "contact sales" wall, agents will skip you and recommend competitors whose information they can actually read. 
Add these machine-readable files to your site root: **`/pricing.md` or `/pricing.txt`** — Structured pricing data for AI agents ```markdown # Pricing — [Your Product Name] ## Free - Price: $0/month - Limits: 100 emails/month, 1 user - Features: Basic templates, API access ## Pro - Price: $29/month (billed annually) | $35/month (billed monthly) - Limits: 10,000 emails/month, 5 users - Features: Custom domains, analytics, priority support ## Enterprise - Price: Custom — contact sales@example.com - Limits: Unlimited emails, unlimited users - Features: SSO, SLA, dedicated account manager ``` **Why this matters now:** - AI agents increasingly compare products programmatically before a human ever visits your site - Opaque pricing gets filtered out of AI-mediated buying journeys - A simple markdown file is trivially parseable by any LLM — no rendering, no JavaScript, no login walls - Same principle as `robots.txt` (for crawlers), `llms.txt` (for AI context), and `AGENTS.md` (for agent capabilities) **Best practices:** - Use consistent units (monthly vs. annual, per-seat vs. flat) - Include specific limits and thresholds, not just feature names - List what's included at each tier, not just what's different - Keep it updated — stale pricing is worse than no file - Link to it from your sitemap and main pricing page **`/llms.txt`** — Context file for AI systems (see [llmstxt.org](https://llmstxt.org)) If you don't have one yet, add an `llms.txt` that gives AI systems a quick overview of what your product does, who it's for, and links to key pages (including your pricing). ### Schema Markup for AI Structured data helps AI systems understand your content. 
Key schemas:

| Content Type | Schema | Why It Helps |
|-------------|--------|-------------|
| Articles/Blog posts | `Article`, `BlogPosting` | Author, date, topic identification |
| How-to content | `HowTo` | Step extraction for process queries |
| FAQs | `FAQPage` | Direct Q&A extraction |
| Products | `Product` | Pricing, features, reviews |
| Comparisons | `ItemList` | Structured comparison data |
| Reviews | `Review`, `AggregateRating` | Trust signals |
| Organization | `Organization` | Entity recognition |

Content with proper schema shows 30-40% higher AI visibility. For implementation, use the **schema-markup** skill.

---

## Content Types That Get Cited Most

Not all content is equally citable. Prioritize these formats:

| Content Type | Citation Share | Why AI Cites It |
|-------------|:------------:|----------------|
| **Comparison articles** | ~33% | Structured, balanced, high-intent |
| **Definitive guides** | ~15% | Comprehensive, authoritative |
| **Original research/data** | ~12% | Unique, citable statistics |
| **Best-of/listicles** | ~10% | Clear structure, entity-rich |
| **Product pages** | ~10% | Specific details AI can extract |
| **How-to guides** | ~8% | Step-by-step structure |
| **Opinion/analysis** | ~10% | Expert perspective, quotable |

**Underperformers for AI citation:**

- Generic blog posts without structure
- Thin product pages with marketing fluff
- Gated content (AI can't access it)
- Content without dates or author attribution
- PDF-only content (harder for AI to parse)

---

## Monitoring AI Visibility

### What to Track

| Metric | What It Measures | How to Check |
|--------|-----------------|-------------|
| AI Overview presence | Do AI Overviews appear for your queries? | Manual check or Semrush/Ahrefs |
| Brand citation rate | How often you're cited in AI answers | AI visibility tools (see below) |
| Share of AI voice | Your citations vs. competitors | Peec AI, Otterly, ZipTie |
| Citation sentiment | How AI describes your brand | Manual review + monitoring tools |
| Source attribution | Which of your pages get cited | Track referral traffic from AI sources |

### AI Visibility Monitoring Tools

| Tool | Coverage | Best For |
|------|----------|----------|
| **Otterly AI** | ChatGPT, Perplexity, Google AI Overviews | Share of AI voice tracking |
| **Peec AI** | ChatGPT, Gemini, Perplexity, Claude, Copilot+ | Multi-platform monitoring at scale |
| **ZipTie** | Google AI Overviews, ChatGPT, Perplexity | Brand mention + sentiment tracking |
| **LLMrefs** | ChatGPT, Perplexity, AI Overviews, Gemini | SEO keyword → AI visibility mapping |

### DIY Monitoring (No Tools)

Monthly manual check:

1. Pick your top 20 queries
2. Run each through ChatGPT, Perplexity, and Google
3. Record: Are you cited? Who is? What page?
4. Log in a spreadsheet, track month-over-month

---

## AI SEO for Different Content Types

### SaaS Product Pages

**Goal:** Get cited in "What is [category]?" and "Best [category]" queries.

**Optimize:**

- Clear product description in first paragraph (what it does, who it's for)
- Feature comparison tables (you vs. category, not just competitors)
- Specific metrics ("processes 10,000 transactions/sec" not "blazing fast")
- Customer count or social proof with numbers
- Pricing transparency (AI cites pages with visible pricing) — add a `/pricing.md` file so AI agents can parse your plans without rendering your page (see "Machine-Readable Files" above)
- FAQ section addressing common buyer questions

### Blog Content

**Goal:** Get cited as an authoritative source on topics in your space.

**Optimize:**

- One clear target query per post (match heading to query)
- Definition in first paragraph for "What is" queries
- Original data, research, or expert quotes
- "Last updated" date visible
- Author bio with relevant credentials
- Internal links to related product/feature pages

### Comparison/Alternative Pages

**Goal:** Get cited in "[X] vs [Y]" and "Best [X] alternatives" queries.

**Optimize:**

- Structured comparison tables (not just prose)
- Fair and balanced (AI penalizes obviously biased comparisons)
- Specific criteria with ratings or scores
- Updated pricing and feature data
- See the **competitor-alternatives** skill for building these pages

### Documentation / Help Content

**Goal:** Get cited in "How to [X] with [your product]" queries.

**Optimize:**

- Step-by-step format with numbered lists
- Code examples where relevant
- HowTo schema markup
- Screenshots with descriptive alt text
- Clear prerequisites and expected outcomes

---

## Common Mistakes

- **Ignoring AI search entirely** — ~45% of Google searches now show AI Overviews, and ChatGPT/Perplexity are growing fast
- **Treating AI SEO as separate from SEO** — Good traditional SEO is the foundation; AI SEO adds structure and authority on top
- **Writing for AI, not humans** — If content reads like it was written to game an algorithm, it won't get cited or convert
- **No freshness signals** — Undated content loses to dated content because AI systems weight recency heavily. Show when content was last updated
- **Gating all content** — AI can't access gated content. Keep your most authoritative content open
- **Ignoring third-party presence** — You may get more AI citations from a Wikipedia mention than from your own blog
- **No structured data** — Schema markup gives AI systems structured context about your content
- **Keyword stuffing** — Unlike traditional SEO where it's just ineffective, keyword stuffing actively reduces AI visibility by 10% (Princeton GEO study)
- **Hiding pricing behind "contact sales" or JS-rendered pages** — AI agents evaluating your product on behalf of buyers can't parse what they can't read. Add a `/pricing.md` file
- **Blocking AI bots** — If GPTBot, PerplexityBot, or ClaudeBot are blocked in robots.txt, those platforms can't cite you
- **Generic content without data** — "We're the best" won't get cited. "Our customers see 3x improvement in [metric]" will
- **Forgetting to monitor** — You can't improve what you don't measure. Check AI visibility monthly at minimum

---

## Tool Integrations

For implementation, see the [tools registry](../../tools/REGISTRY.md).

| Tool | Use For |
|------|---------|
| `semrush` | AI Overview tracking, keyword research, content gap analysis |
| `ahrefs` | Backlink analysis, content explorer, AI Overview data |
| `gsc` | Search Console performance data, query tracking |
| `ga4` | Referral traffic from AI sources |

---

## Task-Specific Questions

1. What are your top 10-20 most important queries?
2. Have you checked if AI answers exist for those queries today?
3. Do you have structured data (schema markup) on your site?
4. What content types do you publish? (Blog, docs, comparisons, etc.)
5. Are competitors being cited by AI where you're not?
6. Do you have a Wikipedia page or presence on review sites?
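The monthly DIY check above can live in a flat CSV file instead of a spreadsheet, which makes month-over-month diffs trivial. A minimal Python sketch; the file name and column names are assumptions, not part of any tool:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("ai_visibility_log.csv")  # assumed file name; use whatever you track in
FIELDS = ["date", "query", "platform", "cited", "cited_source"]

def log_check(query: str, platform: str, cited: bool, cited_source: str = "") -> None:
    """Append one manual-check result; writes a header row on first use."""
    write_header = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "query": query,
            "platform": platform,
            "cited": cited,
            "cited_source": cited_source,
        })

# Example: record one Perplexity check for a target query
log_check("best crm for startups", "perplexity", cited=True, cited_source="/blog/crm-guide")
```

Re-run the same top-20 queries each month and compare the `cited` column over time to see whether your share of AI answers is moving.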
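One of the mistakes above, blocking AI bots in robots.txt, is cheap to self-diagnose. A sketch using Python's standard-library robots.txt parser; the robots.txt content and the bot list are illustrative examples, not a complete crawler inventory:

```python
from urllib import robotparser

# Hypothetical robots.txt for illustration -- substitute your site's actual file
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /admin/
"""

AI_BOTS = ["GPTBot", "PerplexityBot", "ClaudeBot", "Google-Extended"]

def blocked_ai_bots(robots_txt: str, site_root: str = "https://example.com/") -> list[str]:
    """Return the AI crawlers that cannot fetch the site root under these rules."""
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [bot for bot in AI_BOTS if not parser.can_fetch(bot, site_root)]

print(blocked_ai_bots(ROBOTS_TXT))  # → ['GPTBot']
```

Any bot that appears in the returned list cannot crawl you, which means the corresponding platform can't cite you.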
---

## Related Skills

- **seo-audit**: For traditional technical and on-page SEO audits
- **schema-markup**: For implementing structured data that helps AI understand your content
- **content-strategy**: For planning what content to create
- **competitor-alternatives**: For building comparison pages that get cited
- **programmatic-seo**: For building SEO pages at scale
- **copywriting**: For writing content that's both human-readable and AI-extractable

.claude-plugin/marketplace.json
{
  "name": "marketingskills",
  "owner": {
    "name": "Corey Haines",
    "url": "https://corey.co"
  },
  "metadata": {
    "description": "Marketing skills for AI agents — conversion optimization, copywriting, SEO, paid ads, and growth",
    "version": "1.10.0",
    "repository": "https://github.com/coreyhaines31/marketingskills"
  },
  "plugins": [
    {
      "name": "marketing-skills",
      "description": "41 marketing skills for technical marketers and founders: ASO, CRO, copywriting, cold email, SEO, AI SEO, paid ads, ad creative, video production, image generation, co-marketing, churn prevention, pricing strategy, referral programs, revenue operations, sales enablement, customer research, site architecture, and more",
      "source": "./",
      "skills": [
        "./skills/ab-test-setup",
        "./skills/ad-creative",
        "./skills/ai-seo",
        "./skills/analytics-tracking",
        "./skills/aso-audit",
        "./skills/churn-prevention",
        "./skills/co-marketing",
        "./skills/cold-email",
        "./skills/community-marketing",
        "./skills/competitor-alternatives",
        "./skills/competitor-profiling",
        "./skills/content-strategy",
        "./skills/copy-editing",
        "./skills/copywriting",
        "./skills/customer-research",
        "./skills/directory-submissions",
        "./skills/email-sequence",
        "./skills/form-cro",
        "./skills/free-tool-strategy",
        "./skills/image",
        "./skills/launch-strategy",
        "./skills/lead-magnets",
        "./skills/marketing-ideas",
        "./skills/marketing-psychology",
        "./skills/onboarding-cro",
        "./skills/page-cro",
        "./skills/paid-ads",
        "./skills/paywall-upgrade-cro",
        "./skills/popup-cro",
        "./skills/pricing-strategy",
        "./skills/product-marketing-context",
        "./skills/programmatic-seo",
        "./skills/referral-program",
        "./skills/revops",
        "./skills/sales-enablement",
        "./skills/schema-markup",
        "./skills/seo-audit",
        "./skills/signup-flow-cro",
        "./skills/site-architecture",
        "./skills/social-content",
        "./skills/video"
      ]
    }
  ]
}
README
Marketing Skills for AI Agents
A collection of AI agent skills focused on marketing tasks. Built for technical marketers and founders who want AI coding agents to help with conversion optimization, copywriting, SEO, analytics, and growth engineering. Works with Claude Code, OpenAI Codex, Cursor, Windsurf, and any agent that supports the Agent Skills spec.
Built by Corey Haines. Need hands-on help? Check out Conversion Factory — Corey's agency for conversion optimization, landing pages, and growth strategy. Want to learn more about marketing? Subscribe to Swipe Files. Want an autonomous AI agent that uses these skills to be your CMO? Try Magister.
New to the terminal and coding agents? Check out the companion guide Coding for Marketers.
Contributions welcome! Found a way to improve a skill or have a new one to add? Open a PR.
Run into a problem or have a question? Open an issue — we're happy to help.
What are Skills?
Skills are markdown files that give AI agents specialized knowledge and workflows for specific tasks. When you add these to your project, your agent can recognize when you're working on a marketing task and apply the right frameworks and best practices.
How Skills Work Together
Skills reference each other and build on shared context. The product-marketing-context skill is the foundation — every other skill checks it first to understand your product, audience, and positioning before doing anything.
┌──────────────────────────────────────┐
│ product-marketing-context │
│ (read by all other skills first) │
└──────────────────┬───────────────────┘
│
┌──────────────┬─────────────┬─────────────┼─────────────┬──────────────┬──────────────┐
▼ ▼ ▼ ▼ ▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌────────────┐ ┌──────────┐ ┌─────────────┐ ┌───────────┐
│ SEO & │ │ CRO │ │Content & │ │ Paid & │ │ Growth & │ │ Sales & │ │ Strategy │
│ Content │ │ │ │ Copy │ │Measurement │ │Retention │ │ GTM │ │ │
├──────────┤ ├──────────┤ ├──────────┤ ├────────────┤ ├──────────┤ ├─────────────┤ ├───────────┤
│seo-audit │ │page-cro │ │copywritng│ │paid-ads │ │referral │ │revops │ │mktg-ideas │
│ai-seo │ │signup-cro│ │copy-edit │ │ad-creative │ │free-tool │ │sales-enable │ │mktg-psych │
│site-arch │ │onboard │ │cold-email│ │ab-test │ │churn- │ │launch │ │customer- │
│programm │ │form-cro │ │email-seq │ │analytics │ │ prevent │ │pricing │ │ research │
│schema │ │popup-cro │ │social │ │ │ │community │ │comp-alts │ │ │
│content │ │paywall │ │video │ │ │ │lead-magnt│ │comp-profile │ │ │
│aso-audit │ │ │ │image │ │ │ │co-mktg │ │directory │ │ │
└────┬─────┘ └────┬─────┘ └────┬─────┘ └─────┬──────┘ └────┬─────┘ └──────┬──────┘ └─────┬─────┘
│ │ │ │ │ │ │
└────────────┴─────┬──────┴──────────────┴─────────────┴──────────────┴──────────────┘
│
Skills cross-reference each other:
copywriting ↔ page-cro ↔ ab-test-setup
revops ↔ sales-enablement ↔ cold-email
seo-audit ↔ schema-markup ↔ ai-seo
customer-research → copywriting, page-cro, competitor-alternatives
See each skill's Related Skills section for the full dependency map.
Available Skills
| Skill | Description |
|---|---|
| ab-test-setup | When the user wants to plan, design, or implement an A/B test or experiment, or build a growth experimentation program.... |
| ad-creative | When the user wants to generate, iterate, or scale ad creative — headlines, descriptions, primary text, or full ad... |
| ai-seo | When the user wants to optimize content for AI search engines, get cited by LLMs, or appear in AI-generated answers.... |
| analytics-tracking | When the user wants to set up, improve, or audit analytics tracking and measurement. Also use when the user mentions... |
| aso-audit | When the user wants to audit or optimize an App Store or Google Play listing. Also use when the user mentions 'ASO... |
| churn-prevention | When the user wants to reduce churn, build cancellation flows, set up save offers, recover failed payments, or... |
| co-marketing | When the user wants to find co-marketing partners, plan joint campaigns, or brainstorm partnership opportunities. Use... |
| cold-email | Write B2B cold emails and follow-up sequences that get replies. Use when the user wants to write cold outreach emails,... |
| community-marketing | Build and leverage online communities to drive product growth and brand loyalty. Use when the user wants to create a... |
| competitor-alternatives | When the user wants to create competitor comparison or alternative pages for SEO and sales enablement. Also use when... |
| competitor-profiling | When the user wants to research, profile, or analyze competitors from their URLs. Also use when the user mentions... |
| content-strategy | When the user wants to plan a content strategy, decide what content to create, or figure out what topics to cover. Also... |
| copy-editing | When the user wants to edit, review, or improve existing marketing copy, or refresh outdated content. Also use when the... |
| copywriting | When the user wants to write, rewrite, or improve marketing copy for any page — including homepage, landing pages,... |
| customer-research | When the user wants to conduct, analyze, or synthesize customer research. Use when the user mentions "customer... |
| directory-submissions | When the user wants to submit their product to startup, SaaS, AI, agent, MCP, no-code, or review directories for... |
| email-sequence | When the user wants to create or optimize an email sequence, drip campaign, automated email flow, or lifecycle email... |
| form-cro | When the user wants to optimize any form that is NOT signup/registration — including lead capture forms, contact forms,... |
| free-tool-strategy | When the user wants to plan, evaluate, or build a free tool for marketing purposes — lead generation, SEO value, or... |
| image | When the user wants to create, generate, edit, or optimize images for marketing — blog heroes, social graphics, product... |
| launch-strategy | When the user wants to plan a product launch, feature announcement, or release strategy. Also use when the user... |
| lead-magnets | When the user wants to create, plan, or optimize a lead magnet for email capture or lead generation. Also use when the... |
| marketing-ideas | When the user needs marketing ideas, inspiration, or strategies for their SaaS or software product. Also use when the... |
| marketing-psychology | When the user wants to apply psychological principles, mental models, or behavioral science to marketing. Also use when... |
| onboarding-cro | When the user wants to optimize post-signup onboarding, user activation, first-run experience, or time-to-value. Also... |
| page-cro | When the user wants to optimize, improve, or increase conversions on any marketing page — including homepage, landing... |
| paid-ads | When the user wants help with paid advertising campaigns on Google Ads, Meta (Facebook/Instagram), LinkedIn, Twitter/X,... |
| paywall-upgrade-cro | When the user wants to create or optimize in-app paywalls, upgrade screens, upsell modals, or feature gates. Also use... |
| popup-cro | When the user wants to create or optimize popups, modals, overlays, slide-ins, or banners for conversion purposes. Also... |
| pricing-strategy | When the user wants help with pricing decisions, packaging, or monetization strategy. Also use when the user mentions... |
| product-marketing-context | When the user wants to create or update their product marketing context document. Also use when the user mentions... |
| programmatic-seo | When the user wants to create SEO-driven pages at scale using templates and data. Also use when the user mentions... |
| referral-program | When the user wants to create, optimize, or analyze a referral program, affiliate program, or word-of-mouth strategy.... |
| revops | When the user wants help with revenue operations, lead lifecycle management, or marketing-to-sales handoff processes.... |
| sales-enablement | When the user wants to create sales collateral, pitch decks, one-pagers, objection handling docs, or demo scripts. Also... |
| schema-markup | When the user wants to add, fix, or optimize schema markup and structured data on their site. Also use when the user... |
| seo-audit | When the user wants to audit, review, or diagnose SEO issues on their site. Also use when the user mentions "SEO... |
| signup-flow-cro | When the user wants to optimize signup, registration, account creation, or trial activation flows. Also use when the... |
| site-architecture | When the user wants to plan, map, or restructure their website's page hierarchy, navigation, URL structure, or internal... |
| social-content | When the user wants help creating, scheduling, or optimizing social media content for LinkedIn, Twitter/X, Instagram,... |
| video | When the user wants to create, generate, or produce video content using AI tools or programmatic frameworks. Also use... |
Installation
Option 1: CLI Install (Recommended)
Use npx skills to install skills directly:
# Install all skills
npx skills add coreyhaines31/marketingskills
# Install specific skills
npx skills add coreyhaines31/marketingskills --skill page-cro copywriting
# List available skills
npx skills add coreyhaines31/marketingskills --list
This automatically installs to your .agents/skills/ directory (and symlinks into .claude/skills/ for Claude Code compatibility).
Option 2: Claude Code Plugin
Install via Claude Code's built-in plugin system:
# Add the marketplace
/plugin marketplace add coreyhaines31/marketingskills
# Install all marketing skills
/plugin install marketing-skills
Option 3: Clone and Copy
Clone the entire repo and copy the skills folder:
git clone https://github.com/coreyhaines31/marketingskills.git
cp -r marketingskills/skills/* .agents/skills/
Option 4: Git Submodule
Add as a submodule for easy updates:
git submodule add https://github.com/coreyhaines31/marketingskills.git .agents/marketingskills
Then reference skills from .agents/marketingskills/skills/.
Option 5: Fork and Customize
- Fork this repository
- Customize skills for your specific needs
- Clone your fork into your projects
Option 6: SkillKit (Multi-Agent)
Use SkillKit to install skills across multiple AI agents (Claude Code, Cursor, Copilot, etc.):
# Install all skills
npx skillkit install coreyhaines31/marketingskills
# Install specific skills
npx skillkit install coreyhaines31/marketingskills --skill page-cro copywriting
# List available skills
npx skillkit install coreyhaines31/marketingskills --list
Upgrading from v1.0
Skills now use .agents/ instead of .claude/ for the product marketing context file. Move your existing context file:
mkdir -p .agents
mv .claude/product-marketing-context.md .agents/product-marketing-context.md
Skills will still check .claude/ as a fallback, so nothing breaks if you don't.
Usage
Once installed, just ask your agent to help with marketing tasks:
"Help me optimize this landing page for conversions"
→ Uses page-cro skill
"Write homepage copy for my SaaS"
→ Uses copywriting skill
"Set up GA4 tracking for signups"
→ Uses analytics-tracking skill
"Create a 5-email welcome sequence"
→ Uses email-sequence skill
You can also invoke skills directly:
/page-cro
/email-sequence
/seo-audit
Skill Categories
Conversion Optimization
- page-cro - Any marketing page
- signup-flow-cro - Registration flows
- onboarding-cro - Post-signup activation
- form-cro - Lead capture forms
- popup-cro - Modals and overlays
- paywall-upgrade-cro - In-app upgrade moments
Content & Copy
- copywriting - Marketing page copy
- copy-editing - Edit and polish existing copy
- cold-email - B2B cold outreach emails and sequences
- email-sequence - Automated email flows
- social-content - Social media content
- image - AI image generation, design tools, and optimization
SEO & Discovery
- seo-audit - Technical and on-page SEO
- ai-seo - AI search optimization (AEO, GEO, LLMO)
- programmatic-seo - Scaled page generation
- site-architecture - Page hierarchy, navigation, URL structure
- competitor-alternatives - Comparison and alternative pages
- schema-markup - Structured data
Paid & Distribution
- paid-ads - Google, Meta, LinkedIn ad campaigns
- ad-creative - Bulk ad creative generation and iteration
- social-content - Social media scheduling and strategy
Measurement & Testing
- analytics-tracking - Event tracking setup
- ab-test-setup - Experiment design
Retention
- churn-prevention - Cancel flows, save offers, dunning, payment recovery
Growth Engineering
- co-marketing - Partner identification and joint campaigns
- free-tool-strategy - Marketing tools and calculators
- referral-program - Referral and affiliate programs
Strategy & Monetization
- marketing-ideas - 140 SaaS marketing ideas
- marketing-psychology - Mental models and psychology
- launch-strategy - Product launches and announcements
- pricing-strategy - Pricing, packaging, and monetization
Sales & RevOps
- revops - Lead lifecycle, scoring, routing, pipeline management
- sales-enablement - Sales decks, one-pagers, objection docs, demo scripts
Contributing
Found a way to improve a skill? Have a new skill to suggest? PRs and issues welcome!
See CONTRIBUTING.md for guidelines on adding or improving skills.
License
MIT - Use these however you want.