Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
01 / Skill
glebis

claude-skills

Quality
9.0

This skill generates and edits images using OpenAI's GPT Image 2 model, offering superior text rendering and a "thinking mode" for complex compositions like infographics or diagrams. It's ideal for creating social media content, transforming photos into artistic styles, or any image-generation task where high-quality text in visuals is crucial.

USP

Unlike other image generation tools, GPT Image 2 boasts 99%+ text rendering accuracy and a unique "thinking mode" for complex compositions, alongside cost controls and extensive style/platform presets.

Use cases

  • Generate infographics with accurate text
  • Create social media visuals
  • Edit photos into artistic styles
  • Design diagrams and posters
  • Produce marketing collateral

Detected files (9)

  • context-builder/SKILL.md
    ---
    name: context-builder
    description: Generate interactive AI transformation context-builder prompts for consulting clients. Use when creating structured discovery session prompts that guide a company through context gathering about their business, pain points, tech stack, and AI opportunities. Produces a resumable, multi-section prompt with Express/Deep Dive modes.
    ---
    
    # Context Builder
    
    Generate interactive context-building prompts for consulting clients. These prompts are designed to be run in Claude Code -- they guide a team through structured questions using AskUserQuestion, generate output files per section, and compile everything into a reusable CLAUDE.md.
    
    ## Workflow
    
    ### Phase 1: Intake (AskUserQuestion)
    
    Ask all intake questions using AskUserQuestion with closed-list options. Gather:
    
    **Question 1: Company identifier**
    - Options: "I have a website URL", "I have a company name", "I have both"
    - Follow up to get the actual URL/name
    
    **Question 2: Who will use this prompt?**
    - Options: "Specific person (name + role)", "A team (no specific person)", "Unknown / TBD"
    - If specific person: follow up for name and role
    
    **Question 3: Primary consulting focus** (multiSelect)
    - "AI automation of current operations"
    - "Existential strategy (what survives AI)"
    - "New business models / pivots"
    - "Product development with AI"
    
    **Question 4: Industry**
    - "Marketing / Advertising"
    - "Manufacturing / Construction"
    - "SaaS / Software"
    - "Professional Services / Consulting"
    - (Other)
    
    **Question 5: Existing context in vault?**
    - "Yes, there's a call transcript"
    - "Yes, there are notes/files"
    - "No existing context"
    - If yes: ask for filename or search term to locate it
    
    **Question 6: Session language**
    - "Russian (questions in Russian, output in English)"
    - "English throughout"
    - "Other"
    
    ### Phase 2: Research (automated)
    
    Run these research steps in parallel where possible:
    
    1. **Web research**: Use WebSearch and WebFetch (via Task agent) to gather:
       - What the company does, products/services
       - Target market, company size, geography
       - Tech stack, partnerships
       - Recent news, funding, team info
       - Competitive landscape
    
    2. **Vault search**: Search the Obsidian vault for:
       - Transcripts mentioning the company name (Grep in vault root and Daily/)
       - People files for contacts at the company (People/ folder)
       - Any existing notes or research
    
    3. **Transcript analysis** (if found): Extract from call transcripts:
       - Team members and their roles
       - Current AI tool usage
       - Pain points and concerns mentioned
       - Specific processes described
       - Questions raised by the team
    
    ### Phase 3: Section Selection (AskUserQuestion)
    
    Present a curated set of sections based on the consulting focus. Use AskUserQuestion with multiSelect to let the user pick which sections to include.
    
    #### Section Library
    
    Draw from `references/section-library.md` for the full section catalog. Default section sets by focus:
    
    **AI Automation focus:**
    1. Process Inventory, 2. Pain Points & Waste, 3. Current Tech Stack, 4. AI Opportunity Mapping, 5. People & Org, 6. Data Reality Check, 7. Quick Wins
    
    **Existential Strategy focus:**
    1. Revenue & Service Map, 2. The Existential Question, 3. Client Value Chain, 4. New Business Models, 5. Data & Knowledge Assets, 6. People & Org, 7. Quick Wins & Pilots
    
    **Full Assessment (both):**
    All 10 sections from the library.
    
    After section selection, ask:
    
    **Express mode grouping**: Present a suggested grouping of selected sections into 4 Express mega-sections. Let user confirm or adjust.
    
    ### Phase 4: Generation
    
    Generate two files:
    
    #### 1. The Context-Builder Prompt
    
    Save to: `Claude-Drafts/{company-slug}-context-prompt.md`
    
    **Structure** (follow the template in `references/prompt-template.md`):
    
    ```
    ---
    created_date: '[[YYYYMMDD]]'
    type: draft
    topic: consulting, AI transformation, {industry}
    for: {contact person or team name}
    ---
    
    # AI Transformation Context Builder -- {Company Name}
    
    ## About {Company}
      [Generated from research -- company description, size, market, positioning]
    
    ## Current State
      **What's working:** [from research + transcript]
      **The gap:** [from research + transcript]
      [If existential concerns found: **Existential context:**]
    
    ## Mode Selection
      [Express vs Deep Dive with section descriptions]
    
    ## How This Works
      [Standard interactive session instructions]
    
    ## Session Resumability
      [Standard resumability logic]
    
    ## Interactive Flow
      [Selected sections with tailored questions]
    
    ## Output Files
      [One file per section + final CLAUDE.md]
    
    ## Relevant Frameworks
      [Selected from references/frameworks.md based on focus]
    ```
    
    #### 2. Instruction File (optional)
    
    If the prompt will be sent to someone external, generate a short instruction file:
    `Claude-Drafts/{company-slug}-context-instructions.md`
    
    Containing:
    - What this file is and how to use it
    - Prerequisites (Claude Code or similar)
    - The two modes explained simply
    - What they'll get on output
    - Privacy note (they can share as much or as little as they want)
    
    ### Phase 5: Delivery (AskUserQuestion)
    
    **Question: What to do with the generated files?**
    - "Save to vault only"
    - "Save and send via Telegram"
    - "Save and let me review first"
    
    If Telegram: ask for the recipient handle/name, then send using the telegram skill (intro message + file).
    
    ## Key Principles
    
    - **Maximize closed-list questions**: Every AskUserQuestion should have concrete options. Minimize free-text input.
    - **Research before asking**: Don't ask the user things that can be found via web search or vault search.
    - **Tailor sections to context**: If the transcript reveals specific concerns (e.g., existential fears, specific tech stack), customize the section questions to reference those specifics.
    - **Bake in discovered context**: The generated prompt's "About" and "Current State" sections should be rich with researched details so the person running the prompt gets a warm start.
    - **Language awareness**: If session language is Russian, all AskUserQuestion interactions during prompt execution should be in Russian, but output files in English.
    
    ## Resources
    
    ### references/
    - `section-library.md` -- Full catalog of available sections with question templates
    - `prompt-template.md` -- Structural template for the generated prompt
    - `frameworks.md` -- Consulting frameworks to selectively include
    
  • chrome-history/SKILL.md
    ---
    name: chrome-history
    description: Query Chrome browsing history with natural language. Filter by date range, article type, keywords, and specific sites.
    ---
    
    # Chrome History Query Skill
    
    Search and filter your Chrome browsing history using natural language queries.
    
    ## What It Does
    
    1. Parses natural language queries to understand date ranges and filters
    2. Queries Chrome's SQLite history database
    3. Filters out noise (social media, email, redirects)
    4. Groups results by type (reading, research, tools, events)
    5. Returns formatted markdown with links
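    Step 2 above can be sketched roughly as follows. This is an illustrative reimplementation, not the skill's actual code (which lives in `chrome_history_query.py`); the `urls` table and its `url`, `title`, and `last_visit_time` columns match Chrome's History schema, and Chrome stores times as microseconds since 1601-01-01, so subtracting 11644473600 seconds converts to the Unix epoch:

    ```bash
    # Sketch only: query a *copy* of the History DB, since Chrome keeps
    # the live file locked while running. Function name is illustrative.
    query_history() {
      local db="$1" days="${2:-1}"
      sqlite3 -separator ' | ' "$db" "
        SELECT datetime(last_visit_time / 1000000 - 11644473600, 'unixepoch'),
               title, url
        FROM urls
        WHERE last_visit_time / 1000000 - 11644473600
              >= CAST(strftime('%s', 'now', '-${days} day') AS INTEGER)
        ORDER BY last_visit_time DESC;"
    }
    ```

    Usage would be along the lines of `cp ".../Chrome/Default/History" /tmp/h && query_history /tmp/h 7`.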
    
    ## Supported Queries
    
    ### Date Range
    - "yesterday" → previous day only
    - "today" → today only
    - "last week" → past 7 days
    - "last month" → past 30 days
    - "last 2 weeks" → past 14 days
    
    ### Content Filters
    - "articles I read" → reading cluster (news, blogs, essays)
    - "scientific articles" → research cluster (papers, docs)
    - "code/research" → GitHub, Stack Overflow, docs
    
    ### Keyword Filtering
    - "articles about AI" → finds pages mentioning AI
    - "scientific articles about climate" → finds research pages mentioning climate
    
    ### Site-Specific
    - "reddit threads" → reddit.com only
    - "on medium" → medium.com only
    - "twitter posts" → twitter.com only
    
    ## Example Queries
    
    ```
    "articles I read yesterday"
    "articles about AI I read yesterday"
    "scientific articles for the last week"
    "research about machine learning this week"
    "reddit threads last month"
    "code repos I visited yesterday"
    "on medium this week"
    ```
    
    ## Usage
    
    Run directly with a query:
    ```bash
    python3 ~/.claude/skills/chrome-history/chrome_history_query.py "articles I read yesterday"
    ```
    
    Or integrate into Claude Code when user asks:
    - "Show me articles I read yesterday"
    - "What scientific papers did I look at last week?"
    - "Show reddit threads I visited last month"
    - "Articles about AI from yesterday?"
    
    ## Configuration
    
    - **Chrome History**: `~/Library/Application Support/Google/Chrome/Default/History`
    - **Vault Location**: `/Users/glebkalinin/Brains/brain`
    - **Filtered Sites**: Social media, email, Google redirect wrappers
    - **Clustering**: Automatic by domain type (reading, research, tools, events)
    
    ## Exclusions
    
    Automatically filters out:
    - Social media: Facebook, Instagram, Twitter, TikTok, Reddit, LinkedIn
    - Email: Gmail, Outlook
    - Shopping: Amazon, eBay
    - Google redirects: google.com/url wrappers
    - Utility sites: FreeFeed, YouTube
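    A sketch of the exclusion check, derived from the list above (the function name and exact glob patterns are illustrative assumptions, not the skill's code):

    ```bash
    # Returns 0 (filtered) for excluded domains, 1 otherwise.
    is_filtered() {
      case "$1" in
        *facebook.com*|*instagram.com*|*twitter.com*|*tiktok.com*|*reddit.com*|*linkedin.com*) return 0 ;;  # social media
        *mail.google.com*|*outlook.*)   return 0 ;;  # email
        *amazon.*|*ebay.*)              return 0 ;;  # shopping
        *google.com/url*)               return 0 ;;  # redirect wrappers
        *)                              return 1 ;;
      esac
    }
    ```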
    
    ## Output Format
    
    Results grouped by content type with timestamps:
    
    ```
    ## Chrome History: articles about AI yesterday
    
    *Found 5 items*
    
    ### Reading (3)
    - 14:22 [The more that people use AI...](url)
    - 16:38 [AI makes you smarter but...](url)
    
    ### Research (2)
    - 11:23 [GitHub: AI project](url)
    ```
    
  • agency-docs-updater/SKILL.md
    ---
    name: agency-docs-updater
    description: End-to-end pipeline for publishing Claude Code lab meetings. Accepts optional args: date (YYYYMMDD, "yesterday", "today") and lab number (e.g. "04"). Examples: "yesterday 04", "20260420 05", "04" (today, lab 04), "" (today, auto-detect lab).
    ---
    
    # Agency Docs Updater
    
    Execute ALL steps automatically in sequence. Only pause if a step fails and cannot be recovered. Read `references/learnings.md` before starting for known pitfalls.
    
    **Configuration**: paths are read from `.env` in the skill root (see `.env.example`). Defaults work for the standard setup. Key env vars: `VAULT_DIR`, `DOCS_SITE_DIR`, `YOUTUBE_UPLOADER_DIR`, `PRESENTATIONS_DIR`, `SKILLS_REPO_DIR`, `SKILLS_LOCAL_DIR`, `ZOOM_CREDENTIALS_DIR`, `GITHUB_REPO`, `SITE_DOMAIN`.
    
    **Dependencies** (verify these exist before running):
    - [zoom](https://github.com/glebis/claude-skills/tree/main/zoom) — Zoom recording download (`scripts/zoom_meetings.py`)
    - [fathom](https://github.com/glebis/claude-skills/tree/main/fathom) — Fathom video fallback (`scripts/download_video.py`)
    - [nano-banana](https://github.com/glebis/claude-skills/tree/main/nano-banana) — thumbnail overlay generation (`scripts/generate_image.sh`)
    - [calendar-sync](~/.claude/skills/calendar-sync) — local-only, calendar event sync (`sync.sh`)
    - [youtube-uploader](https://github.com/glebis/youtube-uploader) — video processing, upload, and YouTube API auth
    
    ## Step 0: Parse Arguments & Load Config
    
    Load `.env` from skill root. Then split `args` by whitespace:
    - 8-digit token (`YYYYMMDD`) → `DATE`
    - "yesterday" → `DATE = $(date -v-1d +%Y%m%d)`
    - "today" or missing → `DATE = $(date +%Y%m%d)`
    - 2-digit token (`NN`) or `lab-NN` → `LAB_FILTER`
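    The token rules above can be sketched as follows (illustrative, not the skill's actual code; `date -v` is the BSD/macOS form used elsewhere in this skill, with a GNU fallback added here):

    ```bash
    # Sketch of Step 0 token parsing. Variable names follow the skill.
    parse_args() {
      DATE="" LAB_FILTER=""
      for tok in "$@"; do
        case "$tok" in
          [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]) DATE="$tok" ;;  # YYYYMMDD
          yesterday) DATE=$(date -v-1d +%Y%m%d 2>/dev/null || date -d yesterday +%Y%m%d) ;;
          today)     DATE=$(date +%Y%m%d) ;;
          lab-[0-9][0-9]) LAB_FILTER="${tok#lab-}" ;;
          [0-9][0-9])     LAB_FILTER="$tok" ;;
        esac
      done
      if [ -z "$DATE" ]; then DATE=$(date +%Y%m%d); fi  # missing date -> today
    }

    parse_args "20260420" "04"
    echo "$DATE $LAB_FILTER"   # -> 20260420 04
    ```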
    
    Expand env vars for paths used in subsequent steps:
    ```bash
    VAULT_DIR="${VAULT_DIR:-$HOME/Brains/brain}"
    DOCS_SITE_DIR="${DOCS_SITE_DIR:-$HOME/Sites/agency-docs}"
    YOUTUBE_UPLOADER_DIR="${YOUTUBE_UPLOADER_DIR:-$HOME/ai_projects/youtube-uploader}"
    SKILLS_REPO_DIR="${SKILLS_REPO_DIR:-$HOME/ai_projects/claude-skills}"
    SKILLS_LOCAL_DIR="${SKILLS_LOCAL_DIR:-$HOME/.claude/skills}"
    ZOOM_CREDENTIALS_DIR="${ZOOM_CREDENTIALS_DIR:-$HOME/.zoom_credentials}"
    PRESENTATIONS_DIR="${PRESENTATIONS_DIR:-$HOME/ai_projects/claude-code-lab}"
    GITHUB_REPO="${GITHUB_REPO:-glebis/agency-docs}"
    SITE_DOMAIN="${SITE_DOMAIN:-agency-lab.glebkalinin.com}"
    ```
    
    ## Step 1: Find Fathom Transcript
    
    If `LAB_FILTER` is set: `${VAULT_DIR}/${DATE}-claude-code-lab-${LAB_FILTER}.md`
    If empty: glob `${VAULT_DIR}/${DATE}-claude-code-lab-*.md` (pick most recent by mtime).
    
    If missing: run `${SKILLS_LOCAL_DIR}/calendar-sync/sync.sh`, re-check, stop if still missing.
    
    Extract from YAML frontmatter and store:
    - `FATHOM_FILE`, `SHARE_URL`, `MEETING_TITLE`, `DATE`, `LAB_NUMBER`
    - `VIDEO_NAME` = `${DATE}-claude-code-lab-${LAB_NUMBER}`
    - `TRANSCRIPT_LANG` = auto-detect from first ~50 lines (Cyrillic ratio > 0.3 → `ru`, else `en`)
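    A minimal sketch of the language auto-detect -- only the ~50-line sample and the 0.3 Cyrillic-ratio threshold come from this step; the helper name and the exact letter filter are assumptions:

    ```bash
    # Illustrative reimplementation of the TRANSCRIPT_LANG heuristic.
    detect_lang() {
      head -n 50 "$1" | python3 -c '
    import sys
    letters = [c for c in sys.stdin.read() if c.isalpha()]
    cyr = sum(1 for c in letters if "\u0400" <= c <= "\u04FF")
    print("ru" if letters and cyr / len(letters) > 0.3 else "en")
    '
    }

    TRANSCRIPT_LANG=$(detect_lang "${FATHOM_FILE:-/dev/null}")
    ```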
    
    **Determine `MEETING_NUMBER`**: check existing MDX files in `${DOCS_SITE_DIR}/content/docs/claude-code-internal-${LAB_NUMBER}/meetings/` for a placeholder with today's date. If found, use that number. Otherwise, check file content sizes to find the next empty slot. Store as zero-padded two-digit string (e.g. `04`). This variable is used in Steps 3b, 4b, 5, 6, and 8.
    
    ## Step 2: Download Video
    
    Skip if `${VAULT_DIR}/${VIDEO_NAME}.mp4` exists and is > 1MB.
    
    **Note**: Zoom recordings may take ~15 minutes to process after a meeting ends. If the Zoom API returns no recordings, wait and retry before falling back to Fathom.
    
    **Primary — Zoom:**
    ```bash
    python3 ${SKILLS_REPO_DIR}/zoom/scripts/zoom_meetings.py recordings \
      --start ${DATE:0:4}-${DATE:4:2}-${DATE:6:2} \
      --end $(date -j -v+1d -f %Y%m%d ${DATE} +%Y-%m-%d) \
      --show-downloads 2>&1
    ```
    Find the MP4 URL, then:
    ```bash
    TOK=$(python3 -c "import json,pathlib; print(json.load(open(pathlib.Path('${ZOOM_CREDENTIALS_DIR}')/'oauth_token.json'))['access_token'])")
    curl -L -H "Authorization: Bearer ${TOK}" -o ${VAULT_DIR}/${VIDEO_NAME}.mp4 "${MP4_DOWNLOAD_URL}"
    ```
    
    **Fallback — Fathom** (if no Zoom recording):
    ```bash
    cd ${VAULT_DIR} && python3 ${SKILLS_LOCAL_DIR}/fathom/scripts/download_video.py \
      "${SHARE_URL}" --output-name "${VIDEO_NAME}"
    ```
    
    ## Step 3: Upload to YouTube
    
    ```bash
    cd ${YOUTUBE_UPLOADER_DIR} && \
    python3 process_video.py \
      --video ${VAULT_DIR}/${VIDEO_NAME}.mp4 \
      --fathom-transcript ${FATHOM_FILE} \
      --title "${MEETING_TITLE}" \
      --upload
    ```
    
    Run with `run_in_background: true` (10-30 min). On failure: `--resume-from upload`.
    
    Extract `YOUTUBE_URL` from stdout (`✓ YouTube video: ...`) or `processed/metadata/${VIDEO_NAME}.json`.
    Extract `VIDEO_ID` from the URL (the part after `?v=` or last path segment).
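    The extraction rule can be sketched as (the helper name is illustrative):

    ```bash
    # Part after "?v=" if present (dropping trailing &params),
    # otherwise the last path segment (youtu.be/ID form).
    extract_video_id() {
      local id="$1"
      case "$id" in
        *"?v="*) id="${id#*\?v=}"; id="${id%%&*}" ;;
        *)       id="${id##*/}" ;;
      esac
      printf '%s\n' "$id"
    }

    extract_video_id "https://www.youtube.com/watch?v=dQw4w9WgXcQ"   # -> dQw4w9WgXcQ
    ```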
    
    **Start Step 4 in parallel** — summary doesn't depend on YouTube URL.
    
    ### Step 3b: Lab-Style Thumbnail (REQUIRED)
    
    **Always run this step** — it replaces the generic thumbnail from `process_video.py` with the branded lab template. The generic thumbnail is NOT acceptable for publishing.
    
    **Prerequisites**: `VIDEO_ID` must be known (wait for Step 3 to complete if needed).
    
    Follow `references/thumbnail-guide.md` for the full workflow:
    1. Generate Nano Banana overlay image (topic-specific prompt from the guide's prompt patterns)
    2. Read/inspect raw image to confirm background color, then recolor lines to orange (#e85d04)
    3. Write a **temporary** HTML file (e.g. `/tmp/lab-meeting-${MEETING_NUMBER}.html`) based on `${YOUTUBE_UPLOADER_DIR}/templates/images/lab-meeting.html` — update meeting number, topic hero text, bullet descriptions, date. **Do not edit the original template in-place.**
    4. Render with Playwright at 1280×720 → `${YOUTUBE_UPLOADER_DIR}/processed/thumbnails/${VIDEO_NAME}.jpg`
    5. Read/inspect the rendered thumbnail to verify layout before uploading
    6. Upload to YouTube: use `VIDEO_ID` extracted from Step 3
    
    Do NOT skip this step or rely on the `process_video.py` thumbnail.
    
    ## Step 4: Generate Fact-Checked Summary
    
    Read `${FATHOM_FILE}`. Generate a structured summary **in `${TRANSCRIPT_LANG}`**:
    - `##` section headers, bullet points, code examples where relevant
    - Technical terms in English (MCP, Skills, Claude Code, etc.)
    - **Exclude personal scheduling details**
    - Sanitize for MDX: escape `<`, `>`, and bare `{` characters that would break MDX compilation
    
    Fact-check Claude Code feature claims using `claude-code-guide` subagent (if available; skip fact-checking if the agent is not accessible). Save corrected summary to scratchpad as `summary.md`.
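    The sanitization rule above might be sketched as follows. One assumption baked in: fenced code blocks are left untouched, since MDX does not try to parse JSX inside them; the function name is illustrative:

    ```bash
    # Escape <, >, and bare { outside fenced code blocks, where MDX
    # would otherwise try to compile them as JSX.
    sanitize_mdx() {
      awk '
        /^```/ { fence = !fence; print; next }   # toggle on fence markers
        fence  { print; next }                   # leave code untouched
        {
          gsub(/</, "\\&lt;")
          gsub(/>/, "\\&gt;")
          gsub(/\{/, "\\\\{")
          print
        }
      ' "$1"
    }
    ```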
    
    ## Step 4b: Update YouTube Metadata
    
    **After both Step 3 and Step 4 complete.** `VIDEO_ID`, `MEETING_NUMBER`, and `LAB_NUMBER` must all be determined before this step. Read `references/youtube-api.md` for description format and API snippets.
    
    Generate YouTube description from the summary. Use the language-appropriate template:
    
    - **If `TRANSCRIPT_LANG=en`**: English labels ("In this video:", "Course materials and session notes:")
    - **If `TRANSCRIPT_LANG=ru`**: Russian labels ("В этом видео:", "Материалы и конспект занятия:")
    
    Do NOT mix languages in a single description.
    
    Meeting page URL: `https://${SITE_DOMAIN}/docs/claude-code-internal-${LAB_NUMBER}/meetings/${MEETING_NUMBER}`
    
    Update title, description, tags via YouTube API, then add video to playlist "Claude Code Lab ${LAB_NUMBER}" (auto-created if it does not exist).
    
    ## Step 5: Generate MDX
    
    ```bash
    python3 ${SKILLS_LOCAL_DIR}/agency-docs-updater/scripts/update_meeting_doc.py \
      ${FATHOM_FILE} "${YOUTUBE_URL}" ${SCRATCHPAD}/summary.md
    ```
    
    **Before running**: check if a placeholder MDX already exists for today's date (`grep -l` in `meetings/`). If so, use `-n ${MEETING_NUMBER} --update` to target it.
    
    **After running**:
    1. Strip appended Marp content (everything after summary's closing `---` before `<!-- _class: lead -->`) — MDX breaks on HTML comments (`<!-- -->`), unescaped `<`, and bare `{` characters
    2. Check for presentation file: look in `${PRESENTATIONS_DIR}/presentations/lab-${LAB_NUMBER}/` and `${PRESENTATIONS_DIR}/lesson-generator/` for files matching `${DATE}`. If found, copy to `${DOCS_SITE_DIR}/public/${DATE}-claude-code-lab-${LAB_NUMBER}.html` and add link in MDX
    3. Replace frontmatter placeholders (`[Название встречи]`, `[Краткое описание встречи]`, `[Дата встречи]`)
    4. If `TRANSCRIPT_LANG=en`, rewrite the MDX entirely with English labels — the script defaults to Russian and the translation fallback produces broken mixed-language output
    5. Verify: `cd ${DOCS_SITE_DIR} && npm run build 2>&1 | tail -5`
    
    ## Step 6: Commit and Push
    
    Only stage pipeline files — never `git add .`:
    ```bash
    cd ${DOCS_SITE_DIR}
    git fetch origin main
    BEHIND=$(git rev-list --count HEAD..origin/main)
    if [ "$BEHIND" -gt 0 ]; then
      git stash push -m "agency-docs-updater: temp stash"
      git pull --rebase origin main
      git stash pop || true
    fi
    git add content/docs/claude-code-internal-${LAB_NUMBER}/meetings/${MEETING_NUMBER}.mdx
    # Only stage presentation HTML if it was copied
    [ -f public/${DATE}-claude-code-lab-${LAB_NUMBER}.html ] && git add public/${DATE}-claude-code-lab-${LAB_NUMBER}.html
    git commit -m "Add Lab ${LAB_NUMBER} Meeting ${MEETING_NUMBER}"
    git push
    ```
    
    Store `COMMIT_HASH=$(git rev-parse HEAD)` for Step 7.
    
    ## Step 7: Wait for Vercel Deploy
    
    ```bash
    TIMEOUT=300; ELAPSED=0
    until [ "$(gh api repos/${GITHUB_REPO}/commits/${COMMIT_HASH}/status --jq '.state' 2>/dev/null || echo 'pending')" != "pending" ]; do
      sleep 15; ELAPSED=$((ELAPSED+15))
      [ "$ELAPSED" -ge "$TIMEOUT" ] && echo "Deploy timeout after ${TIMEOUT}s" && break
    done
    DEPLOY_STATE=$(gh api repos/${GITHUB_REPO}/commits/${COMMIT_HASH}/status --jq '.state')
    echo "Deploy state: ${DEPLOY_STATE}"
    ```
    
    Run with `run_in_background: true`. If state is `failure` or `error`: check Vercel logs (`vercel logs`), fix locally, re-push, restart this step.
    
    ## Step 8: Verify in Browser
    
    Open `https://${SITE_DOMAIN}/docs/claude-code-internal-${LAB_NUMBER}/meetings/${MEETING_NUMBER}` in a browser (via chrome automation tools or manually). Verify YouTube embed is visible. If not: check VIDEO_ID, wait for YouTube processing, or re-upload.
    
    ## Pipeline Report
    
    After completion, report: Fathom path, video path, YouTube URL, MDX path, commit hash, deploy status, embed verification.
    
  • codex/SKILL.md
    ---
    name: codex
    description: Use when the user asks to run Codex CLI (codex exec, codex resume) or references OpenAI Codex for code analysis, refactoring, or automated editing
    ---
    
    # Codex Skill Guide
    
    ## Running a Task
    1. **Do NOT specify a model by default.** The Codex CLI is configured with a ChatGPT account, and explicit model flags (`-m gpt-5-codex`, `-m gpt-5`, `-m o4-mini`) all fail with "not supported when using Codex with a ChatGPT account." Omitting `-m` lets Codex use its default model, which works. Only add `-m` if the user explicitly requests a specific model.
    2. Select the sandbox mode required for the task; default to `--sandbox read-only` unless edits or network access are necessary.
    3. Assemble the command with the appropriate options:
       - `--sandbox <read-only|workspace-write|danger-full-access>`
       - `--full-auto`
       - `-C, --cd <DIR>`
       - `--skip-git-repo-check`
       - `-m, --model <MODEL>` (only if user explicitly requests)
       - `--config model_reasoning_effort="<high|medium|low>"` (only if user explicitly requests)
    4. Always use `--skip-git-repo-check`.
    5. When continuing a previous session, pipe the new prompt via stdin: `echo "your prompt here" | codex exec --skip-git-repo-check resume --last 2>/dev/null`. Do not pass configuration flags when resuming unless the user explicitly requests them; any flags must be inserted between `exec` and `resume`.
    6. **IMPORTANT**: By default, append `2>/dev/null` to all `codex exec` commands to suppress thinking tokens (stderr). Only show stderr if the user explicitly requests to see thinking tokens or if debugging is needed.
    7. Run the command, capture stdout/stderr (filtered as appropriate), and summarize the outcome for the user.
    8. **After Codex completes**, inform the user: "You can resume this Codex session at any time by saying 'codex resume' or asking me to continue with additional analysis or changes."
    
    ### Quick Reference
    | Use case | Sandbox mode | Key flags |
    | --- | --- | --- |
    | Read-only review or analysis | `read-only` | `--sandbox read-only 2>/dev/null` |
    | Apply local edits | `workspace-write` | `--sandbox workspace-write --full-auto 2>/dev/null` |
    | Permit network or broad access | `danger-full-access` | `--sandbox danger-full-access --full-auto 2>/dev/null` |
    | Resume recent session | Inherited from original | `echo "prompt" \| codex exec --skip-git-repo-check resume --last 2>/dev/null` (no flags allowed) |
    | Run from another directory | Match task needs | `-C <DIR>` plus other flags `2>/dev/null` |
    
    ## Following Up
    - After every `codex` command, immediately use `AskUserQuestion` to confirm next steps, collect clarifications, or decide whether to resume with `codex exec resume --last`.
    - When resuming, pipe the new prompt via stdin: `echo "new prompt" | codex exec resume --last 2>/dev/null`. The resumed session automatically uses the same model, reasoning effort, and sandbox mode from the original session.
    - Restate the chosen model, reasoning effort, and sandbox mode when proposing follow-up actions.
    
    ## Error Handling
    - Stop and report failures whenever `codex --version` or a `codex exec` command exits non-zero; request direction before retrying.
    - Before you use high-impact flags (`--full-auto`, `--sandbox danger-full-access`, `--skip-git-repo-check`) ask the user for permission using AskUserQuestion unless it was already given.
    - When output includes warnings or partial results, summarize them and ask how to adjust using `AskUserQuestion`.
    
  • balanced/SKILL.md
    ---
    name: balanced
    description: Constructive, evidence-based dialogue mode that avoids sycophancy. This skill should be used when the user wants balanced multi-perspective analysis, critical feedback, or rigorous challenge of their ideas. Triggers on "/balanced" or requests for honest/critical/balanced feedback. Supports passive, interactive, tldr, steelman, and decision modes.
    ---
    
    # Balanced Dialog
    
    Engage in constructive, evidence-based dialogue. Multiple output modes available.
    
    ## Onboard Mode
    
    Trigger: `/balanced onboard` or `/balanced setup`. Walk the user through all available modes and let them pick a default.
    
    ### Flow
    
    1. Display this overview using AskUserQuestion:
    
    ```
    Balanced Dialog — available modes:
    
    1. FULL (default)  — 4-move structured analysis
    2. INTERACTIVE (i) — Socratic Q&A, one move at a time
    3. TLDR            — 3-5 line insight box, action-oriented
    4. STEELMAN        — strongest argument + strongest counter
    5. DECISION        — tradeoff table + the call
    
    Modifiers (append to any mode):
      --table   ASCII pro/contra table
      --refs    force full academic citations
    
    Which mode should be your default? (1-5, or press Enter for FULL)
    ```
    
    2. Save the user's choice to the skill config file at `~/.claude/skills/balanced/config.json`:
       ```json
       {"default_mode": "full", "default_modifiers": []}
       ```
    
    3. Then ask via AskUserQuestion:
       ```
       Default modifiers? (comma-separated, or Enter for none)
       Options: --table, --refs
       ```
    
    4. Update config.json with the chosen modifiers.
    
    5. Confirm:
       ```
       ★ Balanced configured ──────────────────────────
       Default: [mode] [modifiers]
       Usage: /balanced <your statement>
       Override anytime: /balanced tldr --table <statement>
       ─────────────────────────────────────────────────
       ```
    
    ### Config Loading
    
    On every invocation, check if `~/.claude/skills/balanced/config.json` exists. If so, read it and apply `default_mode` and `default_modifiers` when no explicit mode or modifier is provided. Explicit arguments always override config.
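    The precedence rule (explicit argument > config default > built-in "full") can be sketched as follows; the helper name is illustrative, and only the config path and keys come from this skill:

    ```bash
    resolve_mode() {
      local explicit="$1"
      local cfg="$HOME/.claude/skills/balanced/config.json"
      if [ -n "$explicit" ]; then
        printf '%s\n' "$explicit"          # explicit argument always wins
      elif [ -f "$cfg" ]; then
        python3 -c 'import json, sys; print(json.load(open(sys.argv[1])).get("default_mode", "full"))' "$cfg"
      else
        echo full                          # built-in default
      fi
    }

    resolve_mode tldr   # -> tldr
    ```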
    
    ## Mode Selection
    
    - **Passive mode** (default): `/balanced <statement>`. Full 4-move analysis in a single structured pass.
    - **Interactive mode**: `/balanced i <statement>`. Socratic Q&A using AskUserQuestion, one move at a time.
    - **TL;DR mode**: `/balanced tldr <statement>`. 3-5 lines max. One key fact, one challenge, one action. Output in insight box format:
      ```
      ★ Balanced ─────────────────────────────────────
      [key fact]. [challenge to assumption].
      → Action: [concrete next step].
      ─────────────────────────────────────────────────
      ```
    - **Steelman mode**: `/balanced steelman <statement>`. Only moves 1+2. Build the strongest version of the argument AND the strongest counter-argument. No action steps. For preparing to defend a position.
    - **Decision mode**: `/balanced decision <statement>`. Only move 4 (refinement) with an explicit tradeoff table. For when analysis is done and the call needs to be made.
    
    ## Output Modifiers
    
    Append these flags to any mode:
    
    - **`--table`**: Output pro/contra analysis as an ASCII table. Apply whenever the analysis has clear opposing factors. Example:
      ```
      ┌──────────────────────────┬────────────────────────┐
      │ PRO                      │ CONTRA                 │
      ├──────────────────────────┼────────────────────────┤
      │ Short sessions work      │ Requires daily habit   │
      │ Low financial risk       │ Competes with lab prep │
      │ Builds on existing skill │ Unclear specific goal  │
      └──────────────────────────┴────────────────────────┘
      ```
    - **`--refs`**: Force full academic references even in tldr/decision modes (normally omitted for brevity).
    
    ## Four Moves
    
    ### 1 | Surface Merits
    - Acknowledge well-supported points or creative angles.
    - State why they are non-trivial. No generic praise.
    - **Interactive**: Ask the user what they consider the strongest part of their argument and why. Then offer the analysis.
    
    ### 2 | Rigorous Challenge
    - Question assumptions and potential biases.
    - Test logic for gaps, fallacies, or over-generalization.
    - Offer counter-evidence or rival explanations.
    - **Interactive**: Present the strongest counter-argument found. Use AskUserQuestion to ask the user how they would respond. Then evaluate their response.
    
    ### 3 | Expansion
    - Suggest alternative framings, methods, or resources.
    - When helpful, pose clarifying questions rather than assume.
    - **Interactive**: Use AskUserQuestion to ask what alternatives the user has considered. Then suggest framings they may have missed.
    
    ### 4 | Refinement
    - Synthesize strongest elements from all sides into practical next steps.
    - Flag residual uncertainty and cite sources.
    - **Interactive**: Present a draft synthesis. Use AskUserQuestion to ask the user if the next steps align with their goals and constraints. Adjust based on their response.
    
    ## Interactive Mode Flow
    
    When in interactive mode:
    1. Begin by restating the user's position in one sentence. Use AskUserQuestion to confirm accuracy.
    2. Walk through each move sequentially. Each move gets its own AskUserQuestion exchange.
    3. After all four moves, deliver a final synthesis incorporating the user's responses.
    4. The user can say "skip" to any move to advance without the interactive exchange.
    
    ## Meta-Rules
    
    - No flattery. No needless pessimism.
    - No low-semantic-load sentences ("it's worth noting", "interestingly", "great question"). No opinion statements.
    - Maintain neutral, analytical tone. Quantify confidence when possible (e.g., "~70% confident based on available evidence").
    - Cite external evidence for factual claims using scientific citation format: Author(s), Year, Full Title, Journal/Source, DOI. When referencing a DOI, perform a web search to validate it exists.
    - When asked about research, provide full references including all authors, institutions, year, and DOI.
    - Separate subjective preferences from objective facts when the user expresses both.
    - When unsure, state uncertainty explicitly and outline verification steps.
    
  • brand-agency/SKILL.md (skill, 6647 bytes)
    ---
    name: brand-agency
    description: Applies Agency brand colors and typography to artifacts including presentations, SVG graphics, documents, and web interfaces. This skill should be used when brand colors, visual formatting, neobrutalism style, or Agency design standards apply. Keywords - branding, corporate identity, visual identity, styling, brand colors, typography, visual formatting, visual design, neobrutalism.
    ---
    
    # Agency Brand Styling
    
    ## Overview
    
    This skill provides Agency's official brand identity and style resources. The style is based on a neobrutalist aesthetic with bold colors, hard shadows, and strong typography.
    
    ## Brand Guidelines
    
    ### Colors
    
    **Main Colors:**
    
    - Background Light: `#ffffff` - Light backgrounds
    - Foreground Dark: `#000000` - Primary text and dark elements
    - Muted: `#e5e5e5` - Subtle backgrounds, secondary elements
    
    **Primary Palette:**
    
    - Primary (Orange): `#e85d04` - Main accent, CTAs, highlights
    - Secondary (Yellow): `#ffd60a` - Secondary accent, warnings, attention
    - Accent (Blue): `#3a86ff` - Links, interactive elements, info
    
    **Chart/Extended Colors:**
    
    - Chart Green: `#38b000` - Success states, positive indicators
    - Chart Red: `#d62828` - Error states, destructive actions
    
    ### Typography
    
    **Font Stack:**
    
    - **Headings**: Geist ExtraBold (weight 800), fallback: Arial
    - **Body Text**: EB Garamond, fallback: Georgia
    - **Monospace/Code**: Geist Mono, fallback: Courier New
    
    **Google Fonts Import:**
    ```css
    @import url('https://fonts.googleapis.com/css2?family=EB+Garamond:ital,wght@0,400;0,500;0,600;1,400&family=Geist:wght@800&family=Geist+Mono:wght@400;500&display=swap');
    ```
    
    **CSS Variables:**
    ```css
    :root {
      --font-body: 'EB Garamond', Georgia, serif;
      --font-heading: 'Geist', Arial, sans-serif;
      --font-mono: 'Geist Mono', 'Courier New', monospace;
    }
    ```
    
    ### Neobrutalism Style
    
    **Shadows:**
    - Hard shadow offset: `4px 4px 0px 0px #000000`
    - No blur (stdDeviation: 0)
    - CSS: `box-shadow: 4px 4px 0px 0px #000000;`
    - SVG filter: `<feDropShadow dx="4" dy="4" stdDeviation="0" flood-color="#000000"/>`
    
    **Borders:**
    - Width: 3px
    - Color: `#000000`
    - Style: solid
    - Border radius: 0 (no rounded corners)
    
    **Key Principles:**
    - High contrast between elements
    - Bold, saturated colors
    - No gradients (flat colors only)
    - Strong black outlines
    - Offset hard shadows
    - Zero border radius
    
    ## Application Guidelines
    
    ### SVG Graphics
    
    To create an SVG in the Agency brand style:
    
    ```xml
    <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 400">
      <defs>
        <filter id="shadow" x="-20%" y="-20%" width="150%" height="150%">
          <feDropShadow dx="4" dy="4" stdDeviation="0" flood-color="#000000" flood-opacity="1"/>
        </filter>
      </defs>
    
      <circle cx="200" cy="200" r="80"
        fill="#e85d04"
        stroke="#000000"
        stroke-width="3"
        filter="url(#shadow)"/>
    </svg>
    ```
    
    ### Presentations (Marp/PowerPoint)
    
    **Slide backgrounds by type:**
    - Title slides: Primary Orange `#e85d04`
    - Content slides: Light `#ffffff` or Muted `#e5e5e5`
    - Accent slides: Secondary Yellow `#ffd60a`, Accent Blue `#3a86ff`
    - Dark slides: Foreground `#000000`
    
    **Text colors:**
    - On light backgrounds: `#000000`
    - On dark/colored backgrounds: `#ffffff`
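
The background/text-color rules above can be captured in a tiny helper. This is an illustrative sketch only; the `text_color` function and `LIGHT_BACKGROUNDS` set are not part of the skill, but the hex values come straight from the palette:

```python
# Light backgrounds (from the brand palette) take black text;
# every other slide background takes white text.
LIGHT_BACKGROUNDS = {"#ffffff", "#e5e5e5"}

def text_color(bg_hex: str) -> str:
    """Return the brand text color for a given slide background."""
    return "#000000" if bg_hex.lower() in LIGHT_BACKGROUNDS else "#ffffff"
```

So a title slide on Primary Orange (`#e85d04`) gets white text, while a content slide on Muted (`#e5e5e5`) gets black.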
    
    ### Web/HTML
    
    ```css
    :root {
      /* Colors */
      --color-background: #ffffff;
      --color-foreground: #000000;
      --color-primary: #e85d04;
      --color-secondary: #ffd60a;
      --color-accent: #3a86ff;
      --color-success: #38b000;
      --color-error: #d62828;
      --color-muted: #e5e5e5;
    
      /* Typography */
      --font-body: 'EB Garamond', Georgia, serif;
      --font-heading: 'Geist', Arial, sans-serif;
      --font-mono: 'Geist Mono', 'Courier New', monospace;
    
      /* Shadows */
      --shadow: 4px 4px 0px 0px #000000;
      --shadow-sm: 2px 2px 0px 0px #000000;
    }
    
    /* Headings */
    h1, h2, h3, h4, h5, h6 {
      font-family: var(--font-heading);
      font-weight: 800;
    }
    
    /* Body */
    body {
      font-family: var(--font-body);
      color: var(--color-foreground);
      background: var(--color-background);
    }
    
    /* Buttons */
    .btn {
      background: var(--color-primary);
      color: white;
      border: 3px solid var(--color-foreground);
      box-shadow: var(--shadow);
      border-radius: 0;
      font-family: var(--font-heading);
      font-weight: 800;
    }
    
    /* Cards */
    .card {
      background: var(--color-background);
      border: 3px solid var(--color-foreground);
      box-shadow: var(--shadow);
      border-radius: 0;
    }
    
    /* Code */
    code, pre {
      font-family: var(--font-mono);
      background: var(--color-foreground);
      color: white;
      border: 3px solid var(--color-foreground);
    }
    ```
    
    ## Color Usage Quick Reference
    
    | Context | Color | Hex |
    |---------|-------|-----|
    | Primary action | Orange | `#e85d04` |
    | Secondary action | Yellow | `#ffd60a` |
    | Links/Info | Blue | `#3a86ff` |
    | Success | Green | `#38b000` |
    | Error/Danger | Red | `#d62828` |
    | Text (light bg) | Black | `#000000` |
    | Text (dark bg) | White | `#ffffff` |
    | Muted/Disabled | Gray | `#e5e5e5` |
    
    ## Assets
    
    **Logo:** `assets/logo.svg` - Agency logo in neobrutalism style (terminal window with code symbols and geometric shapes)
    
    ## Social Media Templates
    
    ASCII-art style HTML templates for social media using Geist Mono font. Render to PNG using Playwright.
    
    ### Available Templates
    
    | Template | Size | Platform |
    |----------|------|----------|
    | `instagram/story-announcement` | 1080x1920 | IG Story |
    | `instagram/story-quote` | 1080x1920 | IG Story |
    | `instagram/post-title` | 1080x1350 | IG Post |
    | `instagram/post-tips` | 1080x1350 | IG Post |
    | `instagram/post-event` | 1080x1350 | IG Post |
    | `youtube/thumbnail` | 1280x720 | YT Thumbnail |
    | `youtube/shorts-cover` | 1080x1920 | YT Shorts |
    | `social/cover-banner` | 1584x396 | LinkedIn/FB |
    | `social/tiktok` | 1080x1920 | TikTok |
    | `social/twitter-post` | 1200x675 | X/Twitter |
    | `social/pinterest-pin` | 1000x1500 | Pinterest |
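
If you need the dimensions programmatically (for example, to validate a rendered PNG), the table above can be mirrored as a plain mapping. This is a hypothetical convenience snippet; it is not shipped with the skill, which presumably reads sizes from the template files themselves:

```python
# Width x height per template, mirroring the table above.
TEMPLATE_SIZES = {
    "instagram/story-announcement": (1080, 1920),
    "instagram/story-quote": (1080, 1920),
    "instagram/post-title": (1080, 1350),
    "instagram/post-tips": (1080, 1350),
    "instagram/post-event": (1080, 1350),
    "youtube/thumbnail": (1280, 720),
    "youtube/shorts-cover": (1080, 1920),
    "social/cover-banner": (1584, 396),
    "social/tiktok": (1080, 1920),
    "social/twitter-post": (1200, 675),
    "social/pinterest-pin": (1000, 1500),
}

def is_portrait(template: str) -> bool:
    """True for vertical formats (stories, Shorts, TikTok, pins)."""
    w, h = TEMPLATE_SIZES[template]
    return h > w
```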
    
    ### Usage
    
    ```bash
    # Render all templates
    node scripts/render-templates.js
    
    # Render specific template
    node scripts/render-templates.js --template instagram/story-announcement
    
    # Custom output path
    node scripts/render-templates.js -t youtube/thumbnail -o my-thumbnail.png
    
    # List available templates
    node scripts/render-templates.js --list
    ```
    
    ### ASCII Style Elements
    
    Templates use ASCII box-drawing characters for decoration:
    
    ```
    Frames:   ┌─────┐  ╔═════╗  ┏━━━━━┓
              │     │  ║     ║  ┃     ┃
              └─────┘  ╚═════╝  ┗━━━━━┛
    
    Lines:    ─ │ ═ ║ ━ ┃
    
    Arrows:   → ← ↑ ↓ ▶ ◀ ▲ ▼
    
    Shapes:   ● ○ ■ □ ▲ △ ★ ☆ ◆ ◇
    
    Blocks:   █ ▓ ▒ ░
    ```
    
    ### Template Files
    
    Located in: `assets/templates/`
    
  • browsing-history/skill.md (skill, 5688 bytes)
    ---
    name: browsing-history
    description: Query browsing history from all synced devices (iPhone, Mac, iPad, desktop). Supports natural language queries for filtering by date, device, domain, and keywords. Uses LLM classification for content categories. Can output to stdout or save as markdown/JSON to Obsidian vault.
    ---
    
    # Browsing History Skill
    
    Query browsing history from all synced devices with natural language.
    
    ## When to Use
    
    Use this skill when the user asks about:
    - Articles/pages they read (yesterday, last week, etc.)
    - Browsing history from specific devices (iPhone, iPad, desktop)
    - Finding pages by topic, domain, or keyword
    - Exporting browsing history to files
    - Grouping history by category or domain
    
    ## Database
    
    Location: `~/data/browsing.db`
    
    Synced devices: iPhone, iPad, Mac, desktop, Android
    
    ### Timestamps
    - **visit_time**: Actual visit timestamp from Chrome (100% coverage for all devices)
    - **first_seen**: Import timestamp (fallback when visit_time unavailable)
    
    The skill uses `COALESCE(visit_time, first_seen)` for accurate time-based queries.
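
The `COALESCE` fallback can be demonstrated with a minimal in-memory table. The schema here (a `visits` table with `visit_time`/`first_seen` columns) is a sketch for illustration; the real layout of `~/data/browsing.db` may differ:

```python
import sqlite3

# Minimal illustration of the COALESCE(visit_time, first_seen) fallback.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (url TEXT, visit_time TEXT, first_seen TEXT)")
conn.executemany(
    "INSERT INTO visits VALUES (?, ?, ?)",
    [
        ("https://example.com/a", "2025-11-27 14:32", "2025-11-27 14:45"),
        ("https://example.com/b", None, "2025-11-27 16:00"),  # no visit_time
    ],
)
rows = conn.execute(
    "SELECT url, COALESCE(visit_time, first_seen) AS ts FROM visits ORDER BY ts"
).fetchall()
# visit_time wins when present; first_seen fills the gap otherwise.
```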
    
    ## Usage
    
    ```bash
    python3 ~/.claude/skills/browsing-history/browsing_query.py "<query>" [options]
    ```
    
    ### Options
    
    | Option | Description | Example |
    |--------|-------------|---------|
    | `--device` | Filter by device | `--device iPhone` |
    | `--days` | Number of days back | `--days 7` |
    | `--domain` | Filter by domain | `--domain medium.com` |
    | `--limit` | Max results | `--limit 50` |
    | `--format` | Output format | `--format json` |
    | `--output` | Save to file | `--output history.md` |
    | `--group-by` | Group results | `--group-by domain` or `--group-by category` |
    | `--categorize` | Use LLM to categorize | `--categorize` |
    
    ### Example Queries
    
    **Basic queries:**
    ```bash
    # Yesterday's browsing history
    python3 ~/.claude/skills/browsing-history/browsing_query.py "yesterday"
    
    # Articles from iPhone yesterday
    python3 ~/.claude/skills/browsing-history/browsing_query.py "yesterday" --device iPhone
    
    # Last week's history grouped by domain
    python3 ~/.claude/skills/browsing-history/browsing_query.py "last week" --group-by domain
    
    # Find articles about economics
    python3 ~/.claude/skills/browsing-history/browsing_query.py "economics" --days 7
    ```
    
    **Save to Obsidian:**
    ```bash
    # Save yesterday's history as markdown
    python3 ~/.claude/skills/browsing-history/browsing_query.py "yesterday" \
      --output ~/Research/vault/browsing-2025-11-27.md
    
    # Save with LLM categorization
    python3 ~/.claude/skills/browsing-history/browsing_query.py "yesterday" \
      --categorize --group-by category \
      --output ~/Research/vault/browsing-categorized.md
    
    # Save as JSON
    python3 ~/.claude/skills/browsing-history/browsing_query.py "last week" \
      --format json --output ~/Research/vault/history.json
    ```
    
    **Device-specific:**
    ```bash
    # iPhone tabs
    python3 ~/.claude/skills/browsing-history/browsing_query.py "yesterday" --device iPhone
    
    # Desktop history
    python3 ~/.claude/skills/browsing-history/browsing_query.py "today" --device desktop
    
    # All mobile devices
    python3 ~/.claude/skills/browsing-history/browsing_query.py "yesterday" --device mobile
    ```
    
    **Search and filter:**
    ```bash
    # Sites starting with "joy"
    python3 ~/.claude/skills/browsing-history/browsing_query.py "joy" --days 7
    
    # Medium.com articles
    python3 ~/.claude/skills/browsing-history/browsing_query.py "last month" --domain medium.com
    ```
    
    ## Natural Language Patterns
    
    The script recognizes:
    
    | Pattern | Interpretation |
    |---------|----------------|
    | `yesterday` | Previous day |
    | `today` | Current day |
    | `last week` | Past 7 days |
    | `last month` | Past 30 days |
    | `last N days` | Past N days |
    
    Keywords are searched in URL and title.
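
A rough sketch of how such patterns could map to a day count follows; the actual parsing lives in `browsing_query.py` and may behave differently:

```python
import re

def days_back(query):
    """Map the natural-language date patterns above to a day count.
    Returns None when the query is treated as a keyword search instead."""
    q = query.strip().lower()
    fixed = {"today": 0, "yesterday": 1, "last week": 7, "last month": 30}
    if q in fixed:
        return fixed[q]
    m = re.fullmatch(r"last (\d+) days", q)
    if m:
        return int(m.group(1))
    return None
```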
    
    ## Output Formats
    
    ### Markdown (default)
    ```markdown
    # Browsing History: yesterday
    
    *47 unique URLs from 2025-11-27*
    
    ## 2025-11-27
    
    - [Article Title](https://example.com/article) - iPhone - 14:32
    - [Another Page](https://another.com/page) - desktop - 16:45
    ```
    
    ### Markdown with categories (--categorize --group-by category)
    ```markdown
    # Browsing History: yesterday
    
    ## News & Current Events
    - [Breaking: Something Happened](https://news.com/...) - iPhone
    
    ## Technology & Programming
    - [How to Build APIs](https://dev.to/...) - desktop
    
    ## Research & Learning
    - [Academic Paper on AI](https://arxiv.org/...) - Mac
    ```
    
    ### JSON (--format json)
    ```json
    {
      "query": "yesterday",
      "date_range": "2025-11-27",
      "total": 47,
      "results": [
        {"url": "...", "title": "...", "device": "iPhone", "time": "14:32", "category": "News"}
      ]
    }
    ```
    
    ## Workflow Examples
    
    **User: "Show me articles I read yesterday on my phone"**
    ```bash
    python3 ~/.claude/skills/browsing-history/browsing_query.py "yesterday" --device iPhone
    ```
    
    **User: "Save my browsing history from last week to Obsidian, grouped by category"**
    ```bash
    python3 ~/.claude/skills/browsing-history/browsing_query.py "last week" \
      --categorize --group-by category \
      --output ~/Research/vault/browsing-week.md
    ```
    
    **User: "Help me find that article about economics I read on my computer"**
    ```bash
    python3 ~/.claude/skills/browsing-history/browsing_query.py "economics" \
      --device desktop --days 7
    ```
    
    **User: "Sites that start with 'joy' from last week"**
    ```bash
    python3 ~/.claude/skills/browsing-history/browsing_query.py "joy" --days 7
    ```
    
    ## Notes
    
    - URLs are deduplicated per day (same URL on same day = one entry)
    - **visit_time**: Actual visit timestamps from Chrome history
      - Desktop: 100% coverage (from Chrome SQLite `last_visit_time`)
      - Mobile: 100% coverage (extracted from Chrome Sync LevelDB)
    - **first_seen**: Fallback import timestamp (~15min resolution)
    - LLM categorization uses Claude 3.5 Haiku via `llm` CLI
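
The per-day deduplication rule above can be sketched as follows. The record shape (`url` plus an ISO `time` string) is assumed purely for illustration:

```python
def dedupe(entries):
    """Keep one entry per (url, day), preserving first-seen order --
    mirrors the 'same URL on same day = one entry' rule."""
    seen, out = set(), []
    for e in entries:
        key = (e["url"], e["time"][:10])  # date part of the ISO timestamp
        if key not in seen:
            seen.add(key)
            out.append(e)
    return out
```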
    
  • cognitive-toolkit/skill.md (skill, 4478 bytes)
    ---
    name: cognitive-toolkit
    description: Evidence-based CBT and DBT intervention skills — guided thought records, opposite action, DEAR MAN roleplay, crisis skills with optional HRV biofeedback. Configurable therapeutic pushback. Triggers on "/cbt", "/dbt", "/thought-record", "/record", "/opposite", "/opposite-action", "I need to work through something", "help me with a thought", "cognitive distortion", "I'm spiraling", "can we do a thought record".
    ---
    
    # Cognitive Toolkit
    
    Interactive CBT and DBT guided exercises with configurable therapeutic pushback and optional health data integration.
    
    ## Usage
    
    ```
    /cbt                        # start with check-in → technique recommendation
    /cbt thought-record         # jump directly to thought record
    /cbt opposite-action        # jump directly to opposite action
    /cbt --pushback firm        # override pushback level for this session
    /cbt --no-health            # skip health data pull even if available
    ```
    
    ## How it works
    
    1. Read `references/thought-record.md` — thought record protocol (ABC model, cognitive distortion taxonomy, reframe scaffold)
    2. Read `references/opposite-action.md` — opposite action + DEAR MAN + TIPP crisis skills
    3. Read `references/pushback-config.md` — pushback levels, triggers, and mid-session override commands
    4. Read `references/health-integration.md` — HRV/sleep pull, biofeedback interpretation, skip logic
    
    ## Session Flow
    
    **Without technique argument** (full flow):
    1. Check-in — mood (0–10), brief situation summary
    2. Recommend — match presenting issue to technique based on check-in
    3. Load technique — read the relevant reference file
    4. Protocol — run the full guided exercise
    5. Close — summary, insight, one takeaway
    6. Save — write session to vault output
    
    **With technique argument** (direct jump):
    1. Brief mood check (0–10, one line)
    2. Load technique — read the relevant reference file
    3. Protocol — run the full guided exercise
    4. Close — summary, insight, one takeaway
    5. Save — write session to vault output
    
    ## Available Techniques
    
    | Command | Technique | Reference | Wave |
    |---|---|---|---|
    | `thought-record` | Thought Record (ABC + reframe) | `references/thought-record.md` | 1 |
    | `opposite-action` | Opposite Action (DBT emotion regulation) | `references/opposite-action.md` | 1 |
    | `dear-man` | DEAR MAN assertiveness roleplay | `references/opposite-action.md` | 2 |
    | `tipp` | TIPP crisis/distress tolerance | `references/opposite-action.md` | 2 |
    | `chain` | Chain Analysis (behavior chain) | `references/thought-record.md` | 3 |
    | `activation` | Behavioral Activation (depression) | `references/thought-record.md` | 3 |
    | `wise-mind` | Wise Mind (emotion vs. reason) | `references/opposite-action.md` | 3 |
    
    ## Pushback
    
    See `references/pushback-config.md` for full configuration.
    
    - Defaults loaded from `references/pushback-config.md` (default: `gentle`)
    - Per-session override: `/cbt --pushback [gentle|moderate|firm]`
    - Mid-session commands: `softer`, `harder`, `no pushback` adjust level in real time
    - Pushback is Socratic, not confrontational — "What evidence supports that?" not "That's wrong"
    
    ## Health Data
    
    See `references/health-integration.md` for full integration logic.
    
    - Optional: pulled from health MCP if available at session start
    - Surfaces HRV, sleep quality, resting HR as contextual framing only
    - Skip silently if health data unavailable or if `--no-health` flag passed
    - Never gate a session on health data — it's context, not gatekeeper
    
    ## Telegram Entry Points
    
    | Command | Maps to |
    |---|---|
    | `/record` | `thought-record` |
    | `/opposite` | `opposite-action` |
    | `/dear` | `dear-man` |
    | `/tipp` | `tipp` |
    | `/checkin` | full check-in flow |
    | `/wise` | `wise-mind` |
    | `/settings` | adjust pushback level and health toggle |
    
    ## Anti-patterns
    
    - NOT a diagnostic tool — never assess, label, or diagnose
    - NOT a therapist replacement — this is a practice tool for between sessions
    - Frame suggestions as "research suggests" not "you should"
    - If user expresses emergency, suicidal ideation, or acute crisis: stop the technique immediately, acknowledge, and provide crisis resources (e.g., Telefonseelsorge 0800 111 0 111 in Germany, international directory at findahelpline.com)
    
    ## Vault Output
    
    Sessions saved to: `~/Brains/brain/cognitive-toolkit/sessions/YYYYMMDD-[technique]-NN.md`
    
    Format: frontmatter with date, technique, mood-before, mood-after + full session transcript.
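
A minimal sketch of building that path, assuming the `YYYYMMDD-[technique]-NN` pattern above (the helper name itself is hypothetical):

```python
from datetime import date

def session_path(d: date, technique: str, n: int) -> str:
    """Build a vault session path like
    ~/Brains/brain/cognitive-toolkit/sessions/YYYYMMDD-[technique]-NN.md"""
    return (
        "~/Brains/brain/cognitive-toolkit/sessions/"
        f"{d:%Y%m%d}-{technique}-{n:02d}.md"
    )
```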
    
  • .claude-plugin/marketplace.json (marketplace, 10238 bytes)
    {
      "name": "glebis-skills",
      "owner": {
        "name": "Gleb Kalinin",
        "email": "glebis@gmail.com"
      },
      "metadata": {
        "description": "50+ Claude Code skills \u2014 meetings, research, publishing, development, and more."
      },
      "plugins": [
        {
          "name": "agency-docs-updater",
          "source": "./agency-docs-updater",
          "description": "End-to-end pipeline for publishing Claude Code lab meetings. Accepts optional args: date (YYYYMMDD, \"yesterday\", \"today\""
        },
        {
          "name": "balanced",
          "source": "./balanced",
          "description": "Constructive, evidence-based dialogue mode that avoids sycophancy. This skill should be used when the user wants balance"
        },
        {
          "name": "brand-agency",
          "source": "./brand-agency",
          "description": "Applies Agency brand colors and typography to artifacts including presentations, SVG graphics, documents, and web interf"
        },
        {
          "name": "browsing-history",
          "source": "./browsing-history",
          "description": "Query browsing history from all synced devices (iPhone, Mac, iPad, desktop). Supports natural language queries for filte"
        },
        {
          "name": "chrome-history",
          "source": "./chrome-history",
          "description": "Query Chrome browsing history with natural language. Filter by date range, article type, keywords, and specific sites."
        },
        {
          "name": "context-builder",
          "source": "./context-builder",
          "description": "Generate interactive AI transformation context-builder prompts for consulting clients. Use when creating structured disc"
        },
        {
          "name": "daydream",
          "source": "./daydream",
          "description": "daydream"
        },
        {
          "name": "decision-toolkit",
          "source": "./decision-toolkit",
          "description": "Generate structured decision-making tools \u2014 step-by-step guides, bias checkers, scenario explorers, and interactive dash"
        },
        {
          "name": "deep-research",
          "source": "./deep-research",
          "description": "This skill should be used when conducting comprehensive research on any topic using the OpenAI Deep Research API. It aut"
        },
        {
          "name": "doctorg",
          "source": "./doctorg",
          "description": "Evidence-based health research using tiered trusted sources with GRADE-inspired evidence ratings. Integrates Apple Healt"
        },
        {
          "name": "elevenlabs-tts",
          "source": "./elevenlabs-tts",
          "description": "This skill converts text to high-quality audio files using ElevenLabs API. Use this skill when users request text-to-spe"
        },
        {
          "name": "fathom",
          "source": "./fathom",
          "description": "Fetch meetings, transcripts, summaries, and action items from Fathom API. Use when user asks to get Fathom recordings, s"
        },
        {
          "name": "firecrawl-research",
          "source": "./firecrawl-research",
          "description": "This skill should be used when the user requests to research topics using FireCrawl, enrich notes with web sources, sear"
        },
        {
          "name": "github-gist",
          "source": "./github-gist",
          "description": "Publish files or Obsidian notes as GitHub Gists. Use when user wants to share code/notes publicly, create quick shareabl"
        },
        {
          "name": "gmail",
          "source": "./gmail",
          "description": "This skill should be used when searching, fetching, or downloading emails from Gmail. Use for queries like \"search Gmail"
        },
        {
          "name": "google-image-search",
          "source": "./google-image-search",
          "description": "Search and download images via Google Custom Search API with LLM-powered selection. This skill should be used when findi"
        },
        {
          "name": "gpt-image-2",
          "source": "./gpt-image-2",
          "description": "Generate and edit images using OpenAI's GPT Image 2 API. Supports style presets (including text-heavy ones like infograp"
        },
        {
          "name": "granola",
          "source": "./granola",
          "description": "This skill should be used when importing, listing, or exporting Granola meeting recordings and transcripts. Queries Gran"
        },
        {
          "name": "gws",
          "source": "./gws",
          "description": "This skill should be used when interacting with Google Workspace services via the gws CLI \u2014 Gmail (search, triage, send,"
        },
        {
          "name": "health-data",
          "source": "./health-data",
          "description": "Query Apple Health SQLite database for vitals, activity, sleep, and workouts. Supports Markdown, JSON, and FHIR R4 outpu"
        },
        {
          "name": "insight-extractor",
          "source": "./insight-extractor",
          "description": "insight-extractor"
        },
        {
          "name": "jtbd",
          "source": "./jtbd",
          "description": "Terminal-first JTBD engine for founders and product people. Interview fast, kill jargon, capture real switching forces ("
        },
        {
          "name": "lab-retro",
          "source": "./lab-retro",
          "description": "Final retrospective and self-assessment for participants of Claude Code Lab. Runs four sequential interactive parts \u2014 pr"
        },
        {
          "name": "linear",
          "source": "./linear",
          "description": "Manage Linear issues, projects, and workflows via CLI. This skill should be used when the user wants to create, list, up"
        },
        {
          "name": "llm-cli",
          "source": "./llm-cli",
          "description": "Process textual and multimedia files with various LLM providers using the llm CLI. Supports both non-interactive and int"
        },
        {
          "name": "meeting-processor",
          "source": "./meeting-processor",
          "description": "This skill should be used when processing meeting transcripts to auto-detect meeting type (leadgen, partnership, coachin"
        },
        {
          "name": "nano-banana",
          "source": "./nano-banana",
          "description": "Generate and edit images using Google's Gemini image generation models (Nano Banana family). Supports style presets, pla"
        },
        {
          "name": "pdf-generation",
          "source": "./pdf-generation",
          "description": "Professional PDF generation from markdown using Pandoc with Eisvogel template and EB Garamond fonts. Use when converting"
        },
        {
          "name": "presentation-generator",
          "source": "./presentation-generator",
          "description": "Generate interactive HTML presentations with neobrutalism styling, ASCII art decorations, and Agency brand colors. Outpu"
        },
        {
          "name": "recording",
          "source": "./recording",
          "description": "Demo/recording mode that redacts personally identifiable and sensitive information from Claude Code's outputs. Use when "
        },
        {
          "name": "retrospective",
          "source": "./retrospective",
          "description": "retrospective"
        },
        {
          "name": "session-finder",
          "source": "./session-finder",
          "description": "Index and search Claude Code sessions using semantic embeddings (Gemini). Find past sessions by topic, relaunch the best"
        },
        {
          "name": "session-search",
          "source": "./session-search",
          "description": "This skill should be used when searching Claude Code session transcripts with semantic understanding. Triggers on querie"
        },
        {
          "name": "sketch",
          "source": "./sketch",
          "description": "sketch"
        },
        {
          "name": "skill-studio",
          "source": "./skill-studio",
          "description": "Interview-driven automation design tool. This skill should be used when the user wants to design a new skill, agent, aut"
        },
        {
          "name": "tdd",
          "source": "./tdd",
          "description": "This skill should be used when the user wants to implement features or fix bugs using test-driven development. Enforces "
        },
        {
          "name": "telegram",
          "source": "./telegram",
          "description": "This skill should be used when fetching, searching, downloading, sending, editing, or publishing messages on Telegram. U"
        },
        {
          "name": "telegram-post",
          "source": "./telegram-post",
          "description": "telegram-post"
        },
        {
          "name": "telegram-telethon",
          "source": "./telegram-telethon",
          "description": "This skill should be used for comprehensive Telegram automation via Telethon API. Use for sending/receiving messages, mo"
        },
        {
          "name": "temple-generator",
          "source": "./temple-generator",
          "description": "Generate a 3D interactive knowledge map (Inner Temple) from any Obsidian vault or document set. Supports multi-scale abs"
        },
        {
          "name": "thinking-patterns",
          "source": "./thinking-patterns",
          "description": "thinking-patterns"
        },
        {
          "name": "timebuzzer-led",
          "source": "./timebuzzer-led",
          "description": "Control timeBuzzer hardware LED via MIDI — set color, effects (pulse, strobe, rainbow, fade), and semantic status signals"
        },
        {
          "name": "transcript-analyzer",
          "source": "./transcript-analyzer",
          "description": "This skill analyzes meeting transcripts to extract decisions, action items, opinions, questions, and terminology using C"
        },
        {
          "name": "tufte-report",
          "source": "./tufte-report",
          "description": "Create Tufte-inspired data reports and infographic dashboards as standalone HTML files. Uses EB Garamond for text, Monas"
        },
        {
          "name": "vision-bench",
          "source": "./vision-bench",
          "description": "Score and compare images using vision LLMs as judges. YAML-defined criteria presets for 11 use cases (text-to-image, pho"
        },
        {
          "name": "wispr-analytics",
          "source": "./wispr-analytics",
          "description": "This skill should be used when analyzing Wispr Flow voice dictation history for self-reflection, work patterns, mental h"
        },
        {
          "name": "youtube-transcript",
          "source": "./youtube-transcript",
          "description": "\"Extract YouTube video transcripts with metadata and save as Markdown to Obsidian vault. Use this skill when the user re"
        },
        {
          "name": "zoom",
          "source": "./zoom",
          "description": "Create and manage Zoom meetings and access cloud recordings via the Zoom API. Use for queries like \"create a Zoom meetin"
        }
      ]
    }

README

Claude Skills

A collection of skills for Claude Code that extend AI capabilities with specialized workflows, tools, and domain expertise.

Intro from author

Gleb is a Berlin-based technologist, AI educator, solopreneur, and artist. He teaches in-depth agentic skills and workflows, productivity, and values-based project management supported by AI tools. Join his Claude Code labs and community (coming soon!).

📦 Available Skills

Cognitive Toolkit (CBT/DBT) ⭐ NEW

Evidence-based CBT and DBT intervention skills — guided thought records, opposite action, DEAR MAN roleplay, crisis skills with HRV biofeedback. Configurable therapeutic pushback. Works standalone or via Telegram. Standalone repo →

GPT Image 2 (OpenAI Image Generation)

Generate and edit images using OpenAI's GPT Image 2 model — the first image model with built-in reasoning ("thinking mode"). Mirrors the Nano Banana architecture but targets OpenAI's API with superior text rendering, thinking mode for complex compositions, and cost controls.

Features:

  • 🎨 Text-to-image generation with 99%+ text rendering accuracy
  • 🧠 Thinking mode (off/low/medium/high) for complex compositions — infographics, diagrams, posters
  • 🖼️ 14 style presets: 8 visual (editorial, blueprint, ink, risograph, wireframe, constellation, brutalist, grain) + 6 text-heavy (infographic, slide, diagram, poster, menu, manga)
  • 📐 8 platform presets: YouTube, slides, blog, X/Twitter, Instagram square, story, Pinterest
  • ✏️ Image editing via multipart upload (transform photos into preset styles)
  • 🔄 Variant generation (up to 10 natively) with contact sheet assembly
  • 💰 Draft mode (--draft): 512x512 low quality at ~$0.02/image (10× cheaper for iteration)
  • ⚠️ Cost confirmation: prompts before spending >$0.50, skip with -y
  • 💵 Cost estimation (--estimate): preview cost before generating
  • 🌐 OpenRouter support (--provider openrouter) for unified billing
  • 🔐 SOPS + age encrypted API key management
  • 📊 JSONL history tracking + metadata JSON sidecars
  • 🔄 Re-roll last prompt (again) and history browser

Quick Start:

# Copy to skills directory
cp -r gpt-image-2 ~/.claude/skills/

# First-time setup
scripts/gpt_image_2.py init

# Simple generation
scripts/gpt_image_2.py "a minimalist rocket illustration" ./rocket.png

# With style preset
scripts/gpt_image_2.py --preset editorial "neural networks" ./nn.png

# Edit a photo into a style
scripts/gpt_image_2.py --edit photo.png --platform square "transform into constellation star map style" ./styled.png

# Draft mode for cheap iteration
scripts/gpt_image_2.py --draft --preset infographic "AI adoption trends" ./draft.png

# Cost estimate for a batch
scripts/gpt_image_2.py --estimate --n 10 --thinking high "carousel slides"

Depends on: OpenAI API key, Python 3 + PyYAML, ImageMagick (optional, for platform resize), SOPS + age (optional, for encrypted keys)

Use when: Generating images with readable text (infographics, slides, posters, carousels), editing photos into artistic styles, creating social media content, or any image generation where text rendering quality matters.
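The draft-mode economics above can be sketched numerically. This is a back-of-the-envelope model using only the numbers quoted in this README (draft at ~$0.02/image, full quality roughly 10x that, confirmation above $0.50); real figures come from `--estimate`.

```python
# Back-of-the-envelope cost model from the numbers above. The real figures
# come from `gpt_image_2.py --estimate`; these constants are assumptions.
DRAFT_COST = 0.02            # ~$0.02/image in --draft mode (per README)
FULL_COST = DRAFT_COST * 10  # "10x cheaper for iteration" implies ~$0.20

def estimate_batch(n_images, draft=False, threshold=0.50):
    """Return (total_cost, needs_confirmation) for a batch."""
    per_image = DRAFT_COST if draft else FULL_COST
    total = n_images * per_image
    return total, total > threshold  # above $0.50 prompts unless -y

full_total, full_confirm = estimate_batch(10)
draft_total, draft_confirm = estimate_batch(10, draft=True)
```

At these assumed rates, a 10-image batch costs ~$2.00 at full quality (triggering the confirmation prompt) but only ~$0.20 in draft mode.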


TDD (Test-Driven Development)

Multi-agent TDD orchestration with architecturally enforced context isolation. Uses Claude Code's Task tool to spawn separate subagents for test writing and implementation -- the Test Writer never sees implementation code, and the Implementer never sees the specification.

Features:

  • Multi-agent context isolation: Test Writer, Implementer, and Refactorer run as separate Task subagents with strict information boundaries
  • Strict RED -> GREEN -> REFACTOR phase enforcement
  • --auto mode: run all slices without pausing, stop only on unrecoverable errors
  • Inside-out vertical slicing by architectural layer (domain -> domain-service -> application -> infrastructure)
  • Layer-specific test constraints and dependency rules per slice
  • layer_map path validation: rejects Implementer writes to wrong-layer directories
  • Post-RED test lint: blocks mocking libraries in domain/domain-service tests
  • Full-repo import scan: catches dependency violations in untouched files
  • Port interface rule: consumer defines the contract (Dependency Inversion)
  • Retry loop: up to 5 fresh Implementer attempts with previous-attempt context (no accumulated history)
  • Regression auto-fix: detects and repairs broken tests after implementation (3-attempt limit)
  • Greenfield project support: handles empty codebases with no existing tests
  • run_tests.sh: universal test runner wrapping 7 frameworks into structured JSON with timeout support
  • extract_api.sh: public API surface extractor (signatures only, no bodies) for 7 languages
  • Implementer always returns complete file content (no ambiguous partial patches)
  • Failure recovery table covering 11 error scenarios with concrete recovery actions
  • 18 documented anti-patterns with prevention guidance (incl. service locator, Active Record bleed)
  • Session state via .tdd-state.json with --resume support
  • 7 frameworks: Jest, Vitest, pytest, Go test, cargo test, RSpec, PHPUnit
  • Bug-fix TDD: reproduce-first workflow

Architecture:

ORCHESTRATOR (main Claude context)
├─ Phase 0: Setup (detect framework, extract API, create state)
├─ Phase 1: Decompose into vertical slices -> user approves
├─ FOR EACH SLICE:
│   ├─ Phase 2 (RED):     Task(Test Writer)  <- spec + API only
│   ├─ Phase 3 (GREEN):   Task(Implementer)  <- failing test + error only
│   └─ Phase 4 (REFACTOR): Task(Refactorer)  <- all code + green results
└─ Summary
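The retry loop listed in the features (up to 5 fresh Implementer attempts, carrying only the previous attempt's context) can be sketched as follows. This is an illustrative model of the policy, not the skill's actual orchestration code; the `implement` callable and result shape are assumptions.

```python
# Sketch of the retry policy: each attempt spawns a fresh Implementer
# context, carrying only the previous attempt's failure summary rather
# than the accumulated history of all attempts.
def run_with_retries(implement, max_attempts=5):
    previous_failure = None  # context from the last attempt only
    for attempt in range(1, max_attempts + 1):
        result = implement(previous_failure)
        if result["passed"]:
            return attempt, result
        previous_failure = result["failure"]  # replace, never accumulate
    raise RuntimeError(f"GREEN not reached after {max_attempts} attempts")

# Fake Implementer that succeeds once it sees the prior failure message.
def fake_implementer(prev):
    if prev is None:
        return {"passed": False, "failure": "TypeError in cart.total()"}
    return {"passed": True, "failure": None}

attempt, result = run_with_retries(fake_implementer)
```

Replacing (rather than appending) the failure context keeps each attempt's prompt small and avoids anchoring the Implementer on its own earlier mistakes.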

Quick Start:

# Copy to skills directory
cp -r tdd ~/.claude/skills/

# Interactive mode (pauses at each RED checkpoint)
/tdd "add user authentication with JWT tokens"

# Autonomous mode (runs all slices, stops only on errors)
/tdd --auto "add user authentication with JWT tokens"

# Resume a paused session
/tdd --resume

# Bug fix
/tdd "fix: cart total doesn't include tax"


Use when: Implementing features or fixing bugs where you want disciplined test-first development. Use --auto for maximum autonomy. The multi-agent architecture is especially valuable when single-context TDD produces tests that mirror implementation details.


GWS (Google Workspace CLI)

Comprehensive reference skill for the gws CLI — Gmail, Calendar, Drive, Sheets, Docs, Tasks, Chat, People, Meet, and cross-service workflows from Claude Code.

Features:

  • Quick-reference table for all 15 helper commands across 7 services (+triage, +send, +agenda, +insert, +upload, +read, +append, +write, +send for Chat, workflow helpers)
  • Raw-API recipes for: Gmail (messages, labels, threads, drafts, filters, batchModify, archive/trash), Calendar (events, Google Meet conferencing via conferenceDataVersion, recurring events, attendees with sendUpdates, freebusy), Drive (upload, download, search, share, folders), Sheets (values read/update/clear, batchUpdate), Docs (batchUpdate), Tasks, Chat (spaces, threaded/card messages), People, Meet (spaces)
  • Cross-service workflow helpers: standup report, meeting prep, email-to-task, weekly digest, file-announce
  • Schema introspection guide (gws schema) for discovering any API method's parameters
  • Global flag reference: --params, --json, --upload, --output, --format, --page-all, --page-limit, --dry-run
  • OAuth setup, scopes, and gmail.settings.basic manual-OAuth workaround
  • Write-command confirmation policy

Quick Start:

# Copy to skills directory
cp -r gws ~/.claude/skills/

# Then use naturally:
# "check my unread email"
# "search Gmail for Amazon S3"
# "show today's calendar"
# "create a calendar event today 18:00 AGENCY Meetup"
# "create a meeting with Google Meet link tomorrow 14:00"
# "read Sheet1!A1:D10 from spreadsheet SID"
# "upload report.pdf to Drive"
# "post to Chat space spaces/XYZ: ship today"
# "send an email to alice@example.com"

Depends on: gws (npm install -g @googleworkspace/cli)

Use when: Interacting with any Google Workspace service from Claude Code — email triage, sending email, calendar management (including Meet links), spreadsheet read/write, file uploads, Chat posts, contact lookup, or cross-service workflows.


NotebookLM ⭐ NEW

Full CLI and Python API wrapper for Google NotebookLM. Lets you manage notebooks, sources, chat, artifacts (podcasts, videos, slides, quizzes, flashcards), notes, sharing, and research entirely from the terminal via natural language.

Features:

  • Complete command coverage: notebooks, sources, chat, artifacts, notes, sharing, research, language
  • Natural language mapping: "upload this folder and ask about X" → create + add sources + ask
  • Artifact generation: audio (podcast), video, cinematic-video, slide-deck, quiz, flashcards, infographic, mind-map, data-table, report
  • Batch operations: upload folders of .md files, multiple URLs, parallel research queries
  • Python API reference: direct access to notebooklm-py async library for programmatic use
  • Common workflows: create-add-ask, folder upload with summary, deep research, podcast generation
  • Error handling: authentication, source processing, rate limiting, timeouts

Quick Start:

# Copy to skills directory
cp -r notebooklm ~/.claude/skills/

# Then just talk naturally:
# "create a notebook called My Research"
# "upload all markdown files from ./notes/"
# "ask the notebook about key themes"
# "generate a podcast about the findings"
# "download the podcast"

Depends on: notebooklm-py (v0.3.4+) — pip install notebooklm-py

Use when: Interacting with Google NotebookLM from Claude Code. Covers all CLI commands and the underlying Python API for advanced automation.


Temple Generator

Generate a 3D interactive knowledge map (Inner Temple) from any Obsidian vault. Maps vault structure into a spatial mythology with concentric entity rings, synthesized audio, discovery mechanics, and multi-scale semantic zoom.


Features:

  • Vault scanner extracts 10K+ notes, 39K+ link edges, computes centrality and clusters
  • 15 entity types: gods, demigods, tensions, narratives, blind spots, spirits, crystals, values, trails, research, questions, depths, whispers, secrets, fields
  • Dual vocabulary: canonical (portable) + poetic (mythic) naming
  • Confidence-gated abstraction levels: entities → domains → tension axes → comparison
  • Dual-graph common map: shared scaffold with divergence offsets for comparing two vaults
  • Data-driven HTML template (Three.js) with Web Audio API soundtrack
  • Electroacoustic audio: FM synthesis, ring modulation, filtered noise, arpeggios per entity type
  • Arrow key navigation, immersive mode (Shift+.), install mode (?install)
  • Hand tracking (MediaPipe), secret discovery engine, flythrough journey

Architecture:

Generation Pipeline (Claude)          Runtime Renderer (Template)
├─ extract_entities.py → vault-scan   ├─ Three.js scene from JSON
├─ Claude classifies entities         ├─ Concentric ring layout
├─ Builds abstraction levels          ├─ Audio per entity type
├─ Writes temple-data.json            ├─ Discovery mechanics
└─ Inlines into template              └─ Semantic zoom transitions

Quick Start:

cp -r temple-generator ~/.claude/skills/
/temple-generate ~/my-vault --inline

Use when: Visualizing any Obsidian vault as a 3D spatial mythology, comparing two knowledge graphs, or creating an art installation from structured knowledge.


Granola Meeting Importer ⭐ NEW

Query Granola's local cache and API to list meetings, view transcripts, and export to Obsidian vault in Fathom-compatible format. Includes auto-sync via macOS LaunchAgent.

Features:

  • List all meetings from Granola's local cache with attendee and transcript info
  • Show meeting details by ID prefix or title substring
  • Get transcripts from local cache with API fallback
  • Export to Obsidian markdown with Fathom-compatible frontmatter (**Speaker**: text format)
  • Speaker attribution: microphone source mapped to meeting creator, system audio to "Other"
  • API integration using Granola's local WorkOS auth token (no separate API key needed)
  • Auto-sync script (sync.sh) — checks for new meetings every 15 min via LaunchAgent, exports only unseen ones, logs to ~/Library/Logs/granola-sync.log
  • 18 tests covering pure functions and CLI integration
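The speaker-attribution rule and Fathom-compatible line format described above reduce to a small mapping. This sketch is illustrative (the function name and argument names are assumptions), but the `**Speaker**: text` format and the microphone/system-audio rule are as stated in this README.

```python
# Sketch of the attribution rule above: microphone audio maps to the
# meeting creator, system audio to "Other", rendered in the
# Fathom-compatible "**Speaker**: text" format.
def to_fathom_line(source, creator_name, text):
    speaker = creator_name if source == "microphone" else "Other"
    return f"**{speaker}**: {text}"

mine = to_fathom_line("microphone", "Gleb", "Let's start.")
theirs = to_fathom_line("system", "Gleb", "Sounds good.")
```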

Quick Start:

# Copy to skills directory
cp -r granola ~/.claude/skills/

# List meetings
python3 ~/.claude/skills/granola/scripts/granola.py list

# Export a meeting to Obsidian
python3 ~/.claude/skills/granola/scripts/granola.py export "meeting title"

# Get transcript
python3 ~/.claude/skills/granola/scripts/granola.py transcript abc123

# Set up auto-sync (see SKILL.md for LaunchAgent setup)
chmod +x ~/.claude/skills/granola/scripts/sync.sh
bash ~/.claude/skills/granola/scripts/sync.sh  # test run

Use when: Importing Granola meeting recordings and transcripts into an Obsidian vault, querying meeting history from the command line, or setting up automated transcript sync on a schedule.


Insight Extractor ⭐ NEW

Parse Claude Code's built-in /insights report and extract actionable items into structured, trackable markdown files. Designed for Obsidian vaults but works with any markdown-based knowledge system.

Features:

  • 📊 Extracts 6 categories: action items, prompts/patterns, technical learnings, workflow improvements, tool discoveries, automation candidates
  • 🤖 Auto-creates task files for automation candidates (with agent-runnable tagging)
  • 🔗 Links insights to daily notes and updates a Map of Content
  • 💬 Interactive mode (--interactive) to cherry-pick items via AskUserQuestion
  • ⚙️ Configure mode (--configure) to set folders, date format, and preferences
  • 🖥️ Machine-specific filenames (for multi-machine setups)
  • 📝 TLDR + key insight summary on completion

Quick Start:

# Run /insights first, then extract
/insight-extractor

# Interactive -- review and filter each category
/insight-extractor --interactive

# Configure output paths, date format, etc.
/insight-extractor --configure

Use when: After running /insights to persist analysis into your vault, during weekly reviews, or to discover automation candidates from session patterns.


Vault Daydream ⭐ NEW

Multi-agent system that mines your Obsidian vault for non-obvious connections between notes, mimicking the brain's default mode network. Samples random note pairs, synthesizes connections via Sonnet, filters with Haiku critic. Inspired by Gwern's LLM Daydreaming.

Features:

  • 🧠 Simulates the brain's default mode network for knowledge vaults
  • 🎲 Recency-weighted random pair sampling (50 pairs per run)
  • 🔀 Multi-agent architecture: Sonnet synthesizer + Haiku critic in parallel batches
  • 📊 Quality filtering: only insights scoring >= 7.0 average (novelty, coherence, usefulness)
  • 📝 Obsidian-native output: individual insight notes with wikilinks + daily digest
  • 🔄 History dedup: tracks previously sampled pairs to avoid repetition
  • 📅 Daily note integration with daydream summary

Architecture:

Skill (orchestrator)
  |-- Glob/Read: scan vault, extract excerpts
  |-- Generate 50 random pairs (recency-weighted)
  |-- Task(model: sonnet) x 10: synthesize connections  <-- parallel
  |-- Task(model: haiku) x 10: critique/score insights  <-- parallel
  |-- Filter (avg >= 7.0)
  +-- Write: save insight notes + daily digest

No external dependencies -- pure Claude Code tools (Glob, Read, Write, Bash, Task).
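The recency-weighted sampling step might look roughly like this. The exponential-decay weighting, half-life, and data shape are assumptions for illustration; the skill's actual scheme may differ, and its dedup is against a persistent history file rather than an in-memory set.

```python
# Illustrative sketch of recency-weighted pair sampling: recently
# modified notes are more likely to be picked, self-pairs and duplicate
# pairs are skipped.
import random
import time

def sample_pairs(notes, n_pairs=50, half_life_days=30, seed=None):
    """notes: list of (path, mtime). Returns n_pairs unique (path, path) pairs."""
    rng = random.Random(seed)
    now = time.time()

    def weight(note):
        age_days = (now - note[1]) / 86400
        return 0.5 ** (age_days / half_life_days)  # exponential decay

    weights = [weight(n) for n in notes]
    seen, pairs = set(), []
    while len(pairs) < n_pairs:
        a, b = rng.choices(notes, weights=weights, k=2)
        key = tuple(sorted((a[0], b[0])))
        if a[0] != b[0] and key not in seen:  # no self-pairs, no repeats
            seen.add(key)
            pairs.append((a[0], b[0]))
    return pairs
```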

Quick Start:

# Copy to skills directory
cp -r daydream ~/.claude/skills/

# Edit instructions.md to set your VAULT_ROOT path
# Then invoke
/daydream

Output:

  • Daydreams/YYYYMMDD-slug.md -- individual insight notes with scores and wikilinks
  • Daydreams/digests/YYYYMMDD-digest.md -- daily digest with stats and ranked insights
  • Daily note ## Daydream section -- summary with top connections

Cost: ~$0.40-0.50 per run (~50 pairs) via Claude Code usage.

Inspired by: Gwern's "LLM Daydreaming" -- the idea that LLMs can productively "daydream" by finding unexpected connections between disparate pieces of knowledge, similar to how the brain's default mode network generates creative insights during idle periods.

Use when: You want to discover surprising connections across your knowledge base -- run daily or weekly to surface insights you wouldn't find through deliberate search.


Thinking Patterns ⭐ NEW

Longitudinal cognitive pattern analysis across months of recorded conversations. Extracts 12 evidence-based dimensions from Fathom transcripts, synthesizes cross-session patterns, and detects blind spots via multi-agent parallel processing.

Scientific Foundation:

  • Tier 1 (Validated): Burns' cognitive distortions, LIWC dimensions, epistemic markers, Russell's Circumplex Model
  • Tier 2 (Established): Lakoff conceptual metaphors, McAdams narrative identity, Kegan immunity to change, ACT flexibility
  • Tier 3 (Applied): Kegan developmental stages, Schon reflective practice, agency language ratio

12 Extraction Dimensions:

  • Cognitive distortions, problem framing, conceptual metaphors, hedging/certainty
  • Code-switching (bilingual), decision moments, emotional indicators, avoidance/deflection
  • Agency language, competing commitments, role/register markers, energy signals

10 Output Sections + Blind Spot Summary:

  1. Recurring Narratives
  2. Problem Framing
  3. Metaphors
  4. Decision Heuristics
  5. Topics Avoided
  6. Contradictions & Competing Commitments
  7. Energy Patterns
  8. Role Shifts
  9. Execution Gap
  10. Cognitive Distortions & Biases

Plus "The 5 Things You Don't See"

Architecture:

Stage 0: Corpus Discovery (orchestrator)
  |-- Find transcripts, classify by type, extract speaker lines
Stage 1: Per-Transcript Extraction (~13 parallel sonnet agents)
  |-- 12 dimensions extracted per transcript
Stage 2: Aggregation (orchestrator)
  |-- De-duplicate, cluster, package into synthesis bundles
Stage 3: Cross-Session Synthesis (4 parallel + 1 sequential sonnet agents)
  |-- Pattern detection, blind spot analysis, contradiction mapping
Stage 4: Output (orchestrator)
  +-- Compile analysis document, link to daily note

Features:

  • Multi-agent parallel extraction (up to 13 sonnet agents) and synthesis (5 agents)
  • Bilingual support: English structure, Russian quotes preserved with translations
  • Weighted corpus: coaching (1.0), meetings (0.9), podcasts (0.8), impromptu (0.7), workshops (0.6), labs (0.4)
  • Unknown speaker recovery for Fathom transcripts with failed diarization
  • Immunity to Change maps (Kegan & Lahey) for competing commitments
  • Evidence-grounded: every finding backed by 2+ dated session quotes
  • Execution gap analysis against stated priorities (Profile Brief, My Focus)
  • Configurable date ranges, session type weights, speaker identifiers
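The corpus weights listed above can be read as evidence multipliers. The weights themselves come from this README; the aggregation function is an illustrative assumption, not the skill's implementation.

```python
# Corpus weights per the README; how they combine is a sketch.
SESSION_WEIGHTS = {
    "coaching": 1.0, "meeting": 0.9, "podcast": 0.8,
    "impromptu": 0.7, "workshop": 0.6, "lab": 0.4,
}

def weighted_support(findings):
    """findings: list of (session_type, quote_count) backing one pattern."""
    return sum(SESSION_WEIGHTS.get(t, 0.5) * c for t, c in findings)

# A pattern seen twice in coaching outweighs one seen three times in labs.
score = weighted_support([("coaching", 2), ("lab", 3)])  # 2*1.0 + 3*0.4
```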

Quick Start:

# Copy to skills directory
cp -r thinking-patterns ~/.claude/skills/

# Dry run -- see corpus stats and batch plan
/thinking-patterns --dry-run

# Full analysis (default: last 3 months)
/thinking-patterns

# Custom date range
/thinking-patterns --period 2026-01 2026-02

Output:

  • ai-research/YYYYMMDD-thinking-patterns-analysis.md -- full analysis with evidence
  • Daily note link under ## Research

Cost: ~$3.50 per full run, ~6-8 minutes runtime.

Use when: Quarterly self-reflection, coaching preparation, or whenever you want evidence-based insight into your own cognitive patterns across recorded conversations.


Doctor G

Evidence-based health research using tiered trusted sources with GRADE-inspired evidence ratings. Integrates Apple Health data for personalized context.

Features:

  • 🔬 3 depth levels: Quick (WebSearch), Deep (+Tavily), Full (+Firecrawl)
  • 📊 GRADE-inspired evidence strength ratings (Strong/Moderate/Weak/Minimal/Contested)
  • 🏥 40+ curated trusted sources across 4 tiers (primary research → journalism)
  • ❤️ Apple Health integration for personalized recommendations
  • ⚖️ Expert comparison mode (detects "X vs Y" questions)
  • 🔍 Topic-aware source prioritization (nutrition, exercise, sleep, cardiovascular, etc.)
  • ⚠️ Red flag detection (retracted studies, industry bias, predatory journals)

Quick Start:

# Quick answer (~30s)
/doctorg Is creatine safe for daily use?

# Deep research (~90s)
/doctorg --deep Huberman vs Attia on fasted training

# Full investigation (~3min)
/doctorg --full Safety profile of long-term melatonin supplementation

# Without personal health data
/doctorg --no-personal Best stretching protocol for lower back pain

Use when: Asking any health, nutrition, exercise, sleep, or wellness question and wanting evidence-based answers with explicit strength ratings rather than opinion.


Agency Docs Updater

End-to-end pipeline for publishing Claude Code lab meetings. Single /agency-docs-updater invocation replaces 5+ manual steps: finds Fathom transcript, downloads video, uploads to YouTube, generates fact-checked Russian summary, creates MDX, and deploys to Vercel.

Features:

  • 🔄 Full pipeline: transcript → video download → YouTube upload → summary → MDX → deploy
  • 📝 Fact-checked Russian summaries via claude-code-guide agent
  • 🎥 YouTube + Yandex.Disk upload with resume support
  • 📊 Lesson HTML copied to public/ and linked in meeting page
  • ✅ Local build verification + Vercel deployment check
  • 🔢 Auto-detect or specify meeting number

Quick Start:

# Run full pipeline (invoke as Claude Code skill)
/agency-docs-updater

# Or use the script directly
python3 scripts/update_meeting_doc.py \
  transcript.md youtube_url summary.md [-n 08] [--update]

Use when: Publishing Claude Code lab sessions — automates the entire flow from Fathom recording to live documentation site.


De-AI Text Humanizer ⭐ NEW

Transform AI-sounding text into human, authentic writing while preserving meaning and facts. Research-backed approach focusing on quality over detection evasion.

Features:

  • 🤖 Interactive context gathering (purpose, audience, constraints)
  • 🌍 Language-specific optimization (Russian, German, English, Spanish, French)
  • 📝 Register-aware humanization (personal, essay, technical, academic)
  • 🔍 6-level AI tell diagnosis (structural, lexical, voice, rhetorical)
  • 📊 Research-backed (7 academic papers + 30+ commercial tools analyzed)
  • 💡 Optional change explanations
  • ⚡ No word limits (unlike commercial tools)
  • 🎯 Prioritizes meaning preservation over detection evasion

Quick Start:

# Interactive mode (asks questions)
/de-ai --file article.md

# Quick mode (no questions)
/de-ai --file article.md --interactive false

# Specify language and register
/de-ai --file text.md --language ru --register essay

# Show what AI tells were removed
/de-ai --file content.md --explain true

Use when: You need to improve AI-generated text quality, remove bureaucratic language (канцелярит), humanize drafts while preserving facts, or refine professional writing across languages.


Automation Advisor ⭐ NEW

Quantified ROI analysis for automation decisions, served through a voice-enabled web interface with an analytical-precision design.

Features:

  • 📊 8 structured questions transforming intuition into data
  • 💰 Break-even analysis with time/frequency scoring
  • 🎙️ Voice input via Groq Whisper transcription
  • 🗣️ Browser TTS for voice output
  • 🎨 Sophisticated cream theme with editorial typography
  • 📱 Multi-user session support
  • ⌨️ Keyboard-first interaction design
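The break-even analysis mentioned above follows the classic automation trade-off: time saved per run times frequency versus the cost of building the automation. This is a minimal sketch of that formula under assumed parameter names, not the skill's actual scoring.

```python
# Classic break-even sketch: does the automation pay for itself within
# the horizon, and how long until it does?
def break_even(minutes_per_run, runs_per_week, build_hours, horizon_weeks=52):
    saved_hours = minutes_per_run * runs_per_week * horizon_weeks / 60
    return {
        "saved_hours": saved_hours,
        "worth_it": saved_hours > build_hours,
        "payback_weeks": build_hours * 60 / (minutes_per_run * runs_per_week),
    }

# A 5-minute chore done 10x/week vs. an 8-hour build:
r = break_even(minutes_per_run=5, runs_per_week=10, build_hours=8)
```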

Quick Start:

# Install dependencies
pip install flask groq python-dotenv

# Add Groq API key (optional, for voice)
export GROQ_API_KEY="your-key"

# Start web server
python3 server_web.py

# Open browser
open http://localhost:8080

Use when: Deciding whether to automate repetitive tasks. Transforms "this feels tedious" into quantified recommendations with clear next steps.


Decision Toolkit ⭐ NEW

Generate structured decision-making tools — step-by-step guides, bias checkers, scenario matrices, and interactive dashboards.

Features:

  • 🎯 7 decision frameworks (First Principles, 10-10-10, Pre-Mortem, Regret Minimization, etc.)
  • 🧠 Comprehensive bias encyclopedia (20+ cognitive biases with counter-questions)
  • 📊 Interactive HTML wizards with Agency neobrutalism styling
  • 📝 Markdown export with decision records
  • 🎙️ Voice summary templates for Orpheus TTS
  • ⚖️ Opportunity cost calculators and scenario matrices

Frameworks Included:

  • First Principles Thinking (5 core questions)
  • Opportunity Cost Calculator
  • Scenario Matrix with probability calibration
  • Pre-Mortem Analysis
  • 10-10-10 Framework (Suzy Welch)
  • Regret Minimization (Jeff Bezos method)
  • Weighted Decision Matrix
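Of the frameworks above, the Weighted Decision Matrix is simple enough to sketch directly. This is the standard technique in miniature, illustrative rather than the toolkit's code; option names and criteria are made up.

```python
# Minimal weighted decision matrix: score each option on each criterion,
# weight the criteria, pick the highest total.
def decide(options, weights):
    """options: {name: {criterion: score}}, weights: {criterion: weight}."""
    totals = {
        name: sum(weights[c] * s for c, s in scores.items())
        for name, scores in options.items()
    }
    best = max(totals, key=totals.get)
    return best, totals

best, totals = decide(
    {"new stack": {"speed": 8, "risk": 5}, "status quo": {"speed": 4, "risk": 9}},
    weights={"speed": 0.6, "risk": 0.4},
)
```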

Quick Start:

# Copy to skills directory
cp -r decision-toolkit ~/.claude/skills/

# Invoke for a decision
/decision-toolkit "Should I switch to a new tech stack?"

Use when: Facing significant choices requiring systematic analysis — career moves, technology decisions, major purchases, strategic pivots.


Fathom ⭐ NEW

Fetch meetings, transcripts, summaries, action items, and download video recordings from Fathom API.

Features:

  • 📋 List recent meetings with recording IDs
  • 📝 Fetch full transcripts with speaker attribution
  • 🤖 AI-generated meeting summaries from Fathom
  • ✅ Action items with assignees and completion status
  • 👥 Participant info from calendar invites
  • 🔗 Links to Fathom recordings and share URLs
  • 🎥 Download video recordings via M3U8 streaming
  • ✓ Automatic video validation with retry mechanism
  • 🔬 Optional integration with transcript-analyzer skill

Quick Start:

# Install dependencies
pip install requests python-dotenv

# Requires ffmpeg for video downloads
brew install ffmpeg  # macOS
# or: apt-get install ffmpeg  # Linux

# Add API key
echo "FATHOM_API_KEY=your-key" > ~/.claude/skills/fathom/scripts/.env

# List recent meetings
python3 scripts/fetch.py --list

# Fetch today's meetings
python3 scripts/fetch.py --today

# Download video recording
python3 scripts/fetch.py --id abc123 --download-video

# Fetch and analyze
python3 scripts/fetch.py --today --analyze

Use when: You need to fetch Fathom meeting recordings, download video files, sync transcripts to your vault, or extract meeting data via API.


Recording ⭐ NEW

Demo/recording mode that redacts personally identifiable and sensitive information from Claude Code's outputs in real time.

Features:

  • 🎬 Toggle on/off with /recording — single command flips state
  • 🕵️ Redacts names, locations, dates, financials, medical, emotional, business, and credentials
  • 🎭 Uses obviously dummy placeholders (e.g. Alex Doe, Acme Co) — never plausible fakes
  • 🔁 Consistent mapping within a session so the demo stays coherent
  • 🛡️ Pre-send self-check before any output
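The consistent-mapping behavior above can be sketched as a tiny stateful redactor: the first time a real name appears it is assigned an obviously dummy placeholder, and every later mention reuses the same one so the demo stays coherent. This is a sketch of the idea, not the skill's pipeline; the placeholder list is assumed.

```python
# Session-consistent redaction: one real name always maps to the same
# obviously dummy placeholder within a session.
PLACEHOLDERS = ["Alex Doe", "Sam Roe", "Jordan Poe"]

class Redactor:
    def __init__(self):
        self.mapping = {}  # real name -> placeholder, stable for the session

    def redact(self, name):
        if name not in self.mapping:
            self.mapping[name] = PLACEHOLDERS[len(self.mapping) % len(PLACEHOLDERS)]
        return self.mapping[name]

r = Redactor()
first = r.redact("Gleb")
again = r.redact("Gleb")  # same placeholder, so the demo stays coherent
second = r.redact("Maria")
```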

Quick Start:

cp -r recording ~/.claude/skills/

# Before your demo
/recording
# When done
/recording

Use when: Screen-sharing, recording videos, or live-demoing Claude Code and you don't want personal vault content leaking on stream.


Retrospective ⭐ NEW

Session retrospective for continual learning. Reviews conversations, extracts learnings, updates skills.

Features:

  • 🔄 Analyze session for successes, failures, and discoveries
  • 📝 Update skill files with dated learnings
  • ⚠️ Document failures explicitly (prevents repeating mistakes)
  • 📊 Surface patterns for skill improvement
  • 🎯 Compound knowledge over sessions

Quick Start:

# Copy to skills directory
cp -r retrospective ~/.claude/skills/

# Invoke at end of session
/retrospective

Use when: End of coding sessions to capture learnings before context is lost. Based on Continual Learning in Claude Code concepts.


GitHub Gist ⭐ NEW

Publish files and notes as GitHub Gists for easy sharing.

Features:

  • 🔗 Publish any file as a shareable gist URL
  • 🔒 Secret (unlisted) by default for safety
  • 🌐 Optional public gists (visible on profile)
  • 📥 Support stdin for quick snippets
  • 🖥️ Uses gh CLI (recommended) or falls back to API

Quick Start:

# Publish file as secret gist
python3 scripts/publish_gist.py ~/notes/idea.md

# Public gist with description
python3 scripts/publish_gist.py code.py --public -d "My utility script"

# Quick snippet from stdin
echo "Hello world" | python3 scripts/publish_gist.py - -f "hello.txt"

# Publish and open in browser
python3 scripts/publish_gist.py doc.md --open

Setup:

# Option 1: gh CLI (recommended)
gh auth login

# Option 2: Environment variable
# Get token at https://github.com/settings/tokens (select 'gist' scope)
export GITHUB_GIST_TOKEN="ghp_your_token_here"

Use when: You want to share code snippets, notes, or files via a quick shareable URL.


Google Image Search

Search and download images via Google Custom Search API with LLM-powered selection and Obsidian integration.

Features:

  • 🔍 Simple query mode or batch processing from JSON config
  • 🤖 LLM-powered image selection (picks best from candidates)
  • 📝 Auto-generate search configs from plain text terms
  • 📓 Obsidian note enrichment (extract terms, find images, insert below headings)
  • 📊 Keyword-based scoring (required/optional/exclude terms, preferred hosts)
  • 🖼️ Magic byte detection for proper file extensions
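The keyword-based scoring feature above (required/optional/exclude terms, preferred hosts) might work along these lines. Field and config names here are assumptions for illustration; the script's real config schema may differ.

```python
# Illustrative candidate scoring: hard-reject excluded terms, require all
# required terms, then reward optional terms and preferred hosts.
def score_image(candidate, cfg):
    text = (candidate["title"] + " " + candidate["url"]).lower()
    if any(t in text for t in cfg.get("exclude", [])):
        return -1  # hard reject
    if not all(t in text for t in cfg.get("required", [])):
        return 0   # missing a required term
    score = 1 + sum(t in text for t in cfg.get("optional", []))
    if any(h in candidate["url"] for h in cfg.get("preferred_hosts", [])):
        score += 2
    return score

cfg = {"required": ["neural"], "optional": ["demo"],
       "exclude": ["stock"], "preferred_hosts": ["example.org"]}
cand = {"title": "Neural interface demo", "url": "https://example.org/img.png"}
result = score_image(cand, cfg)
```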

Quick Start:

# Simple query
python3 scripts/google_image_search.py --query "neural interface demo" --output-dir ./images

# Enrich Obsidian note with images
python3 scripts/google_image_search.py --enrich-note ~/vault/research.md

# Generate config from terms
python3 scripts/google_image_search.py --generate-config --terms "AI therapy" "VR mental health"

Use when: Finding images for articles, presentations, research docs, or enriching Obsidian notes with visuals.


Zoom ⭐ NEW

Create and manage Zoom meetings and access cloud recordings via the Zoom API.

Features:

  • 📅 List, create, update, delete scheduled meetings
  • 🎥 Access cloud recordings with transcripts and summaries
  • 📥 Get download links for MP4, audio, transcripts, chat logs
  • 🔐 Dual auth: Server-to-Server OAuth (meetings) + User OAuth (recordings)

Quick Start:

# Check setup status
python3 scripts/zoom_meetings.py setup

# List upcoming meetings
python3 scripts/zoom_meetings.py list

# Create a meeting
python3 scripts/zoom_meetings.py create "Team Standup" --start "2025-01-15T10:00:00" --duration 30

# List recordings (last 30 days)
python3 scripts/zoom_meetings.py recordings --show-downloads

Use when: You need to create Zoom meetings, list scheduled calls, or access cloud recordings with transcripts.


Presentation Generator

Interactive HTML presentations with neobrutalism style and Anime.js animations.

Features:

  • 🎬 HTML presentations with scroll-snap navigation
  • 🎭 Anime.js animations (fade, slide, scale, stagger)
  • 📸 Export to PNG, PDF, or video via Playwright
  • 📊 11 slide types: title, content, two-col, code, stats, grid, ascii, terminal, image, quote, comparison
  • 🎨 Neobrutalism style with brand-agency colors
  • ⌨️ Keyboard navigation (arrows, space, R to replay)

Quick Start:

# Generate HTML from JSON
node scripts/generate-presentation.js --input slides.json --output presentation.html

# Export to PNG/PDF/video
node scripts/export-slides.js presentation.html --format png
node scripts/export-slides.js presentation.html --format pdf
node scripts/export-slides.js presentation.html --format video --duration 5

Use when: You need animated presentations, video slide decks, or interactive HTML slideshows.


Brand Agency

Neobrutalism brand styling with social media template rendering.

Features:

  • 🎨 Complete brand color palette (orange, yellow, blue, green, red)
  • 📝 Typography: Geist (headings), EB Garamond (body), Geist Mono (code)
  • 🖼️ 11 social media templates (Instagram, YouTube, Twitter, TikTok, Pinterest)
  • 🎯 Neobrutalism style: hard shadows, 3px borders, zero radius
  • ⚡ Playwright-based PNG rendering
  • 📐 ASCII box-drawing decorations

Quick Start:

# Install Playwright
npm install playwright

# Render all templates
node scripts/render-templates.js

# Render specific template
node scripts/render-templates.js -t instagram/story-announcement

# List templates
node scripts/render-templates.js --list

Use when: You need branded graphics, social media images, or presentations with consistent neobrutalism styling.


Gmail

Search and fetch emails via Gmail API with flexible query options and output formats.

Features:

  • 🔍 Free-text search with Gmail query syntax
  • 📧 Filter by sender, recipient, subject, label, date
  • 📋 List labels
  • 📎 Download attachments
  • 🔐 Configurable OAuth scopes (readonly/modify/full)
  • 📄 Markdown or JSON output

Quick Start:

# Install dependencies
pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib

# Authenticate (opens browser)
python scripts/gmail_search.py auth

# Search emails
python scripts/gmail_search.py search "meeting notes"
python scripts/gmail_search.py search --from "boss@company.com" --unread

Use when: You need to search, read, or download emails from Gmail.


Telegram

Fetch, search, download, and send Telegram messages with flexible filtering and output options.

Features:

  • 📬 List chats with unread counts
  • 📥 Fetch recent messages (all chats or specific)
  • 🔍 Search messages by content
  • 📨 Send messages to chats or @usernames
  • ↩️ Reply to specific messages
  • 💬 Send to forum topics (groups with topics)
  • 📎 Send and download media files
  • ⏰ Schedule messages for future delivery (--schedule)
  • ✨ Markdown-to-Telegram formatting (--markdown)
  • 💾 Save to file (token-efficient archiving with --with-media)
  • 📝 Output to Obsidian daily/person notes

Quick Start:

# Install dependency
pip install telethon

# List chats
python scripts/telegram_fetch.py list

# Get recent messages
python scripts/telegram_fetch.py recent --limit 20

# Send message
python scripts/telegram_fetch.py send --chat "@username" --text "Hello!"

# Send with markdown formatting
python scripts/telegram_fetch.py send --chat "@channel" --markdown --text "**Bold** and [links](https://example.com)"

# Schedule for tomorrow
python scripts/telegram_fetch.py send --chat "@channel" --markdown --schedule "tomorrow 10:00" --text "Scheduled post"

# Schedule with relative time or ISO format
python scripts/telegram_fetch.py send --chat "@username" --schedule "+2h" --text "In 2 hours"
python scripts/telegram_fetch.py send --chat "@username" --schedule "2026-04-10T14:00" --text "At specific time"

Use when: You need to read, search, or send Telegram messages from Claude Code.


Telegram Post ⭐ NEW

Create, preview, and publish formatted Telegram posts from draft markdown files with HTML formatting and media. Built for @klodkot and Gleb Kalinin's other Telegram channels -- channel configs (footers, tags, language) are hardcoded but the pattern is easy to adapt.

Features:

  • 📝 Create drafts with proper frontmatter for any configured channel
  • 🔄 Markdown to Telegram HTML conversion (bold, italic, links, headers)
  • 🛡️ Formatting safety check -- refuses to send if stray markdown detected
  • 🎬 Video attached as caption (not separate reply)
  • 📤 Default target: Saved Messages (safe preview before publishing)
  • 📦 Post-publish: updates frontmatter, moves to published/, updates channel index
  • 🏷️ Channel-aware: footers, tags reference, language defaults

Configured channels: @klodkot, @mentalhealthtech, @toolbuildingape, @opytnymputem

Quick Start:

# Create a draft
python3 scripts/post.py create "my-post-slug" --topic "Topic" --source "https://..."

# Preview (always do this first)
python3 scripts/post.py send "Channels/klodkot/drafts/20260211-my-post.md" --dry-run

# Send to saved messages for review
python3 scripts/post.py send "Channels/klodkot/drafts/20260211-my-post.md"

# Publish to channel (triggers post-publish: move, frontmatter update, index)
python3 scripts/post.py send "Channels/klodkot/drafts/20260211-my-post.md" -c "@klodkot"

Use when: Creating, previewing, or publishing Telegram channel posts from Obsidian draft files. Note: channel configs are specific to Gleb's channels -- fork and edit CHANNEL_CONFIG in post.py for your own.


Telegram Telethon ⭐ NEW

Full Telethon API wrapper with daemon mode and Claude Code integration. Monitor chats, auto-respond with Claude, and manage sessions.

Features:

  • 🔄 Daemon mode with configurable triggers (regex patterns)
  • 🤖 Auto-spawn Claude Code sessions per chat
  • 💾 Session persistence across restarts
  • 📬 All basic operations: list, recent, search, send, edit, delete, forward
  • 🎤 Voice message transcription (Telegram API, Groq, or local Whisper)
  • 📎 Media download with type filtering
  • 📓 Obsidian integration (daily notes, person notes)
  • 🧵 Forum/topic support

Quick Start:

# Install dependencies
pip install telethon rich questionary

# Interactive setup
python3 scripts/tg.py setup

# Check status
python3 scripts/tg.py status

# List chats
python3 scripts/tg.py list

# Start daemon (monitors for triggers)
python3 scripts/tgd.py start --foreground

Daemon Configuration (~/.config/telegram-telethon/daemon.yaml):

triggers:
  - chat: "@yourusername"
    pattern: "^/claude (.+)$"
    action: claude
    reply_mode: inline

Use when: You need advanced Telegram automation, background monitoring, or Claude-powered chat responses.


LLM CLI

Unified interface for processing text with multiple LLM providers from a single CLI.

Features:

  • 🎯 Support for 6 LLM providers (OpenAI, Anthropic, Google, Groq, OpenRouter, Ollama)
  • 🚀 40+ configured models with intelligent selection and aliasing
  • 📁 Process files, stdin, or inline text (25+ file types supported)
  • 💬 Both non-interactive and interactive (REPL) execution modes
  • 🔄 Persistent configuration that remembers your last used model
  • 🆓 Free fast inference options (Groq, OpenRouter, Ollama)

Quick Start:

# Install llm CLI
pip install llm

# Set Groq API key (free, no credit card)
export GROQ_API_KEY='gsk_...'

# Use it
llm -m groq-llama-3.3-70b "Your prompt"

Use when: You want to process text with LLMs, compare models, or build AI-powered workflows.


Deep Research

Comprehensive research automation using OpenAI's Deep Research API (o4-mini-deep-research model).

Features:

  • 🤖 Smart prompt enhancement with interactive clarifying questions
  • 🔍 Web search with comprehensive source extraction
  • 💾 Automatic markdown file generation with timestamped reports
  • ⚡ Token-optimized for long-running tasks (10-20 min)
  • 📊 Saves ~19,000 tokens per research run compared to a polling approach
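
Background mode is what makes the token savings possible: the client submits the job and checks back later instead of streaming for 10-20 minutes. A rough sketch of the request shape against OpenAI's Responses API (the model name is from this catalog; the exact tool and parameter set may differ from the skill's implementation):

```python
def build_research_request(prompt, model="o4-mini-deep-research"):
    """Assemble a background Deep Research request payload.

    background=True lets the client poll for completion instead of
    holding a connection open for the whole long-running task.
    """
    return {
        "model": model,
        "input": prompt,
        "background": True,
        "tools": [{"type": "web_search_preview"}],
    }

req = build_research_request("State of small-model distillation in 2026")
# With the openai package this would be submitted roughly as:
#   from openai import OpenAI
#   response = OpenAI().responses.create(**req)
print(req["model"])  # o4-mini-deep-research
```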

Use when: You need in-depth research with web sources, analysis, or topic exploration.


PDF Generation

Professional PDF generation from markdown with mobile-optimized and desktop layouts.

Features:

  • 📄 Convert markdown to professional PDFs
  • 📱 Mobile-friendly layout (6x9in) optimized for phones/tablets
  • 🖨️ Desktop/print layout (A4) for documents and archival
  • 🎨 Support for English and Russian documents
  • 🖼️ Color-coded themes for different document types
  • ✍️ Professional typography with EB Garamond fonts
  • 📋 White papers, research documents, marketing materials

Quick Start:

# Mobile-optimized PDF (default for Telegram)
python scripts/generate_pdf.py doc.md --mobile

# Desktop/print PDF
python scripts/generate_pdf.py doc.md -t research

# Russian document
python scripts/generate_pdf.py doc.md --russian --mobile

Use when: You need to create professional PDF documents from markdown -- mobile layout for sharing via messaging apps, desktop for printing and archival.


YouTube Transcript

Extract YouTube video transcripts with metadata and save as Markdown to Obsidian vault.

Features:

  • 📝 Download transcripts without downloading video/audio files
  • 🌐 Auto language detection (English first, Russian fallback)
  • 📊 YAML frontmatter with complete metadata (title, channel, date, stats, tags)
  • 📑 Chapter-based organization with timestamps
  • 🔄 Automatic deduplication of subtitle artifacts
  • 💾 Direct save to Obsidian vault

Quick Start:

python scripts/extract_transcript.py <youtube_url>

Use when: You need to extract YouTube video transcripts, convert videos to text, or save video content to your knowledge base.


Browsing History ⭐ NEW

Query browsing history from all synced Chrome devices (iPhone, iPad, Mac, desktop) with natural language.

Features:

  • 📱 Multi-device support (iPhone, iPad, Mac, desktop, Android)
  • 🔍 Natural language queries ("yesterday", "last week", "articles about AI")
  • 🤖 LLM-powered smart categorization
  • 📊 Group by domain, category, or date
  • 💾 Export to Markdown or JSON
  • 📝 Save directly to Obsidian vault
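
The "natural language" part is mostly translating phrases like "yesterday" into concrete date ranges before querying. A toy sketch of that translation (the real skill also handles topic queries via LLM categorization; parse_period here is a hypothetical helper, not its actual code):

```python
from datetime import date, timedelta

def parse_period(query, today=None):
    """Translate a natural-language period into a (start, end) date range."""
    today = today or date.today()
    q = query.lower().strip()
    if q == "today":
        return today, today
    if q == "yesterday":
        d = today - timedelta(days=1)
        return d, d
    if q == "last week":
        # the seven full days preceding today
        return today - timedelta(days=7), today - timedelta(days=1)
    raise ValueError(f"unrecognized period: {query!r}")

start, end = parse_period("yesterday", today=date(2026, 5, 7))
print(start)  # 2026-05-06
```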

Quick Start:

# Initialize database
python3 scripts/init_db.py

# Sync local Chrome history
python3 scripts/sync_chrome_history.py

# Query history
python3 browsing_query.py "yesterday" --device iPhone
python3 browsing_query.py "AI articles" --days 7 --categorize
python3 browsing_query.py "last week" --output ~/vault/history.md

Use when: You need to search browsing history across all your devices, find articles by topic, or export history to your notes.


Chrome History

Query local Chrome browsing history with natural language search and filtering.

Features:

  • 🔍 Natural language search of browsing history
  • 📅 Filter by date range, article type, keywords
  • 🌐 Search specific websites
  • ⚡ Fast historical data retrieval
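
Chrome keeps history in a local SQLite file, and the one gotcha is its timestamp format. A sketch of reading it directly (the urls table and WebKit-epoch encoding are Chrome's standard schema; recent_urls is an illustrative helper, not the skill's code):

```python
import sqlite3
from datetime import datetime, timedelta

# Chrome stores last_visit_time as microseconds since 1601-01-01 (the
# WebKit/FILETIME epoch), not the Unix epoch -- forgetting the offset
# is the classic bug here.
WEBKIT_EPOCH = datetime(1601, 1, 1)

def webkit_to_datetime(us):
    return WEBKIT_EPOCH + timedelta(microseconds=us)

def recent_urls(history_path, limit=10):
    """Query a copy of Chrome's History file (copy it first -- Chrome
    keeps the live file locked while the browser is running)."""
    con = sqlite3.connect(history_path)
    rows = con.execute(
        "SELECT url, title, last_visit_time FROM urls "
        "ORDER BY last_visit_time DESC LIMIT ?", (limit,)).fetchall()
    con.close()
    return [(u, t, webkit_to_datetime(ts)) for u, t, ts in rows]

# 11644473600 s is exactly the 1601 -> 1970 epoch gap:
print(webkit_to_datetime(11_644_473_600 * 10**6))  # 1970-01-01 00:00:00
```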

Use when: You need quick access to local desktop Chrome history only.


Health Data ⭐ NEW

Query and analyze Apple Health data from SQLite database with multiple output formats.

Features:

  • 📊 Query 6.3M+ health records across 43 metric types
  • 💓 Daily summaries, weekly trends, sleep analysis, vitals, activity rings, workouts
  • 📄 Output formats: Markdown, JSON, FHIR R4, ASCII charts
  • 🏥 FHIR R4 with LOINC codes for healthcare interoperability
  • 📈 Pre-built queries + raw SQL templates for ad-hoc analysis
  • 🎯 ASCII visualization with Unicode bar charts
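
The ASCII-chart idea is simple enough to show: scale each value to a fixed character width and fill with block characters. A minimal sketch (bar and the sample data are illustrative, not the skill's renderer):

```python
def bar(value, max_value, width=20):
    """Render one Unicode bar scaled so max_value fills the full width."""
    filled = 0 if max_value == 0 else round(width * value / max_value)
    return "▇" * filled + "·" * (width - filled)

steps = {"Mon": 8200, "Tue": 12400, "Wed": 3100}
peak = max(steps.values())
for day, count in steps.items():
    print(f"{day}  {bar(count, peak)}  {count:>6}")
```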

Quick Start:

# Daily summary
python scripts/health_query.py daily --date 2025-11-29

# Weekly trends in JSON
python scripts/health_query.py --format json weekly --weeks 4

# Sleep analysis in FHIR format
python scripts/health_query.py --format fhir sleep --days 7

# ASCII charts
python scripts/health_query.py --format ascii activity --days 30

# Custom SQL
python scripts/health_query.py query "SELECT * FROM workouts LIMIT 5"

Use when: You need to analyze Apple Health metrics, generate health reports, export data in FHIR format, or visualize fitness/sleep patterns.


ElevenLabs Text-to-Speech

Convert text to high-quality audio files using ElevenLabs API with customizable voice parameters.

Features:

  • 🎙️ 7 pre-configured voice presets (rachel, adam, bella, elli, josh, arnold, ava)
  • 🎚️ Voice parameter customization (stability, similarity boost)
  • 📝 Support for any text length
  • 🔧 Both CLI and Python module interfaces
  • 🎵 MP3 audio output with automatic directory creation

Quick Start:

cd ~/.claude/skills/elevenlabs-tts
pip install -r requirements.txt
# Add your API key to .env
python scripts/elevenlabs_tts.py "Welcome to Claude Code"

Use when: You need text-to-speech generation, audio narration, voice synthesis, or want to speak generated content aloud.


FireCrawl Research ⭐ NEW

Research automation using FireCrawl API with academic writing templates and bibliography generation.

Features:

  • 🔍 Extract research topics from markdown headers and [research] tags
  • 🌐 Search and scrape web sources automatically
  • 📚 Generate BibTeX bibliographies from research results
  • 📝 Pandoc and MyST templates for academic papers
  • ⚡ Built-in rate limiting for free tier (5 req/min)
  • 📄 Export to PDF/DOCX with citations

Quick Start:

# Install dependencies
pip install python-dotenv requests

# Add API key to .env
echo "FIRECRAWL_API_KEY=fc-your-key" > ~/.claude/skills/firecrawl-research/.env

# Research topics from markdown
python scripts/firecrawl_research.py topics.md ./output 5

# Generate bibliography
python scripts/generate_bibliography.py output/*.md -o refs.bib

# Convert to PDF with citations
python scripts/convert_academic.py paper.md pdf

Use when: You need to research topics from the web, write academic papers with citations, or build bibliographies from scraped sources.


Transcript Analyzer ⭐ NEW

Analyze meeting transcripts using Cerebras AI to extract decisions, action items, and terminology.

Features:

  • 📋 Extract decisions, action items, opinions, questions
  • 📖 Build domain-specific glossaries from discussions
  • 🎯 Confidence scores for each extraction
  • ⚡ Fast inference via Cerebras (llama-3.3-70b)
  • 📊 YAML frontmatter with processing metadata
  • 🔄 Chunked processing for long transcripts
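
Chunked processing needs overlap so a decision or action item straddling a chunk boundary isn't lost. A minimal sketch of the idea (chunk sizes and the helper name are illustrative, not the skill's internals):

```python
def chunk_transcript(text, chunk_size=4000, overlap=200):
    """Split a long transcript into overlapping chunks; each chunk
    repeats the last `overlap` characters of the previous one."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

text = "".join(str(i % 10) for i in range(10_000))
chunks = chunk_transcript(text, chunk_size=4000, overlap=200)
print(len(chunks))  # 3
```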

Quick Start:

# Install dependencies
cd ~/.claude/skills/transcript-analyzer/scripts && npm install

# Add API key
echo "CEREBRAS_API_KEY=your-key" > scripts/.env

# Analyze transcript
npm run cli -- /path/to/meeting.md -o analysis.md

# Include original transcript
npm run cli -- meeting.md -o analysis.md --include-transcript

# Skip glossary
npm run cli -- meeting.md -o analysis.md --no-glossary

Use when: You need to extract action items from meetings, find decisions in conversations, or build glossaries from recorded discussions.


Wispr Analytics

Extract and analyze Wispr Flow voice dictation history from the local SQLite database. Combines quantitative metrics with LLM-powered qualitative analysis for self-reflection, work pattern recognition, and mental health awareness.

Features:

  • Reads directly from Wispr Flow's local SQLite database (~8,500+ dictations)
  • Period selection: today, yesterday, week, month, specific dates, date ranges
  • Five analysis modes: all, technical (coding/work), soft (communication patterns), trends (volume/frequency), mental (sentiment/energy/rumination)
  • App-aware categorization: coding, AI tools, communication, writing
  • Bilingual analysis (Russian/English) with language-switching pattern detection
  • Hourly activity heatmaps and daily trend tables
  • LLM-powered qualitative analysis with mode-specific prompt templates
  • Saves output to Obsidian vault (meta/wispr-analytics/)

Quick Start:

# Copy to skills directory
cp -r wispr-analytics ~/.claude/skills/

# Today's full analysis
/wispr-analytics today

# Last 7 days, communication patterns
/wispr-analytics week soft

# Monthly mental health reflection
/wispr-analytics month mental

# Specific date range, productivity focus
/wispr-analytics 2026-02-01:2026-02-14 technical

Use when: Self-reflection on work patterns, reviewing dictation habits, tracking energy/sentiment over time, understanding how you communicate across contexts, or generating periodic self-awareness reports.

Context Builder ⭐ NEW

Generate interactive AI transformation context-builder prompts for consulting clients. Creates structured discovery session prompts that guide a company through context gathering about their business, pain points, tech stack, and AI opportunities -- producing a resumable, multi-section questionnaire with Express and Deep Dive modes.

Features:

  • 5-phase workflow: Intake (AskUserQuestion) -> Research (web + vault) -> Section Selection -> Generation -> Delivery
  • 15 pre-built sections in the section library (Revenue Map, Existential Question, Process Inventory, Pain Points, Tech Stack, AI Opportunities, New Business Models, People & Org, Client Value Chain, Data Assets, and more)
  • Focus-based section presets: AI Automation (7 sections), Existential Strategy (7 sections), Full Assessment (10+)
  • Two modes per generated prompt: Express (~15-20 min, 4 mega-sections) and Deep Dive (~60-90 min, 10 sections)
  • Session resumability: generated prompts check for existing output files and pick up where they left off
  • Auto-research: searches the web and Obsidian vault for company info, transcripts, and existing notes before generating
  • Baked-in consulting frameworks: BCG 10/20/70, Andrew Ng's Playbook, Deloitte AI Maturity, Value Stream Mapping, custom heuristics ("The But Heuristic", "Metro Newspaper Test", "Curiosity > Fear")
  • Output: per-section markdown files + compiled CLAUDE.md context file for future sessions
  • Optional Telegram delivery of generated prompts
  • Multilingual support (e.g., Russian session language with English output files)

Quick Start:

# Copy to skills directory
cp -r context-builder ~/.claude/skills/

# Run the skill
/context-builder

Use when: Preparing for a consulting engagement, onboarding a new client, running a structured discovery session, or doing a self-assessment of your own business's AI transformation readiness.

Sketch MCP Server ⭐ NEW

Collaborative SVG canvas MCP server with a Fabric.js browser editor. Claude writes and reads SVG via MCP tools while the user edits interactively in the browser. Real-time sync via WebSocket.

Features:

  • Open named canvases in standalone browser windows (Chrome --app mode)
  • Set, replace, or incrementally add SVG elements with live updates
  • Fixed-width Textbox support with word wrapping (Fabric.js Textbox)
  • Lock/unlock objects -- freeze grid structure while keeping text areas editable
  • JSON template save/load -- preserves Textbox widths, lock states, and all object properties
  • Undo/redo with lock state persistence
  • Built-in toolbar: select, draw, shapes (rect, ellipse, triangle, line, arrow), text tool (click for IText, drag for Textbox)
  • Clipboard paste support (images, SVG)
  • Includes a before/after grid template

MCP Tools:

  • sketch_open_canvas -- create/open canvas, launches browser
  • sketch_get_svg / sketch_set_svg -- read/replace SVG
  • sketch_add_element -- add SVG fragment without clearing
  • sketch_add_textbox -- add fixed-width text area with word wrapping
  • sketch_lock_objects / sketch_unlock_objects -- freeze/unfreeze objects
  • sketch_save_template / sketch_load_template / sketch_list_templates -- JSON template persistence
  • sketch_clear_canvas / sketch_focus_canvas / sketch_close_canvas -- canvas management

Quick Start:

# Clone and build
git clone https://github.com/glebis/sketch-mcp-server.git
cd sketch-mcp-server && npm install && npm run build

# Add to Claude Code MCP config
# mcpServers: { "sketch-mcp-server": { "command": "node", "args": ["path/to/dist/index.js", "--stdio"] } }

Use when: Visual prototyping, creating diagrams, building reusable canvas templates, before/after comparisons, or any task where Claude and the user need a shared visual workspace.

Session Anonymizer ⭐ NEW

Three-layer PII anonymization for session transcripts (therapy, coaching, consulting, mentoring). Runs Natasha (Russian NER), OpenAI Privacy Filter, and local LLM (Qwen2.5 via Ollama) in sequence for maximum coverage. Fully local by default — no data leaves the machine.

Features:

  • 🛡️ Three detection layers: Natasha (names/locations/orgs, instant), OPF (phones/accounts, 1.5s), Ollama LLM (medications/dates/contextual IDs, 2-10s)
  • 💊 Medication-aware: detects drug names with dosages as PII (they narrow identity in clinical contexts)
  • 🇷🇺 Russian + English: Natasha is native Russian NER; OPF handles English; LLM handles both languages
  • 📁 Batch processing with consistent pseudonyms across files
  • 🔐 AES-256 encryption for data at rest
  • ⚡ Graceful degradation: each layer is optional, warns about what's missing
  • 📊 JSON output with per-entity spans, types, confidence, and source layer
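
Running three detectors in sequence means their spans can overlap, so the merge step matters. A toy sketch of one reasonable policy -- earlier layers win on overlap -- where each span is (start, end, type, source); the skill's actual resolution rules may differ:

```python
def merge_spans(layers):
    """Merge PII spans from multiple detection layers, listed in
    priority order (e.g. Natasha, then OPF, then LLM)."""
    accepted = []
    for layer in layers:
        for span in sorted(layer, key=lambda s: s[0]):
            start, end = span[0], span[1]
            # keep only spans that don't overlap an already-accepted one
            if all(end <= a[0] or start >= a[1] for a in accepted):
                accepted.append(span)
    return sorted(accepted)

natasha = [(0, 4, "PER", "natasha")]
llm     = [(2, 9, "MED", "llm"), (12, 16, "DATE", "llm")]
print(merge_spans([natasha, llm]))
# [(0, 4, 'PER', 'natasha'), (12, 16, 'DATE', 'llm')]
```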

Quick Start:

# Install prerequisites
pip install natasha setuptools pymorphy2-dicts-ru
pip install 'opf @ git+https://github.com/openai/privacy-filter.git'
ollama pull qwen2.5:3b

# Copy to skills directory
cp -r session-anonymizer ~/.claude/skills/

# Anonymize a transcript
python3 ~/.claude/skills/session-anonymizer/scripts/anonymize.py session.txt

# Pipe from stdin
cat transcript.md | python3 ~/.claude/skills/session-anonymizer/scripts/anonymize.py --json

# Batch process a folder
python3 ~/.claude/skills/session-anonymizer/scripts/anonymize.py --batch ~/sessions/ -o ~/clean/

# Use pseudonyms instead of tags
python3 ~/.claude/skills/session-anonymizer/scripts/anonymize.py session.txt --pseudonyms

# Fast mode (Natasha only, instant)
python3 ~/.claude/skills/session-anonymizer/scripts/anonymize.py session.txt --layers natasha

Depends on: natasha (pip), opf (GitHub), Ollama with qwen2.5:3b. Each layer is optional.

Use when: Anonymizing session transcripts before AI analysis — pipe through the anonymizer before sending to Claude, ChatGPT, or any other tool. Also for supervision preparation, research datasets, and regulatory compliance (152-FZ, GDPR, HIPAA).


Meeting Processor

Intelligent meeting transcript processor that auto-detects meeting type (leadgen, partnership, coaching, internal) and applies type-specific structured extraction with optional interactive clarification.

Features:

  • Auto-detection of meeting type from transcript content
  • Interactive mode with AskUserQuestion for ambiguous details
  • Batch mode for high-confidence extraction without interaction
  • Type-specific extractors: leadgen (deal stage, commitments, budget), partnership (strategic alignment, fit assessment), coaching (delegates to coaching-session-summarizer)
  • Appends structured ## Meeting Analysis section to transcript files
  • Updates frontmatter with meeting_type, processed_date, processing_mode
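
Auto-detection can be as simple as scoring keyword hits per meeting type and picking the winner, with the score gap deciding whether to ask clarifying questions. A toy sketch under that assumption (the keyword lists are invented for illustration; the skill uses an LLM via Cerebras):

```python
KEYWORDS = {
    "leadgen":     ["pricing", "budget", "proposal", "next steps", "demo"],
    "partnership": ["partnership", "co-marketing", "integration", "alliance"],
    "coaching":    ["goal", "reflection", "blocker", "feeling"],
    "internal":    ["standup", "sprint", "retro", "roadmap"],
}

def detect_meeting_type(transcript):
    """Score each meeting type by keyword hits; return (type, confidence)."""
    text = transcript.lower()
    scores = {t: sum(text.count(k) for k in kws) for t, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1
    return best, scores[best] / total

print(detect_meeting_type("We discussed pricing and the demo, then next steps."))
# ('leadgen', 1.0)
```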

Quick Start:

# Copy to skills directory
cp -r meeting-processor ~/.claude/skills/

# Install dependencies
pip install openai pyyaml

# Process a transcript interactively
python3 ~/.claude/skills/meeting-processor/scripts/process.py <transcript-file> --mode interactive

# Batch mode (no interaction)
python3 ~/.claude/skills/meeting-processor/scripts/process.py <transcript-file> --mode batch

# Force meeting type
python3 ~/.claude/skills/meeting-processor/scripts/process.py <transcript-file> --type leadgen

Use when: Processing meeting transcripts after Fathom/Granola sync, or when asked to analyze/summarize a meeting. Requires CEREBRAS_API_KEY environment variable.


Session Search

Semantic search across Claude Code session transcripts. Combines keyword pre-filtering with LLM-powered relevance evaluation to find previous sessions about specific topics, debugging conversations, research tasks, or past work.

Features:

  • Keyword pre-filtering for fast candidate selection across thousands of sessions
  • Meaningful excerpt extraction prioritizing keyword-matching content over boilerplate
  • Smart project name parsing from session paths
  • Filters out system reminders and skill descriptions from results
  • Configurable lookback period (default 90 days) and result count
  • Outputs structured data for Claude's semantic relevance scoring
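
The pre-filter only has to be cheap and high-recall, since the LLM does the real relevance judgment afterwards. A toy sketch of one such scoring scheme (score_session and its weighting are illustrative, not the skill's actual heuristic):

```python
def score_session(text, keywords):
    """Cheap keyword pre-filter: fraction of query keywords present,
    lightly weighted by occurrence count. Sessions scoring 0 are
    dropped before any LLM relevance call."""
    text = text.lower()
    hits = {k: text.count(k.lower()) for k in keywords}
    present = sum(1 for n in hits.values() if n)
    if not present:
        return 0.0
    return present / len(keywords) + 0.01 * sum(hits.values())

sessions = {"a": "debugging the auth flow token refresh",
            "b": "obsidian vault cleanup"}
query = ["auth", "debugging"]
ranked = sorted(sessions, key=lambda s: score_session(sessions[s], query),
                reverse=True)
print(ranked)  # ['a', 'b']
```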

Quick Start:

# Copy to skills directory
cp -r session-search ~/.claude/skills/

# Search for sessions about a topic
/session-search "debugging auth flow"

# With custom parameters (20 results, 180 days lookback)
/session-search "obsidian vault" 20 180

Use when: Finding previous Claude Code sessions about specific topics, locating past debugging conversations, or searching for research/planning sessions.

Balanced Dialog ⭐ NEW

Evidence-based dialogue mode that replaces sycophantic AI responses with structured, critical analysis. Five modes for different contexts — from quick gut-checks to deep Socratic dialogue.

Modes:

  • FULL — 4-move structured analysis: Surface Merits → Rigorous Challenge → Expansion → Refinement
  • INTERACTIVE — Socratic Q&A, one move at a time with user input at each step
  • TLDR — 3-5 line insight box: one fact, one challenge, one action
  • STEELMAN — strongest argument + strongest counter-argument. For debate prep
  • DECISION — tradeoff table + the call. For when analysis is done

Output Modifiers:

  • --table — ASCII pro/contra table
  • --refs — full academic citations with DOI validation

Meta-Rules:

  • No flattery, no filler phrases, no opinion statements
  • Quantified confidence levels (~70% confident...)
  • Scientific citation format with DOI web-search validation
  • Explicit uncertainty flagging
  • Subjective vs objective separation

Quick Start:

# Install via npx
npx skills add glebis/claude-skills -s balanced

# Or copy manually
cp -r balanced ~/.claude/skills/

# Quick analysis
/balanced "AI agents will replace most knowledge work within 5 years"

# Steelman mode for debate prep
/balanced steelman "remote work is more productive than office work"

# TLDR with table
/balanced tldr --table "should I migrate from REST to GraphQL?"

# Interactive Socratic dialogue
/balanced i "consciousness is an illusion"

# Onboarding — pick your default mode
/balanced onboard

Use when: You need honest, structured feedback instead of agreement — testing assumptions, evaluating claims, preparing arguments, making decisions.


Linear CLI ⭐ NEW

Standalone CLI for the Linear issue tracker with browser-based OAuth via Linear's MCP server. Zero dependencies beyond Python 3 — authenticates through Dynamic Client Registration + PKCE, communicates via MCP JSON-RPC over Streamable HTTP.

Features:

  • 🔐 Browser OAuth (no API keys needed) — dynamic client registration + PKCE flow
  • 📋 List, create, update issues with human-readable output
  • 🔍 Filter by team, status, assignee, priority
  • 💬 Add comments to issues
  • 📊 View teams, statuses, labels, workflow states
  • 🛠️ tools command to discover all available MCP operations
  • 🗂️ Config stored in ~/.config/linear/ (XDG compliant)
  • 📦 Single Python file, no pip dependencies
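
The PKCE half of that flow is small enough to show in full. A sketch of generating the verifier/challenge pair with the standard S256 method from RFC 7636 (this illustrates the technique, it is not a copy of the skill's code):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge:
    challenge = base64url(sha256(verifier)), with padding stripped."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

# The verifier stays local; only the challenge goes in the authorize URL,
# and the verifier is sent later when exchanging the code for a token.
verifier, challenge = make_pkce_pair()
print(len(challenge))  # 43  (sha256 -> 32 bytes -> 43 base64url chars)
```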

Quick Start:

# Copy to skills directory
cp -r linear ~/.claude/skills/

# Authenticate (opens browser)
~/.claude/skills/linear/scripts/linear auth

# List teams
~/.claude/skills/linear/scripts/linear teams

# Create an issue
~/.claude/skills/linear/scripts/linear create "Fix login bug" --team GLE --priority high --assignee me --due today

# List your issues
~/.claude/skills/linear/scripts/linear list --mine

# Update an issue
~/.claude/skills/linear/scripts/linear update GLE-123 --state "In Progress"

Use when: Managing Linear issues from the terminal or Claude Code — creating tasks, querying backlogs, updating statuses, or integrating Linear into automated workflows.


Skill Studio ⭐ NEW

Interview-driven automation design tool. Runs a coverage-driven JTBD interview (text or voice) to capture what to build, for whom, and why — then exports a one-page design.md spec plus an SVG design map. Sits between "should I automate this?" (automation-advisor) and "how do I package this as a skill?" (skill-creator).

Features:

  • 🎯 Coverage-driven JTBD interview with 22-field DesignJSON schema
  • 🎙️ Text mode (runs natively in Claude Code) or voice mode (Daily + Groq Whisper + Deepgram TTS)
  • 📊 3 depth levels: sprint (~5–7 questions), standard (~15–20), deep (~25–35)
  • 🎨 4 interview styles: scenario-first, socratic, metaphor-first, form
  • 🔄 4 presets: ai-agent, life-automation, knowledge-work, custom
  • 📥 Session seeding from prior Claude Code transcripts (propose-from-session)
  • 📝 Exports design.md + design.svg — ready to hand off to skill-creator

Quick Start:

# Install CLI
pip install -e ~/.claude/skills/skill-studio

# Text mode (default)
/skill-studio

# Voice mode (requires Daily, Groq, Deepgram keys)
skill-studio new --voice --preset ai-agent --depth standard

Use when: Designing a new skill, agent, automation, or workflow — transforms "I want a bot that..." into a structured spec with concrete scenarios, triggers, inputs/outputs, and guardrails.


Nano Banana (Gemini Image Generation) ⭐ NEW

Generate and edit images using Google's Gemini image generation models. Supports style presets, platform-specific sizing, variants, image editing, and reference images for style transfer.

Features:

  • 🎨 Text-to-image generation via Gemini API (Nano Banana 2 model)
  • 🖼️ Platform presets: YouTube thumbnails, slides, blog headers, social media
  • 🎭 Style presets with customizable parameters
  • ✏️ Image editing via inlineData (modify existing images)
  • 🔄 Variant generation from existing outputs
  • 📐 Reference images for style transfer
  • 🔐 SOPS-encrypted API key management

Quick Start:

cp -r nano-banana ~/.claude/skills/
# Requires Google AI API key (auto-decrypted via SOPS)

Use when: Generating images for presentations, social media, blog posts, or editing existing images with AI.


Lab Retro

Final retrospective and self-assessment for Claude Code Lab graduates. Four interactive exercises: progress audit, best prompt showcase, monthly plan, and structured feedback.

Quick Start:

cp -r lab-retro ~/.claude/skills/
/lab-retro

Use when: Completing a Claude Code Lab cohort — consolidates learning, captures best work, and generates a forward plan.


JTBD (Jobs to Be Done) ⭐ NEW

Terminal-first JTBD engine. Interview fast, kill jargon, capture real switching forces (Push/Pull/Habit/Anxiety), score opportunities with ODI, and export structured artifacts that product, marketing, and design workflows can use immediately.

Features:

  • 🎙️ 4 modes: live interview, transcript ingest, review mining, update existing brief
  • 🔄 3-pass interview: core questions → Switch forces (with 6-moment timeline) → Job Map decomposition
  • 🚫 Jargon Kill Switch with 30+ banned phrases and evidence-demand replacements
  • 📊 Granularity Gate: 5-dimension validator (actor, context, workaround, outcome, evidence) blocks vague output
  • 🎯 ODI opportunity scoring: importance × satisfaction → prioritized outcomes
  • 📝 4 output templates: jtbd.json, one-pager.md, messaging-angles.md, gtm-brief.md
  • 🔍 Review mining: 3-axis clustering (pain × outcome × workaround) with convergence threshold and uniqueness filter
  • 📋 Transcript ingest: auto-detect Q/A or speaker format, map to schema fields with confidence scores
  • ✅ 97 tests, zero external dependencies — all scripts work offline
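
ODI scoring combines how important an outcome is with how poorly it is currently satisfied. A minimal sketch of the classic formula (the sample outcomes are invented; scales and thresholds follow common ODI practice, which may differ in detail from the skill's odi_score.py):

```python
def odi_opportunity(importance, satisfaction):
    """Classic ODI formula: opportunity = importance + max(importance - satisfaction, 0).
    Both inputs on a 0-10 scale; scores above ~12 typically mark
    underserved outcomes worth building for."""
    return importance + max(importance - satisfaction, 0)

outcomes = [
    ("minimize time to find the right transcript", 9, 3),
    ("minimize effort to share a brief", 6, 7),
]
ranked = sorted(outcomes, key=lambda o: odi_opportunity(o[1], o[2]), reverse=True)
for name, imp, sat in ranked:
    print(f"{odi_opportunity(imp, sat):>4}  {name}")
```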

Architecture:

jtbd/
├── SKILL.md              # 4 modes, 3 passes, tone + scope rules
├── references/           # Switch forces, ODI, job map, jargon blacklist,
│                         # granularity fixes, review taxonomy
├── scripts/              # validate_granularity.py, validate_outcome.py,
│                         # ingest_transcript.py, mine_reviews.py, odi_score.py
├── templates/            # one-pager, messaging-angles, gtm-brief, review-brief,
│                         # example_good.json, example_bad_then_fixed.json
└── tests/                # 97 tests across 5 test files

Quick Start:

# Copy to skills directory
cp -r jtbd ~/.claude/skills/

# Live interview
/jtbd

# Ingest a transcript
/jtbd "Turn this interview into a JTBD brief" (then provide path)

# Mine reviews
/jtbd "Mine these reviews for jobs" (then provide CSV/JSON path)

# Update existing brief
/jtbd "Update my JTBD brief with new data"

# Run tests
cd ~/.claude/skills/jtbd && python3 -m pytest tests/ -v

Use when: Articulating what a project does, who it's for, and why it matters. Converting interview transcripts or product reviews into structured briefs. Prioritizing what to build next. Generating messaging copy from switching forces.


timeBuzzer LED ⭐ NEW

Control the timeBuzzer hardware LED via MIDI — set colors, run effects (pulse, strobe, rainbow, fade), and send semantic status signals that pair with the Hue skill.

Features:

  • 🎨 Named colors (red, orange, yellow, green, cyan, blue, purple, magenta, pink, white, warm) + hex + RGB
  • 💫 Effects: pulse (BPM-controlled), strobe, rainbow, fade
  • 🚦 8 semantic signals: success, error, warning, thinking, working, idle, attention, focus
  • 🔲 Per-segment control (3 independent RGB segments)
  • 🔗 Same signal vocabulary as the Hue skill for synchronized desk + room feedback

Quick Start:

# Copy to skills directory
cp -r timebuzzer-led ~/.claude/skills/

# Set a color
python3 scripts/buzzer_led.py color blue

# Run an effect
python3 scripts/buzzer_led.py pulse cyan --bpm 30 --seconds 5

# Send a status signal
python3 scripts/buzzer_led.py signal success

Depends on: timeBuzzer USB device, python-rtmidi (pip install python-rtmidi)

Use when: You want visual desk feedback for task status, build results, or ambient signaling — especially combined with Hue for room-wide effects.


Tufte Report ⭐ NEW

Generate Tufte-inspired data reports as standalone HTML files. Combines editorial narrative with interactive data visualization: high information density, minimal chart junk, typography-first design with EB Garamond and Monaspace Argon.

Features:

  • Universal data adapter: normalize any source (CSV, JSON, SQLite, API) into a standard intermediate format
  • Composable block library: 8 typed blocks (sparkline-row, kpi-card, trend-chart, data-table, correlation-matrix, narrative, heatmap, strip-chart) with defined data contracts
  • Behavior-outcome correlations (e.g. screen time vs HRV)
  • Multiple precision levels: headline → summary → deep dive
  • Live-reload preview server (zero-dependency Python)
  • Responsive, mobile-friendly, accessible charts
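
The universal data adapter idea is that every block consumes one normalized shape, so only the per-source adapter changes. A toy sketch for CSV input (the {"t", "v"} field names are illustrative, not the skill's real intermediate schema):

```python
import csv
import io

def normalize_csv(raw, t_col, v_col):
    """Adapt one source (here CSV text) into a list of {"t", "v"}
    points -- the shared shape every block type would consume."""
    reader = csv.DictReader(io.StringIO(raw))
    return [{"t": row[t_col], "v": float(row[v_col])} for row in reader]

raw = "date,hrv\n2026-05-01,48\n2026-05-02,52\n"
series = normalize_csv(raw, "date", "hrv")
print(series[0])  # {'t': '2026-05-01', 'v': 48.0}
```

With JSON, SQLite, or API adapters emitting the same shape, a sparkline-row or trend-chart block never needs to know where the data came from.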

Quick Start:

cp -r tufte-report ~/.claude/skills/
/tufte-report

Use when: Creating data reports, health dashboards, personal analytics, financial reviews, lab cohort metrics, or any document that combines narrative text with interactive charts.


🚀 Installation

Plugin Marketplace (Claude Code)

Register the repo as a skill source, then install individual skills:

# One-time: add the marketplace
claude plugin marketplace add glebis/claude-skills

# Install any skill
claude plugin install tdd@glebis-skills
claude plugin install doctorg@glebis-skills
claude plugin install deep-research@glebis-skills

Using the skills CLI

npx skills add glebis/claude-skills --skill tdd
npx skills add glebis/claude-skills --skill doctorg
npx skills add glebis/claude-skills --skill deep-research

Manual Installation

# Clone the repository
git clone https://github.com/glebis/claude-skills.git

# Copy desired skill to Claude Code skills directory
cp -r claude-skills/<skill-name> ~/.claude/skills/

Some skills require additional setup after installation:

# For llm-cli: Install Python dependencies
cd ~/.claude/skills/llm-cli
pip install -r requirements.txt

# For deep-research: Set up environment
cd ~/.claude/skills/deep-research
cp .env.example .env
# Edit .env and add your OPENAI_API_KEY

# For youtube-transcript: Install yt-dlp
pip install yt-dlp

📋 Requirements

LLM CLI Skill

  • Python dependencies from the skill's requirements.txt: pip install -r requirements.txt

Deep Research Skill

  • Python 3.7+
  • OpenAI API key with access to Deep Research API
  • Internet connection

⚠️ Important: OpenAI requires organization verification to access certain models via API, including o4-mini-deep-research.

To verify your organization:

  1. Go to https://platform.openai.com/settings/organization/general
  2. Click "Verify Organization"
  3. Complete the automatic ID verification process
  4. Wait up to 15 minutes for access to propagate

Without verification, you'll receive a model_not_found error when trying to use the Deep Research API.
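If you want to detect this case programmatically, a small helper can classify the error payload. This sketch assumes the standard OpenAI `{"error": {"code": ...}}` response shape; the sample payload below is illustrative, not captured output:

```python
import json

def is_verification_error(body: str) -> bool:
    """Return True if an API error response indicates the model is
    not available to this organization (code: model_not_found).
    Assumes the standard {"error": {"code": ...}} error shape."""
    try:
        err = json.loads(body).get("error") or {}
    except json.JSONDecodeError:
        return False
    return err.get("code") == "model_not_found"

# Illustrative payload of the kind described above
sample = (
    '{"error": {"message": "The model `o4-mini-deep-research` does not '
    'exist or you do not have access to it.", '
    '"type": "invalid_request_error", "code": "model_not_found"}}'
)
print(is_verification_error(sample))  # True
```

A check like this lets the skill print the verification steps above instead of a raw traceback when an unverified organization calls the Deep Research API.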

YouTube Transcript Skill

  • Python 3.7+
  • yt-dlp: pip install yt-dlp
  • Internet connection

Telegram Skill

  • Python 3.8+
  • telethon: pip install telethon
  • Telegram API credentials (api_id, api_hash from https://my.telegram.org)
  • Pre-configured session in ~/.telegram_dl/ (run telegram_dl.py to authenticate)

Telegram Telethon Skill

  • Python 3.8+
  • telethon, rich, questionary: pip install telethon rich questionary
  • Telegram API credentials (api_id, api_hash from https://my.telegram.org)
  • For voice transcription: GROQ_API_KEY env var or pip install openai-whisper
  • Config stored in ~/.config/telegram-telethon/

Gmail Skill

  • Python 3.8+
  • Google API libraries: pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
  • OAuth credentials from Google Cloud Console
  • Gmail API enabled in your Google Cloud project

Brand Agency Skill

  • Node.js 18+
  • Playwright: npm install playwright
  • Google Fonts (loaded automatically via CSS)

Health Data Skill

  • Python 3.8+
  • SQLite database at ~/data/health.db (imported from Apple Health export)
  • To import: Use the apple_health_export project

FireCrawl Research Skill

  • Python 3.8+
  • python-dotenv, requests: pip install python-dotenv requests
  • FireCrawl API key from https://firecrawl.dev

Transcript Analyzer Skill

Doctor G Skill

  • Python 3.8+
  • Requires health-data skill for Apple Health integration (optional)
  • Requires tavily-search skill for --deep mode (optional)
  • Requires firecrawl-research skill for --full mode (optional)

GitHub Gist Skill

💡 Usage

Skills are automatically triggered by Claude Code based on your requests. For example:

User: "Research the most effective open-source RAG solutions"
Claude: [Triggers deep-research skill]
        - Asks clarifying questions
        - Enhances prompt with parameters
        - Runs comprehensive research
        - Saves markdown report with sources

🔧 Configuration

Deep Research

Create a .env file in the skill directory:

OPENAI_API_KEY=your-key-here

Or export as environment variable:

export OPENAI_API_KEY="your-key-here"
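As a sketch of the lookup order this implies (a real environment variable wins, then the skill's .env file), a minimal stdlib-only reader could look like the following; the skill's actual loading code may differ, for example by using python-dotenv:

```python
import os
from pathlib import Path

def load_api_key(env_file=".env"):
    """Illustrative lookup order: prefer an exported environment
    variable, then fall back to an OPENAI_API_KEY=... line in the
    skill's .env file. Returns None if neither is set."""
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        return key
    path = Path(env_file)
    if path.exists():
        for line in path.read_text().splitlines():
            line = line.strip()
            if line.startswith("OPENAI_API_KEY="):
                return line.split("=", 1)[1].strip().strip('"')
    return None
```

Either configuration route from above ends up at the same place: the exported variable simply takes precedence if both are set.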

📖 Documentation

Each skill includes comprehensive documentation:

  • SKILL.md - Complete skill overview and usage guide
  • CHANGELOG.md - Version history and updates
  • references/ - Detailed workflow documentation

🤝 Contributing

Contributions are welcome! To add a new skill:

  1. Fork this repository
  2. Create a new skill following the structure in deep-research/
  3. Include comprehensive documentation
  4. Submit a pull request

📝 Skill Structure

skill-name/
├── SKILL.md              # Skill metadata and documentation
├── CHANGELOG.md          # Version history
├── .env.example          # Example environment configuration
├── scripts/              # Executable orchestration scripts
├── assets/               # Core scripts and resources
└── references/           # Detailed documentation
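The layout above can be scaffolded with a short script. This is a sketch: the stub contents are placeholders, and the frontmatter fields mirror the name/description header used by the skills in this repo:

```python
from pathlib import Path

def scaffold_skill(root, name, description):
    """Create the skill directory layout shown above with stub files.
    Stub contents are placeholders to fill in before submitting."""
    skill = Path(root) / name
    for sub in ("scripts", "assets", "references"):
        (skill / sub).mkdir(parents=True, exist_ok=True)
    (skill / "SKILL.md").write_text(
        f"---\nname: {name}\ndescription: {description}\n---\n\n# {name}\n"
    )
    (skill / "CHANGELOG.md").write_text("# Changelog\n\n## 0.1.0\n- Initial release\n")
    (skill / ".env.example").write_text("")
    return skill
```

Running `scaffold_skill(".", "my-skill", "One-line description")` produces the tree above, ready for the documentation and scripts a pull request should include.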

🏗️ Building Skills

For guidance on creating your own skills, see the skill-creator guide.

💪 Support My Work

Learn Claude Code hands-on by joining the Claude Code Lab, a course where we build real projects with AI.

📜 License

MIT License - see individual skill directories for specific licenses.


Note: Skills require Claude Code to function. These are not standalone tools.