USP
This collection uniquely packages eight distinct engineering habits, from strategic planning (/think) to systematic debugging (/hunt) and UI design (/design), into Claude skills, so AI output stays precise and aligned with best practices.
Use cases
- Designing new features with a structured plan
- Systematically debugging and fixing software issues
- Reviewing code changes before merging or releasing
- Conducting in-depth research and synthesizing information
- Polishing written prose for clarity and natural tone
Detected files (8)
skills/read/SKILL.md
---
name: read
description: "Fetches any URL or PDF as clean Markdown for reading, quoting, citation, or downstream work. Handles paywalls, JS-heavy pages, X/Twitter, and Chinese platforms via proxy cascade. Not for local text files already in the repo."
when_to_use: "any URL or PDF to fetch, 看这个链接, 读一下, 看看这个网页, 抓取网页, read this, check this URL, fetch this page"
metadata:
  version: "3.14.0"
---

# Read: Fetch Any URL or PDF as Markdown

Prefix your first line with 🥷 inline, not as its own paragraph.

Convert any URL or local PDF to clean Markdown. No analysis, no summary, no discussion of the content unless explicitly asked after the fetch.

## Routing

| Input | Method |
|-------|--------|
| `feishu.cn`, `larksuite.com` | Feishu API script |
| `mp.weixin.qq.com` | Proxy cascade first, built-in WeChat article script only if the proxies fail |
| `.pdf` URL or local PDF path | PDF extraction |
| GitHub URLs (`github.com`, `raw.githubusercontent.com`) | Prefer raw content or `gh` first. Use the proxy cascade only as fallback. |
| `x.com`, `twitter.com` | Proxy cascade (r.jina.ai keeps image URLs). Do not try WebFetch; it 402s. |
| Everything else | Proxy cascade |

After routing, load `references/read-methods.md` and run the commands for the chosen method.

## Output Format

```
Title: {title}
Author: {author} (if available)
Source: {platform}
URL: {original url}

Content
{full Markdown, truncated at 200 lines if long}
```

## Saving

**Default: display only.** Show the converted Markdown inline. Do not create a file.

**Save to `~/Downloads/{title}.md`** with YAML frontmatter when any of these are true:

- User explicitly asks: "save", "download", "保存", "下载", "keep this"
- Called from within `/learn` (Phase 1 expects a file to move)
- User says "save" or "保存" after seeing the output (use conversation content, do not re-fetch)

When saving:

- If the file already exists, append `-1`, `-2`, etc. Never overwrite without confirmation.
- Tell the user the saved path.

When not saving:

- Do not mention that a file was not saved. Just show the content.

## Images

By default only save Markdown. Download images only when the user explicitly asks: "download images", "save images", "带图", "下载图片", or similar.

When asked, after saving the Markdown:

1. Extract image URLs: `grep -oE 'https?://[^ )"]+\.(jpg|jpeg|png|webp|gif)' {md_path} | sort -u`
2. Create `~/Downloads/{title}-images/` and curl each URL in parallel (`&` + `wait`). Use the same proxy env vars as the fetch step.
3. Report the count and folder path. If any download fails, list the failed URLs.
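A minimal sketch of steps 1-2 above, assuming the Markdown was saved as `~/Downloads/title.md`. Every path here is a hypothetical placeholder, and filename collisions between images are not handled.

```bash
# Sketch of the Images flow: extract unique image URLs, download in
# parallel with `&` + `wait`. Paths are illustrative placeholders.
md_path="$HOME/Downloads/title.md"
img_dir="$HOME/Downloads/title-images"
mkdir -p "$img_dir"
urls=$(grep -oE 'https?://[^ )"]+\.(jpg|jpeg|png|webp|gif)' "$md_path" | sort -u)
for url in $urls; do
  curl -fsSL -o "$img_dir/$(basename "$url")" "$url" &
done
wait
echo "downloaded $(ls "$img_dir" | wc -l) images -> $img_dir"
```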
## Hard Rules

- **Do not summarize or analyze the content.** Your job is conversion and storage, not interpretation.
- **Never overwrite without confirmation.** If the target filename already exists, use an auto-incremented suffix.
- **Stop after the save report.** Do not suggest follow-up actions ("Would you like me to summarize?", "Next, you could...") unless the user asks.

## Gotchas

| What happened | Rule |
|---------------|------|
| Fetched a paywalled article and returned a login page as Markdown | Inspect the first 10 lines for paywall signals ("Subscribe", "Sign in", "Continue reading"). If found, stop and warn the user. Do not save the login page. |
| User said "read this" but meant "summarize and act on it" | Deliver the Markdown first, then ask what to do next. Do not save unless asked. |
| URL returned empty page or paywall with no content | Report the failure clearly: what was tried, what failed. Do not fabricate or guess the content. |
| r.jina.ai or defuddle.md returned empty for a JS-heavy site | Try the local fallback (`agent-fetch` or `defuddle parse`) before giving up. |
| Network failures | Prepend local proxy env vars if available and retry once. |
| Long content | Preview with `head -n 200` first; mention truncation when reporting the save. |
| Local fallback tools returned JSON | Extract the Markdown-bearing field. Raw JSON is not a valid final output for `/read`. |
| All methods failed | Stop and tell the user what was tried and what failed. Suggest opening the URL in a browser or providing an alternative. Do not silently return empty or partial results. |

## Content Extraction for Restyling

Activate when: "extract content", "reformat this document", or the user hands over a document to restyle.

Extract and tag:

- **Headings**: H1/H2/H3 hierarchy
- **Body paragraphs**: Plain text, no styling
- **Lists**: Bullet vs numbered, nesting level
- **Metrics/data**: Numbers, dates, quantifiable claims
- **Images/diagrams**: Descriptions, captions

Output: Clean, tagged content ready to feed into kami or other typesetting tools.

skills/think/SKILL.md
---
name: think
description: "Turns rough ideas into approved, decision-complete plans with validated structure before writing code. Covers new features, architecture decisions, and value judgments about whether to build, keep, or remove something. Not for bug fixes or small edits."
when_to_use: "出方案, 给方案, 深入分析, 怎么设计, 用什么方案, 判断一下, 有没有必要, 值不值得, what's the best approach, plan this, how should I, should we keep this"
metadata:
  version: "3.17.0"
---

# Think: Design and Validate Before You Build

Prefix your first line with 🥷 inline, not as its own paragraph.

Turn a rough idea into an approved plan. No code, no scaffolding, no pseudo-code until the user approves.

Give opinions directly. Take a position and state what evidence would change it. Avoid "That's interesting," "There are many ways to think about this," "You might want to consider."

## Lightweight Mode

Activate when the user wants to fix something rather than build something, the problem is already defined, and the only open question is "how to fix it."

Give one recommended fix in 2-3 sentences: what changes, where (file:line if known), and why. Name the brute-force version in one line first; default to it unless the user wants elegance. List involved files, flag explicitly if more than 5. State one risk. Wait for approval before implementing.

Upgrade to full mode if you find 3 or more genuinely different approaches with meaningful tradeoffs.

## Evaluation Mode

Activate when the user wants to judge whether something should exist, be kept, exposed, or removed. Typical triggers: "判断一下", "有没有必要", "值不值得", "should we keep this", "is this worth it", "我不想做", "商业前景", "有没有必要继续".

State the evaluation target and what kind of judgment is needed (value, risk, or tradeoff). Take a current-state snapshot: what it does, who uses it, what depends on it; grep and read before opining.

**Output format (Kill/Keep/Pivot):**

Line 1: one of **Kill** / **Keep** / **Pivot** as the verdict. No preamble.

Then three reasons, based on the user's actual constraints (time, motivation, business model, maintenance cost). Not generic tradeoffs.

If the verdict is **Pivot**: list specific directions on separate lines, one per line, each actionable.

If the verdict is **Kill** or major rework: list impact scope (files, dependents, migration cost) before asking for confirmation.

Do not use a build-plan template here. Do not list options. Give one verdict.

Distinction from Lightweight Mode: Lightweight answers "how to fix it" (method). Evaluation answers "should it exist" (value judgment).

## Before Reading Any Code

- Confirm the working path: `pwd` or `git rev-parse --show-toplevel`. Never assume `~/project` and `~/www/project` are the same.
- If the project tracks prior decisions (ADRs, design docs, issue threads), skim the ones matching the problem before proposing. Skip if none exist.
- If the plan involves a default value, env var, or config field, open the project's actual config file (e.g. `pake.json`, `tauri.conf.json`, `package.json`, `.env`) and lift the live value. Never quote a default from memory or docs. A minimal version of these checks is sketched below.
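A minimal sketch of the pre-read checks, assuming a Node-style project; `package.json` and the `"port"` field are hypothetical stand-ins for whatever config file and value the plan actually touches.

```bash
# Sketch of the pre-read checks. package.json and "port" are placeholders;
# substitute pake.json, .env, etc. as the plan requires.
pwd
git rev-parse --show-toplevel          # confirm the repo root before any file ops
# Lift the live value instead of quoting a default from memory:
grep -n '"port"' package.json 2>/dev/null || echo "field not found; read the actual config"
```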
## Check for Official Solutions First

Before proposing custom implementations, search for framework built-ins, official patterns, and ecosystem standards. Use Context7 MCP tools to query latest docs when available. If an official solution exists, it is the default recommendation unless you can articulate why it is insufficient for this specific case.

## Propose Approaches

Give one recommended approach with rationale. Include effort, risk, and what existing code it builds on. Mention one alternative only if the tradeoff is genuinely close (>40% chance the user would prefer it). Always include one minimal option.

For the recommendation, identify the most fragile assumption (premise collapse) and state it explicitly: "This plan assumes X. If X does not hold, Y happens." If the assumption is load-bearing and fragile, deform the design to survive its failure.

**Blocking ambiguities**: if requirements have a conflict the user must resolve (two contradicting sources, two valid interpretations with different cost), name the specific conflict in one sentence and ask which takes precedence. Do not silently pick.

**Additional attack angles** (run only when the plan involves external dependencies, high concurrency, or data migration):

| Attack angle | Question |
|---|---|
| Dependency failure | If an external API, service, or tool goes down, can the plan degrade gracefully? |
| Scale explosion | At 10x data volume or user load, which step breaks first? |
| Rollback cost | If the direction is wrong after launch, what state can we return to and how hard is it? |

If an attack holds, deform the design to survive it. If it shatters the approach entirely, discard it and tell the user why. Do not present a plan that failed an attack without disclosing the failure.

Get approval before proceeding. If the user rejects, ask specifically what did not work. Do not restart from scratch.

## Validate Before Handing Off

- More than 8 files or 1 new service? Acknowledge it explicitly.
- More than 3 components exchanging data? Draw an ASCII diagram. Look for cycles.
- Every meaningful test path listed: happy path, errors, edge cases.
- Can this be rolled back without touching data?
- Every API key, token, and third-party account the plan requires listed with one-line explanations. No credential requests mid-implementation.
- Every MCP server, external API, and third-party CLI the plan depends on verified as reachable before approval.

**No placeholders in approved plans.** Every step must be concrete before approval. Forbidden patterns: TBD, TODO, "implement later," "similar to step N," "details to be determined." A plan with placeholders is a promise to plan later.

## Implementation Handoff

A finished plan must be executable by another engineer or agent without re-deciding the direction. Include:

- Scope and non-scope.
- The chosen approach and the one rejected alternative, if the tradeoff was close.
- Public API, schema, command, config, or file-interface changes, if any.
- Verification commands and manual acceptance checks.
- Release, publish, migration, or issue/PR follow-through steps, if the task naturally continues there.
- Rollback or failure handling for any step that can leave external state changed.

When the user later says "Implement the plan", "可以干", "直接改", "整", or equivalent, treat that as approval of the written plan. Do not re-litigate the design. State which plan is being executed, check for obvious drift in the repo, and proceed. If the environment has changed enough that the plan is unsafe, name the specific drift and stop before editing.

## Gotchas

| What happened | Rule |
|---------------|------|
| Moved files to `~/project`, repo was at `~/www/project` | Run `pwd` before the first filesystem operation |
| Asked for API key after 3 implementation steps | List every dependency before handing off |
| User said "just do it" or equivalent approval | Treat as approval of the recommended option. State which option was selected, finish the plan. Do not implement inside `/think`. |
| Planned MCP workflow without checking if MCP was loaded | Verify tool availability before handing off, not mid-implementation |
| Rejected design restarted from scratch | Ask what specifically failed, re-enter with narrowed constraints |
| User said "just fix X" and skipped /think | If the fix touches 3+ files or needs a method choice, pause and run Lightweight Mode |
| User approved a concrete plan and the agent debated the plan again | Execute the approved plan. Only stop for repo drift, missing permissions, or unsafe external state |
| Picked a regional or locale-specific API variant without checking | List all regional or locale differences before writing integration code |
| Introduced a second language or runtime into a single-stack project | Never add a new language or runtime without explicit approval |
| User said "判断一下这个报错" and got Evaluation Mode | "判断一下" plus error/bug context means debugging; route to `/hunt`. Evaluation Mode is for value/existence judgments only |

## Output

**Approved design summary:**

- **Building**: what this is (1 paragraph)
- **Not building**: explicit out-of-scope list
- **Approach**: chosen option with rationale
- **Key decisions**: 3-5 with reasoning
- **Unknowns**: only items that are explicitly deferred with a stated reason and a clear owner. Not vague gaps. If an unknown blocks a decision, loop back before approval.

After the user approves the design, stop. Implementation starts only when requested.

## After Approval

When the plan is approved, output this guidance:

```
Plan approved. To implement: say "implement this plan".
After implementation, run `/check` to review before merging or release follow-through.
```

Keep it concise (2-3 sentences max). The user decides when to start implementation.

skills/check/SKILL.md
---
name: check
description: "Reviews code diffs and release-ready changes after implementation, executes approved implementation plans, extracts project-specific constraints from repository context, auto-fixes safe issues, and drives approved release, publish, push, release-reaction, and issue/PR follow-through. Also triages issues and PRs when the user mentions them. Not for exploring ideas, debugging, or document prose review."
when_to_use: "review, 看看代码, 检查一下, 有没有问题, 是否需要优化, 合并前, 继续优化, 优化代码, 看看issue, 看看PR, release, publish, push, release reaction, GitHub reaction, 发布, 提交, 关闭issue, 发布表情, release表情, close issue, issue close, review my code, check changes, before merge, before release, code review, code-review"
metadata:
  version: "3.22.0"
---

# Check: Review Before You Ship

Prefix your first line with 🥷 inline, not as its own paragraph.

> Note: `/review` is a built-in Anthropic plugin command for PR review. Waza uses `/check` (or the alias `code-review`) instead. Do not re-trigger `/review` from within this skill.

Read the diff, find the problems, fix what can be fixed safely, ask about the rest. Done means verification ran in this session and passed.

## Plan Execution Mode

Activate when the user's message starts with "Implement the following plan", "按计划实施", "按照计划", "整", "可以干", "直接改" followed by a plan body, or links to a `/think` output.

In this mode, do not run a code review. Instead:

1. State which plan is being executed (first heading or summary line).
2. Check for obvious repo drift: run `git status` and skim any changed files that contradict the plan. If drift makes the plan unsafe, name the specific conflict and stop.
3. Work through each plan item as a to-do. Mark each complete as you go.
4. After all items are done, run the project's verification command.
5. Transition automatically into Ship mode if the project context or current thread indicates review-then-ship.

## Default Continuation (review-then-ship)

When the project's `AGENTS.md` or the current thread explicitly asks to "commit after review", "ship if green", or equivalent, transition directly from review to the Ship flow after a clean review. Do not ask again. State "proceeding to ship" before acting.

## Project Context Extraction

This is Waza's public, standalone code-review capability. It should not depend on private machine paths or unpublished project instructions. Before reviewing, extract project constraints from repository context:

1. Read the diff and identify changed languages, frameworks, manifests, generated outputs, release files, and CI workflows.
2. Inspect public project files only as needed: README, AGENTS/CLAUDE instructions when present, package manifests, lockfiles, build configs, test configs, workflow files, and release notes.
3. Compress the findings into review context: verification commands, protected or generated files, release artifacts, domain risks, and public reply rules.
4. Apply the stricter rule when project context and this skill overlap.
5. If project docs or CI name a verification command, prefer that over auto-detection.

For the context shape, see `references/project-context.md`.

## Get the Diff

Get the full diff between the current branch and the base branch. If unclear, ask. If already on the base branch, ask which commits to review.

## Triage Mode

Activate when the user mentions: issue, PR, "review all", triage, "batch", or "批量处理". Skip the diff flow and run this instead.
**Action-first rule:** Items with a clear disposition (already fixed, duplicate, already released) get acted on immediately, without analysis paragraphs. When analyzing screenshots or images, state what you see and the suggested action in one message. Only ask the user when the disposition is genuinely ambiguous.

**Flow:** Pull open items with `gh issue list -R <repo> --state open --limit 20` and `gh pr list -R <repo> --state open`. For each item, check if a fix already shipped: `git log --oneline <latest-tag>..HEAD | grep -i "<keyword>"`. If shipped: close with note. If merged but unreleased: reply "已修复,等下一个版本 release" and close. If no fix: analyze and act. Fix now if possible (`fix: closes #N` commit); for Mole nightly-fixed items reply `@<user>, this is already fixed in the latest nightly. Upgrade: mo update --nightly` and close; for valid-but-unreleased items acknowledge and leave open; for invalid items give a one- or two-sentence reason and close.

**PR handling:** If the PR direction is accepted but the patch needs changes, prefer pushing the maintainer's fixes to the contributor's PR branch and then merging the PR. Check `maintainerCanModify` first. If branch edits are not allowed, ask the contributor to enable maintainer edits or push the needed revision; only fall back to a separate maintainer commit when timing or release safety requires it, and say so in the PR. Close without merging only when the direction is rejected, unsafe, no longer needed, or explicitly not part of the project's scope. Do not silently absorb an accepted PR into `main` and close it.

**Public reply shape (maintainer, issue or PR):**

1. Resolve `@<login>` from `gh issue view` / `gh pr view --json author`.
2. **Language:** Match the **opener's** language when it is Chinese or English. If the opener used Japanese or Korean, use English for the maintainer reply unless project docs override.
3. Open with `@<login>` and **at most one** short thanks (`感谢反馈`, `thank you for the report`, etc.). Do **not** add closing thanks stacks (`再次感谢`, `Thanks again`, long courtesy endings).
4. One or two short paragraphs: factual reason, what shipped or what is blocked, no ceremony.
5. Always give a **next step tied to releases or verification**: next App Store or GitHub release, nightly upgrade command, cache path to clear once, or exactly what info is still needed.
6. Prefer **editing** an existing maintainer comment (`PATCH /repos/{owner}/{repo}/issues/comments/{comment_id}`) when updating wording; avoid delete plus repost unless the old text must disappear from history.

Default to this shape unless `AGENTS.md` or `CLAUDE.md` in the repo contradicts it.

**Sign-off line (append to standard sign-off):**

```
triage: N reviewed, N closed, N deferred
```

## Release Worthiness Analysis

Activate when the user asks "深入分析 X 是不是值得发新版本", "is this worth a new release", "值不值得发版", or similar.

1. Run `git log <last-tag>..HEAD --oneline` (find the last tag with `git tag --sort=-version:refname | head -1`).
2. Count and classify commits: feat (new feature), fix (bug fix), chore/docs/refactor (internal).
3. Output:
   - **Commit summary**: N feat, N fix, N chore since last release
   - **Verdict**: release / skip (one line)
   - **Recommended version bump**: patch (fixes only), minor (feat present), major (breaking change)
   - **Key risk**: one sentence on the biggest risk in this batch
4. If the verdict is "release", offer to transition into Ship mode.
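A minimal sketch of steps 1-2, assuming conventional-commit subject lines (`feat:`, `fix:`, ...); adjust the prefix patterns if the project uses another convention.

```bash
# Sketch of the release-worthiness count. Assumes conventional-commit
# subjects; the grep patterns are an assumption, not a project rule.
last_tag=$(git tag --sort=-version:refname | head -1)
git log "$last_tag"..HEAD --oneline                       # raw list since last release
feat=$(git log "$last_tag"..HEAD --pretty=%s | grep -cE '^feat(\(|!|:)')
fix=$(git log "$last_tag"..HEAD --pretty=%s | grep -cE '^fix(\(|!|:)')
echo "since $last_tag: $feat feat, $fix fix"              # feat > 0 suggests a minor bump
```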
## Ship / Release Follow-through

Activate when the user asks to commit, tag, release, publish, push, reply on an issue/PR, or close an issue after a change is ready. This mode extends review; it does not skip review.

Before any public or irreversible action:

1. Extract release rules from public project context: README, manifests, CI workflows, release notes, package scripts, changelogs, and explicit user instructions in the current thread.
2. Verify generated or bundled outputs, version fields, release notes, package contents, and required artifacts are in sync. Prefer dry-run commands when the ecosystem provides them.
3. Commit only intended files. Preserve unrelated dirty work, and serialize git operations so index locks or overlapping adds do not corrupt the workflow.
4. Push, publish, tag, or create a release only when the user has explicitly approved that action. If auth, OTP, CI, registry, or network state blocks the operation, pause and report the exact blocker.
5. For issue/PR follow-through, confirm the item identity with `gh issue view` or `gh pr view` before posting. Use the **Public reply shape** from Triage Mode (mention, single thanks, facts, explicit next release or verification step). Close only when the fix is shipped, already available, invalid, duplicate, or the maintainer explicitly asked for closure.
6. For GitHub release reaction follow-through, only do it when project context or the current thread asks for it. After the release exists and required assets are verified, resolve the release id from the tag, POST every positive release reaction to `repos/<owner>/<repo>/releases/<id>/reactions` with `gh api`, and re-read reactions to confirm. Positive release reactions are `+1`, `laugh`, `heart`, `hooray`, `rocket`, and `eyes`.
7. After network or API failures, re-read the end state instead of assuming success or failure.

End with the concrete shipped state: commit hash, tag, release URL, registry/version result, pushed branch, release asset state, release reaction state, issue/PR state, and any remaining blockers. Omit fields that do not apply.
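A minimal sketch of step 6; `OWNER`, `REPO`, and `v1.2.3` are hypothetical placeholders for the real repository and tag.

```bash
# Sketch of step 6: resolve the release id from the tag, post each positive
# reaction, then re-read to confirm. OWNER/REPO/v1.2.3 are placeholders.
release_id=$(gh api "repos/OWNER/REPO/releases/tags/v1.2.3" --jq .id)
for r in +1 laugh heart hooray rocket eyes; do
  gh api -X POST "repos/OWNER/REPO/releases/$release_id/reactions" -f content="$r"
done
gh api "repos/OWNER/REPO/releases/$release_id/reactions" --jq '.[].content'
```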
## Scope

Measure the diff and classify depth:

| Depth | Criteria | Reviewers |
|-------|----------|-----------|
| **Quick** | Under 100 lines, 1-5 files | Base review only |
| **Standard** | 100-500 lines, or 6-10 files | Base + conditional specialists |
| **Deep** | 500+ lines, 10+ files, or touches auth/payments/data mutation | Base + all specialists + adversarial pass |

State the depth before proceeding.

## Did We Build What Was Asked?

Before reading code, check scope drift: do the diff and the stated goal match? Label: **on target** / **drift** / **incomplete**.

Drift signals (examples, not exhaustive -- any one is enough to label drift):

- A changed file has no connection to the stated goal
- The diff includes pure refactoring (renames, formatting, restructuring) when the goal was a bug fix or feature
- A new dependency appears that the goal did not mention
- Code unrelated to the goal was deleted or commented out
- A new abstraction or helper was introduced that is not required by the goal

## Hard Stops (fix before merging)

Examples, not exhaustive -- flag any diff that could cause irreversible harm if merged unreviewed.

- **Destructive auto-execution**: any task marked "safe" or "auto-run" that modifies user-visible state (history files, config, preferences, installed software) must require explicit confirmation.
- **Release artifacts missing**: verify every artifact listed in release notes, release templates, or project workflows exists and has been uploaded before declaring done.
- **Generated artifact drift**: if source changes require generated or bundled outputs, verify the output was regenerated and included.
- **Version skew**: release version fields across manifests, package metadata, app configs, changelogs, tags, or lockfiles must stay synchronized.
- **Unknown identifiers in diff**: any function, variable, or type introduced in the diff that does not exist in the codebase is a hard stop. Grep before writing or approving any reference: `grep -r "name" .` -- no results outside the diff = does not exist.
- **Injection and validation**: SQL, command, path injection at system entry points. Credentials hardcoded, logged, committed, or copied into public docs.
- **Dependency changes**: unexpected additions or version bumps in package.json, Cargo.toml, go.mod, requirements.txt. Flag any new dependency not obviously required by the diff.

## Specialist Review (Standard and Deep only)

Load `references/persona-catalog.md` to determine which specialists activate. Launch all activated specialists in parallel via the environment's agent or sub-agent facility when available, passing the full diff. If no parallel reviewer facility exists, run the specialist passes sequentially in the same session.

Merge findings: when two specialists flag the same code location, keep the higher severity and note cross-reviewer agreement. Findings on different code locations are never duplicates even if they share a theme.

## Autofix Routing

| Class | Definition | Action |
|-------|------------|--------|
| `safe_auto` | Unambiguous, risk-free: typos, missing imports, style inconsistencies | Apply immediately |
| `gated_auto` | Likely correct but changes behavior: null checks, error handling additions | Batch into one user confirmation block |
| `manual` | Requires judgment: architecture, behavior changes, security tradeoffs | Present in sign-off |
| `advisory` | Informational only | Note in sign-off |

Apply all `safe_auto` fixes first. Batch all `gated_auto` into one confirmation block. Never ask separately about each one.

## Adversarial Pass (Deep only)

"If I were trying to break this system through this specific diff, what would I exploit?"

Four angles (see `references/persona-catalog.md`): assumption violation, composition failures, cascade construction, abuse cases. Suppress findings below 0.60 confidence.

## GitHub Operations

Use the `gh` CLI for all GitHub interactions, not MCP or the raw API. Confirm CI passes before merging.

## Verification

Run `bash scripts/run-tests.sh` from this skill directory, or the project's known verification command from the target repository. Paste the full output.

If the script exits non-zero or prints `(no test command detected)`: halt. Do not claim done. Ask the user for the verification command before proceeding. If the user also cannot provide one, document this explicitly in the sign-off as `verification: none -- no command available` and flag it as a structural gap, not a pass.

For bug fixes: a regression test that fails on the old code must exist before the fix is done.

## Gotchas

| What happened | Rule |
|---------------|------|
| Commented on #249 when discussing #255 | Run `gh issue view N` to confirm the title before acting |
| PR comment sounded like a report | 1-2 sentences, natural, like a colleague. Not structured, not AI-sounding. |
| PR comment used bullet points | Write as short paragraphs, one thought per paragraph; thank the contributor first |
| article.en.md inside _posts_en/ doubled the suffix | Check the naming convention of existing files in the target directory first |
| Deployed without env vars set | Run `vercel env ls` before deploying; diff against local keys |
| Push failed from auth mismatch | Run `git remote -v` before the first push in a new project |

## Document Review

For document, PDF, white paper, or prose review, route to `/write` (Document Review Mode). `/check` handles code diffs and release artifacts only.

## Sign-off

```
files changed: N (+X -Y)
scope: on target / drift: [what]
review depth: quick / standard / deep
hard stops: N found, N fixed, N deferred
specialists: [security, architecture] or none
new tests: N
verification: [command] -> pass / fail
```

skills/design/SKILL.md
---
name: design
description: "Produces distinctive, production-grade UI for any component, page, or visual interface. Handles screenshot-driven iteration when the user sends an image with a visual complaint. Not for backend logic or data pipelines."
when_to_use: "设计, 做页面, 做组件, 不好看, 不和谐, 不清晰, 很丑, 很怪, 很傻, 突兀, 不协调, 字体, 字形, 排印, 排版, 样式, 前端, UI, 截图, build page, create component, make it look good, style, design, screenshot with visual complaint, typography, font looks wrong"
metadata:
  version: "3.19.0"
---

# Design: Build It With a Point of View

Prefix your first line with 🥷 inline, not as its own paragraph.

If it could have been generated by a default prompt, it is not good enough.

**Output language rule:** Never use em-dash (—) in any output from this skill. Use commas, colons, or periods instead.

**Chinese gut-feel complaints**: when the user says "很傻", "很怪", "突兀", "不协调", or "不和谐" about a visual, treat it as an aesthetic rejection, not a debugging symptom. Route to Screenshot Iteration Mode, not to `/hunt`.

## Screenshot Iteration Mode

Activate when the user sends a screenshot or image alongside a complaint ("这里很丑", "这个不对", "fix this", "looks wrong"). The existing product is the direction. Skip the five-question direction lock.

**Flow:**

1. Read the screenshot. State the problem in one sentence: what specifically looks wrong (spacing, contrast, alignment, typeface, color, density, hierarchy). Preserve the user's negative label when it is diagnostic; do not translate "丑", "乱", "不清晰", or "怪" into vague "make it modern" language.
2. Wait for the user to confirm the diagnosis before touching code.
3. If the user provides a reference screenshot, older version, or "this one is good" example, compare current vs. reference and name the visual deltas before choosing a fix.
4. If the diagnosis is a known UX problem (split-view sync, infinite scroll, virtualised list, sticky header), spend one round surveying how 2-3 mature products in the same category solve it before writing code. Cite what each does. Skip only if the fix is purely cosmetic (color, spacing, copy).
5. Find the responsible code: grep for the component name or class, read the actual file. Do not rely on memory or assumptions about file location.
6. Apply the minimal fix. One component, one issue.
7. Verify the result in a browser or screenshot tool at desktop width and 375px mobile width. If the host cannot render, say that explicitly and hand off the exact view the user should check.
8. Ask the user to verify in the browser. Do not hand off without this step.

**Calibration rules:**

- The user's screenshot is the strongest design brief in the turn. Keep it visible in the reasoning until the fix is done.
- Do not flatten specific taste feedback into generic UI adjectives. "More premium" is not a diagnosis; "caption baseline drifts above the Chinese line" is.
- If the screenshot exposes a regression, broken render, timing issue, or generated asset defect rather than taste, route to `/hunt` and preserve the visual evidence.

**Boundary**: if the fix requires changing 3 or more components, or if it reveals a direction problem rather than a specific bug, pause and run the full direction lock before continuing.

**Redesign priority order** (when reworking an existing UI rather than building from scratch): font replacement → color cleanup → hover/active states → layout and whitespace → replace generic components → add loading/empty/error states → typographic polish. This order maximizes visual lift while minimizing the blast radius of each pass.
Full rules in `references/design-reference.md`. Common traps and absolute CSS bans: `references/design-traps.md`.

## Lock the Direction First

**Before starting any component, page, or visual work**: list 2-3 mature products in the same category (e.g. Notion, Linear, Typora, iA Writer, Raycast), and write one sentence each on how they solve the specific problem at hand. Then write code. Skip only if the task is purely cosmetic (color, spacing, copy).

Before writing any code, ask the user directly, using the environment's native question or approval mechanism if it has one:

1. **Who uses this, and in what context?** An analyst dashboard differs from a landing page or an onboarding flow. See the "App shell exception" below if the answer is a sidebar + main workspace layout.
2. **What is the aesthetic direction?** Name it precisely: dense editorial, raw terminal, ink-on-paper, brutalist grid, warm analog. "Clean and modern" is not a direction. If the user names a reference site or product ("feels like Linear / Claude.ai / Vercel"), do not accept it as a direction -- extract 3 concrete properties from it: button radius philosophy, surface depth treatment (shadow vs background step vs border), and accent color family. Name those instead. **Shortcut for well-known brands**: see "Brand preset flow" in `references/design-reference.md`. Ask first, run the preset, then decompose against the generated file.
3. **What is the one thing this leaves in memory?** A typeface, color system, unexpected motion, asymmetric layout. Pick one and make it obvious.
4. **What are the hard constraints?** Framework, bundle size, contrast minimums, keyboard accessibility.
5. **What is the signature micro-interaction?** Scale on press, staggered reveal, or contextual icon animation. Pick one and know exactly how it's implemented.

Do not proceed until all five are answered.

### Source repo as reference

When the user provides a repository URL or pastes source code of an existing product to recreate or extend: the file tree is a menu, not the meal. Do not reconstruct the UI from memory or training data. Instead, read the actual source:

- Theme and token files: `theme.ts`, `colors.ts`, `tokens.css`, `_variables.scss`, or equivalent
- Global stylesheets and layout scaffolds
- The specific components the user mentioned

Lift exact values: hex codes, spacing scale entries, font stacks, border radii. A rough approximation is not pixel fidelity.

Only attach the target component folder or package. Exclude `.git`, `node_modules`, `dist`, and lock files. Dragging in an entire monorepo pollutes the context with irrelevant code and degrades output quality.

### App shell exception (sidebar + main workspace)

If question 1 is an app shell (Slack, Linear, Notion class), load the "App shell rules" section in `references/design-reference.md` and apply those constraints before proceeding.

### Data dashboard exception

If the surface is a dashboard, analytics view, or chart-heavy interface, also load `references/design-data-viz.md` for chart selection, number alignment, and product-benchmark rules. Skip when building marketing pages, landing pages, or generic components.

State the chosen direction in one sentence, then load `references/design-reference.md` and check the tech stack conflicts table. Name the single CSS strategy before writing the first component. For token decisions (color, font, motion): load `references/design-tokens.md`. For aesthetic quality review and production structure: load `references/design-aesthetic-quality.md`.
Summarize the direction as three lines before writing any code:

- **Visual thesis**: mood, material, and energy in one sentence (e.g. "warm brutalist editorial with high-contrast ink type and rough paper texture")
- **Content plan**: hero -> support -> detail -> final CTA, one line each. For **app/dashboard surfaces**: skip the marketing structure, default to utility mode (orient, show status, enable action), no hero unless explicitly requested.
- **Interaction thesis**: 2-3 specific motion ideas that change how the page feels (e.g. "hero text slides in on load, section headers pin while content scrolls beneath, CTA pulses on hover")

For production or multi-page UIs, expand the thesis into the 9-section DESIGN.md scaffold in `references/design-reference.md` (theme, palette, typography, components, layout, depth, do/don't, responsive, prompt guide). For a single component, the three lines are sufficient.

## Non-Negotiable Constraints

`references/design-reference.md` is already loaded during direction lock. It owns the full rules: typography, OKLCH color, motion timings, layout defaults, CSS-pattern bans, accessibility baseline, and complexity matching. Apply them. Do not restate them here.

## When Asked For Options

Give at least 3 variations across genuinely different dimensions (density, typography, color, layout, motion). See the "Options guide" in `references/design-reference.md` for the full variation framework. Three options differing only by accent color are not three variations.

## Gotchas

| What happened | Rule |
|---------------|------|
| Used Inter as the display font | It communicates nothing. Pick something with a personality. |
| Three cards, identical shadows, identical padding -- a template | If swapping content doesn't require layout changes, redo it. |
| Claimed it looked right without opening a browser | Code correct in your head can look broken in the browser. Open it. |
| Chose glassmorphism, ignored the mobile constraint | `backdrop-filter` is expensive on low-power devices. Name the tradeoff. |
| Light-mode app: white panel on white background, visually indistinguishable | Adjacent nested surfaces must differ visually: either a background step (sidebar vs main ≥4% lightness difference) or a shadow of at least `0 1px 3px rgba(0,0,0,0.10)`. |

## Aesthetic Review

After significant build phases and at handoff, re-read the visual thesis from direction lock. If what is on screen drifted toward a generic default, identify the specific element that broke first (typeface, color, card treatment, spacing) and fix it before continuing.

Run these checks before the handoff summary:

- Is the brand or product unmistakable in the first screen?
- Is there one strong visual anchor (real imagery, not a decorative gradient)?
- Can the page be understood by scanning headlines only?
- Does each section have one job?
- Are cards actually necessary, or just default styling?
- Does motion improve hierarchy or atmosphere, or is it ornamental?
- Would the design still feel premium if all decorative shadows were removed?
- AI Slop Test: scan the first screen for default patterns (reflex font, purple-to-blue gradient, centered hero with two CTAs side by side, three identical cards, generic top nav). If any appear unintentionally, fix typography, color, or layout until none remain.

If any check fails, fix first. Ask the user to verify at full width and at 375px; if the layout breaks at mobile width, fix before handing off.
End with:

- Aesthetic direction, named and justified in 2-3 sentences
- Non-obvious choices explained: typeface, color decisions, layout logic
- Instructions for replacing placeholder content with real content

After handoff, stop.

skills/health/SKILL.md
---
name: health
description: "Runs a budget-aware audit of the Claude Code config stack when Claude ignores instructions, behaves inconsistently, hooks malfunction, MCP servers need auditing, or users ask why /health used many tokens. Flags issues by severity. Not for debugging code or reviewing PRs."
when_to_use: "检查claude, 健康度, 配置检查, 配置对不对, Claude ignoring instructions, check config, settings not working, audit config"
metadata:
  version: "3.17.0"
---

# Health: Audit the Six-Layer Stack

Prefix your first line with 🥷 inline, not as its own paragraph.

Audit the current project's Claude Code setup against the six-layer framework:

`CLAUDE.md → rules → skills → hooks → subagents → verifiers`

Find violations. Identify the misaligned layer. Calibrate to project complexity only.

**Output language:** Check in order: (1) the CLAUDE.md `## Communication` rule (global over local); (2) the user's recent language; (3) English.

**Budget posture:** Start with the summary audit. Escalate automatically when the user asks for a deep, full, complete, thorough, "深入", "完整", "彻底", or "继续跑完" audit, when current project instructions or a remembered user preference says to run deep health checks by default, when the project is Complex, or when the summary pass exposes a critical ambiguity that cannot be resolved locally. Otherwise do not read full conversation extracts or launch inspector subagents. Tell the user before escalating, because deep health audits can consume significant token quota.

## Step 0: Assess project tier

Pick one. Apply only that tier's requirements.

| Tier | Signal | What's expected |
|------|--------|-----------------|
| **Simple** | <500 files, 1 contributor, no CI | CLAUDE.md only; 0-1 skills; hooks optional |
| **Standard** | 500-5K files, small team or CI | CLAUDE.md + 1-2 rules; 2-4 skills; basic hooks |
| **Complex** | >5K files, multi-contributor, active CI | Full six-layer setup required |

## Step 1: Collect data

Run the collection script in summary mode first. Do not interpret yet.

```bash
# Resolve collect-data.sh from canonical locations (no personal home-dir paths).
HEALTH_SCRIPT="${CLAUDE_SKILL_DIR:+$CLAUDE_SKILL_DIR/scripts/collect-data.sh}"
if [ ! -f "${HEALTH_SCRIPT:-}" ]; then
  for candidate in \
    "./skills/health/scripts/collect-data.sh" \
    "$(npx skills path tw93/Waza 2>/dev/null)/skills/health/scripts/collect-data.sh"; do
    [ -f "$candidate" ] && HEALTH_SCRIPT="$candidate" && break
  done
fi
if [ ! -f "${HEALTH_SCRIPT:-}" ]; then
  echo "health collect-data.sh not found; set CLAUDE_SKILL_DIR or reinstall: npx skills add tw93/Waza -a claude-code -g -y"
  exit 1
fi
bash "$HEALTH_SCRIPT"
```

Sections may show `(unavailable)` when tools are missing:

- `jq` missing → conversation sections unavailable
- `python3` missing → MCP/hooks/allowedTools sections unavailable
- `settings.local.json` absent → hooks/MCP may be unavailable (normal for global-only setups)

Treat `(unavailable)` as insufficient data, not a finding. Do not flag those areas.

## Step 1b: MCP Live Check

Test every MCP server: call one harmless tool per server. Record `live=yes/no` with error detail. Respect `enabled: false` (skip without flagging). For API keys, only check whether the env var is set (`echo $VAR | head -c 5`); never print full keys.
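A minimal sketch of the key-presence probe; the variable names are hypothetical stand-ins for whatever the configured MCP servers actually require.

```bash
# Sketch of the API-key check: report presence and a 5-char prefix only,
# never the full key. The variable names are illustrative placeholders.
for var in OPENAI_API_KEY GITHUB_TOKEN; do
  val=$(printenv "$var" || true)
  if [ -n "$val" ]; then
    echo "$var: set ($(printf %s "$val" | head -c 5)...)"
  else
    echo "$var: not set"
  fi
done
```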
## Step 2: Analyze

Confirm the tier. Then route:

- **Simple:** Analyze locally. No subagents.
- **Standard:** Analyze locally from the summary output. Do not launch subagents by default. If the user asks for a deep/full/thorough audit, or if local analysis cannot classify a security/control issue, escalate to deep mode and explain the likely token cost.
- **Complex, remembered deep preference, or explicit deep audit:** Re-run collection with `bash "$HEALTH_SCRIPT" auto deep`, then launch two subagents in parallel. Redact credentials to `[REDACTED]`.
  - **Agent 1** (Context + Security): Read `agents/inspector-context.md`. Feed it the `CONVERSATION SIGNALS` section.
  - **Agent 2** (Control + Behavior): Read `agents/inspector-control.md`. Feed it the detected tier.
- **Fallback:** If a subagent fails, analyze that layer locally and note "(analyzed locally)".

## Step 3: Report

**Health Report: {project} ({tier} tier, {file_count} files)**

### [PASS] Passing checks (table, max 5 rows)

### Finding format

```
- [severity] <symptom> ({file}:{line} if known)
  Why: <one-line reason>
  Action: <exact command or edit to fix>
```

`Action:` must be copy-pasteable. Never write "investigate X" or "consider Y". If the fix is unknown, name the diagnostic command.

### [!] Critical -- fix now

Rules violated, dangerous allowedTools, MCP overhead >12.5%, security findings, leaked credentials. Example:

- [!] `settings.local.json` committed to git (exposes MCP tokens)
  Why: a leaked token enables remote code execution via installed MCP servers
  Action: `git rm --cached .claude/settings.local.json && echo '.claude/settings.local.json' >> .gitignore`

### [~] Structural -- fix soon

CLAUDE.md content in the wrong layer, missing hooks, oversized descriptions, verifier gaps.

### [-] Incremental -- nice to have

Outdated items, global vs local placement, context hygiene, stale allowedTools entries.

---

If no issues: `All relevant checks passed. Nothing to fix.`

## Non-goals

- Never auto-apply fixes without confirmation.
- Never apply complex-tier checks to simple projects.

## Gotchas

| What happened | Rule |
|---------------|------|
| Missed the local override | Always read `settings.local.json` too; it shadows the committed file |
| Subagent timeout reported as MCP failure | MCP failures come from the live probe, not data collection |
| Reported issues in the wrong language | Honor the CLAUDE.md Communication rule first |
| Flagged an intentionally noisy hook as broken | Ask before calling a hook "broken" |
| Hook seemed not to fire, but it did -- a later UI element rendered above it | Hook firing order is not visual order. Before re-editing the hook config: (a) confirm with `--debug` or by piping output, (b) check whether a diff dialog, permission prompt, or other UI element rendered on top and pushed the hook output offscreen, (c) only then suspect the hook itself. |
| `/health` burned too much quota on first run | Stay in summary mode first. Full conversation extracts and inspector subagents are deep-audit tools, not the default path for Standard projects. |

skills/hunt/SKILL.md
---
name: hunt
description: "Finds root cause of errors, crashes, regressions, screenshot-reported defects, unexpected behavior, and failing tests before applying any fix. Not for code review or new features."
when_to_use: "排查, 查查, 报错, 崩溃, 不工作, 不对, 跑不通, 以前是好的, 回归, 截图回归, 判断错误原因, 判断为什么报错, 反复修不好, debug, regression, used to work, broke after update, why broken, not working, what's wrong, fix error, stack trace"
metadata:
  version: "3.19.0"
---

# Hunt: Diagnose Before You Fix

Prefix your first line with 🥷 inline, not as its own paragraph.

A patch applied to a symptom creates a new bug somewhere else. **Do not touch code until you can state the root cause in one sentence:**

> "I believe the root cause is [X] because [evidence]."

Name a specific file, function, line, or condition. "A state management issue" is not testable. "Stale cache in `useUser` at `src/hooks/user.ts:42` because the dependency array is missing `userId`" is testable. If you cannot be that specific, you do not have a hypothesis yet.

## Diagnosis Signals

Good progress: a log line matches the hypothesis, you can predict the next error before running it, you understand the propagation path from root cause to symptom, you can write a test that fails on the old code. At each of these signals, find one more independent piece of evidence before committing.

Hypothesis quality gate: before acting on a hypothesis, list all observable symptoms (not just the one the user reported first). The hypothesis must explain every symptom; if it only covers some, it is a symptom-level guess, not a root cause. For timing-dependent issues (flicker, intermittent failure, race condition), reproduce reliably before diagnosing.

Rationalization warning: "I'll just try this" means no hypothesis, write it first. "I'm confident" means run an instrument that proves it. "Probably the same issue" means re-read the execution path from scratch. "It works on my machine" means enumerate every env difference before dismissing. "One more restart" means read the last error verbatim; never restart more than twice without new evidence.

## Hard Rules

- **Same symptom after a fix is a hard stop; so is "let me just try this."** Both mean the hypothesis is unfinished. Re-read the execution path from scratch before touching code again.
- **After three failed hypotheses, stop.** Use the Handoff format below to surface what was checked, what was ruled out, and what is unknown. Ask how to proceed.
- **Verify before claiming.** Never state versions, function names, or file locations from memory. Run `sw_vers` / `node --version` / grep first. No results = re-examine the path.
- **External tool failure: diagnose before switching.** When an MCP tool or API fails, determine why first (server running? API key valid? Config correct?) before trying an alternative.
- **Pay attention to deflection.** When someone says "that part doesn't matter," treat it as a signal. The area someone avoids examining is often where the problem lives.
- **Visual/rendering bugs: static analysis first.** Trace paint layers, stacking contexts, and layer order in DevTools before adding console.log or visual debug overlays. Logs cannot capture what the compositor does. Only add instrumentation after static analysis fails.
- **Fix the cause, not the symptom.** If the fix touches more than 5 files, pause and confirm scope with the user.

## Bisect Mode

Activate when: "以前是好的", "之前是好的", "used to work", "上一次提交还是对的", "broke after update", or the user remembers a specific good commit or version.

1. Find a candidate good tag: `git tag --sort=-version:refname | head -10`, or ask the user for the last known-good commit.
2. Define a non-interactive pass/fail test command before starting bisect. Bisect is worthless without a reproducible check.
3. Run: `git bisect start && git bisect bad HEAD && git bisect good <tag-or-hash>`
4. At each step bisect checks out a commit. Run the test command. Mark it: `git bisect good` or `git bisect bad`.
5. Let bisect drive. Do not jump ahead or skip commits unless explicitly asked.
6. When bisect names the culprit commit, read only that diff. Identify the specific line that introduced the regression.
7. Run `git bisect reset` when done.

Read large files once and reference them from notes rather than re-reading at each bisect step.
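A minimal end-to-end sketch of the flow above, assuming `npm test` is the reproducible check and `v1.2.3` the last known-good tag; `git bisect run` is one way to automate the good/bad marking when the check is fully scripted.

```bash
# Sketch of Bisect Mode. v1.2.3 and `npm test` are placeholders for the
# project's real last-good tag and non-interactive pass/fail command.
git bisect start
git bisect bad HEAD
git bisect good v1.2.3
git bisect run npm test   # bisect marks each commit from the exit code
git bisect reset          # always return to the original HEAD when done
```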
## Repeated Regression / Screenshot Reference Mode

Activate when the user says the same issue is still wrong after a fix, provides a "good" screenshot/version/file, or describes a visual result as previously correct. Treat the reference as evidence, not decoration:

1. List every reported and visible symptom, preserving the user's concrete words where useful ("still slow", "not clear", "尖刺", "先显示上一个内容").
2. Identify the reference oracle: last-good commit/tag, old build, fixture, screenshot, downloaded artifact, or expected state from the user's description.
3. Define the pass/fail check before editing. For visual bugs, this may be a narrow screenshot checklist plus the command that renders the view; for behavioral bugs, prefer an automated regression test or deterministic repro.
4. Compare current vs. reference and name the exact delta. Do not generalize a visual defect into "style polish" when the evidence points to a broken render, race, font pipeline, or state path.
5. If the same symptom remains after one attempted fix, stop and rebuild the hypothesis from the evidence. Do not stack more patches onto a disproven explanation.

If the issue is purely subjective UI taste, route to `/design`. If it is rendering, state, timing, build output, font generation, or a regression from a known-good version, stay in `/hunt`.

## Confirm or Discard

Add one targeted instrument: a log line, a failing assertion, or the smallest test that would fail if the hypothesis is correct. Run it.

If the evidence contradicts the hypothesis, discard it completely and re-orient with what was just learned. Do not preserve a hypothesis the evidence disproves.

## Targeted Logging

Use logs as a scalpel, not as noise. Before adding a log, write the question it answers:

> "If this log prints X before Y, hypothesis A is still possible; if it does not, hypothesis A is wrong."

Load `references/logging-techniques.md` for the full logging playbook: binary-search instrumentation, discriminating log content, boundary-first placement, timing bug logging, and removal discipline.

Quick rules:

1. Place the first log at the midpoint of the execution path, not at the symptom. Binary search from there.
2. Log discriminating facts only: sequence number, input key, branch taken, old/new state, error code.
3. Remove temporary logs before finishing. Gate persistent diagnostics behind the project's debug flag.

If adding logs changes the behavior, treat that as evidence of a timing, lifecycle, or concurrency problem.
## Gotchas

| What happened | Rule |
|---------------|------|
| Patched client pane instead of local pane | Trace the execution path backward before touching any file |
| MCP not loading, switched tools instead of diagnosing | Check server status, API key, and config before switching methods |
| Orchestrator said RUNNING but the TTS vendor was misconfigured | In multi-stage pipelines, test each stage in isolation |
| Race condition diagnosed as a stale-state bug | For timing-sensitive issues, inspect event timestamps and ordering before state |
| Added logs everywhere and still could not explain the bug | Rewrite each log as a yes/no question. Delete logs that do not rule a hypothesis in or out |
| Reproduced locally but failed in CI | Align the environment first (runtime version, env vars, timezone), then chase the code |
| Stack trace points deep into a library | Walk back 3 frames into your own code; the bug is almost always there, not in the dependency |
| Worked when launched from the app, broke when opened via file association / drag-drop / deep link / external proxy | Reproduce using the exact entry point the user described. App-internal init differs from cold-launch-with-file init; state may not be ready when the document arrives. |

## Outcome

### Success Format

```
Root cause: [what was wrong, file:line]
Fix: [what changed, file:line]
Confirmed: [evidence or test that proves the fix]
Tests: [pass/fail count, regression test location]
Regression guard: [test file:line] or [none, reason]
```

Status: **resolved**, **resolved with caveats** (state them), or **blocked** (state what is unknown).

**Regression guard rule**: for any bug that recurred or was previously "fixed", the fix is not done until:

1. A regression test exists that fails on the unfixed code and passes on the fixed code.
2. The test lives in the project's test suite, not a temporary file.
3. The commit message states why the bug recurred and why this fix prevents it.

### Handoff Format (after 3 failed hypotheses)

```
Symptom: [Original error description, one sentence]

Hypotheses Tested:
1. [Hypothesis 1] → [Test method] → [Result: ruled out because...]
2. [Hypothesis 2] → [Test method] → [Result: ruled out because...]
3. [Hypothesis 3] → [Test method] → [Result: ruled out because...]

Evidence Collected:
- [Log snippets / stack traces / file content]
- [Reproduction steps]
- [Environment info: versions, config, runtime]

Ruled Out:
- [Root causes that have been eliminated]

Unknowns:
- [What is still unclear]
- [What information is missing]

Suggested Next Steps:
1. [Next investigation direction]
2. [External tools or permissions that may be needed]
3. [Additional context the user should provide]
```

Status: **blocked**

## Rendering Bug Mode

Activate when: "PDF looks wrong", "page break issue", "font not rendering", broken PDF output, or wrong print layout. Load `references/rendering-debug.md` for the full diagnosis checklist (WeasyPrint quirks, font loading, page overflow, browser print CSS). Static analysis first, then reproduce if needed.

## IME / Unicode Issues

For input method, character rendering, or text encoding bugs (IME state, cursor drift, emoji splitting, composition events), check `references/ime-unicode.md` first before forming a hypothesis.

skills/learn/SKILL.md
---
name: learn
description: "Runs a six-phase research workflow to turn unfamiliar domains or collected sources into publish-ready output. Not for quick lookups or single-file reads."
when_to_use: "学习一下, 深入研究, 研究一下, 整理成文章, 把这批材料整理, 一站式参考, 一篇就够, 整理成长文, research, deep dive, help me understand, compile sources, unfamiliar domain"
metadata:
  version: "3.15.0"
---

# Learn: From Raw Materials to Published Output

Prefix your first line with 🥷 inline, not as its own paragraph.

Collect, organize, translate, explain, structure. Support the user's thinking; do not replace it.

**Boundary**: a single URL that only needs fetching belongs in `/read`. A single URL that needs summary or analysis can use `/read` as the fetch step, but the final answer should satisfy the user's requested summary or analysis. `/learn` is for multi-source research that produces a new structured output.

## Pre-check

Check whether the `/read` and `/write` skills are installed (look for their SKILL.md in the skills directories, as sketched below). Warn if missing, do not block:

- `/read` missing -- Phase 1 fetch falls back to native `WebFetch` / `curl`; coverage on paywalled, JS-heavy, and Chinese-platform pages degrades.
- `/write` missing -- Phase 5 AI-pattern stripping falls back to manual scan. Phases 1-4 are unaffected.
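A minimal sketch of the pre-check, assuming skills live under a local `skills/` directory; that path is a stand-in for wherever the environment actually installs them.

```bash
# Sketch of the pre-check: warn (don't block) when a companion skill is
# missing. skills/ is a hypothetical install location; adjust per environment.
for s in read write; do
  if [ ! -f "skills/$s/SKILL.md" ]; then
    echo "warning: /$s not installed; /learn coverage degrades"
  fi
done
```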
   If `/read` is missing (Pre-check warned), fall back to native fetch and accept reduced coverage.
3. **File** -- `/read` saves to `~/Downloads/{title}.md` when called from `/learn`. Move each file into a sub-topic directory under the research project after the fetch returns. Move, don't refetch.

Target: 5-10 sources for a blog post, 15-20 for a deep technical survey.

## Phase 2: Digest

Work through the materials. For each piece: read it fully, keep what is good, cut what is not. At the end of this phase, cut roughly half of what was collected.

For key claims, ask before including in the outline:

- Does this idea appear in at least two different contexts from the same source?
- Can this framework predict what the source would say about a new problem?
- Is this specific to this source, or would any expert in the field say the same thing?

Generic wisdom is not worth distilling. Passes two or three: belongs in the outline. Passes one: background material. Passes zero: cut it.

When two sources contradict on a factual claim, note both positions and the evidence each gives. Do not silently pick one.

## Phase 3: Outline

Write the outline for the article. For each section: note the source materials it draws from. If a section has no sources, either it does not belong or a source needs to be found first.

Do not start Phase 4 until the outline is solid.

## Phase 4: Fill In

Work through the outline section by section. If a section is hard to write, the mental model is still weak there: return to Phase 2 for that sub-topic. The outline may change, and that is fine.

Stall signals (any one means the mental model is incomplete for this section):

- You have rewritten the opening sentence three or more times without settling
- The section relies on a single source and you cannot cross-check the claim
- You need a new source that was not collected in Phase 1
- The paragraph makes a claim you could not explain to someone out loud

When stalled: return to Phase 2 for that sub-topic, not for the whole article.

## Phase 5: Refine

Pass the draft with a specific brief:

- Remove redundant and verbose passages without changing meaning or voice
- Flag places where the argument does not flow
- Identify gaps: concepts used before they are explained, claims needing sources

Do not summarize sections the user has not written. Do not draft new sections from scratch. Edits only.

Then strip AI patterns from the draft. If `/write` is installed, invoke it. If not, do it manually: scan for filler phrases, binary contrasts, dramatic fragmentation, and overused adverbs. Cut them without changing meaning.

## Phase 6: Self-review and Publish Readiness

The user reads the entire article linearly before publishing. Not with AI. Mark everything that feels off, fix it, read again. Two passes minimum. When it reads clean from start to finish, the draft is ready for the user to publish.

**After the user confirms the article is ready to publish, stop.** Do not upload, post, distribute, or perform any publish action unless explicitly asked.

## Gotchas

| What happened | Rule |
|---------------|------|
| Collected 30 secondary explainers instead of primary sources | Phase 1 targets papers, official blogs, and repos by builders. Summaries are not sources. |
| Used `WebFetch` or `curl` on URLs while `/read` was installed | Phase 1 fetch is not optional. `/read` owns the proxy cascade, paywall detection, and platform routing. Bypassing it silently loses coverage on paywalled, JS-heavy, or Chinese-platform pages. |
| Treated a convincing explainer as ground truth | Ask: does this appear in at least two different contexts from the same source? |
| Phase 2 wrote summaries instead of teaching the concept | Digest means building the mental model. Summarizing is not digesting. |
| AI offered to upload the article to a blog or social platform after the user said it was ready | Stop at confirmation. Publishing is the user's action, not yours. |
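To make the Phase 1 File step of /learn above concrete, a minimal shell sketch. The project directory, sub-topic name, and source title are all invented for illustration; only the `~/Downloads/{title}.md` drop location comes from the skill itself.

```bash
# /read saved the fetched source to ~/Downloads when called from /learn;
# file it under its sub-topic. Move, don't refetch.
mkdir -p research/kv-cache                          # hypothetical project/sub-topic dir
mv ~/Downloads/"some-primary-source.md" research/kv-cache/
```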
{ "$schema": "https://anthropic.com/claude-code/marketplace.schema.json", "name": "waza", "description": "Personal skill collection for Claude Code: think, check, hunt, design, read, write, learn, health.", "owner": { "name": "Tw93", "email": "hitw93@gmail.com" }, "plugins": [ { "name": "waza", "description": "Installs the full Waza toolkit. Registers all eight skills under the waza namespace, callable as /waza:think, /waza:check, /waza:hunt, /waza:design, /waza:read, /waza:write, /waza:learn, and /waza:health. Not for users who only want one skill — install a waza-<skill>@waza entry instead.", "version": "3.22.0", "category": "development", "source": "./", "homepage": "https://github.com/tw93/Waza" }, { "name": "waza-health", "description": "Runs a budget-aware audit of the Claude Code config stack when Claude ignores instructions, behaves inconsistently, hooks malfunction, MCP servers need auditing, or users ask why /health used many tokens. Flags issues by severity. Not for debugging code or reviewing PRs.", "version": "3.17.0", "category": "development", "source": "./skills/health", "homepage": "https://github.com/tw93/Waza", "skills": ["./"], "strict": false }, { "name": "waza-think", "description": "Turns rough ideas into approved, decision-complete plans with validated structure before writing code. Covers new features, architecture decisions, and value judgments about whether to build, keep, or remove something. Not for bug fixes or small edits.", "version": "3.17.0", "category": "development", "source": "./skills/think", "homepage": "https://github.com/tw93/Waza", "skills": ["./"], "strict": false }, { "name": "waza-check", "description": "Reviews code diffs and release-ready changes after implementation, executes approved implementation plans, extracts project-specific constraints from repository context, auto-fixes safe issues, and drives approved release, publish, push, release-reaction, and issue/PR follow-through. Also triages issues and PRs when the user mentions them. Not for exploring ideas, debugging, or document prose review.", "version": "3.22.0", "category": "development", "source": "./skills/check", "homepage": "https://github.com/tw93/Waza", "skills": ["./"], "strict": false }, { "name": "waza-hunt", "description": "Finds root cause of errors, crashes, regressions, screenshot-reported defects, unexpected behavior, and failing tests before applying any fix. Not for code review or new features.", "version": "3.19.0", "category": "development", "source": "./skills/hunt", "homepage": "https://github.com/tw93/Waza", "skills": ["./"], "strict": false }, { "name": "waza-design", "description": "Produces distinctive, production-grade UI for any component, page, or visual interface. Handles screenshot-driven iteration when the user sends an image with a visual complaint. Not for backend logic or data pipelines.", "version": "3.19.0", "category": "development", "source": "./skills/design", "homepage": "https://github.com/tw93/Waza", "skills": ["./"], "strict": false }, { "name": "waza-read", "description": "Fetches any URL or PDF as clean Markdown for reading, quoting, citation, or downstream work. Handles paywalls, JS-heavy pages, X/Twitter, and Chinese platforms via proxy cascade. 
Not for local text files already in the repo.", "version": "3.14.0", "category": "development", "source": "./skills/read", "homepage": "https://github.com/tw93/Waza", "skills": ["./"], "strict": false }, { "name": "waza-write", "description": "Strips AI writing patterns and rewrites prose to sound natural in Chinese or English. Only activates on explicit writing or editing requests. Not for code comments, commit messages, or inline docs.", "version": "3.19.0", "category": "development", "source": "./skills/write", "homepage": "https://github.com/tw93/Waza", "skills": ["./"], "strict": false }, { "name": "waza-learn", "description": "Runs a six-phase research workflow to turn unfamiliar domains or collected sources into publish-ready output. Not for quick lookups or single-file reads.", "version": "3.15.0", "category": "development", "source": "./skills/learn", "homepage": "https://github.com/tw93/Waza", "skills": ["./"], "strict": false } ] }
README
Why
Waza (技, わざ) is a Japanese martial arts term for technique: a move practiced until it becomes instinct.
A good engineer does not just write code. They think through requirements, review their own work, debug systematically, design interfaces that feel intentional, and read primary sources. They write clearly, and learn new domains by producing output, not consuming content.
AI is more capable than most engineers at raw output. But without structure, that capability drifts into generic, imprecise work. Waza channels it into precision: eight skills that set clear goals and constraints, then let the model do what it does best.
Part of a trilogy: Kaku (書く) writes code, Waza (技) drills habits, Kami (紙) ships documents. Think of them as a family: Kaku is the dad, Waza the big sister, Kami the little sister.
Skills
Each engineering habit gets an installed skill. In Claude Code, type the slash command. In Codex, invoke the installed skill by name and follow the same playbook.
| Skill | When | What it does |
|---|---|---|
| /think | Before building anything new | Challenges the problem, pressure-tests the design, and produces a decision-complete plan another agent can implement. |
| /design | Building frontend interfaces | Produces distinctive UI, including screenshot-driven aesthetic iteration, with a committed direction rather than generic defaults. |
| /check | After a task, before merging or release | Reviews the diff, extracts project-specific constraints, handles approved release/publish/push/reaction follow-through, and verifies with evidence. |
| /hunt | Any bug, regression, or unexpected behavior | Systematic debugging. Root cause confirmed before any fix is applied, especially when something used to work. |
| /write | Writing or editing prose | Rewrites prose to sound natural in Chinese and English. Cuts stiff, formulaic phrasing. |
| /learn | Diving into an unfamiliar domain | Six-phase research workflow: collect, digest, outline, fill in, refine, then self-review and publish. |
| /read | Any URL or PDF | Fetches content as clean Markdown with platform-specific routing. Special handling for GitHub, PDFs, WeChat, and Feishu. |
| /health | Auditing Claude Code setup | Checks CLAUDE.md, rules, skills, hooks, MCP, and behavior with a budget-aware summary pass before deep inspection. |
Each skill is a folder with reference docs, helper scripts, and gotchas from real failures.
Install and Update
Most users should install Waza globally, so the same skills are available in every project.
Claude Code direct slash commands
npx skills add tw93/Waza -a claude-code -g -y
This installs the individual /think, /design, /check, /hunt, /write, /learn, /read, and /health skills. Install just one with --skill:
npx skills add tw93/Waza --skill think -a claude-code -g -y
Claude Code plugin marketplace
Install all skills via the waza bundle, or just one via a waza-<skill> entry:
/plugin marketplace add tw93/Waza
/plugin install waza@waza
/plugin install waza-think@waza
Codex
npx skills add tw93/Waza -a codex -g -y
Install just one with --skill:
npx skills add tw93/Waza --skill think -a codex -g -y
Codex inline link invocation
After installing with npx skills add tw93/Waza -a codex -g -y, Codex sessions can reference skills via inline markdown links. The canonical install path is ~/.claude/skills/waza/skills/<name>/SKILL.md:
[$check](~/.claude/skills/waza/skills/check/SKILL.md) — review before ship
[$hunt](~/.claude/skills/waza/skills/hunt/SKILL.md) — diagnose the regression
[$think](~/.claude/skills/waza/skills/think/SKILL.md) — plan before build
Claude Desktop
Download waza.zip, open Customize > Skills > "+" > Create skill, and upload the ZIP.
Update
npx skills update -g -y
Marketplace installs use claude plugin update <skill>. Claude Desktop users can replace the old skill with the latest waza.zip.
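For example, with the plugin names registered in the marketplace manifest above:
# Update the full bundle
claude plugin update waza
# Update a single skill
claude plugin update waza-think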
Compatibility
/health is Claude Code only. It defaults to a summary audit to avoid burning quota on first run; ask for a deep or full health audit when you want full conversation extracts and inspector subagents. The other skills are written to use the host environment's native question, search, fetch, and agent mechanisms. /check runs parallel specialist reviewers when the host supports them; otherwise it performs the same passes inline.
Project Context
Waza keeps the generic programmer habits inside the public skill. /check becomes project-aware by reading the target repository's public context and the user's task constraints.
- Project commands come from README files, package manifests, Makefiles, CI workflows, and explicit user instructions.
- Project hard stops include generated artifacts, protected files, version synchronization, release assets, and domain-specific safety risks.
- Public docs and examples must not include credentials, certificate paths, private key filenames, tokens, or personal machine details.
See skills/check/references/project-context.md for the review context template.
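A hypothetical excerpt of what that context ends up capturing — the commands and hard stops below are invented for illustration, not taken from the template:
# Project commands (from package manifest and CI workflow)
build: npm run build
test: npm test
# Hard stops
- dist/ is generated; never hand-edit it
- package.json and marketplace.json versions must stay in sync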
Chaining Skills
Skills are designed to be chained together, but transitions are manual. Each skill stops after completing its task and waits for you to decide the next step.
Common workflows:
- Design a feature: /think → approve → say "implement X" → /check → merge
- Ship a fix: /hunt → fix → /check → release/publish/push/issue follow-through
- Research and write: /read (fetch sources) → /learn (synthesize) → /write (polish)
- Debug and verify: /hunt (find root cause) → fix → /check (review changes)
Each arrow represents a manual user action. Skills don't automatically trigger each other.
Extras
Statusline
A minimal statusline for Claude Code: context window, 5-hour quota, and 7-day quota.
Color coding: green below 70%, yellow at 70-85%, red above 85% for context; blue, magenta, red for quota thresholds. No progress bars, no noise.
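For illustration, the context-window thresholds map to colors like this (a minimal sketch of the logic described above, not the shipped script; quota colors follow their own thresholds):
context_color() {
  if [ "$1" -lt 70 ]; then echo green
  elif [ "$1" -le 85 ]; then echo yellow
  else echo red; fi
}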
curl -sL https://raw.githubusercontent.com/tw93/Waza/main/scripts/setup-statusline.sh | bash
English Coaching
Optional rule for English practice. When your prompt contains an English mistake, the agent appends a short 😇 correction; Chinese-only prompts stay untouched.
# Claude Code
curl -sL https://raw.githubusercontent.com/tw93/Waza/main/scripts/setup-english-coaching.sh | bash -s -- claude-code
# Codex
curl -sL https://raw.githubusercontent.com/tw93/Waza/main/scripts/setup-english-coaching.sh | bash -s -- codex
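A made-up exchange to show the shape of the correction (the prompt and the fix are invented):
You: "How do I fixed this bug?"
Agent: ...answers the question as usual, then appends:
😇 "How do I fixed this bug" → "How do I fix this bug"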
Anti-Patterns
Optional always-on guardrails for cross-skill behaviors: stop acting before reading, no hallucinated paths, no scope creep, no unsolicited summaries. Skill-agnostic, applies in every session.
# Claude Code
curl -sL https://raw.githubusercontent.com/tw93/Waza/main/scripts/setup-anti-patterns.sh | bash -s -- claude-code
# Codex
curl -sL https://raw.githubusercontent.com/tw93/Waza/main/scripts/setup-anti-patterns.sh | bash -s -- codex
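The installed rule file encodes those guardrails roughly like this (a hypothetical sketch of its shape, not the shipped file; the path matches the Claude Code install):
# ~/.claude/rules/anti-patterns.md (shape only)
- Read the relevant files before acting on them.
- Never reference a file path you have not verified exists.
- Do only what was asked; no adjacent "improvements".
- No unsolicited summaries after finishing a task.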
Uninstall
# Remove all skills
npx skills remove tw93/Waza -g
# Remove Claude Desktop skill
# Open Customize > Skills, find Waza, click "..." > Delete
# Remove statusline
rm -f ~/.claude/statusline.sh
# Then remove the statusLine key from ~/.claude/settings.json
# Remove English Coaching (Claude Code)
rm -f ~/.claude/rules/english.md
# Remove English Coaching (Codex): remove the Waza English Coaching marked block from ~/.codex/AGENTS.md
# Remove Anti-Patterns (Claude Code)
rm -f ~/.claude/rules/anti-patterns.md
# Remove Anti-Patterns (Codex): remove the Waza Anti-Patterns marked block from ~/.codex/AGENTS.md
Background
Tools like Superpowers and gstack are impressive, but they are heavy. Too many skills, too much configuration, too steep a learning curve for engineers who just want to get things done.
There's also a subtler problem. Every rule the author writes becomes a ceiling. The model can only do what the instructions say and can't go further. Waza goes the other direction. Each skill sets a clear goal and the constraints that matter, then steps back. As models improve, that restraint pays compound interest.
Eight skills for the habits that actually matter. Each does one thing, has a clear trigger, and stays out of the way. Not complete by design, just the right amount done well.
Built from patterns across real projects, refined through actual use. Every gotcha traces to a real failure: a wrong code path that took four rounds to find, a release posted before artifacts were uploaded, a server restarted eight times without reading the error. 30 days, 300+ sessions, 7 projects, 500 hours.
The /health skill is based on the six-layer framework described in this post.
Support
- If Waza helped you, share it with friends or give it a star.
- Got ideas or bugs? Open an issue or PR; contributions are welcome.
- I have two cats, TangYuan and Coke. If you think Waza delights your life, you can feed them canned food 🥩.
License
MIT License. Feel free to use Waza and contribute.