USP
Unlike generic task managers, Taskmaster deeply integrates with AI agents and development environments, enabling automated task execution, intelligent issue deduplication, and streamlined PR comment resolution directly within your workflow.
Use cases
- Automating development task workflows
- Managing project tasks with AI assistance
- Deduplicating GitHub issues
- Resolving PR review comments efficiently
- Conducting AI-powered research for tasks
Detected files (8)
.claude/commands/go/ham.md (command, 5530 bytes)

# Hamster (Go ham!) Run Task Workflow

This command initiates the HAM (Hamster Automated Management) workflow for task execution.

## Usage

```
/go:ham [task-id]
```

- `task-id` (optional): Specific task identifier to work on (e.g., "1", "1.2", "2.3.1")
  - If provided, start working on that specific task immediately
  - If omitted, automatically identify the next available task

## Process

When the user invokes this command, follow these steps:

### 1. Task Selection

#### If task-id is provided ($ARGUMENTS is not empty):

```bash
tm show $ARGUMENTS
```

Start working on the specified task immediately, skipping to step 3.

#### If no task-id is provided ($ARGUMENTS is empty):

```bash
tm list
```

Display all tasks with their current status to provide context.

### 2. Identify Next Task (only if no task-id provided)

Determine which task should be worked on next based on:

- Dependencies
- Priority
- Current status (pending tasks only)

### 3. Show Task Details (only if task wasn't specified in step 1)

```bash
tm show <task-id>
```

Display the full details of the identified task, including:

- Title and description
- Dependencies
- Test strategy
- Subtasks (if any)

### 4. Kickoff Workflow

Based on the task type, follow the appropriate workflow:

#### For Main Tasks (e.g., "1", "2", "3")

- Review the task's subtasks
- If no subtasks exist, suggest expanding the task first
- Identify the first pending subtask
- Begin implementation following the subtask's requirements

#### For Subtasks (e.g., "1.1", "2.3")

- Mark the subtask as in-progress:

```bash
tm set-status --id=<subtask-id> --status=in-progress
```

- Review the task details and requirements
- Check for related code files or dependencies
- Create an implementation plan
- Begin implementation following project conventions

### 5. Implementation Guidelines

Follow these principles during implementation:

1. **Understand First**: Read related files and understand the current architecture
2. **Plan**: Create a mental model or brief plan before coding
3. **Follow Conventions**: Adhere to project structure and coding standards
4. **Test As You Go**: Validate changes incrementally
5. **Stay Focused**: Complete the current subtask before moving to the next

### 6. Task Completion

When the subtask is complete:

```bash
tm set-status --id=<subtask-id> --status=done
```

Then automatically check for the next available task by repeating from step 2.

## Example Flows

### With Specific Task ID

```
User: "/go:ham 1.2"

1. Claude runs: tm show 1.2 → Displays full task details
2. Claude analyzes the task and creates an implementation plan
3. Claude marks task in-progress: tm set-status --id=1.2 --status=in-progress
4. Claude begins implementation following the task requirements
5. Upon completion, Claude runs: tm set-status --id=1.2 --status=done
6. Claude automatically identifies next task with tm list
```

### Without Specific Task ID (Auto-discovery)

```
User: "/go:ham"

1. Claude runs: tm list
2. Claude identifies next available task (e.g., 1.2)
3. Claude runs: tm show 1.2 → Displays full task details
4. Claude analyzes the task and creates an implementation plan
5. Claude marks task in-progress: tm set-status --id=1.2 --status=in-progress
6. Claude begins implementation following the task requirements
7. Upon completion, Claude runs: tm set-status --id=1.2 --status=done
8. Claude automatically identifies next task with tm list
```

## Notes

- Always verify task dependencies are complete before starting
- If a task is blocked, mark it as such and move to the next available task
- Keep the user informed of progress at each major step
- Ask for clarification if task requirements are unclear
- Follow the project's CLAUDE.md and .cursor/rules/* guidelines at all times
- Unlike the usual Taskmaster process, do not bother using update-task or update-subtask, as they do not work with Hamster tasks yet.
- Use only `tm list`, `tm show <sub/task id>` and `tm set-status` - other commands don't yet work with Hamster.
- Do not use the MCP tools when connected to Hamster briefs - they are not yet up to date.
- Use `.cursor/rules/git_workflow.mdc` as a guide for the workflow
- When starting a task, mark it as in-progress. You can set multiple task statuses at once with comma separation (e.g. `tm set-status -i 1,1.1 -s in-progress`)
- Read the task, then if it has subtasks, begin implementing the subtasks one at a time.
- When a subtask is done, run lint and typecheck, mark it as done if both pass, and commit.
- Continue until all subtasks are done, then run a final lint and typecheck (`npm run lint` and `npm run typecheck`) and create a PR for that task using the `gh` CLI.
- Keep committing to the same PR as long as the scope is maintained. An entire task list (brief) might fit into a single PR, but not if it ends up being huge. Landing everything in one PR is preferred when possible; otherwise commit to separate PRs that build on top of the previous ones. Confirm with the human when doing this.
- When the parent task is completed, ensure you mark it as done.
- When the first task is done, repeat this process until all tasks are done.
- If you run into an issue where the JWT seems expired or commands don't work, use `tm auth refresh` to refresh the token; if that does not work, use `tm context <brief url>` to reconnect the context. If you do not have the brief url, ask the user for it (perhaps ask for it at the beginning).

You're a fast hamster. Go go go.

.claude/commands/go/pr-comments.md (command, 4583 bytes)
Fix PR review comments: PR # $ARGUMENTS

This command collects all review comments from a GitHub PR (including CodeRabbit, human reviewers, and other bots), consolidates them by author and severity, shows them to you for approval, then implements the approved fixes.

Steps:

1. **Collect PR comments**
   - Run: `gh pr view $ARGUMENTS --comments` to get ALL comments (no truncation)
   - Parse and extract all review comments from:
     - PR review comments (file-level)
     - General comments
     - Review threads
   - Include author information for each comment
   - IMPORTANT: Do NOT use `head`, `tail`, or any truncation - we need the complete comment history
2. **Consolidate comments**
   - Group comments by:
     - Author (CodeRabbit, human reviewers, other bots)
     - Severity (🚨 Critical, ⚠️ Important, 💡 Suggestion, ℹ️ Info)
     - Category (Security, Performance, Best Practices, Style, etc.)
   - Remove duplicates and group similar issues
   - Present in a clear, numbered list format showing the author of each
3. **Show consolidated issues for approval**
   - Display the organized list with:
     - Issue number for reference
     - Severity indicator
     - File location
     - Description
     - Suggested fix
   - Ask: "Which issues would you like me to fix? (Enter numbers separated by commas, or 'all' for everything)"
   - Wait for user confirmation
4. **Implement approved fixes**
   - For each approved issue:
     - Read the relevant file(s)
     - Implement the suggested fix
     - Log what was changed
5. **Validate changes**
   - Run: `pnpm typecheck` - if it fails: review errors, fix them, retry
   - Run: `pnpm lint` - if it fails: review errors, fix them, retry
   - Continue until both pass
6. **Commit and push**
   - Stage changes: `git add .`
   - Create commit: `git commit -m "fix: address review comments from PR #$ARGUMENTS"`
   - Push: `git push`
   - Confirm completion with a summary of the fixes applied

Notes:

- If no review comments are found, inform the user and exit
- If typecheck/lint fails after fixes, show errors and ask for guidance
- Keep fixes focused on reviewers' specific suggestions
- Preserve existing code style and patterns
- Group related fixes in the commit message if there are many changes
- Treat all reviewers equally - human and bot feedback both matter

You previously got all the PR comments into a temporary JSON file and then ran something like this:

    cat > /tmp/parse_comments.js << 'EOF'
    const fs = require('fs');
    const comments = JSON.parse(fs.readFileSync('/tmp/all-pr-comments.json', 'utf8'));

    const byFile = {};
    const bySeverity = { critical: [], important: [], suggestion: [], info: [] };

    comments.forEach((c, idx) => {
      const file = c.path;
      const author = c.user.login;
      const line = c.line || c.original_line || 'N/A';
      const body = c.body;
      if (!byFile[file]) byFile[file] = [];
      const comment = {
        num: idx + 1,
        author,
        line,
        body: body.substring(0, 200) + (body.length > 200 ? '...' : ''),
        fullBody: body
      };
      byFile[file].push(comment);

      // Categorize by severity
      const lower = body.toLowerCase();
      if (lower.includes('critical') || lower.includes('security') || lower.includes('bug:')) {
        bySeverity.critical.push({ ...comment, file });
      } else if (lower.includes('important') || lower.includes('error') || lower.includes('fail')) {
        bySeverity.important.push({ ...comment, file });
      } else if (lower.includes('suggestion') || lower.includes('consider') || lower.includes('recommend')) {
        bySeverity.suggestion.push({ ...comment, file });
      } else {
        bySeverity.info.push({ ...comment, file });
      }
    });

    console.log('\n=== SUMMARY BY SEVERITY ===\n');
    console.log(`🚨 Critical: ${bySeverity.critical.length}`);
    console.log(`⚠️ Important: ${bySeverity.important.length}`);
    console.log(`💡 Suggestion: ${bySeverity.suggestion.length}`);
    console.log(`ℹ️ Info: ${bySeverity.info.length}`);

    console.log('\n=== SUMMARY BY FILE ===\n');
    Object.entries(byFile)
      .sort((a, b) => b[1].length - a[1].length)
      .forEach(([file, comments]) => {
        console.log(`${file}: ${comments.length} comments`);
      });

    console.log('\n=== CRITICAL ISSUES ===\n');
    bySeverity.critical.forEach(c => {
      console.log(`\n#${c.num} [${c.author}] ${c.file}:${c.line}`);
      console.log(c.body);
    });

    console.log('\n=== IMPORTANT ISSUES ===\n');
    bySeverity.important.slice(0, 10).forEach(c => {
      console.log(`\n#${c.num} [${c.author}] ${c.file}:${c.line}`);
      console.log(c.body);
    });
    EOF
    node /tmp/parse_comments.js

And got a nice report you could act on.

.claude/commands/dedupe.md (command, 1850 bytes)
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh api:*), Bash(gh issue comment:*)
description: Find duplicate GitHub issues
---

Find up to 3 likely duplicate issues for a given GitHub issue. To do this, follow these steps precisely:

1. Use an agent to check whether the GitHub issue (a) is closed, (b) does not need to be deduped (e.g. because it is broad product feedback without a specific solution, or positive feedback), or (c) already has a duplicates comment that you made earlier. If so, do not proceed.
2. Use an agent to view the GitHub issue, and ask the agent to return a summary of the issue
3. Then, launch 5 parallel agents to search GitHub for duplicates of this issue, using diverse keywords and search approaches, based on the summary from #2
4. Next, feed the summary from #2 and the results from #3 into another agent, so that it can filter out false positives that are likely not actual duplicates of the original issue. If there are no duplicates remaining, do not proceed.
5. Finally, comment back on the issue with a list of up to three duplicate issues (or zero, if there are no likely duplicates)

Notes (be sure to tell this to your agents, too):

- Use `gh` to interact with GitHub, rather than web fetch
- Do not use tools other than `gh` (e.g. don't use other MCP servers, file edit, etc.)
- Make a todo list first
- For your comment, follow the format below precisely (assuming for this example that you found 3 suspected duplicates):

---
Found 3 possible duplicate issues:

1. <link to issue>
2. <link to issue>
3. <link to issue>

This issue will be automatically closed as a duplicate in 3 days.

- If your issue is a duplicate, please close it and 👍 the existing issue instead
- To prevent auto-closure, add a comment or 👎 this comment

🤖 Generated with \[Task Master Bot\]
---

.kiro/settings/mcp.json (mcp_server, 573 bytes)
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
        "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
        "GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
        "XAI_API_KEY": "YOUR_XAI_KEY_HERE",
        "OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
        "MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
        "AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
        "OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE"
      }
    }
  }
}

packages/claude-code-plugin/mcp.json (mcp_server, 127 bytes)
{
  "mcpServers": {
    "task-master-ai": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "task-master-ai"]
    }
  }
}

.mcp.json (mcp_server, 176 bytes)
{
  "mcpServers": {
    "task-master-ai": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": { "TASK_MASTER_TOOLS": "all" }
    }
  }
}

.cursor/mcp.json (mcp_server, 646 bytes)
{
  "mcpServers": {
    "task-master-ai": {
      "command": "node",
      "args": ["./dist/mcp-server.js"],
      "env": {
        "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
        "PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
        "OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
        "GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
        "GROQ_API_KEY": "GROQ_API_KEY_HERE",
        "XAI_API_KEY": "XAI_API_KEY_HERE",
        "OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
        "MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
        "AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
        "OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE",
        "GITHUB_API_KEY": "GITHUB_API_KEY_HERE"
      }
    }
  }
}

.claude-plugin/marketplace.json (marketplace, 853 bytes)
{
  "name": "taskmaster",
  "owner": {
    "name": "Hamster",
    "email": "ralph@tryhamster.com"
  },
  "metadata": {
    "description": "Official marketplace for Taskmaster AI - AI-powered task management for ambitious development",
    "version": "1.0.0"
  },
  "plugins": [
    {
      "name": "taskmaster",
      "source": "./packages/claude-code-plugin",
      "description": "AI-powered task management system for ambitious development workflows with intelligent orchestration, complexity analysis, and automated coordination",
      "author": { "name": "Hamster" },
      "homepage": "https://github.com/eyaltoledano/claude-task-master",
      "repository": "https://github.com/eyaltoledano/claude-task-master",
      "keywords": ["task-management", "ai", "workflow", "orchestration", "automation", "mcp"],
      "category": "productivity"
    }
  ]
}
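For reference, the severity triage used by the pr-comments parse script above reduces to a first-match keyword scan. Here it is as a self-contained sketch (the `classify` helper name is mine, not part of Taskmaster):

```javascript
// First matching bucket wins, scanning from most to least severe;
// anything without a severity keyword falls through to "info".
function classify(body) {
  const lower = body.toLowerCase();
  if (['critical', 'security', 'bug:'].some(k => lower.includes(k))) return 'critical';
  if (['important', 'error', 'fail'].some(k => lower.includes(k))) return 'important';
  if (['suggestion', 'consider', 'recommend'].some(k => lower.includes(k))) return 'suggestion';
  return 'info';
}

console.log(classify('Potential security hole in auth middleware')); // critical
console.log(classify('Consider extracting this into a helper'));     // suggestion
console.log(classify('nit: trailing whitespace'));                   // info
```

Because matching is substring-based, a comment like "this should not fail" still lands in "important", which is one reason the command asks a human to approve the final fix list.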
README
Taskmaster: A task management system for AI-driven development, designed to work seamlessly with any AI chat.
Docs
By @eyaltoledano & @RalphEcom
A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI.
Documentation
Quick Links
- Quick Start Guide
- Installation
- API Keys & Providers
- Supported Editors
- MCP Tools Reference
- CLI Commands Reference
- Task Structure
- Task Dependencies
- Tags & Workstreams
- Research Command
- Loop Command
- AI Providers Overview
- Team Collaboration
- Best Practices
- FAQ
- Changelog
More from Hamster
Quick Install for Cursor 1.0+ (One-Click)
Note: After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
Claude Code Quick Install
For Claude Code users:
claude mcp add taskmaster-ai -- npx -y task-master-ai
Don't forget to add your API keys to the configuration:
- in the root .env of your Project
- in the "env" section of your mcp config for taskmaster-ai
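As an illustration, a minimal root `.env` might look like this (placeholder values; the key names match the mcp.json examples in this README, and you only need the providers you actually use):

```
# .env - Taskmaster API keys (placeholders, replace with your real keys)
ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY_HERE
PERPLEXITY_API_KEY=YOUR_PERPLEXITY_API_KEY_HERE
OPENAI_API_KEY=YOUR_OPENAI_KEY_HERE
```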
Requirements
Taskmaster uses AI across several commands, and those commands require an API key. You can use a variety of models from different AI providers, provided you add your API keys. For example, if you want to use Claude 3.7, you'll need an Anthropic API key.
You can define three types of models: the main model, the research model, and the fallback model (used in case the main or research model fails). Whichever models you use, the provider's API key must be present in either mcp.json or .env.
At least one (1) of the following is required:
- Anthropic API key (Claude API)
- OpenAI API key
- Google Gemini API key
- Perplexity API key (for research model)
- xAI API Key (for research or main model)
- OpenRouter API Key (for research or main model)
- Claude Code (no API key required - requires Claude Code CLI)
- Codex CLI (OAuth via ChatGPT subscription - requires Codex CLI)
Using the research model is optional but highly recommended. You will need at least ONE API key (unless using Claude Code or Codex CLI with OAuth). Adding all API keys enables you to seamlessly switch between model providers at will.
Quick Start
Option 1: MCP (Recommended)
MCP (Model Context Protocol) lets you run Task Master directly from your editor.
1. Add your MCP config at the following path depending on your editor
| Editor | Scope | Linux/macOS Path | Windows Path | Key |
|---|---|---|---|---|
| Cursor | Global | ~/.cursor/mcp.json | %USERPROFILE%\.cursor\mcp.json | mcpServers |
| Cursor | Project | <project_folder>/.cursor/mcp.json | <project_folder>\.cursor\mcp.json | mcpServers |
| Windsurf | Global | ~/.codeium/windsurf/mcp_config.json | %USERPROFILE%\.codeium\windsurf\mcp_config.json | mcpServers |
| VS Code | Project | <project_folder>/.vscode/mcp.json | <project_folder>\.vscode\mcp.json | servers |
| Q CLI | Global | ~/.aws/amazonq/mcp.json | | mcpServers |
Manual Configuration
Cursor & Windsurf & Q Developer CLI (mcpServers)
{
"mcpServers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
// "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"GROQ_API_KEY": "YOUR_GROQ_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
"OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE"
}
}
}
}
🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
Note: If you see `0 tools enabled` in the MCP settings, restart your editor and check that your API keys are correctly configured.
VS Code (servers + type)
{
"servers": {
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
// "TASK_MASTER_TOOLS": "all", // Options: "all", "standard", "core", or comma-separated list of tools
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"GROQ_API_KEY": "YOUR_GROQ_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
"OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE"
},
"type": "stdio"
}
}
}
🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
2. (Cursor-only) Enable Taskmaster MCP
Open Cursor Settings (Ctrl+Shift+J) ➡ Click on MCP tab on the left ➡ Enable task-master-ai with the toggle
3. (Optional) Configure the models you want to use
In your editor's AI chat pane, say:
Change the main, research and fallback models to <model_name>, <model_name> and <model_name> respectively.
For example, to use Claude Code (no API key required):
Change the main model to claude-code/sonnet
Table of available models | Claude Code setup
4. Initialize Task Master
In your editor's AI chat pane, say:
Initialize taskmaster-ai in my project
5. Make sure you have a PRD (Recommended)
For new projects: Create your PRD at .taskmaster/docs/prd.txt.
For existing projects: You can use scripts/prd.txt or migrate with task-master migrate
An example PRD template is available after initialization in .taskmaster/templates/example_prd.txt.
[!NOTE] While a PRD is recommended for complex projects, you can always create individual tasks by asking "Can you help me implement [description of what you want to do]?" in chat.
Always start with a detailed PRD.
The more detailed your PRD, the better the generated tasks will be.
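As a purely illustrative sketch (the shipped template at .taskmaster/templates/example_prd.txt is the authoritative starting point), a bare-bones PRD outline might cover:

```
# Overview
What the product does, for whom, and why it matters.

# Core Features
- Feature A: what it does and how it works
- Feature B: ...

# Technical Architecture
Stack, data models, APIs, and third-party integrations.

# Development Roadmap
Phase 1 (MVP): the smallest usable slice
Phase 2: enhancements that build on the MVP
```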
6. Common Commands
Use your AI assistant to:
- Parse requirements: Can you parse my PRD at scripts/prd.txt?
- Plan next step: What's the next task I should work on?
- Implement a task: Can you help me implement task 3?
- View multiple tasks: Can you show me tasks 1, 3, and 5?
- Expand a task: Can you help me expand task 4?
- Research fresh information: Research the latest best practices for implementing JWT authentication with Node.js
- Research with context: Research React Query v5 migration strategies for our current API implementation in src/api.js
More examples on how to use Task Master in chat
Option 2: Using Command Line
Installation
# Install globally
npm install -g task-master-ai
# OR install locally within your project
npm install task-master-ai
Initialize a new project
# If installed globally
task-master init
# If installed locally
npx task-master init
# Initialize project with specific rules
task-master init --rules cursor,windsurf,vscode
This will prompt you for project details and set up a new project with the necessary files and structure.
Common Commands
# Initialize a new project
task-master init
# Parse a PRD and generate tasks
task-master parse-prd your-prd.txt
# List all tasks
task-master list
# Show the next task to work on
task-master next
# Show specific task(s) - supports comma-separated IDs
task-master show 1,3,5
# Research fresh information with project context
task-master research "What are the latest best practices for JWT authentication?"
# Move tasks between tags (cross-tag movement)
task-master move --from=5 --from-tag=backlog --to-tag=in-progress
task-master move --from=5,6,7 --from-tag=backlog --to-tag=done --with-dependencies
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --ignore-dependencies
# Add rules after initialization
task-master rules add windsurf,roo,vscode
Tool Loading Configuration
Optimizing MCP Tool Loading
Task Master's MCP server supports selective tool loading to reduce context window usage. By default, all 36 tools are loaded (~21,000 tokens) to maintain backward compatibility with existing installations.
You can optimize performance by configuring the TASK_MASTER_TOOLS environment variable:
Available Modes
| Mode | Tools | Context Usage | Use Case |
|---|---|---|---|
| all (default) | 36 | ~21,000 tokens | Complete feature set - all tools available |
| standard | 15 | ~10,000 tokens | Common task management operations |
| core (or lean) | 7 | ~5,000 tokens | Essential daily development workflow |
| custom | Variable | Variable | Comma-separated list of specific tools |
Configuration Methods
Method 1: Environment Variable in MCP Configuration
Add TASK_MASTER_TOOLS to your MCP configuration file's env section:
{
"mcpServers": { // or "servers" for VS Code
"task-master-ai": {
"command": "npx",
"args": ["-y", "task-master-ai"],
"env": {
"TASK_MASTER_TOOLS": "standard", // Options: "all", "standard", "core", "lean", or comma-separated list
"ANTHROPIC_API_KEY": "your-key-here",
// ... other API keys
}
}
}
}
Method 2: Claude Code CLI (One-Time Setup)
For Claude Code users, you can set the mode during installation:
# Core mode example (~70% token reduction)
claude mcp add task-master-ai --scope user \
--env TASK_MASTER_TOOLS="core" \
-- npx -y task-master-ai@latest
# Custom tools example
claude mcp add task-master-ai --scope user \
--env TASK_MASTER_TOOLS="get_tasks,next_task,set_task_status" \
-- npx -y task-master-ai@latest
Tool Sets Details
Core Tools (7): get_tasks, next_task, get_task, set_task_status, update_subtask, parse_prd, expand_task
Standard Tools (15): All core tools plus initialize_project, analyze_project_complexity, expand_all, add_subtask, remove_task, generate, add_task, complexity_report
All Tools (36): Complete set including project setup, task management, analysis, dependencies, tags, research, and more
Recommendations
- New users: Start with "standard" mode for a good balance
- Large projects: Use "core" mode to minimize token usage
- Complex workflows: Use "all" mode or custom selection
- Backward compatibility: If not specified, defaults to "all" mode
Claude Code Support
Task Master now supports Claude models through the Claude Code CLI, which requires no API key:
- Models: claude-code/opus and claude-code/sonnet
- Requirements: Claude Code CLI installed
- Benefits: No API key needed, uses your local Claude instance
Learn more about Claude Code setup
Troubleshooting
If task-master init doesn't respond
Try running it with Node directly:
node node_modules/claude-task-master/scripts/init.js
Or clone the repository and run:
git clone https://github.com/eyaltoledano/claude-task-master.git
cd claude-task-master
node scripts/init.js
Join Our Team
Contributors
Star History
Licensing
Task Master is licensed under the MIT License with Commons Clause. This means you can:
✅ Allowed:
- Use Task Master for any purpose (personal, commercial, academic)
- Modify the code
- Distribute copies
- Create and sell products built using Task Master
❌ Not Allowed:
- Sell Task Master itself
- Offer Task Master as a hosted service
- Create competing products based on Task Master
See the LICENSE file for the complete license text and licensing details.
