Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
01 / Skill
harness

mcp-server

Quality
9.0

This MCP server provides AI agents with comprehensive access to the Harness.io platform, consolidating over 240 API endpoints into just 11 tools. It supports 168 resource types across CI/CD, GitOps, Feature Flags, Cloud Cost Management, Security Testing, and more, offering full platform coverage. Agents can discover organizations and projects dynamically, enabling multi-project workflows without hardcoded configurations. It includes 30 pre-built prompt templates for common tasks like deploying apps, debugging pipelines, optimizing cloud costs, and auditing access. The server supports Stdio and Streamable HTTP transports.

USP

Consolidates 240+ Harness API endpoints into 11 tools for 168 resource types, simplifying AI agent interaction. Offers full platform coverage, dynamic multi-project discovery, 30 prompt templates, and zero-config setup with just a Harness…

Use cases

  • Automating CI/CD pipeline management and deployments across multiple projects.
  • Debugging failed pipelines and reviewing DORA metrics with AI agents.
  • Optimizing cloud costs and triaging security vulnerabilities.
  • Managing feature flag rollouts and auditing access control.
  • Building and deploying applications end-to-end using AI assistance.

Detected files (1)

  • CLAUDE.md
    # CLAUDE.md — Harness.io MCP Server
    
    > You are building a production-grade MCP (Model Context Protocol) server that wraps the Harness.io REST API, enabling AI agents (Claude, Cursor, Windsurf, etc.) to interact with Harness CI/CD pipelines, services, environments, connectors, and platform entities through standardized tools and resources.
    
    ---
    
    ## Project Identity
    
    - **Name**: `harness-mcp-server`
    - **Runtime**: TypeScript (Node.js 20+)
    - **SDK**: `@modelcontextprotocol/sdk` (v1.27+)
    - **Transport**: Stdio (local) + Streamable HTTP (remote)
    - **Schema Validation**: Zod v4 (import from `zod/v4`)
    - **Build**: `tsc` with ES2022 target, ESM output
    - **Package Manager**: pnpm
    
    ---
    
    ## Workflow Orchestration
    
    ### 1. Plan Mode Default
    - Enter plan mode for ANY non-trivial task (3+ steps or architectural decisions)
    - If something goes sideways, STOP and re-plan immediately — don't keep pushing
    - Use plan mode for verification steps, not just building
    - Write detailed specs upfront to reduce ambiguity
    - Map every new tool to a specific Harness API endpoint before writing code
    
    ### 2. Subagent Strategy
    - Use subagents liberally to keep main context window clean
    - Offload research, exploration, and parallel analysis to subagents
    - For complex problems, throw more compute at it via subagents
    - One task per subagent for focused execution
    - Use subagents for: API endpoint discovery, Zod schema generation, test writing
    
    ### 3. Self-Improvement Loop
    - After ANY correction from the user: update `tasks/lessons.md` with the pattern
    - Write rules for yourself that prevent the same mistake
    - Ruthlessly iterate on these lessons until mistake rate drops
    - Review lessons for the relevant project at session start
    
    ### 4. Verification Before Done
    - Never mark a task complete without proving it works
    - Diff behavior between main and your changes when relevant
    - Ask yourself: "Would a staff engineer approve this?"
    - Run tests, check logs, demonstrate correctness
    - For every MCP tool: test with `npx @modelcontextprotocol/inspector`
    
    ### 5. Demand Elegance (Balanced)
    - For non-trivial changes: pause and ask "is there a more elegant way?"
    - If a fix feels hacky: "Knowing everything I know now, implement the elegant solution"
    - Skip this for simple, obvious fixes — don't over-engineer
    - Challenge your own work before presenting it
    
    ### 6. Autonomous Bug Fixing
    - When given a bug report: just fix it. Don't ask for hand-holding
    - Point at logs, errors, failing tests — then resolve them
    - Zero context switching required from the user
    - Go fix failing CI tests without being told how
    
    ---
    
    ## Task Management
    
    1. **Plan First**: Write plan to `tasks/todo.md` with checkable items
    2. **Verify Plan**: Check in before starting implementation
    3. **Track Progress**: Mark items complete as you go
    4. **Explain Changes**: High-level summary at each step
    5. **Document Results**: Add review section to `tasks/todo.md`
    6. **Capture Lessons**: Update `tasks/lessons.md` after corrections
    
    ---
    
    ## Core Principles
    
    - **Simplicity First**: Make every change as simple as possible and touch minimal code.
    - **No Laziness**: Find root causes. No temporary fixes. Senior developer standards.
    - **Minimal Impact**: Changes should only touch what's necessary. Avoid introducing bugs.
    - **Type Safety**: Every tool input/output must be fully typed with Zod 4 schemas. No `any`. Import via `import * as z from "zod/v4"`.
    - **Fail Loudly**: Never swallow errors. Surface Harness API errors with full context.
    - **Idempotent Reads**: All read tools must be safe to call repeatedly with identical results.
    
    ---
    
    ## Architecture
    
    ### Directory Structure
    ```
    harness-mcp-server/
    ├── src/
    │   ├── index.ts                    # Server entrypoint + transport setup
    │   ├── config.ts                   # Env var validation (Zod)
    │   ├── client/
    │   │   ├── harness-client.ts       # Core HTTP client (auth, base URL, retry)
    │   │   ├── types.ts                # Shared Harness API response types
    │   │   └── pagination.ts           # Generic paginator for list endpoints
    │   ├── tools/
    │   │   ├── index.ts                # Tool registry (auto-discovers all tools)
    │   │   ├── pipelines.ts            # Pipeline CRUD + execution tools
    │   │   ├── executions.ts           # Execution history, logs, status
    │   │   ├── connectors.ts           # Connector management
    │   │   ├── services.ts             # Service entity tools
    │   │   ├── environments.ts         # Environment entity tools
    │   │   ├── projects.ts             # Project + Org tools
    │   │   ├── secrets.ts              # Secret management (read-only metadata)
    │   │   ├── triggers.ts             # Pipeline trigger management
    │   │   ├── delegates.ts            # Delegate health + status
    │   │   ├── feature-flags.ts        # FF toggles and status
    │   │   └── logs.ts                 # Execution log retrieval
    │   ├── resources/
    │   │   ├── index.ts                # Resource registry
    │   │   ├── pipeline-yaml.ts        # Pipeline YAML as resource
    │   │   └── execution-summary.ts    # Recent executions as resource
    │   ├── prompts/
    │   │   ├── index.ts                # Prompt registry
    │   │   ├── debug-pipeline.ts       # "Debug this failed pipeline" prompt
    │   │   ├── create-pipeline.ts      # "Create a new pipeline" prompt
    │   │   └── optimize-pipeline.ts    # "Optimize this pipeline" prompt
    │   └── utils/
    │       ├── errors.ts               # Error normalization + MCP error mapping
    │       ├── logger.ts               # stderr-only logger (CRITICAL for stdio)
    │       └── rate-limiter.ts         # Client-side rate limiting
    ├── tests/
    │   ├── tools/                      # Tool-level unit tests
    │   ├── client/                     # HTTP client tests
    │   └── integration/                # End-to-end with mock Harness API
    ├── tasks/
    │   ├── todo.md                     # Current task tracking
    │   └── lessons.md                  # Self-improvement log
    ├── .env.example                    # Required env vars documented
    ├── tsconfig.json
    ├── package.json
    └── README.md
    ```
    
    ### Key Design Decisions
    
    **1. Single HTTP Client Instance**
    - One `HarnessClient` class wraps all API calls
    - Handles auth header injection (`x-api-key`), base URL, retry with exponential backoff
    - All tools receive the client via dependency injection — never instantiate their own
    
    **2. Tool Registration Pattern**
    - Each tool file exports a `register(server, client)` function
    - `tools/index.ts` auto-imports and registers all tools
    - Tools are grouped by Harness entity domain (pipelines, connectors, etc.)
    
    **3. Harness Scoping Model**
    - Harness uses a 3-tier hierarchy: Account → Organization → Project
    - EVERY API call requires `accountIdentifier` (from env/config)
    - Most calls require `orgIdentifier` + `projectIdentifier`
    - Tools should accept optional org/project params, defaulting to config values
    
    **4. Error Handling Strategy**
    - Harness API errors follow: `{ status: "ERROR", code: "...", message: "..." }`
    - Map Harness error codes to MCP-friendly error messages
    - Include the `correlationId` in error responses for debugging
    - Never expose raw API keys or tokens in error output
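
    A sketch of that strategy for `utils/errors.ts` (field names beyond those in the error shape above are assumptions):

    ```typescript
    // utils/errors.ts — normalize Harness { status: "ERROR", ... } payloads (sketch)
    interface HarnessErrorBody {
      status: "ERROR";
      code?: string;
      message?: string;
      correlationId?: string;
    }

    export class HarnessApiError extends Error {
      constructor(
        public readonly httpStatus: number,
        public readonly code: string,
        message: string,
        public readonly correlationId?: string,
      ) {
        // Surface the correlationId for debugging; never include the API key
        super(
          `Harness API error ${code} (HTTP ${httpStatus}): ${message}` +
            (correlationId ? ` [correlationId=${correlationId}]` : ""),
        );
        this.name = "HarnessApiError";
      }
    }

    export function toHarnessApiError(httpStatus: number, body: HarnessErrorBody): HarnessApiError {
      return new HarnessApiError(
        httpStatus,
        body.code ?? "UNKNOWN_ERROR",
        body.message ?? "Unknown Harness API error",
        body.correlationId,
      );
    }
    ```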
    
    ---
    
    ## Harness API Reference
    
    ### Authentication
    ```
    Header: x-api-key: <HARNESS_API_KEY>
    Base URL: https://app.harness.io
    ```
    - Personal API tokens or Service Account tokens
    - Token created in Harness UI → User Profile → API Keys
    
    ### API Versioning
    - **GA (stable)**: `https://app.harness.io/ng/api/...` (ng = next-gen)
    - **v1 Beta**: `https://app.harness.io/v1/...`
    - **Pipeline APIs**: `https://app.harness.io/pipeline/api/...`
    - **Log Service**: `https://app.harness.io/gateway/log-service/...`
    - Prefer v1 endpoints where available; fall back to ng/api
    
    ### Pagination
    - Query params: `page` (0-indexed), `size` (default 30, max 100)
    - v1 beta: `limit` (default 30, max 100) + `page` param
    - Response headers: `X-Total-Elements`, `X-Page-Number`, `X-Page-Size`
    - Execution summary API hard limit: 10,000 records max
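
    This header-driven pagination lends itself to a generic paginator for `client/pagination.ts` (a sketch; the page-fetcher callback signature is an assumption):

    ```typescript
    // client/pagination.ts — generic paginator for list endpoints (sketch)
    export interface Page<T> {
      items: T[];
      totalElements: number; // from X-Total-Elements
      pageNumber: number;    // from X-Page-Number (0-indexed)
      pageSize: number;      // from X-Page-Size
    }

    // Yield every item across pages until totalElements is exhausted (size capped at 100)
    export async function* paginate<T>(
      fetchPage: (page: number, size: number) => Promise<Page<T>>,
      size = 30,
    ): AsyncGenerator<T> {
      const cappedSize = Math.min(size, 100);
      let page = 0;
      let seen = 0;
      for (;;) {
        const result = await fetchPage(page, cappedSize);
        for (const item of result.items) yield item;
        seen += result.items.length;
        if (seen >= result.totalElements || result.items.length === 0) return;
        page += 1;
      }
    }
    ```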
    
    ### Core Endpoints to Wrap
    
    | Domain | Method | Endpoint | Tool Name |
    |--------|--------|----------|-----------|
    | **Projects** | GET | `/ng/api/projects` | `list_projects` |
    | **Projects** | GET | `/ng/api/projects/{projectId}` | `get_project` |
    | **Pipelines** | GET | `/pipeline/api/pipelines/list` | `list_pipelines` |
    | **Pipelines** | GET | `/pipeline/api/pipelines/{pipelineId}` | `get_pipeline` |
    | **Pipelines** | POST | `/pipeline/api/pipelines/v2` | `create_pipeline` |
    | **Pipelines** | PUT | `/pipeline/api/pipelines/v2/{pipelineId}` | `update_pipeline` |
    | **Execute** | POST | `/pipeline/api/pipeline/execute/{pipelineId}` | `execute_pipeline` |
    | **Execute** | PUT | `/pipeline/api/pipeline/execute/interrupt/{planExecutionId}` | `interrupt_execution` |
    | **Executions** | POST | `/pipeline/api/pipelines/execution/summary` | `list_executions` |
    | **Executions** | GET | `/pipeline/api/pipelines/execution/{planExecutionId}` | `get_execution` |
    | **Logs** | POST | `/gateway/log-service/blob/download` | `get_execution_logs` |
    | **Connectors** | GET | `/ng/api/connectors` | `list_connectors` |
    | **Connectors** | GET | `/ng/api/connectors/{connectorId}` | `get_connector` |
    | **Connectors** | POST | `/ng/api/connectors/testConnection/{connectorId}` | `test_connector` |
    | **Services** | GET | `/ng/api/servicesV2` | `list_services` |
    | **Services** | GET | `/ng/api/servicesV2/{serviceId}` | `get_service` |
    | **Environments** | GET | `/ng/api/environmentsV2` | `list_environments` |
    | **Environments** | GET | `/ng/api/environmentsV2/{envId}` | `get_environment` |
    | **Secrets** | GET | `/ng/api/v2/secrets` | `list_secrets` |
    | **Delegates** | GET | `/ng/api/delegate-group-ng/v2` | `list_delegates` |
    | **Triggers** | GET | `/pipeline/api/triggers` | `list_triggers` |
    | **Triggers** | POST | `/pipeline/api/triggers` | `create_trigger` |
    | **Feature Flags** | GET | `/cf/admin/features` | `list_feature_flags` |
    | **Feature Flags** | PATCH | `/cf/admin/features/{featureId}` | `toggle_feature_flag` |
    | **Input Sets** | GET | `/pipeline/api/inputSets` | `list_input_sets` |
    
    ### Common Query Parameters (Always Required)
    ```typescript
    interface HarnessScope {
      accountIdentifier: string;   // Always from config
      orgIdentifier?: string;      // Default from config, overridable
      projectIdentifier?: string;  // Default from config, overridable
    }
    ```
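
    Tools can resolve this scope by merging per-call overrides with configured defaults; a sketch (the helper name is hypothetical):

    ```typescript
    // Merge per-call overrides with configured defaults (sketch)
    interface ScopeDefaults {
      accountIdentifier: string;
      orgIdentifier?: string;
      projectIdentifier?: string;
    }

    export function resolveScope(
      defaults: ScopeDefaults,
      overrides: { orgIdentifier?: string; projectIdentifier?: string } = {},
    ): ScopeDefaults {
      return {
        accountIdentifier: defaults.accountIdentifier, // always from config
        orgIdentifier: overrides.orgIdentifier ?? defaults.orgIdentifier,
        projectIdentifier: overrides.projectIdentifier ?? defaults.projectIdentifier,
      };
    }
    ```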
    
    ---
    
    ## Tool Design Rules
    
    ### Naming Convention
    - Use `snake_case` for tool names: `list_pipelines`, `get_execution_logs`
    - Prefix with verb: `list_`, `get_`, `create_`, `update_`, `delete_`, `execute_`, `test_`
    - Match Harness domain language exactly
    
    ### Input Schema Rules
    - Every tool MUST have a Zod schema for inputs
    - Import Zod 4: `import * as z from "zod/v4"` — never `import { z } from "zod"`
    - Use `z.string().describe("...")` — descriptions are critical for LLM tool selection
    - **CRITICAL**: Always call `.describe()` LAST in the chain — Zod 4 creates new schema instances per method call, so `.describe()` before `.optional()` or `.default()` will lose the description
    - Correct: `z.string().min(1).optional().describe("Org ID")`
    - Wrong: `z.string().describe("Org ID").min(1).optional()` (description lost)
    - Optional params with sensible defaults: `org_id` defaults to env config
    - Pagination params optional: `page` defaults to 0, `size` defaults to 20
    
    ### Output Rules
    - Return structured JSON, not raw API responses
    - Strip unnecessary metadata — return only what's actionable
    - For list tools: return `{ items: [...], total: number, page: number }`
    - For execution tools: return `{ executionId: string, status: string, url: string }`
    - Include Harness UI deep links where possible: `https://app.harness.io/ng/#/account/{accountId}/...`
    - Truncate large log outputs — provide summary + offer full retrieval
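
    A sketch of mapping a raw list payload into the lean `{ items, total, page }` shape (the raw field names `content`, `totalElements`, `pageNumber` are assumptions based on typical ng/api responses):

    ```typescript
    // Map a raw Harness list payload to the lean tool output shape (sketch)
    interface RawListResponse<T> {
      status: string;
      data: { content: T[]; totalElements: number; pageNumber: number };
    }

    export interface ListToolOutput<T> {
      items: T[];
      total: number;
      page: number;
    }

    export function toListOutput<T, U>(
      raw: RawListResponse<T>,
      mapItem: (item: T) => U, // strip metadata, keep only actionable fields
    ): ListToolOutput<U> {
      return {
        items: raw.data.content.map(mapItem),
        total: raw.data.totalElements,
        page: raw.data.pageNumber,
      };
    }
    ```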
    
    ### Tool Annotations
    ```typescript
    // Always set annotations for every tool
    annotations: {
      title: "Human-readable tool name",
      readOnlyHint: true,  // for GET operations
      destructiveHint: true, // for DELETE operations
      idempotentHint: true, // for PUT operations
      openWorldHint: true,  // always true — talks to Harness API
    }
    ```
    
    ### Safety Rules
    - **NEVER** expose secret values — only metadata (name, type, scope)
    - **NEVER** delete pipelines/services/environments without explicit confirmation flow
    - **NEVER** auto-execute pipelines — always return the execution plan first
    - Write operations require `confirmation: true` input param
    - Rate limit to max 10 requests/second client-side
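
    The `confirmation: true` gate can live in one shared guard; a sketch (the helper name is hypothetical):

    ```typescript
    // Guard for write tools: refuse to act unless confirmation: true was passed (sketch)
    export interface ConfirmableInput {
      confirmation?: boolean;
    }

    export function requireConfirmation(
      input: ConfirmableInput,
      action: string,
    ): { ok: true } | { ok: false; message: string } {
      if (input.confirmation === true) return { ok: true };
      return {
        ok: false,
        message:
          `Refusing to ${action}: pass confirmation: true to proceed. ` +
          `This is a write operation and will modify Harness state.`,
      };
    }
    ```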
    
    ---
    
    ## Environment Configuration
    
    ```bash
    # .env — required
    HARNESS_API_KEY=pat.xxxxx.xxxxx.xxxxx           # Personal access token or SA token
    HARNESS_ACCOUNT_ID=abc123xyz                    # Account identifier
    HARNESS_BASE_URL=https://app.harness.io         # Override for self-managed
    
    # .env — optional defaults
    HARNESS_DEFAULT_ORG=default                     # Default org identifier
    HARNESS_DEFAULT_PROJECT=                        # Default project identifier
    HARNESS_API_TIMEOUT_MS=30000                    # Request timeout
    HARNESS_MAX_RETRIES=3                           # Retry count for transient failures
    LOG_LEVEL=info                                  # debug | info | warn | error
    ```
    
    ### Config Validation (config.ts)
    ```typescript
    import * as z from "zod/v4";
    
    export const ConfigSchema = z.object({
      HARNESS_API_KEY: z.string().min(1, "HARNESS_API_KEY is required"),
      HARNESS_ACCOUNT_ID: z.string().optional(),
      HARNESS_BASE_URL: z.string().url().default("https://app.harness.io"),
      HARNESS_DEFAULT_ORG: z.string().default("default"),
      HARNESS_DEFAULT_PROJECT: z.string().optional(),
      HARNESS_API_TIMEOUT_MS: z.coerce.number().default(30000),
      HARNESS_MAX_RETRIES: z.coerce.number().default(3),
      LOG_LEVEL: z.enum(["debug", "info", "warn", "error"]).default("info"),
      HARNESS_TOOLSETS: z.string().optional(),
      HARNESS_MAX_BODY_SIZE_MB: z.coerce.number().default(10),
      HARNESS_RATE_LIMIT_RPS: z.coerce.number().default(10),
      HARNESS_READ_ONLY: z.stringbool().default(false), // z.coerce.boolean() would parse "false" as true
    });
    
    export type Config = z.infer<typeof ConfigSchema>;
    ```
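
    Startup loading might then look like this (a sketch with a trimmed schema; note that Zod 4 exposes validation failures on `error.issues`):

    ```typescript
    import * as z from "zod/v4";

    // Trimmed-down schema for illustration; the real ConfigSchema has more fields
    const EnvSchema = z.object({
      HARNESS_API_KEY: z.string().min(1, { error: "HARNESS_API_KEY is required" }),
      HARNESS_BASE_URL: z.string().url().default("https://app.harness.io"),
    });

    export function loadConfig(env: Record<string, string | undefined>) {
      const result = EnvSchema.safeParse(env);
      if (!result.success) {
        // Zod 4: read error.issues (the old .errors accessor is removed)
        const details = result.error.issues
          .map((issue) => `${issue.path.join(".")}: ${issue.message}`)
          .join("; ");
        throw new Error(`Invalid configuration: ${details}`);
      }
      return result.data;
    }
    ```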
    
    ---
    
    ## Logging Rules (CRITICAL)
    
    > **STDIO transport uses stdin/stdout for JSON-RPC. Writing to stdout WILL break the server.**
    
    ```typescript
    // ❌ NEVER — corrupts JSON-RPC protocol
    console.log("anything");
    
    // ✅ ALWAYS — stderr is safe
    console.error("[INFO] Server started");
    
    // ✅ BEST — use structured logger to stderr
    import { createLogger } from "./utils/logger.js";
    const log = createLogger("pipelines");
    log.info("Fetched 42 pipelines");
    log.error("Harness API error", { status: 401, correlationId: "abc" });
    ```
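
    A minimal `utils/logger.ts` along those lines (a sketch; the injectable `write` parameter is an assumption, added for testability):

    ```typescript
    // utils/logger.ts — structured logs to stderr only (stdout is reserved for JSON-RPC)
    type Level = "debug" | "info" | "warn" | "error";
    const LEVELS: Record<Level, number> = { debug: 0, info: 1, warn: 2, error: 3 };

    export function createLogger(
      scope: string,
      minLevel: Level = "info",
      write: (line: string) => void = (line) => { process.stderr.write(line + "\n"); },
    ) {
      const emit = (level: Level, message: string, meta?: Record<string, unknown>) => {
        if (LEVELS[level] < LEVELS[minLevel]) return; // drop below-threshold logs
        write(JSON.stringify({ level, scope, message, ...meta }));
      };
      return {
        debug: (m: string, meta?: Record<string, unknown>) => emit("debug", m, meta),
        info: (m: string, meta?: Record<string, unknown>) => emit("info", m, meta),
        warn: (m: string, meta?: Record<string, unknown>) => emit("warn", m, meta),
        error: (m: string, meta?: Record<string, unknown>) => emit("error", m, meta),
      };
    }
    ```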
    
    ---
    
    ## HTTP Client Pattern
    
    ```typescript
    class HarnessClient {
      private baseUrl: string;
      private token: string;
      private accountId: string;
      private timeout: number;
      private maxRetries: number;
    
      async request<T>(path: string, options?: RequestOptions): Promise<T> {
        // 1. Inject auth header: x-api-key
        // 2. Inject accountIdentifier query param
        // 3. Retry on 429 (rate limit) and 5xx with exponential backoff
        // 4. Parse response — handle both { status: "SUCCESS", data: ... }
        //    and { status: "ERROR", code: ..., message: ... }
        // 5. Throw typed HarnessApiError on failure
        // 6. Log request/response to stderr (debug level)
      }
    
      // Convenience methods
      async get<T>(path: string, params?: Record<string, string>): Promise<T>;
      async post<T>(path: string, body?: unknown): Promise<T>;
      async put<T>(path: string, body?: unknown): Promise<T>;
      async delete<T>(path: string): Promise<T>;
    }
    ```
    
    ### Retry Strategy
    - Retry on: HTTP 429, 500, 502, 503, 504
    - Backoff: 1s → 2s → 4s (exponential with jitter)
    - Max retries from config (default: 3)
    - Never retry on: 400, 401, 403, 404
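
    The retry rules above can be sketched as follows (helper names are hypothetical; the error is assumed to carry an `httpStatus` field as in the client pattern above):

    ```typescript
    // Retry transient Harness API failures with exponential backoff + jitter (sketch)
    const RETRYABLE = new Set([429, 500, 502, 503, 504]);

    export function isRetryable(status: number): boolean {
      return RETRYABLE.has(status); // never retry 400/401/403/404
    }

    export function backoffMs(attempt: number, baseMs = 1000, jitter = Math.random): number {
      // attempt 0 -> ~1s, 1 -> ~2s, 2 -> ~4s, plus up to 100ms jitter
      return baseMs * 2 ** attempt + Math.floor(jitter() * 100);
    }

    export async function withRetry<T>(
      fn: () => Promise<T>,
      maxRetries = 3,
      sleep = (ms: number) => new Promise((r) => setTimeout(r, ms)),
    ): Promise<T> {
      for (let attempt = 0; ; attempt++) {
        try {
          return await fn();
        } catch (err) {
          const status = (err as { httpStatus?: number }).httpStatus ?? 0;
          if (attempt >= maxRetries || !isRetryable(status)) throw err;
          await sleep(backoffMs(attempt));
        }
      }
    }
    ```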
    
    ---
    
    ## Resource Definitions
    
    Resources provide read-only data that LLMs can reference without tool calls.
    
    ```typescript
    // Pipeline YAML as a resource (templated URIs need ResourceTemplate)
    import { ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";

    server.resource(
      "pipeline-yaml",
      new ResourceTemplate("pipeline://{orgId}/{projectId}/{pipelineId}", { list: undefined }),
      { description: "Pipeline YAML definition" },
      async (uri, { orgId, projectId, pipelineId }) => ({
        contents: [{ uri: uri.href, mimeType: "application/x-yaml", text: pipelineYaml }]
      })
    );

    // Recent executions as a resource
    server.resource(
      "recent-executions",
      new ResourceTemplate("executions://{orgId}/{projectId}/recent", { list: undefined }),
      { description: "Last 10 pipeline executions" },
      async (uri, { orgId, projectId }) => ({
        contents: [{ uri: uri.href, mimeType: "application/json", text: JSON.stringify(executions) }]
      })
    );
    ```
    
    ---
    
    ## Prompt Templates
    
    ### Debug Failed Pipeline
    ```typescript
    server.prompt(
      "debug-pipeline-failure",
      "Analyze a failed pipeline execution and suggest fixes",
      {
        executionId: z.string().describe("The failed execution ID"),
        projectId: z.string().optional().describe("Project identifier"),
      },
      async ({ executionId, projectId }) => ({
        messages: [{
          role: "user",
          content: {
            type: "text",
            text: `Analyze this failed Harness pipeline execution and provide:
    1. Root cause of the failure
    2. Which step failed and why
    3. Suggested fix
    4. Similar past failures if identifiable
    
    Execution ID: ${executionId}
    Project: ${projectId || "default"}
    
    Use get_execution and get_execution_logs tools to gather context.`
          }
        }]
      })
    );
    ```
    
    ---
    
    ## Testing Strategy
    
    ### Unit Tests
    - Mock `HarnessClient` for all tool tests
    - Test Zod schema validation (valid + invalid inputs)
    - Test error mapping (Harness error codes → MCP errors)
    - Test pagination assembly
    
    ### Integration Tests
    - Use `@modelcontextprotocol/inspector` for end-to-end validation
    - Mock Harness API with `msw` (Mock Service Worker)
    - Test full tool lifecycle: input → API call → response mapping
    - Test auth failure handling
    - Test rate limit backoff
    
    ### Test Command
    ```bash
    # Unit tests
    pnpm test
    
    # Integration with MCP Inspector
    npx @modelcontextprotocol/inspector node build/index.js
    
    # Type checking
    pnpm typecheck
    ```
    
    ---
    
    ## Implementation Priority (Build Order)
    
    ### Phase 1: Foundation (Day 1)
    - [ ] Project scaffolding (package.json, tsconfig, pnpm)
    - [ ] Config validation with Zod
    - [ ] HarnessClient with auth, retry, error handling
    - [ ] Logger (stderr only)
    - [ ] Server entrypoint with stdio transport
    
    ### Phase 2: Read Tools (Day 2)
    - [ ] `list_projects` / `get_project`
    - [ ] `list_pipelines` / `get_pipeline`
    - [ ] `list_executions` / `get_execution`
    - [ ] `get_execution_logs`
    - [ ] `list_connectors` / `get_connector` / `test_connector`
    - [ ] `list_services` / `get_service`
    - [ ] `list_environments` / `get_environment`
    
    ### Phase 3: Write Tools (Day 3)
    - [ ] `execute_pipeline` (with confirmation gate)
    - [ ] `interrupt_execution`
    - [ ] `create_pipeline` / `update_pipeline`
    - [ ] `list_triggers` / `create_trigger`
    - [ ] `toggle_feature_flag`
    
    ### Phase 4: Resources + Prompts (Day 4)
    - [ ] Pipeline YAML resources
    - [ ] Execution summary resources
    - [ ] Debug prompt template
    - [ ] Create pipeline prompt template
    - [ ] Optimize pipeline prompt template
    
    ### Phase 5: Production Hardening (Day 5)
    - [ ] Streamable HTTP transport for remote deployment
    - [ ] Rate limiter implementation
    - [ ] Comprehensive error mapping
    - [ ] Full test suite
    - [ ] README + usage docs
    - [ ] npm package publishing config
    
    ---
    
    ## Server Instructions (Anti-Bloat Rules)
    
    The MCP server exposes an `instructions` string (in `src/index.ts`) that is sent to every AI agent on session init. This is prime real estate — every token counts.
    
    ### Hard Rules
    
    1. **Cap at ~20 lines.** The instructions block must stay under 20 lines / ~500 tokens. If it exceeds this, refactor — move detail into `harness_describe` output or tool descriptions instead.
    2. **No per-resource documentation.** Never add resource-specific usage examples to server instructions. That belongs in `actionDescription`, `executeHint`, `diagnosticHint`, or `bodySchema.description` on the resource definition.
    3. **No feature-specific instructions.** Features like input expansions, codebase shorthands, or store type defaults are documented via `inputExpansions` rules (surfaced through `harness_describe` as `inputShorthands`) and tool-level descriptions — not in the global instructions block.
    4. **Only universal patterns.** Server instructions should only contain patterns that apply to ALL tools: URL shortcut, discovery via `harness_describe`, common resource groups.
    5. **Prefer data over prose.** When adding agent-facing guidance, express it as structured metadata on `EndpointSpec` or `ResourceDefinition` (e.g., `inputExpansions`, `bodySchema`, `diagnosticHint`). The `harness_describe` tool auto-surfaces this — no manual docs to maintain.
    
    ### Where to Put New Agent Guidance
    
    | Guidance type | Where it goes |
    |---|---|
    | Universal tool pattern (applies to all tools) | `instructions` in `src/index.ts` |
    | Resource-specific operation details | `description` on the `EndpointSpec` |
    | Execute action usage | `actionDescription` + `executeHint` on the resource |
    | Input shorthands / expansions | `inputExpansions` on `EndpointSpec` (auto-surfaced) |
    | Required fields / body format | `bodySchema` on the `EndpointSpec` |
    | Debugging / troubleshooting | `diagnosticHint` on the resource |
    | Filter fields for list operations | `listFilterFields` on the resource |
    
    ---
    
    ## Common Pitfalls to Avoid
    
    | Pitfall | Fix |
    |---------|-----|
    | `console.log()` in stdio mode | Use `console.error()` or stderr logger ONLY |
    | Forgetting `accountIdentifier` param | Inject from config in HarnessClient automatically |
    | Raw API response passthrough | Always map to clean, typed output objects |
    | Exposing secret values | Only return secret metadata (name, type, scope) |
    | Unbounded list queries | Always paginate, default size=20, max=100 |
    | No retry on rate limits | Implement exponential backoff on HTTP 429 |
    | Hardcoded base URL | Use config — self-managed Harness uses different URLs |
    | Not returning a result object from tool handlers | Handlers must `return { content: [...] }` |
    | Missing tool descriptions | Every param needs `.describe()` — LLMs depend on it |
    | Monolithic tool file | Split by domain (pipelines, connectors, etc.) |
    | `.describe()` before `.optional()` | Zod 4 creates new instances — always call `.describe()` LAST in the chain |
    | `import { z } from "zod"` | Use `import * as z from "zod/v4"` for explicit Zod 4 API |
    | `error.errors` on ZodError | Zod 4 uses `error.issues` (`.errors` is removed) |
    | `message` param for custom errors | Zod 4 uses unified `error` param: `z.string().min(5, { error: "Too short" })` |
    | Adding docs to server `instructions` | Put resource-specific guidance in `actionDescription`, `executeHint`, `bodySchema`, or `inputExpansions` instead |
    | Hardcoding input transformations | Use declarative `inputExpansions` on `EndpointSpec` — data, not code |
    
    ---
    
    ## Quick Reference: MCP SDK Patterns
    
    ### Register a Tool
    ```typescript
    import * as z from "zod/v4";
    
    server.registerTool(
      "harness_list",
      {
        description: "List Harness resources by type with filtering and pagination",
        inputSchema: {
          resource_type: z.string().optional().describe("The type of resource to list (e.g. pipeline, service, environment)"),
          org_id: z.string().optional().describe("Organization identifier (overrides default)"),
          project_id: z.string().optional().describe("Project identifier (overrides default)"),
          page: z.number().default(0).optional().describe("Page number, 0-indexed"),
          size: z.number().min(1).max(100).default(20).optional().describe("Page size (1–100)"),
          search_term: z.string().optional().describe("Filter results by name or keyword"),
        },
        annotations: {
          title: "List Harness Resources",
          readOnlyHint: true,
          openWorldHint: true,
        },
      },
      async (args) => {
        const result = await registry.dispatch(client, args.resource_type, "list", args);
        return {
          content: [{ type: "text", text: JSON.stringify(result, null, 2) }],
        };
      }
    );
    ```
    
    ### Server Entrypoint
    ```typescript
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    
    const server = new McpServer(
      { name: "harness-mcp-server", version: "1.0.0" },
      { capabilities: { tools: {}, resources: {}, prompts: {} } }
    );
    
    registerTools(server, registry, client);
    registerResources(server, client);
    registerPrompts(server, client);
    
    const transport = new StdioServerTransport();
    await server.connect(transport);
    console.error("[harness-mcp] Server connected via stdio");
    ```
    
    ### Zod 4 Import Convention
    ```typescript
    // ✅ Always use the explicit v4 subpath
    import * as z from "zod/v4";
    
    // ❌ Never use the bare import — ambiguous across Zod versions
    import { z } from "zod";
    ```
    
    ---
    
    ## Claude Code Integration
    
    Add to Claude Desktop config (`claude_desktop_config.json`):
    ```json
    {
      "mcpServers": {
        "harness": {
          "command": "node",
          "args": ["/path/to/harness-mcp-server/build/index.js"],
          "env": {
            "HARNESS_API_KEY": "pat.xxx.xxx.xxx",
            "HARNESS_ACCOUNT_ID": "your-account-id",
            "HARNESS_DEFAULT_ORG": "default",
            "HARNESS_DEFAULT_PROJECT": "your-project"
          }
        }
      }
    }
    ```
    
    ---
    
    ## References
    
    - Harness API Docs: https://apidocs.harness.io/
    - Harness Developer Hub: https://developer.harness.io/docs/
    - Harness API Quickstart: https://developer.harness.io/docs/platform/automation/api/api-quickstart/
    - MCP TypeScript SDK: https://github.com/modelcontextprotocol/typescript-sdk
    - MCP Specification: https://modelcontextprotocol.io/specification/latest
    - MCP Build Server Guide: https://modelcontextprotocol.io/docs/develop/build-server
    - MCP Inspector: https://github.com/modelcontextprotocol/inspector
    

README

Harness MCP Server 2.0

An MCP (Model Context Protocol) server that gives AI agents full access to the Harness.io platform through 11 consolidated tools and 168 resource types.

Why Use This MCP Server

Most MCP servers map one tool per API endpoint. For a platform as broad as Harness, that means 240+ tools — and LLMs get worse at tool selection as the count grows. Context windows fill up with schemas, and every new endpoint means new code.

This server is built differently:

  • 11 tools, 168 resource types. A registry-based dispatch system routes harness_list, harness_get, harness_create, etc. to any Harness resource — pipelines, services, environments, orgs, projects, feature flags, cost data, and more. The LLM picks from 11 tools instead of hundreds.
  • Full platform coverage. 31 toolsets spanning CI/CD, GitOps, Feature Flags, Cloud Cost Management, Security Testing, Chaos Engineering, Database DevOps, Internal Developer Portal, Software Supply Chain, Governance, Service Overrides, Visualizations, and more. Not just pipelines — the entire Harness platform.
  • Multi-project workflows out of the box. Agents discover organizations and projects dynamically — no hardcoded env vars needed. Ask "show failed executions across all projects" and the agent can navigate the full account hierarchy.
  • 30 prompt templates. Pre-built prompts for common workflows: build & deploy apps end-to-end, debug failed pipelines, review DORA metrics, triage vulnerabilities, optimize cloud costs, audit access control, plan feature flag rollouts, review pull requests, approve pending pipelines, and more.
  • Works everywhere. Stdio transport for local clients (Claude Desktop, Cursor, Windsurf), HTTP transport for remote/shared deployments, Docker and Kubernetes ready.
  • Zero-config start. Just provide a Harness API key. Account ID is auto-extracted from PAT tokens, org/project defaults are optional, and toolset filtering lets you expose only what you need.
  • Extensible by design. Adding a new Harness resource means adding a declarative data file — no new tool registration, no schema changes, no prompt updates.

Prerequisites

Before installing or running the server, you need a Harness API key:

  1. Log in to your Harness account
  2. Go to My Profile → API Keys → + New API Key
  3. Create a new Token under the API key — this generates a PAT in the format pat.<accountId>.<tokenId>.<secret>
  4. Save the token somewhere secure — you'll need it in the next step

For detailed instructions, see the Harness API Quickstart.

Quick Start

Option 0: Hosted Harness MCP

If your Harness account has the hosted MCP service enabled, clients that support remote MCP servers can connect directly to the managed endpoint instead of running the server locally.

Important: The hosted MCP service uses Harness Platform OAuth, not HARNESS_API_KEY. It must also be enabled/configured per account by Harness Support before the endpoint can be used.

See Hosted Harness MCP for configuration examples.

Option 1: npx (Recommended)

No install required — just run it:

HARNESS_API_KEY=pat.xxx.xxx.xxx npx harness-mcp-v2@latest

Or configure the API key in your AI client (see Client Configuration below).

# Stdio transport (default — for Claude Desktop, Cursor, Windsurf, etc.)
HARNESS_API_KEY=pat.xxx npx harness-mcp-v2

# HTTP transport (for remote/shared deployments)
HARNESS_API_KEY=pat.xxx npx harness-mcp-v2 http --port 8080

Note: The account ID is auto-extracted from PAT tokens (pat.<accountId>.<tokenId>.<secret>), so HARNESS_ACCOUNT_ID is only needed for non-PAT API keys.
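Given the pat.&lt;accountId&gt;.&lt;tokenId&gt;.&lt;secret&gt; layout, the auto-extraction amounts to a string split, sketched here (not the server's actual code):

```typescript
// Extract the account ID from a Harness PAT of the form
// pat.<accountId>.<tokenId>.<secret>. Returns null for any other key
// shape, in which case HARNESS_ACCOUNT_ID must be set explicitly.
function extractAccountId(apiKey: string): string | null {
  const parts = apiKey.split(".");
  if (parts.length !== 4 || parts[0] !== "pat") return null;
  return parts[1] || null;
}
```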

Option 2: Global Install

npm install -g harness-mcp-v2

# Then run directly
harness-mcp-v2

Option 3: Build from Source

For development or customization:

git clone https://github.com/harness/mcp-server.git
cd mcp-server
pnpm install
pnpm build

# Run
pnpm start              # Stdio transport
pnpm start:http         # HTTP transport
pnpm inspect            # Test with MCP Inspector

Anthropic MCP Directory bundle

The MCPB bundle manifest lives in [mcp-directory/](mcp-directory/), and the bundle icon is tracked at [icon.png](icon.png) in the repository root. Copy mcp-directory/manifest.json to the bundle root after pnpm build so the generated archive contains root-level manifest.json, icon.png, build/, package.json, and production node_modules/.

To keep the archive small, build MCPB packages from a staging directory:

pnpm prepare:mcpb

The staged package is written to dist/mcpb/ with production dependencies installed using npm's flat layout.

CLI Usage

harness-mcp-v2 [stdio|http] [--port <number>]

Options:
  --port <number>  Port for HTTP transport (default: 3000, or PORT env var)
  --help           Show help message and exit
  --version        Print version and exit

Transport defaults to stdio if not specified. Use http for remote/shared deployments.

HTTP Transport

When running in HTTP mode, the server exposes:

| Endpoint | Method | Description |
| --- | --- | --- |
| /mcp | POST | MCP JSON-RPC endpoint (initialize + session requests) |
| /mcp | GET | SSE stream for server-initiated messages (progress, elicitation) |
| /mcp | DELETE | Terminate an active MCP session |
| /mcp | OPTIONS | CORS preflight |
| /health | GET | Health check — returns { "status": "ok", "sessions": <count> } |

The HTTP transport runs in session-based mode. A new MCP session is created on initialize, the server returns an mcp-session-id header, and subsequent requests for that session must include the same header.

Operational constraints in HTTP mode:

  • POST /mcp without mcp-session-id must be an initialize request.
  • POST /mcp, GET /mcp, and DELETE /mcp for existing sessions require the mcp-session-id header.
  • GET /mcp is used for SSE notifications (progress updates and elicitation prompts).
  • Idle sessions are reaped after 30 minutes.
  • GET /health is the only non-MCP endpoint.
  • Request body size is capped by HARNESS_MAX_BODY_SIZE_MB (default 10 MB).
  • Set x-harness-pipeline-version: 0 or 1 on the initialize request to select V0 or V1 pipeline resources for that HTTP session.

# Health check
curl http://localhost:3000/health

# MCP initialize request (capture mcp-session-id response header)
curl -i -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}'

# Subsequent MCP request (use returned session ID)
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "mcp-session-id: <session-id>" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}'

# Terminate session
curl -X DELETE http://localhost:3000/mcp \
  -H "mcp-session-id: <session-id>"

Client Configuration

Note: HARNESS_ORG and HARNESS_PROJECT are optional. They set the org ID and project ID used when not specified per tool call. Agents can discover orgs and projects dynamically using harness_list(resource_type="organization") and harness_list(resource_type="project"). The deprecated names HARNESS_DEFAULT_ORG_ID and HARNESS_DEFAULT_PROJECT_ID are still accepted for backward compatibility.

Hosted Harness MCP

Harness also supports a hosted MCP endpoint for accounts that have the managed service enabled. This is useful when you want a shared remote MCP endpoint instead of running npx harness-mcp-v2 or self-hosting the HTTP transport yourself.

Important: Hosted MCP authentication uses Harness Platform OAuth. It does not use HARNESS_API_KEY in the client config. Hosted MCP availability is configured per Harness account, so you will need to work with Harness Support to enable/configure the setting before using it.

The hosted endpoint https://mcp.harness.io/mcp is a managed service. Client-side MCP config in Claude, Cursor, or Cowork cannot override which Harness environment it routes to. For Harness0 or another private Harness SaaS environment, ask Harness Support to enable/configure hosted MCP for that environment, or run the local/self-hosted server and set HARNESS_BASE_URL to the target Harness host.

Hosted MCP example:

{
  "mcpServers": {
    "harness-prod1-mcp": {
      "url": "https://mcp.harness.io/mcp",
      "auth": {
        "CLIENT_ID": "mcp-client"
      }
    }
  }
}

Example with both hosted and local entries:

{
  "mcpServers": {
    "harness-hosted": {
      "url": "https://mcp.harness.io/mcp",
      "auth": {
        "CLIENT_ID": "mcp-client"
      }
    },
    "harness-local": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

Troubleshooting npx ENOENT or node: No such file or directory

GUI apps (Cursor, Claude Desktop, Windsurf, VS Code) don't inherit your shell's PATH, so they often can't find npx or node. Fix this by using absolute paths and explicitly setting PATH in the env block:

{
  "mcpServers": {
    "harness": {
      "command": "/absolute/path/to/npx",
      "args": ["-y", "harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx",
        "PATH": "/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin"
      }
    }
  }
}

Find your paths with which npx and which node in a terminal, then make sure the directory containing node is included in the PATH value above. Common locations:

  • Homebrew (macOS): /opt/homebrew/bin/npx
  • nvm: ~/.nvm/versions/node/v20.x.x/bin/npx (run nvm which current to find the exact path)
  • System Node: /usr/local/bin/npx

Claude Desktop (claude_desktop_config.json)

npx (zero install)

{
  "mcpServers": {
    "harness": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

node (local install)

npm install -g harness-mcp-v2

{
  "mcpServers": {
    "harness": {
      "command": "harness-mcp-v2",
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

Claude Code (via claude mcp add)

npx (zero install)

claude mcp add harness -- npx harness-mcp-v2

node (local install)

npm install -g harness-mcp-v2
claude mcp add harness -- harness-mcp-v2

Then set HARNESS_API_KEY in your environment or .env file.

Cursor (.cursor/mcp.json)

npx (zero install)

{
  "mcpServers": {
    "harness": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

node (local install)

npm install -g harness-mcp-v2

{
  "mcpServers": {
    "harness": {
      "command": "harness-mcp-v2",
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

Windsurf (~/.windsurf/mcp.json)

npx (zero install)

{
  "mcpServers": {
    "harness": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

node (local install)

npm install -g harness-mcp-v2

{
  "mcpServers": {
    "harness": {
      "command": "harness-mcp-v2",
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

Using a local build from source?

Replace the command with the path to your built index.js:

{
  "command": "node",
  "args": ["/absolute/path/to/harness-mcp-v2/build/index.js", "stdio"]
}

MCP Gateway

The Harness MCP server is fully compatible with MCP Gateways — reverse proxies that provide centralized authentication, governance, tool routing, and observability across multiple MCP servers. Since the server implements the standard MCP protocol with both stdio and HTTP transports, it works behind any MCP-compliant gateway with no code changes.

Why use a gateway?

  • Centralized credential management — no API keys in agent configs
  • Governance & audit logging for all tool calls across teams
  • Single endpoint for agents instead of N connections to N MCP servers
  • Access control — restrict which teams can use which tools

Docker MCP Gateway

Register the server in your Docker MCP Gateway configuration:

{
  "mcpServers": {
    "harness": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

Portkey

Add the Harness MCP server to your Portkey MCP Gateway for enterprise governance, cost tracking, and multi-LLM routing:

{
  "mcpServers": {
    "harness": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx"
      }
    }
  }
}

LiteLLM

Add to your LiteLLM proxy config:

mcp_servers:
  - name: harness
    command: npx
    args:
      - harness-mcp-v2
    env:
      HARNESS_API_KEY: "pat.xxx.xxx.xxx"

Envoy AI Gateway

The server works with Envoy AI Gateway's MCP support via HTTP transport:

# Start the server in HTTP mode
HARNESS_API_KEY=pat.xxx.xxx.xxx npx harness-mcp-v2 http --port 8080

Then configure Envoy to route to http://localhost:8080/mcp as an upstream MCP backend.

Kong

Use Kong's AI MCP Proxy plugin to expose the Harness MCP server through your existing Kong gateway infrastructure.

Other Gateways

Any gateway that supports the MCP specification (Microsoft MCP Gateway, IBM ContextForge, Cloudflare Workers, etc.) can proxy this server. For stdio-based gateways, use the default transport. For HTTP-based gateways, start the server with http transport and point the gateway at the /mcp endpoint.

Docker

Build and run the server as a Docker container:

# Build the image
pnpm docker:build

# Run with your .env file
pnpm docker:run

# Or run directly with env vars
docker run --rm -p 3000:3000 \
  -e HARNESS_API_KEY=pat.xxx.xxx.xxx \
  -e HARNESS_ACCOUNT_ID=your-account-id \
  harness-mcp-server

The container runs in HTTP mode on port 3000 by default with a built-in health check.

Kubernetes

Deploy to a Kubernetes cluster using the provided manifests:

# 1. Edit the Secret with your real credentials
#    k8s/secret.yaml — replace HARNESS_API_KEY and HARNESS_ACCOUNT_ID

# 2. Apply all manifests
kubectl apply -f k8s/

# 3. Verify the deployment
kubectl -n harness-mcp get pods

# 4. Port-forward for local testing
kubectl -n harness-mcp port-forward svc/harness-mcp-server 3000:80
curl http://localhost:3000/health

The deployment runs 2 replicas with readiness/liveness probes, resource limits, and non-root security context. The Service exposes port 80 internally (targeting container port 3000).

Configuration

The server automatically loads environment variables from a .env file in the project root if one exists. Copy .env.example to .env and fill in your values. Environment variables can also be set via your shell or MCP client config.

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| HARNESS_API_KEY | Yes | -- | Harness personal access token or service account token |
| HARNESS_ACCOUNT_ID | No | (from PAT) | Harness account identifier. Auto-extracted from PAT tokens; only needed for non-PAT API keys |
| HARNESS_BASE_URL | No | https://app.harness.io | Harness API/UI base URL for local stdio or self-hosted HTTP deployments. Set this to environments such as https://harness0.harness.io when running the server yourself. It does not affect the managed https://mcp.harness.io/mcp hosted endpoint |
| HARNESS_ORG | No | default | Organization ID. Used when org_id is not specified per tool call. Agents can also discover orgs dynamically via harness_list(resource_type="organization") |
| HARNESS_PROJECT | No | -- | Project ID. Used when project_id is not specified per tool call. Agents can also discover projects dynamically via harness_list(resource_type="project") |
| HARNESS_API_TIMEOUT_MS | No | 30000 | HTTP request timeout in milliseconds |
| HARNESS_MAX_RETRIES | No | 3 | Retry count for transient failures (429, 5xx) |
| HARNESS_MAX_BODY_SIZE_MB | No | 10 | Max HTTP request body size in MB for http transport |
| HARNESS_RATE_LIMIT_RPS | No | 10 | Client-side request throttle (requests per second) to Harness APIs |
| LOG_LEVEL | No | info | Log verbosity: debug, info, warn, error |
| HARNESS_TOOLSETS | No | (defaults) | Comma-separated toolset list. Empty loads default toolsets and excludes opt-in toolsets such as ai-evals. Supports +name to add opt-in toolsets and -name to remove defaults (see Toolset Filtering) |
| HARNESS_READ_ONLY | No | false | Block all mutating operations (create, update, delete, execute). Only list and get are allowed. Useful for shared/demo environments |
| HARNESS_AUTO_APPROVE_RISK | No | none | Risk-based auto-approve threshold for autonomous workflows. Operations at or below this risk proceed without confirmation. Values: none, low_write, medium_write, high_write, all. See Elicitation |
| HARNESS_SKIP_ELICITATION | No | false | Deprecated — use HARNESS_AUTO_APPROVE_RISK=all instead. Kept for backward compatibility |
| HARNESS_ALLOW_HTTP | No | false | Allow non-HTTPS HARNESS_BASE_URL. By default, the server enforces HTTPS for security. Set to true only for local development against a non-TLS Harness instance |
| HARNESS_PIPELINE_VERSION | No | 0 | (Alpha) Pipeline YAML version. 0 loads the pipeline resource type and excludes pipeline_v1; 1 loads pipeline_v1 and excludes pipeline. HTTP sessions can override this at initialize time with x-harness-pipeline-version: 0 or 1 |
| HARNESS_MCP_ALLOWED_HOSTS | No | -- | Comma-separated hostnames allowed by HTTP transport Host-header validation. mcp.harness.io is allowed by default for localhost binds; add proxy/custom domains here |
| HARNESS_MCP_LOG_FILE | No | ~/.claude/harness-mcp.log | File used for stdio disconnect/crash diagnostics when stderr may no longer be available |
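As a sketch of what HARNESS_MAX_RETRIES controls, retry logic for transient statuses might look like the following. It is illustrative and synchronous for clarity; the request function is injected, and a real client would also apply exponential backoff and the HARNESS_RATE_LIMIT_RPS throttle:

```typescript
// Retry transient failures (429 and 5xx) up to maxRetries extra attempts.
// The request function is injected so the logic is testable without a
// live Harness endpoint. Sketch only, not the server's actual client.
function withRetries(
  request: () => { status: number },
  maxRetries = 3,
): { status: number } {
  let last: { status: number } = { status: 0 };
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    last = request();
    const transient = last.status === 429 || last.status >= 500;
    if (!transient) return last; // success or non-retryable error
    // A real client would sleep with exponential backoff here.
  }
  return last; // retries exhausted; surface the last response
}
```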

HTTPS Enforcement

HARNESS_BASE_URL must use HTTPS by default. If you set a non-HTTPS URL (e.g. http://localhost:8080), the server will refuse to start with:

HARNESS_BASE_URL must use HTTPS (got "http://..."). If you need HTTP for local development, set HARNESS_ALLOW_HTTP=true.
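The startup check is equivalent to something like this sketch (function name and return shape are illustrative):

```typescript
// Enforce HTTPS for HARNESS_BASE_URL unless HARNESS_ALLOW_HTTP=true.
// A sketch of the documented startup check, not the server's source.
function validateBaseUrl(baseUrl: string, allowHttp = false): string {
  const url = new URL(baseUrl); // throws on malformed URLs
  if (url.protocol !== "https:" && !allowHttp) {
    throw new Error(
      `HARNESS_BASE_URL must use HTTPS (got "${baseUrl}"). ` +
        "If you need HTTP for local development, set HARNESS_ALLOW_HTTP=true.",
    );
  }
  return url.origin;
}
```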

Audit Logging

All write operations (harness_create, harness_update, harness_delete, harness_execute) emit structured audit log entries to stderr. Each entry includes the tool name, resource type, operation, identifiers, and timestamp. This provides an audit trail without requiring external logging infrastructure.
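An emitted entry might be built like this sketch (field names are illustrative, not the server's actual log schema):

```typescript
// Build a structured audit entry for a write operation and emit it to
// stderr. Field names here are illustrative assumptions.
interface AuditEntry {
  tool: string;
  resourceType: string;
  operation: string;
  resourceId?: string;
  timestamp: string;
}

function auditEntry(
  tool: string,
  resourceType: string,
  operation: string,
  resourceId?: string,
): AuditEntry {
  return { tool, resourceType, operation, resourceId, timestamp: new Date().toISOString() };
}

// console.error writes to stderr in Node, keeping the audit trail out of
// stdout, which the stdio transport reserves for JSON-RPC messages.
function emitAudit(entry: AuditEntry): void {
  console.error(JSON.stringify(entry));
}
```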

Tools Reference

The server exposes 11 MCP tools. Most API tools accept org_id and project_id as optional overrides — if omitted, they fall back to HARNESS_ORG and HARNESS_PROJECT. harness_describe is local metadata only and does not use org/project scope.

URL support: Most API-facing tools accept a url parameter — paste a Harness UI URL and the server auto-extracts org, project, resource type, resource ID, pipeline ID, and execution ID. harness_describe does not accept url.

| Tool | Description |
| --- | --- |
| harness_describe | Discover available resource types, operations, and fields. No API call — returns local registry metadata. |
| harness_schema | Fetch exact JSON Schema definitions for creating/updating resources. Supports deep drilling via path parameter. |
| harness_list | List resources of a given type with filtering, search, and pagination. |
| harness_get | Get a single resource by its identifier. |
| harness_create | Create a new resource. Supports inline and remote (Git-backed) pipelines. Prompts for user confirmation via elicitation. |
| harness_update | Update an existing resource. Supports inline and remote (Git-backed) pipelines. Prompts for user confirmation via elicitation. |
| harness_delete | Delete a resource. Prompts for user confirmation via elicitation. Destructive. |
| harness_execute | Execute an action on a resource (run/retry pipeline, import pipeline from Git, toggle flag, sync app). Prompts for user confirmation via elicitation. For pipeline runs, use the runtime-input workflow below (supports branch/tag/pr_number/commit_sha shorthand expansion). |
| harness_search | Search across multiple resource types in parallel with a single query. |
| harness_diagnose | Diagnose pipeline, connector, delegate, and gitops_application resources (aliases: execution -> pipeline, gitops_app -> gitops_application). For pipelines, returns stage/step timing and failure details; for connectors/delegates/GitOps apps, returns targeted health and troubleshooting signals. |
| harness_status | Get a real-time project health dashboard — recent executions, failure rates, and deep links. |
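The URL auto-extraction described above can be sketched as a walk over path marker segments (illustrative; the real parser handles more URL shapes, including account-scoped and module-prefixed paths):

```typescript
// Pull org, project, pipeline, and execution identifiers out of a
// Harness UI URL. The segment layout mirrors typical app.harness.io
// paths; marker names here are assumptions about the common cases.
function parseHarnessUrl(url: string): Record<string, string> {
  const segments = new URL(url).pathname.split("/").filter(Boolean);
  const out: Record<string, string> = {};
  // Harness UI paths interleave marker segments with identifiers, e.g.
  // .../orgs/<org>/projects/<proj>/pipelines/<id>/executions/<execId>/...
  const markers: Record<string, string> = {
    orgs: "org_id",
    projects: "project_id",
    pipelines: "pipeline_id",
    executions: "execution_id",
  };
  for (let i = 0; i < segments.length - 1; i++) {
    const key = markers[segments[i]];
    if (key) out[key] = segments[i + 1];
  }
  return out;
}
```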

Tool Examples

Discover what resources are available:

{ "resource_type": "pipeline" }

List organizations in the account:

{ "resource_type": "organization" }

List projects in an organization:

{ "resource_type": "project", "org_id": "default" }

List pipelines in a project:

{ "resource_type": "pipeline", "search_term": "deploy", "size": 10 }

Get a specific service:

{ "resource_type": "service", "resource_id": "my-service-id" }

Run a pipeline:

{
  "resource_type": "pipeline",
  "action": "run",
  "resource_id": "my-pipeline",
  "inputs": { "tag": "v1.2.3" }
}

Toggle a feature flag:

{
  "resource_type": "feature_flag",
  "action": "toggle",
  "resource_id": "new_checkout_flow",
  "enable": true,
  "environment": "production"
}

Search across all resource types:

{ "query": "payment-service" }

Diagnose an execution by ID (summary mode — default):

{ "execution_id": "abc123XYZ" }

Diagnose from a Harness URL:

{ "url": "https://app.harness.io/ng/account/.../pipelines/myPipeline/executions/abc123XYZ/pipeline" }

Diagnose connector connectivity:

{ "resource_type": "connector", "resource_id": "my_github_connector" }

Diagnose delegate health:

{ "resource_type": "delegate", "resource_id": "delegate-us-east-1" }

Diagnose a GitOps application (with options):

{
  "resource_type": "gitops_application",
  "resource_id": "checkout-app",
  "options": { "agent_id": "gitops-agent-1" }
}

Get the latest execution report for a pipeline:

{ "pipeline_id": "my-pipeline" }

Full diagnostic mode with YAML and failed step logs:

{ "execution_id": "abc123XYZ", "summary": false }

Summary mode with logs enabled (best of both):

{ "execution_id": "abc123XYZ", "include_logs": true }

Get project health status:

{ "org_id": "default", "project_id": "my-project", "limit": 5 }

List database schemas filtered by migration type:

{ "resource_type": "database_schema", "migration_type": "Liquibase" }

List database instances for a schema:

{ "resource_type": "database_instance", "dbschema_id": "my_schema" }

Get the resolved LLM authoring pipeline for a schema and instance:

{ "resource_type": "database_llm_authoring_pipeline", "resource_id": "my_schema", "dbinstance_id": "prod_db" }

List snapshot object names (e.g. tables) for a schema instance:

{
  "resource_type": "database_snapshot_object",
  "dbschema_id": "my_schema",
  "dbinstance_id": "prod_db",
  "object_type": "Table"
}

Get full snapshot metadata for specific named objects:

{
  "resource_type": "database_snapshot_object",
  "resource_id": "prod_db",
  "params": {
    "dbschema_id": "my_schema",
    "object_type": "Table",
    "object_names": ["users", "orders"]
  }
}

Pipeline Run Workflow (Recommended)

Use this sequence to reduce execution-time input errors:

  1. Discover required runtime inputs
     • harness_get(resource_type="runtime_input_template", resource_id="<pipeline_id>")
     • The returned template shows <+input> placeholders that need values.
  2. Choose input strategy
     • Simple variables: pass flat key-value inputs (for example {"branch":"main","env":"prod"}).
     • Complex/structural inputs: use input_set_ids (CI codebase/build blocks and nested template inputs are best handled this way).
     • CI codebase shorthand keys (pipeline run only):

       | Shorthand key | Expanded structure |
       | --- | --- |
       | branch | build.type=branch, build.spec.branch=<value> |
       | tag | build.type=tag, build.spec.tag=<value> |
       | pr_number | build.type=PR, build.spec.number=<value> |
       | commit_sha | build.type=commitSha, build.spec.commitSha=<value> |

     • Constraint: shorthand expansion is skipped when inputs.build is already present (explicit build wins).
  3. Execute the run
     • harness_execute(resource_type="pipeline", action="run", resource_id="<pipeline_id>", ...)
  4. Optional: combine both
     • Use input_set_ids for the base shape and inputs for simple overrides.
If required fields are unresolved, the tool returns a pre-flight error with expected keys and suggested input sets. You can inspect available shorthand mappings with harness_describe(resource_type="pipeline") (executeActions.run.inputShorthands).
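The shorthand expansion rules can be sketched as a pure function. The build.type and build.spec field names follow the shorthand table above; everything else is illustrative:

```typescript
// Expand CI codebase shorthand keys (branch, tag, pr_number, commit_sha)
// into the structured build block, skipping expansion when an explicit
// inputs.build is already present. A sketch of the documented behavior.
type Inputs = Record<string, unknown>;

const shorthands: Record<string, { type: string; specKey: string }> = {
  branch: { type: "branch", specKey: "branch" },
  tag: { type: "tag", specKey: "tag" },
  pr_number: { type: "PR", specKey: "number" },
  commit_sha: { type: "commitSha", specKey: "commitSha" },
};

function expandShorthand(inputs: Inputs): Inputs {
  if ("build" in inputs) return inputs; // explicit build wins
  const out: Inputs = { ...inputs };
  for (const [key, { type, specKey }] of Object.entries(shorthands)) {
    if (key in out) {
      out.build = { type, spec: { [specKey]: out[key] } };
      delete out[key];
      break; // at most one codebase shorthand applies per run
    }
  }
  return out;
}
```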

Ask the AI DevOps Agent to create a pipeline:

{
  "prompt": "Create a pipeline that builds a Go app with Docker and deploys to Kubernetes",
  "action": "CREATE_PIPELINE"
}

Update a service via natural language:

{
  "prompt": "Add a sidecar container for logging",
  "action": "UPDATE_SERVICE",
  "conversation_id": "prev-conversation-id",
  "context": [{ "type": "yaml", "payload": "<existing service YAML>" }]
}

Pipeline Storage Modes

Harness pipelines can be stored in three ways:

| Mode | Description | When to use |
| --- | --- | --- |
| Inline | Pipeline YAML stored in Harness | Default. Simplest setup, no Git required. |
| Remote (External Git) | Pipeline YAML stored in GitHub, GitLab, Bitbucket, etc. | Teams using Git-backed pipeline-as-code with an external provider. |
| Remote (Harness Code) | Pipeline YAML stored in a Harness Code repository | Teams using Harness's built-in Git hosting. |

Create an inline pipeline (default):

// harness_create
{
  "resource_type": "pipeline",
  "body": {
    "yamlPipeline": "pipeline:\n  name: My Pipeline\n  identifier: my_pipeline\n  stages:\n    - stage:\n        name: Build\n        type: CI\n        spec:\n          execution:\n            steps:\n              - step:\n                  type: Run\n                  name: Echo\n                  spec:\n                    command: echo hello"
  }
}

Create a remote pipeline (External Git — e.g. GitHub):

// harness_create
{
  "resource_type": "pipeline",
  "body": {
    "yamlPipeline": "pipeline:\n  name: Deploy Service\n  identifier: deploy_service\n  stages: []"
  },
  "params": {
    "store_type": "REMOTE",
    "connector_ref": "my_github_connector",
    "repo_name": "my-repo",
    "branch": "main",
    "file_path": ".harness/deploy-service.yaml",
    "commit_msg": "Add deploy pipeline via MCP"
  }
}

Create a remote pipeline (Harness Code — no connector needed):

// harness_create
{
  "resource_type": "pipeline",
  "body": {
    "yamlPipeline": "pipeline:\n  name: Build App\n  identifier: build_app\n  stages: []"
  },
  "params": {
    "store_type": "REMOTE",
    "is_harness_code_repo": true,
    "repo_name": "product-management",
    "branch": "main",
    "file_path": ".harness/build-app.yaml",
    "commit_msg": "Add build pipeline via MCP"
  }
}

Update a remote pipeline:

// harness_update
{
  "resource_type": "pipeline",
  "resource_id": "deploy_service",
  "body": {
    "yamlPipeline": "pipeline:\n  name: Deploy Service\n  identifier: deploy_service\n  stages:\n    - stage:\n        name: Deploy\n        type: Deployment"
  },
  "params": {
    "store_type": "REMOTE",
    "connector_ref": "my_github_connector",
    "repo_name": "my-repo",
    "branch": "main",
    "file_path": ".harness/deploy-service.yaml",
    "commit_msg": "Update deploy pipeline via MCP",
    "last_object_id": "abc123",
    "last_commit_id": "def456"
  }
}

Import a pipeline from an external Git repo:

// harness_execute
{
  "resource_type": "pipeline",
  "action": "import",
  "params": {
    "connector_ref": "my_github_connector",
    "repo_name": "my-repo",
    "branch": "main",
    "file_path": ".harness/existing-pipeline.yaml"
  },
  "body": {
    "pipeline_name": "Existing Pipeline",
    "pipeline_description": "Imported from GitHub"
  }
}

Import a pipeline from a Harness Code repo:

// harness_execute
{
  "resource_type": "pipeline",
  "action": "import",
  "params": {
    "is_harness_code_repo": true,
    "repo_name": "product-management",
    "branch": "main",
    "file_path": ".harness/existing-pipeline.yaml"
  },
  "body": {
    "pipeline_name": "Existing Pipeline"
  }
}

Create a connector:

{
  "resource_type": "connector",
  "body": { "connector": { "name": "My Docker Hub", "identifier": "my_docker", "type": "DockerRegistry" } }
}

Delete a trigger:

{
  "resource_type": "trigger",
  "resource_id": "nightly-trigger",
  "pipeline_id": "my-pipeline"
}

List input sets for a pipeline:

{
  "resource_type": "input_set",
  "pipeline_id": "my-pipeline"
}

Get a specific input set:

{
  "resource_type": "input_set",
  "resource_id": "prod-inputs",
  "pipeline_id": "my-pipeline"
}

Create an input set:

{
  "resource_type": "input_set",
  "pipeline_id": "my-pipeline",
  "body": "inputSet:\n  name: Production Inputs\n  identifier: prod_inputs\n  pipeline:\n    identifier: my-pipeline\n    variables:\n      - name: env\n        type: String\n        value: production"
}

Update an input set:

{
  "resource_type": "input_set",
  "resource_id": "prod_inputs",
  "pipeline_id": "my-pipeline",
  "body": "inputSet:\n  name: Production Inputs\n  identifier: prod_inputs\n  pipeline:\n    identifier: my-pipeline\n    variables:\n      - name: env\n        type: String\n        value: production\n      - name: replicas\n        type: String\n        value: \"3\""
}

Delete an input set:

{
  "resource_type": "input_set",
  "resource_id": "prod_inputs",
  "pipeline_id": "my-pipeline"
}

Resource Types

168 resource types organized across 31 toolsets. Each resource type supports a subset of CRUD operations and optional execute actions.

Platform

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| --- | --- | --- | --- | --- | --- | --- |
| organization | x | x | x | x | x | |
| project | x | x | x | x | x | |

Pipelines

Resource TypeListGetCreateUpdateDeleteExecute Actions
pipelinexxxxxrun, retry
pipeline_v1 (Alpha)xxxxxrun
executionxxinterrupt
triggerxxxxx
pipeline_summaryx
input_setxxxxx
runtime_input_templatex
approval_instancexapprove, reject

Only one pipeline YAML resource type is loaded at startup. By default HARNESS_PIPELINE_VERSION=0 exposes pipeline and hides pipeline_v1; set HARNESS_PIPELINE_VERSION=1 to expose pipeline_v1 and hide pipeline. In HTTP mode, include x-harness-pipeline-version: 0 or 1 on the initialize request to choose the version for that session.

AI Agents

Resource TypeListGetCreateUpdateDeleteExecute Actions
agentxxxxx
agent_runx

Services

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| --- | --- | --- | --- | --- | --- | --- |
| service | x | x | x | x | x | |

Environments

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| --- | --- | --- | --- | --- | --- | --- |
| environment | x | x | x | x | x | move_configs |

Connectors

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| --- | --- | --- | --- | --- | --- | --- |
| connector | x | x | x | x | x | test_connection |
| connector_catalogue | x | | | | | |

Infrastructure

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| --- | --- | --- | --- | --- | --- | --- |
| infrastructure | x | x | x | x | x | move_configs |

Secrets

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| --- | --- | --- | --- | --- | --- | --- |
| secret | x | x | | | | |

Execution Logs

Resource TypeListGetCreateUpdateDeleteExecute Actions
execution_logx

Audit Trail

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| --- | --- | --- | --- | --- | --- | --- |
| audit_event | x | x | | | | |

Delegates

Resource TypeListGetCreateUpdateDeleteExecute Actions
delegatex
delegate_tokenxxxxrevoke, get_delegates

Code Repositories

Resource TypeListGetCreateUpdateDeleteExecute Actions
repositoryxxxx
branchxxxx
commitxxdiff, diff_stats
file_contentxblame
tagxxx
repo_rulexx
space_rulexx

Artifact Registries

Resource TypeListGetCreateUpdateDeleteExecute Actions
registryxx
artifactx
artifact_versionx
artifact_filex

Templates

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
| --- | --- | --- | --- | --- | --- | --- |
| template | x | x | x | x | x | |

Dashboards

Resource TypeListGetCreateUpdateDeleteExecute Actions
dashboardxx
dashboard_datax

Database DevOps

Resource TypeListGetCreateUpdateDeleteExecute Actions
database_schemaxx
database_instancexx
database_snapshot_objectxx
database_llm_authoring_pipelinex

Internal Developer Portal (IDP)

Resource TypeListGetCreateUpdateDeleteExecute Actions
idp_entityxx
scorecardxx
scorecard_checkxx
scorecard_statsx
scorecard_check_statsx
idp_scorexx
idp_workflowxexecute
idp_tech_docx

Pull Requests

Resource TypeListGetCreateUpdateDeleteExecute Actions
pull_requestxxxxmerge
pr_reviewerxxsubmit_review
pr_commentxx
pr_checkx
pr_activityx

Feature Flags

Resource TypeListGetCreateUpdateDeleteExecute Actions
fme_workspacex
fme_environmentx
fme_feature_flagxxxxxkill, restore, archive, unarchive
fme_feature_flag_definitionx
fme_rollout_statusx
fme_rule_based_segmentxxxx
fme_rule_based_segment_definitionxxenable, disable, change_request
feature_flagxxxxtoggle

FME (Split.io) resources: fme_* resources use the Split.io API (api.split.io) and are scoped by workspace ID rather than org/project. Auth uses HARNESS_API_KEY as a Bearer token. fme_feature_flag supports full lifecycle management: create (requires traffic_type_id), list, get, update metadata, delete, and kill/restore/archive/unarchive execute actions. fme_rule_based_segment provides CRUD for targeting segments, while fme_rule_based_segment_definition manages environment-specific segment rules with enable/disable and change request approval flows. Use feature_flag for the Harness CF admin API, which supports environment-specific definitions, create, delete, and toggle.

GitOps

Resource TypeListGetCreateUpdateDeleteExecute Actions
gitops_agentxx
gitops_applicationxxsync
gitops_clusterxx
gitops_repositoryxx
gitops_applicationsetxx
gitops_repo_credentialxx
gitops_app_eventx
gitops_pod_logx
gitops_managed_resourcex
gitops_resource_actionx
gitops_dashboardx
gitops_app_resource_treex

Chaos Engineering

Resource TypeListGetCreateUpdateDeleteExecute Actions
chaos_experimentxxrun
chaos_probexxenable, verify
chaos_experiment_templatexcreate_from_template
chaos_infrastructurex
chaos_experiment_variablex
chaos_experiment_runxx
chaos_loadtestxxxxrun, stop
chaos_k8s_infrastructurexxcheck_health
chaos_hubxx
chaos_faultxx
chaos_network_mapxx
chaos_guard_conditionxx
chaos_guard_rulexx
chaos_recommendationxx
chaos_riskxx

Cloud Cost Management (CCM)

Resource TypeListGetCreateUpdateDeleteExecute Actions
cost_perspectivexxxxx
cost_breakdownx
cost_timeseriesx
cost_summaryxx
cost_recommendationxxupdate_state, override_savings, create_jira_ticket, create_snow_ticket
cost_anomalyx
cost_anomaly_summaryx
cost_categoryxx
cost_account_overviewx
cost_filter_valuex
cost_recommendation_statsx
cost_recommendation_detailx
cost_commitmentx

### Software Engineering Insights (SEI)

SEI resources are consolidated for token efficiency. Use the `metric` or `aspect` params for DORA, team/org-tree details, and AI insights.

| Resource Type | List | Get | Create | Update | Delete | Execute Actions / Notes |
|---|---|---|---|---|---|---|
| sei_metric | ✓ | | | | | |
| sei_productivity_metric | ✓ | | | | | |
| sei_dora_metric | ✓ | | | | | Pass `metric`: deployment_frequency, change_failure_rate, mttr, lead_time, or `*_drilldown` |
| sei_team | ✓ | ✓ | | | | |
| sei_team_detail | ✓ | | | | | Pass `aspect`: integrations, developers, integration_filters |
| sei_org_tree | ✓ | ✓ | | | | |
| sei_org_tree_detail | ✓ | ✓ | | | | Pass `aspect`: efficiency_profile, productivity_profile, business_alignment_profile, integrations, teams |
| sei_business_alignment | ✓ | ✓ | | | | Pass `aspect`: feature_metrics, feature_summary, drilldown for get |
| sei_ai_usage | ✓ | ✓ | | | | Pass `aspect`: metrics, breakdown, summary, top_languages |
| sei_ai_adoption | ✓ | ✓ | | | | Pass `aspect`: metrics, breakdown, summary |
| sei_ai_impact | ✓ | | | | | Pass `aspect`: pr_velocity, rework |
| sei_ai_raw_metric | ✓ | | | | | |
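To make the `metric`/`aspect` consolidation concrete, the sketch below shows hypothetical tool-call argument shapes. Only the `metric` and `aspect` parameter names and their values come from the table above; everything else is illustrative.

```typescript
// Hypothetical harness_list arguments: one resource type, routed by `metric`.
const doraArgs = {
  resource_type: "sei_dora_metric",
  metric: "deployment_frequency", // or change_failure_rate, mttr, lead_time, *_drilldown
};

// Hypothetical harness_get arguments: one resource type, routed by `aspect`.
const orgTreeDetailArgs = {
  resource_type: "sei_org_tree_detail",
  aspect: "efficiency_profile",
};
```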

### Software Supply Chain Assurance (SCS)

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
|---|---|---|---|---|---|---|
| scs_artifact_source | ✓ | | | | | |
| artifact_security | ✓ | ✓ | | | | |
| scs_artifact_component | ✓ | | | | | |
| scs_artifact_remediation | ✓ | | | | | |
| scs_chain_of_custody | ✓ | | | | | |
| scs_compliance_result | ✓ | | | | | |
| code_repo_security | ✓ | ✓ | | | | |
| scs_sbom | ✓ | | | | | |

### Security Testing Orchestration (STO)

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
|---|---|---|---|---|---|---|
| security_issue | ✓ | | | | | |
| security_issue_filter | ✓ | | | | | |
| security_exemption | ✓ | | | | | approve, reject, promote |

### Access Control

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
|---|---|---|---|---|---|---|
| user | ✓ | ✓ | | | | |
| user_group | ✓ | ✓ | ✓ | ✓ | | |
| service_account | ✓ | ✓ | ✓ | ✓ | | |
| role | ✓ | ✓ | ✓ | ✓ | | |
| role_assignment | ✓ | ✓ | | | | |
| resource_group | ✓ | ✓ | ✓ | ✓ | | |
| permission | ✓ | | | | | |

### Governance

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
|---|---|---|---|---|---|---|
| policy | ✓ | ✓ | ✓ | ✓ | ✓ | |
| policy_set | ✓ | ✓ | ✓ | ✓ | ✓ | |
| policy_evaluation | ✓ | ✓ | | | | |

### Deployment Freeze

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
|---|---|---|---|---|---|---|
| freeze_window | ✓ | ✓ | ✓ | ✓ | ✓ | toggle_status |
| global_freeze | ✓ | | | | | manage |

### Service Overrides

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
|---|---|---|---|---|---|---|
| service_override | ✓ | ✓ | ✓ | ✓ | ✓ | |

### Settings

| Resource Type | List | Get | Create | Update | Delete | Execute Actions |
|---|---|---|---|---|---|---|
| setting | ✓ | | | | | |

### Visualizations

Inline PNG chart visualizations rendered from Harness data. These are metadata-only resource types with no API operations — they exist so the LLM can discover available chart types via `harness_describe`. Use `include_visual=true` on supported tools (`harness_diagnose`, `harness_list`, `harness_status`) to generate charts.

| Resource Type | Description | How to Generate |
|---|---|---|
| visual_timeline | Gantt chart of pipeline stage execution over time | `harness_diagnose` with `visual_type: "timeline"` |
| visual_stage_flow | DAG flowchart of pipeline stages and steps | `harness_diagnose` with `visual_type: "flow"` |
| visual_health_dashboard | Project health overview with status indicators | `harness_status` with `include_visual: true` |
| visual_pie_chart | Donut chart of execution status breakdown | `harness_list` with `visual_type: "pie"` |
| visual_bar_chart | Bar chart of execution counts by pipeline | `harness_list` with `visual_type: "bar"` |
| visual_timeseries | Daily execution trend over 30 days | `harness_list` with `visual_type: "timeseries"` |
| visual_architecture | Pipeline YAML architecture diagram (stages → steps) | `harness_diagnose` with `visual_type: "architecture"` |
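A quick sketch of what such tool calls might look like. The `visual_type` and `include_visual` parameter names come from the table above; the other argument names and all IDs are hypothetical placeholders.

```typescript
// Hypothetical harness_diagnose arguments: render a Gantt timeline of a run.
const timelineArgs = {
  resource_type: "execution",
  execution_id: "exec-abc123", // assumed identifier field
  visual_type: "timeline",
};

// Hypothetical harness_status arguments: project health dashboard chart.
const healthArgs = {
  project_id: "my_project",    // assumed scope field
  include_visual: true,
};
```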

## MCP Prompts

### DevOps

| Prompt | Description | Parameters |
|---|---|---|
| build-deploy-app | End-to-end CI/CD workflow: scan a git repo, generate CI pipeline (build & push Docker image), discover or generate K8s manifests, create CD pipeline, and deploy — with auto-retry on CI failures (up to 5 attempts) and CD failures (up to 3 attempts with user permission). On exhausted retries, provides Harness UI deep links to all created resources for manual investigation. | repoUrl (required), imageName (required), projectId (optional), namespace (optional) |
| debug-pipeline-failure | Analyze a failed execution: accepts an execution ID, pipeline ID, or Harness URL. Gets stage/step breakdown, failure details, delegate info, and failed step logs via `harness_diagnose`, then provides root cause analysis and suggested fixes. Automatically follows chained pipeline failures. | executionId (optional), projectId (optional) |
| create-pipeline | Generate a new pipeline YAML from natural language requirements, reviewing existing resources for context | description (required), projectId (optional) |
| create-agent | Interactively build a Harness AI agent — check existing agents, gather requirements, generate agent YAML spec using the agent-pipeline schema, confirm with user, then create or update via `harness_create`/`harness_update` | agent_name (required), task_description (required), org_id (optional), project_id (optional) |
| onboard-service | Walk through onboarding a new service with environments and a deployment pipeline | serviceName (required), projectId (optional) |
| dora-metrics-review | Review DORA metrics (deployment frequency, change failure rate, MTTR, lead time) with Elite/High/Medium/Low classification and improvement recommendations | teamRefId (optional), dateStart (optional), dateEnd (optional) |
| setup-gitops-application | Guide through onboarding a GitOps application — verify agent, cluster, repo, and create the application | agentId (required), projectId (optional) |
| chaos-resilience-test | Design a chaos experiment to test service resilience with fault injection, probes, and expected outcomes | serviceName (required), projectId (optional) |
| feature-flag-rollout | Plan and execute a progressive feature flag rollout across environments with safety gates | flagIdentifier (required), projectId (optional) |
| migrate-pipeline-to-template | Analyze an existing pipeline and extract reusable stage/step templates from it | pipelineId (required), projectId (optional) |
| delegate-health-check | Check delegate connectivity, health, token status, and troubleshoot infrastructure issues | projectId (optional) |
| developer-portal-scorecard | Review IDP scorecards for services and identify gaps to improve developer experience | projectId (optional) |
| pending-approvals | Find pipeline executions waiting for approval, show details, and offer to approve or reject | projectId (optional), orgId (optional), pipelineId (optional) |

### FinOps

| Prompt | Description | Parameters |
|---|---|---|
| optimize-costs | Analyze cloud cost data, surface recommendations and anomalies, prioritized by potential savings | projectId (optional) |
| cloud-cost-breakdown | Deep-dive into cloud costs by service, environment, or cluster with trend analysis and anomaly detection | perspectiveId (optional), projectId (optional) |
| commitment-utilization-review | Analyze reserved instance and savings plan utilization to find waste and optimize commitments | projectId (optional) |
| cost-anomaly-investigation | Investigate cost anomalies — determine root cause, impacted resources, and remediation | projectId (optional) |
| rightsizing-recommendations | Review and prioritize rightsizing recommendations, optionally create Jira or ServiceNow tickets | projectId (optional), minSavings (optional) |

### DevSecOps

| Prompt | Description | Parameters |
|---|---|---|
| security-review | Review security issues across Harness resources and suggest remediations by severity | projectId (optional), severity (optional, default: critical,high) |
| vulnerability-triage | Triage security vulnerabilities across pipelines and artifacts, prioritize by severity and exploitability | projectId (optional), severity (optional) |
| sbom-compliance-check | Audit SBOM and compliance posture for artifacts — license risks, policy violations, component vulnerabilities | artifactId (optional), projectId (optional) |
| supply-chain-audit | End-to-end software supply chain security audit — provenance, chain of custody, policy compliance | projectId (optional) |
| security-exemption-review | Review pending security exemptions and make batch approval or rejection decisions | projectId (optional) |
| access-control-audit | Audit user permissions, over-privileged accounts, and role assignments to enforce least-privilege | projectId (optional), orgId (optional) |

### Harness Code

| Prompt | Description | Parameters |
|---|---|---|
| code-review | Review a pull request — analyze diff, commits, checks, and comments to provide structured feedback on bugs, security, performance, and style | repoId (required), prNumber (required), projectId (optional) |
| pr-summary | Auto-generate a PR title and description from the commit history and diff of a branch | repoId (required), sourceBranch (required), targetBranch (optional, default: main), projectId (optional) |
| branch-cleanup | Analyze branches in a repository and recommend stale or merged branches to delete | repoId (required), projectId (optional) |

## MCP Resources

| Resource URI | Description | MIME Type |
|---|---|---|
| `pipeline:///{pipelineId}` | Pipeline YAML definition | application/x-yaml |
| `pipeline:///{orgId}/{projectId}/{pipelineId}` | Pipeline YAML (with explicit scope) | application/x-yaml |
| `executions:///recent` | Last 10 pipeline execution summaries | application/json |
| `schema:///pipeline` | Harness pipeline JSON Schema | application/schema+json |
| `schema:///template` | Harness template JSON Schema | application/schema+json |
| `schema:///trigger` | Harness trigger JSON Schema | application/schema+json |
| `schema:///pipeline_v1` (Alpha) | Harness V1 pipeline JSON Schema (simplified stages/steps format) | application/schema+json |
| `schema:///agent-pipeline` | Harness AI agent pipeline JSON Schema | application/schema+json |
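A minimal sketch of how a client might parse the two `pipeline:///` URI forms listed above into their scope parts. This is an illustration of the URI shapes, not the server's actual parser.

```typescript
interface PipelineRef {
  orgId?: string;
  projectId?: string;
  pipelineId: string;
}

// Accepts pipeline:///{pipelineId} and pipeline:///{orgId}/{projectId}/{pipelineId}.
function parsePipelineUri(uri: string): PipelineRef {
  const prefix = "pipeline:///";
  if (!uri.startsWith(prefix)) throw new Error(`not a pipeline URI: ${uri}`);
  const parts = uri.slice(prefix.length).split("/").filter(Boolean);
  if (parts.length === 1) return { pipelineId: parts[0] };
  if (parts.length === 3)
    return { orgId: parts[0], projectId: parts[1], pipelineId: parts[2] };
  throw new Error(`unexpected pipeline URI shape: ${uri}`);
}
```

The single-segment form falls back to the default org/project scope, while the three-segment form carries an explicit scope.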

## Toolset Filtering

By default, 31 of 32 toolsets are enabled. One toolset (`ai-evals`) is opt-in — excluded by default to avoid polluting the resource list for users who don't need it.

### Enabling opt-in toolsets

Use the `+` prefix to add opt-in toolsets to the defaults:

```sh
# Enable ai-evals alongside all defaults
HARNESS_TOOLSETS=+ai-evals
```

### Removing default toolsets

Use the `-` prefix to exclude toolsets you don't need:

```sh
# Remove chaos and ccm from defaults
HARNESS_TOOLSETS=-chaos,-ccm
```

### Combining + and -

```sh
# Add ai-evals, remove chaos
HARNESS_TOOLSETS=+ai-evals,-chaos
```

### Explicit allowlist

An explicit comma-separated list (no prefixes) replaces the defaults entirely. Only the listed toolsets are enabled:

```sh
# Only expose pipelines, services, and connectors
HARNESS_TOOLSETS=pipelines,services,connectors
```
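The resolution rules above (defaults, `+` add, `-` remove, plain allowlist) can be sketched as follows. This is an illustrative implementation, not the server's actual config code, and the default/opt-in lists here are abbreviated stand-ins.

```typescript
const DEFAULT_TOOLSETS = ["platform", "pipelines", "services", "connectors", "chaos", "ccm"];

function resolveToolsets(spec: string | undefined): string[] {
  if (!spec || spec.trim() === "") return [...DEFAULT_TOOLSETS];
  const entries = spec.split(",").map((s) => s.trim()).filter(Boolean);
  const hasPrefix = entries.some((e) => e.startsWith("+") || e.startsWith("-"));
  // A plain list (no prefixes) replaces the defaults entirely.
  if (!hasPrefix) return entries;
  // Otherwise, start from the defaults and apply +/- modifiers.
  const enabled = new Set(DEFAULT_TOOLSETS);
  for (const e of entries) {
    if (e.startsWith("+")) enabled.add(e.slice(1));       // enable opt-in toolset
    else if (e.startsWith("-")) enabled.delete(e.slice(1)); // remove default toolset
  }
  return [...enabled];
}
```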

Available toolset names:

| Toolset | Resource Types |
|---|---|
| platform | organization, project |
| pipelines | pipeline, pipeline_v1, execution, trigger, pipeline_summary, input_set, approval_instance |
| agents | agent, agent_run |
| services | service |
| environments | environment |
| connectors | connector, connector_catalogue |
| infrastructure | infrastructure |
| secrets | secret |
| logs | execution_log |
| audit | audit_event |
| delegates | delegate, delegate_token |
| repositories | repository, branch, commit, file_content, tag, repo_rule, space_rule |
| registries | registry, artifact, artifact_version, artifact_file |
| templates | template |
| dashboards | dashboard, dashboard_data |
| idp | idp_entity, scorecard, scorecard_check, scorecard_stats, scorecard_check_stats, idp_score, idp_workflow, idp_tech_doc |
| pull-requests | pull_request, pr_reviewer, pr_comment, pr_check, pr_activity |
| feature-flags | fme_workspace, fme_environment, fme_feature_flag, fme_feature_flag_definition, fme_rollout_status, fme_rule_based_segment, fme_rule_based_segment_definition, feature_flag |
| gitops | gitops_agent, gitops_application, gitops_cluster, gitops_repository, gitops_applicationset, gitops_repo_credential, gitops_app_event, gitops_pod_log, gitops_managed_resource, gitops_resource_action, gitops_dashboard, gitops_app_resource_tree |
| chaos | chaos_experiment, chaos_probe, chaos_experiment_template, chaos_infrastructure, chaos_experiment_variable, chaos_experiment_run, chaos_loadtest, chaos_k8s_infrastructure, chaos_hub, chaos_fault, chaos_network_map, chaos_guard_condition, chaos_guard_rule, chaos_recommendation, chaos_risk |
| ccm | cost_perspective, cost_breakdown, cost_timeseries, cost_summary, cost_recommendation, cost_anomaly, cost_anomaly_summary, cost_category, cost_account_overview, cost_filter_value, cost_recommendation_stats, cost_recommendation_detail, cost_commitment |
| sei | sei_metric, sei_productivity_metric, sei_dora_metric, sei_team, sei_team_detail, sei_org_tree, sei_org_tree_detail, sei_business_alignment, sei_ai_usage, sei_ai_adoption, sei_ai_impact, sei_ai_raw_metric |
| scs | scs_artifact_source, artifact_security, scs_artifact_component, scs_artifact_remediation, scs_chain_of_custody, scs_compliance_result, code_repo_security, scs_sbom |
| sto | security_issue, security_issue_filter, security_exemption |
| dbops | database_schema, database_instance, database_snapshot_object, database_llm_authoring_pipeline |
| access_control | user, user_group, service_account, role, role_assignment, resource_group, permission |
| governance | policy, policy_set, policy_evaluation |
| freeze | freeze_window, global_freeze |
| overrides | service_override |
| settings | setting |
| visualizations | visual_timeline, visual_stage_flow, visual_health_dashboard, visual_pie_chart, visual_bar_chart, visual_timeseries, visual_architecture |
| ai-evals (opt-in) | eval_dataset, eval_dataset_item, evaluation, eval_run, eval_run_item, eval_run_by_eval, eval_metric, eval_metric_set, eval_metric_set_entry, eval_suite, eval_suite_evaluation, eval_suite_run, eval_target, eval_model, eval_annotation, eval_analytics, eval_git_settings, eval_registry_item |

## Architecture

```
+--------------------+
|      AI Agent      |
|   (Claude, etc.)   |
+---------+----------+
          | MCP (stdio or HTTP)
+---------v----------+
|     MCP Server     |
|  11 Generic Tools  |
+---------+----------+
          |
+---------v----------+
|      Registry      |  <-- Declarative resource definitions
|    32 Toolsets     |      (data files, not code)
| 168 Resource Types |
+---------+----------+
          |
+---------v----------+
|   HarnessClient    |  <-- Auth, retry, rate limiting
+---------+----------+
          | HTTPS
+---------v----------+
|  Harness REST API  |
+--------------------+
```

### How It Works

1. Tools are generic verbs: `harness_list`, `harness_get`, etc. They accept a `resource_type` parameter that routes to the correct API endpoint.
2. The Registry maps each `resource_type` to a `ResourceDefinition` — a declarative data structure specifying the HTTP method, URL path, path/query parameter mappings, and response extraction logic.
3. Dispatch resolves the resource definition, builds the HTTP request (path substitution, query params, scope injection), calls the Harness API through `HarnessClient`, and extracts the relevant response data.
4. Toolset filtering (`HARNESS_TOOLSETS`) controls which resource definitions are loaded into the registry at startup.
5. Deep links are automatically appended to responses, providing direct Harness UI URLs for every resource.
6. Compact mode strips verbose metadata from list results, keeping only actionable fields (identity, status, type, timestamps, deep links) to minimize token usage.
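The path-substitution part of step 3 can be sketched as below, using the `pathParams` mapping convention (tool argument name to URL placeholder) from the resource-definition example in this document. This is an illustration, not the server's actual dispatch code.

```typescript
// Substitute identifier fields into a resource definition's URL path template.
function buildPath(
  template: string,
  pathParams: Record<string, string>,           // tool arg name -> path placeholder
  args: Record<string, string | undefined>,     // values supplied by the tool call
): string {
  let path = template;
  for (const [argName, placeholder] of Object.entries(pathParams)) {
    const value = args[argName];
    if (value === undefined)
      throw new Error(`Missing required field "${argName}" for path parameter ${placeholder}`);
    path = path.replace(`{${placeholder}}`, encodeURIComponent(value));
  }
  return path;
}
```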

### Adding a New Resource Type

Create a new file in `src/registry/toolsets/` or add a resource to an existing toolset:

```typescript
// src/registry/toolsets/my-module.ts
import type { ToolsetDefinition } from "../types.js";

export const myModuleToolset: ToolsetDefinition = {
  name: "my-module",
  displayName: "My Module",
  description: "Description of the module",
  resources: [
    {
      resourceType: "my_resource",
      displayName: "My Resource",
      description: "What this resource represents",
      toolset: "my-module",
      scope: "project",                    // "project" | "org" | "account"
      identifierFields: ["resource_id"],
      listFilterFields: ["search_term"],
      operations: {
        list: {
          method: "GET",
          path: "/my-module/api/resources",
          queryParams: { search_term: "search", page: "page", size: "size" },
          responseExtractor: (raw) => raw,
          description: "List resources",
        },
        get: {
          method: "GET",
          path: "/my-module/api/resources/{resourceId}",
          pathParams: { resource_id: "resourceId" },
          responseExtractor: (raw) => raw,
          description: "Get resource details",
        },
      },
    },
  ],
};
```

Then import it in `src/registry/index.ts` and add it to the `ALL_TOOLSETS` array. No changes are needed to any tool files.

## Development

```sh
# Build
pnpm build

# Watch mode
pnpm dev

# Type check
pnpm typecheck

# Run tests
pnpm test

# Watch tests
pnpm test:watch

# Interactive MCP Inspector
pnpm inspect
```

## Project Structure

```
src/
  index.ts                          # Entrypoint, transport setup
  config.ts                         # Env var validation (Zod)
  client/
    harness-client.ts               # HTTP client (auth, retry, rate limiting)
    types.ts                        # Shared API types
  registry/
    index.ts                        # Registry class + dispatch logic
    types.ts                        # ResourceDefinition, ToolsetDefinition, etc.
    toolsets/                       # One file per toolset (declarative data)
      platform.ts
      pipelines.ts
      services.ts
      ccm.ts
      access-control.ts
      ...
  tools/                            # 10 generic MCP tools
    harness-list.ts
    harness-get.ts
    harness-create.ts
    harness-update.ts
    harness-delete.ts
    harness-execute.ts
    harness-search.ts
    harness-diagnose.ts
    harness-describe.ts
    harness-status.ts
  resources/                        # MCP resource providers
    pipeline-yaml.ts
    execution-summary.ts
  prompts/                          # MCP prompt templates
    build-deploy-app.ts             # DevOps: end-to-end build & deploy workflow
    debug-pipeline.ts               # DevOps: debug failed executions
    create-pipeline.ts              # DevOps: generate pipeline from requirements
    onboard-service.ts              # DevOps: onboard new service
    dora-metrics.ts                 # DevOps: DORA metrics review
    setup-gitops.ts                 # DevOps: GitOps application setup
    chaos-resilience.ts             # DevOps: chaos experiment design
    feature-flag-rollout.ts         # DevOps: progressive flag rollout
    migrate-to-template.ts          # DevOps: extract templates from pipeline
    delegate-health.ts              # DevOps: delegate health check
    developer-scorecard.ts          # DevOps: IDP scorecard review
    optimize-costs.ts               # FinOps: cost optimization
    cloud-cost-breakdown.ts         # FinOps: cost deep-dive
    commitment-utilization.ts       # FinOps: RI/savings plan analysis
    cost-anomaly.ts                 # FinOps: anomaly investigation
    rightsizing.ts                  # FinOps: rightsizing recommendations
    security-review.ts              # DevSecOps: security issue review
    vulnerability-triage.ts         # DevSecOps: vulnerability triage
    sbom-compliance.ts              # DevSecOps: SBOM compliance audit
    supply-chain-audit.ts           # DevSecOps: supply chain audit
    exemption-review.ts             # DevSecOps: exemption approval
    access-control-audit.ts         # DevSecOps: access control audit
    code-review.ts                  # Harness Code: PR code review
    pr-summary.ts                   # Harness Code: auto-generate PR summary
    branch-cleanup.ts               # Harness Code: stale branch cleanup
    pending-approvals.ts            # Approvals: find and act on pending approvals
  utils/
    cli.ts                          # CLI arg parsing (transport, port)
    errors.ts                       # Error normalization
    logger.ts                       # stderr-only logger
    progress.ts                     # MCP progress & logging notifications
    rate-limiter.ts                 # Client-side rate limiting
    deep-links.ts                   # Harness UI deep link builder
    response-formatter.ts           # Consistent MCP response formatting
    compact.ts                      # Compact list output for token efficiency
tests/
  config.test.ts                    # Config schema validation tests
  utils/
    response-formatter.test.ts
    deep-links.test.ts
    errors.test.ts
  registry/
    registry.test.ts                # Registry loading, filtering, dispatch tests
```

## Elicitation

Write tools (`harness_create`, `harness_update`, `harness_delete`, `harness_execute`) use MCP elicitation to prompt the user for confirmation before making changes. This gives real human-in-the-loop approval — the user sees what's about to happen and accepts or declines.

How it works:

1. The LLM calls a write tool (e.g. `harness_create` with a pipeline body)
2. The server sends an elicitation request to the client with a summary of the operation
3. The user sees the details and clicks Accept or Decline
4. If accepted, the operation proceeds. If declined, it's blocked and the LLM is told

Client support:

| Client | Elicitation Support |
|---|---|
| Cursor | Yes |
| VS Code (Copilot) | Yes |
| Claude Desktop | Not yet |
| Windsurf | Not yet |
| MCP Inspector | Yes |

Elicitation behavior varies by operation risk and by whether the client supports elicitation:

| Risk Level | Client Supports Elicitation | Behavior |
|---|---|---|
| read, low_write | any | Proceed silently (no confirmation needed) |
| medium_write, high_write, destructive | Yes | Prompt user — proceed on accept, block on decline |
| medium_write, high_write, destructive | No | Block (return error) |
| any (at or below `HARNESS_AUTO_APPROVE_RISK`) | any | Auto-approve without prompting |

If elicitation fails at runtime, operations at medium_write or above are blocked.
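The decision table above can be sketched as a small gating function. The risk-level names and env semantics come from this document; the ordering helper and return labels ("proceed" / "prompt" / "block") are illustrative assumptions, not the server's actual code.

```typescript
type Risk = "read" | "low_write" | "medium_write" | "high_write" | "destructive";
const RISK_ORDER: Risk[] = ["read", "low_write", "medium_write", "high_write", "destructive"];

function gate(
  risk: Risk,
  clientSupportsElicitation: boolean,
  autoApproveRisk?: Risk | "all",
): "proceed" | "prompt" | "block" {
  // Auto-approve anything at or below the configured threshold.
  if (autoApproveRisk === "all") return "proceed";
  if (autoApproveRisk && RISK_ORDER.indexOf(risk) <= RISK_ORDER.indexOf(autoApproveRisk))
    return "proceed";
  // Reads and low-risk writes proceed silently.
  if (risk === "read" || risk === "low_write") return "proceed";
  // Medium-risk and above require confirmation; fail closed if the client can't prompt.
  return clientSupportsElicitation ? "prompt" : "block";
}
```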

### Auto-Approve for Autonomous Workflows

For CI/CD bots, headless agents, or batch automation, use `HARNESS_AUTO_APPROVE_RISK` to auto-approve operations up to a given risk level:

```sh
# Auto-approve everything (equivalent to the old HARNESS_SKIP_ELICITATION=true)
HARNESS_AUTO_APPROVE_RISK=all

# Auto-approve only low-risk writes, still prompt for medium+
HARNESS_AUTO_APPROVE_RISK=low_write
```

Or in your MCP client config:

```json
{
  "mcpServers": {
    "harness": {
      "command": "npx",
      "args": ["harness-mcp-v2"],
      "env": {
        "HARNESS_API_KEY": "pat.xxx.xxx.xxx",
        "HARNESS_AUTO_APPROVE_RISK": "all"
      }
    }
  }
}
```

Migration note: HARNESS_SKIP_ELICITATION=true is still supported and maps to HARNESS_AUTO_APPROVE_RISK=all. A deprecation warning is logged to stderr. If both are set, HARNESS_AUTO_APPROVE_RISK takes precedence.
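The precedence rules in the migration note can be sketched as follows; this is an illustrative reading of the described behavior, not the server's actual config code.

```typescript
// Resolve the effective auto-approve setting from the environment:
// legacy HARNESS_SKIP_ELICITATION=true maps to "all", and
// HARNESS_AUTO_APPROVE_RISK wins when both are set.
function resolveAutoApprove(env: Record<string, string | undefined>): string | undefined {
  const modern = env.HARNESS_AUTO_APPROVE_RISK;
  const legacy = env.HARNESS_SKIP_ELICITATION === "true" ? "all" : undefined;
  if (legacy !== undefined) {
    // The real server would also log a deprecation warning to stderr here.
    return modern !== undefined ? modern : legacy;
  }
  return modern;
}
```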

When set to `all`, all write and delete operations proceed without user confirmation — including destructive operations like `harness_delete`. Use with caution, and consider pairing with `HARNESS_TOOLSETS` to restrict which resource types are available.

## Safety

- **Secrets are never exposed.** The `secret` resource type returns metadata only (name, type, scope) — secret values are never included in any response.
- **Write operations use elicitation when available.** `harness_create`, `harness_update`, `harness_delete`, and `harness_execute` attempt MCP elicitation before proceeding (see Elicitation).
- **Medium-risk and above fail closed.** If confirmation cannot be obtained for `medium_write`, `high_write`, or `destructive` operations, they are blocked instead of executing blindly. Override with `HARNESS_AUTO_APPROVE_RISK` for autonomous workflows.
- **CORS restricted to same-origin.** The HTTP transport only allows same-origin requests, preventing CSRF attacks from malicious websites targeting the MCP server on localhost.
- **HTTP rate limiting.** The HTTP transport enforces 60 requests per minute per IP to prevent request flooding.
- **API rate limiting.** The Harness API client enforces a limit of 10 requests/second to avoid hitting upstream rate limits.
- **Pagination bounds enforced.** List queries are capped at 10,000 items total and 100 per page to prevent memory exhaustion.
- **Retries with backoff.** Transient failures (HTTP 429, 5xx) are retried with exponential backoff and jitter.
- **Localhost binding.** The HTTP transport binds to 127.0.0.1 by default — not accessible from the network.
- **No stdout logging.** All logs go to stderr to avoid corrupting the stdio JSON-RPC transport.
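The retry policy described above can be sketched as below. The base delay, cap, and full-jitter strategy are assumptions for illustration, not the server's exact tuning.

```typescript
// Which HTTP statuses are worth retrying: 429 (rate limited) and 5xx (transient).
function isRetryable(status: number): boolean {
  return status === 429 || (status >= 500 && status < 600);
}

// Exponential backoff with full jitter: delay grows as base * 2^attempt,
// capped, then a uniform random fraction of that is slept.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * exp; // full jitter in [0, exp)
}
```

Full jitter spreads concurrent retries out so many clients hitting the same 429 don't all retry at the same instant.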

## Complementary Skills

The Harness MCP server pairs well with Harness Skills — a collection of ready-made Claude Code skills (slash commands) designed for common Harness workflows. Install them alongside this MCP server to get high-level automation like `/deploy`, `/rollback`, `/triage`, and more without writing custom prompts.

## Troubleshooting & Common Pitfalls

| Symptom | Likely Cause | What to Do |
|---|---|---|
| `HARNESS_ACCOUNT_ID is required when the API key is not a PAT...` | API key is not in PAT format (`pat.<accountId>.<tokenId>.<secret>`), so the account ID cannot be inferred | Set `HARNESS_ACCOUNT_ID` explicitly |
| `Unknown transport: "..."` on startup | Unsupported CLI transport arg | Use `stdio` or `http` only |
| `Invalid HARNESS_TOOLSETS: ...` on startup | One or more toolset names are not recognized | Use only names from Toolset Filtering (exact match) |
| HTTP `mcp-session-id header is required...` | A session request was sent without the session header | Send `initialize` first, then include `mcp-session-id` on POST/GET/DELETE `/mcp` |
| HTTP `Session not found...` | Session expired (30 min idle TTL) or already closed | Re-run `initialize` to create a new session, then retry with the new header |
| HTTP 405 Method Not Allowed on `/mcp` | Unsupported method for the MCP endpoint | Use POST, GET, DELETE, or OPTIONS only |
| HTTP `Invalid request` | Invalid JSON body, or request body exceeded `HARNESS_MAX_BODY_SIZE_MB` | Validate JSON payload size/shape; increase `HARNESS_MAX_BODY_SIZE_MB` if needed |
| `Unknown resource_type "..."` from tools | Resource type is misspelled or filtered out via `HARNESS_TOOLSETS` | Call `harness_describe` (with optional `search_term`) to discover valid types |
| `Missing required field "..." for path parameter ...` | A project/org scoped call is missing identifiers | Set `HARNESS_ORG`/`HARNESS_PROJECT` or pass `org_id`/`project_id` per tool call |
| `Read-only mode is enabled ... operations are not allowed` | `HARNESS_READ_ONLY=true` blocks create/update/delete/execute | Set `HARNESS_READ_ONLY=false` if write operations are intended |
| Pipeline run fails pre-flight with unresolved required inputs | Provided inputs did not cover required runtime placeholders | Fetch `runtime_input_template`, supply missing simple keys, or use `input_set_ids` for structural inputs |
| Pipeline CI shorthand (`branch`, `tag`, `pr_number`, `commit_sha`) did not apply | `inputs.build` was already provided, so shorthand expansion was intentionally skipped | Remove `inputs.build` to use shorthand expansion, or keep the full explicit build structure |
| `Operation declined by user` | User declined the elicitation confirmation dialog | The user chose not to proceed — verify the operation details and retry if intended |
| `body.template_yaml` (or `body.yaml`) is required for template create/update | Template APIs expect a full YAML payload | Provide the full `template_yaml` string in `body`; for deletes, pass `version_label` to delete one version (omit to delete all versions) |
| `HARNESS_BASE_URL must use HTTPS` on startup | `HARNESS_BASE_URL` is set to an HTTP URL | Use HTTPS, or set `HARNESS_ALLOW_HTTP=true` for local development |

## License

MIT