Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
01 / Skill
Jeffallan

claude-skills

Quality
9.0

This repository provides 66 specialized Claude skills and 9 comprehensive project workflows designed for full-stack developers. It enables AI-assisted development across various domains, from frontend and backend to cloud architecture and DevOps, streamlining complex tasks and project management.

USP

It stands out with its vast array of 66 context-aware skills and 9 multi-skill workflows, offering deep specialization across development domains. The integration with Atlassian MCP for project management provides a unique, comprehensive s…

Use cases

  • Implementing JWT authentication in NestJS APIs
  • Building React components with Server Components
  • Designing cloud architectures across AWS, Azure, GCP
  • Querying Jira issues or managing Confluence pages with MCP
  • Conducting chaos experiments for system resilience

Detected files (8)

  • skills/architecture-designer/SKILL.md
    ---
    name: architecture-designer
    description: Use when designing new high-level system architecture, reviewing existing designs, or making architectural decisions. Invoke to create architecture diagrams, write Architecture Decision Records (ADRs), evaluate technology trade-offs, design component interactions, and plan for scalability. Use for system design, architecture review, microservices structuring, ADR authoring, scalability planning, and infrastructure pattern selection — distinct from code-level design patterns or database-only design tasks.
    license: MIT
    metadata:
      author: https://github.com/Jeffallan
      version: "1.1.1"
      domain: api-architecture
      triggers: architecture, system design, design pattern, microservices, scalability, ADR, technical design, infrastructure
      role: expert
      scope: design
      output-format: document
      related-skills: fullstack-guardian, devops-engineer, secure-code-guardian, microservices-architect, code-reviewer
    ---
    
    # Architecture Designer
    
    Senior software architect specializing in system design, design patterns, and architectural decision-making.
    
    ## Role Definition
    
    You are a principal architect with 15+ years of experience designing scalable, distributed systems. You make pragmatic trade-offs, document decisions with ADRs, and prioritize long-term maintainability.
    
    ## When to Use This Skill
    
    - Designing new system architecture
    - Choosing between architectural patterns
    - Reviewing existing architecture
    - Creating Architecture Decision Records (ADRs)
    - Planning for scalability
    - Evaluating technology choices
    
    ## Core Workflow
    
    1. **Understand requirements** — Gather functional, non-functional, and constraint requirements. _Verify full requirements coverage before proceeding._
    2. **Identify patterns** — Match requirements to architectural patterns (see Reference Guide).
    3. **Design** — Create architecture with trade-offs explicitly documented; produce a diagram.
    4. **Document** — Write ADRs for all key decisions.
    5. **Review** — Validate with stakeholders. _If review fails, return to step 3 with recorded feedback._
    
    ## Reference Guide
    
    Load detailed guidance based on context:
    
    | Topic | Reference | Load When |
    |-------|-----------|-----------|
    | Architecture Patterns | `references/architecture-patterns.md` | Choosing monolith vs microservices |
    | ADR Template | `references/adr-template.md` | Documenting decisions |
    | System Design | `references/system-design.md` | Full system design template |
    | Database Selection | `references/database-selection.md` | Choosing database technology |
    | NFR Checklist | `references/nfr-checklist.md` | Gathering non-functional requirements |
    
    ## Constraints
    
    ### MUST DO
    - Document all significant decisions with ADRs
    - Consider non-functional requirements explicitly
    - Evaluate trade-offs, not just benefits
    - Plan for failure modes
    - Consider operational complexity
    - Review with stakeholders before finalizing
    
    ### MUST NOT DO
    - Over-engineer for hypothetical scale
    - Choose technology without evaluating alternatives
    - Ignore operational costs
    - Design without understanding requirements
    - Skip security considerations
    
    ## Output Templates
    
    When designing architecture, provide:
    1. Requirements summary (functional + non-functional)
    2. High-level architecture diagram (Mermaid preferred — see example below)
    3. Key decisions with trade-offs (ADR format — see example below)
    4. Technology recommendations with rationale
    5. Risks and mitigation strategies
    
    ### Architecture Diagram (Mermaid)
    
    ```mermaid
    graph TD
        Client["Client (Web/Mobile)"] --> Gateway["API Gateway"]
        Gateway --> AuthSvc["Auth Service"]
        Gateway --> OrderSvc["Order Service"]
        OrderSvc --> DB[("Orders DB<br/>(PostgreSQL)")]
        OrderSvc --> Queue["Message Queue<br/>(RabbitMQ)"]
        Queue --> NotifySvc["Notification Service"]
    ```
    
    ### ADR Example
    
    ```markdown
    # ADR-001: Use PostgreSQL for Order Storage
    
    ## Status
    Accepted
    
    ## Context
    The Order Service requires ACID-compliant transactions and complex relational queries
    across orders, line items, and customers.
    
    ## Decision
    Use PostgreSQL as the primary datastore for the Order Service.
    
    ## Alternatives Considered
    - **MongoDB** — flexible schema, but lacks strong ACID guarantees across documents.
    - **DynamoDB** — excellent scalability, but complex query patterns require denormalization.
    
    ## Consequences
    - Positive: Strong consistency, mature tooling, complex query support.
    - Negative: Vertical scaling limits; horizontal sharding adds operational complexity.
    
    ## Trade-offs
    Consistency and query flexibility are prioritized over unlimited horizontal write scalability.
    ```
    
    [Documentation](https://jeffallan.github.io/claude-skills/skills/api-architecture/architecture-designer/)
    
  • skills/cloud-architect/SKILL.md
    ---
    name: cloud-architect
    description: Designs cloud architectures, creates migration plans, generates cost optimization recommendations, and produces disaster recovery strategies across AWS, Azure, and GCP. Use when designing cloud architectures, planning migrations, or optimizing multi-cloud deployments. Invoke for Well-Architected Framework, cost optimization, disaster recovery, landing zones, security architecture, serverless design.
    license: MIT
    metadata:
      author: https://github.com/Jeffallan
      version: "1.1.0"
      domain: infrastructure
      triggers: AWS, Azure, GCP, Google Cloud, cloud migration, cloud architecture, multi-cloud, cloud cost, Well-Architected, landing zone, cloud security, disaster recovery, cloud native, serverless architecture
      role: architect
      scope: infrastructure
      output-format: architecture
      related-skills: devops-engineer, kubernetes-specialist, terraform-engineer, security-reviewer, microservices-architect, monitoring-expert
    ---
    
    # Cloud Architect
    
    ## Core Workflow
    
    1. **Discovery** — Assess current state, requirements, constraints, compliance needs
    2. **Design** — Select services, design topology, plan data architecture
    3. **Security** — Implement zero-trust, identity federation, encryption
    4. **Cost Model** — Right-size resources, reserved capacity, auto-scaling
    5. **Migration** — Apply the 6Rs framework (rehost, replatform, repurchase, refactor, retire, retain), define waves, validate connectivity before cutover
    6. **Operate** — Set up monitoring, automation, continuous optimization
    
    ### Workflow Validation Checkpoints
    
    **After Design:** Confirm every component has a redundancy strategy and no single points of failure exist in the topology.
    
    **Before Migration cutover:** Validate VPC peering or connectivity is fully established:
    ```bash
    # AWS: confirm peering connection is Active before proceeding
    aws ec2 describe-vpc-peering-connections \
      --filters "Name=status-code,Values=active"
    
    # Azure: confirm VNet peering state
    az network vnet peering list \
      --resource-group myRG --vnet-name myVNet \
      --query "[].{Name:name,State:peeringState}"
    ```
    
    **After Migration:** Verify application health and routing:
    ```bash
    # AWS: check target group health in ALB
    aws elbv2 describe-target-health \
      --target-group-arn arn:aws:elasticloadbalancing:...
    ```
    
    **After DR test:** Confirm RTO/RPO targets were met; document actual recovery times.
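
    The RTO/RPO check above can be made concrete. A minimal sketch, assuming a hypothetical test-log shape (`DrTestRecord` is illustrative, not part of any cloud SDK): RTO is how long the service was down, RPO is how much data could have been lost.

    ```typescript
    // Sketch only: derive actual RTO/RPO from DR test timestamps.
    // DrTestRecord is an assumed shape for this example.
    interface DrTestRecord {
      failureAt: Date;         // when the outage was injected
      serviceRestoredAt: Date; // when health checks passed again
      lastReplicatedAt: Date;  // last data replicated before the failure
    }

    // RTO = downtime duration; RPO = data-loss window (both in seconds).
    export function drMetrics(r: DrTestRecord): { rtoSec: number; rpoSec: number } {
      return {
        rtoSec: (r.serviceRestoredAt.getTime() - r.failureAt.getTime()) / 1000,
        rpoSec: (r.failureAt.getTime() - r.lastReplicatedAt.getTime()) / 1000,
      };
    }
    ```

    Compare the computed values against the targets defined in step 5 of the workflow and record both in the test report.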
    
    ## Reference Guide
    
    Load detailed guidance based on context:
    
    | Topic | Reference | Load When |
    |-------|-----------|-----------|
    | AWS Services | `references/aws.md` | EC2, S3, Lambda, RDS, Well-Architected Framework |
    | Azure Services | `references/azure.md` | VMs, Storage, Functions, SQL, Cloud Adoption Framework |
    | GCP Services | `references/gcp.md` | Compute Engine, Cloud Storage, Cloud Functions, BigQuery |
    | Multi-Cloud | `references/multi-cloud.md` | Abstraction layers, portability, vendor lock-in mitigation |
    | Cost Optimization | `references/cost.md` | Reserved instances, spot, right-sizing, FinOps practices |
    
    ## Constraints
    
    ### MUST DO
    - Design for high availability (99.9%+)
    - Implement security by design (zero-trust)
    - Use infrastructure as code (Terraform, CloudFormation)
    - Enable cost allocation tags and monitoring
    - Plan disaster recovery with defined RTO/RPO
    - Implement multi-region for critical workloads
    - Use managed services when possible
    - Document architectural decisions
    
    ### MUST NOT DO
    - Store credentials in code or public repos
    - Skip encryption (at rest and in transit)
    - Create single points of failure
    - Ignore cost optimization opportunities
    - Deploy without proper monitoring
    - Use overly complex architectures
    - Ignore compliance requirements
    - Skip disaster recovery testing
    
    ## Common Patterns with Examples
    
    ### Least-Privilege IAM (Zero-Trust)
    
    Rather than broad policies, scope permissions to specific resources and actions:
    
    ```bash
    # AWS: create a scoped role for an application
    aws iam create-role \
      --role-name AppRole \
      --assume-role-policy-document file://trust-policy.json
    
    aws iam put-role-policy \
      --role-name AppRole \
      --policy-name AppInlinePolicy \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:PutObject"],
          "Resource": "arn:aws:s3:::my-app-bucket/*"
        }]
      }'
    ```
    
    ```hcl
    # Terraform equivalent
    resource "aws_iam_role" "app_role" {
      name               = "AppRole"
      assume_role_policy = data.aws_iam_policy_document.trust.json
    }
    
    resource "aws_iam_role_policy" "app_policy" {
      role = aws_iam_role.app_role.id
      policy = jsonencode({
        Version = "2012-10-17"
        Statement = [{
          Effect   = "Allow"
          Action   = ["s3:GetObject", "s3:PutObject"]
          Resource = "${aws_s3_bucket.app.arn}/*"
        }]
      })
    }
    ```
    
    ### VPC with Public/Private Subnets (Terraform)
    
    ```hcl
    resource "aws_vpc" "main" {
      cidr_block           = "10.0.0.0/16"
      enable_dns_hostnames = true
      tags = { Name = "main", CostCenter = var.cost_center }
    }
    
    resource "aws_subnet" "private" {
      count             = 2
      vpc_id            = aws_vpc.main.id
      cidr_block        = cidrsubnet("10.0.0.0/16", 8, count.index)
      availability_zone = data.aws_availability_zones.available.names[count.index]
    }
    
    resource "aws_subnet" "public" {
      count                   = 2
      vpc_id                  = aws_vpc.main.id
      cidr_block              = cidrsubnet("10.0.0.0/16", 8, count.index + 10)
      availability_zone       = data.aws_availability_zones.available.names[count.index]
      map_public_ip_on_launch = true
    }
    ```
    
    ### Auto-Scaling Group (Terraform)
    
    ```hcl
    resource "aws_autoscaling_group" "app" {
      desired_capacity    = 2
      min_size            = 1
      max_size            = 10
      vpc_zone_identifier = aws_subnet.private[*].id
    
      launch_template {
        id      = aws_launch_template.app.id
        version = "$Latest"
      }
    
      tag {
        key                 = "CostCenter"
        value               = var.cost_center
        propagate_at_launch = true
      }
    }
    
    resource "aws_autoscaling_policy" "cpu_target" {
      autoscaling_group_name = aws_autoscaling_group.app.name
      policy_type            = "TargetTrackingScaling"
      target_tracking_configuration {
        predefined_metric_specification {
          predefined_metric_type = "ASGAverageCPUUtilization"
        }
        target_value = 60.0
      }
    }
    ```
    
    ### Cost Analysis CLI
    
    ```bash
    # AWS: identify top cost drivers for the last 30 days
    # (GNU date syntax; on macOS use: date -v-30d +%Y-%m-%d)
    aws ce get-cost-and-usage \
      --time-period Start=$(date -d '30 days ago' +%Y-%m-%d),End=$(date +%Y-%m-%d) \
      --granularity MONTHLY \
      --metrics "UnblendedCost" \
      --group-by Type=DIMENSION,Key=SERVICE \
      --query 'ResultsByTime[0].Groups[*].{Service:Keys[0],Cost:Metrics.UnblendedCost.Amount}' \
      --output table
    
    # Azure: review spend by resource group
    az consumption usage list \
      --start-date $(date -d '30 days ago' +%Y-%m-%d) \
      --end-date $(date +%Y-%m-%d) \
      --query "[].{ResourceGroup:resourceGroup,Cost:pretaxCost,Currency:currency}" \
      --output table
    ```
    
    ## Output Templates
    
    When designing cloud architecture, provide:
    1. Architecture diagram with services and data flow
    2. Service selection rationale (compute, storage, database, networking)
    3. Security architecture (IAM, network segmentation, encryption)
    4. Cost estimation and optimization strategy
    5. Deployment approach and rollback plan
    
    [Documentation](https://jeffallan.github.io/claude-skills/skills/infrastructure/cloud-architect/)
    
  • skills/atlassian-mcp/SKILL.md
    ---
    name: atlassian-mcp
    description: Integrates with Atlassian products to manage project tracking and documentation via MCP protocol. Use when querying Jira issues with JQL filters, creating and updating tickets with custom fields, searching or editing Confluence pages with CQL, managing sprints and backlogs, setting up MCP server authentication, syncing documentation, or debugging Atlassian API integrations.
    license: MIT
    metadata:
      author: https://github.com/Jeffallan
      version: "1.1.0"
      domain: platform
      triggers: Jira, Confluence, Atlassian, MCP, tickets, issues, wiki, JQL, CQL, sprint, backlog, project management
      role: expert
      scope: implementation
      output-format: code
      related-skills: mcp-developer, api-designer, security-reviewer
    ---
    
    # Atlassian MCP Expert
    
    ## When to Use This Skill
    
    - Querying Jira issues with JQL filters
    - Searching or creating Confluence pages
    - Automating sprint workflows and backlog management
    - Setting up MCP server authentication (OAuth/API tokens)
    - Syncing meeting notes to Jira tickets
    - Generating documentation from issue data
    - Debugging Atlassian API integration issues
    - Choosing between official vs open-source MCP servers
    
    ## Core Workflow
    
    1. **Select server** - Choose official cloud, open-source, or self-hosted MCP server
    2. **Authenticate** - Configure OAuth 2.1, API tokens, or PAT credentials
    3. **Design queries** - Write JQL for Jira, CQL for Confluence; validate with `maxResults=1` before full execution
    4. **Implement workflow** - Build tool calls, handle pagination, error recovery
    5. **Verify permissions** - Confirm required scopes with a read-only probe before any write or bulk operation
    6. **Deploy** - Configure IDE integration, test permissions, monitor rate limits
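
    Steps 3–5 above can be sketched in code. The `Page` shape and `fetchPage` callback below are illustrative assumptions loosely modeled on Jira's paged search response, not a real client API:

    ```typescript
    // Sketch only: exponential backoff plus pagination for a JQL search.
    type Page<T> = { issues: T[]; total: number };

    // Delay before retry n (0-based): 500 ms, 1 s, 2 s, ... capped at 30 s.
    export function backoffMs(attempt: number, baseMs = 500, capMs = 30_000): number {
      return Math.min(capMs, baseMs * 2 ** attempt);
    }

    // Retry an async call with exponential backoff before giving up.
    async function withRetry<T>(fn: () => Promise<T>, maxRetries = 5): Promise<T> {
      for (let attempt = 0; ; attempt++) {
        try {
          return await fn();
        } catch (err) {
          if (attempt >= maxRetries) throw err; // exhausted retries
          await new Promise((r) => setTimeout(r, backoffMs(attempt)));
        }
      }
    }

    // Fetch every page of a query (default 50 items per page).
    export async function fetchAllIssues<T>(
      fetchPage: (startAt: number, maxResults: number) => Promise<Page<T>>,
      pageSize = 50,
    ): Promise<T[]> {
      const all: T[] = [];
      let startAt = 0;
      for (;;) {
        const page = await withRetry(() => fetchPage(startAt, pageSize));
        all.push(...page.issues);
        startAt += page.issues.length;
        if (page.issues.length === 0 || startAt >= page.total) return all;
      }
    }
    ```

    In a real integration, a rate-limit response (HTTP 429) should also be inspected for a `Retry-After` header, and that value preferred over the computed backoff.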
    
    ## Reference Guide
    
    Load detailed guidance based on context:
    
    | Topic | Reference | Load When |
    |-------|-----------|-----------|
    | Server Setup | `references/mcp-server-setup.md` | Installation, choosing servers, configuration |
    | Jira Operations | `references/jira-queries.md` | JQL syntax, issue CRUD, sprints, boards, issue linking |
    | Confluence Ops | `references/confluence-operations.md` | CQL search, page creation, spaces, comments |
    | Authentication | `references/authentication-patterns.md` | OAuth 2.0, API tokens, permission scopes |
    | Common Workflows | `references/common-workflows.md` | Issue triage, doc sync, sprint automation |
    
    ## Quick-Start Examples
    
    ### JQL Query Samples
    ```
    # Open issues assigned to current user in a sprint
    project = PROJ AND status = "In Progress" AND assignee = currentUser() ORDER BY priority DESC
    
    # Unresolved bugs created in the last 7 days
    project = PROJ AND issuetype = Bug AND status != Done AND created >= -7d ORDER BY created DESC
    
    # Validate before bulk: test with maxResults=1 first
    project = PROJ AND sprint in openSprints() AND status = Open ORDER BY created DESC
    ```
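
    When user-provided text is interpolated into a JQL clause (for example a summary search), quote and escape it first. `jqlQuote` below is a hypothetical helper sketched for illustration, not part of any Atlassian library:

    ```typescript
    // Hypothetical helper: quote a user-supplied value and escape backslashes
    // and embedded quotes so it cannot break out of the JQL string literal.
    export function jqlQuote(value: string): string {
      return `"${value.replace(/\\/g, "\\\\").replace(/"/g, '\\"')}"`;
    }

    // Usage: build a query from untrusted input.
    const userInput = 'deployment "runbook"';
    const jql = `project = PROJ AND summary ~ ${jqlQuote(userInput)}`;
    ```

    This complements, but does not replace, validating the query with a `maxResults=1` probe before a full run.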
    
    ### CQL Query Samples
    ```
    # Find pages updated in a specific space recently
    space = "ENG" AND type = page AND lastModified >= "2024-01-01" ORDER BY lastModified DESC
    
    # Search page text for a keyword
    space = "ENG" AND type = page AND text ~ "deployment runbook"
    ```
    
    ### Minimal MCP Server Configuration
    ```json
    {
      "mcpServers": {
        "atlassian": {
          "command": "npx",
          "args": ["-y", "@sooperset/mcp-atlassian"],
          "env": {
            "JIRA_URL": "https://your-domain.atlassian.net",
            "JIRA_EMAIL": "user@example.com",
            "JIRA_API_TOKEN": "${JIRA_API_TOKEN}",
            "CONFLUENCE_URL": "https://your-domain.atlassian.net/wiki",
            "CONFLUENCE_EMAIL": "user@example.com",
            "CONFLUENCE_API_TOKEN": "${CONFLUENCE_API_TOKEN}"
          }
        }
      }
    }
    ```
    > **Note:** Always load `JIRA_API_TOKEN` and `CONFLUENCE_API_TOKEN` from environment variables or a secrets manager — never hardcode credentials.
    
    ## Constraints
    
    ### MUST DO
    - Respect user permissions and workspace access controls
    - Validate JQL/CQL queries before execution (use `maxResults=1` probe first)
    - Handle rate limits with exponential backoff
    - Use pagination for large result sets (50-100 items per page)
    - Implement error recovery for network failures
    - Log API calls for debugging and audit trails
    - Test with read-only operations first
    - Document required permission scopes
    - Confirm before any write or bulk operation against production data
    
    ### MUST NOT DO
    - Hardcode API tokens or OAuth secrets in code
    - Ignore rate limit headers from Atlassian APIs
    - Create issues without validating required fields
    - Skip input sanitization on user-provided query strings
    - Deploy without testing permission boundaries
    - Update production data without confirmation prompts
    - Mix different authentication methods in same session
    - Expose sensitive issue data in logs or error messages
    
    ## Output Templates
    
    When implementing Atlassian MCP features, provide:
    1. MCP server configuration (JSON/environment vars)
    2. Query examples (JQL/CQL with explanations)
    3. Tool call implementation with error handling
    4. Authentication setup instructions
    5. Brief explanation of permission requirements
    
    [Documentation](https://jeffallan.github.io/claude-skills/skills/platform/atlassian-mcp/)
    
  • skills/chaos-engineer/SKILL.md
    ---
    name: chaos-engineer
    description: Designs chaos experiments, creates failure injection frameworks, and facilitates game day exercises for distributed systems — producing runbooks, experiment manifests, rollback procedures, and post-mortem templates. Use when designing chaos experiments, implementing failure injection frameworks, or conducting game day exercises. Invoke for chaos experiments, resilience testing, blast radius control, game days, antifragile systems, fault injection, Chaos Monkey, Litmus Chaos.
    license: MIT
    metadata:
      author: https://github.com/Jeffallan
      version: "1.1.0"
      domain: devops
      triggers: chaos engineering, resilience testing, failure injection, game day, blast radius, chaos experiment, fault injection, Chaos Monkey, Litmus Chaos, antifragile
      role: specialist
      scope: implementation
      output-format: code
      related-skills: sre-engineer, devops-engineer, kubernetes-specialist
    ---
    
    # Chaos Engineer
    
    ## When to Use This Skill
    
    - Designing and executing chaos experiments
    - Implementing failure injection frameworks (Chaos Monkey, Litmus, etc.)
    - Planning and conducting game day exercises
    - Building blast radius controls and safety mechanisms
    - Setting up continuous chaos testing in CI/CD
    - Improving system resilience based on experiment findings
    
    ## Core Workflow
    
    1. **System Analysis** - Map architecture, dependencies, critical paths, and failure modes
    2. **Experiment Design** - Define hypothesis, steady state, blast radius, and safety controls
    3. **Execute Chaos** - Run controlled experiments with monitoring and quick rollback
    4. **Learn & Improve** - Document findings, implement fixes, enhance monitoring
    5. **Automate** - Integrate chaos testing into CI/CD for continuous resilience
    
    ## Reference Guide
    
    Load detailed guidance based on context:
    
    | Topic | Reference | Load When |
    |-------|-----------|-----------|
    | Experiments | `references/experiment-design.md` | Designing hypothesis, blast radius, rollback |
    | Infrastructure | `references/infrastructure-chaos.md` | Server, network, zone, region failures |
    | Kubernetes | `references/kubernetes-chaos.md` | Pod, node, Litmus, chaos mesh experiments |
    | Tools & Automation | `references/chaos-tools.md` | Chaos Monkey, Gremlin, Pumba, CI/CD integration |
    | Game Days | `references/game-days.md` | Planning, executing, learning from game days |
    
    ## Safety Checklist
    
    Non-obvious constraints that must be enforced on every experiment:
    
    - **Steady state first** — define and verify baseline metrics before injecting any failure
    - **Blast radius cap** — start with the smallest possible impact scope; expand only after validation
    - **Automated rollback ≤ 30 seconds** — abort path must be scripted and tested before the experiment begins
    - **Single variable** — change only one failure condition at a time until behavior is well understood
    - **No production without safety nets** — customer-facing environments require circuit breakers, feature flags, or canary isolation
    - **Close the loop** — every experiment must produce a written learning summary and at least one tracked improvement
    
    ## Output Templates
    
    When implementing chaos engineering, provide:
    1. Experiment design document (hypothesis, metrics, blast radius)
    2. Implementation code (failure injection scripts/manifests)
    3. Monitoring setup and alert configuration
    4. Rollback procedures and safety controls
    5. Learning summary and improvement recommendations
    
    ## Concrete Example: Pod Failure Experiment (Litmus Chaos)
    
    The following shows a complete experiment — from hypothesis to rollback — using Litmus Chaos on Kubernetes.
    
    ### Step 1 — Define steady state and apply the experiment
    
    ```bash
    # Verify baseline: p99 latency < 200ms, error rate < 0.1%
    kubectl get deploy my-service -n production
    kubectl top pods -n production -l app=my-service
    ```
    
    ### Step 2 — Create and apply a Litmus ChaosEngine manifest
    
    ```yaml
    # chaos-pod-delete.yaml
    apiVersion: litmuschaos.io/v1alpha1
    kind: ChaosEngine
    metadata:
      name: my-service-pod-delete
      namespace: production
    spec:
      appinfo:
        appns: production
        applabel: "app=my-service"
        appkind: deployment
      # Limit blast radius: only 1 replica at a time
      engineState: active
      chaosServiceAccount: litmus-admin
      experiments:
        - name: pod-delete
          spec:
            components:
              env:
                - name: TOTAL_CHAOS_DURATION
                  value: "60"          # seconds
                - name: CHAOS_INTERVAL
                  value: "20"          # delete one pod every 20s
                - name: FORCE
                  value: "false"
                - name: PODS_AFFECTED_PERC
                  value: "33"          # max 33% of replicas affected
    ```
    
    ```bash
    # Apply the experiment
    kubectl apply -f chaos-pod-delete.yaml
    
    # Watch experiment status
    kubectl describe chaosengine my-service-pod-delete -n production
    kubectl get chaosresult my-service-pod-delete-pod-delete -n production -w
    ```
    
    ### Step 3 — Monitor during the experiment
    
    ```bash
    # Tail application logs for errors
    kubectl logs -l app=my-service -n production --since=2m -f
    
    # Check ChaosResult verdict when complete
    kubectl get chaosresult my-service-pod-delete-pod-delete \
      -n production -o jsonpath='{.status.experimentStatus.verdict}'
    ```
    
    ### Step 4 — Rollback / abort if steady state is violated
    
    ```bash
    # Immediately stop the experiment
    kubectl patch chaosengine my-service-pod-delete \
      -n production --type merge -p '{"spec":{"engineState":"stop"}}'
    
    # Confirm all pods are healthy
    kubectl rollout status deployment/my-service -n production
    ```
    
    ## Concrete Example: Network Latency with toxiproxy
    
    ```bash
    # Install toxiproxy CLI
    brew install toxiproxy   # macOS; use the binary release on Linux
    
    # Start toxiproxy server (runs alongside your service)
    toxiproxy-server &
    
    # Create a proxy for your downstream dependency
    toxiproxy-cli create -l 0.0.0.0:22222 -u downstream-db:5432 db-proxy
    
    # Inject 300ms latency with 10% jitter — blast radius: this proxy only
    toxiproxy-cli toxic add db-proxy -t latency -a latency=300 -a jitter=30
    
    # Run your load test / observe metrics here ...
    
    # Remove the toxic to restore normal behavior
    toxiproxy-cli toxic remove db-proxy -n latency_downstream
    ```
    
    ## Concrete Example: Chaos Monkey (Spinnaker / standalone)
    
    ```yaml
    # chaos-monkey-config.yml — restrict to a single ASG
    deployment:
      enabled: true
      regionIndependence: false
    chaos:
      enabled: true
      meanTimeBetweenKillsInWorkDays: 2
      minTimeBetweenKillsInWorkDays: 1
      grouping: APP           # kill one instance per app, not per cluster
      exceptions:
        - account: production
          region: us-east-1
          detail: "*-canary"  # never kill canary instances
    ```

    ```bash
    # Apply and trigger a manual kill for testing
    chaos-monkey --app my-service --account staging --dry-run false
    ```
    
    [Documentation](https://jeffallan.github.io/claude-skills/skills/devops/chaos-engineer/)
    
  • skills/angular-architect/SKILL.md
    ---
    name: angular-architect
    description: Generates Angular 17+ standalone components, configures advanced routing with lazy loading and guards, implements NgRx state management, applies RxJS patterns, and optimizes bundle performance. Use when building Angular 17+ applications with standalone components or signals, setting up NgRx stores, establishing RxJS reactive patterns, performance tuning, or writing Angular tests for enterprise apps.
    license: MIT
    metadata:
      author: https://github.com/Jeffallan
      version: "1.1.0"
      domain: frontend
      triggers: Angular, Angular 17, standalone components, signals, RxJS, NgRx, Angular performance, Angular routing, Angular testing
      role: specialist
      scope: implementation
      output-format: code
      related-skills: typescript-pro, test-master
    ---
    
    # Angular Architect
    
    Senior Angular architect specializing in Angular 17+ with standalone components, signals, and enterprise-grade application development.
    
    ## Core Workflow
    
    1. **Analyze requirements** - Identify components, state needs, routing architecture
    2. **Design architecture** - Plan standalone components, signal usage, state flow
    3. **Implement features** - Build components with OnPush strategy and reactive patterns
    4. **Manage state** - Setup NgRx store, effects, selectors as needed; verify store hydration and action flow with Redux DevTools before proceeding
    5. **Optimize** - Apply performance best practices and bundle optimization; run `ng build --configuration production` to verify bundle size and flag regressions
    6. **Test** - Write unit and integration tests with TestBed; verify >85% coverage threshold is met
    
    ## Reference Guide
    
    Load detailed guidance based on context:
    
    | Topic | Reference | Load When |
    |-------|-----------|-----------|
    | Components | `references/components.md` | Standalone components, signals, input/output |
    | RxJS | `references/rxjs.md` | Observables, operators, subjects, error handling |
    | NgRx | `references/ngrx.md` | Store, effects, selectors, entity adapter |
    | Routing | `references/routing.md` | Router config, guards, lazy loading, resolvers |
    | Testing | `references/testing.md` | TestBed, component tests, service tests |
    
    ## Key Patterns
    
    ### Standalone Component with OnPush and Signals
    
    ```typescript
    import { ChangeDetectionStrategy, Component, computed, input, output, signal } from '@angular/core';
    import { CommonModule } from '@angular/common';
    
    @Component({
      selector: 'app-user-card',
      standalone: true,
      imports: [CommonModule],
      changeDetection: ChangeDetectionStrategy.OnPush,
      template: `
        <div class="user-card">
          <h2>{{ fullName() }}</h2>
          <button (click)="onSelect()">Select</button>
        </div>
      `,
    })
    export class UserCardComponent {
      firstName = input.required<string>();
      lastName = input.required<string>();
      selected = output<string>();
    
      fullName = computed(() => `${this.firstName()} ${this.lastName()}`);
    
      onSelect(): void {
        this.selected.emit(this.fullName());
      }
    }
    ```
    
    ### RxJS Subscription Management with `takeUntilDestroyed`
    
    ```typescript
    import { Component, DestroyRef, OnInit, inject } from '@angular/core';
    import { takeUntilDestroyed } from '@angular/core/rxjs-interop';
    import { UserService } from './user.service';
    
    @Component({ selector: 'app-users', standalone: true, template: `...` })
    export class UsersComponent implements OnInit {
      private userService = inject(UserService);
      // DestroyRef is captured at construction time for use in ngOnInit
      private destroyRef = inject(DestroyRef);
    
      ngOnInit(): void {
        this.userService.getUsers()
          .pipe(takeUntilDestroyed(this.destroyRef))
          .subscribe({
            next: (users) => { /* handle */ },
            error: (err) => console.error('Failed to load users', err),
          });
      }
    }
    ```
    
    ### NgRx Action / Reducer / Selector
    
    ```typescript
    import { createAction, createFeatureSelector, createReducer, createSelector, on, props } from '@ngrx/store';
    import { User } from './user.model'; // adjust to your model's path

    // actions
    export const loadUsers = createAction('[Users] Load Users');
    export const loadUsersSuccess = createAction('[Users] Load Users Success', props<{ users: User[] }>());
    export const loadUsersFailure = createAction('[Users] Load Users Failure', props<{ error: string }>());
    
    // reducer
    export interface UsersState { users: User[]; loading: boolean; error: string | null; }
    const initialState: UsersState = { users: [], loading: false, error: null };
    
    export const usersReducer = createReducer(
      initialState,
      on(loadUsers, (state) => ({ ...state, loading: true, error: null })),
      on(loadUsersSuccess, (state, { users }) => ({ ...state, users, loading: false })),
      on(loadUsersFailure, (state, { error }) => ({ ...state, error, loading: false })),
    );
    
    // selectors
    export const selectUsersState = createFeatureSelector<UsersState>('users');
    export const selectAllUsers = createSelector(selectUsersState, (s) => s.users);
    export const selectUsersLoading = createSelector(selectUsersState, (s) => s.loading);
    ```
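
The reducer above never mutates state. The same pattern in plain TypeScript (no NgRx imports, state shape simplified for illustration) shows why spread-based updates matter — each transition returns a new object and leaves the previous state intact, which is what NgRx's memoized selectors rely on:

```typescript
interface UsersState { users: string[]; loading: boolean; error: string | null; }

const initial: UsersState = { users: [], loading: false, error: null };

// Immutable update: returns a NEW object; `state` is never touched.
function startLoading(state: UsersState): UsersState {
  return { ...state, loading: true, error: null };
}
```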
    
    ## Constraints
    
    ### MUST DO
    - Use standalone components (Angular 17+ default)
    - Use signals for reactive state where appropriate
    - Use OnPush change detection strategy
    - Use strict TypeScript configuration
    - Implement proper error handling in RxJS streams
    - Use `trackBy` functions in `*ngFor` loops
    - Write tests with >85% coverage
    - Follow Angular style guide
    
    ### MUST NOT DO
    - Use NgModule-based components (except when required for compatibility)
    - Forget to unsubscribe from observables (use `takeUntilDestroyed` or `async` pipe)
    - Use async operations without proper error handling
    - Skip accessibility attributes
    - Expose sensitive data in client-side code
    - Use `any` type without justification
    - Mutate state directly in NgRx
    - Skip unit tests for critical logic
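
The `trackBy` requirement above boils down to a pure function that returns a stable identity per item, so Angular can reuse DOM nodes instead of rebuilding the list on every change-detection pass. A minimal sketch (the `User` shape is assumed for illustration):

```typescript
interface User { id: string; name: string; }

// Stable identity per item — same id, same DOM node across re-renders.
function trackByUserId(_index: number, user: User): string {
  return user.id;
}

// Template usage: *ngFor="let user of users; trackBy: trackByUserId"
```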
    
    ## Output Templates
    
    When implementing Angular features, provide:
    1. Component file with standalone configuration
    2. Service file if business logic is involved
    3. State management files if using NgRx
    4. Test file with comprehensive test cases
    5. Brief explanation of architectural decisions
    
    [Documentation](https://jeffallan.github.io/claude-skills/skills/frontend/angular-architect/)
    
  • skills/api-designer/SKILL.md (skill, 7832 bytes)
    ---
    name: api-designer
    description: Use when designing REST or GraphQL APIs, creating OpenAPI specifications, or planning API architecture. Invoke for resource modeling, versioning strategies, pagination patterns, error handling standards.
    license: MIT
    metadata:
      author: https://github.com/Jeffallan
      version: "1.1.0"
      domain: api-architecture
      triggers: API design, REST API, OpenAPI, API specification, API architecture, resource modeling, API versioning, GraphQL schema, API documentation
      role: architect
      scope: design
      output-format: specification
      related-skills: graphql-architect, fastapi-expert, nestjs-expert, spring-boot-engineer, security-reviewer
    ---
    
    # API Designer
    
    Senior API architect specializing in REST and GraphQL APIs with comprehensive OpenAPI 3.1 specifications.
    
    ## Core Workflow
    
    1. **Analyze domain** — Understand business requirements, data models, and client needs
    2. **Model resources** — Identify resources, relationships, and operations; sketch entity diagram before writing any spec
    3. **Design endpoints** — Define URI patterns, HTTP methods, request/response schemas
    4. **Specify contract** — Create OpenAPI 3.1 spec; validate before proceeding: `npx @redocly/cli lint openapi.yaml`
    5. **Mock and verify** — Spin up a mock server to test contracts: `npx @stoplight/prism-cli mock openapi.yaml`
    6. **Plan evolution** — Design versioning, deprecation, and backward-compatibility strategy
    
    ## Reference Guide
    
    Load detailed guidance based on context:
    
    | Topic | Reference | Load When |
    |-------|-----------|-----------|
    | REST Patterns | `references/rest-patterns.md` | Resource design, HTTP methods, HATEOAS |
    | Versioning | `references/versioning.md` | API versions, deprecation, breaking changes |
    | Pagination | `references/pagination.md` | Cursor, offset, keyset pagination |
    | Error Handling | `references/error-handling.md` | Error responses, RFC 7807, status codes |
    | OpenAPI | `references/openapi.md` | OpenAPI 3.1, documentation, code generation |
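
The cursor pagination referenced above typically uses an opaque token encoding the last row's sort keys. A minimal Node/TypeScript sketch — these helpers are hypothetical, and the token format is an implementation detail that clients must never parse:

```typescript
// Sort keys of the last row on the current page, assuming a (created_at, id) sort.
interface CursorPayload { id: string; created_at: string; }

function encodeCursor(p: CursorPayload): string {
  // base64url keeps the token URL-safe without percent-encoding
  return Buffer.from(JSON.stringify(p)).toString('base64url');
}

function decodeCursor(cursor: string): CursorPayload {
  return JSON.parse(Buffer.from(cursor, 'base64url').toString('utf8'));
}
```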
    
    ## Constraints
    
    ### MUST DO
    - Follow REST principles (resource-oriented, proper HTTP methods)
    - Use consistent naming conventions (snake_case or camelCase — pick one, apply everywhere)
    - Include comprehensive OpenAPI 3.1 specification
    - Design proper error responses with actionable messages (RFC 7807)
    - Implement pagination for all collection endpoints
    - Version APIs with clear deprecation policies
    - Document authentication and authorization
    - Provide request/response examples
    
    ### MUST NOT DO
    - Use verbs in resource URIs (use `/users/{id}`, not `/getUser/{id}`)
    - Return inconsistent response structures
    - Skip error code documentation
    - Ignore HTTP status code semantics
    - Design APIs without a versioning strategy
    - Expose implementation details in the API surface
    - Create breaking changes without a migration path
    - Omit rate limiting considerations
    
    ## Templates
    
    ### OpenAPI 3.1 Resource Endpoint (copy-paste starter)
    
    ```yaml
    openapi: "3.1.0"
    info:
      title: Example API
      version: "1.1.0"
    paths:
      /users:
        get:
          summary: List users
          operationId: listUsers
          tags: [Users]
          parameters:
            - name: cursor
              in: query
              schema: { type: string }
              description: Opaque cursor for pagination
            - name: limit
              in: query
              schema: { type: integer, default: 20, maximum: 100 }
          responses:
            "200":
              description: Paginated list of users
              content:
                application/json:
                  schema:
                    type: object
                    required: [data, pagination]
                    properties:
                      data:
                        type: array
                        items: { $ref: "#/components/schemas/User" }
                      pagination:
                        $ref: "#/components/schemas/CursorPage"
            "400": { $ref: "#/components/responses/BadRequest" }
            "401": { $ref: "#/components/responses/Unauthorized" }
            "429": { $ref: "#/components/responses/TooManyRequests" }
      /users/{id}:
        get:
          summary: Get a user
          operationId: getUser
          tags: [Users]
          parameters:
            - name: id
              in: path
              required: true
              schema: { type: string, format: uuid }
          responses:
            "200":
              description: User found
              content:
                application/json:
                  schema: { $ref: "#/components/schemas/User" }
            "404": { $ref: "#/components/responses/NotFound" }
    
    components:
      schemas:
        User:
          type: object
          required: [id, email, created_at]
          properties:
            id:    { type: string, format: uuid, readOnly: true }
            email: { type: string, format: email }
            name:  { type: string }
            created_at: { type: string, format: date-time, readOnly: true }
    
        CursorPage:
          type: object
          required: [next_cursor, has_more]
          properties:
            next_cursor: { type: [string, "null"] }   # OpenAPI 3.1: union type replaces 3.0's `nullable`
            has_more:    { type: boolean }
    
        Problem:                       # RFC 7807 Problem Details
          type: object
          required: [type, title, status]
          properties:
            type:     { type: string, format: uri, example: "https://api.example.com/errors/validation-error" }
            title:    { type: string, example: "Validation Error" }
            status:   { type: integer, example: 400 }
            detail:   { type: string, example: "The 'email' field must be a valid email address." }
            instance: { type: string, format: uri, example: "/users/req-abc123" }
    
      responses:
        BadRequest:
          description: Invalid request parameters
          content:
            application/problem+json:
              schema: { $ref: "#/components/schemas/Problem" }
        Unauthorized:
          description: Missing or invalid authentication
          content:
            application/problem+json:
              schema: { $ref: "#/components/schemas/Problem" }
        NotFound:
          description: Resource not found
          content:
            application/problem+json:
              schema: { $ref: "#/components/schemas/Problem" }
        TooManyRequests:
          description: Rate limit exceeded
          headers:
            Retry-After: { schema: { type: integer } }
          content:
            application/problem+json:
              schema: { $ref: "#/components/schemas/Problem" }
    
      securitySchemes:
        BearerAuth:
          type: http
          scheme: bearer
          bearerFormat: JWT
    
    security:
      - BearerAuth: []
    ```
    
    ### RFC 7807 Error Response (copy-paste)
    
    ```json
    {
      "type": "https://api.example.com/errors/validation-error",
      "title": "Validation Error",
      "status": 422,
      "detail": "The 'email' field must be a valid email address.",
      "instance": "/users/req-abc123",
      "errors": [
        { "field": "email", "message": "Must be a valid email address." }
      ]
    }
    ```
    
    - Always use `Content-Type: application/problem+json` for error responses.
    - `type` must be a stable, documented URI — never a generic string.
    - `detail` must be human-readable and actionable.
    - Extend with `errors[]` for field-level validation failures.
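
A server-side builder for the shape above might look like the following sketch — the `type` URI and field names mirror the example payload, not any particular framework:

```typescript
interface FieldError { field: string; message: string; }

interface Problem {
  type: string;
  title: string;
  status: number;
  detail?: string;
  instance?: string;
  errors?: FieldError[];
}

// Build a 422 validation problem. Serve the result with
// Content-Type: application/problem+json.
function validationProblem(detail: string, instance: string, errors: FieldError[]): Problem {
  return {
    type: 'https://api.example.com/errors/validation-error',
    title: 'Validation Error',
    status: 422,
    detail,
    instance,
    errors,
  };
}
```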
    
    ## Output Checklist
    
    When delivering an API design, provide:
    1. Resource model and relationships (diagram or table)
    2. Endpoint specifications with URIs and HTTP methods
    3. OpenAPI 3.1 specification (YAML)
    4. Authentication and authorization flows
    5. Error response catalog (all 4xx/5xx with `type` URIs)
    6. Pagination and filtering patterns
    7. Versioning and deprecation strategy
    8. Validation result: `npx @redocly/cli lint openapi.yaml` passes with no errors
    
    ## Knowledge Reference
    
    REST architecture, OpenAPI 3.1, GraphQL, HTTP semantics, JSON:API, HATEOAS, OAuth 2.0, JWT, RFC 7807 Problem Details, API versioning patterns, pagination strategies, rate limiting, webhook design, SDK generation
    
    [Documentation](https://jeffallan.github.io/claude-skills/skills/api-architecture/api-designer/)
    
  • skills/cli-developer/SKILL.md (skill, 4614 bytes)
    ---
    name: cli-developer
    description: Use when building CLI tools, implementing argument parsing, or adding interactive prompts. Invoke for parsing flags and subcommands, displaying progress bars and spinners, generating bash/zsh/fish completion scripts, CLI design, shell completions, and cross-platform terminal applications using commander, click, typer, or cobra.
    license: MIT
    metadata:
      author: https://github.com/Jeffallan
      version: "1.1.0"
      domain: devops
      triggers: CLI, command-line, terminal app, argument parsing, shell completion, interactive prompt, progress bar, commander, click, typer, cobra
      role: specialist
      scope: implementation
      output-format: code
      related-skills: devops-engineer
    ---
    
    # CLI Developer
    
    ## Core Workflow
    
    1. **Analyze UX** — Identify user workflows, command hierarchy, common tasks. Validate by listing all commands and their expected `--help` output before writing code.
    2. **Design commands** — Plan subcommands, flags, arguments, configuration. Confirm flag naming is consistent and no existing signatures are broken.
    3. **Implement** — Build with the appropriate CLI framework for the language (see Reference Guide below). After wiring up commands, run `<cli> --help` to verify help text renders correctly and `<cli> --version` to confirm version output.
    4. **Polish** — Add completions, help text, error messages, progress indicators. Verify TTY detection for color output and graceful SIGINT handling.
    5. **Test** — Run cross-platform smoke tests; benchmark startup time (target: <50ms).
    
    ## Reference Guide
    
    Load detailed guidance based on context:
    
    | Topic | Reference | Load When |
    |-------|-----------|-----------|
    | Design Patterns | `references/design-patterns.md` | Subcommands, flags, config, architecture |
    | Node.js CLIs | `references/node-cli.md` | commander, yargs, inquirer, chalk |
    | Python CLIs | `references/python-cli.md` | click, typer, argparse, rich |
    | Go CLIs | `references/go-cli.md` | cobra, viper, bubbletea |
    | UX Patterns | `references/ux-patterns.md` | Progress bars, colors, help text |
    
    ## Quick-Start Example
    
    ### Node.js (commander)
    
    ```js
    #!/usr/bin/env node
    // npm install commander
    const { program } = require('commander');
    
    program
      .name('mytool')
      .description('Example CLI')
      .version('1.0.0');
    
    program
      .command('greet <name>')
      .description('Greet a user')
      .option('-l, --loud', 'uppercase the greeting')
      .action((name, opts) => {
        const msg = `Hello, ${name}!`;
        console.log(opts.loud ? msg.toUpperCase() : msg);
      });
    
    program.parse();
    ```
    
    For Python (click/typer) and Go (cobra) quick-start examples, see `references/python-cli.md` and `references/go-cli.md`.
    
    ## Constraints
    
    ### MUST DO
    - Keep startup time under 50ms
    - Provide clear, actionable error messages
    - Support `--help` and `--version` flags
    - Use consistent flag naming conventions
    - Handle SIGINT (Ctrl+C) gracefully
    - Validate user input early
    - Support both interactive and non-interactive modes
    - Test on Windows, macOS, and Linux
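
Graceful SIGINT handling from the list above usually means an idempotent cleanup hook plus the conventional exit code. A Node/TypeScript sketch:

```typescript
let cleanedUp = false;

// Idempotent: safe to call from both the signal handler and normal exit.
function cleanup(): void {
  if (cleanedUp) return;
  cleanedUp = true;
  // flush logs, delete temp files, restore cursor/terminal state…
}

process.on('SIGINT', () => {
  cleanup();
  process.exit(130); // 128 + SIGINT(2), the conventional exit code
});
process.on('exit', cleanup);
```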
    
    ### MUST NOT DO
    
    - **Block on synchronous I/O unnecessarily** — use async reads or stream processing instead.
    - **Print to stdout when output will be piped** — write logs/diagnostics to stderr.
    - **Use colors when output is not a TTY** — detect before applying color:
      ```js
      // Node.js
      const useColor = process.stdout.isTTY;
      ```
      ```python
      # Python
      import sys
      use_color = sys.stdout.isatty()
      ```
      ```go
      // Go
      import (
          "os"
          "golang.org/x/term"
      )
      useColor := term.IsTerminal(int(os.Stdout.Fd()))
      ```
    - **Break existing command signatures** — treat flag/subcommand renames as breaking changes.
    - **Require interactive input in CI/CD environments** — always provide non-interactive fallbacks via flags or env vars.
    - **Hardcode paths or platform-specific logic** — use `os.homedir()` / `os.UserHomeDir()` / `Path.home()` instead.
    - **Ship without shell completions** — all three frameworks above have built-in completion generation.
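
The non-interactive fallback rule can be sketched as a resolution chain — flag first, then environment variable, then prompt only when attached to a TTY. Names like `MYTOOL_TOKEN` and `resolveToken` are hypothetical, for illustration only:

```typescript
function resolveToken(
  flagValue: string | undefined,
  env: Record<string, string | undefined>,
  isTTY: boolean,
): string {
  const token = flagValue ?? env['MYTOOL_TOKEN']; // hypothetical variable name
  if (token) return token;
  if (!isTTY) {
    // CI/CD: never block on a prompt — fail fast with an actionable message
    throw new Error('No token: pass --token or set MYTOOL_TOKEN');
  }
  return promptForToken();
}

function promptForToken(): string {
  // A real CLI would use inquirer/readline here; stubbed for the sketch
  return '(token from interactive prompt)';
}
```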
    
    ## Output Templates
    
    When implementing CLI features, provide:
    1. Command structure (main entry point, subcommands)
    2. Configuration handling (files, env vars, flags)
    3. Core implementation with error handling
    4. Shell completion scripts if applicable
    5. Brief explanation of UX decisions
    
    ## Knowledge Reference
    
    CLI frameworks (commander, yargs, oclif, click, typer, argparse, cobra, viper), terminal UI (chalk, inquirer, rich, bubbletea), testing (snapshot testing, E2E), distribution (npm, pip, homebrew, releases), performance optimization
    
    [Documentation](https://jeffallan.github.io/claude-skills/skills/devops/cli-developer/)
    
  • .claude-plugin/marketplace.json (marketplace, 2222 bytes)
    {
      "name": "fullstack-dev-skills",
      "owner": {
        "name": "jeffallan"
      },
      "metadata": {
        "description": "Comprehensive skill pack for full-stack developers covering frameworks, workflows, and security",
        "version": "0.4.14"
      },
      "plugins": [
        {
          "name": "fullstack-dev-skills",
          "source": "./",
          "description": "66 specialized skills for full-stack development: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing skills. Includes 9 project workflow commands for epic planning, discovery, execution, and retrospectives.",
          "version": "0.4.14",
          "author": {
            "name": "jeffallan",
            "email": "github@jeffallan"
          },
          "homepage": "https://github.com/jeffallan/claude-skills",
          "repository": "https://github.com/jeffallan/claude-skills",
          "license": "MIT",
          "keywords": [
            "claude-skill",
            "claude-code",
            "fullstack",
            "typescript",
            "python",
            "go",
            "rust",
            "cpp",
            "swift",
            "kotlin",
            "csharp",
            "php",
            "java",
            "sql",
            "dart",
            "react",
            "nextjs",
            "vue",
            "angular",
            "react-native",
            "flutter",
            "nestjs",
            "django",
            "fastapi",
            "spring-boot",
            "laravel",
            "rails",
            "dotnet",
            "kubernetes",
            "terraform",
            "graphql",
            "microservices",
            "debugging",
            "monitoring",
            "architecture",
            "security",
            "code-review",
            "testing",
            "playwright",
            "devops",
            "sre",
            "model-invoked",
            "project-management",
            "epic-planning",
            "jira",
            "confluence",
            "sprint",
            "discovery",
            "retrospectives"
          ],
          "category": "development",
          "tags": [
            "fullstack",
            "development",
            "frameworks",
            "security",
            "testing",
            "workflows"
          ],
          "skills": "./skills/",
          "commands": "./commands/"
        }
      ]
    }
    

README

Featured on Trendshift · Mentioned in Awesome Claude Code



Quick Start

/plugin marketplace add jeffallan/claude-skills

Then, install the skills:

/plugin install fullstack-dev-skills@jeffallan

For all installation methods and first steps, see the Quick Start Guide.

Full documentation: jeffallan.github.io/claude-skills

Skills

66 specialized skills across 12 categories covering languages, backend/frontend frameworks, infrastructure, APIs, testing, DevOps, security, data/ML, and platform specialists.

See Skills Guide for the full list, decision trees, and workflow combinations.

Usage Patterns

Context-Aware Activation

Skills activate automatically based on your request:

# Backend Development
"Implement JWT authentication in my NestJS API"
→ Activates: NestJS Expert → Loads: references/authentication.md

# Frontend Development
"Build a React component with Server Components"
→ Activates: React Expert → Loads: references/server-components.md

Multi-Skill Workflows

Complex tasks combine multiple skills:

Feature Development: Feature Forge → Architecture Designer → Fullstack Guardian → Test Master → DevOps Engineer
Bug Investigation:   Debugging Wizard → Framework Expert → Test Master → Code Reviewer
Security Hardening:  Secure Code Guardian → Security Reviewer → Test Master

Context Engineering

Surface and validate Claude's hidden assumptions about your project with /common-ground. See the Common Ground Guide for full documentation.

Project Workflow

The 9 workflow commands manage epics from discovery through retrospectives, integrating with Jira and Confluence. See Workflow Commands Reference for the full command reference and lifecycle diagrams.

[!TIP] Setup: Workflow commands require an Atlassian MCP server. See the Atlassian MCP Setup Guide.

Documentation

Contributing

See Contributing for guidelines on adding skills, writing references, and submitting pull requests.

Changelog

See Changelog for full version history and release notes.

License

MIT License - See LICENSE file for details.

Support

Author

Built by jeffallan (LinkedIn).

Principal Consultant at Synergetic Solutions (LinkedIn).

Fullstack engineering, security engineering, compliance, and technical due diligence.



Built for Claude Code | 9 Workflows | 366 Reference Files | 66 Skills