Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
01 / Skill
mukul975

Anthropic-Cybersecurity-Skills

Quality
9.0

This repository provides over 750 production-grade cybersecurity skills, each mapped to five industry frameworks like MITRE ATT&CK and NIST CSF. It equips AI agents with expert-level guidance for security investigations, threat hunting, and compliance, bridging the cybersecurity workforce gap.

USP

Unique as the only open-source skills library to map every skill across five major industry frameworks, providing unified cross-framework coverage. It's an AI-native knowledge base encoding real practitioner workflows, not just scripts.

Use cases

  • Enhancing AI agent capabilities in cybersecurity
  • Automating security analysis and investigations
  • Threat hunting and incident response
  • Compliance mapping and gap analysis
  • Penetration testing and red teaming

Detected files (8)

  • skills/acquiring-disk-image-with-dd-and-dcfldd/SKILL.md
    ---
    name: acquiring-disk-image-with-dd-and-dcfldd
    description: Create forensically sound bit-for-bit disk images using dd and dcfldd while preserving evidence integrity through
      hash verification.
    domain: cybersecurity
    subdomain: digital-forensics
    tags:
    - forensics
    - disk-imaging
    - evidence-acquisition
    - dd
    - dcfldd
    - hash-verification
    version: '1.0'
    author: mahipal
    license: Apache-2.0
    nist_csf:
    - RS.AN-01
    - RS.AN-03
    - DE.AE-02
    - RS.MA-01
    ---
    
    # Acquiring Disk Image with dd and dcfldd
    
    ## When to Use
    - When you need to create a forensic copy of a suspect drive for investigation
    - During incident response when preserving volatile disk evidence before analysis
    - When law enforcement or legal proceedings require a verified bit-for-bit copy
    - Before performing any destructive analysis on a storage device
    - When acquiring images from physical drives, USB devices, or memory cards
    
    ## Prerequisites
    - Linux-based forensic workstation (SIFT, Kali, or any Linux distro)
    - `dd` (pre-installed on all Linux systems) or `dcfldd` (enhanced forensic version)
    - Write-blocker hardware or software write-blocking configured
    - Destination drive with sufficient storage (larger than source)
    - Root/sudo privileges on the forensic workstation
    - SHA-256 or MD5 hashing utilities (`sha256sum`, `md5sum`)
    
    ## Workflow
    
    ### Step 1: Identify the Target Device and Enable Write Protection
    
    ```bash
    # List all connected block devices to identify the target
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT,MODEL
    
    # Verify the device details
    fdisk -l /dev/sdb
    
    # Enable software write-blocking (if no hardware blocker)
    blockdev --setro /dev/sdb
    
    # Verify read-only status
    blockdev --getro /dev/sdb
    # Output: 1 (means read-only is enabled)
    
    # Alternatively, use udev rules for persistent write-blocking
    echo 'SUBSYSTEM=="block", ATTRS{serial}=="WD-WCAV5H861234", ATTR{ro}="1"' > /etc/udev/rules.d/99-writeblock.rules
    udevadm control --reload-rules
    ```
    
    ### Step 2: Prepare the Destination and Document the Source
    
    ```bash
    # Create case directory structure
    mkdir -p /cases/case-2024-001/{images,hashes,logs,notes}
    
    # Document source drive information
    hdparm -I /dev/sdb > /cases/case-2024-001/notes/source_drive_info.txt
    
    # Record the serial number and model
    smartctl -i /dev/sdb >> /cases/case-2024-001/notes/source_drive_info.txt
    
    # Pre-hash the source device
    sha256sum /dev/sdb | tee /cases/case-2024-001/hashes/source_hash_before.txt
    ```
    
    ### Step 3: Acquire the Image Using dd
    
    ```bash
    # Basic dd acquisition with progress and error handling
    dd if=/dev/sdb of=/cases/case-2024-001/images/evidence.dd \
       bs=4096 \
       conv=noerror,sync \
       status=progress 2>&1 | tee /cases/case-2024-001/logs/dd_acquisition.log
    
    # For compressed images to save space
    dd if=/dev/sdb bs=4096 conv=noerror,sync status=progress | \
       gzip -c > /cases/case-2024-001/images/evidence.dd.gz
    
    # Using dd with a specific count for partial acquisition
    dd if=/dev/sdb of=/cases/case-2024-001/images/first_1gb.dd \
       bs=1M count=1024 status=progress
    ```
    
    ### Step 4: Acquire Using dcfldd (Preferred Forensic Method)
    
    ```bash
    # Install dcfldd if not present
    apt-get install dcfldd
    
    # Acquire image with built-in hashing and split output
    dcfldd if=/dev/sdb \
       of=/cases/case-2024-001/images/evidence.dd \
       hash=sha256,md5 \
       hashwindow=1G \
       hashlog=/cases/case-2024-001/hashes/acquisition_hashes.txt \
       bs=4096 \
       conv=noerror,sync \
       errlog=/cases/case-2024-001/logs/dcfldd_errors.log
    
    # Split large images into manageable segments
    dcfldd if=/dev/sdb \
       of=/cases/case-2024-001/images/evidence.dd \
       hash=sha256 \
       hashlog=/cases/case-2024-001/hashes/split_hashes.txt \
       bs=4096 \
       split=2G \
       splitformat=aa
    
    # Acquire with verification pass
    dcfldd if=/dev/sdb \
       of=/cases/case-2024-001/images/evidence.dd \
       hash=sha256 \
       hashlog=/cases/case-2024-001/hashes/verification.txt \
       vf=/cases/case-2024-001/images/evidence.dd \
       verifylog=/cases/case-2024-001/logs/verify.log
    ```
    
    ### Step 5: Verify Image Integrity
    
    ```bash
    # Hash the acquired image
    sha256sum /cases/case-2024-001/images/evidence.dd | \
       tee /cases/case-2024-001/hashes/image_hash.txt
    
    # Compare source and image hashes
    diff <(sha256sum /dev/sdb | awk '{print $1}') \
         <(sha256sum /cases/case-2024-001/images/evidence.dd | awk '{print $1}')
    
    # If using split images, verify each segment
    sha256sum /cases/case-2024-001/images/evidence.dd.* | \
       tee /cases/case-2024-001/hashes/split_image_hashes.txt
    
    # Re-hash source to confirm no changes occurred
    sha256sum /dev/sdb | tee /cases/case-2024-001/hashes/source_hash_after.txt
    diff /cases/case-2024-001/hashes/source_hash_before.txt \
         /cases/case-2024-001/hashes/source_hash_after.txt
    ```
    
    ### Step 6: Document the Acquisition Process
    
    ```bash
    # Generate acquisition report (heredoc delimiter is unquoted so $(date) expands)
    cat << EOF > /cases/case-2024-001/notes/acquisition_report.txt
    DISK IMAGE ACQUISITION REPORT
    ==============================
    Case Number: 2024-001
    Date/Time: $(date -u +"%Y-%m-%d %H:%M:%S UTC")
    Examiner: [Name]
    
    Source Device: /dev/sdb
    Model: [from hdparm output]
    Serial: [from hdparm output]
    Size: [from fdisk output]
    
    Acquisition Tool: dcfldd v1.9.1
    Block Size: 4096
    Write Blocker: [Hardware/Software model]
    
    Image File: evidence.dd
    Image Hash (SHA-256): [from hash file]
    Source Hash (SHA-256): [from hash file]
    Hash Match: YES/NO
    
    Errors During Acquisition: [from error log]
    EOF
    
    # Compress logs for archival
    tar -czf /cases/case-2024-001/acquisition_package.tar.gz \
       /cases/case-2024-001/hashes/ \
       /cases/case-2024-001/logs/ \
       /cases/case-2024-001/notes/
    ```
    
    ## Key Concepts
    
    | Concept | Description |
    |---------|-------------|
    | Bit-for-bit copy | Exact replica of source including unallocated space and slack space |
    | Write blocker | Hardware or software mechanism preventing writes to evidence media |
    | Hash verification | Cryptographic hash comparing source and image to prove integrity |
    | Block size (bs) | Transfer chunk size affecting speed; 4096 or 64K typical for forensics |
    | conv=noerror,sync | Continue on read errors and pad with zeros to maintain offset alignment |
    | Chain of custody | Documented trail proving evidence has not been tampered with |
    | Split imaging | Breaking large images into smaller files for storage and transport |
    | Raw/dd format | Bit-for-bit image format without metadata container overhead |
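
    The hashwindow behavior from Step 4 (per-window hashes plus a running whole-image hash) can be illustrated in Python. This is a minimal sketch; the window size and sample bytes are illustrative, not tied to any case:

    ```python
    import hashlib
    import io

    def windowed_hashes(stream, window_size):
        """Hash a byte stream in fixed windows (dcfldd's hashwindow idea)
        while also computing the hash of the entire stream."""
        total = hashlib.sha256()
        windows = []
        while True:
            chunk = stream.read(window_size)
            if not chunk:
                break
            total.update(chunk)
            windows.append(hashlib.sha256(chunk).hexdigest())
        return windows, total.hexdigest()

    # 10 bytes hashed in 4-byte windows yields 3 window hashes
    per_window, overall = windowed_hashes(io.BytesIO(b"0123456789"), 4)
    print(len(per_window), overall == hashlib.sha256(b"0123456789").hexdigest())
    ```

    Per-window hashes let an examiner localize a later corruption to a single window instead of re-verifying the whole image, which is why dcfldd logs them alongside the total hash.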
    
    ## Tools & Systems
    
    | Tool | Purpose |
    |------|---------|
    | dd | Standard Unix disk duplication utility for raw imaging |
    | dcfldd | DoD Computer Forensics Laboratory enhanced version of dd with hashing |
    | dc3dd | Another forensic dd variant from the DoD Cyber Crime Center |
    | sha256sum | SHA-256 hash calculation for integrity verification |
    | blockdev | Linux command to set block device read-only mode |
    | hdparm | Drive identification and parameter reporting |
    | smartctl | S.M.A.R.T. data retrieval for drive health and identification |
    | lsblk | Block device enumeration and identification |
    
    ## Common Scenarios
    
    **Scenario 1: Acquiring a Suspect Laptop Hard Drive**
    Connect the drive via a Tableau T35u hardware write-blocker, identify as `/dev/sdb`, use dcfldd with SHA-256 hashing, split into 4GB segments for DVD archival, verify hashes match, document in case notes.
    
    **Scenario 2: Imaging a USB Flash Drive from a Compromised Workstation**
    Use software write-blocking with `blockdev --setro`, acquire with dcfldd using dual MD5 and SHA-256 hashing, keep the image as a single file (flash media is small enough), then verify and store on an encrypted case drive.
    
    **Scenario 3: Remote Acquisition Over Network**
    Use dd piped through netcat or ssh for remote acquisition: `ssh root@remote "dd if=/dev/sda bs=4096" | dd of=remote_image.dd bs=4096`, hash both ends independently to verify transfer integrity.
    
    **Scenario 4: Acquiring from a Failing Drive**
    Use `ddrescue` first to recover readable sectors, then use dd with `conv=noerror,sync` to fill gaps with zeros, document which sectors were unreadable in the error log.
    
    ## Output Format
    
    ```
    Acquisition Summary:
      Source:       /dev/sdb (500GB Western Digital WD5000AAKX)
      Destination:  /cases/case-2024-001/images/evidence.dd
      Tool:         dcfldd 1.9.1
      Block Size:   4096 bytes
      Duration:     2h 15m 32s
      Bytes Copied: 500,107,862,016
      Errors:       0 bad sectors
      Source SHA-256:  a3f2b8c9d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1
      Image SHA-256:   a3f2b8c9d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1
      Verification:    PASSED - Hashes match
    ```
    
  • skills/analyzing-active-directory-acl-abuse/SKILL.md
    ---
    name: analyzing-active-directory-acl-abuse
    description: Detect dangerous ACL misconfigurations in Active Directory using ldap3 to identify GenericAll, WriteDACL, and
      WriteOwner abuse paths
    domain: cybersecurity
    subdomain: identity-security
    tags:
    - active-directory
    - acl-abuse
    - ldap
    - privilege-escalation
    version: '1.0'
    author: mahipal
    license: Apache-2.0
    nist_csf:
    - PR.AA-01
    - PR.AA-05
    - PR.AA-06
    ---
    
    
    # Analyzing Active Directory ACL Abuse
    
    ## Overview
    
    Active Directory Access Control Lists (ACLs) define permissions on AD objects through Discretionary Access Control Lists (DACLs) containing Access Control Entries (ACEs). Misconfigured ACEs can grant non-privileged users dangerous permissions such as GenericAll (full control), WriteDACL (modify permissions), WriteOwner (take ownership), and GenericWrite (modify attributes) on sensitive objects like Domain Admins groups, domain controllers, or GPOs.
    
    This skill uses the ldap3 Python library to connect to a Domain Controller, query objects with their nTSecurityDescriptor attribute, parse the binary security descriptor into SDDL (Security Descriptor Definition Language) format, and identify ACEs that grant dangerous permissions to non-administrative principals. These misconfigurations are the basis for ACL-based attack paths discovered by tools like BloodHound.
    
    
    ## When to Use
    
    - When investigating security incidents that require analyzing Active Directory ACL abuse
    - When building detection rules or threat hunting queries for this domain
    - When SOC analysts need structured procedures for this analysis type
    - When validating security monitoring coverage for related attack techniques
    
    ## Prerequisites
    
    - Python 3.9 or later with ldap3 library (`pip install ldap3`)
    - Domain user credentials with read access to AD objects
    - Network connectivity to Domain Controller on port 389 (LDAP) or 636 (LDAPS)
    - Understanding of Active Directory security model and SDDL format
    
    ## Steps
    
    1. **Connect to Domain Controller**: Establish an LDAP connection using ldap3 with NTLM or simple authentication. Use LDAPS (port 636) for encrypted connections in production.
    
    2. **Query target objects**: Search the target OU or entire domain for objects including users, groups, computers, and OUs. Request the `nTSecurityDescriptor`, `distinguishedName`, `objectClass`, and `sAMAccountName` attributes.
    
    3. **Parse security descriptors**: Convert the binary nTSecurityDescriptor into its SDDL string representation. Parse each ACE in the DACL to extract the trustee SID, access mask, and ACE type (allow/deny).
    
    4. **Resolve SIDs to principals**: Map security identifiers (SIDs) to human-readable account names using LDAP lookups against the domain. Identify well-known SIDs for built-in groups.
    
    5. **Check for dangerous permissions**: Compare each ACE's access mask against dangerous permission bitmasks: GenericAll (0x10000000), WriteDACL (0x00040000), WriteOwner (0x00080000), GenericWrite (0x40000000), and WriteProperty for specific extended rights.
    
    6. **Filter non-admin trustees**: Exclude expected administrative trustees (Domain Admins, Enterprise Admins, SYSTEM, Administrators) and flag ACEs where non-privileged users or groups hold dangerous permissions.
    
    7. **Map attack paths**: For each finding, document the potential attack chain (e.g., GenericAll on user allows password reset, WriteDACL on group allows adding self to group).
    
    8. **Generate remediation report**: Output a JSON report with all dangerous ACEs, affected objects, non-admin trustees, and recommended remediation steps.
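
    Steps 5 and 6 can be sketched as a standalone check, assuming the DACL has already been parsed into ACE dictionaries (the ldap3 query and security-descriptor parsing from steps 1-4 are omitted, and the ACE fields shown are illustrative):

    ```python
    # Dangerous access-mask bits (step 5)
    DANGEROUS_MASKS = {
        "GenericAll": 0x10000000,
        "GenericWrite": 0x40000000,
        "WriteOwner": 0x00080000,
        "WriteDACL": 0x00040000,
    }
    # Trustees expected to hold broad rights (step 6)
    ADMIN_TRUSTEES = {"Domain Admins", "Enterprise Admins", "SYSTEM", "Administrators"}

    def find_dangerous_aces(aces):
        """Flag allow-ACEs granting dangerous rights to non-admin trustees."""
        findings = []
        for ace in aces:
            if ace["type"] != "ACCESS_ALLOWED" or ace["trustee"] in ADMIN_TRUSTEES:
                continue
            for perm, mask in DANGEROUS_MASKS.items():
                if ace["access_mask"] & mask:
                    findings.append({"trustee": ace["trustee"], "permission": perm,
                                     "target": ace["target"]})
        return findings

    sample_aces = [
        {"type": "ACCESS_ALLOWED", "trustee": "helpdesk-team",
         "access_mask": 0x10000000, "target": "CN=Domain Admins"},
        {"type": "ACCESS_ALLOWED", "trustee": "Domain Admins",
         "access_mask": 0x10000000, "target": "CN=Domain Admins"},
    ]
    print(find_dangerous_aces(sample_aces))
    ```

    The second ACE is suppressed because Domain Admins is an expected trustee; only the helpdesk-team GenericAll grant is reported.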
    
    ## Expected Output
    
    ```json
    {
      "domain": "corp.example.com",
      "objects_scanned": 1247,
      "dangerous_aces_found": 8,
      "findings": [
        {
          "severity": "critical",
          "target_object": "CN=Domain Admins,CN=Users,DC=corp,DC=example,DC=com",
          "target_type": "group",
          "trustee": "CORP\\helpdesk-team",
          "permission": "GenericAll",
          "access_mask": "0x10000000",
          "ace_type": "ACCESS_ALLOWED",
          "attack_path": "GenericAll on Domain Admins group allows adding arbitrary members",
          "remediation": "Remove GenericAll ACE for helpdesk-team on Domain Admins"
        }
      ]
    }
    ```
    
  • skills/analyzing-apt-group-with-mitre-navigator/SKILL.md
    ---
    name: analyzing-apt-group-with-mitre-navigator
    description: Analyze advanced persistent threat (APT) group techniques using MITRE ATT&CK Navigator to create layered heatmaps
      of adversary TTPs for detection gap analysis and threat-informed defense.
    domain: cybersecurity
    subdomain: threat-intelligence
    tags:
    - mitre-attack
    - navigator
    - apt
    - threat-actor
    - ttp-analysis
    - heatmap
    - detection-gap
    - threat-intelligence
    version: '1.0'
    author: mahipal
    license: Apache-2.0
    d3fend_techniques:
    - Executable Denylisting
    - Execution Isolation
    - File Metadata Consistency Validation
    - Content Format Conversion
    - File Content Analysis
    nist_csf:
    - ID.RA-01
    - ID.RA-05
    - DE.CM-01
    - DE.AE-02
    ---
    # Analyzing APT Group with MITRE ATT&CK Navigator
    
    ## Overview
    
    MITRE ATT&CK Navigator is a web-based tool for annotating and exploring ATT&CK matrices, enabling analysts to visualize threat actor technique coverage, compare multiple APT groups, identify detection gaps, and build threat-informed defense strategies. This skill covers querying ATT&CK data programmatically, mapping APT group TTPs to Navigator layers, creating multi-layer overlays for gap analysis, and generating actionable intelligence reports for detection engineering teams.
    
    
    ## When to Use
    
    - When investigating security incidents that require analyzing an APT group with MITRE Navigator
    - When building detection rules or threat hunting queries for this domain
    - When SOC analysts need structured procedures for this analysis type
    - When validating security monitoring coverage for related attack techniques
    
    ## Prerequisites
    
    - Python 3.9+ with `attackcti`, `mitreattack-python`, `stix2`, `requests` libraries
    - ATT&CK Navigator (https://mitre-attack.github.io/attack-navigator/) or local deployment
    - Understanding of the ATT&CK Enterprise matrix: 14 tactics, 200+ techniques and their sub-techniques
    - Access to threat intelligence reports or MISP/OpenCTI for threat actor data
    - Familiarity with STIX 2.1 Intrusion Set and Attack Pattern objects
    
    ## Key Concepts
    
    ### ATT&CK Navigator Layers
    
    Navigator layers are JSON files that annotate ATT&CK techniques with scores, colors, comments, and metadata. Each layer can represent a single APT group's technique usage, a detection capability map, or a combined overlay. Layer version 4.5 supports enterprise-attack, mobile-attack, and ics-attack domains with filtering by platform (Windows, Linux, macOS, Cloud, Azure AD, Office 365, SaaS).
    
    ### APT Group Profiles in ATT&CK
    
    ATT&CK catalogs over 140 threat groups with documented technique usage. Each group profile includes aliases, targeted sectors, associated campaigns, software used, and technique mappings with procedure-level detail. Groups are identified by G-codes (e.g., G0016 for APT29, G0007 for APT28, G0032 for Lazarus Group).
    
    ### Multi-Layer Analysis
    
    The Navigator supports loading multiple layers simultaneously, allowing analysts to overlay threat actor TTPs against detection coverage to identify gaps, compare multiple APT groups to find common techniques worth prioritizing, and track technique coverage changes over time.
    
    ## Workflow
    
    ### Step 1: Query ATT&CK Data for APT Group
    
    ```python
    from attackcti import attack_client
    import json
    
    lift = attack_client()
    
    # Get all threat groups
    groups = lift.get_groups()
    print(f"Total ATT&CK groups: {len(groups)}")
    
    # Find APT29 (Cozy Bear / Midnight Blizzard)
    apt29 = next((g for g in groups if g.get('name') == 'APT29'), None)
    if apt29:
        print(f"Group: {apt29['name']}")
        print(f"Aliases: {apt29.get('aliases', [])}")
        print(f"Description: {apt29.get('description', '')[:300]}")
    
    # Get techniques used by APT29 (G0016)
    techniques = lift.get_techniques_used_by_group("G0016")
    print(f"APT29 uses {len(techniques)} techniques")
    
    technique_map = {}
    for tech in techniques:
        tech_id = ""
        for ref in tech.get("external_references", []):
            if ref.get("source_name") == "mitre-attack":
                tech_id = ref.get("external_id", "")
                break
        if tech_id:
            tactics = [p.get("phase_name", "") for p in tech.get("kill_chain_phases", [])]
            technique_map[tech_id] = {
                "name": tech.get("name", ""),
                "tactics": tactics,
                "description": tech.get("description", "")[:500],
                "platforms": tech.get("x_mitre_platforms", []),
                "data_sources": tech.get("x_mitre_data_sources", []),
            }
    ```
    
    ### Step 2: Generate Navigator Layer JSON
    
    ```python
    def create_navigator_layer(group_name, technique_map, color="#ff6666"):
        techniques_list = []
        for tech_id, info in technique_map.items():
            for tactic in info["tactics"]:
                techniques_list.append({
                    "techniqueID": tech_id,
                    "tactic": tactic,
                    "color": color,
                    "comment": info["name"],
                    "enabled": True,
                    "score": 100,
                    "metadata": [
                        {"name": "group", "value": group_name},
                        {"name": "platforms", "value": ", ".join(info["platforms"])},
                    ],
                })
    
        layer = {
            "name": f"{group_name} TTP Coverage",
            "versions": {"attack": "16.1", "navigator": "5.1.0", "layer": "4.5"},
            "domain": "enterprise-attack",
            "description": f"Techniques attributed to {group_name}",
            "filters": {
                "platforms": ["Linux", "macOS", "Windows", "Cloud",
                              "Azure AD", "Office 365", "SaaS", "Google Workspace"]
            },
            "sorting": 0,
            "layout": {
                "layout": "side", "aggregateFunction": "average",
                "showID": True, "showName": True,
                "showAggregateScores": False, "countUnscored": False,
            },
            "hideDisabled": False,
            "techniques": techniques_list,
            "gradient": {"colors": ["#ffffff", color], "minValue": 0, "maxValue": 100},
            "legendItems": [
                {"label": f"Used by {group_name}", "color": color},
                {"label": "Not observed", "color": "#ffffff"},
            ],
            "showTacticRowBackground": True,
            "tacticRowBackground": "#dddddd",
            "selectTechniquesAcrossTactics": True,
            "selectSubtechniquesWithParent": False,
            "selectVisibleTechniques": False,
        }
        return layer
    
    layer = create_navigator_layer("APT29", technique_map)
    with open("apt29_layer.json", "w") as f:
        json.dump(layer, f, indent=2)
    print("[+] Layer saved: apt29_layer.json")
    ```
    
    ### Step 3: Compare Multiple APT Groups
    
    ```python
    groups_to_compare = {"G0016": "APT29", "G0007": "APT28", "G0032": "Lazarus Group"}
    group_techniques = {}
    
    for gid, gname in groups_to_compare.items():
        techs = lift.get_techniques_used_by_group(gid)
        tech_ids = set()
        for t in techs:
            for ref in t.get("external_references", []):
                if ref.get("source_name") == "mitre-attack":
                    tech_ids.add(ref.get("external_id", ""))
        group_techniques[gname] = tech_ids
    
    common_to_all = set.intersection(*group_techniques.values())
    print(f"Techniques common to all groups: {len(common_to_all)}")
    for tid in sorted(common_to_all):
        print(f"  {tid}")
    
    for gname, techs in group_techniques.items():
        others = set.union(*[t for n, t in group_techniques.items() if n != gname])
        unique = techs - others
        print(f"\nUnique to {gname}: {len(unique)} techniques")
    ```
    
    ### Step 4: Detection Gap Analysis with Layer Overlay
    
    ```python
    # Define your current detection capabilities
    detected_techniques = {
        "T1059", "T1059.001", "T1071", "T1071.001", "T1566", "T1566.001",
        "T1547", "T1547.001", "T1053", "T1053.005", "T1078", "T1027",
    }
    
    actor_techniques = set(technique_map.keys())
    covered = actor_techniques.intersection(detected_techniques)
    gaps = actor_techniques - detected_techniques
    
    print("=== Detection Gap Analysis for APT29 ===")
    print(f"Actor techniques: {len(actor_techniques)}")
    print(f"Detected: {len(covered)} ({len(covered)/len(actor_techniques)*100:.0f}%)")
    print(f"Gaps: {len(gaps)} ({len(gaps)/len(actor_techniques)*100:.0f}%)")
    
    # Create gap layer (red = undetected, green = detected)
    gap_techniques = []
    for tech_id in actor_techniques:
        info = technique_map.get(tech_id, {})
        for tactic in info.get("tactics", [""]):
            color = "#66ff66" if tech_id in detected_techniques else "#ff3333"
            gap_techniques.append({
                "techniqueID": tech_id,
                "tactic": tactic,
                "color": color,
                "comment": f"{'DETECTED' if tech_id in detected_techniques else 'GAP'}: {info.get('name', '')}",
                "enabled": True,
                "score": 100 if tech_id in detected_techniques else 0,
            })
    
    gap_layer = {
        "name": "APT29 Detection Gap Analysis",
        "versions": {"attack": "16.1", "navigator": "5.1.0", "layer": "4.5"},
        "domain": "enterprise-attack",
        "description": "Green = detected, Red = gap",
        "techniques": gap_techniques,
        "gradient": {"colors": ["#ff3333", "#66ff66"], "minValue": 0, "maxValue": 100},
        "legendItems": [
            {"label": "Detected", "color": "#66ff66"},
            {"label": "Detection Gap", "color": "#ff3333"},
        ],
    }
    with open("apt29_gap_layer.json", "w") as f:
        json.dump(gap_layer, f, indent=2)
    ```
    
    ### Step 5: Tactic Breakdown Analysis
    
    ```python
    from collections import defaultdict
    
    tactic_breakdown = defaultdict(list)
    for tech_id, info in technique_map.items():
        for tactic in info["tactics"]:
            tactic_breakdown[tactic].append({"id": tech_id, "name": info["name"]})
    
    tactic_order = [
        "reconnaissance", "resource-development", "initial-access",
        "execution", "persistence", "privilege-escalation",
        "defense-evasion", "credential-access", "discovery",
        "lateral-movement", "collection", "command-and-control",
        "exfiltration", "impact",
    ]
    
    print("\n=== APT29 Tactic Breakdown ===")
    for tactic in tactic_order:
        techs = tactic_breakdown.get(tactic, [])
        if techs:
            print(f"\n{tactic.upper()} ({len(techs)} techniques):")
            for t in techs:
                print(f"  {t['id']}: {t['name']}")
    ```
    
    ## Validation Criteria
    
    - ATT&CK data queried successfully via TAXII server
    - APT group mapped to all documented techniques with procedure examples
    - Navigator layer JSON validates and renders correctly in ATT&CK Navigator
    - Multi-layer overlay shows threat actor vs. detection coverage
    - Detection gap analysis identifies unmonitored techniques with data source recommendations
    - Cross-group comparison reveals shared and unique TTPs
    - Output is actionable for detection engineering prioritization
    
    ## References
    
    - [MITRE ATT&CK Navigator](https://mitre-attack.github.io/attack-navigator/)
    - [ATT&CK Groups](https://attack.mitre.org/groups/)
    - [attackcti Python Library](https://github.com/OTRF/ATTACK-Python-Client)
    - [Navigator Layer Format v4.5](https://github.com/mitre-attack/attack-navigator/blob/master/layers/LAYERFORMATv4_5.md)
    - [CISA Best Practices for MITRE ATT&CK Mapping](https://www.cisa.gov/sites/default/files/2023-01/Best%20Practices%20for%20MITRE%20ATTCK%20Mapping.pdf)
    - [Picus: Leverage MITRE ATT&CK for Threat Intelligence](https://www.picussecurity.com/how-to-leverage-the-mitre-attack-framework-for-threat-intelligence)
    
  • skills/analyzing-android-malware-with-apktool/SKILL.md
    ---
    name: analyzing-android-malware-with-apktool
    description: Perform static analysis of Android APK malware samples using apktool for decompilation, jadx for Java source
      recovery, and androguard for permission analysis, manifest inspection, and suspicious API call detection.
    domain: cybersecurity
    subdomain: malware-analysis
    tags:
    - Android
    - APK
    - apktool
    - jadx
    - androguard
    - mobile-malware
    - static-analysis
    - reverse-engineering
    version: '1.0'
    author: mahipal
    license: Apache-2.0
    nist_csf:
    - DE.AE-02
    - RS.AN-03
    - ID.RA-01
    - DE.CM-01
    ---
    
    # Analyzing Android Malware with Apktool
    
    ## Overview
    
    Android malware distributed as APK files can be statically analyzed to extract permissions, activities, services, broadcast receivers, and suspicious API calls without executing the sample. This skill uses androguard for programmatic APK analysis, identifying dangerous permission combinations, obfuscated code patterns, dynamic code loading, reflection-based API calls, and network communication indicators.
    
    
    ## When to Use
    
    - When investigating security incidents that require analyzing Android malware with apktool
    - When building detection rules or threat hunting queries for this domain
    - When SOC analysts need structured procedures for this analysis type
    - When validating security monitoring coverage for related attack techniques
    
    ## Prerequisites
    
    - Python 3.9+ with `androguard`
    - apktool (for resource decompilation)
    - jadx (for Java source recovery, optional)
    - Isolated analysis environment (VM or sandbox)
    - Sample APK files for analysis
    
    ## Steps
    
    1. Parse APK with androguard to extract manifest metadata
    2. Enumerate requested permissions and flag dangerous combinations
    3. List activities, services, receivers, and providers from manifest
    4. Scan for suspicious API calls (reflection, crypto, SMS, telephony)
    5. Detect dynamic code loading patterns (DexClassLoader, Runtime.exec)
    6. Extract hardcoded URLs, IPs, and C2 indicators from strings
    7. Generate risk assessment report with MITRE ATT&CK mobile mappings
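
    Step 2 can be sketched without a live sample, assuming the permission list has already been extracted (for example with androguard); the risky pairings below are illustrative examples, not an exhaustive ruleset:

    ```python
    # Illustrative risky permission pairings (not exhaustive)
    DANGEROUS_COMBOS = [
        {"android.permission.RECEIVE_SMS", "android.permission.SEND_SMS"},
        {"android.permission.READ_CONTACTS", "android.permission.INTERNET"},
        {"android.permission.RECORD_AUDIO", "android.permission.INTERNET"},
    ]

    def flag_permission_combos(permissions):
        """Return each risky combination fully present in the APK's permissions."""
        requested = set(permissions)
        return [sorted(combo) for combo in DANGEROUS_COMBOS if combo <= requested]

    perms = [
        "android.permission.INTERNET",
        "android.permission.READ_CONTACTS",
        "android.permission.RECEIVE_SMS",
    ]
    print(flag_permission_combos(perms))
    ```

    Only the READ_CONTACTS + INTERNET pairing is fully present here; RECEIVE_SMS alone does not trigger the SMS combination.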
    
    ## Expected Output
    
    - JSON report with permission analysis, component listing, suspicious API calls, network indicators, and risk score
    - Extracted strings and potential IOCs from the APK
    
  • skills/analyzing-api-gateway-access-logs/SKILL.md
    ---
    name: analyzing-api-gateway-access-logs
    description: 'Parses API Gateway access logs (AWS API Gateway, Kong, Nginx) to detect BOLA/IDOR attacks, rate limit bypass,
      credential scanning, and injection attempts. Uses pandas for statistical analysis of request patterns and anomaly detection.
      Use when investigating API abuse or building API-specific threat detection rules.
    
      '
    domain: cybersecurity
    subdomain: security-operations
    tags:
    - analyzing
    - api
    - gateway
    - access
    version: '1.0'
    author: mahipal
    license: Apache-2.0
    nist_csf:
    - DE.CM-01
    - RS.MA-01
    - GV.OV-01
    - DE.AE-02
    ---
    
    # Analyzing API Gateway Access Logs
    
    
    ## When to Use
    
    - When investigating security incidents that require analyzing API gateway access logs
    - When building detection rules or threat hunting queries for this domain
    - When SOC analysts need structured procedures for this analysis type
    - When validating security monitoring coverage for related attack techniques
    
    ## Prerequisites
    
    - Familiarity with security operations concepts and tools
    - Access to a test or lab environment for safe execution
    - Python 3.8+ with required dependencies installed
    - Appropriate authorization for any testing activities
    
    ## Instructions
    
    Parse API gateway access logs to identify attack patterns including broken object-level
    authorization (BOLA), excessive data exposure, and injection attempts.
    
    ```python
    import pandas as pd
    
    df = pd.read_json("api_gateway_logs.json", lines=True)
    # Detect BOLA: same user accessing many different resource IDs
    bola = df.groupby(["user_id", "endpoint"]).agg(
        unique_ids=("resource_id", "nunique")).reset_index()
    suspicious = bola[bola["unique_ids"] > 50]
    ```
    
    Key detection patterns:
    1. BOLA/IDOR: sequential resource ID enumeration
    2. Rate limit bypass via header manipulation
    3. Credential scanning (401 surges from single source)
    4. SQL/NoSQL injection in query parameters
    5. Unusual HTTP methods (DELETE, PATCH) on read-only endpoints
    
    ## Examples
    
    ```python
    # Detect 401 surges indicating credential scanning
    auth_failures = df[df["status_code"] == 401]
    scanner_ips = auth_failures.groupby("source_ip").size()
    scanners = scanner_ips[scanner_ips > 100]
    ```
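    
    The same grouped approach covers pattern 5 (unusual HTTP methods on read-only endpoints). The column names below (`endpoint`, `http_method`) and the read-only endpoint list are illustrative assumptions about the log schema, not a fixed format:
    
    ```python
    import pandas as pd
    
    # Hypothetical request log: one row per request
    df = pd.DataFrame({
        "endpoint":    ["/api/users", "/api/users", "/api/users", "/api/reports"],
        "http_method": ["GET", "GET", "DELETE", "PATCH"],
    })
    
    READ_ONLY = {"/api/users"}                          # endpoints expected to serve only reads
    WRITE_METHODS = {"POST", "PUT", "PATCH", "DELETE"}
    
    # Flag write-style methods hitting endpoints that should only be read
    suspicious = df[df["endpoint"].isin(READ_ONLY)
                    & df["http_method"].isin(WRITE_METHODS)]
    print(suspicious)  # the single DELETE /api/users row
    ```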
    
  • skills/analyzing-azure-activity-logs-for-threats/SKILL.mdskill
    Show content (2368 bytes)
    ---
    name: analyzing-azure-activity-logs-for-threats
    description: 'Queries Azure Monitor activity logs and sign-in logs via azure-monitor-query to detect suspicious administrative
      operations, impossible travel, privilege escalation, and resource modifications. Builds KQL queries for threat hunting in
      Azure environments. Use when investigating suspicious Azure tenant activity or building cloud SIEM detections.
    
      '
    domain: cybersecurity
    subdomain: security-operations
    tags:
    - azure
    - cloud-security
    - azure-monitor
    - kql
    - threat-hunting
    - activity-logs
    version: '1.0'
    author: mahipal
    license: Apache-2.0
    nist_csf:
    - DE.CM-01
    - RS.MA-01
    - GV.OV-01
    - DE.AE-02
    ---
    
    # Analyzing Azure Activity Logs for Threats
    
    
    ## When to Use
    
    - When investigating security incidents that require analyzing Azure activity logs for threats
    - When building detection rules or threat hunting queries for this domain
    - When SOC analysts need structured procedures for this analysis type
    - When validating security monitoring coverage for related attack techniques
    
    ## Prerequisites
    
    - Familiarity with security operations concepts and tools
    - Access to a test or lab environment for safe execution
    - Python 3.8+ with required dependencies installed
    - Appropriate authorization for any testing activities
    
    ## Instructions
    
    Use azure-monitor-query to execute KQL queries against Azure Log Analytics workspaces,
    detecting suspicious admin operations and sign-in anomalies.
    
    ```python
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient
    from datetime import timedelta
    
    credential = DefaultAzureCredential()
    client = LogsQueryClient(credential)
    
    response = client.query_workspace(
        workspace_id="WORKSPACE_ID",
        query="AzureActivity | where OperationNameValue has 'MICROSOFT.AUTHORIZATION/ROLEASSIGNMENTS/WRITE' | take 10",
        timespan=timedelta(hours=24),
    )
    ```
    
    Key detection queries:
    1. Role assignment changes (privilege escalation)
    2. Resource group and subscription modifications
    3. Key vault secret access from new IPs
    4. Network security group rule changes
    5. Conditional access policy modifications
    
    ## Examples
    
    ```python
    # Detect new Global Admin role assignments
    query = '''
    AuditLogs
    | where OperationName == "Add member to role"
    | where TargetResources[0].modifiedProperties[0].newValue has "Global Administrator"
    '''
    ```
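    
    The "impossible travel" detection mentioned in the description can be approximated offline once sign-in timestamps and geolocations are exported. This sketch is a generic haversine speed check, not an Azure API; the input tuples and the 900 km/h threshold (roughly airliner cruise speed) are assumptions to tune:
    
    ```python
    from datetime import datetime
    from math import radians, sin, cos, asin, sqrt
    
    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometres."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 6371 * 2 * asin(sqrt(a))
    
    def impossible_travel(signins, max_kmh=900):
        """Flag consecutive sign-ins whose implied speed exceeds max_kmh.
        signins: time-sorted list of (timestamp, lat, lon) tuples."""
        flagged = []
        for (t1, la1, lo1), (t2, la2, lo2) in zip(signins, signins[1:]):
            hours = (t2 - t1).total_seconds() / 3600
            if hours > 0 and haversine_km(la1, lo1, la2, lo2) / hours > max_kmh:
                flagged.append((t1, t2))
        return flagged
    
    # Sign-in from New York, then Sydney one hour later: flagged
    events = [
        (datetime(2026, 5, 7, 9, 0), 40.7, -74.0),    # New York
        (datetime(2026, 5, 7, 10, 0), -33.9, 151.2),  # Sydney
    ]
    print(impossible_travel(events))
    ```
    
    Tune the threshold to tolerate benign VPN egress changes, which also produce large apparent jumps.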
    
  • skills/analyzing-bootkit-and-rootkit-samples/SKILL.mdskill
    Show content (12966 bytes)
    ---
    name: analyzing-bootkit-and-rootkit-samples
    description: 'Analyzes bootkit and advanced rootkit malware that infects the Master Boot Record (MBR), Volume Boot Record
      (VBR), or UEFI firmware to gain persistence below the operating system. Covers boot sector analysis, UEFI module inspection,
      and anti-rootkit detection techniques. Activates for requests involving bootkit analysis, MBR malware investigation, UEFI
      persistence analysis, or pre-OS malware detection.
    
      '
    domain: cybersecurity
    subdomain: malware-analysis
    tags:
    - malware
    - bootkit
    - rootkit
    - UEFI
    - MBR-analysis
    version: 1.0.0
    author: mahipal
    license: Apache-2.0
    nist_csf:
    - DE.AE-02
    - RS.AN-03
    - ID.RA-01
    - DE.CM-01
    ---
    
    # Analyzing Bootkit and Rootkit Samples
    
    ## When to Use
    
    - A system shows signs of compromise that persist through OS reinstallation
    - Antivirus and EDR are unable to detect malware despite clear evidence of compromise
    - UEFI Secure Boot has been disabled or shows integrity violations
    - Memory forensics reveals rootkit behavior (hidden processes, hooked system calls)
    - Investigating nation-state level threats known to deploy bootkits (APT28, APT41, Equation Group)
    
    **Do not use** for standard user-mode malware; bootkits and rootkits operate at a fundamentally different level requiring specialized analysis techniques.
    
    ## Prerequisites
    
    - Disk imaging tools (dd, FTK Imager) for acquiring MBR/VBR sectors
    - UEFITool for UEFI firmware volume analysis and module extraction
    - chipsec for hardware-level firmware security assessment
    - Ghidra with x86 real-mode and 16-bit support for MBR code analysis
    - Volatility 3 for kernel-level rootkit artifact detection
    - Bootable Linux live USB for offline system analysis
    
    ## Workflow
    
    ### Step 1: Acquire Boot Sectors and Firmware
    
    Extract MBR, VBR, and UEFI firmware for offline analysis:
    
    ```bash
    # Acquire MBR (first 512 bytes of disk)
    dd if=/dev/sda of=mbr.bin bs=512 count=1
    
    # Acquire first track (usually contains bootkit code beyond MBR)
    dd if=/dev/sda of=first_track.bin bs=512 count=63
    
    # Acquire VBR (Volume Boot Record - first sector of partition)
    dd if=/dev/sda1 of=vbr.bin bs=512 count=1
    
    # Acquire UEFI System Partition
    mkdir /mnt/efi
    mount /dev/sda1 /mnt/efi
    cp -r /mnt/efi/EFI /analysis/efi_backup/
    
    # Dump UEFI firmware (requires chipsec or flashrom)
    # Using chipsec:
    python chipsec_util.py spi dump firmware.rom
    
    # Using flashrom:
    flashrom -p internal -r firmware.rom
    
    # Verify firmware dump integrity
    sha256sum firmware.rom
    ```
    
    ### Step 2: Analyze MBR/VBR for Bootkit Code
    
    Examine boot sector code for malicious modifications:
    
    ```bash
    # Disassemble MBR code (16-bit real mode)
    ndisasm -b16 mbr.bin > mbr_disasm.txt
    
    # Compare boot sectors with known-good references
    # Standard Windows MBR begins: 33 C0 8E D0 BC 00 7C (XOR AX,AX; MOV SS,AX; MOV SP,7C00h)
    # Standard NTFS VBR begins: EB 52 90 "NTFS" (JMP SHORT; NOP; OEM ID)
    
    python3 << 'PYEOF'
    with open("mbr.bin", "rb") as f:
        mbr = f.read()
    
    # Check MBR signature (bytes 510-511 should be 0x55AA)
    if mbr[510:512] == b'\x55\xAA':
        print("[*] Valid MBR signature (0x55AA)")
    else:
        print("[!] Invalid MBR signature")
    
    # Check for known bootkit signatures
    bootkit_sigs = {
        b'\xE8\x00\x00\x5E\x81\xEE': "TDL4/Alureon bootkit",
        b'\xFA\x33\xC0\x8E\xD0\xBC\x00\x7C\x8B\xF4\x50\x07': "Standard Windows MBR (clean)",
    b'\xEB\x52\x90\x4E\x54\x46\x53': "Standard NTFS VBR (clean)",
    }
    
    for sig, name in bootkit_sigs.items():
        if sig in mbr:
            print(f"[{'!' if 'clean' not in name else '*'}] Signature match: {name}")
    
    # Check partition table entries
    print("\nPartition Table:")
    for i in range(4):
        offset = 446 + (i * 16)
        entry = mbr[offset:offset+16]
        if entry != b'\x00' * 16:
            boot_flag = "Active" if entry[0] == 0x80 else "Inactive"
            part_type = entry[4]
            start_lba = int.from_bytes(entry[8:12], 'little')
            size_lba = int.from_bytes(entry[12:16], 'little')
            print(f"  Partition {i+1}: Type=0x{part_type:02X} {boot_flag} Start=LBA {start_lba} Size={size_lba} sectors")
    PYEOF
    ```
    
    ### Step 3: Analyze UEFI Firmware for Implants
    
    Inspect UEFI firmware volumes for unauthorized modules:
    
    ```bash
    # Extract UEFI firmware components with UEFITool
    # GUI: Open firmware.rom -> Inspect firmware volumes
    # CLI:
    UEFIExtract firmware.rom all
    
    # List all DXE drivers (most common target for UEFI implants)
    find firmware.rom.dump -name "*.efi" -exec file {} \;
    
    # Compare against known-good firmware module list
    # Each UEFI module has a GUID - compare against vendor baseline
    
    # Verify Secure Boot configuration
    python chipsec_main.py -m common.secureboot.variables
    
    # Check SPI flash write protection
    python chipsec_main.py -m common.bios_wp
    
    # Check for known UEFI malware patterns
    yara -r uefi_malware.yar firmware.rom
    ```
    
    ```
    Known UEFI Bootkit Detection Points:
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
    LoJax (APT28):
      - Modified SPI flash
      - Added DXE driver that drops agent to Windows
      - Persists through OS reinstall and disk replacement
    
    BlackLotus:
      - Exploits CVE-2022-21894 to bypass Secure Boot
      - Modifies EFI System Partition bootloader
      - Installs kernel driver during boot
    
    CosmicStrand:
      - Modifies CORE_DXE firmware module
      - Hooks kernel initialization during boot
      - Drops shellcode into Windows kernel memory
    
    MoonBounce:
      - SPI flash implant in CORE_DXE module
      - Modified GetVariable() function
      - Deploys user-mode implant through boot chain
    
    ESPecter:
      - Modifies Windows Boot Manager on ESP
      - Patches winload.efi to disable DSE
      - Loads unsigned kernel driver
    ```
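    
    Encrypted or packed payloads embedded by these implants push byte entropy toward 8 bits/byte, so extracted .efi modules can be triaged with a plain Shannon entropy check. A minimal sketch; the 7.5 threshold is a heuristic assumption, not a hard rule:
    
    ```python
    from collections import Counter
    from math import log2
    
    def shannon_entropy(data: bytes) -> float:
        """Shannon entropy in bits per byte: 0.0 (constant) to 8.0 (uniform)."""
        if not data:
            return 0.0
        n = len(data)
        return -sum((c / n) * log2(c / n) for c in Counter(data).values()) + 0.0
    
    def looks_packed(module: bytes, threshold=7.5) -> bool:
        # Heuristic: plain UEFI driver code usually sits well below 7.5 bits/byte
        return shannon_entropy(module) > threshold
    
    print(shannon_entropy(b"\x00" * 4096))          # constant bytes: 0.0
    print(shannon_entropy(bytes(range(256)) * 16))  # uniform bytes: 8.0
    ```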
    
    ### Step 4: Detect Kernel-Level Rootkit Behavior
    
    Analyze the running system for rootkit artifacts:
    
    ```bash
    # Memory forensics for rootkit detection
    # SSDT hook detection
    vol3 -f memory.dmp windows.ssdt | grep -v "ntoskrnl\|win32k"
    
    # Hidden processes (DKOM)
    vol3 -f memory.dmp windows.psscan > psscan.txt
    vol3 -f memory.dmp windows.pslist > pslist.txt
    # Diff to find hidden processes
    
    # Kernel callback registration (rootkits register callbacks for filtering)
    vol3 -f memory.dmp windows.callbacks
    
    # Driver analysis
    vol3 -f memory.dmp windows.driverscan
    vol3 -f memory.dmp windows.modules
    
    # Check for unsigned drivers
    vol3 -f memory.dmp windows.driverscan | while read line; do
        driver_path=$(echo "$line" | awk '{print $NF}')
        if [ -f "$driver_path" ]; then
            sigcheck -nobanner "$driver_path" 2>/dev/null | grep "Unsigned"
        fi
    done
    
    # IDT hook detection
    vol3 -f memory.dmp windows.idt
    ```
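    
    The psscan/pslist diff noted above reduces to a set difference once PIDs are parsed out of the two outputs. A sketch; the PID values are hypothetical, echoing the example report later in this skill:
    
    ```python
    def hidden_pids(psscan_pids, pslist_pids):
        """Processes visible to pool scanning (psscan) but missing from the
        linked process list (pslist) are DKOM-hiding candidates."""
        return sorted(set(psscan_pids) - set(pslist_pids))
    
    # PIDs parsed from the two vol3 text outputs (hypothetical values)
    psscan = {4, 88, 432, 6784, 6812}
    pslist = {4, 88, 432}
    print(hidden_pids(psscan, pslist))  # [6784, 6812]
    ```
    
    Treat hits as leads, not proof: short-lived processes that exited between plugin runs also appear only in psscan.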
    
    ### Step 5: Boot Process Integrity Verification
    
    Verify the integrity of the entire boot chain:
    
    ```bash
    # Verify Windows Boot Manager signature
    sigcheck -a C:\Windows\Boot\EFI\bootmgfw.efi
    
    # Verify winload.efi
    sigcheck -a C:\Windows\System32\winload.efi
    
    # Verify ntoskrnl.exe
    sigcheck -a C:\Windows\System32\ntoskrnl.exe
    
    # Check Measured Boot logs (if TPM is available)
    # Windows: BCDEdit /enum firmware
    bcdedit /enum firmware
    
    # Verify Secure Boot state
    Confirm-SecureBootUEFI  # PowerShell cmdlet
    
    # Check boot configuration for tampering
    bcdedit /v
    
    # Look for boot configuration changes
    # testsigning: should be No
    # nointegritychecks: should be No
    # debug: should be No
    bcdedit | findstr /i "testsigning nointegritychecks debug"
    ```
    
    ### Step 6: Document Bootkit/Rootkit Analysis
    
    Compile comprehensive analysis findings:
    
    ```
    Analysis should document:
    - Boot sector (MBR/VBR) integrity status with hex comparison
    - UEFI firmware module inventory and integrity verification
    - Secure Boot status and any bypass mechanisms detected
    - Kernel-level hooks (SSDT, IDT, IRP, inline) identified
    - Hidden processes, drivers, and files discovered
    - Persistence mechanism (SPI flash, ESP, MBR, kernel driver)
    - Boot chain integrity verification results
    - Attribution to known bootkit families if possible
    - Remediation steps (reflash firmware, rebuild MBR, replace hardware)
    ```
    
    ## Key Concepts
    
    | Term | Definition |
    |------|------------|
    | **Bootkit** | Malware that infects the boot process (MBR, VBR, UEFI) to execute before the operating system loads, gaining persistent low-level control |
    | **MBR (Master Boot Record)** | First 512 bytes of a disk containing bootstrap code and partition table; MBR bootkits replace this code with malicious loaders |
    | **UEFI (Unified Extensible Firmware Interface)** | Modern firmware interface replacing BIOS; UEFI bootkits implant malicious modules in firmware volumes or modify the ESP |
    | **Secure Boot** | UEFI security feature verifying digital signatures of boot components; bootkits like BlackLotus exploit vulnerabilities to bypass it |
    | **SPI Flash** | Flash memory chip storing UEFI firmware; advanced bootkits like LoJax and MoonBounce modify SPI flash for firmware-level persistence |
    | **DKOM (Direct Kernel Object Manipulation)** | Rootkit technique modifying kernel structures to hide processes, files, and network connections without hooking functions |
    | **Driver Signature Enforcement (DSE)** | Windows security feature requiring kernel drivers to be digitally signed; bootkits disable DSE during boot to load unsigned rootkit drivers |
    
    ## Tools & Systems
    
    - **UEFITool**: Open-source UEFI firmware image editor and parser for inspecting firmware volumes, drivers, and modules
    - **chipsec**: Intel hardware security assessment framework for verifying SPI flash protection, Secure Boot, and UEFI configuration
    - **Volatility**: Memory forensics framework with SSDT, IDT, callback, and driver analysis plugins for kernel rootkit detection
    - **GMER**: Windows rootkit detection tool scanning for SSDT hooks, IDT hooks, hidden processes, and modified kernel modules
    - **Bootkits Analyzer**: Specialized tool for analyzing MBR/VBR code including disassembly and comparison against known-good baselines
    
    ## Common Scenarios
    
    ### Scenario: Investigating Persistent Compromise Surviving OS Reinstallation
    
    **Context**: An organization reimaged a compromised workstation, but the same C2 beaconing resumed within hours. Standard disk forensics finds no malware. UEFI bootkit is suspected.
    
    **Approach**:
    1. Boot from a Linux live USB to avoid executing any compromised OS components
    2. Dump the SPI flash firmware using chipsec or flashrom for offline analysis
    3. Dump the MBR and VBR sectors with dd for boot sector analysis
    4. Copy the EFI System Partition for bootloader integrity verification
    5. Open the SPI dump in UEFITool and compare module GUIDs against vendor-provided firmware
    6. Look for additional or modified DXE drivers that should not be present
    7. Analyze any suspicious modules with Ghidra (x86_64 UEFI module format)
    8. Verify Secure Boot configuration and check for exploit-based bypasses
    
    **Pitfalls**:
    - Analyzing the system while the compromised OS is running (rootkit may hide from live analysis)
    - Not checking SPI flash (only analyzing disk-based boot components misses firmware-level implants)
    - Assuming Secure Boot prevents all bootkits (known bypasses exist, e.g., CVE-2022-21894)
    - Not preserving the original firmware dump before reflashing (critical evidence for attribution)
    
    ## Output Format
    
    ```
    BOOTKIT / ROOTKIT ANALYSIS REPORT
    ====================================
    System:           Dell OptiPlex 7090 (UEFI, TPM 2.0)
    Firmware Version: 1.15.0 (Dell)
    Secure Boot:      ENABLED (but bypassed)
    Capture Method:   Linux Live USB + chipsec SPI dump
    
    MBR/VBR ANALYSIS
    MBR Signature:    Valid (0x55AA)
    MBR Code:         MATCHES standard Windows 10 MBR (clean)
    VBR Code:         MATCHES standard NTFS VBR (clean)
    
    UEFI FIRMWARE ANALYSIS
    Total Modules:    287
    Vendor Expected:  285
    Extra Modules:    2 UNAUTHORIZED
      [!] DXE Driver GUID: {ABCD1234-...} "SmmAccessDxe_mod" (MODIFIED)
          Original Size: 12,288 bytes
          Current Size:  45,056 bytes (32KB ADDED)
          Entropy: 7.82 (HIGH - encrypted payload)
    
      [!] DXE Driver GUID: {EFGH5678-...} "UefiPayloadDxe" (NEW - not in vendor firmware)
          Size: 28,672 bytes
          Function: Drops persistence agent during boot
    
    BOOT CHAIN INTEGRITY
    bootmgfw.efi:     MODIFIED (hash mismatch, Secure Boot bypass via CVE-2022-21894)
    winload.efi:      MODIFIED (DSE disabled at load time)
    ntoskrnl.exe:     CLEAN (but unsigned driver loaded after boot)
    
    KERNEL ROOTKIT COMPONENTS
    Driver:           C:\Windows\System32\drivers\null_mod.sys (unsigned, hidden)
    SSDT Hooks:       3 (NtQuerySystemInformation, NtQueryDirectoryFile, NtDeviceIoControlFile)
    Hidden Processes: 2 (PID 6784: beacon.exe, PID 6812: keylog.exe)
    Hidden Files:     C:\Windows\System32\drivers\null_mod.sys
    
    ATTRIBUTION
    Family:           BlackLotus variant
    Confidence:       HIGH (CVE-2022-21894 exploit, ESP modification pattern matches)
    
    REMEDIATION
    1. Reflash SPI firmware with clean vendor image via hardware programmer
    2. Rebuild EFI System Partition from clean Windows installation media
    3. Reinstall OS from verified media
    4. Enable all firmware write protections
    5. Update firmware to latest version (patches CVE-2022-21894)
    ```
    
  • .claude-plugin/marketplace.jsonmarketplace
    Show content (1147 bytes)
    {
      "name": "anthropic-cybersecurity-skills",
      "owner": {
        "name": "mukul975",
        "email": "mukuljangra5@gmail.com"
      },
      "metadata": {
        "description": "754 cybersecurity skills for AI agents mapped to 5 frameworks: MITRE ATT&CK, NIST CSF 2.0, MITRE ATLAS, D3FEND, and NIST AI RMF.",
        "version": "1.2.0"
      },
      "plugins": [
        {
          "name": "cybersecurity-skills",
          "source": "./",
          "description": "754 cybersecurity skills covering web security, pentesting, DFIR, threat intelligence, cloud security, malware analysis, and more. Mapped to 5 frameworks.",
          "version": "1.2.0",
          "author": {
            "name": "mukul975"
          },
          "license": "Apache-2.0",
          "keywords": [
            "cybersecurity",
            "pentesting",
            "forensics",
            "threat-intelligence",
            "cloud-security",
            "malware-analysis",
            "incident-response",
            "zero-trust",
            "devsecops"
          ],
          "category": "security",
          "homepage": "https://github.com/mukul975/Anthropic-Cybersecurity-Skills",
          "repository": "https://github.com/mukul975/Anthropic-Cybersecurity-Skills"
        }
      ]
    }

README

Anthropic Cybersecurity Skills


The largest open-source cybersecurity skills library for AI agents


754 production-grade cybersecurity skills · 26 security domains · 5 framework mappings · 26+ AI platforms

Get Started · What's Inside · Frameworks · Platforms · Contributing


⚠️ Community Project — This is an independent, community-created project. Not affiliated with Anthropic PBC.

Give any AI agent the security skills of a senior analyst

A senior analyst knows which Volatility3 plugin to run on a suspicious memory dump, which Sigma rules catch Kerberoasting, and how to scope a cloud breach across three providers. Your AI agent doesn't — unless you give it these skills.

This repo contains 754 structured cybersecurity skills spanning 26 security domains, each following the agentskills.io open standard. Every skill is mapped to five industry frameworks — MITRE ATT&CK, NIST CSF 2.0, MITRE ATLAS, MITRE D3FEND, and NIST AI RMF — making this the only open-source skills library with unified cross-framework coverage. Clone it, point your agent at it, and your next security investigation gets expert-level guidance in seconds.

Five frameworks, one skill library

No other open-source skills library maps every skill to all five frameworks. One skill, five compliance checkboxes.

| Framework | Version | Scope in this repo | What it maps |
|---|---|---|---|
| MITRE ATT&CK | v18 | 14 tactics · 200+ techniques | Adversary behaviors and TTPs |
| NIST CSF 2.0 | 2.0 | 6 functions · 22 categories | Organizational security posture |
| MITRE ATLAS | v5.4 | 16 tactics · 84 techniques | AI/ML adversarial threats |
| MITRE D3FEND | v1.3 | 7 categories · 267 techniques | Defensive countermeasures |
| NIST AI RMF | 1.0 | 4 functions · 72 subcategories | AI risk management |

Example — a single skill maps across all five:

| Skill | ATT&CK | NIST CSF | ATLAS | D3FEND | AI RMF |
|---|---|---|---|---|---|
| analyzing-network-traffic-of-malware | T1071 | DE.CM | AML.T0047 | D3-NTA | MEASURE-2.6 |

Quick start

```bash
# Option 1: npx (recommended)
npx skills add mukul975/Anthropic-Cybersecurity-Skills

# Option 2: Git clone
git clone https://github.com/mukul975/Anthropic-Cybersecurity-Skills.git
cd Anthropic-Cybersecurity-Skills
```

Works immediately with Claude Code, GitHub Copilot, OpenAI Codex CLI, Cursor, Gemini CLI, and any agentskills.io-compatible platform.

🚀 Try it on the Playground

Experience Casky.ai hands-on — no setup required.

→ Launch Playground on Casky.ai

The playground lets you:

  • Run live cybersecurity skill exercises against real targets
  • See AI agents execute structured skills in real time
  • Explore MITRE ATT&CK mapped workflows interactively
  • Test threat hunting, DFIR, and penetration testing scenarios

No installation. No configuration. Just open and start.

Why this exists

The cybersecurity workforce gap hit 4.8 million unfilled roles globally in 2024 (ISC2). AI agents can help close that gap — but only if they have structured domain knowledge to work from. Today's agents can write code and search the web, but they lack the practitioner playbooks that turn a generic LLM into a capable security analyst.

Existing security tool repos give you wordlists, payloads, or exploit code. None of them give an AI agent the structured decision-making workflow a senior analyst follows: when to use each technique, what prerequisites to check, how to execute step-by-step, and how to verify results. That is the gap this project fills.

Anthropic Cybersecurity Skills is not a collection of scripts or checklists. It is an AI-native knowledge base built from the ground up for the agentskills.io standard — YAML frontmatter for sub-second discovery, structured Markdown for step-by-step execution, and reference files for deep technical context. Every skill encodes real practitioner workflows, not generated summaries.

What's inside — 26 security domains

| Domain | Skills | Key capabilities |
|---|---|---|
| Cloud Security | 60 | AWS, Azure, GCP hardening · CSPM · cloud forensics |
| Threat Hunting | 55 | Hypothesis-driven hunts · LOTL detection · behavioral analytics |
| Threat Intelligence | 50 | STIX/TAXII · MISP · feed integration · actor profiling |
| Web Application Security | 42 | OWASP Top 10 · SQLi · XSS · SSRF · deserialization |
| Network Security | 40 | IDS/IPS · firewall rules · VLAN segmentation · traffic analysis |
| Malware Analysis | 39 | Static/dynamic analysis · reverse engineering · sandboxing |
| Digital Forensics | 37 | Disk imaging · memory forensics · timeline reconstruction |
| Security Operations | 36 | SIEM correlation · log analysis · alert triage |
| Identity & Access Management | 35 | IAM policies · PAM · zero trust identity · Okta · SailPoint |
| SOC Operations | 33 | Playbooks · escalation workflows · metrics · tabletop exercises |
| Container Security | 30 | K8s RBAC · image scanning · Falco · container forensics |
| OT/ICS Security | 28 | Modbus · DNP3 · IEC 62443 · historian defense · SCADA |
| API Security | 28 | GraphQL · REST · OWASP API Top 10 · WAF bypass |
| Vulnerability Management | 25 | Nessus · scanning workflows · patch prioritization · CVSS |
| Incident Response | 25 | Breach containment · ransomware response · IR playbooks |
| Red Teaming | 24 | Full-scope engagements · AD attacks · phishing simulation |
| Penetration Testing | 23 | Network · web · cloud · mobile · wireless pentesting |
| Endpoint Security | 17 | EDR · LOTL detection · fileless malware · persistence hunting |
| DevSecOps | 17 | CI/CD security · code signing · Terraform auditing |
| Phishing Defense | 16 | Email authentication · BEC detection · phishing IR |
| Cryptography | 14 | TLS · Ed25519 · certificate transparency · key management |
| Zero Trust Architecture | 13 | BeyondCorp · CISA maturity model · microsegmentation |
| Mobile Security | 12 | Android/iOS analysis · mobile pentesting · MDM forensics |
| Ransomware Defense | 7 | Precursor detection · response · recovery · encryption analysis |
| Compliance & Governance | 5 | CIS benchmarks · SOC 2 · regulatory frameworks |
| Deception Technology | 2 | Honeytokens · breach detection canaries |

How AI agents use these skills

Each skill costs ~30 tokens to scan (frontmatter only) and 500–2,000 tokens to fully load (complete workflow). This progressive disclosure architecture lets agents search all 754 skills in a single pass without blowing context windows.

User prompt: "Analyze this memory dump for signs of credential theft"

Agent's internal process:

  1. Scans 754 skill frontmatters (~30 tokens each)
     → identifies 12 relevant skills by matching tags, description, domain

  2. Loads top 3 matches:
     • performing-memory-forensics-with-volatility3
     • hunting-for-credential-dumping-lsass
     • analyzing-windows-event-logs-for-credential-access

  3. Executes the structured Workflow section step-by-step
     → runs Volatility3 plugins, checks LSASS access patterns,
        correlates with event log evidence

  4. Validates results using the Verification section
     → confirms IOCs, maps findings to ATT&CK T1003 (Credential Dumping)

Without these skills, the agent guesses at tool commands and misses critical steps. With them, it follows the same playbook a senior DFIR analyst would use.
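
The scan-then-load flow above can be sketched in a few lines. The parser below is a deliberately minimal frontmatter reader (it handles only `key: value` and `- item` lines, not full YAML), and the keyword-overlap score is an illustrative stand-in for whatever ranking a given agent actually uses:

```python
def parse_frontmatter(skill_md: str) -> dict:
    """Read the block between the first two '---' markers.
    Minimal on purpose: 'key: value' and '- item' lines only."""
    meta, key = {}, None
    for line in skill_md.split("---")[1].splitlines():
        if line.startswith("- ") and key:
            meta.setdefault(key, []).append(line[2:].strip())
        elif ":" in line:
            key, _, val = line.partition(":")
            key = key.strip()
            if val.strip():
                meta[key] = val.strip()
    return meta

def score(meta: dict, query_words: set) -> int:
    """Cheap relevance: how many query words appear in name/description/tags."""
    text = " ".join([meta.get("name", ""), meta.get("description", ""),
                     " ".join(meta.get("tags", []))]).lower()
    return sum(w in text for w in query_words)

sample = """---
name: performing-memory-forensics-with-volatility3
description: Analyze memory dumps for credential theft artifacts
tags:
- forensics
- memory-analysis
---
Skill body is only loaded for the top-scoring matches.
"""
meta = parse_frontmatter(sample)
print(score(meta, {"memory", "credential", "kubernetes"}))  # 2
```

An agent would run the scoring pass over all frontmatters first, then load only the full bodies of the top matches, which is what keeps the scan within a single context window.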

Skill anatomy

Every skill follows a consistent directory structure:

```
skills/performing-memory-forensics-with-volatility3/
├── SKILL.md              ← Skill definition (YAML frontmatter + Markdown body)
├── references/
│   ├── standards.md      ← MITRE ATT&CK, ATLAS, D3FEND, NIST mappings
│   └── workflows.md      ← Deep technical procedure reference
├── scripts/
│   └── process.py        ← Working helper scripts
└── assets/
    └── template.md       ← Filled-in checklists and report templates
```

YAML frontmatter (real example)

```yaml
---
name: performing-memory-forensics-with-volatility3
description: >-
  Analyze memory dumps to extract running processes, network connections,
  injected code, and malware artifacts using the Volatility3 framework.
domain: cybersecurity
subdomain: digital-forensics
tags: [forensics, memory-analysis, volatility3, incident-response, dfir]
atlas_techniques: [AML.T0047]
d3fend_techniques: [D3-MA, D3-PSMD]
nist_ai_rmf: [MEASURE-2.6]
nist_csf: [DE.CM-01, RS.AN-03]
version: "1.2"
author: mukul975
license: Apache-2.0
---
```

Markdown body sections

```markdown
## When to Use
Trigger conditions — when should an AI agent activate this skill?

## Prerequisites
Required tools, access levels, and environment setup.

## Workflow
Step-by-step execution guide with specific commands and decision points.

## Verification
How to confirm the skill was executed successfully.
```

Frontmatter fields: name (kebab-case, 1–64 chars), description (keyword-rich for agent discovery), domain, subdomain, tags, atlas_techniques (MITRE ATLAS IDs), d3fend_techniques (MITRE D3FEND IDs), nist_ai_rmf (NIST AI RMF references), nist_csf (NIST CSF 2.0 categories). MITRE ATT&CK technique mappings are documented in each skill's references/standards.md file and in the ATT&CK Navigator layer included with releases.
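
The name constraint is mechanical enough to lint before submitting a PR. A hypothetical pre-commit check (the function name is illustrative, not part of the standard):

```python
import re

# kebab-case: lowercase alphanumeric runs joined by single hyphens
KEBAB = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def valid_skill_name(name: str) -> bool:
    """Enforce the frontmatter rule above: kebab-case, 1-64 characters."""
    return len(name) <= 64 and bool(KEBAB.match(name))

print(valid_skill_name("analyzing-api-gateway-access-logs"))  # True
print(valid_skill_name("Analyzing_API_Logs"))                 # False
```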

📊 MITRE ATT&CK Enterprise coverage — all 14 tactics

| Tactic | ID | Coverage | Key skills |
|---|---|---|---|
| Reconnaissance | TA0043 | Strong | OSINT, subdomain enumeration, DNS recon |
| Resource Development | TA0042 | Moderate | Phishing infrastructure, C2 setup detection |
| Initial Access | TA0001 | Strong | Phishing simulation, exploit detection, forced browsing |
| Execution | TA0002 | Strong | PowerShell analysis, fileless malware, script block logging |
| Persistence | TA0003 | Strong | Scheduled tasks, registry, service accounts, LOTL |
| Privilege Escalation | TA0004 | Strong | Kerberoasting, AD attacks, cloud privilege escalation |
| Defense Evasion | TA0005 | Strong | Obfuscation, rootkit analysis, evasion detection |
| Credential Access | TA0006 | Strong | Mimikatz detection, pass-the-hash, credential dumping |
| Discovery | TA0007 | Moderate | BloodHound, AD enumeration, network scanning |
| Lateral Movement | TA0008 | Strong | SMB exploits, lateral movement detection with Splunk |
| Collection | TA0009 | Moderate | Email forensics, data staging detection |
| Command and Control | TA0011 | Strong | C2 beaconing, DNS tunneling, Cobalt Strike analysis |
| Exfiltration | TA0010 | Strong | DNS exfiltration, DLP controls, data loss detection |
| Impact | TA0040 | Strong | Ransomware defense, encryption analysis, recovery |

An ATT&CK Navigator layer file is included in the v1.0.0 release assets for visual coverage mapping.

Note: ATT&CK v19 lands April 28, 2026 — splitting Defense Evasion (TA0005) into two new tactics: Stealth and Impair Defenses. Skill mappings will be updated in a forthcoming release.

📊 NIST CSF 2.0 alignment — all 6 functions

| Function | Skills | Examples |
|---|---|---|
| Govern (GV) | 30+ | Risk strategy, policy frameworks, roles & responsibilities |
| Identify (ID) | 120+ | Asset discovery, threat landscape assessment, risk analysis |
| Protect (PR) | 150+ | IAM hardening, WAF rules, zero trust, encryption |
| Detect (DE) | 200+ | Threat hunting, SIEM correlation, anomaly detection |
| Respond (RS) | 160+ | Incident response, forensics, breach containment |
| Recover (RC) | 40+ | Ransomware recovery, BCP, disaster recovery |

NIST CSF 2.0 (February 2024) added the Govern function and expanded scope from critical infrastructure to all organizations. Skill mappings align to all 22 categories and reference 106 subcategories.

📊 Framework deep dive — ATLAS, D3FEND, AI RMF

MITRE ATLAS v5.4 — AI/ML adversarial threats

ATLAS maps adversarial tactics, techniques, and case studies specific to AI and machine learning systems. Version 5.4 covers 16 tactics and 84 techniques including agentic AI attack vectors added in late 2025: AI agent context poisoning, tool invocation abuse, MCP server compromises, and malicious agent deployment. Skills mapped to ATLAS help agents identify and defend against threats to ML pipelines, model weights, inference APIs, and autonomous workflows.

MITRE D3FEND v1.3 — Defensive countermeasures

D3FEND is an NSA-funded knowledge graph of 267 defensive techniques organized across 7 tactical categories: Model, Harden, Detect, Isolate, Deceive, Evict, and Restore. Built on OWL 2 ontology, it uses a shared Digital Artifact layer to bidirectionally map defensive countermeasures to ATT&CK offensive techniques. Skills tagged with D3FEND identifiers let agents recommend specific countermeasures for detected threats.

NIST AI RMF 1.0 + GenAI Profile (AI 600-1)

The AI Risk Management Framework defines 4 core functions — Govern, Map, Measure, Manage — with 72 subcategories for trustworthy AI development. The GenAI Profile (AI 600-1, July 2024) adds 12 risk categories specific to generative AI, from confabulation and data privacy to prompt injection and supply chain risks. Colorado's AI Act (effective February 2026) provides a legal safe harbor for organizations complying with NIST AI RMF, making these mappings directly relevant to regulatory compliance.

Compatible platforms

AI code assistants Claude Code (Anthropic) · GitHub Copilot (Microsoft) · Cursor · Windsurf · Cline · Aider · Continue · Roo Code · Amazon Q Developer · Tabnine · Sourcegraph Cody · JetBrains AI

CLI agents OpenAI Codex CLI · Gemini CLI (Google)

Autonomous agents Devin · Replit Agent · SWE-agent · OpenHands

Agent frameworks & SDKs LangChain · CrewAI · AutoGen · Semantic Kernel · Haystack · Vercel AI SDK · Any MCP-compatible agent

All platforms that support the agentskills.io standard can load these skills with zero configuration.
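To make the loading model concrete: a platform only needs to walk the `skills/<name>/SKILL.md` layout visible in this repo and read each file's frontmatter. The sketch below is a minimal, hypothetical loader — `parse_frontmatter` and `skills_for_control` are invented names, the parser handles only the flat key/value and list syntax seen in the published SKILL.md files, and none of this is the agentskills.io specification itself:

```python
from pathlib import Path

def parse_frontmatter(text: str) -> dict:
    """Parse the flat key/value and simple list syntax used by the
    repo's SKILL.md frontmatter (the block between '---' markers).
    Deliberately not a full YAML parser -- a simplified sketch."""
    meta: dict = {}
    key = None
    block = text.split("---", 2)[1]  # content between the first two '---'
    for line in block.splitlines():
        if line.startswith("- ") and key is not None:
            # list item belonging to the most recent key, e.g. nist_csf IDs
            meta.setdefault(key, []).append(line[2:].strip())
        elif ":" in line and not line.startswith(" "):
            key, _, value = line.partition(":")
            key = key.strip()
            if value.strip():
                meta[key] = value.strip()
    return meta

def skills_for_control(root: Path, control_id: str) -> list:
    """Return names of skills whose nist_csf mapping includes control_id,
    assuming the skills/<name>/SKILL.md layout used by this repo."""
    hits = []
    for skill_md in sorted(root.glob("skills/*/SKILL.md")):
        meta = parse_frontmatter(skill_md.read_text())
        if control_id in meta.get("nist_csf", []):
            hits.append(meta.get("name", skill_md.parent.name))
    return hits
```

With a clone of the repository on disk, `skills_for_control(Path("Anthropic-Cybersecurity-Skills"), "RS.AN-01")` would surface every skill tagged to that NIST CSF subcategory — the same filtering an agent platform could do before injecting relevant skills into context.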

What people are saying

"A database of real, organized security skills that any AI agent can plug into and use. Not tutorials. Not blog posts."Hasan Toor (@hasantoxr), AI/tech creator

"This is not a random collection of security scripts. It's a structured operational knowledge base designed for AI-driven security workflows."fazal-sec, Medium

Featured in

Where | Type | Link
awesome-agent-skills | Awesome List (1,000+ skills index) | VoltAgent/awesome-agent-skills
awesome-ai-security | Awesome List (AI security tools) | ottosulin/awesome-ai-security
awesome-codex-cli | Awesome List (Codex CLI resources) | RoggeOhta/awesome-codex-cli
SkillsLLM | Skills directory & marketplace | skillsllm.com/skill/anthropic-cybersecurity-skills
Openflows | Signal analysis & tracking | openflows.org
NeverSight skills_feed | Automated skills index | NeverSight/skills_feed

Star history

Star History Chart

Releases

Version | Date | Highlights
v1.0.0 | March 11, 2026 | 734 skills · 26 domains · MITRE ATT&CK + NIST CSF 2.0 mapping · ATT&CK Navigator layer

The library has continued to grow on main since v1.0.0 and now contains 754 skills with five-framework mapping (MITRE ATLAS, D3FEND, and NIST AI RMF were added post-release). Check Releases for the latest tagged version.

Contributing

This project grows through community contributions. Here is how to get involved:

Add a new skill — Domains like Deception Technology (2 skills) and Compliance & Governance (5 skills) need the most help. Follow the template in CONTRIBUTING.md and submit a PR with the title Add skill: your-skill-name.

Improve existing skills — Add framework mappings, fix workflows, update tool references, or contribute scripts and templates.

Report issues — Found an inaccurate procedure or broken script? Open an issue.

Every PR is reviewed for technical accuracy and agentskills.io standard compliance within 48 hours. Check good first issues for a starting point.

This project follows the Contributor Covenant. By participating, you agree to uphold this code.

Community

💬 Discussions — Questions, ideas, and roadmap conversations
🐛 Issues — Bug reports and feature requests
🔒 Security Policy — Responsible disclosure process (48-hour acknowledgment)

Citation

If you use this project in research or publications:

@software{anthropic_cybersecurity_skills,
  author       = {Jangra, Mahipal},
  title        = {Anthropic Cybersecurity Skills},
  year         = {2026},
  url          = {https://github.com/mukul975/Anthropic-Cybersecurity-Skills},
  license      = {Apache-2.0},
  note         = {754 structured cybersecurity skills for AI agents,
                  mapped to MITRE ATT\&CK, NIST CSF 2.0, MITRE ATLAS,
                  MITRE D3FEND, and NIST AI RMF}
}

License

This project is licensed under the Apache License 2.0. You are free to use, modify, and distribute these skills in both personal and commercial projects.


If this project helps your security work, consider giving it a ⭐

⭐ Star · 🍴 Fork · 💬 Discuss · 📝 Contribute

Community project by @mukul975. Not affiliated with Anthropic PBC.