Self-paced course · English

Cybersec Skills 101

Master the 5 frameworks behind 754 cybersecurity skills for AI agents

Module 01

What is the cybersecurity skill library?

A 754-skill, 26-domain, 5-framework knowledge base that turns any agentskills.io-compatible AI agent into a senior security analyst.

Why this module: Before we tour the five frameworks, you need the lay of the land — what the library is, what makes it different from a tool catalogue or a wiki, and why "skills" (not prompts, not tools) are the right unit of expertise for AI agents working on security investigations. By the end of this module you should be able to describe the library in one sentence and explain the AI knowledge gap it fills.

Prerequisites: Shell + git fluency; you have used an AI agent (Claude Code / Copilot / Cursor / Gemini CLI) at least once. No deep security background required.

Core ideas

The Anthropic-Cybersecurity-Skills library is the largest open-source collection of cybersecurity skills for AI agents — 754 production-grade skills spanning 26 security domains, all under Apache-2.0. Every skill is cross-mapped to the five industry frameworks (MITRE ATT&CK, NIST CSF 2.0, MITRE ATLAS, MITRE D3FEND, NIST AI RMF), making it the only open-source skills library with unified cross-framework coverage. The largest domains are Cloud Security (60 skills), Threat Hunting (55), Threat Intelligence (50), Web App Security (42), and Network Security (40); the smallest are Deception Technology (2) and Compliance & Governance (5).

Why does this exist? Generic LLMs lack the structured decision-making workflow a senior analyst follows: when to use a technique, prerequisites, step-by-step execution, and verification. The library encodes real practitioner workflows, not generated summaries. Each skill costs only ~30 tokens to scan (frontmatter only) and 500–2,000 tokens to fully load — progressive disclosure that lets agents search all 754 skills in a single pass without blowing context windows.
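
Mechanically, that scan means reading only the YAML between the first two --- markers of each SKILL.md and never touching the body. A minimal sketch, assuming a local clone (the exact directory layout may differ):

# Progressive disclosure, sketched: scan frontmatter only, never bodies
for f in $(find . -name 'SKILL.md'); do
  awk '/^---$/{n++; next} n==1{print} n>=2{exit}' "$f"
done | grep -E '^(name|description):'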

The skills follow the agentskills.io open standard: YAML frontmatter for sub-second discovery, structured Markdown for step-by-step execution, and reference files for deep technical context. That standard is why all 754 skills work zero-config across 26+ AI platforms — Claude Code, GitHub Copilot, Cursor, Gemini CLI, Cline, and any MCP-compatible agent. There are two install paths, and either gets you everything:

# Recommended: one-shot install via the skills CLI
npx skills add mukul975/Anthropic-Cybersecurity-Skills

# Fallback: plain git clone
git clone https://github.com/mukul975/Anthropic-Cybersecurity-Skills.git
cd Anthropic-Cybersecurity-Skills

And here is what one skill's frontmatter looks like — a contract the agent reads in milliseconds before deciding whether to load the body:

---
name: acquiring-disk-image-with-dd-and-dcfldd
description: Create forensically sound bit-for-bit disk images using
  dd and dcfldd while preserving evidence integrity through hash verification.
domain: cybersecurity
subdomain: digital-forensics
tags:
  - forensics
  - disk-imaging
  - evidence-acquisition
  - hash-verification
version: '1.0'
license: Apache-2.0
nist_csf:
  - RS.AN-01
  - RS.AN-03
  - DE.AE-02
  - RS.MA-01
---

Takeaway

The library is 754 skills · 26 domains · 5 frameworks, expressed in the agentskills.io standard so any compatible AI agent can pick it up zero-config. You don't memorise the skills — the agent does. You learn how to read them.

Check yourself

  1. How many skills, domains, and frameworks does the library cover, and what license is it under?
  2. Why is "~30 tokens per skill frontmatter" the load-bearing number that makes whole-library discovery possible?
  3. What are the two install paths, and which one does the README recommend?

Going further

Module 02

MITRE ATT&CK

The "how" of real-world adversary behaviour: 14 tactics, 200+ techniques, and the most-mapped framework in the library.

Why this module: ATT&CK is the lingua franca of modern security. Detection engineers, threat hunters, red teams, and IR responders all speak in T-numbers. If you can read an ATT&CK technique ID off a skill's frontmatter and explain what it means, you can collaborate with any security team on the planet. This is the most concrete framework in the course; we'll spend the most time here.

Prerequisites: Module 1; a vague sense that "PowerShell" and "phishing" exist as attack vectors.

Core ideas

MITRE ATT&CK is a globally accessible, curated knowledge base of cyber adversary behaviour. The Enterprise matrix organises adversary tactics (the "why") into techniques (the "how") and sub-techniques (specific implementations). Note a version discrepancy in our sources: the library's current mappings use Enterprise matrix v15, while the README cites v18 (14 tactics · 200+ techniques). Either way, ATT&CK is the most-mapped of the five frameworks: 291 unique techniques across 14/14 tactics, with the library's coverage visualised by an ATT&CK Navigator layer file shipped at mappings/attack-navigator-layer.json.

The 14 Enterprise tactics, in adversary-lifecycle order, are:

TA0043  Reconnaissance         (gather info about a target)
TA0042  Resource Development   (build/buy infrastructure & tooling)
TA0001  Initial Access         (get a foothold)
TA0002  Execution              (run code on a victim system)
TA0003  Persistence            (survive reboots, credential rotation)
TA0004  Privilege Escalation   (gain higher privileges)
TA0005  Defense Evasion        (avoid detection)
TA0006  Credential Access      (steal accounts, tokens, hashes)
TA0007  Discovery              (learn the internal environment)
TA0008  Lateral Movement       (move host-to-host)
TA0009  Collection             (gather data of interest)
TA0011  Command and Control    (talk to compromised systems)
TA0010  Exfiltration           (steal data)
TA0040  Impact                 (destroy, encrypt, disrupt)

Coverage isn't uniform. Defense Evasion (48 techniques) and Persistence (36) are the deepest-covered tactics in the library; Impact (6) is the shallowest. The top-10 most-covered techniques include T1059.001 PowerShell (26 skills), T1055 Process Injection (17), T1053.005 Scheduled Task (16), T1566.001 Spearphishing Attachment (15), T1558.003 Kerberoasting (14), T1078 Valid Accounts (13), T1003.006 DCSync (13), and T1071.001 Web Protocols (12). When you see those IDs in a skill's frontmatter or body, you now know what they encode.

Mapping is bidirectional: each tactic table lists offensive subdomains (penetration-testing, red-teaming) alongside defensive ones (threat-hunting, soc-operations). Red-teaming (24 skills) covers all 14 tactics with High intensity — the most cross-cutting subdomain. Heads-up for 2026: ATT&CK v19 is scheduled for 28 April 2026 and will split Defense Evasion (TA0005) into two new tactics, Stealth and Impair Defenses. The library plans to update its mappings in a forthcoming release.

# Library coverage at a glance
291 unique techniques (149 parent + ~142 sub-techniques)
14 / 14 Enterprise tactics covered
Deepest:  Defense Evasion (48), Persistence (36)
Shallowest: Impact (6)
Visual:   mappings/attack-navigator-layer.json (ATT&CK Navigator v4.5)
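
Those counts are easy to sanity-check against a local clone. A rough grep sketch (it assumes skills sit under skills/ and will also count T-numbers that appear only in prose):

# Rough coverage check (sketch; prose mentions inflate the numbers)
grep -rhoE 'T1[0-9]{3}(\.[0-9]{3})?' skills/ | sort -u | wc -l   # unique technique IDs
grep -rlE 'T1059\.001' skills/ | wc -l                           # skills citing PowerShell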

Takeaway

ATT&CK answers "what is the adversary doing?" with 14 tactics and 200+ techniques. The library covers every tactic and 291 techniques. When a skill cites T1078, that's Valid Accounts — credential abuse via legitimate-looking logins. Memorise the 14 tactics; look up the techniques on demand.

Check yourself

  1. Name the 14 ATT&CK Enterprise tactics in lifecycle order. Which two are deepest-covered in the library?
  2. What does T1059.001 map to, and how many library skills cover it?
  3. What change is coming in ATT&CK v19, and which tactic is being split?

Going further

Module 03

NIST CSF 2.0

The 6-function organisational-posture framework that answers "where in our cybersecurity programme does this skill apply?"

Why this module: Where ATT&CK is technique-level, CSF is programme-level. Auditors, CISOs, and risk teams speak in CSF. If a skill carries an RS.AN-01 tag, somebody in compliance immediately knows it relates to incident analysis during the Respond function. CSF is the most natural compliance-driven discovery axis in the library.

Prerequisites: Module 1; basic security vocabulary (you've heard the words "incident", "asset", "policy").

Core ideas

NIST CSF 2.0 was published in February 2024. It organises cybersecurity activities into 6 core functions that span the full risk-management lifecycle:

GV  Govern    (NEW in 2.0 — strategy, policy, supply chain, oversight)
ID  Identify  (asset management, risk assessment, supplier risk)
PR  Protect   (identity, training, data security, platform security)
DE  Detect    (continuous monitoring, adverse-event analysis)
RS  Respond   (incident management, analysis, mitigation, communication)
RC  Recover   (recovery planning, recovery communication)

The big change in 2.0 is the Govern (GV) function — added to the original five — and an expansion of scope from critical infrastructure to all organisations. The framework breaks down into 22 categories total, and the library's mappings reference 106 subcategories. You'll see those subcategory IDs (like RS.AN-01, DE.CM-01, PR.IR-01) directly in skill frontmatter under the nist_csf: field.

Library coverage by function is uneven by design. Approximate skill counts: Govern ~54, Identify ~115, Protect ~160, Detect ~102, Respond ~111, Recover ~29. Protect and Detect are deepest; Recover is the shallowest — which honestly mirrors the real-world security-tooling market. Specific gaps the library team has flagged include GV.OC (Organizational Context, only 5 skills), GV.PO (Policy), PR.AT (Awareness/Training beyond phishing), and RC.RP/RC.CO (recovery and recovery-communication).

Each library subdomain maps cleanly to a primary CSF function. Examples: GV.SC (Supply Chain) → devsecops + container-security; PR.AA (Identity, Auth, Access Control) → identity-access-management + zero-trust-architecture (46 skills); DE.CM (Continuous Monitoring) → soc-operations + threat-hunting + network-security (101 skills); RS.AN (Incident Analysis) → digital-forensics + malware-analysis + threat-intelligence (111 skills). 24 subdomains are individually aligned with rationale — for instance, cryptography → Protect (PR), PR.DS, "Data confidentiality and integrity at rest and in transit."

# What CSF mapping looks like in a real skill
---
name: acquiring-disk-image-with-dd-and-dcfldd
domain: cybersecurity
subdomain: digital-forensics
nist_csf:
  - RS.AN-01    # Incident analysis: notifications & investigations
  - RS.AN-03    # Incident analysis: forensic evidence
  - DE.AE-02    # Adverse event analysis
  - RS.MA-01    # Incident management: triage & escalation
---
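
That field is directly queryable. A minimal discovery sketch, assuming a local clone with skills under skills/:

# Compliance-driven discovery: which skills map to RS.AN-01?
grep -rlE '^[[:space:]]*-[[:space:]]*RS\.AN-01' --include='SKILL.md' skills/ \
  | xargs -n1 dirname | sort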

Takeaway

CSF 2.0 is the 6-function lifecycle frame: Govern → Identify → Protect → Detect → Respond → Recover. nist_csf: in frontmatter tells your auditor exactly which function and subcategory a skill maps to — making cross-walks to compliance evidence almost mechanical.

Check yourself

  1. What are the 6 CSF 2.0 functions, and which one was added in 2.0?
  2. If you see nist_csf: [RS.AN-01] in a skill, what function and intent does that encode?
  3. Which CSF function is the shallowest-covered in the library, and what does that tell you about open-source coverage gaps?

Going further

Module 04

MITRE ATLAS

The AI-native counterpart to ATT&CK — adversarial tactics and techniques specific to AI/ML systems, with a fast-moving frontier and a source-coverage caveat.

Why this module: Generic ATT&CK doesn't capture prompt injection, model poisoning, or RAG abuse. ATLAS does. As AI agents become a bigger surface area, defenders need a vocabulary for AI-specific adversary behaviour — and the library is starting to encode it. This is also the first framework in the course where we have to be honest: our source coverage is thin.

Prerequisites: Module 2 (ATT&CK structure); a working mental model of "an LLM is an API behind a system prompt".

Core ideas

MITRE ATLAS is a curated knowledge base of adversarial tactics, techniques, and case studies specific to AI and ML systems. Where ATT&CK uses T1xxx identifiers, ATLAS uses AML.T0xxx. The library tags AI-relevant skills with an atlas_techniques: frontmatter list of those IDs, flagging skills that detect or defend against threats to ML pipelines, model weights, inference APIs, and autonomous agentic workflows.

Per our raw sources, MITRE ATLAS v5.4 covers 16 tactics and 84 techniques specific to AI/ML adversarial threats. Late-2025 additions extended coverage into agentic-AI attack vectors: AI agent context poisoning, tool invocation abuse, MCP server compromises, and malicious agent deployment. The library currently maps 81 skills to ATLAS techniques per the ATTACK_COVERAGE doc — though that doc references ATLAS v5.5.0, slightly newer than the README's v5.4. (Versioning is genuinely in flux.)

Key ATLAS techniques you'll see across library skills:

AML.T0051  LLM Prompt Injection           (Execution)
AML.T0054  LLM Jailbreak                  (Privilege Escalation)
AML.T0088  Generate Deepfakes             (AI Attack Staging)
AML.T0010  AI Supply Chain Compromise     (Initial Access)
AML.T0020  Poison Training Data           (Resource Development)
AML.T0070  RAG Poisoning                  (Persistence)
AML.T0080  AI Agent Context Poisoning     (Persistence)
AML.T0056  Extract LLM System Prompt      (Exfiltration)

Honesty: source-coverage gap

Our wiki page for ATLAS notes the open question explicitly: "Library overview README is the only direct ATLAS source; we lack a captured mappings/atlas/README.md. Confidence on technique-level claims is therefore limited." Treat the technique counts and skill counts above as best-available, not authoritative — and verify against atlas.mitre.org when it matters. The two raw sources we do have disagree on ATLAS version (v5.4 vs v5.5.0); we don't know which is canonical for the current library state.

Even with thin sources, the practical guidance is clear: when you see an AML.T0051 tag on a skill, the skill is touching prompt-injection defences. When you see AML.T0070, RAG poisoning. The IDs are stable enough to use; the tactic taxonomy and total counts are the parts we'd verify upstream before quoting in production.
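
In that spirit, the skill-count claims are cheap to check yourself (sketch; assumes skills sit under skills/ in a local clone):

# How many skills actually carry ATLAS tags in your checkout?
grep -rl --include='SKILL.md' 'atlas_techniques:' skills/ | wc -l
# Which ATLAS techniques appear, and how often?
grep -rhoE 'AML\.T[0-9]{4}' skills/ | sort | uniq -c | sort -rn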

Takeaway

ATLAS is "ATT&CK for AI" — same shape, AI-specific TTPs, prefixed AML.T0xxx. Roughly 81 library skills carry ATLAS tags today. Source coverage in our raw corpus is thin; trust the technique IDs, verify the version numbers upstream.

Check yourself

  1. How does ATLAS relate to ATT&CK structurally, and how do their identifiers differ?
  2. Name three ATLAS techniques specific to LLM/agentic threats.
  3. What's the open question in our ATLAS wiki page — what would you have to fix in our raw corpus to close the source gap?

Going further

Module 05

MITRE D3FEND

The defensive-countermeasure inverse of ATT&CK — what defenders do to harden, detect, isolate, deceive, evict, and restore.

Why this module: ATT&CK tells you what the adversary does; D3FEND tells you what to do about it. The two are paired by design — a D3FEND countermeasure declares which ATT&CK technique it defends against. When a skill says "this defends against T1059.001", D3FEND is the framework giving that mapping a vocabulary. Same source-coverage caveat as ATLAS applies.

Prerequisites: Module 2 (ATT&CK).

Core ideas

MITRE D3FEND is an NSA-funded knowledge graph of defensive countermeasures — the inverse of ATT&CK. Where ATT&CK catalogues how adversaries attack, D3FEND catalogues how defenders respond across 7 tactical categories:

Model    (understand the system you're protecting)
Harden   (reduce attack surface preemptively)
Detect   (notice attacker activity)
Isolate  (contain blast radius)
Deceive  (lure & study the attacker)
Evict    (remove attacker access)
Restore  (recover to a known-good state)

D3FEND v1.3 contains 267 defensive techniques across those 7 categories. Critically, it is built on an OWL 2 ontology with a shared Digital Artifact layer that bidirectionally maps defensive countermeasures to offensive ATT&CK techniques — making it the natural pairing framework for ATT&CK mappings. In skill frontmatter, defensive countermeasures appear under d3fend_techniques:.

The library currently maps 11 skills to D3FEND defensive countermeasures — the smallest-mapped of the five frameworks. Per the ATTACK_COVERAGE doc, each skill's d3fend_techniques field lists the top-5 most relevant defensive countermeasures derived from the skill's ATT&CK technique tags. In other words, D3FEND mappings in the library are derived from ATT&CK mappings, not authored independently.

Here is one of the few real D3FEND-tagged skills in the library — analyzing-threat-actor-ttps-with-mitre-attack — showing how the field is populated:

---
name: analyzing-threat-actor-ttps-with-mitre-attack
domain: cybersecurity
subdomain: threat-intelligence
nist_csf:
  - ID.RA-01
  - ID.RA-05
  - DE.CM-01
  - DE.AE-02
d3fend_techniques:
  - Executable Denylisting
  - Execution Isolation
  - File Metadata Consistency Validation
  - Content Format Conversion
  - File Content Analysis
---

Honesty: source-coverage gap + format inconsistency

Our D3FEND wiki page flags two open questions. First: "Library overview README is the only direct D3FEND source; we lack mappings/d3fend/README.md and any per-technique tables. The 11-skill count and derivation methodology need primary-source confirmation." Second, the field format is inconsistent in the wild — analyzing-threat-actor-ttps-with-mitre-attack uses friendly names ("Executable Denylisting"), while the README example uses IDs with the D3- prefix ("D3-MA, D3-PSMD"). Both are accepted; expect both. Treat detailed technique-level claims as best-available.
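
If you consume these tags programmatically, normalise both formats up front. A minimal sketch in bash; the name-to-ID table is illustrative only, so confirm the actual IDs on d3fend.mitre.org:

# Normalise d3fend_techniques values to D3- IDs (illustrative table)
declare -A d3fend_ids=(
  ["Executable Denylisting"]="D3-EDL"    # assumed ID; verify upstream
  ["Execution Isolation"]="D3-EI"        # assumed ID; verify upstream
)
normalise() {
  local v="$1"
  [[ "$v" == D3-* ]] && { printf '%s\n' "$v"; return; }  # already an ID
  printf '%s\n' "${d3fend_ids[$v]:-UNKNOWN:$v}"          # friendly name to ID
}
normalise "Executable Denylisting"   # prints D3-EDL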

Takeaway

D3FEND is the defensive twin of ATT&CK: 267 techniques across 7 categories (Model · Harden · Detect · Isolate · Deceive · Evict · Restore). Library coverage is small (11 skills), the field accepts either friendly names or D3- IDs, and our raw sources are thin. Use D3FEND when you need to articulate what defenders do alongside ATT&CK's what attackers do.

Check yourself

  1. Name the 7 D3FEND tactical categories. Which one is missing from ATT&CK's vocabulary entirely?
  2. How are D3FEND mappings derived in this library — authored independently, or computed from ATT&CK tags?
  3. If you see both D3-MA and "Executable Denylisting" as values for d3fend_techniques, what should you conclude about the field's expected format?

Going further

Module 06

NIST AI RMF

The AI-risk-management framework — Govern, Map, Measure, Manage — that anchors the AI-risk story alongside cybersecurity, with regulatory teeth.

Why this module: AI RMF is what bridges "this is a cybersecurity skill" and "this is an AI-systems-governance concern". It's also one of the few frameworks in this course with direct regulatory weight — Colorado's AI Act gives organisations a legal safe harbour for compliance with NIST AI RMF. If you build skills that touch model behaviour, deployment, or AI-system risk, you'll be tagging them here.

Prerequisites: Module 3 (NIST CSF — same vocabulary family); Module 4 (ATLAS — adjacent AI threats).

Core ideas

The NIST AI Risk Management Framework (AI RMF 1.0) defines 4 core functions for trustworthy AI development:

GOVERN    (policies, accountability, risk strategy)
MAP       (context, AI capabilities, intended purpose)
MEASURE   (test, evaluate, verify trustworthiness)
MANAGE    (prioritize risk responses, monitor in production)

Per our raw sources, AI RMF 1.0 has 72 subcategories across the four functions. The GenAI Profile (AI 600-1, July 2024) adds 12 risk categories specific to generative AI — including confabulation, data privacy, prompt injection, and supply-chain risks. The library tags AI-relevant skills with an nist_ai_rmf: frontmatter field listing the relevant subcategory IDs.

The regulatory angle matters: Colorado's AI Act (effective February 2026) provides a legal safe harbour for organisations complying with NIST AI RMF. That makes these mappings more than best-practice — they are direct compliance evidence. Other US states and several international regulators are watching the same framework, so investments in AI RMF tagging today have a growing payoff.

The library currently maps 85 skills to NIST AI RMF subcategories. Coverage spans all 4 core functions: GOVERN-1.1/6.1/6.2, MAP-5.1/5.2/1.6, MEASURE-2.5/2.7/2.8/2.11, MANAGE-2.4/3.1. GenAI-specific subcategories applied include GOVERN-6.1 and GOVERN-6.2 (responsible deployment policies). Here's a real frontmatter example from building-cloud-siem-with-sentinel:

---
name: building-cloud-siem-with-sentinel
domain: cybersecurity
subdomain: cloud-security
nist_ai_rmf:
  - MEASURE-2.7
  - MAP-5.1
  - MANAGE-2.4
atlas_techniques:
  - AML.T0070   # RAG poisoning
  - AML.T0066
  - AML.T0082
nist_csf:
  - PR.IR-01
  - ID.AM-08
  - GV.SC-06
  - DE.CM-01
---
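
Which subcategories the library actually uses is easy to tally from a clone (sketch; assumes skills sit under skills/):

# Tally AI RMF subcategory usage across a local clone
grep -rhoE '(GOVERN|MAP|MEASURE|MANAGE)-[0-9]+\.[0-9]+' skills/ \
  | sort | uniq -c | sort -rn | head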

Honesty: source-coverage gap

Same caveat as ATLAS and D3FEND. Our wiki page flags: "Library overview README + ATTACK_COVERAGE coverage paragraph are the only AI RMF sources; we lack mappings/nist-ai-rmf/README.md. The 72 subcategories and 12 GenAI risk categories aren't enumerated in any raw file." The function names and high-level structure are reliable; specific subcategory counts and the AI-RMF-vs-ATLAS interplay are open questions. Verify on airc.nist.gov for production decisions.

Takeaway

AI RMF gives you the four-function lens for AI-system risk: Govern → Map → Measure → Manage. ~85 skills in the library carry nist_ai_rmf: tags today, and Colorado's AI Act makes those tags compliance-grade. Pair it mentally with ATLAS (the AI-threat catalogue) for full AI-security coverage.

Check yourself

  1. What are the 4 core AI RMF functions, and what does the GenAI Profile add on top?
  2. Why is the Colorado AI Act safe-harbour provision relevant to nist_ai_rmf frontmatter tagging?
  3. How does AI RMF differ in intent from MITRE ATLAS — same scope, or complementary lenses?

Going further

Module 07

Cross-framework mapping in practice

One skill, five frameworks: how to read every mapping a skill carries, infer the missing ones, and turn one playbook into evidence for compliance, detection, and purple-team work.

Why this module: This is the value-prop module. You've now seen all five frameworks individually. The point of the library is that one well-written skill becomes simultaneously a detection-engineering artefact, a compliance attestation, an AI-risk control, and an offensive-defensive pairing — without writing five different documents. Here you learn to read that.

Prerequisites: Modules 2–6 (all five frameworks).

Core ideas

The library README's headline example shows one skill mapped to all five: analyzing-network-traffic-of-malware declares T1071 (ATT&CK), DE.CM (NIST CSF), AML.T0047 (ATLAS), D3-NTA (D3FEND), and MEASURE-2.6 (AI RMF). One skill. Five compliance checkboxes. That's the library's bet.

But — and this is critical — most skills don't fill all five fields. Coverage tracks relevance, not a scoreboard. acquiring-disk-image-with-dd-and-dcfldd declares only nist_csf (no atlas/d3fend/ai_rmf/mitre_attack). analyzing-threat-actor-ttps-with-mitre-attack declares nist_csf + d3fend_techniques only — even though its body is entirely about ATT&CK. building-cloud-siem-with-sentinel declares three frameworks (nist_ai_rmf + atlas_techniques + nist_csf) but no d3fend or mitre_attack field, even though its workflow body explicitly maps detection rules to ATT&CK techniques. The mappings you can see are not the only mappings the skill has.

Here is the procedure for reading a skill across all five frameworks:

# Cross-framework reading procedure

1. Read the frontmatter for explicit declarations
   Look for: nist_csf, mitre_attack, atlas_techniques,
             d3fend_techniques, nist_ai_rmf

2. Read the body's "Workflow" and "Tools & Systems"
   Implicit ATT&CK technique IDs often appear here even
   when the frontmatter is silent. Look for T-numbers.

3. Cross-walk via the mapping directory
   mappings/mitre-attack/   per-technique → which skills
   mappings/nist-csf/       per-subcategory → which skills
   mappings/owasp/          OWASP → ATT&CK → CSF tables

4. Classify intent
   Offensive (red team / pentest)   vs.
   Defensive (SOC / IR / threat hunt)

5. Use ATT&CK Navigator to visualise
   mappings/attack-navigator-layer.json (color-coded coverage)

The cross-mapping payoff falls into four buckets: threat-informed defence (prioritise skills by real adversary behaviour), gap analysis (find uncovered techniques), purple-team exercises (pair offensive and defensive skills via shared ATT&CK techniques), and agent-driven discovery (query skills by framework ID — "show me everything tagged DE.CM-01"). The library's mapping directories are bidirectional too: there's an OWASP→ATT&CK table and an OWASP→CSF table, so web/app skills can be cross-walked between offensive web-vuln vocab and the broader frameworks.
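
Steps 1 and 2 of the reading procedure are scriptable. A sketch against one skill (the path layout is illustrative):

# Step 1: explicit frontmatter declarations
f=skills/building-cloud-siem-with-sentinel/SKILL.md
awk '/^---$/{n++; next} n==1{print} n>=2{exit}' "$f" \
  | grep -E -A 5 '^(nist_csf|mitre_attack|atlas_techniques|d3fend_techniques|nist_ai_rmf):'

# Step 2: implicit T-numbers in the body that the frontmatter is silent on
awk '/^---$/{n++; next} n>=2{print}' "$f" \
  | grep -oE 'T1[0-9]{3}(\.[0-9]{3})?' | sort -u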

Lab — read a skill across all 5 frameworks

Open raw/2026-05-09-cybersec-skill-cloud-siem.md in this repo. Then:

  1. List every framework field declared in the frontmatter and the value(s) for each.
  2. Scan the body for ATT&CK technique IDs that appear but are not in the frontmatter mitre_attack: field. (There are several — the body maps detection rules to ATT&CK explicitly.)
  3. Write a one-paragraph summary of what this skill teaches, organised by framework: ATT&CK → CSF → ATLAS → D3FEND → AI RMF.
  4. Identify which framework field is missing, and argue (one sentence) whether it should be filled or whether absence is correct.

Repeat with raw/2026-05-09-cybersec-skill-acquire-disk-image.md — note how only nist_csf is declared, and decide whether you'd argue for adding T1005 (Data from Local System) on a read-offensive interpretation.

Takeaway

Cross-framework mapping is the library's killer feature, but it's asymmetric: not every skill fills every field, and the body often carries technique IDs the frontmatter doesn't. Read both, infer the gaps, and don't penalise a skill for omitted-but-irrelevant frameworks — coverage tracks relevance.

Check yourself

  1. Quote the canonical "one skill, five frameworks" example from the README, including all five framework IDs.
  2. Why does building-cloud-siem-with-sentinel not declare a mitre_attack: frontmatter field even though its body maps detection rules to ATT&CK?
  3. Name the four payoffs of cross-framework mapping (threat-informed defence, gap analysis, purple-team, agent-driven discovery) and give a one-line example of each.

Going further

Module 08

Walk through a real skill

End-to-end anatomy of a real SKILL.md — frontmatter, When-to-Use, Workflow, Verification — using acquiring-disk-image-with-dd-and-dcfldd from the library.

Why this module: So far you've read frontmatter snippets in isolation. Here you read a complete SKILL.md as the agent does — front to back, deciding when to invoke it, what to run, and how to verify. After this module, any of the 754 skills should feel readable.

Prerequisites: Module 3 (CSF — this skill leans heavily on RS.AN); Module 7 (cross-mapping reading procedure).

Core ideas

Every skill follows a consistent on-disk structure defined by the agentskills.io standard: a SKILL.md with YAML frontmatter + Markdown body, plus optional references/, scripts/, and assets/ directories. The Markdown body itself follows a fixed contract: When to Use (trigger conditions), Prerequisites (tools/access), Workflow (step-by-step commands and decision points), Verification (how to confirm success). Many skills add Key Concepts, Tools & Systems, Common Scenarios, and Output Format.

The skill we'll walk through is acquiring-disk-image-with-dd-and-dcfldd, from the digital-forensics subdomain. Use it when you need a forensic copy of a suspect drive: incident response, law-enforcement chain of custody, or before any destructive analysis.

1. Frontmatter

---
name: acquiring-disk-image-with-dd-and-dcfldd
description: Create forensically sound bit-for-bit disk images using
  dd and dcfldd while preserving evidence integrity through hash verification.
domain: cybersecurity
subdomain: digital-forensics
tags:
  - forensics
  - disk-imaging
  - evidence-acquisition
  - dd
  - dcfldd
  - hash-verification
version: '1.0'
author: mahipal
license: Apache-2.0
nist_csf:
  - RS.AN-01
  - RS.AN-03
  - DE.AE-02
  - RS.MA-01
---

Read it like the agent does: name + description + tags are the discovery surface (~30 tokens). The agent loads the body only after the frontmatter scores high for the user's prompt. Note this skill declares only nist_csf — no atlas/d3fend/ai_rmf/mitre_attack. Coverage tracks relevance: forensic disk imaging is a Respond-and-Detect activity in CSF terms, and not natively an AI threat.

2. When to Use

Five trigger conditions, drawn directly from the SKILL.md:

  • Forensic copy of a suspect drive for investigation
  • IR preserving volatile disk evidence before analysis
  • Legal proceedings requiring a verified bit-for-bit copy
  • Before any destructive analysis on a storage device
  • Imaging from physical drives, USB devices, or memory cards

3. Prerequisites

Linux forensic workstation (SIFT, Kali, any distro); dd or dcfldd; write-blocker hardware or software write-blocking configured (this is the non-negotiable one); destination drive larger than source; root/sudo; SHA-256/MD5 utilities.

4. Workflow (6 steps)

Identify and write-protect the device → prepare destination + document the source → acquire with dd → acquire with dcfldd (preferred forensic method, with built-in hashing and split output) → verify integrity → document the acquisition. Here's Step 4, the heart of the skill — straight from the SKILL.md:

# Acquire image with built-in hashing and split output
dcfldd if=/dev/sdb \
   of=/cases/case-2024-001/images/evidence.dd \
   hash=sha256,md5 \
   hashwindow=1G \
   hashlog=/cases/case-2024-001/hashes/acquisition_hashes.txt \
   bs=4096 \
   conv=noerror,sync \
   errlog=/cases/case-2024-001/logs/dcfldd_errors.log

# Acquire with verification pass
dcfldd if=/dev/sdb \
   of=/cases/case-2024-001/images/evidence.dd \
   hash=sha256 \
   hashlog=/cases/case-2024-001/hashes/verification.txt \
   vf=/cases/case-2024-001/images/evidence.dd \
   verifylog=/cases/case-2024-001/logs/verify.log

5. Verification

Hash the acquired image, diff against the source pre-hash, re-hash the source after acquisition, and confirm no source-side changes. The skill's defining promise is "bit-for-bit with cryptographic proof of integrity" — and verification is the step where that promise is kept.
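
Concretely, verification reduces to comparing digests; a sketch using the Step 4 paths:

# Prove the promise: all three digests must match (sketch)
img=/cases/case-2024-001/images/evidence.dd
sha256sum "$img"       # 1. digest of the acquired image
sha256sum /dev/sdb     # 2. re-hash of the source after acquisition
# 3. compare both against the pre-acquisition source hash recorded when
#    documenting the source; any mismatch voids the soundness claim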

6. Key Concepts & Tools

The SKILL.md body also includes a Key Concepts table (bit-for-bit copy, write blocker, hash verification, block size, conv=noerror,sync, chain of custody, split imaging, raw/dd format), a Tools & Systems table (dd, dcfldd, dc3dd, sha256sum, blockdev, hdparm, smartctl, lsblk), four worked Common Scenarios (suspect laptop, USB drive, remote acquisition over SSH, failing drive with ddrescue), and an Output Format template for the final acquisition summary.

Lab — walk it yourself

Read the skill end-to-end at raw/2026-05-09-cybersec-skill-acquire-disk-image.md. Then answer:

  1. Without re-reading the frontmatter: which CSF function and category does this skill primarily map to, and why?
  2. If you were imaging a failing drive (Scenario 4), what would you change in Step 3, and which extra tool does the skill recommend?
  3. The Key Concepts table calls out conv=noerror,sync. What does that flag combination do, and why does forensics specifically need it?
  4. Open question from our wiki: should this skill also be tagged with ATT&CK T1005 (Data from Local System)? Argue yes or no in one paragraph.

Takeaway

Every SKILL.md follows the same four-section contract: When to Use → Prerequisites → Workflow → Verification. The frontmatter is the agent's discovery surface; the body is the playbook. Read in that order — frontmatter for relevance, body for execution — and any of the 754 skills opens up.

Check yourself

  1. Name the four mandatory body sections of every SKILL.md, in order.
  2. Why does this disk-imaging skill declare only nist_csf and no other framework fields?
  3. What is the difference between dd and dcfldd in this skill, and which does the skill prefer for forensic acquisition?

Going further

Module 09

Build your own security skill

From blank directory to PR-ready skill: pick a domain, write the frontmatter, fill the four body sections, add helpers, and validate.

Why this module: The library exists because practitioners contribute. After this module you can author a SKILL.md from scratch, validate it against the agentskills.io standard, and submit a PR. The capstone (Module 10) builds on this directly.

Prerequisites: Module 8 (you've read a complete SKILL.md); Module 7 (you can map across frameworks).

Core ideas

Authoring a skill is six decisions, taken in order:

  1. Pick a domain and subdomain. 26 domains exist; the largest are cloud-security, threat-hunting, threat-intelligence, web-app-security, network-security. Smallest are deception-technology and compliance-governance. If you're filling a gap, look at the CSF coverage gaps from Module 3 (GV.OC, GV.PO, PR.AT, RC.RP/RC.CO) — the library team has flagged these.
  2. Write the frontmatter. name in kebab-case (1–64 chars), keyword-rich description, domain, subdomain, tags array, version, author, license: Apache-2.0. Then add framework mappings only where genuinely relevant.
  3. Fill the four body sections. When to Use (trigger conditions, ~5 bullets), Prerequisites (tools + access), Workflow (numbered steps, real shell commands), Verification (how to confirm success).
  4. Add references/. references/standards.md for the framework mappings (especially ATT&CK details — these often live here, not in frontmatter), references/workflows.md for deeper context.
  5. Add scripts/. If your skill has a repeatable computational step, encapsulate it as scripts/<name>.py or .sh. Real practitioners ship working code, not pseudocode.
  6. Validate and PR. Conformance to agentskills.io is enforced by upstream review: every PR is reviewed for technical accuracy and standard compliance within 48 hours.

Here's a stub skeleton — copy this, rename, and fill in. Frontmatter mappings are optional per framework: only declare what genuinely applies. Coverage tracks relevance, not a checklist.

# skills/<your-skill-name>/SKILL.md
---
name: your-skill-name
description: One sentence. Keyword-rich, written for agent discovery.
  Mention the tools, the technique, and the trigger condition.
domain: cybersecurity
subdomain: <pick from the 26>
tags:
  - <3-7 specific tags an agent would search on>
version: '1.0'
author: <your-handle>
license: Apache-2.0
# Framework mappings — declare only what applies. Coverage tracks relevance.
nist_csf:
  - <e.g. DE.CM-01>
mitre_attack:
  - <e.g. T1078>
# atlas_techniques:    # only if AI/ML threats are in scope
# d3fend_techniques:   # if you have defensive-technique mappings
# nist_ai_rmf:         # if AI risk management applies
---

# <Title — same as `name` but human-readable>

## When to Use
- Trigger condition 1 (concrete, not abstract)
- Trigger condition 2
- ...

## Prerequisites
- Tools (with versions)
- Access (root? cloud creds? specific platform?)
- Knowledge assumed

## Workflow

### Step 1: <action verb + outcome>
```bash
# real shell commands here
```

### Step 2: ...

## Verification
- How to confirm success: hash match, diff output, log entry, etc.
- What "done" looks like.

## Key Concepts (optional)
| Concept | Description |

## Tools & Systems (optional)
| Tool | Purpose |

## Common Scenarios (optional)
**Scenario 1:** ...

## Output Format (optional)
```
<template for the artefact the skill produces>
```

One critical detail from our wiki: frontmatter mapping fields are populated unevenly per skill. Real examples — acquiring-disk-image-with-dd-and-dcfldd declares only nist_csf; building-incident-response-playbook declares mitre_attack + nist_csf; building-cloud-siem-with-sentinel declares nist_csf + atlas_techniques + nist_ai_rmf. Don't fake mappings to look comprehensive. Be honest about scope.

Also note: ATT&CK technique mappings are sometimes documented in references/standards.md or in the ATT&CK Navigator layer rather than in frontmatter. Both are accepted. The d3fend_techniques field accepts both friendly names ("Executable Denylisting") and IDs with the D3- prefix ("D3-MA"). Pick one and be consistent within a skill.
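
Before opening a PR, run a quick self-check. A minimal lint sketch (it checks required frontmatter keys and the four body sections, nothing like full agentskills.io conformance; the path is illustrative):

# Pre-PR self-check (sketch)
f=skills/your-skill-name/SKILL.md
for key in name description domain subdomain tags version author license; do
  grep -qE "^${key}:" "$f" || echo "missing frontmatter key: $key"
done
for sec in 'When to Use' 'Prerequisites' 'Workflow' 'Verification'; do
  grep -q "^## $sec" "$f" || echo "missing body section: $sec"
done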

Lab — draft your own skill

  1. Pick a security task you've actually performed (or watched a teammate perform). Don't invent.
  2. Choose a domain/subdomain. Write the name and description. Read them out loud — does the description make sense to someone who's never met your task before?
  3. List 5 trigger conditions for "When to Use". If you can't list 5, your skill is too narrow — generalise or pick a different task.
  4. Write the Workflow as numbered steps with real shell commands. Test the commands.
  5. Add a Verification section. If you can't describe what "done" looks like, your skill isn't a skill yet — it's a wish.
  6. Map the skill to as many of the 5 frameworks as genuinely apply. Leave the rest blank. Open attack.mitre.org and nist.gov/cyberframework for ID lookup.

Takeaway

A skill is a practitioner playbook expressed as a directory: SKILL.md + references/ + scripts/ + assets/. Frontmatter is keyword-rich for discovery; body follows the When-to-Use / Prerequisites / Workflow / Verification contract; framework mappings declared only where genuinely relevant.

Check yourself

  1. What are the four mandatory body sections, and what's the role of each?
  2. Should you declare all five framework mapping fields in every skill? Why or why not?
  3. Where can ATT&CK technique mappings live other than frontmatter, and why might you choose that over mitre_attack:?

Going further

Module 10

Capstone — Ship one skill mapped to all 5 frameworks

Final project: produce one production-grade security skill — SKILL.md, frontmatter mapped to all five frameworks (where honestly applicable), helper scripts, references, and a working install in your AI agent of choice.

Why this module: You've toured the frameworks (modules 2–6), seen them composed (module 7), walked a real skill (module 8), and learned the authoring shape (module 9). The capstone proves you can ship. One skill. End-to-end. Reviewed against the agentskills.io standard the same way the upstream library reviews PRs.

Prerequisites: Modules 1–9.

Core ideas

The capstone is one skill — a directory you commit to a public repo (or a fork of mukul975/Anthropic-Cybersecurity-Skills if you intend to upstream). It must produce five concrete deliverables:

  1. SKILL.md spec. A single Markdown file with YAML frontmatter on top, then the four-section body (When to Use, Prerequisites, Workflow, Verification). Add Key Concepts / Tools & Systems / Common Scenarios / Output Format if they help. The description field is keyword-rich and written for agent discovery, not human marketing.
  2. Frontmatter mapped to all 5 frameworks — with honesty. Declare mitre_attack, nist_csf, atlas_techniques, d3fend_techniques, and nist_ai_rmf where the skill genuinely applies. Where it doesn't, omit the field and add a one-line comment in the SKILL.md body explaining why. Faking mappings to hit five-of-five is a fail; honest scoping is a pass. The library itself does this — acquiring-disk-image-with-dd-and-dcfldd ships with only nist_csf declared, and that's correct.
  3. 1+ helper script in scripts/. A working, executable script — .py, .sh, or whatever fits — that automates a repeatable step from your Workflow. Real code, runs on a clean machine, has a one-line usage comment.
  4. ≥1 reference document in references/. At minimum a references/standards.md that fleshes out the framework mappings — especially the ATT&CK technique IDs in detail, since those often live here rather than in frontmatter. Add references/workflows.md for deeper context if the body Workflow needs it.
  5. The skill installs and runs in your AI agent of choice. Clone or symlink your skill directory into the right location for Claude Code / Copilot / Cursor / Gemini CLI / Cline / any MCP-compatible agent. Prove it: ask the agent a question that should trigger your skill, and confirm it loads the SKILL.md and follows the Workflow.
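
For deliverable 5, installation is typically a clone or symlink into your agent's skills directory. A sketch for Claude Code, assuming the common ~/.claude/skills convention (check your agent's docs for the right location):

# Make the skill discoverable to a local agent (path is agent-specific)
mkdir -p ~/.claude/skills
ln -s "$PWD/skills/your-skill-name" ~/.claude/skills/your-skill-name
# then send a prompt matching a "When to Use" trigger and confirm the
# agent loads SKILL.md and follows the Workflow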

Suggested capstone topics

Aim for a real task — something you've done, or your team has done. The library's gaps are good targets:

  • CSF gap fillers: GV.OC (Organisational Context), GV.PO (Policy), PR.AT (Awareness/Training beyond phishing), RC.RP/RC.CO (recovery and recovery-communication). Recover function is the shallowest at ~29 skills.
  • ATLAS / agentic-AI: a defender-side skill for prompt-injection detection, RAG-content provenance verification, MCP-server-supply-chain audit, or AI-agent context-poisoning detection (AML.T0080).
  • D3FEND-first: the library only has 11 skills with D3FEND tags. A defensive-countermeasure-driven skill — Harden / Detect / Isolate / Deceive / Evict / Restore — would be unusually well-positioned to fill that gap.
  • Threat-informed pair: pick one ATT&CK technique from the top-10 list (PowerShell, Process Injection, Kerberoasting, etc.) and write a defender skill that pairs to it. Cite the offensive technique in the body, declare the defensive D3FEND countermeasures in frontmatter.

Done-when checklist

[ ] Directory: skills/<your-skill-name>/
[ ] SKILL.md present, four body sections filled
[ ] Frontmatter: name, description, domain, subdomain, tags,
    version, author, license: Apache-2.0
[ ] Framework mappings declared where applicable (1-5 of 5)
[ ] Skipped frameworks documented in the body with rationale
[ ] scripts/ contains ≥1 working helper script
[ ] references/ contains ≥1 reference doc (standards.md minimum)
[ ] Skill loads in your AI agent of choice
[ ] Agent successfully invokes the Workflow on a test prompt
[ ] (Optional) PR submitted upstream to mukul975/Anthropic-Cybersecurity-Skills

Lab — ship the capstone

Allocate 2–3 sessions. Suggested split:

  1. Session 1 (1–2 h): Pick the task. Draft the frontmatter and "When to Use" + "Prerequisites" sections. Decide which of the 5 framework fields apply and which you'll honestly omit.
  2. Session 2 (2–3 h): Write the Workflow with real commands. Build the helper script in scripts/. Write references/standards.md. Write the Verification section.
  3. Session 3 (1–2 h): Install the skill in your agent. Run the test prompt. Iterate on the description and tags until the agent reliably picks up the skill on the first try. (Optional) open a PR upstream.

When you ship, use the done-when checklist above as your final review pass. If any item is unchecked, you're not done.

Takeaway

You now have a skill in production: one playbook, declared across the right frameworks, runnable by any agentskills.io-compatible agent. That's the same artefact the 754-skill library is made of. You've gone from reader to author — and the library accepts contributions.

Check yourself

  1. Why is "honestly omit a framework field" a passing capstone, while "declare all five even if some don't apply" is a fail?
  2. Where in your skill directory does the ATT&CK technique detail live — frontmatter, body, or references/standards.md — and why might you choose each location?
  3. What's the agent-side test that proves your skill is actually usable, beyond passing local validation?

Going further

  • Wiki: start with wiki-index; it synthesises all 15 wiki pages
  • Source: all of raw/, especially the seven sample SKILL.md captures
  • Upstream: CONTRIBUTING.md (PR conventions) · agentskills.io (standard reference)