---
id: framework:mitre-atlas
type: framework
title: MITRE ATLAS
status: active
confidence: 0.5
sources:
  - 2026-05-09-cybersec-library-overview.md
  - 2026-05-09-cybersec-attack-coverage.md
created: 2026-05-09
updated: 2026-05-09
updated_log:
  - 2026-05-09: created
tiers: semantic
half_life_days: 180
tags: [framework, atlas, ai-ml, adversarial]
---

# MITRE ATLAS

## Summary

[MITRE ATLAS](https://atlas.mitre.org/) is a curated knowledge base of adversarial tactics, techniques, and case studies specific to AI and machine-learning systems — the AI-native counterpart to [[framework:mitre-attack]]. The [[concept:cybersec-skill-library]] uses ATLAS to flag skills that detect or defend against threats to ML pipelines, model weights, inference APIs, and autonomous agentic workflows. Skill frontmatter carries an `atlas_techniques` list of relevant AML.T-prefixed IDs.
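
As a concrete illustration, a skill's frontmatter might look like the sketch below. Only the `atlas_techniques` key is attested by the sources; the skill name and surrounding fields are hypothetical:

```yaml
---
name: detect-prompt-injection   # hypothetical skill name
atlas_techniques:               # attested field: list of AML.T-prefixed IDs
  - AML.T0051                   # LLM Prompt Injection
  - AML.T0054                   # LLM Jailbreak
---
```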

> **Source-coverage caveat:** ATLAS gets one paragraph in our raw sources (the library README's framework deep-dive); we do not yet have a captured mapping README from `mappings/atlas/` or equivalent. Treat detailed claims with low confidence until more sources land.

## Claims

- MITRE ATLAS v5.4 covers **16 tactics and 84 techniques** specific to AI/ML adversarial threats. `[src: raw/2026-05-09-cybersec-library-overview.md] {conf: 0.6}`
- ATLAS additions in late 2025 cover agentic AI attack vectors: AI agent context poisoning, tool invocation abuse, MCP server compromises, and malicious agent deployment. `[src: raw/2026-05-09-cybersec-library-overview.md] {conf: 0.55}`
- The library currently maps **81 skills** to ATLAS adversarial-ML techniques (per the ATTACK_COVERAGE doc, which references ATLAS v5.5.0 — slightly newer than the README's v5.4). `[src: raw/2026-05-09-cybersec-attack-coverage.md] {conf: 0.65}`
- Key ATLAS techniques applied across library skills: `[src: raw/2026-05-09-cybersec-attack-coverage.md] {conf: 0.7}`
  - AML.T0051 LLM Prompt Injection (Execution)
  - AML.T0054 LLM Jailbreak (Privilege Escalation)
  - AML.T0088 Generate Deepfakes (AI Attack Staging)
  - AML.T0010 AI Supply Chain Compromise (Initial Access)
  - AML.T0020 Poison Training Data (Resource Development)
  - AML.T0070 RAG Poisoning (Persistence)
  - AML.T0080 AI Agent Context Poisoning (Persistence)
  - AML.T0056 Extract LLM System Prompt (Exfiltration)
- Skills tagged with ATLAS technique IDs help agents identify and defend against threats to ML pipelines, model weights, inference APIs, and autonomous workflows. `[src: raw/2026-05-09-cybersec-library-overview.md] {conf: 0.55}`
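A minimal sketch of how a tooling pass might sanity-check the `atlas_techniques` field against the AML.T ID format. The function name, skill dict, and the optional `.NNN` sub-technique suffix handling are assumptions, not part of the library:

```python
import re

# ATLAS technique IDs look like AML.T followed by four digits, with an
# optional three-digit sub-technique suffix (assumption based on the
# AML.T-prefixed IDs cited in the claims above).
ATLAS_ID = re.compile(r"^AML\.T\d{4}(\.\d{3})?$")

def invalid_atlas_ids(frontmatter: dict) -> list[str]:
    """Return any `atlas_techniques` entries that do not match the ID pattern."""
    return [t for t in frontmatter.get("atlas_techniques", [])
            if not ATLAS_ID.match(t)]

# Hypothetical skill frontmatter, parsed into a dict.
skill = {
    "name": "detect-prompt-injection",
    "atlas_techniques": ["AML.T0051", "AML.T0054"],
}
```

A validator like this would catch, say, a bare ATT&CK-style `T1059` accidentally placed in the ATLAS field.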

## Relationships

- complements → [[framework:mitre-attack]] `{conf: 0.7}`
- complements → [[framework:nist-ai-rmf]] `{conf: 0.65}`
- maps-to → [[concept:cybersec-skill-library]] `{conf: 0.6}`

## Open questions

- [ ] **Source-coverage gap:** The library overview README and the ATTACK_COVERAGE doc are our only captured ATLAS sources; we still lack a captured `mappings/atlas/README.md` or equivalent. Confidence on technique-level claims is therefore limited.
- [ ] The two raw sources disagree on ATLAS version (README v5.4 vs ATTACK_COVERAGE v5.5.0). Which is authoritative for the current library state?

## Changelog

- 2026-05-09 — created
