
OWASP Top 10 for Agentic Applications

The OWASP Agentic Top 10 mapped to real-world attack data and XOR capabilities. A reference page for security teams.

The 10 agentic risks

Each risk category maps to specific attack data from MCPSecBench, Pillar Security, and Snyk ToxicSkills. This page connects the OWASP framework to published research and XOR capabilities.

XOR coverage

XOR addresses risks through verification (output validation), skill scanning (supply chain), MCP integrity checks (protocol security), and signed audit trails (accountability).

Agentic risk categories: 10
Release date: December 2025
Separate MCP Top 10 risks: 10

The OWASP Agentic Top 10, mapped to real attack data

Released December 2025, the OWASP Top 10 for Agentic Applications identifies risks specific to AI agent systems — not just models. It extends the LLM Top 10 to cover what happens when models get tools, permissions, and autonomy. Each risk below is mapped to published research and XOR's coverage.

OWASP also maintains a separate MCP Top 10 for protocol-specific risks. Both lists overlap on supply chain and tool integrity.

The 10 risks

ASI-01: Agentic Excessive Agency

Agent has more autonomy and tool access than the task requires. 58-90% success rate for arbitrary code execution when agents have broad tool access (arXiv:2503.12188).

[XOR: PARTIAL]

XOR enforces least-privilege on GitHub permissions. Does not yet control MCP tool permissions at runtime.

ASI-02: Agentic Identity & Access Management

Agents share credentials or escalate privileges. 53% of MCP servers use insecure static secrets; only 8.5% use OAuth (Astrix Security).

[XOR: PARTIAL]

XOR uses scoped GitHub App tokens. Flags hardcoded secrets in PR reviews. Does not manage MCP server credentials.

ASI-03: Agentic Prompt Injection

Malicious instructions embedded in data the agent processes. 20% success rate across 2,000+ applications, with an average attack time of 42 seconds (Pillar Security).

[XOR: YES]

XOR's verification pipeline tests agent-generated patches independently. A poisoned prompt may influence the patch, but verification catches the bad output.

ASI-04: Agentic Supply Chain Vulnerabilities

Compromised third-party skills, tools, or MCP servers enter the agent's supply chain. 36.82% of 3,984 audited agent skills have known vulnerabilities; 76 confirmed malicious payloads were found in public marketplaces (Snyk ToxicSkills, Feb 2026).

[XOR: YES]

XOR scans agent skills before execution, verifies tool integrity with COSE_Sign1 signatures, and blocks unsigned tools. See building secure skills.
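
A minimal sketch of that kind of signature gate, assuming the pycose library and an already-trusted publisher key (the function name and key handling here are illustrative, not XOR's actual interface):

```python
from pycose.messages import Sign1Message


def load_verified_skill(signed_blob: bytes, publisher_key) -> bytes:
    """Return the skill payload only if its COSE_Sign1 signature verifies.

    `publisher_key` is a pycose CoseKey for a trusted publisher; key
    distribution and pinning are out of scope for this sketch.
    """
    msg = Sign1Message.decode(signed_blob)   # parse the CBOR envelope
    msg.key = publisher_key
    if not msg.verify_signature():           # unsigned or tampered: reject
        raise ValueError("skill signature invalid: refusing to load")
    return msg.payload                       # the signed skill definition
```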

ASI-05: Agentic Uncontrolled Behavior

Agent takes unexpected actions or runs unbounded loops. Multi-agent systems amplify this — one compromised agent propagates across the system.

[XOR: YES]

XOR's guardrail review catches unexpected behavior. Uncertainty stop: when confidence is low, XOR stops and says so instead of guessing. Patches are verified before merge.

ASI-06: Agentic Knowledge Poisoning

Corrupted training data or poisoned context influences agent decisions. RAG applications are especially vulnerable to indirect injection via retrieved documents.

[XOR: NO]

XOR does not address knowledge poisoning. This requires model-level defenses outside XOR's scope.

ASI-07: Agentic Insecure Output Handling

Agent output is used without validation. 90% of successful jailbreak attacks result in data leakage (Pillar Security). Agent-generated code merged without review is the same class of risk.

[XOR: YES]

This is XOR's core function. Every agent-generated patch is tested against the original vulnerability. Bad patches are rejected before review. See PR verification.
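
In reduced form, that check looks like the sketch below (hypothetical helper and paths, not XOR's pipeline): a patch is accepted only if a reproducer that triggered the vulnerability before the patch no longer triggers it afterwards.

```python
import subprocess


def verify_patch(repo_dir: str, patch_file: str, reproducer: list[str]) -> bool:
    """Accept an agent-generated patch only if it closes the vulnerability.

    `reproducer` is any command that exits non-zero while the vulnerability
    is reachable (for example, an exploit test).
    """
    # The reproducer must trigger the bug pre-patch, otherwise passing it
    # post-patch proves nothing.
    before = subprocess.run(reproducer, cwd=repo_dir)
    if before.returncode == 0:
        raise RuntimeError("reproducer does not trigger the vulnerability")

    # Apply the agent's patch, then re-run the same reproducer.
    subprocess.run(["git", "apply", patch_file], cwd=repo_dir, check=True)
    after = subprocess.run(reproducer, cwd=repo_dir)
    return after.returncode == 0  # the patch passes only if the bug is gone
```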

ASI-08: Agentic Excessive Permissions

Agents granted admin access when read-only would suffice. 350% rise in GitHub Actions supply chain attacks in 2025 (StepSecurity) — many exploiting over-permissioned workflows.

[XOR: YES]

XOR's Actions hardening pins actions to SHA, reduces workflow permissions to least-privilege, and flags over-permissioned configurations in PR review.
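
Both checks can be approximated in a few lines; the sketch below (illustrative, PyYAML-based, not XOR's implementation) flags `uses:` references that are not pinned to a full commit SHA and workflows that omit an explicit `permissions:` block.

```python
import re

import yaml  # PyYAML

FULL_SHA = re.compile(r"^[0-9a-f]{40}$")  # a 40-hex-char commit SHA


def audit_workflow(path: str) -> list[str]:
    """Flag unpinned actions and missing least-privilege permissions."""
    findings = []
    with open(path) as f:
        workflow = yaml.safe_load(f)

    # Without an explicit permissions block, GITHUB_TOKEN falls back to the
    # repository default, which is often broader than the workflow needs.
    if "permissions" not in workflow:
        findings.append(f"{path}: no top-level 'permissions:' block")

    for job_name, job in (workflow.get("jobs") or {}).items():
        for step in job.get("steps", []):
            uses = step.get("uses", "")
            if "@" in uses and not FULL_SHA.match(uses.split("@", 1)[1]):
                findings.append(
                    f"{path} ({job_name}): '{uses}' is pinned to a tag or "
                    "branch, not a commit SHA"
                )
    return findings
```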

ASI-09: Agentic Insufficient Logging

No audit trail for agent decisions. When something goes wrong, teams can't reconstruct what the agent did or why.

[XOR: YES]

XOR produces signed audit trails for every triage: what was scanned, what passed, what failed, and why. See agent compliance evidence and compliance evidence.
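
As an illustration of what such a record can contain, the sketch below signs a triage record with an Ed25519 key from the cryptography library; the schema and field names are assumptions, not XOR's actual evidence format.

```python
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def signed_triage_record(key: Ed25519PrivateKey, scanned: list[str],
                         passed: list[str], failed: list[str],
                         reason: str) -> dict:
    """Build a tamper-evident record of one triage decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "scanned": scanned,   # what was scanned
        "passed": passed,     # what passed
        "failed": failed,     # what failed
        "reason": reason,     # and why
    }
    # Canonical JSON so a verifier can reproduce the exact signed bytes.
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    return {"record": record, "signature": key.sign(payload).hex()}


# Publishing key.public_key() lets auditors verify the signature independently.
```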

ASI-10: Agentic Multi-Agent Trust

Agents trust other agents without verification. Research shows 58-90% success rates for cross-agent code execution, with some configurations reaching 100% (arXiv:2503.12188).

[XOR: PARTIAL]

XOR verifies agent output before merge (regardless of which agent produced it). Does not yet verify inter-agent communication at runtime.

Coverage summary

Risks covered (6): ASI-03, 04, 05, 07, 08, 09

Partially covered (3): ASI-01, 02, 10

Not addressed (1): ASI-06

XOR addresses 9 of 10 OWASP agentic risks to some degree. Knowledge poisoning (ASI-06) requires model-level defenses that are outside XOR's scope.

Sources

  • OWASP Top 10 for Agentic Applications (Dec 2025)
  • OWASP MCP Top 10 (2025)
  • Snyk ToxicSkills — 3,984 agent skills audited (Feb 2026)
  • Pillar Security — State of Attacks on GenAI, 2,000+ apps (Oct 2024)
  • Astrix Security — State of MCP Server Security 2025
  • arXiv:2503.12188 — Multi-Agent Systems Execute Arbitrary Code
  • arXiv:2508.13220 — MCPSecBench: 17 attack types, 4 surfaces
  • StepSecurity — 350% rise in GitHub Actions supply chain attacks (2025)

FAQ

What is the OWASP Top 10 for Agentic Applications?

Released in December 2025, it identifies 10 risk categories specific to AI agent systems including supply chain vulnerabilities, excessive permissions, and uncontrolled agent behavior. It extends the LLM Top 10 to autonomous agents.

How does it differ from the LLM Top 10?

The LLM Top 10 covers model-level risks (prompt injection, training data poisoning). The Agentic Top 10 covers system-level risks: what happens when models get tools, permissions, and autonomy.

Is there a separate OWASP MCP Top 10?

Yes. OWASP also maintains a dedicated MCP Top 10 project covering protocol-specific risks like token mismanagement and shadow MCP servers. See the MCP Security page for details.

Related topics

Patch verification

XOR writes a verifier for each vulnerability, then tests agent-generated patches against it. If the fix passes, it ships. If not, the failure feeds back into the agent harness.

Automated vulnerability patching

AI agents generate fixes for known CVEs. XOR verifies each fix and feeds outcomes back into the agent harness so future patches improve.

Benchmark Results

62.7% pass rate. $2.64 per fix. Real data from 1,664 evaluations.

Agent Cost Economics

Fix vulnerabilities for $2.64–$52 with agents. 100x cheaper than incident response. Real cost data.

Agent Configurations

13 agent-model configurations evaluated on real CVEs. Compare Claude Code, Codex, Gemini CLI, Cursor, and OpenCode.

Benchmark Methodology

How CVE-Agent-Bench evaluates 13 coding agents on 128 real vulnerabilities. Deterministic, reproducible, open methodology.

Agent Environment Security

AI agents run with real permissions. XOR verifies tool configurations, sandbox boundaries, and credential exposure.

Security Economics for Agentic Patching

Security economics for agentic patching. ROI models backed by verified pass/fail data and business-impact triage.

Validation Process

25 questions we ran against our own data before publishing. Challenges assumptions, explores implications, extends findings.

Cost Analysis

10 findings on what AI patching costs and whether it is worth buying. 1,664 evaluations analyzed.

Bug Complexity

128 vulnerabilities scored by difficulty. Floor = every agent fixes it. Ceiling = no agent can.

Agent Strategies

How different agents approach the same bug. Strategy matters as much as model capability.

Execution Metrics

Per-agent session data: turns, tool calls, tokens, and timing. See what happens inside an agent run.

Pricing Transparency

Every cost number has a source. Published pricing models, measurement methods, and provider rates.

Automated Vulnerability Patching and PR Review

Automated code review, fix generation, GitHub Actions hardening, safety checks, and learning feedback. One-click install on any GitHub repository.

Getting Started with XOR GitHub App

Install in 2 minutes. First result in 15. One-click GitHub App install, first auto-review walkthrough, and engineering KPI triad.

Platform Capabilities

One install. Seven capabilities. Prompt-driven. CVE autopatch, PR review, CI hardening, guardrail review, audit packets, and more.

Dependabot Verification

Dependabot bumps versions. XOR verifies they're safe to merge. Reachability analysis, EPSS/KEV enrichment, and structured verdicts.

Compliance Evidence

Machine-readable evidence for every triaged vulnerability. VEX statements, verification reports, and audit trails produced automatically.

Compatibility and Prerequisites

Languages, build systems, CI platforms, and repository types supported by XOR. What you need to get started.

Command Reference

Every @xor-hardener command on one page. /review, /describe, /ask, /patch_i, /issue_spec, /issue_implement, and more.

Continuous Learning from Verified Agent Runs

A signed record of every agent run. See what the agent did, verify it independently, and feed the data back so agents improve.

Signed Compliance Evidence for AI Agents

A tamper-proof record of every AI agent action. Produces evidence for SOC 2, EU AI Act, PCI DSS, and more. Built on open standards so auditors verify independently.

Compliance Evidence and Standards Alignment

How XOR signed audit trails produce evidence for SOC 2, EU AI Act, PCI DSS, NIST, and other compliance frameworks.

Agentic Third-Party Risk

33% of enterprise software will be agentic by 2028. 40% of those projects will be canceled due to governance failures. A risk overview for CTOs.

MCP Server Security

17 attack types across 4 surfaces. 7.2% of 1,899 open-source MCP servers contain vulnerabilities. Technical deep-dive with defense controls.

How Agents Get Attacked

20% jailbreak success rate. 42 seconds average. 90% of successful attacks leak data. Threat landscape grounded in published research.

Governing AI Agents in the Enterprise

92% of AI vendors claim broad data usage rights. 17% commit to regulatory compliance. Governance frameworks from NIST, OWASP, EU CRA, and Stanford CodeX.

See which agents produce fixes that work

128 CVEs. 13 agents. 1,664 evaluations. Agents learn from every run.