Skip to main content
[ATTACK LANDSCAPE]

How Agents Get Attacked

20% jailbreak success rate. 42 seconds average. 90% of successful attacks leak data. Threat landscape grounded in published research.

Agent attack taxonomy

Four primary vectors: prompt injection, tool poisoning, skill supply chain compromise, and protocol exploits. Each vector has distinct detection and mitigation requirements.

Real-world attack data

Pillar Security monitored 2,000+ LLM applications and found a 20% jailbreak success rate with 42-second average time. 90% of successful attacks resulted in sensitive data leakage.

20%
Jailbreak success rate
42s
Average jailbreak time
90%
Successful attacks that leak data

Agent threat landscape: real-world data, not theory

Every stat on this page comes from published research. Pillar Security monitored 2,000+ LLM applications. MCPSecBench tested 17 attack types across 4 surfaces. arXiv papers documented multi-agent exploitation at scale.

Attack taxonomy

Prompt injection

20% success rate across 2,000+ applications. Average time: 42 seconds. 90% of successful attacks result in data leakage (Pillar Security, Oct 2024).

Tool poisoning

36.5% average attack success rate. Manipulated tool descriptions trick agents into executing harmful actions. o1-mini hit 72.8% success (MCPTox, arXiv:2508.14925).

Skill supply chain

36.82% of 3,984 agent skills contain security flaws. 76 confirmed malicious payloads in public marketplaces (Snyk ToxicSkills). 350% rise in GitHub Actions supply chain attacks in 2025 (StepSecurity).

Protocol exploits

17 attack types across 4 MCP surfaces. 85%+ of identified attacks compromise at least one platform (MCPSecBench, arXiv:2508.13220).

Multi-agent amplification

When multiple agents collaborate, a compromised agent can propagate attacks across the system. Research shows 58-90% success rates for arbitrary code execution via multi-agent orchestration systems, with some configurations reaching 100% (arXiv:2503.12188).

Prompt injection on a single agentic coding assistant can compromise the entire supply chain of projects it touches (arXiv:2601.17548).

Defense effectiveness

A meta-analysis of 78 published studies found that attackers with adaptive strategies succeed at 85%+ rates. Most defense mechanisms achieve less than 50% mitigation (arXiv:2506.23260). This gap between attack and defense effectiveness means detection and response are more reliable than prevention alone.

"Prompt injection is defining the AI era" — CrowdStrike 2026 Threat Report

What XOR catches

Verification pipeline

Every agent-generated fix is tested against the original vulnerability. Bad patches are rejected before review.

Guardrail review

Inline review comments on risky changes. Uncertainty stop: XOR says when confidence is low instead of guessing.

CI hardening

Actions pinned to SHA. Workflow permissions reduced to least-privilege. Counters the 350% rise in Actions supply chain attacks.

Skill scanning

Agent tools checked against vulnerability databases before execution. Unsigned tools are blocked.

Sources

  • Pillar Security — State of Attacks on GenAI (2024-2025), 2,000+ LLM apps
  • arXiv:2601.17548 — Prompt Injection Attacks on Agentic Coding Assistants
  • arXiv:2503.12188 — Multi-Agent Systems Execute Arbitrary Malicious Code
  • arXiv:2510.23883 — Agentic AI Security: Threats, Defenses, Evaluation
  • arXiv:2506.23260 — Adaptive attack strategies, 78 studies meta-analysis
  • International AI Safety Report 2026 — 100+ experts, 30+ countries
  • CrowdStrike 2026 Threat Report — AI threat vectors
  • StepSecurity — GitHub Actions supply chain attacks

[NEXT STEPS]

Related pages

FAQ

How often do jailbreak attacks succeed?

20% of jailbreak attempts succeed with an average time of 42 seconds. 90% of successful attacks result in sensitive data leakage (Pillar Security, 2,000+ LLM applications monitored).

Can multi-agent systems be exploited for code execution?

Yes. Research shows 58-90% success rates for arbitrary code execution via multi-agent orchestration systems, with some configurations reaching 100% (arXiv:2503.12188).

How effective are current defenses?

Most defense mechanisms achieve less than 50% mitigation against adaptive attack strategies. Attackers with budget for multiple attempts succeed at 85%+ rates across 78 published studies (arXiv:2506.23260).

[RELATED TOPICS]

Patch verification

XOR writes a verifier for each vulnerability, then tests agent-generated patches against it. If the fix passes, it ships. If not, the failure feeds back into the agent harness.

Automated vulnerability patching

AI agents generate fixes for known CVEs. XOR verifies each fix and feeds outcomes back into the agent harness so future patches improve.

Benchmark Results

62.7% pass rate. $2.64 per fix. Real data from 1,664 evaluations.

Benchmark Results

62.7% pass rate. $2.64 per fix. Real data from 1,664 evaluations.

Agent Cost Economics

Fix vulnerabilities for $2.64–$52 with agents. 100x cheaper than incident response. Real cost data.

Agent Configurations

13 agent-model configurations evaluated on real CVEs. Compare Claude Code, Codex, Gemini CLI, Cursor, and OpenCode.

Benchmark Methodology

How CVE-Agent-Bench evaluates 13 coding agents on 128 real vulnerabilities. Deterministic, reproducible, open methodology.

Agent Environment Security

AI agents run with real permissions. XOR verifies tool configurations, sandbox boundaries, and credential exposure.

Security Economics for Agentic Patching

Security economics for agentic patching. ROI models backed by verified pass/fail data and business-impact triage.

Validation Process

25 questions we ran against our own data before publishing. Challenges assumptions, explores implications, extends findings.

Cost Analysis

10 findings on what AI patching costs and whether it is worth buying. 1,664 evaluations analyzed.

Bug Complexity

128 vulnerabilities scored by difficulty. Floor = every agent fixes it. Ceiling = no agent can.

Agent Strategies

How different agents approach the same bug. Strategy matters as much as model capability.

Execution Metrics

Per-agent session data: turns, tool calls, tokens, and timing. See what happens inside an agent run.

Pricing Transparency

Every cost number has a source. Published pricing models, measurement methods, and provider rates.

Automated Vulnerability Patching and PR Review

Automated code review, fix generation, GitHub Actions hardening, safety checks, and learning feedback. One-click install on any GitHub repository.

Getting Started with XOR GitHub App

Install in 2 minutes. First result in 15. One-click GitHub App install, first auto-review walkthrough, and engineering KPI triad.

Platform Capabilities

One install. Seven capabilities. Prompt-driven. CVE autopatch, PR review, CI hardening, guardrail review, audit packets, and more.

Dependabot Verification

Dependabot bumps versions. XOR verifies they're safe to merge. Reachability analysis, EPSS/KEV enrichment, and structured verdicts.

Compliance Evidence

Machine-readable evidence for every triaged vulnerability. VEX statements, verification reports, and audit trails produced automatically.

Compatibility and Prerequisites

Languages, build systems, CI platforms, and repository types supported by XOR. What you need to get started.

Command Reference

Every @xor-hardener command on one page. /review, /describe, /ask, /patch_i, /issue_spec, /issue_implement, and more.

Continuous Learning from Verified Agent Runs

A signed record of every agent run. See what the agent did, verify it independently, and feed the data back so agents improve.

Signed Compliance Evidence for AI Agents

A tamper-proof record of every AI agent action. Produces evidence for SOC 2, EU AI Act, PCI DSS, and more. Built on open standards so auditors verify independently.

Compliance Evidence and Standards Alignment

How XOR signed audit trails produce evidence for SOC 2, EU AI Act, PCI DSS, NIST, and other compliance frameworks.

Agentic Third-Party Risk

33% of enterprise software will be agentic by 2028. 40% of those projects will be canceled due to governance failures. A risk overview for CTOs.

MCP Server Security

17 attack types across 4 surfaces. 7.2% of 1,899 open-source MCP servers contain vulnerabilities. Technical deep-dive with defense controls.

Governing AI Agents in the Enterprise

92% of AI vendors claim broad data usage rights. 17% commit to regulatory compliance. Governance frameworks from NIST, OWASP, EU CRA, and Stanford CodeX.

OWASP Top 10 for Agentic Applications

The OWASP Agentic Top 10 mapped to real-world attack data and XOR capabilities. A reference page for security teams.

See which agents produce fixes that work

128 CVEs. 13 agents. 1,664 evaluations. Agents learn from every run.