Bug Complexity
128 vulnerabilities scored by difficulty. Floor = every agent fixes it. Ceiling = no agent can.
What the ceiling means
Ceiling samples are beyond current AI capability. Human review still wins for these. AI patching works best combined with human escalation for the hard tail.
How we score vulnerability complexity
128 CVE samples ranked by how many of the 13 agents fix them. Floor samples: every agent passes. Ceiling samples: no agent passes. The spread between floor and ceiling tells you how much headroom AI patching has left.
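A minimal sketch of this scoring, assuming per-sample pass/fail lists for each agent (the names and data structure here are illustrative, not XOR's actual format):

```python
# Minimal sketch of the difficulty scoring described above. `results`
# maps each CVE sample to 13 booleans, one per agent; this structure
# is an assumption for illustration, not XOR's actual data format.

N_AGENTS = 13

def pass_rates(results: dict[str, list[bool]]) -> dict[str, float]:
    """Rank each sample by the fraction of agents that fixed it."""
    return {cve: sum(p) / N_AGENTS for cve, p in results.items()}

def floor_and_ceiling(results: dict[str, list[bool]]) -> tuple[list[str], list[str]]:
    """Floor: every agent passes. Ceiling: no agent passes."""
    floor = [cve for cve, p in results.items() if all(p)]
    ceiling = [cve for cve, p in results.items() if not any(p)]
    return floor, ceiling
```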
Key insight
26 bugs no agent can fix
26 of 128 samples are beyond current AI capability. Even a perfect ensemble of all 13 agents fixes only 79.7% of bugs, the oracle ceiling.
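A minimal sketch of the oracle-ceiling calculation, assuming the same per-sample results structure as above (illustrative, not XOR's actual tooling):

```python
# Oracle ceiling: the pass rate of a hypothetical ensemble that takes the
# best result across all 13 agents. A sample counts as fixed if at least
# one agent produced a passing patch.

def oracle_ceiling(results: dict[str, list[bool]]) -> float:
    fixed = sum(1 for passes in results.values() if any(passes))
    return fixed / len(results)

# With 26 of 128 samples unfixed by every agent:
# (128 - 26) / 128 ~= 0.797, i.e. a 79.7% oracle ceiling.
```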
Difficulty distribution
Five bands from floor (all agents pass) through ceiling (no agent passes). The wider the medium band, the more the benchmark discriminates between agents.
| Band | Samples | Outcome |
|---|---|---|
| Easy | 59 | Most agents pass |
| Medium | 15 | Mixed results |
| Hard | 17 | Most agents fail |
| Very hard | 19 | Nearly impossible |
| Impossible | 26 | No agent passes |
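For illustration, band assignment from agent pass counts could look like the sketch below. The exact cutoffs XOR uses are not stated here, so these thresholds are assumptions chosen to match the band labels above:

```python
# Illustrative band assignment from agent pass counts. The threshold
# values are assumptions, not XOR's published cutoffs.

def band(passes: int, n_agents: int = 13) -> str:
    rate = passes / n_agents
    if passes == 0:
        return "Impossible"   # no agent passes
    if rate <= 0.25:
        return "Very hard"    # nearly impossible
    if rate <= 0.5:
        return "Hard"         # most agents fail
    if rate <= 0.75:
        return "Medium"       # mixed results
    return "Easy"             # most agents pass
```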
Hardest and easiest samples
The extremes of the distribution. Floor samples are fixed reliably by every agent; ceiling samples remain open problems.
Hardest bugs
| Project | Pass rate | Agents passed |
|---|---|---|
| ReadStat (WizardMac) | 0% | 0/13 |
| capstone (aquynh) | 0% | 0/13 |
| capstone (aquynh) | 0% | 0/13 |
| capstone (aquynh) | 0% | 0/13 |
| capstone (aquynh) | 0% | 0/13 |
| capstone (aquynh) | 0% | 0/13 |
| quickjs (bellard) | 0% | 0/13 |
| quickjs (bellard) | 0% | 0/13 |
| c-blosc2 (blosc) | 0% | 0/13 |
| envoy (envoyproxy) | 0% | 0/13 |
Easiest bugs
| Project | Pass rate | Agents passed |
|---|---|---|
| harfbuzz (harfbuzz) | 100% | 13/13 |
| harfbuzz (harfbuzz) | 100% | 13/13 |
| libarchive (libarchive) | 100% | 13/13 |
| libgit2 (libgit2) | 100% | 13/13 |
| ovs (openvswitch) | 100% | 13/13 |
| wireshark (wireshark) | 100% | 13/13 |
| arrow (apache) | 92% | 12/13 |
| rawspeed (darktable-org) | 92% | 12/13 |
| rawspeed (darktable-org) | 92% | 12/13 |
| file (file) | 92% | 12/13 |
FAQ
How is bug difficulty measured?
Each of the 128 bugs is scored by how many of the 13 agents fix it. If all agents pass, it is a floor sample. If none pass, it is a ceiling sample.
What does complexity mean for my team?
If your codebase has mostly simple dependency bumps, expect higher fix rates than the benchmark average. Complex C/C++ multi-file patches will be closer to the hard band.
Patch verification
XOR writes a verifier for each vulnerability, then tests agent-generated patches against it. If the fix passes, it ships. If not, the failure feeds back into the agent harness.
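A sketch of that verify-then-feedback loop. The function names here (`generate_patch`, `run_verifier`, `record_outcome`, `ship`) are hypothetical; XOR's actual harness API is not shown in this document:

```python
# Hypothetical sketch of the patch verification loop described above.
# Every name on `agent` and `harness` is an assumed placeholder.

def patch_and_verify(cve, agent, harness) -> bool:
    patch = agent.generate_patch(cve)           # agent proposes a fix
    passed = harness.run_verifier(cve, patch)   # test against the per-CVE verifier
    harness.record_outcome(cve, agent, passed)  # outcome feeds back either way
    if passed:
        harness.ship(patch)                     # verified fixes ship
    return passed
```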
Automated vulnerability patching
AI agents generate fixes for known CVEs. XOR verifies each fix and feeds outcomes back into the agent harness so future patches improve.
Benchmark Results
62.7% pass rate. $2.64 per fix. Real data from 1,664 evaluations.
Agent Cost Economics
Fix vulnerabilities for $2.64–$52 with agents. 100x cheaper than incident response. Real cost data.
Agent Configurations
13 agent-model configurations evaluated on real CVEs. Compare Claude Code, Codex, Gemini CLI, Cursor, and OpenCode.
Benchmark Methodology
How CVE-Agent-Bench evaluates 13 coding agents on 128 real vulnerabilities. Deterministic, reproducible, open methodology.
Agent Environment Security
AI agents run with real permissions. XOR verifies tool configurations, sandbox boundaries, and credential exposure.
Security Economics for Agentic Patching
ROI models for agentic patching, backed by verified pass/fail data and business-impact triage.
Validation Process
25 questions we ran against our own data before publishing. Challenges assumptions, explores implications, extends findings.
Cost Analysis
10 findings on what AI patching costs and whether it is worth buying. 1,664 evaluations analyzed.
Agent Strategies
How different agents approach the same bug. Strategy matters as much as model capability.
Execution Metrics
Per-agent session data: turns, tool calls, tokens, and timing. See what happens inside an agent run.
Pricing Transparency
Every cost number has a source. Published pricing models, measurement methods, and provider rates.
Automated Vulnerability Patching and PR Review
Automated code review, fix generation, GitHub Actions hardening, safety checks, and learning feedback. One-click install on any GitHub repository.
Continuous Learning from Verified Agent Runs
A signed record of every agent run. See what the agent did, verify it independently, and feed the data back so agents improve.
Signed Compliance Evidence for AI Agents
A tamper-proof record of every AI agent action. Produces evidence for SOC 2, EU AI Act, PCI DSS, and more. Built on open standards so auditors verify independently.
Compliance Evidence and Standards Alignment
How XOR signed audit trails produce evidence for SOC 2, EU AI Act, PCI DSS, NIST, and other compliance frameworks.
See which agents produce fixes that work
128 CVEs. 13 agents. 1,664 evaluations. Agents learn from every run.