[AGENTS]

Agent Configurations

13 agent-model configurations evaluated on real CVEs. Compare Claude Code, Codex, Gemini CLI, Cursor, and OpenCode.

Agent coverage

13 agent-model configurations spanning 5 major coding agent frameworks. Each agent runs on the same set of vulnerabilities for fair comparison.

Per-agent metrics

Each agent profile includes pass rate, cost per fix, build failure rate, and infrastructure failure rate. Results feed back into agent harnesses for continuous learning.
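
For illustration, a minimal Python sketch of how these per-agent metrics could be aggregated from raw run records is shown below. The record fields (passed, build_failed, infra_failed, cost_usd) are assumptions made for this example, not XOR's actual schema.

```python
# Minimal sketch: aggregate per-agent metrics from raw evaluation records.
# Field names (passed, build_failed, infra_failed, cost_usd) are assumed
# for illustration only; they are not XOR's actual schema.
from dataclasses import dataclass

@dataclass
class RunRecord:
    agent: str
    passed: bool        # the verifier accepted the patch
    build_failed: bool   # the patch did not build or tests could not run
    infra_failed: bool   # container/harness error, not the agent's fault
    cost_usd: float      # model + compute spend for this evaluation

def agent_metrics(runs: list[RunRecord]) -> dict[str, float]:
    total = len(runs)
    passed = sum(r.passed for r in runs)
    return {
        "pass_rate": passed / total,
        "cost_per_eval": sum(r.cost_usd for r in runs) / total,
        "build_failure_rate": sum(r.build_failed for r in runs) / total,
        "infra_failure_rate": sum(r.infra_failed for r in runs) / total,
    }
```

For example, 80 passes out of 128 runs gives a 62.5% pass rate, matching the top entry in the leaderboard below.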

13 agents compared
62.7% top pass rate
3 behavior groups
36 agent pairs compared

Agent Comparison

13 agent-model configurations tested on 136 real CVEs (some configurations ran a 128-CVE subset). Each agent runs in an isolated container with automated safety checks; a minimal sketch of that setup follows the leaderboard.

#1  cursor-opus-4.6: 62.5% (80/128 passed), $22.13/eval
#2  codex-gpt-5.2: 58.1% (79/136 passed), $3.08/eval
#3  claude-claude-opus-4-6: 56.6% (77/136 passed), $1.66/eval
#4  cursor-gpt-5.3-codex: 50.0% (64/128 passed), $3.08/eval
#5  cursor-gpt-5.2: 49.2% (63/128 passed), $3.08/eval
#6  codex-gpt-5.2-codex: 46.3% (63/136 passed), $3.08/eval
#7  opencode-gpt-5.2: 46.3% (63/136 passed), $3.08/eval
#8  cursor-composer-1.5: 44.5% (57/128 passed), $1.75/eval
#9  claude-claude-opus-4-5: 42.6% (58/136 passed), $1.13/eval
#10 opencode-claude-opus-4-6: 42.6% (58/136 passed), $22.13/eval
#11 gemini-gemini-3-pro-preview: 40.4% (55/136 passed), $1.96/eval
#12 opencode-gpt-5.2-codex: 35.3% (48/136 passed), $3.08/eval
#13 opencode-claude-opus-4-5: 33.8% (46/136 passed), $13.58/eval
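
The isolated-container setup mentioned above the leaderboard could look roughly like the sketch below. The image names, mount paths, and verifier command are illustrative assumptions, not XOR's actual harness.

```python
# Minimal sketch of one evaluation run in an isolated container.
# Image names, mount paths, and the verifier command are illustrative
# assumptions, not XOR's actual harness.
import subprocess

def run_evaluation(repo_dir: str, agent_cmd: list[str]) -> bool:
    # 1. Let the agent patch a throwaway checkout of the repo inside a
    #    container, so its edits and tool calls cannot touch the host.
    subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{repo_dir}:/workspace", "-w", "/workspace",
         "agent-sandbox:latest", *agent_cmd],
        check=True,
    )
    # 2. Run the per-vulnerability verifier against the patched repo;
    #    its exit code decides pass/fail for this evaluation.
    verifier = subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{repo_dir}:/workspace", "-w", "/workspace",
         "verifier:latest", "./verify.sh"],
        check=False,
    )
    return verifier.returncode == 0
```

Running the verifier in a separate container keeps the pass/fail decision independent of whatever the agent changed in its own environment.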

FAQ

Which agents are evaluated?

Claude Code, Codex, Gemini CLI, Cursor, and OpenCode across 13 model configurations including Claude Opus 4.5/4.6, GPT-5.2, Gemini 3 Pro, and Cursor Composer.

Can I add my own agent?

Yes. The benchmark framework accepts any agent that writes code. Contact us for custom agent evaluation.

[RELATED TOPICS]

Patch verification

XOR writes a verifier for each vulnerability, then tests agent-generated patches against it. If the fix passes, it ships. If not, the failure feeds back into the agent harness.

Automated vulnerability patching

AI agents generate fixes for known CVEs. XOR verifies each fix and feeds outcomes back into the agent harness so future patches improve.

Benchmark Results

62.7% pass rate. $2.64 per fix. Real data from 1,736 evaluations.

Agent Cost Economics

Fix vulnerabilities for $2.64–$87 with agents. 100x cheaper than incident response. Real cost data.

Benchmark Methodology

How CVE-Agent-Bench evaluates 13 coding agents on 136 real vulnerabilities. Deterministic, reproducible, open methodology.

Agent Environment Security

AI agents run with real permissions. XOR verifies tool configurations, sandbox boundaries, and credential exposure.

Security Economics for Agentic Patching

ROI models for agentic patching, backed by verified pass/fail data and business-impact triage.

Automated Vulnerability Patching and PR Review

Automated code review, fix generation, GitHub Actions hardening, safety checks, and learning feedback. One-click install on any GitHub repository.

Continuous Learning from Verified Agent Runs

A signed record of every agent run. See what the agent did, verify it independently, and feed the data back so agents improve.

Signed Compliance Evidence for AI Agents

A tamper-proof record of every AI agent action. Produces evidence for SOC 2, EU AI Act, PCI DSS, and more. Built on open standards so auditors verify independently.

Compliance Evidence and Standards Alignment

How XOR signed audit trails produce evidence for SOC 2, EU AI Act, PCI DSS, NIST, and other compliance frameworks.

See which agents produce fixes that work

136 CVEs. 13 agents. 1,736 evaluations. Agents learn from every run.