Agentic Third-Party Risk
33% of enterprise software will be agentic by 2028. 40% of those projects will be canceled due to governance failures. A risk overview for CTOs.
The adoption curve
33% of enterprise software will include agentic AI by 2028 (Gartner). These agents use third-party skills, MCP servers, and tool integrations that most security teams have no process for vetting.
Three risk categories
Skill supply chain: 36.82% of agent skills have vulnerabilities. Protocol security: 7.2% of MCP servers are exploitable. Agent-to-agent trust: multi-agent systems enable 58-90% arbitrary code execution success rates.
What CTOs need to know about agentic third-party risk
33% of enterprise software will include agentic AI by 2028, up from less than 1% in 2024 (Gartner). These agents use third-party skills, MCP servers, and tool integrations that most security teams have no process for vetting. 40% of agentic AI projects will be canceled by end of 2027 due to inadequate risk controls (Gartner).
Traditional vendor risk management evaluates software. Agentic third-party risk evaluates behavior — what an agent does with the tools you gave it.
Three risk categories
Skill supply chain
36.82% of 3,984 agent skills have known vulnerabilities. 13.4% have critical issues including credential theft and data exfiltration (Snyk ToxicSkills, Feb 2026).
Protocol security (MCP)
7.2% of 1,899 open-source MCP servers contain vulnerabilities. 5.5% exhibit tool poisoning. 85%+ of identified attacks compromise at least one platform.
Agent-to-agent trust
Multi-agent systems enable 58-90% success rates for arbitrary code execution. Some configurations reach 100% (arXiv:2503.12188).
What traditional VRM misses
Traditional SAST/DAST
Finds code vulnerabilities in static artifacts
Traditional SCA
Finds dependency vulnerabilities in package manifests
XOR
Evaluates agent behavior: what the agent does with its tools, how skills interact, whether the output is safe to merge
Gap
No existing tool evaluates runtime agent behavior against third-party skill integrity
Key stats from published research
36.82%
Agent skills with any security flaw (of 3,984 audited)
Snyk ToxicSkills
85%+
MCP attacks compromising at least one platform
MCPSecBench
20%
Jailbreak success rate across 2,000+ LLM apps
Pillar Security
92%
AI vendors claiming broad data usage rights
Stanford CodeX
[NEXT STEPS]
Deep-dive pages
FAQ
What is agentic third-party risk?
AI agents use external tools, MCP servers, and skills with real permissions. 36.82% of agent skills have vulnerabilities (Snyk ToxicSkills, 3,984 audited, Feb 2026). Traditional vendor risk management doesn't evaluate agent behavior.
How fast is agentic AI adoption growing?
33% of enterprise software will include agentic AI by 2028, up from less than 1% in 2024 (Gartner). 40% of agentic AI projects will be canceled by end of 2027 due to inadequate risk controls.
How does XOR address third-party agent risk?
XOR verifies agent behavior, not just agent code. The platform scans skills for vulnerabilities, checks MCP server integrity, and produces signed compliance evidence for every triage.
Patch verification
XOR writes a verifier for each vulnerability, then tests agent-generated patches against it. If the fix passes, it ships. If not, the failure feeds back into the agent harness.
Automated vulnerability patching
AI agents generate fixes for known CVEs. XOR verifies each fix and feeds outcomes back into the agent harness so future patches improve.
Benchmark Results
62.7% pass rate. $2.64 per fix. Real data from 1,664 evaluations.
Benchmark Results
62.7% pass rate. $2.64 per fix. Real data from 1,664 evaluations.
Agent Cost Economics
Fix vulnerabilities for $2.64–$52 with agents. 100x cheaper than incident response. Real cost data.
Agent Configurations
13 agent-model configurations evaluated on real CVEs. Compare Claude Code, Codex, Gemini CLI, Cursor, and OpenCode.
Benchmark Methodology
How CVE-Agent-Bench evaluates 13 coding agents on 128 real vulnerabilities. Deterministic, reproducible, open methodology.
Agent Environment Security
AI agents run with real permissions. XOR verifies tool configurations, sandbox boundaries, and credential exposure.
Security Economics for Agentic Patching
Security economics for agentic patching. ROI models backed by verified pass/fail data and business-impact triage.
Validation Process
25 questions we ran against our own data before publishing. Challenges assumptions, explores implications, extends findings.
Cost Analysis
10 findings on what AI patching costs and whether it is worth buying. 1,664 evaluations analyzed.
Bug Complexity
128 vulnerabilities scored by difficulty. Floor = every agent fixes it. Ceiling = no agent can.
Agent Strategies
How different agents approach the same bug. Strategy matters as much as model capability.
Execution Metrics
Per-agent session data: turns, tool calls, tokens, and timing. See what happens inside an agent run.
Pricing Transparency
Every cost number has a source. Published pricing models, measurement methods, and provider rates.
Automated Vulnerability Patching and PR Review
Automated code review, fix generation, GitHub Actions hardening, safety checks, and learning feedback. One-click install on any GitHub repository.
Getting Started with XOR GitHub App
Install in 2 minutes. First result in 15. One-click GitHub App install, first auto-review walkthrough, and engineering KPI triad.
Platform Capabilities
One install. Seven capabilities. Prompt-driven. CVE autopatch, PR review, CI hardening, guardrail review, audit packets, and more.
Dependabot Verification
Dependabot bumps versions. XOR verifies they're safe to merge. Reachability analysis, EPSS/KEV enrichment, and structured verdicts.
Compliance Evidence
Machine-readable evidence for every triaged vulnerability. VEX statements, verification reports, and audit trails produced automatically.
Compatibility and Prerequisites
Languages, build systems, CI platforms, and repository types supported by XOR. What you need to get started.
Command Reference
Every @xor-hardener command on one page. /review, /describe, /ask, /patch_i, /issue_spec, /issue_implement, and more.
Continuous Learning from Verified Agent Runs
A signed record of every agent run. See what the agent did, verify it independently, and feed the data back so agents improve.
Signed Compliance Evidence for AI Agents
A tamper-proof record of every AI agent action. Produces evidence for SOC 2, EU AI Act, PCI DSS, and more. Built on open standards so auditors verify independently.
Compliance Evidence and Standards Alignment
How XOR signed audit trails produce evidence for SOC 2, EU AI Act, PCI DSS, NIST, and other compliance frameworks.
MCP Server Security
17 attack types across 4 surfaces. 7.2% of 1,899 open-source MCP servers contain vulnerabilities. Technical deep-dive with defense controls.
How Agents Get Attacked
20% jailbreak success rate. 42 seconds average. 90% of successful attacks leak data. Threat landscape grounded in published research.
Governing AI Agents in the Enterprise
92% of AI vendors claim broad data usage rights. 17% commit to regulatory compliance. Governance frameworks from NIST, OWASP, EU CRA, and Stanford CodeX.
OWASP Top 10 for Agentic Applications
The OWASP Agentic Top 10 mapped to real-world attack data and XOR capabilities. A reference page for security teams.
See which agents produce fixes that work
128 CVEs. 13 agents. 1,664 evaluations. Agents learn from every run.