Continuous Learning from Verified Agent Runs
A signed record of every agent run. See what the agent did, verify it independently, and feed the data back so agents improve.
Every run makes agents smarter
Outcome: Feed verified outcomes back into agents so they improve over time.
Mechanism: XOR records every agent action, signs it, and feeds pass/fail results back into the agent harness. Failed fixes become learning signal. Passing fixes expand the training set.
Proof: IETF Internet-Draft format. Open standard, not proprietary.
Record what agents do
XOR captures every action, tool call, and output from each agent run. You get a complete record of what happened and why.
Sign it so nobody can alter it
Each record is digitally signed. Auditors, compliance teams, and regulators can verify the record independently. Built on an open IETF Internet-Draft — not a proprietary format.
What gets recorded
The session record tracks which files the agent read and edited. File attribution links each code change back to the agent run. Everything is signed so it cannot be modified after the fact.
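To make that concrete, here is one hypothetical shape such a record could take in Python. The field names are illustrative, not the draft's normative schema.

```python
# Illustrative layout for a session record with file attribution.
# Field names are hypothetical; the normative schema is the draft's CDDL.
import hashlib
from dataclasses import dataclass, field

@dataclass
class FileAttribution:
    path: str
    operation: str                     # "read" or "edit"
    patch_sha256: str | None = None    # links an edit to the exact diff

@dataclass
class SessionRecord:
    run_id: str
    agent: str
    files: list[FileAttribution] = field(default_factory=list)

record = SessionRecord(
    run_id="run-0042",
    agent="claude-code",
    files=[
        FileAttribution("src/auth.py", "read"),
        FileAttribution("src/auth.py", "edit",
                        patch_sha256=hashlib.sha256(b"<unified diff bytes>").hexdigest()),
    ],
)
```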
Feed results back into agents
Every verified outcome — pass or fail — feeds back into the agent harness. The system prompt is upgraded, memory from previous runs is injected, and the next vulnerability is triaged by business impact. Agents get smarter with every cycle.
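A rough sketch of that feedback step, with hypothetical names standing in for the harness internals:

```python
# Rough sketch of the feedback step. All names are hypothetical; the harness
# internals are not part of the public draft.
from dataclasses import dataclass

@dataclass
class Outcome:
    cve_id: str
    passed: bool
    notes: str               # verifier output, e.g. the failing check

@dataclass
class Vulnerability:
    cve_id: str
    business_impact: int     # higher = more urgent

def feed_back(outcome: Outcome, memory: list[str],
              backlog: list[Vulnerability]) -> tuple[str, Vulnerability]:
    # Pass or fail, the verified outcome becomes memory for the next run.
    memory.append(f"{'fixed' if outcome.passed else 'failed'} {outcome.cve_id}: {outcome.notes}")

    # Upgrade the system prompt with memory from previous runs.
    prompt = "Findings from earlier runs:\n" + "\n".join(memory[-10:])

    # Triage the next vulnerability by business impact.
    next_vuln = max(backlog, key=lambda v: v.business_impact)
    return prompt, next_vuln
```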
How trajectories fit the loop
People steer the run while agents execute. Most interaction happens through prompts. XOR captures each run as a verifiable trajectory, then keeps the loop running until reviews are clean.
What the draft requires
- Session Trace and File Attribution records
- Signing Envelope with a COSE_Sign1 wrapper for cryptographic verification
- Conformance Requirements: Producer/Verifier/Consumer classes with RFC 2119 terminology (an illustrative verifier sketch follows this list)
- CDDL schema for trace structure and validation
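As an illustration of the Verifier role, the sketch below unwraps a COSE_Sign1 envelope (RFC 9052) and checks its Ed25519 signature using the cbor2 and cryptography libraries. Validating the trace body against the CDDL schema would be a separate step.

```python
# Sketch of the Verifier role: unwrap a COSE_Sign1 envelope (RFC 9052) and
# check its Ed25519 signature. CDDL schema validation is omitted here.
import cbor2
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_envelope(envelope: bytes, public_key_bytes: bytes) -> bytes:
    obj = cbor2.loads(envelope)                  # COSE_Sign1 is CBOR tag 18
    protected, _unprotected, payload, signature = (
        obj.value if isinstance(obj, cbor2.CBORTag) else obj
    )
    # The signature covers the Sig_structure, not the raw payload.
    sig_structure = cbor2.dumps(["Signature1", protected, b"", payload])
    Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(signature, sig_structure)
    return payload                               # the signed trace, as CBOR bytes
```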
Trace fields that matter
- Agent identity, tool calls, and outputs per step
- File operations tied to patch evidence
- Reasoning entries (optional, privacy-gated)
- Verification outcomes tied to CVE identifiers
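One hypothetical trace step covering the fields above, serialized with the cbor2 library. The real field names come from the draft's CDDL; this layout is illustrative.

```python
# One hypothetical trace step. Field names are illustrative, not normative.
import cbor2

step = {
    "agent": "claude-code",                                   # agent identity
    "tool_call": {"name": "run_tests", "args": ["-q"]},       # tool call and arguments
    "output": "1 failed, 14 passed",                          # tool output
    "file_ops": [{"path": "src/auth.py", "op": "edit",
                  "patch_sha256": "<hash of the diff>"}],     # tied to patch evidence
    "reasoning": None,                                        # optional, privacy-gated
    "outcome": {"cve": "<CVE identifier>", "passed": False},  # verification outcome
}

encoded = cbor2.dumps(step)   # ready to be wrapped in the Signing Envelope
```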
Where trajectories show up in XOR
Trajectories are attached to PR test reports and verification runs, so every fix is traceable and replayable.
FAQ
What is an agent trajectory?
A trajectory is a signed record of every action an agent took during a run: tool calls, file edits, reasoning steps, and the final outcome (pass/fail).
How are trajectories used for learning?
Every trajectory feeds back into the agent harness. Failed runs become learning signal. Passing runs expand the training corpus. Each cycle makes agents smarter.
Can I access raw trajectory data?
Yes. Trajectories are available in JSON and CBOR formats. Export to your analytics pipeline or SIEM.
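For example, a minimal conversion from the CBOR form to JSON for export, assuming the trace body is JSON-compatible; the file names are illustrative.

```python
# Converting a CBOR trajectory to JSON for an analytics pipeline or SIEM.
# Assumes the trace body is JSON-compatible; raw byte fields would need
# base64 encoding first.
import json
import cbor2

with open("trajectory.cbor", "rb") as f:
    trajectory = cbor2.load(f)        # same logical structure as the JSON form

with open("trajectory.json", "w") as f:
    json.dump(trajectory, f, indent=2, default=str)
```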
Patch verification
XOR writes a verifier for each vulnerability, then tests agent-generated patches against it. If the fix passes, it ships. If not, the failure feeds back into the agent harness.
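A simplified sketch of what such a verifier can look like; the commands and paths are illustrative, not XOR's actual harness.

```python
# Simplified sketch of a per-vulnerability verifier: apply the agent's patch,
# then run a test that reproduces the vulnerability.
import subprocess

def verify_patch(repo_dir: str, patch_file: str, exploit_test: str) -> bool:
    """True if the patch applies cleanly and the exploit test now passes."""
    applied = subprocess.run(["git", "apply", patch_file], cwd=repo_dir)
    if applied.returncode != 0:
        return False                                  # patch does not apply
    tests = subprocess.run(["pytest", exploit_test, "-q"], cwd=repo_dir)
    return tests.returncode == 0                      # fix holds only if this passes

# A failing result is recorded in the trajectory and fed back into the harness.
```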
Automated vulnerability patching
AI agents generate fixes for known CVEs. XOR verifies each fix and feeds outcomes back into the agent harness so future patches improve.
Benchmark Results
62.7% pass rate. $2.64 per fix. Real data from 1,736 evaluations.
Agent Cost Economics
Fix vulnerabilities for $2.64–$87 with agents. 100x cheaper than incident response. Real cost data.
Agent Configurations
13 agent-model configurations evaluated on real CVEs. Compare Claude Code, Codex, Gemini CLI, Cursor, and OpenCode.
Benchmark Methodology
How CVE-Agent-Bench evaluates 13 coding agents on 136 real vulnerabilities. Deterministic, reproducible, open methodology.
Agent Environment Security
AI agents run with real permissions. XOR verifies tool configurations, sandbox boundaries, and credential exposure.
Security Economics for Agentic Patching
ROI models backed by verified pass/fail data and business-impact triage.
Automated Vulnerability Patching and PR Review
Automated code review, fix generation, GitHub Actions hardening, safety checks, and learning feedback. One-click install on any GitHub repository.
Signed Compliance Evidence for AI Agents
A tamper-proof record of every AI agent action. Produces evidence for SOC 2, EU AI Act, PCI DSS, and more. Built on open standards so auditors verify independently.
Compliance Evidence and Standards Alignment
How XOR signed audit trails produce evidence for SOC 2, EU AI Act, PCI DSS, NIST, and other compliance frameworks.
See which agents produce fixes that work
136 CVEs. 13 agents. 1,736 evaluations. Agents learn from every run.