AI Regression Database
A public corpus of code patterns that AI coding tools consistently get wrong. Not an academic benchmark, but a catalog of real-world developer pitfalls.
Proactive Pattern Prevention
Academic benchmarks measure AI tools on curated test sets. Incident databases catalog production outcomes. But nobody was documenting the repeatable patterns that AI tools reliably get wrong in normal developer workflows. Until now.
Report
Observed an AI tool reliably producing incorrect code? Document it. Anonymous submissions welcome.
Detect
Scan your repos for known patterns. Get alerted via GitHub Actions when new patterns match your code.
Track
Watch as AI tool vendors acknowledge and fix patterns from the test harness. Measure continuous vendor improvement over time.
Latest Patterns
No regressions published yet.
Be the first to document →
Recently Fixed by Vendors
Test harness pending automated runs against models.
Vendor Portal →
Browse By
By severity
By tool
Connect and Integrate
Incorporate ARD feeds into your CI or track them in your security channels. Vendors can track their tool's patterns directly via the vendor portal.
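A CI integration along those lines might poll an ARD feed and flag high-severity patterns before a build proceeds. The feed schema below is purely an assumption for illustration (ARD has not published one), as are the pattern IDs and field names; a minimal sketch:

```python
import json

# Hypothetical shape of an ARD feed payload -- the real schema is not
# published, so this inline sample is an assumption for illustration.
SAMPLE_FEED = """
{
  "patterns": [
    {"id": "ARD-0001", "tool": "example-assistant", "severity": "high",
     "summary": "Off-by-one in generated pagination loops"},
    {"id": "ARD-0002", "tool": "example-assistant", "severity": "low",
     "summary": "Redundant null checks in generated getters"}
  ]
}
"""

def high_severity_patterns(feed_json: str) -> list[dict]:
    """Return feed entries with severity 'high' -- the kind of filter
    a CI step might apply before failing or annotating a build."""
    feed = json.loads(feed_json)
    return [p for p in feed["patterns"] if p["severity"] == "high"]

if __name__ == "__main__":
    for pattern in high_severity_patterns(SAMPLE_FEED):
        print(f"{pattern['id']}: {pattern['summary']}")
```

In a real pipeline the payload would be fetched from the feed endpoint rather than inlined, and the filter could be tuned per tool or severity threshold.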
Ethical Commitments
- ✓ No AI tool shaming. Neutral infrastructure.
- ✓ Reproducibility over anecdotes.
- ✓ Version awareness. We track improvements.
- ✓ No adversarial use. Defensive only.
- ✓ Seven-day vendor notification before publication.
- ✓ Publicly recognize vendor improvements.