AI has gotten good at finding bugs, not so good at swatting them
The Register
February 24, 2026
AI-Generated Deep Dive Summary
Artificial intelligence has made significant strides in identifying software vulnerabilities, but its ability to fix them remains limited. Anthropic's Claude Code Security tool detected more than 500 vulnerabilities, yet only a fraction were actually fixed, underscoring the gap between detection and resolution: without proper validation and coordination, many findings simply go unresolved.
Experts warn that while AI excels at spotting issues, turning those findings into actionable fixes remains challenging. Cybersecurity researcher Guy Azari notes that most reported vulnerabilities lack validated CVE entries, a sign that disclosure processes are incomplete. Open-source maintainers face a flood of reports, and some projects, such as curl, have halted bug bounty programs because of excessive false positives.
The broader implication is clear: AI's role in cybersecurity is evolving but not yet sufficient on its own. It lowers the cost of discovery, but the harder work lies in validating findings and coordinating between developers and AI systems. Without that, AI tools risk burying teams in an unmanageable volume of reports.
As AI models improve, the focus must shift to the post-detection process: validating findings, assessing impact, and facilitating patch development. That shift is crucial to realizing AI's potential in cybersecurity, ensuring vulnerabilities are not just identified but resolved.