Exclusive: Anthropic rolls out AI tool that can hunt software bugs on its own—including the most dangerous ones humans miss
Fortune
By Sharon Goldman, February 20, 2026
AI-Generated Deep Dive Summary
Anthropic has unveiled Claude Code Security, an AI-powered tool designed to help security teams identify and address software vulnerabilities more effectively. This groundbreaking product leverages the company's advanced Opus 4.6 model to analyze entire codebases, uncovering even the most elusive bugs that human teams often miss. Unlike traditional tools that only scan for known patterns, Claude Code Security emulates a human expert by evaluating how different parts of the software interact and how data flows through systems. It not only detects issues but also assigns severity levels and suggests fixes, though it requires developer approval before making any changes.
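The contrast the summary draws, between traditional scanners that match known patterns and an analysis that follows how data flows through a system, can be illustrated with a toy sketch. This is purely hypothetical Python and not Anthropic's implementation; every function and variable name here is invented for illustration.

```python
import re

# A pattern-based scanner flags known-dangerous calls wherever they appear,
# regardless of whether attacker-controlled data can actually reach them.
def pattern_scan(source: str) -> list[str]:
    return re.findall(r"eval\(|os\.system\(", source)

# A simple data-flow ("taint") analysis instead tracks whether untrusted
# input can reach a dangerous sink through intermediate assignments.
def taint_scan(assignments: dict[str, str],
               sinks: set[str],
               sources: set[str]) -> list[str]:
    tainted = set(sources)
    changed = True
    while changed:  # propagate taint until a fixpoint is reached
        changed = False
        for var, rhs in assignments.items():
            if rhs in tainted and var not in tainted:
                tainted.add(var)
                changed = True
    return sorted(v for v in sinks if v in tainted)

# Untrusted input flows a -> b -> c; sink "c" is reachable, sink "d" is not.
flows = {"b": "a", "c": "b"}
print(taint_scan(flows, sinks={"c", "d"}, sources={"a"}))  # ['c']
```

Real tools operate on parsed program representations rather than name maps, but the fixpoint idea is the same: only sinks actually reachable from untrusted sources are reported, which is why this style of analysis can surface bugs that pattern matching misses.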
The development of this tool is rooted in over a year of research conducted by Anthropic's Frontier Red Team, which specializes in testing AI systems for potential misuse. The team discovered that the Opus 4.6 model excels at finding high-severity vulnerabilities across vast codebases without relying on specialized tools or prompts. In tests, it identified flaws that had gone undetected for decades in open-source software critical to enterprises and infrastructure.
Logan Graham, leader of the Frontier Red Team, emphasized that Claude Code Security is designed to empower defenders by giving security teams a powerful new tool to enhance their capabilities. The product is being rolled out as a limited research preview, initially available to Anthropic's