96% of developers don’t trust AI code: Here’s a step toward the fix
The New Stack
by Manish Kapur, February 20, 2026
AI-Generated Deep Dive Summary
The vast majority of developers—96%—do not fully trust AI-generated code without manual intervention, according to a recent survey. This lack of confidence has led to significant "toil," with teams spending nearly 24% of their time auditing and validating AI output. As the software development industry moves beyond the initial excitement of large language models (LLMs) into the era of agentic AI, where systems autonomously refactor code and manage deployments, the challenge lies in balancing speed with reliability.
The traditional metric of productivity—speed—has become a double-edged sword. While AI can generate thousands of lines of code in seconds, if those lines introduce vulnerabilities or architectural flaws, the net productivity gain is nullified. Instead, organizations must shift their focus to "impact," measuring how effectively AI reduces friction and improves outcomes. Automating verification processes is crucial here, allowing developers to transition from auditors to orchestrators, ensuring they remain in a flow state rather than being bogged down by debugging AI-generated errors.
To scale AI-driven development, organizations need a governed framework that treats autonomous code with the same level of scrutiny as human-written code. This involves implementing deterministic verification methods, such as static analysis tools, to ensure every line of code is secure and maintainable before it reaches production. Without this layer of consistency, teams risk introducing technical debt, where the short-term gains of rapid development are offset by long-term maintenance challenges.
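The article does not name a specific tool, but a deterministic verification gate of the kind described can be sketched in a few lines. The snippet below is a minimal, hypothetical example: it parses an AI-suggested snippet with Python's standard `ast` module and rejects it on syntax errors or policy violations (here, a made-up `BANNED_CALLS` policy blocking dynamic execution). The `verify` function and the policy set are illustrative assumptions, not a prescribed implementation.

```python
import ast

# Hypothetical policy: reject dynamic-execution calls in generated code.
BANNED_CALLS = {"eval", "exec"}

def verify(source: str) -> list[str]:
    """Deterministically check a code snippet before it is accepted.

    Returns a list of findings; an empty list means the gate passes.
    """
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc.msg} (line {exc.lineno})"]

    findings = []
    for node in ast.walk(tree):
        # Flag direct calls to banned built-ins, e.g. eval(user_input).
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id in BANNED_CALLS
        ):
            findings.append(f"banned call '{node.func.id}' at line {node.lineno}")
    return findings

if __name__ == "__main__":
    suggestion = "result = eval(user_input)"  # an AI-suggested line that should fail
    print(verify(suggestion))
```

Because the check is static and rule-based, it produces the same verdict every run, which is the consistency the article argues autonomous code needs before it reaches production; a real pipeline would layer in a full linter or SAST tool rather than this toy policy.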