Anthropic weakens its safety pledge in the wake of the Pentagon's pressure campaign

Engadget
by Will Shanklin
February 25, 2026
AI-Generated Deep Dive Summary
Anthropic, the maker of the Claude AI chatbot, has significantly altered its approach to AI safety amid pressure from the U.S. Department of Defense. The company dropped its strict "Responsible Scaling Policy" (RSP), which previously halted model training unless safety guidelines were met, in favor of a more flexible framework focused on risk reporting and transparency. The shift coincided with reports that Defense Secretary Pete Hegseth is pressuring Anthropic to grant the military unrestricted access to its AI technology, threatening penalties if the company refuses.

The revised RSP now emphasizes "collective action" over unilateral safety measures, suggesting Anthropic believes it cannot afford to slow development while competitors like OpenAI push forward. The new policy introduces "Risk Reports" and "Frontier Safety Roadmaps" to maintain public trust without enforcing hard red lines on AI development. The change aligns with comments from Anthropic's leadership, who argue that rapid AI advancement necessitates a more pragmatic approach to safety.

The timing of these changes raises concerns about external influence. Axios reported that Hegseth has demanded Anthropic provide the military unrestrained access to Claude by Friday or face serious consequences, including legal action under the Defense Production Act. While Anthropic has offered tailored usage policies for the Pentagon, it has drawn the line at allowing its AI to be used for mass surveillance or autonomous weapons systems.

The situation highlights a critical juncture for the AI industry: balancing innovation with ethical considerations and national security interests. Experts warn that Anthropic's new approach could erode its once-bold safety commitments, potentially producing a "frog-boiling" effect in which incremental changes normalize increasingly risky AI practices.
This development underscores the growing tension between the rapid pace of AI advancement and the need for robust safeguards against misuse. For tech enthusiasts and industry watchers, this story reveals how high-stakes pressures from governments and market competition are reshaping AI ethics. As Anthropic navigates these challenges, its decisions will likely set a precedent for other AI developers, influencing the future trajectory of responsible AI innovation.