US military leaders pressure Anthropic to bend Claude safeguards
The Guardian World
by Nick Robins-Early, February 25, 2026
AI-Generated Deep Dive Summary
US military leaders, including Defense Secretary Pete Hegseth, have clashed with Anthropic, a leading AI company known for its focus on safety, over the use of its Claude AI model. During a meeting with Anthropic executives, Hegseth reportedly set a Friday deadline for the company to agree to the Pentagon's terms, warning that failure to comply could result in penalties. The dispute centers on how the military can use Claude: defense officials are pushing for unrestricted access, while Anthropic has resisted its use in mass surveillance or in autonomous weapons systems capable of lethal action without human oversight.
The conflict began weeks ago as the Department of Defense (DoD) sought to integrate Claude into its operations. While Anthropic markets itself as a pioneer in AI safety, Pentagon leadership perceives the company's restrictions as obstructive. The DoD views unfettered access to Claude's capabilities as critical to national security and military advancement, while Anthropic maintains its stance against enabling technologies that could lead to unethical or harmful applications.
This standoff highlights the growing tension between the rapid development of AI technology and ethical considerations in its deployment. Anthropic’s resistance reflects a broader debate within the AI community about responsible innovation, particularly when it comes to potentially dangerous uses like autonomous weapons. Meanwhile, the Pentagon’s demand underscores the military’s reliance on cutting-edge technologies and its determination to leverage AI for strategic advantage.
The outcome of this dispute could set a precedent for how government agencies interact with private AI companies and influence future policies on AI use in national security contexts. For readers interested in global affairs and technology, this story highlights the delicate balance between innovation and ethics in AI development, as well as the potential consequences of failing to address these challenges. The clash between Anthropic's safety-first approach and the Pentagon's demands raises questions about accountability, transparency, and the long-term implications of AI in warfare and surveillance.
Originally published on The Guardian World on 2/25/2026