Anthropic vs the Pentagon: Why AI firm is taking on Trump administration

Al Jazeera
February 25, 2026
AI-Generated Deep Dive Summary
The U.S. government is locked in a tense standoff with Anthropic, the AI company behind the Claude language model, over demands that it relax restrictions on how the Pentagon can use its technology. The Department of Defense has given Anthropic until Friday to comply or risk losing its classified military contracts. Anthropic, which positions itself as a "responsible" developer, has refused to budge on safeguards that prevent its AI from being used for domestic surveillance or for autonomous weapons systems capable of targeting without human intervention.

Anthropic was the first AI firm approved for classified military networks and is valued for its ethical stance. Founded in 2021 by former OpenAI executives, the company describes itself as a Public Benefit Corporation committed to the long-term benefit of humanity through advanced AI. That commitment includes refusing to allow its technology to be used in ways that could harm civilians or violate international law.

The dispute comes as the Pentagon has awarded defense contracts to Anthropic, Google, OpenAI, and Elon Musk's xAI, with each company eligible for up to $200 million in funding under the initiative. The Trump-era Pentagon leadership, headed by Secretary Pete Hegseth, is seeking unrestricted access to AI tools, arguing that ideological constraints should not limit lawful military applications.

Anthropic's refusal highlights a broader tension between advancing military capabilities and ensuring ethical AI use. Earlier this month, an internal resignation over concerns about AI risks underscored the company's commitment to responsible development. Meanwhile, Anthropic has also faced criticism over allegations that Chinese state-sponsored hackers attempted to use its technology, raising questions about security and accountability. The dispute carries global significance, as it will shape how advanced AI technologies are integrated into military operations.
The outcome could set a precedent for balancing innovation with ethical safeguards, impacting not only U.S. defense policies but also the broader international community’s approach to AI in warfare and surveillance.