Trump declares war on one of his weapons
Sydney Morning Herald
by Stephen Bartholomeusz, March 3, 2026
AI-Generated Deep Dive Summary
Donald Trump’s recent directive to halt government use of Anthropic’s Claude AI tools has sparked a high-stakes conflict with the Pentagon. Despite Trump’s orders to immediately cease all use of the technology, Claude was reportedly used in the U.S.-led assault on Iran. Deeply integrated into military operations, Claude is utilized for strategic planning, intelligence analysis, targeting, and cyber activities. The administration granted the Department of Defense six months to phase out Claude and transition to rival AI models.
The dispute between the Pentagon and Anthropic stems from the department’s attempt to modify its contract with the AI company. Anthropic co-founder Dario Amodei has been vocal about ethical AI use, setting “red lines” against employing the technology for mass surveillance or for autonomous weapons without human oversight. While Pentagon officials argue that current uses of AI comply with legal and ethical standards, they insist on retaining control over how purchased technologies are deployed. This stance is framed as a matter of principle, with Defense Secretary Pete Hegseth emphasizing that private contractors should not restrict military operations.
Amodei’s concerns about the potential misuse of AI resonate with broader debates about regulation in the field. He warns that even seemingly benign data collection could enable comprehensive surveillance at scale. The conflict has also taken on political dimensions, with Trump labeling Anthropic a “radical left” company and its executives “leftwing nutjobs.” Hegseth’s push for a more Realpolitik approach to AI procurement aligns with Trump’s broader agenda of dismantling previous administrations’ policies, including those on diversity and inclusion in technology.
This confrontation marks a significant shift in the relationship between the military-industrial complex and AI developers. By designating Anthropic as a “supply chain risk,” the Pentagon aims to sever ties not just with Anthropic but potentially with its partners in the tech ecosystem. This move could force major companies like Nvidia, Amazon, and Google to divest stakes in Anthropic, further isolating the firm.
The implications of this clash extend beyond AI regulation. It highlights tensions between technological innovation and ethical governance, raising critical questions about how AI should be controlled in a militarized world. As global powers increasingly rely on artificial intelligence for national security, the debate over who holds ultimate authority—government or private entities—is likely to intensify.