The Pentagon’s battle with Anthropic is really a war over who controls AI
Vox
February 26, 2026
United States Secretary of War Pete Hegseth speaks during a visit to Sierra Space in Louisville, Colorado, on Monday, February 23, 2026. | AAron Ontiveroz/Denver Post via Getty Images
Secretary of War Pete Hegseth sometimes appears as if he’s more interested in the optics of playing the part of a military leader than he is in actually being a military leader.
Maybe that’s why he has chosen a Hollywood-esque high noon — or, at least, late afternoon — showdown for his deepening dispute with the AI company Anthropic. Hegseth has given Anthropic until 5:01 pm on Friday to respond to his demands that the company give the US military full and unfettered access to its AI, or face consequences that could threaten its survival. Anthropic has so far refused, and on Thursday evening CEO Dario Amodei said in a statement that the company “cannot in good conscience accede to their request.”
What’s unfolding this week is the biggest confrontation between the US government and a tech company over AI ethics since Google employees rebelled against working with the Pentagon in 2018. But with AI far more advanced and far more essential to both the American economy and American defense than it was eight years ago, the stakes now are much greater — certainly for Anthropic itself, but also for the question of just who has final control over an existential technology. (Disclosure: Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. They do not have any editorial input into our content.)
This has all raised plenty of questions, starting with:
What does the Pentagon actually want?
Anthropic is already a supplier for the Pentagon, having signed a $200 million contract in July to provide advanced AI for national security challenges, and its chatbot Claude was the first AI model that could be deployed on the government’s confidential networks. But the department now insists that Anthropic sign a contract allowing its Claude AI to be used for “all lawful purposes.”
That might sound fine — it has “lawful” right in the name, after all — but what it means in practice is that Anthropic would have no say over individual use cases, no ability to review how Claude is being used in classified settings, and no right to restrict specific applications. It would be the military, not the company, that decides how Anthropic’s AI is deployed.
Okay, but if Anthropic is already supplying its AI to the military, why should the company get to decide how that AI is used? It’s not like the Pentagon has to call up Boeing before it uses one of its jets in a military strike.
Hmm, do you currently work at the Pentagon press department? As it happens, that’s precisely the analogy that Hegseth reportedly presented to Anthropic’s Amodei in a tense meeting on Tuesday.
So, why won’t Anthropic play ball?
It’s not being entirely recalcitrant. Even beyond the $200 million Pentagon contract, Anthropic has been deeply involved in government work, including more direct military uses like missile defense. The company has also been one of the most outspoken proponents of the idea that the US is in a civilizational race with China over AI supremacy. While Anthropic has a (mostly if not entirely) deserved reputation as the most safety-minded of the major AI labs, it’s not a bunch of bleeding-heart softies.
Anthropic’s policies allow its models to be used as part of targeted military strikes, foreign surveillance, or even drone strikes when a human approves the final call. But it has maintained two specific “red lines” it won’t cross: fully autonomous weapons, meaning AI systems that select and engage targets without a human involved, and mass domestic surveillance of American citizens. Amodei said in his statement that “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties,” while frontier AI systems were “simply not reliable enough to power fully autonomous weapons.”
It’s not that Anthropic would never be involved in building lethal autonomous weapons. Just look at Ukraine — the realities of modern warfare have made it all but inevitable that such weapons and systems will be built. But Anthropic does not believe the models are capable of carrying this out effectively today.
So, what’s happening is that the Pentagon is demanding Anthropic allow it to use Claude for a purpose Anthropic says Claude can’t even serve right now?
Pretty much.
How did this all happen?
Things started going sideways after the operation in early January that resulted in the capture of Venezuelan President Nicolas Maduro. Claude, according to reporting by Axios, was deployed during the operation through a platform operated by the very military-friendly AI company Palantir. Soon after the operation, an Anthropic employee reportedly asked a Palantir counterpart how Claude might have been used in the operation, apparently in a way that indicated Anthropic might have a problem with it. Palantir then allegedly flagged the discussion for the Pentagon.
The Pentagon was already reportedly unhappy with Anthropic’s insistence on its red lines, and the company has not been included so far on the GenAI.mil platform the department built out in late 2025. At a speech in January, Hegseth pointedly said that “we will not employ AI models that won’t allow you to fight wars.”
That brings us to the Friday 5:01 pm showdown.
If Anthropic sticks to its guns, what can the Pentagon do?
It could simply cancel the $200 million contract, which it would be well within its rights to do. Hegseth isn’t wrong to say that suppliers as a rule do not dictate government policy. That would be a minor financial bummer for Anthropic, but the company is currently valued at $380 billion, so I think it would be okay. Other AI companies like xAI seem more than happy to take Anthropic’s place.
But Hegseth does not seem ready to take this relatively rational course of action. Instead, he’s talking as if he wants to make an example out of Anthropic and demonstrate that it is the Trump administration that will tell US AI companies how to act.
The Pentagon has threatened to use the Defense Production Act, a Cold War-era law that allows the president to compel companies to accept defense contracts. In the past that’s meant things like bolstering domestic production of critical supplies, as during the Covid pandemic, when President Trump invoked it to force additional ventilator production. But deliberately using it to target a domestic company over a policy dispute about AI safety rules — and essentially force Anthropic to train what some are calling a “War Claude” — would be unprecedented and certainly lead to drawn-out legal wrangling.
So, that’s not good for Anthropic, AI safety, and maybe even the rule of law. But even worse, for Anthropic at least, would be the last option: designating Anthropic a “supply chain risk.” This label — typically reserved for companies from adversary nations, like China’s Huawei — would prohibit every defense contractor from using Anthropic’s products. Since many of America’s largest corporations hold military contracts, this could effectively poison nearly all of Anthropic’s enterprise business and potentially torpedo a planned IPO. Axios has reported that the Pentagon has already started by asking Boeing and Lockheed Martin to assess their reliance on Claude.
Wait, I’m confused. So, essentially, the Pentagon is saying both that Anthropic might be a serious supply chain risk and that it wants to compel the company to let it use Claude in just about any way it sees fit?
Yes, as Vox contributing editor and Argument staff writer Kelsey Piper put it: “It’s patently ridiculous to both claim that Claude poses a national security threat and also that it’s so necessary for wartime production you have to nationalize the company.”
So, what happens next?
Amodei has refused to back down, and much of the AI world is on his side. That includes competitors like Jeff Dean of Google and voices like Dean Ball, a former Trump AI adviser, who wrote on X that what the Pentagon is considering would represent “the strictest regulations of AI being considered by any government on Earth, and it all comes from an administration that bills itself (and legitimately has been) deeply anti-AI-regulation.” What seems clear is that, if the Pentagon successfully compels compliance — whether through the DPA, supply chain blacklisting, or commercial pressure — it will establish that no American AI company can maintain independent safety restrictions against government demands. Unless Congress does what it should do and passes laws constraining how the Pentagon uses lethal AI, we could be headed for a very dark future indeed — and one out of our control.
Update, February 26, 2026, 6:45 pm: This piece has been updated to include Anthropic CEO Dario Amodei’s statement.