Days before Anthropic and the Pentagon clashed over the proper use of AI, the US military reportedly used the company's Claude AI chatbot in an operation to capture former Venezuelan President Nicolas Maduro
Times of India
by TOI TECH DESK | February 14, 2026
AI-Generated Deep Dive Summary
The US military reportedly used Anthropic’s AI chatbot Claude in an operation to capture former Venezuelan President Nicolás Maduro, raising questions about the ethical and practical implications of deploying AI in combat. The revelation came just days before Anthropic and the Pentagon reached a standoff over a $200 million contract, with the company opposing the use of its technology for autonomous weapons or surveillance. The incident highlights tensions between tech companies that prioritize ethical guidelines and government agencies seeking to leverage AI for military operations.
According to reports, Claude was accessed through Anthropic’s partnership with Palantir Technologies, whose tools are already integrated into Pentagon operations. The mission involved the bombing of sites in Caracas last month. While Anthropic says any use of its AI must comply with its usage policies, which prohibit violence and surveillance, the company has not confirmed whether Claude was specifically used in the operation. This uncertainty adds urgency to the ongoing contract dispute, with Anthropic CEO Dario Amodei emphasizing that AI should support national defense but avoid crossing ethical red lines such as autonomous weapons or mass surveillance.
The Pentagon, however, argues that it should have greater flexibility in deploying AI tools as long as US law is not violated. Defense Secretary Pete Hegseth has been critical of Anthropic’s stance, suggesting that the military needs AI models that allow for effective warfare. Meanwhile, the Defense Department is pushing other AI companies, including OpenAI and Google, to deploy their models on classified networks with fewer of the safety restrictions typically applied to civilian users.
The case underscores broader concerns about the role of AI in modern warfare and the potential for misuse. As governments increasingly rely on advanced technologies, questions persist about how to balance national security needs with ethical considerations. The standoff between Anthropic and the Pentagon serves as a critical reminder of the delicate line between innovation and accountability in military AI applications.
Originally published on Times of India on 2/14/2026