OpenAI will amend Defense Department deal to prevent mass surveillance in the US

Engadget
by Mariella Moon
March 3, 2026
AI-Generated Deep Dive Summary
OpenAI has announced plans to amend its agreement with the U.S. Defense Department to explicitly prohibit the use of its AI systems for mass surveillance of American citizens. CEO Sam Altman shared an internal memo revealing that the updated deal includes clear language barring the intentional use of AI for domestic surveillance, in line with the Fourth Amendment and other national security laws.

The move follows pressure from the Defense Department on Anthropic to remove restrictions on its AI systems that were intended to prevent misuse in areas such as mass surveillance and autonomous weapons. The standoff escalated when President Trump ordered U.S. government agencies to stop using Anthropic's services, including its Claude AI, amid concerns about compliance with ethical guidelines. The Defense Department had also sought to designate Anthropic a "supply chain risk," a label typically reserved for foreign companies believed to have ties to their governments. Anthropic nonetheless refused to comply with demands it said would compromise its principles.

In his memo, Altman emphasized that OpenAI's agreement now includes specific safeguards to prevent misuse of its technology by intelligence agencies such as the NSA. He acknowledged that the company had rushed the initial deal announcement, which appeared opportunistic and drew criticism. He also said OpenAI would prioritize ethical considerations over contractual obligations, stating that he would rather face legal consequences than comply with unconstitutional orders.

The controversy has sparked a broader conversation about AI governance and responsible technology use. Anthropic's stance against government pressure has resonated with the public, driving a surge in downloads of its Claude app, which topped the App Store leaderboard. Meanwhile, OpenAI's ChatGPT saw a significant drop in installations, reflecting consumer sentiment.
This development underscores the growing importance of ethical AI deployment and the challenges companies face when balancing legal, moral, and business interests. As governments increasingly rely on AI for national security and surveillance, ensuring transparency and accountability becomes critical to maintaining public trust. OpenAI’s decision sets a precedent for responsible AI use, offering valuable insights for both tech developers and policymakers navigating this evolving landscape.