OpenAI updates Department of War deal after backlash
Mashable
March 3, 2026
AI-Generated Deep Dive Summary
OpenAI CEO Sam Altman has acknowledged that the company's recent deal with the U.S. Department of War (DOW) appeared "opportunistic and sloppy," following backlash from critics who questioned its haste and implications. In an internal memo shared on X, Altman admitted that OpenAI rushed to finalize the agreement, which raised concerns about potential misuse of AI technology for mass surveillance and autonomous weapons. Despite amending the contract to include new safeguards, critics argue that the updated language still relies heavily on legal restrictions rather than ethical ones, leaving loopholes that could allow harmful uses if laws change.
The deal with the DOW came swiftly after President Donald Trump ordered federal agencies to cease using rival Anthropic's AI tools, a directive issued after Anthropic refused demands to remove its safeguards against domestic surveillance and autonomous weapons. OpenAI, which secured the contract within days of the order, initially claimed its agreement included stronger protections than Anthropic's original deal. However, critics pointed out that the terms still permitted legal uses of AI for mass surveillance and autonomous weapons, raising alarms about the potential risks.
In response to the backlash, OpenAI announced updates to the contract, including provisions explicitly prohibiting the "deliberate" use of its technology for domestic surveillance. The company emphasized that it had worked with the DOW to clarify these restrictions, but many remain skeptical. Critics argue that the phrasing "not intentionally used" leaves room for incidental or unintended surveillance, as AI systems can be repurposed in ways beyond their intended design. Political researcher Tyson Brody noted on social media that such language could still allow for broad data collection under the guise of "incidental" use.
The controversy highlights broader concerns about AI governance and accountability. OpenAI's decision to defer to legal standards rather than establish ethical boundaries has drawn criticism, with some questioning its commitment to responsible AI development. Altman reiterated in his memo that OpenAI aims to follow government directives, framing this approach as a deference to democratic processes. However, critics contend that this stance abdicates responsibility and raises questions about whether AI companies should proactively address ethical concerns beyond legal requirements.
Ultimately, the backlash against OpenAI's DOW deal underscores the delicate balance between innovation, governance, and ethics in artificial intelligence. As governments and tech companies navigate the rapid evolution of AI technology, the need for clear, enforceable safeguards and ethical frameworks becomes increasingly critical to public trust and global stability.