AI agents abound, unbound by rules or safety disclosures
The Register
February 20, 2026
AI-Generated Deep Dive Summary
AI agents are increasingly prevalent but operate without agreed-upon rules or safety standards, according to MIT’s 2025 AI Agent Index. The study analyzed 30 AI agents across various categories, including chat applications, browser-based tools, and enterprise workflow systems, and found a lack of transparency in their development and deployment. While these agents demonstrate advanced capabilities, from email triage to potentially harmful activities like cyber espionage, little publicly available information exists about how they are being used or tested for safety.
The report highlights that most AI agents rely on a few foundation models developed by major companies like Anthropic, Google, and OpenAI. This creates complex dependencies, making it difficult to evaluate their performance or risks. Only four of the 30 agents studied disclosed any safety evaluations, underscoring a significant gap in transparency among developers. Additionally, many agents operate without adhering to established conventions like the Robots Exclusion Protocol (robots.txt), raising concerns about their potential to disrupt online services.
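For context, honoring the Robots Exclusion Protocol is straightforward: before fetching a URL, a well-behaved agent checks the site’s robots.txt rules. The sketch below uses Python’s standard-library `urllib.robotparser`; the agent name, URLs, and robots.txt contents are illustrative, not taken from the report.

```python
from urllib.robotparser import RobotFileParser

def can_fetch(robots_txt: str, user_agent: str, url: str) -> bool:
    """Check a crawl request against robots.txt rules (RFC 9309)."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# Hypothetical robots.txt that bars all agents from a private path.
rules = """\
User-agent: *
Disallow: /private/
"""

print(can_fetch(rules, "ExampleAgent/1.0", "https://example.com/private/data"))  # False
print(can_fetch(rules, "ExampleAgent/1.0", "https://example.com/public/page"))   # True
```

An agent that skips this check, as many in the MIT index appear to, can hit paths a site operator explicitly asked crawlers to avoid.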
The lack of standardized guidelines for AI agent behavior is particularly concerning as these systems grow more autonomous. While they offer immense potential benefits—such as contributing $2.9 trillion to the U.S. economy by 2030—they also pose risks if not properly regulated. The study emphasizes that understanding how agents are actually used in real-world scenarios is critical for ensuring their safe deployment.
The MIT research also notes that many AI agents remain limited in their capabilities, with most still struggling to complete even basic multi-step tasks. This suggests that while the technology has advanced significantly, there is still a long way to go before it can reliably handle complex, real-world applications.
Ultimately, the findings underscore the urgent need for greater transparency and collaboration within the AI community. Without clear standards or safety practices, the risks of deploying these agents in diverse contexts will continue to grow.