How AI could eat itself: Competitors can probe models to steal their secrets and clone them
The Register
February 14, 2026
AI-Generated Deep Dive Summary
AI competitors, particularly Chinese companies such as DeepSeek, are increasingly using "distillation attacks" to replicate the capabilities of major AI models developed by companies like Google and OpenAI. These attacks probe a target model with large numbers of prompts and harvest its responses, which can then serve as training data for a rival system, letting competitors build similar AI at a fraction of the cost and effort of developing it from scratch. Both Google and OpenAI have raised alarms about this growing threat, with Google detecting one campaign that used over 100,000 prompts to replicate Gemini's abilities in non-English languages.
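In principle, the harvesting step is simple: send prompts, record responses, and treat the pairs as training data for a "student" model. The sketch below illustrates only that collection loop, with a canned stub standing in for a commercial model's API; the function and account names are illustrative, not any real provider's interface.

```python
def teacher_model(prompt: str) -> str:
    """Stub standing in for a proprietary LLM API (hypothetical).
    A real attack would call the provider's endpoint here."""
    canned = {
        "Translate 'hello' to French": "bonjour",
        "Translate 'goodbye' to French": "au revoir",
    }
    return canned.get(prompt, "unknown")

def harvest_training_pairs(prompts):
    """Probe the teacher and record prompt/response pairs --
    the raw material a competitor would fine-tune a student on."""
    return [(p, teacher_model(p)) for p in prompts]

dataset = harvest_training_pairs([
    "Translate 'hello' to French",
    "Translate 'goodbye' to French",
])
print(dataset)
```

At scale, a campaign like the one Google described simply runs this loop over hundreds of thousands of prompts, then fine-tunes a smaller model on the collected pairs.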
Distillation attacks pose a significant risk because AI models represent billions of dollars of investment and intellectual property (IP). If competitors can reverse-engineer these models, they can bypass the costly process of training their own large language models (LLMs) and instead leverage stolen insights to create similar systems. This not only threatens the financial investments of tech giants but also raises concerns about the global race to dominate AI technology.
While companies like Google and OpenAI are taking steps to detect and prevent distillation attacks, such as blocking accounts that violate terms of service and deploying more advanced detection methods, the problem is inherently difficult to eliminate. LLMs are commercially valuable precisely because they are widely accessible through public APIs, and that same accessibility is what attackers exploit. As more organizations expose their AI models, the risk of distillation attacks is likely to grow, especially as competitors refine their own probing techniques.
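The article does not describe how providers' detection actually works; one plausible signal, sketched below under that assumption, is flagging accounts whose query volume and prompt uniformity look like systematic probing rather than ordinary use. The thresholds and the "same first three words" template heuristic are purely illustrative.

```python
from collections import Counter

def flag_probing_accounts(query_log, min_volume=1000, template_share=0.8):
    """Flag accounts whose prompt volume and uniformity suggest
    systematic probing. Thresholds are illustrative, not any
    provider's actual policy."""
    by_account = {}
    for account, prompt in query_log:
        by_account.setdefault(account, []).append(prompt)
    flagged = []
    for account, prompts in by_account.items():
        # Crude "template" signal: share of prompts sharing the same
        # first three words.
        stems = Counter(" ".join(p.split()[:3]) for p in prompts)
        top_share = stems.most_common(1)[0][1] / len(prompts)
        if len(prompts) >= min_volume and top_share >= template_share:
            flagged.append(account)
    return sorted(flagged)

# Demo with a tiny threshold: one account hammers a prompt template,
# another account chats normally.
log = [("acct-A", f"Translate sentence number {i} into Swahili")
       for i in range(10)]
log += [("acct-B", "What's a good pasta recipe?"),
        ("acct-B", "Summarize this email for me")]
print(flag_probing_accounts(log, min_volume=10))  # → ['acct-A']
```

Real systems would combine many such signals, but the trade-off the article identifies remains: any threshold loose enough to serve legitimate heavy users leaves room for a patient attacker to stay under it.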
The stakes are high because AI technology is becoming increasingly critical to global innovation and economic power. The ability to protect AI models from unauthorized replication will determine whether companies can maintain their competitive edge or face the costly consequences of IP theft. This issue also highlights the need for a coordinated effort between the private sector and governments to develop robust security measures and enforce intellectual property rights in the AI domain.