Anthropic accuses China's AI labs of ripping off content - just like it did

The Register
February 24, 2026
AI-Generated Deep Dive Summary
Anthropic, the company behind the Claude AI models, has accused three Chinese AI labs—DeepSeek, Moonshot AI, and MiniMax—of engaging in "industrial-scale campaigns" to extract data from its models using a technique called model distillation. Distillation transfers knowledge from a large, capable "teacher" model to a smaller, more efficient "student" model, allowing the student to approximate the teacher's capabilities at lower cost. Anthropic claims the labs used networks of fraudulent accounts to query Claude models more than 16 million times, violating its terms of service and regional access restrictions. The company warns that this unauthorized use of its technology could help authoritarian regimes build harmful applications such as cyberattacks, disinformation campaigns, or mass surveillance.

Model distillation is a standard deep-learning technique with legitimate uses in building efficient models, but Anthropic argues that the scale and method employed by the Chinese labs amount to intellectual property theft. The accusation carries some irony: Anthropic has itself faced legal challenges over alleged copyright infringement and unauthorized web scraping, including the lawsuits Bartz v. Anthropic and Concord Music Group v. Anthropic. As courts continue to grapple with how AI training on copyrighted material should be regulated, Anthropic is directing its concerns at foreign labs exploiting the technology without authorization.

The company's blog post also raised alarms about the risks of open-sourcing distilled models, which could spread dangerous capabilities beyond any single government's control. This concern aligns with similar warnings from OpenAI.
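For readers unfamiliar with the technique at the center of the dispute, below is a minimal, illustrative sketch of the soft-target loss used in classic knowledge distillation (Hinton et al.'s formulation). The function names, logits, and temperature value are our own illustrative choices, not details from Anthropic's post; real distillation of a chat model would work from sampled API outputs rather than raw logits.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between teacher and student soft targets.

    Minimizing this pushes the student to mimic the teacher's full
    output distribution, not just its top prediction.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [4.0, 1.0, 0.2]
aligned = [3.8, 1.1, 0.3]   # student close to the teacher: small loss
diverged = [0.2, 1.0, 4.0]  # student far from the teacher: large loss
print(distillation_loss(teacher, aligned) < distillation_loss(teacher, diverged))
```

When only API access is available, as alleged here, the "teacher's" distribution is approximated by collecting many prompt-response pairs and training the student on them, which is why the complaint centers on the volume of queries rather than any single interaction.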