Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI

The Verge
February 23, 2026
AI-Generated Deep Dive Summary
Anthropic has accused three Chinese AI companies, DeepSeek, MiniMax, and Moonshot, of misusing its Claude AI model. The company claims the firms ran "industrial-scale campaigns" to train their own AI systems, creating roughly 24,000 fraudulent accounts and logging more than 16 million interactions with Claude. The technique at issue, known as distillation, uses a larger, more advanced model such as Claude to train a smaller, less resource-intensive one for commercial purposes.

While Anthropic acknowledges that distillation is a legitimate training method, it argues that the scale and nature of these activities crossed ethical boundaries. The company contends that such misuse devalues the work of original creators and lets others profit from their technology without authorization, which could ultimately undermine innovation.

The allegations highlight a growing tension in the AI industry over how models may be used and shared. As AI development accelerates, questions about data ownership, intellectual property, and ethical usage are becoming increasingly pressing, and this case underscores the difficulty of regulating AI training methods while balancing innovation against fair competition. For tech enthusiasts and professionals, the story raises concerns about the potential misuse of advanced AI models, the need for clearer guidelines to ensure fair play, and the broader question of how AI technologies are developed, shared, and used.