Anthropic claims 3 Chinese companies ripped it off, using its AI tools to train their models: ‘How the turn tables’
Fortune
by Nick Lichtenberg | February 24, 2026
AI-Generated Deep Dive Summary
Anthropic, the AI company behind the Claude chatbot, has accused three prominent Chinese artificial intelligence firms, DeepSeek, Moonshot AI, and MiniMax, of using its technology to train their own models. In a blog post, Anthropic alleged that these companies ran large-scale campaigns to extract Claude's capabilities, logging over 16 million exchanges across 24,000 fraudulent accounts. According to Anthropic, the firms used "distillation," a legitimate training technique in which one model learns from another's outputs, but did so illicitly to gain an unfair advantage in AI development.
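For readers unfamiliar with the term, distillation in its standard form trains a "student" model to match the temperature-softened output distribution of a "teacher" model. The sketch below is a generic textbook illustration of that loss, not a description of how Anthropic's Claude or any of the accused labs' systems actually work; the function names, the temperature value, and the toy logits are all illustrative assumptions.

```python
import math

def softmax(logits, T=1.0):
    # Convert raw scores to probabilities; higher temperature T flattens the distribution.
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL divergence from the student's softened distribution to the teacher's:
    # the student is penalized for diverging from the teacher's output probabilities.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

# A student whose outputs track the teacher incurs a much smaller loss
# than one whose outputs diverge.
teacher = [4.0, 1.0, 0.5]
print(distillation_loss(teacher, [3.9, 1.1, 0.4]))   # small
print(distillation_loss(teacher, [0.5, 1.0, 4.0]))   # large
```

In practice a student would minimize this loss over many teacher-labeled examples via gradient descent; the allegation here is essentially that Claude's API responses served as the teacher signal.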
Anthropic further accused the companies of bypassing geofencing and access restrictions on Claude in China by routing traffic through proxy services. It alleged that these labs scripted lengthy conversations with Claude to gather detailed responses for their own training data, effectively using Claude as an unwilling teacher. Anthropic emphasized that this practice violates its terms of service and U.S. export controls aimed at limiting China's access to advanced AI technology. While the company has not filed lawsuits against the three firms yet, it has severed known access points and called for stricter export controls on AI tools.
The allegations come amid broader skepticism toward Anthropic's own practices. In 2025, the company agreed to a $1.5 billion settlement of a copyright lawsuit after being accused of downloading books from unauthorized sources to train its AI models. Critics have pointed out the irony of Anthropic accusing others of unethical behavior while facing similar scrutiny over its own data collection. This highlights the ongoing tension in the AI industry between enforcing proprietary systems and promoting open innovation, with U.S. firms increasingly taking a hard line against foreign competitors they accuse of copying their technologies.
This situation underscores the competitive stakes in AI development and the ethical dilemmas surrounding data usage and model training. As Anthropic pushes for stronger legal and regulatory measures, the case raises questions about who gets to set the rules in an industry where boundaries between fair competition and innovation are often blurry. The outcome could have significant implications for global AI collaboration and the future of cross-border technology development.