Anthropic says it's been targeted in massive distillation attacks
CoinTelegraph
by Brian Quarmby, February 25, 2026
AI-Generated Deep Dive Summary
Anthropic has accused three Chinese AI companies—DeepSeek, Moonshot, and MiniMax—of engaging in distillation attacks targeting its large language model, Claude. The firm claims these attacks involved the creation of over 24,000 fraudulent accounts and more than 16 million exchanges to scrape data from Claude for training purposes. This technique, known as distillation, involves using outputs from a stronger AI model to train a less capable one, potentially undermining the original model's competitive edge.
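The article does not describe the mechanics, but classic knowledge distillation trains a smaller "student" model to match the softened output distribution of a larger "teacher". A minimal sketch of that idea in plain Python follows; the function names and example logits are illustrative, not drawn from any of the companies' actual systems:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; a higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's.

    Minimizing this pushes the student to mimic the teacher's full output
    distribution, not just its top answer -- the core idea of distillation.
    """
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher incurs zero loss;
# a divergent one incurs a strictly positive loss.
teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, [3.0, 1.0, 0.2]))  # -> 0.0
print(distillation_loss(teacher, [0.2, 1.0, 3.0]))  # -> positive
```

In a scraping scenario like the one alleged, the "teacher" outputs would come from API responses rather than direct access to logits, but the training objective is the same: copy the stronger model's behavior at a fraction of the original training cost.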
In its blog post, Anthropic detailed how it identified these attacks and emphasized their severity. The companies allegedly used these illicit interactions to improve their own models, raising concerns about intellectual property theft and unfair competition in the AI space. Anthropic has not yet taken legal action but plans to address the issue through ongoing investigations and unspecified measures to prevent future attacks.
This incident highlights critical security vulnerabilities in AI development and deployment. Distillation attacks pose a significant threat to companies like Anthropic, whose competitive advantage rests on proprietary model capabilities that distillation can cheaply replicate. The case also underscores the challenges of regulating AI training practices, particularly across international borders.