Anthropic, the AI company behind Claude, is alleging that three Chinese AI firms attempted to gain an unfair advantage by misusing its AI model. The company claims these firms created over 24,000 fraudulent accounts to siphon data from Claude, raising concerns about intellectual property and the potential for misuse of AI technology. This incident highlights the growing tension surrounding AI development and the need for stricter regulations and safeguards.
Illicit Data Extraction
Anthropic has accused three Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—of improperly using its Claude AI model to train their own systems [1]. The accusation centers on the practice of "distillation," in which a smaller AI model is trained on the outputs of a larger, more advanced model. While distillation can be a legitimate training method, Anthropic claims it was used illicitly in this case.

According to Anthropic, these companies generated over 16 million interactions with Claude through more than 24,000 fraudulent accounts [1], violating Anthropic's terms of service and regional access restrictions [3]. The scale of the alleged extraction raises concerns about the security and integrity of AI models.
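The shape of the technique can be sketched in miniature: query a "teacher" model at scale, record its responses, then train a "student" on those pairs instead of on original labeled data. The toy Python below is purely illustrative (the models, prompts, and function names are invented for this sketch, not anything from Anthropic's or the accused firms' actual pipelines); a real student would be a neural network fine-tuned on millions of teacher-generated responses.

```python
def teacher_model(prompt: str) -> str:
    """Stand-in for a large, capable model that produces high-quality answers."""
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
        "author of Hamlet?": "Shakespeare",
    }
    return canned.get(prompt, "I don't know")

def collect_distillation_data(prompts):
    """Step 1: query the teacher at scale and record (prompt, answer) pairs."""
    return [(p, teacher_model(p)) for p in prompts]

class StudentModel:
    """Step 2: a trivial student that simply memorizes teacher behavior.
    A real student would generalize via gradient-based fine-tuning."""
    def __init__(self):
        self.memory = {}

    def train(self, pairs):
        for prompt, answer in pairs:
            self.memory[prompt] = answer

    def predict(self, prompt: str) -> str:
        return self.memory.get(prompt, "I don't know")

prompts = ["capital of France?", "2 + 2?", "author of Hamlet?"]
student = StudentModel()
student.train(collect_distillation_data(prompts))
print(student.predict("capital of France?"))  # the student mirrors the teacher
```

The key point the sketch makes is that the student never sees the teacher's weights or training data, only its outputs, which is why detecting distillation requires monitoring query patterns rather than model internals.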
Targeting Advanced Capabilities
Anthropic claims that DeepSeek, MiniMax, and Moonshot specifically targeted Claude's advanced capabilities, such as agentic reasoning, tool use, and coding [1]. By focusing on these areas, the companies aimed to rapidly improve their own AI models. DeepSeek allegedly targeted Claude's reasoning capabilities while generating "censorship-safe alternatives to politically sensitive questions" [3].

MiniMax targeted agentic coding, tool use, and orchestration [3]. Anthropic detected this campaign while it was still active, before MiniMax released the model it was training [3], suggesting a proactive approach to monitoring and identifying suspicious activity on its platform.
The Call for Action and Chip Export Controls
Anthropic is calling for a coordinated response from the AI industry, cloud providers, and policymakers [1]. The company emphasizes the need for stronger defenses against data extraction and misuse, including investment in technologies that make distillation attacks harder to execute and easier to identify.

Anthropic also argues that the incident reinforces the need for export controls on advanced chips [1]. Restricting access to these chips, the company believes, would limit both direct model training and the scale of illicit distillation attempts, since extraction at the scale performed by DeepSeek, MiniMax, and Moonshot "requires access to advanced chips" [1].
Potential Security Risks
The incident also raises concerns about the potential for misuse of AI technology, particularly by authoritarian governments. Anthropic warned of such governments deploying frontier AI for "offensive cyber operations, disinformation campaigns, and mass surveillance," a risk that is multiplied if those models are open-sourced [1]. This underscores the importance of responsible AI development and deployment.

On the defensive side, Anthropic has begun rolling out a new security feature for Claude Code that can scan a user's codebase for vulnerabilities and suggest patches [4]. Claude Code Security is designed to counter AI-enabled attacks by giving defenders an advantage and improving the overall security baseline [4], a direct response to the prospect of AI being misused by actors who wish to cause harm.







