Anthropic alleges misuse of its Claude chatbot by three Chinese AI groups, adding pressure to an already strained landscape shaped by US export controls and technology safeguards.
The company said in a blog post on Monday that DeepSeek, Moonshot and MiniMax generated more than 16 million interactions with Claude through roughly 24,000 fake accounts, allegedly breaching its terms of service and regional restrictions. Anthropic said the firms used “distillation”, a technique in which a smaller, less capable model is trained to reproduce the outputs of a stronger one.
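For readers unfamiliar with the technique, distillation can be sketched in a few lines of Python: a "student" model is fitted not to ground-truth labels but to the soft outputs of a stronger "teacher", queried the way an API user would. The toy teacher and student below are purely illustrative assumptions and do not represent any system named in this article.

```python
# Illustrative sketch of distillation (not any real company's system):
# a weak "student" learns to imitate a stronger "teacher" by training
# on the teacher's output probabilities rather than on real labels.
import math
import random

def teacher(x):
    """Stand-in for a strong model: returns P(class=1) for input x."""
    return 1.0 / (1.0 + math.exp(-(3.0 * x - 1.0)))

def distill(steps=5000, lr=0.5, seed=0):
    """Fit a student sigmoid(w*x + b) to the teacher's soft outputs."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(steps):
        x = rng.uniform(-2.0, 2.0)   # query the teacher, as an API caller would
        target = teacher(x)          # soft label taken from the teacher's output
        pred = 1.0 / (1.0 + math.exp(-(w * x + b)))
        grad = pred - target         # gradient of cross-entropy wrt the logit
        w -= lr * grad * x
        b -= lr * grad
    return w, b

# After training only on the teacher's responses, the student's parameters
# approach the teacher's underlying weights (3.0 and -1.0 in this toy case).
w, b = distill()
```

The point of the sketch is that the student never sees the teacher's internals or training data; systematic querying of its outputs alone is enough to copy much of its behaviour, which is why Anthropic treats large-scale automated access as a vector for capability transfer.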
“These campaigns are growing in intensity and sophistication. The window to act is narrow, and the threat extends beyond any single company or region,” the company said.
Anthropic warned that improperly distilled models may lack built-in safeguards, creating “significant national security risks”. It added: “If these models are open-sourced, the risk multiplies as capabilities spread freely beyond any single government’s control.”
The allegations come weeks after OpenAI told US lawmakers that DeepSeek was attempting to replicate leading American models. Anthropic argued that the reported activity strengthens the case for tighter semiconductor export controls, saying chip restrictions limit both direct model training and the scale of improper distillation.
According to the company, DeepSeek targeted reasoning tasks and censorship-sensitive queries, Moonshot focused on agentic reasoning and coding, and MiniMax pursued agentic coding and orchestration. Anthropic said it detected MiniMax’s activity before the firm released the model it was developing.
“When we released a new model during MiniMax’s active campaign, they pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system,” the blog post said.
DeepSeek, Moonshot and MiniMax did not respond to requests for comment by the time of publication.